[GNUnet-developers] Re: Cheap trick for connectiontable.

From: Christian Grothoff
Subject: [GNUnet-developers] Re: Cheap trick for connectiontable.
Date: Tue, 3 Sep 2002 11:54:26 -0500
User-agent: KMail/1.4.1

On Tuesday 03 September 2002 11:41 am, Igor wrote:
> On Tue, 3 Sep 2002, Christian Grothoff wrote:
> > Why not add a 'BufferEntry * link' field to the BufferEntry struct and
> > use that to build linked lists for collisions? This way, we would not
> > need a global lock (the head of the tail in the big BE[] would have the
> > lock) and we could still deal with collisions.
> That's what I examined first. The problem is the many for() loops that
> go through the table; they would also have to traverse the collision
> chains (not a problem). The problem comes from cronDecreaseLiveness and
> places where an entry is deleted: the links would have to be fixed up
> (otherwise the chains could grow without bounds). It's not hard to code,
> but it's not too pretty.

Yes, but I guess we should be able to live with 'not too pretty' for what is, 
in the end, a linked list. I think the only really "ugly" case is removing the 
head of a list: we'll *have* to preserve its lock, since it would be the lock 
for the slot in the BE[] (unless you have a better idea). Getting this to work 
right will be hacky, but, IMO, so be it. 
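The chaining scheme above can be sketched as follows. All names here (buffer_entry, slots, SLOT_COUNT, host_id) are invented for illustration, not the actual GNUnet structures; the per-slot lock that would live in the BE[] slot is omitted. Using a pointer-to-pointer walk makes the "ugly" head-removal case fall out naturally: unlinking the head simply promotes its successor into the slot, so the lock stays attached to whatever entry is at the head.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the BufferEntry struct with the proposed
   'link' field for collision chains. */
typedef struct buffer_entry {
    int host_id;                /* stand-in for the hashed host identity */
    struct buffer_entry *link;  /* collision chain, as proposed above */
} buffer_entry;

#define SLOT_COUNT 16
static buffer_entry *slots[SLOT_COUNT];

/* Insert at the head of the slot's chain. */
static void add_entry(int host_id) {
    buffer_entry *be = malloc(sizeof *be);
    if (be == NULL)
        return;
    be->host_id = host_id;
    be->link = slots[host_id % SLOT_COUNT];
    slots[host_id % SLOT_COUNT] = be;
}

/* Remove an entry and fix up the chain.  When the head is removed,
   '*prev' is the slot itself, so the successor is promoted into the
   slot and the chain cannot grow without bounds. */
static void remove_entry(int host_id) {
    buffer_entry **prev = &slots[host_id % SLOT_COUNT];
    while (*prev != NULL) {
        if ((*prev)->host_id == host_id) {
            buffer_entry *dead = *prev;
            *prev = dead->link;  /* unlink; head removal promotes successor */
            free(dead);
            return;
        }
        prev = &(*prev)->link;
    }
}
```

The deletion loops (cronDecreaseLiveness etc.) would use the same pointer-to-pointer idiom, which keeps the head-removal special case out of the code entirely.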

> > Note that we also are in big need for a (load based) policy on how many
> > connections we actually want to establish...
> This is coding. That is thinking. ;)
> > p.s.: Does the attached patch make your 0.4.6 GNUnet happier? :-)
> Might do. Are you sure delta^3 won't overflow int bounds?

I doubt it, since the range is [0,100] (unless you expect a network load of 
10000x what the user claimed to be "ok"). 
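To make the overflow claim concrete: with delta bounded by [0,100], the worst case is 100^3 = 1,000,000, comfortably below INT_MAX (at least 2,147,483,647 on a 32-bit int). A minimal check, with a hypothetical `cube` helper:

```c
#include <assert.h>
#include <limits.h>

/* delta is a load percentage in [0,100], so delta^3 is at most
   1,000,000 -- far below INT_MAX on any platform with 32-bit int. */
static int cube(int delta) {
    return delta * delta * delta;
}
```

An int overflow would only become a concern if delta could reach roughly 1290, i.e. far outside the claimed range.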

> Also what
> needs to be examined is (I experienced something like this when I
> toyed with this issue last time) that are all buffers treated equally?
> Last time it looked like that if 4 buffers were to be sent at the same
> time with same priority, the mechanism always favoured the first
> ones and never sent the last ones (supposing we could never send all 4
> buffers).

Well, the first one may push the load up, and then the others get dropped 
because the load is now too high. This ties in with the pending problem of 
smoothing our load measurements (averaging over longer intervals) and 
increasing the cron frequency to more than once per second.
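One way to do the smoothing mentioned above is an exponential moving average, so a single burst from the first buffer no longer flips the send/drop decision for the rest. This is only a sketch under assumed names (`sample_load`, `smoothed_load`, and the smoothing factor are all illustrative, not GNUnet code):

```c
/* Exponentially weighted moving average of the measured load.
   A small alpha gives a long memory, i.e. averaging over a longer
   interval, as discussed above. */
static double smoothed_load = 0.0;

static void sample_load(double current_load) {
    const double alpha = 0.125;  /* weight of the newest sample */
    smoothed_load = alpha * current_load
                  + (1.0 - alpha) * smoothed_load;
}
```

Called from cron at a higher frequency, each individual sample then contributes only a fraction of the decision value.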

> Though I had an additional kludge there: I made gnunetd count
> its own upstream load as well (otherwise we ended up in the
> priority-independent "send all" / "send none" cycle) and add that to
> the rxdiff when the time between calls was below 2 secs.

Sounds like a great idea. We'd need some hook in the (0.4.9) transport layer 
to notify statuscalls of the change in load, but that can (& should) be done. 
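The kind of hook meant here could look roughly like this; every name is invented for illustration (this is not the 0.4.9 transport API). The transport layer reports each outgoing message, and statuscalls registers a callback so gnunetd's own upstream traffic is counted in the load figure:

```c
#include <stddef.h>

/* Bytes sent by gnunetd itself in the current measurement interval. */
static unsigned long bytes_sent_this_interval = 0;

/* Observer installed by the load-measurement code (statuscalls). */
static void (*load_change_cb)(unsigned long) = NULL;

static void register_load_hook(void (*cb)(unsigned long)) {
    load_change_cb = cb;
}

/* The transport layer would call this for every outgoing message,
   so our own upstream load is no longer invisible to the policy. */
static void notify_bytes_sent(unsigned long n) {
    bytes_sent_this_interval += n;
    if (load_change_cb != NULL)
        load_change_cb(bytes_sent_this_interval);
}
```

The measurement side can then fold `bytes_sent_this_interval` into the rxdiff exactly as the kludge above did, but through a clean notification path.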
