
Re: [GNUnet-developers] high disk-io

From: Christian
Subject: Re: [GNUnet-developers] high disk-io
Date: Wed, 7 Apr 2004 16:59:14 +0900

> > Conditions under which gnunetd is running here:
> > MySQL (1024MB)
> > free says 100MB cached (it's a small machine)
> > 90000 (set as up & down bandwidth in config file)
> >
> > I think a similar kind of control should be used for disk-io as for
> > CPU and network load: reduce padding first, and if that is not
> > enough, reduce even query lookups.
> Actually, there is one additional possibility, which is to make the disk-io
> less random.  If, for example, we gather more than just one block at a time
> from a file and instead read a dozen or two (possibly in sequence), that
> does not increase IO much but improves throughput dramatically.  The bad
> news is that it also makes deniability worse since chances of a peer that
> does not have the file pushing out closely related blocks from a presumably
> 'random' assembly of migrated blocks are low.  Anyway, together with the
> buffering of content by the active-migration thread (may cost memory) it
> should be possible to achieve an acceptable trade-off that can still be
> better than just not doing migration.
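The batched read Christian describes could be sketched roughly like this. This is only an illustration of the idea, not GNUnet's actual datastore code; the block size, batch size, and function name are made up.

```c
/* Sketch of reading a dozen consecutive blocks with a single seek,
 * instead of one seek per block.  BLOCK_SIZE and BATCH are
 * illustrative values, not GNUnet's on-disk format. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 1024
#define BATCH 12

/* Read up to 'count' consecutive blocks starting at block index
 * 'first' into 'buf'.  Returns the number of blocks actually read.
 * One fseek + one fread replaces 'count' random accesses. */
static size_t read_block_batch(FILE *f, size_t first,
                               size_t count, char *buf)
{
    if (fseek(f, (long)(first * BLOCK_SIZE), SEEK_SET) != 0)
        return 0;
    return fread(buf, BLOCK_SIZE, count, f);
}
```

The point is that the cost of the extra bytes is small compared to the cost of the seeks, which is why throughput improves while IO load barely grows.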

I don't suppose the migrated content could be "defragmented" like a 
journaling filesystem does, so that closely related blocks are contiguous 
(I think MySQL can do something like this)?

Yes, it could, if there were an order in the chaos, but there isn't.
Nobody knows where the next access will go, so nobody can order the blocks in advance.

> Thanks for the data-point, I'll definitely keep it in mind (though I have
> my doubts that it holds for machines with less bandwidth, but then again,
> those may also not have problems with the IO load...)

Even with my relatively low bandwidth, HD consumption is still an issue - 
especially since the GNUnet db is on the same HD as the rest of my 
applications. I think users could actually benefit from this, albeit 
they could benefit more from other improvements.

Maybe the most sophisticated way would be to couple randomization to the 
anonymity settings.
If we have high sender-anonymity, we need to randomize well; if not, we can 
send linear streams of anything. And at the same time, having a 
user-adjustable disk-io limit would be perfect.
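To make the coupling concrete, one could derive the read batch size from the anonymity level, so that high anonymity keeps accesses random (batch of one) while low anonymity allows long linear reads. The cap of 32 and the halving rule below are made-up illustrations, not GNUnet policy.

```c
/* Sketch: map a sender-anonymity level to a disk read batch size.
 * Level 0 means no anonymity required, so read linearly; each
 * additional level halves the batch, down to single random blocks.
 * The maximum of 32 and the halving rule are arbitrary choices. */
static unsigned int batch_size_for_anonymity(unsigned int anonymity)
{
    unsigned int batch = 32;       /* hypothetical maximum batch */
    while (anonymity-- > 0 && batch > 1)
        batch /= 2;                /* more anonymity -> smaller batches */
    return batch;
}
```

A user-adjustable disk-io limit would then bound how many such batches gnunetd issues per second, the same way the config already bounds bandwidth.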

In fact, I have a fix for myself, but it's really dirty and I don't want to 
show it to anyone *hide* :)
An interesting effect was that after I cut disk-io by 60% (simply by 
dropping migration), I had about 150% of the former network throughput.

I suppose disk-io also takes processing time -> gets counted toward CPU load -> 
drops more packets.
But that's only a quick guess.

