
Re: [GNUnet-developers] Databases benchmark and problem


From: Igor Wronsky
Subject: Re: [GNUnet-developers] Databases benchmark and problem
Date: Wed, 16 Apr 2003 18:25:56 +0300 (EEST)

On Tue, 15 Apr 2003, eric haumant wrote:

> I've just made some tests with the five database types of GNUnet. These
> results were obtained with a 100 MB file (with random content) that was
> inserted into the database. Here are the results:

I'd like to comment on your tests a little bit, based on my
previous experience. The main issue here, imho, is not how
long it takes to upload or download a 100 MB file to/from an
empty database, but rather how the database performs when
it's nearly full, as that's the state of operation that a
long-term node will most likely spend all of its time in.

In the single-database case (before buckets were used) I found
that gdbm became worse and worse as the database grew. It
was fast to insert into an empty database, but as it started
to fill up, it got almost geometrically slower. You should already
be able to see this behaviour with 100 MB by just printing
out how long it takes to insert the first megabyte, the second
megabyte, etc., and then perhaps running the results through
gnuplot and expecting to see a rising curve.
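
Roughly along these lines, maybe (an untested sketch against the
plain gdbm C API; the record size, key layout and file name are
just made up for illustration, not what gnunetd actually does):

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/time.h>
  #include <gdbm.h>

  #define RECORD_SIZE 1024      /* 1 KB per record */
  #define RECORDS_PER_MB 1024   /* 1024 records = 1 MB */
  #define TOTAL_MB 100

  static double now(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
  }

  int main(void) {
    GDBM_FILE db = gdbm_open("bench.gdbm", 0, GDBM_NEWDB, 0644, NULL);
    char buf[RECORD_SIZE];
    unsigned long id = 0;
    int mb, i;

    if (db == NULL)
      return 1;
    for (mb = 0; mb < TOTAL_MB; mb++) {
      double start = now();
      for (i = 0; i < RECORDS_PER_MB; i++, id++) {
        datum key, val;
        int j;
        for (j = 0; j < RECORD_SIZE; j++)
          buf[j] = rand();      /* pseudo-random content */
        key.dptr = (char *) &id;
        key.dsize = sizeof(id);
        val.dptr = buf;
        val.dsize = RECORD_SIZE;
        gdbm_store(db, key, val, GDBM_REPLACE);
      }
      /* one line per megabyte: "mb seconds", ready for gnuplot */
      printf("%d %f\n", mb + 1, now() - start);
    }
    gdbm_close(db);
    return 0;
  }

Compile with gcc -lgdbm, redirect stdout to a file, and then
something like gnuplot's plot 'bench.out' with lines should show
whether the per-megabyte insert time climbs.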

What I'd like to see (supposing someone has extra energy ;) )
are tests that measure the db insert/fetch throughputs
when the system already contains, say, 500 MB or 2 GB of data.
Another interesting issue concerns the simple database modules:
how do they cope when the priority distribution is natural and
accumulated over a long period of time? What does such a
distribution look like? Currently those modules use one
filesystem file per priority. What if, in real life, the
distribution converges to thousands and thousands of
different priorities?
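
For the filled-database case, one could first grow the database
to the target size with the insert loop above and then time random
fetches against it. A minimal sketch, again assuming the sequential
unsigned-long key scheme from the previous program:

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/time.h>
  #include <gdbm.h>

  #define FETCHES 10000

  /* usage: fetchbench DBFILE MAXID
     assumes keys 0..MAXID-1 were stored by the insert sketch */
  int main(int argc, char **argv) {
    GDBM_FILE db;
    unsigned long max_id;
    struct timeval t0, t1;
    double elapsed;
    int i, found = 0;

    if (argc != 3) {
      fprintf(stderr, "usage: %s DBFILE MAXID\n", argv[0]);
      return 1;
    }
    db = gdbm_open(argv[1], 0, GDBM_READER, 0644, NULL);
    if (db == NULL)
      return 1;
    max_id = strtoul(argv[2], NULL, 10);
    gettimeofday(&t0, NULL);
    for (i = 0; i < FETCHES; i++) {
      datum key, val;
      unsigned long id = ((unsigned long) rand()) % max_id;
      key.dptr = (char *) &id;
      key.dsize = sizeof(id);
      val = gdbm_fetch(db, key);  /* gdbm mallocs val.dptr */
      if (val.dptr != NULL) {
        found++;
        free(val.dptr);
      }
    }
    gettimeofday(&t1, NULL);
    elapsed = (t1.tv_sec - t0.tv_sec)
            + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d/%d fetches in %.2fs (%.0f fetches/s)\n",
           found, FETCHES, elapsed, FETCHES / elapsed);
    gdbm_close(db);
    return 0;
  }

Running this against a 500 MB and a 2 GB database would give the
kind of comparison I mean; the priority-distribution question would
still need instrumentation inside the actual modules.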


Igor




