Re: [GNUnet-developers] vsftpd/wget/gnunet


From: Igor Wronsky
Subject: Re: [GNUnet-developers] vsftpd/wget/gnunet
Date: Wed, 25 Dec 2002 04:43:42 +0200 (EET)

On Tue, 24 Dec 2002, Jan Marco Alkema wrote:

> In my opinion: if the end user categorizes a file as "no high value" or
> assigns it the owner "nobody", the GNUnet management system could select
> another AFS protocol based on vsftpd and wget, without time-consuming
> insert procedures and without limits on file size. Flexibility is a great
> thing in a file sharing system.

Ok, ok. There are two different things you propose: this one
to the developer mailing list, and the other in a private-ish
message. I'll speculate a bit on both here, not that my speculation
is authoritative, but because I'm probably the only one
geek enough to hang around my email over the Christmas holidays.

Your first, private suggestion regarded, again, the use of
ODBC/JDBC/whatever for GNUnet. There is just one point to
consider: "what's the gain?". I, for one, can't see it. Christian
has argued against it. The truth is that even though the
*industry* loves three- and four-letter acronyms and drops them
on every occasion it can, I don't think those with an academic
background (like some of the GNUnet developers, I gather)
appreciate them as much. A business can probably make better
sales if it has a new combination of four-letter acronyms to
offer every year, but the fact is that in this case it hardly
makes our product (GNUnet) significantly better. The set of
services we need from a database is very simple, and our data
is very uniform. Though I personally like the phrase about
"not reinventing wheels", I am *even more* fond of the saying
"if it works, don't fix it". That is, if we can pull the thing
off with a simple and straightforward mechanism, that is what
we should go for.

If someone needs to use mysql/postgresql/somesuch with
GNUnet so that he/she can mouth statements like "oh boy,
I have triggers now, and locks, and nested queries, and lots
of things I don't understand", it's very fortunate that
GNUnet 0.5.0 is programmed in quite a modular fashion. It
should be relatively easy to code an optional *interface*
for commercial or high-end databases, if someone wants
to use one. Also, in the spirit of open source, volunteers
are welcome to implement and submit such pieces of code
for inclusion in the project, provided they meet
the necessary licence, quality, etc. considerations
(where applicable). It's just that the existing developers
go in the directions they deem worthwhile, and in addition
they have many other concerns. Often the most honest
answer to a more complex feature request is "if you
want it, do it yourself".
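
To make that concrete, here is a minimal sketch of what such a
pluggable backend interface might look like in C. All names here are
my own illustration of the idea, not the actual GNUnet 0.5.0 API:

    /* Hypothetical content-store backend: the core would call only
     * through these pointers, so whether the module underneath is
     * gdbm, mysql or flat files stays an implementation detail.
     * Illustrative only; not the real GNUnet interface. */
    #include <stddef.h>

    typedef struct {
      int  (*init) (const char *config);              /* open the store */
      int  (*get)  (const void *key, size_t keyLen,
                    void *buf, size_t bufLen);         /* lookup by key  */
      int  (*put)  (const void *key, size_t keyLen,
                    const void *data, size_t dataLen); /* insert a block */
      int  (*del)  (const void *key, size_t keyLen);   /* drop a block   */
      void (*done) (void);                             /* close it       */
    } ContentBackend;

A mysql module and the current simple store would each fill in one
such struct, and nothing above them would need to change.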

However, the strongest argument in this whole database
discussion relates to bottlenecks, and the question is
whether it's really the database that's holding us back -
whether we really need support for terabytes of data, as you
mentioned, or low disk overhead (that was the reason for the
bloom filter implementation). I say no, not at this
point. It's bandwidth that's the problem, and overhead in
routing. If most users can only offer a 2 kB/s trickle, a 2 kB/s
trickle it will be, or worse. It seems unlikely to me that there
would be major-scale file sharing at such speeds, and
GNUnet rather favours symmetric connections, unlike
cable modems. If you can't even send queries out fast, you
shouldn't be dreaming of receiving anything fast. And
still many people are on lousy cable modems whose upstream
bandwidth is more like a dialup's. You may say that I should
look to the future, but I am more of a realist, and a
bit narrow-minded at that. If not everyone is on a T3
at the moment, and I've not seen GNUnet perform
on a network of T3 hosts, and there's no evidence of
how it would perform in that situation, it leaves
quite a lot up to the imagination. Perhaps the only point here
is that we shouldn't be talking about terabytes while
transferring a couple of megs is too slow at
the moment. Looking too far ahead usually just ends up as
history written along the lines of "they were ahead of their
time, but..." followed by discussion of what went wrong.
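
To put a rough number on that trickle (my own back-of-the-envelope
figure, assuming the 2 kB/s upstream mentioned above): a single 2 MB
file served from one source takes about

    2048 kB / 2 kB/s = 1024 s,

that is, some 17 minutes, before any routing or crypto overhead. At
that rate, talk of terabytes is off by roughly six orders of
magnitude.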

...

Your second proposition regards lessening anonymity
for content that doesn't require it - if I understood
correctly. Again, 0.5.0 modularity comes to help here:
it might be possible to code a non-anonymous file-sharing
application/service on top of GNUnet, just like AFS or
the chat is. Such a service could feature, for example,
distributed queries, but direct host-to-host links
for transferring the actual data once found.
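
Purely to show the shape of such a thing (the message layout and all
names here are invented for this mail; no such protocol exists), the
service's wire format might be as small as:

    /* Hypothetical messages for a non-anonymous lookup service: the
     * query is routed through the network as usual, but the reply
     * names a host to fetch from directly.  Illustration only. */
    #include <stdint.h>

    typedef struct {
      unsigned char queryHash[20]; /* hash of the sought content      */
      uint16_t      ttl;           /* hops left in distributed search */
    } DirectQuery;

    typedef struct {
      unsigned char queryHash[20]; /* echoes the query                */
      uint32_t      hostAddr;      /* responder's IP, in the clear    */
      uint16_t      hostPort;      /* port for a direct TCP fetch     */
    } DirectReply;

The point being that anonymity is dropped only on the data path,
where it costs the most, while discovery still rides the normal
routing.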

This, too, can be argued against. The first argument
that comes to my mind is similar to one used when
talking about encrypted and unencrypted email: if only
a few are using encryption, it will look like they
have something to hide. Similarly, a non-anonymous mode
would make the anonymous traffic look more suspicious
than the non-anonymous traffic. The second argument is
that coding such a service shouldn't be taken too
lightly. At least a careful design document (proof of
concept) should be written before even attempting it.
That document should address things like what choices
were made and why they were made, perhaps shed light
on the problems of other systems and how this thing
works better than them, and what it gains or loses
when implemented on top of GNUnet.

But as I said, the developers have their own interests,
though most of us might be open to suggestions. ;) Currently
those interests, in GNUnet's case and over the short-to-mid
range, are listed (to the best of my knowledge) at the end of the
main GNUnet homepage, in a section called ROADMAP. Personally
I think that one useful direction is outlined in the *namespaces
proposal*, which describes how we are thinking of developing
something similar to Freenet's freesites or "author-specific
subspaces" on GNUnet, in GNUnet style, naturally throwing
in some extra goodies like the idea of recursive
file collections (or directories; see the sketch below). I
encourage people to take a look at the document and criticize
it, or why not suggest something else instead. We're not
even sure who will end up developing that one.
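
On those recursive collections, a minimal sketch of the general idea
(my own illustration, not the actual format from the proposal): a
directory is itself just a shared file whose payload lists named
entries, each pointing at the content hash of a file or of another
directory.

    /* Illustrative layout only; not the namespaces proposal's
     * format.  A directory file is a count followed by that many
     * fixed-size entries. */
    #include <stdint.h>

    typedef struct {
      unsigned char hash[20];    /* content hash of the entry     */
      uint8_t       isDirectory; /* nonzero: entry is another dir */
      char          name[64];    /* display name in the listing   */
    } DirEntry;

    /* Fetching a whole collection is then a recursive walk:
     * download the directory file and repeat for every entry
     * that has isDirectory set. */
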
Another personal favourite of mine is empirical evaluation:
drag along your friends around the world with those T3s and
terabytes, share a lot of junk, download a lot of junk, let stuff
propagate, become popular, always available, etc., and
then pull together some statistics and answer the most
important question: "Does it work in practice NOW?" If
not, why not? GNUnet at its current size and content
can't answer that question. The best I've been able to see
up to now is something along the lines of "if it's in
the network, the file will come through... eventually". That
claim is weaker than it sounds, though, for two reasons: 1) on a
network of the current size, the disappearance of one node may mean
the disappearance of a file, and 2) on networks of this size, queries
and answers can traverse the whole network. Roughly, that means a
file coming from a single source is limited by that host's individual
bandwidth, and on the other hand our queries can always reach that
single node while the network is small, so it's no miracle. For a
larger anonymous P2P network, the claim would be impressive. If
GNUnet can still back that claim on a network of 100 active nodes,
or 1000 nodes, that is a good start - such numbers already allow
enough diversity in interests and content to offer something
for everyone, as seen with eDonkey etc.

...

Ok, that was my no-life Christmas rant. It's certainly
the product of stuffing oneself full of pork and turkey and
sausages and such, and certainly not that of the GNUnet developer
community, which might have different views collectively.


Igor




