Re: [GNUnet-developers] update mechanism


From: Christian Grothoff
Subject: Re: [GNUnet-developers] update mechanism
Date: Mon, 19 Aug 2002 11:09:09 -0500
User-agent: KMail/1.4.1

On Monday 19 August 2002 07:05 am, you wrote:
> > You can *never* rank down. That's the same problem as in negative trust
> > (read: an excess-based economy). Submitters are anonymous, but even if
> > they signed the RNode, they can use a pseudonym exactly once, in which
> > case negative ratings are useless (they would only apply after the damage
> > has been done). You can only do positive rankings.
>
> I *do* understand the current system. But I predict that it won't
> survive spam. The RIAA seems to be paying firms to use spam
> as a DoS attack against file-sharing networks. Everybody can censor
> everything just by inserting a lot of junk under the same keyword.
>
> (BTW: negative ratings for one time pseudonyms aren't
>  useless if they are distributed to other people)

How do you prevent the RIAA from massively inserting negative ratings (or 
positive ratings, for that matter) and telling other people about it? 
Google's PageRank and /.'s moderation system can be foiled if you have 
enough resources at your disposal. I would not like to have a system where 
a powerful adversary can make nodes believe that a certain document is 
spam (worthless) and make the nodes remove it. That would not be very 
censorship resistant. And while I agree that you may be able to drown a 
keyword by inserting lots of useless results under the same name, it's 
harder to do this for 'all' keywords. 

And note that once we have directories, one good keyword can give me any 
number of files (and the user would of course quickly find out if the 
directory is useful/valuable or not).
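
To make that concrete: a directory is just one search result that 
enumerates many files. A minimal sketch of what such an entry could look 
like (the names here are hypothetical, not the actual on-the-wire format):

#include <stdio.h>

#define HASH_LEN 20  /* size of a content hash */

typedef struct {
  unsigned char hash[HASH_LEN]; /* content hash to download the file */
  char description[64];         /* human-readable label */
} DirEntry;

typedef struct {
  unsigned int count;  /* number of files in this directory */
  DirEntry *entries;
} Directory;

/* One keyword match that yields a Directory hands the user
 * 'count' downloadable files in a single search result. */
static void listDirectory(const Directory *dir) {
  unsigned int i;
  for (i = 0; i < dir->count; i++)
    printf("%u: %s\n", i, dir->entries[i].description);
}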

> > > Why pseudonyms? Individuals are best identified by the hashes
> > > of their pub keys. Those can't be hijacked because the owner
> > > can prove with a signature that he is the owner of his pub
> > > key.
> >
> > Well, the hash of the public key *is* a pseudonym, as long as there is
> > not a
>
> Okay.
>
> > public database that links public keys to individuals. And since you can
> > make up new public keys anytime, you can make new pseudonyms at any time
> > - as many as you feel like.
>
> Yes. People might prefer to have many pseudonyms so that
> different pseudonymous activities can't be linked together.
>
> But I don't think that a newly created pseudonym should
> automatically be trusted enough to insert R blocks which
> then appear in the search results of everybody else.

I don't see how you would avoid this, in particular given problems like 
bootstrapping the system, pseudonyms that are used only once (for one 
document), giving nodes an incentive to send replies, and being censorship 
resistant *and* deniable such that nodes can claim that they did not know 
which RNodes they were routing. 
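
The one-time-pseudonym point from the top of the thread is the crux: 
since anyone can mint a fresh key pair at will, a brand-new pseudonym's 
trust is already the floor of the whole scale, so only positive ratings 
carry information. A minimal sketch of such a positive-only local trust 
table (all names and sizes are hypothetical, not the actual GNUnet code):

#include <string.h>

#define HASH_LEN        20    /* size of a public key hash */
#define MAX_PSEUDONYMS  1024

typedef struct {
  unsigned char pseudonym[HASH_LEN]; /* hash of the public key */
  unsigned int trust;                /* starts at 0, can only grow */
} TrustEntry;

static TrustEntry table[MAX_PSEUDONYMS];
static unsigned int tableSize;

/* Credit a pseudonym for a result the local user liked. Since fresh
 * pseudonyms are free, the floor of the scale (trust 0) is all an
 * attacker can ever be pushed down to; negative ratings add nothing. */
static void creditPseudonym(const unsigned char *pseudonym) {
  unsigned int i;
  for (i = 0; i < tableSize; i++) {
    if (0 == memcmp(table[i].pseudonym, pseudonym, HASH_LEN)) {
      table[i].trust++;
      return;
    }
  }
  if (tableSize < MAX_PSEUDONYMS) { /* first sighting: trust 1 */
    memcpy(table[tableSize].pseudonym, pseudonym, HASH_LEN);
    table[tableSize].trust = 1;
    tableSize++;
  }
}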

> A node should not insert (or return as search result)
> R blocks which come from less trusted pseudonyms if there
> are R blocks from higher trusted pseudonyms which match
> the same keywords. (it might be a good idea to randomly
> make exceptions to this rule)
>
> At the same time the user should rate the results locally
> using his personal database. (That is what Igor proposes
> too.) But this won't help getting the bad R blocks out
> of the network.

True. But it may help each user to improve the results according to his 
preferences, which may well differ from those of other users anyway.
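
As an illustration of the quoted filtering policy (including the 
suggested random exceptions), here is a minimal sketch; the names and 
the 5% exception rate are made up, not part of any actual implementation:

#include <stdlib.h>

typedef struct {
  unsigned int trust; /* local trust in the signing pseudonym */
  /* ... keyword, payload, signature ... */
} RBlock;

#define EXCEPTION_ONE_IN 20 /* ~5% random exceptions */

/* Return 1 if 'candidate' should still be returned even though
 * 'best' is the most trusted block matching the same keyword. */
static int includeResult(const RBlock *candidate, const RBlock *best) {
  if (candidate->trust >= best->trust)
    return 1;                              /* at least as trusted */
  return 0 == (rand() % EXCEPTION_ONE_IN); /* rare exception */
}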

> Another idea is: The servers expire old blocks after some
> time (even when they are requested often). To keep the good
> content on the network, the search clients reinsert
> search results which are locally rated high.

We do expire content, but not content that is requested often. That would 
also not make sense economically (you have a file that many people request 
from you, and you remove it!? That's like M$ ceasing to sell Windows). 
Also, keywords that were manually supplied by the users (which are the 
most valuable ones) would be lost in this kind of process, because you 
cannot just 'libextract' those.
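
For what it's worth, an expiration rule consistent with that economic 
argument would let requests extend a block's lifetime rather than 
expiring it regardless, along these lines (constants and names invented 
for illustration):

#include <time.h>

#define BASE_LIFETIME (30 * 24 * 3600L) /* 30 days, in seconds */
#define BONUS_PER_HIT (24 * 3600L)      /* each request buys a day */

typedef struct {
  time_t insertedAt;         /* when the block was stored */
  unsigned int requestCount; /* how often it has been requested */
} StoredBlock;

/* A block only expires once it has outlived its base lifetime plus
 * the bonus earned by requests, so popular content stays around. */
static int isExpired(const StoredBlock *b, time_t now) {
  time_t lifetime = BASE_LIFETIME
      + (time_t) b->requestCount * BONUS_PER_HIT;
  return (now - b->insertedAt) > lifetime;
}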

Christian




