
Re: [GNUnet-developers] update mechanism

From: Igor Wronsky
Subject: Re: [GNUnet-developers] update mechanism
Date: Sat, 17 Aug 2002 22:45:21 +0300 (EEST)

On Sat, 17 Aug 2002, Martin Uecker wrote:

> IMHO the solution to the spam problem are trust metrics.
> This basically solved the problem on the web (google, page rank).

How do you propose to do this on GNUnet? I admit I don't
know PageRank by heart, but as I understand it, it is based
on analyzing how web pages link to one another. Is there
any comparable natural measure in GNUnet that could be used?

I don't see how content-level trust metrics could be done
without forcing users to explicitly rank some
content/submitter as good or bad. If the trust were
entirely local (and not published), it would have to be
possible to query only for content inserted by the people
the user trusts, because otherwise the spam gets propagated
again (even if it is locally filtered out in the end). If,
say, Bob is a person we trust and we want only stuff
inserted by Bob, there should be something producible only
by Bob, but verifiable by any node right as it's passing
through, and of course queryable by us (we need a query
like "give me all rootnodes of insertions by Bob and only
Bob to the forum Plants on day so-and-so"). I think
Christian once tried to explain to me how a similar idea
could be done with hashes and public keys, but I didn't
quite get it. :(

With such a mechanism it would be possible to decide
locally whom to request messages from, and perhaps accept
a small number of messages inserted by unknown people in
addition. Of course, people we already trust could
introduce new pseudonyms (perhaps with some specific
message type) that we could add to our local trustbook if
so desired.

This trust shouldn't be confused with the node trust.

BTW, the same pseudonyms could (voluntarily) be included
in file insertion, so that we could in the same way look
only for files inserted by certain pseudonyms. This also
fits nicely with the planned collection/index/directory
files, where spamming might be a problem.

I can pull together the actual client-side trivialities
(keys, trust handling and content signing) if somebody
handles the prerequisite query formulation/answering/
verification technique described above.

Or are there some more sophisticated ideas? After all, I
don't have dreams of perfect spam prevention. ;-)

