Re: [GNUnet-developers] Trust Inflation / Deflation
Wed, 23 Mar 2005 06:44:35 -0500
On Tuesday 22 March 2005 23:40, Alen Peacock wrote:
> Thanks for the in-depth answers. You've filled in some nice gaps in
> my understanding and swatted away several of my concerns. I have a
> few followup questions:
> On Tue, 22 Mar 2005 16:01:18 -0500, Christian Grothoff
> <address@hidden> wrote:
> > In
> > GNUnet, we limit outgoing priorities to a range within a constant factor
> > of the observed average of requests that we receive (something like 2 *
> > (1+avg-received)). This ensures that our outgoing priorities are
> > 'plausible' and competitive. If the average item is sold for X, it does
> not help to offer 100 * X -- in particular since your trust-score is so
> > bad that you're not good for X/2. GNUnet nodes do not try to "guess" on
> > their popularity (trust rating) with other nodes.
> Okay, got it. So outgoing request priorities are capped at
> somewhere just over 2x the average incoming priority. This seems
> absolutely reasonable, but I'm curious if there is some reasoning or
> observations from collected data that went into choosing that value?
The reasoning was that which is used in other places for anonymity: if a peer
is within a perceived "reasonable" range from the other peers, its behavior
does not stand out. If a peer were to send out a request with a priority of
999999 while nobody knows anyone with that much trust, it would be obvious
that that peer is the initiator (deanonymization). The assumption was that
staying within about a factor of 2 of the running average of all requests is
still within that plausible range.
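For illustration, the cap described above could be sketched roughly like this
(Python pseudocode with made-up names; the actual implementation is in C and
differs in detail):

```python
def cap_outgoing_priority(desired: int, running_avg: float) -> int:
    """Clamp an outgoing request priority to roughly twice the observed
    average of incoming priorities, so the request does not stand out
    (a wildly high bid would flag this peer as the likely initiator)."""
    ceiling = int(2 * (1 + running_avg))
    return min(desired, ceiling)
```

So with an observed average of 10, even a peer that wanted to bid 999999
would send out at most 22.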
> Also, how does a gnunet client pick an increment from its previous
> (rejected) priority? Have you observed that one strategy for picking
> the increment tends to work better than others?
The current code uses a crude heuristic without experimental data to back it
up in any way. I would love to see a decent (empirical or theoretical) study
done on what these heuristics and bounds should look like, but I did not have
the time to do so.
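Purely as a sketch of what such a heuristic might look like (the
multiplicative-increase strategy and all names here are my illustration, not
the actual GNUnet code):

```python
def next_bid(previous: int, running_avg: float, factor: float = 1.5) -> int:
    """After a rejected request, raise the bid multiplicatively (by at
    least 1), but never beyond the plausibility ceiling of 2*(1+avg)."""
    ceiling = int(2 * (1 + running_avg))
    return min(max(previous + 1, int(previous * factor)), ceiling)
```

Whether multiplicative increase beats, say, fixed additive steps is exactly
the kind of question an empirical study would have to answer.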
> I think it would be fascinating to simulate /just/ this part of the
> trust system, and use [insert your favorite machine learning technique
> (or other search) here] to determine what the best strategy is for
> getting requests answered fast, without 'spending' too much trust. I
> wonder particularly about the effects of diversity -- in a
> 'population' of diverse individuals implementing diverse strategies,
> the optimal choices would certainly vary with the overall
> characteristics of the population.
> Maybe you have studied some of these issues (which is why I'm
> asking). Or maybe I'm making way too much of this, and the system
> really reduces to a very simple behavior no matter what the individual
> participants do, but my [limited] experience in financial and futures
> markets tells me that strategies have to change as the individuals who
> participate in the market change. At any rate, don't take my
> questions as criticism -- I'm just very interested in solutions that
> are resilient to "gaming" -- nash equilibria and all that. Your work
> in this area is very appealing.
I've asked myself these questions when writing the code, but failed to come up
with a theoretically sound answer (and did not do any experimental studies).
This is also outside the scope of the original paper, in that it does not
affect the claims made -- this is merely an implementation detail that
will result in peers with a better implementation getting better performance.
It's like writing a paper saying 'capitalism is good' and someone else asking
for an optimal algorithm for buying and selling stock on wall-street. The
first claim does not need to provide an answer for the second -- however
desirable an answer to the second question would be for everyone.
If you want to work on these problems, I'll be more than interested to hear
about your results and/or even help where I can. However, my time is limited.
> > If the existing nodes would actually raise the priorities in the
> > network to such extremely high values, the new node would only have to do
> very little to earn trust quickly (since fulfilling any of these
> > requests would earn it a lot immediately).
> Ah, I hadn't considered that, though it seems obvious now.
> > However, it is unlikely that the amount of
> > trust put into queries ever bubbles up like that, since, as described
> > before, nodes try to minimize the amount of trust offered in order to
> > make it just high enough to be above the threshold at which responders
> > would drop the query to satisfy other requests.
> So are you saying that the system is resilient to 'bidding wars' in
> general?
I would think so -- anyone who starts such a war is the one who'd lose (trust).
> What if I modified a gnunet client so that its sole purpose was to
> try to inflate pricing (doesn't care about actually getting its own
> requests fulfilled but fulfills as many incoming requests as possible,
> etc). Would its effect be short-lived because it burns through
> accumulated trust faster than its peers?
That's exactly what I'd expect to happen. In fact, buggy versions of GNUnet
(way back when...) had that behavior ;-).
> If I had an army of such
> clients could I disturb the economy to such a point that nodes end up
> spending way too much of their trust, too quickly, resulting in an
> economy that is crippled by the loss of knowledge (of trust)? Or
> would it all be futile, my little misbehaving client army never being
> able to impact the network significantly before burning itself out (of
> trust)?
Well, I guess it may depend a bit on how "little" your "army" is. If it is a
significant portion of the network (say 50%), then that portion of the
network is behaving economically irrational and thus allocating its share of
the resources inefficiently. However, the rest of the network is not going
to trust those peers and not give their requests any priority (most of the
time). So you're mostly limited to what you are contributing, plus possibly
some scalability loss in general (since now the network is twice as big with
50% less-effective nodes, which depending on how the network scales in
general may have some impact).
> I don't expect you to provide detailed answers to my imaginary
> scenarios -- just your thoughts on resilience to trust-system attacks
> in general. I realize that it is very hard, perhaps impossible, to
> design a system that is resilient to every conceivable attack, and I
> know that addressing attacks wasn't the goal of the paper. But I'm
> sure there are a good number of attacks, some good and some poor (not
> sure which category the preceding example falls into :) ), which
> become possible only because of the economic model. The above attack
> could be considered a specialized DoS that just tries to negate, or at
> least lower, the collective knowledge expressed as trust. I'm
> interested in any similar attacks on the trust system / countermeasures
> that you've considered.
I did consider all kinds of collaboration attacks, yes. Now, theoretically I
still believe the system is sound in that the economy can ensure that you
need to contribute more than you can consume minus excess (assuming linear
scaling of the network itself and non-malicious nodes applying perfect
heuristics for selecting priorities). The effect of "less" perfect priority
selection heuristics is essentially a correction factor that has to be
applied when weighing in "contributions" and "consumption" (if you're willing
to pay $5 for a coffee worth $1, you're making a loss...)
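In other words (a toy model of my own, not GNUnet code), imperfect bidding
acts like an efficiency factor on consumption:

```python
def trust_grows(contributed: float, consumed: float, efficiency: float) -> bool:
    """A node's trust balance grows only if what it earns by contributing
    exceeds what it spends. Imperfect priority selection (efficiency < 1.0)
    means each unit of consumption costs 1/efficiency units of trust --
    paying $5 for a $1 coffee is an efficiency of 0.2."""
    return contributed > consumed / efficiency
```

A node contributing 10 units and consuming 5 accumulates trust with perfect
bidding, but not if it systematically overpays five-fold.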
> > Well, you assume that when more nodes join the network it has to become
> > more busy. However, more nodes also means more resources. Now, yes, the
> > network may not scale linearly, but the economy cannot solve basic
> > scalability issues: if the network does not scale to a certain size, lots
> > of requests will have to be dropped once the network grows beyond that
> > size. And if that is the case, having enough trust to guarantee
> > performance in a network where most of the time requests are dropped will
> > have to be a scarce occurrence (but it can still be earned by answering
> > requests by others, so the economy is not disabled per-se).
> I did imply that the increase in traffic was a result of the network
> getting larger, and that is generally true, as you point out -- linear
> scaling can't be assumed.
> But for the sake of trying to make the argument that I should have
> made in the first place :), just assume instead that the network gets
> more busy because it gets more busy (same number of nodes). I'm
> imagining a graph, where the x-axis is busy-ness, y-axis #1 is number
> of unfulfilled requests, and y-axis #2 is accumulated trust. Isn't
> there a critical crossover point after which trust no longer
> accumulates, but diminishes? Shouldn't this crossover occur somewhere
> near the point in which nodes are busy more than 50% of the time?
> I recognize that this is an unfair question in at least one way --
> the trust system in GnuNET is premised on the idea that the economy
> /is excess-based/, and I'm asking about what happens when (for
> whatever reason) resources are no longer in excess.
Well, at that point nodes will start to also offer (perhaps slightly) more
trust for content that they receive (since their first, lower bids are more
likely to fail). Now, certainly as the load increases there will be a point
(but I'd suspect it is at 100% load) where the _average_ node would start to
see its overall trust diminish (since _on average_ the network is busy, so
the average node cannot be served all the time). However, nodes that
contribute more resources than they consume would still see their trust
increase. If the network scales so badly that at some points all nodes
consume _at least_ all of the resources that they contribute (100% overhead,
0 effectiveness) no node would be able to continue to accumulate trust.
Oh, and it's GNUnet, not GnuNET (we've nothing to do with .NET and lots with
GNU).