
Re: [GNUnet-developers] Re: Other way to updateable content

From: Tom Barnes-Lawrence
Subject: Re: [GNUnet-developers] Re: Other way to updateable content
Date: Fri, 31 Jan 2003 05:38:55 +0000
User-agent: Mutt/1.3.28i

On Thu, Jan 30, 2003 at 05:35:01PM +0200, Igor Wronsky wrote:
> Of course they can throw them away. And it wouldn't be much additional
> difficulty to throw away anything inserted by a known pseudonym
> (=="someone dealing in no-good content"), no matter which of the
> namespace schemes is used.
 I'd been under the impression that that serial# was unusual in its

> And it beats me why a node would want
> to discard something that is from an unknown pseudonym: the discarding
> will never help a node to gain credit, and neither will the forwarding
> of old blocks,
 I wasn't thinking of ordinary nodes trying to cheat (at the time, at
least); I was thinking of malicious nodes, i.e. nodes trying to harm
GNUnet one way or another.

> if the user requests a newer one.

 The main idea, I felt, was that if the user can't know which version is
the latest, any attempt to mislead can be rather effective. Thinking
about it, though, there would have to be quite a lot of malicious nodes,
or the initiator would most likely find the content via another route.

 The point about middleman nodes sending out further requests for newer
versions when they return a valid response had suggested to me that it
was entirely their responsibility. Looking back now, I see that the
client must keep sending requests for versions later than whatever it
gets, until it gets nothing. I suppose that can work.
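 As a rough illustration of that client-side loop (a sketch only;
`query_network` and the block shape here are assumptions, not real
GNUnet APIs):

```python
# Hypothetical sketch of the client-side polling loop: keep issuing
# greater-than queries for anything newer than the best version seen
# so far, and stop when the network returns nothing newer.

def fetch_latest(namespace_id, start_version, query_network):
    """Return the newest block reachable via repeated greater-than queries."""
    best = None
    version = start_version
    while True:
        block = query_network(namespace_id, newer_than=version)
        if block is None:        # nobody responded with anything newer
            return best
        best = block
        version = block.version  # the next query must beat this version
```

 This also shows why the scheme is costly: the client cannot know it is
done until a query finally comes back empty.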

 However, the client doing this produces two obvious problems: it could
be pretty expensive to obtain that block (and getting the content for it
afterwards might be sluggish), and it would take even longer than the
rest of GNUnet, which is quite slow already!

 True, if the newer versions get more widely dispersed each time someone
sends requests, there should be fewer old versions floating about.

 Now I see that the latest flaw I spotted (a node has an incentive to
store all versions of a rewritable block, and always to return the
earliest version that matches the "greater-than" query) appears to have
been addressed in those emails I missed, though I'm not sure how the
timestamps (where the user is required to guess that the timestamp
corresponds to when the author said they'd make the next update) help
to prevent this abuse, nor why a cheating node could only cheat

 Sure, other nodes will have later versions and may send them straight away.
Doesn't hurt you. They don't *necessarily* have later versions than the
latest of yours. And they might try playing exactly the same game too, in
which case this could potentially take a VERY long time.
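 The flaw can be stated concretely. Under assumed semantics (this is an
illustration, not GNUnet code), an honest node answers a greater-than
query with the newest matching version, while a hoarding node returns
the oldest version that still satisfies the query, so it can keep
"answering" the same client for as long as possible:

```python
# Both nodes answer a query for "any version newer than `newer_than`";
# only the strategy for choosing among matching versions differs.

def honest_answer(stored_versions, newer_than):
    """Return the newest stored version satisfying the query."""
    candidates = [v for v in stored_versions if v > newer_than]
    return max(candidates) if candidates else None

def cheating_answer(stored_versions, newer_than):
    """Return the oldest matching version, dragging out the exchange."""
    candidates = [v for v in stored_versions if v > newer_than]
    return min(candidates) if candidates else None
```

 With versions 1 through 4 stored and a query for "newer than 1", the
honest node hands over version 4 at once; the cheat hands over version 2
and keeps versions 3 and 4 in reserve for future queries.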

 If you're suggesting that when a user says "I don't think this version
is recent enough", that his node should in fact not credit whoever
responded with it, then that makes dealing with any request of this
type suddenly very unattractive, as the potential rewards involved
become far more uncertain (whilst the costs are pretty high).

 Have I finally got one of these things right?

> The namespace proposal - if I remember correctly <grin> -
> already contains such an option that enables users to publish
> 'editions' such that each has some id (encrypted) and that
> are not overwritable. The problem here is that greater-than
> queries cannot be supported. I'd like to see (but it's unlikely
> that such would be forthcoming soon) some sort of analysis
> showing how much the overwriting scheme gains by overwriting.
> On a quick thought, the trivial gains are that old entry
> points will be replaced, slightly reducing the used space
> and lessening the amount of transferring and querying for
> obsoleted data.

 Yep, fairly sure that's what I was thinking of. However, my
understanding was that you specified (somewhere at the start) the
frequency of those editions, so that the client could work out the
timestamp for the most recent version of the block and download it
directly. In which case, why would you need a greater-than query when
you could just make a now query, or a that-specific-time-long-ago
query? As I said before, the problem of what happens when someone can't
be bothered to do an update, or drops dead, etc., is not a huge problem
(except for the aforementioned dead chap), as users can find the latest
of the out-of-date copies by searching backwards, or choose not to look.
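 The arithmetic behind this is trivial, which is rather the point. A
minimal sketch, assuming the author announces a start time and an update
interval (all names here are illustrative, not part of any GNUnet API):

```python
# Given the announced start time and edition interval, a client can
# compute the newest edition that should exist "now" and query for it
# directly; if that one is missing, it simply steps the index backwards.

def latest_edition_index(start_time, interval, now):
    """Index (0-based) of the newest edition due by time `now`."""
    if now < start_time:
        return None
    return (now - start_time) // interval

def edition_timestamp(start_time, interval, index):
    """Timestamp identifying edition number `index`."""
    return start_time + index * interval
```

 For example, with a start time of 1000 and an interval of 100, at time
1350 the client asks for edition 3 (timestamp 1300) directly, with no
greater-than query needed.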

 Many people putting up content would only choose to update perhaps
every fortnight or so, and this leaves reasonable time to do each
update, meaning gaps could be rare and short.
 And in this editions system, I don't see any way that nodes would
profit from returning anything other than the best response to
fit the user's request (as only one response could be what was
requested, no?)

> > to be a bit unconvinced of the idea, but at the same time,
> > so am I- or at least, the fproxy+HTML part of it.
> There's no immediate fear that I would be attempting to
> code either of them, or their combination, or

 Great, that's what I'd thought.

> If you wish to specify a GNML or use gopher or something less expressive
> than html (and probably obsolete) for such a purpose, I don't
> have anything against it, but most likely I don't have the time
> or patience to participate in it.

 I already understood you weren't interested in working on my approach
to the hyperlinked content idea, and I really don't mind. I was just
checking whether you were planning on copying Freenet's fproxy, which
you did seem to say you liked (as you say next).

     <Snip big paragraph on wanting to keep gnunet's core>
   <bug-free and efficient rather than worrying about externals>
> This whining may sound silly and unconstructive,

 No, on the contrary, you seem to have your head firmly screwed on!
 A million talking paperclips don't help in the slightest if the
 server code isn't working right.

> Uh, a lengthy explanation, but it has a reason: as much
> as I dislike it, I'd rather spend the little patience I
> have on getting over the current problems instead of
> running off to implement new ones. ;)

 I understand your attitude entirely, and IMHO it's a far better
attitude than "I only want to code whatever happens to interest
me regardless of what is really needed now".

> (of course it could've been
> written for technically aware coders interested in taking part in
> gnunet development right now, but they seem to keep awfully quiet,
> which leads me to question their existence at the present time,
> or at least I don't often hear about those wanting to fix the
> core operation... hehe, with less documentation the freenet
> project has many more eager beavers there).
 I myself have been wondering about that. Are all those other,
currently silent devs on holiday? Have they left the project? Or do
they just have nothing whatsoever to say recently?
Speak up, you lot! ;) Previous months seemed to show lots of people.

 As for the GNUnet documentation, in general it seems pretty good,
though sometimes hard to digest, especially when it requires a
deeper understanding of how gnunet works.

> Agreed. But "us guys" are not gods either, and personally I understand
> gnunet much less than I should, and would happily see new people
> in the core devel group.

 I try to "help" where I feel I understand enough to point out something
potentially useful, and try to butt out when I know for sure I don't.
I'm working through those papers off the website and seem to be
understanding most of it so far; the general idea there being that I
could be a lot more helpful than now if I was more certain of what I
was talking about :P

 Maybe writing the client stuff might make me reasonably familiar with
the server code, but otherwise I doubt I can track down any server bugs,
only observe/mention their existence.

Tom Barnes-Lawrence
