
From: peter
Subject: Re: [GNUnet-developers] Playing with GNUnet
Date: 12 Jul 2002 00:36:52 -0000

>> A little bit of thinking reveals the following "attack":
>> - Suppose I wish to suppress some specific samizdat of length >1K,
>>   e.g. I'm a Scientologist and I want to stop people quoting the NOTs.
>> - I generate all possible 1K windows of the material in question,
>>   encrypt them and hash them.  This produces output 20 times as large as
>>   the document I want to suppress.
>> - I get a court order forbidding people to serve blocks with the given
>>   hashes.  I haven't revealed the plaintext, but I've (partially) defeated
>>   deniability.

> Not deniability. Censorship resistance. You were successfully able to force 
> the defendant to whom the court order applies (which would be one individual 
> since you can't get such an order 'for the world') to blacklist a couple of 
> hashes. In a network with high connectivity, that would neither efficiently 
> censor the material (since many other hosts would still route and replicate 
> the data) nor prevent the defendant from sharing the content if one character 
> per 1k block is changed -- because then the defendant can again claim not to 
> know what it was. Thus this would rather be a waste of your time.

1) Are you *sure* *nobody* can get such a court order?  I've seen some
   impressive legal shenanigans lately, and laws can change.
   There might also be some equivalent threat that could be used
   (e.g. sending nasty letters to ...).
2) I don't quite understand your first two sentences.  (No verb.)
   The point is that I can put a large number of server operators on
   notice that they may not carry certain material.  I.e. if I've
   told them the forbidden hashes, they can't deny knowing they're
   carrying it.
3) You can likewise, given H(H(B)), threaten to sue people for forwarding
   queries for H(H(H(B))).  That would hurt availability, no?

Yes, there are defences at the publisher's end.  I just wanted to avoid
the need to make them manual.  Tools that don't have security
interactions with the payloads they carry are a Good Thing.
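
Just to underline how cheap the attacker's side of this is, here is a
sketch of the window enumeration in C (assuming SHA-1 as the 20-byte
hash and GNUnet-style keying, where a block is encrypted with its own
hash and looked up by a further hash of that key; the encryption step
itself is omitted, since its key is derived from the window anyway):

/* Emit one 20-byte query hash per byte offset -- hence output roughly
 * 20 times the size of the document being suppressed. */
#include <stdio.h>
#include <openssl/sha.h>

#define WINDOW 1024

void enumerate_forbidden_hashes(const unsigned char *doc, size_t len)
{
    unsigned char key[SHA_DIGEST_LENGTH];   /* H(B), the block key */
    unsigned char query[SHA_DIGEST_LENGTH]; /* H(H(B)), the query  */
    size_t i;

    for (i = 0; i + WINDOW <= len; i++) {
        SHA1(doc + i, WINDOW, key);
        SHA1(key, SHA_DIGEST_LENGTH, query);
        fwrite(query, 1, SHA_DIGEST_LENGTH, stdout); /* the blacklist */
    }
}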


(I'm still not clear on what prevents a malicious node from answering
a query for H(H(B)) with a garbage block.  Only possession of H(B) lets
one distinguish a valid response from an invalid one, and intermediate
nodes don't have it.)
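
To spell out the asymmetry (a sketch, assuming SHA-1 and a reply that
is simply the block encrypted under H(B); decryption is omitted):

#include <string.h>
#include <openssl/sha.h>

/* The requester holds the key H(B): decrypt the reply with it, then
 * check that the plaintext hashes back to the key. */
int requester_check(const unsigned char *key,      /* H(B), 20 bytes */
                    const unsigned char *plaintext, size_t len)
{
    unsigned char h[SHA_DIGEST_LENGTH];
    SHA1(plaintext, len, h);
    return memcmp(h, key, SHA_DIGEST_LENGTH) == 0;
}

/* The best an intermediate node could do is check a *claimed* key
 * against the query H(H(B)) -- but a reply carrying the key in the
 * clear would hand it to every hop, defeating the encryption. */
int intermediary_check(const unsigned char *query,       /* H(H(B)) */
                       const unsigned char *claimed_key)
{
    unsigned char h[SHA_DIGEST_LENGTH];
    SHA1(claimed_key, SHA_DIGEST_LENGTH, h);
    return memcmp(h, query, SHA_DIGEST_LENGTH) == 0;
}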

>> This can be defended against by altering the quoted plaintext:
>> punctuation, spacing, line-wrapping, or the like, but why should the
>> system user have to go to the trouble?

> Having the option to remove content whenever you are seriously (legally) 
> threatened to do so -- without impacting availability (!) -- is something I'd 
> rather consider a good thing. 

It seems to me more of a Bad Thing, because it implies a failure of
deniability.  If you have the possibility of telling "good" from "bad"
(for some particular point of view), then you can (like Napster) be
held responsible for enforcing the distinction.  It's important to be a
"common carrier" without editorial control.

>> I agree that the value of having multiple insertions of the same
>> plaintext "collide" and merge is significant.  Is accidental
>> sharing of matching *blocks* in different plaintext sufficiently
>> likely to be worthwhile?

> I would say that this depends a lot on the application. Append-only log-files,
> large headers (e.g. in doc or ps files, license texts at the beginning of 
> code) or very redundant data (all-zeros) come to mind. Aborted downloads may 
> be another example. 

Good points.  But that could be addressed by chaining the hash of the
previous block (which is stored in the index, so you could still do
random-access output) into the hash of the current block.  That allows
common prefixes to share, but prevents precomputing all possible hashes
of excerpts.  (As for blocks of zeros, isn't that a bit wasteful of
bandwidth?  Shouldn't such files be compressed first?  Although,
admittedly, once it's been transmitted once, it *is* cached...)
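
One way to realize that chaining (a sketch, assuming SHA-1; the first
block keeps the plain convergent key, so identical whole files still
merge):

#include <string.h>
#include <openssl/sha.h>

#define BLOCK 1024

/* key[0] = H(B_0); key[i] = H(B_i || key[i-1]).  Two files share keys
 * exactly for their common prefix, but nobody can precompute the keys
 * of arbitrary 1K excerpts. */
void chained_keys(const unsigned char *doc, size_t nblocks,
                  unsigned char keys[][SHA_DIGEST_LENGTH])
{
    unsigned char buf[BLOCK + SHA_DIGEST_LENGTH];
    size_t i;

    SHA1(doc, BLOCK, keys[0]);
    for (i = 1; i < nblocks; i++) {
        memcpy(buf, doc + i * BLOCK, BLOCK);
        memcpy(buf + BLOCK, keys[i - 1], SHA_DIGEST_LENGTH);
        SHA1(buf, sizeof buf, keys[i]);
    }
}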

>> (I should check the source, but...) are the TTLs chosen from an infinite
>> (e.g. normal) or finite (e.g. uniform window) distribution?  The latter
>> allows definite statements about the location of the source of a query to
>> be made, particularly any time a value near the limit of the possible is
>> seen.  An infinite distribution just makes it less likely.

> TTLs are chosen from a window that grows exponentially over time and is thus 
> potentially infinite, but typically small.

Good.  You don't want large values to be *common*, just possible enough
to fit into "reasonable doubt."
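
Something with this shape would do it -- a sketch only, not a claim
about what gnunetd actually does:

#include <stdlib.h>

unsigned int random_ttl(unsigned int base)
{
    unsigned int window = base;

    /* Each extra doubling happens with probability 1/2, so a window
     * of 2^k * base occurs with probability 2^-k: every TTL is
     * possible, large ones are rare.  Capped only against overflow. */
    while ((rand() & 1) && window < (1u << 24))
        window *= 2;
    return rand() % window;    /* uniform within the chosen window */
}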

> I was more thinking about the estimation on how long it could take for a 
> reply. The client and the nodes need an estimate on how long to wait until 
> either scheduling the query for a re-send (client) or freeing the slot in the 
> routing table (node). If the TTL is 'randomized' like that, this estimate 
> could only be approximate (being wrong would not be a disaster, just bad for 
> efficiency). I'm not yet sure where to put my money for this trade-off, 
> especially since the initial TTLs are from a 'potentially' infinite window.

H'm... I'll have to look at the implementation in more detail.
Aren't forwards subject to random time delays anyway?

> Well, I have that book. Anyway, I doubt this will work if during the 
> initialization (and checking, etc) the main method has already created a 
> couple of pthreads? If you then fork off the main method, could that not 
> result in having the threads run in a different process than the rest of the 
> main method and thus without sharing of global state?

Ah, yes, that *will* make a mess of things unless you understand the
pthread implementation.

> So we may need to re-factor the entire initialization to ensure that threads 
> are only created after we fork (which would be a bit more work than it is 
> worth at this point imo); or we could just fork at the beginning and do 
> error-checking after already being detached from the console (which would be 
> ugly). 

My preference is to write things without pthreads in the first place,
but I see how that's a bit problematic once you've started down that
path...

> Better suggestions?

Open a pipe, fork, have the child start the pthreads and send success
back to the parent.  The parent hangs around until the child has sent
the success code over the pipe, or has exited.  (Note that the latter
can be detected by the EOF on the pipe.)

Then the parent returns the appropriate error code to whatever invoked it.
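
A sketch of that, assuming plain POSIX calls and a hypothetical
run_initialization() that returns 0 on success; only the child ever
creates pthreads, and only after the fork, so the mess above never
arises:

#include <sys/types.h>
#include <unistd.h>

extern char run_initialization(void);  /* hypothetical; 0 = success */

int daemonize(void)
{
    int fds[2];
    pid_t pid;
    char status;

    if (pipe(fds) < 0 || (pid = fork()) < 0)
        return -1;

    if (pid > 0) {                     /* parent: await the verdict */
        close(fds[1]);
        if (read(fds[0], &status, 1) == 1 && status == 0)
            _exit(0);                  /* child came up fine */
        _exit(1);                      /* error byte, or EOF: child died */
    }

    close(fds[0]);                     /* child: detach and initialize */
    setsid();
    status = run_initialization();     /* may now create pthreads */
    (void) write(fds[1], &status, 1);  /* parent sees EOF if we crash */
    close(fds[1]);
    return status == 0 ? 0 : -1;
}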

>>> Sounds like a bug. Added to the pile...
>>> http://www.ovmj.org/~mantis/view_bug_page.php?f_id=324

>> Thanks.  Looking at it is tricky:
>>> Mantis - OVM Bugtracking
>>>
>>> ERROR: your account may be disabled or the username/password you entered
>>> is incorrect.
>>>
>>> [ Click here to proceed ]
>>
>> Does Mantis require cookies enabled or some such?

> Yes. Use konqueror - there you can specify which sites are allowed to use
> cookies and which are not.

I use galeon, which is similar.  But I also have to add it to the
junkbuster config files.  I believe in defence in depth.  Doubleclick,
for a while, was supporting 128-bit HTTPS delivery of its GIFs to get
around blocks.  This elicited the following response:

--- Begin /etc/bind/db.doubleclick
$ORIGIN net.
$TTL 86400
doubleclick             IN      SOA     science.horizon.com. hostmaster.horizon.com. (
                                981002 3600 1800 3600000 86400 )
                        NS      science.horizon.com.
$ORIGIN doubleclick.net.
--- End /etc/bind/db.doubleclick


> If you consider the case that cron may have the next job in 1h and then 
> suddenly a job gets added for 'in 30s' (and you'd have to cancel the previous
> wait), I'll be very happy about a patch :-)

Hint taken.
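
Concretely, what I have in mind is an interruptible wait (a sketch,
assuming pthreads; the job queue and the next_deadline()/run_due_jobs()
helpers are hypothetical and not shown):

#include <pthread.h>
#include <time.h>

extern time_t next_deadline(void);  /* hypothetical: earliest queued
                                       job, as absolute time */
extern void run_due_jobs(void);     /* hypothetical */

static pthread_mutex_t lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wakeup = PTHREAD_COND_INITIALIZER;

void *scheduler_thread(void *arg)
{
    struct timespec ts;

    (void) arg;
    pthread_mutex_lock(&lock);
    for (;;) {
        ts.tv_sec  = next_deadline();
        ts.tv_nsec = 0;
        /* Sleeps until the deadline OR until add_job() signals; the
         * mutex is released while asleep, so add_job() can always get
         * in.  Either way the deadline is recomputed next time round. */
        pthread_cond_timedwait(&wakeup, &lock, &ts);
        run_due_jobs();
    }
}

void add_job(time_t when)
{
    pthread_mutex_lock(&lock);
    /* ... insert the job into the queue (not shown) ... */
    pthread_cond_signal(&wakeup);   /* the pending 1h wait collapses */
    pthread_mutex_unlock(&lock);
}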


