
From: peter
Subject: [GNUnet-developers] Playing with GNUnet
Date: 7 Jul 2002 23:55:10 -0000

Let's see... a few issues that arose while setting up a GNUnet server.
(In the usual style, this is all complaints and no praise.
Sorry, I haven't used it enough to be really impressed yet.
I'm still working on finding something more interesting
than the GPL on it.)

I presume you've already heard the suggestion that you migrate from
Blowfish to AES.  530 encryption operations to schedule 4K of S-boxes
just to do 128 encryption operations per 1K payload seems... excessive.


The promised FAQ entry on disk usage efficiency and ext2 doesn't
seem to exist.  I presume that it's telling me that my current
file system parameters of 4K blocks and 16K bytes/inode will
not work very well for GNUnet, which wants 1K for each.

One problem with the 1K block size is that, since you don't have any
inter-block mixing, it might be vulnerable to code book attacks for
low-entropy sources.  Unfortunately, some file-wide mixing might
remove the easy random-access you have right now.

One possible solution would be to define the encryption key e_i for
block i, with parent block j, not as h_i itself, but as e_i = f(h_i,
e_j) for some suitable combining function f (perhaps XOR?).  For this
purpose, the root's e_j is taken to be 0.

This makes the ciphertext for the whole file depend on the hash of the
whole file, without requiring any more data than is already present in
the indexes.
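The combining step above can be sketched as follows.  (This is only an
illustration of f = XOR; KEY_LEN and the function name are made up here,
and the real key size would be whatever GNUnet's hashes use.)

```c
#include <stddef.h>

#define KEY_LEN 20  /* hypothetical: assume 160-bit hashes/keys */

/*
 * Derive block i's encryption key e_i = f(h_i, e_parent) with f = XOR.
 * h is the block's content hash; e_parent is the parent block's key,
 * taken to be all zeros for the root block.
 */
static void
derive_key(unsigned char e_out[KEY_LEN],
           const unsigned char h[KEY_LEN],
           const unsigned char e_parent[KEY_LEN])
{
        size_t k;

        for (k = 0; k < KEY_LEN; k++)
                e_out[k] = h[k] ^ e_parent[k];
}
```

Since XOR is self-inverse, a reader who has decrypted the parent (and so
knows e_j) can recover h_i from e_i just as easily, so random access per
subtree is preserved.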

I'm not quite sure how to interpret the daemon's debugging output, but
it occurs to me that TTL values could be made more deniable.  An easy
one would be to change to only decrementing them half the time (and
halving the initial value to compensate).  Thus, just because I sent
a query with a given TTL doesn't mean that I (or one of my peers)
originated it...
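The probabilistic decrement is a one-liner; a sketch (using rand() just
for illustration, a real daemon would use its own RNG):

```c
#include <stdlib.h>

/*
 * Decrement the TTL only half the time, so that an observed TTL no
 * longer pins down how many hops a query has traveled.  Initial TTLs
 * would be halved to keep the expected path length unchanged.
 */
static int
forward_ttl(int ttl)
{
        if (rand() & 1)
                ttl--;
        return ttl;
}
```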


gnunetd really should fork into the background once it's completed
initialization, like a good daemon.  While I can just run it in the
background in the first place, that makes it impossible to check for
errors.
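The classic way to get both is to fork early and have the parent block
on a pipe until the child reports that initialization succeeded.  A
sketch of the pattern (not gnunetd's actual code; setsid(), chdir("/")
and the like are omitted):

```c
#include <stdlib.h>
#include <unistd.h>

/*
 * Fork into the background, but let the invoking shell block until
 * initialization finishes.  In the child (the daemon-to-be) this
 * returns the write end of a pipe; write a 0 byte there once init
 * succeeds.  The parent waits for that byte and exits with it as its
 * status, so a child that dies during init yields a nonzero exit.
 */
static int
daemonize(void)
{
        int fds[2];
        char status;

        if (pipe(fds) < 0)
                return -1;
        switch (fork()) {
        case -1:
                return -1;
        case 0:                         /* child: becomes the daemon */
                close(fds[0]);
                return fds[1];
        default:                        /* parent: await the verdict */
                close(fds[1]);
                if (read(fds[0], &status, 1) != 1)
                        status = 1;     /* child died before reporting */
                exit(status);
        }
}
```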

When setting gnunetd up in a chroot jail, I was a little confused as to why
it needs all the libextractor_* libraries (which pull in libvorbis*).
Shouldn't that only be used by the administrative commands?

In fact, removing -lextractor from the Makefiles in utils, common,
and server produced a server without this dependency.  Perhaps the
build process could be tweaked a little?


The need for /proc/loadavg, /proc/net/dev, /dev/null and /dev/urandom
might be worth documenting.  Fortunately, it's always possible to
"mount -r --bind /proc/foo /jail/proc/foo", as long as you have touched
/jail/proc/foo beforehand.

gnunetd by default creates its directories with mode 0700.  gnunet-search
wants to look at data/hosts/*, which is impossible from another uid.


I notice that gnunet forks children which spend a lot of time in
loops like:

getppid()                               = 17459
poll([{fd=8, events=POLLIN}], 1, 2000)  = 0
getppid()                               = 17459
poll([{fd=8, events=POLLIN}], 1, 2000)  = 0
getppid()                               = 17459
poll([{fd=8, events=POLLIN}], 1, 2000)  = 0
getppid()                               = 17459

There is an easy way to block on the parent exiting, using a pipe.
Create a pipe which only the parent can write to (children close the fd
after forking), and it will poll ready-to-read (EOF) when the parent exits
(and closes the write end).
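A self-contained demonstration (the roles are flipped here so the whole
thing fits in one process tree: the forked process stands in for the
parent whose exit we want to notice):

```c
#include <poll.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * The watcher keeps only the read end of a pipe.  When the process
 * holding the write end exits, the kernel closes that end and poll()
 * wakes with EOF instead of timing out, so in gnunetd the children
 * could block with an infinite timeout rather than polling getppid()
 * every two seconds.  Returns poll()'s result: 1 = peer exited,
 * 0 = timed out, -1 = error.
 */
static int
peer_exit_demo(void)
{
        int fds[2];
        pid_t pid;
        struct pollfd pfd;
        int r;

        if (pipe(fds) < 0)
                return -1;
        pid = fork();
        if (pid < 0)
                return -1;
        if (pid == 0) {         /* stands in for the parent process */
                close(fds[0]);
                usleep(100 * 1000);
                _exit(0);       /* exiting closes the write end */
        }
        close(fds[1]);          /* watcher keeps only the read end */
        pfd.fd = fds[0];
        pfd.events = POLLIN;
        r = poll(&pfd, 1, 5000);
        close(fds[0]);
        waitpid(pid, NULL, 0);
        return r;
}
```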

There's another process which spends its time doing:

nanosleep({1, 0}, {1, 0})         = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
nanosleep({1, 0}, {1, 0})         = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
nanosleep({1, 0}, {1, 0})         = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
nanosleep({1, 0}, {1, 0})         = 0
time([1026070510])                = 1026070510
time([1026070510])                = 1026070510
time([1026070510])                = 1026070510
... 190 calls to time() deleted ...
time([1026070510])                = 1026070510
time([1026070510])                = 1026070510
time([1026070510])                = 1026070510
rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
nanosleep({1, 0}, {1, 0})         = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
nanosleep({1, 0}, {1, 0})         = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
nanosleep({1, 0}, {1, 0})         = 0

It is unclear why
- It has to check once per second that SIGCHLD is still set to SIG_DFL
- It has to block SIGCHLD around this unnecessary check
- It has to periodically make 196 calls to time(), all within
  the same second.
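If those call sites really only need one-second resolution, a cached
timestamp refreshed once per main-loop wakeup would cut this to one
syscall per second.  A sketch (function names invented here):

```c
#include <time.h>

static time_t now_cache;

/* Refresh once per wakeup of the main loop, e.g. after nanosleep(). */
static void
refresh_now(void)
{
        now_cache = time(NULL);
}

/*
 * Cheap replacement for time(NULL) at call sites that only need
 * one-second resolution; no syscall, just a read of the cache.
 */
static time_t
get_now(void)
{
        return now_cache;
}
```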


If I might suggest a smaller bit of source code, modern processors
with branch prediction don't need unrolled loops:


/* Copyright abandoned; this code is in the public domain. */
#include <limits.h>
#include <stddef.h>     /* for size_t */

/* Avoid wasting space on 8-byte longs. */
#if UINT_MAX >= 0xffffffff
typedef unsigned int atleast_32;
#elif ULONG_MAX >= 0xffffffff
typedef unsigned long atleast_32;
#else
#error This compiler is not ANSI-compliant!
#endif

#define POLYNOMIAL (atleast_32)0xedb88320
static atleast_32 crc_table[256];

/*
 * This routine writes each crc_table entry exactly once,
 * with the correct final value.  Thus, it is safe to call
 * even on a table that someone else is using concurrently.
 */
static void
make_crc_table(void)
{
        unsigned int i, j;
        atleast_32 h = 1;
        crc_table[0] = 0;
        for (i = 128; i; i >>= 1) {
                h = (h >> 1) ^ ((h & 1) ? POLYNOMIAL : 0);
                /* h is now crc_table[i] */
                for (j = 0; j < 256; j += 2*i)
                        crc_table[i+j] = crc_table[j] ^ h;
        }
}

/*
 * This computes the standard preset and inverted CRC, as used
 * by most networking standards.  Start by passing in an initial
 * chaining value of 0, and then pass in the return value from the
 * previous crc32() call.  The final return value is the CRC.
 * (Standard check value: crc32(0, "123456789", 9) == 0xcbf43926.)
 * Note that this is a little-endian CRC, which is best used with
 * data transmitted lsbit-first, and it should, itself, be appended
 * to data in little-endian byte and bit order to preserve the
 * property of detecting all burst errors of length 32 bits or less.
 */
atleast_32
crc32(atleast_32 crc, char const *buf, size_t len)
{
        if (crc_table[255] == 0)
                make_crc_table();
        crc ^= 0xffffffff;
        while (len--)
                crc = (crc >> 8) ^ crc_table[(crc ^ *buf++) & 0xff];
        return crc ^ 0xffffffff;
}


