From: Andrei Pelinescu-Onciul
Subject: [BUG] gdbm, gdbm_open 1.7.3 & 1.8.0
Date: Fri, 06 Jul 2001 20:30:38 +0200

I've found the following bug in gdbm_open:

When creating a new database (GDBM_NEWDB) with a block_size of 0, or any
value less than 512, file_block_size is taken from an fstat call:

      if (block_size < 512)
        file_block_size = STATBLKSIZE;   /* st_blksize from fstat */
      else
        file_block_size = block_size;
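
To see which value gdbm_open will actually pick up, a small test program
(not part of gdbm; a sketch assuming a POSIX system) can print the
st_blksize that fstat returns for a given file:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main (int argc, char **argv)
    {
      struct stat sb;
      int fd = open (argc > 1 ? argv[1] : ".", O_RDONLY);

      if (fd < 0 || fstat (fd, &sb) < 0)
        {
          perror ("fstat");
          return 1;
        }
      /* st_blksize is the preferred I/O size; nothing guarantees
         that it is a power of two.  */
      printf ("st_blksize = %ld\n", (long) sb.st_blksize);
      close (fd);
      return 0;
    }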

Further down in gdbm_open we have:
      dbf->header->block_size = file_block_size;
      [...]
      dbf->header->dir_size = 8 * sizeof (off_t);
      dbf->header->dir_bits = 3;
      while (dbf->header->dir_size < dbf->header->block_size)
        {
          dbf->header->dir_size <<= 1;
          dbf->header->dir_bits += 1;
        }
      if (dbf->header->dir_size != dbf->header->block_size)
        {
          gdbm_close (dbf);
          gdbm_errno = GDBM_BLOCK_SIZE_ERROR;
          return NULL;
        }

But if file_block_size is not a power of two, it will never equal
dbf->header->dir_size, and gdbm_open will fail. (The block size returned
by stat is supposed to be the optimal size for disk I/O, not the actual
filesystem block size, so there is no guarantee it is a power of two.)
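
The doubling loop can be reproduced in isolation to show this (a
standalone sketch, not gdbm code): dir_size starts at 8 * sizeof (off_t)
and doubles, so it only ever takes power-of-two values and can match
block_size exactly only when block_size itself is a power of two:

    #include <stdio.h>
    #include <sys/types.h>

    /* Same doubling logic as in gdbm_open above.  */
    static int
    block_size_ok (long block_size)
    {
      long dir_size = 8 * sizeof (off_t);

      while (dir_size < block_size)
        dir_size <<= 1;
      return dir_size == block_size;
    }

    int
    main (void)
    {
      /* 196608 is the value from the xfs example below.  */
      printf ("4096:   %s\n",
              block_size_ok (4096) ? "ok" : "GDBM_BLOCK_SIZE_ERROR");
      printf ("196608: %s\n",
              block_size_ok (196608) ? "ok" : "GDBM_BLOCK_SIZE_ERROR");
      return 0;
    }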

This happened to me on a Linux xfs filesystem created with sunit=128 and
swidth=384 (sunit = stripe unit, swidth = stripe width, both counted in
512-byte blocks). swidth determines the preferred I/O size returned by
the stat syscall, so on this filesystem fstat reports an I/O block size
of 196608 bytes (384 * 512). Since 196608 = 3 * 65536 is not a power of
two, the piece of code above will always fail.
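
As a workaround, a caller can presumably sidestep the fstat-derived
value entirely by passing an explicit power-of-two block_size of at
least 512 to gdbm_open. A minimal sketch (the file name and 4096 are
just examples, not values from this report):

    #include <stdio.h>
    #include <gdbm.h>

    int
    main (void)
    {
      /* An explicit block_size >= 512 skips the STATBLKSIZE path.  */
      GDBM_FILE dbf = gdbm_open ("test.db", 4096, GDBM_NEWDB, 0644, NULL);

      if (dbf == NULL)
        {
          fprintf (stderr, "gdbm_open: %s\n", gdbm_strerror (gdbm_errno));
          return 1;
        }
      gdbm_close (dbf);
      return 0;
    }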

Another problem (pointed out to me by Steve Lord) is that if you blindly
follow the value reported by the kernel, you end up with big, inefficient
dbm files. Perhaps it would be better to cap the block size on Linux at a
maximum value (maybe 4K; 64K already seems to be too much).
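
One possible shape for such a fix (a sketch of the idea only, not a
tested patch against gdbm) is to clamp the fstat-derived size to a cap
and round it down to a power of two before storing it, so the dir_size
check above can always succeed:

    #include <stdio.h>

    #define MIN_FILE_BLOCK_SIZE 512    /* gdbm's existing minimum */
    #define MAX_FILE_BLOCK_SIZE 4096   /* suggested cap */

    /* Clamp to the cap, then round down to a power of two.  */
    static long
    sane_block_size (long stat_blksize)
    {
      long p = MIN_FILE_BLOCK_SIZE;

      if (stat_blksize > MAX_FILE_BLOCK_SIZE)
        stat_blksize = MAX_FILE_BLOCK_SIZE;
      while (p * 2 <= stat_blksize)
        p *= 2;
      return p;
    }

    int
    main (void)
    {
      /* The 196608 from the xfs example above becomes 4096.  */
      printf ("%d -> %ld\n", 196608, sane_block_size (196608));
      printf ("%d -> %ld\n", 1024, sane_block_size (1024));
      return 0;
    }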


Andrei


