Re: df reporting problem with raid

From: Jim Meyering
Subject: Re: df reporting problem with raid
Date: Sat, 09 Sep 2006 10:01:56 +0200
Steven Fishback <address@hidden> wrote:
> I recently setup raid 1 disks on my Linux OS.
> The other 300+ kickstart nodes are exactly the same minus the raid setup
> and are working fine.
> So I'm only guessing the raid setup has df reporting incorrectly for the
> nfs mounts???
> I'm using RHEL 3.0AS 2.4.21-4.ELsmp
> df is from coreutils-4.5.3-26
4.5.3 is pretty old, even if it has 26 backported patches.
But that's not your main problem. See below.
> The output that's off is for an NFS-mounted NetApp storage device.
> See the last four lines at the bottom of the output.
>
> # df -h
> Filesystem Size Used Avail Use% Mounted on
> trunk:/vol/scr 1.1T 28G 1.1T 3% /scr
> trunk:/vol/scr 1.1T 28G 1.1T 3% /netscr
> trunk:/vol/nas 821G -8.0Z 1019G 101% /nas
> AFS 1.1P 0 1.1P 0% /afs
>
> It should report this:
> trunk:/vol/scr 3.1T 2.1T 1.1T 66% /scr
> trunk:/vol/scr 3.1T 2.1T 1.1T 66% /netscr
> trunk:/vol/nas 2.9T 1.9T 1019G 65% /nas
> AFS 8.6G 0 8.6G 0% /afs
From your strace output, you can see that the "Size" values that df
is reporting are consistent with what statfs reports. For example,
for /scr, f_blocks * f_bsize does give 1.1T:
statfs("/scr", {f_type="NFS_SUPER_MAGIC", f_bsize=512,
f_blocks=2346262944, f_bfree=2268743128, f_bavail=2268743128,
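You can check that arithmetic directly from the statfs fields quoted
above (df -h scales by powers of 1024):

```python
# Reproduce df's "Size" column for /scr from the statfs fields above.
f_bsize = 512
f_blocks = 2346262944

size_bytes = f_blocks * f_bsize
size_tib = size_bytes / 2**40  # df -h uses 1024-based units
print(f"{size_tib:.1f}T")  # 1.1T, matching df's output for /scr
```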
Note the negative "Used" value (-8.0Z) for /nas above.
That is because statfs is reporting more free blocks than total:
[here, f_bfree is larger than f_blocks]
statfs("/nas", {f_type="NFS_SUPER_MAGIC", f_bsize=512,
f_blocks=1719877224, f_bfree=2144061328, f_bavail=2144061328,
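One plausible reading of the -8.0Z figure (an assumption about the
arithmetic, not a claim about this exact coreutils build): the "used"
difference goes negative, and if that difference passes through
unsigned 64-bit arithmetic at some point, its magnitude lands near
8 zebibytes:

```python
# Illustrative: f_bfree > f_blocks makes "used" negative; viewed
# through unsigned 64-bit wraparound, the value is about 8 ZiB.
f_bsize = 512
f_blocks = 1719877224
f_bfree = 2144061328

used_blocks = f_blocks - f_bfree
print(used_blocks)  # -424184104: already nonsensical

wrapped_bytes = (used_blocks % 2**64) * f_bsize  # as a uint64 would see it
print(f"{wrapped_bytes / 2**70:.1f}")  # 8.0 (zebibytes)
```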
Newer upstream releases like coreutils-5.97 and coreutils-6.1
have a work-around so that df doesn't print negative numbers,
even when its inputs are bogus.
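The work-around amounts to refusing to derive a negative usage figure
from inconsistent inputs. A minimal sketch of that idea (illustrative
pseudologic only, not coreutils' actual source):

```python
# Clamp the derived "used" value instead of printing a negative number.
def used_blocks(f_blocks, f_bfree):
    if f_bfree > f_blocks:  # bogus input from statfs
        return 0            # don't report negative usage
    return f_blocks - f_bfree

print(used_blocks(1719877224, 2144061328))  # 0 instead of a negative value
```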
For the invalid f_blocks values, you'll have to look beyond coreutils
for a fix.