coreutils

Re: how to sell network nodes


From: L A Walsh
Subject: Re: how to sell network nodes
Date: Fri, 20 Nov 2020 21:04:57 -0800
User-agent: Thunderbird

On 2020/11/14 02:13, Michael J. Baars wrote:
On Fri, 2020-11-13 at 14:36 -0800, L A Walsh wrote:
On 2020/11/12 00:48, Michael J. Baars wrote:
Hi,

I needed to zero out my hard drive because one of my nodes has become unstable. 
To this purpose I used coreutils dd with the following command line
arguments

dd if=/dev/zero of=/dev/mmc... status=progress

and I noticed how slow this program is at the job. So I tried a couple of
different settings, like

bs=1048576 oflag=direct

but without significant improvement. The results are always the same: around
25 MB/s.
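For what it's worth, one quick way to see whether block size is the limit is to sweep a few sizes against a scratch file first; this is only a sketch and deliberately does not touch the real device (the mmc path above is not used here):

```shell
#!/bin/sh
# Sketch: sweep a few block sizes against a temp file to see whether
# bs is the bottleneck before touching the real device.
out=$(mktemp)
for bs in 512 4096 1048576; do
    printf 'bs=%s: ' "$bs"
    dd if=/dev/zero of="$out" bs="$bs" count=$((16 * 1048576 / bs)) \
       conv=fsync 2>&1 | tail -n 1    # last status line carries the MB/s figure
done
rm -f "$out"
```

Each pass writes the same 16 MiB, so the reported MB/s figures are directly comparable.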

Then I remembered this little benchmark I wrote not so long ago; please have
a look at it, it won't destroy your drive. I included the results obtained
by running the benchmark on the computer I'm currently working on, so you can
compare them with your own.
---
    Your benchmark uses 'random' as input, which I seem to remember has
its own slowness.

    Using '/dev/zero' as input, I immediately got 49MB/s.

I get 83.2 MB/s using /dev/zero.

The actual input to the benchmark is neither /dev/zero nor /dev/random; it's
./ftesti :) Trying to achieve the same with dd:

dd if=./ftesti of=./ftesto bs=1048576 count=64 oflag=direct status=progress
---
Ya, that may be what you want, but the makefile that created the above started with:

dd if=/dev/random of=./ftesti bs=1048576 count=64 oflag=direct status=progress

which gives a speed of 88.0 MB/s.

As you can see from the log, with the given blocksize, my benchmark does this
exact same thing at a rate of 5663.7168 MB/s, and at a maximum rate of
16589.7124 MB/s with blocksize 67108864 (the entire file at once).
----
First problem: >>> As I can see from what log? <<<

I don't see any log.

What type of media are you using?

You say max was 16589MB/s writing out 64MB.  You realize that's
only about 1/260th of a second.  I don't believe you are doing your
benchmark accurately.

Set it up to run for 5 minutes, and look at the high/low/average time.
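A sketch of that kind of repeated run, reporting min/avg/max wall time so a one-shot 1/260th-second result can't pass as a benchmark (this writes to a scratch file, not a device; bump the run count for a real 5-minute test):

```shell
#!/bin/sh
# Sketch: repeat the 64 MiB write and report min/avg/max seconds.
out=$(mktemp)
for i in $(seq 10); do
    start=$(date +%s.%N)
    dd if=/dev/zero of="$out" bs=1048576 count=64 conv=fsync 2>/dev/null
    end=$(date +%s.%N)
    echo "$start $end"
done | awk '{ d = $2 - $1
              if (min == "" || d < min) min = d
              if (d > max) max = d
              sum += d }
       END  { printf "min=%.3fs avg=%.3fs max=%.3fs over %d runs\n",
              min, sum / NR, max, NR }'
rm -f "$out"
```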

Make sure you drop your caches and flush before stopping the timer, e.g.
with a 'dropcaches' script like this:
----
#!/bin/bash
dropcaches () {
 # 3 = free the page cache plus dentries and inodes
 echo -n "3" | sudo dd status=none of=/proc/sys/vm/drop_caches
}
time dropcaches
----
and 'sync;sync' after dropping caches.
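Putting those pieces together, the whole flush-then-measure pattern might look like this (a sketch; ./ftesto is the scratch file name from the thread, and dropcaches needs root):

```shell
#!/bin/sh
# Sketch: sync dirty pages out, drop the page cache, then time the write.
dropcaches () {
    echo -n 3 | sudo dd status=none of=/proc/sys/vm/drop_caches
}
sync; sync
dropcaches
time dd if=/dev/zero of=./ftesto bs=1048576 count=64 conv=fsync
sync
rm -f ./ftesto
```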

Why, I ask you, is dd that much slower? What is it 'actually' doing with all 
the processing power available?
----
   It's not, it's quite efficient.  I use it for testing max transfer
speeds.  But in copying a file, you are testing the file
system as well -- how fast it finds free space and how fast
it allocates the space.
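One way to see that file-system overhead is to run the same transfer twice, once to /dev/null (raw throughput, no allocation) and once to a fresh file (a sketch; the gap between the two reported speeds is roughly the allocation cost described above):

```shell
#!/bin/sh
# Sketch: same 256 MiB transfer, with and without file-system allocation.
dd if=/dev/zero of=/dev/null bs=1048576 count=256 2>&1 | tail -n 1
out=$(mktemp)
dd if=/dev/zero of="$out" bs=1048576 count=256 conv=fsync 2>&1 | tail -n 1
rm -f "$out"
```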

   The kernel supports finding out why your program is delayed.
There is delay accounting in the kernel and a program to display
the delays:

getdelays [-dilv] [-w logfile] [-r bufsize] [-m cpumask] [-t tgid] [-p pid]
 -d: print delayacct stats
 -i: print IO accounting (works only with -p)
 -l: listen forever
 -v: debug on
 -C: container path
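If getdelays isn't installed, it can be built from the kernel source tree (tools/accounting/getdelays.c in recent kernels); a hypothetical run against a dd process, assuming delay accounting is enabled (e.g. booting with delayacct=1), might look like:

```shell
#!/bin/sh
# Sketch: watch delay-accounting stats for a running dd (assumes a
# locally built ./getdelays and delayacct enabled in the kernel).
dd if=/dev/zero of=/dev/null bs=1048576 count=4096 &
pid=$!
sudo ./getdelays -d -i -p "$pid"   # -i needs -p, per the usage above
wait "$pid"
```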

I think you'll find that you need to use larger read/write sizes to
get consistent results.






