Re: using --remove-older-than takes a very long time?

From: EricZolf
Subject: Re: using --remove-older-than takes a very long time?
Date: Sat, 22 Feb 2020 11:37:12 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.4.1

Hi Derek,

sorry for not reacting earlier, I was just overburdened with other things.
Also, apologies for my first answer; it wasn't going in the right direction.

On 21/02/2020 18:21, Derek Atkins wrote:
> Eric (et al),
> Last night's backup session ran.  It started at 1am, finished a backup
> of a server at 3:30 (which is to be expected; it has a lot of data), and
> then proceeded to take the next NINE (9) HOURS to delete a single
> increment.
> I am happy to forward the data, but it's a relatively large email
> (multiple megabytes) so I'm hesitant to send it to the list or to anyone
> else without prior approval.
> Anyone else willing to look at this?

Given that nothing fails and it is "only" slow (and I realize that "only
slow" is also an issue), I hesitate to spend too much time on it for now.
I would prefer to spend the time on overall performance improvements (if
any are possible) once we've simplified the code.

> I'll note that I have not completely ruled out filesystem slowness on
> deletions.  The underlying file system is EncFS/Fuse over NFS over ZFS,
> but raw Filesystem I/O tests show good throughput in general.

I wouldn't expect throughput to be the main issue but rather latency, as
it's more a question of many small files being removed than of a few big
files being transferred. The raw throughput value isn't really telling
either (assuming most of the operations done by rdiff-backup happen in
cache).

With the new version of rdiff-backup, you could try the `--no-fsync`
option, at the slight risk of losing data if something goes wrong at the
wrong time.
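For reference, combining it with the removal might look like this; this
is only a sketch, where `/backup` and the 7-day age are placeholder
values, so check your version's man page for the exact syntax:

```shell
# Sketch: the echo only prints the command line, since actually running
# it would require a real rdiff-backup repository. Placeholders: /backup, 7D.
# --no-fsync skips fsync() calls, trading crash safety for speed.
echo 'rdiff-backup --no-fsync --remove-older-than 7D /backup'
```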

Check those times on a normal HDD:

$ time for i in $(seq 1 100); do touch $i; sync; done

real    0m7.212s
user    0m0.191s
sys     0m0.435s

$ time for i in $(seq 1 100); do rm $i; sync; done

real    0m7.789s
user    0m0.209s
sys     0m0.424s

$ time for i in $(seq 1 100); do touch $i; done

real    0m0.091s
user    0m0.044s
sys     0m0.054s

$ time for i in $(seq 1 100); do rm $i; done

real    0m0.097s
user    0m0.042s
sys     0m0.062s

We're talking about a factor of roughly 80 between synced and non-synced
runs, and it's almost pure latency, which I'd expect to be even worse in
your case, with user space (FUSE) and the network involved.
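The factor can be recomputed from the `real` times above:

```shell
# Ratio of synced to unsynced wall-clock time, per the runs above.
awk 'BEGIN { printf "create: %.0fx  remove: %.0fx\n", 7.212/0.091, 7.789/0.097 }'
```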

Perhaps your setup would perform better, and with less risk, if you used
the remote features of rdiff-backup instead of NFS, i.e. had rdiff-backup
installed and reachable over SSH on the machine where your NFS server
runs (if that is at all possible, and with no promise that it improves
performance).
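A sketch of what that could look like, where `backup@nfsserver` and both
paths are placeholder values (rdiff-backup's `host::path` syntax runs
the remote end over SSH):

```shell
# Sketch: the echo only prints the command, since running it needs a
# reachable server with rdiff-backup installed on both ends.
# All host and path names here are placeholders for your setup.
echo 'rdiff-backup /local/data backup@nfsserver::/backups/data'
```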

Hope this helps, because I don't think the issue is solely in the code
but rather in your setup. We can look at improving performance, but I
don't expect any quick win (besides the slightly dangerous `--no-fsync`
option).

KR, Eric

> -derek
> Derek Atkins <address@hidden> writes:
>> "Eric L." <address@hidden> writes:
>>> I don't think so but it probably depends on how many backups it has to
>>> remove.
>> So that first run it had to remove 7.  Normally it's just removing 1 (as
>> it runs once per day).  However the difference in runtime between
>> removing 7 and removing 1 was definitely not 7x the runtime.  But it
>> definitely spends more time on the remove-old-backups than it does
>> backing up the day's changes.
>> [snip]
>>> A profiling as such is possible at python level but I would start with
>>> just calling the removal option with `-v9`, it should give you/us a
>>> first hint as the date/times for each action are logged.
>> I now have some multi-megabyte emails detailing the last few days of
>> processing (modulo some VM system crashes that happened over the
>> weekend).  There are dozens of operations per second (or more -- I
>> haven't looked closely), but still thousands of operations as it runs
>> for hours.  Frankly I'm not sure what I am looking at/for here.
>> I'm happy to share a log email (privately) if someone wants to look at
>> it with me?
>> -derek
