
Re: [rdiff-backup-users] Memory usage during regressions


From: Maarten Bezemer
Subject: Re: [rdiff-backup-users] Memory usage during regressions
Date: Sat, 6 Aug 2011 19:51:04 +0200 (CEST)


On Sat, 6 Aug 2011, Claus-Justus Heine wrote:

> So roughly 2.5 million files, and the metadata is about 700M.

> Still, this seems to be insane. Reading this month's archives (and given that the last release of rdiff-backup was around 2009), it seems I would have to fix this myself. Or live with it. Or buy a backup server with more memory ...

> Of course, in principle the core size of rdiff-backup is not a problem; on a decent OS, the parts of the core not currently in use would simply be swapped out. There is only a problem if the program constantly traverses the allocated buffers, which is what I suspect. It takes ages (2 or 3 days) for the regression to finally finish.

I don't know the internals of rdiff-backup, but I do know that there is a runtime option to disable compression of the .snapshot and .diff files. If the file operations are done properly, this might help: uncompressed files can be read through memory-mapped file access instead of being decompressed into buffers, effectively using pointers to disk instead of RAM.
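For what it's worth, an invocation would look something like this (assuming the option is still called --no-compression in your version; check rdiff-backup --help, and note the paths here are made up):

    rdiff-backup --no-compression /home/claus backuphost::/backups/claus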

But I don't know whether it would actually work that way. The only thing I do know is that it will at best help you in the future, as it is not going to change anything with respect to the files already in your backup history.
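To illustrate what I mean by "pointers to disk", here is a rough Python sketch (not rdiff-backup's actual code; the file names are made up) of the difference between decompressing a gzipped increment into RAM and mmap-ing an uncompressed one:

    import gzip
    import mmap

    # Compressed case: the whole increment ends up in anonymous memory.
    with gzip.open("increments/foo.2011-08-01.snapshot.gz", "rb") as f:
        data = f.read()  # allocated on the heap, counts against resident size

    # Uncompressed case: the OS pages the file in on demand and can evict
    # clean pages under memory pressure, so resident size stays small.
    with open("increments/foo.2011-08-01.snapshot", "rb") as f:
        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        header = m[:64]  # touches only the first page of the file
        m.close()

Whether rdiff-backup's code paths would actually benefit from this is another question, of course.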


--
Maarten


