[Duplicity-talk] Scalability of duplicity


From: Tom Ekberg
Subject: [Duplicity-talk] Scalability of duplicity
Date: Tue, 3 Oct 2017 11:07:15 -0700 (PDT)
User-agent: Web Alpine 2.01 (LRH 1302 2010-07-20)

I'm using duplicity via duply and have run into a problem with one of our duply
profiles, the one used for postgres database backups. The backups are binary
files (produced by pg_dump --format=custom). The backup source that duply points
to is about 90GB. The duply runs started out fine, then one hung on 7/10/2017.
The only recourse was to remove the files created by duplicity and the lock
file. It hung again on 9/16/2017. By "hung" I mean that the duply and duplicity
processes were still running but top didn't show any activity. On 9/20/2017 I
removed the 54,298 duplicity files; I didn't count the number of files removed
on 7/10/2017.
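
For reference, the dump step is roughly the following (the database name and
path are placeholders, not the exact cron entry):

    # Nightly dump in PostgreSQL's custom (binary) format.
    pg_dump --format=custom --file=/backups/pgdump/mydb.dump mydb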

Duply is set up to do an incremental backup every day (the pg_dump cron job runs
daily, an hour earlier), with a full backup every week. The retention period is
one month.
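
The relevant pieces of the duply profile conf come down to something like this
(profile name and values are what the schedule above implies, not a copy of the
actual file):

    # ~/.duply/pg_backup/conf (excerpt; values illustrative)
    SOURCE='/backups/pgdump'
    TARGET='file:///mnt/backup/pgdump'

    # Start a new full chain once a week; the daily cron just runs
    # "duply pg_backup backup", which is incremental in between fulls.
    MAX_FULLBKP_AGE=1W
    DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE "

    # Keep a month of backups; enforced by "duply pg_backup purge --force".
    MAX_AGE=1M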

Now for my questions:

  Has anyone run duply/duplicity with 90GB of binary files?

  The next time this happens, what data should I collect to try to diagnose this
problem? (A sketch of what I have in mind is below.)
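
On the second question, this is the sort of thing I was thinking of capturing
next time it hangs, in case someone can suggest something better (py-spy is an
extra tool I'd have to install, not something duplicity ships with):

    # Process state and the kernel wait channel of the stuck process:
    ps -o pid,stat,wchan:32,etime,args -C duplicity

    # Whether it is still making any system calls:
    strace -f -p <duplicity PID>

    # Python-level stack trace of the hung duplicity process:
    py-spy dump --pid <duplicity PID>

    # Plus a rerun with more duplicity logging, e.g. adding -v8 to
    # DUPL_PARAMS in the profile conf.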

Tom Ekberg
Senior Computer Specialist, Lab Medicine
University of Washington Medical Center
1959 NE Pacific St, MS 357110
Seattle WA 98195
work: (206) 598-8544
email: address@hidden
