I was running a fairly large (7.3 GB) backup yesterday from a remote ssh session, and the session died sometime overnight. This was an initial, full backup, but it had only completed 5 GB when the ssh session died.
It should, but I wouldn't advise it. Not sure if this is what you mean, but the following is the workaround used by some with slow, unstable upload channels: 1. do a backup to a local file:// target…
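The steps above were truncated after step 1. A minimal sketch of the usual pattern, with hypothetical paths; the rsync follow-up is my assumption of how the workaround continues on a slow, unstable link, not something stated above:

```shell
# Step 1: back up to a local file:// target (paths are examples)
duplicity /home/user file:///var/backups/duplicity-local

# Assumed continuation: push the finished archive volumes to the
# remote host; --partial lets an interrupted transfer resume
rsync -av --partial /var/backups/duplicity-local/ backuphost:/srv/duplicity/
```

This keeps duplicity itself off the flaky network path; only the resumable rsync transfer has to survive the unstable channel.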
I'm not sure, but if the files themselves do not change and only accumulate, duplicity may not be the solution for you. Duplicity compares files using librsync and then transfers only the deltas…
I have an existing backup solution that creates a local pool of blocks/files named by their sha256 hashes, which grows with each backup, and I would like to transfer only the delta to a cloud backup.
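A content-addressed pool like the one described can be sketched in a few lines of shell; the paths and layout here are illustrative, not the poster's actual tool:

```shell
# Illustrative content-addressed pool: a block is stored under its
# SHA-256 hash, and an already-pooled block is never copied twice.
pool=$(mktemp -d)
printf 'hello' > "$pool/input"
h=$(sha256sum "$pool/input" | cut -d' ' -f1)
[ -e "$pool/$h" ] || cp "$pool/input" "$pool/$h"
echo "$h"   # → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

Since existing blocks are never rewritten, syncing only the new pool entries to the cloud would transfer exactly the delta.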
Easy. Assuming you encrypt your backup using a machine public key *and* additionally your own public key, you will only need the machine's secret key to decrypt locally if the need arises (synchronize arch…
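A sketch of that dual-recipient setup; the key IDs and target URL are placeholders. duplicity's --encrypt-key option can be given more than once, so each volume is readable with either secret key:

```shell
# Encrypt every volume to both the machine key and your personal key;
# either corresponding secret key can decrypt (IDs are placeholders)
duplicity --encrypt-key MACHINEKEY --encrypt-key PERSONALKEY \
    /home/user sftp://backuphost//srv/duplicity/
```

The machine then only ever holds its own secret key, while your personal key stays offline as a fallback.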
OK, let me try again: 1) The TemporaryFile() problems on Windows and Cygwin have been with us for a long while, and they have not been fixed in Python yet. The first patch that Howie suggested added…
Wow, a misunderstanding on both points, that's a new record. :)) Let me try again wrt. 1: I meant the "too many open files" bug solved by Howie. I didn't see it in trunk so far. OK, found it now: http://
I looked it up; it was bug #1416344, and it was fixed for the 07.02 release on 2015-01-30, a long time ago. That was going from 0.96 to 1.0. We're on 2.01 now and all is well. ...Ken On Wed, May 24, 2017…
ede, 1) librsync is good up through version 2.x. I'm not sure exactly which version of duplicity it was fixed in, but the logs should have the bug number and we can track it from there. It's been a long…
Ken, how is the state wrt. the librsync patch? I didn't see it in the repo so far. *And*, below is once again somebody complaining that suddenly a private key / passphrase is needed. Would you agree that w…
This is a known issue. I think the way to get past it is to change the backup command to include just the last 8 characters of the fingerprint of the key you used; it should recognize that and work.
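A sketch of that fix; the 8-character key ID and the target URL are placeholders, and --sign-key is used here on the assumption that the mismatch is about the signing key:

```shell
# Pass the short key ID (last 8 hex digits of the key's fingerprint)
# explicitly so duplicity matches it against the existing chain
duplicity --sign-key 12456213 /home/user sftp://backuphost//srv/duplicity/
```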
Hi, yesterday my backup failed and today I wanted to resume it, but I get the error message that it was signed with the key 12456213 instead of 9990006665666999888899912456213 (I changed the numbers).
Hello, duplicity 0.7.11 (Ubuntu 16.04). I made a new backup with --no-encryption (and --no-compression). On verify without --use-agent, a "GnuPG passphrase" prompt appears! Hitting enter yields: Cann…
Well, I did the clean, but there was nothing there, and there were no *.part files, so I ran the incremental again and it worked. I hope there are no orphaned files in the backend. Thanks for all your help.
OK, your log of the "actual failure job" did not contain any error. But if it really was still lingering in memory, then there should have been a lockfile prohibiting your attempt to run the backup again…
My backup failed with an error message similar to this: https://www.hagen-bauer.de/2015/11/jessie-update-dupliciy-sftp.html Can I have duplicity resume the transfer of the files? If so, how? Also, I…
Hi, I'm considering solutions for backing up my web server to S3. This tool looks promising, but I have some questions. According to the website, duplicity only backs up files which have changed after the last backup…
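For reference, the usual full-then-incremental pattern against S3 looks roughly like this; the bucket name and local path are placeholders, and the s3+http:// URL form assumes the boto backend of that era:

```shell
# Initial full backup to S3 (bucket and path are placeholders)
duplicity full /var/www s3+http://my-bucket/www

# Later runs upload only the changed data as incrementals
duplicity incremental /var/www s3+http://my-bucket/www
```

Without an explicit action, duplicity defaults to an incremental when a matching full backup already exists at the target.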
Hey Aphyr, please state your duply, duplicity, boto, and Python versions. --num-retries should work over all backends. Run duply with '--preview' and check that --num-retries is propagated to the command line…
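A sketch of that check, assuming a duply profile named 'myprofile' (the profile name is a placeholder):

```shell
# --preview prints the duplicity command duply would run without
# executing it; grep confirms --num-retries reached the command line
duply myprofile backup --preview | grep -- --num-retries
```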