hey Mark, just for clarification: which backup was interrupted and resumed? The full one, right? Yeah, resuming is a tricky beast; let's see what Ken has to say about it. In the meantime, if you feel capable, a patch
Hi List, I'm running duplicity (0.7.10) to back up several servers, and recently I hit the signature-file >2GB bug when the incrementals became too old and it tried to make a new full backup, so I had
Hi Tyler, actually I was testing and read the man pages a little. I changed the volume size on the command line and increased the verbosity: duplicity --volsize 50 --verbosity 8 It works pretty n
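The command quoted above can be sketched as a complete invocation. This is only an illustration: the source and target paths here are temporary placeholders I made up, `--no-encryption` is added so the demo does not prompt for a GPG passphrase, and the command only runs if duplicity happens to be installed.

```shell
# Hedged sketch of the poster's command, with hypothetical paths.
# --volsize 50 splits the backup into ~50 MB volumes;
# --verbosity 8 prints detailed per-volume progress.
SRC=$(mktemp -d)                 # stand-in for the real source directory
DEST=$(mktemp -d)                # stand-in for the real backend location
echo "sample data" > "$SRC/file.txt"

if command -v duplicity >/dev/null 2>&1; then
  # --no-encryption avoids the interactive GPG passphrase prompt in this demo
  duplicity --no-encryption --volsize 50 --verbosity 8 "$SRC" "file://$DEST" \
    || echo "duplicity run failed"
  status="ran"
else
  status="illustrated"           # duplicity not installed; command shown for reference
fi
```

Any duplicity backend URL (sftp://, s3+http://, ...) can replace the `file://` target; the volume-size and verbosity flags behave the same either way.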
nope, there is no resume for restores currently, sorry. 1. did you try to upgrade your boto already? https://github.com/boto/boto/issues/2409 2. you can raise the number of retries via --num-retries, ch
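A restore with a raised retry count might look like the following sketch. The bucket URL, credentials, and restore target are all placeholders of my own; the command is guarded so it only executes where duplicity is present, and a failure against the placeholder bucket is tolerated.

```shell
# Hedged sketch: retry flaky downloads more times during a restore.
# The S3 URL, credentials, and target path below are hypothetical.
export AWS_ACCESS_KEY_ID="placeholder-key-id"
export AWS_SECRET_ACCESS_KEY="placeholder-secret"

if command -v duplicity >/dev/null 2>&1; then
  duplicity restore --num-retries 10 \
    "s3+http://example-bucket/duplicity-backup" /tmp/restore-target \
    || echo "restore failed (expected: placeholder bucket)"
  status="attempted"
else
  status="illustrated"           # duplicity not installed; shown for reference
fi
```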
Could work either way. If we kept the checkpoint files on the local machine and resumed from that machine, overhead would be reduced. A possibly better way to do it would be, after N volumes, making
Hi, I think what you are describing is expected behavior with rsync by default. It waits to write to the destination file until it has assembled all of that file's content in a temporary file on
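The temp-file behavior described above can be seen in a purely local demo. This sketch uses throwaway directories and assumes rsync is installed (it exits early otherwise); `--partial` and `--inplace` are the standard flags that change the default delivery behavior.

```shell
# Local demo of rsync's delivery behavior (assumes rsync is installed).
command -v rsync >/dev/null 2>&1 || { echo "rsync not installed; skipping demo"; exit 0; }

src=$(mktemp -d); dst=$(mktemp -d)
head -c 1048576 /dev/zero > "$src/big.bin"   # 1 MiB sample file

# Default: rsync assembles the transfer in a hidden temporary file in the
# destination directory, then renames it into place once it is complete.
rsync "$src/big.bin" "$dst/"

# --partial keeps an interrupted temporary file so a rerun can pick it up;
# --inplace (not shown) writes directly into the destination file instead.
rsync --partial "$src/big.bin" "$dst/"

cmp -s "$src/big.bin" "$dst/big.bin" && echo "copies match"
```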
I am trying to restore from an old backup from 2011 that I've had in S3, and the restore keeps getting interrupted with errors like this: Download s3+http://deja-dup-auto-akiaifjuquylnba7gurq/duplici
resuming has been implemented in duplicity since 0.6. I only know of some problems with resuming on the ssh/sftp backend.
I notice the response to this question back in May 2007 was no. I am using duplicity for the first time; running from home, I notice it takes a long time to upload to S3. I have 32G total, it ran for
I'm not sure if that setting affects the tools. With the tools, I think I used the -z option on ncftpput to get the same effect. The man page does not mention using the settings. Just don't know for su
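The -z option mentioned above asks ncftpput to try resuming a partially transferred file. The sketch below is illustrative only: the host, credentials, and paths are placeholders I invented (the `.invalid` TLD can never resolve, so a real installation fails fast), and the command is guarded so it runs only where ncftpput exists.

```shell
# Hedged sketch: ncftpput -z tries to resume a partial upload.
# Host, credentials, and paths below are placeholders.
if command -v ncftpput >/dev/null 2>&1; then
  ncftpput -z -t 5 -u backupuser -p secret \
    ftp.invalid /remote/backups local-archive.tar.gz \
    || echo "upload failed (expected: placeholder host)"
  status="attempted"
else
  status="illustrated"           # ncftpput not installed; shown for reference
fi
```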
does this setting also affect ncftpput? I understand you are using the tools directly again? Why is this setting not set by default by duplicity? Agreed. I saw the version 0.5.19 pending release whil
Have him do the following manually, as the user that runs ftplicity: $ ncftp Then it will automatically resume on uploads. Looks like I may need to put out a 0.5.19 soon... ...Thanks, ...Ken address@
I actually thought more of an amount of data, say, every 50MB or so ... ede --
That would not be true resumability, but checkpointing (I know, a nit). You are correct. That could be done every hour or so and would make restart easier. ...Thanks, ...Ken