From: Kenneth Loafman
Subject: Re: [Duplicity-talk] Move destination files to cold archive?
Date: Sat, 5 Oct 2019 10:32:15 -0500
I have an existing backup solution that creates a local pool of blocks/files named by their sha256 hashes that grows with each backup and I would like to transfer only the delta to a cloud backup. Duplicity seems to be perfect to do exactly that.
Unfortunately the upload is very slow and an initial backup will take weeks. Will duplicity still work flawlessly when interrupted multiple times per backup, and will it be able to resume the same backup at exactly the right place?
My preferred solution would be to create a local duplicity backup (this will be fast and without interruption) and use an independent simple script to transfer all resulting files to a cold (cloud) archive, then delete the copied files (since I already have the local backup). I will keep the local archive directory, but will duplicity work correctly if I delete files at its "destination"? With the archive it should not be necessary to read remote files, but will the missing/invisible files cause any problems?
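The transfer-then-delete step described above could be sketched roughly as follows. This is only an illustration, not anything duplicity itself provides: the paths are hypothetical, and the `cp` stands in for whatever cloud upload command the cold archive actually uses (e.g. `rclone copy` or `aws s3 cp`). Each local volume file is removed only after its copy succeeds.

```shell
# Hedged sketch: move duplicity volume files from the local backup
# destination to a cold archive, deleting each only after a
# successful copy. All paths here are illustrative demo paths.
set -eu

BACKUP_DEST="${BACKUP_DEST:-/tmp/dup-dest-demo}"
COLD_ARCHIVE="${COLD_ARCHIVE:-/tmp/dup-cold-demo}"

mkdir -p "$BACKUP_DEST" "$COLD_ARCHIVE"

# Demo files standing in for duplicity's output volumes.
touch "$BACKUP_DEST/duplicity-full.20191005T103215Z.vol1.difftar.gpg"
touch "$BACKUP_DEST/duplicity-full.20191005T103215Z.manifest.gpg"

for f in "$BACKUP_DEST"/duplicity-*; do
    # Replace 'cp' with the real upload command for the cold archive
    # (e.g. rclone copy "$f" remote:bucket/).
    cp "$f" "$COLD_ARCHIVE/" && rm "$f"
done
```

Because `&& rm` only runs when the copy exits successfully, an interrupted run leaves the not-yet-transferred files in place and can simply be re-run.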
Kind regards,
Frank
_______________________________________________
Duplicity-talk mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/duplicity-talk