From: Cláudio Gil
Subject: Re: [Duplicity-talk] Verification of filelists, multiprocessing
Date: Wed, 27 Jan 2016 21:27:21 +0000
Hi,
About parallelism: from what I could see, duplicity processes your files as a stream, one long sequence of files. Encryption is also sequential by design (that's why GnuPG, which duplicity uses, does not use threads).
Maybe parallelization could be introduced at a few select points in duplicity. But the part that is mostly CPU-bound is the encryption, and encrypting a file will use one core.
Since the entire volume is encrypted, and not the individual files in it, you could test the difference between backing up 250MB (10 volumes, by default) with encryption and backing up the same files without encryption and then encrypting all the volumes with multiple GnuPG processes (using "parallel" or something).
It's an interesting comparison. It's not obvious whether it would be faster, or how changing the volume size affects the result.
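A rough sketch of that experiment, in case it helps (paths and the /tmp/plainvols target are just illustrative, and I haven't timed this myself):

```shell
# Step 1: back up without encryption, so duplicity itself stays CPU-light
duplicity --no-encryption /path/to/dir file:///tmp/plainvols

# Step 2: encrypt every resulting volume with one gpg process per core
ls /tmp/plainvols/duplicity-full.*.difftar.gz \
  | parallel gpg --encrypt --recipient 123ABC12 {}
```

Note that the resulting .gpg files are not something duplicity can restore from directly; this only measures encryption throughput against the single-process case.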
Cheers,
Cláudio
Hello
If I run
$ duplicity --encrypt-sign-key 123ABC12 /path/to/dir file://backup1
and then
$ duplicity verify file://backup1 /path/to/dir
everything works fine.
However, my setup is as follows: I run
$ duplicity --encrypt-sign-key 123ABC12 --include-globbing-filelist list.txt / file://backup2
with list.txt consisting of directories as well as files:
/path/to/dir
+ /usr/another/dir
+ /etc/foo/file
+ /etc/bar/another/file
- **
Sorry if I don't grok the manpage, but how can I verify file://backup2 using
the filelist list.txt?
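(My best guess from the manpage, completely untested, is that the selection options also apply to verify, so reusing the same filelist against the root directory might look like this:)

```shell
duplicity verify --include-globbing-filelist list.txt file://backup2 /
```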
Moreover: judging from the manpage, I assume it is not possible to use multiple
CPU cores to speed up encryption/decryption?
Thanks,
Andreas
_______________________________________________
Duplicity-talk mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/duplicity-talk