
Re: [Duplicity-talk] mixed storage classes on S3

From: edgar . soldin
Subject: Re: [Duplicity-talk] mixed storage classes on S3
Date: Mon, 8 Aug 2022 10:51:14 +0200
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.8.1

On 08.08.2022 10:43, hamish-duplicity--- via Duplicity-talk wrote:
On 8/8/22 18:24, edgar.soldin--- via Duplicity-talk wrote:
On 08.08.2022 07:30, hamish-duplicity--- via Duplicity-talk wrote:
I am backing up to S3, and I set --s3-use-ia to set the storage class. I am 
also using an S3 life cycle rule to transition the files to Glacier (flexible 
retrieval) after 120 days.

I think it would be useful to keep the metadata (manifest and signatures) in a 
different class than the data, so that I can purge my local cache of the 
metadata but get it back more readily if needed.

according to the source
you can set multiple `--s3-use-` options.

manifests currently will never use glacier, glacier_ir or deep_archive, but fall back to the other given class instead. e.g.
 --s3-use-ia --s3-use-glacier
would upload all files as class glacier, except the manifests, which will be
saved as standard_ia.
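The combination described above can be sketched as a single duplicity invocation; the bucket name, source path and URL scheme here are made-up placeholders (the exact S3 URL form depends on your duplicity version and backend):

```shell
# Sketch only -- bucket, path and URL scheme are placeholders.
# Data volumes go up as GLACIER; manifests, which duplicity will not
# place in glacier/glacier_ir/deep_archive, fall back to the other
# given class, here STANDARD_IA.
duplicity \
    --s3-use-glacier \
    --s3-use-ia \
    /home/me \
    boto3+s3://my-backup-bucket/myhost
```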

Thanks, that's good to know. I would guess that the manifest files are excluded
because they are so small, and S3 Glacier has a minimum effective object size
of 128 KB.

i'd assume it is because they are needed for incrementals and get downloaded if
not present locally (e.g. after a deleted archive-dir cache). but not 100% sure

Unfortunately S3 lifecycle rules don't let you match by filename pattern,
so the only way to do this is to add tags to the files (either at upload time,
or later, possibly with a Lambda), or to set the storage class differently
on the files when they are uploaded. Although then the data would go straight
to Glacier rather than waiting 120 days.
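The tag-based route mentioned above could look roughly like this with the aws CLI; the bucket name, object key, tag key/value and transition timing are all illustrative assumptions:

```shell
# Sketch only -- bucket, key, tag and timing are made up.
# 1) Tag an already-uploaded data volume (a Lambda triggered on
#    s3:ObjectCreated could do the same automatically):
aws s3api put-object-tagging \
    --bucket my-backup-bucket \
    --key myhost/duplicity-full.20220808T000000Z.vol1.difftar.gpg \
    --tagging 'TagSet=[{Key=tier,Value=archive}]'

# 2) Lifecycle rule that transitions only objects carrying that tag,
#    leaving untagged metadata files in their original class:
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "archive-data-volumes",
        "Status": "Enabled",
        "Filter": {"Tag": {"Key": "tier", "Value": "archive"}},
        "Transitions": [{"Days": 120, "StorageClass": "GLACIER"}]
      }]
    }'
```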

Would anyone have any suggestions on this?

afaiu others use the `--file-prefix-*` options for this, as S3 lifecycle rules can match on key prefixes.
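A sketch of the prefix approach, with made-up bucket and host names: duplicity can prepend distinct prefixes per file type, so a prefix-filtered lifecycle rule would catch data volumes only (whether a `/` inside the prefix works depends on the backend; on S3, keys are flat strings, so it should):

```shell
# Sketch only -- bucket, host and prefix names are assumptions.
duplicity \
    --file-prefix-archive 'data/' \
    --file-prefix-manifest 'meta/' \
    --file-prefix-signature 'meta/' \
    /home/me boto3+s3://my-backup-bucket/myhost

# Lifecycle rule matching only this host's data volumes by key prefix:
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "archive-myhost-data",
        "Status": "Enabled",
        "Filter": {"Prefix": "myhost/data/"},
        "Transitions": [{"Days": 120, "StorageClass": "GLACIER"}]
      }]
    }'
```

One rule per host would be needed in a shared bucket, which is more setup but keeps a single bucket workable.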

Alas, I'm storing my backups in subdirectories of the bucket (as I back up 
multiple hosts into the same bucket), so the S3 rules still won't match.

well, can't you create as many buckets as you like? why only one?

