archive:
find "2013" -type f -regextype posix-egrep -regex ".*\.(NEF|nef)$" -print0 | tar -cvf - --null -T - | openssl aes-256-cbc -a -salt -pass pass:password | aws s3 cp - s3://yours3backupbucket/2013.archive --storage-class DEEP_ARCHIVE --expected-size 1000000000000
restore:
you first need to initiate a restore of your archive and wait about 48 hours for the object to become available again.
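the restore request can be issued with aws s3api restore-object; a minimal sketch, assuming the bucket and key from the archive step, the Bulk retrieval tier (the cheapest one for DEEP_ARCHIVE, up to about 48 hours) and a 7-day lifetime for the restored copy:
# Days = how long the temporary restored copy stays available, Tier = Bulk or Standard
aws s3api restore-object --bucket yours3backupbucket --key 2013.archive --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'
once the restored copy is available, download and decrypt the archive: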
aws s3 cp s3://yours3backupbucket/2013.archive - | openssl enc -aes-256-cbc -a -d | tar xvf -
to only list the archive contents (without extracting):
aws s3 cp s3://yours3backupbucket/2013.archive - | openssl enc -aes-256-cbc -a -d | tar tvf -
Tips:
- use xz in the pipe after the tar step if you want compression (and unxz in the restore pipe before tar); see the sketch after these tips
- use the --expected-size parameter (in bytes) of the aws s3 cp command if you need to put a larger archive (bigger than 5GB) into Glacier; Glacier supports archives up to 40TB (a size estimate sketch also follows below)
- you can change the S3 storage class, but if you want to keep costs to a minimum, use the DEEP_ARCHIVE option
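a sketch of the compressed variant of the pipelines above, simply adding xz before the encryption step and unxz after the decryption step (same bucket and key names as before, xz's default compression level):
find "2013" -type f -regextype posix-egrep -regex ".*\.(NEF|nef)$" -print0 | tar -cvf - --null -T - | xz | openssl aes-256-cbc -a -salt -pass pass:password | aws s3 cp - s3://yours3backupbucket/2013.archive --storage-class DEEP_ARCHIVE --expected-size 1000000000000
aws s3 cp s3://yours3backupbucket/2013.archive - | openssl enc -aes-256-cbc -a -d | unxz | tar xvf -
for the --expected-size value, one way to estimate it is to sum the sizes of the selected files with GNU du and round up generously (the tar headers and the base64 output of openssl -a add roughly a third on top of the raw total):
# prints the total size in bytes of the files matched by find
find "2013" -type f -regextype posix-egrep -regex ".*\.(NEF|nef)$" -print0 | du -cb --files0-from=- | tail -n 1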