
April 11, 2019

How to stream encrypted data into Amazon S3 Glacier Deep Archive

This will be a very short tutorial. Imagine you want to archive your old raw files (.nef) from a directory named 2013 into Amazon S3 Glacier Deep Archive using the official AWS command-line client on Linux, with OpenSSL encryption on top. You need a working aws cli. I used an expected archive size of 1 TB. Change the bucket name, password, and paths to match your environment. All of this was tested on fully updated Debian 9 and Fedora 29.


archive:

find "2013" -type f -regextype posix-egrep -regex ".*\.(NEF|nef)$" -print0|tar -cvf - --null -T - |openssl aes-256-cbc -a -salt -pass pass:password | aws s3 cp - s3://yours3backupbucket/2013.archive --storage-class DEEP_ARCHIVE --expected-size 1000000000000
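Before committing to a multi-hour upload, it may be worth checking that the OpenSSL parameters round-trip locally. A minimal sketch (the /tmp file names and the password are placeholders for this test only):

```shell
# Sanity-check the encrypt/decrypt parameters locally before uploading.
# Encrypt a small sample exactly as the archive pipeline does...
printf 'sample data' > /tmp/plain.txt
openssl aes-256-cbc -a -salt -pass pass:password < /tmp/plain.txt > /tmp/enc.txt
# ...then decrypt it exactly as the restore pipeline does.
openssl enc -aes-256-cbc -a -d -pass pass:password < /tmp/enc.txt > /tmp/dec.txt
# cmp exits 0 (and we print a confirmation) only if the bytes match.
cmp /tmp/plain.txt /tmp/dec.txt && echo "round-trip OK"
```

Note that newer OpenSSL releases print a deprecation warning about the default key derivation; adding -pbkdf2 to both the encrypt and decrypt commands silences it, but both sides must use the same flags or decryption will fail.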

restore:

you first need to initiate a restore of your archive and wait up to about 48 hours; then issue this command:
aws s3 cp s3://yours3backupbucket/2013.archive - | openssl enc -aes-256-cbc -a -d|tar xvf -
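Initiating the restore itself is done with the s3api subcommand. A sketch using the bucket and key from the examples above (the 7-day availability window and the Bulk tier, which completes within roughly 48 hours, are my assumptions; the Standard tier is faster but costs more):

```shell
# Ask S3 to restore the Deep Archive object. The restored copy stays
# downloadable for Days=7; Tier=Bulk is the cheapest/slowest option.
aws s3api restore-object \
  --bucket yours3backupbucket \
  --key 2013.archive \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'

# Poll the restore status: when the Restore field in the output shows
# ongoing-request="false", the object is ready to download.
aws s3api head-object --bucket yours3backupbucket --key 2013.archive
```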

to only list the archive contents after restore:

aws s3 cp s3://yours3backupbucket/2013.archive - | openssl enc -aes-256-cbc -a -d|tar tvf -





Tips:
- use xz in the pipeline after the tar step if you want compression (and unxz before the tar step when restoring)
- use the --expected-size parameter (in bytes) of aws s3 cp when streaming an archive larger than the default multipart settings can handle (roughly 80 GB with the default 8 MB chunk size and the 10,000-part limit). Note that S3 caps a single object at 5 TB, even though Glacier vault archives can go up to 40 TB
- you can choose a different S3 storage class, but to keep costs to a minimum, stick with DEEP_ARCHIVE
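The compressed variant from the first tip can be exercised locally end to end before pointing it at S3. A sketch with placeholder paths, sample data, and password (requires xz-utils):

```shell
# Local sketch of the compressed pipeline: tar -> xz -> openssl going in,
# openssl -> unxz -> tar coming out. All names here are placeholders.
mkdir -p /tmp/demo/2013 /tmp/demo/restored
printf 'raw bytes' > /tmp/demo/2013/img.nef

# Archive with compression (writes to a local file instead of aws s3 cp).
( cd /tmp/demo && \
  find "2013" -type f -print0 | tar -cf - --null -T - | xz \
    | openssl aes-256-cbc -a -salt -pass pass:password > archive.enc )

# Restore with decompression into a separate directory.
( cd /tmp/demo/restored && \
  openssl enc -aes-256-cbc -a -d -pass pass:password < ../archive.enc \
    | unxz | tar xf - )

# Confirm the extracted file is byte-identical to the original.
cmp /tmp/demo/2013/img.nef /tmp/demo/restored/2013/img.nef && echo "compressed round-trip OK"
```

For the real upload, replace `> archive.enc` with the `aws s3 cp - s3://... --storage-class DEEP_ARCHIVE --expected-size ...` step from the archive command above.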
