duplicity-team mailing list archive, Message #03116
[Merge] lp:~mnjul/duplicity/s3-infreq-access into lp:duplicity
Min-Zhong "John" Lu has proposed merging lp:~mnjul/duplicity/s3-infreq-access into lp:duplicity.
Requested reviews:
duplicity-team (duplicity-team)
For more details, see:
https://code.launchpad.net/~mnjul/duplicity/s3-infreq-access/+merge/274037
This adds support for AWS S3's newly announced Standard - Infrequent Access storage class and is intended to implement the blueprint: https://blueprints.launchpad.net/duplicity/+spec/aws-s3-std-ia-class .
A new command-line option, --s3-use-ia, is added, and the boto backend automatically selects the correct storage class depending on whether --s3-use-rrs or --s3-use-ia is set. The command-line parser raises an error if --s3-use-ia and --s3-use-rrs are used together, as the two options conflict.
The manpage has been updated with a short explanation of the new option. Its wording derives from Amazon's official announcement: https://aws.amazon.com/about-aws/whats-new/2015/09/announcing-new-amazon-s3-storage-class-and-lower-glacier-prices/
--
Your team duplicity-team is requested to review the proposed merge of lp:~mnjul/duplicity/s3-infreq-access into lp:duplicity.
=== modified file 'bin/duplicity.1'
--- bin/duplicity.1 2015-09-10 11:49:10 +0000
+++ bin/duplicity.1 2015-10-10 00:28:16 +0000
@@ -726,6 +726,14 @@
Storage on S3.
.TP
+.BI "--s3-use-ia"
+Store volumes using Standard - Infrequent Access when uploading to Amazon S3.
+This storage class has a lower storage cost but a higher per-request cost, and
+the storage cost is calculated against a 30-day storage minimum. According to
+Amazon, this storage is ideal for long-term file storage, backups, and disaster
+recovery.
+
+.TP
.BI "--s3-use-multiprocessing"
Allow multipart volume uploads to S3 through multiprocessing. This option
requires Python 2.6 and can be used to make uploads to S3 more efficient.
=== modified file 'duplicity/backends/_boto_single.py'
--- duplicity/backends/_boto_single.py 2015-08-04 13:19:29 +0000
+++ duplicity/backends/_boto_single.py 2015-10-10 00:28:16 +0000
@@ -221,6 +221,8 @@
if globals.s3_use_rrs:
storage_class = 'REDUCED_REDUNDANCY'
+ elif globals.s3_use_ia:
+ storage_class = 'STANDARD_IA'
else:
storage_class = 'STANDARD'
log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
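The selection logic in the hunk above can be sketched as a standalone function; `select_storage_class` is an illustrative name, not part of duplicity's actual code:

```python
def select_storage_class(use_rrs, use_ia):
    """Map the two mutually exclusive S3 flags to a boto storage class string."""
    if use_rrs and use_ia:
        # duplicity rejects this combination during command-line parsing
        raise ValueError("--s3-use-rrs and --s3-use-ia cannot be used together")
    if use_rrs:
        return 'REDUCED_REDUNDANCY'
    if use_ia:
        return 'STANDARD_IA'
    return 'STANDARD'
```

Since --s3-use-rrs takes precedence in the elif chain only after the conflict check has already ruled out both flags being set, the ordering of the branches is not observable in practice.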
=== modified file 'duplicity/commandline.py'
--- duplicity/commandline.py 2015-06-21 14:53:05 +0000
+++ duplicity/commandline.py 2015-10-10 00:28:16 +0000
@@ -523,6 +523,9 @@
# Whether to use S3 Reduced Redundancy Storage
parser.add_option("--s3-use-rrs", action="store_true")
+ # Whether to use S3 Infrequent Access Storage
+ parser.add_option("--s3-use-ia", action="store_true")
+
# Whether to use "new-style" subdomain addressing for S3 buckets. Such
# use is not backwards-compatible with upper-case buckets, or buckets
# that are otherwise not expressable in a valid hostname.
@@ -1057,6 +1060,8 @@
if globals.restore_dir:
command_line_error("restore option incompatible with %s backup"
% (action,))
+ if globals.s3_use_rrs and globals.s3_use_ia:
+ command_line_error("--s3-use-rrs and --s3-use-ia cannot be used together")
def ProcessCommandLine(cmdline_list):
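The flag registration and conflict check can be exercised in isolation with optparse. This is a minimal sketch: the parser object and `parse` helper are illustrative, and duplicity's real `command_line_error` aborts with a usage message, whereas here we simply raise:

```python
from optparse import OptionParser

# Reproduce the two boolean flags added in the hunk above.
parser = OptionParser()
parser.add_option("--s3-use-rrs", action="store_true", default=False)
parser.add_option("--s3-use-ia", action="store_true", default=False)

def parse(argv):
    """Parse argv and enforce that the two storage-class flags are exclusive."""
    opts, _args = parser.parse_args(argv)
    if opts.s3_use_rrs and opts.s3_use_ia:
        raise SystemExit("--s3-use-rrs and --s3-use-ia cannot be used together")
    return opts
```

With this sketch, `parse(["--s3-use-ia"])` succeeds, while passing both flags exits with the error message, mirroring the behavior added to ProcessCommandLine's validation path.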
=== modified file 'duplicity/globals.py'
--- duplicity/globals.py 2015-06-21 14:53:05 +0000
+++ duplicity/globals.py 2015-10-10 00:28:16 +0000
@@ -191,6 +191,9 @@
# Whether to use S3 Reduced Redundancy Storage
s3_use_rrs = False
+# Whether to use S3 Infrequent Access Storage
+s3_use_ia = False
+
# True if we should use boto multiprocessing version
s3_use_multiprocessing = False