Project Name | Description | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---|---|---
Evaporatejs | JavaScript library for browser-to-S3 multipart resumable uploads | 1,762 | 26 | 12 | | 2 months ago | 51 | October 08, 2017 | 91 | | JavaScript
Lambdaws | Deploy, run and get results from Amazon AWS Lambda in a breeze | 1,277 | 9 | | | 6 years ago | 15 | March 26, 2015 | 18 | lgpl-3.0 | JavaScript
Django S3direct | Directly upload files to S3-compatible services with Django | 602 | 76 | 1 | | 9 months ago | 78 | June 17, 2022 | 33 | mit | Python
Meteor Slingshot | Upload files directly to AWS S3, Google Cloud Storage and others in Meteor | 599 | | | | 4 years ago | 3 | June 30, 2016 | 98 | mit | JavaScript
Gulp Awspublish | Gulp plugin to publish files to Amazon S3 | 398 | 1,530 | 199 | | 3 months ago | 57 | March 04, 2022 | 21 | mit | JavaScript
React Native Aws3 | Pure JavaScript React Native library for uploading to AWS S3 | 363 | 80 | 2 | | 4 years ago | 7 | April 29, 2019 | 52 | mit | JavaScript
S3 Upload Stream | A Node.js module for streaming data to Amazon S3 via the multipart upload API | 320 | 303 | 86 | | 3 years ago | 19 | December 04, 2014 | 22 | mit | JavaScript
S3 Parallel Put | Parallel uploads to Amazon AWS S3 | 277 | | | | 3 years ago | 8 | April 11, 2019 | 12 | mit | Python
Lambda Uploader | Helps package and upload Python lambda functions to AWS | 261 | 9 | 3 | | a year ago | 19 | May 14, 2018 | 31 | apache-2.0 | Python
Node S3 Uploader | Flexible and efficient resize, rename, and upload of images to Amazon S3. Uses the official AWS Node SDK for transfer and ImageMagick for image processing; supports multiple image version targets. | 232 | 78 | 6 | | 5 years ago | 45 | November 24, 2016 | 31 | mit | JavaScript
Script for uploading large files to AWS Glacier
Motivation
The one-liner upload-archive command isn't recommended for files over 100 MB; larger archives should use the multipart upload API instead. The difficult part of multipart upload is that it is really three major commands (initiate-multipart-upload, upload-multipart-part, and complete-multipart-upload), with the second repeated for every chunk of the file, and a custom byte range must be supplied for each chunk being uploaded. For example, with 4 MB (4,194,304-byte) chunks, the first three parts need the ranges bytes 0-4194303/*, bytes 4194304-8388607/*, and bytes 8388608-12582911/*. For my 8 GB file, the upload command is repeated 1,945 times.
We need a script to handle the math and autogenerate the code.
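The byte-range arithmetic that has to be automated can be sketched in a few lines of shell; chunk_range here is a hypothetical helper for illustration, not a function from glacierupload.sh:

```shell
# Print the Glacier byte-range argument for chunk INDEX, given the
# chunk size and total file size in bytes (hypothetical helper).
chunk_range() {
  local index=$1 chunksize=$2 filesize=$3
  local start=$(( index * chunksize ))
  local end=$(( start + chunksize - 1 ))
  # The last chunk is usually shorter than the full chunk size.
  if [ "$end" -ge "$filesize" ]; then end=$(( filesize - 1 )); fi
  printf 'bytes %d-%d/*\n' "$start" "$end"
}

chunk_range 0 4194304 8589934592   # → bytes 0-4194303/*
chunk_range 1 4194304 8589934592   # → bytes 4194304-8388607/*
```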
This script leverages GNU parallel, so my 1,945 upload commands are all queued at once, but each new command starts only when a core finishes the previous one. There is even a built-in progress bar that shows the percentage complete and an estimated time remaining.
Prerequisites
Everything in the Prerequisites section only needs to be done once to set things up.
This script depends on jq for parsing JSON and on GNU parallel for submitting the upload commands in parallel. If you are using Fedora/CentOS/RHEL, run the following:
sudo dnf install jq
sudo dnf install parallel
It assumes you have an AWS account and have signed up for the Glacier service. In this example, I have already created a vault named media1 via the AWS console.
It also assumes that you have the AWS Command Line Interface installed on your machine. Again, if you are using Fedora/CentOS/RHEL, here is how you would get it:
sudo pip install awscli
Configure your machine to pass credentials automatically. This lets you pass a single dash (-) as the account-id argument.
aws configure
Before jumping into the script, verify that your connection works by describing the vault you created, which is media1 in my case. Run this describe-vault command and you should see similar JSON results:
aws glacier describe-vault --vault-name media1 --account-id -
{
"SizeInBytes": 11360932143,
"VaultARN": "arn:aws:glacier:us-east-1:<redacted>:vaults/media1",
"LastInventoryDate": "2015-12-16T01:23:18.678Z",
"NumberOfArchives": 7,
"CreationDate": "2015-12-12T02:22:24.956Z",
"VaultName": "media1"
}
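jq, one of the script's dependencies, is what pulls individual fields out of JSON responses like this one; for example, the archive count (shown here against a trimmed copy of the response above):

```shell
# Extract a single field from a describe-vault-style JSON response.
jq -r '.NumberOfArchives' <<'EOF'
{
  "SizeInBytes": 11360932143,
  "NumberOfArchives": 7,
  "VaultName": "media1"
}
EOF
# → 7
```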
Download the glacierupload.sh script:
wget https://raw.githubusercontent.com/benporter/aws-glacier-multipart-upload/master/glacierupload.sh
Make it executable:
chmod u+x glacierupload.sh
Script Usage
Tar and zip the files you want to upload:
tar -zcvf my-backup.tar.gz /location/to/zip/*
Now split your zipped file into equal-size chunks. Glacier only accepts part sizes that are 1 MB multiplied by a power of two. This example splits the my-backup.tar.gz file into 4 MB chunks, giving all of them the prefix part, which is what the script expects to see. If you choose a prefix other than part, you'll need to edit the script.
split --bytes=4194304 --verbose my-backup.tar.gz part
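Before uploading, it's worth a quick sanity check: the chunks, concatenated back together, must be byte-identical to the original archive. cmp exits 0 and prints nothing when the two byte streams match:

```shell
# Reassemble the chunks and compare them against the original archive.
cat part* | cmp - my-backup.tar.gz && echo "chunks OK"
```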
Now it is time to run the script. It assumes that your part* files are in the same directory as the script.
./glacierupload.sh