| Project Name | Stars | Most Recent Commit | License | Language | Description |
|---|---|---|---|---|---|
| Awesome Aws | 11,514 | 4 days ago | other | Python | A curated list of awesome Amazon Web Services (AWS) libraries, open source repos, guides, blogs, and other resources. Featuring the Fiery Meter of AWSome. |
| S3fs Fuse | 7,180 | a day ago | gpl-2.0 | C++ | FUSE-based file system backed by Amazon S3 |
| Moto | 6,843 | 19 hours ago | apache-2.0 | Python | A library that allows you to easily mock out tests based on AWS infrastructure. |
| Replibyte | 3,512 | 3 months ago | gpl-3.0 | Rust | Seed your development database with real data ⚡️ |
| Wal E | 3,327 | a year ago | bsd-3-clause | Python | Continuous Archiving for Postgres |
| 0x4447_product_s3_email | 2,905 | a year ago | mit | | 📫 A serverless email server on AWS using S3 and SES |
| Mountpoint S3 | 2,572 | 2 days ago | apache-2.0 | Rust | A simple, high-throughput file client for mounting an Amazon S3 bucket as a local file system. |
| Wal G | 2,561 | 19 hours ago | other | Go | Archival and Restoration for databases in the Cloud |
| Mc | 2,463 | a day ago | agpl-3.0 | Go | Simple \| Fast tool to manage MinIO clusters :cloud: |
| Aws Sdk Js V3 | 2,391 | 12 hours ago | apache-2.0 | TypeScript | Modularized AWS SDK for JavaScript. |
awspublish plugin for gulp
First, install gulp-awspublish as a development dependency:

npm install --save-dev gulp-awspublish

Then, add it to your gulpfile.js:
var gulp = require("gulp");
var awspublish = require("gulp-awspublish");
gulp.task("publish", function() {
// create a new publisher using S3 options
// http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property
var publisher = awspublish.create(
{
region: "your-region-id",
params: {
Bucket: "..."
}
},
{
cacheFileName: "your-cache-location"
}
);
// define custom headers
var headers = {
"Cache-Control": "max-age=315360000, no-transform, public"
// ...
};
return (
gulp
.src("./public/*.js")
// gzip, Set Content-Encoding headers and add .gz extension
.pipe(awspublish.gzip({ ext: ".gz" }))
// publisher will add Content-Length, Content-Type and headers specified above
// If not specified it will set x-amz-acl to public-read by default
.pipe(publisher.publish(headers))
// create a cache file to speed up consecutive uploads
.pipe(publisher.cache())
// print upload updates to console
.pipe(awspublish.reporter())
);
});
// output
// [gulp] [create] file1.js.gz
// [gulp] [create] file2.js.gz
// [gulp] [update] file3.js.gz
// [gulp] [cache] file3.js.gz
// ...
Note: If you follow the aws-sdk suggestions for providing your credentials, you don't need to pass them in when creating the publisher.
Note: In order for publish to work on S3, your policy has to allow the following S3 actions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::BUCKETNAME"]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": ["arn:aws:s3:::BUCKETNAME/*"]
}
]
}
By default, the plugin works only when public access to the bucket is not blocked. When dealing with a private bucket, make sure to pass either the option { noAcl: true } or a value for the x-amz-acl header:
publisher.publish({}, { noAcl: true });
publisher.publish({ "x-amz-acl": "something" });
To run the tests, add an aws-credentials.json file to the project directory with the name of your test bucket and the credentials of the user running the tests, then run npm test:
{
"params": {
"Bucket": "<test-bucket-name>"
},
"credentials": {
"accessKeyId": "<your-access-key-id>",
"secretAccessKey": "<your-secret-access-key>",
"signatureVersion": "v3"
}
}
awspublish.gzip(options)

Create a through stream that gzips files and adds a Content-Encoding header. If you need an older node engine to work with gzipping, you can use v2.0.2.

Available options:

- ext: the file extension to append to gzipped files (for example { ext: ".gz" }, as used above)
awspublish.create(AWSConfig, cacheOptions)

Create a Publisher. The AWSConfig object is used to create an aws-sdk S3 client. At a minimum you must pass a Bucket key to define the site bucket. You can find all available options in the AWS SDK documentation.

The cacheOptions object allows you to define the location of the cached hash digests. By default, they will be saved in your project's root folder in a hidden file called '.awspublish-' + 'name-of-your-bucket'.
The AWS client has a default timeout which may be too low when pushing large files (> 50 MB). To adjust the timeout, add httpOptions: { timeout: 300000 } to the AWSConfig object.
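For instance, a sketch of a publisher with a raised timeout (the region, bucket and the five-minute value are placeholders, and the earlier require of gulp-awspublish is assumed):

```js
// Give the underlying S3 client a 5-minute timeout for large uploads.
var publisher = awspublish.create({
  region: "your-region-id",
  params: { Bucket: "..." },
  httpOptions: { timeout: 300000 } // milliseconds
});
```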
By default, gulp-awspublish uses the credential chain specified in the AWS docs.
Here are some example credential configurations:
Hardcoded credentials (Note: We recommend you not hard-code credentials inside an application. Use this method only for small personal scripts or for testing purposes.):
var publisher = awspublish.create({
region: "your-region-id",
params: {
Bucket: "..."
},
credentials: {
accessKeyId: "akid",
secretAccessKey: "secret"
}
});
Using a profile by name from ~/.aws/credentials:
var AWS = require("aws-sdk");
var publisher = awspublish.create({
region: "your-region-id",
params: {
Bucket: "..."
},
credentials: new AWS.SharedIniFileCredentials({ profile: "myprofile" })
});
Instead of putting anything in the configuration object, you can also provide the following environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, AWS_PROFILE. You can also define a [default] profile in ~/.aws/credentials which the SDK will use transparently without needing to set anything.
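As a sketch, when credentials come from the environment (or a [default] profile), the configuration object only needs the region and bucket:

```js
var awspublish = require("gulp-awspublish");

// No credentials key at all: the SDK resolves credentials from
// AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_PROFILE or ~/.aws/credentials.
var publisher = awspublish.create({
  region: "your-region-id",
  params: { Bucket: "..." }
});
```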
publisher.publish(headers, options)

Create a through stream that pushes files to S3.

- headers: a hash of headers to add to, or override on, the uploaded files.
- options: optional publishing options, such as force to bypass the cache and re-upload every file.

Files that go through the stream receive extra properties, including s3.path and s3.state (the values reported by awspublish.reporter).

Note: publish will never delete files remotely. To clean up unused remote files, use sync.
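A short sketch of the force option, reusing the headers and publisher defined earlier:

```js
// Bypass the local cache and re-upload every file regardless of its etag.
gulp
  .src("./public/*.js")
  .pipe(publisher.publish(headers, { force: true }))
  .pipe(awspublish.reporter());
```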
publisher.cache()

Create a through stream that creates or updates a cache file using the file's S3 path and etag. Consecutive runs of publish will use this file to avoid re-uploading identical files.

The cache file is saved in the current working directory and is named .awspublish-<bucket>. The cache file is flushed to disk every 10 files just to be safe.
publisher.sync(prefix, whitelistedFiles)

Create a transform stream that deletes old files from the bucket. For example:
// only directory bar will be synced
// files in folder /foo/bar and file baz.txt will not be removed from the bucket despite not being in your local folder
gulp
.src("./public/*")
.pipe(publisher.publish())
.pipe(publisher.sync("bar", [/^foo\/bar/, "baz.txt"]))
.pipe(awspublish.reporter());
Warning: sync will delete files in your bucket that are not in your local folder unless they're whitelisted.
// this will publish and sync bucket files with the one in your public directory
gulp
.src("./public/*")
.pipe(publisher.publish())
.pipe(publisher.sync())
.pipe(awspublish.reporter());
// output
// [gulp] [create] file1.js
// [gulp] [update] file2.js
// [gulp] [delete] file3.js
// ...
The aws-sdk S3 client is exposed to let you do other S3 operations.
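As an illustration, a small sketch that uses the exposed client directly, assuming it is available as publisher.client (a name not shown in this document) and that the bucket is already bound via params:

```js
// List what is currently in the bucket using the underlying aws-sdk client.
publisher.client.listObjectsV2({ Prefix: "js/" }, function(err, data) {
  if (err) throw err;
  data.Contents.forEach(function(obj) {
    console.log(obj.Key, obj.Size);
  });
});
```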
awspublish.reporter(options)

Create a reporter that logs s3.path and s3.state (delete, create, update, put, cache or skip).

Available options:

- states: the list of states to report (for example ["create", "update", "delete"])

// this will publish and sync bucket files and print created, updated and deleted files
gulp
.src("./public/*")
.pipe(publisher.publish())
.pipe(publisher.sync())
.pipe(
awspublish.reporter({
states: ["create", "update", "delete"]
})
);
You can use gulp-rename to rename your files on S3:
// see examples/rename.js
var rename = require("gulp-rename");
gulp
.src("examples/fixtures/*.js")
.pipe(
rename(function(path) {
path.dirname += "/s3-examples";
path.basename += "-s3";
})
)
.pipe(publisher.publish())
.pipe(awspublish.reporter());
// output
// [gulp] [create] s3-examples/bar-s3.js
// [gulp] [create] s3-examples/foo-s3.js
You can use concurrent-transform to upload files in parallel to your Amazon bucket:
var parallelize = require("concurrent-transform");
gulp
.src("examples/fixtures/*.js")
.pipe(parallelize(publisher.publish(), 10))
.pipe(awspublish.reporter());
You can use the merge-stream plugin to upload two streams in parallel, allowing sync to work with mixed file types:
var merge = require("merge-stream");
var gzip = gulp.src("public/**/*.js").pipe(awspublish.gzip());
var plain = gulp.src(["public/**/*", "!public/**/*.js"]);
merge(gzip, plain)
.pipe(publisher.publish())
.pipe(publisher.sync())
.pipe(awspublish.reporter());
- gulp-awspublish-router: a router for defining file-specific rules (https://www.npmjs.org/package/gulp-awspublish-router)
- gulp-cloudfront-invalidate-aws-publish: invalidate the CloudFront cache based on the output from awspublish (https://www.npmjs.com/package/gulp-cloudfront-invalidate-aws-publish)