Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Caprover | 10,121 | | | | 3 days ago | | | 100 | other | TypeScript |
Scalable PaaS (automated Docker+nginx) - aka Heroku on Steroids | | | | | | | | | | |
Up | 8,687 | 68 | 13 | | 13 days ago | 11 | March 02, 2018 | 291 | mit | Go |
Deploy infinitely scalable serverless apps, apis, and sites in seconds to AWS. | | | | | | | | | | |
Webiny Js | 6,456 | 113 | | | a day ago | 251 | September 07, 2022 | 201 | other | TypeScript |
Open-source serverless enterprise CMS. Includes a headless CMS, page builder, form builder, and file manager. Easy to customize and expand. Deploys to AWS. | | | | | | | | | | |
Aws Sam Cli | 6,237 | 31 | 12 | | a day ago | 129 | June 29, 2022 | 364 | apache-2.0 | Python |
CLI tool to build, test, debug, and deploy Serverless applications using AWS SAM | | | | | | | | | | |
Claudia | 3,709 | 193 | 29 | | a year ago | 122 | March 17, 2022 | 14 | mit | JavaScript |
Deploy Node.js projects to AWS Lambda and API Gateway easily | | | | | | | | | | |
Awesome Ecs | 2,784 | | | | 10 days ago | | | 3 | | |
A curated list of awesome ECS guides, development tools, and resources | | | | | | | | | | |
Architect | 2,331 | 38 | 12 | | 10 days ago | 376 | September 08, 2022 | 76 | apache-2.0 | JavaScript |
The simplest, most powerful way to build a functional web app (fwa) | | | | | | | | | | |
Components | 2,305 | 13 | 31 | | 4 months ago | 509 | January 19, 2022 | 151 | apache-2.0 | JavaScript |
The Serverless Framework's new infrastructure provisioning technology - Build, compose, & deploy serverless apps in seconds... | | | | | | | | | | |
Aws Lambda Developer Guide | 2,111 | | | | 3 days ago | | | 91 | other | Java |
The AWS Lambda Developer Guide | | | | | | | | | | |
Serverless | 1,988 | | | | 15 days ago | | | 120 | mit | TypeScript |
This is intended to be a repo containing all of the official AWS Serverless architecture patterns built with CDK for developers to use. All patterns come in Typescript and Python with the exported CloudFormation also included. | | | | | | | | | | |
Concourse-Up has been replaced with Control Tower. First-time users should deploy using control-tower and raise issues under that project.
A tool for easily deploying Concourse in a single command.
$ AWS_ACCESS_KEY_ID=<access-key-id> \
AWS_SECRET_ACCESS_KEY=<secret-access-key> \
concourse-up deploy <your-project-name>
$ GOOGLE_APPLICATION_CREDENTIALS=<path/to/googlecreds.json> \
concourse-up deploy --iaas gcp <your-project-name>
The goal of Concourse-Up is to be the world's easiest way to deploy and operate Concourse CI in production.
In just one command you can deploy a new Concourse environment for your team, on either AWS or GCP. Your Concourse-Up deployment will upgrade itself and self-heal, restoring the underlying VMs if needed. Using the same command-line tool you can do things like manage DNS, scale your environment, or manage firewall policy. CredHub is provided for secrets management and Grafana for viewing your Concourse metrics.
You can keep up to date on Concourse-Up announcements by reading the EngineerBetter blog.
Feature | AWS | GCP |
---|---|---|
Concourse IP whitelisting | + | + |
Credhub | + | + |
Custom domains | + | + |
Custom tagging | BOSH only | BOSH only |
Custom TLS certificates | + | + |
Database vertical scaling | + | + |
GitHub authentication | + | + |
Grafana | + | + |
Interruptible worker support | + | + |
Letsencrypt integration | + | + |
Namespace support | + | + |
Region selection | + | + |
Retrieving deployment information | + | + |
Retrieving deployment information as shell exports | + | + |
Retrieving deployment information in JSON | + | + |
Retrieving director NATS cert expiration | + | + |
Rotating director NATS cert | + | + |
Self-Update support | + | + |
Teardown deployment | + | + |
Web server vertical scaling | + | + |
Worker horizontal scaling | + | + |
Worker type selection | + | N/A |
Worker vertical scaling | + | + |
Zone selection | + | + |
Customised networking | + | + |
Make sure that either:

- AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set, or
- credentials in ~/.aws/credentials are present.

For GCP, GOOGLE_APPLICATION_CREDENTIALS_CONTENTS should be set to the path to a GCP credentials json file.

Download the latest release and install it into your PATH.
--region value
AWS or GCP region (default: "eu-west-1" on AWS and "europe-west1" on GCP) [$AWS_REGION]

--namespace value
Any valid string that provides a meaningful namespace for the deployment; used as part of the configuration bucket name [$NAMESPACE]. Note that if a namespace was provided on the initial deploy, it will be required for all subsequent concourse-up calls against the same deployment.
The default IAAS for Concourse-Up is AWS. To choose a different IAAS use the --iaas flag. For every IAAS provider apart from AWS this flag is required for all commands.
Supported IAAS values: AWS, GCP

--iaas value
(optional) IAAS, can be AWS or GCP (default: "AWS") [$IAAS]

Deploy a new Concourse with:
concourse-up deploy <your-project-name>
eg:
$ concourse-up deploy ci
...
DEPLOY SUCCESSFUL. Log in with:
fly --target ci login --insecure --concourse-url https://10.0.0.0 --username --password
Metrics available at https://10.0.0.0:3000 using the same username and password
Log into credhub with:
eval "$(concourse-up info ci --env)"
A new deploy from scratch takes approximately 20 minutes.
All flags are optional. Configuration settings provided via flags will persist in later deployments unless explicitly overridden.
--domain value
Domain to use as endpoint for Concourse web interface (eg: ci.myproject.com) [$DOMAIN]
$ concourse-up deploy --domain chimichanga.engineerbetter.com chimichanga
In the example above concourse-up will search for a hosted zone that matches chimichanga.engineerbetter.com or engineerbetter.com and add a record to the longest match (chimichanga.engineerbetter.com in this example).
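The longest-match behaviour can be illustrated with a small shell sketch (the function and zone list here are hypothetical, not part of concourse-up):

```shell
# Pick the longest hosted zone that is a suffix of the given domain,
# mimicking the record-placement rule described above.
longest_zone_match() {
  domain=$1; shift
  best=""
  for zone in "$@"; do
    case "$domain" in
      "$zone"|*".$zone")
        # Keep the zone only if it is longer than the best match so far
        if [ "${#zone}" -gt "${#best}" ]; then best=$zone; fi ;;
    esac
  done
  printf '%s\n' "$best"
}

longest_zone_match chimichanga.engineerbetter.com \
  engineerbetter.com chimichanga.engineerbetter.com
# prints chimichanga.engineerbetter.com
```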
--tls-cert value
TLS cert to use with Concourse endpoint [$TLS_CERT]
--tls-key value
TLS private key to use with Concourse endpoint [$TLS_KEY]
By default concourse-up will generate a self-signed cert using the given domain. If you'd like to provide your own certificate instead, pass the cert and private key as strings using the --tls-cert and --tls-key flags respectively, e.g.:
$ concourse-up deploy \
--domain chimichanga.engineerbetter.com \
--tls-cert "$(cat chimichanga.engineerbetter.com.crt)" \
--tls-key "$(cat chimichanga.engineerbetter.com.key)" \
chimichanga
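If you need a certificate and key to pass to these flags, one way to produce a self-signed pair is with openssl (assuming it is installed; the file and domain names below are just examples):

```shell
# Generate a throwaway self-signed certificate and key for an example domain
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout example-ci.key -out example-ci.crt \
  -subj "/CN=ci.example.com"

# The resulting files can then be passed as strings:
#   concourse-up deploy \
#     --domain ci.example.com \
#     --tls-cert "$(cat example-ci.crt)" \
#     --tls-key "$(cat example-ci.key)" \
#     ci
```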
--workers value
Number of Concourse worker instances to deploy (default: 1) [$WORKERS]
--worker-type
Specify a worker type for AWS (m5 or m4) (default: "m4") [$WORKER_TYPE] (see the comparison table below). Note: this is an AWS-specific option.
AWS does not offer m5 instances in all regions, and even in regions that do offer m5 instances, not all zones within that region may offer them. To complicate matters further, each AWS account is assigned AWS zones at random - for instance, eu-west-1a for one account may be the same as eu-west-1b in another account. If m5s are available in your chosen region but not in the zone Concourse-Up has chosen, create a new deployment, this time specifying another --zone.
--worker-size value
Size of Concourse workers. Can be medium, large, xlarge, 2xlarge, 4xlarge, 10xlarge, 12xlarge, 16xlarge or 24xlarge depending on the worker-type (see above) (default: "xlarge") [$WORKER_SIZE]
--worker-size | AWS m4 Instance type | AWS m5 Instance type* | GCP Instance type |
---|---|---|---|
medium | t2.medium | t2.medium | n1-standard-1 |
large | m4.large | m5.large | n1-standard-2 |
xlarge | m4.xlarge | m5.xlarge | n1-standard-4 |
2xlarge | m4.2xlarge | m5.2xlarge | n1-standard-8 |
4xlarge | m4.4xlarge | m5.4xlarge | n1-standard-16 |
10xlarge | m4.10xlarge | | n1-standard-32 |
12xlarge | | m5.12xlarge | |
16xlarge | m4.16xlarge | | n1-standard-64 |
24xlarge | | m5.24xlarge | |
* m5 instances are not available in all regions or all zones. See --worker-type for more info.
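For scripting around deployments, the AWS m4 column of the table above can be mirrored by a small lookup function (this helper is illustrative and not part of concourse-up):

```shell
# Map a --worker-size value to the corresponding AWS m4-family instance
# type from the table above. Returns non-zero for sizes that have no m4
# equivalent (12xlarge and 24xlarge are m5-only).
m4_instance_for_worker_size() {
  case $1 in
    medium)   echo t2.medium ;;
    large)    echo m4.large ;;
    xlarge)   echo m4.xlarge ;;
    2xlarge)  echo m4.2xlarge ;;
    4xlarge)  echo m4.4xlarge ;;
    10xlarge) echo m4.10xlarge ;;
    16xlarge) echo m4.16xlarge ;;
    *) return 1 ;;
  esac
}

m4_instance_for_worker_size xlarge   # prints m4.xlarge
```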
--web-size value
Size of Concourse web node. Can be small, medium, large, xlarge, 2xlarge (default: "small") [$WEB_SIZE]
--web-size | AWS Instance type | GCP Instance type |
---|---|---|
small | t2.small | n1-standard-1 |
medium | t2.medium | n1-standard-2 |
large | t2.large | n1-standard-4 |
xlarge | t2.xlarge | n1-standard-8 |
2xlarge | t2.2xlarge | n1-standard-16 |
--db-size value
Size of Concourse Postgres instance. Can be small, medium, large, xlarge, 2xlarge, or 4xlarge (default: "small") [$DB_SIZE]
Note that when changing the database size on an existing concourse-up deployment, the SQL instance will be scaled by Terraform, resulting in approximately 3 minutes of downtime.
The following table shows the allowed database sizes and the corresponding AWS RDS & CloudSQL instance types
--db-size | AWS Instance type | GCP Instance type |
---|---|---|
small | db.t2.small | db-g1-small |
medium | db.t2.medium | db-custom-2-4096 |
large | db.m4.large | db-custom-2-8192 |
xlarge | db.m4.xlarge | db-custom-4-16384 |
2xlarge | db.m4.2xlarge | db-custom-8-32768 |
4xlarge | db.m4.4xlarge | db-custom-16-65536 |
--allow-ips value
Comma separated list of IP addresses or CIDR ranges to allow access to (default: "0.0.0.0/0") [$ALLOW_IPS]
Note: allow-ips governs what can access Concourse, but not what can access the control plane (i.e. the BOSH director).
--github-auth-client-id value
Client ID for a github OAuth application - Used for Github Auth [$GITHUB_AUTH_CLIENT_ID]
--github-auth-client-secret value
Client Secret for a github OAuth application - Used for Github Auth [$GITHUB_AUTH_CLIENT_SECRET]
--add-tag key=value
Add a tag to the VMs that form your concourse-up
deployment. Can be used multiple times in a single deploy
command.
--spot=value
Use spot instances for workers. Can be true/false. Default is true.
Concourse Up uses spot instances for workers as a cost saving measure. Users requiring lower risk may switch this feature off by setting --spot=false.
--preemptible=value
Use preemptible instances for workers. Can be true/false. Default is true.
Be aware that preemptible instances will go down at least once every 24 hours, so deployments with only one worker will experience downtime with this feature enabled. BOSH will resurrect failed workers automatically.
spot and preemptible are interchangeable, so if either of them is set to false then interruptible instances will not be used, regardless of your IaaS. For example:
# Results in an AWS deployment using non-spot workers
concourse-up deploy --spot=true --preemptible=false <your-project-name>
# Results in an AWS deployment using non-spot workers
concourse-up deploy --preemptible=false <your-project-name>
# Results in a GCP deployment using non-preemptible workers
concourse-up deploy --iaas gcp --spot=false <your-project-name>
--zone
Specify an availability zone [$ZONE] (cannot be changed after the initial deployment)
If any of the following five flags is set, all of the flags required for your IaaS must also be set.
--vpc-network-range value
Customise the VPC network CIDR to deploy into (required for AWS) [$VPC_NETWORK_RANGE]
--public-subnet-range value
Customise public network CIDR (if IAAS is AWS must be within --vpc-network-range) (required) [$PUBLIC_SUBNET_RANGE]
--private-subnet-range value
Customise private network CIDR (if IAAS is AWS must be within --vpc-network-range) (required) [$PRIVATE_SUBNET_RANGE]
--rds-subnet-range1 value
Customise first rds network CIDR (must be within --vpc-network-range) (required for AWS) [$RDS_SUBNET_RANGE1]
--rds-subnet-range2 value
Customise second rds network CIDR (must be within --vpc-network-range) (required for AWS) [$RDS_SUBNET_RANGE2]
All the ranges above should be in IPv4/mask CIDR format. The sizes can vary as long as vpc-network-range is big enough to contain all the others (when the IAAS is AWS). The smallest allowed CIDR for the public and private subnets is a /28; the smallest for the rds1 and rds2 subnets is a /29.
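The containment rule can be sketched as a small shell check (these helper functions are illustrative, not part of concourse-up):

```shell
# Convert a dotted-quad IPv4 address to an integer
ip_to_int() {
  old_ifs=$IFS
  IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed if the second CIDR fits entirely inside the first
cidr_contains() {
  outer_net=${1%/*}; outer_bits=${1#*/}
  inner_net=${2%/*}; inner_bits=${2#*/}
  # A contained subnet must have an equal or longer prefix...
  [ "$inner_bits" -ge "$outer_bits" ] || return 1
  # ...and share the outer network's prefix bits
  mask=$(( (0xFFFFFFFF << (32 - outer_bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$inner_net") & mask )) -eq \
    $(( $(ip_to_int "$outer_net") & mask )) ]
}

cidr_contains 10.0.0.0/16 10.0.1.0/28 && echo "public subnet fits"
# prints "public subnet fits"
```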
To fetch information about your concourse-up
deployment:
$ concourse-up info --json <your-project-name>
To load credentials into your environment from your concourse-up
deployment:
$ eval "$(concourse-up info --env <your-project-name>)"
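The eval "$(...)" idiom works because the command prints export statements, which eval then executes in your current shell. A generic sketch of the pattern (the function and variable name here are invented for illustration):

```shell
# A stand-in for a command like `concourse-up info --env`,
# which prints shell export statements on stdout
emit_env() {
  printf 'export EXAMPLE_CONCOURSE_URL=https://10.0.0.0\n'
}

# Evaluating the output loads the variables into the current shell
eval "$(emit_env)"
echo "$EXAMPLE_CONCOURSE_URL"   # prints https://10.0.0.0
```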
To check the expiry of the BOSH Director's NATS CA certificate:
$ concourse-up info --cert-expiry <your-project-name>
Warning: if your deployment is approaching a year old, it may stop working due to expired certificates. For more information, see https://github.com/EngineerBetter/concourse-up/issues/81.
All flags are optional
--json
Output as json [$JSON]
--env
Output environment variables
--cert-expiry
Output the expiry of the BOSH director's NATS certificate
To destroy your Concourse:
$ concourse-up destroy <your-project-name>
Handles maintenance operations in concourse-up
All flags are optional
--renew-nats-cert
Rotate the NATS certificate on the director
Note that the NATS certificate is hardcoded to expire after 1 year. This command follows the instructions on bosh.io to rotate this certificate. This operation will cause downtime on your Concourse as it performs multiple full recreates.
--stage value
Specify the stage at which to start the NATS certificate renewal process. If not specified, the stage will be determined automatically. See the following table for details.
Stage | Description |
---|---|
0 | Adding new CA (create-env) |
1 | Recreating VMs for the first time (recreate) |
2 | Removing old CA (create-env) |
3 | Recreating VMs for the second time (recreate) |
4 | Cleaning up director-creds.yml |
When Concourse-up deploys Concourse, it now adds a pipeline to the new Concourse called concourse-up-self-update. This pipeline continuously monitors our GitHub repo for new releases and updates Concourse in place whenever a new version of Concourse-up comes out.
This pipeline is paused by default, so just unpause it in the UI to enable the feature.
Patch releases of concourse-up
are compiled, tested and released automatically whenever a new stemcell or component release appears on bosh.io.
To upgrade your Concourse, grab the latest release and run concourse-up deploy <your-project-name>
again.
Concourse-up now automatically deploys Influxdb, Riemann, and Grafana on the web node. You can access Grafana on port 3000 of your regular concourse URL using the same username and password as your Concourse admin user. We put in a default dashboard that tracks
Concourse-up deploys the CredHub service alongside Concourse and configures Concourse to use it. More detail on how CredHub integrates with Concourse can be found here. You can log into CredHub by running $ eval "$(concourse-up info --env --region $region $deployment)".
Concourse-up normally allows incoming traffic from any address to reach your web node. You can use the --allow-ips
flag to add firewall rules to prevent this.
For example to deploy Concourse-up and only allow traffic from your local machine, you could use the command concourse-up deploy --allow-ips $(dig +short myip.opendns.com @resolver1.opendns.com)
.
--allow-ips takes a comma separated list of IP addresses or CIDR ranges.
By default, concourse-up
deploys to the AWS eu-west-1 (Ireland) region or the GCP europe-west1 (Belgium) region, and uses spot instances for large and xlarge Concourse VMs. The estimated monthly cost is as follows:
Component | Size | Count | Price (USD) |
---|---|---|---|
BOSH director | t2.small | 1 | 18.30 |
Web Server | t2.small | 1 | 18.30 |
Worker | m4.xlarge (spot) | 1 | ~50.00 |
RDS instance | db.t2.small | 1 | 28.47 |
NAT Gateway | - | 1 | 35.15 |
gp2 storage | 20GB (bosh, web) | 2 | 4.40 |
gp2 storage | 200GB (worker) | 1 | 22.00 |
Total | | | 176.62 |
Component | Size | Count | Price (USD) |
---|---|---|---|
BOSH director | n1-standard-1 | 1 | 26.73 |
Web Server | n1-standard-1 | 1 | 26.73 |
Worker | n1-standard-4 (preemptible) | 1 | 32.12 |
DB instance | db-g1-small | 1 | 27.25 |
NAT Gateway | n1-standard-1 | 1 | 26.73 |
disk storage | 20GB (bosh, web) + 200GB (worker) | - | 40.80 |
Total | | | 180.35 |
concourse-up
first creates an S3 or GCS bucket to store its own configuration and saves a config.json
file there.
It then uses Terraform to deploy the following infrastructure:
Once the terraform step is complete, concourse-up
deploys a BOSH director on an t2.small/n1-standard-1 instance, and then uses that to deploy a Concourse with the following settings:
If you'd like to run concourse-up with its own IAM account, create a user with the following permissions:
An IAM primitive role of roles/owner for the target GCP project is required.
CI Pipeline (deployed with Concourse Up!)
To build and test you'll need:
github.com/mattn/go-bindata
concourse-up uses Go compile-time variables to set the release versions it uses. To build locally, use the build_local.sh script rather than running go build.
You will also need to clone concourse-up-ops to the same level as concourse-up to get the manifest and ops files necessary for building. Check the latest release of concourse-up for the appropriate tag of concourse-up-ops.
Tests use the Ginkgo Go testing framework. The tests require you to have set up AWS authentication locally.
Install ginkgo and run the tests with:
$ go get github.com/onsi/ginkgo/ginkgo
$ ginkgo -r
Go linting, shell linting, and unit tests can be run together in the same Docker image that CI uses with ./run_tests_local.sh. This should be done before committing or raising a PR.
The pipeline listens for new patch or minor versions of manifest.yml and ops/versions.json coming from the concourse-up-ops repo. To pick up a new major version, first make sure it exists in the repo, then modify tag_filter: X.*.* in the concourse-up-ops resource, where X is the major version you want to pin to.
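For example, pinning to a hypothetical major version 12 might look like this in the pipeline's resource definition (the resource name and URI here are assumed from context; tag_filter is a standard Concourse git resource parameter):

```yaml
resources:
- name: concourse-up-ops
  type: git
  source:
    uri: https://github.com/EngineerBetter/concourse-up-ops.git
    # Pin to major version 12 (example value) so only 12.x.x tags are picked up
    tag_filter: 12.*.*
```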