Project | Description | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|
Consul | A distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure. | 26,209 | 9 hours ago | 782 | September 20, 2022 | 1,245 | mpl-2.0 | Go |
Nomad | An easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations. | 13,363 | 9 hours ago | 753 | September 14, 2022 | 1,414 | mpl-2.0 | Go |
Fabio | Consul load-balancing made simple. | 7,144 | 2 months ago | 50 | September 13, 2022 | 237 | mit | Go |
Consul Template | Template rendering, notifier, and supervisor for @HashiCorp Consul and Vault data. | 4,613 | 9 hours ago | 118 | August 18, 2022 | 143 | mpl-2.0 | Go |
Gomplate | A flexible command-line tool for template rendering. Supports lots of local and remote datasources. | 2,103 | 5 days ago | 81 | September 13, 2022 | 35 | mit | Go |
Envconsul | Launch a subprocess with environment variables using data from @HashiCorp Consul and Vault. | 1,931 | 3 days ago | 41 | July 19, 2022 | 29 | mpl-2.0 | Go |
Vault Guides | Example usage of HashiCorp Vault secrets management. | 936 | 9 days ago | 4 | April 06, 2021 | 57 | mpl-2.0 | Shell |
Terraform Aws Vault | A Terraform module for how to run Vault on AWS using Terraform and Packer. | 653 | 2 months ago | 63 | August 18, 2021 | 71 | apache-2.0 | HCL |
Consul K8s | First-class support for Consul Service Mesh on Kubernetes. | 592 | 10 hours ago | 95 | June 17, 2022 | 153 | mpl-2.0 | Go |
Hashi Up | Bootstrap HashiCorp Consul, Nomad, or Vault over SSH in under a minute. | 498 | 4 months ago | 32 | September 06, 2022 | 3 | mit | Go |
This project is an example of using Consul, Vault, and Vault UI in a high availability (HA) configuration, conveniently packaged as Docker services for provisioning via Docker Compose.
To get started, see the [Starting and stopping](#starting-and-stopping) section.
Bootstrap the Consul agents and bring the services up:

```shell
./scripts/consul-agent.sh --bootstrap
docker-compose up --scale vault=3 -d
```

Remove `--scale vault=3` if you want to start only one instance of Vault. `docker-compose up -d` by itself would bring only Consul up in an HA configuration.
Configure your browser to use the SOCKS5 proxy listening on `localhost:1080`. With your browser configured to use the proxy, visit http://consul.service.consul:8500/ and wait for the cluster to be ready.
After the vault service has all nodes available, it is time to initialize Vault.

If you wish to secure `secret.txt` with GPG, then set the `recipient_list` environment variable. For example:

```shell
export recipient_list="<gpg fingerprint to your secret gpg key>"
```

If you do not use GPG or do not want to, then skip setting `recipient_list`.
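If you're unsure of your key's fingerprint, it can be extracted from GPG's machine-readable output. This is a sketch assuming you have at least one secret key in your keyring; it simply picks the first one:

```shell
# List secret keys in machine-readable (--with-colons) format and grab the
# first fingerprint record ("fpr"); the fingerprint itself is field 10.
fingerprint="$(gpg --list-secret-keys --with-colons | awk -F: '/^fpr/ {print $10; exit}')"
export recipient_list="$fingerprint"
echo "recipient_list=$recipient_list"
```

If you have multiple secret keys, run `gpg --list-secret-keys` and set `recipient_list` by hand instead.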
Initialize Vault with the following command.

```shell
./scripts/initialize-vault.sh
```
The credentials for Vault are located in the file `secret.txt`, which is created when Vault is initialized; if using GPG encryption, they are in `secret.txt.gpg` instead (decrypt with `gpg --decrypt secret.txt.gpg`).
Configure your web browser to use the SOCKS5 proxy listening on `localhost:1080`.

In Firefox, open the proxy settings, set the SOCKS Host to `localhost`, set Port to `1080`, and check the `SOCKS v5` option. Alternately, install the FoxyProxy extension, which allows quickly switching proxies on or off.

For other browsers, search the web for how to configure proxy settings, or see what extensions are available for managing proxy settings.
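Before changing any browser settings, you can sanity-check the proxy from a terminal. This is a sketch assuming `curl` is installed; `--socks5-hostname` makes curl resolve DNS through the proxy itself, which is required for `.consul` names:

```shell
# Resolve and fetch through the SOCKS5 proxy; a response containing the
# leader's address indicates the Consul cluster is reachable.
curl --socks5-hostname localhost:1080 http://consul.service.consul:8500/v1/status/leader
```

If this prints an address, the proxy and Consul DNS are working and the browser configuration should behave the same way.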
Visit http://portal.service.consul/. It provides links to the other web UIs, and if you configure additional portal services, then they will also show up automatically. Alternately, you can visit the Consul and Vault UIs directly.
To log into the Vault UI, you must generate an admin token for yourself.

```shell
./scripts/get-admin-token.sh
```

The root user token for Vault is stored in `secret.txt` at the root of this repository after you initialize Vault.
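If the scripts in this repository capture the output of `vault operator init` verbatim (an assumption; check your own `secret.txt`), the root token can be pulled out of the file with standard tools:

```shell
# Extract the root token from captured "vault operator init" output.
# The "Initial Root Token: <token>" line format is an assumption.
vault_token="$(awk '/Initial Root Token/ {print $NF}' secret.txt)"
echo "$vault_token"
```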
For playing around with service discovery, there are other docker-compose files which will automatically register with this Consul cluster.
With HA enabled, container instances of Consul and Vault can be terminated with only minor disruptions. Consul can be scaled up on the fly; `consul-template` will automatically update dnsmasq to include new services, so dnsmasq experiences zero downtime.

```shell
docker-compose up --scale vault=3 --scale consul-worker=6 -d
```
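To confirm the scale operation took effect, you can count the running containers per service (the service names here match the `docker-compose up` command above):

```shell
# docker-compose ps -q prints one container ID per line;
# the counts should match the --scale values.
docker-compose ps -q vault | wc -l
docker-compose ps -q consul-worker | wc -l
```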
To play with failover by killing Consul instances, it is recommended to review fault tolerance for Consul HA deployments.
Because high availability clusters have to gossip across nodes, you can't execute a simple `docker-compose down` without corrupting the clusters. Instead, you have to gracefully shut down all clusters that depend on Consul and then gracefully shut down Consul itself. A script is provided for this.

Stop the Consul and Vault clusters safely:

```shell
./scripts/graceful-shutdown.sh
```
Start the Consul and Vault clusters:

```shell
docker-compose up -d
```
Currently, output from the `dnsmasq` and `dnsmasq-secondary` servers is minimal. Verbosity can be increased for troubleshooting: edit `docker-compose.yml` and add `--log-queries` to the dnsmasq command.
DNS client troubleshooting using Docker:

```shell
docker-compose run dns-troubleshoot
```
Using the `dig` command inside of the container:

```shell
# rely on the internal container DNS
dig consul.service.consul

# specify the dnsmasq hostname as the DNS server
dig @dnsmasq vault.service.consul

# reference vault DNS by tags
dig active.vault.service.consul
dig standby.vault.service.consul
```
View Vault logs:

```shell
docker-compose logs vault
```
Use `docker exec` to log into running containers. It allows you to poke around the runtime of the container.
Run a SOCKS5 proxy for use with your browser:

```shell
docker run --network docker-compose-ha-consul-vault-ui_internal --dns 172.16.238.2 --init -p 127.0.0.1:1080:1080 --rm serjs/go-socks5-proxy
```

Configure your browser to use the SOCKS proxy at `127.0.0.1:1080`.
It's possible a cluster was shut down uncleanly and put into an irrecoverable state with no leader. If you have ever cleanly shut down Consul, then it's possible you have a backup in the `backups/` directory.

If you're in this leaderless state, then wipe out your old cluster data with the following command (this will permanently delete all old data).

```shell
docker-compose down -v
```
Start a new cluster:

```shell
docker-compose up -d
```
The latest backup can be restored via the following script.

```shell
./scripts/restore-consul.sh
```

If you have a specific backup you wish to restore, then pass it as an argument.

```shell
./scripts/restore-consul.sh backups/backup.snap
```
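To see which snapshot is the most recent before restoring (assuming snapshots land in `backups/` with a `.snap` extension, as in the example above):

```shell
# List backups newest-first; the first line is the most recent snapshot.
ls -t backups/*.snap | head -n1
```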