This workshop was conducted prior to the release of the official Vault Helm chart. For the official chart, see here. These materials may not reflect updates to the officially supported Vault or Consul charts.
This is workshop material for deploying Vault on Kubernetes. As a prerequisite, this material requires a Kubernetes cluster with a proper auto-unseal mechanism, so the initial set-up of the cluster depends on Google Kubernetes Engine. The rest of the Vault deployment attempts to remain agnostic of the provider, with some exceptions.
At the conclusion of the workshop, we will have a Vault cluster and some example applications. The flow of the workshop is outlined below:
Initial cluster creation. This uses GKE and GCP constructs.
Vault auto-unseal. Where unseal keys are stored is at each organization's discretion, so we do not store them in a GCP bucket; for ease of this workshop we auto-unseal the instances using GCP KMS (a sketch of the seal configuration appears after this list).
Kubernetes Vault authentication. This step requires the retrieval of the Kubernetes cluster certificate data. In GKE version 1.12+, clusters are not generated with a cluster certificate by default. As a result, the kubeconfig does not store cluster certificate data and uses an OAuth token instead. To address this concern, we call the GCP API for the cluster certificate.
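As a rough illustration of the auto-unseal step, the Vault server configuration typically carries a GCP KMS seal stanza along the lines of the sketch below. The project, key ring, and crypto key names here are placeholders, not the workshop's actual values.

seal "gcpckms" {
  project    = "<project>"
  region     = "global"
  key_ring   = "vault-keyring"
  crypto_key = "vault-unseal-key"
}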
This workshop material demonstrates the use of several tools in the Kubernetes ecosystem, since its focus is running Vault on Kubernetes.
Helm: We use Helm to deploy and configure Consul and Vault. While these charts can be re-templated into plain Kubernetes manifests, deploying highly available Vault and Consul clusters is fairly complex to organize by hand. Furthermore, HashiCorp supports Helm charts for Consul.
Consul: There are many options for Vault backends, where the encrypted secrets are stored. To remain agnostic of a specific cloud provider or upstream technology, we want a Kubernetes hosted backend for Vault.
To start, you will need to have:
From the Google Cloud Shell (or general Linux shell), you must have the following packages installed:
gcloud CLI: This will already be installed in Google Cloud Shell.
helm: This will already be installed in Google Cloud Shell.
kubectl: This will already be installed in Google Cloud Shell.
Set up bash completion for kubectl.
source <(kubectl completion bash)
First, let's clone the GitHub project into the Google Cloud Shell workspace.
git clone https://github.com/hashicorp/hands-on-with-vault-on-kubernetes.git
We now need the Vault CLI tool. To install in Google Cloud Shell, run:
make 0-install-vault
Next, we'll build the cluster. We need to:
We could automate these steps with Terraform for more repeatable deployment and management, but that is out of scope for this workshop. Instead, we'll run:
export GOOGLE_PROJECT=<project>
make 0-build-cluster
Note: If you are bringing your own cluster, make sure your kubeconfig is set correctly. You will also need to set:
export CLUSTER_NAME=<cluster name>
We have the option of many storage backends for Vault. In this workshop, we'll use Consul to remain agnostic of a particular cloud. Consul is a service discovery tool that includes a key-value store, which Vault can use for storing state.
To deploy, we run:
make 1-consul
In summary, this command deploys 3 Consul servers as a StatefulSet, fronted by a service, and 3 Consul agents as a DaemonSet. They are set up with Access Control Lists that allow Vault to store configuration in Consul.
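To confirm the Consul pods are up before moving on, a quick check like the following works. The label selector is an assumption about how the chart labels its pods; adjust it if your release differs.

# assumes the chart labels Consul pods with app=consul
kubectl get pods -l app=consul -o wide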
This would be very close to a production configuration, with a few additions we might want to make later. A "production" backend should maintain the following patterns:
Access control. This prevents anonymous or unauthorized access to the backend cluster.
Non-root access. We generally should not require root access to the storage backend.
Resiliency. The store should quickly self-heal or be restored on failure.
Note: We are using Helm for deploying Consul and Vault. The official Helm chart for Consul can be used for other Consul configurations, such as Connect.
We are using self-signed certificates. In a production environment, we might use Let's Encrypt or another certificate authority for a proper certificate. Certificates help control communication with Vault and allow only encrypted transmission of data; it is advisable to use TLS to encrypt all traffic.
Run the following command to create certificates in the tls/ directory.
make 2-certs
This will generate a self-signed certificate that allows access to the internal Kubernetes DNS endpoints of Vault. To logically isolate our Vault deployment from other resources, we use a Kubernetes namespace. We can apply access control and resource quotas to the namespace.
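To sanity-check the generated certificate, you can inspect its subject alternative names and confirm the internal Kubernetes DNS endpoints are listed. The file name below is a guess at what the make target produces; adjust it to whatever lands in tls/.

# tls/vault.pem is a placeholder file name
openssl x509 -in tls/vault.pem -noout -text | grep -A1 'Subject Alternative Name'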
This section may be substituted with the official Vault Helm chart.
Now that we've set up the backend for Vault and generated certificates, we can deploy the Vault cluster to Kubernetes.
The Vault configuration we're deploying consists of three Vault instances. Each instance connects to the local Consul agent, which forwards any data to the Consul servers. One of the Vault instances serves as the leader, while the others serve as followers.
Let's review the following files in helm/vault-helm:
server-ha-statefulset.yaml: This contains the Vault StatefulSet that deploys with sticky identities for each Vault server. Vault servers reference a Consul agent via the underlying Kubernetes host IP and port 8500 (see the storage stanza sketch after this list). We mount our certificates as volumes and pass the Consul token for connecting to the backend as an environment variable.
ha-ui-service.yaml: We use this manifest for a Vault client endpoint. It is generated to allow a single load-balanced endpoint for access. We add this to a configuration map for applications and other services to use.
server-ha-init-job.yaml: We need to initialize Vault with the vault operator init command. In this deployment, we use Google KMS to facilitate auto-unseal. We are not storing the root token in a Google storage bucket. Instead, we scrape it from the logs and temporarily use it as a Kubernetes secret for additional ACL generation (next step).
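For reference, the Vault server configuration rendered for the StatefulSet points at the local Consul agent with a storage stanza roughly like the sketch below. The path and the way the host IP and ACL token are injected are placeholders here, not the chart's exact values.

storage "consul" {
  address = "<host-ip>:8500"
  path    = "vault/"
  token   = "<consul-acl-token>"
}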
Note: We are storing the root token to facilitate the workshop and are not storing the unseal keys. This pattern is not intended for production use; prefer to store the unseal keys and root token in a remote key management system, for example via a sidecar.
To deploy, run the command below.
make 3-vault
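Once the job finishes, we can verify that Vault is initialized and unsealed. The pod label and names below are assumptions based on a typical StatefulSet release; adjust them to your deployment.

kubectl get pods -l app=vault
# VAULT_SKIP_VERIFY works around the self-signed certificate for this check
kubectl exec vault-0 -- sh -c 'VAULT_SKIP_VERIFY=true vault status'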
To restrict access to secrets, such as for a test application, we need to deploy an access control list to specific Vault paths. We'll associate a token (or identity) to the policy outlined by the access control list.
Let's review the following files in helm/vault-helm-acl:
acl-config.yaml: This is the configuration we'll use to create an administrator account so we do not have to use the Vault root token. The policy in this file allows the creation and modification of other policies as well as retrieval of secrets (a sketch of such a policy appears after this list).
acl-init-job.yaml: We use a Kubernetes Job to apply the ACL policy.
tests/test-runner.yaml: This checks the administrator token for the correct policy and revokes the Vault root token once the test passes. In a production setup, root tokens can be generated on demand and should not be used for Vault interactions.
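The actual administrator policy lives in acl-config.yaml; as a sketch of its shape, an equivalent policy applied by hand might look like the following. The policy name and exact paths are illustrative, not copied from the chart.

vault policy write admin - <<EOF
# Manage other ACL policies
path "sys/policies/acl/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Read secrets
path "secret/*" {
  capabilities = ["read", "list"]
}
EOF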
Note: For this workshop, we will add the token as a Kubernetes secret, as we did with the root token, since we do not have an additional store.
To apply the administrator ACL, run:
make 4-acl
In this step, we'll enable the Kubernetes authentication method in Vault in order to link a service account token to a Vault policy. Kubernetes uses JSON Web Tokens (JWTs) for its service accounts. We'll enable the authentication method and then configure Vault to talk to the Kubernetes cluster, using the cluster's hostname, certificate, and service account JWT.
Vault uses the Kubernetes TokenReview API to validate the JWT.
To enable and configure the Kubernetes authentication method, run:
make 5-auth
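Under the hood, this is roughly equivalent to enabling the auth method and writing its configuration by hand. In the sketch below, the cluster endpoint, CA certificate file, and reviewer JWT are placeholders supplied by the make target.

vault auth enable kubernetes

vault write auth/kubernetes/config \
  kubernetes_host="https://<cluster-endpoint>" \
  kubernetes_ca_cert=@<cluster-ca.crt> \
  token_reviewer_jwt="<service-account-jwt>"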
To demonstrate how we would use a service account's JWT to access the secrets for a given path, we'll create a policy to allow creation, deletion, updates, and retrieval at the path secret/data/exampleapp/*.
Then, we link the service account to a Vault named role. After that, we'll add a secret to secret/data/exampleapp/config to read later.
Apply configuration using:
make 6-policy
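The three pieces this target configures can be sketched as individual CLI calls. The role, service account, and namespace names below are assumptions for illustration, not the workshop's exact values.

vault policy write exampleapp - <<EOF
path "secret/data/exampleapp/*" {
  capabilities = ["create", "read", "update", "delete"]
}
EOF

# bind the service account to the policy via a named role (names assumed)
vault write auth/kubernetes/role/exampleapp \
  bound_service_account_names=exampleapp \
  bound_service_account_namespaces=default \
  policies=exampleapp \
  ttl=1h

# seed a secret to read later
vault kv put secret/data/exampleapp/config username="exampleapp" password="example-password"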
We need an application to access the static secret at secret/data/exampleapp/config. The application should run with the service account we configured and, with its JWT, allow us to retrieve the secret.
Deploy the example application by running:
make 7-simple
Let's view the example application in the browser. First, port forward from the pod to the Cloud Shell instance.
POD_NAME=$(kubectl get pods -l app=exampleapp-simple -o jsonpath='{.items[*].metadata.name}')
kubectl port-forward $POD_NAME 8080:8080 &
To view the example application in the browser, we can use the "Web Preview" feature in the Google Cloud Shell. It will open a new tab with the example application's landing page.
We see the empty application on the browser.
We will perform a Vault login on behalf of the exampleapp pod and get a Vault token. The token can be used to retrieve secrets for the exampleapp application.
make 8-token
The above command will create a local file called local.env that contains the Vault token and Vault address.
cat local.env
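For reference, the login performed by make 8-token amounts to posting the pod's service account JWT to Vault's Kubernetes auth endpoint. A sketch, assuming the role is named exampleapp and that local.env exports VAULT_ADDR (the variable names are assumptions):

# reuse the POD_NAME variable set earlier
JWT=$(kubectl exec $POD_NAME -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# -k skips verification of the workshop's self-signed certificate
curl -k --request POST \
  --data "{\"jwt\": \"$JWT\", \"role\": \"exampleapp\"}" \
  "$VAULT_ADDR/v1/auth/kubernetes/login"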
We will now use the Vault token generated above to retrieve secrets from Vault.
make 9-secret
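Equivalently, with the token and address from local.env (again assuming the file exports VAULT_TOKEN and VAULT_ADDR), the secret can be read directly from the KV API:

source local.env
curl -sk --header "X-Vault-Token: $VAULT_TOKEN" \
  "$VAULT_ADDR/v1/secret/data/exampleapp/config"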
We are retrieving the static secrets manually. Next, we'll discuss how to do it dynamically.
The sidecar pattern is common with Kubernetes applications and can be applied to access secrets from Vault.
Here is a diagram showcasing the application secrets workflow with Vault.
An init container uses the pod's service account JWT and the Kubernetes auth method to authenticate with Vault. If the authentication is successful, Vault returns a token that can be used to fetch application secrets.
Consul Template then uses the Vault token to fetch application secrets and write them into a shared volume so the application container can use them.
The application can read the secrets file. For this example, our application periodically reads the config file that has secrets from the shared volume.
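To make the sidecar concrete, the Consul Template container renders the secret into the shared volume with a template stanza roughly like the sketch below. The destination path is illustrative, and whether fields are read as .Data.username or .Data.data.username depends on the KV version of the secret mount.

template {
  destination = "/etc/secrets/config"
  contents    = <<EOT
{{- with secret "secret/data/exampleapp/config" }}
username: {{ .Data.data.username }}
password: {{ .Data.data.password }}
{{- end }}
EOT
}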
Deploy the exampleapp sidecar application.
make 10-sidecar
Port forward to the sidecar pod.
PODNAME=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -l app=exampleapp-sidecar)
kubectl port-forward $PODNAME 8081:8080 &
When we open the Web Preview in Cloud Shell (be sure to change the port to 8081), we should see our secret displayed.
Let's try updating the secret in Vault.
source local.env
vault kv put secret/data/exampleapp/config ttl="5s" username="exampleapp" password="osc0nisawesome"
When we refresh the browser with the example application, we should see the secret updated.
To learn about Vault's dynamic credential generation capabilities, we will look at an example where we dynamically generate database credentials using Vault's database secrets engine.
Deploy a dummy MySQL database on Kubernetes.
make 11-mysql
Configure the database secrets engine in Vault.
make 12-database-secret-engine
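By hand, the configuration this target applies looks roughly like the following. The MySQL service address, credentials, database config name, and role name are placeholders for illustration.

vault secrets enable database

vault write database/config/exampledb \
  plugin_name=mysql-database-plugin \
  connection_url="{{username}}:{{password}}@tcp(<mysql-service>:3306)/" \
  allowed_roles="exampleapp" \
  username="root" \
  password="<mysql-root-password>"

vault write database/roles/exampleapp \
  db_name=exampledb \
  creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';" \
  default_ttl="1h" \
  max_ttl="24h"

A client with the right policy can then request short-lived credentials with vault read database/creds/exampleapp.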
Deploy the dynamic-secrets-enabled sidecar application.
make 13-dynamic-secrets-sidecar
Next, port forward and check the Web Preview for the database credentials.
PODNAME=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -l app=exampleapp-database-sidecar)
kubectl port-forward $PODNAME 8082:8080 &
Using the username and password from the example application web page, you should be able to access the database table.
This tutorial is based on Seth Vargo's Vault on GKE workshop.