Opinionated Terraform module for creating a Highly Available Kubernetes cluster running on
Container Linux by CoreOS (any channel) in an AWS
Virtual Private Cloud (VPC). With the prerequisites installed,
`make all` will simply spin up a default cluster; and, since it is
based on Terraform, the cluster is easy to customize.
The default configuration includes the Kubernetes add-ons: DNS, Dashboard and UI.
```shell
# prereqs
$ brew update && brew install awscli cfssl jq kubernetes-cli terraform

# build artifacts and deploy cluster
$ make all

# nodes
$ kubectl get nodes

# addons
$ kubectl get pods --namespace=kube-system

# verify dns - run after addons have fully loaded
$ kubectl exec busybox -- nslookup kubernetes

# open dashboard
$ make dashboard

# obliterate the cluster and all artifacts
$ make clean
```
| component / tool | version |
| --- | --- |
| Container Linux by CoreOS | 1409.7.0, 1465.3.0, 1492.1.0 |
| aws-cli | aws-cli/1.11.129 Python/2.7.10 Darwin/16.7.0 botocore/1.5.92 |
Quick install prerequisites on Mac OS X with Homebrew:
```shell
$ brew update && brew install awscli cfssl jq kubernetes-cli terraform
```
`make all` will create:
To open the dashboard: `make dashboard`
To display instance information:
To display status:
To destroy, remove and generally undo everything: `make clean`
`make all` and `make clean` should be idempotent - should an error occur, simply run
the command again and things should recover from that point.
Tack works in three phases:
The purpose of this phase is to prep the environment for Terraform execution. Some tasks are hard or messy to do in Terraform, and a little prep work can go a long way here. Determining the Container Linux by CoreOS AMI for a given region, channel and VM type, for instance, is easy enough to do with a simple shell script.
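As a minimal sketch of that AMI-lookup idea (not tack's actual script): CoreOS publishes a per-channel JSON manifest of AMIs, and `jq` can pick out the image for a region. The manifest shape assumed here - top-level region keys with an `hvm` field, as served at `coreos.com/dist/aws/aws-<channel>.json` - should be verified against the current feed.

```shell
# ami_for_region <region> reads a Container Linux AMI manifest on stdin
# and prints the HVM AMI id for that region.
ami_for_region() {
  jq -r --arg region "$1" '.[$region].hvm'
}

# In practice this would be fed from the channel manifest, e.g.:
#   curl -s "https://coreos.com/dist/aws/aws-${CHANNEL}.json" | ami_for_region "$REGION"
```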
Terraform does the heavy lifting of resource creation and sequencing. Tack uses local
modules to partition the work in a logical way. Although it is of course possible to do all
of the Terraform work in a single `.tf` file or collection of `.tf` files, that quickly becomes
unwieldy and difficult to debug. Breaking the work into local modules makes the
flow much easier to follow and provides the basis for composing variant solutions down the track - for example, converting the worker Auto Scaling Group to use spot instances.
Once the infrastructure has been configured and instantiated, it will take some time for it to settle. Waiting for the 'master' ELB to become healthy is an example of this.
Like many great tools, tack started out as a collection of scripts, makefiles and other tools. As tack matures and patterns crystallize, it will evolve into a Terraform plugin and perhaps a Go-based CLI tool for 'init-ing' new cluster configurations. The tooling will compose Terraform modules into a solution based on user preferences - think `npm init` or, better yet, yeoman.
```shell
# check etcd health over TLS
curl --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/k8s-etcd.pem \
  --key /etc/kubernetes/ssl/k8s-etcd-key.pem \
  https://etcd.test.kz8s:2379/health

# inspect certificates
openssl x509 -text -noout -in /etc/kubernetes/ssl/ca.pem
openssl x509 -text -noout -in /etc/kubernetes/ssl/k8s-etcd.pem
```
To access Elasticsearch and Kibana, first start a `kubectl` proxy:

```shell
$ kubectl proxy
Starting to serve on localhost:8001
```
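With the proxy running, add-on UIs are reachable through the API server's service-proxy path. A small sketch of how those URLs are built; the `kibana-logging` service name below is an assumption based on the standard logging add-on, so check `kubectl get svc --namespace=kube-system` for the actual names in your cluster:

```shell
# Build an apiserver service-proxy URL for a service behind `kubectl proxy`.
proxy_url() {
  # usage: proxy_url <namespace> <service>
  printf 'http://localhost:8001/api/v1/namespaces/%s/services/%s/proxy' "$1" "$2"
}

# e.g. open Kibana (service name "kibana-logging" is an assumption):
#   open "$(proxy_url kube-system kibana-logging)"
```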
If you have an existing VPC you'd like to deploy a cluster into, there is an option for this with tack.
In order to test existing-VPC support, we need to generate a VPC, then try the overrides with it, and afterwards clean it all up. These instructions are meant for someone wanting to ensure that tack's existing-VPC code works properly.
1. `make all` to generate a VPC with Terraform
2. `make clean` to remove everything but the VPC and associated networking (we preserved it in the previous step)
3. `make all` to test out using an existing VPC
4. `make clean` to clean up everything