Welcome to the home of the project!
With this project, you can spin up in minutes a fully working k8s cluster (single master or HA) with as many worker nodes as you want.
It is a hobby project, so it is not supported for production usage, but feel free to open issues and/or contribute!
The Kubernetes version to install can be chosen among:
Terraform will take care of the provisioning of:
It also takes care of preparing the host machine with the needed packages, configuring:
You can customize the setup by choosing:
The user can also log in via SSH.
The playbook is meant to be run against a local host, or a remote host that has access to the subnets that will be created; target hosts are defined under the vm_host group, depending on how many clusters you want to configure at once.
First of all, you need to install required collections to get started:
ansible-galaxy collection install -r requirements.yml
Once the collections are installed, you can simply run the playbook:
ansible-playbook main.yml
You can quickly get it working by configuring the needed vars, or go straight with the defaults!
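For instance, a settings file overriding a few of the defaults might look like this (the keys come from the reference settings shown below; the filename and values are just examples):

```yaml
# example-vars.yml -- hypothetical override file
k8s:
  cluster_name: my-lab
  cluster_os: Ubuntu
  cluster_version: 1.24
```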
You can also install your cluster using the Makefile with:
To install collections:
To install the cluster:
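The actual Makefile targets are not reproduced here; as a sketch, a wrapper with hypothetical target names (`setup`, `create`) would boil down to the two commands already shown:

```makefile
# Hypothetical targets -- check the repository Makefile for the real names
setup:
	ansible-galaxy collection install -r requirements.yml

create: setup
	ansible-playbook main.yml
```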
To build the EE image, run the build with:
ansible-builder build -f execution-environment/execution-environment.yml -t k8s-ee
To run the playbooks use ansible navigator:
ansible-navigator run main.yml -m stdout
Recommended sizings are:
```yaml
k8s:
  cluster_name: k8s-test
  cluster_os: Ubuntu
  cluster_version: 1.24
  container_runtime: crio
  master_schedulable: false
  # Nodes configuration
  control_plane:
    vcpu: 2
    mem: 2
    vms: 3
    disk: 30
  worker_nodes:
    vcpu: 2
    mem: 2
    vms: 1
    disk: 30
  # Network configuration
  network:
    network_cidr: 192.168.200.0/24
    domain: k8s.test
    additional_san: ""
    pod_cidr: 10.20.0.0/16
    service_cidr: 10.110.0.0/16
    cni_plugin: cilium
  rook_ceph:
    install_rook: false
    volume_size: 50
    rook_cluster_size: 1
  # Ingress controller configuration [nginx/haproxy]
  ingress_controller:
    install_ingress_controller: true
    type: haproxy
    node_port:
      http: 31080
      https: 31443
  # Section for metalLB setup
  metallb:
    install_metallb: false
    l2:
      iprange: 192.168.200.210-192.168.200.250
```
Sizes for disk and mem are in GB. disk provisions space in the cloud image for the pods' ephemeral storage.
cluster_version can be 1.20, 1.21, 1.22, 1.23, 1.24, or 1.25; the latest patch release of the chosen version is installed.
By default, VMs are created with the following names (customizing them is a work in progress):
- **cluster_name**-loadbalancer.**domain**
- **cluster_name**-master-N.**domain**
- **cluster_name**-worker-N.**domain**
It is possible to choose CentOS or Ubuntu as the OS for the Kubernetes hosts.
Since the last release, it is possible to provision multiple clusters on the same host. Each cluster is self-contained and has its own folder under the clusters folder in the playbook root (e.g. /home/user/k8ssetup/clusters).
```
clusters
└── k8s-provisioner
    ├── admin.kubeconfig
    ├── haproxy.cfg
    ├── id_rsa
    ├── id_rsa.pub
    ├── libvirt-resources
    │   ├── libvirt-resources.tf
    │   └── terraform.tfstate
    ├── loadbalancer
    │   ├── cloud_init.cfg
    │   ├── k8s-loadbalancer.tf
    │   └── terraform.tfstate
    ├── masters
    │   ├── cloud_init.cfg
    │   ├── k8s-master.tf
    │   └── terraform.tfstate
    ├── workers
    │   ├── cloud_init.cfg
    │   ├── k8s-workers.tf
    │   └── terraform.tfstate
    └── workers-rook
        ├── cloud_init.cfg
        └── k8s-workers.tf
```
A dedicated script is provided in the main folder to remove a single cluster without touching the others.
A separate inventory is also created for each cluster:
In order to keep clusters separated, ensure that each cluster uses different values for the k8s.cluster_name, k8s.network.domain, and k8s.network.network_cidr variables.
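The generated inventory is not reproduced here; as a sketch, assuming the vm_host group mentioned earlier, a minimal per-cluster inventory could look like this (host name and connection settings are examples):

```ini
# Hypothetical per-cluster inventory
[vm_host]
localhost ansible_connection=local
```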
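For example, two coexisting clusters could differ only in those three values, everything else staying at the defaults (the second cluster's values are illustrative):

```yaml
# Cluster A settings (excerpt)
k8s:
  cluster_name: k8s-test
  network:
    domain: k8s.test
    network_cidr: 192.168.200.0/24
---
# Cluster B settings (excerpt) -- hypothetical second cluster
k8s:
  cluster_name: k8s-lab
  network:
    domain: k8s.lab
    network_cidr: 192.168.201.0/24
```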
The Rook setup creates a dedicated kind of worker, with an additional volume attached to the VMs that require it. It is now possible to select the size of the Rook cluster using the rook_ceph.rook_cluster_size variable in the settings.
The basic setup is taken from the MetalLB documentation. At the moment, the l2 parameter lists the IPs that can be used as 'external' IPs for accessing the applications (it defaults to some addresses in the same subnet as the hosts).
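For context, once MetalLB is installed, a plain Kubernetes Service of type LoadBalancer is assigned an address from that range; for example (the service and app names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo            # placeholder name
spec:
  type: LoadBalancer    # MetalLB assigns an IP from the configured l2 iprange
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```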
Suggestions and improvements are more than welcome!
Alex