Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
K3sup | 5,220 | 2 months ago | 21 | April 13, 2021 | 12 | other | Go | Bootstrap K3s over SSH in < 60s 🚀
Raspberry Pi Dramble | 1,648 | 5 months ago | | | 15 | mit | Shell | DEPRECATED - Raspberry Pi Kubernetes cluster that runs HA/HP Drupal 8
Raspberrymatic | 1,289 | 3 days ago | | | 132 | apache-2.0 | JavaScript | 🏠 A lightweight, buildroot-based Linux operating system alternative for your CCU3, ELV-Charly or for running your "HomeMatic CCU" IoT central as a pure virtual appliance (using Proxmox VE, VirtualBox, Docker/OCI, Kubernetes/K8s, Home Assistant, vmWare ESXi, etc.) or on your own Raspberry Pi, Tinkerboard, ODROID, etc. SBC device
Kube Vip | 1,279 | 4 days ago | 32 | April 21, 2022 | 63 | apache-2.0 | Go | Kubernetes Control Plane Virtual IP and Load-Balancer
Aria2 Ariang Docker | 918 | 15 days ago | | | 2 | apache-2.0 | Shell | The Docker image for Aria2 + AriaNg + File Browser + Rclone
Sitewhere | 854 | 2 years ago | 18 | June 19, 2017 | 112 | other | Java | SiteWhere is an industrial-strength open-source application enablement platform for the Internet of Things (IoT). It provides a multi-tenant, microservice-based infrastructure that includes device/asset management, data ingestion, big-data storage, and integration through a modern, scalable architecture, with REST APIs for all system functionality and SDKs for many common device platforms including Android, iOS, Arduino, and any Java-capable platform such as Raspberry Pi
K8s On Raspbian | 850 | 3 years ago | | | | mit | Shell | Kubernetes on Raspbian (Raspberry Pi)
K8s On Raspbian | 777 | 3 years ago | | | 7 | mit | Shell | Kubernetes on Raspbian (Raspberry Pi)
Kubeadm Workshop | 541 | 4 years ago | | | 30 | mit | Makefile | Showcasing a bare-metal multi-platform kubeadm setup with persistent storage and monitoring
Kubernetes On Arm | 541 | 6 years ago | | | 28 | mit | Shell | Kubernetes ported to ARM boards like Raspberry Pi
Monorepo for my personal homelab. It contains applications and kubernetes manifests for deployment.
This assumes you have the following tools:
To start working:

```shell
# Install required tooling, then build all binaries.
make install-tools
make
```

The repository is laid out as follows:

- `cmd` - Entry points to any bespoke applications.
- `hack` - Node host specific config files and tweaks.
- `internal` - Packages used throughout the application code.
- `manifests` - Kubernetes manifests to run all my homelab applications.
- `scripts` - Bash scripts for working within the repository.
- `terraform` - Terraform files for managing infrastructure.
- `vendor` - Vendored third-party code.

Here's a list of third-party applications I'm using alongside my custom applications:
I've implemented several custom prometheus exporters in this repo that power my dashboards. These are:

- `coronavirus` - Exports UK coronavirus stats as prometheus metrics.
- `homehub` - Exports statistics from my BT HomeHub as prometheus metrics.
- `pihole` - Exports statistics from my pihole as prometheus metrics.
- `speedtest` - Exports speedtest results as prometheus metrics.
- `weather` - Exports current weather data as prometheus metrics.
- `worldping` - Exports world ping times for the local host as prometheus metrics.
- `home-assistant` - Proxies prometheus metrics from a home-assistant server.
- `synology` - Exports statistics from my NAS drive.
- `minecraft` - Exports statistics for my Minecraft server.

Here are other tools I've implemented for use in the cluster:
- `bucket-object-cleaner` - Deletes objects in a blob bucket older than a configured age.
- `grafana-backup` - Copies all dashboards and data sources from grafana and writes them to a MinIO bucket.
- `ftp-backup` - Copies all files from a specified path of an FTP server and writes them to a MinIO bucket.

This repo contains a few homemade user interfaces for navigation/overview of the applications running in the cluster:
- `directory` - A simple YAML-configured link page to access different services in the homelab.
- `health-dashboard` - A simple UI that returns the health check status of custom services using the `pkg.dsb.dev` flavoured health checks.

These are devices/services that the cluster interacts with, without being directly installed in the cluster:
They are reachable under the `*.homelab.dsb.dev` domain.

Upgrading the k3s cluster itself is managed using Rancher's system-upgrade-controller. It automates upgrading the cluster through the use of a CRD. To perform a cluster upgrade, see the `plans` directory. Each upgrade is stored in its own directory named after the desired version. When the plan manifests are applied via kustomize, the controller starts jobs that upgrade the master node, followed by the worker nodes. The upgrade only takes a few minutes, but tools like `k9s` and `kubectl` will not be able to communicate with the cluster for a short time while the master node upgrades.
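As a sketch, a server-node plan for the controller might look like the following. The version, namespace, and node selector here are illustrative assumptions, not this repo's actual manifests:

```yaml
# Illustrative upgrade Plan for Rancher's system-upgrade-controller.
# Version and selector values are placeholders.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  version: v1.21.5+k3s2
  serviceAccountName: system-upgrade
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/master
        operator: In
        values: ["true"]
  upgrade:
    image: rancher/k3s-upgrade
```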
The `hack` directory at the root of the repository contains files used on all nodes in the cluster.

The `crontab` file describes scheduled tasks that clear out temporary and old files on the filesystem (`/tmp`, `/var/log`, etc.) and perform package upgrades on a weekly basis. It also prunes container images that are no longer in use.
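A sketch of what such a crontab could contain; the schedules and exact commands are illustrative, not the repo's actual file:

```
# Daily: remove week-old temporary files (illustrative schedule).
0 3 * * * find /tmp -type f -mtime +7 -delete
# Weekly: package upgrades and pruning of unused container images.
0 4 * * 0 apt-get update && apt-get -y upgrade
30 4 * * 0 k3s crictl rmi --prune
```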
The crontab file can be deployed to all nodes using the `make install-cron-jobs` recipe. This command copies the contents of the local crontab file to each node via SSH. You need to have used `ssh-copy-id` for each node, so you don't get any password prompts.
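The key distribution step can be sketched like this. The hostnames are placeholders, not the repo's real nodes, and the `echo` keeps it a dry run:

```shell
# Placeholder node hostnames; substitute your own.
NODES="node-01 node-02 node-03"
for node in $NODES; do
  # Remove the echo to actually copy your public key to each node.
  echo ssh-copy-id "pi@${node}"
done
```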
The `k3s.service` and `k3s-agent.service` files are the systemd service files that run the server and agent nodes. They set the cluster to communicate via the Tailscale network and stop k3s from installing traefik. This is because I run traefik 2, whereas k3s ships with 1.7 by default.
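A minimal sketch of what the server unit might look like, assuming the Tailscale interface is named `tailscale0`; the real units in `hack` may differ:

```ini
# Illustrative k3s server unit. Interface name and flags are assumptions.
[Unit]
Description=Lightweight Kubernetes (server)
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/k3s server \
    --disable traefik \
    --flannel-iface tailscale0
Restart=always

[Install]
WantedBy=multi-user.target
```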
The `usercfg.txt` file is stored at `/boot/firmware/usercfg.txt` and is used to set overclocking values for the Raspberry Pis. I'm pretty certain this voids my warranty, so if you're not me and are planning on using this repository you should keep that in mind.

See "Overclocking options in config.txt" in the Raspberry Pi documentation for more details on these values.
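For illustration only, an overclocking `usercfg.txt` for a Raspberry Pi 4 might contain values like these; the repo's actual numbers aren't shown here:

```
# Illustrative overclock for a Pi 4; values are assumptions.
over_voltage=6
arm_freq=2000
gpu_freq=600
```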
The `multipath.conf` file is the configuration file for the multipath daemon. It is used to override the built-in configuration table of `multipathd`. Any line whose first non-white-space character is a `#` is considered a comment line. Empty lines are ignored.

The sole reason this file exists is to work around an issue with longhorn that I was experiencing.
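Longhorn's documented workaround for its multipathd conflict is to blacklist the block devices Longhorn creates so multipathd leaves them alone. A sketch, using the pattern from Longhorn's knowledge base (which may be broader than strictly necessary for your disks):

```
# Stop multipathd from claiming Longhorn's virtual block devices.
blacklist {
    devnode "^sd[a-z0-9]+"
}
```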
Some aspects of the homelab are managed using Terraform, including DNS records via CloudFlare. To plan and apply changes, use the `Makefile` in the terraform directory. The `make plan` and `make apply` recipes will perform changes.
The terraform state is included in this repository. It is encrypted using strongbox, which is installed when using `make install-tools`.
This list contains all terraform providers used in the project:
New postgres databases can be provisioned using a kubernetes `Job` resource that runs the `createdb` binary included in standard `postgres` docker images. Below is an example:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-db-init
spec:
  template:
    spec:
      containers:
        - image: postgres:13.1-alpine
          name: createdb
          command:
            - createdb
          env:
            - name: PGHOST
              value: postgres.storage.svc.cluster.local
            - name: PGDATABASE
              value: example
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  key: postgres.user
                  name: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  key: postgres.password
                  name: postgres
      restartPolicy: Never
  backoffLimit: 0
```
You can view the documentation for the `createdb` command in the PostgreSQL documentation.
The cluster contains a deployment of the docker registry that is used as a pull-through proxy for any images hosted on hub.docker.com. When referencing images stored in the main library, like `postgres` or `busybox`, you prefix the image reference with `registry.homelab.dsb.dev/library`. Otherwise, you prefix the repository/tag combination with `registry.homelab.dsb.dev`. This increases the speed at which images are pulled and also helps with docker's recent change adding API request limits.
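To illustrate that naming scheme, here's a small helper; it's hypothetical, not part of the repo, and only rewrites references under my interpretation of the scheme above:

```shell
# Rewrite a Docker Hub image reference to pull through the homelab mirror.
# Official library images get the /library prefix; everything else keeps
# its repository/tag combination.
mirror_ref() {
  case "$1" in
    */*) echo "registry.homelab.dsb.dev/$1" ;;
    *)   echo "registry.homelab.dsb.dev/library/$1" ;;
  esac
}

mirror_ref postgres:13.1-alpine   # registry.homelab.dsb.dev/library/postgres:13.1-alpine
mirror_ref grafana/grafana:8.0.0  # registry.homelab.dsb.dev/grafana/grafana:8.0.0
```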