| Project | Stars | Last Commit | Latest Release | License | Language | Description |
|---|---|---|---|---|---|---|
| Teleport | 15,467 | a day ago | July 29, 2021 | agpl-3.0 | Go | Protect access to all of your infrastructure. |
| Cert Manager | 10,919 | 5 days ago | October 27, 2023 | apache-2.0 | Go | Automatically provision and manage TLS certificates in Kubernetes |
| Sealed Secrets | 6,718 | 5 days ago | November 15, 2023 | apache-2.0 | Go | A Kubernetes controller and tool for one-way encrypted Secrets |
| Kube Lego | 2,196 | 2 years ago | August 26, 2021 | apache-2.0 | Go | DEPRECATED: Automatically request certificates for Kubernetes Ingress resources from Let's Encrypt |
| Grpc Health Probe | 1,324 | 9 days ago | November 27, 2023 | apache-2.0 | Go | A command-line tool to perform health-checks for gRPC applications in Kubernetes and elsewhere |
| Dca | 1,192 | 10 months ago | | apache-2.0 | | Docker Certified Associate Exam Preparation Guide |
| Kube Cert Manager | 1,015 | 6 years ago | August 20, 2017 | apache-2.0 | Go | Manage Let's Encrypt certificates for a Kubernetes cluster. |
| Kubernetes Reflector | 721 | 7 days ago | | mit | C# | Custom Kubernetes controller that can be used to replicate secrets, configmaps and certificates. |
| Gke Letsencrypt | 632 | 4 years ago | | apache-2.0 | | Tutorial for installing cert-manager on GKE to get HTTPS certificates from Let's Encrypt (⚠️ NOW OBSOLETE ⚠️) |
| Autocert | 630 | a day ago | November 28, 2023 | apache-2.0 | Go | ⚓ A Kubernetes add-on that automatically injects TLS/HTTPS certificates into your containers |
Demonstration of how to use the k8s.io/apiserver library to build a functional API server.

Note: go-get or vendor this package as k8s.io/sample-apiserver.

You may use this code if you want to build an Extension API Server to use with API Aggregation, or to build a stand-alone Kubernetes-style API server.

However, consider two other options:

* **CRDs**: if you only need to add custom resources to your cluster, CustomResourceDefinitions require significantly less coding and rebasing.
* **apiserver-builder**: if you do want an Extension API server, apiserver-builder provides a complete framework that generates the apiserver, client libraries, and the installation program.
If you do decide to use this repository, then the recommended pattern is to fork this repository, modify it to add your types, and then periodically rebase your changes on top of this repo, to pick up improvements and bug fixes to the apiserver.
HEAD of this repo will match HEAD of k8s.io/apiserver, k8s.io/apimachinery, and k8s.io/client-go.
sample-apiserver is synced from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/sample-apiserver. Code changes are made in that location, merged into k8s.io/kubernetes, and later synced here.
Like the rest of Kubernetes, sample-apiserver has used $GOPATH for years and is now adopting go 1.11 modules. There are thus two alternative ways to fetch this demo and its dependencies.

When NOT using go 1.11 modules, you can use the following commands:

```sh
go get -d k8s.io/sample-apiserver
cd $GOPATH/src/k8s.io/sample-apiserver # assuming your GOPATH has just one entry
godep restore
```
When using go 1.11 modules (GO111MODULE=on), issue the following commands, starting in whatever working directory you like:

```sh
git clone https://github.com/kubernetes/sample-apiserver.git
cd sample-apiserver
```
Note, however, that if you intend to generate code then you will also need the code-generator repo to exist in an old-style location. One easy way to do this is to use the command `go mod vendor` to create and populate the `vendor` directory.
If you are developing Kubernetes according to the standard contributor workflow, then you already have a copy of this demo in kubernetes/staging/src/k8s.io/sample-apiserver and its dependencies --- including the code generator --- are in usable locations.
If you change the API object type definitions in any of the pkg/apis/.../types.go files then you will need to update the files generated from the type definitions. To do this, first create the vendor directory if necessary, and then invoke hack/update-codegen.sh with sample-apiserver as your current working directory; the script takes no arguments.
The normal build supports only a very spare selection of authentication methods. There is a much larger set available in k8s.io/client-go/plugin/pkg/client/auth. If you want your server to support one of those, such as OpenID Connect, then add an import of the appropriate package to sample-apiserver/main.go. Here is an example:
```go
import _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
```
Alternatively you could add support for all of them, with an import like this:
```go
import _ "k8s.io/client-go/plugin/pkg/client/auth"
```
To build the binary, with sample-apiserver as your current working directory, issue the following command:

```sh
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o artifacts/simple-image/kube-sample-apiserver
```
To build the container image, with sample-apiserver as your current working directory, issue the following commands with MYPREFIX and MYTAG replaced by something suitable:

```sh
docker build -t MYPREFIX/kube-sample-apiserver:MYTAG ./artifacts/simple-image
docker push MYPREFIX/kube-sample-apiserver:MYTAG
```
To deploy, edit artifacts/example/deployment.yaml, updating the pod template's image reference to match what you pushed and setting the imagePullPolicy to something suitable. Then call:

```sh
kubectl apply -f artifacts/example
```
During development it is helpful to run sample-apiserver stand-alone, i.e. without a Kubernetes API server for authn/authz and without aggregation. This is possible, but needs a couple of flags, keys and certs as described below. You will still need some kubeconfig, e.g. ~/.kube/config, but the Kubernetes cluster is not used for authn/z. A minikube or hack/local-up-cluster.sh cluster will work.
Instead of trusting the aggregator inside kube-apiserver, the described setup uses local client-certificate-based X.509 authentication and authorization. This means that the client certificate must be signed by a CA trusted by the server and must encode membership in the system:masters group. As delegated authorization is skipped for this superuser group, only it is authorized.
First we need a CA to later sign the client certificate:
```sh
openssl req -nodes -new -x509 -keyout ca.key -out ca.crt
```
Then we create a client cert signed by this CA for the user development in the superuser group system:masters:

```sh
openssl req -out client.csr -new -newkey rsa:4096 -nodes -keyout client.key \
  -subj "/CN=development/O=system:masters"
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key \
  -set_serial 01 -sha256 -out client.crt
```
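The identity baked into the certificate is what the server will trust, so it is worth checking it before use. The following self-contained sketch (throwaway file names in a scratch directory, so it does not disturb the files created above) repeats the issuance non-interactively and verifies the chain and the subject:

```shell
# Re-run the issuance in a scratch directory and verify the result.
tmp=$(mktemp -d)
cd "$tmp"
# Throwaway CA; -subj keeps openssl from prompting interactively.
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=sanity-ca" \
  -keyout ca.key -out ca.crt
# CSR for user "development" in group "system:masters" (CN -> user, O -> group).
openssl req -new -nodes -newkey rsa:2048 -subj "/CN=development/O=system:masters" \
  -keyout client.key -out client.csr
# Sign the CSR with the CA.
openssl x509 -req -days 1 -in client.csr -CA ca.crt -CAkey ca.key \
  -set_serial 01 -sha256 -out client.crt
# The certificate should chain to the CA...
openssl verify -CAfile ca.crt client.crt
# ...and carry the subject fields the server maps to a user and group.
openssl x509 -in client.crt -noout -subject
```

If `openssl verify` does not report OK, or the subject lacks CN=development and O=system:masters, the server will reject or misidentify the client.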
As curl requires client certificates in p12 format with password, do the conversion:
```sh
openssl pkcs12 -export -in ./client.crt -inkey ./client.key -out client.p12 -passout pass:password
```
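The PKCS#12 bundle can also be checked offline. A self-contained sketch (again with throwaway files in a scratch directory; the password matches the example above) round-trips a certificate through the conversion and reads its subject back:

```shell
tmp=$(mktemp -d)
cd "$tmp"
# Throwaway self-signed cert + key, just to exercise the conversion.
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=development" \
  -keyout client.key -out client.crt
# Bundle cert and key into PKCS#12, protected by a password.
openssl pkcs12 -export -in client.crt -inkey client.key \
  -out client.p12 -passout pass:password
# Read the certificate back out of the bundle and show its subject.
openssl pkcs12 -in client.p12 -passin pass:password -nokeys -clcerts \
  | openssl x509 -noout -subject
```

If the last command prints the expected CN, curl will be able to present the same identity from the .p12 file.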
With these keys and certs in place, we start the server:

```sh
etcd &
sample-apiserver --secure-port 8443 --etcd-servers http://127.0.0.1:2379 --v=7 \
  --client-ca-file ca.crt \
  --kubeconfig ~/.kube/config \
  --authentication-kubeconfig ~/.kube/config \
  --authorization-kubeconfig ~/.kube/config
```
The first kubeconfig is used for the shared informers to access Kubernetes resources. The second kubeconfig, passed to --authentication-kubeconfig, is used to satisfy the delegated authenticator. The third kubeconfig, passed to --authorization-kubeconfig, is used to satisfy the delegated authorizer. Neither the authenticator nor the authorizer will actually be used: due to --client-ca-file, our development X.509 certificate is accepted and authenticates us as a member of system:masters; system:masters is the superuser group, such that delegated authorization is skipped.
Use curl to access the server, using the client certificate in p12 format for authentication:

```sh
curl -fv -k --cert-type P12 --cert client.p12:password \
  https://localhost:8443/apis/wardle.example.com/v1alpha1/namespaces/default/flunders
```
Or use wget:
```sh
wget -O- --no-check-certificate \
  --certificate client.crt --private-key client.key \
  https://localhost:8443/apis/wardle.example.com/v1alpha1/namespaces/default/flunders
```
Note: Recent macOS versions broke client certs with curl. On a Mac, try brew install httpie and then:

```sh
http --verify=no --cert client.crt --cert-key client.key \
  https://localhost:8443/apis/wardle.example.com/v1alpha1/namespaces/default/flunders
```