Awesome Open Source
* Kubectl Kubernetes CheatSheet :Cloud:
:PROPERTIES:
:type: kubernetes
:export_file_name: cheatsheet-kubernetes-A4.pdf
:END:




  • PDF Link: [[][cheatsheet-kubernetes-A4.pdf]], Category: [[][Cloud]]
  • Blog URL:
  • Related posts: [[][Kubectl CheatSheet]], [[][Kubernetes Yaml]], [[][#denny-cheatsheets]]

File me [[][Issues]] or star [[][this repo]].

** Common Commands
| Name | Command |
|------+---------|
| Run curl test temporarily | =kubectl run --generator=run-pod/v1 --rm mytest --image=yauritux/busybox-curl -it= |
| Run wget test temporarily | =kubectl run --generator=run-pod/v1 --rm mytest --image=busybox -it wget= |
| Run nginx deployment with 2 replicas | =kubectl run my-nginx --image=nginx --replicas=2 --port=80= |
| Run nginx pod and expose it | =kubectl run my-nginx --restart=Never --image=nginx --port=80 --expose= |
| Run nginx deployment and expose it | =kubectl run my-nginx --image=nginx --port=80 --expose= |
| List authenticated contexts | =kubectl config get-contexts=, =~/.kube/config= |
| Set namespace preference | =kubectl config set-context <context_name> --namespace=<ns_name>= |
| List pods with nodes info | =kubectl get pod -o wide= |
| List everything | =kubectl get all --all-namespaces= |
| Get all services | =kubectl get service --all-namespaces= |
| Get all deployments | =kubectl get deployments --all-namespaces= |
| Show nodes with labels | =kubectl get nodes --show-labels= |
| Get resources with json output | =kubectl get pods --all-namespaces -o json= |
| Validate yaml file with dry run | =kubectl create --dry-run --validate -f pod-dummy.yaml= |
| Start a temporary pod for testing | =kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh= |
| kubectl run shell command | =kubectl exec -it mytest -- ls -l /etc/hosts= |
| Get system conf via configmap | =kubectl -n kube-system get cm kubeadm-config -o yaml= |
| Get deployment yaml | =kubectl -n denny-websites get deployment mysql -o yaml= |
| Explain resource | =kubectl explain pods=, =kubectl explain svc= |
| Watch pods | =kubectl get pods -n wordpress --watch= |
| Query healthcheck endpoint | =curl -L= |
| Open a bash terminal in a pod | =kubectl exec -it storage sh= |
| Check pod environment variables | =kubectl exec redis-master-ft9ex env= |
| Enable kubectl shell autocompletion | =echo "source <(kubectl completion bash)" >>~/.bashrc=, and reload |
| Use minikube dockerd in your laptop | =eval $(minikube docker-env)=, no need to push to Docker Hub any more |
| Kubectl apply a folder of yaml files | =kubectl apply -R -f .= |
| Get services sorted by name | =kubectl get services --sort-by=.metadata.name= |
| Get pods sorted by restart count | =kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'= |
| List pods and images | =kubectl get pods -o=custom-columns='NAME:.metadata.name,Images:.spec.containers[*].image'= |
| List all container images | [[][]] |
| kubeconfig skip tls verification | [[][]] |
| [[][Ubuntu install kubectl]] | ="deb kubernetes-xenial main"= |
| Reference | [[][GitHub: kubernetes releases]] |
| Reference | [[][minikube cheatsheet]], [[][docker cheatsheet]], [[][OpenShift CheatSheet]] |

** Check Performance
| Name | Command |
|------+---------|
| Get node resource usage | =kubectl top node= |
| Get pod resource usage | =kubectl top pod= |
| Get resource usage for a given pod | =kubectl top pod <pod_name> --containers= |
| List resource utilization for all containers | =kubectl top pod --all-namespaces --containers=true= |

** Resources Deletion
| Name | Command |
|------+---------|
| Delete pod | =kubectl delete pod/<pod_name> -n <my_namespace>= |
| Delete pod by force | =kubectl delete pod/<pod_name> --grace-period=0 --force= |
| Delete pods by labels | =kubectl delete pod -l env=test= |
| Delete deployments by labels | =kubectl delete deployment -l app=wordpress= |
| Delete all resources filtered by labels | =kubectl delete pods,services -l name=myLabel= |
| Delete resources under a namespace | =kubectl -n my-ns delete po,svc --all= |
| Delete persist volumes by labels | =kubectl delete pvc -l app=wordpress= |
| Delete statefulset only (not pods) | =kubectl delete sts/<stateful_set_name> --cascade=false= |

** Log & Conf Files
| Name | Comment |
|------+---------|
| Config folder | =/etc/kubernetes/= |
| Certificate files | =/etc/kubernetes/pki/= |
| Credentials to API server | =/etc/kubernetes/kubelet.conf= |
| Superuser credentials | =/etc/kubernetes/admin.conf= |
| kubectl config file | =~/.kube/config= |
| Kubernetes working dir | =/var/lib/kubelet/= |
| Docker working dir | =/var/lib/docker/=, =/var/log/containers/= |
| Etcd working dir | =/var/lib/etcd/= |
| Network cni | =/etc/cni/net.d/= |
| Log files | =/var/log/pods/= |
| Log in worker node | =/var/log/kubelet.log=, =/var/log/kube-proxy.log= |
| Log in master node | =kube-apiserver.log=, =kube-scheduler.log=, =kube-controller-manager.log= |
| Env | =/etc/systemd/system/kubelet.service.d/10-kubeadm.conf= |
| Env | =export KUBECONFIG=/etc/kubernetes/admin.conf= |

** Pod
| Name | Command |
|------+---------|
| List all pods | =kubectl get pods= |
| List pods for all namespaces | =kubectl get pods --all-namespaces= |
| List all critical pods | =kubectl get -n kube-system pods -a= |
| List pods with more info | =kubectl get pod -o wide=, =kubectl get pod/<pod_name> -o yaml= |
| Get pod info | =kubectl describe pod/srv-mysql-server= |
| List all pods with labels | =kubectl get pods --show-labels= |
| [[][List all unhealthy pods]] | =kubectl get pods --field-selector=status.phase!=Running --all-namespaces= |
| List running pods | =kubectl get pods --field-selector=status.phase=Running= |
| Get Pod initContainer status | =kubectl get pod --template '{{.status.initContainerStatuses}}' <pod_name>= |
| kubectl run command | =kubectl exec -it -n "$ns" "$podname" -- sh -c "echo $msg >>/dev/err.log"= |
| Watch pods | =kubectl get pods -n wordpress --watch= |
| Get pod by selector | =kubectl get pods --selector="app=syslog" -o jsonpath='{.items[*].metadata.name}'= |
| List pods and images | =kubectl get pods -o=custom-columns='NAME:.metadata.name,Images:.spec.containers[*].image'= |
| List pods and containers | =kubectl get pods -o=custom-columns='NAME:.metadata.name,CONTAINERS:.spec.containers[*].name'= |
| Reference | [[][Link: kubernetes yaml templates]] |

** Label & Annotation
| Name | Command |
|------+---------|
| Filter pods by label | =kubectl get pods -l owner=denny= |
| Manually add label to a pod | =kubectl label pods dummy-input owner=denny= |
| Remove label | =kubectl label pods dummy-input owner-= |
| Manually add annotation to a pod | =kubectl annotate pods dummy-input my-url= |

** Deployment & Scale
| Name | Command |
|------+---------|
| Scale out | =kubectl scale --replicas=3 deployment/nginx-app= |
| Online rolling upgrade | =kubectl rollout app-v1 app-v2 --image=img:v2= |
| Roll back | =kubectl rollout app-v1 app-v2 --rollback= |
| List rollout | =kubectl get rs= |
| Check update status | =kubectl rollout status deployment/nginx-app= |
| Check update history | =kubectl rollout history deployment/nginx-app= |
| Pause/Resume | =kubectl rollout pause deployment/nginx-deployment=, =resume= |
| Rollback to previous version | =kubectl rollout undo deployment/nginx-deployment= |
| Reference | [[][Link: kubernetes yaml templates]], [[][Link: Pausing and Resuming a Deployment]] |

** Quota & Limits & Resource
| Name | Command |
|------+---------|
| List Resource Quota | =kubectl get resourcequota= |
| List Limit Range | =kubectl get limitrange= |
| Customize resource definition | =kubectl set resources deployment nginx -c=nginx --limits=cpu=200m= |
| Customize resource definition | =kubectl set resources deployment nginx -c=nginx --limits=memory=512Mi= |
| Reference | [[][Link: kubernetes yaml templates]] |

** Service
| Name | Command |
|------+---------|
| List all services | =kubectl get services= |
| List service endpoints | =kubectl get endpoints= |
| Get service detail | =kubectl get service nginx-service -o yaml= |
| Get service cluster ip | =kubectl get service nginx-service -o go-template='{{.spec.clusterIP}}'= |
| Get service cluster port | =kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}'= |
| Expose deployment as lb service | =kubectl expose deployment/my-app --type=LoadBalancer --name=my-service= |
| Expose service as lb service | =kubectl expose service/wordpress-1-svc --type=LoadBalancer --name=ns1= |
| Reference | [[][Link: kubernetes yaml templates]] |

** Secrets
| Name | Command |
|------+---------|
| List secrets | =kubectl get secrets --all-namespaces= |
| Generate secret | =echo -n 'mypasswd'=, then pipe to =base64= |
| Get secret | =kubectl get secret denny-cluster-kubeconfig= |
| Get a specific field of a secret | =kubectl get secret denny-cluster-kubeconfig -o jsonpath="{.data.value}"= |
| Create secret from cfg file | =kubectl create secret generic db-user-pass --from-file=./username.txt= |
| Reference | [[][Link: kubernetes yaml templates]], [[][Link: Secrets]] |

** StatefulSet
| Name | Command |
|------+---------|
| List statefulset | =kubectl get sts= |
| Delete statefulset only (not pods) | =kubectl delete sts/<stateful_set_name> --cascade=false= |
| Scale statefulset | =kubectl scale sts/<stateful_set_name> --replicas=5= |
| Reference | [[][Link: kubernetes yaml templates]] |

** Volumes & Volume Claims
| Name | Command |
|------+---------|
| List storage class | =kubectl get storageclass= |
| Check the mounted volumes | =kubectl exec storage ls /data= |
| Check persist volume | =kubectl describe pv/pv0001= |
| Copy local file to pod | =kubectl cp /tmp/my <some_namespace>/<some_pod>:/tmp/server= |
| Copy pod file to local | =kubectl cp <some_namespace>/<some_pod>:/tmp/server /tmp/my= |
| Reference | [[][Link: kubernetes yaml templates]] |

** Events & Metrics
| Name | Command |
|------+---------|
| View all events | =kubectl get events --all-namespaces= |
| List Events sorted by timestamp | =kubectl get events --sort-by=.metadata.creationTimestamp= |

** Node Maintenance
| Name | Command |
|------+---------|
| Mark node as unschedulable | =kubectl cordon $NODE_NAME= |
| Mark node as schedulable | =kubectl uncordon $NODE_NAME= |
| Drain node in preparation for maintenance | =kubectl drain $NODE_NAME= |

** Namespace & Security
| Name | Command |
|------+---------|
| List authenticated contexts | =kubectl config get-contexts=, =~/.kube/config= |
| Set namespace preference | =kubectl config set-context <context_name> --namespace=<ns_name>= |
| Switch context | =kubectl config use-context <context_name>= |
| Load context from config file | =kubectl get cs --kubeconfig kube_config.yml= |
| Delete the specified context | =kubectl config delete-context <context_name>= |
| List all namespaces defined | =kubectl get namespaces= |
| List certificates | =kubectl get csr= |
| [[][Check user privilege]] | =kubectl --as=system:serviceaccount:ns-denny:test-privileged-sa -n ns-denny auth can-i use pods/list= |
| [[][Check user privilege]] | =kubectl auth can-i use pods/list= |
| Reference | [[][Link: kubernetes yaml templates]] |

** Network
| Name | Command |
|------+---------|
| Temporarily add a port-forwarding | =kubectl port-forward redis-134 6379:6379= |
| Add port-forwarding for deployment | =kubectl port-forward deployment/redis-master 6379:6379= |
| Add port-forwarding for replicaset | =kubectl port-forward rs/redis-master 6379:6379= |
| Add port-forwarding for service | =kubectl port-forward svc/redis-master 6379:6379= |
| Get network policy | =kubectl get NetworkPolicy= |

** Patch
| Name | Summary |
|------+---------|
| Patch service to loadbalancer | =kubectl patch svc $svc_name -p '{"spec": {"type": "LoadBalancer"}}'= |

** Extensions
| Name | Summary |
|------+---------|
| Enumerate the resource types available | =kubectl api-resources= |
| List api group | =kubectl api-versions= |
| List all CRD | =kubectl get crd= |
| List storageclass | =kubectl get storageclass= |

** Components & Services

*** Services on Master Nodes
| Name | Summary |
|------+---------|
| [[][kube-apiserver]] | API gateway. Exposes the Kubernetes API from master nodes |
| [[][etcd]] | Reliable data store for all k8s cluster data |
| [[][kube-scheduler]] | Schedules pods to run on selected nodes |
| [[][kube-controller-manager]] | Reconciles the states: node/replication/endpoints/token controller and service account, etc. |
| cloud-controller-manager | |

*** Services on Worker Nodes
| Name | Summary |
|------+---------|
| [[][kubelet]] | A node agent that makes sure containers are running in a pod |
| [[][kube-proxy]] | Manages network connectivity to the containers, e.g. iptables, ipvs |
| [[][Container Runtime]] | Kubernetes supported runtimes: dockerd, cri-o, runc and any [[][OCI runtime-spec]] implementation |
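The "Generate secret" recipe in the Secrets table earlier can be spelled out end to end; a minimal sketch that runs without a cluster (the password value is just an example):

#+BEGIN_SRC sh
# Encode a value for a Secret manifest's .data field (note -n: no trailing newline)
encoded=$(echo -n 'mypasswd' | base64)
echo "$encoded"    # bXlwYXNzd2Q=

# Decode it back, as you would when reading a secret's .data field
echo -n "$encoded" | base64 --decode
#+END_SRC

Forgetting =-n= is a common pitfall: the embedded newline gets encoded too and becomes part of the secret.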

*** Addons: pods and services that implement cluster features
| Name | Summary |
|------+---------|
| DNS | Serves DNS records for Kubernetes services |
| Web UI | A general purpose, web-based UI for Kubernetes clusters |
| Container Resource Monitoring | Collect, store and serve container metrics |
| Cluster-level Logging | Save container logs to a central log store with search/browsing interface |

*** Tools
| Name | Summary |
|------+---------|
| [[][kubectl]] | The command line util to talk to k8s cluster |
| [[][kubeadm]] | The command to bootstrap the cluster |
| [[][kubefed]] | The command line to control a Kubernetes Cluster Federation |
| Kubernetes Components | [[][Link: Kubernetes Components]] |

** More Resources
License: Code is licensed under [[][MIT License]].



  • Tail pod log by label
    #+BEGIN_SRC sh
    namespace="mynamespace"
    mylabel="app=mylabel"
    kubectl get pod -l "$mylabel" -n "$namespace" | tail -n1 \
      | awk '{print $1}' | xargs -I{} kubectl logs -n "$namespace" -f {}
    #+END_SRC
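The text-processing half of the pipeline above can be checked without a cluster by feeding it canned =kubectl get pod= output (the pod names below are made up):

#+BEGIN_SRC sh
# Simulated `kubectl get pod -l app=mylabel -n mynamespace` output
sample='NAME                 READY   STATUS    RESTARTS   AGE
mypod-7d9f8b-abcde   1/1     Running   0          5m
mypod-7d9f8b-fghij   1/1     Running   0          5m'

# Same extraction as the snippet above: take the last line, print the first column
printf '%s\n' "$sample" | tail -n1 | awk '{print $1}'
# -> mypod-7d9f8b-fghij
#+END_SRC

Note that =tail -n1= picks only the last matching pod; to tail several pods at once, drop the =tail= and let =xargs= fan out over every name.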

  • Get node hardware resource utilization
    #+BEGIN_SRC sh
    kubectl get nodes --no-headers | awk '{print $1}' \
      | xargs -I {} sh -c 'echo {}; kubectl describe node {} | grep Allocated -A 5'

    kubectl get nodes --no-headers | awk '{print $1}' \
      | xargs -I {} sh -c 'echo {}; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo'
    #+END_SRC

  • Apply the configuration in manifest.yaml and delete all the other configmaps that are not in the file.

#+BEGIN_EXAMPLE
kubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap
#+END_EXAMPLE
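For illustration, suppose =manifest.yaml= declares exactly one ConfigMap (the names below are hypothetical): the command above creates or updates =app-config=, and deletes any other ConfigMap previously created by =kubectl apply= in that scope but absent from the file.

#+BEGIN_EXAMPLE
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config    # kept: it is declared in manifest.yaml
data:
  LOG_LEVEL: info
#+END_EXAMPLE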

k8s provides declarative primitives for the "desired state".

Potentially the following scenarios:
  • Setting up ingresses and TLS
    • Fully configure something like Nginx Ingress Controller or Traefik.
    • Create TLS Secrets within Kubernetes, and use them in your ingress controller.
  • Managing RBAC (don't know enough about this one, but it sounds like a good concept to include)
    • Creating new roles, etc.

I'll have a think and if anymore come to me, I'll let you know.

Denny Zhang (Github . Blogger) [1:19 AM] 👍

Will update per your suggestions tomorrow, Aaron

** TODO k8s add DNS challenges

Gui [4:01 PM] Getting familiar with the concepts like pod, service, RC, deployment, etc.

[4:02] Try volume

[4:02] DNS.

Denny Zhang (Github . Blogger) [4:02 PM] I'm trying to cover the volume via mysql scenarios

Gui [4:02 PM] And other addons

Denny Zhang (Github . Blogger) [4:02 PM] For DNS, not sure whether I get your point

Gui [4:03 PM] I haven't tried a lot myself.

[4:03] Like every pod and service has a DNS name to talk to each other.

Denny Zhang (Github . Blogger) [4:04 PM] Yes, that makes sense

[4:04] For addons, do you have any recommended scenario?

** TODO k8s add challenge of addon
** TODO k8s networking models
** TODO k8s example:
** TODO Blog: Wordpress powered by k8s, docker swarm
** # --8<-------------------------- separator ------------------------>8-- :noexport:
** TODO [#A] absorb:
** TODO [#A] absorb:
** DONE kubectl config view
CLOSED: [2017-12-31 Sun 10:40]
** DONE [#A] kubernetes persistent volume claim pending
CLOSED: [2017-12-31 Sun 11:32]

kubectl get pvc
kubectl get pv

#+BEGIN_EXAMPLE
[email protected]:~$ kubectl describe pvc
Name:          ironic-gerbil-jenkins
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app=ironic-gerbil-jenkins
               chart=jenkins-0.10.2
               heritage=Tiller
               release=ironic-gerbil
Annotations:
Capacity:
Access Modes:
Events:
  Type    Reason         Age                 From                         Message
  Normal  FailedBinding  37s (x261 over 2h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

Name:          my-mysql-mysql
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app=my-mysql-mysql
               chart=mysql-0.3.2
               heritage=Tiller
               release=my-mysql
Annotations:
Capacity:
Access Modes:
Events:
  Type    Reason         Age              From                         Message
  Normal  FailedBinding  7s (x5 over 1m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
#+END_EXAMPLE

** DONE kubernetes start a container for testing: kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
CLOSED: [2017-12-31 Sun 11:26]
** DONE [#A] ReplicaSet is the next-generation Replication Controller.
CLOSED: [2017-12-04 Mon 11:26]

The only difference between a ReplicaSet and a Replication Controller right now is the selector support.

Next generation Replication Controller

Set-based selector requirement

  • Expression: key, operator, value
  • Operators: In, NotIn, Exists, DoesNotExist

  • Generally created with Deployment
  • Enables Horizontal Pod Autoscaling

** DONE k8s yaml API version:
CLOSED: [2017-12-03 Sun 12:50]
** DONE k8s cronjob
CLOSED: [2018-01-03 Wed 12:26]

kubectl create -f ./cronjob.yaml
kubectl get cronjob hello
kubectl get jobs --watch
kubectl delete cronjob hello

#+BEGIN_EXAMPLE
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
#+END_EXAMPLE

** DONE [#B] check k8s status: kubectl get cs
CLOSED: [2018-01-03 Wed 11:57]
** BYPASS crictl not found in system path: warning
CLOSED: [2018-01-03 Wed 12:36]
** DONE kubernetes default service type: ClusterIP
CLOSED: [2018-01-02 Tue 11:07]
** DONE kubectl get nodes: Unable to connect to the server: x509: certificate signed by unknown authority: incorrect /etc/kubernetes/admin.conf
CLOSED: [2018-01-04 Thu 00:09]

#+BEGIN_EXAMPLE
[email protected]:# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority
(possibly because of "crypto/rsa: verification error" while trying to verify
candidate authority certificate "kubernetes")
[email protected]:# echo $KUBECONFIG

[email protected]:# export KUBECONFIG=/etc/kubernetes/admin.conf
[email protected]:# kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
k8s1   Ready      master   29m   v1.9.0
k8s2   NotReady   <none>   17m   v1.9.0
#+END_EXAMPLE

** DONE [#A] kubernetes-the-hard-way:
CLOSED: [2017-12-04 Mon 15:49]
*** CANCELED k8s hardway: etcdctl: Error: context deadline exceeded
CLOSED: [2017-12-04 Mon 17:54]

#+BEGIN_EXAMPLE
[email protected]:~$ ETCDCTL_API=3 etcdctl member list
Error: context deadline exceeded
#+END_EXAMPLE

#+BEGIN_EXAMPLE
[email protected]:~$ kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                       ERROR
etcd-2               Unhealthy   Get dial tcp getsockopt: connection refused
controller-manager   Healthy     ok
etcd-1               Unhealthy   Get dial tcp getsockopt: connection refused
scheduler            Healthy     ok
etcd-0               Unhealthy   Get net/http: TLS handshake timeout
#+END_EXAMPLE

** DONE k8s livenessProbe (when to restart a Container), readinessProbe (when ready to accept requests)
CLOSED: [2018-01-08 Mon 07:41]

Probes have a number of fields that you can use to more precisely control the behavior of liveness and readiness checks:

  • initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated.
  • periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
  • timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
  • successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.
  • failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of a liveness probe means restarting the Pod. In case of a readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.
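The fields above map onto a probe spec roughly as follows; a sketch assuming an HTTP health endpoint on port 8080 (the endpoint and numbers are illustrative, not from the original):

#+BEGIN_EXAMPLE
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15   # wait 15s after container start before probing
  periodSeconds: 10         # probe every 10s (the default)
  timeoutSeconds: 1         # fail one probe attempt after 1s (the default)
  successThreshold: 1       # must be 1 for liveness
  failureThreshold: 3       # restart the container after 3 consecutive failures
#+END_EXAMPLE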

#+BEGIN_EXAMPLE
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
    image:
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: liveness
#+END_EXAMPLE

** DONE list all critical pods
CLOSED: [2018-01-04 Thu 10:10]

kubectl --namespace kube-system get pods

#+BEGIN_SRC sh
for pod in $(kubectl --namespace kube-system get pods -o jsonpath="{.items[*].metadata.name}"); do
  node_info=$(kubectl --namespace kube-system describe pod $pod | grep "Node:")
  echo "Pod: $pod, $node_info"
done
#+END_SRC

** DONE k8s cheatsheet: kube-shell
CLOSED: [2017-12-31 Sun 10:47]
** DONE k8s configmap
CLOSED: [2018-01-08 Mon 10:32]

| Name | Summary |
|------+---------|
| kubectl get configmaps my-wordpress-mariadb -o yaml | |

** DONE [#A] k8s initContainers debug: kubectl logs -c
CLOSED: [2018-01-05 Fri 16:29]
** DONE Use GCE to setup k8s cluster deployment
CLOSED: [2018-01-07 Sun 07:26]

source /Users/mac/Downloads/google-cloud-sdk/
source /Users/mac/Downloads/google-cloud-sdk/

*** doc: gcloud setup
#+BEGIN_EXAMPLE
[28] us-central1-f
[29] us-central1-c
[30] us-central1-b
[31] us-east1-d
[32] us-east1-c
[33] us-east1-b
[34] us-east4-c
[35] us-east4-a
[36] us-east4-b
[37] us-west1-a
[38] us-west1-c
[39] us-west1-b
[40] Do not set default zone
Please enter numeric choice or text value (must exactly match list item): 36

Your project default Compute Engine zone has been set to [us-east4-b]. You can change it by running [gcloud config set compute/zone NAME].

Your project default Compute Engine region has been set to [us-east4]. You can change it by running [gcloud config set compute/region NAME].

Created a default .boto configuration file at [/Users/mac/.boto]. See this file and [] for more information about configuring Google Cloud Storage. Your Google Cloud SDK is configured and ready to use!

  • Commands that require authentication will use [email protected] by default
  • Commands will reference project denny-k8s-test1 by default
  • Compute Engine commands will use region us-east4 by default
  • Compute Engine commands will use zone us-east4-b by default

Run gcloud help config to learn how to change individual settings

This gcloud configuration is called [default]. You can create additional configurations if you work with multiple accounts and/or projects. Run gcloud topic configurations to learn more.

Some things to try next:

  • Run gcloud --help to see the Cloud Platform services you can interact with. And run gcloud help COMMAND to get help on any gcloud command.
  • Run gcloud topic -h to learn about advanced features of the SDK like arg files and output formatting
#+END_EXAMPLE

*** TODO [#A] can't find gcloud :IMPORTANT:
source /Users/mac/Downloads/google-cloud-sdk/
source /Users/mac/Downloads/google-cloud-sdk/

** DONE kubectl get pod
CLOSED: [2018-04-28 Sat 09:28]

/etc/kubernetes/admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/bootstrap-kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf

#+BEGIN_EXAMPLE Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
#+END_EXAMPLE

** DONE pod CrashLoopBackOff: starting, then crashing, then starting again and crashing again.

CLOSED: [2018-01-05 Fri 15:47]

| Status | Meaning |
|--------+---------|
| Init:N/M | The Pod has M Init Containers, and N have completed so far. |
| Init:Error | An Init Container has failed to execute. |
| Init:CrashLoopBackOff | An Init Container has failed repeatedly. |
| Pending | The Pod has not yet begun executing Init Containers. |
| PodInitializing or Running | The Pod has already finished executing Init Containers. |

** DONE k8s ImagePullBackOff: describe pod $pod_name; No space
CLOSED: [2018-06-25 Mon 14:28]
** DONE default pods for single node installation
CLOSED: [2018-04-28 Sat 08:49]

#+BEGIN_EXAMPLE
[email protected]:~# docker ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  NAMES
75d08dd2b171  [email protected]:c7036a8796fd20c16cb3b1cef803a8e980598bff499084c29f3c759bdb429cd2  "/usr/local/bin/ku..."  16 hours ago  Up 16 hours  k8s_kube-proxy_kube-proxy-jmcs9_kube-system_02a0eac8-4a75-11e8-afce-7aa5a78d07bd_0
0a769558ec4f  "/pause"  16 hours ago  Up 16 hours  k8s_POD_kube-proxy-jmcs9_kube-system_02a0eac8-4a75-11e8-afce-7aa5a78d07bd_0
2af1fbfd581a  [email protected]:1ba863c8e9b9edc6d1329ebf966e4aa308ca31b42a937b4430caf65aa11bdd12  "kube-apiserver --..."  16 hours ago  Up 16 hours  k8s_kube-apiserver_kube-apiserver-mdm-k8s-node2_kube-system_fee65b809c1e455cf1672ebe7efc4bc7_0
63c214ac8d1b  [email protected]:922ac89166ea228cdeff43e4c445a5dc4204972cc0e265a8762beec07b6238bf  "kube-controller-m..."  16 hours ago  Up 16 hours  k8s_kube-controller-manager_kube-controller-manager-mdm-k8s-node2_kube-system_5ad7a10c5a8589117db7258c7d499a33_0
324ff1a8d357  [email protected]:5f50a339f66037f44223e2b4607a24888177da6203a7bc6c8554e0f09bd2b644  "kube-scheduler --..."  16 hours ago  Up 16 hours  k8s_kube-scheduler_kube-scheduler-mdm-k8s-node2_kube-system_aa8d5cab3ea096315de0c2003230d4f9_0
dce77d944669  [email protected]:68235934469f3bc58917bcf7018bf0d3b72129e6303b0bef28186d96b2259317  "etcd --listen-cli..."  16 hours ago  Up 16 hours  k8s_etcd_etcd-mdm-k8s-node2_kube-system_59f847fe34319ab1263f0b3ee03df8a3_0
2af621e52e11  "/pause"  16 hours ago  Up 16 hours  k8s_POD_kube-apiserver-mdm-k8s-node2_kube-system_fee65b809c1e455cf1672ebe7efc4bc7_0
bdc64588b27d  "/pause"  16 hours ago  Up 16 hours  k8s_POD_kube-controller-manager-mdm-k8s-node2_kube-system_5ad7a10c5a8589117db7258c7d499a33_0
14dd26427abf  "/pause"  16 hours ago  Up 16 hours  k8s_POD_kube-scheduler-mdm-k8s-node2_kube-system_aa8d5cab3ea096315de0c2003230d4f9_0
17bfbb8af205  "/pause"  16 hours ago  Up 16 hours  k8s_POD_etcd-mdm-k8s-node2_kube-system_59f847fe34319ab1263f0b3ee03df8a3_0
#+END_EXAMPLE

** DONE One pod may have multiple containers
CLOSED: [2018-06-19 Tue 14:31]

If a pod has more than 1 container, then you need to provide the name of the specific container.

** DONE kubectl edit deployment parameters
CLOSED: [2018-04-15 Sun 21:49]

kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'

kubectl --namespace=kube-system edit deployment/tiller-deploy, and change automountServiceAccountToken to true.

** DONE [#A] k8s sidecar
CLOSED: [2018-07-15 Sun 22:50]

#+BEGIN_EXAMPLE
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - i=0; while true; do echo "$i: $(date)" >> /var/log/1.log; echo "$(date) INFO $i" >> /var/log/2.log; i=$((i+1)); sleep 1; done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
#+END_EXAMPLE

** TODO [#A] k8s debug why termination takes time
** TODO Kubernetes availability
*** TODO Building High-Availability Clusters:
** TODO [#A] Blog: Kubernetes Service Type: NodePort, ClusterIP and Loadbalancer?

#+BEGIN_EXAMPLE

Publishing services - service types For some parts of your application (e.g. frontends) you may want to expose a Service onto an external (outside of your cluster) IP address.

Kubernetes ServiceTypes allow you to specify what kind of service you want. The default is ClusterIP.

Type values and their behaviors are:

ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.

NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

LoadBalancer: Exposes the service externally using a cloud provider's load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.

ExternalName: Maps the service to the contents of the externalName field, by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns.
#+END_EXAMPLE

*** Type: Loadbalancer
*** Type: ClusterIP
*** Type: NodePort

If you set the type field to "NodePort", the Kubernetes master will allocate a port from a flag-configured range (default: 30000-32767)

*** # --8<-------------------------- separator ------------------------>8-- :noexport:
*** TODO Now if i access IP:NodePort, will it balance the load across multiple pods ?
#+BEGIN_EXAMPLE
Vivek Yadav [8:34 AM] Hey Denny, quick question -

apiVersion: v1
kind: Service
metadata:
  name: span
  labels:
    app: span
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: spa
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: spa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: spa
  template:
    metadata:
      labels:
        app: spa
    spec:
      containers:
      - name: py
        image: viveky4d4v/local-simple-python:latest
        ports:
        - containerPort: 8080
      - name: nginx
        image: viveky4d4v/local-nginx-lb:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regsecret

Now if i access IP:NodePort, will it balance the load across multiple pods ?

Denny Zhang (Github . Blogger) [8:35 AM] I don't think so #+END_EXAMPLE
*** TODO How Does NodePort work behind the scene?
*** # --8<-------------------------- separator ------------------------>8-- :noexport:
*** TODO How is Loadbalancer implemented in code?
*** # --8<-------------------------- separator ------------------------>8-- :noexport:
*** TODO Does Loadbalancer work only for public cloud?
*** TODO How do I configure Ingress?
** TODO [#A] NodePort VS clusterIP :IMPORTANT:

clusterIP: You can only access this service while inside the cluster.
** TODO [#A] k8s feature watch list
*** I want to check pod initContainer logs, but I don't want to specify initContainer by name #+BEGIN_EXAMPLE
macs-MacBook-Pro:Scenario-401 mac$ kubectl logs my-jenkins-jenkins-89889ddb7-ct7jw -c 1
Error from server (BadRequest): container 1 is not valid for pod my-jenkins-jenkins-89889ddb7-ct7jw
macs-MacBook-Pro:Scenario-401 mac$ kubectl logs my-jenkins-jenkins-89889ddb7-ct7jw -c copy-default-config
Error from server (BadRequest): container "copy-default-config" in pod "my-jenkins-jenkins-89889ddb7-ct7jw" is waiting to start: PodInitializing
macs-MacBook-Pro:Scenario-401 mac$ kubectl logs my-jenkins-jenkins-89889ddb7-ct7jw -c copy-default-config
Error from server (BadRequest): container "copy-default-config" in pod "my-jenkins-jenkins-89889ddb7-ct7jw" is waiting to start: PodInitializing
#+END_EXAMPLE
*** Support using environment variables inside deployment yaml file
** TODO pod error: CreateContainerConfigError #+BEGIN_EXAMPLE
bash-3.2$ kubectl get pod my-wordpress-wordpress-df987548d-btvf5
NAME                                     READY   STATUS                       RESTARTS   AGE
my-wordpress-wordpress-df987548d-btvf5   0/1     CreateContainerConfigError   0          2m
bash-3.2$ #+END_EXAMPLE

#+BEGIN_EXAMPLE bash-3.2$ kubectl describe pod/my-wordpress-wordpress-df987548d-btvf5 Name: my-wordpress-wordpress-df987548d-btvf5 Namespace: default Node: minikube/ Start Time: Fri, 05 Jan 2018 16:41:27 -0600 Labels: app=my-wordpress-wordpress pod-template-hash=895431048 Annotations:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"my-wordpress-wordpress-df987548d","uid":"910e01e0-f269-11e7-b6d8... Status: Pending IP: Created By: ReplicaSet/my-wordpress-wordpress-df987548d Controlled By: ReplicaSet/my-wordpress-wordpress-df987548d Containers: my-wordpress-wordpress: Container ID: Image: bitnami/wordpress:4.9.1-r1 Image ID: Ports: 80/TCP, 443/TCP State: Waiting Reason: CreateContainerConfigError Ready: False Restart Count: 0 Requests: cpu: 300m memory: 512Mi Liveness: http-get http://:http/wp-login.php delay=120s timeout=5s period=10s #success=1 #failure=6 Readiness: http-get http://:http/wp-login.php delay=30s timeout=3s period=5s #success=1 #failure=3 Environment: ALLOW_EMPTY_PASSWORD: yes MARIADB_ROOT_PASSWORD: <set to the key 'mariadb-root-password' in secret 'my-wordpress-mariadb'> Optional: false MARIADB_HOST: my-wordpress-mariadb MARIADB_PORT_NUMBER: 3306 WORDPRESS_DATABASE_NAME: bitnami_wordpress WORDPRESS_DATABASE_USER: bn_wordpress WORDPRESS_DATABASE_PASSWORD: <set to the key 'mariadb-password' in secret 'my-wordpress-mariadb'> Optional: false WORDPRESS_USERNAME: admin WORDPRESS_PASSWORD: <set to the key 'wordpress-password' in secret 'my-wordpress-wordpress'> Optional: false WORDPRESS_EMAIL: [email protected] WORDPRESS_FIRST_NAME: FirstName WORDPRESS_LAST_NAME: LastName WORDPRESS_BLOG_NAME: My DevOps Blog! 
SMTP_HOST: SMTP_PORT: SMTP_USER: SMTP_PASSWORD: <set to the key 'smtp-password' in secret 'my-wordpress-wordpress'> Optional: false SMTP_USERNAME: SMTP_PROTOCOL: Mounts: /bitnami/apache from wordpress-data (rw) /bitnami/php from wordpress-data (rw) /bitnami/wordpress from wordpress-data (rw) /var/run/secrets/ from default-token-tc8kd (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: wordpress-data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: my-wordpress-wordpress ReadOnly: false default-token-tc8kd: Type: Secret (a volume populated by a Secret) SecretName: default-token-tc8kd Optional: false QoS Class: Burstable Node-Selectors: Tolerations: Events: Type Reason Age From Message

Normal   Scheduled              1m               default-scheduler  Successfully assigned my-wordpress-wordpress-df987548d-btvf5 to minikube
Normal   SuccessfulMountVolume  1m               kubelet, minikube  MountVolume.SetUp succeeded for volume "pvc-910644d3-f269-11e7-b6d8-08002782d6cd"
Normal   SuccessfulMountVolume  1m               kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tc8kd"
Normal   Pulled                 1s (x7 over 1m)  kubelet, minikube  Container image "bitnami/wordpress:4.9.1-r1" already present on machine
Warning  Failed                 1s (x7 over 1m)  kubelet, minikube  Error: lstat /tmp/hostpath-provisioner/pvc-910644d3-f269-11e7-b6d8-08002782d6cd: no such file or directory
Warning  FailedSync             1s (x7 over 1m)  kubelet, minikube  Error syncing pod
bash-3.2$ #+END_EXAMPLE
** TODO [#A] Certified Kubernetes Administrator (CKA) :IMPORTANT:

It is an online, proctored, performance-based test that requires solving multiple issues from a command line.

Candidates have 3 hours to complete the tasks.
** HALF Difference between selectors and labels
** TODO [#A] kubernetes mount a file to pod :IMPORTANT:
** TODO K8S label & Selector
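A minimal sketch of how labels drive selection. The pod name, label keys, and values below are invented for illustration; the kubectl queries shown in comments assume a running cluster, so only the local checks execute here.

```shell
# Write a pod manifest whose metadata carries two illustrative labels.
cat > /tmp/labeled-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: nginx
    tier: frontend
spec:
  containers:
    - name: nginx
      image: nginx
EOF

# Against a live cluster you would select on those labels, e.g.:
#   kubectl get pods -l app=nginx                 # equality-based selector
#   kubectl get pods -l 'tier in (frontend,api)'  # set-based selector

# Locally we can at least confirm the labels are present in the manifest:
grep -E 'app: nginx|tier: frontend' /tmp/labeled-pod.yaml
```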

  • [#A] k8s metric server :noexport:IMPORTANT:
Metrics Server is a cluster-wide aggregator of resource usage data.

Metrics Server is registered in the main API server through the Kubernetes aggregator.
| Name           | Summary                                                           |
|----------------+-------------------------------------------------------------------|
| Core metrics   | node/container level metrics; CPU, memory, disk and network, etc. |
| Custom metrics | refers to application metrics, e.g. HTTP request rate.            |

Today (Kubernetes 1.7), there are several sources of metrics within a Kubernetes cluster:
| Name           | Summary                                                             |
|----------------+---------------------------------------------------------------------|
| Heapster       | k8s add-on                                                          |
| Cadvisor       | a standalone container/node metrics collection and monitoring tool. |
| Kubernetes API | does not track metrics, but can get real-time metrics               |
** metric server
Resource Metrics API is an effort to provide a first-class Kubernetes API (stable, versioned, discoverable, available through apiserver and with client support) that serves resource usage metrics for pods and nodes.

  • metric server is sort of a stripped-down version of Heapster
  • The metrics-server will collect "Core" metrics from cAdvisor APIs (currently embedded in the kubelet) and store them in memory as opposed to in etcd.
  • The metrics-server will provide a supported API for feeding schedulers and horizontal pod auto-scalers
  • All other Kubernetes components will supply their own metrics in a Prometheus format
** Cadvisor
Cadvisor monitors node and container core metrics in addition to container events. It natively provides a Prometheus metrics endpoint. The Kubernetes kubelet has an embedded Cadvisor that only exposes the metrics, not the events.
** heapster
Heapster is an add-on to Kubernetes that collects and forwards node, namespace, pod and container level metrics to one or more "sinks" (e.g. InfluxDB).

It also provides REST endpoints to gather those metrics. The metrics are constrained to CPU, filesystem, memory, network and uptime.

Heapster queries the kubelet for its data.

Today, heapster is the source of the time-series data for the Kubernetes Dashboard.
** # --8<-------------------------- separator ------------------------>8-- :noexport:
** TODO How to query metric server
** TODO Key scenarios of metric server
The metrics-server will provide a much needed official API for the internal components of Kubernetes to make decisions about the utilization and performance of the cluster.

  • HPA (Horizontal Pod Autoscaler) needs input to do good auto-scaling
** TODO There are plans for an "Infrastore", a Kubernetes component that keeps historical data and events
** # --8<-------------------------- separator ------------------------>8-- :noexport:
** TODO why from heapster to k8s metric server?
** TODO kube-aggregator
** TODO what is prometheus format? #+BEGIN_EXAMPLE
Denny Zhang [12:34 AM] An easy introduction about k8s metric server. (It will replace heapster)

All other Kubernetes components will supply their own metrics in a Prometheus format

In the logging domain, we can say syslog is the standard format

In the metric domain, maybe we can choose prometheus as the standard format. #+END_EXAMPLE
** Metrics Use Cases

#+BEGIN_EXAMPLE
Horizontal Pod Autoscaler: It scales pods automatically based on CPU or custom metrics (not explained here). More information here.
Kubectl top: The top command of our beloved Kubernetes CLI displays metrics directly in the terminal.
Kubernetes dashboard: See Pod and Node metrics integrated into the main Kubernetes UI dashboard. More info here.
Scheduler: In the future, Core Metrics will be considered in order to schedule best-effort Pods.
#+END_EXAMPLE
** useful link

  • k8s loadbalancer :noexport:
** DONE k8s service: loadbalancer CLOSED: [2018-06-19 Tue 13:51] #+BEGIN_EXAMPLE
cat > service.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: lb
  namespace: logging
spec:
  selector:
    app: kibana
  ports:
    - protocol: TCP
      port: 5601
  type: LoadBalancer
EOF
#+END_EXAMPLE
  • k8s DaemonSet :noexport:
** DONE k8s daemonsets: ensures that all (or some) Nodes run a copy of a Pod. CLOSED: [2018-06-19 Tue 13:28]

As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:

  • running a cluster storage daemon, such as glusterd, ceph, on each node.
  • running a logs collection daemon on every node, such as fluentd or logstash.
  • running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
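As a sketch of the log-collection use case above, here is a minimal DaemonSet manifest; the fluentd image tag, names, and mount paths are illustrative assumptions, not from this document. Only the local grep runs without a cluster.

```shell
# Generate a minimal DaemonSet manifest for a per-node log collector.
cat > /tmp/log-daemonset.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
EOF

# With a cluster you would apply it and watch one pod appear per node:
#   kubectl apply -f /tmp/log-daemonset.yaml
#   kubectl -n kube-system get pods -o wide -l name=fluentd

grep 'kind: DaemonSet' /tmp/log-daemonset.yaml
```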
  • Install, then use kubectl-proxy to start
  • Create user and binding, then use token to login

#+BEGIN_EXAMPLE
kubectl apply -f
nohup kubectl proxy --port=8001 --address= &

curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/



cat > user.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion:
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup:
  kind: ClusterRole
  name: cluster-admin
subjects:

  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
EOF
#+END_EXAMPLE

kubectl apply -f user.yaml
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
*** DONE kubectl proxy listen on all network nics CLOSED: [2018-01-03 Wed 12:12]
kubectl proxy --port=8001 --address=

▪ Directory accessible to the containers in a pod
▪ Volume outlives any containers in a pod
▪ Common types: hostPath, nfs, awsElasticBlockStore, gcePersistentDisk

#+BEGIN_EXAMPLE
Creating and using a persistent volume is a three-step process:

  1. Provision: Administrators provision networked storage in the cluster, such as AWS ElasticBlockStore volumes. This is called a PersistentVolume.
  2. Request storage: Users request storage for pods by using claims. Claims can specify levels of resources (CPU and memory), specific sizes and access modes (e.g. can be mounted once read/write or many times write only). This is called a PersistentVolumeClaim.
  3. Use claim: Claims are mounted as volumes and used in pods for storage. #+END_EXAMPLE
** DONE persistence.accessMode ReadWriteOnce or ReadOnly CLOSED: [2018-01-02 Tue 16:52]
The access modes are:

ReadWriteOnce - the volume can be mounted as read-write by a single node
ReadOnlyMany - the volume can be mounted read-only by many nodes
ReadWriteMany - the volume can be mounted as read-write by many nodes
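A minimal sketch of requesting storage with one of these modes; the claim name and size below are invented for illustration, and the kubectl commands in comments assume a cluster.

```shell
# Generate a claim asking for 1Gi, mountable read-write by a single node.
cat > /tmp/demo-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce   # read-write, single node
  resources:
    requests:
      storage: 1Gi
EOF

# With a cluster: kubectl apply -f /tmp/demo-pvc.yaml && kubectl get pvc demo-claim
grep 'ReadWriteOnce' /tmp/demo-pvc.yaml
```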

apiVersion:
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup:
  kind: ClusterRole
  name: cluster-admin
subjects:

kubectl create secret generic mysecret --from-literal=mysql_root_password=my-secret-pw
kubectl get secret mysecret
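Secret values are stored base64-encoded in the object's data field; a quick local sketch of how the literal above round-trips (no cluster needed — the jsonpath command in the comment is how you would fetch the encoded value from a real cluster):

```shell
# Encode the value the way it appears under .data in the Secret:
encoded=$(printf '%s' 'my-secret-pw' | base64)
echo "$encoded"                            # prints bXktc2VjcmV0LXB3

# Decode it back, as you would after
#   kubectl get secret mysecret -o jsonpath='{.data.mysql_root_password}'
printf '%s' "$encoded" | base64 --decode   # prints my-secret-pw
```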

#+BEGIN_EXAMPLE
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:

    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
  restartPolicy: Never
#+END_EXAMPLE
  • HPA: Horizontal Pod Autoscaler :noexport:
  • Uncertainty & Uncomfortable things with K8S :noexport:
** Destroy namespace takes more than 15 minutes, with nowhere to check
Testing in minikube
** Pod stuck in ContainerCreating for a long time
  • HALF kubectl apply to a list of folder: kubectl apply -R -f namespace-drain-manifests/manifests :noexport:
  • GKE user access :noexport: #+BEGIN_EXAMPLE
If y'all run into the following error: "is forbidden: attempt to grant extra privileges:" when trying to run kubectl apply -R -f ~/workspace/namespace-drain/manifests/ against a GKE cluster, then run the following command.

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account) #+END_EXAMPLE

  • Blog: How Enterprise Do XXX in Container world? :noexport:
  • TODO [#A] Blog: interview candidates for k8s experience :noexport:
** Explain concepts
*** What's k8s context? Why do we need it?
*** What's initContainer? Why do we need it?
*** Network policy
** Comparison
*** configmap vs secrets
*** labels vs annotations
What are k8s Annotations? How do they differ from labels?
  • Like labels, annotations are key/value pairs. Where labels have length limits, annotations can be quite large.
  • You can't query or select objects based on annotations.
  • They are used for non-identifying information: stuff not used internally by k8s.
*** clusterip, service, loadbalancer
*** ClusterRole vs Role
*** serviceaccount vs useraccount
** Scenarios/Experience
*** tell me about k8s security model
*** tell me about k8s scheduling model
*** tell me about k8s HA model
*** tell me about k8s troubleshooting experience
** Your Wish List
*** layer of yaml
*** ABBA on volumes
*** apply one configmap to all namespaces

  • Starting with Kubernetes 1.6 we support 5000-node clusters with 30 pods per node. ([[][link]])

If yes, could you give me two use scenarios why I would use it.

Fan Zhang [3:00 PM] I've heard of it. It's basically a pod managed directly by the kubelet.

Denny Zhang [3:01 PM] Yes, that's what the documentation says.

Fan Zhang [3:01 PM] I think it complements DaemonSet.

Denny Zhang [3:01 PM] I'm trying to understand the use cases behind it.

Fan Zhang [3:02 PM] Because sometimes a node needs some particular services, but you don't want them managed by the Kubernetes scheduler.

Denny Zhang [3:02 PM] So it containerizes OS processes, but these are only OS-level processes, not k8s system or application-level processes. Is that the right way to understand it?

Fan Zhang [3:03 PM] Otherwise they would be gone after a drain. Yes, that's the right way to understand it.

Denny Zhang [3:04 PM] So draining a node won't delete static pods?

kubectl get pods --selector=name=nginx,type=frontend
** Containers inside a Pod can communicate with one another using localhost.

Networking: Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost. When containers in a Pod communicate with entities outside the Pod, they must coordinate how they use the shared network resources (such as ports).
** How to restart a container inside a Pod?

Restarting a container in a Pod should not be confused with restarting the Pod. The Pod itself does not run; it is an environment the containers run in, and it persists until it is deleted.
** explain k8s components: apiserver, scheduler, controller-manager, kube-proxy
** get logs of failed container #+BEGIN_EXAMPLE
If your container has previously crashed, you can access the previous container's crash log with:

$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} #+END_EXAMPLE
** Why k8s dashboard get deprecated?


time ls -1 /.yml | grep -v namespace | xargs -I{} kubectl apply -f {}


time ls -1r /.yml | grep -v namespace | xargs -I{} kubectl delete -f {}
#+END_SRC
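The two loops above apply manifests in name order and delete them in reverse. A self-contained sketch of the same pattern follows; the directory and file names are invented, and KUBECTL defaults to `echo kubectl` so nothing touches a real cluster.

```shell
# Create a throwaway manifest directory to demonstrate the ordering trick.
mkdir -p /tmp/demo-manifests
printf 'kind: Namespace\n'  > /tmp/demo-manifests/00-namespace.yml
printf 'kind: Deployment\n' > /tmp/demo-manifests/10-app.yml
printf 'kind: Service\n'    > /tmp/demo-manifests/20-svc.yml

# Default to `echo kubectl` so the sketch is safe to run anywhere;
# set KUBECTL=kubectl to actually apply against a cluster.
KUBECTL="${KUBECTL:-echo kubectl}"

# Apply in ascending name order, skipping the namespace manifest:
ls -1 /tmp/demo-manifests/*.yml | grep -v namespace | xargs -I{} $KUBECTL apply -f {}

# Delete in reverse name order:
ls -1r /tmp/demo-manifests/*.yml | grep -v namespace | xargs -I{} $KUBECTL delete -f {}
```

Prefixing file names with numbers (00-, 10-, 20-) is what makes plain `ls -1` produce a usable apply order.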


  • TODO autoscaling pod: try auto scaling :noexport:
  • TODO k8s volume: readwriteonce, readwritemany? :noexport:
  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • TODO grant more privileges to a given serviceaccount :noexport:
kubectl get serviceaccount --all-namespaces



One reason why Kubernetes may be unable to drain a node is if a PodDisruptionBudget object has been configured in a way that allows 0 disruptions and only a single instance of the pod has been scheduled.
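A sketch of that situation (names are invented): with minAvailable: 1 and a single replica, the budget permits zero voluntary disruptions, so a drain blocks on evicting that pod. Only the local grep runs without a cluster.

```shell
cat > /tmp/demo-pdb.yaml <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: 1        # with replicas=1, allowed disruptions = 0
  selector:
    matchLabels:
      app: myapp
EOF

# With a cluster, `kubectl get pdb app-pdb` would show ALLOWED DISRUPTIONS 0
# while only one matching pod exists, and `kubectl drain <node>` would hang.
grep 'minAvailable: 1' /tmp/demo-pdb.yaml
```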

Kubernetes is an orchestration tool, similar to Marathon running on Apache Mesos, but it was created specifically for Docker containers.

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure

Kubernetes comes from Google. Besides working on their own Google Container Engine, it also supports VMware vSphere, Mesos, and Mesosphere DCOS, as well as many public clouds, including Amazon Web Services.

Kubernetes provides comprehensive cluster management capabilities, including multi-layered security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in load balancer, failure detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and fine-grained resource quota management.

Kubernetes also provides a complete set of management tools covering development, deployment, testing, operations, and monitoring.


curl -LO$(curl -s
** kubectl --help
kubectl controls the Kubernetes cluster manager.

Find more information at

Basic Commands (Beginner):
  create         Create a resource by filename or stdin
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  get            Display one or many resources
  explain        Documentation of resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage a deployment rollout
  rolling-update Perform a rolling update of the given ReplicationController
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  help           Help about any command
  version        Print the client and server version information

Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). ** kubernetes: The connection to the server localhost:8080 was refused - did you specify the right host or port? ** Layers ** DONE Principle: API的操作复杂度不能超过O(N) CLOSED: [2017-06-10 Sat 15:24] API操作复杂度与对象数量成正比.这一条主要是从系统性能角度考虑,要保证整个系统随着系统规模的扩大,性能不会迅速变慢到无法使用,那么最低的限定就是API的操作复杂度不能超过O(N),N是对象的数量,否则系统就不具备水平伸缩性了. ** Principle: API对象状态不能依赖于网络连接状态 ** # --8<-------------------------- separator ------------------------>8-- ** TODO [#A] fail to start minikube: "VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path". [email protected]:/tmp# minikube start Starting local Kubernetes v1.6.4 cluster... Starting VM... E0610 20:14:57.518198 27907 start.go:127] Error starting host: Error creating host: Error with pre-create check: "VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path".

Retrying. E0610 20:14:57.519201 27907 start.go:133] Error starting host: Error creating host: Error with pre-create check: "VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path"
** TODO how kubernetes uses etcd
** TODO how healthcheck is implemented
** TODO What about alerting and reporting
** TODO what's fluentd
** # --8<-------------------------- separator ------------------------>8--
** TODO [#A] k8s support rolling deployment :IMPORTANT:
Kubernetes: zero downtime update at 1 million requests per second
Kubernetes: zero downtime update at 10 million QPS
** TODO [#A] How to scale Pods with volumes configured :IMPORTANT:
** What is Kubernetes

Deployment, Scaling, Monitoring
** DONE Kubernetes helloworld CLOSED: [2017-07-11 Tue 08:42]

Build the image:

docker build -t hello-node:v1 .

Create a deployment:

kubectl run hello-node --image=hello-node:v1 --port=8080

View the deployment:

kubectl get deployments

Create a service:

kubectl expose deployment hello-node --type=LoadBalancer
** TODO [#A] Install minikube in headless Ubuntu server :IMPORTANT:
| Name            | Summary |
|-----------------+---------|
| minikube status |         |
** DONE [#A] Ubuntu install kubernetes for all-in-one POC: minikube CLOSED: [2017-07-11 Tue 08:43]
*** TODO minikube fail to start #+BEGIN_EXAMPLE
[email protected]:/home/denny/minikube# ./minikube start --vm-driver=none --use-vendored-driver
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
Moving files into cluster...

Setting up certs... Starting cluster components... Connecting to cluster... Setting up kubeconfig... Kubectl is now configured to use the cluster.

WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks #+END_EXAMPLE
*** useful link
Kubernetes in 5 mins
Setting up and using a single node Kubernetes cluster.
Kubernetes - Local Testing
The Illustrated Children's Guide to Kubernetes

  • TODO [#A] Run a task on every node in a cluster :noexport:
  • TODO kubectl get all won't get psp :noexport: #+BEGIN_EXAMPLE [email protected]:/tmp/build/4ecf0f02# kubectl get all --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/heapster-6d5f964dbd-2xxcm 1/1 Running 0 1d kube-system pod/kube-dns-6b697fcdbd-c4rmm 3/3 Running 0 1d kube-system pod/kubernetes-dashboard-785584f46b-9wmqj 1/1 Running 0 1d kube-system pod/metrics-server-6bbb689cf9-swtxc 1/1 Running 0 1d kube-system pod/monitoring-influxdb-76fd8dcff6-qws9m 1/1 Running 0 1d kube-system pod/wavefront-proxy-8498d5bbf4-gl6sw 4/4 Running 0 4m test-afjogacpjsqfetejycxx pod/busybox-io-ftpz8 1/1 Running 0 1d

NAMESPACE NAME DESIRED CURRENT READY AGE test-afjogacpjsqfetejycxx replicationcontroller/busybox-io 1 1 1 1d

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 443/TCP 1d kube-system service/heapster ClusterIP 8443/TCP 1d kube-system service/kube-dns ClusterIP 53/UDP,53/TCP 1d kube-system service/kubernetes-dashboard NodePort 443:32433/TCP 1d kube-system service/metrics-server ClusterIP 443/TCP 1d kube-system service/monitoring-influxdb ClusterIP 8086/TCP 1d

NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE kube-system deployment.apps/heapster 1 1 1 1 1d kube-system deployment.apps/kube-dns 1 1 1 1 1d kube-system deployment.apps/kubernetes-dashboard 1 1 1 1 1d kube-system deployment.apps/metrics-server 1 1 1 1 1d kube-system deployment.apps/monitoring-influxdb 1 1 1 1 1d kube-system deployment.apps/wavefront-proxy 1 1 1 1 4m

NAMESPACE NAME DESIRED CURRENT READY AGE kube-system replicaset.apps/heapster-6d5f964dbd 1 1 1 1d kube-system replicaset.apps/kube-dns-6b697fcdbd 1 1 1 1d kube-system replicaset.apps/kubernetes-dashboard-785584f46b 1 1 1 1d kube-system replicaset.apps/metrics-server-6bbb689cf9 1 1 1 1d kube-system replicaset.apps/monitoring-influxdb-76fd8dcff6 1 1 1 1d kube-system replicaset.apps/wavefront-proxy-8498d5bbf4 1 1 1 4m [email protected]:/tmp/build/4ecf0f02# kubectl get psp NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES kube-system-psp false * RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,downwardAPI [email protected]:/tmp/build/4ecf0f02# kubectl get all --all-namespaces | grep kube-system-psp #+END_EXAMPLE

  • echo 'Update /etc/nginx/conf.d/default.conf'
  • sed -i s/http_port_here/80/g /etc/nginx/conf.d/default.conf
sed: cannot rename /etc/nginx/conf.d/sedz2uuPB: Device or resource busy #+END_EXAMPLE
  • TODO [#A] k8s mount configmap file, then edit it when process boostrap :noexport:
  • TODO gce disk: how and when the filesystem formating happens? :noexport:
  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • TODO k8s pod share volume within containers :noexport:
  • TODO gce use one disk in a small chunks :noexport:
  • TODO k8s mount jenkins home volume, then dockerfile copy/jenkins groovy. How to align? :noexport: COPY resources/jobs/ /usr/share/jenkins/ref/jobs/
  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • TODO k8s: when jenkins pod gets recreated, jenkins secret parameters need to be reconfigured :noexport:
  • TODO k8s: instruct application to run a clean shutdown or a safe restart :noexport:
  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • HALF doc: configmap cannot be mounted as a file :noexport:

ConfigMaps must be mounted as directories

curl -I

  • DONE why one pod has two docker images :noexport: CLOSED: [2019-08-01 Thu 14:31]
One pod with two containers #+BEGIN_EXAMPLE
[email protected] [ ~ ]# kubectl get pods -o=',Images:.spec.containers[*].image' --all-namespaces | grep sche
kube-scheduler-422e158feb46fff15217b24e4f8ad20b my/kube-scheduler:v1.13.1,my/wcp-schedext: #+END_EXAMPLE

  • DONE kubectl get port nodeport :noexport: CLOSED: [2020-04-16 Thu 10:57] kubectl get service/wordpress -n blog -o json | jq '.spec.ports[].nodePort'

  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • TODO [#B] Create PVC workflow :noexport:

  • TODO [#B] Create CRD workflow :noexport:

  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • TODO Why we need kube-controller-manager :noexport:

  • TODO Why we need cluster-controller-manager :noexport:

  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • TODO k8s volume: CSI, vmdk, NFS :noexport:

  • TODO k8s dynamic PV provision vs static PV provision :noexport:

  • TODO [#A] k8s delete namespace hang :noexport: Related resources need to be deleted first

  • TODO [#A] k8s debugging loadbalancer service: external ip in state :noexport: #+BEGIN_EXAMPLE $ kubectl get svc -n blog NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql ClusterIP 3306/TCP 12m wordpress LoadBalancer 80:30407/TCP 12m

$ kubectl describe service/wordpress -n blog Name: wordpress Namespace: blog Labels: app=wordpress Annotations: Selector: app=wordpress Type: LoadBalancer IP: Port: 80/TCP TargetPort: 80/TCP NodePort: 30407/TCP Endpoints: Session Affinity: None External Traffic Policy: Cluster Events: 10:34

$ cat 21-wordpress-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  namespace: blog
  name: wordpress
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: wordpress
#+END_EXAMPLE

  • TODO K8s networking :noexport:
  • container-to-container communication
  • pod-to-pod communication: K8s itself won't do it for you. CNI can be used to configure the network of a pod and provide a single IP per pod. CNI doesn't help you with pod-to-pod communication across nodes.
  • external-to-pod communication
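Container-to-container communication can be sketched with a two-container pod: both containers share one network namespace, so the sidecar reaches the app on localhost. All names and images below are illustrative; only the local grep runs without a cluster.

```shell
cat > /tmp/two-container-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
    - name: app
      image: nginx          # serves on port 80
    - name: sidecar
      image: busybox
      # shares the pod's network namespace, so localhost:80 hits nginx
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]
EOF

# With a cluster: kubectl apply -f /tmp/two-container-pod.yaml
#                 kubectl logs localhost-demo -c sidecar
grep -c 'name: ' /tmp/two-container-pod.yaml   # pod name + two container names
```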
  • Questions forked from CKA preparation :noexport:
** TODO how etcd is designed and implemented?
** TODO [#A] Only one IP address per Pod. How do multiple containers talk with each other inside one pod?
Two containers share the network namespace of a third container, known as the pause container.
  • The pause container is used to get an IP address; all containers in the pod then use its network namespace.
  • To communicate with each other, containers can use the loopback interface, write to files on a common filesystem, or use IPC
** TODO Why ipv6 doesn't gain popularity
ipv6 not backward compatible; NAT; ipv4 better management
** TODO How is K8s reconciliation done?
** TODO How is the feature of cluster ip implemented?
