Certified Kubernetes Administrator (CKA) Exam

The Certified Kubernetes Administrator (CKA) exam is a hands-on session in which you follow instructions to configure systems in a bash terminal running in the web browser. In my experience, some shortcut keys (such as Alt+F) do not work, which slowed me down a little. For each question, you need to switch the kubectl context as instructed; some questions share the same context, so it is easy to omit this step. You can verify your work with your own commands, but you will not be told whether you scored on each question. During my exam I tried to spin up a terminal session from within the Vim editor and the terminal ran out of buffer. I had to reboot the machine with the proctor's help, and my completed work was preserved.

In general, this is an exam I enjoyed preparing for and taking because it is very hands-on. The result came out a day later, and I passed with 96%. I had heard about tight timelines, but I managed to finish 15 minutes early, most likely owing to my familiarity with Linux commands. With that, I am happy to share my notes from preparing for the CKA exam.

[Diagram: Kubernetes cluster architecture. The Control Plane runs kube-api-server, etcd, kube-scheduler, kube-controller-manager and cloud-controller-manager. Each Worker Node runs kubelet, kube-proxy and a container runtime, which hosts the containers.]

To take the CKA exam, we should be familiar with the architecture shown in the diagram above.

Tips for Troubleshooting

The CKA exam is hands-on and therefore requires quite a bit of troubleshooting. Here are my notes.

  • Check Node status to start with
  • Check core services on each node:
    • sudo systemctl status kubelet
    • sudo systemctl status docker
    • sudo journalctl -u kubelet
    • sudo journalctl -u docker
  • Check component logs (on hosting VM)
    • /var/log/kube-apiserver.log
    • /var/log/kube-scheduler.log
    • /var/log/kube-controller-manager.log
  • If the cluster is built by kubeadm, then some of those services run as Pods in the kube-system namespace. Check those Pods:
    • run an interactive shell: > kubectl exec --stdin --tty podname -- /bin/sh
    • the nicolaka/netshoot image bundles many useful network tools
  • Store pod names in a variable, e.g. > POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
  • With kubectl, you may (examples follow this list):
    • alias it to k for faster typing
    • --dry-run (--dry-run=client on newer kubectl): run an imperative command without creating the object
    • --record: record the command that was used to make a change
    • -o: set the output format to wide, yaml, or jsonpath="expression". For example, to get a pod name: > kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}"
    • --sort-by: sort output using a JSONPath expression
    • --selector: filter results by label
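Below are a few illustrative commands showing these shortcuts in action; the deployment name and label (nginx, run=nginx) are assumptions for the sake of example:

> alias k=kubectl
> k create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deploy.yaml
> k get pods -o wide --selector run=nginx
> k get pods --sort-by="{.metadata.creationTimestamp}"
> k get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}"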

Build K8s cluster using kubeadm

The CKA exam requires you to know how to build a cluster with kubeadm. This involves installing four components (docker-ce, kubeadm, kubectl and kubelet), as outlined below:

Step 1. Install docker-ce:
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
> sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
> sudo apt-get update
> sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
> sudo apt-mark hold docker-ce
> sudo systemctl status docker

Step 2. Install kubeadm, kubelet and kubectl:
> curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
> cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
> sudo apt-get update
> sudo apt-get install -y kubelet kubeadm kubectl
> sudo apt-mark hold kubelet kubeadm kubectl

Step 3. Form a K8s cluster. On the master node:
> sudo kubeadm init --pod-network-cidr=10.244.0.0/16
This command prints out a join command for the worker nodes.
On each worker node, run the generated join command with sudo.

Step 4. Configure kubectl. On the master node:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
Optionally, on a worker node:
> mkdir -p $HOME/.kube
then scp $HOME/.kube/config from the control plane node.

Step 5. Set up cluster networking:
> echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
> sudo sysctl -p
Then, from any environment with kubectl, bring up the system pods for cluster networking:
> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Add new node to KubeAdm cluster

This is fairly simple with the help of kubeadm. The node joining the cluster must be able to communicate with the master node. Create a token and print the join command from the master node:

> kubeadm token create --print-join-command

Then, from the node to join, run this command with sudo. You will see that it performs the TLS bootstrap for you. Once completed, the standard output will say the node has joined the cluster. You can confirm with:

> kubectl get nodes

Sometimes you need to migrate Pods to the newly joined node. This can be done by draining the existing nodes.
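For example (the node name existing-node is an assumption), a drain followed by an uncordon looks like this; on older kubectl versions the last drain flag is --delete-local-data instead:

> kubectl drain existing-node --ignore-daemonsets --delete-emptydir-data
> kubectl get pods -o wide    # Pods reschedule onto the remaining nodes, including the new one
> kubectl uncordon existing-node    # allow scheduling on the drained node again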

Note that you can also use kubespray to build a K8s cluster, as previously discussed, and here is my IaC project to launch AWS instances and build a K8s cluster with kubespray on top of them. For my own learning, I often create a GKE (Google Kubernetes Engine) cluster from GCP's Cloud Shell. There is a guide on how to start a cluster, but it comes down to three commands:

$ gcloud config set compute/zone us-east1-b
$ gcloud container clusters create tcluster --num-nodes=3
$ gcloud container clusters get-credentials tcluster

The third command above configures kubectl in the Cloud Shell. Follow this guide if you need to SSH to a node.

Upgrade KubeAdm cluster

This involves upgrading three components (kubeadm, kubectl and kubelet) on two types of node: master nodes and worker nodes. The steps vary slightly between the two, but drain and uncordon are needed for both. Pick a node and follow the steps below:

Step 1. Drain the node from a kubectl client (e.g. the master node):
> kubectl drain nodename --ignore-daemonsets
Step 2. Determine the kubeadm target version:
> apt-mark showhold
> sudo apt-mark unhold kubeadm kubectl kubelet
> apt list --installed | grep kube
> apt-cache show kubeadm | less
> sudo apt-get install -y kubeadm=1.20.2-00
Step 3. Upgrade with kubeadm. On the master node:
> sudo kubeadm upgrade plan v1.20.2
> sudo kubeadm upgrade apply v1.20.2
On a worker node:
> sudo kubeadm upgrade node
Step 4. On the node being updated, determine the target versions for kubectl and kubelet, then install:
> apt-cache show kubectl | less
> apt-cache show kubelet | less
> sudo apt-get install -y kubectl=1.20.2-00 kubelet=1.20.2-00
Step 5. Restart kubelet:
> sudo systemctl daemon-reload
> sudo systemctl restart kubelet
Step 6. Uncordon:
> kubectl uncordon nodename

Backup and restore Etcd

Etcd is a distributed key-value store that uses the Raft protocol for distributed consensus. It is the third distributed system I have touched on; the previous two are Cassandra (which uses Paxos for distributed consensus) and ZooKeeper (which uses ZAB). Here is a good article that summarizes these protocols. For the exam, we only need to use etcd through its client tool.

Etcd itself can run on a cluster of servers, each running etcd as a systemd service under the etcd/etcd user and group. It can be deployed in two ways:

  • stacked etcd: an instance of etcd lives with kube-api-server on the same control plane node
  • external etcd: in a dedicated cluster of etcd

Alternatively, etcd can run as a pod, most likely in kube-system namespace.

The etcd service listens on port 2379 for client communication and on port 2380 for server (peer-to-peer) communication. When the systemd service was initialized, a few key environment variables (e.g. cert locations, ETCD_DATA_DIR) were provided as configuration. To see them, run:

> cat /etc/systemd/system/etcd.service | grep Env

These environment variables (prefixed with ETCD_) are for the service only. They provide the current configuration, which we can use later. When etcd runs as a pod, check the static pod directory for the YAML declaration (e.g. /etc/kubernetes/manifests/etcd.yaml), where these parameters are passed in as command-line arguments.

The etcdctl utility is a command-line client for etcd. The default API version is now 3, so there is no need to set ETCDCTL_API=3 before each command. The utility needs three arguments (--cacert, --cert, and --key), but we can pass the information via environment variables:

> export ETCDCTL_CACERT=/home/cloud_user/etcd-certs/etcd-ca.pem
> export ETCDCTL_CERT=/home/cloud_user/etcd-certs/etcd-server.crt
> export ETCDCTL_KEY=/home/cloud_user/etcd-certs/etcd-server.key
> export ETCDCTL_ENDPOINTS=https://etcd1:2379

The environment variable names are the uppercase of the argument names with the prefix ETCDCTL_. Only global options can be supplied via environment variables, and they remain effective for the rest of the session. Also note that the CA cert is needed only when client-cert-auth is enabled. Now, to back up, we can simply run:

> etcdctl snapshot save /home/cloud_user/etcd_backup.db
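To sanity-check the snapshot file afterwards (not required, just a habit I find useful), you can inspect it; on newer etcd releases the same check has moved to the etcdutl tool:

> etcdctl snapshot status /home/cloud_user/etcd_backup.db --write-out=table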

To restore from a file, you first want to remove the existing etcd data directory, whose path is in the ETCD_DATA_DIR variable. Supposing it is /var/lib/etcd, you need root permission to write to it, and you must correct its ownership before starting the service:

> sudo systemctl stop etcd && sudo mv /var/lib/etcd/ /tmp/
> sudo etcdctl snapshot restore /home/cloud_user/etcd_backup.db --data-dir /var/lib/etcd
> sudo chown -R etcd:etcd /var/lib/etcd && sudo systemctl start etcd

To verify the restore result, simply run:

> etcdctl get cluster.name

Object Management

In the CKA exam, we need to interact with many types of built-in Kubernetes objects.

  • RBAC objects:
    • A Role defines permissions within a namespace.
    • A ClusterRole defines cluster-wide permissions.
    • Both Roles and ClusterRoles are K8s objects that define a set of permissions.
    • RoleBinding and ClusterRoleBinding are objects that connect Roles and ClusterRoles to subjects (users, groups, or service accounts).
  • Service Account: an account used by container processes within Pods to authenticate to the K8s API. If your Pods need to communicate with the K8s API, you can use service accounts to control their access (see the example commands after the diagram below).

[Diagram: RBAC objects. A Role and a ClusterRole each contain rules (apiGroups, resources, resourceNames, verbs). A RoleBinding or ClusterRoleBinding (with roleRef and subjects) connects a Role or ClusterRole to a subject such as a ServiceAccount.]
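A minimal sketch of wiring these objects together with imperative kubectl commands; the namespace, role, and service account names (dev, pod-reader, my-sa) are hypothetical:

> kubectl create namespace dev
> kubectl create serviceaccount my-sa -n dev
> kubectl create role pod-reader --verb=get,list,watch --resource=pods -n dev
> kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=dev:my-sa -n dev
> kubectl auth can-i list pods -n dev --as=system:serviceaccount:dev:my-sa    # should print "yes"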

  • Inspect resource usage with the K8s Metrics Server installed, e.g. by command:
> kubectl top pod --sort-by <cpu|memory> --selector <selector>
  • Here is a good guide to install the metrics server and dashboard (e.g. on docker-desktop).

Pods and Containers

  • ConfigMaps: store data in a key-value map.
  • Secrets: same as ConfigMaps, but intended for sensitive data
  • Two ways to pass ConfigMap and Secret data to your containers (see the Pod sketch after this list):
    • As environment variables in the container operating system
    • As files presented on volumes mounted into the container file system.
  • Container Resource management:
    • Resource requests: the K8s scheduler uses resource requests to avoid scheduling Pods on nodes that do not have enough available resources. CPU is measured in units where 1 CPU = 1 core; fractional amounts use the m (milliCPU) suffix, where 1m = 1/1000 of a core.
    • Resource limits: allow you to limit the amount of resources your containers can use. The container runtime is responsible for enforcement, and the behaviour differs between runtimes; for example, some terminate a container process that attempts to use more resources than its limit.
  • Probes
    • Liveness Probes: automatically determine whether or not a container application is in a healthy state. By default, K8s considers a container down only when the container process stops; liveness probes let you customize this detection and make it more sophisticated.
    • Startup Probes: similar to liveness probes, but while liveness probes run constantly on a schedule, startup probes run at container startup and stop once they succeed. They determine when the application has successfully started up, which is especially useful for legacy applications with long startup times.
    • Readiness Probes: determine when a container is ready to accept requests. When a service is backed by multiple container endpoints, user traffic will not be sent to a particular Pod until its containers have all passed the readiness checks defined by their readiness probes. Use readiness probes to prevent user traffic from being sent to Pods that are still starting up.
  • Restart policy for self-healing pods
    • (default) Always: containers will always be restarted if they stop, even if they completed successfully (returned 0).
    • OnFailure: containers will be restarted if the container process exits with an error code or the container is determined to be unhealthy by a liveness probe.
    • Never: containers are never restarted.
  • Multi-container pods:
    • containers share the same networking namespace and can communicate with one another on any port, even if the port is not exposed to the cluster
    • Containers can use volumes to share data within a Pod. Example: a legacy application is hard-coded to write log output to a file on disk; a sidecar container reads the log file from the shared volume and prints it to the console, so the log output appears in the container log.
  • Init containers: containers that run once during the startup process of a Pod. A Pod can have any number of init containers, and each runs once to completion before the next init container starts. You may use init containers to perform a variety of startup tasks; they can contain and use software and setup scripts that are not needed by your main containers, which helps keep the main containers lighter and more secure by offloading startup work. Use cases include:
    • cause a pod to wait for another K8s resource to be created before finishing startup
    • perform sensitive startup steps securely outside of app containers
    • populate data into a shared volume at startup
    • communicate with another service at startup
  • Scheduling: the Scheduler (a control plane component) assigns Pods to a suitable Node so kubelets can run them. Factors taken into account include:
    • resource request vs available node resources
    • various configurations that affect scheduling using node labels
  • Pod allocation
    • nodeSelector is a Pod attribute that limits which Node(s) the Pod can be scheduled on. The selector is based on node labels.
    • nodeName is a Pod attribute that bypasses scheduling and assigns the Pod to a specific Node by name.
  • DaemonSet: automatically runs a copy of a Pod on each node. When a new node is added to the cluster, the DaemonSet runs a new copy of the Pod on it. DaemonSets also respect normal scheduling rules around node labels, taints, and tolerations: if a Pod would not normally be scheduled on a node, a DaemonSet will not create a copy of the Pod on that node.
  • Static Pod: a Pod managed directly by the kubelet on a node, not by the K8s API server. Static Pods can run even if no K8s API server is present. The kubelet automatically creates static Pods from YAML manifest files located in the manifest path on the node (e.g. /etc/kubernetes/manifests).
  • Mirror Pod: the kubelet creates a mirror Pod for each static Pod. Mirror Pods let you see the status of the static Pod via the K8s API, but you cannot change or manage them through the API.
  • Taints: the opposite of node affinity; they allow a node to repel a set of Pods. Taints are applied to nodes by specifying a key (with an optional value) and an effect, e.g. NoSchedule or NoExecute.
  • Tolerations: allow (but do not require) Pods to be scheduled onto nodes with matching taints. Once one or more taints are applied to a node, the node will not accept any Pods that do not tolerate those taints.
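Here is the Pod sketch referred to above, tying a few of these ideas together (a ConfigMap value injected as an environment variable, resource requests and limits, a liveness probe, and a nodeSelector); all names, labels, and numbers are assumptions:

> kubectl create configmap app-config --from-literal=APP_MODE=production
> cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  nodeSelector:
    disktype: ssd              # only schedule on nodes carrying this label
  containers:
  - name: web
    image: nginx:1.21
    env:
    - name: APP_MODE           # injected from the ConfigMap created above
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: APP_MODE
    resources:
      requests:
        cpu: 250m              # 1/4 of a core
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
    livenessProbe:             # restart the container if this check keeps failing
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF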

Note that we can use both node affinity and taints & tolerations to control Pod scheduling behaviour.

  • Use Node Affinity when your scheduling rule is a direct condition, i.e. schedule a Pod to this Node when XXX. In this case, you have well-known labels on nodes and specify nodeAffinity on Pods.
  • Use Taints and Tolerations when your scheduling rule is an inverse statement, i.e. do not schedule a Pod to this Node unless XXX. In this case, you put a taint "MyCondition:NoSchedule" on a Node so that no Pod will be scheduled to it. The only exception is a Pod that has the matching toleration, as sketched below.
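A sketch of that taint-and-toleration pattern; the node name node1 is an assumption:

> kubectl taint nodes node1 MyCondition=true:NoSchedule
> cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
  tolerations:                 # only Pods carrying this toleration can land on the tainted node
  - key: "MyCondition"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
EOF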

Deployments

  • A Deployment is an object that defines a desired state for a ReplicaSet (a set of replica Pods). The Deployment controller seeks to maintain the desired state by creating, deleting, and replacing Pods with new configurations.
  • With Deployments, you can horizontally scale an application up and down by changing the number of replicas, and you can perform rolling updates and rollbacks (see the commands below).
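A few illustrative commands for scaling, rolling updates, and rollback; the deployment name and image tags are assumptions:

> kubectl create deployment web --image=nginx:1.20 --replicas=3
> kubectl scale deployment web --replicas=5
> kubectl set image deployment/web nginx=nginx:1.21    # triggers a rolling update
> kubectl rollout status deployment/web
> kubectl rollout history deployment/web
> kubectl rollout undo deployment/web                  # roll back to the previous revision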

Networking

  • The K8s network model defines how Pods communicate with each other, regardless of which Node they are running on.
  • Each Pod has its own unique IP address within the cluster. Any Pod can reach any other Pod using that Pod’s IP address. This creates a virtual network that allows Pods to easily communicate with each other.
  • One type of K8s network plugin is the CNI plugin, which comes in many flavours such as Calico and Flannel. Each plugin has its own installation process. Kubernetes nodes remain NotReady until a network plugin is installed.
  • The K8s virtual network uses DNS (e.g. a kubeadm cluster runs CoreDNS Pods in the kube-system namespace) to allow Pods to locate other Pods and Services by domain name. A Pod DNS name follows the format pod-ip-address.namespace.pod.cluster.local (with the dots in the IP address replaced by dashes).
  • A K8s NetworkPolicy is an object that allows you to control the flow of network communication to and from Pods, so you can isolate traffic. A NetworkPolicy can apply to Ingress (using a from selector), Egress (using a to selector), or both.
  • A NetworkPolicy has a podSelector attribute that determines which Pods in the namespace the policy applies to, selecting Pods by label.
  • By default, Pods are considered non-isolated and completely open to all communication. If any NetworkPolicy selects a Pod, that Pod is considered isolated and will only be open to traffic allowed by NetworkPolicies.
  • A variety of selectors can be used inside rules: podSelector, namespaceSelector, ipBlock, and ports. A minimal example follows this list.
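Here is the minimal NetworkPolicy example referred to above; the labels (app=db, app=api) and port are assumptions:

> cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db                  # the policy applies to the database Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api             # only the API Pods may connect
    ports:
    - protocol: TCP
      port: 5432
EOF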

Services

  • Services provide a way to expose an application running as a set of Pods, so clients can access the application in an abstract way without needing to be aware of the individual Pods. In this model, clients make requests to a Service, which routes traffic to its Pods in a load-balanced fashion.
  • Endpoints are the backend entities to which Services route traffic. If there are multiple Pods behind a Service, each Pod will have an endpoint associated with the Service.
  • Each Service has a type that determines how and where the Service exposes your application.
    • ClusterIP: exposes the application inside the cluster network
    • NodePort: exposes the application outside the cluster network on a port of each node
    • LoadBalancer: exposes the application outside the cluster network using an external load balancer from the cloud platform.
  • Services are assigned DNS names. The FQDN follows the format service.namespace.svc.cluster-domain.example, which can be used by Pods in any namespace.
  • Pods within the same namespace can reference a Service simply by its service name.
  • To manage external access to Services, you can also use an Ingress object. An Ingress can provide more functionality than a simple NodePort Service, such as SSL termination, advanced load balancing, or name-based virtual hosting. You must install one or more Ingress controllers (there are many different implementations) to back the Ingress objects.
  • An Ingress defines a set of routing rules. Each rule has a set of paths, each with a backend. Requests matching a path are routed to its associated backend.
  • If a Service uses a named port, an Ingress can also use the port's name (instead of the port number) to choose which Service port it routes to. A sketch of a Service and Ingress follows.
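The sketch below exposes a Deployment via a ClusterIP Service and routes external traffic to it with an Ingress; the names (web, web-svc), the host, and an installed Ingress controller (e.g. ingress-nginx) are all assumptions:

> kubectl expose deployment web --name=web-svc --port=80 --target-port=80 --type=ClusterIP
> cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc      # the Service created above
            port:
              number: 80
EOF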

Storage

  • Volumes allow you to store data outside the container file system while allowing the container to access the data at runtime. When the Pod is gone, plain volumes do not persist.
  • Persistent Volumes (PVs) are a slightly more advanced form of volume. They allow you to treat storage as an abstract resource and consume it in Pods. PVs can be provisioned separately by a storage administrator and persist regardless of Pod lifecycle. A PV needs to be claimed before Pods can use it, and it uses a set of attributes to describe the underlying storage resource.
  • Both volumes and PVs have a volume type: NFS, cloud storage (AWS, Azure, GCP), ConfigMaps and Secrets, or a simple directory on the node.
  • Two volume types to distinguish:
    • hostPath: stores data in a specified directory on the K8s node
    • emptyDir: stores data in a dynamically created location on the node. The directory exists only as long as the Pod exists on the node; the directory and its data are deleted when the Pod is removed. This type is useful for sharing data between containers in the same Pod.
  • Volumes (including those backed by PVCs) are specified under the Pod spec, and individual containers must include a volumeMounts entry to map the volume name to a local mountPath.
  • StorageClass objects allow K8s admins to specify the types of storage services they offer on their platform. A key property is allowVolumeExpansion, which allows PVCs to be resized. At the StorageClass level there are two reclaim policies, Retain and Delete; the default is Delete.
  • A PV has an attribute named persistentVolumeReclaimPolicy, the reclaim policy at the PV level. If the attribute is not defined, it is inherited from the StorageClass. The persistentVolumeReclaimPolicy has three options that take effect when the PVC is deleted:
    • Retain: keeps all data but requires an admin to manually reclaim the volume (i.e. delete the PV, clean up the data, delete the storage asset)
    • Delete (cloud storage only): deletes both the PV and the underlying storage resource automatically
    • Recycle: scrubs (rm -rf /vol/*) all data in the underlying storage resource and allows the volume to be reused.
  • A PVC represents a user's request for storage resources. It defines a set of attributes similar to those of a PV. When a PVC is created, it looks for a PV that can meet the requested criteria; if it finds one, it is automatically bound to that PV. A PVC can be mounted into a Pod's containers just like any other volume, as sketched below.
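A minimal hostPath PV and PVC sketch, plus a Pod consuming the claim; all names, sizes, and paths are assumptions, and storageClassName: manual is set explicitly so the claim binds to this PV rather than to a default StorageClass:

> cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data            # directory on the node backing this PV
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi           # binds to a PV that can satisfy this request
---
apiVersion: v1
kind: Pod
metadata:
  name: pv-consumer
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data         # where the volume appears inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc
EOF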

Overall, the CKA exam experience is quite positive and rewarding. In future posts I will shift focus to Kubernetes itself, not only for the CKA exam but also to keep track of my learning.

Good luck with your CKA exam.