We continue from the previous post about resource objects, starting with the storage-related ones.
In Kubernetes, we use the term volume to refer to a unit of storage. Many plugins that comply with the Container Storage Interface (CSI) allow heterogeneous storage resources to be surfaced as volumes in Kubernetes. CSI lets storage drivers be developed out-of-tree, in parallel with the main Kubernetes code base. Any driver that implements CSI works with any orchestration platform that supports CSI, such as Kubernetes or Docker Swarm. The three main resources in the storage subsystem are: PV (PersistentVolume), PVC (PersistentVolumeClaim), and SC (StorageClass).
A Persistent Volume (PV) maps external storage onto the Kubernetes cluster; it is the cluster's representation of that external storage. A single external storage volume can only be represented by a single PV. For example, you cannot have a 50GB external volume represented by two 25GB PVs, each covering half of it.
A PV supports three access modes:
- RWO (ReadWriteOnce): allows a single PVC to bind for read/write. This is common for block devices.
- RWX (ReadWriteMany): allows multiple PVCs to bind for read/write. This is common for file- and object-level access.
- ROX (ReadOnlyMany): allows multiple PVCs to bind read-only. Think of it along the lines of ISO media.
Note that a PV can only be opened in one of the modes above; all connecting PVCs (if multiple are allowed) use that mode.
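As a sketch, here is what a manually created PV backed by an NFS share might look like. The name, server address, and path are placeholder values for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data            # hypothetical name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany            # RWX: file-level storage, multiple read/write claims
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10          # placeholder NFS server address
    path: /exports/data        # placeholder export path
```

NFS is used here because it naturally supports ReadWriteMany; a block device (e.g. an EBS volume) would typically declare ReadWriteOnce instead.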
Persistent Volume Claims (PVCs) act like tickets that authorize applications (Pods) to use a PV. Once a Pod holds the PVC, it can mount the respective PV as a volume. To associate a PVC with a specific PV, you specify the PV's name in the claim. Pods never act directly on PVs; they always act on the PVC object that is bound to the PV. What happens when a PVC is released is governed by the PV's reclaim policy, which can be set to Delete or Retain. The Delete policy deletes the PV as well as the associated storage resource on the external storage system. The Retain policy keeps the PV object on the cluster, along with any data stored on the associated external assets.
The spec section of a PVC must match the corresponding fields of the PV it binds to, for example the access modes, capacity, and storage class name.
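A minimal sketch of such a claim, with the PV name and sizes as placeholder values; the claim pins itself to a specific PV via volumeName, and its access mode and requested capacity must be compatible with that PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-data           # hypothetical name
spec:
  volumeName: pv-nfs-data      # pins the claim to a specific, pre-created PV
  accessModes:
    - ReadWriteMany            # must match the PV's access mode
  resources:
    requests:
      storage: 50Gi            # must fit within the PV's capacity
  storageClassName: ""         # empty string opts out of dynamic provisioning
```

A Pod then consumes the claim (never the PV directly) by listing a volume with `persistentVolumeClaim.claimName: pvc-nfs-data` and mounting it into a container.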
Storage classes let you define different classes (or tiers) of storage backed by an external provisioner such as aws-ebs. This works well with cloud storage providers. As long as the plugin for the storage backend is available, you can configure as many StorageClass objects as you need, and even specify that the volumes be encrypted. Storage classes create PVs dynamically, so to use cloud storage you create a PVC object that references the storage class.
The whole purpose of a storage class is to create PVs dynamically for various storage backends/plugins. You create the StorageClass object and use a plugin to tie it to a particular type of storage on a particular back end. When matching PVCs appear, the StorageClass dynamically creates the required volume on the back-end storage system.
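As a sketch using the aws-ebs provisioner mentioned above (the class name and claim name are hypothetical), a StorageClass and a PVC that triggers dynamic provisioning might look like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-encrypted             # hypothetical tier name
provisioner: kubernetes.io/aws-ebs # aws-ebs plugin ties the class to EBS
parameters:
  type: gp2
  encrypted: "true"                # parameter values are strings
reclaimPolicy: Delete              # dynamically created PVs are deleted on release
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-fast                   # hypothetical name
spec:
  storageClassName: fast-encrypted # referencing the class triggers provisioning
  accessModes:
    - ReadWriteOnce                # EBS is a block device: single read/write claim
  resources:
    requests:
      storage: 20Gi
```

No PV is created by hand here: when the PVC appears, the class provisions a matching 20Gi encrypted EBS volume and a PV to represent it.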
If a cluster has a default storage class, you can deploy a Pod using just a PVC in the PodSpec, without explicitly declaring a storage class. However, this is not recommended in production.
With modern applications it is good practice to decouple configuration from the application's execution environment; they are stored separately but brought together at runtime. A ConfigMap (CM) lets you store configuration data outside of a Pod and dynamically inject it into a Pod at runtime. ConfigMaps are essentially sets of key/value pairs, and each key/value pair is called an entry.
Once data is stored in a ConfigMap, it can be injected into containers at runtime via one of three methods:
- environment variables: updates to the ConfigMap are not reflected in already-running containers
- arguments to the container’s startup command (very limited)
- files in a volume (most flexible): requires creating a ConfigMap volume in the Pod template and mounting it. Entries in the ConfigMap appear in the container as individual files. You can change entries after a container is deployed, and the changes show up in the files.
The application is unaware that the data originally came from a ConfigMap. Also note that a ConfigMap is not meant to store sensitive data.
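A sketch combining two of the injection methods above, with all names and values hypothetical: one entry is surfaced as an environment variable, and the whole ConfigMap is mounted as files in a volume.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # hypothetical name
data:
  log-level: debug               # each key/value pair is an "entry"
  app.properties: |              # an entry can also hold a whole file
    color=blue
    mode=fast
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: busybox             # placeholder image
      command: ["sleep", "3600"]
      env:
        - name: LOG_LEVEL        # entry injected as an environment variable
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log-level
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config # entries appear here as individual files
  volumes:
    - name: config-vol
      configMap:
        name: app-config
```

Inside the container, `/etc/config/log-level` and `/etc/config/app.properties` are ordinary files; the volume-mounted copies are updated if the ConfigMap changes, while the environment variable is not.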
Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
The name of a Secret object must be a valid DNS subdomain name. A Secret can be used with a Pod in three ways:
- As files in a volume mounted on one or more of its containers.
- As container environment variables.
- By the kubelet when pulling images for the Pod.
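A minimal sketch of a generic Secret, with hypothetical names and obviously fake placeholder credentials; the `stringData` field accepts plain text and stores it base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # hypothetical; must be a valid DNS subdomain name
type: Opaque
stringData:              # plain-text input, stored base64-encoded
  username: admin        # placeholder values only -- never commit real secrets
  password: s3cr3t
```

A Pod consumes it much like a ConfigMap: mounted as a volume of type `secret`, referenced in `env` via `secretKeyRef`, or (for registry credentials) listed under the Pod's `imagePullSecrets` so the kubelet can use it when pulling images.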
Ingress manages external access to the services in a cluster, typically over HTTP. It may provide load balancing, SSL termination and name-based virtual hosting. Note that you must have an Ingress controller to satisfy an Ingress; creating an Ingress resource alone has no effect. You can choose from a number of Ingress controllers; NGINX is a common flavour.
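As a sketch, an Ingress routing a hostname to a backend Service (the host, Service name, and class name are placeholder values, and this assumes an NGINX Ingress controller is already installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress            # hypothetical name
spec:
  ingressClassName: nginx      # must match an installed Ingress controller
  rules:
    - host: app.example.com    # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc  # hypothetical backend Service
                port:
                  number: 80
```

Without a controller watching for this class, the object sits in the cluster doing nothing, which is exactly the "no effect" caveat above.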
By default, containers run with unbounded compute resources on a Kubernetes cluster. With resource quotas, cluster administrators can restrict resource consumption and object creation on a per-namespace basis. Within a namespace, Pods and Containers can collectively consume as much CPU and memory as the namespace’s resource quota allows, which raises the concern that a single Pod or Container could monopolize all available resources. A LimitRange is a policy that constrains resource allocations (to Pods or Containers) within a namespace.
A LimitRange provides constraints that can:
- Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default requests/limits for compute resources in a namespace and automatically inject them into Containers at runtime.
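The constraints above can be sketched in a single LimitRange (the name and all figures are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits   # hypothetical name
spec:
  limits:
    - type: Container
      min:
        cpu: 100m          # minimum a Container may request
        memory: 64Mi
      max:
        cpu: "2"           # maximum a Container may be limited to
        memory: 1Gi
      defaultRequest:      # injected as the request when none is specified
        cpu: 200m
        memory: 128Mi
      default:             # injected as the limit when none is specified
        cpu: 500m
        memory: 256Mi
```

Once this exists in a namespace, a Container deployed there without any resources stanza automatically receives the defaultRequest/default values, and one that declares values outside the min/max bounds is rejected at admission.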
When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources. Resource quotas are a tool for administrators to address this concern.
A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.
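A sketch of such a ResourceQuota, mixing object-count limits with aggregate compute limits (the namespace, name, and figures are placeholder values):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota               # hypothetical name
  namespace: team-a              # hypothetical namespace
spec:
  hard:
    pods: "20"                   # object-count limit by type
    persistentvolumeclaims: "10"
    requests.cpu: "4"            # aggregate CPU requests across the namespace
    requests.memory: 8Gi
    limits.cpu: "8"              # aggregate CPU limits across the namespace
    limits.memory: 16Gi
```

Creating a 21st Pod, or a Pod whose requests would push the namespace total past these figures, is rejected; this is also why a LimitRange injecting default requests/limits pairs well with a quota, since quota accounting needs every Pod to declare them.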