A workload is an application running in one or more Kubernetes (K8s) pods. Pods are logical groupings of containers running in a Kubernetes cluster, and controllers manage them through a control loop (in the same way that a thermostat regulates a room's temperature). A controller monitors the current state of a Kubernetes resource and makes whatever requests are necessary to move it to the desired state. Workload resources (to use the appropriate Kubernetes terminology) configure controllers that ensure the correct pods are running to match the desired state you have defined for your application.
Workload Resource | Common Use Cases |
---|---|
Deployment and ReplicaSet | ReplicaSets are managed via declarative statements in a Deployment. Deployments are used for stateless applications such as API gateways. |
StatefulSet | As the name implies, StatefulSets are often used for stateful applications. StatefulSets can also be used for highly available applications that need multiple pods and leader election, for example a highly available RabbitMQ messaging service. |
DaemonSet | DaemonSets are often used for log collection or node monitoring. For example, the Elasticsearch, Fluentd, and Kibana (EFK) stack can be used for log collection. |
Job and CronJob | CronJobs and Jobs are used to run pods that only need to run at specific times, such as creating a database backup. |
Custom Resource | Custom resources serve multiple purposes, such as adding first-class kubectl support for new resource types or extending Kubernetes libraries and CLIs to create and update them. An example of a new resource is a Certificate Manager, which enables HTTPS and TLS support. |
A deployment provisions a ReplicaSet, which in turn provisions a pod according to its desired state. When a deployment is changed or updated, a new ReplicaSet is provisioned and replaces the previous ReplicaSet. This process is so seamless that deployment updates can occur with no downtime. Stateless applications, which do not save information about previous operations, are a common use for deployments. “Stateless” means operations always start from scratch. For example, if a search operation starts but stops before it completes, it will have to start again from the beginning. Here is an example of a deployment workload:
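A minimal Deployment manifest might look like the following sketch (the nginx image, names, and replica count are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3                 # desired number of identical pods
  selector:
    matchLabels:
      app: nginx              # must match the pod template's labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applying this manifest creates a ReplicaSet that keeps three nginx pods running; changing the pod template (for example, the image tag) triggers a rolling update via a new ReplicaSet.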
Deployment workloads commonly connect to databases that store user and application data while the workloads themselves remain stateless. Hence, applications like Grafana are provisioned in K8s as Deployments.
StatefulSets run stateful applications. StatefulSets provision pods with unique identifiers and maintain an identity for each pod. The pods are created from the same specification but are not interchangeable, as identifiers persist across rescheduling. Volumes can be provisioned and matched with pod identifiers even as pods are restarted, rescheduled, or recovered after failure.
StatefulSets also require graceful starts: they deploy, delete, and scale in an ordered manner. Deleting a StatefulSet does not delete its volumes, to ensure data safety. Additionally, pods are sometimes not terminated when a StatefulSet is deleted.
Here is an example of a StatefulSet workload resource:
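A minimal sketch of such a manifest follows (names, image, and storage size are placeholders; the headless Service referenced by serviceName is assumed to exist separately):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"        # headless Service that gives each pod a stable DNS identity
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # one PersistentVolumeClaim per pod, matched to its identity
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

The pods are created in order as web-0, web-1, web-2, and each keeps its own volume claim across rescheduling.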
Stateful applications store user information and actions. StatefulSets can also be used to form clusters, whereas Deployments cannot: the unique identifiers of the pods are what allow cluster formation. There are multiple mechanisms capable of forming clusters; detailing them is beyond the scope of this article. Benefits of cluster formation include synchronization of data across nodes, which increases data safety and the application's resilience. Another advantage is that a cluster can self-heal if a single pod is deleted, provided that the application supports self-healing. RabbitMQ is an example of an application that can form a cluster and has these features when configured correctly.
RabbitMQ is a messaging service that streamlines communication between services within a Kubernetes cluster. RabbitMQ can run as a single pod, but doing that results in having a single point of failure. Hence, it is recommended that RabbitMQ runs as a cluster in production. StatefulSets enable RabbitMQ and other applications to be run as stable stateful clusters and increase the stability and resilience of applications.
A DaemonSet provisions a copy of a pod onto all (or some) nodes, depending on taints and labels. Pod provisioning occurs on new nodes added to the cluster. Similarly, pods are garbage collected from nodes removed from the cluster.
The deletion of a DaemonSet results in the deletion of all of its pods. DaemonSets are particularly useful for log and metrics collection across a cluster.
Below is an example of DaemonSet workload:
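A sketch of a node-exporter DaemonSet is shown here (the namespace and image tag are assumptions; adjust them for your cluster):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring       # assumed namespace for monitoring components
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.7.0   # illustrative image tag
        ports:
        - containerPort: 9100              # default node-exporter metrics port
```

Because it is a DaemonSet, exactly one exporter pod runs on each eligible node, so every node's metrics are collected.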
The node exporter DaemonSet collects and exposes metrics such as CPU utilization, memory usage, disk IOPS, etc.
A Job creates one or more Pods and runs them until completion, retrying failed pods until the required number has successfully finished. When this occurs, the Job is complete. A CronJob, on the other hand, creates a Job on a repeating schedule, which you can write in Cron format. An example of a CronJob workload resource is below:
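A minimal sketch, assuming a PostgreSQL database (the host, database name, image, and backup path are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 2 * * *"       # Cron format: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: postgres:16
            # pg_dump against a hypothetical "db-host" service; credentials
            # would normally come from a Secret rather than being inlined
            command: ["/bin/sh", "-c",
                      "pg_dump -h db-host mydb > /backup/mydb.sql"]
```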
This CronJob effectively backs up a database.
Custom Resources extend the Kubernetes API, which stores API objects such as DaemonSets. A custom resource adds extensions to the Kubernetes API that are not available in its default setup. Custom Resources can be added to and deleted from a cluster.
Once installed, Custom Resources are accessible via kubectl. For example, after installing Certificate Manager, provisioned certificates can be viewed via "kubectl get certificates". A Custom Resource such as Certificate Manager can be installed by following its official installation instructions.
Users can also add custom controllers through custom resource definitions. Adding definitions allows custom resources to be combined with custom controllers, creating a declarative API similar to existing K8s workloads such as Deployments and DaemonSets. This combination enables users to specify their desired state, and the control loop continuously assesses the actual state and provisions resources as required to maintain the desired state. Custom controllers can also be added or deleted independently of a cluster's lifecycle. Custom controllers tend to be most effective with custom resources but work with any type of resource.
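The custom resource definitions described above are themselves Kubernetes objects. A minimal sketch of a CustomResourceDefinition, using the illustrative "CronTab" type from the Kubernetes documentation (the group and field names are placeholders):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com  # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:        # validation schema for the new resource type
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```

Once this definition is applied, "kubectl get crontabs" works like any built-in resource, and a custom controller can watch CronTab objects to reconcile their desired state.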
Here are some of the most important K8s workloads best practices:
Always use the latest stable versions of images and dependencies, because they tend to include security patches and new features.
Using Alpine Linux images where possible is recommended, because the Alpine Linux distribution tends to produce container images that are up to 80% smaller than those built from mainstream Linux distributions. If needed, you can start from a smaller Alpine base image and add packages to suit your application's requirements.
You can set resource limits as follows:
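A sketch of a pod spec with requests and limits (the values and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:               # guaranteed minimum, used by the scheduler
        memory: "128Mi"
        cpu: "250m"
      limits:                 # hard cap; CPU is throttled, memory overuse is OOM-killed
        memory: "256Mi"
        cpu: "500m"
```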
This is advantageous because it caps the CPU and memory resources a container can use before it is throttled or restarted, effectively preventing a domino effect across containers when node resources are depleted. Kubernetes offers a simplistic mechanism to dynamically allocate requests and limits using the Vertical Pod Autoscaler (VPA); however, the best practice is to use machine learning and sophisticated algorithms to measure container resource usage and avoid allocation mistakes that can lead to outages or financial waste.
Use readinessProbe and livenessProbe. These probes respectively ensure that a pod is ready to start accepting traffic and determine whether it is healthy enough to keep accepting it.
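A container spec with both probes might be sketched as follows (the /healthz endpoint, port, and timings are assumptions to adapt to your application):

```yaml
containers:
- name: app
  image: nginx:1.25
  readinessProbe:             # pod receives traffic only after this succeeds
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:              # container is restarted if this keeps failing
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 15
    periodSeconds: 20
```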
Use Role-Based Access Control (RBAC) to determine access policies. RBAC defines what a user or workload (via its service account) can access within the Kubernetes cluster.
Kubernetes workloads are the backbone of K8s. For example, the workload methodology for updating Deployments enables zero-downtime updates, which increases application availability, and DaemonSets allow effective monitoring of applications, among other use cases. Because workloads are such a fundamental aspect of K8s, understanding them is essential to learning how to use Kubernetes in a production environment.