The robust and scalable architecture of Kubernetes has changed the way we host our applications. When configured correctly, Kubernetes helps avoid application downtime: you can prevent planned downtime when deploying a new software release and even unplanned downtime caused by a hardware issue. The Kubernetes Service plays a significant role in making this level of uptime possible.
In this article, we explain how service load balancing helps achieve high availability in a Kubernetes cluster. We cover the basic concept behind Kubernetes Service, review the different services available, and provide an example to get you started.
In the Kubernetes world, pods, where the application lives, are ephemeral and get a new IP address every time they are launched; they are routinely destroyed and recreated with each deployment. Without a Kubernetes Service, we would have to track the IP addresses of all active pods ourselves, a difficult task, especially as the application scales up, and one that increases the risk of downtime.
The Kubernetes Service creates an abstraction that maps to one or more pods. This abstraction allows other applications to reach the service by simply referring to the service name, so they no longer need to know the IP addresses assigned to the pods. External applications and end users can also access the services, provided they are exposed publicly to the internet.
Let us look at an example of a service definition that exposes the pods with the label app=redis through a service named redis-service on TCP port 6379.
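A minimal sketch of such a definition (since spec.type is omitted, the default ClusterIP type applies):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  selector:
    app: redis                 # matches pods labeled app=redis
  ports:
    - protocol: TCP
      port: 6379               # port the service exposes
      targetPort: 6379         # port on the pod receiving the traffic
```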
The selector ensures that we map the service to the corresponding pods. When a service finds a pod with a matching label, it records the pod's IP address in a Kubernetes object called Endpoints. The Endpoints object tracks the IP addresses of all matching pods and updates its list automatically; each service creates its own Endpoints object.
Let us look at our Redis service example. If you describe the service, you will see a line called Endpoints containing a list of the pods' IP addresses. Kubernetes creates an Endpoints object with the same name as the service.
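For example (the IP addresses shown are placeholders; yours will differ):

```shell
$ kubectl describe service redis-service | grep Endpoints
Endpoints:  10.244.1.5:6379,10.244.2.7:6379

$ kubectl get endpoints redis-service
NAME            ENDPOINTS                         AGE
redis-service   10.244.1.5:6379,10.244.2.7:6379   2m
```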
We don’t need to go into more detail about Endpoints. Just remember that it’s the Endpoints object that keeps the list of IP addresses up to date so that the service can forward its traffic. Defining a service with a selector, as above, is the most common approach.
We can also define a service without a selector. For example, while migrating our application to Kubernetes, we may want to evaluate how it behaves without migrating the Redis server, which still runs on the old server. In such a scenario, we create a service as shown below.
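A sketch of such a selector-less service, using the name referenced later in this section:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-service-without-ep
spec:
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
  # no selector, so Kubernetes will not create an Endpoints object automatically
```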
Then we create an Endpoints object with the same name and point it to the Redis server's IP address.
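A sketch of the matching Endpoints object (the IP address 192.168.1.100 is a placeholder for the legacy Redis server):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: redis-service-without-ep   # must match the service name
subsets:
  - addresses:
      - ip: 192.168.1.100          # placeholder IP of the legacy Redis server
    ports:
      - port: 6379
```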
We can then simply use the service name redis-service-without-ep in our application to connect to the Redis server.
By default, Kubernetes creates a ClusterIP type of service. We can build different kinds of services by setting the spec.type property in the service YAML file.
The four types of services are:
ClusterIP services are accessible only within the cluster; dependent applications can use them to interact with other applications internally.
NodePort services are accessible outside the cluster. A NodePort service maps pods to a static port on their hosting node/machine. For example, say you have a node with IP address 10.0.0.20 and a Redis pod running on it. NodePort will expose 10.0.0.20:30038, assuming the exposed port is 30038, which you can then access from outside the Kubernetes cluster, as the sketch below shows.
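A minimal sketch of such a NodePort service for the Redis example (the service name is ours; 30038 matches the port above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-nodeport           # illustrative name
spec:
  type: NodePort
  selector:
    app: redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
      nodePort: 30038            # static port from the 30000-32767 range
```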
The LoadBalancer service type creates load balancers in various cloud providers such as AWS, GCP, and Azure to expose our application to the internet. The cloud provider supplies the mechanism for routing traffic to the services. The most common use of this type is for a website or web app.
For a pod to access an application outside of the Kubernetes cluster, such as an external DB server, we use the ExternalName service type. Unlike the previous examples, instead of an Endpoints object, the service simply returns a CNAME record for the external server.
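A sketch of an ExternalName service (both the service name and the external hostname here are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-redis              # illustrative name
spec:
  type: ExternalName
  externalName: redis.example.com   # placeholder CNAME of the external server
```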
Each service will get a DNS name that other microservices can use. The format of the DNS record would be: service-name.namespace.svc.cluster.local
Example: redis-service.default.svc.cluster.local
This DNS record will resolve to the Cluster IP address of a standard service. In contrast, a headless service will point to the individual IP addresses of the pods.
Additionally, one SRV record is also created for some special use-cases: _port-name._protocol.service.namespace.svc.cluster.local
Now that we know the concepts behind Kubernetes services, we need to understand how humans or other microservices can use them. We can divide access broadly into internal and external.
For internal purposes, we use the ClusterIP type. For example, pods of service-A can talk to pods of service-B, as long as they are in the same Kubernetes cluster.
We have two options to access the service: through the environment variables Kubernetes injects into every pod, SERVICE_NAME_SERVICE_HOST and SERVICE_NAME_SERVICE_PORT, or through the service's DNS name described earlier.
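For example, for our redis-service, a pod started after the service can read these variables (the values shown are placeholders):

```shell
# inside any pod created after the service; dashes in the service
# name become underscores, and the name is uppercased
$ echo $REDIS_SERVICE_SERVICE_HOST
10.96.0.12
$ echo $REDIS_SERVICE_SERVICE_PORT
6379
```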
For external access, we require either a NodePort or LoadBalancer type of service.
We can use the NodePort service type if we have a limited number of services. It provides connectivity to our application without a dedicated external load balancer. Keep in mind that it only works as long as the node is reachable via its IP address. In most cases, the worker nodes reside in a private network (such as an office network or a private VPC); in such cases, we cannot access the NodePort service from the internet.
Another disadvantage of the NodePort service type is that it maps the node's IP address to a static port. The allocatable port range is 30000–32767, and the service must allocate the same port on every node while provisioning. This becomes problematic when the application scales up into multiple microservices.
Public cloud providers like AWS, GCP, and Azure automatically create load balancers when we create a service with spec.type: LoadBalancer.
The LoadBalancer type provides a public IP address or DNS name to which external users can connect. The traffic flows from the load balancer to the mapped service on a designated port, which eventually forwards it to the healthy pods. Note that load balancers do not map directly to the pods.
Let us see how to create and use an external LB with the LoadBalancer service type. In this example, we will expose a RabbitMQ (RMQ) pod and connect to its Admin GUI from the internet. Please note that the LoadBalancer does not do any filtering of incoming or outgoing traffic; it is merely a proxy for the external world, forwarding traffic to the respective pods/services.
In this example, we rely on AWS EKS to provide the load balancers, but Kubernetes in any public cloud will do as long as it supports load balancers. If you do not have an EKS setup, the EKS user guide can walk you through provisioning, or commands like the following can help you get started quickly.
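A sketch using eksctl (the cluster name and node count are arbitrary; the region matches the one used later in this example):

```shell
# assumes eksctl, kubectl, and AWS credentials are already configured
eksctl create cluster --name rmq-demo --region ap-south-1 --nodes 2

# point kubectl at the new cluster
aws eks update-kubeconfig --name rmq-demo --region ap-south-1
```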
Once we have our Kubernetes cluster ready, we need to launch an RMQ pod. We will use the pod manifest below, rmq-pod.yaml.
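A minimal sketch of such a pod manifest (the image tag and label are our own choices; any RabbitMQ image with the management plugin will do):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rmq
  labels:
    app: rmq                         # label the service will select on
spec:
  containers:
    - name: rmq
      image: rabbitmq:3-management   # includes the Admin GUI (management plugin)
      ports:
        - containerPort: 5672        # AMQP
        - containerPort: 15672       # management/Admin GUI
```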
Then create the pod:
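```shell
kubectl apply -f rmq-pod.yaml
```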
Verify the pod is up and running:
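The output should look roughly like this once the pod is ready:

```shell
$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
rmq    1/1     Running   0          1m
```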
Now, let us create the service manifest, rmq-svc.yaml.
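A sketch of the service (the name rmq-service is our own; the selector matches the pod label above, and 15672 is the Admin GUI port we will open later):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rmq-service            # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: rmq                   # matches the pod label above
  ports:
    - name: amqp
      protocol: TCP
      port: 5672
      targetPort: 5672
    - name: management
      protocol: TCP
      port: 15672              # Admin GUI port used below
      targetPort: 15672
```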
Create the service:
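```shell
kubectl apply -f rmq-svc.yaml
```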
Verify the service:
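The output should look roughly like this (the EXTERNAL-IP is the ELB DNS name used below; the cluster IP and node ports are placeholders):

```shell
$ kubectl get svc rmq-service
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)                          AGE
rmq-service   LoadBalancer   10.100.52.13   a8e294d5ad3d74562ac0ca47e8eaec9a-34496842.ap-south-1.elb.amazonaws.com   5672:31380/TCP,15672:30745/TCP   2m
```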
If you look closely, there is an LB DNS name under EXTERNAL-IP. It is the ELB that AWS created for us.
Copy the DNS name into a browser with port 15672. In our case, it would be http://a8e294d5ad3d74562ac0ca47e8eaec9a-34496842.ap-south-1.elb.amazonaws.com:15672.
You can share this URL/DNS name with anyone who needs access to your RMQ Admin. We have seen how easy it is to create and configure an external load balancer to expose our application. However, the LoadBalancer type has some limitations, which we will look at in the next section, along with how Ingress can help.
We saw that the LoadBalancer service type creates an Application/Network Load Balancer for each service. That is fine as long as we have only a few services to expose, but multiple load balancers quickly become expensive and hard to manage. Also, the LoadBalancer type doesn't support URL routing, SSL termination, and similar features.
Consider Ingress an extension of the NodePort or LoadBalancer service type. It sits between the external traffic and the Kubernetes cluster, processing the traffic to determine which pods or services to forward it to. The main features of Ingress are load balancing, name-based virtual hosting, URL routing, and SSL termination.
The definition of such an Ingress object is sketched below.
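A minimal sketch (the object name and the backend service name order-service are assumptions based on the description that follows):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-ingress             # illustrative name
spec:
  rules:
    - host: abc.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service   # assumed name of the internal order service
                port:
                  number: 8080
```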
In the above YAML representation, the rules under the spec section determine how the traffic from end users flows. In this case, the Ingress Controller will forward all traffic to https://abc.com/orders to an internal service named order-service on port 8080.
Please note that Ingress is not a Kubernetes service type like LoadBalancer or NodePort; instead, it is a collection of rules that uses a LoadBalancer or NodePort service type for its functionality. For Ingress to work, the cluster needs an Ingress Controller.
Some of the well-known Ingress Controllers are the NGINX Ingress Controller and the AWS ALB Ingress Controller, both discussed below, along with others such as Traefik and HAProxy Ingress.
Various controllers have different features and capabilities, so you should evaluate them based on your requirements. Whichever controller we use, Ingress makes it much easier to configure and manage routing rules, implement SSL termination, and more. We recommend reading more about Ingress.
Like traditional load balancers, Ingress Controllers support various load-balancing algorithms. The most commonly used are least connection and round robin.
For example, the AWS ALB Ingress controller supports the following algorithms:
round_robin
least_outstanding_requests
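With the AWS controller, the algorithm is typically chosen through a target-group-attributes annotation on the Ingress, roughly like this (a sketch; verify the key and value against your controller version's documentation):

```yaml
metadata:
  annotations:
    # selects the target-group load-balancing algorithm
    alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests
```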
The NGINX Ingress Controller supports both of the above algorithms, as well as Least Time load balancing and IP Hashing.
Depending on the workload, we can choose the optimal algorithm for the best performance of our applications. If you are unsure, just leave it at the default; the intention here is simply to be aware of configurations that might help us in the future.
Services make it simple for microservices to interact and load balance across hundreds of resources. We have learned the differences between the various service types, how to access each of them, how to launch an external load balancer, and why Ingress is so useful. To use Kubernetes Services effectively, we need to understand which type fits each use case and implement them accordingly. Set up correctly at architecture-design time, they will save us time debugging and help maintain our uptime SLA. Additionally, all service definitions are declarative Kubernetes manifests in YAML or JSON, like any other Kubernetes object.