EKS Fargate: Features and Best Practices

January 11, 2023

EKS is Amazon’s managed Kubernetes service. It enables users to provision Kubernetes clusters without having to install, configure, scale, or upgrade the Kubernetes control plane themselves.

Kubernetes clusters need worker nodes, the host machines that provide compute capacity for the cluster, to run the user’s container workloads. Using EC2 instances as worker nodes is the traditional approach, giving users the flexibility to manage their instances manually.

Fargate is a serverless approach to provisioning compute capacity for EKS clusters. Users can run their containers without being responsible for provisioning, configuring, patching, upgrading, and scaling a fleet of EC2 instances. This reduces operational overhead and lets users focus on deploying applications instead of managing infrastructure.

An EKS cluster with Fargate support can be launched easily by executing the following command:

eksctl create cluster --name my-fargate-cluster --region us-west-2 --fargate

This article discusses how EKS Fargate works, its use cases, and how it can be helpful for EKS users.

Summary of Key Concepts

  • What is EKS Fargate? EKS Fargate is a serverless compute option for EKS that hosts containerized workloads without user-managed worker nodes.
  • Fargate architecture: Fargate infrastructure is fully managed, which means AWS is responsible for provisioning, patching, upgrading, and repairing host machines.
  • Use cases: Users benefit from EKS Fargate by reducing the operational overhead of managing EC2 infrastructure and spending more time on application design and development.
  • Getting started with EKS Fargate: EKS clusters with Fargate support can be provisioned with the eksctl command-line tool.
  • Observability: EKS Fargate supports monitoring tools like CloudWatch Logs and Prometheus, enabling insight into workloads running on Fargate nodes.
  • Capacity management: Fargate nodes are automatically right-sized based on pod requests/limits. These values need to be set accurately to prevent wasted resources and ensure that workloads perform as expected.
  • Security: Fargate workloads are secured by dedicated tenancy and restrictions on pod privileges like kernel and volume access.
  • Pricing: Fargate users pay only for their compute capacity with no upfront costs. Pricing is higher than EC2 instances, but Fargate requires less operational overhead.
  • Best practices: Fargate best practices include right-sizing pod requests/limits, enabling observability tools, organizing Fargate profiles correctly, and understanding which use cases are suitable for Fargate.
  • Limitations: Fargate’s limitations include less control over the host infrastructure, no option to customize the operating system configuration, and restrictions on valid pod configurations.
  • Alternatives: Other compute options for EKS users include self-managed EC2 nodes, managed node groups, Lambda containers, and App Runner. Analyzing the use case of each will help determine the appropriate approach.
  • Summary: EKS Fargate is a valuable tool for EKS users aiming to reduce the operational overhead caused by infrastructure management. Users should validate their use cases to ensure that they are compatible with Fargate.


Fargate architecture

EKS Fargate nodes are fully managed by AWS, which means the host machines run within AWS internal accounts and are not visible to users. AWS is responsible for managing the machine hosts, including initial provisioning, operating system upgrades, patching, health checking, and repairs in the case of failure.

When a user deploys a Kubernetes pod (container) to Fargate, the user does not need to decide on instance types, AMIs, kernel versions, or other host-specific details. The user deploys the pod with a defined CPU/memory request, and AWS takes care of provisioning appropriately sized compute capacity to host the workload. When the user deletes a pod, AWS is responsible for decommissioning the host. Users can now focus exclusively on managing their pods while delegating compute provisioning to AWS.

An EKS cluster may contain any combination of EC2 and Fargate nodes, so users need a way to control whether a given pod is assigned to an EC2 node or a Fargate node. To deploy pods to Fargate, users configure the EKS cluster with a Fargate profile: a collection of Kubernetes namespace and label selectors that determines which pods are assigned to Fargate. When a new pod is deployed to the cluster, EKS uses the Fargate profile settings to decide whether to schedule the pod onto EC2 nodes or Fargate nodes, giving users granular control over where particular pods run.
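
For example, a profile that sends pods from an additional namespace to Fargate can be added to an existing cluster with the eksctl command below; the profile name, namespace, and label are illustrative, and a full ClusterConfig example follows later in this article:

eksctl create fargateprofile \
  --cluster my-fargate-cluster \
  --name fargate-profile-batch \
  --namespace batch \
  --labels compute=fargate

With this profile in place, only pods in the batch namespace carrying the compute=fargate label are scheduled onto Fargate.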

The diagram below displays the division of responsibilities between AWS and the user. The EKS control plane and Fargate nodes are fully managed by AWS, while EC2 nodes (and all associated operational activities) are the user’s responsibility.

AWS and user responsibilities when using EKS Fargate

Use cases

The primary use case for Fargate is reducing the operational overhead of managing EC2 instance fleets. Teams normally need to invest time and expertise into capacity planning, selecting instance types, setting up scaling mechanisms (like EC2 auto scaling groups), monitoring unhealthy instances, and upgrading and patching operating systems. These activities take focus away from application development and solving core business problems. By delegating compute capacity management to AWS via EKS Fargate, users can focus on more valuable engineering challenges.

Another use case for EKS Fargate is rapid testing and development. Pods can be launched quickly on Fargate without spending time setting up EC2-related infrastructure, allowing users to quickly build prototypes and proof-of-concept workloads.

Fargate is also an excellent option for users running temporary workloads like scheduled jobs, which only need infrastructure for the duration of the job before the resources are disposed of. Configuring EC2-related resources for one-off or temporary workloads involves more overhead than Fargate, where AWS fully manages the provisioning and deprovisioning of the underlying infrastructure.
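
As a minimal sketch of this pattern, a Kubernetes CronJob can simply be deployed into a Fargate-targeted namespace. The job name, schedule, and resource values below are illustrative, and the serverless namespace is assumed to be selected by a Fargate profile (as configured later in this article):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report          # hypothetical job name
  namespace: serverless         # namespace selected by the Fargate profile
spec:
  schedule: "0 2 * * *"         # run once per day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]
            resources:
              requests:
                cpu: 250m       # Fargate sizes the node from these requests
                memory: 512Mi

Each scheduled run launches on its own Fargate capacity, and that capacity is released when the pod completes, with no EC2 instances to provision or clean up.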

Getting started with EKS Fargate

We can deploy Fargate nodes to either new or existing EKS clusters via the command line or web console. The de facto tool for managing EKS clusters from the command line is eksctl. It provisions clusters, EC2 node groups, and Fargate profiles, deploys or upgrades Kubernetes add-ons (like CoreDNS), and supports many other operations that cluster administrators need. For more details, you can view the full installation instructions on the AWS site.

Provision an EKS cluster

The eksctl tool can read configuration settings from a ClusterConfig text file, which is helpful for recording a reproducible configuration state in a version control system. Here is an example of a ClusterConfig plain text file defining an EKS cluster with Fargate support: 

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-fargate-cluster
  region: us-west-2
# Create a Fargate profile
fargateProfiles:
  - name: fargate-profile-1
    selectors:
      # Pods in the "serverless" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: serverless

Save the text above into a file with a meaningful name, such as fargate-cluster.config. Use the configuration to deploy an EKS cluster by running this command:

eksctl create cluster --config-file fargate-cluster.config

The eksctl tool will launch a cluster with the specified configuration. The output will look similar to the following:

eksctl version 0.112.0
 using region us-west-2
 using Kubernetes version 1.22
 creating EKS cluster "my-fargate-cluster" in "us-west-2" region with Fargate profile
 building cluster stack "eksctl-my-fargate-cluster-cluster"
 deploying stack "eksctl-my-fargate-cluster-cluster"
 creating Fargate profile "fargate-profile-1" on EKS cluster "my-fargate-cluster"
 EKS cluster "my-fargate-cluster" in "us-west-2" region is ready

The EKS cluster will take 15-20 minutes to create. Once the cluster is active, verify that it is configured and responding properly:

kubectl get svc
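
A healthy cluster returns the default kubernetes service; the output should resemble the following (the ClusterIP and age will differ):

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   12m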


Deploy pods to EKS Fargate

Let’s now deploy some pods and verify that they are scheduled onto Fargate. The only requirement is that the pods run in the serverless namespace, which matches the Fargate profile configured earlier; any pods deployed to this namespace are assigned to Fargate.

First, let’s create the namespace:

kubectl create namespace serverless

Next, we will create the deployment:

kubectl create deployment webserver --replicas 2 --image nginx -n serverless

Now we will check if the pods are running and deployed on Fargate:

kubectl get pods -n serverless --output wide

This command shows us the deployed pods and the corresponding worker nodes hosting the pods. For pods deployed to Fargate, fargate is the worker node’s name prefix. The output confirms that the pods were successfully scheduled to EKS Fargate nodes.

webserver-559b886555-6dxsr fargate-ip-192-168-xx.us-west-2.compute.internal
 webserver-559b886555-hhd2k fargate-ip-192-168-xx.us-west-2.compute.internal

Note: It may take a few minutes for the Fargate nodes to launch, so it is normal for the pods to remain in the Pending state during this time.

Finally, clean up the cluster with:

eksctl delete cluster --config-file fargate-cluster.config

Observability

EKS Fargate supports standard monitoring tools like CloudWatch Logs and Prometheus metrics. Combining these tools will provide insight into workloads running on Fargate nodes.

Log streaming is configured via the Fluent Bit project. Fargate nodes come with a Fluent Bit log router preinstalled that streams container logs to CloudWatch (and other AWS services) for any pods running on EKS Fargate. The log streaming configuration is sourced from a ConfigMap resource managed by the user:

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
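
Note that this ConfigMap lives in a dedicated aws-observability namespace. If the namespace does not already exist, it can be created with a manifest along these lines (the aws-observability: enabled label follows the AWS documentation for Fargate logging):

apiVersion: v1
kind: Namespace
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled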
                        

Fargate nodes also expose standard worker node metrics via Kubelet (and cAdvisor), such as data related to CPU and memory. Prometheus users can scrape this data along with any custom metrics the pod exposes, allowing a high degree of observability for Fargate workloads.
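
As a sketch of how an in-cluster Prometheus server might collect these metrics, a scrape job similar to the standard Kubernetes cAdvisor example can be used. This assumes the default service account token is mounted into the Prometheus pod and that Prometheus has RBAC permission to proxy node metrics through the API server:

scrape_configs:
  - job_name: kubernetes-cadvisor
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
      - role: node                        # discovers every node, including Fargate nodes
    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__         # scrape via the API server proxy
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor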

While there are no hosts to manage with Fargate, configuring appropriate observability tooling is still essential to running a production workload.

Capacity management

Fargate workloads benefit from right-sized worker nodes. EKS sizes Fargate nodes at launch according to the requests/limits defined in a pod, which ensures that there is no wasted excess node capacity in the cluster. EKS will provide a variety of readily available host sizes for various pod requirements without users needing to preconfigure instance types manually.

Sizing pods appropriately is necessary to ensure that Fargate nodes are right-sized. Setting values too high will result in excess cost and wasted resources; values too low will result in malfunctioning pods.
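
For illustration, the Deployment below declares explicit requests and limits so that Fargate can select an appropriately sized node; the values are placeholders and should be derived from observed utilization:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  namespace: serverless
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:             # Fargate sizes the node from these values
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 500m
            memory: 1Gi

Fargate rounds the pod’s requested resources up to the nearest supported vCPU/memory combination, so choosing values that align with those increments avoids paying for capacity the pod cannot use.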


Vertically scaling Fargate nodes is not possible: A Fargate node will have a fixed amount of resources and will not expand based on pod resource utilization. Horizontal scaling is the best approach to expanding workloads running on Fargate. Tools like the Horizontal Pod Autoscaler or Keda will scale Pod replica counts according to utilization and trigger additional Fargate nodes to launch. This scaling approach is beneficial for workloads capable of scaling horizontally but is not suitable for those only capable of scaling vertically.
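
A minimal Horizontal Pod Autoscaler sketch targeting the webserver Deployment used earlier might look like the following. The utilization target and replica bounds are illustrative, the Kubernetes Metrics Server must be installed for CPU-based scaling, and autoscaling/v2 requires Kubernetes 1.23 or later (older clusters can use autoscaling/v2beta2):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webserver
  namespace: serverless
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webserver
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # add replicas when average CPU exceeds 70%

Each additional replica triggers a new Fargate node, so the workload scales out without any auto scaling group configuration.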


Security

Kubernetes pods deployed to EKS Fargate benefit from workload isolation because Fargate nodes have dedicated tenancy, so they only execute one pod at a time. There is a highly secure isolation boundary, where each pod has a dedicated operating system kernel, CPU, memory, and local storage resources. A compromised pod will be unable to access any other workloads through the compromised host due to this dedicated tenancy.

EKS Fargate also enhances security by blocking pods requesting root access, kernel access, or access to host devices, volumes, or networking. Fargate will stop any pods from running if they request any administrative privileges, increasing the security posture of pods running on Fargate. Nodes also implement disk encryption by default.
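
For instance, a pod spec like the following would be rejected on Fargate because it requests privileged access and host networking (shown purely to illustrate what is disallowed):

apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
  namespace: serverless
spec:
  hostNetwork: true              # host networking is not permitted on Fargate
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: true           # privileged containers are not permitted on Fargate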

These features provide a layer of security for workloads running on EKS Fargate. Users will still need to implement additional Kubernetes security best practices to maintain a strong security posture, like hardening the RBAC configuration.

Pricing

There is no overprovisioning or upfront cost associated with Fargate: EKS Fargate users pay only for the compute capacity they use. However, the prices will be higher than regular EC2 instances.

Billing depends on the configuration of each pod deployed to Fargate. Pods specify a CPU/memory requirement, and Fargate nodes launch according to these capacity requests. Fargate charges per second (with a one-minute minimum), so billing for a pod stops shortly after it terminates.

As an example, suppose a user deploys one pod requesting 2 CPUs and 4 GB of memory. If this pod runs in the US-EAST-1 region (North Virginia), the cost will be $0.099 per hour, which corresponds to an average monthly fee of $72.27. An EC2 instance with similar CPU/memory resources (such as a t3.medium) may cost only $30.37 per month, which is less than half the Fargate price. 
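
The hourly figure follows from Fargate’s per-vCPU and per-GB rates. Assuming the published us-east-1 rates at the time of writing (roughly $0.04048 per vCPU-hour and $0.004445 per GB-hour), the calculation works out as follows:

2 vCPU × $0.04048 per hour  = $0.08096
4 GB   × $0.004445 per hour = $0.01778
Total                       ≈ $0.099 per hour ≈ $72 per month (730 hours)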

Depending on EC2 instance type capabilities and Fargate node resource requests, pricing will vary. However, Fargate will always cost more for the equivalent EC2 compute capacity.

The justification for Fargate’s increased costs will vary for each user, depending on how much operational overhead the user is spending on maintaining the EC2 fleet. Calculating the engineering time spent on infrastructure provisioning and maintenance will help determine whether Fargate is worth the higher compute cost.

See Amazon’s documentation for further details on Fargate pricing.

Best practices

There are a few best practices users can follow to maximize the usefulness of EKS Fargate.

  • Do not configure the Fargate profile to target the default namespace. Users generally deploy many random workloads to the default namespace, and they may not want a disorganized mix of pods to all deploy to Fargate. A better approach is to create a separate namespace dedicated to Fargate, such as serverless, which allows cluster users to quickly determine which pods are allocated to Fargate or EC2. It also makes it easier to control deployment privileges to the namespace via Kubernetes RBAC policies.
  • Troubleshooting pods running on Fargate will not be possible via SSH because Fargate nodes do not allow host access. This limitation should be noted in any developer playbooks to ensure that alternative troubleshooting approaches are explored, such as using kubectl exec to access running pods, collecting logs and metrics, and testing containers on non-serverless hosts first.
  • Enable EKS control plane logs for auditing purposes (a sample command is shown after this list). The audit log will provide insight into which Kubernetes users deploy to the Fargate namespaces, workload details, and resources consumed. This information can be helpful for auditing Fargate utilization in a cluster.
  • Investigate the operational hours spent on maintaining EC2 infrastructure to determine the value proposition of Fargate. Ensuring that the additional Fargate charges are offset by savings in developer overhead is essential for justifying the choice of service. If the overhead mitigated does not justify Fargate’s costs, consider other options for running containerized workloads on AWS.
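
As referenced in the list above, control plane audit logging can be enabled on an existing cluster with eksctl; one way to do so, using the cluster name from earlier, is:

eksctl utils update-cluster-logging --cluster my-fargate-cluster --enable-types audit --approve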

Investing time planning how workloads can utilize Fargate will help prevent future problems. 


Limitations

EKS Fargate has a few limitations that are typical of any managed service. AWS can only provide reliable quality of service assurances to users if there are guardrails to manage user workloads safely. Evaluating these limitations is necessary for understanding appropriate use cases.

AWS exclusively manages the host machines, so users have no flexibility regarding the Fargate host machine configuration. Users cannot modify details like kernel versions or operating system settings, nor install custom operating system packages. Fargate nodes run an AWS-managed Linux-based operating system, and no aspect of the host machine configuration is exposed to the user. Users with requirements involving host customization will need to evaluate EC2 nodes instead. For example, an organization’s security team may require certain hardening or monitoring software to run on the host for compliance purposes, which is impossible with Fargate.

Another limitation relates to the types of pods compatible with Fargate. Pods requiring special privileges or access to the host machine will not be allowed to run on Fargate. Kubernetes allows users to optionally configure pods to request root access to host machines, request kernel access, mount host devices (like volumes), and expose host network ports. None of these features are allowed on EKS Fargate due to security requirements. AWS attempts to maintain the security of its host machines by blocking any pods requesting privileged access.

More information regarding the limitations of EKS Fargate can be found in the AWS documentation.

Alternatives

There are alternative options available for hosting Kubernetes pods in EKS clusters. Each option has trade-offs to evaluate based on the user’s requirements:

  • EKS Self-Managed EC2 Nodes: The vanilla approach for EKS is to self-manage EC2 instances, which involves deploying and managing the entire EC2 worker node lifecycle manually. This option allows fine-grained governance over the infrastructure configuration and complete end-to-end control of all involved resources. 
  • EKS Managed Node Groups: There is an option to allow EKS to deploy EC2 auto scaling groups on behalf of the user. EKS can provide health checks, automatic graceful node draining during a shutdown, and more straightforward upgrade functionality. The user still manages EC2 instances but with some responsibilities delegated to EKS.
  • ECS Fargate: Users comfortable with managing containers on ECS also have access to the serverless Fargate feature. The differences between ECS Fargate and EKS Fargate are minimal because both share a common underlying Fargate implementation; the decision comes down to whether workloads are suitable for the more straightforward ECS service or require a full Kubernetes platform.
  • Lambda Containers: Users running short-lived containers with minimal resource requirements may be suited to Lambda. There is no access to a Kubernetes API, and containers are limited to a 15-minute runtime, which may not suit all workloads.
  • App Runner: Users running containerized web applications designed for serverless infrastructure are well suited to App Runner. The service manages load balancing, SSL certificates, and scaling on behalf of the user and is useful for getting web applications up and running quickly, although with less flexibility than ECS or EKS.

There are many options available for running containers on AWS infrastructure. Users will benefit from testing different approaches to validate their benefits and limitations.

Summary

Fargate can be a valuable tool for users managing EKS clusters and looking to reduce infrastructure operational overhead. Mitigating the need to manage EC2 instance fleets manually allows teams to focus engineering resources on application workloads instead of operating infrastructure. 

There are drawbacks to using EKS Fargate, so users must carefully evaluate their workload requirements to ensure compatibility with Fargate’s constraints. 

Users may benefit from testing workloads on EKS Fargate to gather data on its long-term suitability. Since there are no upfront costs and Fargate nodes can be deployed in minutes, it is possible to build a proof of concept relatively quickly. AWS provides an entry-level EKS Fargate tutorial in its documentation.
