There’s a near-endless list of administrative tools for DevOps practitioners who need to provision, configure, and deploy resources. This article takes a closer look at one tool in particular, Terraform, and how it can be used to provision AWS EKS. Specifically, we’ll discuss:
But first, let’s start with the basics.
At its core, AWS EKS is a service that provisions and manages the control plane (configuration files, API server, and controllers) of your Kubernetes (K8s) cluster. This means that you won’t have to worry about your control plane’s security, high availability, and upgrades. AWS EKS doesn’t automatically provision the K8s cluster’s worker nodes that host your workload, so you have the flexibility to provision them as you see fit.
Terraform is an open-source tool, created by HashiCorp, that is used to declare the desired configuration state of your public and private application infrastructure. After you define a configuration state, Terraform provisions your environment and maintains the state of its changes over time. Terraform came to market as modern microservice-based immutable architecture took root. Since then, it has gradually supplanted configuration management tools such as Chef, Puppet, and Ansible as infrastructure administrators’ favorite helper. Terraform’s registry of integrations, known as providers, boasts an impressive list.
In short, Terraform is the right tool to provision AWS EKS if you have already selected it as your standard cross-platform tool, or have decided to adopt an Infrastructure as Code approach and don’t want to be locked into AWS-specific tooling. Otherwise, AWS eksctl is the fastest and simplest method, and CloudFormation for AWS EKS is a more natural choice for AWS-only IaC.
As you may already know, there are other ways to provision a Kubernetes (a.k.a. K8s) cluster in AWS EKS. So why use Terraform? To answer that question, let’s first look at a few different tools that usually come up in this type of conversation, and their main purpose.
| Tool | Description | Limitation |
|------|-------------|------------|
| Terraform | Leading platform-independent infrastructure-as-code provisioning tool with lots of integrations. | Newer to Kubernetes and not specialized for it. |
| AWS CloudFormation | Native AWS infrastructure-as-code provisioning tool intended for use with AWS services. | Not intended for multi-cloud use or data center infrastructure provisioning. |
| kubectl | Native K8s command-line interface (CLI) that supports both imperative commands and declarative configuration files. | Not intended for use outside of K8s, and naturally useful only after your cluster is provisioned. |
| eksctl | AWS Kubernetes CLI that leverages the AWS CLI, Kubernetes API, and kubectl to automate provisioning of all required AWS resources. | Designed to serve only as a CLI for AWS EKS. |
| Helm | Kubernetes package manager for deploying applications inside an existing Kubernetes cluster. | Not intended to provision a cluster, and requires installing server-side code in the cluster. |
| kOps | Kubernetes cluster management: a pre-EKS tool for easily setting up a new K8s cluster on AWS. | Overlaps with EKS’s core functionality. |
| Chef, Puppet, Ansible, SaltStack | Widely used infrastructure configuration management tools. | Not intended for provisioning infrastructure as code; not specialized in either AWS or Kubernetes. |
So, where does that leave us? Well, like most things, there’s some important nuance to understand about these tools—namely, the category of infrastructure tooling they fall under. Knowing each tool’s fundamental approach to solving a problem can help you better select the right tool to create the best (most scalable) solution for your specific use case. For example, a knife can also be used as a screwdriver—but it’s far from ideal.
Let’s take a look at the main categories these tools fall under.
To keep things simple, we can group the tools above into three categories:
Terraform and CloudFormation are both infrastructure-as-code provisioning tools that use declarative definitions and stateful version control to provision a complex environment in a desired configuration state. This means you can use either as a single tool for all of your provisioning needs. kOps also falls into this category, although it is limited to K8s cluster management.
Helm, on the other hand, is a package manager for applications configured to run on a Kubernetes cluster. This means that, with an impressive list of packaged applications known as charts available in its artifact hub, you can quickly deploy applications on Kubernetes in the same way that Yum or Apt are used with Linux. Helm’s strength lies in its customized charts used for managing your specific application environments on K8s. However, it’s not intended to provision datacenter infrastructure or general cloud services.
Chef, Puppet, Ansible, and SaltStack are first-generation tools that were designed to automate infrastructure management when architectures were mutable. In a mutable architecture, you deploy your systems and keep changing their configuration over time (hence “configuration management”). Modern architecture tends to be immutable: it is not designed to evolve over time, but to be wiped away and replaced with an entirely new system.
It is common for an administrator to use tools in all three categories. For example, an administrator might:
This brings us to another common question: What are the benefits of using AWS EKS when you can provision it on AWS EC2s or VMware VMs?
Even though AWS isn’t shy about promoting the many virtues of their offering, in our view, it boils down to a few essentials:
| AWS EKS Feature | Benefit |
|-----------------|---------|
| Identity and Access Management (IAM) | IAM users and roles help you manage secure access to K8s objects |
| Multi-AZ control plane | The EKS control plane is provisioned across Availability Zones for resilience |
| Node auto scaling | Automates the provisioning and termination of nodes based on workload |
| EBS Fast Snapshots | A quicker way to back up both your configuration (etcd) and data volumes |
| AWS Fargate | Eliminates the need for you to manage Kubernetes worker nodes and pods |
| EKS Anywhere | Allows you to extend a Kubernetes cluster to VMs in your data center |
Now that you have some context, let’s set up an EKS cluster. The process is fairly simple. The first prerequisite is to have Terraform already installed; you can follow HashiCorp’s installation documentation to install Terraform on your server.
The second requirement is to configure AWS credentials on your system. To configure AWS credentials, you will need an IAM user or role with the required permissions.
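Once credentials exist (for example, via environment variables or `~/.aws/credentials`), the Terraform AWS provider picks them up automatically. A minimal provider block might look like the following sketch; the region and profile name are illustrative:

```hcl
# The provider discovers credentials from, in order: static configuration,
# the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables,
# the shared credentials file (~/.aws/credentials), then instance profiles.
provider "aws" {
  region  = "us-east-1" # illustrative region
  profile = "eks-admin" # optional: a named profile from ~/.aws/credentials
}
```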
Once you have Terraform and AWS Credentials configured on your system, you are all set to create an EKS Cluster on AWS using Terraform.
Below is a Terraform configuration file with a few preset sample options:
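The sketch below uses the community `terraform-aws-modules/eks` module from the Terraform registry. The cluster name, VPC and subnet IDs, and version pins are illustrative placeholders, and exact input names vary between module versions:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # illustrative region
}

# The VPC and subnets are assumed to already exist; the IDs are placeholders.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "demo-eks"
  cluster_version = "1.29"

  vpc_id     = "vpc-0123456789abcdef0"
  subnet_ids = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

  # A managed node group for the worker nodes.
  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}
```

With this file saved, a typical run is `terraform init` to download the module and provider, `terraform plan` to preview the changes, and `terraform apply` to create the cluster.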
And that’s it! You’ve provisioned your first EKS Cluster. You can review additional related reading materials, such as:
Since you are provisioning a K8s cluster in AWS, you might as well take full advantage of the AWS services available. Use S3 replication to safeguard your Terraform state file. This prevents you from having to recreate your entire cluster if you were to ever lose access to your state file. S3 replication allows you to replicate the file within or across Availability Zones to help avoid file corruption or a disaster.
Remote state locking helps prevent concurrent operations on your resources. If you have configured the S3 backend to store a state file and attempt to perform the update operation on the same EKS cluster concurrently, locking will prevent other administrators in your environment from corrupting your state file.
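Both practices come together in the `s3` backend configuration: the bucket holds the state (with replication configured on the bucket itself), and a DynamoDB table provides the lock. The bucket and table names below are illustrative:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"  # illustrative; enable replication on this bucket
    key            = "eks/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"     # DynamoDB table used for state locking
  }
}
```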
You may have multiple EKS clusters serving different purposes (e.g., a production vs. a development environment). In such a scenario, tagging EKS clusters helps organize your cluster based on nodes and user profiles, and also helps with chargeback when used in conjunction with Cost Allocation tags and AWS Cost Explorer. You may already be using Labels within your K8s cluster for a more granular organization of your workload resources and K8s objects, in which case AWS tagging is simply a way for you to track your EKS at a cluster, node, and user profile levels.
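One low-effort way to tag consistently is the AWS provider’s `default_tags` block, which applies a tag set to every taggable resource the provider creates. The tag keys and values here are illustrative:

```hcl
# Every resource created through this provider inherits these tags,
# which keeps Cost Allocation tagging consistent across the cluster.
provider "aws" {
  region = "us-east-1"
  default_tags {
    tags = {
      Environment = "production"
      CostCenter  = "platform-eng"
      Project     = "demo-eks"
    }
  }
}
```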
Free Proof of Concept implementation if you run more than 5,000 containersRequest PoC
As part of its well-architected framework, AWS has been promoting a best practice of using multiple AWS accounts to organize your resources. For example, you would deploy an EKS cluster in your production account and another in your development account, or you may separate your accounts to align with cost centers. Fortunately, supporting multiple “providers” is a strength of Terraform. You would then create an IAM role for each of your administrators for a particular account and use them in your Terraform script, similar to the example below.
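One common pattern is to declare one aliased provider per account, each assuming an IAM role in that account, then pin resources to an alias. The account IDs and role names below are hypothetical:

```hcl
# One provider alias per AWS account; each assumes a role in that account.
provider "aws" {
  alias  = "production"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformAdmin" # hypothetical
  }
}

provider "aws" {
  alias  = "development"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/TerraformAdmin" # hypothetical
  }
}

# Resources and modules then select an account explicitly via the alias.
module "prod_eks" {
  source    = "terraform-aws-modules/eks/aws"
  providers = { aws = aws.production }
  # remaining inputs as in the earlier cluster configuration
}
```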
Terraform offers a feature named a “module” that allows you to organize your resources into module blocks, which can be called by other modules to help avoid duplication in your configuration. Another advantage of organizing your configuration into modules is that it aligns with the Terraform registry’s approach of offering third-party configurations in the form of modules. The registry also contains plugins called “Terraform providers” that add resource types (such as AWS VPC).
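Local and registry modules are called the same way; only the `source` differs. In this sketch, `./modules/eks-cluster` is a hypothetical local module, while the VPC module comes from the registry:

```hcl
# A hypothetical local module wrapping your own cluster resources.
module "prod_cluster" {
  source       = "./modules/eks-cluster"
  cluster_name = "prod-eks"
}

# A third-party module from the Terraform registry, called the same way.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"
}
```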
Some of your infrastructure components (such as AWS VPC, ELB, or databases) don’t change often in configuration, while others (such as containers) change frequently. Using multiple state files to separate the control of their respective configuration has a couple of advantages:
This is commonly performed using Terraform Workspaces.
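Each workspace keeps its own state file, so one configuration can drive several isolated environments. The `terraform.workspace` value can then parameterize resource names; the naming scheme here is illustrative:

```hcl
# Workspaces are created and selected from the CLI:
#   terraform workspace new dev
#   terraform workspace select dev
# Each workspace stores a separate state file for the same configuration.
locals {
  # e.g. "demo-eks-dev" in the dev workspace, "demo-eks-prod" in prod
  cluster_name = "demo-eks-${terraform.workspace}"
}
```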
Kubernetes provides StatefulSets to enable the implementation of stateful applications that reference common data as containers are added to scale horizontally via the EKS Cluster Autoscaler. This approach in turn requires persistent storage of data, such as a cache or a database. AWS supports this use case with the EBS Container Storage Interface (CSI) driver, which was still in beta at the time of writing and is a key ingredient for deploying stateful applications on AWS EKS.
There are many ways for you to pay less for the EC2 instances that support your K8s cluster. One common option is to obtain discounts of up to 72% by committing to a certain volume of usage at least one year in advance via Savings Plans or Reserved Instances.
Another approach is to use AWS Spot Instances for discounts of up to 90%, which is especially handy for workloads that can tolerate delay, such as batch jobs. You must remember that the nodes may be taken away with a mere two minutes’ notice. So remember to increase your spot instance bid to decrease the chance of being outbid, and use Mixed Instance Policies; this increases your chance of securing a spot instance in case of a shortage of certain types on the spot market. The spot instance selector can help you programmatically find similar types in the vast, expanding universe of EC2 types and sizes.
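In EKS, a managed node group can request spot capacity directly via `capacity_type`, with several similar instance types listed to diversify. The cluster name, role ARN, and subnet ID below are illustrative placeholders:

```hcl
resource "aws_eks_node_group" "spot" {
  cluster_name    = "demo-eks" # illustrative
  node_group_name = "spot-workers"
  node_role_arn   = "arn:aws:iam::111111111111:role/eksNodeRole" # illustrative
  subnet_ids      = ["subnet-0123456789abcdef0"]

  # Request spot capacity instead of on-demand.
  capacity_type = "SPOT"

  # Listing several similar types raises the odds of obtaining capacity
  # when one type runs short on the spot market.
  instance_types = ["m5.large", "m5a.large", "m4.large"]

  scaling_config {
    min_size     = 1
    max_size     = 10
    desired_size = 2
  }
}
```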
For a more sophisticated approach to sizing and cost management across multiple cloud providers and data centers, you may also use third-party tooling vendors who offer policy-driven optimization-as-code functionalities that integrate with Terraform to right-size your environment starting from your cluster nodes all the way up to your containers.
We discussed the idea of auto scaling cluster nodes, but what about scaling pods? A lack of attention to pod-level rightsizing ultimately results in unnecessary wasted nodes via the cluster autoscaler. Kubernetes provides the Vertical Pod Autoscaler (VPA), which can adjust pod resource requests up and down based on historical CPU and memory usage. It can also automatically keep CPU and memory resource limits proportional to resource requests. This corrects over-requesting, which wastes money, as well as under-requesting, which can cause performance bottlenecks. VPA is supported in AWS EKS by installing the Metrics Server. Once enabled, you must be careful not to let Terraform override VPA’s functionality if both try to drive the resource requirement settings simultaneously.
Public cloud, Infrastructure as Code (IaC) and Kubernetes are the three most important infrastructure architecture trends in recent years. Fortunately, the technologies have sufficiently matured to automate mundane tasks and play nicely together using Terraform and AWS EKS. As your environment scales over time, a common challenge becomes to size your containers, nodes and clusters accurately to ensure high performance while avoiding waste, which is where third-party vendors come in to help.