Automated Resource Optimization for Kubernetes-Orchestrated Container Environments

In this video, see how Densify helps you manage resource allocation in your Kubernetes-orchestrated container environment automatically, so you don’t have to worry about container sizing and can focus on app development.

Densify Automates the Optimization of Your Container Resources

With Densify container resource optimization, you get:

Improved Performance
Your apps continuously receive the right resources
Increased Utilization & Resource Efficiency
Pay only for the resources your apps actually require
Continuous Automation & Optimization
Augment your DevOps processes by integrating Densify recommendations upstream into your CI/CD pipeline
Worry-Free Container Sizing
We shoulder the burden of selecting the correct resources and limits, so you and your team can focus on application development
See Automated Container Resource Optimization in Your Own Environment
Resource optimization across Kubernetes, OpenStack, Docker, and more

Personalized Container Resource Management Demo

Ready to see the right picks for your infrastructure?

Request a 1:1 Demo

Complete Video Transcript

First, we’ll look at the Summary page of the Densify Kubernetes Container Optimization Dashboard. As you can see, this environment has 2 clusters, 25 containers, and 22 pods. Having collected raw utilization data from Prometheus at 5-minute intervals, Densify leverages machine learning to analyze container usage patterns and recommend the best container resource allocations.
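For reference, a 5-minute collection interval corresponds to a Prometheus scrape interval like the one in this minimal prometheus.yml sketch (the job name and service-discovery config are illustrative, not taken from the demo environment):

```yaml
# Minimal prometheus.yml sketch -- job name and discovery config are illustrative
global:
  scrape_interval: 5m   # matches the 5-minute collection interval described above
scrape_configs:
  - job_name: "kubernetes-cadvisor"
    kubernetes_sd_configs:
      - role: node
```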

This pie chart shows that some containers are being downsized to reduce resource waste, some are being upsized to ensure adequate resources are allocated, and some are being resized, meaning they might be downsized in memory and upsized in CPU, or the other way around.

Overall, 45% of the CPU requests and 36% of the memory requests in this environment can be reclaimed, meaning the whole infrastructure can safely be reduced by more than a third.

These values can be broken down by cluster or namespace, or presented for individual containers.

Now, let’s look at the three containers at the top: nginx1, nginx2, and webserver. Two of them are being downsized, and one is being upsized.

For example, nginx2’s memory request is recommended to be upsized from 256 to 320, and nginx1’s CPU request and limit are both recommended to be downsized from 100 to 50 millicores. This data can be consumed as a report, but more importantly, it is available through the Densify APIs to drive automated, continuous optimization of your container resources.

Here is a sample Densify API output in the Terraform format showing the same recommendations for these three containers.
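The sample output itself isn’t reproduced here; as a rough sketch, recommendations in Terraform format could look like the following (the variable names, structure, and memory units are assumptions for illustration, not Densify’s actual API schema):

```hcl
# Hypothetical sketch -- names and structure are illustrative,
# not Densify's actual Terraform-format output.
variable "nginx1_cpu_request" {
  default = "50m"    # downsized from 100m
}

variable "nginx1_cpu_limit" {
  default = "50m"    # downsized from 100m
}

variable "nginx2_mem_request" {
  default = "320Mi"  # upsized from 256Mi (units assumed)
}
```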

Now, let’s go to the Google Kubernetes Engine dashboard, where we can see these three containers running before optimization.

Opening nginx1’s YAML file, we can see that the CPU request and limit are both currently set to 100 millicores, and nginx2’s memory request is currently 256.
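A container spec with values like these might look like the following minimal sketch (the image and surrounding fields are illustrative, not taken from the demo environment):

```yaml
# Minimal sketch of nginx1 before optimization -- image and metadata are illustrative
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
spec:
  containers:
    - name: nginx1
      image: nginx
      resources:
        requests:
          cpu: 100m   # Densify recommends downsizing to 50m
        limits:
          cpu: 100m   # Densify recommends downsizing to 50m
```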

If we now run terraform apply, Terraform automatically looks up the optimal resource allocation values from the Densify API output referenced in the Terraform file.

When we confirm with yes, Terraform executes and makes the changes in the container resource specification manifests upstream in the DevOps process.

Now let’s go back to the Google Kubernetes Engine dashboard and refresh. We can see that nginx1 is now running at the optimal size, with both its CPU request and limit reduced to 50 millicores (as opposed to 100 millicores before).

And nginx2 is also running optimally and safely, with its memory request increased to 320 (as opposed to 256 before).

With Densify, your entire Kubernetes-orchestrated environment is continuously and automatically optimized, so you can focus on app development while your containers run and scale safely and efficiently. To learn more, book a personalized demo.