Optimization for Container Platforms

Continuously Automate & Optimize Your Container Environments

Automated Continuous Optimization Maximizes Your Container Environments

Container infrastructure adoption is growing rapidly, and Kubernetes is becoming the de facto standard for container scheduling, both on-premises and in cloud environments. The promise of highly scalable, elastic applications, combined with high portability and rapid app updates, is fueling this massive technological shift.

But many organizations overestimate the capabilities of containers and assume that many of the challenges present in legacy environments no longer apply. Although containers simplify some aspects of operations, significant optimization is still required to ensure they run safely and efficiently, especially in production. Densify’s machine-learning-powered optimization is designed to do exactly this.

Manifest optimization for Kubernetes & other container platforms

The Challenge of Optimizing Container Resources

Containers can be stacked on physical servers, VMs, or cloud instances, much as traditional virtual machines run on hypervisor hosts. But unlike hypervisors, container schedulers are far less sophisticated at overcommitting resources, and many container environments run at low levels of utilization: CPU and memory may be assigned to running containers but never actually used. To solve this, Densify leverages advanced machine learning and predictive analytics to optimize container resources at several levels.

This container cluster shows very inefficient use of resources

Container Resource Optimization

One major cause of inefficiency is that developers must explicitly specify the CPU and memory requests and limits for each container. These are often specified in a template or manifest, such as a Terraform file, and give the Kubernetes scheduler an indication of how many resources the container is expected to consume. But determining these values is often very difficult for developers and DevOps teams: there may be little visibility into the actual operational patterns of the containers, and even where there is, the values are often based on peaks or worst-case scenarios. This causes resources to go unused, and because container schedulers are not good at overcommitting resources, the slack isn’t picked up by other containers. What is optimal for an individual container may not be optimal for all containers in a cluster, and the result is extremely low cluster utilization.

Densify solves this by learning the activity patterns of the containers and pods, and scientifically determining request and limit values that give each container what it needs, while at the same time optimizing the overall density of the container environment. By gathering granular container data from frameworks like Prometheus, learning the patterns of activity, and applying sophisticated policies to generate safe recommendations, Densify can produce very precise, automatable recommendations.

Analyzing container usage patterns using detailed optimization policies

These recommendations can have a drastic impact on container efficiency. When performed at scale, the Densify analysis will often reduce container request values by over 40%, which has a direct impact on the number of nodes required to host the workload.
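The node-count impact follows from simple arithmetic: if nodes are sized to hold the aggregate of the scheduled requests, shrinking requests shrinks the node pool. The capacities and totals below are made-up numbers for illustration.

```python
import math

def nodes_needed(total_request_m, node_capacity_m):
    # Simplified model: the nodes required to schedule the aggregate
    # CPU requests, ignoring per-pod bin-packing constraints.
    return math.ceil(total_request_m / node_capacity_m)

before = nodes_needed(total_request_m=400_000, node_capacity_m=16_000)
after = nodes_needed(total_request_m=400_000 * 0.6, node_capacity_m=16_000)  # requests cut 40%
print(before, after)  # 25 nodes before, 15 after
```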

Example Kubernetes optimization analysis showing 36% reduction in CPU resources

Node Resource Automation

Once the container resource allocations are aligned with actual consumption, Densify will also optimize the nodes the containers are running on in order to make sure the underlying resources are consistent with workload demands. This process is supported for both on-prem nodes and cloud-based deployments.

Node Optimization in the Cloud

Cloud-based container deployments are typically hosted on scalable node groups, such as AWS Auto Scaling groups (ASGs). In these deployments, the node types may not match the actual work being done, and Densify will generate recommendations to change the nodes to match the workload. For example, some container workloads may be memory intensive, and running them on a general-purpose instance type may be less efficient than running them on a memory-optimized or burstable instance. Densify will also recommend different min and max values for the group based on the workload patterns. This will often shave 30% or more off the cost of the nodes in use, while improving elasticity and app performance.
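One simplified way to picture the node-type matching (not Densify’s actual analysis, which weighs many more factors) is to compare a workload’s memory-to-vCPU ratio against candidate instance families. The instance names below are real AWS types with their published vCPU and memory specs, but the selection logic is a sketch.

```python
# Real AWS instance specs (vCPU / GiB); the selection heuristic is illustrative.
INSTANCE_TYPES = {
    "m5.xlarge": {"vcpu": 4, "mem_gib": 16},  # general purpose
    "r5.xlarge": {"vcpu": 4, "mem_gib": 32},  # memory optimized
    "c5.xlarge": {"vcpu": 4, "mem_gib": 8},   # compute optimized
}

def best_fit(peak_vcpu, peak_mem_gib):
    """Pick the type whose memory:vCPU ratio best matches the workload."""
    target = peak_mem_gib / peak_vcpu
    return min(
        INSTANCE_TYPES,
        key=lambda t: abs(INSTANCE_TYPES[t]["mem_gib"] / INSTANCE_TYPES[t]["vcpu"] - target),
    )

# A memory-heavy workload maps to the memory-optimized family:
print(best_fit(peak_vcpu=3.5, peak_mem_gib=26))
```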

Example AWS Auto Scaling group optimization analysis

Node Optimization On-Premises

Because on-prem nodes are not as elastic as cloud infrastructure, the focus is typically on ensuring there is sufficient capacity to meet peak requirements and, when the nodes are virtual machines, on optimizing the container workloads within the broader virtual environment to achieve optimal workload density. In this context, Densify analyzes several critical factors that impact container operation:

  • Cumulative resource allocations
  • Actual utilization and contention probability
  • Service tiering and fit-for-purpose clusters
  • Managing overlapping quotas (“double overcommit”)
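The first two factors can be sketched as a simple capacity check: compare cumulative requests against node capacity to detect overcommitment, and compare actual usage against capacity to gauge contention risk. The thresholds and data below are illustrative only.

```python
def capacity_check(containers, node_capacity_m):
    """Sketch of the first two checks: cumulative allocation vs. capacity,
    and contention risk based on actual utilization (illustrative model)."""
    total_request = sum(c["request_m"] for c in containers)
    total_used = sum(c["used_m"] for c in containers)
    return {
        "allocation_ratio": total_request / node_capacity_m,  # > 1.0 means overcommitted
        "utilization_ratio": total_used / node_capacity_m,    # near 1.0 means contention risk
    }

pods = [
    {"request_m": 2000, "used_m": 300},
    {"request_m": 4000, "used_m": 900},
    {"request_m": 3000, "used_m": 400},
]
# Requests are overcommitted even though actual use is low — the typical
# symptom the bullet list above describes:
print(capacity_check(pods, node_capacity_m=8000))
```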


Because of the precision of Densify’s optimization recommendations, it is possible to close the loop on execution and achieve a high degree of automation. A highly differentiated feature of Densify is its ability to integrate with automation frameworks such as HashiCorp Terraform, AWS CloudFormation, and Red Hat Ansible, creating optimization as code. By embedding the machine learning recommendations directly in the app definitions, the apps effectively optimize themselves based on learned behavior.

This next-generation automation strategy allows the optimization to be initiated from the source files themselves, not from external orchestration solutions, which can conflict with the deployment automation strategies used in container environments. Densify provides a rich set of APIs and integration modules to easily enable this automation strategy.
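The pattern can be illustrated with a small sketch: instead of hard-coding resource values in the app definition, resolve them at deploy time from a repository of analytics results. The recommendation schema, app name, and lookup mechanism below are hypothetical, not Densify’s actual API.

```python
import json

# Hypothetical machine-readable recommendations, as might be pulled from an
# analytics repo at deploy time; the schema is illustrative, not Densify's.
RECOMMENDATIONS = json.loads("""
{"payment-service": {"cpu_request": "250m", "mem_request": "512Mi"}}
""")

def render_resources(app_name, fallback):
    """Resolve resource specs from the recommendations repo at deploy time,
    falling back to hard-coded defaults if no analysis exists yet."""
    rec = RECOMMENDATIONS.get(app_name, fallback)
    return {"requests": {"cpu": rec["cpu_request"], "memory": rec["mem_request"]}}

print(render_resources("payment-service",
                       {"cpu_request": "1000m", "mem_request": "2Gi"}))
```

The same lookup could be expressed as a Terraform data source or an Ansible variable; the key point is that the source files reference the analytics results rather than frozen numbers.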

Terraform-based Container Optimization-as-Code showing hard-coded resource specifications being replaced with dynamic references to Densify analytics results

DevOps Integration

Taking this automation strategy further, optimization as code becomes a key strategy in DevOps environments, where highly automated release cycles render traditional optimization methods ineffective. Any changes made outside the DevOps toolchain will quickly be undone the next time a release occurs, so automation in these environments requires a new way of thinking.

Densify has an innovative solution for this as well, and has coined the sequence Continuous Integration, Continuous Delivery, Continuous Optimization, or simply CI/CD/CO. In this paradigm, Densify becomes an integral part of the toolchain, providing closed-loop optimization by embedding optimization hooks in the upstream process. To enable this, Densify automatically generates both machine-readable and human-readable output that populates a repository of machine learning artifacts, and this repo is made available to the entire DevOps toolchain. The machine-readable output drives the closed-loop automation, and the human-readable output enables approval processes and app-owner buy-in.
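A CO stage in a pipeline might look something like the sketch below: the machine-readable recommendation updates the manifest automatically, but only once the human-readable report has been approved. The field names and gate logic are hypothetical, for illustration only.

```python
def co_step(manifest, recommendation, approved):
    """Sketch of a Continuous Optimization gate in a CI/CD pipeline:
    machine-readable recommendations update the manifest automatically,
    but only after the human-readable report has been approved."""
    if not approved:
        return manifest  # leave as-is until the app owner signs off
    updated = dict(manifest)
    updated["cpu_request_m"] = recommendation["cpu_request_m"]
    return updated

manifest = {"app": "web", "cpu_request_m": 1000}
rec = {"cpu_request_m": 400}
print(co_step(manifest, rec, approved=True))
```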

The Holy Grail: CI/CD/CO (Continuous Optimization) for Containers enabled by Densify

Software License Control

As organizations deploy containers into production, new challenges often arise, many of which were not considered at the outset of container adoption. One of these is optimizing the use of licensed software: specifically, controlling container placement and resource consumption in a way that ensures licensing policies are met and costs are minimized. Unless an organization plans to completely eliminate the use of licensed software as it transitions to containers (which is extremely unlikely for most businesses), this is a critical consideration.

Densify has a proven track record of advanced software license control in virtual environments, and this same optimization approach translates directly into container environments, where licensing is often enforced at the node level and workload placement and resource allocation are critical.

Security & Compliance

Another area where containers don’t make problems magically disappear is in the enforcement of security and compliance policies. If a workload is subject to PCI, HIPAA or even just corporate governance policies, it is highly unlikely that these requirements will go away simply because the applications are now hosted in containers. This means that the containerized workloads need to be subjected to the same controls as their virtual cousins. Densify’s advanced policy models and optimization analysis also address this challenge, bringing the same level of enterprise rigor to container environments as is expected from the legacy environments they replace.

Advanced security and compliance policies in Densify

Workload Routing

Any organization that spans multiple physical locations or hosting providers needs to have clear policies governing what workloads run where. This includes the analysis of security and compliance policies, data residency, jurisdictional requirements, technical hosting capabilities, service proximity, resiliency, and other policies. Again, just because an application is hosted in containers does not make these requirements go away, and any organization with more than one Kubernetes cluster needs to adopt an automated mechanism to route workloads. Densify is also a leader in this field, and provides automated workload routing analytics for many leading organizations.
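A minimal sketch of policy-based routing, under assumed policy attributes (region, compliance certifications, utilization): filter clusters by hard constraints, then pick the best fit among the survivors. The cluster names and tiebreak rule are invented for illustration.

```python
def route_workload(workload, clusters):
    """Sketch of policy-based routing: filter clusters by hard constraints
    (data residency, compliance), then prefer the best technical fit."""
    eligible = [
        c for c in clusters
        if workload["region"] == c["region"]
        and set(workload.get("compliance", [])) <= set(c["certifications"])
    ]
    # Prefer the least-utilized eligible cluster (illustrative tiebreak).
    return min(eligible, key=lambda c: c["utilization"])["name"] if eligible else None

clusters = [
    {"name": "onprem-eu", "region": "eu", "certifications": ["PCI"], "utilization": 0.7},
    {"name": "cloud-eu", "region": "eu", "certifications": [], "utilization": 0.4},
    {"name": "cloud-us", "region": "us", "certifications": ["PCI"], "utilization": 0.3},
]
# A PCI workload with EU data residency has only one valid home:
print(route_workload({"region": "eu", "compliance": ["PCI"]}, clusters))
```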

Densify’s hybrid workload routing analysis

Performance Optimization

Performance optimization is also a key benefit of ML-based container optimization. At the container level, this not only ensures containers run as efficiently as possible but also ensures they get the resources they need when they need them. By understanding the entire body of workload and the patterns of activity of each container and pod, contention events can be reduced and out-of-memory conditions avoided. This is particularly important for apps that don’t respond well to being killed, such as legacy apps that have not yet been converted to microservices.

Densify also performs performance optimization at the node level: by matching node types to workload demands, containers can be assured the optimal CPU, memory, and I/O resources for their needs. But optimization goes far beyond this. As organizations grow their container footprints, they will often set up different cluster configurations with different design points or service tiers, making them “fit for purpose” for different types of workloads. For example, it may make sense to set up “CPU-intensive” and “memory-intensive” clusters, and route workloads into them based on their resource needs.

It is even possible to analyze an application’s sensitivity to different resource types, such as more cores vs. larger cores, further optimizing the performance of each application. Densify enables these parameters to be scientifically controlled, and uses performance benchmarks to automatically normalize data between different CPU architectures, enabling predictive models to be correlated with reality in order to optimize app performance.
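The benchmark normalization step can be sketched as a per-core conversion ratio between architectures. The architecture names and scores below are invented placeholders, not real benchmark results.

```python
# Hypothetical per-core benchmark scores used as normalization factors
# (illustrative values, not real benchmark results).
BENCHMARK_SCORE = {"arch_a": 100.0, "arch_b": 130.0}

def normalized_cpu_demand(used_cores, source_arch, target_arch):
    """Convert observed CPU demand on one architecture into the equivalent
    demand on another, using per-core benchmark ratios."""
    return used_cores * BENCHMARK_SCORE[source_arch] / BENCHMARK_SCORE[target_arch]

# 6.5 cores consumed on a slower CPU need fewer cores of a faster one:
print(normalized_cpu_demand(6.5, "arch_a", "arch_b"))  # 5.0
```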

Container Migration Analysis

Not all organizations can afford to rewrite their applications from scratch to run in container environments, and the good news is that containers can also be used to host monolithic and/or legacy workloads, as long as they are managed properly. In these cases, it is absolutely critical to consider the factors described above, including workload pattern analysis, cluster-level placement, software licensing requirements, and compliance requirements. For existing applications, it is critical that the migration of the workloads into containers be done scientifically and accurately.

To address this need, Densify is able to perform advanced, predictive what-if analysis to model a variety of transformation scenarios, including lift-and-shift VM-to-container migration, EC2-to-container migration, and a variety of others. In this process, Densify uses detailed pattern analysis to model the interaction between container workloads and the “dovetailing” effect for different combinations of workloads, giving a precise, quantifiable assessment of hosting alternatives.
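The dovetailing effect is easy to demonstrate with toy time-series data: workloads whose busy periods don’t coincide can share capacity, so the combined peak is lower than the sum of the individual peaks. The workload samples below are illustrative.

```python
def combined_peak(*workload_series):
    """Sketch of the 'dovetailing' effect: the peak of the summed time
    series, which is what shared capacity must actually accommodate."""
    return max(sum(samples) for samples in zip(*workload_series))

day_app = [8, 8, 7, 2, 1, 1]    # busy during the day (illustrative cores)
night_app = [1, 1, 2, 7, 8, 8]  # busy at night

# Sizing for each peak separately vs. sizing for the dovetailed combination:
print(max(day_app) + max(night_app), combined_peak(day_app, night_app))  # 16 9
```

Scaling this comparison across every candidate combination of workloads and node types is what turns a rough migration guess into a quantifiable hosting assessment.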

Comparison of EC2-hosted vs Docker-on-EC2 scenarios for a set of VM workloads

Once an optimal strategy has been determined, Densify will also provide detailed resource specifications, both at the container-level and at the node level. This not only de-risks the migration process, but it also feeds directly into steady-state optimization by enabling optimization as code to be embedded into the new environments from the start. It also enables the generation of an accurate business case for moving to containers, giving far deeper insight into the target operational state.

EC2-to-container migration analysis showing workload density predictions for different node types