How to Choose the Right Amazon EC2 Instance Type
AWS offers hundreds of EC2 instance types across five instance families, each with a different resource and performance focus.
Below, Densify’s experts summarize the strengths of the most commonly used EC2 instance types and offer guidelines for identifying the EC2 instance that provides the best resources for your application workload at the lowest price.
Amazon Elastic Compute Cloud (EC2) is an AWS service offering that delivers secure and scalable cloud compute capacity.
Each of your EC2 instances is a virtual server providing compute power capable of running apps within the AWS public cloud.
EC2 instances are launched from an Amazon Machine Image (AMI)—an AWS template that defines the operating system and software environment for one or more EC2 instances of one or more instance types.
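As an illustration, launching an instance programmatically comes down to pairing an AMI with an instance type. A minimal boto3 sketch, in which the AMI ID and tag values are placeholders rather than real resources:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one t3.medium from a placeholder AMI; the image defines the OS and
# software stack, while the instance type defines the CPU/memory/network
# resources behind it.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID -- replace with your own
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "example-web-server"}],  # placeholder tag
    }],
)

print(response["Instances"][0]["InstanceId"])
```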
Each instance type delivers a mix of CPU, memory, storage, and networking capacity, across one or more size options, and should be carefully matched to your workload’s unique demands.
AWS groups instances into families that offer distinct capabilities for your workloads: general purpose, compute optimized, memory optimized, accelerated computing, and storage optimized.
The elastic designation in Elastic Compute Cloud refers to the ability to increase your EC2 instance footprint on demand—up or down—manually, or automatically through Auto Scaling.
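As a quick illustration of that elasticity, the sketch below uses boto3 to create an Auto Scaling group that can grow and shrink between one and six instances; the launch template name and subnet IDs are placeholders for illustration only:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Create a group that scales between 1 and 6 instances of whatever instance
# type the (placeholder) launch template specifies.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="example-web-asg",
    LaunchTemplate={"LaunchTemplateName": "example-web-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123abcd,subnet-0456efgh",  # placeholder subnets
)
```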
At the time of writing, there are hundreds of EC2 instance types available across roughly 25 regions and more than 70 Availability Zones—yielding millions of possible permutations to choose from when optimizing EC2 instance selection for your workload.
So, how do you choose?
Although other public cloud providers may use different groupings and terminology for their compute offerings, the general concepts outlined below still apply.
General purpose instances are designed for scalable services, such as web servers, microservices, and distributed data stores. Within the general purpose family there are A1, T4g, T3, T3a, T2, M6g, M5, M5a, M5n, and M4 instance types.
The A1 type is used for Arm processor-based workloads.
Instance Size | vCPU | Mem (GiB) | Storage | Network Performance (Gbps) |
---|---|---|---|---|
a1.medium | 1 | 2 | EBS | Up to 10 |
a1.large | 2 | 4 | EBS | Up to 10 |
a1.xlarge | 4 | 8 | EBS | Up to 10 |
a1.2xlarge | 8 | 16 | EBS | Up to 10 |
a1.4xlarge | 16 | 32 | EBS | Up to 10 |
a1.metal | 16 | 32 | EBS | Up to 10 |
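Because A1 instances (and the Graviton2-based T4g, M6g, and C6g types covered later) run Arm processors rather than x86, your AMI must be built for the matching architecture. A minimal boto3 sketch for checking what a given type supports, assuming the standard DescribeInstanceTypes response fields:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up which CPU architectures a few candidate types support.
resp = ec2.describe_instance_types(InstanceTypes=["a1.large", "t4g.medium", "m5.large"])

for itype in resp["InstanceTypes"]:
    name = itype["InstanceType"]
    arches = itype["ProcessorInfo"]["SupportedArchitectures"]
    print(f"{name}: {', '.join(arches)}")   # e.g. a1.large: arm64
```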
Amazon EC2 T4g instances are EBS-optimized, support enhanced networking, and are powered by custom AWS Graviton2 processors built on 64-bit Arm Neoverse cores. T4g instances deliver up to 40% better price/performance than T3 instances and suit a broad set of burstable general purpose workloads, including microservices, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical apps.
T4g instances earn CPU credits when workloads operate below the baseline threshold. Each accumulated CPU credit entitles one vCPU to run at 100% utilization for one minute. In unlimited mode, which is the default, T4g instances can burst above baseline for as long as required; if average utilization stays above the baseline for sustained periods, the surplus credits incur an additional charge.
Instance Size | vCPU | Memory (GiB) | Baseline Performance / vCPU | CPU Credits Earned / Hr | Network Burst Bandwidth | EBS Burst Bandwidth |
---|---|---|---|---|---|---|
t4g.nano | 2 | 0.5 | 5% | 6 | Up to 5 Gbps | Up to 2,085 Mbps |
t4g.micro | 2 | 1 | 10% | 12 | Up to 5 Gbps | Up to 2,085 Mbps |
t4g.small | 2 | 2 | 20% | 24 | Up to 5 Gbps | Up to 2,085 Mbps |
t4g.medium | 2 | 4 | 20% | 24 | Up to 5 Gbps | Up to 2,085 Mbps |
t4g.large | 2 | 8 | 30% | 36 | Up to 5 Gbps | Up to 2,780 Mbps |
t4g.xlarge | 4 | 16 | 40% | 96 | Up to 5 Gbps | Up to 2,780 Mbps |
t4g.2xlarge | 8 | 32 | 40% | 192 | Up to 5 Gbps | Up to 2,780 Mbps |
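To make the credit arithmetic above concrete, here is a minimal sketch in plain Python (no AWS calls; the t4g.medium figures come from the table above):

```python
# t4g.medium figures from the table above: 2 vCPUs, 20% baseline per vCPU,
# 24 CPU credits earned per hour.
vcpus = 2
baseline_per_vcpu = 0.20
credits_per_hour = 24

# One CPU credit = one vCPU at 100% utilization for one minute.
# Running both vCPUs flat out costs 2 credits per minute...
burn_per_minute = vcpus * 1.0
# ...while credits keep accruing at the baseline rate (2 * 0.20 = 0.4 per
# minute), so the net drain on the credit balance is:
net_drain_per_minute = burn_per_minute - vcpus * baseline_per_vcpu   # 1.6

# Starting from a full hour's worth of banked credits:
banked_credits = credits_per_hour
burst_minutes = banked_credits / net_drain_per_minute
print(f"~{burst_minutes:.0f} minutes of full-CPU burst per banked hour")  # ~15
```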
T3 instances, and their AMD-based variant T3a, are configured with a balance of CPU, memory, and networking resources. There are seven sizes in each, ranging from t3.nano with two vCPUs and 0.5 GiB of memory to t3.2xlarge with eight vCPUs and 32 GiB of memory.
T3 instances launch in unlimited mode by default, which lets CPU bursting continue without limit. This helps prevent CPU starvation, but it also leaves customers at risk of paying more than they need to for the same level of CPU resources. T3 instances run on the Nitro System, which enables network and EBS bursting as well.
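If you would rather cap spend than let bursting run without limit, the credit specification can be switched per instance. A minimal sketch using boto3; the instance ID below is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Switch a burstable instance from "unlimited" (the T3 default) to "standard"
# credit mode, so it can never accrue surplus-credit charges.
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "standard"}  # placeholder ID
    ]
)

# Verify the change.
resp = ec2.describe_instance_credit_specifications(
    InstanceIds=["i-0123456789abcdef0"]
)
print(resp["InstanceCreditSpecifications"][0]["CpuCredits"])  # "standard"
```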
Instance Size | vCPU | CPU Credits/hour | Mem (GiB) | Storage | Network Performance |
---|---|---|---|---|---|
t3.nano | 2 | 6 | 0.5 | EBS | Up to 5 Gbps |
t3.micro | 2 | 12 | 1 | EBS | Up to 5 Gbps |
t3.small | 2 | 24 | 2 | EBS | Up to 5 Gbps |
t3.medium | 2 | 24 | 4 | EBS | Up to 5 Gbps |
t3.large | 2 | 36 | 8 | EBS | Up to 5 Gbps |
t3.xlarge | 4 | 96 | 16 | EBS | Up to 5 Gbps |
t3.2xlarge | 8 | 192 | 32 | EBS | Up to 5 Gbps |
t3a.nano | 2 | 6 | 0.5 | EBS | Up to 5 Gbps |
t3a.micro | 2 | 12 | 1 | EBS | Up to 5 Gbps |
t3a.small | 2 | 24 | 2 | EBS | Up to 5 Gbps |
t3a.medium | 2 | 24 | 4 | EBS | Up to 5 Gbps |
t3a.large | 2 | 36 | 8 | EBS | Up to 5 Gbps |
t3a.xlarge | 4 | 96 | 16 | EBS | Up to 5 Gbps |
t3a.2xlarge | 8 | 192 | 32 | EBS | Up to 5 Gbps |
t2.nano | 1 | 3 | 0.5 | EBS | Low |
t2.micro | 1 | 6 | 1 | EBS | Low to Moderate |
t2.small | 1 | 12 | 2 | EBS | Low to Moderate |
t2.medium | 2 | 24 | 4 | EBS | Low to Moderate |
t2.large | 2 | 36 | 8 | EBS | Low to Moderate |
t2.xlarge | 4 | 54 | 16 | EBS | Moderate |
t2.2xlarge | 8 | 81 | 32 | EBS | Moderate |
Amazon EC2 M6g instances are powered by 64-bit Arm-based AWS Graviton2 processors (Neoverse cores) that deliver up to 40% better price/performance than current-generation M5 instances, with a balance of compute, memory, and networking resources to support a broad set of workloads.
Instance Size | vCPU | Memory (GiB) | Instance Storage (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Mbps) |
---|---|---|---|---|---|
m6g.medium | 1 | 4 | EBS | Up to 10 | Up to 4,750 |
m6g.large | 2 | 8 | EBS | Up to 10 | Up to 4,750 |
m6g.xlarge | 4 | 16 | EBS | Up to 10 | Up to 4,750 |
m6g.2xlarge | 8 | 32 | EBS | Up to 10 | Up to 4,750 |
m6g.4xlarge | 16 | 64 | EBS | Up to 10 | 4,750 |
m6g.8xlarge | 32 | 128 | EBS | 12 | 9,000 |
m6g.12xlarge | 48 | 192 | EBS | 20 | 13,500 |
m6g.16xlarge | 64 | 256 | EBS | 25 | 19,000 |
m6g.metal | 64 | 256 | EBS | 25 | 19,000 |
m6gd.medium | 1 | 4 | 1×59 NVMe SSD | Up to 10 | Up to 4,750 |
m6gd.large | 2 | 8 | 1×118 NVMe SSD | Up to 10 | Up to 4,750 |
m6gd.xlarge | 4 | 16 | 1×237 NVMe SSD | Up to 10 | Up to 4,750 |
m6gd.2xlarge | 8 | 32 | 1×474 NVMe SSD | Up to 10 | Up to 4,750 |
m6gd.4xlarge | 16 | 64 | 1×950 NVMe SSD | Up to 10 | 4,750 |
m6gd.8xlarge | 32 | 128 | 1×1900 NVMe SSD | 12 | 9,000 |
m6gd.12xlarge | 48 | 192 | 2×1425 NVMe SSD | 20 | 13,500 |
m6gd.16xlarge | 64 | 256 | 2×1900 NVMe SSD | 25 | 19,000 |
m6gd.metal | 64 | 256 | 2×1900 NVMe SSD | 25 | 19,000 |
The M5, M5a, M5n, and M4 instances are also balanced CPU and memory instances. They’re designed for small and midsize databases and back-end applications.
Instance Size | vCPU | Memory (GiB) | Instance Storage (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Mbps) |
---|---|---|---|---|---|
m5.large | 2 | 8 | EBS | Up to 10 | Up to 4,750 |
m5.xlarge | 4 | 16 | EBS | Up to 10 | Up to 4,750 |
m5.2xlarge | 8 | 32 | EBS | Up to 10 | Up to 4,750 |
m5.4xlarge | 16 | 64 | EBS | Up to 10 | 4,750 |
m5.8xlarge | 32 | 128 | EBS | 10 | 6,800 |
m5.12xlarge | 48 | 192 | EBS | 10 | 9,500 |
m5.16xlarge | 64 | 256 | EBS | 20 | 13,600 |
m5.24xlarge | 96 | 384 | EBS | 25 | 19,000 |
m5.metal | 96 | 384 | EBS | 25 | 19,000 |
m5d.large | 2 | 8 | 1×75 NVMe SSD | Up to 10 | Up to 4,750 |
m5d.xlarge | 4 | 16 | 1×150 NVMe SSD | Up to 10 | Up to 4,750 |
m5d.2xlarge | 8 | 32 | 1×300 NVMe SSD | Up to 10 | Up to 4,750 |
m5d.4xlarge | 16 | 64 | 2×300 NVMe SSD | Up to 10 | 4,750 |
m5d.8xlarge | 32 | 128 | 2×600 NVMe SSD | 10 | 6,800 |
m5d.12xlarge | 48 | 192 | 2×900 NVMe SSD | 10 | 9,500 |
m5d.16xlarge | 64 | 256 | 4×600 NVMe SSD | 20 | 13,600 |
m5d.24xlarge | 96 | 384 | 4×900 NVMe SSD | 25 | 19,000 |
m5d.metal | 96 | 384 | 4×900 NVMe SSD | 25 | 19,000 |
m5a.large | 2 | 8 | EBS | Up to 10 | Up to 2,880 |
m5a.xlarge | 4 | 16 | EBS | Up to 10 | Up to 2,880 |
m5a.2xlarge | 8 | 32 | EBS | Up to 10 | Up to 2,880 |
m5a.4xlarge | 16 | 64 | EBS | Up to 10 | 2,880 |
m5a.8xlarge | 32 | 128 | EBS | Up to 10 | 4,750 |
m5a.12xlarge | 48 | 192 | EBS | 10 | 6,780 |
m5a.16xlarge | 64 | 256 | EBS | 12 | 9,500 |
m5a.24xlarge | 96 | 384 | EBS | 20 | 13,570 |
m5ad.large | 2 | 8 | 1×75 NVMe SSD | Up to 10 | Up to 2,880 |
m5ad.xlarge | 4 | 16 | 1×150 NVMe SSD | Up to 10 | Up to 2,880 |
m5ad.2xlarge | 8 | 32 | 1×300 NVMe SSD | Up to 10 | Up to 2,880 |
m5ad.4xlarge | 16 | 64 | 2×300 NVMe SSD | Up to 10 | 2,880 |
m5ad.8xlarge | 32 | 128 | 2×600 NVMe SSD | Up to 10 | 4,750 |
m5ad.12xlarge | 48 | 192 | 2×900 NVMe SSD | 10 | 6,870 |
m5ad.16xlarge | 64 | 256 | 4×600 NVMe SSD | 12 | 9,500 |
m5ad.24xlarge | 96 | 384 | 4×900 NVMe SSD | 20 | 13,570 |
m5n.large | 2 | 8 | EBS | Up to 25 | Up to 4,750 |
m5n.xlarge | 4 | 16 | EBS | Up to 25 | Up to 4,750 |
m5n.2xlarge | 8 | 32 | EBS | Up to 25 | Up to 4,750 |
m5n.4xlarge | 16 | 64 | EBS | Up to 25 | 4,750 |
m5n.8xlarge | 32 | 128 | EBS | 25 | 6,800 |
m5n.12xlarge | 48 | 192 | EBS | 50 | 9,500 |
m5n.16xlarge | 64 | 256 | EBS | 75 | 13,600 |
m5n.24xlarge | 96 | 384 | EBS | 100 | 19,000 |
m5dn.large | 2 | 8 | 1×75 NVMe SSD | Up to 25 | Up to 4,750 |
m5dn.xlarge | 4 | 16 | 1×150 NVMe SSD | Up to 25 | Up to 4,750 |
m5dn.2xlarge | 8 | 32 | 1×300 NVMe SSD | Up to 25 | Up to 4,750 |
m5dn.4xlarge | 16 | 64 | 2×300 NVMe SSD | Up to 25 | 4,750 |
m5dn.8xlarge | 32 | 128 | 2×600 NVMe SSD | 25 | 6,800 |
m5dn.12xlarge | 48 | 192 | 2×900 NVMe SSD | 50 | 9,500 |
m5dn.16xlarge | 64 | 256 | 4×600 NVMe SSD | 75 | 13,600 |
m5dn.24xlarge | 96 | 384 | 4×900 NVMe SSD | 100 | 19,000 |
Instance Size | vCPU | Mem (GiB) | Storage | Dedicated EBS Bandwidth (Mbps) | Network Performance |
---|---|---|---|---|---|
m4.large | 2 | 8 | EBS | 450 | Moderate |
m4.xlarge | 4 | 16 | EBS | 750 | High |
m4.2xlarge | 8 | 32 | EBS | 1,000 | High |
m4.4xlarge | 16 | 64 | EBS | 2,000 | High |
m4.10xlarge | 40 | 160 | EBS | 4,000 | 10 gigabit |
m4.16xlarge | 64 | 256 | EBS | 10,000 | 25 gigabit |
The compute optimized family includes the Graviton2-based C6g and C6gd, the Intel-based C5, C5n, and C4, and the AMD-based C5a and C5ad; these offer the lowest price per vCPU of the EC2 instance types. They are designed for compute-intensive workloads such as batch processing, data analytics, machine learning, and high-performance computing. The C5 type comes in nine sizes, from c5.large with two vCPUs and 4 GiB of memory to c5.24xlarge with 96 vCPUs and 192 GiB of memory.
Instance Size | vCPU | Memory (GiB) | Instance Storage (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Mbps) |
---|---|---|---|---|---|
c6g.medium | 1 | 2 | EBS | Up to 10 | Up to 4,750 |
c6g.large | 2 | 4 | EBS | Up to 10 | Up to 4,750 |
c6g.xlarge | 4 | 8 | EBS | Up to 10 | Up to 4,750 |
c6g.2xlarge | 8 | 16 | EBS | Up to 10 | Up to 4,750 |
c6g.4xlarge | 16 | 32 | EBS | Up to 10 | 4,750 |
c6g.8xlarge | 32 | 64 | EBS | 12 | 9,000 |
c6g.12xlarge | 48 | 96 | EBS | 20 | 13,500 |
c6g.16xlarge | 64 | 128 | EBS | 25 | 19,000 |
c6g.metal | 64 | 128 | EBS | 25 | 19,000 |
c6gd.medium | 1 | 2 | 1×59 NVMe SSD | Up to 10 | Up to 4,750 |
c6gd.large | 2 | 4 | 1×118 NVMe SSD | Up to 10 | Up to 4,750 |
c6gd.xlarge | 4 | 8 | 1×237 NVMe SSD | Up to 10 | Up to 4,750 |
c6gd.2xlarge | 8 | 16 | 1×474 NVMe SSD | Up to 10 | Up to 4,750 |
c6gd.4xlarge | 16 | 32 | 1×950 NVMe SSD | Up to 10 | 4,750 |
c6gd.8xlarge | 32 | 64 | 1×1900 NVMe SSD | 12 | 9,000 |
c6gd.12xlarge | 48 | 96 | 2×1425 NVMe SSD | 20 | 13,500 |
c6gd.16xlarge | 64 | 128 | 2×1900 NVMe SSD | 25 | 19,000 |
c6gd.metal | 64 | 128 | 2×1900 NVMe SSD | 25 | 19,000 |
c5.large | 2 | 4 | EBS | Up to 10 | Up to 4,750 |
c5.xlarge | 4 | 8 | EBS | Up to 10 | Up to 4,750 |
c5.2xlarge | 8 | 16 | EBS | Up to 10 | Up to 4,750 |
c5.4xlarge | 16 | 32 | EBS | Up to 10 | 4,750 |
c5.9xlarge | 36 | 72 | EBS | 10 | 9,500 |
c5.12xlarge | 48 | 96 | EBS | 12 | 9,500 |
c5.18xlarge | 72 | 144 | EBS | 25 | 19,000 |
c5.24xlarge | 96 | 192 | EBS | 25 | 19,000 |
c5.metal | 96 | 192 | EBS | 25 | 19,000 |
c5d.large | 2 | 4 | 1×50 NVMe SSD | Up to 10 | Up to 4,750 |
c5d.xlarge | 4 | 8 | 1×100 NVMe SSD | Up to 10 | Up to 4,750 |
c5d.2xlarge | 8 | 16 | 1×200 NVMe SSD | Up to 10 | Up to 4,750 |
c5d.4xlarge | 16 | 32 | 1×400 NVMe SSD | Up to 10 | 4,750 |
c5d.9xlarge | 36 | 72 | 1×900 NVMe SSD | 10 | 9,500 |
c5d.12xlarge | 48 | 96 | 2×900 NVMe SSD | 12 | 9,500 |
c5d.18xlarge | 72 | 144 | 2×900 NVMe SSD | 25 | 19,000 |
c5d.24xlarge | 96 | 192 | 4×900 NVMe SSD | 25 | 19,000 |
c5d.metal | 96 | 192 | 4×900 NVMe SSD | 25 | 19,000 |
c5a.large | 2 | 4 | EBS | Up to 10 | Up to 3,170 |
c5a.xlarge | 4 | 8 | EBS | Up to 10 | Up to 3,170 |
c5a.2xlarge | 8 | 16 | EBS | Up to 10 | Up to 3,170 |
c5a.4xlarge | 16 | 32 | EBS | Up to 10 | Up to 3,170 |
c5a.8xlarge | 32 | 64 | EBS | 10 | 3,170 |
c5a.12xlarge | 48 | 96 | EBS | 12 | 4,750 |
c5a.16xlarge | 64 | 128 | EBS | 20 | 6,300 |
c5a.24xlarge | 96 | 192 | EBS | 20 | 9,500 |
c5ad.large | 2 | 4 | 1×75 NVMe SSD | Up to 10 | Up to 3,170 |
c5ad.xlarge | 4 | 8 | 1×150 NVMe SSD | Up to 10 | Up to 3,170 |
c5ad.2xlarge | 8 | 16 | 1×300 NVMe SSD | Up to 10 | Up to 3,170 |
c5ad.4xlarge | 16 | 32 | 2×300 NVMe SSD | Up to 10 | Up to 3,170 |
c5ad.8xlarge | 32 | 64 | 2×600 NVMe SSD | 10 | 3,170 |
c5ad.12xlarge | 48 | 96 | 2×900 NVMe SSD | 12 | 4,750 |
c5ad.16xlarge | 64 | 128 | 2×1200 NVMe SSD | 20 | 6,300 |
c5ad.24xlarge | 96 | 192 | 2×1900 NVMe SSD | 20 | 9,500 |
c5n.large | 2 | 5.25 | EBS | Up to 25 | Up to 4,750 |
c5n.xlarge | 4 | 10.5 | EBS | Up to 25 | Up to 4,750 |
c5n.2xlarge | 8 | 21 | EBS | Up to 25 | Up to 4,750 |
c5n.4xlarge | 16 | 42 | EBS | Up to 25 | 4,750 |
c5n.9xlarge | 36 | 96 | EBS | 50 | 9,500 |
c5n.18xlarge | 72 | 192 | EBS | 100 | 19,000 |
c5n.metal | 72 | 192 | EBS | 100 | 19,000 |
Instance Size | vCPU | Mem (GiB) | Storage | Dedicated EBS Bandwidth (Mbps) | Network Performance |
---|---|---|---|---|---|
c4.large | 2 | 3.75 | EBS | 500 | Moderate |
c4.xlarge | 4 | 7.5 | EBS | 750 | High |
c4.2xlarge | 8 | 15 | EBS | 1,000 | High |
c4.4xlarge | 16 | 30 | EBS | 2,000 | High |
c4.8xlarge | 36 | 60 | EBS | 4,000 | 10 gigabit |
The R6g, R5, R5a, R5n, R4, X1e, X1, High Memory (U) and z1d instance types are memory optimized. These are designed for memory-intensive applications such as databases and real-time stream processing.
Instance Size | vCPU | Memory (GiB) | Instance Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Mbps) |
---|---|---|---|---|---|
r6g.medium | 1 | 8 | EBS | Up to 10 | Up to 4,750 |
r6g.large | 2 | 16 | EBS | Up to 10 | Up to 4,750 |
r6g.xlarge | 4 | 32 | EBS | Up to 10 | Up to 4,750 |
r6g.2xlarge | 8 | 64 | EBS | Up to 10 | Up to 4,750 |
r6g.4xlarge | 16 | 128 | EBS | Up to 10 | 4,750 |
r6g.8xlarge | 32 | 256 | EBS | 12 | 9,000 |
r6g.12xlarge | 48 | 384 | EBS | 20 | 13,500 |
r6g.16xlarge | 64 | 512 | EBS | 25 | 19,000 |
r6g.metal | 64 | 512 | EBS | 25 | 19,000 |
r6gd.medium | 1 | 8 | 1×59 NVMe SSD | Up to 10 | Up to 4,750 |
r6gd.large | 2 | 16 | 1×118 NVMe SSD | Up to 10 | Up to 4,750 |
r6gd.xlarge | 4 | 32 | 1×237 NVMe SSD | Up to 10 | Up to 4,750 |
r6gd.2xlarge | 8 | 64 | 1×474 NVMe SSD | Up to 10 | Up to 4,750 |
r6gd.4xlarge | 16 | 128 | 1×950 NVMe SSD | Up to 10 | 4,750 |
r6gd.8xlarge | 32 | 256 | 1×1900 NVMe SSD | 12 | 9,000 |
r6gd.12xlarge | 48 | 384 | 2×1425 NVMe SSD | 20 | 13,500 |
r6gd.16xlarge | 64 | 512 | 2×1900 NVMe SSD | 25 | 19,000 |
r6gd.metal | 64 | 512 | 2×1900 NVMe SSD | 25 | 19,000 |
r5.large | 2 | 16 | EBS | Up to 10 | Up to 4,750 |
r5.xlarge | 4 | 32 | EBS | Up to 10 | Up to 4,750 |
r5.2xlarge | 8 | 64 | EBS | Up to 10 | Up to 4,750 |
r5.4xlarge | 16 | 128 | EBS | Up to 10 | 4,750 |
r5.8xlarge | 32 | 256 | EBS | 10 | 6,800 |
r5.12xlarge | 48 | 384 | EBS | 10 | 9,500 |
r5.16xlarge | 64 | 512 | EBS | 20 | 13,600 |
r5.24xlarge | 96 | 768 | EBS | 25 | 19,000 |
r5.metal | 96 | 768 | EBS | 25 | 19,000 |
r5d.large | 2 | 16 | 1×75 NVMe SSD | Up to 10 | Up to 4,750 |
r5d.xlarge | 4 | 32 | 1×150 NVMe SSD | Up to 10 | Up to 4,750 |
r5d.2xlarge | 8 | 64 | 1×300 NVMe SSD | Up to 10 | Up to 4,750 |
r5d.4xlarge | 16 | 128 | 2×300 NVMe SSD | Up to 10 | 4,750 |
r5d.8xlarge | 32 | 256 | 2×600 NVMe SSD | 10 | 6,800 |
r5d.12xlarge | 48 | 384 | 2×900 NVMe SSD | 10 | 9,500 |
r5d.16xlarge | 64 | 512 | 4×600 NVMe SSD | 20 | 13,600 |
r5d.24xlarge | 96 | 768 | 4×900 NVMe SSD | 25 | 19,000 |
r5d.metal | 96 | 768 | 4×900 NVMe SSD | 25 | 19,000 |
r5a.large | 2 | 16 | EBS | Up to 10 | Up to 2,880 |
r5a.xlarge | 4 | 32 | EBS | Up to 10 | Up to 2,880 |
r5a.2xlarge | 8 | 64 | EBS | Up to 10 | Up to 2,880 |
r5a.4xlarge | 16 | 128 | EBS | Up to 10 | 2,880 |
r5a.8xlarge | 32 | 256 | EBS | Up to 10 | 4,750 |
r5a.12xlarge | 48 | 384 | EBS | 10 | 6,780 |
r5a.16xlarge | 64 | 512 | EBS | 12 | 9,500 |
r5a.24xlarge | 96 | 768 | EBS | 20 | 13,570 |
r5ad.large | 2 | 16 | 1×75 NVMe SSD | Up to 10 | Up to 2,880 |
r5ad.xlarge | 4 | 32 | 1×150 NVMe SSD | Up to 10 | Up to 2,880 |
r5ad.2xlarge | 8 | 64 | 1×300 NVMe SSD | Up to 10 | Up to 2,880 |
r5ad.4xlarge | 16 | 128 | 2×300 NVMe SSD | Up to 10 | 2,880 |
r5ad.8xlarge | 32 | 256 | 2×600 NVMe SSD | Up to 10 | 4,750 |
r5ad.12xlarge | 48 | 384 | 2×900 NVMe SSD | 10 | 6,780 |
r5ad.16xlarge | 64 | 512 | 4×600 NVMe SSD | 12 | 9,500 |
r5ad.24xlarge | 96 | 768 | 4×900 NVMe SSD | 20 | 13,570 |
r5n.large | 2 | 16 | EBS | Up to 25 | Up to 4,750 |
r5n.xlarge | 4 | 32 | EBS | Up to 25 | Up to 4,750 |
r5n.2xlarge | 8 | 64 | EBS | Up to 25 | Up to 4,750 |
r5n.4xlarge | 16 | 128 | EBS | Up to 25 | 4,750 |
r5n.8xlarge | 32 | 256 | EBS | 25 | 6,800 |
r5n.12xlarge | 48 | 384 | EBS | 50 | 9,500 |
r5n.16xlarge | 64 | 512 | EBS | 75 | 13,600 |
r5n.24xlarge | 96 | 768 | EBS | 100 | 19,000 |
r5dn.large | 2 | 16 | 1×75 NVMe SSD | Up to 25 | Up to 4,750 |
r5dn.xlarge | 4 | 32 | 1×150 NVMe SSD | Up to 25 | Up to 4,750 |
r5dn.2xlarge | 8 | 64 | 1×300 NVMe SSD | Up to 25 | Up to 4,750 |
r5dn.4xlarge | 16 | 128 | 2×300 NVMe SSD | Up to 25 | 4,750 |
r5dn.8xlarge | 32 | 256 | 2×600 NVMe SSD | 25 | 6,800 |
r5dn.12xlarge | 48 | 384 | 2×900 NVMe SSD | 50 | 9,500 |
r5dn.16xlarge | 64 | 512 | 4×600 NVMe SSD | 75 | 13,600 |
r5dn.24xlarge | 96 | 768 | 4×900 NVMe SSD | 100 | 19,000 |
Instance Size | vCPU | Mem (GiB) | Storage | Networking Performance (Gbps) |
---|---|---|---|---|
r4.large | 2 | 15.25 | EBS | Up to 10 |
r4.xlarge | 4 | 30.5 | EBS | Up to 10 |
r4.2xlarge | 8 | 61 | EBS | Up to 10 |
r4.4xlarge | 16 | 122 | EBS | Up to 10 |
r4.8xlarge | 32 | 244 | EBS | 10 |
r4.16xlarge | 64 | 488 | EBS | 25 |
The X1e and X1 are well-suited for large database and in-memory workloads, offering up to 128 vCPUs and 3,904 GiB of memory.
Instance Size | vCPU | Mem (GiB) | SSD Storage (GB) | Dedicated EBS Bandwidth (Mbps) | Networking Performance |
---|---|---|---|---|---|
x1e.xlarge | 4 | 122 | 1×120 | 500 | Up to 10 gigabit |
x1e.2xlarge | 8 | 244 | 1×240 | 1,000 | Up to 10 gigabit |
x1e.4xlarge | 16 | 488 | 1×480 | 1,750 | Up to 10 gigabit |
x1e.8xlarge | 32 | 976 | 1×960 | 3,500 | Up to 10 gigabit |
x1e.16xlarge | 64 | 1,952 | 1×1,920 | 7,000 | 10 gigabit |
x1e.32xlarge | 128 | 3,904 | 2×1,920 | 14,000 | 25 gigabit |
x1.16xlarge | 64 | 976 | 1×1,920 | 7,000 | 10 gigabit |
x1.32xlarge | 128 | 1,952 | 2×1,920 | 14,000 | 25 gigabit |
High Memory (u-) instances are bare-metal instances with the largest memory footprints, from 6 TiB up to 24 TiB, designed for very large in-memory databases.
Instance Size | Logical Processors | RAM (GiB) | Network Perf (Gbps) | Dedicated EBS Bandwidth (Gbps) |
---|---|---|---|---|
u-6tb1.metal | 448 | 6144 | 100 | 38 |
u-9tb1.metal | 448 | 9216 | 100 | 38 |
u-12tb1.metal | 448 | 12288 | 100 | 38 |
u-18tb1.metal | 448 | 18432 | 100 | 38 |
u-24tb1.metal | 448 | 24576 | 100 | 38 |
The z1d is a specialty instance with a sustained all-core frequency of up to 4.0 GHz; it’s designed for applications with high per-core licensing costs.
Instance Size | vCPU | Mem (GiB) | Networking Performance | SSD Storage (GB) |
---|---|---|---|---|
z1d.large | 2 | 16 | Up to 10 gigabit | 1×75 NVMe SSD |
z1d.xlarge | 4 | 32 | Up to 10 gigabit | 1×150 NVMe SSD |
z1d.2xlarge | 8 | 64 | Up to 10 gigabit | 1×300 NVMe SSD |
z1d.3xlarge | 12 | 96 | Up to 10 gigabit | 1×450 NVMe SSD |
z1d.6xlarge | 24 | 192 | 10 gigabit | 1×900 NVMe SSD |
z1d.12xlarge | 48 | 384 | 25 gigabit | 2×900 NVMe SSD |
z1d.metal | 48 | 384 | 25 gigabit | 2×900 NVMe SSD |
P3, P2, Inf1, G4, G3, and F1 accelerated computing instances provide graphics processing units (GPUs), AWS Inferentia chips, or field programmable gate arrays (FPGAs) for machine learning, high-performance computing, and other numerically intensive workloads.
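When weighing accelerated types against each other, the accelerator details can also be read programmatically. A minimal boto3 sketch, assuming the GpuInfo fields of the DescribeInstanceTypes response; the instance types listed are examples from the tables below:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instance_types(
    InstanceTypes=["p3.2xlarge", "g4dn.xlarge", "p2.xlarge"]
)

# Print GPU count and total GPU memory for each accelerated type.
for itype in resp["InstanceTypes"]:
    gpu_info = itype.get("GpuInfo", {})
    gpus = sum(g["Count"] for g in gpu_info.get("Gpus", []))
    total_mem_gib = gpu_info.get("TotalGpuMemoryInMiB", 0) / 1024
    print(f'{itype["InstanceType"]}: {gpus} GPU(s), {total_mem_gib:.0f} GiB GPU memory')
```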
Instance Size | GPUs | vCPU | Mem (GiB) | GPU Mem (GiB) | GPU P2P | Storage (GB) | Dedicated EBS Bandwidth | Networking Performance |
---|---|---|---|---|---|---|---|---|
p3.2xlarge | 1 | 8 | 61 | 16 | – | EBS | 1.5 Gbps | Up to 10 gigabit |
p3.8xlarge | 4 | 32 | 244 | 64 | NVLink | EBS | 7 Gbps | 10 gigabit |
p3.16xlarge | 8 | 64 | 488 | 128 | NVLink | EBS | 14 Gbps | 25 gigabit |
p3dn.24xlarge | 8 | 96 | 768 | 256 | NVLink | 2×900 NVMe SSD | 19 Gbps | 100 gigabit |
Instance Size | GPUs | vCPU | Mem (GiB) | GPU Memory (GiB) | Network Performance |
---|---|---|---|---|---|
p2.xlarge | 1 | 4 | 61 | 12 | High |
p2.8xlarge | 8 | 32 | 488 | 96 | 10 gigabit |
p2.16xlarge | 16 | 64 | 732 | 192 | 25 gigabit |
Instance Size | vCPUs | Memory (GiB) | Storage | Inferentia chips | Inferentia chip-to-chip interconnect | Network bandwidth | EBS bandwidth |
---|---|---|---|---|---|---|---|
inf1.xlarge | 4 | 8 | EBS | 1 | N/A | Up to 25 Gbps | Up to 4.75 Gbps |
inf1.2xlarge | 8 | 16 | EBS | 1 | N/A | Up to 25 Gbps | Up to 4.75 Gbps |
inf1.6xlarge | 24 | 48 | EBS | 4 | Yes | 25 Gbps | 4.75 Gbps |
inf1.24xlarge | 96 | 192 | EBS | 16 | Yes | 100 Gbps | 19 Gbps |
Instance Size | GPUs | vCPU | Mem (GiB) | GPU Memory (GiB) | Instance Storage (GB) | Network Performance (Gbps) |
---|---|---|---|---|---|---|
g4dn.xlarge | 1 | 4 | 16 | 16 | 125 | Up to 25 |
g4dn.2xlarge | 1 | 8 | 32 | 16 | 225 | Up to 25 |
g4dn.4xlarge | 1 | 16 | 64 | 16 | 225 | Up to 25 |
g4dn.8xlarge | 1 | 32 | 128 | 16 | 1×900 | 50 |
g4dn.16xlarge | 1 | 64 | 256 | 16 | 1×900 | 50 |
g4dn.12xlarge | 4 | 48 | 192 | 64 | 1×900 | 50 |
g4dn.metal | 8 | 96 | 384 | 128 | 2×900 | 100 |
Instance Size | GPUs | vCPU | Mem (GiB) | GPU Memory (GiB) | Network Performance |
---|---|---|---|---|---|
g3s.xlarge | 1 | 4 | 30.5 | 8 | Up to 10 gigabit |
g3.4xlarge | 1 | 16 | 122 | 8 | Up to 10 gigabit |
g3.8xlarge | 2 | 32 | 244 | 16 | 10 gigabit |
g3.16xlarge | 4 | 64 | 488 | 32 | 25 gigabit |
Instance Size | FPGAs | vCPU | Mem (GiB) | SSD Storage (GB) | Networking Performance |
---|---|---|---|---|---|
f1.2xlarge | 1 | 8 | 122 | 470 | Up to 10 gigabit |
f1.4xlarge | 2 | 16 | 244 | 940 | Up to 10 gigabit |
f1.16xlarge | 8 | 64 | 976 | 4×940 | 25 gigabit |
The I3, I3en, D2, and H1 families make up the storage optimized instances.
The I3 and I3en use non-volatile memory express (NVMe) SSD storage. These instances are optimized for low-latency random I/O and high sequential read throughput.
Instance Size | vCPU | Mem (GiB) | Local Storage (GB) | Networking Performance (Gbps) |
---|---|---|---|---|
i3.large | 2 | 15.25 | 1×475 NVMe SSD | Up to 10 |
i3.xlarge | 4 | 30.5 | 1×950 NVMe SSD | Up to 10 |
i3.2xlarge | 8 | 61 | 1×1,900 NVMe SSD | Up to 10 |
i3.4xlarge | 16 | 122 | 2×1,900 NVMe SSD | Up to 10 |
i3.8xlarge | 32 | 244 | 4×1,900 NVMe SSD | 10 |
i3.16xlarge | 64 | 488 | 8×1,900 NVMe SSD | 25 |
i3.metal | 72 | 512 | 8×1,900 NVMe SSD | 25 |
Instance Size | vCPU | Mem (GiB) | Local Storage (GB) | Network Bandwidth |
---|---|---|---|---|
i3en.large | 2 | 16 | 1×1,250 NVMe SSD | Up to 25 Gbps |
i3en.xlarge | 4 | 32 | 1×2,500 NVMe SSD | Up to 25 Gbps |
i3en.2xlarge | 8 | 64 | 2×2,500 NVMe SSD | Up to 25 Gbps |
i3en.3xlarge | 12 | 96 | 1×7,500 NVMe SSD | Up to 25 Gbps |
i3en.6xlarge | 24 | 192 | 2×7,500 NVMe SSD | 25 Gbps |
i3en.12xlarge | 48 | 384 | 4×7,500 NVMe SSD | 50 Gbps |
i3en.24xlarge | 96 | 768 | 8×7,500 NVMe SSD | 100 Gbps |
i3en.metal | 96 | 768 | 8×7,500 NVMe SSD | 100 Gbps |
The D2 instances are backed by hard disk drives and offer large-volume, low-cost persistent storage with up to 48TB per instance.
Instance Size | vCPU | Mem (GiB) | Storage (GB) | Network Performance |
---|---|---|---|---|
d2.xlarge | 4 | 30.5 | 3×2000 HDD | Moderate |
d2.2xlarge | 8 | 61 | 6×2000 HDD | High |
d2.4xlarge | 16 | 122 | 12×2000 HDD | High |
d2.8xlarge | 36 | 244 | 24×2000 HDD | 10 gigabit |
The H1 instances have up to 16TB of hard disk drive storage designed for high disk throughput.
Instance Size | vCPU | Mem (GiB) | Networking Performance | Storage (GB) |
---|---|---|---|---|
h1.2xlarge | 8 | 32 | Up to 10 gigabit | 1×2,000 HDD |
h1.4xlarge | 16 | 64 | Up to 10 gigabit | 2×2,000 HDD |
h1.8xlarge | 32 | 128 | 10 gigabit | 4×2,000 HDD |
h1.16xlarge | 64 | 256 | 25 gigabit | 8×2,000 HDD |
In addition to selecting the right EC2 instance family and type, there are further resourcing considerations to weigh when sizing a workload.
A few patterns emerge from the instance families described above. First, there is a large number of instance types to choose from; even within a single family, there can be up to 18 different configurations. This makes it difficult to find the optimal instance type for a given workload.
For example, you may have a compute-intensive application, but you may not be sure if you should use a compute-optimized instance or an accelerated instance. If the application makes many floating point calculations and the code can take advantage of a GPU, an accelerated instance may be the best option. In another scenario, you may want to deploy a distributed computing application that’s I/O-intensive; should you use an SSD or hard disk drive-backed instance? That will depend on the importance of low latency I/O balanced against cost considerations.
These characteristics are key to making accurate comparisons between instance types, which in turn help you decide which kind of instance is best for a particular workload. Also consider the technical constraints of different instances, such as which images run on which instances, whether EBS and network bandwidth can burst, and the limits of local storage. Finally, consider any business policies that may limit your options.
IaaS clouds offer a significant opportunity to reduce infrastructure costs and increase an enterprise’s ability to adapt to changes in business conditions and the related demands on infrastructure. But the wide array of instance type choices can be bewildering, and mistakes can be costly as well as degrade performance.
To make an optimal instance type selection, you need to understand the detailed nuances of the different instance types and models, as well as your workload’s characteristics—including short-term bursts in CPU load and longer-term variations driven by business cycles. For small numbers of instances this can be done manually, but as environments grow, it becomes very challenging.
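As a starting point for that kind of comparison, the specifications behind the tables above can be pulled programmatically and filtered against a workload's requirements. A minimal boto3 sketch; the vCPU and memory thresholds are arbitrary examples, and the filter limits results to current-generation types:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Example requirement: at least 8 vCPUs and 32 GiB of memory.
MIN_VCPUS = 8
MIN_MEM_GIB = 32

candidates = []
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate(Filters=[{"Name": "current-generation", "Values": ["true"]}]):
    for itype in page["InstanceTypes"]:
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        if vcpus >= MIN_VCPUS and mem_gib >= MIN_MEM_GIB:
            candidates.append((itype["InstanceType"], vcpus, mem_gib))

# Shortlist to be weighed against price, burst behavior, and storage needs.
for name, vcpus, mem in sorted(candidates):
    print(f"{name}: {vcpus} vCPU, {mem:.0f} GiB")
```

From there, a shortlist like this can be cross-referenced against the pricing, burst behavior, and storage characteristics discussed earlier.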
Instance Family | Suitability | Memory & CPU |
---|---|---|
M3, M4, M5, & M5a | General purpose family suitable for a wide range of applications from databases to servers | 4:1 memory to vCPU ratio |
C3, C4, C5, & C5a | Compute intensive family offering superior performance for compute workloads, ideal for HPC, web servers, gaming, and analytics | 2:1 memory to vCPU ratio |
R3, R4, R5, & R5a | Memory intensive family geared towards applications like high performance databases, in-memory caching, and big data analytics | 8:1 memory to vCPU ratio |
Z1d | Offers a balance between R5 and C5 instance types, ideal for electronic design automation and databases with high software license costs per core | 8:1 memory to vCPU ratio |
T2, T3, & T3a | Burstable family suitable for workloads that are spiky in nature: VDIs, small databases, and, frequently, dev environments | Accumulate credits while operating below baseline CPU performance levels, which can be spent when bursting |
Properly resourcing your application workload requires precise selection of EC2 instance family and sizing—choices that demand you balance performance, stability, and cost.
Skilled cloud infrastructure managers make these choices every day, and although it is possible to run infrastructure at scale without rigorous EC2 instance management, the cumulative impact of best-guess instance selection always leaves performance, stability, and cost savings on the table—sometimes to the tune of hundreds of thousands of dollars annually.
Additionally, with continuous service and instance releases across all the cloud providers, it is very difficult to keep up with the latest technology—there can be millions of possible instance configurations for each of your workloads.
Finally, once an application is up and running in the cloud, it can be difficult to justify manually-determined infrastructure optimizations to those responsible for business delivery of the app.
Densify helps enterprises manage EC2 instance selection at scale, providing recommendations for the best-fit type for each of your workloads. Request a demo, and we’ll show you the power of machine-learning-driven EC2 instance selection.
Densify addresses enterprise EC2 management concerns:
Densify offers the industry’s most automated and accurate recommendations for sizing cloud resources, including VMs, containers, databases, and auto scaling groups.
Densify has partnered with Intel to offer one year of free resource optimization software licensing to qualified companies.