Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Amazon Elastic Compute Cloud
User Guide for Linux Instances
Table of Contents
What is Amazon EC2? ......................................................................................................................... 1
Features of Amazon EC2 ............................................................................................................. 1
How to get started with Amazon EC2 ........................................................................................... 1
Related services ......................................................................................................................... 2
Access Amazon EC2 .................................................................................................................... 3
Pricing for Amazon EC2 .............................................................................................................. 3
PCI DSS compliance .................................................................................................................... 4
Set up .............................................................................................................................................. 5
Sign up for AWS ........................................................................................................................ 5
Create a key pair ........................................................................................................................ 5
Create a security group ............................................................................................................... 6
Get started tutorial ............................................................................................................................ 9
Overview ................................................................................................................................... 9
Prerequisites ............................................................................................................................ 10
Step 1: Launch an instance ........................................................................................................ 10
Step 2: Connect to your instance ............................................................................................... 11
Step 3: Clean up your instance .................................................................................................. 11
Next steps ............................................................................................................................... 12
Best practices .................................................................................................................................. 13
Tutorials .......................................................................................................................................... 15
Install LAMP on Amazon Linux 2 ................................................................................................ 15
Step 1: Prepare the LAMP server ........................................................................................ 16
Step 2: Test your LAMP server ........................................................................................... 19
Step 3: Secure the database server ..................................................................................... 20
Step 4: (Optional) Install phpMyAdmin ................................................................................ 21
Troubleshoot ................................................................................................................... 24
Related topics .................................................................................................................. 24
Install LAMP on Amazon Linux 2022 .......................................................................................... 25
Step 1: Prepare the LAMP server ........................................................................................ 25
Step 2: Test your LAMP server ........................................................................................... 28
Step 3: Secure the database server ..................................................................................... 29
Step 4: (Optional) Install phpMyAdmin ................................................................................ 30
Troubleshoot ................................................................................................................... 33
Related topics .................................................................................................................. 33
Configure SSL/TLS on Amazon Linux 2 ....................................................................................... 34
Prerequisites .................................................................................................................... 35
Step 1: Enable TLS on the server ....................................................................................... 35
Step 2: Obtain a CA-signed certificate ................................................................................. 37
Step 3: Test and harden the security configuration ............................................................... 42
Troubleshoot ................................................................................................................... 44
Certificate automation: Let's Encrypt with Certbot on Amazon Linux 2 .................................... 45
Configure SSL/TLS on Amazon Linux 2022 .................................................................................. 49
Prerequisites .................................................................................................................... 50
Step 1: Enable TLS on the server ....................................... 51
Step 2: Obtain a CA-signed certificate ................................................................................. 52
Step 3: Test and harden the security configuration ............................................................... 57
Troubleshoot ................................................................................................................... 59
Certificate automation: Let's Encrypt with Certbot on Amazon Linux 2022 ............................... 60
Host a WordPress blog on Amazon Linux 2 ................................................................................. 63
Prerequisites .................................................................................................................... 64
Install WordPress .............................................................................................................. 64
Next steps ....................................................................................................................... 71
Help! My public DNS name changed and now my blog is broken ............................................. 72
Install LAMP on the Amazon Linux AMI ....................................................................................... 72
Error: Host key validation failed for EC2 Instance Connect .................................................. 1696
Stop your instance ................................................................................................................ 1697
Force stop the instance ................................................................................................. 1697
Create a replacement instance ........................................................................................ 1698
Terminate your instance ........................................................................................................ 1699
Instance terminates immediately .................................................................................... 1700
Delayed instance termination ......................................................................................... 1700
Terminated instance still displayed .................................................................................. 1700
Instances automatically launched or terminated ............................................................... 1700
Failed status checks .............................................................................................................. 1700
Review status check information ..................................................................................... 1701
Retrieve the system logs ................................................................................................ 1702
Troubleshoot system log errors for Linux-based instances .................................................. 1702
Out of memory: kill process ........................................................................................... 1703
ERROR: mmu_update failed (Memory management update failed) ...................................... 1704
I/O error (block device failure) ....................................................................................... 1704
I/O ERROR: neither local nor remote disk (Broken distributed block device) .......................... 1705
request_module: runaway loop modprobe (Looping legacy kernel modprobe on older Linux versions) ........................................ 1706
"FATAL: kernel too old" and "fsck: No such file or directory while trying to open /dev" (Kernel and AMI mismatch) ....................................... 1707
"FATAL: Could not load /lib/modules" or "BusyBox" (Missing kernel modules) ........................ 1708
ERROR Invalid kernel (EC2 incompatible kernel) ................................................................ 1709
fsck: No such file or directory while trying to open... (File system not found) ......................... 1710
General error mounting filesystems (failed mount) ............................................................ 1711
VFS: Unable to mount root fs on unknown-block (Root filesystem mismatch) ........................ 1713
Error: Unable to determine major/minor number of root device... (Root file system/device mismatch) .................................................... 1714
XENBUS: Device with no driver... ..................................................................................... 1715
... days without being checked, check forced (File system check required) ............................. 1716
fsck died with exit status... (Missing device) ...................................................................... 1716
GRUB prompt (grubdom>) ............................................................................................. 1717
Bringing up interface eth0: Device eth0 has different MAC address than expected, ignoring. (Hard-coded MAC address) ............................................. 1719
Unable to load SELinux Policy. Machine is in enforcing mode. Halting now. (SELinux misconfiguration) .......................................................... 1720
XENBUS: Timeout connecting to devices (Xenbus timeout) ................................................. 1721
Troubleshoot an unreachable instance ..................................................................................... 1721
Instance reboot ............................................................................................................ 1722
Instance console output ................................................................................................. 1722
Capture a screenshot of an unreachable instance .............................................................. 1723
Instance recovery when a host computer fails .................................................................. 1724
Boot from the wrong volume ................................................................................................. 1724
EC2Rescue for Linux .............................................................................................................. 1725
Install EC2Rescue for Linux ............................................................................................ 1726
(Optional) Verify the signature of EC2Rescue for Linux ...................................................... 1727
Work with EC2Rescue for Linux ...................................................................................... 1729
Develop EC2Rescue modules .......................................................................................... 1731
EC2 Serial Console ................................................................................................................ 1735
Configure access to the EC2 Serial Console ...................................................................... 1736
Connect to the EC2 Serial Console .................................................................................. 1741
Terminate an EC2 Serial Console session .......................................................................... 1746
Troubleshoot your instance using the EC2 Serial Console ................................................... 1746
Send a diagnostic interrupt .................................................................................................... 1752
Supported instance types .............................................................................................. 1753
Prerequisites ................................................................................................................ 1753
Send a diagnostic interrupt ............................................................................................ 1755
Features of Amazon EC2
For more information about cloud computing, see What is cloud computing?
For more information about the features of Amazon EC2, see the Amazon EC2 product page.
For more information about running your website on AWS, see Web Hosting.
• AWS Systems Manager Run Command in the AWS Systems Manager User Guide
• Tutorial: Install a LAMP web server on Amazon Linux 2 (p. 15)
• Tutorial: Configure SSL/TLS on Amazon Linux 2 (p. 34)
If you have questions about whether AWS is right for you, contact AWS Sales. If you have technical
questions about Amazon EC2, use the Amazon EC2 forum.
Related services
You can provision Amazon EC2 resources, such as instances and volumes, directly using Amazon EC2.
You can also provision Amazon EC2 resources using other services in AWS. For more information, see the
following documentation:
To automatically distribute incoming application traffic across multiple instances, use Elastic Load
Balancing. For more information, see the Elastic Load Balancing User Guide.
To get a managed relational database in the cloud, use Amazon Relational Database Service (Amazon
RDS) to launch a database instance. Although you can set up a database on an EC2 instance, Amazon
RDS offers the advantage of handling your database management tasks, such as patching the software,
backing up, and storing the backups. For more information, see the Amazon Relational Database Service
Developer Guide.
To make it easier to manage Docker containers on a cluster of EC2 instances, use Amazon Elastic
Container Service (Amazon ECS). For more information, see the Amazon Elastic Container Service
Developer Guide or the Amazon Elastic Container Service User Guide for AWS Fargate.
To monitor basic statistics for your instances and Amazon EBS volumes, use Amazon CloudWatch. For
more information, see the Amazon CloudWatch User Guide.
To detect potentially unauthorized or malicious use of your EC2 instances, use Amazon GuardDuty. For
more information, see the Amazon GuardDuty User Guide.
If you prefer to use a command line interface, you have the following options:
AWS Command Line Interface (AWS CLI)
Provides commands for a broad set of AWS products, and is supported on Windows, Mac, and Linux.
To get started, see the AWS Command Line Interface User Guide. For more information about the
commands for Amazon EC2, see ec2 in the AWS CLI Command Reference.
AWS Tools for Windows PowerShell
Provides commands for a broad set of AWS products for those who script in the PowerShell
environment. To get started, see the AWS Tools for Windows PowerShell User Guide. For more
information about the cmdlets for Amazon EC2, see the AWS Tools for PowerShell Cmdlet
Reference.
Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON
or YAML, that describes your AWS resources, and AWS CloudFormation provisions and configures
those resources for you. You can reuse your CloudFormation templates to provision the same resources
multiple times, whether in the same Region and account or in multiple Regions and accounts. For more
information about the resource types and properties for Amazon EC2, see EC2 resource type reference in
the AWS CloudFormation User Guide.
Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs
GET or POST and a Query parameter named Action. For more information about the API actions for
Amazon EC2, see Actions in the Amazon EC2 API Reference.
If you prefer to build applications using language-specific APIs instead of submitting a request over
HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software
developers. These libraries provide basic functions that automate tasks such as cryptographically signing
your requests, retrying requests, and handling error responses, making it easier for you to get started.
For more information, see Tools to Build on AWS.
On-Demand Instances
Pay for the instances that you use by the second, with no long-term commitments or upfront
payments.
Savings Plans
You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage,
in USD per hour, for a term of 1 or 3 years.
Reserved Instances
You can reduce your Amazon EC2 costs by making a commitment to a specific instance
configuration, including instance type and Region, for a term of 1 or 3 years.
Spot Instances
Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly.
For a complete list of charges and prices for Amazon EC2, see Amazon EC2 pricing.
When calculating the cost of a provisioned environment, remember to include incidental costs such as
snapshot storage for EBS volumes. To calculate the cost of a sample provisioned environment, see Cloud
Economics Center.
To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost
Management console. Your bill contains links to usage reports that provide details about your bill. To
learn more about AWS account billing, see the AWS Billing and Cost Management User Guide.
If you have questions concerning AWS billing, accounts, and events, contact AWS Support.
For an overview of Trusted Advisor, a service that helps you optimize the costs, security, and performance
of your AWS environment, see AWS Trusted Advisor.
When you are finished, you will be ready for the Amazon EC2 Getting started (p. 9) tutorial.
Sign up for AWS
With Amazon EC2, you pay only for what you use. If you are a new AWS customer, you can get started
with Amazon EC2 for free. For more information, see AWS Free Tier.
If you have an AWS account already, skip to the next task. If you don't have an AWS account, use the
following procedure to create one.
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.
Create a key pair
If you haven't created a key pair already, you can create one by using the Amazon EC2 console. Note that
if you plan to launch instances in multiple Regions, you'll need to create a key pair in each Region. For
more information about Regions, see Regions and Zones (p. 1005).
4. For Name, enter a descriptive name for the key pair. Amazon EC2 associates the public key with the
name that you specify as the key name. A key name can include up to 255 ASCII characters. It can’t
include leading or trailing spaces.
5. For Key pair type, choose either RSA or ED25519. Note that ED25519 keys are not supported for
Windows instances.
6. For Private key file format, choose the format in which to save the private key. To save the private
key in a format that can be used with OpenSSH, choose pem. To save the private key in a format
that can be used with PuTTY, choose ppk.
If you chose ED25519 in the previous step, the Private key file format options do not appear, and
the private key format defaults to pem.
7. Choose Create key pair.
8. The private key file is automatically downloaded by your browser. The base file name is the name
you specified as the name of your key pair, and the file name extension is determined by the file
format you chose. Save the private key file in a safe place.
Important
This is the only chance for you to save the private key file.
9. If you will use an SSH client on a macOS or Linux computer to connect to your Linux instance, use
the following command to set the permissions of your private key file so that only you can read it.
If you do not set these permissions, then you cannot connect to your instance using this key pair. For
more information, see Error: Unprotected private key file (p. 1693).
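A minimal sketch of that permissions change follows; the key file name is a placeholder, so substitute the name of the private key file that your browser downloaded.

```shell
# Create a placeholder key file for demonstration purposes only;
# in practice, run chmod against the .pem file your browser saved.
touch my-key-pair.pem

# Restrict the private key so that only its owner can read it
chmod 400 my-key-pair.pem

ls -l my-key-pair.pem   # permissions show as -r--------
```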
For more information, see Amazon EC2 key pairs and Linux instances (p. 1288).
Create a security group
Note that if you plan to launch instances in multiple Regions, you'll need to create a security group in
each Region. For more information about Regions, see Regions and Zones (p. 1005).
Prerequisites
You'll need the public IPv4 address of your local computer. The security group editor in the Amazon
EC2 console can automatically detect the public IPv4 address for you. Alternatively, you can use the
search phrase "what is my IP address" in an Internet browser, or use the following service: Check IP. If
you are connecting through an Internet service provider (ISP) or from behind a firewall without a static IP
address, you need to find out the range of IP addresses used by client computers.
You can create a custom security group using one of the following methods.
New console
a. Enter a name for the new security group and a description. Use a name that is easy for
you to remember, such as your user name, followed by _SG_, plus the Region name. For
example, me_SG_uswest2.
b. In the VPC list, select your default VPC for the Region.
6. For Inbound rules, create rules that allow specific traffic to reach your instance. For example,
use the following rules for a web server that accepts HTTP and HTTPS traffic. For more
examples, see Security group rules for different use cases (p. 1318).
a. Choose Add rule. For Type, choose HTTP. For Source, choose Anywhere.
b. Choose Add rule. For Type, choose HTTPS. For Source, choose Anywhere.
c. Choose Add rule. For Type, choose SSH. For Source, do one of the following:
• Choose My IP to automatically add the public IPv4 address of your local computer.
• Choose Custom and specify the public IPv4 address of your computer or network in CIDR
notation. To specify an individual IP address in CIDR notation, add the routing suffix /32,
for example, 203.0.113.25/32. If your company or your router allocates addresses
from a range, specify the entire range, such as 203.0.113.0/24.
Warning
For security reasons, do not choose Anywhere for Source with a rule for SSH. This
would allow access to your instance from all IP addresses on the internet. This is
acceptable for a short time in a test environment, but it is unsafe for production
environments.
7. For Outbound rules, keep the default rule, which allows all outbound traffic.
8. Choose Create security group.
Old console
• Choose HTTP from the Type list, and make sure that Source is set to Anywhere (0.0.0.0/0).
• Choose HTTPS from the Type list, and make sure that Source is set to Anywhere
(0.0.0.0/0).
• Choose SSH from the Type list. In the Source box, choose My IP to automatically populate
the field with the public IPv4 address of your local computer. Alternatively, choose Custom
and specify the public IPv4 address of your computer or network in CIDR notation. To
specify an individual IP address in CIDR notation, add the routing suffix /32, for example,
203.0.113.25/32. If your company allocates addresses from a range, specify the entire
range, such as 203.0.113.0/24.
Warning
For security reasons, do not allow SSH access from all IP addresses to your instance.
This is acceptable for a short time in a test environment, but it is unsafe for
production environments.
7. On the Outbound rules tab, keep the default rule, which allows all outbound traffic.
8. Choose Create security group.
Command line
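An equivalent security group can be sketched with the AWS CLI. The group name, VPC ID, and source address below are placeholders, and the --group-name form of authorize-security-group-ingress assumes the default VPC; for a non-default VPC, use --group-id with the group ID returned by create-security-group.

```shell
# Create the security group (placeholder name, description, and VPC ID)
aws ec2 create-security-group \
    --group-name my_SG_uswest2 \
    --description "Web server security group" \
    --vpc-id vpc-1234567890abcdef0

# Allow HTTP and HTTPS from anywhere
aws ec2 authorize-security-group-ingress --group-name my_SG_uswest2 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name my_SG_uswest2 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0

# Allow SSH only from a single address (placeholder; use your own
# public IPv4 address with the /32 routing suffix)
aws ec2 authorize-security-group-ingress --group-name my_SG_uswest2 \
    --protocol tcp --port 22 --cidr 203.0.113.25/32
```

Running these commands requires configured AWS credentials with permission to manage security groups.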
For more information, see Amazon EC2 security groups for Linux instances (p. 1303).
When you sign up for AWS, you can get started with Amazon EC2 using the AWS Free Tier. If you created
your AWS account less than 12 months ago, and have not already exceeded the free tier benefits for
Amazon EC2, it will not cost you anything to complete this tutorial, because we help you select options
that are within the free tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from
the time that you launch the instance until you terminate the instance (which is the final task of this
tutorial), even if it remains idle.
Contents
• Overview (p. 9)
• Prerequisites (p. 10)
• Step 1: Launch an instance (p. 10)
• Step 2: Connect to your instance (p. 11)
• Step 3: Clean up your instance (p. 11)
• Next steps (p. 12)
Related tutorials
• If you'd prefer to launch a Windows instance, see this tutorial in the Amazon EC2 User Guide for
Windows Instances: Get started with Amazon EC2 Windows instances.
• If you'd prefer to use the command line, see this tutorial in the AWS Command Line Interface User
Guide: Using Amazon EC2 through the AWS CLI.
Overview
The instance is an Amazon EBS-backed instance (meaning that the root volume is an EBS volume).
You can either specify the Availability Zone in which your instance runs, or let Amazon EC2 select an
Availability Zone for you. You can think of an Availability Zone as an isolated data center.
When you launch your instance, you secure it by specifying a key pair (to prove your identity) and a
security group (which acts as a virtual firewall to control ingoing and outgoing traffic). When you connect
to your instance, you must specify the private key of the key pair that you specified when launching your
instance.
Prerequisites
Before you begin, be sure that you've completed the steps in Set up to use Amazon EC2 (p. 5).
To launch an instance
c. Select your security group from the list of existing security groups, and then choose Review and
Launch.
7. On the Review Instance Launch page, choose Launch.
8. When prompted for a key pair, select Choose an existing key pair, then select the key pair that you
created when getting set up.
Warning
Don't select Proceed without a key pair. If you launch your instance without a key pair,
then you can't connect to it.
When you are ready, select the acknowledgement check box, and then choose Launch Instances.
9. A confirmation page lets you know that your instance is launching. Choose View Instances to close
the confirmation page and return to the console.
10. On the Instances screen, you can view the status of the launch. It takes a short time for an instance
to launch. When you launch an instance, its initial state is pending. After the instance starts, its
state changes to running and it receives a public DNS name. (If the Public IPv4 DNS column is hidden, choose the settings icon in the top-right corner, toggle on Public IPv4 DNS, and choose Confirm.)
11. It can take a few minutes for the instance to be ready so that you can connect to it. Check that your
instance has passed its status checks; you can view this information in the Status check column.
If you launched an instance that is not within the AWS Free Tier, you'll stop incurring charges for that
instance as soon as the instance status changes to shutting down or terminated. To keep your
instance for later, but not incur charges, you can stop the instance now and then start it again later. For
more information, see Stop and start your instance (p. 622).
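The stop-and-restart workflow can also be done from the AWS CLI; the instance ID below is a placeholder for your own instance's ID.

```shell
# Stop (not terminate) an instance to keep it for later without
# incurring instance usage charges (placeholder instance ID)
aws ec2 stop-instances --instance-ids i-1234567890abcdef0

# Start it again when you need it
aws ec2 start-instances --instance-ids i-1234567890abcdef0
```

Note that a stopped instance still accrues charges for its attached EBS storage, just not for instance usage. Running these commands requires configured AWS credentials.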
1. In the navigation pane, choose Instances. In the list of instances, select the instance.
2. Choose Instance state, Terminate instance.
3. Choose Terminate when prompted for confirmation.
Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains
visible on the console for a short while, and then the entry is automatically deleted. You cannot
remove the terminated instance from the console display yourself.
Next steps
After you start your instance, you might want to try some of the following exercises:
• Learn how to remotely manage your EC2 instance using Run Command. For more information, see
AWS Systems Manager Run Command in the AWS Systems Manager User Guide.
• Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier. For more information,
see Tracking your AWS Free Tier usage in the AWS Billing and Cost Management User Guide.
• Add an EBS volume. For more information, see Create an Amazon EBS volume (p. 1349) and Attach an
Amazon EBS volume to an instance (p. 1353).
• Install the LAMP stack. For more information, see Tutorial: Install a LAMP web server on Amazon Linux
2 (p. 15).
Security
• Manage access to AWS resources and APIs using identity federation, IAM users, and IAM roles. Establish
credential management policies and procedures for creating, distributing, rotating, and revoking AWS
access credentials. For more information, see IAM Best Practices in the IAM User Guide.
• Implement the least permissive rules for your security group. For more information, see Security group
rules (p. 1304).
• Regularly patch, update, and secure the operating system and applications on your instance. For more
information about updating Amazon Linux 2 or the Amazon Linux AMI, see Manage software on your
Linux instance in the Amazon EC2 User Guide for Linux Instances.
Storage
• Understand the implications of the root device type for data persistence, backup, and recovery. For
more information, see Storage for the root device (p. 96).
• Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume
with your data persists after instance termination. For more information, see Preserve Amazon EBS
volumes on instance termination (p. 650).
• Use the instance store available for your instance to store temporary data. Remember that the data
stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use
instance store for database storage, ensure that you have a cluster with a replication factor that
provides fault tolerance.
• Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption (p. 1536).
Resource management
• Use instance metadata and custom resource tags to track and identify your AWS resources. For
more information, see Instance metadata and user data (p. 710) and Tag your Amazon EC2
resources (p. 1666).
• View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time
that you'll need them. For more information, see Amazon EC2 service quotas (p. 1680).
• Regularly back up your EBS volumes using Amazon EBS snapshots (p. 1381), and create an Amazon
Machine Image (AMI) (p. 93) from your instance to save the configuration as a template for
launching future instances.
• Deploy critical components of your application across multiple Availability Zones, and replicate your
data appropriately.
• Design your applications to handle dynamic IP addressing when your instance restarts. For more
information, see Amazon EC2 instance IP addressing (p. 1018).
• Monitor and respond to events. For more information, see Monitor Amazon EC2 (p. 925).
• Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a
network interface or Elastic IP address to a replacement instance. For more information, see Elastic
network interfaces (p. 1067). For an automated solution, you can use Amazon EC2 Auto Scaling. For
more information, see the Amazon EC2 Auto Scaling User Guide.
• Regularly test the process of recovering your instances and Amazon EBS volumes if they fail.
Networking
• Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller
value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability
issues for your instances.
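As one illustration (a sketch only — many applications set the TTL per socket instead, and the right mechanism depends on your software), the system-wide default IPv4 TTL on a Linux instance can be raised with a sysctl setting:

```shell
# /etc/sysctl.d/99-default-ttl.conf (hypothetical file name)
# Raise the default IPv4 TTL from 64 to the maximum of 255.
net.ipv4.ip_default_ttl = 255
```

Apply it with `sudo sysctl --system`, or set the equivalent per-socket option in your application.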
Tutorials
• Tutorial: Install a LAMP web server on Amazon Linux 2 (p. 15)
• Tutorial: Install a LAMP web server on Amazon Linux 2022 (p. 25)
• Tutorial: Configure SSL/TLS on Amazon Linux 2 (p. 34)
• Tutorial: Configure SSL/TLS on Amazon Linux 2022 (p. 49)
• Tutorial: Host a WordPress blog on Amazon Linux 2 (p. 63)
• Tutorial: Install a LAMP web server on the Amazon Linux AMI (p. 72)
• Tutorial: Configure SSL/TLS with the Amazon Linux AMI (p. 82)
Tutorial: Install a LAMP web server on Amazon Linux 2

To complete this tutorial using AWS Systems Manager Automation instead of the following tasks, run the
AWSDocs-InstallALAMPServer-AL2 Automation document.
Tasks
• Step 1: Prepare the LAMP server (p. 16)
• Step 2: Test your LAMP server (p. 19)
• Step 3: Secure the database server (p. 20)
• Step 4: (Optional) Install phpMyAdmin (p. 21)
Step 1: Prepare the LAMP server
• This tutorial assumes that you have already launched a new instance using Amazon Linux 2, with a
public DNS name that is reachable from the internet. For more information, see Step 1: Launch an
instance (p. 10). You must also have configured your security group to allow SSH (port 22), HTTP (port
80), and HTTPS (port 443) connections. For more information about these prerequisites, see Authorize
inbound traffic for your Linux instances (p. 1285).
• The following procedure installs the latest PHP version available on Amazon Linux 2, currently PHP
7.2. If you plan to use PHP applications other than those described in this tutorial, you should check
their compatibility with PHP 7.2.
Note
This install package is bundled with MariaDB (lamp-mariadb10.2-php7.2). AWS has patched a
number of previous vulnerabilities in PHP 7.2 through backports; however, your security
software may still flag this version of PHP. Be sure to perform system updates frequently. You
can choose to install a newer version of PHP, but you must then install MariaDB separately.
The -y option installs the updates without asking for confirmation. If you would like to examine the
updates before installing, you can omit this option.
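On Amazon Linux 2 the update command referred to here is typically the following (a sketch — run it on your instance):

```shell
# Update all installed packages; -y skips the confirmation prompt.
sudo yum update -y
```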
3. Install the lamp-mariadb10.2-php7.2 and php7.2 Amazon Linux Extras repositories to get the
latest versions of the LAMP MariaDB and PHP packages for Amazon Linux 2.
If you receive an error stating sudo: amazon-linux-extras: command not found, then your
instance was not launched with an Amazon Linux 2 AMI (perhaps you are using the Amazon Linux
AMI instead). You can view your version of Amazon Linux using the following command.
cat /etc/system-release
To set up a LAMP web server on the Amazon Linux AMI, see Tutorial: Install a LAMP web server on the
Amazon Linux AMI (p. 72).
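Assuming an Amazon Linux 2 instance, the two extras topics named above can be enabled with a command along these lines (a sketch):

```shell
# Enable the MariaDB 10.2 and PHP 7.2 topics from Amazon Linux Extras.
sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
```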
4. Now that your instance is current, you can install the Apache web server, MariaDB, and PHP software
packages.
Use the yum install command to install multiple software packages and all related dependencies at
the same time.
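For example (a sketch; package names assume the Amazon Linux 2 repositories):

```shell
# Install the Apache web server (httpd) and the MariaDB server together.
sudo yum install -y httpd mariadb-server
```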
You can view the current versions of these packages using the following command:
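A sketch of the version query (package_name is a placeholder for any of the packages you installed):

```shell
# Show the installed and available versions of a package.
yum info package_name
```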
6. Use the systemctl command to configure the Apache web server to start at each system boot.
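The commands for this step look roughly like the following (a sketch):

```shell
# Start Apache now, and enable it to start at every system boot.
sudo systemctl start httpd
sudo systemctl enable httpd
```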
7. Add a security rule to allow inbound HTTP (port 80) connections to your instance if you have not
already done so. By default, a launch-wizard-N security group was set up for your instance during
initialization. This group contains a single rule to allow SSH connections.
Warning
Using 0.0.0.0/0 allows all IPv4 addresses to access your instance using SSH. This
is acceptable for a short time in a test environment, but it's unsafe for production
environments. In production, you authorize only a specific IP address or range of
addresses to access your instance.
d. Choose the link for the security group. Using the procedures in Add rules to a security
group (p. 1311), add a new inbound security rule with the following values:
• Type: HTTP
• Protocol: TCP
• Port Range: 80
• Source: Custom
8. Test your web server. In a web browser, type the public DNS address (or the public IP address) of
your instance. If there is no content in /var/www/html, you should see the Apache test page. You
can get the public DNS for your instance using the Amazon EC2 console (check the Public DNS
column; if this column is hidden, choose Show/Hide Columns (the gear-shaped icon) and choose
Public DNS).
Verify that the security group for the instance contains a rule to allow HTTP traffic on port 80. For
more information, see Add rules to a security group (p. 1311).
Important
If you are not using Amazon Linux, you may also need to configure the firewall on your
instance to allow these connections. For more information about how to configure the
firewall, see the documentation for your specific distribution.
Apache httpd serves files that are kept in a directory called the Apache document root. The Amazon
Linux Apache document root is /var/www/html, which by default is owned by root.
To allow the ec2-user account to manipulate files in this directory, you must modify the ownership and
permissions of the directory. There are many ways to accomplish this task. In this tutorial, you add ec2-
user to the apache group, to give the apache group ownership of the /var/www directory and assign
write permissions to the group.
1. Add your user (in this case, ec2-user) to the apache group.
2. Log out and then log back in again to pick up the new group, and then verify your membership.
a. Log out (use the exit command or close the terminal window):
b. To verify your membership in the apache group, reconnect to your instance, and then run the
following command:
3. Change the group ownership of /var/www and its contents to the apache group.
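A sketch of the commands behind these three steps (assuming the default ec2-user account):

```shell
# 1. Add ec2-user to the apache group.
sudo usermod -a -G apache ec2-user
# 2. After logging out and back in, verify the group membership.
groups
# 3. Give the apache group ownership of /var/www and its contents.
sudo chown -R ec2-user:apache /var/www
```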
4. To add group write permissions and to set the group ID on future subdirectories, change the
directory permissions of /var/www and its subdirectories.
[ec2-user ~]$ sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775
{} \;
5. To add group write permissions, recursively change the file permissions of /var/www and its
subdirectories:
Now, ec2-user (and any future members of the apache group) can add, delete, and edit files in the
Apache document root, enabling you to add content, such as a static website or a PHP application.
A web server running the HTTP protocol provides no transport security for the data that it sends or
receives. When you connect to an HTTP server using a web browser, the URLs that you visit, the content
of webpages that you receive, and the contents (including passwords) of any HTML forms that you
submit are all visible to eavesdroppers anywhere along the network pathway. The best practice for
securing your web server is to install support for HTTPS (HTTP Secure), which protects your data with
SSL/TLS encryption.
For information about enabling HTTPS on your server, see Tutorial: Configure SSL/TLS on Amazon Linux
2 (p. 34).

Step 2: Test your LAMP server
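To test the server, you first create a PHP page that reports the PHP configuration. A runnable sketch (it writes to a scratch directory here — on your instance, write to the Apache document root /var/www/html instead):

```shell
# Create a PHP file that calls phpinfo() to display the PHP configuration.
DOCROOT="${DOCROOT:-$(mktemp -d)}"   # on the instance: DOCROOT=/var/www/html
echo '<?php phpinfo(); ?>' > "$DOCROOT/phpinfo.php"
cat "$DOCROOT/phpinfo.php"
```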
If you get a "Permission denied" error when trying to run this command, try logging out and
logging back in again to pick up the proper group permissions that you configured in To set file
permissions (p. 18).
2. In a web browser, type the URL of the file that you just created. This URL is the public DNS address
of your instance followed by a forward slash and the file name. For example:
http://my.public.dns.amazonaws.com/phpinfo.php
If you do not see this page, verify that the /var/www/html/phpinfo.php file was created properly
in the previous step. You can also verify that all of the required packages were installed with the
following command.
If any of the required packages are not listed in your output, install them with the sudo yum install
package command. Also verify that the php7.2 and lamp-mariadb10.2-php7.2 extras are
enabled in the output of the amazon-linux-extras command.
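One way to perform the check (a sketch; the package names assume this tutorial's installation):

```shell
# List the installed LAMP packages and enabled extras topics.
sudo yum list installed httpd mariadb-server php-mysqlnd
sudo amazon-linux-extras
```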
3. Delete the phpinfo.php file. Although this can be useful information, it should not be broadcast to
the internet for security reasons.
You should now have a fully functional LAMP web server. If you add content to the Apache document
root at /var/www/html, you should be able to view that content at the public DNS address for your
instance.
Step 3: Secure the database server

2. Run mysql_secure_installation.
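MariaDB must be running before the script can connect to it. A sketch of the two commands:

```shell
# Start the MariaDB server, then run the interactive hardening script.
sudo systemctl start mariadb
sudo mysql_secure_installation
```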
i. Type the current root password. By default, the root account does not have a password set.
Press Enter.
ii. Type Y to set a password, and type a secure password twice. For more information about
creating a secure password, see https://identitysafe.norton.com/password-generator/.
Make sure to store this password in a safe place.
Setting a root password for MariaDB is only the most basic measure for securing your
database. When you build or install a database-driven application, you typically create a
database service user for that application and avoid using the root account for anything but
database administration.
b. Type Y to remove the anonymous user accounts.
c. Type Y to disable the remote root login.
d. Type Y to remove the test database.
e. Type Y to reload the privilege tables and save your changes.
3. (Optional) If you do not plan to use the MariaDB server right away, stop it. You can restart it when
you need it again.
4. (Optional) If you want the MariaDB server to start at every boot, type the following command.
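Sketches of the corresponding systemctl commands:

```shell
# Step 3: stop the MariaDB server until you need it again.
sudo systemctl stop mariadb
# Step 4: enable the MariaDB server to start at every boot.
sudo systemctl enable mariadb
```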
To install phpMyAdmin
2. Restart Apache.
3. Restart php-fpm.
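Sketches of the commands for these steps (the PHP extension names are the ones phpMyAdmin commonly requires; treat them as assumptions and check the phpMyAdmin documentation):

```shell
# Install required PHP extensions, then restart Apache and php-fpm.
sudo yum install -y php-mbstring php-xml
sudo systemctl restart httpd
sudo systemctl restart php-fpm
```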
5. Select a source package for the latest phpMyAdmin release from https://www.phpmyadmin.net/
downloads. To download the file directly to your instance, copy the link and paste it into a wget
command, as in this example:
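For example (a sketch — the URL shown is illustrative; copy the current link from the phpMyAdmin downloads page):

```shell
# Download the latest phpMyAdmin source package to the current directory.
wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
```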
6. Create a phpMyAdmin folder and extract the package into it with the following command.
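A sketch of the extraction step (file name as in the wget example above):

```shell
# Extract the package into a phpMyAdmin folder, dropping the archive's
# top-level directory from the extracted paths.
mkdir phpMyAdmin
tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C phpMyAdmin --strip-components 1
```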
9. In a web browser, type the URL of your phpMyAdmin installation. This URL is the public DNS
address (or the public IP address) of your instance followed by a forward slash and the name of your
installation directory. For example:
http://my.public.dns.amazonaws.com/phpMyAdmin
10. Log in to your phpMyAdmin installation with the root user name and the MySQL root password you
created earlier.
Your installation must still be configured before you put it into service. We suggest that you begin
by manually creating the configuration file, as follows:
a. To start with a minimal configuration file, use your favorite text editor to create a new file, and
then copy the contents of config.sample.inc.php into it.
b. Save the file as config.inc.php in the phpMyAdmin directory that contains index.php.
c. Refer to post-file creation instructions in the Using the Setup script section of the phpMyAdmin
installation instructions for any additional setup.
For information about using phpMyAdmin, see the phpMyAdmin User Guide.
Troubleshoot
This section offers suggestions for resolving common problems you may encounter while setting up a
new LAMP server.
If the httpd process is not running, repeat the steps described in To prepare the LAMP
server (p. 16).
• Is the firewall correctly configured?
Verify that the security group for the instance contains a rule to allow HTTP traffic on port 80. For
more information, see Add rules to a security group (p. 1311).
After you install Apache, the server is configured for HTTP traffic. To support HTTPS, enable TLS on
the server and install an SSL certificate. For information, see Tutorial: Configure SSL/TLS on Amazon
Linux 2 (p. 34).
• Is the firewall correctly configured?
Verify that the security group for the instance contains a rule to allow HTTPS traffic on port 443. For
more information, see Add rules to a security group (p. 1311).
Related topics
For more information about transferring files to your instance or installing a WordPress blog on your web
server, see the following documentation:
For more information about the commands and software used in this tutorial, see the following
webpages:
For more information about registering a domain name for your web server, or transferring an existing
domain name to this host, see Creating and Migrating Domains and Subdomains to Amazon Route 53 in
the Amazon Route 53 Developer Guide.
Tutorial: Install a LAMP web server on Amazon Linux 2022

To complete this tutorial using AWS Systems Manager Automation instead of the following tasks, run the
AWSDocs-InstallALAMPServer-AL2 Automation document.
Tasks
• Step 1: Prepare the LAMP server (p. 25)
• Step 2: Test your LAMP server (p. 28)
• Step 3: Secure the database server (p. 29)
• Step 4: (Optional) Install phpMyAdmin (p. 30)
• Troubleshoot (p. 33)
• Related topics (p. 33)
• This tutorial assumes that you have already launched a new instance using Amazon Linux 2022, with
a public DNS name that is reachable from the internet. For more information, see Step 1: Launch an
instance (p. 10). You must also have configured your security group to allow SSH (port 22), HTTP (port
80), and HTTPS (port 443) connections. For more information about these prerequisites, see Authorize
inbound traffic for your Linux instances (p. 1285).
• The following procedure installs the latest PHP version available on Amazon Linux 2022, currently PHP
7.4. If you plan to use PHP applications other than those described in this tutorial, you should check
their compatibility with PHP 7.4.

Step 1: Prepare the LAMP server
The -y option installs the updates without asking for confirmation. If you would like to examine the
updates before installing, you can omit this option.
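On Amazon Linux 2022 the package manager is dnf, so the update command looks roughly like this (a sketch):

```shell
# Update all installed packages; -y skips the confirmation prompt.
sudo dnf update -y
```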
3. Install the latest versions of the Apache web server and PHP packages for Amazon Linux 2022.
[ec2-user ~]$ sudo yum install -y httpd wget php-fpm php-mysqli php-json php php-devel
To set up a LAMP web server on Amazon Linux 2, see Tutorial: Install a LAMP web server on Amazon
Linux 2 (p. 15).
4. Install the MariaDB software packages. Use the dnf install command to install multiple software
packages and all related dependencies at the same time.
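For example (a sketch — the package name is an assumption based on the MariaDB version shipped with Amazon Linux 2022; verify it with dnf search mariadb):

```shell
# Install the MariaDB server package.
sudo dnf install -y mariadb105-server
```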
You can view the current versions of these packages using the following command:
6. Use the systemctl command to configure the Apache web server to start at each system boot.
7. Add a security rule to allow inbound HTTP (port 80) connections to your instance if you have not
already done so. By default, a launch-wizard-N security group was set up for your instance during
initialization. This group contains a single rule to allow SSH connections.
Warning
Using 0.0.0.0/0 allows all IPv4 addresses to access your instance using SSH. This
is acceptable for a short time in a test environment, but it's unsafe for production
environments. In production, you authorize only a specific IP address or range of
addresses to access your instance.
d. Choose the link for the security group. Using the procedures in Add rules to a security
group (p. 1311), add a new inbound security rule with the following values:
• Type: HTTP
• Protocol: TCP
• Port Range: 80
• Source: Custom
8. Test your web server. In a web browser, type the public DNS address (or the public IP address) of
your instance. If there is no content in /var/www/html, you should see the Apache test page. You
can get the public DNS for your instance using the Amazon EC2 console (check the Public DNS
column; if this column is hidden, choose Show/Hide Columns (the gear-shaped icon) and choose
Public DNS).
Verify that the security group for the instance contains a rule to allow HTTP traffic on port 80. For
more information, see Add rules to a security group (p. 1311).
Important
If you are not using Amazon Linux, you may also need to configure the firewall on your
instance to allow these connections. For more information about how to configure the
firewall, see the documentation for your specific distribution.
Apache httpd serves files that are kept in a directory called the Apache document root. The Amazon
Linux Apache document root is /var/www/html, which by default is owned by root.
To allow the ec2-user account to manipulate files in this directory, you must modify the ownership and
permissions of the directory. There are many ways to accomplish this task. In this tutorial, you add ec2-
user to the apache group, to give the apache group ownership of the /var/www directory and assign
write permissions to the group.
1. Add your user (in this case, ec2-user) to the apache group.
2. Log out and then log back in again to pick up the new group, and then verify your membership.
a. Log out (use the exit command or close the terminal window):
b. To verify your membership in the apache group, reconnect to your instance, and then run the
following command:
3. Change the group ownership of /var/www and its contents to the apache group.
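As in the Amazon Linux 2 tutorial, a sketch of the commands behind these three steps (assuming the default ec2-user account):

```shell
sudo usermod -a -G apache ec2-user     # 1. add ec2-user to the apache group
groups                                 # 2. after re-login, verify membership
sudo chown -R ec2-user:apache /var/www # 3. change group ownership
```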
4. To add group write permissions and to set the group ID on future subdirectories, change the
directory permissions of /var/www and its subdirectories.
[ec2-user ~]$ sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775
{} \;
5. To add group write permissions, recursively change the file permissions of /var/www and its
subdirectories:
Now, ec2-user (and any future members of the apache group) can add, delete, and edit files in the
Apache document root, enabling you to add content, such as a static website or a PHP application.
A web server running the HTTP protocol provides no transport security for the data that it sends or
receives. When you connect to an HTTP server using a web browser, the URLs that you visit, the content
of webpages that you receive, and the contents (including passwords) of any HTML forms that you
submit are all visible to eavesdroppers anywhere along the network pathway. The best practice for
securing your web server is to install support for HTTPS (HTTP Secure), which protects your data with
SSL/TLS encryption.
For information about enabling HTTPS on your server, see Tutorial: Configure SSL/TLS on Amazon Linux
2 (p. 34).

Step 2: Test your LAMP server
If you get a "Permission denied" error when trying to run this command, try logging out and
logging back in again to pick up the proper group permissions that you configured in To set file
permissions (p. 28).
2. In a web browser, type the URL of the file that you just created. This URL is the public DNS address
of your instance followed by a forward slash and the file name. For example:
http://my.public.dns.amazonaws.com/phpinfo.php
If you do not see this page, verify that the /var/www/html/phpinfo.php file was created properly
in the previous step. You can also verify that all of the required packages were installed with the
following command.
If any of the required packages are not listed in your output, install them with the sudo dnf install
package command.
3. Delete the phpinfo.php file. Although this can be useful information, it should not be broadcast to
the internet for security reasons.
You should now have a fully functional LAMP web server. If you add content to the Apache document
root at /var/www/html, you should be able to view that content at the public DNS address for your
instance.

Step 3: Secure the database server
The mysql_secure_installation command walks you through the process of setting a root password and
removing the insecure features from your installation. Even if you do not plan to use the MariaDB
server, we recommend performing this procedure.
2. Run mysql_secure_installation.
i. Type the current root password. By default, the root account does not have a password set.
Press Enter.
ii. Type Y to set a password, and type a secure password twice. For more information about
creating a secure password, see https://identitysafe.norton.com/password-generator/.
Make sure to store this password in a safe place.
Setting a root password for MariaDB is only the most basic measure for securing your
database. When you build or install a database-driven application, you typically create a
database service user for that application and avoid using the root account for anything but
database administration.
b. Type Y to remove the anonymous user accounts.
c. Type Y to disable the remote root login.
d. Type Y to remove the test database.
e. Type Y to reload the privilege tables and save your changes.
3. (Optional) If you do not plan to use the MariaDB server right away, stop it. You can restart it when
you need it again.
4. (Optional) If you want the MariaDB server to start at every boot, type the following command.
Step 4: (Optional) Install phpMyAdmin
To install phpMyAdmin
2. Restart Apache.
3. Restart php-fpm.
5. Select a source package for the latest phpMyAdmin release from https://www.phpmyadmin.net/
downloads. To download the file directly to your instance, copy the link and paste it into a wget
command, as in this example:
6. Create a phpMyAdmin folder and extract the package into it with the following command.
9. In a web browser, type the URL of your phpMyAdmin installation. This URL is the public DNS
address (or the public IP address) of your instance followed by a forward slash and the name of your
installation directory. For example:
http://my.public.dns.amazonaws.com/phpMyAdmin
10. Log in to your phpMyAdmin installation with the root user name and the MySQL root password you
created earlier.
Your installation must still be configured before you put it into service. We suggest that you begin
by manually creating the configuration file, as follows:
a. To start with a minimal configuration file, use your favorite text editor to create a new file, and
then copy the contents of config.sample.inc.php into it.
b. Save the file as config.inc.php in the phpMyAdmin directory that contains index.php.
c. Refer to post-file creation instructions in the Using the Setup script section of the phpMyAdmin
installation instructions for any additional setup.
For information about using phpMyAdmin, see the phpMyAdmin User Guide.
Troubleshoot
This section offers suggestions for resolving common problems you may encounter while setting up a
new LAMP server.
If the httpd process is not running, repeat the steps described in To prepare the LAMP
server (p. 25).
• Is the firewall correctly configured?
Verify that the security group for the instance contains a rule to allow HTTP traffic on port 80. For
more information, see Add rules to a security group (p. 1311).
After you install Apache, the server is configured for HTTP traffic. To support HTTPS, enable TLS on
the server and install an SSL certificate. For information, see Tutorial: Configure SSL/TLS on Amazon
Linux 2 (p. 34).
• Is the firewall correctly configured?
Verify that the security group for the instance contains a rule to allow HTTPS traffic on port 443. For
more information, see Add rules to a security group (p. 1311).
Related topics
For more information about transferring files to your instance or installing a WordPress blog on your web
server, see the following documentation:
For more information about the commands and software used in this tutorial, see the following
webpages:
For more information about registering a domain name for your web server, or transferring an existing
domain name to this host, see Creating and Migrating Domains and Subdomains to Amazon Route 53 in
the Amazon Route 53 Developer Guide.

Tutorial: Configure SSL/TLS on Amazon Linux 2
For historical reasons, web encryption is often referred to simply as SSL. While web browsers still
support SSL, its successor protocol TLS is less vulnerable to attack. Amazon Linux 2 disables server-
side support for all versions of SSL by default. Security standards bodies consider TLS 1.0 to be unsafe,
and both TLS 1.0 and TLS 1.1 are on track to be formally deprecated by the IETF. This tutorial contains
guidance based exclusively on enabling TLS 1.2. (A newer TLS 1.3 protocol exists, but it is not installed
by default on Amazon Linux 2.) For more information about the updated encryption standards, see RFC
7568 and RFC 8446.
Contents
• Prerequisites (p. 35)
• Step 1: Enable TLS on the server (p. 35)
Prerequisites
Before you begin this tutorial, complete the following steps:
• Launch an EBS-backed Amazon Linux 2 instance. For more information, see Step 1: Launch an
instance (p. 10).
• Configure your security groups to allow your instance to accept connections on the following TCP
ports:
• SSH (port 22)
• HTTP (port 80)
• HTTPS (port 443)
For more information, see Authorize inbound traffic for your Linux instances (p. 1285).
• Install the Apache web server. For step-by-step instructions, see Tutorial: Install a LAMP Web Server on
Amazon Linux 2 (p. 15). Only the httpd package and its dependencies are needed, so you can ignore
the instructions involving PHP and MariaDB.
• To identify and authenticate websites, the TLS public key infrastructure (PKI) relies on the Domain
Name System (DNS). To use your EC2 instance to host a public website, you need to register a domain
name for your web server or transfer an existing domain name to your Amazon EC2 host. Numerous
third-party domain registration and DNS hosting services are available for this, or you can use Amazon
Route 53.

Step 1: Enable TLS on the server
1. Connect to your instance (p. 11) and confirm that Apache is running.
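A sketch of the check whose output the next sentence refers to:

```shell
# Report whether the httpd service is enabled to start at boot.
sudo systemctl is-enabled httpd
```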
If the returned value is not "enabled," start Apache and set it to start each time the system boots.
[ec2-user ~]$ sudo systemctl start httpd && sudo systemctl enable httpd
2. To ensure that all of your software packages are up to date, perform a quick software update on
your instance. This process may take a few minutes, but it is important to make sure that you have
the latest security updates and bug fixes.
Note
The -y option installs the updates without asking for confirmation. If you would like to
examine the updates before installing, you can omit this option.
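The update command is typically the following on Amazon Linux 2 (a sketch):

```shell
# Update all installed packages; -y skips the confirmation prompt.
sudo yum update -y
```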
3. Now that your instance is current, add TLS support by installing the Apache module mod_ssl.
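A sketch of the installation command:

```shell
# Install the Apache SSL/TLS module.
sudo yum install -y mod_ssl
```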
Your instance now has the following files that you use to configure your secure server and create a
certificate for testing:
• /etc/httpd/conf.d/ssl.conf
The configuration file for mod_ssl. It contains directives telling Apache where to find encryption
keys and certificates, the TLS protocol versions to allow, and the encryption ciphers to accept.
• /etc/pki/tls/certs/make-dummy-cert
A script to generate a self-signed X.509 certificate and private key for your server host. This
certificate is useful for testing that Apache is properly set up to use TLS. Because it offers no proof
of identity, it should not be used in production. If used in production, it triggers warnings in web
browsers.
4. Run the script to generate a self-signed dummy certificate and key for testing.
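A sketch of how the script is typically invoked (the output file name matches the SSLCertificateFile default described below):

```shell
# Generate a self-signed certificate and key named localhost.crt.
cd /etc/pki/tls/certs
sudo ./make-dummy-cert localhost.crt
```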
This generates a new file localhost.crt in the /etc/pki/tls/certs/ directory. The specified
file name matches the default that is assigned in the SSLCertificateFile directive in /etc/httpd/
conf.d/ssl.conf.
This file contains both a self-signed certificate and the certificate's private key. Apache requires the
certificate and key to be in PEM format, which consists of Base64-encoded ASCII characters framed
by "BEGIN" and "END" lines, as in the following abbreviated example.
-----BEGIN CERTIFICATE-----
MIIEazCCA1OgAwIBAgICWxQwDQYJKoZIhvcNAQELBQAwgbExCzAJBgNVBAYTAi0t
MRIwEAYDVQQIDAlTb21lU3RhdGUxETAPBgNVBAcMCFNvbWVDaXR5MRkwFwYDVQQK
DBBTb21lT3JnYW5pemF0aW9uMR8wHQYDVQQLDBZTb21lT3JnYW5pemF0aW9uYWxV
bml0MRkwFwYDVQQDDBBpcC0xNzItMzEtMjAtMjM2MSQwIgYJKoZIhvcNAQkBFhVy
...
z5rRUE/XzxRLBZOoWZpNWTXJkQ3uFYH6s/sBwtHpKKZMzOvDedREjNKAvk4ws6F0
CuIjvubtUysVyQoMVPQ97ldeakHWeRMiEJFXg6kZZ0vrGvwnKoMh3DlK44D9dlU3
WanXWehT6FiSZvB4sTEXXJN2jdw8g+sHGnZ8zCOsclknYhHrCVD2vnBlZJKSZvak
3ZazhBxtQSukFMOnWPP2a0DMMFGYUHOd0BQE8sBJxg==
-----END CERTIFICATE-----
The file names and extensions are a convenience and have no effect on function. For example, you
can call a certificate cert.crt, cert.pem, or any other file name, so long as the related directive in
the ssl.conf file uses the same name.
Note
When you replace the default TLS files with your own customized files, be sure that they are
in PEM format.
5. Open the /etc/httpd/conf.d/ssl.conf file using your favorite text editor (such as vim or nano)
and comment out the following line, because the self-signed dummy certificate also contains the
key. If you do not comment out this line before you complete the next step, the Apache service fails
to start.
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
6. Restart Apache.
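The restart command, assuming systemd as on Amazon Linux 2, is:

```shell
sudo systemctl restart httpd
```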
Note
Make sure that TCP port 443 is accessible on your EC2 instance, as previously described.
7. Your Apache web server should now support HTTPS (secure HTTP) over port 443. Test it by entering
the IP address or fully qualified domain name of your EC2 instance into a browser URL bar with the
prefix https://.
Because you are connecting to a site with a self-signed, untrusted host certificate, your browser may
display a series of security warnings. Override the warnings and proceed to the site.
If the default Apache test page opens, it means that you have successfully configured TLS on your
server. All data passing between the browser and server is now encrypted.
Note
To prevent site visitors from encountering warning screens, you must obtain a trusted, CA-
signed certificate that not only encrypts, but also publicly authenticates you as the owner
of the site.
Step 2: Obtain a CA-signed certificate

A self-signed TLS X.509 host certificate is cryptologically identical to a CA-signed certificate. The
difference is social, not mathematical. A CA promises, at a minimum, to validate a domain's ownership
before issuing a certificate to an applicant. Each web browser contains a list of CAs trusted by the
browser vendor to do this. An X.509 certificate consists primarily of a public key that corresponds to
your private server key, and a signature by the CA that is cryptographically tied to the public key. When a
browser connects to a web server over HTTPS, the server presents a certificate for the browser to check
against its list of trusted CAs. If the signer is on the list, or accessible through a chain of trust consisting
of other trusted signers, the browser negotiates a fast encrypted data channel with the server and loads
the page.
Certificates generally cost money because of the labor involved in validating the requests, so it pays to
shop around. A few CAs offer basic-level certificates free of charge. The most notable of these CAs is the
Let's Encrypt project, which also supports the automation of the certificate creation and renewal process.
For more information about using Let's Encrypt as your CA, see Certificate automation: Let's Encrypt with
Certbot on Amazon Linux 2 (p. 45).
If you plan to offer commercial-grade services, AWS Certificate Manager is a good option.
Underlying the host certificate is the key. As of 2019, government and industry groups recommend using
a minimum key (modulus) size of 2048 bits for RSA keys intended to protect documents, through 2030.
The default modulus size generated by OpenSSL in Amazon Linux 2 is 2048 bits, which is suitable for use
in a CA-signed certificate. In the following procedure, an optional step is provided for those who want a
customized key, for example, one with a larger modulus or using a different encryption algorithm.
Important
These instructions for acquiring a CA-signed host certificate do not work unless you own a
registered and hosted DNS domain.
1. Connect to your instance (p. 11) and navigate to /etc/pki/tls/private/. This is the directory where
you store the server's private key for TLS. If you prefer to use an existing host key to generate the
CSR, skip to Step 3.
2. (Optional) Generate a new private key. Here are some examples of key configurations. Any of the
resulting keys works with your web server, but they vary in the degree and type of security that they
implement.
• Example 1: Create a default RSA host key. The resulting file, custom.key, is a 2048-bit RSA
private key.
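The command for this example is not shown above; a sketch using OpenSSL's genrsa subcommand (2048 bits is the default modulus size) is:

```shell
# Create an RSA private key with the default 2048-bit modulus
sudo openssl genrsa -out custom.key
```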
• Example 2: Create a stronger RSA key with a bigger modulus. The resulting file, custom.key, is a
4096-bit RSA private key.
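The corresponding command is not shown above; a sketch with the modulus size given explicitly is:

```shell
# Create an RSA private key with a 4096-bit modulus
sudo openssl genrsa -out custom.key 4096
```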
• Example 3: Create a 4096-bit encrypted RSA key with password protection. The resulting file,
custom.key, is a 4096-bit RSA private key encrypted with the AES-128 cipher.

[ec2-user ~]$ sudo openssl genrsa -aes128 -passout pass:abcde12345 -out custom.key 4096

Important
Encrypting the key provides greater security, but because an encrypted key requires a
password, services depending on it cannot be auto-started. Each time you use this key,
you must supply the password (in the preceding example, "abcde12345") over an SSH
connection.
• Example 4: Create a key using a non-RSA cipher. RSA cryptography can be relatively slow because
of the size of its public keys, which are based on the product of two large prime numbers.
However, it is possible to create keys for TLS that use non-RSA ciphers. Keys based on the
mathematics of elliptic curves are smaller and computationally faster when delivering an
equivalent level of security.
[ec2-user ~]$ sudo openssl ecparam -name prime256v1 -out custom.key -genkey
The result is a 256-bit elliptic curve private key using prime256v1, a "named curve" that OpenSSL
supports. Its cryptographic strength is slightly greater than a 2048-bit RSA key, according to NIST.
Note
Not all CAs provide the same level of support for elliptic-curve-based keys as for RSA
keys.
Make sure that the new private key has highly restrictive ownership and permissions (owner=root,
group=root, read/write for owner only). The commands would be as shown in the following
example.
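The commands themselves are not reproduced here; a sketch that sets owner=root, group=root, and mode 600 (read/write for owner only), assuming the key is named custom.key, is:

```shell
sudo chown root:root custom.key
sudo chmod 600 custom.key
# Confirm the resulting ownership and permissions
ls -al custom.key
```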
After you have created and configured a satisfactory key, you can create a CSR.
3. Create a CSR using your preferred key. The following example uses custom.key.
[ec2-user ~]$ sudo openssl req -new -key custom.key -out csr.pem
OpenSSL opens a dialog and prompts you for the information shown in the following table. All of
the fields except Common Name are optional for a basic, domain-validated host certificate.
Country Name
The two-letter ISO abbreviation for your country. Example: US (=United States)
Organization Name
The full legal name of your organization. Do not abbreviate your organization name.
Example: Example Corporation
Finally, OpenSSL prompts you for an optional challenge password. This password applies only to the
CSR and to transactions between you and your CA, so follow the CA's recommendations about this
and the other optional field, optional company name. The CSR challenge password has no effect on
server operation.
The resulting file csr.pem contains your public key, your digital signature of your public key, and
the metadata that you entered.
4. Submit the CSR to a CA. This usually consists of opening your CSR file in a text editor and copying
the contents into a web form. At this time, you may be asked to supply one or more subject
alternate names (SANs) to be placed on the certificate. If www.example.com is the common name,
then example.com would be a good SAN, and vice versa. A visitor to your site entering either of
these names would see an error-free connection. If your CA web form allows it, include the common
name in the list of SANs. Some CAs include it automatically.
After your request has been approved, you receive a new host certificate signed by the CA. You
might also be instructed to download an intermediate certificate file that contains additional
certificates needed to complete the CA's chain of trust.
Note
Your CA might send you files in multiple formats intended for various purposes. For this
tutorial, you should only use a certificate file in PEM format, which is usually (but not
always) marked with a .pem or .crt file extension. If you are uncertain which file to use,
open the files with a text editor and find the one containing one or more blocks beginning
with the following line.
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
You can also test the file at the command line as shown in the following.
Verify that these lines appear in the file. Do not use files ending with .p7b, .p7c, or similar
file extensions.
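The command-line test mentioned in the note is not reproduced here; one way to inspect a candidate file (the file name certificate.crt is a placeholder) is:

```shell
# Print the decoded contents of a PEM certificate; fails if the file is not a certificate
openssl x509 -in certificate.crt -text
```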
5. Place the new CA-signed certificate and any intermediate certificates in the /etc/pki/tls/certs
directory.
Note
There are several ways to upload your new certificate to your EC2 instance, but the most
straightforward and informative way is to open a text editor (for example, vi, nano,
or notepad) on both your local computer and your instance, and then copy and paste
the file contents between them. You need root [sudo] permissions when performing
these operations on the EC2 instance. This way, you can see immediately if there are any
permission or path problems. Be careful, however, not to add any additional lines while
copying the contents, or to change them in any way.
From inside the /etc/pki/tls/certs directory, check that the file ownership, group, and
permission settings match the highly restrictive Amazon Linux 2 defaults (owner=root, group=root,
read/write for owner only). The following example shows the commands to use.
The permissions for the intermediate certificate file are less stringent (owner=root, group=root,
owner can write, group can read, world can read). The following example shows the commands to
use.
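The commands are not shown in this extract; a sketch of both permission settings (600 for the host certificate, 644 for the intermediate certificate; the file names are examples) is:

```shell
# Host certificate: read/write for owner only
sudo chown root:root custom.crt
sudo chmod 600 custom.crt
# Intermediate certificate: owner can write, group and world can read
sudo chown root:root intermediate.crt
sudo chmod 644 intermediate.crt
```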
6. Place the private key that you used to create the CSR in the /etc/pki/tls/private/ directory.
Note
There are several ways to upload your custom key to your EC2 instance, but the most
straightforward and informative way is to open a text editor (for example, vi, nano,
or notepad) on both your local computer and your instance, and then copy and paste
the file contents between them. You need root [sudo] permissions when performing
these operations on the EC2 instance. This way, you can see immediately if there are any
permission or path problems. Be careful, however, not to add any additional lines while
copying the contents, or to change them in any way.
From inside the /etc/pki/tls/private directory, use the following commands to verify that the
file ownership, group, and permission settings match the highly restrictive Amazon Linux 2 defaults
(owner=root, group=root, read/write for owner only).
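A sketch of the verification, assuming the key is named custom.key:

```shell
sudo chown root:root custom.key
sudo chmod 600 custom.key
# Confirm the resulting ownership and permissions
ls -al custom.key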
7. Edit /etc/httpd/conf.d/ssl.conf to provide the paths to your CA-signed certificate, any
intermediate certificate, and your private key.
a. Provide the path and file name of the CA-signed host certificate in Apache's
SSLCertificateFile directive:
SSLCertificateFile /etc/pki/tls/certs/custom.crt
b. If you received an intermediate certificate file (intermediate.crt in this example), provide its
path and file name using Apache's SSLCACertificateFile directive:
SSLCACertificateFile /etc/pki/tls/certs/intermediate.crt
Note
Some CAs combine the host certificate and the intermediate certificates in a single file,
making the SSLCACertificateFile directive unnecessary. Consult the instructions
provided by your CA.
c. Provide the path and file name of the private key (custom.key in this example) in Apache's
SSLCertificateKeyFile directive:
SSLCertificateKeyFile /etc/pki/tls/private/custom.key
8. Save /etc/httpd/conf.d/ssl.conf and restart Apache.
9. Test your server by entering your domain name into a browser URL bar with the prefix https://.
Your browser should load the test page over HTTPS without generating errors.
Step 3: Test and harden the security configuration

On the Qualys SSL Labs site, enter the fully qualified domain name of your server, in the form
www.example.com. After about two minutes, you receive a grade (from A to F) for your site and a
detailed breakdown of the findings. The following table summarizes the report for a domain with
settings identical to the default Apache configuration on Amazon Linux 2, and with a default Certbot
certificate.
Overall rating B
Certificate 100%
Though the overview shows that the configuration is mostly sound, the detailed report flags several
potential problems, listed here in order of severity:
✗ The RC4 cipher is supported for use by certain older browsers. A cipher is the mathematical core of
an encryption algorithm. RC4, a fast cipher used to encrypt TLS data-streams, is known to have several
serious weaknesses. Unless you have very good reasons to support legacy browsers, you should disable
this.
✗ Old TLS versions are supported. The configuration supports TLS 1.0 (already deprecated) and TLS 1.1
(on a path to deprecation). Since 2018, only TLS 1.2 and later have been recommended.
✗ Forward secrecy is not fully supported. Forward secrecy is a feature of algorithms that encrypt using
temporary (ephemeral) session keys derived from the private key. This means in practice that attackers
cannot decrypt HTTPS data even if they possess a web server's long-term private key.
1. Open the configuration file /etc/httpd/conf.d/ssl.conf in a text editor and comment out the
following line by entering "#" at the beginning of the line.
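The affected lines are not reproduced in this extract. A configuration consistent with this step (directive spellings follow mod_ssl conventions; treat this as a sketch) comments out the permissive default and adds an explicit protocol list:

```apacheconf
# Comment out the permissive default:
#SSLProtocol all -SSLv3

# Allow only TLS 1.2, refusing SSLv2, SSLv3, TLS 1.0, and TLS 1.1:
SSLProtocol -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 +TLSv1.2
```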
This directive explicitly disables SSL versions 2 and 3, as well as TLS versions 1.0 and 1.1. The
server now refuses to accept encrypted connections with clients using anything except TLS 1.2.
The verbose wording in the directive conveys more clearly, to a human reader, what the server is
configured to do.
Note
Disabling TLS versions 1.0 and 1.1 in this manner blocks a small percentage of outdated
web browsers from accessing your site.
#SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
2. Specify explicit cipher suites and a cipher order that prioritizes forward secrecy and avoids insecure
ciphers. The SSLCipherSuite directive used here is based on output from the Mozilla SSL
Configuration Generator, which tailors a TLS configuration to the specific software running on
your server. (For more information, see Mozilla's useful resource Security/Server Side TLS.) First
determine your Apache and OpenSSL versions by using the output from the following commands.
For example, if the returned information is Apache 2.4.34 and OpenSSL 1.0.2, we enter this into
the generator. If you choose the "modern" compatibility model, this creates an SSLCipherSuite
directive that aggressively enforces security but still works for most browsers. If your software
doesn't support the modern configuration, you can update your software or choose the
"intermediate" configuration instead.
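The version-check commands are not shown above; assuming httpd and openssl are on your PATH, standard checks are:

```shell
# Report the Apache version
httpd -v
# Report the OpenSSL version
openssl version
```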
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
The selected ciphers have ECDHE in their names, an abbreviation for Elliptic Curve Diffie-Hellman
Ephemeral. The term ephemeral indicates forward secrecy. As a by-product, these ciphers do not
support RC4.
We recommend that you use an explicit list of ciphers instead of relying on defaults or terse
directives whose content isn't visible.
3. Uncomment the following line by removing the "#".

#SSLHonorCipherOrder on
This directive forces the server to prefer high-ranking ciphers, including (in this case) those that
support forward secrecy. With this directive turned on, the server tries to establish a strong secure
connection before falling back to allowed ciphers with lesser security.
After completing both of these procedures, save the changes to /etc/httpd/conf.d/ssl.conf and
restart Apache.
If you test the domain again on Qualys SSL Labs, you should see that the RC4 vulnerability and other
warnings are gone and the summary looks something like the following.
Overall rating A
Certificate 100%
Each update to OpenSSL introduces new ciphers and removes support for old ones. Keep your EC2
Amazon Linux 2 instance up-to-date, watch for security announcements from OpenSSL, and be alert to
reports of new security exploits in the technical press.
Troubleshoot
• My Apache webserver doesn't start unless I enter a password
This is expected behavior if you installed an encrypted, password-protected, private server key.
You can remove the encryption and password requirement from the key. Assuming that you have a
private encrypted RSA key called custom.key in the default directory, and that the password on it is
abcde12345, run the following commands on your EC2 instance to generate an unencrypted version
of the key.
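The commands are not shown in this extract; a sketch, using the key name, path, and password from the surrounding text, is:

```shell
cd /etc/pki/tls/private/
# Keep a backup of the encrypted key
sudo cp custom.key custom.key.bak
# Write a decrypted copy of the key
sudo openssl rsa -in custom.key -passin pass:abcde12345 -out custom.key.nocrypt
# Replace the encrypted key with the decrypted copy and restore strict permissions
sudo mv custom.key.nocrypt custom.key
sudo chown root:root custom.key
sudo chmod 600 custom.key
sudo systemctl restart httpd
```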
• When you are installing the required packages for SSL, you may see errors similar to the following.
This typically means that your EC2 instance is not running Amazon Linux 2. This tutorial only supports
instances freshly created from an official Amazon Linux 2 AMI.
Certificate automation: Let's Encrypt with Certbot on Amazon Linux 2

The Let's Encrypt certificate authority is the centerpiece of an effort by the Electronic Frontier
Foundation (EFF) to encrypt the entire internet. In line with that goal, Let's Encrypt host certificates
are designed to be created, validated, installed, and maintained with minimal human intervention. The
automated aspects of certificate management are carried out by a software agent running on your
web server. After you install and configure the agent, it communicates securely with Let's Encrypt and
performs administrative tasks on Apache and the key management system. This tutorial uses the free
Certbot agent because it allows you either to supply a customized encryption key as the basis for your
certificates, or to allow the agent itself to create a key based on its defaults. You can also configure
Certbot to renew your certificates on a regular basis without human interaction, as described in To
automate Certbot (p. 48). For more information, consult the Certbot User Guide and man page.
Certbot is not officially supported on Amazon Linux 2, but is available for download and functions
correctly when installed. We recommend that you make the following backups to protect your data and
avoid inconvenience:
• Before you begin, take a snapshot of your Amazon EBS root volume. This allows you to restore the
original state of your EC2 instance. For information about creating EBS snapshots, see Create Amazon
EBS snapshots (p. 1385).
• The procedure below requires you to edit your httpd.conf file, which controls Apache's operation.
Certbot makes its own automated changes to this and other configuration files. Make a backup copy of
your entire /etc/httpd directory in case you need to restore it.
Prepare to install
Complete the following procedures before you install Certbot.
1. Download the Extra Packages for Enterprise Linux (EPEL) 7 repository packages. These are required
to supply dependencies needed by Certbot.
a. Navigate to your home directory (/home/ec2-user). Download EPEL using the following
command.
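The download command is not reproduced here. One approach consistent with EPEL 7 (the mirror URL and package wildcard are assumptions; verify them against the current Fedora mirrors) is:

```shell
# Fetch and install the EPEL 7 release package, then enable its repositories
sudo wget -r --no-parent -A 'epel-release-*.rpm' https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/
sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm
sudo yum-config-manager --enable epel*
```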
You can confirm that EPEL is enabled with the following command.
[ec2-user ~]$ sudo yum repolist all
...
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64
enabled: 12949+175
epel-debuginfo/x86_64 Extra Packages for Enterprise Linux 7 - x86_64
- Debug enabled: 2890
epel-source/x86_64 Extra Packages for Enterprise Linux 7 - x86_64
- Source enabled: 0
epel-testing/x86_64 Extra Packages for Enterprise Linux 7 -
Testing - x86_64 enabled: 778+12
epel-testing-debuginfo/x86_64 Extra Packages for Enterprise Linux 7 -
Testing - x86_64 - Debug enabled: 107
epel-testing-source/x86_64 Extra Packages for Enterprise Linux 7 -
Testing - x86_64 - Source enabled: 0
...
2. Edit the main Apache configuration file, /etc/httpd/conf/httpd.conf. Locate the "Listen 80"
directive and add the following lines after it, replacing the example domain names with the actual
Common Name and Subject Alternative Name (SAN).
<VirtualHost *:80>
DocumentRoot "/var/www/html"
ServerName "example.com"
ServerAlias "www.example.com"
</VirtualHost>
Save the file and restart Apache.
3. Run Certbot.
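The invocation is not shown above; Certbot's interactive mode is started with:

```shell
sudo certbot
```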
4. At the prompt "Enter email address (used for urgent renewal and security notices)," enter a contact
address and press Enter.
5. Agree to the Let's Encrypt Terms of Service at the prompt. Enter "A" and press Enter to proceed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel: A
6. When Certbot asks whether to authorize EFF to put you on their mailing list, enter "Y" or "N" and press Enter.
7. Certbot displays the Common Name and Subject Alternative Name (SAN) that you provided in the
VirtualHost block.
Created an SSL vhost at /etc/httpd/conf/httpd-le-ssl.conf
Deploying Certificate for example.com to VirtualHost /etc/httpd/conf/httpd-le-ssl.conf
Enabling site /etc/httpd/conf/httpd-le-ssl.conf by adding Include to root configuration
Deploying Certificate for www.example.com to VirtualHost /etc/httpd/conf/httpd-le-
ssl.conf
Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):
8. To allow visitors to connect to your server via unencrypted HTTP, enter "1". If you want to accept
only encrypted connections via HTTPS, enter "2". Press Enter to submit your choice.
9. Certbot completes the configuration of Apache and reports success and other information.
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/certbot.oneeyedman.net/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/certbot.oneeyedman.net/privkey.pem
Your cert will expire on 2019-08-01. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
10. After you complete the installation, test and optimize the security of your server as described in
Step 3: Test and harden the security configuration (p. 42).
To automate Certbot
1. Open the /etc/crontab file in a text editor, such as vim or nano, using sudo. Alternatively, use
sudo crontab -e.
2. Add a line similar to the following and save the file.
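The crontab line itself is not reproduced in this extract; a line consistent with the field-by-field explanation that follows (assuming certbot is on root's PATH) is:

```
39 1,13 * * * root certbot renew --no-self-upgrade
```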
39 1,13 * * *
Schedules the command to run at 01:39 and 13:39 every day. The selected values are arbitrary,
but the Certbot developers suggest running the command at least twice daily. This guarantees
that any certificate found to be compromised is promptly revoked and replaced.
root
The user that runs the command.
certbot renew --no-self-upgrade
The command to be run. The renew subcommand causes Certbot to check any previously
obtained certificates and to renew those that are approaching expiration. The --no-self-
upgrade flag prevents Certbot from upgrading itself without your intervention.
3. Restart the cron daemon.
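Assuming systemd, the restart command is:

```shell
sudo systemctl restart crond
```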
Configure SSL/TLS on Amazon Linux 2022

For historical reasons, web encryption is often referred to simply as SSL. While web browsers still
support SSL, its successor protocol TLS is less vulnerable to attack. Amazon Linux 2022 disables server-
side support for all versions of SSL by default. Security standards bodies consider TLS 1.0 to be unsafe,
and both TLS 1.0 and TLS 1.1 are on track to be formally deprecated by the IETF. This tutorial contains
guidance based exclusively on enabling TLS 1.2. (A newer TLS 1.3 protocol exists, but it is not installed by
default on Amazon Linux 2022.) For more information about the updated encryption standards, see RFC
7568 and RFC 8446.
ACM for Nitro Enclaves works with nginx running on your Amazon EC2 Linux instance to create
private keys, to distribute certificates and private keys, and to manage certificate renewals.
To use ACM for Nitro Enclaves, you must use an enclave-enabled Linux instance.
For more information, see What is AWS Nitro Enclaves? and AWS Certificate Manager for Nitro
Enclaves in the AWS Nitro Enclaves User Guide.
Contents
• Prerequisites (p. 50)
• Step 1: Enable TLS on the server (p. 51)
• Step 2: Obtain a CA-signed certificate (p. 52)
• Step 3: Test and harden the security configuration (p. 57)
• Troubleshoot (p. 59)
• Certificate automation: Let's Encrypt with Certbot on Amazon Linux 2022 (p. 60)
Prerequisites
Before you begin this tutorial, complete the following steps:
• Launch an EBS-backed Amazon Linux 2022 instance. For more information, see Step 1: Launch an
instance (p. 10).
• Configure your security groups to allow your instance to accept connections on the following TCP
ports:
• SSH (port 22)
• HTTP (port 80)
• HTTPS (port 443)
For more information, see Authorize inbound traffic for your Linux instances (p. 1285).
• Install the Apache web server. For step-by-step instructions, see Tutorial: Install a LAMP Web Server on
Amazon Linux 2022 (p. 15). Only the httpd package and its dependencies are needed, so you can
ignore the instructions involving PHP and MariaDB.
• To identify and authenticate websites, the TLS public key infrastructure (PKI) relies on the Domain
Name System (DNS). To use your EC2 instance to host a public website, you need to register a domain
name for your web server or transfer an existing domain name to your Amazon EC2 host. Numerous
third-party domain registration and DNS hosting services are available for this, or you can use Amazon
Route 53.
Step 1: Enable TLS on the server

1. Connect to your instance (p. 11) and confirm that Apache is running.
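The status check is not reproduced here; a command consistent with the "enabled" return value mentioned below is:

```shell
# Report whether the httpd service starts at boot
sudo systemctl is-enabled httpd
```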
If the returned value is not "enabled," start Apache and set it to start each time the system boots.
[ec2-user ~]$ sudo systemctl start httpd && sudo systemctl enable httpd
2. To ensure that all of your software packages are up to date, perform a quick software update on
your instance. This process may take a few minutes, but it is important to make sure that you have
the latest security updates and bug fixes.
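The update command is not shown in this extract; a sketch, assuming the DNF package manager used by Amazon Linux 2022, is:

```shell
# Apply all available package updates without prompting for confirmation
sudo dnf upgrade -y
```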
Note
The -y option installs the updates without asking for confirmation. If you would like to
examine the updates before installing, you can omit this option.
3. After you enter the following command, you will be taken to a prompt where you can enter
information about your site.
[ec2-user ~]$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/pki/tls/private/apache-selfsigned.key -out /etc/pki/tls/certs/apache-selfsigned.crt
This generates two files: the private key /etc/pki/tls/private/apache-selfsigned.key and the
self-signed certificate /etc/pki/tls/certs/apache-selfsigned.crt. Because these names differ
from the defaults assigned in the SSLCertificateFile and SSLCertificateKeyFile directives in
/etc/httpd/conf.d/ssl.conf, update those directives to point to the new files.
Your instance now has the following files that you use to configure your secure server and create a
certificate for testing:
• /etc/httpd/conf.d/ssl.conf
The configuration file for mod_ssl. It contains directives telling Apache where to find encryption
keys and certificates, the TLS protocol versions to allow, and the encryption ciphers to accept. This
will be your local certificate file:
• /etc/pki/tls/certs/apache-selfsigned.crt
Apache requires the certificate and key to be in PEM format, which consists of Base64-encoded
ASCII characters framed by "BEGIN" and "END" lines, as in the following abbreviated example.
-----BEGIN CERTIFICATE-----
MIIEazCCA1OgAwIBAgICWxQwDQYJKoZIhvcNAQELBQAwgbExCzAJBgNVBAYTAi0t
MRIwEAYDVQQIDAlTb21lU3RhdGUxETAPBgNVBAcMCFNvbWVDaXR5MRkwFwYDVQQK
DBBTb21lT3JnYW5pemF0aW9uMR8wHQYDVQQLDBZTb21lT3JnYW5pemF0aW9uYWxV
bml0MRkwFwYDVQQDDBBpcC0xNzItMzEtMjAtMjM2MSQwIgYJKoZIhvcNAQkBFhVy
...
z5rRUE/XzxRLBZOoWZpNWTXJkQ3uFYH6s/sBwtHpKKZMzOvDedREjNKAvk4ws6F0
CuIjvubtUysVyQoMVPQ97ldeakHWeRMiEJFXg6kZZ0vrGvwnKoMh3DlK44D9dlU3
WanXWehT6FiSZvB4sTEXXJN2jdw8g+sHGnZ8zCOsclknYhHrCVD2vnBlZJKSZvak
3ZazhBxtQSukFMOnWPP2a0DMMFGYUHOd0BQE8sBJxg==
-----END CERTIFICATE-----
The file names and extensions are a convenience and have no effect on function. For example, you
can call a certificate cert.crt, cert.pem, or any other file name, so long as the related directive in
the ssl.conf file uses the same name.
Note
When you replace the default TLS files with your own customized files, be sure that they are
in PEM format.
4. Restart Apache.
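Assuming systemd, the restart command is:

```shell
sudo systemctl restart httpd
```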
Note
Make sure that TCP port 443 is accessible on your EC2 instance, as previously described.
5. Your Apache web server should now support HTTPS (secure HTTP) over port 443. Test it by entering
the IP address or fully qualified domain name of your EC2 instance into a browser URL bar with the
prefix https://.
Because you are connecting to a site with a self-signed, untrusted host certificate, your browser may
display a series of security warnings. Override the warnings and proceed to the site.
If the default Apache test page opens, it means that you have successfully configured TLS on your
server. All data passing between the browser and server is now encrypted.
Note
To prevent site visitors from encountering warning screens, you must obtain a trusted, CA-
signed certificate that not only encrypts, but also publicly authenticates you as the owner
of the site.
Step 2: Obtain a CA-signed certificate

A self-signed TLS X.509 host certificate is cryptologically identical to a CA-signed certificate. The
difference is social, not mathematical. A CA promises, at a minimum, to validate a domain's ownership
before issuing a certificate to an applicant. Each web browser contains a list of CAs trusted by the
browser vendor to do this. An X.509 certificate consists primarily of a public key that corresponds to
your private server key, and a signature by the CA that is cryptographically tied to the public key. When a
browser connects to a web server over HTTPS, the server presents a certificate for the browser to check
against its list of trusted CAs. If the signer is on the list, or accessible through a chain of trust consisting
of other trusted signers, the browser negotiates a fast encrypted data channel with the server and loads
the page.
Certificates generally cost money because of the labor involved in validating the requests, so it pays to
shop around. A few CAs offer basic-level certificates free of charge. The most notable of these CAs is the
Let's Encrypt project, which also supports the automation of the certificate creation and renewal process.
For more information about using Let's Encrypt as your CA, see Certificate automation: Let's Encrypt with
Certbot on Amazon Linux 2 (p. 45).
If you plan to offer commercial-grade services, AWS Certificate Manager is a good option.
Underlying the host certificate is the key. As of 2019, government and industry groups recommend using
a minimum key (modulus) size of 2048 bits for RSA keys intended to protect documents through 2030.
The default modulus size generated by OpenSSL in Amazon Linux 2022 is 2048 bits, which is suitable for
use in a CA-signed certificate. In the following procedure, an optional step is provided for those who want
a customized key, for example, one with a larger modulus or one that uses a different encryption algorithm.
Important
These instructions for acquiring a CA-signed host certificate do not work unless you own a
registered and hosted DNS domain.
1. Connect to your instance (p. 11) and navigate to /etc/pki/tls/private/. This is the directory where
you store the server's private key for TLS. If you prefer to use an existing host key to generate the
CSR, skip to Step 3.
2. (Optional) Generate a new private key. Here are some examples of key configurations. Any of the
resulting keys works with your web server, but they vary in the degree and type of security that they
implement.
• Example 1: Create a default RSA host key. The resulting file, custom.key, is a 2048-bit RSA
private key.
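The command for this example is not shown in this extract. A plausible sketch, assuming the OpenSSL defaults described above (run it with sudo from /etc/pki/tls/private/ on the instance), is:

```shell
# Generate an RSA key with OpenSSL's default modulus size.
# On current OpenSSL releases the default is 2048 bits.
openssl genrsa -out custom.key
```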
• Example 2: Create a stronger RSA key with a bigger modulus. The resulting file, custom.key, is a
4096-bit RSA private key.
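Again the command itself is elided; under the same assumptions it would look something like:

```shell
# Generate a 4096-bit RSA key (the trailing argument overrides the default size).
openssl genrsa -out custom.key 4096
```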
• Example 3: Create a 4096-bit encrypted RSA key with password protection. The resulting file,
custom.key, is a 4096-bit RSA private key encrypted with the AES-128 cipher.
Important
Encrypting the key provides greater security, but because an encrypted key requires a
password, services depending on it cannot be auto-started. Each time you use this key,
you must supply the password (in the preceding example, "abcde12345") over an SSH
connection.
[ec2-user ~]$ sudo openssl genrsa -aes128 -passout pass:abcde12345 -out custom.key
4096
• Example 4: Create a key using a non-RSA cipher. RSA cryptography can be relatively slow because
of the size of its public keys, which are based on the product of two large prime numbers.
However, it is possible to create keys for TLS that use non-RSA ciphers. Keys based on the
mathematics of elliptic curves are smaller and computationally faster when delivering an
equivalent level of security.
[ec2-user ~]$ sudo openssl ecparam -name prime256v1 -out custom.key -genkey
The result is a 256-bit elliptic curve private key using prime256v1, a "named curve" that OpenSSL
supports. Its cryptographic strength is slightly greater than a 2048-bit RSA key, according to NIST.
Note
Not all CAs provide the same level of support for elliptic-curve-based keys as for RSA
keys.
Make sure that the new private key has highly restrictive ownership and permissions (owner=root,
group=root, read/write for owner only). The commands would be as shown in the following
example.
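The commands themselves are elided in this extract. A sketch, assuming a root shell in /etc/pki/tls/private/ (a placeholder file stands in for the key, which would already exist there):

```shell
# Restrict the key to root read/write only.
touch custom.key              # placeholder; in practice the key already exists
chown root:root custom.key    # requires root; prefix with sudo otherwise
chmod 600 custom.key          # read/write for owner only
ls -l custom.key              # should show -rw------- 1 root root ...
```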
After you have created and configured a satisfactory key, you can create a CSR.
3. Create a CSR using your preferred key. The following example uses custom.key.
[ec2-user ~]$ sudo openssl req -new -key custom.key -out csr.pem
OpenSSL opens a dialog and prompts you for the information shown in the following table. All of
the fields except Common Name are optional for a basic, domain-validated host certificate.
Name               Description                                        Example
Country Name       The two-letter ISO abbreviation for your           US (=United States)
                   country.
Organization Name  The full legal name of your organization.          Example Corporation
                   Do not abbreviate your organization name.
Finally, OpenSSL prompts you for an optional challenge password. This password applies only to the
CSR and to transactions between you and your CA, so follow the CA's recommendations about this
and the other optional field, optional company name. The CSR challenge password has no effect on
server operation.
The resulting file csr.pem contains your public key, your digital signature of your public key, and
the metadata that you entered.
4. Submit the CSR to a CA. This usually consists of opening your CSR file in a text editor and copying
the contents into a web form. At this time, you may be asked to supply one or more subject
alternate names (SANs) to be placed on the certificate. If www.example.com is the common name,
then example.com would be a good SAN, and vice versa. A visitor to your site entering either of
these names would see an error-free connection. If your CA web form allows it, include the common
name in the list of SANs. Some CAs include it automatically.
After your request has been approved, you receive a new host certificate signed by the CA. You
might also be instructed to download an intermediate certificate file that contains additional
certificates needed to complete the CA's chain of trust.
Note
Your CA might send you files in multiple formats intended for various purposes. For this
tutorial, you should only use a certificate file in PEM format, which is usually (but not
always) marked with a .pem or .crt file extension. If you are uncertain which file to use,
open the files with a text editor and find the one containing one or more blocks beginning
with the following line.
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
You can also test the file at the command line as shown in the following.
Verify that these lines appear in the file. Do not use files ending with .p7b, .p7c, or similar
file extensions.
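The command-line check referred to above is elided in this extract. A sketch follows; it creates a throwaway self-signed certificate so the check can be demonstrated, whereas in practice you would run the x509 command against the file your CA sent you:

```shell
# Create a throwaway self-signed certificate (skip this with a real CA file).
openssl req -x509 -newkey rsa:2048 -nodes -keyout tmp.key -out certificate.crt \
    -subj "/CN=www.example.com" -days 1
# Dump the certificate as text; a valid PEM certificate decodes without error.
openssl x509 -in certificate.crt -text -noout
```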
5. Place the new CA-signed certificate and any intermediate certificates in the /etc/pki/tls/certs
directory.
Note
There are several ways to upload your new certificate to your EC2 instance, but the most
straightforward and informative way is to open a text editor (for example, vi, nano,
or notepad) on both your local computer and your instance, and then copy and paste
the file contents between them. You need root [sudo] permissions when performing
these operations on the EC2 instance. This way, you can see immediately if there are any
permission or path problems. Be careful, however, not to add any additional lines while
copying the contents, or to change them in any way.
From inside the /etc/pki/tls/certs directory, check that the file ownership, group, and
permission settings match the highly restrictive Amazon Linux 2022 defaults (owner=root,
group=root, read/write for owner only). The following example shows the commands to use.
The permissions for the intermediate certificate file are less stringent (owner=root, group=root,
owner can write, group can read, world can read). The following example shows the commands to
use.
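Both sets of commands are elided in this extract. A sketch, assuming a root shell in /etc/pki/tls/certs (placeholder files stand in for the certificates you uploaded):

```shell
touch custom.crt intermediate.crt   # placeholders for the uploaded files
chown root:root custom.crt intermediate.crt
chmod 600 custom.crt                # host certificate: owner read/write only
chmod 644 intermediate.crt          # intermediate chain: group and world readable
ls -l custom.crt intermediate.crt
```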
6. Place the private key that you used to create the CSR in the /etc/pki/tls/private/ directory.
Note
There are several ways to upload your custom key to your EC2 instance, but the most
straightforward and informative way is to open a text editor (for example, vi, nano,
or notepad) on both your local computer and your instance, and then copy and paste
the file contents between them. You need root [sudo] permissions when performing
these operations on the EC2 instance. This way, you can see immediately if there are any
permission or path problems. Be careful, however, not to add any additional lines while
copying the contents, or to change them in any way.
From inside the /etc/pki/tls/private directory, use the following commands to verify that
the file ownership, group, and permission settings match the highly restrictive Amazon Linux 2022
defaults (owner=root, group=root, read/write for owner only).
a. Provide the path and file name of the CA-signed host certificate in Apache's
SSLCertificateFile directive:
SSLCertificateFile /etc/pki/tls/certs/custom.crt
b. If you received an intermediate certificate file (intermediate.crt in this example), provide its
path and file name using Apache's SSLCACertificateFile directive:
SSLCACertificateFile /etc/pki/tls/certs/intermediate.crt
Note
Some CAs combine the host certificate and the intermediate certificates in a single file,
making the SSLCACertificateFile directive unnecessary. Consult the instructions
provided by your CA.
c. Provide the path and file name of the private key (custom.key in this example) in Apache's
SSLCertificateKeyFile directive:
SSLCertificateKeyFile /etc/pki/tls/private/custom.key
9. Test your server by entering your domain name into a browser URL bar with the prefix https://.
Your browser should load the test page over HTTPS without generating errors.
Step 3: Test and harden the security configuration
On the Qualys SSL Labs site, enter the fully qualified domain name of your server, in the form
www.example.com. After about two minutes, you receive a grade (from A to F) for your site and a
detailed breakdown of the findings. The following table summarizes the report for a domain with
settings identical to the default Apache configuration on Amazon Linux 2022, and with a default Certbot
certificate.
Overall rating: B
Certificate: 100%
Though the overview shows that the configuration is mostly sound, the detailed report flags several
potential problems, listed here in order of severity:
✗ The RC4 cipher is supported for use by certain older browsers. A cipher is the mathematical core of
an encryption algorithm. RC4, a fast cipher used to encrypt TLS data-streams, is known to have several
serious weaknesses. Unless you have very good reasons to support legacy browsers, you should disable
this.
✗ Old TLS versions are supported. The configuration supports TLS 1.0 (already deprecated) and TLS 1.1
(on a path to deprecation). Only TLS 1.2 has been recommended since 2018.
✗ Forward secrecy is not fully supported. Forward secrecy is a feature of key-exchange algorithms
that encrypt each session with a temporary (ephemeral) key that is not derived from the server's
long-term private key. This means in practice that attackers cannot decrypt captured HTTPS data even
if they later obtain the web server's long-term private key.
1. Open the configuration file /etc/httpd/conf.d/ssl.conf in a text editor and comment out the
following line by entering "#" at the beginning of the line.
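The lines that this step refers to are not included in this extract. On a stock Amazon Linux ssl.conf they would plausibly look like the following (the exact directive in your file may differ, so treat this as an assumption):

```apacheconf
# Comment out the permissive default, then allow only TLS 1.2:
#SSLProtocol all -SSLv3
SSLProtocol -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 +TLSv1.2
```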
This directive explicitly disables SSL versions 2 and 3, as well as TLS versions 1.0 and 1.1. The
server now refuses to accept encrypted connections with clients using anything except TLS 1.2.
The verbose wording in the directive conveys more clearly, to a human reader, what the server is
configured to do.
Note
Disabling TLS versions 1.0 and 1.1 in this manner blocks a small percentage of outdated
web browsers from accessing your site.
2. Specify explicit cipher suites and a cipher order that prioritizes forward secrecy and avoids insecure
ciphers. First, comment out the existing SSLCipherSuite line.
#SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
The SSLCipherSuite directive used here is based on output from the Mozilla SSL
Configuration Generator, which tailors a TLS configuration to the specific software running on
your server. (For more information, see Mozilla's useful resource Security/Server Side TLS.) First
determine your Apache and OpenSSL versions by using the output from the following commands.
For example, if the returned information is Apache 2.4.34 and OpenSSL 1.0.2, we enter this into
the generator. If you choose the "modern" compatibility model, this creates an SSLCipherSuite
directive that aggressively enforces security but still works for most browsers. If your software
doesn't support the modern configuration, you can update your software or choose the
"intermediate" configuration instead.
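The version-check commands are elided in this extract; a sketch is below. The httpd binary is only present once Apache is installed, so its check is allowed to fail quietly here:

```shell
openssl version                # e.g. "OpenSSL 3.0.8 ..."
httpd -v 2>/dev/null || true   # prints the Apache version when Apache is installed
```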
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-
CHACHA20-POLY1305:
ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:
ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-
AES128-SHA256
The selected ciphers have ECDHE in their names, an abbreviation for Elliptic Curve Diffie-Hellman
Ephemeral. The term ephemeral indicates forward secrecy. As a by-product, these ciphers do not
support RC4.
We recommend that you use an explicit list of ciphers instead of relying on defaults or terse
directives whose content isn't visible.
3. Uncomment the following line by removing the "#".
#SSLHonorCipherOrder on
This directive forces the server to prefer high-ranking ciphers, including (in this case) those that
support forward secrecy. With this directive turned on, the server tries to establish a strong secure
connection before falling back to allowed ciphers with lesser security.
After completing both of these procedures, save the changes to /etc/httpd/conf.d/ssl.conf and
restart Apache.
If you test the domain again on Qualys SSL Labs, you should see that the RC4 vulnerability and other
warnings are gone and the summary looks something like the following.
Overall rating: A
Certificate: 100%
Each update to OpenSSL introduces new ciphers and removes support for old ones. Keep your EC2
Amazon Linux 2022 instance up-to-date, watch for security announcements from OpenSSL, and be alert
to reports of new security exploits in the technical press.
Troubleshoot
• My Apache webserver doesn't start unless I enter a password
This is expected behavior if you installed an encrypted, password-protected, private server key.
You can remove the encryption and password requirement from the key. Assuming that you have a
private encrypted RSA key called custom.key in the default directory, and that the password on it is
abcde12345, run the following commands on your EC2 instance to generate an unencrypted version
of the key.
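The openssl commands themselves are elided in this extract. A sketch follows; it builds an encrypted demonstration key first (2048-bit here for brevity, where the tutorial's key is 4096-bit), since in practice the encrypted custom.key would already exist:

```shell
# Create an encrypted demo key (in practice it already exists), back it up,
# then write an unencrypted working copy over the original name.
openssl genrsa -aes128 -passout pass:abcde12345 -out custom.key 2048
cp custom.key custom.key.bak
openssl rsa -passin pass:abcde12345 -in custom.key.bak -out custom.key
```

Afterward, restart Apache so it picks up the passwordless key.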
[ec2-user private]$ sudo systemctl restart httpd
• When you install the required packages for SSL, you may see errors similar to the following.
This typically means that your EC2 instance is not running Amazon Linux 2022. This tutorial only
supports instances freshly created from an official Amazon Linux 2022 AMI.
Certificate automation: Let's Encrypt with Certbot on Amazon Linux 2022
The Let's Encrypt certificate authority is the centerpiece of an effort by the Electronic Frontier
Foundation (EFF) to encrypt the entire internet. In line with that goal, Let's Encrypt host certificates
are designed to be created, validated, installed, and maintained with minimal human intervention. The
automated aspects of certificate management are carried out by a software agent running on your
web server. After you install and configure the agent, it communicates securely with Let's Encrypt and
performs administrative tasks on Apache and the key management system. This tutorial uses the free
Certbot agent because it allows you either to supply a customized encryption key as the basis for your
certificates, or to allow the agent itself to create a key based on its defaults. You can also configure
Certbot to renew your certificates on a regular basis without human interaction, as described in To
automate Certbot (p. 63). For more information, consult the Certbot User Guide and man page.
Certbot is not officially supported on Amazon Linux 2022, but is available for download and functions
correctly when installed. We recommend that you make the following backups to protect your data and
avoid inconvenience:
• Before you begin, take a snapshot of your Amazon EBS root volume. This allows you to restore the
original state of your EC2 instance. For information about creating EBS snapshots, see Create Amazon
EBS snapshots (p. 1385).
• The procedure below requires you to edit your httpd.conf file, which controls Apache's operation.
Certbot makes its own automated changes to this and other configuration files. Make a backup copy of
your entire /etc/httpd directory in case you need to restore it.
Prepare to install
Complete the following procedures before you install Certbot.
1. Download the Extra Packages for Enterprise Linux (EPEL) 7 repository packages. These are required
to supply dependencies needed by Certbot.
a. Navigate to your home directory (/home/ec2-user). Download these packages using the
following command.
b. Execute the following instructions on the command line to set up a Python virtual
environment.
c. Install Certbot.
d. Add a symbolic link.
If you cannot stop your web server, you can alternatively use the following command.
Run Certbot
This procedure is based on the EFF documentation for installing Certbot on Fedora and on RHEL 7. It
describes the default use of Certbot, resulting in a certificate based on a 2048-bit RSA key.
1. Run Certbot.
2. At the prompt "Enter email address (used for urgent renewal and security notices)," enter a contact
address and press Enter.
3. Agree to the Let's Encrypt Terms of Service at the prompt. Enter "A" and press Enter to proceed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel: A
4. At the authorization for EFF to put you on their mailing list, enter "Y" or "N" and press Enter.
5. Certbot displays the Common Name and Subject Alternative Name (SAN) that you provided in the
VirtualHost block.
6. Certbot asks whether to redirect HTTP traffic to HTTPS.
Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):
To allow visitors to connect to your server via unencrypted HTTP, enter "1". If you want to accept
only encrypted connections via HTTPS, enter "2". Press Enter to submit your choice.
7. Certbot completes the configuration of Apache and reports success and other information.
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/certbot.oneeyedman.net/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/certbot.oneeyedman.net/privkey.pem
Your cert will expire on 2019-08-01. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
8. After you complete the installation, test and optimize the security of your server as described in
Step 3: Test and harden the security configuration (p. 57).
To automate Certbot
1. Open the /etc/crontab file in a text editor, such as vim or nano, using sudo. Alternatively, use
sudo crontab -e.
2. Add a line similar to the following and save the file.
39 1,13 * * * root certbot renew --no-self-upgrade
39 1,13 * * *
Schedules a command to be run at 01:39 and 13:39 every day. The selected values are arbitrary,
but the Certbot developers suggest running the command at least twice daily. This guarantees
that any certificate found to be compromised is promptly revoked and replaced.
root certbot renew --no-self-upgrade
The user to run as, followed by the command to be run. The renew subcommand causes Certbot
to check any previously obtained certificates and to renew those that are approaching
expiration. The --no-self-upgrade flag prevents Certbot from upgrading itself without your
intervention.
3. Restart the cron daemon.
Host a WordPress blog on Amazon Linux 2
You are responsible for updating the software packages and maintaining security patches for your server.
For a more automated WordPress installation that does not require direct interaction with the web
server configuration, the AWS CloudFormation service provides a WordPress template that can also get
you started quickly. For more information, see Get started in the AWS CloudFormation User Guide. If
you'd prefer to host your WordPress blog on a Windows instance, see Deploy a WordPress blog on your
Amazon EC2 Windows instance in the Amazon EC2 User Guide for Windows Instances. If you need a
high-availability solution with a decoupled database, see Deploying a high-availability WordPress website in
the AWS Elastic Beanstalk Developer Guide.
Important
These procedures are intended for use with Amazon Linux. For more information about other
distributions, see their specific documentation. Many steps in this tutorial do not work on
Ubuntu instances. For help installing WordPress on an Ubuntu instance, see WordPress in the
Ubuntu documentation.
To complete this tutorial using AWS Systems Manager Automation instead of the following tasks, run
one of the following Automation documents: AWSDocs-HostingAWordPressBlog-AL (Amazon Linux) or
AWSDocs-HostingAWordPressBlog-AL2 (Amazon Linux 2).
Topics
• Prerequisites (p. 64)
• Install WordPress (p. 64)
• Next steps (p. 71)
• Help! My public DNS name changed and now my blog is broken (p. 72)
Prerequisites
This tutorial assumes that you have launched an Amazon Linux instance with a functional web server
with PHP and database (either MySQL or MariaDB) support by following all of the steps in Tutorial:
Install a LAMP web server on the Amazon Linux AMI (p. 72) for Amazon Linux AMI or Tutorial: Install
a LAMP web server on Amazon Linux 2 (p. 15) for Amazon Linux 2. This tutorial also has steps for
configuring a security group to allow HTTP and HTTPS traffic, as well as several steps to ensure that file
permissions are set properly for your web server. For information about adding rules to your security
group, see Add rules to a security group (p. 1311).
We strongly recommend that you associate an Elastic IP address (EIP) to the instance you are using
to host a WordPress blog. This prevents the public DNS address for your instance from changing and
breaking your installation. If you own a domain name and you want to use it for your blog, you can
update the DNS record for the domain name to point to your EIP address (for help with this, contact your
domain name registrar). You can have one EIP address associated with a running instance at no charge.
For more information, see Elastic IP addresses (p. 1059).
If you don't already have a domain name for your blog, you can register a domain name with Route 53
and associate your instance's EIP address with your domain name. For more information, see Registering
domain names using Amazon Route 53 in the Amazon Route 53 Developer Guide.
Install WordPress
Connect to your instance, and download the WordPress installation package.
1. Download the latest WordPress installation package with the wget command. The following
command should always download the latest release.
2. Unzip and unarchive the installation package. The contents are unpacked into a folder called
wordpress.
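The two commands are elided in this extract. On the instance they would be a wget of https://wordpress.org/latest.tar.gz followed by tar; the sketch below builds a stand-in archive locally so the unpack step stays self-contained without network access:

```shell
# On the instance:  wget https://wordpress.org/latest.tar.gz
# Here a stand-in archive is built locally instead of downloading one.
mkdir -p wordpress && touch wordpress/index.php
tar -czf latest.tar.gz wordpress
rm -rf wordpress
# Unzip and unarchive; the contents land in a folder named wordpress.
tar -xzf latest.tar.gz
```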
To create a database user and database for your WordPress installation
2. Log in to the database server as the root user. Enter your database root password when prompted;
this may be different than your root system password, or it might even be empty if you have not
secured your database server.
If you have not secured your database server yet, it is important that you do so. For more
information, see To secure the MariaDB server (p. 20) (Amazon Linux 2) or To secure the database
server (p. 77) (Amazon Linux AMI).
3. Create a user and password for your MySQL database. Your WordPress installation uses these values
to communicate with your MySQL database. Enter the following command, substituting a unique
user name and password.
Make sure that you create a strong password for your user. Do not use the single quote character
( ' ) in your password, because this will break the preceding command. For more information about
creating a secure password, go to http://www.pctools.com/guides/password/. Do not reuse an
existing password, and make sure to store this password in a safe place.
4. Create your database. Give your database a descriptive, meaningful name, such as wordpress-db.
Note
The punctuation marks surrounding the database name in the command below are called
backticks. The backtick (`) key is usually located above the Tab key on a standard keyboard.
Backticks are not always required, but they allow you to use otherwise illegal characters,
such as hyphens, in database names.
5. Grant full privileges for your database to the WordPress user that you created earlier.
FLUSH PRIVILEGES;
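The statements for steps 3-5 are elided above. Inside the MySQL/MariaDB client they would look something like the following sketch, using the same placeholder user, password, and database name that appear later in wp-config.php:

```sql
CREATE USER 'wordpress-user'@'localhost' IDENTIFIED BY 'your_strong_password';
CREATE DATABASE `wordpress-db`;
GRANT ALL PRIVILEGES ON `wordpress-db`.* TO 'wordpress-user'@'localhost';
FLUSH PRIVILEGES;
```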
exit
The WordPress installation folder contains a sample configuration file called wp-config-sample.php.
In this procedure, you copy this file and edit it to fit your specific configuration.
1. Copy the wp-config-sample.php file to a file called wp-config.php. This creates a new
configuration file and keeps the original sample file intact as a backup.
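The copy command is elided in this extract; a sketch, run from inside the wordpress directory (a placeholder stands in for the sample file that ships with WordPress):

```shell
touch wp-config-sample.php   # placeholder; the real file ships with WordPress
cp wp-config-sample.php wp-config.php
```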
2. Edit the wp-config.php file with your favorite text editor (such as nano or vim) and enter values
for your installation. If you do not have a favorite text editor, nano is suitable for beginners.
a. Find the line that defines DB_NAME and change database_name_here to the database
name that you created in Step 4 (p. 65) of To create a database user and database for your
WordPress installation (p. 65).
define('DB_NAME', 'wordpress-db');
b. Find the line that defines DB_USER and change username_here to the database user that
you created in Step 3 (p. 65) of To create a database user and database for your WordPress
installation (p. 65).
define('DB_USER', 'wordpress-user');
c. Find the line that defines DB_PASSWORD and change password_here to the strong password
that you created in Step 3 (p. 65) of To create a database user and database for your
WordPress installation (p. 65).
define('DB_PASSWORD', 'your_strong_password');
d. Find the section called Authentication Unique Keys and Salts. These KEY and SALT
values provide a layer of encryption to the browser cookies that WordPress users store on their
local machines. Basically, adding long, random values here makes your site more secure. Visit
https://api.wordpress.org/secret-key/1.1/salt/ to randomly generate a set of key values that
you can copy and paste into your wp-config.php file. To paste text into a PuTTY terminal,
place the cursor where you want to paste the text and right-click your mouse inside the PuTTY
terminal.
• Now that you've unzipped the installation folder, created a MySQL database and user, and
customized the WordPress configuration file, you are ready to copy your installation files to your
web server document root so you can run the installation script that completes your installation.
The location of these files depends on whether you want your WordPress blog to be available
at the actual root of your web server (for example, my.public.dns.amazonaws.com) or in a
subdirectory or folder under the root (for example, my.public.dns.amazonaws.com/blog).
• If you want WordPress to run at your document root, copy the contents of the wordpress
installation directory (but not the directory itself) as follows:
• If you want WordPress to run in an alternative directory under the document root, first create
that directory, and then copy the files to it. In this example, WordPress will run from the
directory blog:
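The two copy commands are elided in this extract. A sketch of both options follows, with placeholder content standing in for a real WordPress tree and document root:

```shell
mkdir -p wordpress /var/www/html   # placeholders for this sketch
touch wordpress/index.php
# (a) serve WordPress at the document root:
cp -r wordpress/* /var/www/html/
# (b) or serve it from a blog subdirectory:
mkdir -p /var/www/html/blog
cp -r wordpress/* /var/www/html/blog/
```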
Important
For security purposes, if you are not moving on to the next procedure immediately, stop the
Apache web server (httpd) now. After you move your installation under the Apache document
root, the WordPress installation script is unprotected and an attacker could gain access to your
blog if the Apache web server were running. To stop the Apache web server, enter the command
sudo service httpd stop. If you are moving on to the next procedure, you do not need to stop
the Apache web server.
WordPress permalinks need to use Apache .htaccess files to work properly, but this is not enabled by
default on Amazon Linux. Use this procedure to allow all overrides in the Apache document root.
1. Open the httpd.conf file with your favorite text editor (such as nano or vim). If you do not have a
favorite text editor, nano is suitable for beginners.
2. Find the section that starts with <Directory "/var/www/html">.
<Directory "/var/www/html">
#
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride None
#
# Controls who can get stuff from this server.
#
Require all granted
</Directory>
3. Change the AllowOverride None line in the above section to read AllowOverride All.
Note
There are multiple AllowOverride lines in this file; be sure you change the line in the
<Directory "/var/www/html"> section.
AllowOverride All
The GD library for PHP enables you to modify images. Install this library if you need to crop the header
image for your blog. The version of phpMyAdmin that you install might require a specific minimum
version of this library (for example, version 7.2).
Use the following command to install the PHP graphics drawing library on Amazon Linux 2. For example,
if you installed php7.2 from amazon-linux-extras as part of installing the LAMP stack, this command
installs version 7.2 of the PHP graphics drawing library.
To install the PHP graphics drawing library on the Amazon Linux AMI
The GD library for PHP enables you to modify images. Install this library if you need to crop the header
image for your blog. The version of phpMyAdmin that you install might require a specific minimum
version of this library (for example, version 7.2).
Use the following command to install a specific version of the PHP graphics drawing library (for example,
version 7.2) on the Amazon Linux AMI. The following is an example line from the output for the PHP
graphics drawing library (version 7.2):
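The commands referenced above are not included in this excerpt. On the Amazon Linux AMI the version-specific package name is assumed to be php72-gd (adjust the suffix to match your installed PHP version):
[ec2-user ~]$ sudo yum install -y php72-gd
[ec2-user ~]$ sudo yum list installed php72-gd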
Some of the available features in WordPress require write access to the Apache document root (such
as uploading media though the Administration screens). If you have not already done so, apply the
following group memberships and permissions (as described in greater detail in the LAMP web server
tutorial (p. 72)).
1. Grant file ownership of /var/www and its contents to the apache user.
2. Grant group ownership of /var/www and its contents to the apache group.
3. Change the directory permissions of /var/www and its subdirectories to add group write
permissions and to set the group ID on future subdirectories.
4. Recursively change the file permissions of /var/www and its subdirectories to add group write
permissions.
5. Restart the Apache web server to pick up the new group and permissions.
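On Amazon Linux 2, the five steps above can be carried out with commands like the following (these mirror the LAMP web server tutorial; verify the paths and group name on your system):
[ec2-user ~]$ sudo chown -R apache /var/www
[ec2-user ~]$ sudo chgrp -R apache /var/www
[ec2-user ~]$ sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
[ec2-user ~]$ find /var/www -type f -exec sudo chmod 0664 {} \;
[ec2-user ~]$ sudo systemctl restart httpd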
You are ready to install WordPress. The commands that you use depend on the operating system. The
commands in this procedure are for use with Amazon Linux 2. Use the procedure that follows this one
with Amazon Linux AMI.
1. Use the systemctl command to ensure that the httpd and database services start at every system
boot.
[ec2-user ~]$ sudo systemctl enable httpd && sudo systemctl enable mariadb
4. In a web browser, type the URL of your WordPress blog (either the public DNS address for your
instance, or that address followed by the blog folder). You should see the WordPress installation
script. Provide the information required by the WordPress installation. Choose Install WordPress to
complete the installation. For more information, see Step 5: Run the Install Script on the WordPress
website.
1. Use the chkconfig command to ensure that the httpd and database services start at every system
boot.
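The chkconfig invocations are not shown in this excerpt; they would look something like the following (on the Amazon Linux AMI the MySQL service is typically named mysqld):
[ec2-user ~]$ sudo chkconfig httpd on
[ec2-user ~]$ sudo chkconfig mysqld on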
4. In a web browser, type the URL of your WordPress blog (either the public DNS address for your
instance, or that address followed by the blog folder). You should see the WordPress installation
script. Provide the information required by the WordPress installation. Choose Install WordPress to
complete the installation. For more information, see Step 5: Run the Install Script on the WordPress
website.
Next steps
After you have tested your WordPress blog, consider updating its configuration.
If you have a domain name associated with your EC2 instance's Elastic IP (EIP) address, you can configure your blog
to use that name instead of the EC2 public DNS address. For more information, see Changing The Site
URL on the WordPress website.
You can configure your blog to use different themes and plugins to offer a more personalized experience
for your readers. However, sometimes the installation process can backfire, causing you to lose your
entire blog. We strongly recommend that you create a backup Amazon Machine Image (AMI) of your
instance before attempting to install any themes or plugins so you can restore your blog if anything goes
wrong during installation. For more information, see Create your own AMI (p. 94).
Increase capacity
If your WordPress blog becomes popular and you need more compute power or storage, consider the
following steps:
• Expand the storage space on your instance. For more information, see Amazon EBS Elastic
Volumes (p. 1523).
• Move your MySQL database to Amazon RDS to take advantage of the service's ability to scale easily.
If you expect your blog to drive traffic from users located around the world, consider AWS Global
Accelerator. Global Accelerator helps you achieve lower latency by improving internet traffic
performance between your users’ client devices and your WordPress application running on AWS. Global
Accelerator uses the AWS global network to direct traffic to a healthy application endpoint in the AWS
Region that is closest to the client.
For information about WordPress, see the WordPress Codex help documentation at
http://codex.wordpress.org/. For more information about troubleshooting your installation, go to
https://wordpress.org/support/article/how-to-install-wordpress/#common-installation-problems. For
information about making your WordPress blog more secure, go to
https://wordpress.org/support/article/hardening-wordpress/. For information about keeping your
WordPress blog up-to-date, go to https://wordpress.org/support/article/updating-wordpress/.
Help! My public DNS name changed and now my blog is broken
If this has happened to your WordPress installation, you may be able to recover your blog with the
procedure below, which uses the wp-cli command line interface for WordPress.
You should see references to your old public DNS name in the output, which will look similar to the
following:
4. Search and replace the old site URL in your WordPress installation with the following command.
Substitute the old and new site URLs for your EC2 instance and the path to your WordPress
installation (usually /var/www/html or /var/www/html/blog).
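A typical wp-cli invocation for this step looks like the following; the old and new site URLs and the installation path are placeholders to substitute with your own values:
[ec2-user ~]$ wp search-replace 'old-site-url' 'new-site-url' --path=/var/www/html --skip-columns=guid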
5. In a web browser, enter the new site URL of your WordPress blog to verify that the site is working
properly again. If it is not, see https://wordpress.org/support/article/changing-the-site-url/ and
https://wordpress.org/support/article/how-to-install-wordpress/#common-installation-problems
for more information.
to host a static website or deploy a dynamic PHP application that reads and writes information to a
database.
Important
If you are trying to set up a LAMP web server on a different distribution, such as Ubuntu or Red
Hat Enterprise Linux, this tutorial will not work. For Amazon Linux 2, see Tutorial: Install a LAMP
web server on Amazon Linux 2 (p. 15). For Ubuntu, see the following Ubuntu community
documentation: ApacheMySQLPHP. For other distributions, see their specific documentation.
To complete this tutorial using AWS Systems Manager Automation instead of the following tasks, run the
AWSDocs-InstallALAMPServer-AL Automation document.
Tasks
• Step 1: Prepare the LAMP server (p. 73)
• Step 2: Test your LAMP server (p. 76)
• Step 3: Secure the database server (p. 77)
• Step 4: (Optional) Install phpMyAdmin (p. 78)
• Troubleshoot (p. 81)
• Related topics (p. 82)
This tutorial assumes that you have already launched a new instance using the Amazon Linux AMI, with
a public DNS name that is reachable from the internet. For more information, see Step 1: Launch an
instance (p. 10). You must also have configured your security group to allow SSH (port 22), HTTP (port
80), and HTTPS (port 443) connections. For more information about these prerequisites, see Authorize
inbound traffic for your Linux instances (p. 1285).
To install and start the LAMP web server with the Amazon Linux AMI
The -y option installs the updates without asking for confirmation. If you would like to examine the
updates before installing, you can omit this option.
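The update command referenced here is the standard one:
[ec2-user ~]$ sudo yum update -y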
3. Now that your instance is current, you can install the Apache web server, MySQL, and PHP software
packages.
Important
Some applications may not be compatible with the following recommended software
environment. Before installing these packages, check whether your LAMP applications
are compatible with them. If there is a problem, you may need to install an alternative
environment. For more information, see The application software I want to run on my
server is incompatible with the installed PHP version or other software (p. 81)
Use the yum install command to install multiple software packages and all related dependencies at
the same time.
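Based on the package names used later in this tutorial, the install command would be:
[ec2-user ~]$ sudo yum install -y httpd24 php72 mysql57-server php72-mysqlnd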
If you receive the error No package package-name available, then your instance was not
launched with the Amazon Linux AMI (perhaps you are using Amazon Linux 2 instead). You can view
your version of Amazon Linux with the following command.
cat /etc/system-release
5. Use the chkconfig command to configure the Apache web server to start at each system boot.
The chkconfig command does not provide any confirmation message when you successfully use it to
enable a service.
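The command, with an optional check of the result, looks like this:
[ec2-user ~]$ sudo chkconfig httpd on
[ec2-user ~]$ chkconfig --list httpd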
Warning
Using 0.0.0.0/0 allows all IPv4 addresses to access your instance using SSH. This
is acceptable for a short time in a test environment, but it's unsafe for production
environments. In production, you authorize only a specific IP address or range of
addresses to access your instance.
d. Choose the link for the security group. Using the procedures in Add rules to a security
group (p. 1311), add a new inbound security rule with the following values:
• Type: HTTP
• Protocol: TCP
• Port Range: 80
• Source: Custom
7. Test your web server. In a web browser, type the public DNS address (or the public IP address) of
your instance. You can get the public DNS address for your instance using the Amazon EC2 console.
If there is no content in /var/www/html, you should see the Apache test page. When you add
content to the document root, your content appears at the public DNS address of your instance
instead of the test page.
Verify that the security group for the instance contains a rule to allow HTTP traffic on port 80. For
more information, see Add rules to a security group (p. 1311).
If you are not using Amazon Linux, you may also need to configure the firewall on your instance
to allow these connections. For more information about how to configure the firewall, see the
documentation for your specific distribution.
Apache httpd serves files that are kept in a directory called the Apache document root. The Amazon
Linux Apache document root is /var/www/html, which by default is owned by root.
To allow the ec2-user account to manipulate files in this directory, you must modify the ownership and
permissions of the directory. There are many ways to accomplish this task. In this tutorial, you add ec2-
user to the apache group, to give the apache group ownership of the /var/www directory and assign
write permissions to the group.
1. Add your user (in this case, ec2-user) to the apache group.
2. Log out and then log back in again to pick up the new group, and then verify your membership.
a. Log out (use the exit command or close the terminal window):
b. To verify your membership in the apache group, reconnect to your instance, and then run the
following command:
3. Change the group ownership of /var/www and its contents to the apache group.
4. To add group write permissions and to set the group ID on future subdirectories, change the
directory permissions of /var/www and its subdirectories.
5. To add group write permissions, recursively change the file permissions of /var/www and its
subdirectories:
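The commands for the five steps above are not included in this excerpt; they are typically the following (log out and reconnect between the usermod and groups commands so the new group membership takes effect):
[ec2-user ~]$ sudo usermod -a -G apache ec2-user
[ec2-user ~]$ exit
[ec2-user ~]$ groups
[ec2-user ~]$ sudo chown -R ec2-user:apache /var/www
[ec2-user ~]$ sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
[ec2-user ~]$ find /var/www -type f -exec sudo chmod 0664 {} \;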
Now, ec2-user (and any future members of the apache group) can add, delete, and edit files in the
Apache document root, enabling you to add content, such as a static website or a PHP application.
A web server running the HTTP protocol provides no transport security for the data that it sends or
receives. When you connect to an HTTP server using a web browser, the URLs that you visit, the content
of webpages that you receive, and the contents (including passwords) of any HTML forms that you
submit are all visible to eavesdroppers anywhere along the network pathway. The best practice for
securing your web server is to install support for HTTPS (HTTP Secure), which protects your data with
SSL/TLS encryption.
For information about enabling HTTPS on your server, see Tutorial: Configure SSL/TLS with the Amazon
Linux AMI (p. 82).
If you get a "Permission denied" error when trying to run this command, try logging out and logging
back in again to pick up the proper group permissions that you configured in Step 1: Prepare the
LAMP server (p. 73).
2. In a web browser, type the URL of the file that you just created. This URL is the public DNS address
of your instance followed by a forward slash and the file name. For example:
http://my.public.dns.amazonaws.com/phpinfo.php
If you do not see this page, verify that the /var/www/html/phpinfo.php file was created properly
in the previous step. You can also verify that all of the required packages were installed with the
following command. The package versions in the second column do not need to match this example
output.
[ec2-user ~]$ sudo yum list installed httpd24 php72 mysql57-server php72-mysqlnd
Loaded plugins: priorities, update-motd, upgrade-helper
Installed Packages
httpd24.x86_64           2.4.25-1.68.amzn1      @amzn-updates
mysql57-server.x86_64    5.7.x-x.xx.amzn1       @amzn-updates
php72.x86_64             7.2.x-x.xx.amzn1       @amzn-updates
php72-mysqlnd.x86_64     7.2.x-x.xx.amzn1       @amzn-updates
If any of the required packages are not listed in your output, install them using the sudo yum install
package command.
3. Delete the phpinfo.php file. Although this can be useful information, it should not be broadcast to
the internet for security reasons.
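For example:
[ec2-user ~]$ sudo rm /var/www/html/phpinfo.php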
Starting mysqld: [ OK ]
2. Run mysql_secure_installation.
i. Type the current root password. By default, the root account does not have a password set.
Press Enter.
ii. Type Y to set a password, and type a secure password twice. For more information about
creating a secure password, see https://identitysafe.norton.com/password-generator/.
Make sure to store this password in a safe place.
Setting a root password for MySQL is only the most basic measure for securing your
database. When you build or install a database-driven application, you typically create a
database service user for that application and avoid using the root account for anything but
database administration.
b. Type Y to remove the anonymous user accounts.
c. Type Y to disable the remote root login.
d. Type Y to remove the test database.
e. Type Y to reload the privilege tables and save your changes.
3. (Optional) If you do not plan to use the MySQL server right away, stop it. You can restart it when you
need it again.
4. (Optional) If you want the MySQL server to start at every boot, type the following command.
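The stop and enable-at-boot commands for these two optional steps on the Amazon Linux AMI would be:
[ec2-user ~]$ sudo service mysqld stop
[ec2-user ~]$ sudo chkconfig mysqld on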
You should now have a fully functional LAMP web server. If you add content to the Apache document
root at /var/www/html, you should be able to view that content at the public DNS address for your
instance.
phpMyAdmin is a web-based database management tool that you can use to view and edit the MySQL
databases on your EC2 instance. Follow the steps below to install and configure phpMyAdmin on your
Amazon Linux instance.
Important
We do not recommend using phpMyAdmin to access a LAMP server unless you have enabled
SSL/TLS in Apache; otherwise, your database administrator password and other data are
transmitted insecurely across the internet. For security recommendations from the developers,
see Securing your phpMyAdmin installation.
Note
The Amazon Linux package management system does not currently support the automatic
installation of phpMyAdmin in a PHP 7 environment. This tutorial describes how to install
phpMyAdmin manually.
3. Restart Apache.
5. Select a source package for the latest phpMyAdmin release from https://www.phpmyadmin.net/
downloads. To download the file directly to your instance, copy the link and paste it into a wget
command, as in this example:
6. Create a phpMyAdmin folder and extract the package into it using the following command.
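For example (the download URL below is the "latest" convenience link published on phpmyadmin.net; substitute the link you copied if it differs):
[ec2-user ~]$ wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
[ec2-user ~]$ mkdir phpMyAdmin && tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C phpMyAdmin --strip-components 1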
9. In a web browser, type the URL of your phpMyAdmin installation. This URL is the public DNS
address (or the public IP address) of your instance followed by a forward slash and the name of your
installation directory. For example:
http://my.public.dns.amazonaws.com/phpMyAdmin
10. Log in to your phpMyAdmin installation with the root user name and the MySQL root password you
created earlier.
Your installation must still be configured before you put it into service. To configure phpMyAdmin,
you can manually create a configuration file, use the setup console, or combine both approaches.
For information about using phpMyAdmin, see the phpMyAdmin User Guide.
Troubleshoot
This section offers suggestions for resolving common problems you may encounter while setting up a
new LAMP server.
If the httpd process is not running, repeat the steps described in Step 1: Prepare the LAMP
server (p. 73).
• Is the firewall correctly configured?
Verify that the security group for the instance contains a rule to allow HTTP traffic on port 80. For
more information, see Add rules to a security group (p. 1311).
How to downgrade
The well-tested previous version of this tutorial called for the following core LAMP packages:
• httpd24
• php56
• mysql55-server
• php56-mysqlnd
If you have already installed the latest packages as recommended at the start of this tutorial, you must
first uninstall these packages and other dependencies as follows:
[ec2-user ~]$ sudo yum remove -y httpd24 php72 mysql57-server php72-mysqlnd perl-DBD-
MySQL57
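To return to the well-tested environment, install the previous packages listed above:
[ec2-user ~]$ sudo yum install -y httpd24 php56 mysql55-server php56-mysqlnd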
If you decide later to upgrade to the recommended environment, you must first remove the customized
packages and dependencies:
[ec2-user ~]$ sudo yum remove -y httpd24 php56 mysql55-server php56-mysqlnd perl-DBD-
MySQL56
Related topics
For more information about transferring files to your instance or installing a WordPress blog on your web
server, see the following documentation:
For more information about the commands and software used in this tutorial, see the following
webpages:
For more information about registering a domain name for your web server, or transferring an existing
domain name to this host, see Creating and Migrating Domains and Subdomains to Amazon Route 53 in
the Amazon Route 53 Developer Guide.
For historical reasons, web encryption is often referred to simply as SSL. While web browsers still
support SSL, its successor protocol TLS is less vulnerable to attack. The Amazon Linux AMI disables
server-side support for all versions of SSL by default. Security standards bodies consider TLS 1.0 to be
unsafe, and both TLS 1.0 and TLS 1.1 are on track to be formally deprecated by the IETF. This tutorial
contains guidance based exclusively on enabling TLS 1.2. (The newer TLS 1.3 protocol, defined in
RFC 8446, is not supported on Amazon Linux.) For more information about the updated encryption
standards, see RFC 7568 and RFC 8446.
Important
These procedures are intended for use with the Amazon Linux AMI. If you are trying to set
up a LAMP web server on an instance with a different distribution, some procedures in this
tutorial might not work for you. For Amazon Linux 2, see Tutorial: Configure SSL/TLS on
Amazon Linux 2 (p. 34). For Ubuntu, see the following Ubuntu community documentation:
ApacheMySQLPHP. For Red Hat Enterprise Linux, see the following: Setting up the Apache HTTP
Web Server. For other distributions, see their specific documentation.
Note
Alternatively, you can use AWS Certificate Manager (ACM) for AWS Nitro enclaves, which is an
enclave application that allows you to use public and private SSL/TLS certificates with your
web applications and servers running on Amazon EC2 instances with AWS Nitro Enclaves. Nitro
Enclaves is an Amazon EC2 capability that enables creation of isolated compute environments to
protect and securely process highly sensitive data, such as SSL/TLS certificates and private keys.
ACM for Nitro Enclaves works with nginx running on your Amazon EC2 Linux instance to create
private keys, to distribute certificates and private keys, and to manage certificate renewals.
To use ACM for Nitro Enclaves, you must use an enclave-enabled Linux instance.
For more information, see What is AWS Nitro Enclaves? and AWS Certificate Manager for Nitro
Enclaves in the AWS Nitro Enclaves User Guide.
Contents
• Prerequisites (p. 83)
• Step 1: Enable TLS on the server (p. 83)
• Step 2: Obtain a CA-signed certificate (p. 85)
• Step 3: Test and harden the security configuration (p. 90)
• Troubleshoot (p. 92)
Prerequisites
Before you begin this tutorial, complete the following steps:
• Launch an EBS-backed instance using the Amazon Linux AMI. For more information, see Step 1: Launch
an instance (p. 10).
• Configure your security group to allow your instance to accept connections on the following TCP ports:
• SSH (port 22)
• HTTP (port 80)
• HTTPS (port 443)
For more information, see Authorize inbound traffic for your Linux instances (p. 1285).
• Install Apache web server. For step-by-step instructions, see Tutorial: Installing a LAMP Web Server on
Amazon Linux (p. 72). Only the httpd24 package and its dependencies are needed; you can ignore
the instructions involving PHP and MySQL.
• To identify and authenticate web sites, the TLS public key infrastructure (PKI) relies on the Domain
Name System (DNS). To use your EC2 instance to host a public web site, you need to register a domain
name for your web server or transfer an existing domain name to your Amazon EC2 host. Numerous
third-party domain registration and DNS hosting services are available for this, or you can use Amazon
Route 53.
Note
A self-signed certificate is acceptable for testing but not production. If you expose your self-
signed certificate to the internet, visitors to your site receive security warnings.
1. Connect to your instance (p. 11) and confirm that Apache is running.
2. To ensure that all of your software packages are up to date, perform a quick software update on
your instance. This process may take a few minutes, but it is important to make sure you have the
latest security updates and bug fixes.
Note
The -y option installs the updates without asking for confirmation. If you would like to
examine the updates before installing, you can omit this option.
3. Now that your instance is current, add TLS support by installing the Apache module mod_ssl:
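On the Amazon Linux AMI, the mod_ssl package for Apache 2.4 is named mod24_ssl; verify the name with yum search mod_ssl if your repositories differ:
[ec2-user ~]$ sudo yum install -y mod24_ssl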
Your instance now has the following files that you use to configure your secure server and create a
certificate for testing:
/etc/httpd/conf.d/ssl.conf
The configuration file for mod_ssl. It contains "directives" telling Apache where to find
encryption keys and certificates, the TLS protocol versions to allow, and the encryption ciphers
to accept.
/etc/pki/tls/private/localhost.key
An automatically generated, 2048-bit RSA private key for your Amazon EC2 host. During
installation, OpenSSL used this key to generate a self-signed host certificate, and you can also
use this key to generate a certificate signing request (CSR) to submit to a certificate authority
(CA).
/etc/pki/tls/certs/localhost.crt
An automatically generated, self-signed X.509 certificate for your server host. This certificate is
useful for testing that Apache is properly set up to use TLS.
The .key and .crt files are both in PEM format, which consists of Base64-encoded ASCII characters
framed by "BEGIN" and "END" lines, as in this abbreviated example of a certificate:
-----BEGIN CERTIFICATE-----
MIIEazCCA1OgAwIBAgICWxQwDQYJKoZIhvcNAQELBQAwgbExCzAJBgNVBAYTAi0t
MRIwEAYDVQQIDAlTb21lU3RhdGUxETAPBgNVBAcMCFNvbWVDaXR5MRkwFwYDVQQK
DBBTb21lT3JnYW5pemF0aW9uMR8wHQYDVQQLDBZTb21lT3JnYW5pemF0aW9uYWxV
bml0MRkwFwYDVQQDDBBpcC0xNzItMzEtMjAtMjM2MSQwIgYJKoZIhvcNAQkBFhVy
...
z5rRUE/XzxRLBZOoWZpNWTXJkQ3uFYH6s/sBwtHpKKZMzOvDedREjNKAvk4ws6F0
WanXWehT6FiSZvB4sTEXXJN2jdw8g+sHGnZ8zCOsclknYhHrCVD2vnBlZJKSZvak
3ZazhBxtQSukFMOnWPP2a0DMMFGYUHOd0BQE8sBJxg==
-----END CERTIFICATE-----
The file names and extensions are a convenience and have no effect on function; you can call a
certificate cert.crt, cert.pem, or any other file name, so long as the related directive in the
ssl.conf file uses the same name.
Note
When you replace the default TLS files with your own customized files, be sure that they are
in PEM format.
4. Restart Apache.
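On the Amazon Linux AMI, the restart command is:
[ec2-user ~]$ sudo service httpd restart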
5. Your Apache web server should now support HTTPS (secure HTTP) over port 443. Test it by typing
the IP address or fully qualified domain name of your EC2 instance into a browser URL bar with the
prefix https://. Because you are connecting to a site with a self-signed, untrusted host certificate,
your browser may display a series of security warnings.
Override the warnings and proceed to the site. If the default Apache test page opens, it means that
you have successfully configured TLS on your server. All data passing between the browser and
server is now safely encrypted.
To prevent site visitors from encountering warning screens, you need to obtain a certificate that not
only encrypts, but also publicly authenticates you as the owner of the site.
A self-signed TLS X.509 host certificate is cryptographically identical to a CA-signed certificate. The
difference is social, not mathematical; a CA promises to validate, at a minimum, a domain's ownership
before issuing a certificate to an applicant. Each web browser contains a list of CAs trusted by the
browser vendor to do this. An X.509 certificate consists primarily of a public key that corresponds to
your private server key, and a signature by the CA that is cryptographically tied to the public key. When a
browser connects to a web server over HTTPS, the server presents a certificate for the browser to check
against its list of trusted CAs. If the signer is on the list, or accessible through a chain of trust consisting
of other trusted signers, the browser negotiates a fast encrypted data channel with the server and loads
the page.
Certificates generally cost money because of the labor involved in validating the requests, so it pays to
shop around. A few CAs offer basic-level certificates free of charge. The most notable of these CAs is the
Let's Encrypt project, which also supports the automation of the certificate creation and renewal process.
For more information about using Let's Encrypt as your CA, see Certificate automation: Let's Encrypt with
Certbot on Amazon Linux 2 (p. 45).
If you plan to offer commercial-grade services, AWS Certificate Manager is a good option.
Underlying the host certificate is the key. As of 2017, government and industry groups recommend using
a minimum key (modulus) size of 2048 bits for RSA keys intended to protect documents through 2030.
The default modulus size generated by OpenSSL in Amazon Linux is 2048 bits, which means that the
existing auto-generated key is suitable for use in a CA-signed certificate. An alternative procedure is
described below for those who desire a customized key, for instance, one with a larger modulus or using
a different encryption algorithm.
These instructions for acquiring a CA-signed host certificate do not work unless you own a registered and
hosted DNS domain.
1. Connect to your instance (p. 11) and navigate to /etc/pki/tls/private/. This is the directory where
the server's private key for TLS is stored. If you prefer to use your existing host key to generate the
CSR, skip to Step 3.
2. (Optional) Generate a new private key. Here are some examples of key configurations. Any of
the resulting keys work with your web server, but they vary in how (and how much) security they
implement.
• Example 1: Create a default RSA host key. The resulting file, custom.key, is a 2048-bit RSA
private key.
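The command for this example would be:
[ec2-user ~]$ sudo openssl genrsa -out custom.key 2048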
• Example 2: Create a stronger RSA key with a bigger modulus. The resulting file, custom.key, is a
4096-bit RSA private key.
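The command for this example would be:
[ec2-user ~]$ sudo openssl genrsa -out custom.key 4096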
• Example 3: Create a 4096-bit encrypted RSA key with password protection. The resulting file,
custom.key, is a 4096-bit RSA private key encrypted with the AES-128 cipher.
Important
Encrypting the key provides greater security, but because an encrypted key requires a
password, services depending on it cannot be auto-started. Each time you use this key,
you must supply the password (in the preceding example, "abcde12345") over an SSH
connection.
[ec2-user ~]$ sudo openssl genrsa -aes128 -passout pass:abcde12345 -out custom.key
4096
• Example 4: Create a key using a non-RSA cipher. RSA cryptography can be relatively slow because
of the size of its public keys, which are based on the product of two large prime numbers.
However, it is possible to create keys for TLS that use non-RSA ciphers. Keys based on the
mathematics of elliptic curves are smaller and computationally faster when delivering an
equivalent level of security.
[ec2-user ~]$ sudo openssl ecparam -name prime256v1 -out custom.key -genkey
The result is a 256-bit elliptic curve private key using prime256v1, a "named curve" that OpenSSL
supports. Its cryptographic strength is slightly greater than a 2048-bit RSA key, according to NIST.
Note
Not all CAs provide the same level of support for elliptic-curve-based keys as for RSA
keys.
Make sure that the new private key has highly restrictive ownership and permissions (owner=root,
group=root, read/write for owner only). The commands would be as follows:
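The commands to set and verify these restrictive permissions are:
[ec2-user ~]$ sudo chown root:root custom.key
[ec2-user ~]$ sudo chmod 600 custom.key
[ec2-user ~]$ ls -al custom.key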
After you have created and configured a satisfactory key, you can create a CSR.
3. Create a CSR using your preferred key; the example below uses custom.key:
[ec2-user ~]$ sudo openssl req -new -key custom.key -out csr.pem
OpenSSL opens a dialog and prompts you for the information shown in the following table. All of
the fields except Common Name are optional for a basic, domain-validated host certificate.
Country Name
The two-letter ISO abbreviation for your country. Example: US (=United States)
Organization Name
The full legal name of your organization. Do not abbreviate your organization name.
Example: Example Corporation
Common Name
This value must exactly match the web address that you expect users to type into a
browser. Usually, this means a domain name with a prefixed host name or alias in the
form www.example.com. In testing with a self-signed certificate and no DNS resolution,
the common name may consist of the host name alone. CAs also offer more expensive
certificates that accept wild-card names such as *.example.com. Example: www.example.com
Finally, OpenSSL prompts you for an optional challenge password. This password applies only to the
CSR and to transactions between you and your CA, so follow the CA's recommendations about this
and the other optional field, optional company name. The CSR challenge password has no effect on
server operation.
The resulting file csr.pem contains your public key, your digital signature of your public key, and
the metadata that you entered.
4. Submit the CSR to a CA. This usually consists of opening your CSR file in a text editor and copying
the contents into a web form. At this time, you may be asked to supply one or more subject
alternative names (SANs) to be placed on the certificate. If www.example.com is the common name,
then example.com would be a good SAN, and vice versa. A visitor to your site typing in either of
these names would see an error-free connection. If your CA web form allows it, include the common
name in the list of SANs. Some CAs include it automatically.
After your request has been approved, you receive a new host certificate signed by the CA. You
might also be instructed to download an intermediate certificate file that contains additional
certificates needed to complete the CA's chain of trust.
Note
Your CA may send you files in multiple formats intended for various purposes. For this
tutorial, you should only use a certificate file in PEM format, which is usually (but not
always) marked with a .pem or .crt extension. If you are uncertain which file to use, open
the files with a text editor and find the one containing one or more blocks beginning with
the following:
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
Verify that these lines appear in the file. Do not use files ending with .p7b, .p7c, or similar
file extensions.
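A quick command-line check is possible as well; the following sketch assumes the candidate file is named custom.crt:

```shell
# Prints the file name only if the file contains a PEM certificate block.
grep -l "BEGIN CERTIFICATE" custom.crt
```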
5. Place the new CA-signed certificate and any intermediate certificates in the /etc/pki/tls/certs
directory.
Note
There are several ways to upload your custom key to your EC2 instance, but the most
straightforward and informative way is to open a text editor (for example, vi, nano,
or notepad) on both your local computer and your instance, and then copy and paste
the file contents between them. You need root [sudo] permissions when performing
these operations on the EC2 instance. This way, you can see immediately if there are any
permission or path problems. Be careful, however, not to add any additional lines while
copying the contents, or to change them in any way.
From inside the /etc/pki/tls/certs directory, use the following commands to verify that the
file ownership, group, and permission settings match the highly restrictive Amazon Linux defaults
(owner=root, group=root, read/write for owner only).
The permissions for the intermediate certificate file are less stringent (owner=root, group=root,
owner can write, group can read, world can read). The commands would be:
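Those commands are not shown above; they would likely resemble the following sketch (custom.crt and intermediate.crt are the example file names used in this tutorial):

```shell
# Run inside /etc/pki/tls/certs with root privileges.
sudo chown root:root custom.crt intermediate.crt
sudo chmod 600 custom.crt        # -rw-------: owner read/write only
sudo chmod 644 intermediate.crt  # -rw-r--r--: group and world can read
ls -al custom.crt intermediate.crt
```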
6. If you used a custom key to create your CSR and the resulting host certificate, remove or rename the
old key from the /etc/pki/tls/private/ directory, and then install the new key there.
Note
There are several ways to upload your custom key to your EC2 instance, but the most
straightforward and informative way is to open a text editor (vi, nano, notepad, etc.) on
both your local computer and your instance, and then copy and paste the file contents
between them. You need root [sudo] privileges when performing these operations on
the EC2 instance. This way, you can see immediately if there are any permission or path
problems. Be careful, however, not to add any additional lines while copying the contents,
or to change them in any way.
From inside the /etc/pki/tls/private directory, check that the file ownership, group, and
permission settings match the highly restrictive Amazon Linux defaults (owner=root, group=root,
read/write for owner only). The commands would be as follows:
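Again, the commands are not reproduced above; a sketch, assuming the new key is custom.key:

```shell
# Run inside /etc/pki/tls/private with root privileges.
sudo chown root:root custom.key
sudo chmod 600 custom.key
sudo ls -al custom.key
# The listing should show: -rw------- 1 root root ... custom.key
```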
Next, edit the SSL configuration file (/etc/httpd/conf.d/ssl.conf) so that Apache can find the new key and certificates:
a. Provide the path and file name of the CA-signed host certificate in Apache's
SSLCertificateFile directive:
SSLCertificateFile /etc/pki/tls/certs/custom.crt
b. If you received an intermediate certificate file (intermediate.crt in this example), provide its
path and file name using Apache's SSLCACertificateFile directive:
SSLCACertificateFile /etc/pki/tls/certs/intermediate.crt
Note
Some CAs combine the host certificate and the intermediate certificates in a single file,
making this directive unnecessary. Consult the instructions provided by your CA.
c. Provide the path and file name of the private key in Apache's SSLCertificateKeyFile
directive:
SSLCertificateKeyFile /etc/pki/tls/private/custom.key
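Before restarting Apache, it can be worth confirming that the certificate and the private key actually belong together. For an RSA key, one common check (a sketch, not part of the original procedure) is to compare the public-key moduli of the two files:

```shell
# Both commands must print the same MD5 digest; a mismatch means
# Apache will refuse to start with these files.
openssl x509 -noout -modulus -in /etc/pki/tls/certs/custom.crt | openssl md5
openssl rsa  -noout -modulus -in /etc/pki/tls/private/custom.key | openssl md5
```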
Step 3: Test and harden the security configuration
9. Test your server by entering your domain name into a browser URL bar with the prefix https://.
Your browser should load the test page over HTTPS without generating errors.
On the Qualys SSL Labs site, type the fully qualified domain name of your server, in the form
www.example.com. After about two minutes, you receive a grade (from A to F) for your site and a
detailed breakdown of the findings. Though the overview shows that the configuration is mostly sound,
the detailed report flags several potential problems. For example:
✗ The RC4 cipher is supported for use by certain older browsers. A cipher is the mathematical core of
an encryption algorithm. RC4, a fast cipher used to encrypt TLS data-streams, is known to have several
serious weaknesses. Unless you have very good reasons to support legacy browsers, you should disable
this.
✗ Old TLS versions are supported. The configuration supports TLS 1.0 (already deprecated) and TLS 1.1
(on a path to deprecation). Since 2018, best practice has recommended TLS 1.2 only.
1. Open the configuration file /etc/httpd/conf.d/ssl.conf in a text editor and comment out the
following lines by typing "#" at the beginning of each:
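The lines themselves do not appear above. Based on the explanation that follows, they were likely the stock SSLProtocol entries, replaced by an explicit protocol directive; a sketch (verify against your own ssl.conf):

```
#SSLProtocol all -SSLv3
#SSLProxyProtocol all -SSLv3

# Added below the commented-out lines: allow only TLS 1.2.
SSLProtocol -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 +TLSv1.2
SSLProxyProtocol -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 +TLSv1.2
```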
These directives explicitly disable SSL versions 2 and 3, as well as TLS versions 1.0 and 1.1. The
server now refuses to accept encrypted connections with clients using anything except TLS 1.2. The
verbose wording in the directive communicates more clearly, to a human reader, what the server is
configured to do.
Note
Disabling TLS versions 1.0 and 1.1 in this manner blocks a small percentage of outdated
web browsers from accessing your site.
1. Open the configuration file /etc/httpd/conf.d/ssl.conf and find the section with
commented-out examples for configuring SSLCipherSuite and SSLProxyCipherSuite.
#SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
#SSLProxyCipherSuite HIGH:MEDIUM:!aNULL:!MD5
Leave these as they are, and below them add the following directives:
Note
Though shown here on several lines for readability, each of these two directives must be on
a single line without spaces between the cipher names.
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-
CHACHA20-POLY1305:
ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-
SHA256:ECDHE-ECDSA-AES256-SHA384:
ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES:!aNULL:!
eNULL:!EXPORT:!DES:
!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
SSLProxyCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-
ECDSA-CHACHA20-POLY1305:
ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-
SHA256:ECDHE-ECDSA-AES256-SHA384:
ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES:!aNULL:!
eNULL:!EXPORT:!DES:
!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
These ciphers are a subset of the much longer list of supported ciphers in OpenSSL. They were
selected and ordered according to the following criteria:
The high-ranking ciphers have ECDHE in their names, for Elliptic Curve Diffie-Hellman Ephemeral; the
ephemeral indicates forward secrecy. Also, RC4 is now among the forbidden ciphers near the end.
We recommend that you use an explicit list of ciphers instead of relying on defaults or terse directives
whose content isn't visible. The cipher list shown here is just one of many possible lists; for instance,
you might want to optimize a list for speed rather than forward secrecy.
If you anticipate a need to support older clients, you can allow the DES-CBC3-SHA cipher suite.
Each update to OpenSSL introduces new ciphers and deprecates old ones. Keep your EC2 Amazon
Linux instance up to date, watch for security announcements from OpenSSL, and be alert to reports
of new security exploits in the technical press.
2. Uncomment the following line by removing the "#":
#SSLHonorCipherOrder on
This directive forces the server to prefer high-ranking ciphers, including (in this case) those that
support forward secrecy. With this directive turned on, the server tries to establish a strongly secure
connection before falling back to allowed ciphers with lesser security.
3. Restart Apache. If you test the domain again on Qualys SSL Labs, you should see that the RC4
vulnerability is gone.
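On Amazon Linux 2, for example, the restart would look like the following (a sketch; on the original Amazon Linux AMI the equivalent is sudo service httpd restart):

```shell
sudo systemctl restart httpd
```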
Troubleshoot
• My Apache webserver won't start unless I enter a password
This is expected behavior if you installed an encrypted, password-protected, private server key.
You can remove the encryption and password requirement from the key. Assuming that you have a
private encrypted RSA key called custom.key in the default directory, and that the password on it is
abcde12345, run the following commands on your EC2 instance to generate an unencrypted version
of the key.
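The commands are not reproduced above. Assuming the file name and password from the example, they would likely resemble:

```shell
# Keep a backup of the encrypted key, then write a decrypted copy in its place.
sudo cp /etc/pki/tls/private/custom.key /etc/pki/tls/private/custom.key.bak
sudo openssl rsa -in /etc/pki/tls/private/custom.key.bak \
  -passin pass:abcde12345 -out /etc/pki/tls/private/custom.key
```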
• One or more Amazon Elastic Block Store (Amazon EBS) snapshots, or, for instance-store-backed AMIs,
a template for the root volume of the instance (for example, an operating system, an application
server, and applications).
• Launch permissions that control which AWS accounts can use the AMI to launch instances.
• A block device mapping that specifies the volumes to attach to the instance when it's launched.
Contents
• Use an AMI (p. 93)
• Create your own AMI (p. 94)
• Buy, share, and sell AMIs (p. 94)
• Deregister your AMI (p. 95)
• Amazon Linux 2 and Amazon Linux AMI (p. 95)
• AMI types (p. 95)
• Linux AMI virtualization types (p. 98)
• Boot modes (p. 100)
• Find a Linux AMI (p. 107)
• Shared AMIs (p. 112)
• Paid AMIs (p. 130)
• AMI lifecycle (p. 133)
• Use encryption with EBS-backed AMIs (p. 189)
• Understand AMI billing information (p. 193)
• Amazon Linux (p. 197)
• User provided kernels (p. 215)
• Configure the Amazon Linux 2 MATE desktop connection (p. 220)
Use an AMI
The following diagram summarizes the AMI lifecycle. After you create and register an AMI, you can use it
to launch new instances. (You can also launch instances from an AMI if the AMI owner grants you launch
permissions.) You can copy an AMI within the same AWS Region or to different AWS Regions. When you
no longer require an AMI, you can deregister it.
You can search for an AMI that meets the criteria for your instance. You can search for AMIs provided
by AWS or AMIs provided by the community. For more information, see AMI types (p. 95) and Find a
Linux AMI (p. 107).
After you launch an instance from an AMI, you can connect to it. When you are connected to an instance,
you can use it just like you use any other server. For information about launching, connecting, and using
your instance, see Tutorial: Get started with Amazon EC2 Linux instances (p. 9).
Create your own AMI
The root storage device of the instance determines the process you follow to create an AMI. The root
volume of an instance is either an Amazon Elastic Block Store (Amazon EBS) volume or an instance
store volume. For more information about the root device volume, see Amazon EC2 instance root device
volume (p. 1638).
• To create an Amazon EBS-backed AMI, see Create an Amazon EBS-backed Linux AMI (p. 134).
• To create an instance store-backed AMI, see Create an instance store-backed Linux AMI (p. 139).
To help categorize and manage your AMIs, you can assign custom tags to them. For more information,
see Tag your Amazon EC2 resources (p. 1666).
You can purchase AMIs from a third party, including AMIs that come with service contracts from
organizations such as Red Hat. You can also create an AMI and sell it to other Amazon EC2 users. For
more information about buying or selling AMIs, see Paid AMIs (p. 130).
Amazon Linux 2 and Amazon Linux AMI
Amazon Linux 2 and the Amazon Linux AMI are supported and maintained Linux images provided by
AWS. They include the following features:
• A stable, secure, and high-performance execution environment for applications running on Amazon
EC2.
• Provided at no additional charge to Amazon EC2 users.
• Repository access to multiple versions of MySQL, PostgreSQL, Python, Ruby, Tomcat, and many more
common packages.
• Updated on a regular basis to include the latest components, and these updates are also made
available in the yum repositories for installation on running instances.
• Includes packages that enable easy integration with AWS services, such as the AWS CLI, Amazon EC2
API and AMI tools, the Boto library for Python, and the Elastic Load Balancing tools.
AMI types
You can select an AMI to use based on the following characteristics:
Launch permissions
The owner of an AMI determines its availability by specifying launch permissions. Launch permissions fall
into the following categories.
Launch Description
permission
explicit The owner grants launch permissions to specific AWS accounts, organizations, or
organizational units (OUs).
Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information,
see Shared AMIs (p. 112). Developers can charge for their AMIs. For more information, see Paid
AMIs (p. 130).
Storage for the root device
All AMIs are categorized as either backed by Amazon EBS or backed by instance store:
• Amazon EBS-backed AMI – The root device for an instance launched from the AMI is an Amazon Elastic
Block Store (Amazon EBS) volume created from an Amazon EBS snapshot.
• Amazon instance store-backed AMI – The root device for an instance launched from the AMI is an
instance store volume created from a template stored in Amazon S3.
For more information, see Amazon EC2 instance root device volume (p. 1638).
The following list summarizes the important differences between the two types of AMIs.

Boot time for an instance
• Amazon EBS-backed: Usually less than 1 minute.
• Instance store-backed: Usually less than 5 minutes.

Data persistence
• Amazon EBS-backed: By default, the root volume is deleted when the instance terminates.* Data on
any other EBS volumes persists after instance termination by default.
• Instance store-backed: Data on any instance store volumes persists only during the life of the
instance.

Modifications
• Amazon EBS-backed: The instance type, kernel, RAM disk, and user data can be changed while the
instance is stopped.
• Instance store-backed: Instance attributes are fixed for the life of an instance.

Charges
• Amazon EBS-backed: You're charged for instance usage, EBS volume usage, and storing your AMI as an
EBS snapshot.
• Instance store-backed: You're charged for instance usage and storing your AMI in Amazon S3.

AMI creation/bundling
• Amazon EBS-backed: Uses a single command/call.
• Instance store-backed: Requires installation and use of the AMI tools.

* By default, EBS root volumes have the DeleteOnTermination flag set to true. For information
about how to change this flag so that the volume persists after termination, see Change the root volume
to persist (p. 1642).
** Supported with io2 EBS Block Express only. For more information, see io2 Block Express
volumes (p. 1337).
To determine the root device type of an AMI using the command line
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
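For example, with the AWS CLI (a sketch; the AMI ID is a placeholder):

```shell
aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
  --query "Images[].RootDeviceType"
```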
Stopped state
You can stop an instance that has an EBS volume for its root device, but you can't stop an instance that
has an instance store volume for its root device.
Stopping causes the instance to stop running (its status goes from running to stopping to stopped).
A stopped instance persists in Amazon EBS, which allows it to be restarted. Stopping is different from
terminating; you can't restart a terminated instance. Because instances with an instance store volume for
the root device can't be stopped, they're either running or terminated. For more information about what
happens and what you can do while an instance is stopped, see Stop and start your instance (p. 622).
Instances that have Amazon EBS for the root device automatically have an EBS volume attached. The
volume appears in your list of volumes like any other. With most instance types, instances that have
an EBS volume for the root device don't have instance store volumes by default. You can add instance
store volumes or additional EBS volumes using a block device mapping. For more information, see Block
device mappings (p. 1647).
Boot times
Instances launched from an Amazon EBS-backed AMI launch faster than instances launched from an
instance store-backed AMI. When you launch an instance from an instance store-backed AMI, all the
parts have to be retrieved from Amazon S3 before the instance is available. With an Amazon EBS-backed
AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the
instance is available. However, the performance of an instance that uses an EBS volume for its root
device is slower for a short time while the remaining parts are retrieved from the snapshot and loaded
into the volume. When you stop and restart the instance, it launches quickly, because the state is stored
in an EBS volume.
AMI creation
To create Linux AMIs backed by instance store, you must create an AMI from your instance on the
instance itself using the Amazon EC2 AMI tools.
AMI creation is much easier for AMIs backed by Amazon EBS. The CreateImage API action creates your
Amazon EBS-backed AMI and registers it. There's also a button in the AWS Management Console that
lets you create an AMI from a running instance. For more information, see Create an Amazon EBS-backed
Linux AMI (p. 134).
With Amazon EC2 instance store-backed AMIs, each time you customize an AMI and create a new one, all
of the parts are stored in Amazon S3 for each AMI. So, the storage footprint for each customized AMI is
the full size of the AMI. For Amazon EBS-backed AMIs, each time you customize an AMI and create a new
one, only the changes are stored. So, the storage footprint for subsequent AMIs that you customize after
the first is much smaller, resulting in lower AMI storage charges.
When an instance with an EBS volume for its root device is stopped, you're not charged for instance
usage; however, you're still charged for volume storage. As soon as you start your instance, we charge a
minimum of one minute for usage. After one minute, we charge only for the seconds used. For example,
if you run an instance for 20 seconds and then stop it, we charge for a full one minute. If you run an
instance for 3 minutes and 40 seconds, we charge for exactly 3 minutes and 40 seconds of usage. We
charge you for each second, with a one-minute minimum, that you keep the instance running, even if the
instance remains idle and you don't connect to it.
For the best performance, we recommend that you use current generation instance types and HVM
AMIs when you launch your instances. For more information about current generation instance types,
see Amazon EC2 Instance Types. If you are using previous generation instance types and would like to
upgrade, see Upgrade Paths.
Linux AMI virtualization types
Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware
virtual machine (HVM). The main differences between them are as follows.
Description
• HVM: HVM AMIs are presented with a fully virtualized set of hardware and boot by executing the
master boot record of the root block device of your image. This virtualization type provides the ability
to run an operating system directly on top of a virtual machine without any modification, as if it were
run on the bare-metal hardware. The Amazon EC2 host system emulates some or all of the underlying
hardware that is presented to the guest.
• PV: PV AMIs boot with a special boot loader called PV-GRUB, which starts the boot cycle and then
chain loads the kernel specified in the menu.lst file on your image. Paravirtual guests can run on host
hardware that does not have explicit support for virtualization. Historically, PV guests had better
performance than HVM guests in many cases, but because of enhancements in HVM virtualization and
the availability of PV drivers for HVM AMIs, this is no longer true. For more information about PV-GRUB
and its use in Amazon EC2, see User provided kernels (p. 215).

Support for hardware extensions
• HVM: Yes. Unlike PV guests, HVM guests can take advantage of hardware extensions that provide fast
access to the underlying hardware on the host system. For more information on CPU virtualization
extensions available in Amazon EC2, see Intel Virtualization Technology on the Intel website. HVM
AMIs are required to take advantage of enhanced networking and GPU processing. In order to pass
through instructions to specialized network and GPU devices, the OS needs to be able to have access
to the native hardware platform; HVM virtualization provides this access. For more information, see
Enhanced networking on Linux (p. 1100) and Linux accelerated computing instances (p. 330).
• PV: No. PV guests cannot take advantage of special hardware extensions such as enhanced networking
or GPU processing.

Supported instance types
• HVM: All current generation instance types support HVM AMIs.
• PV: The following previous generation instance types support PV AMIs: C1, C3, HS1, M1, M3, M2, and
T1. Current generation instance types do not support PV AMIs.

Supported Regions
• HVM: All Regions support HVM instances.
• PV: Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe
(Ireland), South America (São Paulo), US East (N. Virginia), US West (N. California), and US West
(Oregon).

How to find
• HVM: Verify that the virtualization type of the AMI is set to hvm, using the console or the
describe-images command.
• PV: Verify that the virtualization type of the AMI is set to paravirtual, using the console or the
describe-images command.
PV on HVM
Paravirtual guests traditionally performed better with storage and network operations than HVM guests
because they could leverage special drivers for I/O that avoided the overhead of emulating network
and disk hardware, whereas HVM guests had to translate these instructions to emulated hardware.
Now PV drivers are available for HVM guests, so operating systems that cannot be ported to run in
a paravirtualized environment can still see performance advantages in storage and network I/O by
using them. With these PV on HVM drivers, HVM guests can get the same, or better, performance than
paravirtual guests.
Boot modes
When a computer boots, the first software that it runs is responsible for initializing the platform and
providing an interface for the operating system to perform platform-specific operations.
In EC2, two variants of the boot mode software are supported: Legacy BIOS and Unified Extensible
Firmware Interface (UEFI). By default, Intel and AMD instance types run on Legacy BIOS, and Graviton
instance types run on UEFI.
Most Intel and AMD instance types can run on both UEFI and Legacy BIOS. To use UEFI, you must select
an AMI with the boot mode parameter set to uefi, and the operating system contained in the AMI must
be configured to support UEFI.
The AMI boot mode parameter signals to EC2 which boot mode to use when launching an instance.
When the boot mode parameter is set to uefi, EC2 attempts to launch the instance on UEFI. If the
operating system is not configured to support UEFI, the instance launch might be unsuccessful.
Warning
Setting the boot mode parameter does not automatically configure the operating system
for the specified boot mode. The configuration is specific to the operating system. For the
configuration instructions, see the manual for your operating system.
The AMI boot mode parameter is optional. An AMI can have one of the following boot mode parameter
values: uefi or legacy-bios. Some AMIs do not have a boot mode parameter. For AMIs with no boot mode
parameter, the instances launched from these AMIs use the default value of the instance type—uefi on
Graviton, and legacy-bios on all Intel and AMD instance types.
Topics
• Considerations (p. 101)
• Requirements for launching an instance with UEFI (p. 101)
• Determine the boot mode parameter of an AMI (p. 101)
• Determine the supported boot modes of an instance type (p. 102)
• Determine the boot mode of an instance (p. 103)
• Determine the boot mode of the OS (p. 104)
• Set the boot mode of an AMI (p. 105)
• UEFI variables (p. 107)
Considerations
• Default boot modes:
• Intel and AMD instance types: Legacy BIOS
• Graviton instance types: UEFI
• Intel and AMD instance types that support UEFI, in addition to Legacy BIOS:
• Virtualized: C5, C5a, C5ad, C5d, C5n, D3, D3en, G4, I3en, M5, M5a, M5ad, M5d, M5dn, M5n, M5zn,
M6i, R5, R5a, R5ad, R5b, R5d, R5dn, R5n, T3, T3a, and z1d
• UEFI Secure Boot is currently not supported.
• Instance type – When launching an instance, you must select an instance type that supports UEFI. For
more information, see Determine the supported boot modes of an instance type (p. 102).
• AMI – When launching an instance, you must select an AMI that is configured for UEFI. The AMI must
be configured as follows:
• OS – The operating system contained in the AMI must be configured to use UEFI; otherwise, the
instance launch will fail. For more information, see Determine the boot mode of the OS (p. 104).
• AMI boot mode parameter – The boot mode parameter of the AMI must be set to uefi. For more
information, see Determine the boot mode parameter of an AMI (p. 101).
AWS does not provide AMIs that are already configured to support UEFI. You must configure the
AMI (p. 105), import the AMI through VM Import/Export, or import the AMI through CloudEndure.
Some AMIs do not have a boot mode parameter. When an AMI has no boot mode parameter, the
instances launched from the AMI use the default value of the instance type, which is uefi on Graviton,
and legacy-bios on Intel and AMD instance types.
To determine the boot mode parameter of an AMI when launching an instance (console)
When launching an instance using the launch instance wizard, at the step to select an AMI, inspect the
Boot mode field. For more information, see Step 1: Choose an Amazon Machine Image (AMI) (p. 566).
To determine the boot mode parameter of an AMI (AWS CLI version 1.19.34 and later and version
2.1.32 and later)
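The command itself does not appear above; it would likely resemble the following (the AMI ID is a placeholder):

```shell
aws ec2 describe-images --region us-east-1 --image-ids ami-0abcdef1234567890
```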
Expected output
{
    "Images": [
        {
            ...
            "EnaSupport": true,
            "Hypervisor": "xen",
            "ImageOwnerAlias": "amazon",
            "Name": "UEFI_Boot_Mode_Enabled-Windows_Server-2016-English-Full-Base-2020.09.30",
            "RootDeviceName": "/dev/sda1",
            "RootDeviceType": "ebs",
            "SriovNetSupport": "simple",
            "VirtualizationType": "hvm",
            "BootMode": "uefi"
        }
    ]
}
Use the describe-instance-types command to determine the supported boot modes of an instance type.
By including the --query parameter, you can filter the output. In this example, the output is filtered to
return only the supported boot modes.
The following example shows that m5.2xlarge supports both UEFI and Legacy BIOS boot modes.
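The command is not shown above; it would likely resemble:

```shell
aws ec2 describe-instance-types --instance-types m5.2xlarge \
  --query "InstanceTypes[*].SupportedBootModes"
```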
Expected output
[
[
"legacy-bios",
"uefi"
]
]
The following example shows that t2.xlarge supports only Legacy BIOS.
Expected output
[
[
"legacy-bios"
]
]
Determine the boot mode of an instance
The boot mode of an instance is determined by the AMI that was used to launch it:
• An AMI with a boot mode parameter of uefi creates an instance with a boot mode parameter of uefi.
• An AMI with a boot mode parameter of legacy-bios creates an instance with no boot mode parameter.
An instance with no boot mode parameter uses its default value, which is legacy-bios in this case.
• An AMI with no boot mode parameter value creates an instance with no boot mode parameter value.
The value of the instance's boot mode parameter determines the mode in which it boots. If there is
no value, the default boot mode is used, which is uefi on Graviton, and legacy-bios on Intel and AMD
instance types.
To determine the boot mode of an instance (AWS CLI version 1.19.34 and later and version 2.1.32
and later)
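The command is not shown above; it would likely resemble the following (the instance ID is a placeholder):

```shell
aws ec2 describe-instances --instance-ids i-1234567890abcdef0
```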
Expected output
{
    "Reservations": [
        {
            "Groups": [],
            "Instances": [
                {
                    "AmiLaunchIndex": 0,
                    "ImageId": "ami-0e2063e7f6dc3bee8",
                    "InstanceId": "i-1234567890abcdef0",
                    "InstanceType": "m5.2xlarge",
                    ...
                    "BootMode": "uefi"
                }
            ],
            "OwnerId": "1234567890",
            "ReservationId": "r-1234567890abcdef0"
        }
    ]
}
Determine the boot mode of the OS
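The boot-entry listing that follows is produced by the efibootmgr tool on the instance (a sketch; the utility may need to be installed first):

```shell
[ec2-user ~]$ sudo efibootmgr
```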
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0000,0001,0002
Boot0000* UiApp
Boot0001* UEFI Amazon Elastic Block Store vol-xyz
Boot0002* EFI Internal Shell
• Run the following command to verify the existence of the /sys/firmware/efi directory. This
directory exists only if the instance boots using UEFI. If this directory doesn't exist, the command
returns Legacy BIOS Boot Detected.
[ec2-user ~]$ [ -d /sys/firmware/efi ] && echo "UEFI Boot Detected" || echo "Legacy
BIOS Boot Detected"
Set the boot mode of an AMI
• Run the following command to verify that EFI appears in the dmesg output.
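The command is not reproduced above; it would likely resemble:

```shell
[ec2-user ~]$ dmesg | grep -i "EFI"
```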
To convert an existing Legacy BIOS-based instance to UEFI, or an existing UEFI-based instance to Legacy
BIOS, you need to perform a number of steps: First, modify the instance's volume and OS to support the
selected boot mode. Then, create a snapshot of the volume. Finally, use register-image to create the AMI
using the snapshot.
You can't set the boot mode of an AMI using the create-image command. With create-image, the AMI
inherits the boot mode of the EC2 instance used for creating the AMI. For example, if you create an AMI
from an EC2 instance running on Legacy BIOS, the AMI boot mode will be configured as legacy-bios.
Warning
Before proceeding with these steps, you must first make suitable modifications to the instance's
volume and OS to support booting via the selected boot mode; otherwise, the resulting AMI
will not be usable. The modifications that are required are operating system-specific. For more
information, see the manual for your operating system.
To set the boot mode of an AMI (AWS CLI version 1.19.34 and later and version 2.1.32 and
later)
1. Make suitable modifications to the instance's volume and OS to support booting via the selected
boot mode. The modifications that are required are operating system-specific. For more information,
see the manual for your operating system.
Note
If you don't perform this step, the AMI will not be usable.
2. To find the volume ID of the instance, use the describe-instances command. You'll create a snapshot
of this volume in the next step.
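The command is not shown above; a sketch (the instance ID is a placeholder):

```shell
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 \
  --query "Reservations[].Instances[].BlockDeviceMappings"
```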
Expected output
...
"BlockDeviceMappings": [
    {
        "DeviceName": "/dev/sda1",
        "Ebs": {
            "AttachTime": "",
            "DeleteOnTermination": true,
            "Status": "attached",
            "VolumeId": "vol-1234567890abcdef0"
        }
    }
]
...
3. To create a snapshot of the volume, use the create-snapshot command. Use the volume ID from the
previous step.
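The command is not shown above; a sketch using the volume ID from the expected output:

```shell
aws ec2 create-snapshot --volume-id vol-1234567890abcdef0 \
  --description "add text"
```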
Expected output
{
"Description": "add text",
"Encrypted": false,
"OwnerId": "123",
"Progress": "",
"SnapshotId": "snap-01234567890abcdef",
"StartTime": "",
"State": "pending",
"VolumeId": "vol-1234567890abcdef0",
"VolumeSize": 30,
"Tags": []
}
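Snapshot creation takes time; the "Example output" that follows most likely comes from polling the snapshot with describe-snapshots until its State is completed (a sketch):

```shell
aws ec2 describe-snapshots --snapshot-ids snap-01234567890abcdef
```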
Example output
{
"Snapshots": [
{
"Description": "This is my snapshot",
"Encrypted": false,
"VolumeId": "vol-049df61146c4d7901",
"State": "completed",
"VolumeSize": 8,
"StartTime": "2019-02-28T21:28:32.000Z",
"Progress": "100%",
"OwnerId": "012345678910",
"SnapshotId": "snap-01234567890abcdef",
...
6. To create a new AMI, use the register-image command. Use the snapshot ID that you noted in
the earlier step. To set the boot mode to UEFI, add the --boot-mode uefi parameter to the
command.
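A sketch of the full register-image invocation, with an illustrative AMI name and the device mapping from the earlier output:

```shell
aws ec2 register-image \
    --name "my-uefi-ami" \
    --architecture x86_64 \
    --virtualization-type hvm \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-01234567890abcdef}" \
    --boot-mode uefi
```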
Expected output
{
"ImageId": "ami-new_ami_123"
}
7. To verify that the newly-created AMI has the boot mode that you specified in the previous step, use
the describe-images command.
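A sketch of the call, using the image ID from the previous step:

```shell
aws ec2 describe-images --image-ids ami-new_ami_123
```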
Expected output
{
    "Images": [
        {
            "Architecture": "x86_64",
            "CreationDate": "2021-01-06T14:31:04.000Z",
            "ImageId": "ami-new_ami_123",
            "ImageLocation": "",
            ...
            "BootMode": "uefi"
        }
    ]
}
8. Launch a new instance using the newly-created AMI. All new instances created from this AMI will
inherit the same boot mode.
9. To verify that the new instance has the expected boot mode, use the describe-instances command.
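A sketch of the check, using a placeholder instance ID and a --query filter to show only the boot mode:

```shell
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 \
    --query "Reservations[].Instances[].BootMode"
```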
UEFI variables
When you launch an instance where the boot mode is set to UEFI, a key-value store for variables is
created. The store can be used by UEFI and the instance operating system for storing UEFI variables.
UEFI variables are used by the boot loader and the OS to configure early system startup. They allow the
OS to manage certain settings of the boot process, like the boot order.
Warning
Operating systems often provide read access to local processes for any UEFI variable. You should
never store sensitive data, such as passwords or personally identifiable information, in the UEFI
variable store.
Find a Linux AMI
When you search for an AMI, consider the following characteristics of the instance that you plan
to launch:
• The Region
• The operating system
• The architecture: 32-bit (i386), 64-bit (x86_64), or 64-bit ARM (arm64)
• The root device type: Amazon EBS or instance store
If you need to find a Windows AMI, see Find a Windows AMI in the Amazon EC2 User Guide for Windows
Instances.
If you need to find an Ubuntu AMI, see their EC2 AMI Locator.
If you need to find a Red Hat AMI, see the RHEL knowledgebase article.
Contents
• Find a Linux AMI using the Amazon EC2 console (p. 108)
• Find an AMI using the AWS CLI (p. 109)
• Find the latest Amazon Linux AMI using Systems Manager (p. 109)
• Use a Systems Manager parameter to find an AMI (p. 110)
To launch an instance from this AMI using the console, see Launching your instance from an
AMI (p. 567). If you're not ready to launch the instance now, make note of the AMI ID for later.
The describe-images command supports filtering parameters. For example, use the --owners parameter
to display public AMIs owned by Amazon.
You can add the following filter to the previous command to display only AMIs backed by Amazon EBS.
--filters "Name=root-device-type,Values=ebs"
Important
Omitting the --owners flag from the describe-images command will return all images for
which you have launch permissions, regardless of ownership.
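Combining the two, a sketch of a describe-images call that lists public, EBS-backed AMIs owned by Amazon:

```shell
aws ec2 describe-images --owners amazon --filters "Name=root-device-type,Values=ebs"
```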
The Amazon EC2 AMI public parameters are available from the following path:
/aws/service/ami-amazon-linux-latest
You can view a list of all Linux AMIs in the current AWS Region by using the following command in the
AWS CLI.
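A sketch of the call, listing the parameter names under the public path:

```shell
aws ssm get-parameters-by-path \
    --path /aws/service/ami-amazon-linux-latest \
    --query "Parameters[].Name"
```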
The following example uses the EC2-provided public parameter to launch an m5.xlarge instance using
the latest Amazon Linux 2 AMI.
To specify the parameter in the command, use the following syntax: resolve:ssm:public-
parameter, where resolve:ssm is the standard prefix and public-parameter is the path and name
of the public parameter.
In this example, the --count and --security-group parameters are not included. For --count, the
default is 1. If you have a default VPC and a default security group, they are used.
aws ec2 run-instances \
    --image-id resolve:ssm:/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
    --instance-type m5.xlarge \
    --key-name MyKeyPair
For more information, see Using public parameters in the AWS Systems Manager User Guide and Query
for the latest Amazon Linux AMI IDs Using AWS Systems Manager Parameter Store.
A Systems Manager parameter is a customer-defined key-value pair that you can create in Systems
Manager Parameter Store. The Parameter Store provides a central store to externalize your application
configuration values. For more information, see AWS Systems Manager Parameter Store in the AWS
Systems Manager User Guide.
When you create a parameter that points to an AMI ID, make sure that you specify the data type as
aws:ec2:image. This data type ensures that when the parameter is created or modified, the parameter
value is validated as an AMI ID. For more information, see Native parameter support for Amazon Machine
Image IDs in the AWS Systems Manager User Guide.
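For example, a sketch of creating such a parameter (the name golden-ami and the AMI ID are placeholders):

```shell
aws ssm put-parameter \
    --name golden-ami \
    --type String \
    --data-type "aws:ec2:image" \
    --value ami-0abcdef1234567890
```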
Contents
• Use cases (p. 110)
• Launch an instance using a Systems Manager parameter (p. 111)
• Permissions (p. 112)
• Limitations (p. 112)
Use cases
By using Systems Manager parameters to point to AMI IDs, you can make it easier for your users to select
the correct AMI when launching instances, and you can simplify the maintenance of automation code.
If you require instances to be launched using a specific AMI, and if that AMI is updated regularly, we
recommend that you require your users to select a Systems Manager parameter to find the AMI. By
requiring your users to select a Systems Manager parameter, you can ensure that the latest AMI is used
to launch instances.
For example, every month in your organization you might create a new version of your AMI that has
the latest operating system and application patches. You also require your users to launch instances
using the latest version of your AMI. To ensure that your users use the latest version, you can create a
Systems Manager parameter (for example, golden-ami) that points to the correct AMI ID. Each time a
new version of the AMI is created, you update the AMI ID value in the parameter so that it always points
to the latest AMI. Your users don't need to know about the periodic updates to the AMI, because they
continue to select the same Systems Manager parameter every time. By having users select a Systems
Manager parameter, you make it easier for them to select the correct AMI for an instance launch.
If you use automation code to launch your instances, you can specify the Systems Manager parameter
instead of the AMI ID. If a new version of the AMI is created, you change the AMI ID value in the
parameter so that it points to the latest AMI. The automation code that references the parameter doesn’t
need to be modified every time a new version of the AMI is created. This greatly simplifies maintenance
of automation and helps drive down deployment costs.
Note
Running instances are not affected when you change the AMI ID to which the Systems Manager
parameter points.
For more information about launching an instance from an AMI using the launch wizard, see Step 1:
Choose an Amazon Machine Image (AMI) (p. 566).
To launch an instance using an AWS Systems Manager parameter instead of an AMI ID (AWS CLI)
The following example uses the Systems Manager parameter golden-ami to launch an m5.xlarge
instance. The parameter points to an AMI ID.
To specify the parameter in the command, use the following syntax: resolve:ssm:/parameter-name,
where resolve:ssm is the standard prefix and parameter-name is the unique parameter name. Note
that the parameter name is case-sensitive. The slash before the parameter name is only necessary
when the parameter is part of a hierarchy, for example, /amis/production/golden-ami. You can
omit the slash if the parameter is not part of a hierarchy.
In this example, the --count and --security-group parameters are not included. For --count, the
default is 1. If you have a default VPC and a default security group, they are used.
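A sketch of the launch command described above:

```shell
aws ec2 run-instances \
    --image-id resolve:ssm:/golden-ami \
    --instance-type m5.xlarge \
    --key-name MyKeyPair
```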
To launch an instance using a specific version of an AWS Systems Manager parameter (AWS CLI)
Systems Manager parameters have version support. Each iteration of a parameter is assigned a unique
version number. You can reference a specific version of the parameter as follows: resolve:ssm:parameter-
name:version, where version is the unique version number. By default, the latest version of the
parameter is used when no version is specified.
In this example, the --count and --security-group parameters are not included. For --count, the
default is 1. If you have a default VPC and a default security group, they are used.
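For example, a sketch that pins version 2 of the parameter (the version number is illustrative):

```shell
aws ec2 run-instances \
    --image-id resolve:ssm:/golden-ami:2 \
    --instance-type m5.xlarge \
    --key-name MyKeyPair
```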
Amazon EC2 provides Systems Manager public parameters for public AMIs provided by AWS. For
example, the public parameter /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 is
available in all Regions and always points to the latest version of the Amazon Linux 2 AMI in the Region.
Permissions
If you use Systems Manager parameters that point to AMI IDs in the launch instance wizard,
you must add ssm:DescribeParameters and ssm:GetParameters to your IAM policy.
ssm:DescribeParameters grants your IAM users the permission to view and select Systems Manager
parameters. ssm:GetParameters grants your IAM users the permission to get the values of the
Systems Manager parameters. You can also restrict access to specific Systems Manager parameters. For
more information, see Use the EC2 launch wizard (p. 1267).
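A minimal policy statement granting these two permissions might look like the following; restrict the Resource element if you want to limit access to specific parameters:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeParameters",
                "ssm:GetParameters"
            ],
            "Resource": "*"
        }
    ]
}
```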
Limitations
AMIs and Systems Manager parameters are Region specific. To use the same Systems Manager parameter
name across Regions, create a Systems Manager parameter in each Region with the same name (for
example, golden-ami). In each Region, point the Systems Manager parameter to an AMI in that Region.
Shared AMIs
A shared AMI is an AMI that a developer created and made available for others to use. One of the easiest
ways to get started with Amazon EC2 is to use a shared AMI that has the components you need and then
add custom content. You can also create your own AMIs and share them with others.
You use a shared AMI at your own risk. Amazon can't vouch for the integrity or security of AMIs shared
by other Amazon EC2 users. Therefore, you should treat shared AMIs as you would any foreign code that
you might consider deploying in your own data center and perform the appropriate due diligence. We
recommend that you get an AMI from a trusted source.
Amazon's public images have an aliased owner, which appears as amazon in the account field. This
enables you to find AMIs from Amazon easily. Other users can't alias their AMIs.
For information about creating an AMI, see Create an instance store-backed Linux AMI or Create an
Amazon EBS-backed Linux AMI. For information about building, delivering, and maintaining your
applications on the AWS Marketplace, see the AWS Marketplace Documentation.
Contents
• Find shared AMIs (p. 113)
• Make an AMI public (p. 115)
• Share an AMI with specific organizations or organizational units (p. 116)
Find shared AMIs
AMIs are a regional resource. Therefore, when searching for a shared AMI (public or private), you must
search for it from within the Region from which it is being shared. To make an AMI available in a different
Region, copy the AMI to the Region and then share it. For more information, see Copy an AMI (p. 170).
Topics
• Find a shared AMI (console) (p. 113)
• Find a shared AMI (AWS CLI) (p. 113)
• Use shared AMIs (p. 114)
The following command lists all public AMIs, including any public AMIs that you own.
The following command lists the AMIs for which you have explicit launch permissions. This list does not
include any AMIs that you own.
The following command lists the AMIs owned by Amazon. Amazon's public AMIs have an aliased owner,
which appears as amazon in the account field. This enables you to find AMIs from Amazon easily. Other
users can't alias their AMIs.
The following command lists the AMIs owned by the specified AWS account.
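Sketches of the four calls described above (the account ID is a placeholder):

```shell
# All public AMIs, including public AMIs that you own
aws ec2 describe-images --executable-users all

# AMIs for which you have explicit launch permissions (excludes AMIs that you own)
aws ec2 describe-images --executable-users self

# AMIs owned by Amazon
aws ec2 describe-images --owners amazon

# AMIs owned by a specific AWS account
aws ec2 describe-images --owners 123456789012
```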
To reduce the number of displayed AMIs, use a filter to list only the types of AMIs that interest you. For
example, use the following filter to display only EBS-backed AMIs.
--filters "Name=root-device-type,Values=ebs"
Use shared AMIs
To ensure that you don't accidentally lose access to your instance, we recommend that you initiate
two SSH sessions and keep the second session open until you've removed credentials that you don't
recognize and confirmed that you can still log in to your instance using SSH.
1. Identify and disable any unauthorized public SSH keys. The only key in the file should be the key you
used to launch the AMI. The following command locates authorized_keys files:
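A sketch of such a command, printing each authorized_keys file it finds along with its contents:

```shell
sudo find / -name "authorized_keys" -print -exec cat {} \;
```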
2. Disable password-based authentication for the root user. Open the sshd_config file and edit the
PermitRootLogin line as follows:
PermitRootLogin without-password
Alternatively, you can disable the ability to log in to the instance as the root user:
PermitRootLogin no
5. To prevent preconfigured remote logging, you should delete the existing configuration file and
restart the rsyslog service. For example:
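On a distribution where rsyslog reads /etc/rsyslog.conf and runs under systemd (an assumption; paths and service managers vary by distribution), this might look like:

```shell
sudo rm /etc/rsyslog.conf
sudo systemctl restart rsyslog
```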
If you discover a public AMI that you feel presents a security risk, contact the AWS security team. For
more information, see the AWS Security Center.
Make an AMI public
Topics
• Considerations (p. 115)
• Share an AMI with all AWS accounts (console) (p. 115)
• Share an AMI with all AWS accounts (AWS CLI) (p. 116)
Considerations
• AMIs with encrypted volumes cannot be made public.
• To avoid exposing sensitive data when you share an AMI, read the security considerations in Guidelines
for shared Linux AMIs (p. 125) and follow the recommended actions.
• If an AMI has a product code, or contains a snapshot of an encrypted volume, you can't make it public;
you can share the AMI only with specific AWS accounts.
• AMIs are a regional resource. When you share an AMI, it is only available in that Region. To make
an AMI available in a different Region, copy the AMI to the Region and then share it. For more
information, see Copy an AMI (p. 170).
• You are not billed when your AMI is used by other AWS accounts to launch instances. The accounts that
launch instances using the AMI are billed for the instances that they launch.
• When you share an AMI, users can only launch instances from the AMI. They can’t delete, share, or
modify it. However, after they have launched an instance using your AMI, they can then create an AMI
from their instance.
New console
Old console
You can add or remove account IDs from the list of accounts that have launch permissions for an AMI. To
make the AMI public, specify the all group. You can specify both public and explicit launch permissions.
1. Use the modify-image-attribute command as follows to add the all group to the
launchPermission list for the specified AMI.
2. To verify the launch permissions of the AMI, use the describe-image-attribute command.
3. (Optional) To make the AMI private again, remove the all group from its launch permissions.
Note that the owner of the AMI always has launch permissions and is therefore unaffected by this
command.
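Sketches of the three calls described above, with a placeholder AMI ID:

```shell
# 1. Make the AMI public
aws ec2 modify-image-attribute --image-id ami-0abcdef1234567890 \
    --launch-permission "Add=[{Group=all}]"

# 2. Verify the launch permissions
aws ec2 describe-image-attribute --image-id ami-0abcdef1234567890 \
    --attribute launchPermission

# 3. (Optional) Make the AMI private again
aws ec2 modify-image-attribute --image-id ami-0abcdef1234567890 \
    --launch-permission "Remove=[{Group=all}]"
```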
An organization is an entity that you create to consolidate and centrally manage your AWS accounts. You
can organize the accounts in a hierarchical, tree-like structure with a root at the top and organizational
units nested under the root. Each account can be directly in the root, or placed in one of the OUs in
the hierarchy. For more information, see AWS Organizations terminology and concepts in the AWS
Organizations User Guide.
When you share an AMI with an organization or an OU, all of the child accounts gain access to the
AMI. For example, in the following diagram, the AMI is shared with a top-level OU (indicated by the
arrow at the number 1). All of the OUs and accounts that are nested underneath that top-level OU
(indicated by the dotted line at number 2) also have access to the AMI. The accounts in the organization
and OU outside the dotted line (indicated by the number 3) do not have access to the AMI because they
are not children of the OU that the AMI is shared with.
Considerations
• No sharing limits – The AMI owner can share an AMI with any organization or OU, including
organizations and OUs that they’re not a member of.
There is no limit to the number of organizations or OUs with which an AMI can be shared.
• Tags – User-defined tags that you attach to a shared AMI are available only to your AWS account and
not to the AWS accounts in the other organizations and OUs that the AMI is shared with.
• ARN format – When specifying an organization or OU in a command, make sure to use the correct ARN
format. You'll get an error if you specify only the ID, for example, if you specify only o-123example or
ou-1234-5example.
The ARN formats are as follows:
• Organization ARN: arn:aws:organizations::account-id:organization/organization-id
• OU ARN: arn:aws:organizations::account-id:ou/organization-id/ou-id
Where:
• account-id is the 12-digit management account number, for example, 123456789012. If
you don't know the management account number, you can describe the organization or the
organizational unit to get the ARN, which includes the management account number. For more
information, see Get the ARN (p. 122).
• organization-id is the organization ID, for example, o-123example.
• ou-id is the organizational unit ID, for example, ou-1234-5example.
For more information about the ARN format, see Amazon Resource Names (ARNs) in the AWS General
Reference.
• Encryption and keys – You can share AMIs that are backed by unencrypted and encrypted snapshots.
• The encrypted snapshots must be encrypted with a customer managed key. You can’t share AMIs
that are backed by snapshots that are encrypted with the default AWS managed key. For more
information, see Share an Amazon EBS snapshot (p. 1419).
• If you share an AMI that is backed by encrypted snapshots, you must allow the organizations or OUs
to use the customer managed keys that were used to encrypt the snapshots. For more information,
see Allow organizations and OUs to use a KMS key (p. 118).
• Regional resource – AMIs are a regional resource. When you share an AMI, it is only available in that
Region. To make an AMI available in a different Region, copy the AMI to the Region and then share it.
For more information, see Copy an AMI (p. 170).
• AMI use – When you share an AMI, users can only launch instances from the AMI. They can’t delete,
share, or modify it. However, after they have launched an instance using your AMI, they can then
create an AMI from their instance.
• Billing – You are not billed when your AMI is used by other AWS accounts to launch instances. The
accounts that launch instances using the AMI are billed for the instances that they launch.
For information about editing a key policy, see Allowing users in other accounts to use a KMS key in the
AWS Key Management Service Developer Guide and Share a KMS key (p. 1421).
To give an organization or OU permission to use a KMS key, add the following statement to the key
policy.
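Modeled on the multiple-OU example that follows, a single-organization statement might look like this (the organization ID is a placeholder):

```json
{
    "Sid": "Allow access for organization",
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
        "kms:Describe*",
        "kms:List*",
        "kms:Get*"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:PrincipalOrgID": "o-123example"
        }
    }
}
```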
To share a KMS key with multiple OUs, you can use a policy similar to the following example.
{
    "Sid": "Allow access for specific OUs and their descendants",
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
        "kms:Describe*",
        "kms:List*",
        "kms:Get*"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:PrincipalOrgID": "o-123example"
        },
        "ForAnyValue:StringLike": {
            "aws:PrincipalOrgPaths": [
                "o-123example/r-ab12/ou-ab12-33333333/*",
                "o-123example/r-ab12/ou-ab12-22222222/*"
            ]
        }
    }
}
Share an AMI
You can use the Amazon EC2 console or the AWS CLI to share an AMI with an organization or OU.
To share this AMI with multiple organizations or OUs, repeat this step until you have added all of the
required organizations or OUs.
Note
You do not need to share the Amazon EBS snapshots that an AMI references in order
to share the AMI. Only the AMI itself needs to be shared, and the system automatically
provides the instance with access to the referenced Amazon EBS snapshots for the
launch. However, you do need to share the KMS keys used to encrypt snapshots that the
AMI references. For more information, see Allow organizations and OUs to use a KMS
key (p. 118).
7. Choose Save changes when you're done.
8. (Optional) To view the organizations or OUs with which you have shared the AMI, select the AMI in
the list, choose the Permissions tab, and scroll down to Shared organizations/OUs. To find AMIs
that are shared with you, see Find shared AMIs (p. 113).
The modify-image-attribute command grants launch permissions for the specified AMI to the specified
organization. Note that you must specify the full ARN, not just the ID.
The modify-image-attribute command grants launch permissions for the specified AMI to the specified
OU. Note that you must specify the full ARN, not just the ID.
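Sketches of the two calls, using the placeholder ARNs from the earlier examples:

```shell
# Share with an organization
aws ec2 modify-image-attribute --image-id ami-0abcdef1234567890 \
    --launch-permission "Add=[{OrganizationArn=arn:aws:organizations::123456789012:organization/o-123example}]"

# Share with an OU
aws ec2 modify-image-attribute --image-id ami-0abcdef1234567890 \
    --launch-permission "Add=[{OrganizationalUnitArn=arn:aws:organizations::123456789012:ou/o-123example/ou-1234-5example}]"
```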
Note
You do not need to share the Amazon EBS snapshots that an AMI references in order to share
the AMI. Only the AMI itself needs to be shared, and the system automatically provides the
instance with access to the referenced Amazon EBS snapshots for the launch. However, you
do need to share the KMS keys used to encrypt snapshots that the AMI references. For more
information, see Allow organizations and OUs to use a KMS key (p. 118).
4. Under Shared organizations/OUs, select the organizations or OUs with which you want to stop
sharing the AMI, and then choose Remove selected.
5. Choose Save changes when you're done.
6. (Optional) To confirm that you have stopped sharing the AMI with the organizations or OUs, select
the AMI in the list, choose the Permissions tab, and scroll down to Shared organizations/OUs.
The modify-image-attribute command removes launch permissions for the specified AMI from the
specified organization. Note that you must specify the ARN.
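A sketch of the call, with a placeholder AMI ID and organization ARN:

```shell
aws ec2 modify-image-attribute --image-id ami-0abcdef1234567890 \
    --launch-permission "Remove=[{OrganizationArn=arn:aws:organizations::123456789012:organization/o-123example}]"
```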
To stop sharing an AMI with all organizations, OUs, and AWS accounts using the AWS CLI
The reset-image-attribute command removes all public and explicit launch permissions from the
specified AMI. Note that the owner of the AMI always has launch permissions and is therefore unaffected
by this command.
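A sketch of the call, with a placeholder AMI ID:

```shell
aws ec2 reset-image-attribute --image-id ami-0abcdef1234567890 --attribute launchPermission
```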
Note
You can't stop sharing an AMI with a specific account if it's in an organization or OU with which
an AMI is shared. If you try to stop sharing the AMI by removing launch permissions for the
account, Amazon EC2 returns a success message. However, the AMI continues to be shared with
the account.
View the organizations and OUs with which an AMI is shared (console)
To check with which organizations and OUs you've shared your AMI using the console
To find AMIs that are shared with you, see Find shared AMIs (p. 113).
View the organizations and OUs with which an AMI is shared (AWS CLI)
You can check with which organizations and OUs you've shared your AMI by using the describe-image-
attribute command (AWS CLI) and the launchPermission attribute.
To check with which organizations and OUs you've shared your AMI using the AWS CLI
The describe-image-attribute command describes the launchPermission attribute for the specified
AMI, and returns the organizations and OUs with which you've shared the AMI.
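A sketch of the call, with a placeholder AMI ID:

```shell
aws ec2 describe-image-attribute --image-id ami-0abcdef1234567890 --attribute launchPermission
```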
Example response
{
    "ImageId": "ami-0abcdef1234567890",
    "LaunchPermissions": [
        {
            "OrganizationalUnitArn": "arn:aws:organizations::111122223333:ou/o-123example/ou-1234-5example"
        }
    ]
}
Before you can get the ARNs, you must have the permission to describe organizations and organizational
units. The following policy provides the necessary permission.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "organizations:Describe*"
            ],
            "Resource": "*"
        }
    ]
}
Use the describe-organization command and the --query parameter set to 'Organization.Arn' to
return only the organization ARN.
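A sketch of the call:

```shell
aws organizations describe-organization --query 'Organization.Arn'
```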
Example response
"arn:aws:organizations::123456789012:organization/o-123example"
Use the describe-organizational-unit command, specify the OU ID, and set the --query parameter to
'OrganizationalUnit.Arn' to return only the organizational unit ARN.
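A sketch of the call, with a placeholder OU ID:

```shell
aws organizations describe-organizational-unit \
    --organizational-unit-id ou-1234-5example \
    --query 'OrganizationalUnit.Arn'
```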
Example response
"arn:aws:organizations::123456789012:ou/o-123example/ou-1234-5example"
Share an AMI with specific AWS accounts
Considerations
• No sharing limits – There is no limit to the number of AWS accounts with which an AMI can be shared.
• Tags – User-defined tags that you attach to a shared AMI are available only to your AWS account and
not to the other accounts that the AMI is shared with.
• Encryption and keys – You can share AMIs that are backed by unencrypted and encrypted snapshots.
• The encrypted snapshots must be encrypted with a customer managed key. You can’t share AMIs
that are backed by snapshots that are encrypted with the default AWS managed key. For more
information, see Share an Amazon EBS snapshot (p. 1419).
• If you share an AMI that is backed by encrypted snapshots, you must allow the AWS accounts to use
the customer managed keys that were used to encrypt the snapshots. For more information, see
Allow organizations and OUs to use a KMS key (p. 118).
• Regional resource – AMIs are a regional resource. When you share an AMI, it is only available in that
Region. To make an AMI available in a different Region, copy the AMI to the Region and then share it.
For more information, see Copy an AMI (p. 170).
• AMI use – When you share an AMI, users can only launch instances from the AMI. They can’t delete,
share, or modify it. However, after they have launched an instance using your AMI, they can then
create an AMI from their instance.
• Copying shared AMIs – If users in another account want to copy a shared AMI, you must grant
them read permissions for the storage that backs the AMI. For more information, see Cross-account
copying (p. 175).
• Billing – You are not billed when your AMI is used by other AWS accounts to launch instances. The
accounts that launch instances using the AMI are billed for the instances that they launch.
To share this AMI with multiple accounts, repeat Steps 5 and 6 until you have added all the
required account IDs.
Note
You do not need to share the Amazon EBS snapshots that an AMI references in order
to share the AMI. Only the AMI itself needs to be shared; the system automatically
provides the instance access to the referenced Amazon EBS snapshots for the launch.
However, you do need to share any KMS keys used to encrypt snapshots that the AMI
references. For more information, see Share an Amazon EBS snapshot (p. 1419).
7. Choose Save changes when you are done.
8. (Optional) To view the AWS account IDs with which you have shared the AMI, select the AMI in
the list, and choose the Permissions tab. To find AMIs that are shared with you, see Find shared
AMIs (p. 113).
Old console
To share this AMI with multiple users, repeat this step until you have added all the required
users.
Note
You do not need to share the Amazon EBS snapshots that an AMI references in order
to share the AMI. Only the AMI itself needs to be shared; the system automatically
provides the instance access to the referenced Amazon EBS snapshots for the launch.
However, you do need to share any KMS keys used to encrypt snapshots that the AMI
references. For more information, see Share an Amazon EBS snapshot (p. 1419).
5. Choose Save when you are done.
6. (Optional) To view the AWS account IDs with which you have shared the AMI, select the AMI in
the list, and choose the Permissions tab. To find AMIs that are shared with you, see Find shared
AMIs (p. 113).
The following command grants launch permissions for the specified AMI to the specified AWS account.
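A sketch of the call, with placeholder AMI and account IDs:

```shell
aws ec2 modify-image-attribute --image-id ami-0abcdef1234567890 \
    --launch-permission "Add=[{UserId=123456789012}]"
```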
Note
You do not need to share the Amazon EBS snapshots that an AMI references in order to share
the AMI. Only the AMI itself needs to be shared; the system automatically provides the instance
access to the referenced Amazon EBS snapshots for the launch. However, you do need to share
any KMS keys used to encrypt snapshots that the AMI references. For more information, see
Share an Amazon EBS snapshot (p. 1419).
The following command removes launch permissions for the specified AMI from the specified AWS
account:
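A sketch of the call, with placeholder AMI and account IDs:

```shell
aws ec2 modify-image-attribute --image-id ami-0abcdef1234567890 \
    --launch-permission "Remove=[{UserId=123456789012}]"
```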
The following command removes all public and explicit launch permissions from the specified AMI. Note
that the owner of the AMI always has launch permissions and is therefore unaffected by this command.
Use bookmarks
If you have created a public AMI, or shared an AMI with another AWS user, you can create a bookmark
that allows a user to access your AMI and launch an instance in their own account immediately. This is an
easy way to share AMI references, so users don't have to spend time finding your AMI in order to use it.
Note that your AMI must be public, or you must have shared it with the user to whom you want to send
the bookmark.
1. Type a URL with the following information, where region is the Region in which your AMI resides:
https://console.aws.amazon.com/ec2/v2/home?region=region#LaunchInstanceWizard:ami=ami_id
For example, this URL launches an instance from the ami-0abcdef1234567890 AMI in the us-east-1
Region:
https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#LaunchInstanceWizard:ami=ami-0abcdef1234567890
Guidelines for shared Linux AMIs
Important
No list of security guidelines can be exhaustive. Build your shared AMIs carefully and take time
to consider where you might expose sensitive data.
Contents
• Update the AMI tools before using them (p. 126)
• Disable password-based remote logins for root (p. 126)
• Disable local root access (p. 127)
• Remove SSH host key pairs (p. 127)
• Install public key credentials (p. 128)
• Disabling sshd DNS checks (optional) (p. 129)
• Identify yourself (p. 129)
• Protect yourself (p. 129)
If you are building AMIs for AWS Marketplace, see Best practices for building AMIs in the AWS
Marketplace Seller Guide for guidelines, policies, and best practices.
For additional information about sharing AMIs safely, see the following articles:
For Amazon Linux 2, install the aws-amitools-ec2 package and add the AMI tools to your PATH with
the following commands. For the Amazon Linux AMI, the aws-amitools-ec2 package is already installed
by default.
[ec2-user ~]$ sudo yum install -y aws-amitools-ec2
[ec2-user ~]$ echo 'export PATH=$PATH:/opt/aws/bin' | sudo tee /etc/profile.d/aws-amitools-ec2.sh && . /etc/profile.d/aws-amitools-ec2.sh
For other distributions, make sure you have the latest AMI tools.
To solve this problem, disable password-based remote logins for the root user.
1. Open the /etc/ssh/sshd_config file with a text editor and locate the following line:
#PermitRootLogin yes
2. Edit the line as follows:
PermitRootLogin without-password
The location of this configuration file might differ for your distribution, or if you are not running
OpenSSH. If this is the case, consult the relevant documentation.
Note
This command does not impact the use of sudo.
Remove all of the following key files that are present on your system.
• ssh_host_dsa_key
• ssh_host_dsa_key.pub
• ssh_host_key
• ssh_host_key.pub
• ssh_host_rsa_key
• ssh_host_rsa_key.pub
• ssh_host_ecdsa_key
• ssh_host_ecdsa_key.pub
• ssh_host_ed25519_key
• ssh_host_ed25519_key.pub
You can securely remove all of these files with the following command.
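A sketch of the removal using shred (see the warning that follows about its limitations):

```shell
sudo shred -u /etc/ssh/*_key /etc/ssh/*_key.pub
```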
Warning
Secure deletion utilities such as shred may not remove all copies of a file from your storage
media. Hidden copies of files may be created by journalling file systems (including Amazon Linux
default ext4), snapshots, backups, RAID, and temporary caching. For more information, see the
shred documentation.
Important
If you forget to remove the existing SSH host key pairs from your public AMI, our routine
auditing process notifies you and all customers running instances of your AMI of the potential
security risk. After a short grace period, we mark the AMI private.
Amazon EC2 allows users to specify a public-private key pair name when launching an instance. When
a valid key pair name is provided to the RunInstances API call (or through the command line API
tools), the public key (the portion of the key pair that Amazon EC2 retains on the server after a call to
CreateKeyPair or ImportKeyPair) is made available to the instance through an HTTP query against
the instance metadata.
To log in through SSH, your AMI must retrieve the key value at boot and append it to /root/.ssh/
authorized_keys (or the equivalent for any other user account on the AMI). Users can launch instances
of your AMI with a key pair and log in without requiring a root password.
Many distributions, including Amazon Linux and Ubuntu, use the cloud-init package to inject public
key credentials for a configured user. If your distribution does not support cloud-init, you can add
the following code to a system start-up script (such as /etc/rc.local) to pull in the public key you
specified at launch for the root user.
Note
In the following example, the IP address http://169.254.169.254/ is a link-local address and is
valid only from the instance.
IMDSv2
if [ ! -d /root/.ssh ] ; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
fi
# Fetch the public key over token-authenticated HTTP (IMDSv2)
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/my-key
if [ $? -eq 0 ] ; then
    cat /tmp/my-key >> /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys
    rm /tmp/my-key
fi
IMDSv1
if [ ! -d /root/.ssh ] ; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
fi
# Fetch the public key over HTTP (IMDSv1)
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/my-key
if [ $? -eq 0 ] ; then
    cat /tmp/my-key >> /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys
    rm /tmp/my-key
fi
This can be applied to any user account; you do not need to restrict it to root.
Note
Rebundling an instance based on this AMI includes the key with which it was launched. To
prevent the key's inclusion, you must clear out (or delete) the authorized_keys file or exclude
this file from rebundling.
Disabling sshd DNS checks slightly weakens your sshd security; however, if DNS resolution fails, SSH logins still work. If you do not disable sshd checks, DNS resolution failures prevent all logins.
1. Open the /etc/ssh/sshd_config file with a text editor and locate the following line:
#UseDNS yes
2. Edit this line as follows to disable sshd DNS checks:
UseDNS no
Note
The location of this configuration file can differ for your distribution or if you are not running
OpenSSH. If this is the case, consult the relevant documentation.
Identify yourself
Currently, there is no easy way to know who provided a shared AMI, because each AMI is represented by
an account ID.
We recommend that you post a description of your AMI, and the AMI ID, in the Amazon EC2 forum. This
provides a convenient central location for users who are interested in trying new shared AMIs.
Protect yourself
We recommend against storing sensitive data or software on any AMI that you share. Users who launch a
shared AMI might be able to rebundle it and register it as their own. Follow these guidelines to help you
avoid some easily overlooked security risks:
• Always delete the shell history before bundling. If you attempt more than one bundle upload in the
same AMI, the shell history contains your secret access key.
Warning
The limitations of shred described in the warning above apply here as well.
Be aware that bash writes the history of the current session to the disk on exit. If you log out
of your instance after deleting ~/.bash_history, and then log back in, you will find that
~/.bash_history has been re-created and contains all of the commands you ran during
your previous session.
Other programs besides bash also write histories to disk. Use caution and remove or exclude
unnecessary dot-files and dot-directories.
• Bundling a running instance requires your private key and X.509 certificate. Put these and other
credentials in a location that is not bundled (such as the instance store).
Paid AMIs
A paid AMI is an AMI that you can purchase from a developer.
Amazon EC2 integrates with AWS Marketplace, enabling developers to charge other Amazon EC2 users
for the use of their AMIs or to provide support for instances.
The AWS Marketplace is an online store where you can buy software that runs on AWS, including AMIs
that you can use to launch your EC2 instance. The AWS Marketplace AMIs are organized into categories,
such as Developer Tools, to enable you to find products to suit your requirements. For more information
about AWS Marketplace, see the AWS Marketplace site.
Launching an instance from a paid AMI is the same as launching an instance from any other AMI. No
additional parameters are required. The instance is charged according to the rates set by the owner of
the AMI, as well as the standard usage fees for the related web services, for example, the hourly rate for
running an m1.small instance type in Amazon EC2. Additional taxes might also apply. The owner of the
paid AMI can confirm whether a specific instance was launched using that paid AMI.
Important
Amazon DevPay is no longer accepting new sellers or products. AWS Marketplace is now
the single, unified e-commerce platform for selling software and services through AWS. For
information about how to deploy and sell software from AWS Marketplace, see Selling in AWS
Marketplace. AWS Marketplace supports AMIs backed by Amazon EBS.
Contents
• Sell your AMI (p. 130)
• Find a paid AMI (p. 130)
• Purchase a paid AMI (p. 131)
• Get the product code for your instance (p. 132)
• Use paid support (p. 132)
• Bills for paid and supported AMIs (p. 133)
• Manage your AWS Marketplace subscriptions (p. 133)
For information about how to sell your AMI on the AWS Marketplace, see Selling in AWS Marketplace.
To find a paid AMI using the command line, you can use the describe-images command, specifying
aws-marketplace as the owner. This command returns numerous details that describe each AMI, including the product code for a paid
AMI. The output from describe-images includes an entry for the product code like the following:
"ProductCodes": [
{
"ProductCodeId": "product_code",
"ProductCodeType": "marketplace"
}
],
If you know the product code, you can filter the results by product code. The following example returns
the most recent AMI with the specified product code:
aws ec2 describe-images --owners aws-marketplace --filters "Name=product-code,Values=product_code" --query "sort_by(Images, &CreationDate)[-1].[ImageId]"
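When scripting against saved describe-images output, the ProductCodes entry shown above can be extracted with Python's standard json module. This is a sketch; the sample document below is hypothetical and trimmed to the fields shown above.

```python
import json

# Hypothetical describe-images output, trimmed to the fields shown above.
output = """
{
  "Images": [
    {
      "ImageId": "ami-0abcdef1234567890",
      "ProductCodes": [
        {"ProductCodeId": "product_code", "ProductCodeType": "marketplace"}
      ]
    }
  ]
}
"""

def product_code_ids(describe_images_json):
    """Return every ProductCodeId found in describe-images output."""
    data = json.loads(describe_images_json)
    return [pc["ProductCodeId"]
            for image in data.get("Images", [])
            for pc in image.get("ProductCodes", [])]

print(product_code_ids(output))  # → ['product_code']
```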
Typically a seller of a paid AMI presents you with information about the AMI, including its price and a
link where you can buy it. When you click the link, you're first asked to log into AWS, and then you can
purchase the AMI.
You can purchase a paid AMI from the AWS Marketplace using the following options:
• AWS Marketplace website: You can launch preconfigured software quickly with the 1-Click
deployment feature.
• Amazon EC2 launch wizard: You can search for an AMI and launch an instance directly from the
wizard. For more information, see Launch an AWS Marketplace instance (p. 595).
To retrieve a product code from within your instance, query the instance metadata.
IMDSv2
[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/product-codes
IMDSv1
[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/product-codes
To associate a product code with your AMI, use the following modify-image-attribute command, where ami_id is the ID of
the AMI and product_code is the product code:
aws ec2 modify-image-attribute --image-id ami_id --product-codes "product_code"
After you set the product code attribute, it cannot be changed or removed.
To cancel your AWS Marketplace subscription, complete the following steps:
1. Ensure that you have terminated any instances running from the subscription.
AMI lifecycle
Topics
• Create an AMI (p. 134)
• Copy an AMI (p. 170)
• Store and restore an AMI using S3 (p. 176)
• Deprecate an AMI (p. 182)
Create an AMI
You can create Amazon EBS-backed Linux AMIs and instance store-backed Linux AMIs.
Topics
• Create an Amazon EBS-backed Linux AMI (p. 134)
• Create an instance store-backed Linux AMI (p. 139)
For information about how to create a Windows AMI, see Create a custom Windows AMI.
The procedures described below work for Amazon EC2 instances backed by encrypted Amazon Elastic
Block Store (Amazon EBS) volumes (including the root volume) as well as for unencrypted volumes.
The AMI creation process is different for instance store-backed AMIs. For information about the
differences between Amazon EBS-backed and instance store-backed instances, and how to determine
the root device type for your instance, see Storage for the root device (p. 96). For information about
creating an instance store-backed Linux AMI, see Create an instance store-backed Linux AMI (p. 139).
For information about creating an Amazon EBS-backed Windows AMI, see Create an Amazon EBS-backed
Windows AMI in the Amazon EC2 User Guide for Windows Instances.
Find an existing AMI that is similar to the AMI that you'd like to create. This can be an AMI you have
obtained from the AWS Marketplace, an AMI you have created using the AWS Server Migration
Service or VM Import/Export, or any other AMI you can access. You'll customize this AMI for your
needs.
In the diagram, EBS root volume snapshot #1 indicates that the AMI is an Amazon EBS-backed AMI
and that information about the root volume is stored in this snapshot.
The way to configure an AMI is to launch an instance from the AMI on which you'd like to base your
new AMI, and then customize the instance (indicated at 3 in the diagram). Then, you'll create a new
AMI that includes the customizations (indicated at 4 in the diagram).
3 – EC2 instance #1: Customize the instance
Connect to your instance and customize it for your needs. Your new AMI will include these
customizations.
You can perform any of the following actions on your instance to customize it:
• Install software and applications
• Copy data
• Reduce start time by deleting temporary files, defragmenting your hard drive, and zeroing out free
space
• Attach additional EBS volumes
4 – Create image
When you create an AMI from an instance, Amazon EC2 powers down the instance before creating
the AMI to ensure that everything on the instance is stopped and in a consistent state during the
creation process. If you're confident that your instance is in a consistent state appropriate for AMI
creation, you can tell Amazon EC2 not to power down and reboot the instance. Some file systems,
such as XFS, can freeze and unfreeze activity, making it safe to create the image without rebooting
the instance.
During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume
and any other EBS volumes attached to your instance. You're charged for the snapshots until you
deregister the AMI (p. 185) and delete the snapshots. If any volumes attached to the instance
are encrypted, the new AMI only launches successfully on instances that support Amazon EBS
encryption (p. 1536).
Depending on the size of the volumes, it can take several minutes for the AMI-creation process to
complete (sometimes up to 24 hours). You might find it more efficient to create snapshots of your
volumes before creating your AMI. This way, only small, incremental snapshots need to be created
when the AMI is created, and the process completes more quickly (the total time for snapshot
creation remains the same). For more information, see Create Amazon EBS snapshots (p. 1385).
5 – AMI #2: New AMI
After the process completes, you have a new AMI and snapshot (snapshot #2) created from the
root volume of the instance. If you added instance-store volumes or EBS volumes to the instance, in
addition to the root device volume, the block device mapping for the new AMI contains information
for these volumes.
When you launch an instance using the new AMI, Amazon EC2 creates a new EBS volume for the
instance's root volume using the snapshot. If you added instance-store volumes or EBS volumes
when you customized the instance, the block device mapping for the new AMI contains information
for these volumes, and the block device mappings for instances that you launch from the new AMI
automatically contain information for these volumes. The instance-store volumes specified in the
block device mapping for the new instance are new and don't contain any data from the instance
store volumes of the instance you used to create the AMI. The data on EBS volumes persists. For
more information, see Block device mappings (p. 1647).
When you create a new instance from an EBS-backed AMI, you should initialize both its root volume
and any additional EBS storage before putting it into production. For more information, see Initialize
Amazon EBS volumes (p. 1586).
Console
Choose Launch instance from image (new console) or Launch (old console) to launch an
instance of the EBS-backed AMI that you've selected. Accept the default values as you step
through the wizard. For more information, see Launch an instance using the Launch Instance
Wizard (p. 565).
3. Customize the instance.
While the instance is running, connect to it. You can perform any of the following actions on
your instance to customize it for your needs:
(Optional) Create snapshots of all the volumes attached to your instance. For more information
about creating snapshots, see Create Amazon EBS snapshots (p. 1385).
4. Create an AMI from the instance.
a. In the navigation pane, choose Instances, select your instance, and then choose Actions,
Image and templates, Create image.
Tip
If this option is disabled, your instance isn't an Amazon EBS-backed instance.
b. On the Create image page, specify the following information, and then choose Create
image.
• No reboot – By default, Amazon EC2 shuts down the instance, takes snapshots of any attached
volumes, creates and registers the AMI, and then reboots the instance. If you're confident that your
instance is in a consistent state appropriate for image creation, select Enable to avoid
having your instance shut down.
Warning
If you select Enable, the AMI will be crash consistent (all the volumes are
snapshotted at the same time), but not application consistent (not all of the operating
system buffers are flushed to disk before the snapshots are created).
• Instance volumes – The fields in this section enable you to modify the root volume, and
add additional Amazon EBS and instance store volumes.
• The root volume is defined in the first row. To change the size of the root volume, for
Size, enter the required value.
• If you select Delete on termination, when you terminate the instance created from
this AMI, the EBS volume is deleted. If you clear Delete on termination, when you
terminate the instance, the EBS volume is not deleted. For more information, see
Preserve Amazon EBS volumes on instance termination (p. 650).
• To add an EBS volume, choose Add volume (which adds a new row). For Volume
type, choose EBS, and fill in the fields in the row. When you launch an instance from
your new AMI, additional volumes are automatically attached to the instance. Empty
volumes must be formatted and mounted. Volumes based on a snapshot must be
mounted.
• To add an instance store volume, see Add instance store volumes to an AMI (p. 1624).
When you launch an instance from your new AMI, additional volumes are automatically
initialized and mounted. These volumes do not contain data from the instance store
volumes of the running instance on which you based your AMI.
• Tags – You can tag the AMI and the snapshots with the same tags, or you can tag them
with different tags.
• To tag the AMI and the snapshots with the same tags, choose Tag image and
snapshots together. The same tags are applied to the AMI and every snapshot that is
created.
• To tag the AMI and the snapshots with different tags, choose Tag image and snapshots
separately. Different tags are applied to the AMI and the snapshots that are created.
However, all the snapshots get the same tags; you can't tag each snapshot with a
different tag.
To add a tag, choose Add tag, and enter the key and value for the tag. Repeat for each
tag.
c. To view the status of your AMI while it is being created, in the navigation pane, choose
AMIs. Initially, the status is pending but should change to available after a few minutes.
5. (Optional) To view the snapshot that was created for the new AMI, choose Snapshots. When you
launch an instance from this AMI, we use this snapshot to create its root device volume.
6. Launch an instance from your new AMI.
For more information, see Launch an instance using the Launch Instance Wizard (p. 565).
The new running instance contains all of the customizations that you applied in the previous
steps.
AWS CLI
You can use the create-image command (AWS CLI) or the New-EC2Image command (AWS Tools for
Windows PowerShell). For more information about these command line interfaces, see Access Amazon EC2 (p. 3).
New console
For each volume, you can specify the size, type, performance characteristics, the behavior of
delete on termination, and encryption status. For the root volume, the size cannot be smaller
than the size of the snapshot.
12. Choose Create image.
Old console
• (PV virtualization type only) Kernel ID and RAM disk ID: Choose the AKI and ARI from the
lists. If you choose the default AKI or don't choose an AKI, you must specify an AKI every time
you launch an instance using this AMI. In addition, your instance may fail the health checks if
the default AKI is incompatible with the instance.
• (Optional) Block Device Mappings: Add volumes or expand the default size of the root
volume for the AMI. For more information about resizing the file system on your instance for a
larger volume, see Extend a Linux file system after resizing a volume (p. 1532).
AWS CLI
You can use the create-image command (AWS CLI) or the New-EC2Image command (AWS Tools for
Windows PowerShell). For more information about these command line interfaces, see Access Amazon EC2 (p. 3).
To create an instance store-backed Linux AMI, start from an instance that you've launched from an
existing instance store-backed Linux AMI. After you've customized the instance to suit your needs,
bundle the volume and register a new AMI, which you can use to launch new instances with these
customizations.
Important
Only the following instance types support an instance store volume as the root device: C3, D2,
G2, I2, M3, and R3.
The AMI creation process is different for Amazon EBS-backed AMIs. For more information about the
differences between Amazon EBS-backed and instance store-backed instances, and how to determine
the root device type for your instance, see Storage for the root device (p. 96). If you need to create an
Amazon EBS-backed Linux AMI, see Create an Amazon EBS-backed Linux AMI (p. 134).
First, launch an instance from an AMI that's similar to the AMI that you'd like to create. You can connect
to your instance and customize it. When the instance is set up the way you want it, you can bundle it.
It takes several minutes for the bundling process to complete. After the process completes, you have a
bundle, which consists of an image manifest (image.manifest.xml) and files (image.part.xx) that
contain a template for the root volume. Next you upload the bundle to your Amazon S3 bucket and then
register your AMI.
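The image.part.xx layout described above is the familiar split-into-numbered-chunks scheme. The naming can be sketched with the coreutils split command; the file, sizes, and part count below are toy examples, not what ec2-bundle-vol actually produces internally.

```shell
# Create a toy "image" file (64 KiB of zeros).
dd if=/dev/zero of=/tmp/image bs=1024 count=64 2>/dev/null

# Split it into 16 KiB numbered parts: image.part.00, image.part.01, ...
split -b 16k -d /tmp/image /tmp/image.part.

# List the resulting parts.
ls /tmp/image.part.*
```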
Note
To upload objects to an S3 bucket for your instance store-backed Linux AMI, ACLs must be
enabled for the bucket. Otherwise, Amazon EC2 will not be able to set ACLs on the objects
to upload. If your destination bucket uses the bucket owner enforced setting for S3 Object
Ownership, this won’t work because ACLs are disabled. For more information, see Controlling
ownership of uploaded objects using S3 Object Ownership.
When you launch an instance using the new AMI, we create the root volume for the instance using
the bundle that you uploaded to Amazon S3. The storage space used by the bundle in Amazon S3
incurs charges to your account until you delete it. For more information, see Deregister your Linux
AMI (p. 185).
If you add instance store volumes to your instance in addition to the root device volume, the block device
mapping for the new AMI contains information for these volumes, and the block device mappings for
instances that you launch from the new AMI automatically contain information for these volumes. For
more information, see Block device mappings (p. 1647).
Prerequisites
Before you can create an AMI, you must complete the following tasks:
• Install the AMI tools. For more information, see Set up the AMI tools (p. 141).
• Install the AWS CLI. For more information, see Getting Set Up with the AWS Command Line Interface.
• Ensure that you have an S3 bucket for the bundle, and that your bucket has ACLs enabled.
To create an S3 bucket, open the Amazon S3 console and click Create Bucket. Alternatively, you can
use the AWS CLI mb command.
• Ensure that you have your AWS account ID. For more information, see AWS Account Identifiers in the
AWS General Reference.
• Ensure that you have your access key ID and secret access key. For more information, see Access Keys in
the AWS General Reference.
• Ensure that you have an X.509 certificate and corresponding private key.
• If you need to create an X.509 certificate, see Manage signing certificates (p. 143). The X.509
certificate and private key are used to encrypt and decrypt your AMI.
Tasks
1. Install Ruby using the package manager for your Linux distribution, such as yum. For example:
[ec2-user ~]$ sudo yum install -y ruby
2. Download the RPM file using a tool such as wget or curl. For example:
3. Verify the RPM file's signature using the following command:
[ec2-user ~]$ rpm -K ec2-ami-tools.noarch.rpm
The command above should indicate that the file's SHA1 and MD5 hashes are OK. If the command
indicates that the hashes are NOT OK, use the following command to view the file's Header SHA1
and MD5 hashes:
[ec2-user ~]$ rpm -Kv ec2-ami-tools.noarch.rpm
Then, compare your file's Header SHA1 and MD5 hashes with the following verified AMI tools hashes
to confirm the file's authenticity:
If your file's Header SHA1 and MD5 hashes match the verified AMI tools hashes, continue to the next
step.
4. Install the RPM using the following command:
[ec2-user ~]$ sudo rpm -i ec2-ami-tools.noarch.rpm
5. Verify your AMI tools installation using the ec2-ami-tools-version (p. 154) command.
Note
If you receive a load error such as "cannot load such file -- ec2/amitools/version
(LoadError)", complete the next step to add the location of your AMI tools installation to
your RUBYLIB path.
6. (Optional) If you received an error in the previous step, add the location of your AMI tools
installation to your RUBYLIB path.
In the above example, the missing file from the previous load error is located at /usr/lib/
ruby/site_ruby and /usr/lib64/ruby/site_ruby.
b. Add the locations from the previous step to your RUBYLIB path:
[ec2-user ~]$ export RUBYLIB=$RUBYLIB:/usr/lib/ruby/site_ruby:/usr/lib64/ruby/site_ruby
c. Verify your AMI tools installation using the ec2-ami-tools-version (p. 154) command.
1. Install Ruby and unzip using the package manager for your Linux distribution, such as apt-get. For
example:
[ec2-user ~]$ sudo apt-get update -y && sudo apt-get install -y ruby unzip
2. Download the .zip file using a tool such as wget or curl. For example:
3. Unzip the files into an installation directory. For example:
[ec2-user ~]$ unzip ec2-ami-tools.zip -d /path/to/install
Notice that the .zip file contains a folder ec2-ami-tools-x.x.x, where x.x.x is the version number of
the tools (for example, ec2-ami-tools-1.5.7).
4. Set the EC2_AMITOOL_HOME environment variable to the installation directory for the tools. For
example:
[ec2-user ~]$ export EC2_AMITOOL_HOME=/path/to/ec2-ami-tools-x.x.x
5. Add the tools to your PATH environment variable. For example:
[ec2-user ~]$ export PATH=$EC2_AMITOOL_HOME/bin:$PATH
6. You can verify your AMI tools installation using the ec2-ami-tools-version (p. 154) command.
Use the following command to create a certificate from your private key:
openssl req -new -x509 -nodes -sha256 -days 365 -key private-key.pem -outform PEM -out certificate.pem
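Taken end to end, key and certificate creation can be sketched as follows. The -subj value is an example supplied so the commands run non-interactively; the file names match the command above.

```shell
# Generate a 2048-bit RSA private key.
openssl genrsa -out private-key.pem 2048

# Create a self-signed X.509 certificate from the key; -subj supplies
# an example subject so no interactive prompts appear.
openssl req -new -x509 -nodes -sha256 -days 365 \
    -key private-key.pem -outform PEM -out certificate.pem \
    -subj "/CN=ami-bundling-example"

# Print the certificate subject to confirm it was created.
openssl x509 -in certificate.pem -noout -subject
```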
To disable or re-enable a signing certificate for a user, use the update-signing-certificate command. The
following command disables the certificate:
aws iam update-signing-certificate --certificate-id certificate_id --status Inactive --user-name user_name
Topics
• Create an AMI from an instance store-backed Amazon Linux instance (p. 144)
This section describes the creation of an AMI from an Amazon Linux instance. The following procedures
may not work for instances running other Linux distributions. For Ubuntu-specific procedures, see Create
an AMI from an instance store-backed Ubuntu instance (p. 146).
1. The AMI tools require GRUB Legacy to boot properly. Use the following command to install GRUB:
[ec2-user ~]$ sudo yum install -y grub
This procedure assumes that you have satisfied the prerequisites in Prerequisites (p. 140).
1. Upload your credentials to your instance. We use these credentials to ensure that only you and
Amazon EC2 can access your AMI.
a. Create a temporary directory on your instance for your credentials as follows:
[ec2-user ~]$ mkdir /tmp/cert
This enables you to exclude your credentials from the created image.
b. Copy your X.509 certificate and corresponding private key from your computer to the /tmp/
cert directory on your instance using a secure copy tool such as scp (p. 601). The -i my-
private-key.pem option in the following scp command is the private key you use to connect
to your instance with SSH, not the X.509 private key. For example:
Alternatively, because these are plain text files, you can open the certificate and key in a text editor
and copy their contents into new files in /tmp/cert.
2. Prepare the bundle to upload to Amazon S3 by running the ec2-bundle-vol (p. 156) command
from inside your instance. Be sure to specify the -e option to exclude the directory where your
credentials are stored. By default, the bundle process excludes files that might contain sensitive
information. These files include *.sw, *.swo, *.swp, *.pem, *.priv, *id_rsa*, *id_dsa*,
*.gpg, *.jks, */.ssh/authorized_keys, and */.bash_history. To include all of these files,
use the --no-filter option. To include some of these files, use the --include option.
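As an illustration of how glob patterns like these behave, the list can be exercised with Python's fnmatch module. This is only a sketch of shell-style glob matching; ec2-bundle-vol's actual matching logic may differ.

```python
from fnmatch import fnmatch

# The default exclusion patterns listed above.
PATTERNS = ["*.sw", "*.swo", "*.swp", "*.pem", "*.priv", "*id_rsa*",
            "*id_dsa*", "*.gpg", "*.jks", "*/.ssh/authorized_keys",
            "*/.bash_history"]

def is_excluded(path):
    """Return True if the path matches any default exclusion pattern."""
    return any(fnmatch(path, pattern) for pattern in PATTERNS)

print(is_excluded("/home/ec2-user/.ssh/authorized_keys"))  # → True
print(is_excluded("/home/ec2-user/cert/my-cert.pem"))      # → True
print(is_excluded("/home/ec2-user/app/notes.txt"))         # → False
```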
Important
By default, the AMI bundling process creates a compressed, encrypted collection of files
in the /tmp directory that represents your root volume. If you do not have enough free
disk space in /tmp to store the bundle, you need to specify a different location for the
bundle to be stored with the -d /path/to/bundle/storage option. Some instances
have ephemeral storage mounted at /mnt or /media/ephemeral0 that you can use, or
you can also create (p. 1349), attach (p. 1353), and mount (p. 1360) a new Amazon Elastic
Block Store (Amazon EBS) volume to store the bundle.
a. You must run the ec2-bundle-vol command as root. For most commands, you can use sudo to
gain elevated permissions, but in this case, you should run sudo -E su to keep your environment
variables.
[ec2-user ~]$ sudo -E su
Note that the bash prompt now identifies you as the root user, and that the dollar sign has been
replaced by a hash symbol, signalling that you are in a root shell:
[root ec2-user]#
b. To create the AMI bundle, run the ec2-bundle-vol (p. 156) command as follows:
Note
For the China (Beijing) and AWS GovCloud (US-West) Regions, use the --ec2cert
parameter and specify the certificates as per the prerequisites (p. 140).
It can take a few minutes to create the image. When this command completes, your /tmp
(or non-default) directory contains the bundle (image.manifest.xml, plus multiple
image.part.xx files).
c. Exit from the root shell.
3. (Optional) To add more instance store volumes, edit the block device mappings in
the image.manifest.xml file for your AMI. For more information, see Block device
mappings (p. 1647).
c. Edit the block device mappings in image.manifest.xml with a text editor. The example below
shows a new entry for the ephemeral1 instance store volume.
Note
For a list of excluded files, see ec2-bundle-vol (p. 156).
<block_device_mapping>
<mapping>
<virtual>ami</virtual>
<device>sda</device>
</mapping>
<mapping>
<virtual>ephemeral0</virtual>
<device>sdb</device>
</mapping>
<mapping>
<virtual>ephemeral1</virtual>
<device>sdc</device>
</mapping>
<mapping>
<virtual>root</virtual>
<device>/dev/sda1</device>
</mapping>
</block_device_mapping>
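To sanity-check an edited mapping before re-bundling, you can parse it with the standard library. The following is a sketch using Python's xml.etree module; the fragment mirrors the example above.

```python
import xml.etree.ElementTree as ET

# The block device mapping fragment shown above (sample values).
xml_doc = """
<block_device_mapping>
  <mapping><virtual>ami</virtual><device>sda</device></mapping>
  <mapping><virtual>ephemeral0</virtual><device>sdb</device></mapping>
  <mapping><virtual>ephemeral1</virtual><device>sdc</device></mapping>
  <mapping><virtual>root</virtual><device>/dev/sda1</device></mapping>
</block_device_mapping>
"""

root = ET.fromstring(xml_doc)

# Build a virtual-name -> device-name lookup from the mapping entries.
mappings = {m.findtext("virtual"): m.findtext("device")
            for m in root.findall("mapping")}

print(mappings["ephemeral1"])  # → sdc
```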
4. To upload your bundle to Amazon S3, run the ec2-upload-bundle (p. 167) command as follows.
[ec2-user ~]$ ec2-upload-bundle -b my-s3-bucket/bundle_folder/bundle_name -m /tmp/image.manifest.xml -a your_access_key_id -s your_secret_access_key
Important
To register your AMI in a Region other than US East (N. Virginia), you must specify both the
target Region with the --region option and a bucket path that already exists in the target
Region or a unique bucket path that can be created in the target Region.
5. (Optional) After the bundle is uploaded to Amazon S3, you can remove the bundle from the /tmp
directory on the instance using the following rm command:
[ec2-user ~]$ sudo rm /tmp/image.manifest.xml /tmp/image.part.* /tmp/image
Important
If you specified a path with the -d /path/to/bundle/storage option in Step
2 (p. 144), use that path instead of /tmp.
6. To register your AMI, run the register-image command as follows.
Important
If you previously specified a Region for the ec2-upload-bundle (p. 167) command, specify
that Region again for this command.
[ec2-user ~]$ aws ec2 register-image --image-location my-s3-bucket/bundle_folder/bundle_name/image.manifest.xml --name AMI_name --virtualization-type hvm
This section describes the creation of an AMI from an Ubuntu Linux instance with an instance store
volume as the root volume. The following procedures may not work for instances running other Linux
distributions. For procedures specific to Amazon Linux, see Create an AMI from an instance store-backed
Amazon Linux instance (p. 144).
The AMI tools require GRUB Legacy to boot properly. However, Ubuntu is configured to use GRUB 2. You
must check to see that your instance uses GRUB Legacy, and if not, you need to install and configure it.
HVM instances also require partitioning tools to be installed for the AMI tools to work properly.
1. GRUB Legacy (version 0.9x or less) must be installed on your instance. Check to see if GRUB Legacy
is present and install it if necessary.
In this example, the GRUB version is greater than 0.9x, so you must install GRUB Legacy.
Proceed to Step 1.b (p. 147). If GRUB Legacy is already present, you can skip to Step
2 (p. 147).
b. Install the grub package using the following command.
2. Install the following partition management packages using the package manager for your
distribution.
Note the options following the kernel and root device parameters: ro, console=ttyS0, and
xen_emul_unplug=unnecessary. Your options may differ.
4. Check the kernel entries in /boot/grub/menu.lst.
Note that the console parameter is pointing to hvc0 instead of ttyS0 and that the
xen_emul_unplug=unnecessary parameter is missing. Again, your options may differ.
5. Edit the /boot/grub/menu.lst file with your favorite text editor (such as vim or nano) to change
the console and add the parameters you identified earlier to the boot entries.
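An edited kernel entry might look like the following. The kernel version, root device label, and option values here are examples; use the values you identified on your own instance:

```
title      Ubuntu
root       (hd0)
kernel     /boot/vmlinuz-3.13.0-74-generic root=LABEL=cloudimg-rootfs ro console=ttyS0 xen_emul_unplug=unnecessary
initrd     /boot/initrd.img-3.13.0-74-generic
```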
6. Verify that your kernel entries now contain the correct parameters.
7. [For Ubuntu 14.04 and later only] Starting with Ubuntu 14.04, instance store backed Ubuntu AMIs
use a GPT partition table and a separate EFI partition mounted at /boot/efi. The ec2-bundle-vol
command will not bundle this boot partition, so you need to comment out the /etc/fstab entry
for the EFI partition as shown in the following example.
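The commented-out /etc/fstab entry might look like the following; the device label and mount options are examples:

```
# LABEL=UEFI   /boot/efi   vfat   defaults   0   0
```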
This procedure assumes that you have satisfied the prerequisites in Prerequisites (p. 140).
1. Upload your credentials to your instance. We use these credentials to ensure that only you and
Amazon EC2 can access your AMI.
a. Create a temporary directory on your instance for your credentials as follows:
ubuntu:~$ mkdir /tmp/cert
This enables you to exclude your credentials from the created image.
b. Copy your X.509 certificate and private key from your computer to the /tmp/cert directory on
your instance, using a secure copy tool such as scp (p. 601). The -i my-private-key.pem
option in the following scp command is the private key you use to connect to your instance with
SSH, not the X.509 private key. For example:
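A sketch of such a command; the certificate and key file names and the public DNS name are placeholders:

```
scp -i my-private-key.pem /path/to/your-x509-cert.pem /path/to/your-x509-pk.pem \
    ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com:/tmp/cert/
```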
Alternatively, because these are plain text files, you can open the certificate and key in a text editor
and copy their contents into new files in /tmp/cert.
2. Prepare the bundle to upload to Amazon S3 by running the ec2-bundle-vol (p. 156) command
from your instance. Be sure to specify the -e option to exclude the directory where your credentials
are stored. By default, the bundle process excludes files that might contain sensitive information.
These files include *.sw, *.swo, *.swp, *.pem, *.priv, *id_rsa*, *id_dsa*, *.gpg, *.jks,
*/.ssh/authorized_keys, and */.bash_history. To include all of these files, use the
--no-filter option. To include some of these files, use the --include option.
Important
By default, the AMI bundling process creates a compressed, encrypted collection of files
in the /tmp directory that represents your root volume. If you do not have enough free
disk space in /tmp to store the bundle, you need to specify a different location for the
bundle to be stored with the -d /path/to/bundle/storage option. Some instances
have ephemeral storage mounted at /mnt or /media/ephemeral0 that you can use, or
you can also create (p. 1349), attach (p. 1353), and mount (p. 1360) a new Amazon Elastic
Block Store (Amazon EBS) volume to store the bundle.
a. You must run the ec2-bundle-vol command as root. For most commands, you can use
sudo to gain elevated permissions, but in this case, you should run sudo -E su to keep your
environment variables.
ubuntu:~$ sudo -E su
Note that the bash prompt now identifies you as the root user, and that the dollar sign has
been replaced by a hash symbol, signaling that you are in a root shell:
root@ubuntu:#
b. To create the AMI bundle, run the ec2-bundle-vol (p. 156) command as follows.
Important
For Ubuntu 14.04 and later HVM instances, add the --partition mbr flag to bundle
the boot instructions properly; otherwise, your newly created AMI will not boot.
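A sketch of the command; the key, certificate, and account ID values are placeholders:

```
ec2-bundle-vol -k /tmp/cert/your-x509-pk.pem -c /tmp/cert/your-x509-cert.pem \
    -u your_aws_account_id -r x86_64 -e /tmp/cert --partition gpt
```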
It can take a few minutes to create the image. When this command completes, your tmp
directory contains the bundle (image.manifest.xml, plus multiple image.part.xx files).
c. Exit from the root shell.
root@ubuntu:# exit
3. (Optional) To add more instance store volumes, edit the block device mappings in
the image.manifest.xml file for your AMI. For more information, see Block device
mappings (p. 1647).
c. Edit the block device mappings in image.manifest.xml with a text editor. The example below
shows a new entry for the ephemeral1 instance store volume.
<block_device_mapping>
<mapping>
<virtual>ami</virtual>
<device>sda</device>
</mapping>
<mapping>
<virtual>ephemeral0</virtual>
<device>sdb</device>
</mapping>
<mapping>
<virtual>ephemeral1</virtual>
<device>sdc</device>
</mapping>
<mapping>
<virtual>root</virtual>
<device>/dev/sda1</device>
</mapping>
</block_device_mapping>
Important
If you intend to register your AMI in a Region other than US East (N. Virginia), you must
specify both the target Region with the --region option and a bucket path that already
exists in the target Region or a unique bucket path that can be created in the target Region.
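The upload step that this note refers to uses the ec2-upload-bundle command; a hedged sketch with placeholder bucket, credentials, and Region:

```
ec2-upload-bundle -b my-s3-bucket/bundle_folder/bundle_name -m /tmp/image.manifest.xml \
    -a your_access_key_id -s your_secret_access_key --region us-west-2
```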
5. (Optional) After the bundle is uploaded to Amazon S3, you can remove the bundle from the /tmp
directory on the instance using the following rm command:
Important
If you specified a path with the -d /path/to/bundle/storage option in Step
2 (p. 148), use that same path below, instead of /tmp.
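For example, assuming the default /tmp bundle location and the image prefix shown earlier:

```
rm /tmp/image.manifest.xml /tmp/image.part.* /tmp/image
```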
6. To register your AMI, run the register-image AWS CLI command as follows.
Important
If you previously specified a Region for the ec2-upload-bundle (p. 167) command, specify
that Region again for this command.
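A sketch of the command; the bucket path and AMI name are placeholders:

```
aws ec2 register-image --image-location my-s3-bucket/bundle_folder/bundle_name/image.manifest.xml \
    --name MyInstanceStoreAMI --virtualization-type hvm --region us-west-2
```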
7. [Ubuntu 14.04 and later] Uncomment the EFI entry in /etc/fstab; otherwise, your running
instance will not be able to reboot.
1. Launch an Amazon Linux instance from an Amazon EBS-backed AMI. For more information, see
Launch an instance using the Launch Instance Wizard (p. 565). Amazon Linux instances have the
AWS CLI and AMI tools pre-installed.
2. Upload the X.509 private key that you used to bundle your instance store-backed AMI to your
instance. We use this key to ensure that only you and Amazon EC2 can access your AMI.
a. Create a temporary directory on your instance for your X.509 private key as follows:
b. Copy your X.509 private key from your computer to the /tmp/cert directory on your instance,
using a secure copy tool such as scp (p. 601). The my-private-key parameter in the
following command is the private key you use to connect to your instance with SSH. For
example:
3. Set environment variables for your AWS access key and secret key.
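For example (the values are placeholders; substitute your own keys):

```shell
# Placeholder credentials; the AMI tools read these environment variables
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
```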
4. Prepare an Amazon Elastic Block Store (Amazon EBS) volume for your new AMI.
a. Create an empty EBS volume in the same Availability Zone as your instance using the
create-volume command. Note the volume ID in the command output.
Important
This EBS volume must be the same size or larger than the original instance store root
volume.
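A sketch of the command; the size, Region, and Availability Zone are placeholders:

```
aws ec2 create-volume --size 10 --region us-west-2 --availability-zone us-west-2a
```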
b. Attach the volume to your Amazon EBS-backed instance using the attach-volume command.
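For example, with placeholder volume and instance IDs:

```
aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-1234567890abcdef0 \
    --device /dev/sdb --region us-west-2
```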
6. Download the bundle for your instance store-based AMI to /tmp/bundle using the
ec2-download-bundle (p. 162) command.
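A sketch, with a placeholder bucket path and key file; the /tmp/bundle directory is created first:

```
mkdir -p /tmp/bundle
ec2-download-bundle -b my-s3-bucket/bundle_folder/bundle_name -m image.manifest.xml \
    -a $AWS_ACCESS_KEY_ID -s $AWS_SECRET_ACCESS_KEY \
    --privatekey /path/to/your-x509-pk.pem -d /tmp/bundle
```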
7. Reconstitute the image file from the bundle using the ec2-unbundle (p. 166) command.
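For example (the private key path is a placeholder):

```
cd /tmp/bundle
ec2-unbundle -m image.manifest.xml --privatekey /path/to/your-x509-pk.pem
```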
8. Copy the files from the unbundled image to the new EBS volume.
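A sketch, assuming the unbundled image is at /tmp/bundle/image and the new volume appeared as /dev/sdb (your device name may differ):

```
sudo dd if=/tmp/bundle/image of=/dev/sdb bs=1M
```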
9. Probe the volume for any new partitions that were unbundled.
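For example, assuming the volume is /dev/sdb:

```
sudo partprobe /dev/sdb1
```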
10. List the block devices to find the device name to mount.
In this example, the partition to mount is /dev/sdb1, but your device name will likely be different.
If your volume is not partitioned, then the device to mount will be similar to /dev/sdb
(without a trailing partition number).
11. Create a mount point for the new EBS volume and mount the volume.
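For example, assuming the partition identified in the previous step is /dev/sdb1:

```
sudo mkdir -p /mnt/ebs
sudo mount /dev/sdb1 /mnt/ebs
```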
12. Open the /etc/fstab file on the EBS volume with your favorite text editor (such as vim or nano)
and remove any entries for instance store (ephemeral) volumes. Because the EBS volume is mounted
on /mnt/ebs, the fstab file is located at /mnt/ebs/etc/fstab.
c. Identify the processor architecture, virtualization type, and the kernel image (aki) used on the
original AMI with the describe-images command. You need the AMI ID of the original instance
store-backed AMI for this step.
In this example, the architecture is x86_64 and the kernel image ID is aki-fc8f11cc. Use
these values in the following step. If the output of the above command also lists an ari ID, take
note of that as well.
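A sketch of the command, with a placeholder AMI ID:

```
aws ec2 describe-images --image-ids ami-1234567890abcdef0 --output text
```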
d. Register your new AMI with the snapshot ID of your new EBS volume and the values from the
previous step. If the previous command output listed an ari ID, include that in the following
command with --ramdisk-id ari_id.
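A sketch of the command; the snapshot ID and AMI name are placeholders, while the architecture and kernel image ID come from the example above:

```
aws ec2 register-image --name MyNewAMI --architecture x86_64 --kernel-id aki-fc8f11cc \
    --root-device-name /dev/sda1 \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-1234567890abcdef0"}}]'
```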
15. (Optional) After you have tested that you can launch an instance from your new AMI, you can delete
the EBS volume that you created for this procedure.
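For example, with a placeholder volume ID:

```
aws ec2 delete-volume --volume-id vol-1234567890abcdef0
```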
For information about your access keys, see Best Practices for Managing AWS Access Keys.
Commands
• ec2-ami-tools-version (p. 154)
• ec2-bundle-image (p. 154)
• ec2-bundle-vol (p. 156)
• ec2-delete-bundle (p. 160)
• ec2-download-bundle (p. 162)
• ec2-migrate-manifest (p. 164)
• ec2-unbundle (p. 166)
• ec2-upload-bundle (p. 167)
• Common options for AMI tools (p. 170)
ec2-ami-tools-version
Description
Syntax
ec2-ami-tools-version
Output
Example
This example command displays the version information for the AMI tools that you're using.
ec2-bundle-image
Description
Creates an instance store-backed Linux AMI from an operating system image created in a loopback file.
Syntax
Options
-c, --cert path
The user's PEM-encoded RSA public key certificate file.
Required: Yes
-k, --privatekey path
The path to a PEM-encoded RSA key file. You'll need to specify this key to unbundle this bundle, so
keep it in a safe place. Note that the key doesn't have to be registered to your AWS account.
Required: Yes
-u, --user account
Required: Yes
-i, --image path
Required: Yes
-d, --destination path
Default: /tmp
Required: No
--ec2cert path
The path to the Amazon EC2 X.509 public key certificate used to encrypt the image manifest.
The us-gov-west-1 and cn-north-1 Regions use a non-default public key certificate and the
path to that certificate must be specified with this option. The path to the certificate varies based
on the installation method of the AMI tools. For Amazon Linux, the certificates are located at /opt/
aws/amitools/ec2/etc/ec2/amitools/. If you installed the AMI tools from the RPM or ZIP file
in Set up the AMI tools (p. 141), the certificates are located at $EC2_AMITOOL_HOME/etc/ec2/
amitools/.
-r, --arch architecture
Image architecture. If you don't provide the architecture on the command line, you'll be prompted
for it when bundling starts.
Required: No
--productcodes code1,code2,...
Required: No
-B, --block-device-mapping mapping
Defines how block devices are exposed to an instance of this AMI if its instance type supports the
specified device.
Specify a comma-separated list of key-value pairs, where each key is a virtual name and each value is
the corresponding device name. Virtual names include the following:
• ami—The root file system device, as seen by the instance
• root—The root file system device, as seen by the kernel
• swap—The swap device, as seen by the instance
• ephemeralN—The Nth instance store volume
Required: No
-p, --prefix prefix
Default: The name of the image file. For example, if the image path is /var/spool/my-image/
version-2/debian.img, then the default prefix is debian.img.
Required: No
--kernel kernel_id
Required: No
--ramdisk ramdisk_id
Required: No
Output
Status messages describing the stages and status of the bundling process.
Example
This example creates a bundled AMI from an operating system image that was created in a loopback file.
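A sketch of the command; the key, certificate, and account values are placeholders:

```
ec2-bundle-image -k /path/to/your-x509-pk.pem -c /path/to/your-x509-cert.pem \
    -u your_aws_account_id -i image.img -d /tmp -r x86_64
```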
ec2-bundle-vol
Description
Creates an instance store-backed Linux AMI by compressing, encrypting, and signing a copy of the root
device volume for the instance.
Amazon EC2 attempts to inherit product codes, kernel settings, RAM disk settings, and block device
mappings from the instance.
By default, the bundle process excludes files that might contain sensitive information. These files
include *.sw, *.swo, *.swp, *.pem, *.priv, *id_rsa*, *id_dsa* *.gpg, *.jks, */.ssh/
authorized_keys, and */.bash_history. To include all of these files, use the --no-filter option.
To include some of these files, use the --include option.
For more information, see Create an instance store-backed Linux AMI (p. 139).
Syntax
Options
-c, --cert path
The user's PEM-encoded RSA public key certificate file.
Required: Yes
-k, --privatekey path
Required: Yes
-u, --user account
Required: Yes
-d, --destination destination
Default: /tmp
Required: No
--ec2cert path
The path to the Amazon EC2 X.509 public key certificate used to encrypt the image manifest.
The us-gov-west-1 and cn-north-1 Regions use a non-default public key certificate and the
path to that certificate must be specified with this option. The path to the certificate varies based
on the installation method of the AMI tools. For Amazon Linux, the certificates are located at /opt/
aws/amitools/ec2/etc/ec2/amitools/. If you installed the AMI tools from the RPM or ZIP file
in Set up the AMI tools (p. 141), the certificates are located at $EC2_AMITOOL_HOME/etc/ec2/
amitools/.
-r, --arch architecture
The image architecture. If you don't provide this on the command line, you'll be prompted to provide
it when the bundling starts.
Required: No
--productcodes code1,code2,...
Required: No
-B, --block-device-mapping mapping
Defines how block devices are exposed to an instance of this AMI if its instance type supports the
specified device.
Specify a comma-separated list of key-value pairs, where each key is a virtual name and each value is
the corresponding device name. Virtual names include the following:
• ami—The root file system device, as seen by the instance
• root—The root file system device, as seen by the kernel
• swap—The swap device, as seen by the instance
• ephemeralN—The Nth instance store volume
Required: No
-a, --all
Required: No
-e, --exclude directory1,directory2,...
A list of absolute directory paths and files to exclude from the bundle operation. This parameter
overrides the --all option. When exclude is specified, the directories and subdirectories listed with
the parameter will not be bundled with the volume.
Required: No
-i, --include file1,file2,...
A list of files to include in the bundle operation. The specified files would otherwise be excluded
from the AMI because they might contain sensitive information.
Required: No
--no-filter
If specified, we won't exclude files from the AMI because they might contain sensitive information.
Required: No
-p, --prefix prefix
Default: image
Required: No
-s, --size size
The size, in MB (1024 * 1024 bytes), of the image file to create. The maximum size is 10240 MB.
Default: 10240
Required: No
--[no-]inherit
Indicates whether the image should inherit the instance's metadata (the default is to inherit).
Bundling fails if you enable --inherit but the instance metadata is not accessible.
Required: No
-v, --volume volume
The absolute path to the mounted volume from which to create the bundle.
Required: No
-P, --partition type
Indicates whether the disk image should use a partition table. If you don't specify a partition table
type, the default is the type used on the parent block device of the volume, if applicable, otherwise
the default is gpt.
Required: No
-S, --script script
A customization script to be run right before bundling. The script must expect a single argument, the
mount point of the volume.
Required: No
--fstab path
The path to the fstab to bundle into the image. If this is not specified, Amazon EC2 bundles /etc/
fstab.
Required: No
--generate-fstab
Required: No
--grub-config path
The path to an alternate grub configuration file to bundle into the image. By default,
ec2-bundle-vol expects either /boot/grub/menu.lst or /boot/grub/grub.conf to exist on the cloned
image. This option allows you to specify a path to an alternative grub configuration file, which will
then be copied over the defaults (if present).
Required: No
--kernel kernel_id
Required: No
--ramdisk ramdisk_id
Required: No
Output
Example
This example creates a bundled AMI by compressing, encrypting, and signing a snapshot of the local
machine's root file system.
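A sketch of such a command, with placeholder key, certificate, and account values; partial output like the following appears below:

```
ec2-bundle-vol -k /path/to/your-x509-pk.pem -c /path/to/your-x509-cert.pem \
    -u your_aws_account_id -r x86_64 -e /tmp/cert
```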
Excluding:
sys
dev/shm
proc
dev/pts
proc/sys/fs/binfmt_misc
dev
media
mnt
proc
sys
tmp/image
mnt/img-mnt
1+0 records in
1+0 records out
mke2fs 1.38 (30-Jun-2005)
warning: 256 blocks unused.
Splitting /mnt/image.gz.crypt...
Created image.part.00
Created image.part.01
Created image.part.02
Created image.part.03
...
Created image.part.22
Created image.part.23
Generating digests for each part...
Digests generated.
Creating bundle manifest...
Bundle Volume complete.
ec2-delete-bundle
Description
Deletes the specified bundle from Amazon S3 storage. After you delete a bundle, you can't launch
instances from the corresponding AMI.
Syntax
Options
-b, --bucket bucket
The name of the Amazon S3 bucket containing the bundled AMI, followed by an optional
'/'-delimited path prefix.
Required: Yes
-a, --access-key access_key_id
Required: Yes
-s, --secret-key secret_access_key
Required: Yes
-t, --delegation-token token
The delegation token to pass along to the AWS request. For more information, see Using
Temporary Security Credentials.
--region region
Default: us-east-1
--sigv version
Valid values: 2 | 4
Default: 4
Required: No
-m, --manifest path
-p, --prefix prefix
The bundled AMI filename prefix. Provide the entire prefix. For example, if the prefix is image.img,
use -p image.img and not -p image.
--clear
Deletes the Amazon S3 bucket if it's empty after deleting the specified bundle.
Required: No
--retry
Required: No
-y, --yes
Required: No
Output
Amazon EC2 displays status messages indicating the stages and status of the delete process.
Example
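A sketch of the command, with placeholder credentials and the DOC-EXAMPLE-BUCKET1 bucket name used elsewhere in this guide:

```
ec2-delete-bundle -b DOC-EXAMPLE-BUCKET1 -a your_access_key_id -s your_secret_access_key -p image
```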
ec2-download-bundle
Description
Downloads the specified instance store-backed Linux AMIs from Amazon S3 storage.
Syntax
Options
-b, --bucket bucket
The name of the Amazon S3 bucket where the bundle is located, followed by an optional
'/'-delimited path prefix.
Required: Yes
-a, --access-key access_key_id
Required: Yes
-s, --secret-key secret_access_key
Required: Yes
-k, --privatekey path
The private key used to decrypt the manifest.
Required: Yes
--url url
Default: https://s3.amazonaws.com/
Required: No
--region region
Default: us-east-1
--sigv version
Valid values: 2 | 4
Default: 4
Required: No
-m, --manifest file
The name of the manifest file (without the path). We recommend that you specify either the
manifest (-m) or a prefix (-p).
Required: No
-p, --prefix prefix
Default: image
Required: No
-d, --directory directory
The directory where the downloaded bundle is saved. The directory must exist.
Required: No
--retry
Required: No
Output
Status messages indicating the various stages of the download process are displayed.
Example
This example creates the bundled directory (using the Linux mkdir command) and downloads the
bundle from the DOC-EXAMPLE-BUCKET1 Amazon S3 bucket.
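A sketch of those commands; the bundle folder and private key file are placeholders:

```
mkdir bundled
ec2-download-bundle -b DOC-EXAMPLE-BUCKET1/bundles/bundle_name -m image.manifest.xml \
    -a $AWS_ACCESS_KEY_ID -s $AWS_SECRET_ACCESS_KEY \
    --privatekey /path/to/your-x509-pk.pem -d bundled
```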
ec2-migrate-manifest
Description
Modifies an instance store-backed Linux AMI (for example, its certificate, kernel, and RAM disk) so that it
supports a different Region.
Syntax
Options
-c, --cert path
The user's PEM-encoded RSA public key certificate file.
Required: Yes
-k, --privatekey path
Required: Yes
--manifest path
Required: Yes
-a, --access-key access_key_id
During migration, Amazon EC2 replaces the kernel and RAM disk in the manifest file with a kernel
and RAM disk designed for the destination Region. Unless the --no-mapping parameter is given,
ec2-migrate-manifest might use the DescribeRegions and DescribeImages operations to
perform automated mappings.
Required: Required if you're not providing the -a, -s, and --region options used for automatic
mapping.
--ec2cert path
The path to the Amazon EC2 X.509 public key certificate used to encrypt the image manifest.
The us-gov-west-1 and cn-north-1 Regions use a non-default public key certificate and the
path to that certificate must be specified with this option. The path to the certificate varies based
on the installation method of the AMI tools. For Amazon Linux, the certificates are located at /
opt/aws/amitools/ec2/etc/ec2/amitools/. If you installed the AMI tools from the ZIP file
in Set up the AMI tools (p. 141), the certificates are located at $EC2_AMITOOL_HOME/etc/ec2/
amitools/.
Required: No
--ramdisk ramdisk_id
Required: No
Output
Status messages describing the stages and status of the bundling process.
Example
This example copies the AMI specified in the my-ami.manifest.xml manifest from the US to the EU.
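A sketch of the command, with placeholder certificate, key, and credential values; the tool's output follows:

```
ec2-migrate-manifest -c /path/to/your-x509-cert.pem -k /path/to/your-x509-pk.pem \
    --manifest my-ami.manifest.xml -a your_access_key_id -s your_secret_access_key \
    --region eu-west-1
```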
Backing up manifest...
Successfully migrated my-ami.manifest.xml It is now suitable for use in eu-west-1.
ec2-unbundle
Description
Syntax
Options
-k, --privatekey path
The path to your PEM-encoded RSA key file.
Required: Yes
-m, --manifest path
Required: Yes
-s, --source source_directory
Required: No
-d, --destination destination_directory
The directory in which to unbundle the AMI. The destination directory must exist.
Required: No
Example
This Linux and UNIX example unbundles the AMI specified in the image.manifest.xml file.
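A sketch of the commands, assuming the manifest, parts, and private key are in the current directory; the listing below shows the resulting image file:

```
mkdir unbundled
ec2-unbundle -m image.manifest.xml --privatekey /path/to/your-x509-pk.pem -s . -d unbundled
```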
$ ls -l unbundled
total 1025008
-rw-r--r-- 1 root root 1048578048 Aug 25 23:46 image.img
Output
Status messages indicating the various stages of the unbundling process are displayed.
ec2-upload-bundle
Description
Uploads the bundle for an instance store-backed Linux AMI to Amazon S3 and sets the appropriate
access control lists (ACLs) on the uploaded objects. For more information, see Create an instance store-
backed Linux AMI (p. 139).
Note
To upload objects to an S3 bucket for your instance store-backed Linux AMI, ACLs must be
enabled for the bucket. Otherwise, Amazon EC2 will not be able to set ACLs on the objects
to upload. If your destination bucket uses the bucket owner enforced setting for S3 Object
Ownership, this won’t work because ACLs are disabled. For more information, see Controlling
ownership of uploaded objects using S3 Object Ownership.
Syntax
Options
-b, --bucket bucket
The name of the Amazon S3 bucket in which to store the bundle, followed by an optional
'/'-delimited path prefix. If the bucket doesn't exist, it's created if the bucket name is available.
Required: Yes
-a, --access-key access_key_id
Required: Yes
-s, --secret-key secret_access_key
Required: Yes
-t, --delegation-token token
The delegation token to pass along to the AWS request. For more information, see Using
Temporary Security Credentials.
-m, --manifest path
The path to the manifest file. The manifest file is created during the bundling process and can be
found in the directory containing the bundle.
Required: Yes
--url url
Deprecated. Use the --region option instead, unless your bucket is constrained to the EU location
(and not eu-west-1). The --location flag is the only way to target that specific location constraint.
Default: https://s3.amazonaws.com/
Required: No
--region region
The Region to use in the request signature for the destination S3 bucket.
• If the bucket doesn't exist and you don't specify a Region, the tool creates the bucket without a
location constraint (in us-east-1).
• If the bucket doesn't exist and you specify a Region, the tool creates the bucket in the specified
Region.
• If the bucket exists and you don't specify a Region, the tool uses the bucket's location.
• If the bucket exists and you specify us-east-1 as the Region, the tool uses the bucket's actual
location without any error message, and any existing matching files are overwritten.
• If the bucket exists and you specify a Region (other than us-east-1) that doesn't match the
bucket's actual location, the tool exits with an error.
If your bucket is constrained to the EU location (and not eu-west-1), use the --location flag
instead. The --location flag is the only way to target that specific location constraint.
Default: us-east-1
--sigv version
Valid values: 2 | 4
Default: 4
Required: No
--acl acl
Default: aws-exec-read
Required: No
-d, --directory directory
Default: The directory containing the manifest file (see the -m option).
Required: No
--part part
Starts uploading the specified part and all subsequent parts. For example, --part 04.
Required: No
--retry
Required: No
--skipmanifest
Required: No
--location location
Deprecated. Use the --region option instead, unless your bucket is constrained to the EU location
(and not eu-west-1). The --location flag is the only way to target that specific location constraint.
The location constraint of the destination Amazon S3 bucket. If the bucket exists and you specify a
location that doesn't match the bucket's actual location, the tool exits with an error. If the bucket
exists and you don't specify a location, the tool uses the bucket's location. If the bucket doesn't exist
and you specify a location, the tool creates the bucket in the specified location. If the bucket doesn't
exist and you don't specify a location, the tool creates the bucket without a location constraint (in
us-east-1).
Default: If --region is specified, the location is set to that specified Region. If --region is not
specified, the location defaults to us-east-1.
Required: No
Output
Amazon EC2 displays status messages that indicate the stages and status of the upload process.
Example
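A sketch of the command, with placeholder credentials; partial output appears below:

```
ec2-upload-bundle -b DOC-EXAMPLE-BUCKET1/bundles/bundle_name -m image.manifest.xml \
    -a your_access_key_id -s your_secret_access_key
```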
Uploaded image.part.14
Uploading manifest ...
Uploaded manifest.
Bundle upload completed.
Common options for AMI tools
Most of the AMI tools accept the following optional parameter.
--help, -h
Displays the help message.
Copy an AMI
You can copy an Amazon Machine Image (AMI) within or across AWS Regions. You can copy both Amazon
EBS-backed AMIs and instance-store-backed AMIs. You can copy AMIs with encrypted snapshots and also
change encryption status during the copy process. You can copy AMIs that are shared with you.
Copying a source AMI results in an identical but distinct target AMI with its own unique identifier. You can
change or deregister the source AMI with no effect on the target AMI. The reverse is also true.
With an Amazon EBS-backed AMI, each of its backing snapshots is copied to an identical but distinct
target snapshot. If you copy an AMI to a new Region, the snapshots are complete (non-incremental)
copies. If you encrypt unencrypted backing snapshots or encrypt them to a new KMS key, the snapshots
are complete (non-incremental) copies. Subsequent copy operations of an AMI result in incremental
copies of the backing snapshots.
There are no charges for copying an AMI. However, standard storage and data transfer rates apply. If you
copy an EBS-backed AMI, you will incur charges for the storage of any additional EBS snapshots.
Considerations
• You can use IAM policies to grant or deny users permissions to copy AMIs. Resource-level permissions
specified for the CopyImage action apply only to the new AMI. You cannot specify resource-level
permissions for the source AMI.
• AWS does not copy launch permissions, user-defined tags, or Amazon S3 bucket permissions from the
source AMI to the new AMI. After the copy operation is complete, you can apply launch permissions,
user-defined tags, and Amazon S3 bucket permissions to the new AMI.
• If you are using an AWS Marketplace AMI, or an AMI that was directly or indirectly derived from an
AWS Marketplace AMI, you cannot copy it across accounts. Instead, launch an EC2 instance using the
AWS Marketplace AMI and then create an AMI from the instance. For more information, see Create an
Amazon EBS-backed Linux AMI (p. 134).
Contents
• Permissions for copying an instance store-backed AMI (p. 171)
• Copy an AMI (p. 172)
• Stop a pending AMI copy operation (p. 173)
• Cross-Region copying (p. 174)
• Cross-account copying (p. 175)
• Encryption and copying (p. 175)
Permissions for copying an instance store-backed AMI
The following example policy allows the user to copy the AMI source in the specified bucket to the
specified Region.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::ami-source-bucket/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:GetBucketAcl",
"s3:PutObjectAcl",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::amis-for-123456789012-in-us-east-1*"
]
}
]
}
To find the Amazon Resource Name (ARN) of the AMI source bucket, open the Amazon EC2 console at
https://console.aws.amazon.com/ec2/, in the navigation pane choose AMIs, and locate the bucket name
in the Source column.
Note
The s3:CreateBucket permission is only needed the first time that the IAM user copies an
instance store-backed AMI to an individual Region. After that, the Amazon S3 bucket that is
already created in the Region is used to store all future AMIs that you copy to that Region.
Copy an AMI
You can copy an AMI using the AWS Management Console, the AWS Command Line Interface or SDKs, or
the Amazon EC2 API, all of which support the CopyImage action.
Prerequisite
Create or obtain an AMI backed by an Amazon EBS snapshot. Note that you can use the Amazon EC2
console to search a wide variety of AMIs provided by AWS. For more information, see Create an Amazon
EBS-backed Linux AMI (p. 134) and Finding an AMI.
New console
• AMI copy name: A name for the new AMI. You can include operating system information in
the name, as we do not provide this information when displaying details about the AMI.
• AMI copy description: By default, the description includes information about the source
AMI so that you can distinguish a copy from its original. You can change this description as
needed.
• Destination Region: The Region in which to copy the AMI. For more information, see Cross-
Region copying (p. 174).
• Encrypt EBS snapshots of AMI copy: Select this check box to encrypt the target snapshots, or
to re-encrypt them using a different key. If you have enabled encryption by default (p. 1539),
the Encrypt EBS snapshots of AMI copy check box is selected and cannot be cleared. For
more information, see Encryption and copying (p. 175).
• KMS key: The KMS key used to encrypt the target snapshots.
6. Choose Copy AMI.
The initial status of the new AMI is Pending. The AMI copy operation is complete when the
status is Available.
Old console
• Destination region: The Region in which to copy the AMI. For more information, see Cross-
Region copying (p. 174).
• Name: A name for the new AMI. You can include operating system information in the name,
as we do not provide this information when displaying details about the AMI.
• Description: By default, the description includes information about the source AMI so that
you can distinguish a copy from its original. You can change this description as needed.
• Encryption: Select this field to encrypt the target snapshots, or to re-encrypt them using a
different key. If you have enabled encryption by default (p. 1539), the Encryption option is
set and cannot be unset. For more information, see Encryption and copying (p. 175).
• KMS Key: The KMS key used to encrypt the target snapshots.
5. We display a confirmation page to let you know that the copy operation has been initiated and
to provide you with the ID of the new AMI.
To check on the progress of the copy operation immediately, follow the provided link. To check
on the progress later, choose Done, and then when you are ready, use the navigation bar to
switch to the target Region (if applicable) and locate your AMI in the list of AMIs.
The initial status of the target AMI is pending and the operation is complete when the status is
available.
You can copy an AMI using the copy-image command. You must specify both the source and destination
Regions. You specify the source Region using the --source-region parameter. You can specify
the destination Region using either the --region parameter or an environment variable. For more
information, see Configuring the AWS Command Line Interface.
When you encrypt a target snapshot during copying, you must specify these additional parameters: --
encrypted and --kms-key-id.
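A sketch of a copy with encryption; all IDs and the KMS key alias are placeholders:

```
aws ec2 copy-image --source-region us-east-1 --source-image-id ami-1234567890abcdef0 \
    --name my-copied-ami --region us-east-2 --encrypted --kms-key-id alias/my-key
```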
You can copy an AMI using the Copy-EC2Image command. You must specify both the source and
destination Regions. You specify the source Region using the -SourceRegion parameter. You can
specify the destination Region using either the -Region parameter or the Set-AWSDefaultRegion
command. For more information, see Specifying AWS Regions.
When you encrypt a target snapshot during copying, you must specify these additional parameters: -
Encrypted and -KmsKeyId.
Stop a pending AMI copy operation
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
To stop a pending AMI copy operation
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. From the navigation bar, select the destination Region from the Region selector.
3. In the navigation pane, choose AMIs.
4. Select the AMI to stop copying and choose Actions, Deregister.
5. When asked for confirmation, choose Continue.
Cross-Region copying
Copying an AMI across geographically diverse Regions provides the following benefits:
• Consistent global deployment: Copying an AMI from one Region to another enables you to launch
consistent instances in different Regions based on the same AMI.
• Scalability: You can more easily design and build global applications that meet the needs of your users,
regardless of their location.
• Performance: You can increase performance by distributing your application, as well as locating critical
components of your application in closer proximity to your users. You can also take advantage of
Region-specific features, such as instance types or other AWS services.
• High availability: You can design and deploy applications across AWS Regions, to increase availability.
The following diagram shows the relations among a source AMI and two copied AMIs in different
Regions, as well as the EC2 instances launched from each. When you launch an instance from an AMI, it
resides in the same Region where the AMI resides. If you make changes to the source AMI and want those
changes to be reflected in the AMIs in the target Regions, you must recopy the source AMI to the target
Regions.
When you first copy an instance store-backed AMI to a Region, we create an Amazon S3 bucket for the
AMIs copied to that Region. All instance store-backed AMIs that you copy to that Region are stored in this
bucket. The bucket names have the following format: amis-for-account-in-region-hash. For example:
amis-for-123456789012-in-us-east-2-yhjmxvp6.
Prerequisite
Prior to copying an AMI, you must ensure that the contents of the source AMI are updated to support
running in a different Region. For example, you should update any database connection strings or similar
application configuration data to point to the appropriate resources. Otherwise, instances launched from
the new AMI in the destination Region may still use the resources from the source Region, which can
impact performance and cost.
Limits
Cross-account copying
You can share an AMI with another AWS account. Sharing an AMI does not affect the ownership of the
AMI. The owning account is charged for the storage in the Region. For more information, see Share an
AMI with specific AWS accounts (p. 123).
If you copy an AMI that has been shared with your account, you are the owner of the target AMI in your
account. The owner of the source AMI is charged standard Amazon EBS or Amazon S3 transfer fees, and
you are charged for the storage of the target AMI in the destination Region.
Resource Permissions
To copy an AMI that was shared with you from another account, the owner of the source AMI must grant
you read permissions for the storage that backs the AMI, either the associated EBS snapshot (for an
Amazon EBS-backed AMI) or an associated S3 bucket (for an instance store-backed AMI). If the shared
AMI has encrypted snapshots, the owner must share the key or keys with you as well.
Scenario   Encryption change            Supported
1          Unencrypted-to-unencrypted   Yes
2          Encrypted-to-encrypted       Yes
3          Unencrypted-to-encrypted     Yes
4          Encrypted-to-unencrypted     No
Note
Encrypting during the CopyImage action applies only to Amazon EBS-backed AMIs. Because
an instance store-backed AMI does not rely on snapshots, you cannot use copying to change its
encryption status.
By default (i.e., without specifying encryption parameters), the backing snapshot of an AMI is copied with
its original encryption status. Copying an AMI backed by an unencrypted snapshot results in an identical
target snapshot that is also unencrypted. If the source AMI is backed by an encrypted snapshot, copying
it results in an identical target snapshot that is encrypted by the same AWS KMS key. Copying an AMI
backed by multiple snapshots preserves, by default, the source encryption status in each target snapshot.
If you specify encryption parameters while copying an AMI, you can encrypt or re-encrypt its backing
snapshots. The following example shows a non-default case that supplies encryption parameters to the
CopyImage action in order to change the target AMI's encryption state.
In this scenario, an AMI backed by an unencrypted root snapshot is copied to an AMI with an encrypted
root snapshot. The CopyImage action is invoked with two encryption parameters, including a customer
managed key. As a result, the encryption status of the root snapshot changes, so that the target AMI is
backed by a root snapshot containing the same data as the source snapshot, but encrypted using the
specified key. You incur storage costs for the snapshots in both AMIs, as well as charges for any instances
you launch from either AMI.
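As a sketch, such a copy might be invoked from the AWS CLI as follows. The AMI ID, Regions, name,
and key alias are placeholders:

```shell
# Copy an AMI and encrypt the target snapshots with a customer managed key.
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-1234567890abcdef0 \
    --region us-east-2 \
    --name "encrypted-copy" \
    --encrypted \
    --kms-key-id alias/my-cmk
```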
Note
Enabling encryption by default (p. 1539) has the same effect as setting the Encrypted
parameter to true for all snapshots in the AMI.
Setting the Encrypted parameter encrypts the single snapshot for this instance. If you do not specify
the KmsKeyId parameter, the default customer managed key is used to encrypt the snapshot copy.
For more information about copying AMIs with encrypted snapshots, see Use encryption with EBS-
backed AMIs (p. 189).
Store and restore an AMI
The supported APIs for storing and restoring an AMI using S3 are CreateStoreImageTask,
DescribeStoreImageTasks, and CreateRestoreImageTask.
CopyImage is the recommended API to use for copying AMIs within an AWS partition. However,
CopyImage can’t copy an AMI to another partition.
Warning
Ensure that you comply with all applicable laws and business requirements when moving
data between AWS partitions or AWS Regions, including, but not limited to, any applicable
government regulations and data residency requirements.
Topics
• Use cases (p. 177)
• How the AMI store and restore APIs work (p. 178)
• Limitations (p. 179)
• Costs (p. 180)
• Securing your AMIs (p. 180)
• Permissions for storing and restoring AMIs using S3 (p. 180)
• Work with the AMI store and restore APIs (p. 181)
Use cases
Use the store and restore APIs to do the following:
• Copy an AMI from one AWS partition to another AWS partition (p. 177)
• Make archival copies of AMIs (p. 178)
• Store the AMI in an S3 bucket in the current Region by using CreateStoreImageTask. In this
example, the S3 bucket is located in us-east-2. For an example command, see Store an AMI in an S3
bucket (p. 181).
• Monitor the progress of the store task by using DescribeStoreImageTasks. The object becomes
visible in the S3 bucket when the task is completed. For an example command, see Describe the
progress of an AMI store task (p. 181).
• Copy the stored AMI object to an S3 bucket in the target partition using a procedure of your choice. In
this example, the S3 bucket is located in us-gov-east-1.
Note
Because you need different AWS credentials for each partition, you can’t copy an S3 object
directly from one partition to another. The process for copying an S3 object across partitions
is outside the scope of this documentation. We provide the following copy processes as
examples, but you must use the copy process that meets your security requirements.
• To copy one AMI across partitions, the copy process could be as straightforward as the
following: Download the object from the source bucket to an intermediate host (for
example, an EC2 instance or a laptop), and then upload the object from the intermediate
host to the source bucket. For each stage of the process, use the AWS credentials for the
partition.
• For more sustained usage, consider developing an application that manages the copies,
potentially using S3 multipart downloads and uploads.
• Restore the AMI from the S3 bucket in the target partition by using CreateRestoreImageTask. In
this example, the S3 bucket is located in us-gov-east-1. For an example command, see Restore an
AMI from an S3 bucket (p. 181).
• Monitor the progress of the restore task by describing the AMI to check when its state becomes
available. You can also monitor the progress percentages of the snapshots that make up the restored
AMI by describing the snapshots.
How the AMI store and restore APIs work
The AMI is packed into a single object in S3, and all of the AMI metadata (excluding sharing information)
is preserved as part of the stored AMI. The AMI data is compressed as part of the storage process. AMIs
that contain data that can easily be compressed will result in smaller objects in S3. To reduce costs,
you can use less expensive S3 storage tiers. For more information, see Amazon S3 Storage Classes and
Amazon S3 pricing.
CreateStoreImageTask
The CreateStoreImageTask (p. 181) API stores an AMI as a single object in an S3 bucket.
The API creates a task that reads all of the data from the AMI and its snapshots, and then uses an S3
multipart upload to store the data in an S3 object. The API takes all of the components of the AMI,
including most of the non-Region-specific AMI metadata, and all the EBS snapshots contained in the
AMI, and packs them into a single object in S3. The data is compressed as part of the upload process to
reduce the amount of space used in S3, so the object in S3 might be smaller than the sum of the sizes of
the snapshots in the AMI.
If there are AMI and snapshot tags visible to the account calling this API, they are preserved.
The object in S3 has the same ID as the AMI, but with a .bin extension. The following data is also stored
as S3 metadata tags on the S3 object: AMI name, AMI description, AMI registration date, AMI owner
account, and a timestamp for the store operation.
The time it takes to complete the task depends on the size of the AMI. It also depends on how many
other tasks are in progress because tasks are queued. You can track the progress of the task by calling
the DescribeStoreImageTasks (p. 181) API.
The sum of the sizes of all the AMIs in progress is limited to 600 GB of EBS snapshot data per account.
Further task creation will be rejected until the tasks in progress are less than the limit. For example,
if an AMI with 100 GB of snapshot data and another AMI with 200 GB of snapshot data are currently
being stored, another request will be accepted, because the total in progress is 300 GB, which is less than
the limit. But if a single AMI with 800 GB of snapshot data is currently being stored, further tasks are
rejected until the task is completed.
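The admission rule described above can be sketched as a simple check. The semantics here are an
assumption based on the description, not an actual API:

```shell
# Sketch of the store-task admission rule (assumed semantics): a new task is
# accepted while the snapshot data of tasks already in progress totals less
# than the 600 GB per-account limit.
limit_gb=600
in_progress_gb=$((100 + 200))   # two AMIs currently being stored

if [ "$in_progress_gb" -lt "$limit_gb" ]; then
  echo "accepted"               # prints "accepted" for this example
else
  echo "rejected"
fi
```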
DescribeStoreImageTasks
The DescribeStoreImageTasks (p. 181) API describes the progress of the AMI store tasks. You can
describe tasks for specified AMIs. If you don't specify AMIs, you get a paginated list of all of the store
image tasks that have been processed in the last 31 days.
For each AMI task, the response indicates if the task is InProgress, Completed, or Failed. For tasks
InProgress, the response shows an estimated progress as a percentage.
CreateRestoreImageTask
The CreateRestoreImageTask (p. 181) API starts a task that restores an AMI from an S3 object that was
previously created by using a CreateStoreImageTask (p. 181) request.
The restore task can be performed in the same or a different Region in which the store task was
performed.
The S3 bucket from which the AMI object will be restored must be in the same Region in which the
restore task is requested. The AMI will be restored in this Region.
The AMI is restored with its metadata, such as the name, description, and block device mappings
corresponding to the values of the stored AMI. The name must be unique for AMIs in the Region for this
account. If you do not provide a name, the new AMI gets the same name as the original AMI. The AMI
gets a new AMI ID that is generated at the time of the restore process.
The time it takes to complete the AMI restoration task depends on the size of the AMI. It also depends on
how many other tasks are in progress because tasks are queued. You can view the progress of the task by
describing the AMI (describe-images) or its EBS snapshots (describe-snapshots). If the task fails, the AMI
and snapshots are moved to a failed state.
The sum of the sizes of all of the AMIs in progress is limited to 300 GB (based on the size after
restoration) of EBS snapshot data per account. Further task creation will be rejected until the tasks in
progress are less than the limit.
Limitations
• Only EBS-backed AMIs can be stored using these APIs.
• Paravirtual (PV) AMIs are not supported.
• The size of an AMI (before compression) that can be stored is limited to 1 TB.
• Quota on store image (p. 181) requests: 600 GB of storage work (snapshot data) in progress.
• Quota on restore image (p. 181) requests: 300 GB of restore work (snapshot data) in progress.
• For the duration of the store task, the snapshots must not be deleted and the IAM principal doing the
store must have access to the snapshots, otherwise the store process will fail.
• You can’t create multiple copies of an AMI in the same S3 bucket.
• An AMI that is stored in an S3 bucket can’t be restored with its original AMI ID. You can mitigate this by
using AMI aliasing.
• Currently the store and restore APIs are only supported by using the AWS Command Line Interface,
AWS SDKs, and Amazon EC2 API. You can’t store and restore an AMI using the Amazon EC2 console.
Costs
When you store and restore AMIs using S3, you are charged for the services that are used by the store
and restore APIs, and for data transfer. The APIs use S3 and the EBS Direct API (used internally by these
APIs to access the snapshot data). For more information, see Amazon S3 pricing and Amazon EBS pricing.
Securing your AMIs
For information about how to set the appropriate security settings for your S3 buckets, review the
security topics in the Amazon S3 documentation.
When the AMI snapshots are copied to the S3 object, the data is copied over TLS connections. You
can store AMIs with encrypted snapshots, but the snapshots are decrypted as part of the store process.
Permissions for storing and restoring AMIs using S3
The following example policy includes all of the actions that are required to allow an IAM principal to
carry out the store and restore tasks.
You can also craft policies so that IAM principals can only access named resources. For more example
policies, see Access management for AWS resources in the IAM User Guide.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",
"s3:AbortMultipartUpload",
"ebs:CompleteSnapshot",
"ebs:GetSnapshotBlock",
"ebs:ListChangedBlocks",
"ebs:ListSnapshotBlocks",
"ebs:PutSnapshotBlock",
"ebs:StartSnapshot",
"ec2:CreateStoreImageTask",
"ec2:DescribeStoreImageTasks",
"ec2:CreateRestoreImageTask",
"ec2:GetEbsEncryptionByDefault",
"ec2:DescribeTags"
],
"Resource": "*"
}
]
}
Work with the AMI store and restore APIs
Store an AMI in an S3 bucket
Use the create-store-image-task command. Specify the ID of the AMI and the name of the S3 bucket in
which to store the AMI.
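For example, using the AMI ID and bucket name from the output below as placeholders:

```shell
aws ec2 create-store-image-task \
    --image-id ami-1234567890abcdef0 \
    --bucket myamibucket
```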
Expected output
{
"ObjectKey": "ami-1234567890abcdef0.bin"
}
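To monitor the progress of the store task, you can use the describe-store-image-tasks command;
for example, using the same placeholder AMI ID:

```shell
aws ec2 describe-store-image-tasks \
    --image-ids ami-1234567890abcdef0
```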
Expected output
{
"AmiId": "ami-1234567890abcdef0",
"Bucket": "myamibucket",
"ProgressPercentage": 17,
"S3ObjectKey": "ami-1234567890abcdef0.bin",
"StoreTaskState": "InProgress",
"StoreTaskFailureReason": null,
"TaskStartTime": "2021-01-01T01:01:01.001Z"
}
Restore an AMI from an S3 bucket
Use the create-restore-image-task command. Using the values for S3ObjectKey and Bucket from the
describe-store-image-tasks output, specify the object key of the AMI and the name of the S3
bucket to which the AMI was copied. Also specify a name for the restored AMI. The name must be unique
for AMIs in the Region for this account.
Note
The restored AMI gets a new AMI ID.
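A sketch of the restore command, using the object key and bucket from the earlier store example and
a placeholder name:

```shell
aws ec2 create-restore-image-task \
    --object-key ami-1234567890abcdef0.bin \
    --bucket myamibucket \
    --name "restored-ami"
```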
Expected output
{
"ImageId": "ami-0eab20fe36f83e1a8"
}
Deprecate an AMI
You can deprecate an AMI to indicate that it is out of date and should not be used. You can also specify a
future deprecation date for an AMI, indicating when the AMI will be out of date. For example, you might
deprecate an AMI that is no longer actively maintained, or you might deprecate an AMI that has been
superseded by a newer version. By default, deprecated AMIs do not appear in AMI listings, preventing
new users from using out-of-date AMIs. However, existing users and launch services, such as launch
templates and Auto Scaling groups, can continue to use a deprecated AMI by specifying its ID. To delete
the AMI so that users and services cannot use it, you must deregister (p. 185) it.
• For AMI users, the deprecated AMI does not appear in DescribeImages API calls unless you specify
its ID or specify that deprecated AMIs must appear. AMI owners continue to see deprecated AMIs in
DescribeImages API calls.
• For AMI users, the deprecated AMI is not available to select via the EC2 console. For example, a
deprecated AMI does not appear in the AMI catalog in the launch instance wizard. AMI owners
continue to see deprecated AMIs in the EC2 console.
• For AMI users, if you know the ID of a deprecated AMI, you can continue to launch instances using the
deprecated AMI by using the API, CLI, or the SDKs.
• Launch services, such as launch templates and Auto Scaling groups, can continue to reference
deprecated AMIs.
• EC2 instances that were launched using an AMI that is subsequently deprecated are not affected, and
can be stopped, started, and rebooted.
You can also create Amazon Data Lifecycle Manager EBS-backed AMI policies to automate the
deprecation of EBS-backed AMIs. For more information, see Automate AMI lifecycles (p. 1491).
Topics
• Costs (p. 183)
• Limitations (p. 179)
• Deprecate an AMI (p. 183)
Costs
When you deprecate an AMI, the AMI is not deleted. The AMI owner continues to pay for the
AMI's snapshots. To stop paying for the snapshots, the AMI owner must delete the AMI by
deregistering (p. 185) it.
Limitations
• To deprecate an AMI, you must be the owner of the AMI.
• You can’t use the EC2 console to deprecate an AMI or to cancel the deprecation of an AMI.
Deprecate an AMI
You can deprecate an AMI on a specific date and time. You must be the AMI owner to perform this
procedure.
Use the enable-image-deprecation command. Specify the ID of the AMI and the date and time on which
to deprecate the AMI. If you specify a value for seconds, Amazon EC2 rounds the seconds to the nearest
minute.
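For example, where the AMI ID and the timestamp are placeholders:

```shell
aws ec2 enable-image-deprecation \
    --image-id ami-1234567890abcdef0 \
    --deprecate-at "2021-10-15T13:17:12.000Z"
```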
Expected output
{
"RequestID": "59dbff89-35bd-4eac-99ed-be587EXAMPLE",
"Return": "true"
}
By default, when you describe all AMIs using the describe-images command, deprecated AMIs that are
not owned by you, but which are shared with you, do not appear in the results. To include deprecated
AMIs in the results, you must specify the --include-deprecated true parameter. The default
value for --include-deprecated is false. If you omit this parameter, deprecated AMIs do not
appear in the results.
• If you are the AMI owner:
When you describe all AMIs using the describe-images command, all the AMIs that you own, including
deprecated AMIs, appear in the results. You do not need to specify the --include-deprecated
true parameter. Furthermore, you cannot exclude deprecated AMIs that you own from the results by
using --include-deprecated false.
To include all deprecated AMIs when describing all AMIs (AWS CLI)
Use the describe-images command and specify the --include-deprecated parameter with a value of
true to include all deprecated AMIs that are not owned by you in the results.
Note that if you specify --include-deprecated false together with the AMI ID, the deprecated AMI
will be returned in the results.
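For example, where the owner account ID is a placeholder:

```shell
# Describe AMIs owned by the specified account, including deprecated AMIs.
aws ec2 describe-images \
    --owners 123456789012 \
    --include-deprecated
```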
Expected output
The DeprecationTime field displays the date on which the AMI is set to be deprecated. If the AMI is not
set to be deprecated, the DeprecationTime field does not appear in the output.
{
"Images": [
{
"VirtualizationType": "hvm",
"Description": "Provided by Red Hat, Inc.",
"PlatformDetails": "Red Hat Enterprise Linux",
"EnaSupport": true,
"Hypervisor": "xen",
"State": "available",
"SriovNetSupport": "simple",
"ImageId": "ami-1234567890EXAMPLE",
"DeprecationTime": "2021-05-10T13:17:12.000Z",
"UsageOperation": "RunInstances:0010",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"SnapshotId": "snap-111222333444aaabb",
"DeleteOnTermination": true,
"VolumeType": "gp2",
"VolumeSize": 10,
"Encrypted": false
}
}
],
"Architecture": "x86_64",
"ImageLocation": "123456789012/RHEL-8.0.0_HVM-20190618-x86_64-1-Hourly2-GP2",
"RootDeviceType": "ebs",
"OwnerId": "123456789012",
"RootDeviceName": "/dev/sda1",
"CreationDate": "2019-05-10T13:17:12.000Z",
"Public": true,
"ImageType": "machine",
"Name": "RHEL-8.0.0_HVM-20190618-x86_64-1-Hourly2-GP2"
}
]
}
Cancel the deprecation of an AMI
Use the disable-image-deprecation command and specify the ID of the AMI.
Expected output
{
"RequestID": "11aabb229-4eac-35bd-99ed-be587EXAMPLE",
"Return": "true"
}
Deregister your Linux AMI
When you deregister an AMI, it doesn't affect any instances that you've already launched from the
AMI. You'll continue to incur usage costs for these instances. Therefore, if you are finished with these
instances, you should terminate them.
The procedure that you'll use to clean up your AMI depends on whether it is backed by Amazon EBS or
instance store. For more information, see Determine the root device type of your AMI (p. 97).
Contents
• Considerations (p. 185)
• Clean up your Amazon EBS-backed AMI (p. 186)
• Clean up your instance store-backed AMI (p. 188)
Considerations
The following considerations apply to deregistering AMIs:
• You can't deregister an AMI that is managed by the AWS Backup service using Amazon EC2. Instead,
use AWS Backup to delete the corresponding recovery points in the backup vault.
The following diagram illustrates the process for cleaning up your Amazon EBS-backed AMI.
You can use one of the following methods to clean up your Amazon EBS-backed AMI.
New console
If you are finished with an instance that you launched from the AMI, you can terminate it.
a. In the navigation pane, choose Instances then select the instance to terminate.
b. Choose Instance state, Terminate instance. When prompted for confirmation, choose
Terminate.
Old console
If you are finished with an instance that you launched from the AMI, you can terminate it.
a. In the navigation pane, choose Instances then select the instance to terminate.
b. Choose Actions, Instance State, Terminate. When prompted for confirmation, choose Yes,
Terminate.
AWS CLI
Follow these steps to clean up your Amazon EBS-backed AMI using the AWS CLI
Delete snapshots that are no longer needed by using the delete-snapshot command:
If you are finished with an instance that you launched from the AMI, you can terminate it by
using the terminate-instances command:
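As a sketch, the CLI cleanup sequence might look like the following. All of the IDs are placeholders,
and the deregister step is shown for context (it precedes snapshot deletion in the cleanup process):

```shell
# Deregister the AMI.
aws ec2 deregister-image --image-id ami-1234567890abcdef0

# Delete a snapshot that is no longer needed.
aws ec2 delete-snapshot --snapshot-id snap-111222333444aaabb

# Terminate an instance launched from the AMI, if you are finished with it.
aws ec2 terminate-instances --instance-ids i-123456789abcde123
```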
PowerShell
Follow these steps to clean up your Amazon EBS-backed AMI using the AWS Tools for Windows
PowerShell
Delete snapshots that are no longer needed by using the Remove-EC2Snapshot cmdlet:
If you are finished with an instance that you launched from the AMI, you can terminate it by
using the Remove-EC2Instance cmdlet:
The following diagram illustrates the process for cleaning up your instance store-backed AMI.
1. Deregister the AMI by using the deregister-image command.
2. Delete the bundle in Amazon S3 using the ec2-delete-bundle (p. 160) (AMI tools) command as
follows.
3. (Optional) If you are finished with an instance that you launched from the AMI, you can terminate it
using the terminate-instances command as follows.
4. (Optional) If you are finished with the Amazon S3 bucket that you uploaded the bundle to, you can
delete the bucket. To delete an Amazon S3 bucket, open the Amazon S3 console, select the bucket,
choose Actions, and then choose Delete.
Use encryption with EBS-backed AMIs
EC2 instances with encrypted EBS volumes are launched from AMIs in the same way as other instances.
In addition, when you launch an instance from an AMI backed by unencrypted EBS snapshots, you can
encrypt some or all of the volumes during launch.
Like EBS volumes, snapshots in AMIs can be encrypted with either your default AWS KMS key or a
customer managed key that you specify. In all cases, you must have permission to use the selected
KMS key.
AMIs with encrypted snapshots can be shared across AWS accounts. For more information, see Shared
AMIs (p. 112).
Instance-launching scenarios
Amazon EC2 instances are launched from AMIs using the RunInstances action with parameters
supplied through block device mapping, either by means of the AWS Management Console or directly
using the Amazon EC2 API or CLI. For more information about block device mapping, see Block device
mapping. For examples of controlling block device mapping from the AWS CLI, see Launch, List, and
Terminate EC2 Instances.
By default, without explicit encryption parameters, a RunInstances action maintains the existing
encryption state of an AMI's source snapshots while restoring EBS volumes from them. If Encryption by
default (p. 1539) is enabled, all volumes created from the AMI (whether from encrypted or unencrypted
snapshots) will be encrypted. If encryption by default is not enabled, then the instance maintains the
encryption state of the AMI.
You can also launch an instance and simultaneously apply a new encryption state to the resulting
volumes by supplying encryption parameters.
The default behaviors can be overridden by supplying encryption parameters. The available parameters
are Encrypted and KmsKeyId. Setting only the Encrypted parameter results in the following:
• An unencrypted snapshot is restored to an EBS volume that is encrypted by your AWS account's
default KMS key.
• An encrypted snapshot that you own is restored to an EBS volume encrypted by the same KMS key. (In
other words, the Encrypted parameter has no effect.)
• An encrypted snapshot that you do not own (i.e., the AMI is shared with you) is restored to a volume
that is encrypted by your AWS account's default KMS key. (In other words, the Encrypted parameter
has no effect.)
Setting both the Encrypted and KmsKeyId parameters allows you to specify a non-default KMS key for
an encryption operation. The following behaviors result:
• An unencrypted snapshot is restored to an EBS volume encrypted by the specified KMS key.
• An encrypted snapshot is restored to an EBS volume encrypted not to the original KMS key, but
instead to the specified KMS key.
Submitting a KmsKeyId without also setting the Encrypted parameter results in an error.
The following sections provide examples of launching instances from AMIs using non-default encryption
parameters. In each of these scenarios, parameters supplied to the RunInstances action result in a
change of encryption state during restoration of a volume from a snapshot.
For information about using the console to launch an instance from an AMI, see Launch your
instance (p. 563).
The Encrypted parameter alone results in the volume for this instance being encrypted. Providing a
KmsKeyId parameter is optional. If no KMS key ID is specified, the AWS account's default KMS key is
used to encrypt the volume. To encrypt the volume to a different KMS key that you own, supply the
KmsKeyId parameter.
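A sketch of such a launch from the AWS CLI, where the AMI ID, instance type, and device name are
placeholders:

```shell
# Launch an instance and encrypt the root volume with the account's default KMS key.
aws ec2 run-instances \
    --image-id ami-1234567890abcdef0 \
    --instance-type t2.micro \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"Encrypted":true}}]'
```

To use a different KMS key that you own, add "KmsKeyId" to the Ebs structure in the block device
mapping.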
If you own the AMI and supply no encryption parameters, the resulting instance has a volume encrypted
by the same KMS key as the snapshot. If the AMI is shared rather than owned by you, and you supply no
encryption parameters, the volume is encrypted by your default KMS key. With encryption parameters
supplied as shown, the volume is encrypted by the specified KMS key.
In this scenario, the RunInstances action is supplied with encryption parameters for each of the source
snapshots. When all possible encryption parameters are specified, the resulting instance is the same
regardless of whether you own the AMI.
Image-copying scenarios
Amazon EC2 AMIs are copied using the CopyImage action, either through the AWS Management Console
or directly using the Amazon EC2 API or CLI.
By default, without explicit encryption parameters, a CopyImage action maintains the existing
encryption state of an AMI's source snapshots during copy. You can also copy an AMI and simultaneously
apply a new encryption state to its associated EBS snapshots by supplying encryption parameters.
All of these default behaviors can be overridden by supplying encryption parameters. The available
parameters are Encrypted and KmsKeyId. Setting only the Encrypted parameter results in the
following:
• An unencrypted snapshot is copied to a snapshot encrypted by the AWS account's default KMS key.
• An encrypted snapshot is copied to a snapshot encrypted by the same KMS key. (In other words, the
Encrypted parameter has no effect.)
• An encrypted snapshot that you do not own (i.e., the AMI is shared with you) is copied to a volume that
is encrypted by your AWS account's default KMS key. (In other words, the Encrypted parameter has
no effect.)
Setting both the Encrypted and KmsKeyId parameters allows you to specify a customer managed KMS
key for an encryption operation. The following behaviors result:
• An unencrypted snapshot is copied to a snapshot encrypted by the specified KMS key.
• An encrypted snapshot is copied to a snapshot encrypted not by the original KMS key, but by the
specified KMS key.
Submitting a KmsKeyId without also setting the Encrypted parameter results in an error.
The following section provides an example of copying an AMI using non-default encryption parameters,
resulting in a change of encryption state.
For detailed instructions using the console, see Copy an AMI (p. 170).
Setting the Encrypted parameter encrypts the single snapshot for this instance. If you do not specify
the KmsKeyId parameter, the default customer managed key is used to encrypt the snapshot copy.
Note
You can also copy an image with multiple snapshots and configure the encryption state of each
individually.
Understand AMI billing
To understand how the AMI that you choose when launching your instance affects the bottom line on
your AWS bill, you can research the associated operating system platform and billing information. Do
this before you launch any On-Demand or Spot Instances, or purchase a Reserved Instance.
Here are two examples of how researching your AMI in advance can help you choose the AMI that best
suits your needs:
• For Spot Instances, you can use the AMI Platform details to confirm that the AMI is supported for Spot
Instances.
• When purchasing a Reserved Instance, you can make sure that you select the operating system
platform (Platform) that maps to the AMI Platform details.
For more information about instance pricing, see Amazon EC2 pricing.
Contents
• AMI billing information fields (p. 194)
• Finding AMI billing and usage details (p. 195)
• Verify AMI charges on your bill (p. 197)
AMI billing information fields
Platform details
The platform details associated with the billing code of the AMI. For example, Red Hat
Enterprise Linux.
Usage operation
The operation of the Amazon EC2 instance and the billing code that is associated with the AMI. For
example, RunInstances:0010. Usage operation corresponds to the lineitem/Operation column on
your AWS Cost and Usage Report (CUR) and in the AWS Price List API.
You can view these fields on the Instances or AMIs page in the Amazon EC2 console, or in the response
that is returned by the describe-images command.
Platform details *        Usage operation **
Linux/UNIX                RunInstances
Windows                   RunInstances:0002
* If two software licenses are associated with an AMI, the Platform details field shows both.
** If you are running Spot Instances, the lineitem/Operation on your AWS Cost and Usage Report
might be different from the Usage operation value that is listed here. For example, if lineitem/
Operation displays RunInstances:0010:SV006, it means that Amazon EC2 is running a Red Hat
Enterprise Linux Spot Instance-hour in US East (Virginia) in VPC Zone #6.
The following fields can help you verify AMI charges on your bill:
• Platform details
• Usage operation
• AMI ID
Find AMI billing information
3. On the Details tab, check the values for Platform details and Usage operation.
If you know the instance ID, you can get the AMI ID for the instance by using the describe-instances
command.
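The extracted text drops the command itself; the call that produces the output excerpt below likely looks like the following (the instance ID is a placeholder, not a value from the original example):

```shell
aws ec2 describe-instances --instance-ids i-123456789abcde123
```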
...
"Instances": [
    {
        "AmiLaunchIndex": 0,
        "ImageId": "ami-0123456789EXAMPLE",
        "InstanceId": "i-123456789abcde123",
        ...
    }
]
If you know the AMI ID, you can use the describe-images command to get the AMI platform and usage
operation details.
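The command itself is missing from the extracted text; based on the standard AWS CLI syntax, it likely takes this form (the AMI ID matches the example output that follows):

```shell
aws ec2 describe-images --image-ids ami-0123456789EXAMPLE
```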
The following example output shows the PlatformDetails and UsageOperation fields. In this
example, the ami-0123456789EXAMPLE platform is Red Hat Enterprise Linux and the usage
operation and billing code is RunInstances:0010.
{
    "Images": [
        {
            "VirtualizationType": "hvm",
            "Description": "Provided by Red Hat, Inc.",
            "Hypervisor": "xen",
            "EnaSupport": true,
            "SriovNetSupport": "simple",
            "ImageId": "ami-0123456789EXAMPLE",
            "State": "available",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda1",
                    "Ebs": {
                        "SnapshotId": "snap-111222333444aaabb",
                        "DeleteOnTermination": true,
                        "VolumeType": "gp2",
                        "VolumeSize": 10,
                        "Encrypted": false
                    }
                }
            ],
            "Architecture": "x86_64",
            "ImageLocation": "123456789012/RHEL-8.0.0_HVM-20190618-x86_64-1-Hourly2-GP2",
            "RootDeviceType": "ebs",
            "OwnerId": "123456789012",
            "PlatformDetails": "Red Hat Enterprise Linux",
            "UsageOperation": "RunInstances:0010",
            "RootDeviceName": "/dev/sda1",
            "CreationDate": "2019-05-10T13:17:12.000Z",
            "Public": true,
            "ImageType": "machine",
            "Name": "RHEL-8.0.0_HVM-20190618-x86_64-1-Hourly2-GP2"
        }
    ]
}
Verify AMI charges on your bill
To verify the billing information, find the instance ID in your CUR and check the corresponding value
in the lineitem/Operation column. That value should match the value for Usage operation that's
associated with the AMI.
For example, the AMI ami-0123456789EXAMPLE has the following billing information:
• Platform details: Red Hat Enterprise Linux
• Usage operation: RunInstances:0010
If you launched an instance using this AMI, you can find the instance ID in your CUR and check the
corresponding value in the lineitem/Operation column. In this example, the value should be
RunInstances:0010.
Amazon Linux
Amazon Linux is provided by Amazon Web Services (AWS). It is designed to provide a stable, secure,
and high-performance execution environment for applications running on Amazon EC2. It also includes
packages that enable easy integration with AWS, including launch configuration tools and many popular
AWS libraries and tools. AWS provides ongoing security and maintenance updates for all instances
running Amazon Linux. Many applications developed on CentOS (and similar distributions) run on
Amazon Linux.
Contents
• Amazon Linux availability (p. 198)
• Connect to an Amazon Linux instance (p. 198)
• Identify Amazon Linux images (p. 198)
• AWS command line tools (p. 199)
Amazon Linux availability
The last version of the Amazon Linux AMI, 2018.03, reaches the end of standard support on December
31, 2020. For more information, see the following blog post: Amazon Linux AMI end of life. If you are
currently using the Amazon Linux AMI, we recommend that you migrate to Amazon Linux 2. To migrate
to Amazon Linux 2, launch an instance or create a virtual machine using the current Amazon Linux 2
image. Install your applications, plus any required packages. Test your application, and make any changes
required for it to run on Amazon Linux 2.
For more information, see Amazon Linux 2 and Amazon Linux AMI. For Amazon Linux Docker container
images, see amazonlinux on Docker Hub.
Identify Amazon Linux images
Each image contains a unique /etc/image-id file that identifies it. This file contains the following
information about the image:
• image_name, image_version, image_arch — Values from the build recipe that Amazon used to
construct the image.
• image_stamp — A unique, random hex value generated during image creation.
• image_date — The UTC time of image creation, in YYYYMMDDhhmmss format.
• recipe_name, recipe_id — The name and ID of the build recipe Amazon used to construct the
image.
Amazon Linux contains an /etc/system-release file that specifies the current release that is
installed. This file is updated using yum and is part of the system-release RPM.
Amazon Linux also contains a machine-readable version of /etc/system-release that follows the
CPE specification; see /etc/system-release-cpe.
Amazon Linux 2
The following is an example of /etc/image-id for the current version of Amazon Linux 2:
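The example file itself was lost in extraction. A sketch of its shape, with illustrative placeholder values (the real stamps, dates, and recipe IDs vary per image build):

```
image_name="amzn2-ami-hvm"
image_version="2"
image_arch="x86_64"
image_file="amzn2-ami-hvm-2.0.20200304.0-x86_64.xfs.gpt"
image_stamp="fd46-ebe3"
image_date="20200304013937"
recipe_name="amzn2 ami"
recipe_id="c652f0e0-example-1111-2222-333344445555"
```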
The following is an example of /etc/system-release for the current version of Amazon Linux 2:
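The example content is missing from the extraction; for Amazon Linux 2 it reads like this:

```
Amazon Linux release 2 (Karoo)
```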
The following is an example of /etc/system-release for the current Amazon Linux AMI:
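The example content is missing here as well; for the Amazon Linux AMI it reads like this:

```
Amazon Linux AMI release 2018.03
```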
AWS command line tools
The following AWS command line tools are included as packages in the Amazon Linux AMI:
• aws-amitools-ec2
• aws-apitools-as
• aws-apitools-cfn
• aws-apitools-elb
• aws-apitools-mon
• aws-cfn-bootstrap
• aws-cli
Amazon Linux 2 and the minimal versions of Amazon Linux (amzn-ami-minimal-* and
amzn2-ami-minimal-*) do not always contain all of these packages; however, you can install them
from the default repositories using the following command:
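The command was stripped during extraction; a likely form, using yum and the package names listed above (install only the packages you need), is:

```shell
sudo yum install -y aws-apitools-as aws-apitools-cfn aws-apitools-elb aws-apitools-mon aws-cfn-bootstrap aws-cli
```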
For instances launched using IAM roles, a simple script is included that prepares
AWS_CREDENTIAL_FILE, JAVA_HOME, AWS_PATH, PATH, and product-specific environment variables
after a credential file has been installed, to simplify the configuration of these tools.
Also, to allow the installation of multiple versions of the API and AMI tools, we have placed symbolic
links to the desired versions of these tools in /opt/aws, as described here:
/opt/aws/bin
Symbolic links to the /bin directories of each installed product.
/opt/aws/{apitools|amitools}
Products are installed in directories of the form name-version, with a symbolic link name that
points to the most recently installed version.
/opt/aws/{apitools|amitools}/name/environment.sh
Used by /etc/profile.d/aws-apitools-common.sh to set product-specific environment
variables, such as EC2_HOME.
Package repository
Amazon Linux 2 and the Amazon Linux AMI are designed to be used with online package repositories
hosted in each Amazon EC2 AWS Region. These repositories provide ongoing updates to packages in
Amazon Linux 2 and the Amazon Linux AMI, as well as access to hundreds of additional common open-
source server applications. The repositories are available in all Regions and are accessed using yum
update tools. Hosting repositories in each Region enables us to deploy updates quickly and without any
data transfer charges.
Amazon Linux 2 and the Amazon Linux AMI are updated regularly with security and feature
enhancements. If you do not need to preserve data or customizations for your instances, you can simply
launch new instances using the current AMI. If you need to preserve data or customizations for your
instances, you can maintain those instances through the Amazon Linux package repositories. These
repositories contain all the updated packages. You can choose to apply these updates to your running
instances. Older versions of the AMI and update packages continue to be available for use, even as new
versions are released.
Important
Your instance must have access to the internet in order to access the repository.
For the Amazon Linux AMI, access to the Extra Packages for Enterprise Linux (EPEL) repository is
configured, but it is not enabled by default. Amazon Linux 2 is not configured to use the EPEL repository.
EPEL provides third-party packages in addition to those that are in the repositories. The third-party
packages are not supported by AWS. You can enable the EPEL repository with the following commands:
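The commands referenced here were lost in extraction. Likely forms, differing by release (the Amazon Linux AMI ships the EPEL repository configured but disabled; Amazon Linux 2 installs it through the extras library), are:

```shell
# Amazon Linux AMI: enable the preconfigured EPEL repository
sudo yum-config-manager --enable epel

# Amazon Linux 2: install EPEL via the extras library
sudo amazon-linux-extras install epel
```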
If you find that Amazon Linux does not contain an application you need, you can simply install the
application directly on your Amazon Linux instance. Amazon Linux uses RPMs and yum for package
management, and that is likely the simplest way to install new applications. You should always check to
see if an application is available in our central Amazon Linux repository first, because many applications
are available there. These applications can easily be added to your Amazon Linux instance.
To upload your applications onto a running Amazon Linux instance, use scp or sftp and then configure
the application by logging on to your instance. Your applications can also be uploaded during the
instance launch by using the PACKAGE_SETUP action from the built-in cloud-init package. For more
information, see cloud-init (p. 203).
Security updates
Security updates are provided using the package repositories as well as updated AMIs. Security alerts
are published in the Amazon Linux Security Center. For more information about AWS security policies
or to report a security problem, go to the AWS Security Center.
Amazon Linux is configured to download and install critical or important security updates at launch
time. We recommend that you make the necessary updates for your use case after launch. For example,
you may want to apply all updates (not just security updates) at launch, or evaluate each update and
apply only the ones applicable to your system. This is controlled using the following cloud-init setting:
repo_upgrade. The following snippet of cloud-init configuration shows how you can change the
settings in the user data text you pass to your instance initialization:
#cloud-config
repo_upgrade: security
security
Apply outstanding updates that Amazon marks as security updates.
bugfix
Apply updates that Amazon marks as bug fixes. Bug fixes are a larger set of updates, which include
security updates and fixes for various other minor bugs.
all
Apply all applicable updates, regardless of their classification.
none
Do not apply any updates to the instance on startup.
The default setting for repo_upgrade is security. That is, if you don't specify a different value in your
user data, by default, Amazon Linux performs the security upgrades at launch for any packages installed
at that time. Amazon Linux also notifies you of any updates to the installed packages by listing the
number of available updates upon login using the /etc/motd file. To install these updates, you need to
run sudo yum upgrade on the instance.
Repository configuration
With Amazon Linux, AMIs are treated as snapshots in time, with a repository and update structure that
always gives you the latest packages when you run yum update -y.
The repository structure is configured to deliver a continuous flow of updates that enable you to roll
from one version of Amazon Linux to the next. For example, if you launch an instance from an older
version of the Amazon Linux AMI (such as 2017.09 or earlier) and run yum update -y, you end up with
the latest packages.
You can disable rolling updates by enabling the lock-on-launch feature. The lock-on-launch feature locks
your instance to receive updates only from the specified release of the AMI. For example, you can launch
a 2017.09 AMI and have it receive only the updates that were released prior to the 2018.03 AMI, until
you are ready to migrate to the 2018.03 AMI.
Important
If you lock to a version of the repositories that is not the latest, you do not receive further
updates. To receive a continuous flow of updates, you must use the latest AMI, or consistently
update your AMI with the repositories pointing to latest.
To enable lock-on-launch in new instances, launch them with the following user data passed to cloud-init:
#cloud-config
repo_releasever: 2017.09
To lock existing instances to their current AMI version
1. Edit /etc/yum.conf.
2. Comment out releasever=latest.
3. To clear the cache, run yum clean all.
Extras library (Amazon Linux 2)
With Amazon Linux 2, you can use the extras library to install application and software updates on
your instances. These software updates are known as topics. To enable a topic and install the latest
version of its package to ensure freshness, use the following command:
To enable topics and install specific versions of their packages to ensure stability, use the following
command:
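Both commands were lost in extraction; based on the amazon-linux-extras syntax (topic names and versions below are placeholders), they likely look like this:

```shell
# Install the latest version of a topic's package
sudo amazon-linux-extras install topic1

# Install specific versions to pin for stability
sudo amazon-linux-extras install topic1=version1 topic2=version2
```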
[ec2-user ~]$ sudo yum remove $(yum list installed | grep amzn2extra-topic | awk '{ print $1 }')
Note
This command does not remove packages that were installed as dependencies of the extra.
To disable a topic and make the packages inaccessible to the yum package manager, use the following
command:
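The command is missing from the extracted text; it is likely the disable form of the extras tool (topic name is a placeholder):

```shell
sudo amazon-linux-extras disable topic1
```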
Important
This command is intended for advanced users. Improper usage of this command could cause
package compatibility conflicts.
Access source packages for reference
You can view the source of packages that you have installed on your instance for reference purposes.
The source RPM can be unpacked, and, for reference, you can view the source tree using standard RPM
tools. After you finish debugging, the package is available for use.
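As a sketch of the preceding workflow, you can fetch a source RPM for an installed package with yumdownloader (bash here is only an example package):

```shell
# Download the source RPM for an installed package into the current directory
yumdownloader --source bash
```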
cloud-init
The cloud-init package is an open-source application built by Canonical that is used to bootstrap Linux
images in a cloud computing environment, such as Amazon EC2. Amazon Linux contains a customized
version of cloud-init. It enables you to specify actions that should happen to your instance at boot time.
You can pass desired actions to cloud-init through the user data fields when launching an instance.
This means you can use common AMIs for many use cases and configure them dynamically at startup.
Amazon Linux also uses cloud-init to perform initial configuration of the ec2-user account.
Amazon Linux uses the cloud-init actions found in /etc/cloud/cloud.cfg.d and
/etc/cloud/cloud.cfg. You can create your own cloud-init action files in
/etc/cloud/cloud.cfg.d. All files in this directory are read by cloud-init. They are read in lexical
order, and later files overwrite values in earlier files.
The cloud-init package performs these (and other) common configuration tasks for instances at boot:
• Set the default locale
• Set the hostname
• Parse and handle user data
• Generate host private SSH keys
• Add a user's public SSH keys to .ssh/authorized_keys for easy login and administration
• Prepare the repositories for package management
• Handle package actions defined in user data
• Execute user scripts found in user data
• Mount instance store volumes, if applicable
#cloud-config
mounts:
- [ ephemeral0 ]
For more control over mounts, see Mounts in the cloud-init documentation.
• Instance store volumes that support TRIM are not formatted when an instance launches, so you
must partition and format them before you can mount them. For more information, see Instance
store volume TRIM support (p. 1628). You can use the disk_setup module to partition and
format your instance store volumes at boot. For more information, see Disk Setup in the cloud-init
documentation.
• Gzip
If user-data is gzip compressed, cloud-init decompresses the data and handles it appropriately.
• MIME multipart
Using a MIME multipart file, you can specify more than one type of data. For example, you could
specify both a user-data script and a cloud-config type. Each part of the multipart file can be
handled by cloud-init if it is one of the supported formats.
• Base64 decoding
If user-data is base64-encoded, cloud-init determines if it can understand the decoded data as
one of the supported types. If it understands the decoded data, it decodes the data and handles it
appropriately. If not, it returns the base64 data intact.
• User-data script
Begins with #! or Content-Type: text/x-shellscript. The script is run by
/etc/init.d/cloud-init-user-scripts during the first boot cycle. This occurs late in the boot
process (after the initial configuration actions are performed).
• Include file
Subscribe to Amazon Linux notifications
a. [Amazon Linux 2] For Topic ARN, copy and paste the following Amazon Resource Name (ARN):
arn:aws:sns:us-east-1:137112412989:amazon-linux-2-ami-updates.
b. [Amazon Linux] For Topic ARN, copy and paste the following Amazon Resource Name (ARN):
arn:aws:sns:us-east-1:137112412989:amazon-linux-ami-updates.
c. For Protocol, choose Email.
d. For Endpoint, enter an email address that you can use to receive the notifications.
e. Choose Create subscription.
5. You receive a confirmation email with the subject line "AWS Notification - Subscription
Confirmation". Open the email and choose Confirm subscription to complete your subscription.
Whenever AMIs are released, we send notifications to the subscribers of the corresponding topic. To stop
receiving these notifications, use the following procedure to unsubscribe.
3. In the navigation pane, choose Subscriptions, select the subscription, and choose Actions, Delete
subscriptions.
4. When prompted for confirmation, choose Delete.
{
    "description": "Validates output from AMI Release SNS message",
    "type": "object",
    "properties": {
        "v1": {
            "type": "object",
            "properties": {
                "ReleaseVersion": {
                    "description": "Major release (ex. 2018.03)",
                    "type": "string"
                },
                "ImageVersion": {
                    "description": "Full release (ex. 2018.03.0.20180412)",
                    "type": "string"
                },
                "ReleaseNotes": {
                    "description": "Human-readable string with extra information",
                    "type": "string"
                },
                "Regions": {
                    "type": "object",
                    "description": "Each key will be a region name (ex. us-east-1)",
                    "additionalProperties": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "Name": {
                                    "description": "AMI Name (ex. amzn-ami-hvm-2018.03.0.20180412-x86_64-gp2)",
                                    "type": "string"
                                },
                                "ImageId": {
                                    "description": "AMI ID (ex. ami-467ca739)",
                                    "type": "string"
                                }
                            },
                            "required": [
                                "Name",
                                "ImageId"
                            ]
                        }
                    }
                }
            },
            "required": [
                "ReleaseVersion",
                "ImageVersion",
                "ReleaseNotes",
                "Regions"
            ]
        }
    },
    "required": [
        "v1"
    ]
}
Run Amazon Linux 2 on premises
Amazon Linux 2 is available as a virtual machine image for on-premises development and testing on
the following virtualization platforms:
• VMWare
• KVM
• VirtualBox (Oracle VM)
• Microsoft Hyper-V
To use the Amazon Linux 2 virtual machine images with one of the supported virtualization platforms,
do the following:
To generate the seed.iso boot image, you need two configuration files:
• meta-data—This file includes the hostname and static network settings for the VM.
• user-data—This file configures user accounts, and specifies their passwords, key pairs, and access
mechanisms. By default, the Amazon Linux 2 VM image creates an ec2-user user account. You use the
user-data configuration file to set the password for the default user account.
local-hostname: vm_hostname
# eth0 is the default network interface enabled in the image. You can configure
# static network settings with an entry like the following.
network-interfaces: |
auto eth0
iface eth0 inet static
address 192.168.1.10
network 192.168.1.0
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.254
Replace vm_hostname with a VM host name of your choice, and configure the network settings
as required.
c. Save and close the meta-data configuration file.
#cloud-config
#vim:syntax=yaml
users:
# A user by the name `ec2-user` is created in the image by default.
- default
chpasswd:
list: |
ec2-user:plain_text_password
# In the above line, do not add any spaces after 'ec2-user:'.
Replace plain_text_password with a password of your choice for the default ec2-user user
account.
c. (Optional) By default, cloud-init applies network settings each time the VM boots. Add the
following to prevent cloud-init from applying network settings at each boot, and to retain the
network settings applied during the first boot.
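The fragment referenced here was lost in extraction. One way to do this, a sketch based on cloud-init's write_files module (the file path and name are conventional choices, not mandated), is to append the following to the user-data file:

```
write_files:
  - path: /etc/cloud/cloud.cfg.d/80_disable_network_after_firstboot.cfg
    content: |
      # Disable network configuration after the first boot
      network:
        config: disabled
```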
You can also create additional user accounts and specify their access mechanisms, passwords, and
key pairs. For more information about the supported directives, see Modules. For an example user-
data file that creates three additional users and specifies a custom password for the default ec2-
user user account, see the sample Seed.iso file.
4. Create the seed.iso boot image using the meta-data and user-data configuration files.
For Linux, use a tool such as genisoimage. Navigate into the seedconfig folder, and run the
following command.
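The command is missing from the extracted text; a typical genisoimage invocation for a cloud-init seed image (the cidata volume ID is what cloud-init's NoCloud datasource expects) looks like this:

```shell
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
```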
For macOS, use a tool such as hdiutil. Navigate one level up from the seedconfig folder, and run
the following command.
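The macOS command is also missing; a typical hdiutil invocation with the same cidata volume name looks like this:

```shell
hdiutil makehybrid -o seed.iso -hfs -joliet -iso -default-volume-name cidata seedconfig/
```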
• VMWare
• KVM
• Oracle VirtualBox
• Microsoft Hyper-V
VMWare vSphere
1. Create a new datastore for the seed.iso file, or add it to an existing datastore.
2. Deploy the OVF template, but do not start the VM yet.
3. In the Navigator panel, right-click the new virtual machine and choose Edit Settings.
4. On the Virtual Hardware tab, for New device, choose CD/DVD Drive, and then choose Add.
5. For New CD/DVD Drive, choose Datastore ISO File. Select the datastore to which you added the
seed.iso file, browse to and select the seed.iso file, and then choose OK.
6. For New CD/DVD Drive, select Connect, and then choose OK.
After you have associated the datastore with the VM, you should be able to boot it.
KVM
Oracle VirtualBox
a. Select the new VM, choose Settings, and then choose Storage.
b. In the Storage Devices list, under Controller: IDE, choose the Empty optical drive.
c. In the Attributes section for the optical drive, choose the browse button, select Choose
Virtual Optical Disk File, and then select the seed.iso file. Choose OK to apply the
changes and close the Settings.
After you have added the seed.iso file to the virtual optical drive, you should be able to start the
VM.
Microsoft Hyper-V
The VM image for Microsoft Hyper-V is compressed into a zip file. You must extract the contents of
the zip file.
After the VM has booted, log in using one of the user accounts that is defined in the user-data
configuration file. After you have logged in for the first time, you can then disconnect the seed.iso boot
image from the VM.
Kernel Live Patching
Kernel Live Patching for Amazon Linux 2 enables you to apply security vulnerability and critical bug
patches to a running Linux kernel, without reboots or disruptions to running applications. AWS releases
two types of kernel live patches for Amazon Linux 2:
• Security updates—Include updates for Linux common vulnerabilities and exposures (CVE). These
updates are typically rated as important or critical using the Amazon Linux Security Advisory ratings.
They generally map to a Common Vulnerability Scoring System (CVSS) score of 7 and higher. In some
cases, AWS might provide updates before a CVE is assigned. In these cases, the patches might appear
as bug fixes.
• Bug fixes—Include fixes for critical bugs and stability issues that are not associated with CVEs.
AWS provides kernel live patches for an Amazon Linux 2 kernel version for up to 3 months after its
release. After the 3-month period, you must update to a later kernel version to continue to receive kernel
live patches.
Amazon Linux 2 kernel live patches are made available as signed RPM packages in the existing Amazon
Linux 2 repositories. The patches can be installed on individual instances using existing yum workflows,
or they can be installed on a group of managed instances using AWS Systems Manager.
Topics
• Supported configurations and prerequisites (p. 211)
• Work with Kernel Live Patching (p. 211)
• Limitations (p. 215)
• Frequently asked questions (p. 215)
Note
The 64-bit ARM (arm64) architecture is not supported.
The following sections explain how to enable and use Kernel Live Patching on individual instances using
the command line.
For more information about enabling and using Kernel Live Patching on a group of managed instances,
see Use Kernel Live Patching on Amazon Linux 2 instances in the AWS Systems Manager User Guide.
Topics
• Enable Kernel Live Patching (p. 212)
• View the available kernel live patches (p. 213)
• Apply kernel live patches (p. 213)
• View the applied kernel live patches (p. 214)
• Disable Kernel Live Patching (p. 215)
Prerequisites
Kernel Live Patching requires binutils. If you do not have binutils installed, install it using the
following command:
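The install command is missing from the extracted text; with yum it is:

```shell
sudo yum install -y binutils
```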
1. Kernel live patches are available for Amazon Linux 2 with kernel version 4.14.165-131.185 or
later. To check your kernel version, run the following command.
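The version-check command was lost in extraction; the standard way to print the running kernel release is:

```shell
# Print the running kernel release, e.g. 4.14.165-131.185.amzn2.x86_64
uname -r
```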
2. If you already have a supported kernel version, skip this step. If you do not have a supported kernel
version, run the following commands to update the kernel to the latest version and to reboot the
instance.
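The update command itself is missing; a likely form that pulls the latest kernel from the configured repositories (followed by the reboot shown below) is:

```shell
sudo yum install -y kernel
```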
$ sudo reboot
This command also installs the latest version of the kernel live patch RPM from the configured
repositories.
5. To confirm that the yum plugin for kernel live patching has installed successfully, run the following
command.
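The confirmation command was lost in extraction; based on the kernel-livepatch yum plugin's subcommand syntax, it is likely:

```shell
yum kernel-livepatch list
```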
When you enable Kernel Live Patching, an empty kernel live patch RPM is automatically applied. If
Kernel Live Patching was successfully enabled, this command returns a list that includes the initial
empty kernel live patch RPM.
6. Install the kpatch package.
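The install command is missing; the kpatch tooling for Amazon Linux 2 is packaged as kpatch-runtime (package name assumed from the AL2 repositories):

```shell
sudo yum install -y kpatch-runtime
```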
8. Start the kpatch service. This service loads all of the kernel live patches upon initialization or at
boot.
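The service commands were lost in extraction; with systemd they likely look like this:

```shell
# Enable the service at boot, then start it now
sudo systemctl enable kpatch.service
sudo systemctl start kpatch.service
```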
9. Configure the Amazon Linux 2 Kernel Live Patching repository, which contains the kernel live
patches.
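The repository-configuration command is missing; the live-patch repository is delivered as an extras topic, so the likely form is:

```shell
sudo amazon-linux-extras enable livepatch
```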
You can also discover the available kernel live patches for advisories and CVEs using the command line.
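The discovery commands were lost in extraction; with yum's updateinfo subcommand (standard yum syntax) they likely look like this:

```shell
# List available advisories
yum updateinfo list

# List available CVE fixes
yum updateinfo list cves
```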
You can choose to apply a specific kernel live patch, or to apply any available kernel live patches along
with your regular security updates.
1. Get the kernel live patch version using one of the commands described in View the available kernel
live patches (p. 213).
2. Apply the kernel live patch for your Amazon Linux 2 kernel.
For example, the following command applies a kernel live patch for Amazon Linux 2 kernel version
4.14.165-133.209.
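The command itself is missing from the extracted text; a likely form, using the kernel-livepatch package naming convention for that kernel version (the exact package release suffix is an assumption), is:

```shell
sudo yum install -y kernel-livepatch-4.14.165-133.209-1.0-0.amzn2.x86_64
```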
To apply any available kernel live patches along with your regular security updates
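The command for this case was also lost; applying live patches together with regular security updates is likely done with yum's security flag:

```shell
sudo yum update --security
```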
• The kernel version is not updated after kernel live patches are applied; it is updated to the new
version only after the instance is rebooted.
• An Amazon Linux 2 kernel receives kernel live patches for a period of three months. After the
three-month period has elapsed, no new kernel live patches are released for that kernel version.
To continue to receive kernel live patches after the three-month period, you must reboot the
instance to move to the new kernel version, which then continues receiving kernel live patches for
the next three months. To check the support window for your kernel version, run
yum kernel-livepatch supported.
$ kpatch list
The command returns a list of the loaded and installed security update kernel live patches. The following
is example output.
Note
A single kernel live patch can include and install multiple live patches.
1. Remove the RPM packages for the applied kernel live patches.
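The removal command is missing from the extracted text; a likely form, using a placeholder live-patch package name (substitute the package installed on your instance, followed by the reboot shown below), is:

```shell
sudo yum remove kernel-livepatch-4.14.165-133.209-1.0-0.amzn2.x86_64
```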
$ sudo reboot
Limitations
Kernel Live Patching has the following limitations:
• While applying a kernel live patch, you can't perform hibernation, use advanced debugging tools (such
as SystemTap, kprobes, and eBPF-based tools), or access ftrace output files used by the Kernel Live
Patching infrastructure.
• Amazon Linux 2 instances with 64-bit ARM (arm64) architecture are not supported.
User provided kernels
Contents
• HVM AMIs (GRUB) (p. 215)
• Paravirtual AMIs (PV-GRUB) (p. 216)
HVM AMIs (GRUB)
By default, GRUB does not send its output to the instance console because it creates an extra boot delay.
For more information, see Instance console output (p. 1722). If you are installing a custom kernel, you
should consider enabling GRUB output.
You don't need to specify a fallback kernel, but we recommend that you have a fallback when you test a
new kernel. GRUB can fall back to another kernel in the event that the new kernel fails. Having a fallback
kernel enables the instance to boot even if the new kernel isn't found.
The legacy GRUB for Amazon Linux uses /boot/grub/menu.lst. GRUB2 for Amazon Linux 2 uses /
etc/default/grub. For more information about updating the default kernel in the bootloader, see the
documentation for your Linux distribution.
Paravirtual AMIs (PV-GRUB)
PV-GRUB understands standard grub.conf or menu.lst commands, which allows it to work with all
currently supported Linux distributions. Older distributions such as Ubuntu 10.04 LTS, Oracle Enterprise
Linux, or CentOS 5.x require a special "ec2" or "xen" kernel package, while newer distributions include the
required drivers in the default kernel package.
Most modern paravirtual AMIs use a PV-GRUB AKI by default (including all of the paravirtual Linux AMIs
available in the Amazon EC2 Launch Wizard Quick Start menu), so there are no additional steps that
you need to take to use a different kernel on your instance, provided that the kernel you want to use
is compatible with your distribution. The best way to run a custom kernel on your instance is to start
with an AMI that is close to what you want and then to compile the custom kernel on your instance and
modify the menu.lst file to boot with that kernel.
You can verify that the kernel image for an AMI is a PV-GRUB AKI. Run the following describe-images
command (substituting your kernel image ID) and check whether the Name field starts with pv-grub:
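The command itself was lost in extraction; based on the standard AWS CLI syntax, it likely looks like this (the AKI ID is a placeholder):

```shell
aws ec2 describe-images --image-ids aki-12345678
```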
Contents
• Limitations of PV-GRUB (p. 216)
• Configure GRUB for paravirtual AMIs (p. 217)
• Amazon PV-GRUB Kernel Image IDs (p. 217)
• Update PV-GRUB (p. 219)
Limitations of PV-GRUB
PV-GRUB has the following limitations:
• You can't use the 64-bit version of PV-GRUB to start a 32-bit kernel or vice versa.
• You can't specify an Amazon ramdisk image (ARI) when using a PV-GRUB AKI.
• AWS has tested and verified that PV-GRUB works with these file system formats: EXT2, EXT3, EXT4,
JFS, XFS, and ReiserFS. Other file system formats might not work.
• PV-GRUB can boot kernels compressed using the gzip, bzip2, lzo, and xz compression formats.
• Cluster AMIs don't support or need PV-GRUB, because they use full hardware virtualization (HVM).
While paravirtual instances use PV-GRUB to boot, HVM instance volumes are treated like actual disks,
and the boot process is similar to the boot process of a bare metal operating system with a partitioned
disk and bootloader.
• PV-GRUB versions 1.03 and earlier don't support GPT partitioning; they support MBR partitioning only.
• If you plan to use a logical volume manager (LVM) with Amazon Elastic Block Store (Amazon EBS)
volumes, you need a separate boot partition outside of the LVM. Then you can create logical volumes
with the LVM.
Configure GRUB for paravirtual AMIs
The following is an example of a menu.lst configuration file for booting an AMI with a PV-GRUB
AKI. In this example, there are two kernel entries to choose from: Amazon Linux 2018.03 (the original
kernel for this AMI), and Vanilla Linux 4.16.4 (a newer version of the Vanilla Linux kernel from https://
www.kernel.org/). The Vanilla entry was copied from the original entry for this AMI, and the kernel
and initrd paths were updated to the new locations. The default 0 parameter points the bootloader
to the first entry it sees (in this case, the Vanilla entry), and the fallback 1 parameter points the
bootloader to the next entry if there is a problem booting the first.
default 0
fallback 1
timeout 0
hiddenmenu
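The title entries described in the text above were lost in extraction. A sketch of what the two entries might look like (the kernel paths, versions, and boot parameters are illustrative assumptions, not from the original file):

```
title Vanilla Linux 4.16.4
root (hd0)
kernel /boot/vmlinuz-4.16.4 root=LABEL=/ console=hvc0
initrd /boot/initrd.img-4.16.4

title Amazon Linux 2018.03 (original kernel)
root (hd0)
kernel /boot/vmlinuz-4.14.26-46.32.amzn1.x86_64 root=LABEL=/ console=hvc0
initrd /boot/initramfs-4.14.26-46.32.amzn1.x86_64.img
```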
You don't need to specify a fallback kernel in your menu.lst file, but we recommend that you have a
fallback when you test a new kernel. PV-GRUB can fall back to another kernel in the event that the new
kernel fails. Having a fallback kernel allows the instance to boot even if the new kernel isn't found.
PV-GRUB checks the following locations for menu.lst, using the first one it finds:
• (hd0)/boot/grub
• (hd0,0)/boot/grub
• (hd0,0)/grub
• (hd0,1)/boot/grub
• (hd0,1)/grub
• (hd0,2)/boot/grub
• (hd0,2)/grub
• (hd0,3)/boot/grub
• (hd0,3)/grub
Note that PV-GRUB 1.03 and earlier only check one of the first two locations in this list.
We recommend that you always use the latest version of the PV-GRUB AKI, as not all versions of the PV-GRUB AKI are compatible with all instance types. Use the following describe-images command to get a list of the PV-GRUB AKIs for the current Region:
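The command itself was omitted above; the following is a sketch of a describe-images invocation that lists these AKIs. The owner and name filter are assumptions based on the AKI file names shown below, so verify the results in your Region:

```
$ aws ec2 describe-images --owners amazon --filters Name=name,Values=pv-grub-*.gz
```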
PV-GRUB is the only AKI available in the ap-southeast-2 Region. You should verify that any AMI you
want to copy to this Region is using a version of PV-GRUB that is available in this Region.
The following are the current AKI IDs for each Region. Register new AMIs using an hd0 AKI.
Note
We continue to provide hd00 AKIs for backward compatibility in Regions where they were
previously available.
aki-f975a998 pv-grub-hd0_1.05-i386.gz
aki-7077ab11 pv-grub-hd0_1.05-x86_64.gz
aki-17a40074 pv-grub-hd0_1.05-i386.gz
aki-73a50110 pv-grub-hd0_1.05-x86_64.gz
aki-ba5665d9 pv-grub-hd0_1.05-i386.gz
aki-66506305 pv-grub-hd0_1.05-x86_64.gz
aki-1419e57b pv-grub-hd0_1.05-i386.gz
aki-931fe3fc pv-grub-hd0_1.05-x86_64.gz
aki-1c9fd86f pv-grub-hd0_1.05-i386.gz
aki-dc9ed9af pv-grub-hd0_1.05-x86_64.gz
aki-7cd34110 pv-grub-hd0_1.05-i386.gz
aki-912fbcfd pv-grub-hd0_1.05-x86_64.gz
aki-04206613 pv-grub-hd0_1.05-i386.gz
aki-5c21674b pv-grub-hd0_1.05-x86_64.gz
aki-5ee9573f pv-grub-hd0_1.05-i386.gz
aki-9ee55bff pv-grub-hd0_1.05-x86_64.gz
aki-43cf8123 pv-grub-hd0_1.05-i386.gz
aki-59cc8239 pv-grub-hd0_1.05-x86_64.gz
aki-7a69931a pv-grub-hd0_1.05-i386.gz
aki-70cb0e10 pv-grub-hd0_1.05-x86_64.gz
Update PV-GRUB
We recommend that you always use the latest version of the PV-GRUB AKI, as not all versions of the PV-GRUB AKI are compatible with all instance types. Also, older versions of PV-GRUB are not available in all Regions, so if you copy an AMI that uses an older version to a Region that does not support that version, you will be unable to boot instances launched from that AMI until you update the kernel image. Use the following procedures to check your instance's version of PV-GRUB and update it if necessary.
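The JSON output below appears without the command that produces it. A describe-instance-attribute invocation along these lines returns the kernel image for an instance (instance_id is a placeholder; depending on your CLI version, the KernelId may be nested under a Value key):

```
$ aws ec2 describe-instance-attribute --instance-id instance_id --attribute kernel
```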
{
"InstanceId": "instance_id",
"KernelId": "aki-70cb0e10"
}
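To identify the PV-GRUB release for that kernel ID, describe the image; the second output shown below has this shape. The image ID here is the one from the previous example output:

```
$ aws ec2 describe-images --image-ids aki-70cb0e10
```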
{
"Images": [
{
"VirtualizationType": "paravirtual",
"Name": "pv-grub-hd0_1.05-x86_64.gz",
...
"Description": "PV-GRUB release 1.05, 64-bit"
}
]
}
This kernel image is PV-GRUB 1.05. If your PV-GRUB version is not the newest version (as shown in
Amazon PV-GRUB Kernel Image IDs (p. 217)), you should update it using the following procedure.
1. Identify the latest PV-GRUB AKI for your Region and processor architecture from Amazon PV-GRUB
Kernel Image IDs (p. 217).
2. Stop your instance. Your instance must be stopped to modify the kernel image used.
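The remaining steps of the update procedure were not captured above. As a sketch, the kernel attribute can be changed with the generic attribute form of modify-instance-attribute and the instance restarted; the AKI ID is illustrative, so substitute the latest hd0 AKI for your Region and architecture:

```
$ aws ec2 stop-instances --instance-ids instance_id
$ aws ec2 modify-instance-attribute --instance-id instance_id --attribute kernel --value aki-70cb0e10
$ aws ec2 start-instances --instance-ids instance_id
```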
Important
xrdp is the remote desktop software bundled in the AMI. By default, xrdp uses a self-signed
TLS certificate to encrypt remote desktop sessions. Neither AWS nor the xrdp maintainers
recommend using self-signed certificates in production. Instead, obtain a certificate from an
appropriate certificate authority (CA) and install it on your instances. For more information
about TLS configuration, see TLS security layer on the xrdp wiki.
Prerequisite
To run the commands shown in this topic, you must install the AWS Command Line Interface (AWS CLI)
or AWS Tools for Windows PowerShell, and configure your AWS profile.
Options
1. Install the AWS CLI – For more information, see Installing the AWS CLI and Configuration basics in the
AWS Command Line Interface User Guide.
2. Install the Tools for Windows PowerShell – For more information, see Installing the AWS Tools for
Windows PowerShell and Shared credentials in the AWS Tools for Windows PowerShell User Guide.
1. To get the ID of the AMI for Amazon Linux 2 that includes MATE in the AMI name, use the describe-images command from your local command line tool.
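The command was omitted above; the following is a sketch. The amzn2*MATE* name pattern is an assumption based on the description of the AMI name:

```
$ aws ec2 describe-images --owners amazon \
    --filters "Name=name,Values=amzn2*MATE*" \
    --query "Images[*].[ImageId,Name,Description]"
```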
If you already have a certificate and key, copy them to the /etc/xrdp/ directory as follows:
• Certificate — /etc/xrdp/cert.pem
• Key — /etc/xrdp/key.pem
If you do not have a certificate and key, use the following command to generate them in the /etc/xrdp directory.
$ sudo openssl req -x509 -sha384 -newkey rsa:3072 -nodes -keyout /etc/xrdp/key.pem -out /etc/xrdp/cert.pem -days 365
Note
This command generates a certificate that is valid for 365 days.
5. Open an RDP client on the computer from which you will connect to the instance (for example,
Remote Desktop Connection on a computer running Microsoft Windows). Enter ec2-user as the
user name and enter the password that you set in the previous step.
You can turn off the GUI environment at any time by running one of the following commands on your
Linux instance.
To turn the GUI back on, you can run one of the following commands on your Linux instance.
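The commands themselves were omitted above. Assuming the GUI session is delivered through the xrdp service described earlier, stopping or disabling that service is one way to turn the GUI off, and enabling or starting it turns it back on; treat these as a sketch rather than the guide's exact commands:

```
# Turn the GUI off: stop xrdp now, or disable it so it does not start at boot
$ sudo systemctl stop xrdp
$ sudo systemctl disable xrdp

# Turn the GUI back on
$ sudo systemctl enable xrdp
$ sudo systemctl start xrdp
```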
Instances and AMIs
Before you launch a production environment, you need to answer the following questions.
Q. What instance type best meets my needs?
Amazon EC2 provides different instance types to enable you to choose the CPU, memory, storage, and networking capacity that you need to run your applications. For more information, see Instance types (p. 226).
Q. What purchasing option best meets my needs?
Amazon EC2 supports On-Demand Instances (the default), Spot Instances, and Reserved Instances.
For more information, see Instance purchasing options (p. 375).
Q. Which type of root volume meets my needs?
Each instance is backed by Amazon EBS or backed by instance store. Select an AMI based on which
type of root volume you need. For more information, see Storage for the root device (p. 96).
Q. Can I remotely manage a fleet of EC2 instances and machines in my hybrid environment?
AWS Systems Manager enables you to remotely and securely manage the configuration of your
Amazon EC2 instances, and your on-premises instances and virtual machines (VMs) in hybrid
environments, including VMs from other cloud providers. For more information, see the AWS
Systems Manager User Guide.
Your instances keep running until you stop, hibernate, or terminate them, or until they fail. If an instance
fails, you can launch a new one from the AMI.
Instances
An instance is a virtual server in the cloud. Its configuration at launch is a copy of the AMI that you
specified when you launched the instance.
You can launch different types of instances from a single AMI. An instance type essentially determines
the hardware of the host computer used for your instance. Each instance type offers different compute
and memory capabilities. Select an instance type based on the amount of memory and computing power
that you need for the application or software that you plan to run on the instance. For more information
about the hardware specifications for each Amazon EC2 instance type, see Amazon EC2 Instance Types.
After you launch an instance, it looks like a traditional host, and you can interact with it as you would any
computer. You have complete control of your instances; you can use sudo to run commands that require
root privileges.
Your AWS account has a limit on the number of instances that you can have running. For more
information about this limit, and how to request an increase, see How many instances can I run in
Amazon EC2 in the Amazon EC2 General FAQ.
Your instance may include local storage volumes, known as instance store volumes, which you
can configure at launch time with block device mapping. For more information, see Block device
mappings (p. 1647). After these volumes have been added to and mapped on your instance, they are
available for you to mount and use. If your instance fails, or if your instance is stopped or terminated,
the data on these volumes is lost; therefore, these volumes are best used for temporary data. To keep
important data safe, you should use a replication strategy across multiple instances, or store your
persistent data in Amazon S3 or Amazon EBS volumes. For more information, see Storage (p. 1324).
Stop an instance
When an instance is stopped, the instance performs a normal shutdown, and then transitions to a
stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a
later time.
You are not charged for additional instance usage while the instance is in a stopped state. A minimum of
one minute is charged for every transition from a stopped state to a running state. If the instance type
was changed while the instance was stopped, you will be charged the rate for the new instance type
after the instance is started. All of the associated Amazon EBS usage of your instance, including root
device usage, is billed using typical Amazon EBS prices.
When an instance is in a stopped state, you can attach or detach Amazon EBS volumes. You can also
create an AMI from the instance, and you can change the kernel, RAM disk, and instance type.
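For example, a volume can be attached to a stopped instance with attach-volume; the volume ID and device name here are placeholders:

```
$ aws ec2 stop-instances --instance-ids instance_id
$ aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id instance_id --device /dev/sdf
```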
Terminate an instance
When an instance is terminated, the instance performs a normal shutdown. The root device volume is
deleted by default, but any attached Amazon EBS volumes are preserved by default, determined by each
volume's deleteOnTermination attribute setting. The instance itself is also deleted, and you can't
start the instance again at a later time.
To prevent accidental termination, you can disable instance termination. If you do so, ensure that
the disableApiTermination attribute is set to true for the instance. To control the behavior
of an instance shutdown, such as shutdown -h in Linux or shutdown in Windows, set the
instanceInitiatedShutdownBehavior instance attribute to stop or terminate as desired.
Instances with Amazon EBS volumes for the root device default to stop, and instances with instance-store root devices are always terminated as the result of an instance shutdown.
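As a sketch, both attributes can be set with the generic attribute form of modify-instance-attribute (instance_id is a placeholder):

```
$ aws ec2 modify-instance-attribute --instance-id instance_id --attribute disableApiTermination --value true
$ aws ec2 modify-instance-attribute --instance-id instance_id --attribute instanceInitiatedShutdownBehavior --value terminate
```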
AMIs
Amazon Web Services (AWS) publishes many Amazon Machine Images (AMIs) that contain common
software configurations for public use. In addition, members of the AWS developer community have
published their own custom AMIs. You can also create your own custom AMI or AMIs; doing so enables
you to quickly and easily start new instances that have everything you need. For example, if your
application is a website or a web service, your AMI could include a web server, the associated static
content, and the code for the dynamic pages. As a result, after you launch an instance from this AMI,
your web server starts, and your application is ready to accept requests.
All AMIs are categorized as either backed by Amazon EBS, which means that the root device for an
instance launched from the AMI is an Amazon EBS volume, or backed by instance store, which means
that the root device for an instance launched from the AMI is an instance store volume created from a
template stored in Amazon S3.
The description of an AMI indicates the type of root device (either ebs or instance store). This is
important because there are significant differences in what you can do with each type of AMI. For more
information about these differences, see Storage for the root device (p. 96).
You can deregister an AMI when you have finished using it. After you deregister an AMI, you can't use it
to launch new instances. Existing instances launched from the AMI are not affected. Therefore, if you are
also finished with the instances launched from these AMIs, you should terminate them.
Instance types
When you launch an instance, the instance type that you specify determines the hardware of the host
computer used for your instance. Each instance type offers different compute, memory, and storage
capabilities, and is grouped in an instance family based on these capabilities. Select an instance type
based on the requirements of the application or software that you plan to run on your instance.
Amazon EC2 provides each instance with a consistent and predictable amount of CPU capacity,
regardless of its underlying hardware.
Amazon EC2 dedicates some resources of the host computer, such as CPU, memory, and instance
storage, to a particular instance. Amazon EC2 shares other resources of the host computer, such as the
network and the disk subsystem, among instances. If each instance on a host computer tries to use
as much of one of these shared resources as possible, each receives an equal share of that resource.
However, when a resource is underused, an instance can consume a higher share of that resource while
it's available.
Each instance type provides higher or lower minimum performance from a shared resource. For example,
instance types with high I/O performance have a larger allocation of shared resources. Allocating a larger
share of shared resources also reduces the variance of I/O performance. For most applications, moderate
I/O performance is more than enough. However, for applications that require greater or more consistent
I/O performance, consider an instance type with higher I/O performance.
Contents
• Available instance types (p. 227)
• Hardware specifications (p. 231)
• AMI virtualization types (p. 232)
• Instances built on the Nitro System (p. 232)
• Networking and storage features (p. 233)
• Instance limits (p. 237)
• General purpose instances (p. 237)
Available instance types
Type Sizes
C1 c1.medium | c1.xlarge
G2 g2.2xlarge | g2.8xlarge
T1 t1.micro
Hardware specifications
For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon
EC2 Instance Types.
To determine which instance type best meets your needs, we recommend that you launch an instance
and use your own benchmark application. Because you pay by the instance second, it's convenient and
inexpensive to test multiple instance types before making a decision. If your needs change, even after
you make a decision, you can change the instance type later. For more information, see Change the
instance type (p. 367).
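A change of instance type for an EBS-backed instance follows the stop, modify, start pattern; the target type here is illustrative:

```
$ aws ec2 stop-instances --instance-ids instance_id
$ aws ec2 modify-instance-attribute --instance-id instance_id --attribute instanceType --value m5.xlarge
$ aws ec2 start-instances --instance-ids instance_id
```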
Naming conventions
Instance type names combine the instance family, generation, and size. They can also indicate additional
capabilities, such as:
• a – AMD processors
• g – AWS Graviton processors
• i – Intel processors
• d – Instance store volumes
• n – Network optimization
• b – Block storage optimization
• e – Extra storage or memory
• z – High frequency
Processor features
Intel processor features
Amazon EC2 instances that run on Intel processors may include the following features. Not all of the
following processor features are supported by all instance types. For detailed information about which
features are available for each instance type, see Amazon EC2 Instance Types.
• Intel AES New Instructions (AES-NI) — Intel AES-NI encryption instruction set improves upon the
original Advanced Encryption Standard (AES) algorithm to provide faster data protection and greater
security. All current generation EC2 instances support this processor feature.
• Intel Advanced Vector Extensions (Intel AVX, Intel AVX2, and Intel AVX-512) — Intel AVX and Intel
AVX2 are 256-bit, and Intel AVX-512 is a 512-bit instruction set extension designed for applications
that are Floating Point (FP) intensive. Intel AVX instructions improve performance for applications like
image and audio/video processing, scientific simulations, financial analytics, and 3D modeling and
analysis. These features are only available on instances launched with HVM AMIs.
• Intel Turbo Boost Technology — Intel Turbo Boost Technology processors automatically run cores
faster than the base operating frequency.
• Intel Deep Learning Boost (Intel DL Boost) — Accelerates AI deep learning use cases. The 2nd Gen
Intel Xeon Scalable processors extend Intel AVX-512 with a new Vector Neural Network Instruction
(VNNI/INT8) that significantly increases deep learning inference performance over previous generation
Intel Xeon Scalable processors (with FP32) for image recognition/segmentation, object detection,
speech recognition, language translation, recommendation systems, reinforcement learning, and more.
VNNI may not be compatible with all Linux distributions.
The following instances support VNNI: M5n, R5n, M5dn, M5zn, R5b, R5dn, D3, and D3en. C5 and C5d
instances support VNNI for only 12xlarge, 24xlarge, and metal instances.
Confusion may result from industry naming conventions for 64-bit CPUs. Chip manufacturer Advanced
Micro Devices (AMD) introduced the first commercially successful 64-bit architecture based on the Intel
x86 instruction set. Consequently, the architecture is widely referred to as AMD64 regardless of the
chip manufacturer. Windows and several Linux distributions follow this practice. This explains why the
internal system information on an instance running Ubuntu or Windows displays the CPU architecture as
AMD64 even though the instances are running on Intel hardware.
AMI virtualization types
For best performance, we recommend that you use an HVM AMI. In addition, HVM AMIs are required to take advantage of enhanced networking. HVM virtualization uses hardware-assist technology provided by the AWS platform. With HVM virtualization, the guest VM runs as if it were on a native hardware platform, except that it still uses PV network and storage drivers for improved performance.
Instances built on the Nitro System
The Nitro System provides bare metal capabilities that eliminate virtualization overhead and support workloads that require full access to host hardware. Bare metal instances are well suited for the following:
• Workloads that require access to low-level hardware features (for example, Intel VT) that are not
available or fully supported in virtualized environments
• Applications that require a non-virtualized environment for licensing or support
Nitro components
The following components are part of the Nitro System:
• Nitro card
• Local NVMe storage volumes
• Networking hardware support
• Management
• Monitoring
• Security
• Nitro security chip, integrated into the motherboard
• Nitro hypervisor - A lightweight hypervisor that manages memory and CPU allocation and delivers
performance that is indistinguishable from bare metal for most workloads.
Instance types
The following instances are built on the Nitro System:
• Virtualized: A1, C5, C5a, C5ad, C5d, C5n, C6g, C6gd, C6gn, C6i, D3, D3en, DL1, G4, G4ad, G5, G5g,
Hpc6a, I3en, Im4gn, Inf1, Is4gen, M5, M5a, M5ad, M5d, M5dn, M5n, M5zn, M6a, M6g, M6gd, M6i,
p3dn.24xlarge, P4, R5, R5a, R5ad, R5b, R5d, R5dn, R5n, R6g, R6gd, R6i, T3, T3a, T4g, high memory
(u-*), VT1, X2gd, and z1d
• Bare metal: a1.metal, c5.metal, c5d.metal, c5n.metal, c6g.metal, c6gd.metal, i3.metal,
i3en.metal, m5.metal, m5d.metal, m5dn.metal, m5n.metal, m5zn.metal, m6g.metal,
m6gd.metal, mac1.metal, r5.metal, r5b.metal, r5d.metal, r5dn.metal, r5n.metal,
r6g.metal, r6gd.metal, r6i.metal, u-6tb1.metal, u-9tb1.metal, u-12tb1.metal,
u-18tb1.metal, u-24tb1.metal, x2gd.metal, and z1d.metal
Networking features
• IPv6 is supported on all current generation instance types and the C3, R3, and I2 previous generation
instance types.
• To maximize the networking and bandwidth performance of your instance type, you can do the
following:
• Launch supported instance types into a cluster placement group to optimize your instances for
high performance computing (HPC) applications. Instances in a common cluster placement group
can benefit from high-bandwidth, low-latency networking. For more information, see Placement
groups (p. 1167).
• Enable enhanced networking for supported current generation instance types to get significantly
higher packet per second (PPS) performance, lower network jitter, and lower latencies. For more
information, see Enhanced networking on Linux (p. 1100).
• Current generation instance types that are enabled for enhanced networking have the following
networking performance attributes:
• Traffic within the same Region over private IPv4 or IPv6 can support 5 Gbps for single-flow traffic
and up to 25 Gbps for multi-flow traffic (depending on the instance type).
• Traffic to and from Amazon S3 buckets within the same Region over the public IP address space or
through a VPC endpoint can use all available instance aggregate bandwidth.
• The maximum transmission unit (MTU) supported varies across instance types. All Amazon EC2
instance types support standard Ethernet V2 1500 MTU frames. All current generation instances
support 9001 MTU, or jumbo frames, and some previous generation instances support them as well.
For more information, see Network maximum transmission unit (MTU) for your EC2 instance (p. 1179).
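To check or set the MTU on an instance, the iproute2 ip command can be used; the interface name eth0 is an assumption and may differ (for example, ens5 on Nitro-based instances):

```
$ ip link show eth0
$ sudo ip link set dev eth0 mtu 9001
```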
Storage features
• Some instance types support EBS volumes and instance store volumes, while other instance types
support only EBS volumes. Some instance types that support instance store volumes use solid state
drives (SSD) to deliver very high random I/O performance. Some instance types support NVMe
instance store volumes. Some instance types support NVMe EBS volumes. For more information, see
Amazon EBS and NVMe on Linux instances (p. 1552) and NVMe SSD volumes (p. 1627).
• To obtain additional, dedicated capacity for Amazon EBS I/O, you can launch some instance types as
EBS–optimized instances. Some instance types are EBS–optimized by default. For more information,
see Amazon EBS–optimized instances (p. 1556).
The following table summarizes the networking and storage features supported by previous generation
instance types.
G2 SSD Yes No
M3 SSD No No
Instance limits
There is a limit on the total number of instances that you can launch in a Region, and there are additional
limits on some instance types.
For more information about the default limits, see How many instances can I run in Amazon EC2?
For more information about viewing your current limits or requesting an increase in your current limits,
see Amazon EC2 service quotas (p. 1680).
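Current values can also be inspected from the CLI through the Service Quotas service; for example, the following lists the EC2 quotas for the Region, which can then be filtered for the instance limits of interest:

```
$ aws service-quotas list-service-quotas --service-code ec2
```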
Bare metal instances, such as m5.metal, provide your applications with direct access to physical
resources of the host server, such as processors and memory.
M5zn
These instances are ideal for applications that benefit from extremely high single-thread performance,
high throughput, and low latency networking. They are well-suited for the following:
• Gaming
• High performance computing
• Simulation modeling
Bare metal instances, such as m5zn.metal, provide your applications with direct access to physical
resources of the host server, such as processors and memory.
M6g and M6gd instances
These instances are well suited for workloads such as the following:
• Application servers
• Microservices
• Gaming servers
• Midsize data stores
• Caching fleets
Bare metal instances, such as m6g.metal, provide your applications with direct access to physical
resources of the host server, such as processors and memory.
M6i instances
These instances are well suited for general-purpose workloads.
Bare metal instances, such as m6i.metal, provide your applications with direct access to physical
resources of the host server, such as processors and memory.
Mac1 instances
These instances are powered by Apple Mac mini computers. They provide up to 10 Gbps of network
bandwidth and 8 Gbps EBS bandwidth through high-speed Thunderbolt 3 connections. They are well
suited to develop, build, test, and sign applications for Apple devices, such as iPhone, iPad, iPod, Mac,
Apple Watch, and Apple TV.
For more information, see Amazon EC2 Mac instances (p. 286).
For more information, see Amazon EC2 T2 Instances, Amazon EC2 T3 Instances, and Amazon EC2 T4g
Instances.
Contents
• Hardware specifications (p. 239)
• Instance performance (p. 243)
• Network performance (p. 243)
• SSD I/O performance (p. 247)
• Instance features (p. 249)
• Release notes (p. 250)
• Burstable performance instances (p. 251)
• Amazon EC2 Mac instances (p. 286)
Hardware specifications
The following is a summary of the hardware specifications for general purpose instances.
Instance type vCPUs Memory (GiB)
m4.large 2 8
m4.xlarge 4 16
m4.2xlarge 8 32
m4.4xlarge 16 64
m4.10xlarge 40 160
m4.16xlarge 64 256
m5.large 2 8
m5.xlarge 4 16
m5.2xlarge 8 32
m5.4xlarge 16 64
m5.8xlarge 32 128
m5.12xlarge 48 192
m5.16xlarge 64 256
m5.24xlarge 96 384
m5.metal 96 384
m5a.large 2 8
m5a.xlarge 4 16
m5a.2xlarge 8 32
m5a.4xlarge 16 64
m5a.8xlarge 32 128
m5a.12xlarge 48 192
m5a.16xlarge 64 256
m5a.24xlarge 96 384
m5ad.large 2 8
m5ad.xlarge 4 16
m5ad.2xlarge 8 32
m5ad.4xlarge 16 64
m5ad.8xlarge 32 128
m5ad.12xlarge 48 192
m5ad.16xlarge 64 256
m5ad.24xlarge 96 384
m5d.large 2 8
m5d.xlarge 4 16
m5d.2xlarge 8 32
m5d.4xlarge 16 64
m5d.8xlarge 32 128
m5d.12xlarge 48 192
m5d.16xlarge 64 256
m5d.24xlarge 96 384
m5d.metal 96 384
m5dn.large 2 8
m5dn.xlarge 4 16
m5dn.2xlarge 8 32
m5dn.4xlarge 16 64
m5dn.8xlarge 32 128
m5dn.12xlarge 48 192
m5dn.16xlarge 64 256
m5dn.24xlarge 96 384
m5dn.metal 96 384
m5n.large 2 8
m5n.xlarge 4 16
m5n.2xlarge 8 32
m5n.4xlarge 16 64
m5n.8xlarge 32 128
m5n.12xlarge 48 192
m5n.16xlarge 64 256
m5n.24xlarge 96 384
m5n.metal 96 384
m5zn.large 2 8
m5zn.xlarge 4 16
m5zn.2xlarge 8 32
m5zn.3xlarge 12 48
m5zn.6xlarge 24 96
m5zn.12xlarge 48 192
m5zn.metal 48 192
m6a.large 2 8
m6a.xlarge 4 16
m6a.2xlarge 8 32
m6a.4xlarge 16 64
m6a.8xlarge 32 128
m6a.12xlarge 48 192
m6a.16xlarge 64 256
m6a.24xlarge 96 384
m6g.medium 1 4
m6g.large 2 8
m6g.xlarge 4 16
m6g.2xlarge 8 32
m6g.4xlarge 16 64
m6g.8xlarge 32 128
m6g.12xlarge 48 192
m6g.16xlarge 64 256
m6g.metal 64 256
m6gd.medium 1 4
m6gd.large 2 8
m6gd.xlarge 4 16
m6gd.2xlarge 8 32
m6gd.4xlarge 16 64
m6gd.8xlarge 32 128
m6gd.12xlarge 48 192
m6gd.16xlarge 64 256
m6gd.metal 64 256
m6i.large 2 8
m6i.xlarge 4 16
m6i.2xlarge 8 32
m6i.4xlarge 16 64
m6i.8xlarge 32 128
m6i.12xlarge 48 192
m6i.16xlarge 64 256
m6i.24xlarge 96 384
mac1.metal 12 32
t2.nano 1 0.5
t2.micro 1 1
t2.small 1 2
t2.medium 2 4
t2.large 2 8
t2.xlarge 4 16
t2.2xlarge 8 32
t3.nano 2 0.5
t3.micro 2 1
t3.small 2 2
t3.medium 2 4
t3.large 2 8
t3.xlarge 4 16
t3.2xlarge 8 32
t3a.nano 2 0.5
t3a.micro 2 1
t3a.small 2 2
t3a.medium 2 4
t3a.large 2 8
t3a.xlarge 4 16
t3a.2xlarge 8 32
t4g.nano 2 0.5
t4g.micro 2 1
t4g.small 2 2
t4g.medium 2 4
t4g.large 2 8
t4g.xlarge 4 16
t4g.2xlarge 8 32
For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon
EC2 Instance Types.
For more information about specifying CPU options, see Optimize CPU options (p. 676).
Instance performance
EBS-optimized instances enable you to get consistently high performance for your EBS volumes by
eliminating contention between Amazon EBS I/O and other network traffic from your instance. Some
general purpose instances are EBS-optimized by default at no additional cost. For more information, see
Amazon EBS–optimized instances (p. 1556).
Some general purpose instance types provide the ability to control processor C-states and P-states on
Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states control the
desired performance (in CPU frequency) from a core. For more information, see Processor state control
for your EC2 instance (p. 663).
Network performance
You can enable enhanced networking on supported instance types to provide lower latencies, lower
network jitter, and higher packet-per-second (PPS) performance. Most applications do not consistently
need a high level of network performance, but can benefit from access to increased bandwidth when
they send or receive data. For more information, see Enhanced networking on Linux (p. 1100).
The following is a summary of network performance for general purpose instances that support
enhanced networking.
m5dn.24xlarge | m5dn.metal | m5n.24xlarge | m5n.metal | m5zn.12xlarge | m5zn.metal: 100 Gbps, ENA (p. 1101), EFA (p. 1128)
† These instances have a baseline bandwidth and can use a network I/O credit mechanism to burst
beyond their baseline bandwidth on a best effort basis. For more information, see instance network
bandwidth (p. 1098).
Instance type Baseline bandwidth (Gbps) Burst bandwidth (Gbps)
m5.large .75 10
m5.xlarge 1.25 10
m5.2xlarge 2.5 10
m5.4xlarge 5 10
m5a.large .75 10
m5a.xlarge 1.25 10
m5a.2xlarge 2.5 10
m5a.4xlarge 5 10
m5ad.large .75 10
m5ad.xlarge 1.25 10
m5ad.2xlarge 2.5 10
m5ad.4xlarge 5 10
m5d.large .75 10
m5d.xlarge 1.25 10
m5d.2xlarge 2.5 10
m5d.4xlarge 5 10
m5dn.large 2.1 25
m5dn.xlarge 4.1 25
m5dn.2xlarge 8.125 25
m5dn.4xlarge 16.25 25
m5n.large 2.1 25
m5n.xlarge 4.1 25
m5n.2xlarge 8.125 25
m5n.4xlarge 16.25 25
m5zn.large 3 25
m5zn.xlarge 5 25
m5zn.2xlarge 10 25
m5zn.3xlarge 15 25
m6g.medium .5 10
m6g.large .75 10
m6g.xlarge 1.25 10
m6g.2xlarge 2.5 10
m6g.4xlarge 5 10
m6gd.medium .5 10
m6gd.large .75 10
m6gd.xlarge 1.25 10
m6gd.2xlarge 2.5 10
m6gd.4xlarge 5 10
t3.nano .032 5
t3.micro .064 5
t3.small .128 5
t3.medium .256 5
t3.large .512 5
t3.xlarge 1.024 5
t3.2xlarge 2.048 5
t3a.nano .032 5
t3a.micro .064 5
t3a.small .128 5
t3a.medium .256 5
t3a.large .512 5
t3a.xlarge 1.024 5
t3a.2xlarge 2.048 5
t4g.nano .032 5
t4g.micro .064 5
t4g.small .128 5
t4g.medium .256 5
t4g.large .512 5
t4g.xlarge 1.024 5
t4g.2xlarge 2.048 5
As you fill the SSD-based instance store volumes for your instance, the number of write IOPS that
you can achieve decreases. This is due to the extra work the SSD controller must do to find available
space, rewrite existing data, and erase unused space so that it can be rewritten. This process of
garbage collection results in internal write amplification to the SSD, expressed as the ratio of SSD write
operations to user write operations. This decrease in performance is even larger if the write operations
are not in multiples of 4,096 bytes or not aligned to a 4,096-byte boundary. If you write a smaller
amount of bytes or bytes that are not aligned, the SSD controller must read the surrounding data and
store the result in a new location. This pattern results in significantly increased write amplification,
increased latency, and dramatically reduced I/O performance.
SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy
is to reserve space in the SSD instance storage so that the controller can more efficiently manage the
space available for write operations. This is called over-provisioning. The SSD-based instance store
volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write
amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller
can use it for over-provisioning. This decreases the storage that you can use, but increases performance
even if the disk is close to full capacity.
For instance store volumes that support TRIM, you can use the TRIM command to notify the SSD
controller whenever you no longer need data that you've written. This provides the controller with more
free space, which can reduce write amplification and increase performance. For more information, see
Instance store volume TRIM support (p. 1628).
Instance features
The following is a summary of features for general purpose instances:
Instance type    EBS only    NVMe EBS    Instance store    Placement group
M4               Yes         No          No                Yes
T2               Yes         No          No                No
T3               Yes         Yes         No                No
Release notes
• M5, M5d, and T3 instances feature a 3.1 GHz Intel Xeon Platinum 8000 series processor from either the
first generation (Skylake-SP) or second generation (Cascade Lake).
• M5a, M5ad, and T3a instances feature a 2.5 GHz AMD EPYC 7000 series processor.
• M5zn instances are powered by Intel Cascade Lake CPUs that deliver all-core turbo frequency of up to
4.5 GHz and up to 100 Gbps network bandwidth.
• M6g and M6gd instances feature an AWS Graviton2 processor based on 64-bit Arm architecture.
• M6i instances feature third generation Intel Xeon Scalable processors (Ice Lake) and support the Intel
Advanced Vector Extensions 512 (Intel AVX-512) instruction set.
• Mac1 instances feature a 3.2 GHz Intel eighth-generation (Coffee Lake) Core i7 processor.
• T4g instances feature an AWS Graviton2 processor based on 64-bit Arm architecture.
• Instances built on the Nitro System (p. 232), M4, t2.large and larger, t3.large and larger, and
t3a.large and larger instance types require 64-bit HVM AMIs. These instance types have high memory
and require a 64-bit operating system to take advantage of that capacity. HVM AMIs provide superior
performance in comparison to paravirtual (PV) AMIs on high-memory instance types. In addition, you
must use an HVM AMI to take advantage of enhanced networking.
• Instances built on the Nitro System (p. 232) have the following requirements:
• NVMe drivers (p. 1552) must be installed
• Elastic Network Adapter (ENA) drivers (p. 1101) must be installed
• The following kernel configuration options must be enabled:
  CONFIG_HOTPLUG_PCI_PCIE=y
  CONFIG_PCIEASPM=y
• Bare metal instances use a PCI-based serial device rather than an I/O port-based serial device. The
upstream Linux kernel and the latest Amazon Linux AMIs support this device. Bare metal instances also
provide an ACPI SPCR table to enable the system to automatically use the PCI-based serial device. The
latest Windows AMIs automatically use the PCI-based serial device.
• Instances built on the Nitro System should have systemd-logind or acpid installed to support clean
shutdown through API requests.
• There is a limit on the total number of instances that you can launch in a Region, and there are
additional limits on some instance types. For more information, see How many instances can I run in
Amazon EC2? in the Amazon EC2 FAQ.
These low-to-moderate CPU utilization workloads waste CPU cycles and, as a result, you pay for more
than you use. To overcome this, you can use the low-cost burstable general purpose instances, which are
the T instances.
The T instance family provides a baseline CPU performance with the ability to burst above the baseline
at any time for as long as required. The baseline CPU is defined to meet the needs of the majority
of general purpose workloads, including large-scale micro-services, web servers, small and medium
databases, data logging, code repositories, virtual desktops, development and test environments,
and business-critical applications. The T instances offer a balance of compute, memory, and network
resources, and provide you with the most cost-effective way to run a broad spectrum of general purpose
applications that have a low-to-moderate CPU usage. They can save you up to 15% in costs when
compared to M instances, and can lead to even more cost savings with smaller, more economical instance
sizes, offering as low as 2 vCPUs and 0.5 GiB of memory. The smaller T instance sizes, such as nano,
micro, small, and medium, are well suited for workloads that need a small amount of memory and do
not expect high CPU usage.
The T4g instance types are the latest generation of burstable instances. They provide the best price for
performance, and provide you with the lowest cost of all the EC2 instance types. The T4g instance types
are powered by Arm-based AWS Graviton2 processors with extensive ecosystem support from operating
systems vendors, independent software vendors, and popular AWS services and applications.
The following table summarizes the key differences between the burstable instance types.
Latest generation
T4g    Lowest cost EC2 instance type with up to 40% higher price/performance    AWS Graviton2 processors with Arm Neoverse N1 cores
T3a    Lowest cost x86-based instances with 10% lower costs vs T3 instances     AMD 1st gen EPYC processors
Previous generation
For more information about instance pricing and additional specifications, see Amazon EC2 Pricing and
Amazon EC2 Instance Types.
If your account is less than 12 months old, you can use a t2.micro instance for free (or a t3.micro
instance in Regions where t2.micro is unavailable) within certain usage limits. For more information,
see AWS Free Tier.
You can launch T instances using the following purchasing options:
• On-Demand Instances
• Reserved Instances
• Dedicated Instances (T3 only)
• Dedicated Hosts (T3 only, in standard mode only)
• Spot Instances
Contents
• Best practices (p. 253)
• Key concepts and definitions for burstable performance instances (p. 254)
• Unlimited mode for burstable performance instances (p. 260)
• Standard mode for burstable performance instances (p. 267)
• Work with burstable performance instances (p. 277)
• Monitor your CPU credits (p. 282)
Best practices
Follow these best practices to get the maximum benefit from burstable performance instances.
• Ensure that the instance size you choose meets the minimum memory requirements of your operating
system and applications. Operating systems with graphical user interfaces that consume significant
memory and CPU resources (for example, Windows) might require a t3.micro or larger instance size
for many use cases. As the memory and CPU requirements of your workload grow over time, you have
the flexibility with the T instances to scale to larger instance sizes of the same instance type, or to
select another instance type.
• Enable AWS Compute Optimizer for your account and review the Compute Optimizer
recommendations for your workload. Compute Optimizer can help assess whether instances should be
upsized to improve performance or downsized for cost savings.
• For additional requirements, see Release notes (p. 250).
Key concepts and definitions for burstable performance instances
Each burstable performance instance continuously earns credits when it stays below the CPU baseline,
and continuously spends credits when it bursts above the baseline. The amount of credits earned or
spent depends on the CPU utilization of the instance:
• If the CPU utilization is below baseline, then credits earned are greater than credits spent.
• If the CPU utilization is equal to baseline, then credits earned are equal to credits spent.
• If the CPU utilization is higher than baseline, then credits spent are higher than credits earned.
When the credits earned are greater than credits spent, then the difference is called accrued credits,
which can be used later to burst above baseline CPU utilization. Similarly, when the credits spent are
more than credits earned, then the instance behavior depends on the credit configuration mode—
Standard mode or Unlimited mode.
In Standard mode, when credits spent are more than credits earned, the instance uses the accrued credits
to burst above baseline CPU utilization. If there are no accrued credits remaining, then the instance
gradually comes down to baseline CPU utilization and cannot burst above baseline until it accrues more
credits.
In Unlimited mode, if the instance bursts above baseline CPU utilization, then the instance first uses
the accrued credits to burst. If there are no accrued credits remaining, then the instance spends surplus
credits to burst. When its CPU utilization falls below the baseline, it uses the CPU credits that it earns
to pay down the surplus credits that it spent earlier. The ability to earn CPU credits to pay down surplus
credits enables Amazon EC2 to average the CPU utilization of an instance over a 24-hour period. If the
average CPU usage over a 24-hour period exceeds the baseline, the instance is billed for the additional
usage at a flat additional rate per vCPU-hour.
Contents
• Key concepts and definitions (p. 254)
• Earn CPU credits (p. 257)
• CPU credit earn rate (p. 258)
• CPU credit accrual limit (p. 258)
• Accrued CPU credits life span (p. 259)
• Baseline utilization (p. 259)
CPU utilization
CPU utilization is the percentage of allocated EC2 compute units that are currently in use on the
instance. This metric measures the percentage of allocated CPU cycles that are being utilized on an
instance. The CPUUtilization CloudWatch metric shows CPU usage per instance and not CPU usage
per core. The baseline CPU specification of an instance is also based on the CPU usage per instance.
To measure CPU utilization using the AWS Management Console or the AWS CLI, see Get statistics
for a specific instance (p. 973).
CPU credit
A unit of vCPU-time.
Examples:
1 CPU credit = 1 vCPU * 100% utilization * 1 minute
1 CPU credit = 1 vCPU * 50% utilization * 2 minutes
1 CPU credit = 2 vCPUs * 25% utilization * 1 minute
Baseline utilization
The baseline utilization is the level at which the CPU can be utilized for a net credit balance of zero,
when the number of CPU credits being earned matches the number of CPU credits being used. Baseline
utilization is also known as the baseline. Baseline utilization is expressed as a percentage of vCPU
utilization, which is calculated as follows:
Baseline utilization % = (number of credits earned per hour / number of vCPUs) / 60 minutes
Earned credits
Number of credits earned per hour = % baseline utilization * number of vCPUs * 60 minutes
Example:
A t3.nano with 2 vCPUs and a baseline utilization of 5% earns 6 credits per hour, calculated as
follows:
Number of credits earned per hour = 5% * 2 vCPUs * 60 minutes = 6 credits
Spent credits
CPU credits spent per minute = Number of vCPUs * CPU utilization * 1 minute
Accrued credits
Unspent CPU credits when an instance uses fewer credits than are required for baseline utilization. In
other words, accrued credits = (Earned credits – Used credits) below baseline.
Example:
If a t3.nano is running at 2% CPU utilization, which is below its baseline of 5%, for an hour, the
accrued credits are calculated as follows:
Accrued CPU credits = Earned credits per hour – Used credits per hour = 6 – (2 vCPUs * 2% CPU
utilization * 60 minutes) = 6 – 2.4 = 3.6 accrued credits per hour
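The earn, spend, and accrue arithmetic in the definitions above can be sketched as a short calculation. This is only an illustration of the formulas in this section (the function names are ours, not an AWS API), using the t3.nano figures from the examples:

```python
# Credit arithmetic for a burstable instance, using the formulas above.
# Figures are from the t3.nano examples: 2 vCPUs, 5% baseline utilization.

def credits_earned_per_hour(baseline_pct: float, vcpus: int) -> float:
    # Earned credits per hour = % baseline utilization * number of vCPUs * 60 minutes
    return baseline_pct * vcpus * 60 / 100

def credits_spent_per_hour(utilization_pct: float, vcpus: int) -> float:
    # Spent credits = number of vCPUs * CPU utilization * minutes (here, one hour)
    return vcpus * utilization_pct * 60 / 100

earned = credits_earned_per_hour(5, 2)   # 6.0 credits per hour
spent = credits_spent_per_hour(2, 2)     # 2.4 credits per hour at 2% utilization
accrued = round(earned - spent, 1)       # 3.6 accrued credits per hour
print(earned, spent, accrued)            # 6.0 2.4 3.6
```

Running this prints 6.0 2.4 3.6, matching the accrued-credits example above.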
Credit accrual limit
Depends on the instance size, but in general is equal to the maximum number of credits that can be
earned in 24 hours.
Example:
A t3.nano earns 6 credits per hour, so its credit accrual limit is 144 credits (6 credits per hour * 24
hours).
Launch credits
Only applicable for T2 instances configured for Standard mode. Launch credits are a limited number
of CPU credits that are allocated to a new T2 instance so that, when launched in Standard mode, it
can burst above the baseline.
Surplus credits
Credits that are spent by an instance after it depletes its accrued credit balance. The surplus credits
are designed for burstable instances to sustain high performance for an extended period of time,
and are only used in Unlimited mode. The surplus credits balance is used to determine how many
credits were used by the instance for bursting in Unlimited mode.
Standard mode
Credit configuration mode, which allows an instance to burst above the baseline by spending credits
it has accrued in its credit balance.
Unlimited mode
Credit configuration mode, which allows an instance to burst above the baseline by sustaining high
CPU utilization for any period of time whenever required. The hourly instance price automatically
covers all CPU usage spikes if the average CPU utilization of the instance is at or below the baseline
over a rolling 24-hour period or the instance lifetime, whichever is shorter. If the instance runs at
higher CPU utilization for a prolonged period, it can do so for a flat additional rate per vCPU-hour.
The following table summarizes the key credit differences between the burstable instance types.
Latest generation
Previous generation
Note
Unlimited mode is not supported for T3 instances that are launched on a Dedicated Host.
Each burstable performance instance continuously earns (at a millisecond-level resolution) a set rate
of CPU credits per hour, depending on the instance size. The accounting process for whether credits
are accrued or spent also happens at a millisecond-level resolution, so you don't have to worry about
overspending CPU credits; a short burst of CPU uses a small fraction of a CPU credit.
If a burstable performance instance uses fewer CPU resources than are required for baseline utilization
(such as when it is idle), the unspent CPU credits are accrued in the CPU credit balance. If a burstable
performance instance needs to burst above the baseline utilization level, it spends the accrued credits.
The more credits that a burstable performance instance has accrued, the more time it can burst beyond
its baseline when more CPU utilization is needed.
The following table lists the burstable performance instance types, the rate at which CPU credits are
earned per hour, the maximum number of earned CPU credits that an instance can accrue, the number of
vCPUs per instance, and the baseline utilization as a percentage of a full core (using a single vCPU).
Instance type    CPU credits earned per hour    Maximum earned credits that can be accrued*    vCPUs    Baseline utilization per vCPU**
T2
t2.nano          3                              72                                             1        5%
T3
T3a
T4g
* The number of credits that can be accrued is equivalent to the number of credits that can be earned
in a 24-hour period.
** The percentage baseline utilization in the table is per vCPU. In CloudWatch, CPU utilization is shown
per instance. For example, the CPU utilization for a t3.large instance operating at the baseline level
is shown as 30% in CloudWatch CPU metrics. For information about how to calculate the baseline
utilization, see Baseline utilization (p. 259).
*** Each vCPU is a thread of either an Intel Xeon core or an AMD EPYC core, except for T2 and T4g
instances.
The number of CPU credits earned per hour is determined by the instance size. For example, a t3.nano
earns six credits per hour, while a t3.small earns 24 credits per hour. The preceding table lists the
credit earn rate for all instances.
While earned credits never expire on a running instance, there is a limit to the number of earned credits
that an instance can accrue. The limit is determined by the CPU credit balance limit. After the limit is
reached, any new credits that are earned are discarded, as indicated by the following image. The full
bucket indicates the CPU credit balance limit, and the spillover indicates the newly earned credits that
exceed the limit.
The CPU credit balance limit differs for each instance size. For example, a t3.micro instance can accrue
a maximum of 288 earned CPU credits in the CPU credit balance. The preceding table lists the maximum
number of earned credits that each instance can accrue.
T2 Standard instances also earn launch credits. Launch credits do not count towards the CPU credit
balance limit. If a T2 instance has not spent its launch credits, and remains idle over a 24-hour period
while accruing earned credits, its CPU credit balance appears as over the limit. For more information, see
Launch credits (p. 268).
T4g, T3a and T3 instances do not earn launch credits. These instances launch as unlimited by default,
and therefore can burst immediately upon start without any launch credits. T3 instances launched on a
Dedicated Host launch as standard by default; unlimited mode is not supported for T3 instances
on a Dedicated Host.
For T2, the CPU credit balance does not persist between instance stops and starts. If you stop a T2
instance, the instance loses all its accrued credits.
For T4g, T3a and T3, the CPU credit balance persists for seven days after an instance stops and the
credits are lost thereafter. If you start the instance within seven days, no credits are lost.
For more information, see CPUCreditBalance in the CloudWatch metrics table (p. 283).
Baseline utilization
The baseline utilization is the level at which the CPU can be utilized for a net credit balance of zero,
when the number of CPU credits being earned matches the number of CPU credits being used. Baseline
utilization is also known as the baseline.
For example, a t3.nano instance, with 2 vCPUs, earns 6 credits per hour, resulting in a baseline
utilization of 5%, which is calculated as follows:
Baseline utilization % = (6 credits earned / 2 vCPUs) / 60 minutes = 5%
A t3.xlarge instance, with 4 vCPUs, earns 96 credits per hour, resulting in a baseline utilization of 40%
((96/4)/60).
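Both baseline figures follow directly from the formula (credits earned per hour / number of vCPUs) / 60 minutes; a minimal sketch (the function name is ours, for illustration only):

```python
# Baseline utilization % = (credits earned per hour / number of vCPUs) / 60 minutes

def baseline_utilization_pct(credits_per_hour: int, vcpus: int) -> float:
    # Multiply by 100 first so the integer examples divide out exactly.
    return credits_per_hour * 100 / vcpus / 60

print(baseline_utilization_pct(6, 2))    # t3.nano: 5.0
print(baseline_utilization_pct(96, 4))   # t3.xlarge: 40.0
```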
The following graph provides an example of a t3.large with an average CPU utilization below the
baseline.
For the vast majority of general-purpose workloads, instances configured as unlimited provide
ample performance without any additional charges. If the instance runs at higher CPU utilization for a
prolonged period, it can do so for a flat additional rate per vCPU-hour. For information about pricing, see
Amazon EC2 pricing and T2/T3/T4g Unlimited Mode Pricing.
If you use a t2.micro or t3.micro instance under the AWS Free Tier offer and use it in unlimited
mode, charges might apply if your average utilization over a rolling 24-hour period exceeds the baseline
utilization (p. 259) of the instance.
T4g, T3a and T3 instances launch as unlimited by default. If the average CPU usage over a 24-hour
period exceeds the baseline, you incur charges for surplus credits. If you launch Spot Instances as
unlimited and plan to use them immediately and for a short duration, with no idle time for accruing
CPU credits, you incur charges for surplus credits. We recommend that you launch your Spot Instances
in standard (p. 267) mode to avoid paying higher costs. For more information, see Surplus credits can
incur charges (p. 263) and Burstable performance instances (p. 483).
Note
T3 instances launched on a Dedicated Host launch as standard by default; unlimited mode is
not supported for T3 instances on a Dedicated Host.
Contents
• Unlimited mode concepts (p. 261)
• How Unlimited burstable performance instances work (p. 261)
The unlimited mode is a credit configuration option for burstable performance instances. It can be
enabled or disabled at any time for a running or stopped instance. You can set unlimited as the default
credit option at the account level per AWS Region, per burstable performance instance family, so that all
new burstable performance instances in the account launch using the default credit option.
If a burstable performance instance configured as unlimited depletes its CPU credit balance, it can
spend surplus credits to burst beyond the baseline (p. 259). When its CPU utilization falls below
the baseline, it uses the CPU credits that it earns to pay down the surplus credits that it spent earlier.
The ability to earn CPU credits to pay down surplus credits enables Amazon EC2 to average the CPU
utilization of an instance over a 24-hour period. If the average CPU usage over a 24-hour period exceeds
the baseline, the instance is billed for the additional usage at a flat additional rate per vCPU-hour.
The following graph shows the CPU usage of a t3.large. The baseline CPU utilization for a t3.large
is 30%. If the instance runs at 30% CPU utilization or less on average over a 24-hour period, there is
no additional charge because the cost is already covered by the instance hourly price. However, if the
instance runs at 40% CPU utilization on average over a 24-hour period, as shown in the graph, the
instance is billed for the additional 10% CPU usage at a flat additional rate per vCPU-hour.
For more information about the baseline utilization per vCPU for each instance type and how many
credits each instance type earns, see the credit table (p. 257).
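As a rough sketch of the billing described above, the extra charge for a t3.large averaging 40% CPU over 24 hours can be estimated as follows. The $0.05 per vCPU-hour figure is the example rate used elsewhere in this section, not a quoted price; see the Unlimited Mode Pricing page for actual rates.

```python
# Estimated surplus charge for a t3.large (2 vCPUs, 30% baseline) that
# averages 40% CPU utilization over a 24-hour period in unlimited mode.
# RATE_PER_VCPU_HOUR is an example value, not a current price.

VCPUS = 2
BASELINE = 0.30            # baseline utilization per vCPU
AVG_UTILIZATION = 0.40     # average CPU utilization over 24 hours
RATE_PER_VCPU_HOUR = 0.05  # example flat rate in USD

# The 10% overage, converted to vCPU-hours over the 24-hour window.
overage_vcpu_hours = (AVG_UTILIZATION - BASELINE) * VCPUS * 24  # 4.8
charge = round(overage_vcpu_hours * RATE_PER_VCPU_HOUR, 2)
print(charge)   # 0.24
```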
When determining whether you should use a burstable performance instance in unlimited mode,
such as T3, or a fixed performance instance, such as M5, you need to determine the breakeven CPU
usage. The breakeven CPU usage for a burstable performance instance is the point at which a burstable
performance instance costs the same as a fixed performance instance. The breakeven CPU usage helps
you determine the following:
• If the average CPU usage over a 24-hour period is at or below the breakeven CPU usage, use a
burstable performance instance in unlimited mode so that you can benefit from the lower price of a
burstable performance instance while getting the same performance as a fixed performance instance.
• If the average CPU usage over a 24-hour period is above the breakeven CPU usage, the burstable
performance instance will cost more than the equivalently-sized fixed performance instance. If a T3
instance continuously bursts at 100% CPU, you end up paying approximately 1.5 times the price of an
equivalently-sized M5 instance.
The following graph shows the breakeven CPU usage point where a t3.large costs the same as an
m5.large. The breakeven CPU usage point for a t3.large is 42.5%. If the average CPU usage is at
42.5%, the cost of running the t3.large is the same as an m5.large, and is more expensive if the
average CPU usage is above 42.5%. If the workload needs less than 42.5% average CPU usage, you can
benefit from the lower price of the t3.large while getting the same performance as an m5.large.
The following table shows how to calculate the breakeven CPU usage threshold so that you can
determine when it's less expensive to use a burstable performance instance in unlimited mode or a
fixed performance instance. The columns in the table are labeled A through K.
A    B    C    D    E = D - C    F    G    H = G / 60    I = E / H    J = (I / 60) / B    K = F + J
The following table shows the breakeven CPU usage (in %) for T3 instance types compared to the
similarly-sized M5 instance types.
Instance type    Breakeven CPU usage
t3.large         42.5%
t3.xlarge        52.5%
t3.2xlarge       52.5%
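Putting the column formulas together, the breakeven calculation for a t3.large against an m5.large can be sketched as below. The hourly prices are illustrative placeholders chosen so that the price difference is $0.0125, not current rates; check the Amazon EC2 pricing page for real values.

```python
# Breakeven CPU usage for a t3.large (unlimited) vs an m5.large, using the
# column formulas from the table above. Prices are illustrative, not actual.

B = 2          # vCPUs (t3.large)
C = 0.0835     # t3.large price per hour (placeholder)
D = 0.0960     # m5.large price per hour (placeholder)
F = 0.30       # t3.large baseline utilization per vCPU (30%)
G = 0.05       # surplus credit charge per vCPU-hour (example rate)

E = D - C             # hourly price difference
H = G / 60            # charge per CPU credit (1 credit = 1 vCPU-minute)
I = E / H             # extra credits per hour the price difference covers
J = (I / 60) / B      # extra utilization per vCPU those credits provide
K = F + J             # breakeven CPU usage

print(round(K * 100, 1))   # 42.5
```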
If the average CPU utilization of an instance is at or below the baseline, the instance incurs no additional
charges. Because an instance earns a maximum number of credits (p. 257) in a 24-hour period (for
example, a t3.micro instance can earn a maximum of 288 credits in a 24-hour period), it can spend
surplus credits up to that maximum without being charged.
However, if CPU utilization stays above the baseline, the instance cannot earn enough credits to pay
down the surplus credits that it has spent. The surplus credits that are not paid down are charged at
a flat additional rate per vCPU-hour. For information about the rate, see T2/T3/T4g Unlimited Mode
Pricing.
Surplus credits that were spent earlier are charged when any of the following occurs:
• The spent surplus credits exceed the maximum number of credits (p. 257) the instance can earn in a
24-hour period. Spent surplus credits above the maximum are charged at the end of the hour.
• The instance is stopped or terminated.
• The instance is switched from unlimited to standard.
Spent surplus credits are tracked by the CloudWatch metric CPUSurplusCreditBalance. Surplus
credits that are charged are tracked by the CloudWatch metric CPUSurplusCreditsCharged. For more
information, see Additional CloudWatch metrics for burstable performance instances (p. 282).
T2 Standard instances receive launch credits (p. 268), but T2 Unlimited instances do not. A T2
Unlimited instance can burst beyond the baseline at any time with no additional charge, as long as
its average CPU utilization is at or below the baseline over a rolling 24-hour window or its lifetime,
whichever is shorter. As such, T2 Unlimited instances do not require launch credits to achieve high
performance immediately after launch.
If a T2 instance is switched from standard to unlimited, any accrued launch credits are removed from
the CPUCreditBalance before the remaining CPUCreditBalance is carried over.
T4g, T3a and T3 instances never receive launch credits because they support Unlimited mode. The
Unlimited mode credit configuration enables T4g, T3a and T3 instances to use as much CPU as needed to
burst beyond baseline and for as long as needed.
You can switch from unlimited to standard, and from standard to unlimited, at any time on
a running or stopped instance. For more information, see Launch a burstable performance instance
as Unlimited or Standard (p. 277) and Modify the credit specification of a burstable performance
instance (p. 280).
You can set unlimited as the default credit option at the account level per AWS Region, per burstable
performance instance family, so that all new burstable performance instances in the account launch
using the default credit option. For more information, see Set the default credit specification for the
account (p. 281).
You can check whether your burstable performance instance is configured as unlimited or standard
using the Amazon EC2 console or the AWS CLI. For more information, see View the credit specification of
a burstable performance instance (p. 279) and View the default credit specification (p. 282).
CPUCreditBalance is a CloudWatch metric that tracks the number of credits accrued by an instance.
CPUSurplusCreditBalance is a CloudWatch metric that tracks the number of surplus credits spent by
an instance.
When you change an instance configured as unlimited to standard, any surplus credits that the
instance has already spent are charged.
To see if your instance is spending more credits than the baseline provides, you can use CloudWatch
metrics to track usage, and you can set up hourly alarms to be notified of credit usage. For more
information, see Monitor your CPU credits (p. 282).
The following examples explain credit use for instances that are configured as unlimited.
Examples
• Example 1: Explain credit use with T3 Unlimited (p. 265)
• Example 2: Explain credit use with T2 Unlimited (p. 266)
In this example, you see the CPU utilization of a t3.nano instance launched as unlimited, and how it
spends earned and surplus credits to sustain CPU utilization.
A t3.nano instance earns 144 CPU credits over a rolling 24-hour period, which it can redeem for 144
minutes of vCPU use. When it depletes its CPU credit balance (represented by the CloudWatch metric
CPUCreditBalance), it can spend surplus CPU credits—that it has not yet earned—to burst for as long
as it needs. Because a t3.nano instance earns a maximum of 144 credits in a 24-hour period, it can
spend surplus credits up to that maximum without being charged immediately. If it spends more than
144 CPU credits, it is charged for the difference at the end of the hour.
The intent of the example, illustrated by the following graph, is to show how an instance can burst using
surplus credits even after it depletes its CPUCreditBalance. The following workflow references the
numbered points on the graph:
P1 – At 0 hours on the graph, the instance is launched as unlimited and immediately begins to earn
credits. The instance remains idle from the time it is launched—CPU utilization is 0%—and no credits are
spent. All unspent credits are accrued in the credit balance. For the first 24 hours, CPUCreditUsage is at
0, and the CPUCreditBalance value reaches its maximum of 144.
P2 – For the next 12 hours, CPU utilization is at 2.5%, which is below the 5% baseline. The instance
earns more credits than it spends, but the CPUCreditBalance value cannot exceed its maximum of 144
credits.
P3 – For the next 24 hours, CPU utilization is at 7% (above the baseline), which requires a spend of 57.6
credits. The instance spends more credits than it earns, and the CPUCreditBalance value reduces to
86.4 credits.
P4 – For the next 12 hours, CPU utilization decreases to 2.5% (below the baseline), which requires a
spend of 36 credits. In the same time, the instance earns 72 credits. The instance earns more credits than
it spends, and the CPUCreditBalance value increases to 122 credits.
P5 – For the next 5 hours, the instance bursts at 100% CPU utilization, and spends a total of 570 credits
to sustain the burst. About an hour into this period, the instance depletes its entire CPUCreditBalance
of 122 credits, and starts to spend surplus credits to sustain the high CPU utilization, totaling 448
surplus credits in this period (570-122=448). When the CPUSurplusCreditBalance value reaches
144 CPU credits (the maximum a t3.nano instance can earn in a 24-hour period), any surplus credits
spent thereafter cannot be offset by earned credits. The surplus credits spent thereafter amounts to 304
credits (448-144=304), which results in a small additional charge at the end of the hour for 304 credits.
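The surplus-credit arithmetic at point P5 can be checked with a few lines (a sketch of the accounting only, not an AWS API):

```python
# Surplus-credit accounting for point P5 above: a t3.nano bursts at 100%
# CPU for 5 hours, needing 570 credits with only 122 accrued credits.

MAX_EARNED_24H = 144   # maximum credits a t3.nano can earn in 24 hours
accrued_balance = 122  # CPUCreditBalance at the start of the burst
credits_needed = 570   # total credits required to sustain the burst

surplus_spent = credits_needed - accrued_balance   # 448 surplus credits
charged = max(0, surplus_spent - MAX_EARNED_24H)   # 304 credits charged
print(surplus_spent, charged)   # 448 304
```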
P6 – For the next 13 hours, CPU utilization is at 5% (the baseline). The instance earns as
many credits as it spends, with no excess to pay down the CPUSurplusCreditBalance. The
CPUSurplusCreditBalance value remains at 144 credits.
P7 – For the last 24 hours in this example, the instance is idle and CPU utilization is 0%. During this time,
the instance earns 144 credits, which it uses to pay down the CPUSurplusCreditBalance.
In this example, you see the CPU utilization of a t2.nano instance launched as unlimited, and how it
spends earned and surplus credits to sustain CPU utilization.
A t2.nano instance earns 72 CPU credits over a rolling 24-hour period, which it can redeem for 72
minutes of vCPU use. When it depletes its CPU credit balance (represented by the CloudWatch metric
CPUCreditBalance), it can spend surplus CPU credits—that it has not yet earned—to burst for as long
as it needs. Because a t2.nano instance earns a maximum of 72 credits in a 24-hour period, it can spend
surplus credits up to that maximum without being charged immediately. If it spends more than 72 CPU
credits, it is charged for the difference at the end of the hour.
The intent of the example, illustrated by the following graph, is to show how an instance can burst using
surplus credits even after it depletes its CPUCreditBalance. You can assume that, at the start of the
time line in the graph, the instance has an accrued credit balance equal to the maximum number of
credits it can earn in 24 hours. The following workflow references the numbered points on the graph:
1 – In the first 10 minutes, CPUCreditUsage is at 0, and the CPUCreditBalance value remains at its
maximum of 72.
2 – At 23:40, as CPU utilization increases, the instance spends CPU credits and the CPUCreditBalance
value decreases.
3 – At around 00:47, the instance depletes its entire CPUCreditBalance, and starts to spend surplus
credits to sustain high CPU utilization.
4 – Surplus credits are spent until 01:55, when the CPUSurplusCreditBalance value reaches 72 CPU
credits. This is equal to the maximum a t2.nano instance can earn in a 24-hour period. Any surplus
credits spent thereafter cannot be offset by earned credits within the 24-hour period, which results in a
small additional charge at the end of the hour.
5 – The instance continues to spend surplus credits until around 02:20. At this time, CPU utilization
falls below the baseline, and the instance starts to earn credits at 3 credits per hour (or 0.25
credits every 5 minutes), which it uses to pay down the CPUSurplusCreditBalance. After the
CPUSurplusCreditBalance value reduces to 0, the instance starts to accrue earned credits in its
CPUCreditBalance at 0.25 credits every 5 minutes.
Surplus credits cost $0.05 per vCPU-hour. The instance spent approximately 25 surplus credits between
01:55 and 02:20, which is equivalent to 0.42 vCPU-hours.
Additional charges for this instance are 0.42 vCPU-hours x $0.05/vCPU-hour = $0.021, rounded to $0.02.
You can set billing alerts to be notified every hour of any accruing charges, and take action if required.
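The charge arithmetic above can be sketched in Python. The $0.05 per vCPU-hour rate is the one quoted in this example; actual rates vary by Region and platform, so treat this as an illustration rather than a billing formula.

```python
# Sketch of the surplus-credit charge calculation from the example above.
# One CPU credit equals one vCPU-minute, so credits / 60 gives vCPU-hours.

def surplus_charge(surplus_credits, rate_per_vcpu_hour=0.05):
    """Return the charge in dollars, rounded to the nearest cent."""
    vcpu_hours = surplus_credits / 60.0
    return round(vcpu_hours * rate_per_vcpu_hour, 2)

print(surplus_charge(25))   # 25 credits ~ 0.42 vCPU-hours -> 0.02
```

The same function applied to the 304 charged credits from the earlier t3.nano example gives $0.25.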
Contents
• Standard mode concepts (p. 268)
• How standard burstable performance instances work (p. 268)
• Launch credits (p. 268)
• Launch credit limits (p. 269)
The standard mode is a configuration option for burstable performance instances. It can be enabled
or disabled at any time for a running or stopped instance. You can set standard as the default credit
option at the account level per AWS Region, per burstable performance instance family, so that all new
burstable performance instances in the account launch using the default credit option.
T2 Standard instances receive two types of CPU credits: earned credits and launch credits. When a T2
Standard instance is in a running state, it continuously earns (at a millisecond-level resolution) a set rate
of earned credits per hour. At start, the instance has not yet accrued any earned credits; to provide a
good startup experience, it receives launch credits at start, which it spends first while it accrues earned
credits.
T4g, T3a and T3 instances do not receive launch credits because they support Unlimited mode. The
Unlimited mode credit configuration enables T4g, T3a and T3 instances to use as much CPU as needed to
burst beyond baseline and for as long as needed.
Launch credits
T2 Standard instances get 30 launch credits per vCPU at launch or start. For example, a t2.micro
instance has one vCPU and gets 30 launch credits, while a t2.xlarge instance has four vCPUs and gets
120 launch credits. Launch credits are designed to provide a good startup experience to allow instances
to burst immediately after launch before they have accrued earned credits.
Launch credits are spent first, before earned credits. Unspent launch credits are accrued in the CPU
credit balance, but do not count towards the CPU credit balance limit. For example, a t2.micro instance
has a CPU credit balance limit of 144 earned credits. If it is launched and remains idle for 24 hours,
its CPU credit balance reaches 174 (30 launch credits + 144 earned credits), which is over the limit.
However, after the instance spends the 30 launch credits, the credit balance cannot exceed 144. For more
information about the CPU credit balance limit for each instance size, see the credit table (p. 257).
The following table lists the initial CPU credit allocation received at launch or start, and the number of
vCPUs.
Instance type      Launch credits    vCPUs
t1.micro           15                1
t2.nano            30                1
t2.micro           30                1
t2.small           30                1
t2.medium          60                2
t2.large           60                2
t2.xlarge          120               4
t2.2xlarge         240               8
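The T2 allocations in the table follow a simple rule of 30 launch credits per vCPU, which can be sketched as follows (t1.micro, which gets 15 credits, is the exception):

```python
def t2_launch_credits(vcpus):
    # T2 Standard instances get 30 launch credits per vCPU at launch or start.
    return 30 * vcpus

# t2.micro (1 vCPU), t2.xlarge (4 vCPUs), and t2.2xlarge (8 vCPUs) from the table
print(t2_launch_credits(1), t2_launch_credits(4), t2_launch_credits(8))  # 30 120 240
```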
There is a limit to the number of times T2 Standard instances can receive launch credits. The default limit
is 100 launches or starts of all T2 Standard instances combined per account, per Region, per rolling 24-
hour period. For example, the limit is reached when one instance is stopped and started 100 times within
a 24-hour period, or when 100 instances are launched within a 24-hour period, or other combinations
that equate to 100 starts. New accounts may have a lower limit, which increases over time based on your
usage.
Tip
To ensure that your workloads always get the performance they need, switch to Unlimited mode
for burstable performance instances (p. 260) or consider using a larger instance size.
The following describes the differences between launch credits and earned credits.

Credit earn rate
• Launch credits: T2 Standard instances get 30 launch credits per vCPU at launch or start. If a T2
instance is switched from unlimited to standard, it does not get launch credits at the time of
switching.
• Earned credits: Each T2 instance continuously earns (at a millisecond-level resolution) a set rate of
CPU credits per hour, depending on the instance size. For more information about the number of CPU
credits earned per instance size, see the credit table (p. 257).

Credit earn limit
• Launch credits: The limit for receiving launch credits is 100 launches or starts of all T2 Standard
instances combined per account, per Region, per rolling 24-hour period. New accounts may have a
lower limit, which increases over time based on your usage.
• Earned credits: A T2 instance cannot accrue more credits than the CPU credit balance limit. If the
CPU credit balance has reached its limit, any credits that are earned after the limit is reached are
discarded. Launch credits do not count towards the limit. For more information about the CPU credit
balance limit for each T2 instance size, see the credit table (p. 257).

Credit use
• Launch credits: Launch credits are spent first, before earned credits.
• Earned credits: Earned credits are spent only after all launch credits are spent.
The number of accrued launch credits and accrued earned credits is tracked by the CloudWatch metric
CPUCreditBalance. For more information, see CPUCreditBalance in the CloudWatch metrics
table (p. 283).
The following examples explain credit use when instances are configured as standard.
Examples
• Example 1: Explain credit use with T3 Standard (p. 270)
• Example 2: Explain credit use with T2 Standard (p. 271)
In this example, you see how a t3.nano instance launched as standard earns, accrues, and spends
earned credits. You see how the credit balance reflects the accrued earned credits.
A running t3.nano instance earns 144 credits every 24 hours. Its credit balance limit is 144 earned
credits. After the limit is reached, new credits that are earned are discarded. For more information about
the number of credits that can be earned and accrued, see the credit table (p. 257).
You might launch a T3 Standard instance and use it immediately. Or, you might launch a T3 Standard
instance and leave it idle for a few days before running applications on it. Whether an instance is used or
remains idle determines if credits are spent or accrued. If an instance remains idle for 24 hours from the
time it is launched, the credit balance reaches its limit, which is the maximum number of earned credits
that can be accrued.
This example describes an instance that remains idle for 24 hours from the time it is launched, and walks
you through seven periods of time over a 96-hour period, showing the rate at which credits are earned,
accrued, spent, and discarded, and the value of the credit balance at the end of each period.
P1 – At 0 hours on the graph, the instance is launched as standard and immediately begins to earn
credits. The instance remains idle from the time it is launched—CPU utilization is 0%—and no credits are
spent. All unspent credits are accrued in the credit balance. For the first 24 hours, CPUCreditUsage is at
0, and the CPUCreditBalance value reaches its maximum of 144.
P2 – For the next 12 hours, CPU utilization is at 2.5%, which is below the 5% baseline. The instance
earns more credits than it spends, but the CPUCreditBalance value cannot exceed its maximum of 144
credits. Any credits that are earned in excess of the limit are discarded.
P3 – For the next 24 hours, CPU utilization is at 7% (above the baseline), which requires a spend of 57.6
credits. The instance spends more credits than it earns, and the CPUCreditBalance value reduces to
86.4 credits.
P4 – For the next 12 hours, CPU utilization decreases to 2.5% (below the baseline), which requires a
spend of 36 credits. In the same time, the instance earns 72 credits. The instance earns more credits than
it spends, and the CPUCreditBalance value increases to 122 credits.
P5 – For the next two hours, the instance bursts at 100% CPU utilization, and depletes its entire
CPUCreditBalance value of 122 credits. At the end of this period, with the CPUCreditBalance at
zero, CPU utilization is forced to drop to the baseline utilization level of 5%. At the baseline, the instance
earns as many credits as it spends.
P6 – For the next 14 hours, CPU utilization is at 5% (the baseline). The instance earns as many credits as
it spends. The CPUCreditBalance value remains at 0.
P7 – For the last 24 hours in this example, the instance is idle and CPU utilization is 0%. During this time,
the instance earns 144 credits, which it accrues in its CPUCreditBalance.
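The seven periods above can be replayed with a small sketch. The earn rate and balance limit are the t3.nano values from the credit table; clamping the balance only at the end of each period is a simplification of the real minute-level accounting, but it reproduces the balances in this walkthrough.

```python
# Approximate standard-mode credit accounting for a t3.nano:
# 2 vCPUs, 6 credits earned per hour, 144-credit balance limit.
EARN_RATE = 6.0   # credits per hour
LIMIT = 144.0     # maximum accrued earned credits
VCPUS = 2

def spend_rate(util_pct):
    # One credit = one vCPU running at 100% utilization for one minute.
    return util_pct / 100.0 * VCPUS * 60.0

def balance_after(periods, balance=0.0):
    # periods: list of (hours, average CPU utilization %); clamp to [0, LIMIT].
    for hours, util in periods:
        balance += (EARN_RATE - spend_rate(util)) * hours
        balance = max(0.0, min(LIMIT, balance))
    return balance

P = [(24, 0), (12, 2.5), (24, 7), (12, 2.5), (2, 100), (14, 5), (24, 0)]
print(balance_after(P[:1]))  # end of P1: 144.0
print(balance_after(P[:3]))  # end of P3: approximately 86.4
print(balance_after(P))      # end of P7: 144.0
```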
A t2.nano instance gets 30 launch credits when it is launched, and earns 72 credits every 24 hours. Its
credit balance limit is 72 earned credits; launch credits do not count towards the limit. After the limit is
reached, new credits that are earned are discarded. For more information about the number of credits
that can be earned and accrued, see the credit table (p. 257). For more information about limits, see
Launch credit limits (p. 269).
You might launch a T2 Standard instance and use it immediately. Or, you might launch a T2 Standard
instance and leave it idle for a few days before running applications on it. Whether an instance is used
or remains idle determines if credits are spent or accrued. If an instance remains idle for 24 hours from
the time it is launched, the credit balance appears to exceed its limit because the balance reflects both
accrued earned credits and accrued launch credits. However, after CPU is used, the launch credits are
spent first. Thereafter, the limit always reflects the maximum number of earned credits that can be
accrued.
This example describes an instance that remains idle for 24 hours from the time it is launched, and walks
you through seven periods of time over a 96-hour period, showing the rate at which credits are earned,
accrued, spent, and discarded, and the value of the credit balance at the end of each period.
Period 1: 1 – 24 hours
At 0 hours on the graph, the T2 instance is launched as standard and immediately gets 30 launch
credits. It earns credits while in the running state. The instance remains idle from the time it is launched
—CPU utilization is 0%—and no credits are spent. All unspent credits are accrued in the credit balance.
At approximately 14 hours after launch, the credit balance is 72 (30 launch credits + 42 earned credits),
which is equivalent to what the instance can earn in 24 hours. At 24 hours after launch, the credit
balance exceeds 72 credits because the unspent launch credits are accrued in the credit balance—the
credit balance is 102 credits: 30 launch credits + 72 earned credits.
Conclusion
If there is no CPU utilization after launch, the instance accrues more credits than what it can earn in 24
hours (30 launch credits + 72 earned credits = 102 credits).
In a real-world scenario, an EC2 instance consumes a small number of credits while launching and
running, which prevents the balance from reaching the maximum theoretical value in this example.
Period 2: 25 – 36 hours
For the next 12 hours, the instance continues to remain idle and earn credits, but the credit balance does
not increase. It plateaus at 102 credits (30 launch credits + 72 earned credits). The credit balance has
reached its limit of 72 accrued earned credits, so newly earned credits are discarded.
Credit Discard Rate 72 credits per 24 hours (100% of credit earn rate)
Conclusion
An instance constantly earns credits, but it cannot accrue more earned credits if the credit balance
has reached its limit. After the limit is reached, newly earned credits are discarded. Launch credits do
not count towards the credit balance limit. If the balance includes accrued launch credits, the balance
appears to be over the limit.
Period 3: 37 – 61 hours
For the next 25 hours, the instance uses 2% CPU, which requires 30 credits. In the same period, it earns
75 credits, but the credit balance decreases. The balance decreases because the accrued launch credits
are spent first, while newly earned credits are discarded because the credit balance is already at its limit
of 72 earned credits.
Credit Spend Rate 28.8 credits per 24 hours (1.2 credits per hour,
2% CPU utilization, 40% of credit earn rate)—30
credits over 25 hours
Credit Discard Rate 72 credits per 24 hours (100% of credit earn rate)
Conclusion
An instance spends launch credits first, before spending earned credits. Launch credits do not count
towards the credit limit. After the launch credits are spent, the balance can never go higher than what
can be earned in 24 hours. Furthermore, while an instance is running, it cannot get more launch credits.
Period 4: 62 – 72 hours
For the next 11 hours, the instance uses 2% CPU, which requires 13.2 credits. This is the same CPU
utilization as in the previous period, but the balance does not decrease. It stays at 72 credits.
The balance does not decrease because the credit earn rate is higher than the credit spend rate. In the
time that the instance spends 13.2 credits, it also earns 33 credits. However, the balance limit is 72
credits, so any earned credits that exceed the limit are discarded. The balance plateaus at 72 credits,
which is different from the plateau of 102 credits during Period 2, because there are no accrued launch
credits.
Credit Spend Rate 28.8 credits per 24 hours (1.2 credits per hour, 2%
CPU utilization, 40% of credit earn rate)—13.2
credits over 11 hours
Credit Discard Rate 43.2 credits per 24 hours (60% of credit earn rate)
Conclusion
After launch credits are spent, the credit balance limit is determined by the number of credits that an
instance can earn in 24 hours. If the instance earns more credits than it spends, newly earned credits over
the limit are discarded.
Period 5: 73 – 75 hours
For the next three hours, the instance bursts at 20% CPU utilization, which requires 36 credits. The
instance earns nine credits in the same three hours, which results in a net balance decrease of 27 credits.
At the end of three hours, the credit balance is 45 accrued earned credits.
Credit Spend Rate 288 credits per 24 hours (12 credits per hour, 20%
CPU utilization, 400% of credit earn rate)—36
credits over 3 hours
Conclusion
If an instance spends more credits than it earns, its credit balance decreases.
Period 6: 76 – 90 hours
For the next 15 hours, the instance uses 2% CPU, which requires 18 credits. This is the same CPU
utilization as in Periods 3 and 4. However, the balance increases in this period, whereas it decreased in
Period 3 and plateaued in Period 4.
In Period 3, the accrued launch credits were spent, and any earned credits that exceeded the credit limit
were discarded, resulting in a decrease in the credit balance. In Period 4, the instance spent fewer credits
than it earned. Any earned credits that exceeded the limit were discarded, so the balance plateaued at its
maximum of 72 credits.
In this period, there are no accrued launch credits, and the number of accrued earned credits in the
balance is below the limit. No earned credits are discarded. Furthermore, the instance earns more credits
than it spends, resulting in an increase in the credit balance.
Credit Spend Rate 28.8 credits per 24 hours (1.2 credits per hour,
2% CPU utilization, 40% of credit earn rate)—18
credits over 15 hours
Credit Earn Rate 72 credits per 24 hours (45 credits over 15 hours)
Conclusion
If an instance spends fewer credits than it earns, its credit balance increases.
Period 7: 91 – 96 hours
For the next six hours, the instance remains idle—CPU utilization is 0%—and no credits are spent. This is
the same CPU utilization as in Period 2, but the balance does not plateau at 102 credits—it plateaus at
72 credits, which is the credit balance limit for the instance.
In Period 2, the credit balance included 30 accrued launch credits. The launch credits were spent in
Period 3. A running instance cannot get more launch credits. After its credit balance limit is reached, any
earned credits that exceed the limit are discarded.
Credit Discard Rate 72 credits per 24 hours (100% of credit earn rate)
Conclusion
An instance constantly earns credits, but cannot accrue more earned credits if the credit balance limit has
been reached. After the limit is reached, newly earned credits are discarded. The credit balance limit is
determined by the number of credits that an instance can earn in 24 hours. For more information about
credit balance limits, see the credit table (p. 257).
Contents
• Launch a burstable performance instance as Unlimited or Standard (p. 277)
• Use an Auto Scaling group to launch a burstable performance instance as Unlimited (p. 278)
• View the credit specification of a burstable performance instance (p. 279)
• Modify the credit specification of a burstable performance instance (p. 280)
• Set the default credit specification for the account (p. 281)
• View the default credit specification (p. 282)
You can launch your instances as unlimited or standard using the Amazon EC2 console, an AWS SDK,
a command line tool, or an Auto Scaling group. For more information, see Use an Auto Scaling
group to launch a burstable performance instance as Unlimited (p. 278).
Requirements
• You must launch your instances using an Amazon EBS volume as the root device. For more
information, see Amazon EC2 instance root device volume (p. 1638).
• For more information about AMI and driver requirements for these instances, see Release
notes (p. 250).
1. Follow the Launch an instance using the Launch Instance Wizard (p. 565) procedure.
2. On the Choose an Instance Type page, select an instance type, and choose Next: Configure
Instance Details.
Use the run-instances command to launch your instances. Specify the credit specification using the --
credit-specification CpuCredits= parameter. Valid credit specifications are unlimited and
standard.
• For T4g, T3a and T3, if you do not include the --credit-specification parameter, the instance
launches as unlimited by default.
• For T2, if you do not include the --credit-specification parameter, the instance launches as
standard by default.
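For example, a T3 instance can be launched explicitly as standard as follows (the AMI ID and key pair name here are placeholders; substitute your own values):

```shell
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --count 1 \
    --instance-type t3.micro \
    --key-name MyKeyPair \
    --credit-specification "CpuCredits=standard"
```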
When burstable performance instances are launched or started, they require CPU credits for a good
bootstrapping experience. If you use an Auto Scaling group to launch your instances, we recommend
that you configure your instances as unlimited. If you do, the instances use surplus credits when
they are automatically launched or restarted by the Auto Scaling group. Using surplus credits prevents
performance restrictions.
You must use a launch template for launching instances as unlimited in an Auto Scaling group. A
launch configuration does not support launching instances as unlimited.
Note
unlimited mode is not supported for T3 instances that are launched on a Dedicated Host.
1. Follow the Creating a Launch Template for an Auto Scaling Group procedure.
2. In Launch template contents, for Instance type, choose an instance size.
3. To launch instances as unlimited in an Auto Scaling group, under Advanced details, for Credit
specification, choose Unlimited.
4. When you've finished defining the launch template parameters, choose Create launch template.
For more information, see Creating a Launch Template for an Auto Scaling Group in the Amazon EC2
Auto Scaling User Guide.
Use the create-launch-template command and specify unlimited as the credit specification.
• For T4g, T3a and T3, if you do not include the CreditSpecification={CpuCredits=unlimited}
value, the instance launches as unlimited by default.
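For example, a minimal launch template with the unlimited credit specification might be created as follows (the template name is a placeholder):

```shell
aws ec2 create-launch-template \
    --launch-template-name my-t3-unlimited \
    --launch-template-data '{"InstanceType":"t3.micro","CreditSpecification":{"CpuCredits":"unlimited"}}'
```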
To associate the launch template with an Auto Scaling group, create the Auto Scaling group using the
launch template, or add the launch template to an existing Auto Scaling group.
Use the create-auto-scaling-group AWS CLI command and specify the --launch-template parameter.
Use the update-auto-scaling-group AWS CLI command and specify the --launch-template parameter.
You can view the credit specification (unlimited or standard) of a running or stopped instance.
New console
4. Choose Details and view the Credit specification field. The value is either unlimited or
standard.
Old console
Use the describe-instance-credit-specifications command. If you do not specify one or more instance
IDs, all instances with the credit specification of unlimited are returned, as well as instances that were
previously configured with the unlimited credit specification. For example, if you resize a T3 instance
to an M4 instance, while it is configured as unlimited, Amazon EC2 returns the M4 instance.
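For example, to view the credit specification of a single instance (the instance ID is a placeholder):

```shell
aws ec2 describe-instance-credit-specifications \
    --instance-ids i-1234567890abcdef0
```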
Example
{
"InstanceCreditSpecifications": [
{
"InstanceId": "i-1234567890abcdef0",
"CpuCredits": "unlimited"
}
]
}
You can switch the credit specification of a running or stopped instance at any time between unlimited
and standard.
New console
Old console
Use the modify-instance-credit-specification command. Specify the instance and its credit specification
using the --instance-credit-specification parameter. Valid credit specifications are unlimited
and standard.
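For example, to switch a running or stopped instance to unlimited (the instance ID is a placeholder):

```shell
aws ec2 modify-instance-credit-specification \
    --instance-credit-specification "InstanceId=i-1234567890abcdef0,CpuCredits=unlimited"
```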
Example
{
"SuccessfulInstanceCreditSpecifications": [
{
"InstanceId": "i-1234567890abcdef0"
}
],
"UnsuccessfulInstanceCreditSpecifications": []
}
You can set the default credit specification for each burstable performance instance family at the
account level per AWS Region.
If you use the Launch Instance Wizard in the EC2 console to launch instances, the value you select for
the credit specification overrides the account-level default credit specification. If you use the AWS CLI to
launch instances, all new burstable performance instances in the account launch using the default credit
specification. The credit specification for existing running or stopped instances is not affected.
Consideration
The default credit specification for an instance family can be modified only once in a rolling 5-minute
period, and up to four times in a rolling 24-hour period.
To set the default credit specification at the account level (AWS CLI)
Use the modify-default-credit-specification command. Specify the AWS Region, instance family, and the
default credit specification using the --cpu-credits parameter. Valid default credit specifications are
unlimited and standard.
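For example, to make unlimited the default for all new T3 instances in a Region (the Region shown is illustrative):

```shell
aws ec2 modify-default-credit-specification \
    --region us-east-1 \
    --instance-family t3 \
    --cpu-credits unlimited
```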
You can view the default credit specification of a burstable performance instance family at the account
level per AWS Region.
To view the default credit specification at the account level (AWS CLI)
Use the get-default-credit-specification command. Specify the AWS Region and instance family.
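For example (the Region shown is illustrative):

```shell
aws ec2 get-default-credit-specification \
    --region us-east-1 \
    --instance-family t3
```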
Contents
• Additional CloudWatch metrics for burstable performance instances (p. 282)
• Calculate CPU credit usage (p. 284)
Burstable performance instances have these additional CloudWatch metrics, which are updated every
five minutes:
• CPUCreditUsage – The number of CPU credits spent during the measurement period.
• CPUCreditBalance – The number of CPU credits that an instance has accrued. This balance is
depleted when the CPU bursts and CPU credits are spent more quickly than they are earned.
• CPUSurplusCreditBalance – The number of surplus CPU credits spent to sustain CPU utilization
when the CPUCreditBalance value is zero.
• CPUSurplusCreditsCharged – The number of surplus CPU credits exceeding the maximum number
of CPU credits (p. 257) that can be earned in a 24-hour period, and thus attracting an additional
charge.
The following describes the CloudWatch metrics for burstable performance instances in more detail. For
more information, see List the available CloudWatch metrics for your instances (p. 961).

• CPUCreditUsage – The number of CPU credits spent by the instance for CPU utilization. One CPU
credit equals one vCPU running at 100% utilization for one minute or an equivalent combination of
vCPUs, utilization, and time (for example, one vCPU running at 50% utilization for two minutes or
two vCPUs running at 25% utilization for two minutes). Credits are accrued in the credit balance after
they are earned, and removed from the credit balance when they are spent. The credit balance has a
maximum limit, determined by the instance size. After the limit is reached, any new credits that are
earned are discarded. For T2 Standard, launch credits do not count towards the limit.
• CPUSurplusCreditsCharged – The number of spent surplus credits that are not paid down by earned
CPU credits, and which thus incur an additional charge.
Spent surplus credits are charged when any of the following occurs: the spent surplus credits exceed the
maximum number of credits the instance can earn in a 24-hour period, the instance is terminated, or the
credit specification is switched from unlimited to standard.
Amazon EC2 sends the metrics to CloudWatch every five minutes. A reference to the prior value of a
metric at any point in time means the value of the metric that was sent five minutes earlier.
• The CPU credit balance increases if CPU utilization is below the baseline, when the credits spent are
less than the credits earned in the prior five-minute interval.
• The CPU credit balance decreases if CPU utilization is above the baseline, when the credits spent are
more than the credits earned in the prior five-minute interval.
CPUCreditBalance = prior CPUCreditBalance + credits earned over the interval - credits spent over the
interval
The size of the instance determines the number of credits that the instance can earn per hour and the
number of earned credits that it can accrue in the credit balance. For information about the number of
credits earned per hour, and the credit balance limit for each instance size, see the credit table (p. 257).
Example
This example uses a t3.nano instance. To calculate the CPUCreditBalance value of the instance, use
the preceding equation as follows:
When a burstable performance instance needs to burst above the baseline, it always spends accrued
credits before spending surplus credits. When it depletes its accrued CPU credit balance, it can spend
surplus credits to burst CPU for as long as it needs. When CPU utilization falls below the baseline, surplus
credits are always paid down before the instance accrues earned credits.
We use the term Adjusted balance in the following equations to reflect the activity that occurs in
this five-minute interval. We use this value to arrive at the values for the CPUCreditBalance and
CPUSurplusCreditBalance CloudWatch metrics.
Adjusted balance = (prior CPUCreditBalance - prior CPUSurplusCreditBalance) + credits earned over the
interval - credits spent over the interval
A value of 0 for Adjusted balance indicates that the instance spent all its earned credits
for bursting, and no surplus credits were spent. As a result, both CPUCreditBalance and
CPUSurplusCreditBalance are set to 0.
A positive Adjusted balance value indicates that the instance accrued earned credits, and previous
surplus credits, if any, were paid down. As a result, the Adjusted balance value is assigned to
CPUCreditBalance, and the CPUSurplusCreditBalance is set to 0. The instance size determines the
maximum number of credits (p. 257) that it can accrue.
CPUSurplusCreditBalance = -(Adjusted balance), CPUCreditBalance = 0 (when Adjusted balance is
negative)
A negative Adjusted balance value indicates that the instance spent all its earned credits that it
accrued and, in addition, also spent surplus credits for bursting. As a result, the Adjusted balance
value is assigned to CPUSurplusCreditBalance and CPUCreditBalance is set to 0. Again, the
instance size determines the maximum number of credits (p. 257) that it can accrue.
CPUSurplusCreditBalance = min(-(Adjusted balance), maximum credits the instance can earn in 24 hours)
If the surplus credits spent exceed the maximum credits that the instance can accrue, the surplus credit
balance is set to the maximum, as shown in the preceding equation. The remaining surplus credits are
charged as represented by the CPUSurplusCreditsCharged metric.
CPUSurplusCreditsCharged = -(Adjusted balance) - maximum credits the instance can earn in 24 hours
(when positive)
Finally, when the instance terminates, any surplus credits tracked by the CPUSurplusCreditBalance
are charged. If the instance is switched from unlimited to standard, any remaining
CPUSurplusCreditBalance is also charged.
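The rules above can be sketched as a small function. The function and parameter names are invented for illustration, and this is a simplified model of the accounting, not the exact service implementation.

```python
def burst_metrics(prior_credit, prior_surplus, earned, spent, max_earned):
    """Derive the CloudWatch balance metrics from the adjusted balance.

    max_earned is the maximum number of credits the instance can earn
    in a 24-hour period (for example, 144 for t3.nano).
    """
    adjusted = prior_credit - prior_surplus + earned - spent
    if adjusted >= 0:
        # Earned credits accrue, capped at the instance's limit; no surplus debt.
        return min(adjusted, max_earned), 0.0, 0.0
    # Surplus debt accrues up to the cap; anything beyond the cap is charged.
    surplus = min(-adjusted, max_earned)
    charged = max(0.0, -adjusted - max_earned)
    return 0.0, surplus, charged

# t3.nano from the unlimited-mode example: 122 accrued credits depleted and
# 448 surplus credits spent in total during the burst.
print(burst_metrics(122, 0, 12, 582, 144))  # (0.0, 144, 304)
```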
For more information, see Amazon EC2 Mac Instances and Pricing.
Contents
• Considerations (p. 286)
• Launch a Mac instance using the console (p. 287)
• Launch a Mac instance using the AWS CLI (p. 287)
• Connect to your instance using SSH (p. 288)
• Connect to your instance using Apple Remote Desktop (p. 289)
• Modify macOS screen resolution on Mac instances (p. 289)
• EC2 macOS AMIs (p. 290)
• Update the operating system and software (p. 290)
• EC2 macOS Init (p. 291)
• EC2 System Monitoring for macOS (p. 291)
• Increase the size of an EBS volume on your Mac instance (p. 292)
• Stop and terminate your Mac instance (p. 292)
• Subscribe to macOS AMI notifications (p. 292)
• Release the Dedicated Host for your Mac instance (p. 293)
Considerations
The following considerations apply to Mac instances:
• Mac instances are available only as bare metal instances on Dedicated Hosts (p. 483), with a
minimum allocation period of 24 hours before you can release the Dedicated Host. You can launch
one Mac instance per Dedicated Host. You can share the Dedicated Host with the AWS accounts or
organizational units within your AWS organization, or the entire AWS organization.
• Mac instances are available only as On-Demand Instances. They are not available as Spot Instances or
Reserved Instances. You can save money on Mac instances by purchasing a Savings Plan.
• Mac instances can run one of the following operating systems:
• macOS Catalina (version 10.15)
• macOS Mojave (version 10.14)
• macOS Big Sur (version 11)
• If you attach an EBS volume to a running Mac instance, you must reboot the instance to make the
volume available.
• If you resized an existing EBS volume on a running Mac instance, you must reboot the instance to make
the new size available.
• If you attach a network interface to a running Mac instance, you must reboot the instance to make the
network interface available.
• AWS does not manage or support the internal SSD on the Apple hardware. We strongly recommend
that you use Amazon EBS volumes instead. EBS volumes provide the same elasticity, availability, and
durability benefits on Mac instances as they do on any other EC2 instance.
• We recommend using General Purpose SSD (gp2 and gp3) and Provisioned IOPS SSD (io1 and io2)
with Mac instances for optimal EBS performance.
• You cannot use Mac instances with Amazon EC2 Auto Scaling.
• Automatic software updates are disabled. We recommend that you apply updates and test them on
your instance before you put the instance into production. For more information, see Update the
operating system and software (p. 290).
• When you stop or terminate a Mac instance, a scrubbing workflow is performed on the Dedicated Host.
For more information, see Stop and terminate your Mac instance (p. 292).
• Warning
Do not use FileVault. If encryption of data at rest and data in transit is required, use EBS
encryption to avoid boot issues and performance impact. Enabling FileVault will result in the
host failing to boot because the partitions are locked.
a. For Instance family, choose mac1. If mac1 doesn’t appear in the list, it’s not supported in the
currently selected Region.
b. For Instance type, select mac1.metal.
c. For Availability Zone, choose the Availability Zone for the Dedicated Host.
d. For Quantity, keep 1.
e. Choose Allocate.
4. Select the Dedicated Host that you created and then do the following:
The initial state of an instance is pending. The instance is ready when its state changes to running
and it passes status checks. Use the following describe-instance-status command to display status
information for your instance:
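A sketch of the command (the instance ID is a placeholder; substitute your own):

```shell
# display status information for the specified instance
aws ec2 describe-instance-status --instance-ids i-017f8354e2dc69c4f
```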
The following is example output for an instance that is running and has passed status checks.
{
"InstanceStatuses": [
{
"AvailabilityZone": "us-east-1b",
"InstanceId": "i-017f8354e2dc69c4f",
"InstanceState": {
"Code": 16,
"Name": "running"
},
"InstanceStatus": {
"Details": [
{
"Name": "reachability",
"Status": "passed"
}
],
"Status": "ok"
},
"SystemStatus": {
"Details": [
{
"Name": "reachability",
"Status": "passed"
}
],
"Status": "ok"
}
}
]
}
Connect to your instance using SSH
To support connecting to your instance using SSH, launch the instance using a key pair and a security
group that allows SSH access, and ensure that the instance has internet connectivity. You provide the
.pem file for the key pair when you connect to the instance.
Use the following procedure to connect to your Mac instance using an SSH client. If you receive an error
while attempting to connect to your instance, see Troubleshoot connecting to your instance (p. 1686).
1. Verify that your local computer has an SSH client installed by entering ssh at the command line. If
your computer doesn't recognize the command, search for an SSH client for your operating system
and install it.
2. Get the public DNS name of your instance. Using the Amazon EC2 console, you can find the public
DNS name on both the Details and the Networking tabs. Using the AWS CLI, you can find the public
DNS name using the describe-instances command.
3. Locate the .pem file for the key pair that you specified when you launched the instance.
4. Connect to your instance using the following ssh command, specifying the public DNS name of the
instance and the .pem file.
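For example (the key-pair path and DNS name below are placeholders):

```shell
# connect as ec2-user, the default user on EC2 macOS AMIs
ssh -i /path/key-pair-name.pem ec2-user@my-instance-public-dns-name
```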
Connect to your instance using Apple Remote Desktop
1. Verify that your local computer has an ARD client or a VNC client that supports ARD installed. On
macOS, you can use the built-in Screen Sharing application. Otherwise, search for an ARD or VNC
client for your operating system and install it.
2. From your local computer, connect to your instance using SSH (p. 288).
3. Set up a password for the ec2-user account using the passwd command as follows.
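A sketch of the command:

```shell
# set a login password for the ec2-user account
sudo passwd ec2-user
```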
4. Start the Apple Remote Desktop agent and enable remote desktop access as follows.
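A sketch using the macOS kickstart utility (the flags shown enable remote desktop access for all users; verify them against your macOS version):

```shell
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
    -activate -configure -access -on \
    -restart -agent -privs -all
```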
5. From your computer, connect to your instance using the following ssh command. In addition to the
options shown in the previous section, use the -L option to enable port forwarding and forward all
traffic on local port 5900 to the ARD server on the instance.
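For example (placeholders as in the previous section):

```shell
# forward local port 5900 to the ARD server on the instance
ssh -i /path/key-pair-name.pem -L 5900:localhost:5900 ec2-user@my-instance-public-dns-name
```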
6. From your local computer, use the ARD client or VNC client that supports ARD to connect to
localhost on port 5900. For example, use the Screen Sharing application on macOS as follows:
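One way to do this from the macOS command line, assuming the SSH tunnel from the previous step is active:

```shell
# opens the Screen Sharing application connected to the forwarded port
open vnc://localhost:5900
```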
Modify macOS screen resolution on Mac instances
1. Install displayplacer.
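One way to install it, assuming Homebrew is available (the tap name is the project's published one; verify before use):

```shell
brew tap jakehilborn/jakehilborn && brew install displayplacer
```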
displayplacer list
For example:
RES="2560x1600"
displayplacer "id:69784AF1-CD7D-B79B-E5D4-60D937407F68 res:${RES} scaling:off origin:(0,0) degree:0"
EC2 macOS AMIs
The following software is included with EC2 macOS AMIs:
• ENA drivers
• EC2 macOS Init
• EC2 System Monitoring for macOS
• SSM Agent for macOS
• AWS Command Line Interface (AWS CLI) version 2
• Command Line Tools for Xcode
• Homebrew
AWS provides updated EC2 macOS AMIs on a regular basis that include updates to AWS-owned packages
and the latest fully-tested macOS version. Additionally, AWS provides updated AMIs with the latest
minor version updates or major version updates as soon as they can be fully tested and vetted. If you
do not need to preserve data or customizations to your Mac instances, you can get the latest updates by
launching a new instance using the current AMI and then terminating the previous instance. Otherwise,
you can choose which updates to apply to your Mac instances.
Update the operating system and software
1. List the packages with available updates using the following command.
2. Install all updates or only specific updates. To install specific updates, use the following command.
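A sketch using the built-in softwareupdate utility (the update label is a placeholder):

```shell
# list available updates
softwareupdate --list
# install a specific update by label
sudo softwareupdate --install label-name
# or install all available updates
sudo softwareupdate --install --all
```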
System administrators can use AWS Systems Manager to roll out pre-approved operating system
updates. For more information, see the AWS Systems Manager User Guide.
You can use Homebrew to install updates to packages in the EC2 macOS AMIs, so that you have the
latest version of these packages on your instances. You can also use Homebrew to install and run
common macOS applications on Amazon EC2 macOS. For more information, see the Homebrew
Documentation.
2. List the packages with available updates using the following command.
3. Install all updates or only specific updates. To install specific updates, use the following command.
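A sketch of the Homebrew commands (the package name is a placeholder):

```shell
brew update            # refresh the package index
brew outdated          # list packages with available updates
brew upgrade package   # upgrade one package; omit the name to upgrade all
```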
Warning
Do not install beta or prerelease macOS versions on your EC2 Mac instances, as this
configuration is currently not supported. Installing beta or prerelease macOS versions will lead
to degradation of your EC2 Mac Dedicated Host when you stop or terminate your instance, and
will prevent you from starting or launching a new instance on that host.
Increase the size of an EBS volume on your Mac instance
After you increase the size of the volume, you must increase the size of your APFS container as follows.
1. Determine whether a restart is needed. If you resized an existing EBS volume on a running Mac
instance, you must reboot the instance to make the new size available. If you modified the volume
size at launch, no reboot is needed.
PDISK=$(diskutil list physical external | head -n1 | cut -d" " -f1)
APFSCONT=$(diskutil list physical external | grep "Apple_APFS" | tr -s " " | cut -d" " -f8)
yes | sudo diskutil repairDisk $PDISK
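After the repair completes, the APFS container can be grown to fill the resized volume; a sketch using the APFSCONT variable defined above (the final 0 means use all available space):

```shell
sudo diskutil apfs resizeContainer $APFSCONT 0
```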
Stop and terminate your Mac instance
When you stop or terminate a Mac instance, Amazon EC2 performs a scrubbing workflow on the
underlying Dedicated Host to erase the internal SSD, to clear the persistent NVRAM variables, and if
needed, to update the bridgeOS software on the underlying Mac mini. This ensures that Mac instances
provide the same security and data privacy as other EC2 Nitro instances. It also enables you to run the
latest macOS AMIs without manually updating the bridgeOS software. During the scrubbing workflow,
the Dedicated Host temporarily enters the pending state. If the bridgeOS software does not need to be
updated, the scrubbing workflow takes up to 50 minutes to complete. If the bridgeOS software needs to
be updated, the scrubbing workflow can take up to 3 hours to complete.
You can't start the stopped Mac instance or launch a new Mac instance until after the scrubbing
workflow completes, at which point the Dedicated Host enters the available state.
Metering and billing is paused when the Dedicated Host enters the pending state. You are not charged
for the duration of the scrubbing workflow.
a. For Topic ARN, copy and paste one of the following Amazon Resource Names (ARNs):
• arn:aws:sns:us-east-1:898855652048:amazon-ec2-macos-ami-updates
• arn:aws:sns:us-east-1:898855652048:amazon-ec2-bridgeos-updates
For Protocol, choose one of the following options and specify the corresponding endpoint:
b. Email:
For Endpoint, type an email address that you can use to receive the notifications. After you
create your subscription, you'll receive a confirmation message with the subject line AWS
Notification - Subscription Confirmation. Open the email and choose Confirm
subscription to complete your subscription.
c. SMS:
For Endpoint, type a phone number that you can use to receive the notifications.
d. AWS Lambda, Amazon SQS, Amazon Kinesis Data Firehose (notifications come in JSON
format):
For Endpoint, enter the ARN of the Lambda function, SQS queue, or Firehose stream that you
want to use to receive the notifications.
e. Choose Create subscription.
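The subscription can also be created from the AWS CLI; a sketch for the email protocol (the email address is a placeholder):

```shell
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:898855652048:amazon-ec2-macos-ami-updates \
    --protocol email \
    --notification-endpoint user@example.com \
    --region us-east-1
```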
Whenever macOS AMIs are released, we send notifications to the subscribers of the amazon-ec2-
macos-ami-updates topic. Whenever bridgeOS is updated, we send notifications to the subscribers of
the amazon-ec2-bridgeos-updates topic. If you no longer want to receive these notifications, use
the following procedure to unsubscribe.
Compute optimized
Bare metal instances, such as c5.metal, provide your applications with direct access to physical
resources of the host server, such as processors and memory.
Bare metal instances, such as c6g.metal, provide your applications with direct access to physical
resources of the host server, such as processors and memory.
C6i instances
These instances are ideal for running advanced, compute-intensive workloads, such as the following:
• Batch processing
• Ad serving
• Video encoding
• Distributed analytics
• Highly scalable multiplayer gaming
Hpc6a instances
These instances are ideal for running high performance computing (HPC) workloads, such as the
following:
• Molecular dynamics
• Computational chemistry
• Computational fluid dynamics
• Weather forecasting
• Materials simulation
• Crash simulations
• Astrophysics
Contents
• Hardware specifications (p. 295)
• Instance performance (p. 298)
• Network performance (p. 298)
• SSD I/O performance (p. 301)
• Instance features (p. 302)
• Release notes (p. 303)
Hardware specifications
The following is a summary of the hardware specifications for compute optimized instances. Each row
lists the instance type, the default number of vCPUs, and the memory in GiB.
c4.large 2 3.75
c4.xlarge 4 7.5
c4.2xlarge 8 15
c4.4xlarge 16 30
c4.8xlarge 36 60
c5.large 2 4
c5.xlarge 4 8
c5.2xlarge 8 16
c5.4xlarge 16 32
c5.9xlarge 36 72
c5.12xlarge 48 96
c5.18xlarge 72 144
c5.24xlarge 96 192
c5.metal 96 192
c5a.large 2 4
c5a.xlarge 4 8
c5a.2xlarge 8 16
c5a.4xlarge 16 32
c5a.8xlarge 32 64
c5a.12xlarge 48 96
c5a.16xlarge 64 128
c5a.24xlarge 96 192
c5ad.large 2 4
c5ad.xlarge 4 8
c5ad.2xlarge 8 16
c5ad.4xlarge 16 32
c5ad.8xlarge 32 64
c5ad.12xlarge 48 96
c5ad.16xlarge 64 128
c5ad.24xlarge 96 192
c5d.large 2 4
c5d.xlarge 4 8
c5d.2xlarge 8 16
c5d.4xlarge 16 32
c5d.9xlarge 36 72
c5d.12xlarge 48 96
c5d.18xlarge 72 144
c5d.24xlarge 96 192
c5d.metal 96 192
c5n.large 2 5.25
c5n.xlarge 4 10.5
c5n.2xlarge 8 21
c5n.4xlarge 16 42
c5n.9xlarge 36 96
c5n.18xlarge 72 192
c5n.metal 72 192
c6g.medium 1 2
c6g.large 2 4
c6g.xlarge 4 8
c6g.2xlarge 8 16
c6g.4xlarge 16 32
c6g.8xlarge 32 64
c6g.12xlarge 48 96
c6g.16xlarge 64 128
c6g.metal 64 128
c6gd.medium 1 2
c6gd.large 2 4
c6gd.xlarge 4 8
c6gd.2xlarge 8 16
c6gd.4xlarge 16 32
c6gd.8xlarge 32 64
c6gd.12xlarge 48 96
c6gd.16xlarge 64 128
c6gd.metal 64 128
c6gn.medium 1 2
c6gn.large 2 4
c6gn.xlarge 4 8
c6gn.2xlarge 8 16
c6gn.4xlarge 16 32
c6gn.8xlarge 32 64
c6gn.12xlarge 48 96
c6gn.16xlarge 64 128
c6i.large 2 4
c6i.xlarge 4 8
c6i.2xlarge 8 16
c6i.4xlarge 16 32
c6i.8xlarge 32 64
c6i.12xlarge 48 96
c6i.16xlarge 64 128
c6i.24xlarge 96 192
hpc6a.48xlarge 96 384
For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon
EC2 Instance Types.
For more information about specifying CPU options, see Optimize CPU options (p. 676).
Instance performance
EBS-optimized instances enable you to get consistently high performance for your EBS volumes by
eliminating contention between Amazon EBS I/O and other network traffic from your instance. Some
compute optimized instances are EBS-optimized by default at no additional cost. For more information,
see Amazon EBS–optimized instances (p. 1556).
Some compute optimized instance types provide the ability to control processor C-states and P-states on
Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states control the
desired performance (in CPU frequency) from a core. For more information, see Processor state control
for your EC2 instance (p. 663).
Network performance
You can enable enhanced networking on supported instance types to provide lower latencies, lower
network jitter, and higher packet-per-second (PPS) performance. Most applications do not consistently
need a high level of network performance, but can benefit from access to increased bandwidth when
they send or receive data. For more information, see Enhanced networking on Linux (p. 1100).
The following is a summary of network performance for compute optimized instances that support
enhanced networking. Each row lists the instance type, the baseline bandwidth in Gbps, and the burst
bandwidth in Gbps.
† These instances have a baseline bandwidth and can use a network I/O credit mechanism to burst
beyond their baseline bandwidth on a best effort basis. For more information, see instance network
bandwidth (p. 1098).
c5.large .75 10
c5.xlarge 1.25 10
c5.2xlarge 2.5 10
c5.4xlarge 5 10
c5a.large .75 10
c5a.xlarge 1.25 10
c5a.2xlarge 2.5 10
c5a.4xlarge 5 10
c5ad.large .75 10
c5ad.xlarge 1.25 10
c5ad.2xlarge 2.5 10
c5ad.4xlarge 5 10
c5d.large .75 10
c5d.xlarge 1.25 10
c5d.2xlarge 2.5 10
c5d.4xlarge 5 10
c5n.large 3 25
c5n.xlarge 5 25
c5n.2xlarge 10 25
c5n.4xlarge 15 25
c6g.medium .5 10
c6g.large .75 10
c6g.xlarge 1.25 10
c6g.2xlarge 2.5 10
c6g.4xlarge 5 10
c6gd.medium .5 10
c6gd.large .75 10
c6gd.xlarge 1.25 10
c6gd.2xlarge 2.5 10
c6gd.4xlarge 5 10
c6gn.medium 1.6 25
c6gn.large 3 25
c6gn.xlarge 6.3 25
c6gn.2xlarge 12.5 25
c6gn.4xlarge 15 25
SSD I/O performance
As you fill the SSD-based instance store volumes for your instance, the number of write IOPS that
you can achieve decreases. This is due to the extra work the SSD controller must do to find available
space, rewrite existing data, and erase unused space so that it can be rewritten. This process of
garbage collection results in internal write amplification to the SSD, expressed as the ratio of SSD write
operations to user write operations. This decrease in performance is even larger if the write operations
are not in multiples of 4,096 bytes or not aligned to a 4,096-byte boundary. If you write a smaller
amount of bytes or bytes that are not aligned, the SSD controller must read the surrounding data and
store the result in a new location. This pattern results in significantly increased write amplification,
increased latency, and dramatically reduced I/O performance.
SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy
is to reserve space in the SSD instance storage so that the controller can more efficiently manage the
space available for write operations. This is called over-provisioning. The SSD-based instance store
volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write
amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller
can use it for over-provisioning. This decreases the storage that you can use, but increases performance
even if the disk is close to full capacity.
For instance store volumes that support TRIM, you can use the TRIM command to notify the SSD
controller whenever you no longer need data that you've written. This provides the controller with more
free space, which can reduce write amplification and increase performance. For more information, see
Instance store volume TRIM support (p. 1628).
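On Linux, one common way to issue TRIM for a mounted file system is the fstrim utility (the mount point shown is a placeholder):

```shell
# trim unused blocks on the specified mount point and report how much was trimmed
sudo fstrim -v /mnt/instance-store
```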
Instance features
The following is a summary of features for compute optimized instances. Each row lists whether the
instance type is EBS only, supports NVMe EBS volumes, has instance store volumes, and supports
placement groups:
C4 Yes No No Yes
Release notes
• C5 and C5d instances feature a 3.1 GHz Intel Xeon Platinum 8000 series processor from either the first
generation (Skylake-SP) or second generation (Cascade Lake).
• C5a and C5ad instances feature a second-generation AMD EPYC processor (Rome) running at
frequencies as high as 3.3 GHz.
• C6g, C6gd, and C6gn instances feature an AWS Graviton2 processor based on 64-bit Arm architecture.
• C4 instances and instances built on the Nitro System (p. 232) require 64-bit EBS-backed HVM AMIs.
They have high memory and require a 64-bit operating system to take advantage of that capacity.
HVM AMIs provide superior performance in comparison to paravirtual (PV) AMIs on high-memory
instance types. In addition, you must use an HVM AMI to take advantage of enhanced networking.
• Instances built on the Nitro System have the following requirements:
• NVMe drivers (p. 1552) must be installed
• Elastic Network Adapter (ENA) drivers (p. 1101) must be installed
• To get the best performance from your C6i instances, ensure that they have ENA driver version 2.2.9
or later. Using an ENA driver earlier than version 1.2 with these instances causes network interface
attachment failures. The following AMIs have a compatible ENA driver.
• Amazon Linux 2 with kernel 4.14.186
• Ubuntu 20.04 with kernel 5.4.0-1025-aws
• Red Hat Enterprise Linux 8.3 with kernel 4.18.0-240.1.1.el8_3.ARCH
• SUSE Linux Enterprise Server 15 SP2 with kernel 5.3.18-24.15.1
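One way to check the installed ENA driver version on a running Linux instance (the interface name may differ):

```shell
# query the loaded kernel module
modinfo ena | grep ^version
# or query the driver bound to a specific interface
ethtool -i eth0
```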
• Instances built on the Nitro System support a maximum of 28 attachments, including network
interfaces, EBS volumes, and NVMe instance store volumes. For more information, see Nitro
System volume limits (p. 1637).
• To get the best performance from your C6gn instances, ensure that they have ENA driver version 2.2.9
or later. Using an ENA driver earlier than version 1.2 with these instances causes network interface
attachment failures. The following AMIs have a compatible ENA driver.
• Amazon Linux 2 with kernel 4.14.186
• Ubuntu 20.04 with kernel 5.4.0-1025-aws
• Red Hat Enterprise Linux 8.3 with kernel 4.18.0-240.1.1.el8_3.ARCH
• SUSE Linux Enterprise Server 15 SP2 with kernel 5.3.18-24.15.1
• Traffic Mirroring is not supported on C6gn instances.
• To launch C6gn instances using any Linux distribution, use the latest AMI version and update to the
latest ENA driver. For earlier AMI versions, download the latest driver from GitHub.
• Launching a bare metal instance boots the underlying server, which includes verifying all hardware and
firmware components. This means that it can take 20 minutes from the time the instance enters the
running state until it becomes available over the network.
• Attaching or detaching EBS volumes or secondary network interfaces from a bare metal instance
requires PCIe native hotplug support. Amazon Linux 2 and the latest versions of the Amazon Linux
AMI support PCIe native hotplug, but earlier versions do not. You must enable the following Linux
kernel configuration options:
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEASPM=y
• Bare metal instances use a PCI-based serial device rather than an I/O port-based serial device. The
upstream Linux kernel and the latest Amazon Linux AMIs support this device. Bare metal instances also
provide an ACPI SPCR table to enable the system to automatically use the PCI-based serial device. The
latest Windows AMIs automatically use the PCI-based serial device.
• Instances built on the Nitro System should have acpid installed to support clean shutdown through API
requests.
• There is a limit on the total number of instances that you can launch in a Region, and there are
additional limits on some instance types. For more information, see How many instances can I run in
Amazon EC2? in the Amazon EC2 FAQ.
Memory optimized
• Distributed web scale cache stores that provide in-memory caching of key-value type data
(Memcached and Redis).
• In-memory databases using optimized data storage formats and analytics for business intelligence (for
example, SAP HANA).
• Applications performing real-time processing of big unstructured data (financial services, Hadoop/
Spark clusters).
• High-performance computing (HPC) and Electronic Design Automation (EDA) applications.
R5b instances support io2 Block Express volumes. All io2 volumes attached to an R5b instance during
or after launch automatically run on EBS Block Express. For more information, see io2 Block Express
volumes.
Bare metal instances, such as r5.metal, provide your applications with direct access to physical
resources of the host server, such as processors and memory.
These instances are powered by AWS Graviton2 processors and are ideal for running memory-intensive
workloads, such as the following:
Bare metal instances, such as r6g.metal, provide your applications with direct access to physical
resources of the host server, such as processors and memory.
R6i instances
These instances are ideal for running memory-intensive workloads, such as the following:
High memory instances
These instances offer 6 TiB, 9 TiB, 12 TiB, 18 TiB, and 24 TiB of memory per instance. They are designed
to run large in-memory databases, including production deployments of the SAP HANA in-memory
database.
For more information, see Amazon EC2 High Memory Instances and Storage Configuration for SAP
HANA. For information about supported operating systems, see Migrating SAP HANA on AWS to an EC2
High Memory Instance.
X1 instances
• In-memory databases such as SAP HANA, including SAP-certified support for Business Suite S/4HANA,
Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA.
For more information, see SAP HANA on the AWS Cloud.
• Big-data processing engines such as Apache Spark or Presto.
• High-performance computing (HPC) applications.
X1e instances
These instances are well suited for the following:
• High-performance databases.
• In-memory databases such as SAP HANA. For more information, see SAP HANA on the AWS Cloud.
• Memory-intensive enterprise applications.
X2gd instances
These instances are well suited for the following:
X2iezn instances
These instances are well suited for the following:
z1d instances
These instances deliver both high compute and high memory and are well-suited for the following:
z1d.metal instances provide your applications with direct access to physical resources of the host
server, such as processors and memory.
Contents
• Hardware specifications (p. 307)
Hardware specifications
The following is a summary of the hardware specifications for memory optimized instances. Each row
lists the instance type, the default number of vCPUs, and the memory in GiB.
r4.large 2 15.25
r4.xlarge 4 30.5
r4.2xlarge 8 61
r4.4xlarge 16 122
r4.8xlarge 32 244
r4.16xlarge 64 488
r5.large 2 16
r5.xlarge 4 32
r5.2xlarge 8 64
r5.4xlarge 16 128
r5.8xlarge 32 256
r5.12xlarge 48 384
r5.16xlarge 64 512
r5.24xlarge 96 768
r5.metal 96 768
r5a.large 2 16
r5a.xlarge 4 32
r5a.2xlarge 8 64
r5a.4xlarge 16 128
r5a.8xlarge 32 256
r5a.12xlarge 48 384
r5a.16xlarge 64 512
r5a.24xlarge 96 768
r5ad.large 2 16
r5ad.xlarge 4 32
r5ad.2xlarge 8 64
r5ad.4xlarge 16 128
r5ad.8xlarge 32 256
r5ad.12xlarge 48 384
r5ad.16xlarge 64 512
r5ad.24xlarge 96 768
r5b.large 2 16
r5b.xlarge 4 32
r5b.2xlarge 8 64
r5b.4xlarge 16 128
r5b.8xlarge 32 256
r5b.12xlarge 48 384
r5b.16xlarge 64 512
r5b.24xlarge 96 768
r5b.metal 96 768
r5d.large 2 16
r5d.xlarge 4 32
r5d.2xlarge 8 64
r5d.4xlarge 16 128
r5d.8xlarge 32 256
r5d.12xlarge 48 384
r5d.16xlarge 64 512
r5d.24xlarge 96 768
r5d.metal 96 768
r5dn.large 2 16
r5dn.xlarge 4 32
r5dn.2xlarge 8 64
r5dn.4xlarge 16 128
r5dn.8xlarge 32 256
r5dn.12xlarge 48 384
r5dn.16xlarge 64 512
r5dn.24xlarge 96 768
r5dn.metal 96 768
r5n.large 2 16
r5n.xlarge 4 32
r5n.2xlarge 8 64
r5n.4xlarge 16 128
r5n.8xlarge 32 256
r5n.12xlarge 48 384
r5n.16xlarge 64 512
r5n.24xlarge 96 768
r5n.metal 96 768
r6g.medium 1 8
r6g.large 2 16
r6g.xlarge 4 32
r6g.2xlarge 8 64
r6g.4xlarge 16 128
r6g.8xlarge 32 256
r6g.12xlarge 48 384
r6g.16xlarge 64 512
r6gd.medium 1 8
r6gd.large 2 16
r6gd.xlarge 4 32
r6gd.2xlarge 8 64
r6gd.4xlarge 16 128
r6gd.8xlarge 32 256
r6gd.12xlarge 48 384
r6gd.16xlarge 64 512
r6i.large 2 16
r6i.xlarge 4 32
r6i.2xlarge 8 64
r6i.4xlarge 16 128
r6i.8xlarge 32 256
r6i.12xlarge 48 384
r6i.16xlarge 64 512
r6i.24xlarge 96 768
x1.16xlarge 64 976
x1e.xlarge 4 122
x1e.2xlarge 8 244
x1e.4xlarge 16 488
x1e.8xlarge 32 976
x1e.16xlarge 64 1,952
x2gd.medium 1 16
x2gd.large 2 32
x2gd.xlarge 4 64
x2gd.2xlarge 8 128
x2gd.4xlarge 16 256
x2gd.8xlarge 32 512
x2gd.12xlarge 48 768
x2gd.16xlarge 64 1,024
x2gd.metal 64 1,024
x2iezn.2xlarge 8 256
x2iezn.4xlarge 16 512
x2iezn.6xlarge 24 768
x2iezn.8xlarge 32 1,024
x2iezn.12xlarge 48 1,536
x2iezn.metal 48 1,536
z1d.large 2 16
z1d.xlarge 4 32
z1d.2xlarge 8 64
z1d.3xlarge 12 96
z1d.6xlarge 24 192
z1d.12xlarge 48 384
z1d.metal 48 384
For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon
EC2 Instance Types.
For more information about specifying CPU options, see Optimize CPU options (p. 676).
Memory performance
X1 instances include Intel Scalable Memory Buffers, providing 300 GiB/s of sustainable memory-read
bandwidth and 140 GiB/s of sustainable memory-write bandwidth.
For more information about how much RAM can be enabled for memory optimized instances, see
Hardware specifications (p. 307).
Memory optimized instances have high memory and require 64-bit HVM AMIs to take advantage of that
capacity. HVM AMIs provide superior performance in comparison to paravirtual (PV) AMIs on memory
optimized instances. For more information, see Linux AMI virtualization types (p. 98).
Instance performance
Memory optimized instances enable increased cryptographic performance through the latest Intel AES-
NI feature, support Intel Transactional Synchronization Extensions (TSX) to boost the performance of in-
memory transactional data processing, and support Advanced Vector Extensions 2 (Intel AVX2) processor
instructions to expand most integer commands to 256 bits.
Some memory optimized instances provide the ability to control processor C-states and P-states on
Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states control
the desired performance (measured by CPU frequency) from a core. For more information, see Processor
state control for your EC2 instance (p. 663).
Network performance
You can enable enhanced networking on supported instance types to provide lower latencies, lower
network jitter, and higher packet-per-second (PPS) performance. Most applications do not consistently
need a high level of network performance, but can benefit from access to increased bandwidth when
they send or receive data. For more information, see Enhanced networking on Linux (p. 1100).
The following is a summary of network performance for memory optimized instances that support
enhanced networking. Each row lists the instance type, the baseline bandwidth in Gbps, and the burst
bandwidth in Gbps.
* Instances of this type launched after March 12, 2020 provide network performance of 100 Gbps.
Instances of this type launched before March 12, 2020 might only provide network performance of 25
Gbps. To ensure that instances launched before March 12, 2020 have a network performance of 100
Gbps, contact your account team to upgrade your instance at no additional cost.
† These instances have a baseline bandwidth and can use a network I/O credit mechanism to burst
beyond their baseline bandwidth on a best effort basis. For more information, see instance network
bandwidth (p. 1098).
r4.large .75 10
r4.xlarge 1.25 10
r4.2xlarge 2.5 10
r4.4xlarge 5 10
r5.large .75 10
r5.xlarge 1.25 10
r5.2xlarge 2.5 10
r5.4xlarge 5 10
r5a.large .75 10
r5a.xlarge 1.25 10
r5a.2xlarge 2.5 10
r5a.4xlarge 5 10
r5a.8xlarge 7.5 10
r5ad.large .75 10
r5ad.xlarge 1.25 10
r5ad.2xlarge 2.5 10
r5ad.4xlarge 5 10
r5ad.8xlarge 7.5 10
r5b.large .75 10
r5b.xlarge 1.25 10
r5b.2xlarge 2.5 10
r5b.4xlarge 5 10
r5d.large .75 10
r5d.xlarge 1.25 10
r5d.2xlarge 2.5 10
r5d.4xlarge 5 10
r5dn.large 2.1 25
r5dn.xlarge 4.1 25
r5dn.2xlarge 8.125 25
r5dn.4xlarge 16.25 25
r5n.large 2.1 25
r5n.xlarge 4.1 25
r5n.2xlarge 8.125 25
r5n.4xlarge 16.25 25
r6g.medium .5 10
r6g.large .75 10
r6g.xlarge 1.25 10
r6g.2xlarge 2.5 10
r6g.4xlarge 5 10
r6gd.medium .5 10
r6gd.large .75 10
r6gd.xlarge 1.25 10
r6gd.2xlarge 2.5 10
r6gd.4xlarge 5 10
x1e.xlarge .625 10
x1e.2xlarge 1.25 10
x1e.4xlarge 2.5 10
x1e.8xlarge 5 10
x2gd.medium .5 10
x2gd.large .75 10
x2gd.xlarge 1.25 10
x2gd.2xlarge 2.5 10
x2gd.4xlarge 5 10
x2iezn.2xlarge 12.5 25
x2iezn.4xlarge 15 25
z1d.large .75 10
z1d.xlarge 1.25 10
z1d.2xlarge 2.5 10
z1d.3xlarge 5 10
SSD I/O performance
As you fill the SSD-based instance store volumes for your instance, the number of write IOPS that
you can achieve decreases. This is due to the extra work the SSD controller must do to find available
space, rewrite existing data, and erase unused space so that it can be rewritten. This process of
garbage collection results in internal write amplification to the SSD, expressed as the ratio of SSD write
operations to user write operations. This decrease in performance is even larger if the write operations
are not in multiples of 4,096 bytes or not aligned to a 4,096-byte boundary. If you write a smaller
amount of bytes or bytes that are not aligned, the SSD controller must read the surrounding data and
store the result in a new location. This pattern results in significantly increased write amplification,
increased latency, and dramatically reduced I/O performance.
SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy
is to reserve space in the SSD instance storage so that the controller can more efficiently manage the
space available for write operations. This is called over-provisioning. The SSD-based instance store
volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write
amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller
can use it for over-provisioning. This decreases the storage that you can use, but increases performance
even if the disk is close to full capacity.
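As a sketch, you can leave the 10% unpartitioned when you first partition the volume. The device name /dev/nvme1n1 below is an assumption; substitute your actual instance store device. The block no-ops if the device is not present.

```shell
# Create a GPT label and a single partition covering 90% of the device,
# leaving the last 10% unpartitioned for SSD over-provisioning.
DEV=${DEV:-/dev/nvme1n1}   # assumed instance store device name
if [ -b "$DEV" ]; then
  sudo parted "$DEV" --script mklabel gpt mkpart primary 0% 90%
else
  echo "skipping: $DEV is not present on this system"
fi
```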
For instance store volumes that support TRIM, you can use the TRIM command to notify the SSD
controller whenever you no longer need data that you've written. This provides the controller with more
free space, which can reduce write amplification and increase performance. For more information, see
Instance store volume TRIM support (p. 1628).
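On Linux, TRIM is typically issued with fstrim against a mounted filesystem. The mount point below is a placeholder, and the block no-ops if nothing is mounted there.

```shell
# One-off TRIM pass that tells the SSD controller which blocks are unused.
MOUNT_POINT=${MOUNT_POINT:-/mnt/instance-store}   # placeholder mount point
if mountpoint -q "$MOUNT_POINT"; then
  sudo fstrim -v "$MOUNT_POINT"
else
  echo "skipping: $MOUNT_POINT is not a mounted filesystem"
fi
```

Alternatively, mounting the filesystem with the discard option makes it issue TRIM automatically as files are deleted.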
Instance features
The following is a summary of features for memory optimized instances.
Instance type EBS only NVMe EBS Instance store Placement group
R4 Yes No No Yes
X1 No No SSD Yes
** All io2 volumes attached to an R5b instance during or after launch automatically run on EBS Block
Express. For more information, see io2 Block Express volumes.
Release notes
• R4 instances feature up to 64 vCPUs and are powered by two AWS-customized Intel Xeon processors
based on E5-2686 v4 that feature high memory bandwidth and larger L3 caches to boost the
performance of in-memory applications.
• R5, R5b, and R5d instances feature a 3.1 GHz Intel Xeon Platinum 8000 series processor from either
the first generation (Skylake-SP) or second generation (Cascade Lake).
• R5a and R5ad instances feature a 2.5 GHz AMD EPYC 7000 series processor.
• R6g and R6gd instances feature an AWS Graviton2 processor based on 64-bit Arm architecture.
• High memory instances (u-6tb1.metal, u-9tb1.metal, and u-12tb1.metal) are the first
instances to be powered by an eight-socket platform with the latest generation Intel Xeon Platinum
8176M (Skylake) processors that are optimized for mission-critical enterprise workloads. High Memory
instances with 18 TB and 24 TB of memory (u-18tb1.metal and u-24tb1.metal) are the first
instances powered by an 8-socket platform with 2nd Generation Intel Xeon Scalable 8280L (Cascade
Lake) processors.
• X1e and X1 instances feature up to 128 vCPUs and are powered by four Intel Xeon E7-8880 v3
processors that feature high memory bandwidth and larger L3 caches to boost the performance of in-memory applications.
• X2iezn instances feature a custom Intel Xeon Scalable processor (Cascade Lake).
• Instances built on the Nitro System have the following requirements:
• NVMe drivers (p. 1552) must be installed
• Elastic Network Adapter (ENA) drivers (p. 1101) must be installed
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEASPM=y
• Bare metal instances use a PCI-based serial device rather than an I/O port-based serial device. The
upstream Linux kernel and the latest Amazon Linux AMIs support this device. Bare metal instances also
provide an ACPI SPCR table to enable the system to automatically use the PCI-based serial device. The
latest Windows AMIs automatically use the PCI-based serial device.
• You can't launch X1 instances using a Windows Server 2008 SP2 64-bit AMI, except for x1.16xlarge
instances.
• You can't launch X1e instances using a Windows Server 2008 SP2 64-bit AMI.
• With earlier versions of the Windows Server 2008 R2 64-bit AMI, you can't launch r4.large and
r4.4xlarge instances. If you experience this issue, update to the latest version of this AMI.
• There is a limit on the total number of instances that you can launch in a Region, and there are
additional limits on some instance types. For more information, see How many instances can I run in
Amazon EC2? in the Amazon EC2 FAQ.
D2 instances
These instances offer scale-out instance storage and are well suited for the following:
H1 instances
These instances are well suited for the following:
Bare metal instances provide your applications with direct access to physical resources of the host server,
such as processors and memory.
Im4gn instances
These instances are well suited for workloads that require high random I/O performance at a low latency,
such as the following:
• Relational databases
• NoSQL databases
• Search
• Distributed file systems
For more information, see Amazon EC2 Im4gn and Is4gen Instances.
Is4gen instances
These instances are well suited for workloads that require high random I/O performance at a low latency,
such as the following:
• NoSQL databases
• Indexing
• Streaming
• Caching
• Warm storage
For more information, see Amazon EC2 Im4gn and Is4gen Instances.
Hardware specifications
The primary data storage for D2, D3, and D3en instances is HDD instance store volumes. The primary
data storage for I3 and I3en instances is non-volatile memory express (NVMe) SSD instance store
volumes.
Instance store volumes persist only for the life of the instance. When you stop, hibernate, or terminate
an instance, the applications and data in its instance store volumes are erased. We recommend that you
regularly back up or replicate important data in your instance store volumes. For more information, see
Amazon EC2 instance store (p. 1613) and SSD instance store volumes (p. 1627).
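A minimal replication sketch, assuming important data lives under a hypothetical /mnt/instance-store mount and is mirrored to a durable EBS-backed path (both paths are placeholders; the block no-ops if either is missing):

```shell
SRC=${SRC:-/mnt/instance-store}   # placeholder: instance store mount
DST=${DST:-/mnt/ebs-backup}       # placeholder: durable EBS-backed path
if [ -d "$SRC" ] && [ -d "$DST" ]; then
  # -a preserves permissions and timestamps; --delete mirrors removals
  rsync -a --delete "$SRC/" "$DST/"
else
  echo "skipping: $SRC or $DST does not exist"
fi
```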
The following is a summary of the hardware specifications for storage optimized instances.
Instance type Default vCPUs Memory (GiB)
d2.xlarge 4 30.5
d2.2xlarge 8 61
d2.4xlarge 16 122
d2.8xlarge 36 244
d3.xlarge 4 32
d3.2xlarge 8 64
d3.4xlarge 16 128
d3.8xlarge 32 256
d3en.large 2 8
d3en.xlarge 4 16
d3en.2xlarge 8 32
d3en.4xlarge 16 64
d3en.6xlarge 24 96
d3en.8xlarge 32 128
d3en.12xlarge 48 192
h1.2xlarge 8 32
h1.4xlarge 16 64
h1.8xlarge 32 128
h1.16xlarge 64 256
i3.large 2 15.25
i3.xlarge 4 30.5
i3.2xlarge 8 61
i3.4xlarge 16 122
i3.8xlarge 32 244
i3.16xlarge 64 488
i3.metal 72 512
i3en.large 2 16
i3en.xlarge 4 32
i3en.2xlarge 8 64
i3en.3xlarge 12 96
i3en.6xlarge 24 192
i3en.12xlarge 48 384
i3en.24xlarge 96 768
i3en.metal 96 768
im4gn.large 2 8
im4gn.xlarge 4 16
im4gn.2xlarge 8 32
im4gn.4xlarge 16 64
im4gn.8xlarge 32 128
im4gn.16xlarge 64 256
is4gen.medium 1 6
is4gen.large 2 12
is4gen.xlarge 4 24
is4gen.2xlarge 8 48
is4gen.4xlarge 16 96
is4gen.8xlarge 32 192
For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon
EC2 Instance Types.
For more information about specifying CPU options, see Optimize CPU options (p. 676).
Instance performance
To ensure the best disk throughput performance from your instance on Linux, we recommend that you
use the most recent version of Amazon Linux 2 or the Amazon Linux AMI.
For instances with NVMe instance store volumes, you must use a Linux AMI with kernel version 4.4 or
later. Otherwise, your instance will not achieve the maximum IOPS performance available.
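You can check whether the running kernel meets the 4.4 minimum with a simple version comparison (a sketch using sort -V):

```shell
# Compare the running kernel version against the 4.4 minimum needed for
# full NVMe instance store performance.
required="4.4"
current="$(uname -r | cut -d. -f1,2)"
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "kernel $current meets the $required minimum for NVMe instance store"
else
  echo "kernel $current is older than $required; expect reduced IOPS"
fi
```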
D2 instances provide the best disk performance when you use a Linux kernel that supports persistent
grants, an extension to the Xen block ring protocol that significantly improves disk throughput and
scalability. For more information about persistent grants, see this article in the Xen Project Blog.
EBS-optimized instances enable you to get consistently high performance for your EBS volumes by
eliminating contention between Amazon EBS I/O and other network traffic from your instance. Some
storage optimized instances are EBS-optimized by default at no additional cost. For more information,
see Amazon EBS–optimized instances (p. 1556).
Some storage optimized instance types provide the ability to control processor C-states and P-states on
Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states control the
desired performance (in CPU frequency) from a core. For more information, see Processor state control
for your EC2 instance (p. 663).
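On instance types that expose them, the current C-state and P-state configuration can be inspected through sysfs. The paths below are the standard Linux cpuidle/cpufreq interfaces and may be absent on some systems, so the block degrades gracefully.

```shell
CPU0=/sys/devices/system/cpu/cpu0
# Idle (C) states that the kernel can place core 0 into
if [ -d "$CPU0/cpuidle" ]; then
  cat "$CPU0"/cpuidle/state*/name
else
  echo "cpuidle interface not available"
fi
# Frequency (P-state) driver and governor for core 0
if [ -d "$CPU0/cpufreq" ]; then
  cat "$CPU0/cpufreq/scaling_driver" "$CPU0/cpufreq/scaling_governor"
else
  echo "cpufreq interface not available"
fi
```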
Network performance
You can enable enhanced networking on supported instance types to provide lower latencies, lower
network jitter, and higher packet-per-second (PPS) performance. Most applications do not consistently
need a high level of network performance, but can benefit from access to increased bandwidth when
they send or receive data. For more information, see Enhanced networking on Linux (p. 1100).
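To confirm that enhanced networking (the ENA driver) is active on an interface, you can check the bound driver with ethtool. The interface name eth0 is an assumption; newer AMIs may use names such as ens5.

```shell
IFACE=${IFACE:-eth0}   # assumed network interface name
if command -v ethtool >/dev/null 2>&1 && [ -e "/sys/class/net/$IFACE" ]; then
  # "driver: ena" indicates enhanced networking is in use
  ethtool -i "$IFACE" | grep '^driver'
else
  echo "skipping: ethtool or interface $IFACE not available"
fi
```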
The following is a summary of network performance for storage optimized instances that support
enhanced networking.
i3en.24xlarge | i3en.metal | im4gn.16xlarge 100 Gbps ENA (p. 1101), EFA (p. 1128)
† These instances have a baseline bandwidth and can use a network I/O credit mechanism to burst
beyond their baseline bandwidth on a best effort basis. For more information, see instance network
bandwidth (p. 1098).
Instance type Baseline bandwidth (Gbps) Burst bandwidth (Gbps)
d3.xlarge 3 15
d3.2xlarge 6 15
d3.4xlarge 12.5 15
d3en.large 3 25
d3en.xlarge 6 25
d3en.2xlarge 12.5 25
i3.large .75 10
i3.xlarge 1.25 10
i3.2xlarge 2.5 10
i3.4xlarge 5 10
i3en.large 2.1 25
i3en.xlarge 4.2 25
i3en.2xlarge 8.4 25
i3en.3xlarge 12.5 25
im4gn.large 3.125 25
im4gn.xlarge 6.25 25
im4gn.2xlarge 12.5 25
is4gen.medium 1.563 25
is4gen.large 3.125 25
is4gen.xlarge 6.25 25
is4gen.2xlarge 12.5 25
As you fill your SSD-based instance store volumes, the I/O performance that you get decreases. This is
due to the extra work that the SSD controller must do to find available space, rewrite existing data, and
erase unused space so that it can be rewritten. This process of garbage collection results in internal write
amplification to the SSD, expressed as the ratio of SSD write operations to user write operations. This
decrease in performance is even larger if the write operations are not in multiples of 4,096 bytes or not
aligned to a 4,096-byte boundary. If you write a smaller number of bytes or bytes that are not aligned,
the SSD controller must read the surrounding data and store the result in a new location. This pattern
results in significantly increased write amplification, increased latency, and dramatically reduced I/O
performance.
SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy
is to reserve space in the SSD instance storage so that the controller can more efficiently manage the
space available for write operations. This is called over-provisioning. The SSD-based instance store
volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write
amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller
can use it for over-provisioning. This decreases the storage that you can use, but increases performance
even if the disk is close to full capacity.
For instance store volumes that support TRIM, you can use the TRIM command to notify the SSD
controller whenever you no longer need data that you've written. This provides the controller with more
free space, which can reduce write amplification and increase performance. For more information, see
Instance store volume TRIM support (p. 1628).
Instance features
The following is a summary of features for storage optimized instances:
Instance type EBS only Instance store Placement group
D2 No HDD Yes
D3 No HDD * Yes
H1 No HDD * Yes
I3 No NVMe * Yes
The following Linux AMIs support launching d2.8xlarge instances with 36 vCPUs:
If you must use a different AMI for your application, and your d2.8xlarge instance launch does not
complete successfully (for example, if your instance status changes to stopped during launch with a
Client.InstanceInitiatedShutdown state transition reason), modify your instance as described in
the following procedure to support more than 32 vCPUs so that you can use the d2.8xlarge instance
type.
1. Launch a D2 instance using your AMI, choosing any D2 instance type other than d2.8xlarge.
2. Update the kernel to the latest version by following your operating system-specific instructions. For
example, for RHEL 6, use the command sudo yum update -y kernel.
a. Change the instance type of your stopped instance to any D2 instance type other than
d2.8xlarge (choose Actions, Instance settings, Change instance type, and then follow the
directions).
b. Add the maxcpus=32 option to your boot kernel parameters by following your operating
system-specific instructions. For example, for RHEL 6, edit the /boot/grub/menu.lst file and
add the following option to the most recent and active kernel entry:
default=0
timeout=1
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-504.3.3.el6.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-2.6.32-504.3.3.el6.x86_64 maxcpus=32 console=ttyS0 ro
root=UUID=9996863e-b964-47d3-a33b-3920974fdbd9 rd_NO_LUKS KEYBOARDTYPE=pc
KEYTABLE=us LANG=en_US.UTF-8 xen_blkfront.sda_is_xvda=1 console=ttyS0,115200n8
Release notes
• Instances built on the Nitro System (p. 232) have the following requirements:
• NVMe drivers (p. 1552) must be installed
• Elastic Network Adapter (ENA) drivers (p. 1101) must be installed
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEASPM=y
• Bare metal instances use a PCI-based serial device rather than an I/O port-based serial device. The
upstream Linux kernel and the latest Amazon Linux AMIs support this device. Bare metal instances also
provide an ACPI SPCR table to enable the system to automatically use the PCI-based serial device. The
latest Windows AMIs automatically use the PCI-based serial device.
• With FreeBSD AMIs, bare metal instances take nearly an hour to boot and I/O to the local NVMe
storage does not complete. As a workaround, add the following line to /boot/loader.conf and
reboot:
hw.nvme.per_cpu_io_queues="0"
• The d2.8xlarge instance type has 36 vCPUs, which might cause launch issues in some
Linux operating systems that have a vCPU limit of 32. For more information, see Support for
vCPUs (p. 328).
• The d3.8xlarge and d3en.12xlarge instances support a maximum of three attachments, including
the root volume. If you exceed the attachment limit when you add a network interface or EBS volume,
this causes attachment issues on your instance.
• There is a limit on the total number of instances that you can launch in a Region, and there are
additional limits on some instance types. For more information, see How many instances can I run in
Amazon EC2? in the Amazon EC2 FAQ.
If you require high processing capability, you'll benefit from using accelerated computing instances,
which provide access to hardware-based compute accelerators such as Graphics Processing Units (GPUs),
Field Programmable Gate Arrays (FPGAs), or AWS Inferentia.
Contents
• GPU instances (p. 330)
• Video transcoding instances (p. 332)
• Instances with AWS Inferentia (p. 332)
• Instances with Habana accelerators (p. 333)
• FPGA instances (p. 333)
• Hardware specifications (p. 334)
• Instance performance (p. 336)
• Network performance (p. 336)
• SSD I/O performance (p. 338)
• Instance features (p. 339)
• Release notes (p. 339)
• Install NVIDIA drivers on Linux instances (p. 340)
• Install AMD drivers on Linux instances (p. 357)
• Activate NVIDIA GRID Virtual Applications (p. 362)
• Optimize GPU settings (p. 362)
GPU instances
GPU-based instances provide access to NVIDIA GPUs with thousands of compute cores. You can use
these instances to accelerate scientific, engineering, and rendering applications by leveraging the
CUDA or Open Computing Language (OpenCL) parallel computing frameworks. You can also use them
for graphics applications, including game streaming, 3-D application streaming, and other graphics
workloads.
G5 instances
G5 instances use NVIDIA A10G GPUs and provide high performance for graphics-intensive applications
such as remote workstations, video rendering, and cloud gaming, and deep learning models for
applications such as natural language processing, computer vision, and recommendation engines. These
instances feature up to 8 NVIDIA A10G GPUs, second generation AMD EPYC processors, up to 100 Gbps of
network bandwidth, and up to 7.6 TB of local NVMe SSD storage.
G5g instances
G5g instances use NVIDIA T4G GPUs and provide high performance for graphics-intensive applications
such as game streaming and rendering that leverage industry-standard APIs, such as OpenGL and
Vulkan. These instances are also suitable for running deep learning models for applications such as
natural language processing and computer vision. These instances feature up to 2 NVIDIA T4G Tensor
Core GPUs, AWS Graviton2 processors, and up to 25 Gbps of network bandwidth.
G4ad instances use AMD Radeon Pro V520 GPUs and 2nd generation AMD EPYC processors, and are well-
suited for graphics applications such as remote graphics workstations, game streaming, and rendering
that leverage industry-standard APIs such as OpenGL, DirectX, and Vulkan. They provide up to 4 AMD
Radeon Pro V520 GPUs, 64 vCPUs, 25 Gbps networking, and 2.4 TB local NVMe-based SSD storage.
G4dn instances use NVIDIA Tesla GPUs and provide a cost-effective, high-performance platform for
general purpose GPU computing using CUDA or machine learning frameworks, along with graphics
applications using DirectX or OpenGL. These instances provide high-bandwidth networking, powerful
half and single-precision floating-point capabilities, along with INT8 and INT4 precisions. Each GPU has
16 GiB of GDDR6 memory, making G4dn instances well-suited for machine learning inference, video
transcoding, and graphics applications like remote graphics workstations and game streaming in the
cloud.
G4dn instances support NVIDIA GRID Virtual Workstation. For more information, see NVIDIA Marketplace
offerings.
G3 instances
These instances use NVIDIA Tesla M60 GPUs and provide a cost-effective, high-performance platform
for graphics applications using DirectX or OpenGL. G3 instances also provide NVIDIA GRID Virtual
Workstation features, such as support for four monitors with resolutions up to 4096x2160, and NVIDIA
GRID Virtual Applications. G3 instances are well-suited for applications such as 3D visualizations,
graphics-intensive remote workstations, 3D rendering, video encoding, virtual reality, and other server-
side graphics workloads requiring massively parallel processing power.
G3 instances support NVIDIA GRID Virtual Workstation and NVIDIA GRID Virtual Applications. To activate
either of these features, see Activate NVIDIA GRID Virtual Applications (p. 362).
G2 instances
These instances use NVIDIA GRID K520 GPUs and provide a cost-effective, high-performance platform
for graphics applications using DirectX or OpenGL. NVIDIA GRID GPUs also support NVIDIA’s fast capture
and encode API operations. Example applications include video creation services, 3D visualizations,
streaming graphics-intensive applications, and other server-side graphics workloads.
P4d instances
These instances use NVIDIA A100 GPUs and provide a high-performance platform for machine learning
and HPC workloads. P4d instances offer 400 Gbps of aggregate network bandwidth throughput and
support Elastic Fabric Adapter (EFA). They are the first EC2 instances to provide multiple network cards.
P4d instances support NVIDIA NVSwitch GPU interconnect and NVIDIA GPUDirect RDMA.
P3 instances
These instances use NVIDIA Tesla V100 GPUs and are designed for general purpose GPU computing
using the CUDA or OpenCL programming models or through a machine learning framework. P3
instances provide high-bandwidth networking, powerful half, single, and double-precision floating-
point capabilities, and up to 32 GiB of memory per GPU, which makes them ideal for deep learning,
computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics,
rendering, and other server-side GPU compute workloads. Tesla V100 GPUs do not support graphics
mode.
P3 instances support NVIDIA NVLink peer to peer transfers. For more information, see NVIDIA NVLink.
P2 instances
P2 instances use NVIDIA Tesla K80 GPUs and are designed for general purpose GPU computing using
the CUDA or OpenCL programming models. P2 instances provide high-bandwidth networking, powerful
single and double precision floating-point capabilities, and 12 GiB of memory per GPU, which makes
them ideal for deep learning, graph databases, high-performance databases, computational fluid
dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other
server-side GPU compute workloads.
P2 instances support NVIDIA GPUDirect peer to peer transfers. For more information, see NVIDIA
GPUDirect.
VT1 instances
VT1 instances feature Xilinx Alveo U30 media accelerators and are designed for live video transcoding
workloads. These instances offer up to 8 Xilinx Alveo U30 acceleration cards, up to 192 GB of
system memory, and up to 25 Gbps of network bandwidth. VT1 instances feature H.264/AVC and H.265/
HEVC codecs and support up to 4K UHD resolutions for multi-stream video transcoding.
• Launch a VT1 instance using the Xilinx U30 AMIs on AWS Marketplace.
• Launch a VT1 instance using your own AMI and install the Xilinx U30 drivers and Xilinx Video SDK.
• Launch a container instance using a VT1 instance and an Amazon ECS-optimized AMI.
• Create an Amazon EKS cluster with nodes running VT1 instances.
AWS Inferentia instances are optimized for machine learning inference applications such as natural
language processing, object detection and classification, content personalization and filtering, and
speech recognition.
• Use SageMaker, a fully managed service that is the easiest way to get started with machine learning
models. For more information, see Compile and deploy a TensorFlow model on Inf1 using SageMaker
Neo.
• Launch an Inf1 instance using the Deep Learning AMI. For more information, see AWS Inferentia with
DLAMI in the AWS Deep Learning AMI Developer Guide.
• Launch an Inf1 instance using your own AMI and install the AWS Neuron SDK, which enables you to
compile, run, and profile deep learning models for AWS Inferentia.
• Launch a container instance using an Inf1 instance and an Amazon ECS-optimized AMI. For more
information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer
Guide.
• Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia
support in the Amazon EKS User Guide.
Inf1 instances
Inf1 instances use AWS Inferentia machine learning inference chips. Inferentia was developed to enable
highly cost-effective low latency inference performance at any scale.
DL1 instances
DL1 instances use Habana Gaudi accelerators. They offer up to 400 Gbps of aggregate network
bandwidth, along with 32 GB of high bandwidth memory (HBM) per accelerator. DL1 instances are
designed to provide high performance and cost efficiency for training deep learning models.
FPGA instances
FPGA-based instances provide access to large FPGAs with millions of parallel system logic cells. You can
use FPGA-based accelerated computing instances to accelerate workloads such as genomics, financial
analysis, real-time video processing, big data analysis, and security workloads by leveraging custom
hardware accelerations. You can develop these accelerations using hardware description languages such
as Verilog or VHDL, or by using higher-level parallel computing frameworks such as OpenCL.
You can either develop your own hardware acceleration code or purchase hardware accelerations through
the AWS Marketplace.
The FPGA Developer AMI provides the tools for developing, testing, and building AFIs. You can use the
FPGA Developer AMI on any EC2 instance with at least 32 GB of system memory (for example, C5, M4,
and R4 instances).
For more information, see the documentation for the AWS FPGA Hardware Development Kit.
F1 instances
F1 instances use Xilinx UltraScale+ VU9P FPGAs and are designed to accelerate computationally
intensive algorithms, such as data-flow or highly parallel operations not suited to general purpose
CPUs. Each FPGA in an F1 instance contains approximately 2.5 million logic elements and approximately
6,800 Digital Signal Processing (DSP) engines, along with 64 GiB of local DDR ECC protected memory,
connected to the instance by a dedicated PCIe Gen3 x16 connection. F1 instances provide local NVMe
SSD volumes.
Developers can use the FPGA Developer AMI and AWS Hardware Developer Kit to create custom
hardware accelerations for use on F1 instances. The FPGA Developer AMI includes development tools for
full-cycle FPGA development in the cloud. Using these tools, developers can create and share Amazon
FPGA Images (AFIs) that can be loaded onto the FPGA of an F1 instance.
Hardware specifications
The following is a summary of the hardware specifications for accelerated computing instances.
Instance type Default vCPUs Memory (GiB) Accelerators
dl1.24xlarge 96 768 8
f1.2xlarge 8 122 1
f1.4xlarge 16 244 2
f1.16xlarge 64 976 8
g2.2xlarge 8 15 1
g2.8xlarge 32 60 4
g3s.xlarge 4 30.5 1
g3.4xlarge 16 122 1
g3.8xlarge 32 244 2
g3.16xlarge 64 488 4
g4ad.xlarge 4 16 1
g4ad.2xlarge 8 32 1
g4ad.4xlarge 16 64 1
g4ad.8xlarge 32 128 2
g4ad.16xlarge 64 256 4
g4dn.xlarge 4 16 1
g4dn.2xlarge 8 32 1
g4dn.4xlarge 16 64 1
g4dn.8xlarge 32 128 1
g4dn.12xlarge 48 192 4
g4dn.16xlarge 64 256 1
g4dn.metal 96 384 8
g5.xlarge 4 16 1
g5.2xlarge 8 32 1
g5.4xlarge 16 64 1
g5.8xlarge 32 128 1
g5.12xlarge 48 192 4
g5.16xlarge 64 256 1
g5.24xlarge 96 384 4
g5g.xlarge 4 8 1
g5g.2xlarge 8 16 1
g5g.4xlarge 16 32 1
g5g.8xlarge 32 64 1
g5g.16xlarge 64 128 2
inf1.xlarge 4 8 1
inf1.2xlarge 8 16 1
inf1.6xlarge 24 48 4
inf1.24xlarge 96 192 16
p2.xlarge 4 61 1
p2.8xlarge 32 488 8
p2.16xlarge 64 732 16
p3.2xlarge 8 61 1
p3.8xlarge 32 244 4
p3.16xlarge 64 488 8
p3dn.24xlarge 96 768 8
p4d.24xlarge 96 1,152 8
vt1.3xlarge 12 24 2
vt1.6xlarge 24 48 4
vt1.24xlarge 96 192 16
For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon
EC2 Instance Types.
For more information about specifying CPU options, see Optimize CPU options (p. 676).
Instance performance
There are several GPU setting optimizations that you can perform to achieve the best performance on
your instances. For more information, see Optimize GPU settings (p. 362).
EBS-optimized instances enable you to get consistently high performance for your EBS volumes
by eliminating contention between Amazon EBS I/O and other network traffic from your instance.
Some accelerated computing instances are EBS-optimized by default at no additional cost. For more
information, see Amazon EBS–optimized instances (p. 1556).
Some accelerated computing instance types provide the ability to control processor C-states and P-
states on Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states
control the desired performance (in CPU frequency) from a core. For more information, see Processor
state control for your EC2 instance (p. 663).
Network performance
You can enable enhanced networking on supported instance types to provide lower latencies, lower
network jitter, and higher packet-per-second (PPS) performance. Most applications do not consistently
need a high level of network performance, but can benefit from access to increased bandwidth when
they send or receive data. For more information, see Enhanced networking on Linux (p. 1100).
The following is a summary of network performance for accelerated computing instances that support
enhanced networking.
† These instances have a baseline bandwidth and can use a network I/O credit mechanism to burst
beyond their baseline bandwidth on a best effort basis. For more information, see instance network
bandwidth (p. 1098).
Instance type Baseline bandwidth (Gbps) Burst bandwidth (Gbps)
f1.2xlarge 2.5 10
f1.4xlarge 5 10
g3.4xlarge 5 10
g3s.xlarge 1.25 10
g4ad.xlarge 2 10
g4ad.2xlarge 4.167 10
g4ad.4xlarge 8.333 10
g4dn.xlarge 5 25
g4dn.2xlarge 10 25
g4dn.4xlarge 20 25
g5.xlarge 2.5 10
g5.2xlarge 5 10
g5.4xlarge 10 25
g5g.xlarge 1.25 10
g5g.2xlarge 2.5 10
g5g.4xlarge 5 10
p3.2xlarge 2.5 10
vt1.3xlarge 12.5 25
As you fill the SSD-based instance store volumes for your instance, the number of write IOPS that
you can achieve decreases. This is due to the extra work the SSD controller must do to find available
space, rewrite existing data, and erase unused space so that it can be rewritten. This process of
garbage collection results in internal write amplification to the SSD, expressed as the ratio of SSD write
operations to user write operations. This decrease in performance is even larger if the write operations
are not in multiples of 4,096 bytes or not aligned to a 4,096-byte boundary. If you write a smaller
number of bytes or bytes that are not aligned, the SSD controller must read the surrounding data and
store the result in a new location. This pattern results in significantly increased write amplification,
increased latency, and dramatically reduced I/O performance.
SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy
is to reserve space in the SSD instance storage so that the controller can more efficiently manage the
space available for write operations. This is called over-provisioning. The SSD-based instance store
volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write
amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller
can use it for over-provisioning. This decreases the storage that you can use, but increases performance
even if the disk is close to full capacity.
For instance store volumes that support TRIM, you can use the TRIM command to notify the SSD
controller whenever you no longer need data that you've written. This provides the controller with more
free space, which can reduce write amplification and increase performance. For more information, see
Instance store volume TRIM support (p. 1628).
Instance features
The following is a summary of features for accelerated computing instances.
Instance type EBS only NVMe EBS Instance store Placement group
F1 No No NVMe * Yes
G2 No No SSD Yes
G3 Yes No No Yes
P2 Yes No No Yes
Release notes
• You must launch the instance using an HVM AMI.
• Instances built on the Nitro System (p. 232) have the following requirements:
• NVMe drivers (p. 1552) must be installed
• Elastic Network Adapter (ENA) drivers (p. 1101) must be installed
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEASPM=y
• Bare metal instances use a PCI-based serial device rather than an I/O port-based serial device. The
upstream Linux kernel and the latest Amazon Linux AMIs support this device. Bare metal instances also
provide an ACPI SPCR table to enable the system to automatically use the PCI-based serial device. The
latest Windows AMIs automatically use the PCI-based serial device.
• There is a limit of 100 Amazon FPGA images (AFIs) per Region.
• There is a limit on the total number of instances that you can launch in a Region, and there are
additional limits on some instance types. For more information, see How many instances can I run in
Amazon EC2? in the Amazon EC2 FAQ.
To install AMD drivers on a Linux instance with an attached AMD GPU, such as a G4ad instance, see
Install AMD drivers (p. 357) instead. To install NVIDIA drivers on a Windows instance, see Install NVIDIA
drivers on Windows instances.
Contents
• Types of NVIDIA drivers (p. 341)
• Available drivers by instance type (p. 341)
• Installation options (p. 342)
• Install an additional version of CUDA (p. 357)
Tesla drivers
These drivers are intended primarily for compute workloads, which use GPUs for computational
tasks such as parallelized floating-point calculations for machine learning and fast Fourier
transforms for high performance computing applications.
GRID drivers
These drivers are certified to provide optimal performance for professional visualization applications
that render content such as 3D models or high-resolution videos. You can configure GRID drivers to
support two modes. Quadro Virtual Workstations provide access to four 4K displays per GPU. GRID
vApps provide RDSH App hosting capabilities.
Gaming drivers
These drivers contain optimizations for gaming and are updated frequently to provide performance
enhancements. They support a single 4K display per GPU.
The NVIDIA control panel is supported with GRID and Gaming drivers. It is not supported with Tesla
drivers.
Instance type | Tesla driver | GRID driver | Gaming driver
G2 | Yes | No | No
G3 | Yes | Yes | No
G5g | Yes ¹ | No | No
P2 | Yes | No | No
P3 | Yes | Yes ² | No
P4d | Yes ³ | No | No
¹ This Tesla driver also supports optimized graphics applications specific to the ARM64 platform
Installation options
Use one of the following options to get the NVIDIA drivers required for your GPU instance.
AWS and NVIDIA offer different Amazon Machine Images (AMIs) that come with the NVIDIA drivers
installed.
To update the driver version installed using one of these AMIs, you must uninstall the NVIDIA packages
from your instance to avoid version conflicts. Use this command to uninstall the NVIDIA packages:
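The uninstall command itself was dropped from this extraction; on Amazon Linux and other yum-based distributions it is typically the following (verify for your distribution):

```shell
# Remove the NVIDIA and CUDA packages to avoid version conflicts
[ec2-user ~]$ sudo yum erase nvidia cuda
```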
The CUDA toolkit package has dependencies on the NVIDIA drivers. Uninstalling the NVIDIA packages
erases the CUDA toolkit. You must reinstall the CUDA toolkit after installing the NVIDIA driver.
The options offered by AWS come with the necessary license for the driver. Alternatively, you can install
the public drivers and bring your own license. To install a public driver, download it from the NVIDIA site
as described in the procedure that follows.
Alternatively, you can use the options offered by AWS instead of the public drivers. To use a GRID driver
on a P3 instance, use the AWS Marketplace AMIs as described in Option 1 (p. 342). To use a GRID driver
on a G5, G4dn, or G3 instance, use the AWS Marketplace AMIs, as described in Option 1 or install the
NVIDIA drivers provided by AWS as described in Option 3 (p. 343).
Log on to your Linux instance and download the 64-bit NVIDIA driver appropriate for the instance type
from http://www.nvidia.com/Download/Find.aspx. For Product Type, Product Series, and Product, use
the options in the following table.
² G5g instances require driver version 470.82.01 or later. The operating system is Linux aarch64.
For more information about installing and configuring the driver, see the NVIDIA Driver Installation
Quickstart Guide.
These downloads are available to AWS customers only. By downloading, you agree to use the
downloaded software only to develop AMIs for use with the NVIDIA A10G, NVIDIA Tesla T4, or NVIDIA
Tesla M60 hardware. Upon installation of the software, you are bound by the terms of the NVIDIA GRID
Cloud End User License Agreement.
Prerequisites
• Install the AWS CLI on your Linux instance and configure default credentials. For more information, see
Installing the AWS CLI in the AWS Command Line Interface User Guide.
• IAM users must have the permissions granted by the AmazonS3ReadOnlyAccess policy.
• G5 instances require GRID 13.1 or later (or GRID 12.4 or later).
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
6. Download the GRID driver installation utility using the following command:
Multiple versions of the GRID driver are stored in this bucket. You can see all of the available
versions using the following command.
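The download and listing commands were dropped here; the guide uses the AWS CLI against an AWS-owned S3 bucket, roughly as follows (bucket name as published by AWS; verify before use):

```shell
# Download the latest GRID driver installation utility
[ec2-user ~]$ aws s3 cp --recursive s3://ec2-linux-nvidia-drivers/latest/ .
# List all driver versions available in the bucket
[ec2-user ~]$ aws s3 ls --recursive s3://ec2-linux-nvidia-drivers/
```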
7. Add permissions to run the driver installation utility using the following command.
8. Run the self-install script as follows to install the GRID driver that you downloaded. For example:
Note
If you are using Amazon Linux 2 with kernel version 5.10, use the following command to
install the GRID driver.
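The commands for these two steps were also dropped; they typically look like the following (the installer filename depends on the version you downloaded):

```shell
# Step 7: make the installation utility executable
[ec2-user ~]$ chmod +x NVIDIA-Linux-x86_64*.run
# Step 8: run the self-install script
[ec2-user ~]$ sudo /bin/sh ./NVIDIA-Linux-x86_64*.run
# Amazon Linux 2 with kernel 5.10: build against the newer compiler
[ec2-user ~]$ sudo CC=/usr/bin/gcc10-cc ./NVIDIA-Linux-x86_64*.run
```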
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
9. Reboot the instance.
10. Confirm that the driver is functional. The response for the following command lists the installed
version of the NVIDIA driver and details about the GPUs.
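The verification command is nvidia-smi; for example:

```shell
# Lists the installed driver version and details about each GPU
[ec2-user ~]$ nvidia-smi -q | head
```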
11. (Optional) Depending on your use case, you might complete the following optional steps. If you do
not require this functionality, do not complete these steps.
a. To help take advantage of the four displays of up to 4K resolution, set up the high-performance
display protocol NICE DCV.
b. NVIDIA Quadro Virtual Workstation mode is enabled by default. To activate GRID Virtual
Applications for RDSH Application hosting capabilities, complete the GRID Virtual Application
activation steps in Activate NVIDIA GRID Virtual Applications (p. 362).
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
5. Install the gcc compiler and the kernel headers package for the version of the kernel you are
currently running.
6. Disable the nouveau open source driver for NVIDIA graphics cards.
Edit /etc/default/grub and add the following line:
GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
7. Download the GRID driver installation utility using the following command:
Multiple versions of the GRID driver are stored in this bucket. You can see all of the available
versions using the following command.
8. Add permissions to run the driver installation utility using the following command.
9. Run the self-install script as follows to install the GRID driver that you downloaded. For example:
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
10. Reboot the instance.
11. Confirm that the driver is functional. The response for the following command lists the installed
version of the NVIDIA driver and details about the GPUs.
12. (Optional) Depending on your use case, you might complete the following optional steps. If you do
not require this functionality, do not complete these steps.
a. To help take advantage of the four displays of up to 4K resolution, set up the high-performance
display protocol NICE DCV.
b. NVIDIA Quadro Virtual Workstation mode is enabled by default. To activate GRID Virtual
Applications for RDSH Application hosting capabilities, complete the GRID Virtual Application
activation steps in Activate NVIDIA GRID Virtual Applications (p. 362).
c. Install the GUI desktop/workstation package.
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
[ec2-user ~]$ sudo dnf install -y make gcc elfutils-libelf-devel libglvnd-devel kernel-devel-$(uname -r)
6. Download the GRID driver installation utility using the following command:
Multiple versions of the GRID driver are stored in this bucket. You can see all of the available
versions using the following command.
7. Add permissions to run the driver installation utility using the following command.
8. Run the self-install script as follows to install the GRID driver that you downloaded. For example:
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
9. Reboot the instance.
10. Confirm that the driver is functional. The response for the following command lists the installed
version of the NVIDIA driver and details about the GPUs.
11. (Optional) Depending on your use case, you might complete the following optional steps. If you do
not require this functionality, do not complete these steps.
a. To help take advantage of the four displays of up to 4K resolution, set up the high-performance
display protocol NICE DCV.
b. NVIDIA Quadro Virtual Workstation mode is enabled by default. To activate GRID Virtual
Applications for RDSH Application hosting capabilities, complete the GRID Virtual Application
activation steps in Activate NVIDIA GRID Virtual Applications (p. 362).
c. Install the GUI workstation package.
Rocky Linux 8
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
[ec2-user ~]$ sudo dnf install -y make gcc elfutils-libelf-devel libglvnd-devel kernel-devel-$(uname -r)
6. Download the GRID driver installation utility using the following command:
Multiple versions of the GRID driver are stored in this bucket. You can see all of the available
versions using the following command.
7. Add permissions to run the driver installation utility using the following command.
8. Run the self-install script as follows to install the GRID driver that you downloaded. For example:
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
9. Reboot the instance.
10. Confirm that the driver is functional. The response for the following command lists the installed
version of the NVIDIA driver and details about the GPUs.
11. (Optional) Depending on your use case, you might complete the following optional steps. If you do
not require this functionality, do not complete these steps.
a. To help take advantage of the four displays of up to 4K resolution, set up the high-performance
display protocol NICE DCV.
b. NVIDIA Quadro Virtual Workstation mode is enabled by default. To activate GRID Virtual
Applications for RDSH Application hosting capabilities, complete the GRID Virtual Application
activation steps in Activate NVIDIA GRID Virtual Applications (p. 362).
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
7. Disable the nouveau open source driver for NVIDIA graphics cards.
EOF
Edit /etc/default/grub and add the following line:
GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
Then update the GRUB configuration:
$ sudo update-grub
8. Download the GRID driver installation utility using the following command:
Multiple versions of the GRID driver are stored in this bucket. You can see all of the available
versions using the following command.
9. Add permissions to run the driver installation utility using the following command.
10. Run the self-install script as follows to install the GRID driver that you downloaded. For example:
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
11. Reboot the instance.
12. Confirm that the driver is functional. The response for the following command lists the installed
version of the NVIDIA driver and details about the GPUs.
13. (Optional) Depending on your use case, you might complete the following optional steps. If you do
not require this functionality, do not complete these steps.
a. To help take advantage of the four displays of up to 4K resolution, set up the high-performance
display protocol NICE DCV.
b. NVIDIA Quadro Virtual Workstation mode is enabled by default. To activate GRID Virtual
Applications for RDSH Application hosting capabilities, complete the GRID Virtual Application
activation steps in Activate NVIDIA GRID Virtual Applications (p. 362).
c. Install the GUI desktop/workstation package.
These drivers are available to AWS customers only. By downloading them, you agree to use the
downloaded software only to develop AMIs for use with the NVIDIA A10G and NVIDIA Tesla T4 hardware.
Upon installation of the software, you are bound by the terms of the NVIDIA GRID Cloud End User
License Agreement.
Prerequisites
• Install the AWS CLI on your Linux instance and configure default credentials. For more information, see
Installing the AWS CLI in the AWS Command Line Interface User Guide.
• IAM users must have the permissions granted by the AmazonS3ReadOnlyAccess policy.
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
6. Download the gaming driver installation utility using the following command:
Multiple versions of the gaming driver are stored in this bucket. You can see all of the available
versions using the following command:
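The commands were dropped here; per the guide they target the AWS-owned gaming-driver bucket, roughly as follows (verify the bucket path before use):

```shell
# Download the latest gaming driver installation utility
[ec2-user ~]$ aws s3 cp --recursive s3://nvidia-gaming/linux/latest/ .
# List all available gaming driver versions
[ec2-user ~]$ aws s3 ls --recursive s3://nvidia-gaming/linux/
```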
7. Add permissions to run the driver installation utility using the following command.
8. Run the self-install script as follows to install the gaming driver that you downloaded. For example:
Note
If you are using Amazon Linux 2 with kernel version 5.10, use the following command to
install the NVIDIA gaming drivers.
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
9. Use the following command to create the required configuration file.
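The configuration file enables the vGaming licensed feature; a sketch of the command (per the guide at the time of writing):

```shell
# Create /etc/nvidia/gridd.conf with the vGaming marketplace feature enabled
[ec2-user ~]$ cat << EOF | sudo tee -a /etc/nvidia/gridd.conf
vGamingMarketplace=2
EOF
```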
10. Use the following command to download and rename the certification file.
• For version 460.39 or later:
12. (Optional) To help take advantage of a single display of up to 4K resolution, set up the high-
performance display protocol NICE DCV.
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
6. Disable the nouveau open source driver for NVIDIA graphics cards.
blacklist rivatv
EOF
Edit /etc/default/grub and add the following line:
GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
7. Download the gaming driver installation utility using the following command:
Multiple versions of the gaming driver are stored in this bucket. You can see all of the available
versions using the following command:
8. Add permissions to run the driver installation utility using the following command.
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
10. Use the following command to create the required configuration file.
11. Use the following command to download and rename the certification file.
• For version 460.39 or later:
13. (Optional) To help take advantage of a single display of up to 4K resolution, set up the high-
performance display protocol NICE DCV. If you do not require this functionality, do not complete this
step.
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
6. Download the gaming driver installation utility using the following command:
Multiple versions of the gaming driver are stored in this bucket. You can see all of the available
versions using the following command:
7. Add permissions to run the driver installation utility using the following command.
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
9. Use the following command to create the required configuration file.
10. Use the following command to download and rename the certification file.
• For version 460.39 or later:
12. (Optional) To help take advantage of a single display of up to 4K resolution, set up the high-
performance display protocol NICE DCV.
Rocky Linux 8
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
[ec2-user ~]$ sudo dnf install -y gcc make elfutils-libelf-devel libglvnd-devel kernel-devel-$(uname -r)
6. Download the gaming driver installation utility using the following command:
Multiple versions of the gaming driver are stored in this bucket. You can see all of the available
versions using the following command:
7. Add permissions to run the driver installation utility using the following command.
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
10. Use the following command to download and rename the certification file.
• For version 460.39 or later:
12. (Optional) To help take advantage of a single display of up to 4K resolution, set up the high-
performance display protocol NICE DCV.
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
7. Disable the nouveau open source driver for NVIDIA graphics cards.
Edit /etc/default/grub and add the following line:
GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
Then update the GRUB configuration:
$ sudo update-grub
8. Download the gaming driver installation utility using the following command:
Multiple versions of the gaming driver are stored in this bucket. You can see all of the available
versions using the following command:
9. Add permissions to run the driver installation utility using the following command.
When prompted, accept the license agreement and specify the installation options as required (you
can accept the default options).
11. Use the following command to create the required configuration file.
12. Use the following command to download and rename the certification file.
• For version 460.39 or later:
14. (Optional) To help take advantage of a single display of up to 4K resolution, set up the high-
performance display protocol NICE DCV. If you do not require this functionality, do not complete this
step.
6. Run the install script as follows to install the CUDA toolkit and add the CUDA version number to the
toolkit path.
To install NVIDIA drivers on an instance with an attached NVIDIA GPU, such as a G4dn instance, see
Install NVIDIA drivers (p. 340) instead. To install AMD drivers on a Windows instance, see Install AMD
drivers on Windows instances.
Contents
• AMD Radeon Pro Software for Enterprise Driver (p. 358)
• AMIs with the AMD driver installed (p. 358)
• AMD driver download (p. 358)
Supported APIs
• OpenGL, OpenCL
• Vulkan
• AMD Advanced Media Framework
• Video Acceleration API
These downloads are available to AWS customers only. By downloading, you agree to use the
downloaded software only to develop AMIs for use with the AMD Radeon Pro V520 hardware. Upon
installation of the software, you are bound by the terms of the AMD Software End User License
Agreement.
Prerequisites
• Install the AWS CLI on your Linux instance and configure default credentials. For more information, see
Installing the AWS CLI in the AWS Command Line Interface User Guide.
• IAM users must have the permissions granted by the AmazonS3ReadOnlyAccess policy.
1. Connect to your Linux instance. Install gcc and make, if they are not already installed.
2. Update your package cache and get the package updates for your instance.
• For Ubuntu:
• For CentOS:
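The update commands were dropped from the two bullets above; they are typically the distributions' standard update invocations:

```shell
# Ubuntu
$ sudo apt-get update -y && sudo apt-get upgrade -y
# CentOS
$ sudo yum update -y
```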
$ sudo reboot
• For Ubuntu:
• For Ubuntu:
9. Run the self install script to install the full graphics stack.
$ ./amdgpu-pro-install -y --opencl=pal,legacy
$ sudo reboot
The response should include the following line, which confirms that the driver initialized:
Initialized amdgpu
Prerequisite
Open a text editor and save the following as a file named xorg.conf. You'll need this file on your
instance.
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0"
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
EndSection
Section "Files"
ModulePath "/opt/amdgpu/lib64/xorg/modules/drivers"
ModulePath "/opt/amdgpu/lib/xorg/modules"
ModulePath "/opt/amdgpu-pro/lib/xorg/modules/extensions"
ModulePath "/opt/amdgpu-pro/lib64/xorg/modules/extensions"
ModulePath "/usr/lib64/xorg/modules"
ModulePath "/usr/lib/xorg/modules"
EndSection
Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
EndSection
Section "Device"
Identifier "Device0"
Driver "amdgpu"
VendorName "AMD"
BoardName "Radeon MxGPU V520"
BusID "PCI:0:30:0"
EndSection
Section "Extensions"
Option "DPMS" "Disable"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
Option "AllowEmptyInitialConfiguration" "True"
SubSection "Display"
Virtual 3840 2160
Depth 32
EndSubSection
EndSection
$ sudo reboot
5. (Optional) Install the NICE DCV server to use NICE DCV as a high-performance display protocol, and
then connect to a NICE DCV session using your preferred client.
$ sudo reboot
5. (Optional) Install the NICE DCV server to use NICE DCV as a high-performance display protocol, and
then connect to a NICE DCV session using your preferred client.
6. After the DCV installation, give the DCV user video permissions:
$ sudo reboot
5. (Optional) Install the NICE DCV server to use NICE DCV as a high-performance display protocol, and
then connect to a NICE DCV session using your preferred client.
To activate GRID Virtual Applications mode, set the following options in /etc/nvidia/gridd.conf:
FeatureType=0
IgnoreSP=TRUE
1. Configure the GPU settings to be persistent. This command can take several minutes to run.
2. [G2, G3, and P2 instances only] Disable the autoboost feature for all GPUs on the instance.
3. Set all GPU clock speeds to their maximum frequency. Use the memory and graphics clock speeds
specified in the following commands.
Some versions of the NVIDIA driver do not support setting the application clock speed, and display
the error "Setting applications clocks is not supported for GPU...", which you can
ignore.
• G3 instances:
• G4dn instances:
• G5 instances:
• P2 instances:
• P4d instances:
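The nvidia-smi invocations for these steps were dropped; a sketch of the pattern follows. The -ac clock values differ per instance type and must be taken from the guide; the G3 values below are examples only:

```shell
# Step 1: make GPU settings persistent across driver reloads
[ec2-user ~]$ sudo nvidia-persistenced
# Step 2 (G2, G3, and P2 only): disable the autoboost feature for all GPUs
[ec2-user ~]$ sudo nvidia-smi --auto-boost-default=0
# Step 3: set memory,graphics clocks to their maximum (example values for G3)
[ec2-user ~]$ sudo nvidia-smi -ac 2505,1177
```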
6. (Optional) Select multiple instance types to see a side-by-side comparison across all attributes in the
Details pane.
7. (Optional) To save the list of instance types to a comma-separated values (.csv) file for further
review, choose Download list CSV. The file includes all instance types that match the filters you set.
8. After locating instance types that meet your needs, you can use them to launch instances. For more
information, see Launch an instance using the Launch Instance Wizard (p. 565).
1. If you have not done so already, install the AWS CLI. For more information, see the AWS Command
Line Interface User Guide.
2. Use the describe-instance-types command to filter instance types based on instance attributes. For
example, you can use the following command to display only instance types with 48 vCPUs.
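A sketch of such a filter (the vcpu-info.default-vcpus filter name comes from the EC2 DescribeInstanceTypes API):

```shell
$ aws ec2 describe-instance-types \
    --filters "Name=vcpu-info.default-vcpus,Values=48" \
    --query "InstanceTypes[].InstanceType"
```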
4. After locating instance types that meet your needs, make note of them so that you can use these
instance types when you launch instances. For more information, see Launching your instance in the
AWS Command Line Interface User Guide.
To make recommendations, Compute Optimizer analyzes your existing instance specifications and
utilization metrics. The compiled data is then used to recommend which Amazon EC2 instance types are
best able to handle the existing workload. Recommendations are returned along with per-hour instance
pricing.
This topic outlines how to view recommendations through the Amazon EC2 console. For more
information, see the AWS Compute Optimizer User Guide.
Note
To get recommendations from Compute Optimizer, you must first opt in to Compute Optimizer.
For more information, see Getting Started with AWS Compute Optimizer in the AWS Compute
Optimizer User Guide.
Contents
• Limitations (p. 365)
• Findings (p. 365)
Limitations
Compute Optimizer currently generates recommendations for M, C, R, T, and X instance types. Other
instance types are not considered by Compute Optimizer. If you're using other instance types, they will
not be listed in the Compute Optimizer recommendations view. For information about these and other
instance types, see Instance types (p. 226).
Findings
Compute Optimizer classifies its findings for EC2 instances as follows:
View recommendations
After you opt in to Compute Optimizer, you can view the findings that Compute Optimizer generates for
your EC2 instances in the EC2 console. You can then access the Compute Optimizer console to view the
recommendations. If you recently opted in, findings might not be reflected in the EC2 console for up to
12 hours.
New console
The instance opens in Compute Optimizer, where it is labeled as the Current instance. Up to
three different instance type recommendations, labeled Option 1, Option 2, and Option 3, are
provided. The bottom half of the window shows recent CloudWatch metric data for the current
instance: CPU utilization, Memory utilization, Network in, and Network out.
4. (Optional) In the Compute Optimizer console, choose the settings icon to change the visible
columns in the table, or to view the public pricing information for a different purchasing option for
the current and recommended instance types.
Note
If you’ve purchased a Reserved Instance, your On-Demand Instance might be billed as
a Reserved Instance. Before you change your current instance type, first evaluate the
impact on Reserved Instance utilization and coverage.
Old console
The instance opens in Compute Optimizer, where it is labeled as the Current instance. Up to
three different instance type recommendations, labeled Option 1, Option 2, and Option 3, are
provided. The bottom half of the window shows recent CloudWatch metric data for the current
instance: CPU utilization, Memory utilization, Network in, and Network out.
4. (Optional) In the Compute Optimizer console, choose the settings icon to change the visible
columns in the table, or to view the public pricing information for a different purchasing option for
the current and recommended instance types.
Note
If you’ve purchased a Reserved Instance, your On-Demand Instance might be billed as
a Reserved Instance. Before you change your current instance type, first evaluate the
impact on Reserved Instance utilization and coverage.
Determine whether you want to use one of the recommendations. Decide whether to optimize for
performance improvement, for cost reduction, or for a combination of the two. For more information,
see Viewing Resource Recommendations in the AWS Compute Optimizer User Guide.
To view recommendations for all EC2 instances across all Regions through the Compute
Optimizer console
a. To filter recommendations to one or more AWS Regions, enter the name of the Region in the
Filter by one or more Regions text box, or choose one or more Regions in the drop-down list
that appears.
b. To view recommendations for resources in another account, choose Account, and then select a
different account ID.
This option is available only if you are signed in to a management account of an organization,
and you opted in all member accounts within the organization.
c. To clear the selected filters, choose Clear filters.
d. To change the purchasing option that is displayed for the current and recommended instance
types, choose the settings icon, and then choose On-Demand Instances, Reserved Instances,
standard 1-year no upfront, or Reserved Instances, standard 3-year no upfront.
• The recommendations don’t forecast your usage. Recommendations are based on your historical usage
over the most recent 14-day time period. Be sure to choose an instance type that is expected to meet
your future resource needs.
• Focus on the graphed metrics to determine whether actual usage is lower than instance capacity.
You can also view metric data (average, peak, percentile) in CloudWatch to further evaluate your EC2
instance recommendations. For example, notice how CPU percentage metrics change during the day
and whether there are peaks that need to be accommodated. For more information, see Viewing
Available Metrics in the Amazon CloudWatch User Guide.
• Compute Optimizer might supply recommendations for burstable performance instances, which are
T3, T3a, and T2 instances. If you periodically burst above the baseline, make sure that you can continue
to do so based on the vCPUs of the new instance type. For more information, see Key concepts and
definitions for burstable performance instances (p. 254).
• If you’ve purchased a Reserved Instance, your On-Demand Instance might be billed as a Reserved
Instance. Before you change your current instance type, first evaluate the impact on Reserved Instance
utilization and coverage.
• Consider conversions to newer generation instances, where possible.
• When migrating to a different instance family, make sure the current instance type and the new
instance type are compatible, for example, in terms of virtualization, architecture, or network type. For
more information, see Compatibility for changing the instance type (p. 372).
• Finally, consider the performance risk rating that's provided for each recommendation. Performance
risk indicates the amount of effort you might need to spend in order to validate whether the
recommended instance type meets the performance requirements of your workload. We also
recommend rigorous load and performance testing before and after making any changes.
There are other considerations when resizing an EC2 instance. For more information, see Change the
instance type (p. 367).
Additional resources
For more information:
Amazon Elastic Compute Cloud
User Guide for Linux Instances
Change the instance type
If you want a recommendation for an instance type that is best able to handle your existing workload,
you can use AWS Compute Optimizer. For more information, see Get recommendations for an instance
type (p. 364).
For an instance with an instance store root device, see Change the instance type of an instance
store-backed instance (p. 373).
• You must stop your Amazon EBS-backed instance before you can change its instance type. Ensure
that you plan for downtime while your instance is stopped. Stopping the instance and changing its
instance type might take a few minutes, and restarting your instance might take a variable amount of
time depending on your application's startup scripts. For more information, see Stop and start your
instance (p. 622).
• When you stop and start an instance, we move the instance to new hardware. If your instance has a
public IPv4 address, we release the address and give your instance a new public IPv4 address. If you
require a public IPv4 address that does not change, use an Elastic IP address (p. 1059).
• You can't change the instance type if hibernation (p. 626) is enabled for the instance.
• If your instance is in an Auto Scaling group, the Amazon EC2 Auto Scaling service marks the stopped
instance as unhealthy, and may terminate it and launch a replacement instance. To prevent this, you
can suspend the scaling processes for the group while you're changing the instance type. For more
information, see Suspending and resuming a process for an Auto Scaling group in the Amazon EC2
Auto Scaling User Guide.
• When you change the instance type of an instance with NVMe instance store volumes, the updated
instance might have additional instance store volumes, because all NVMe instance store volumes
are available even if they are not specified in the AMI or instance block device mapping. Otherwise,
the updated instance has the same number of instance store volumes that you specified when you
launched the original instance.
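Subject to the considerations above, the same stop, change type, and start sequence can be performed with the AWS CLI (stop-instances, wait instance-stopped, modify-instance-attribute, and start-instances are the standard commands). In the sketch below, the instance ID and target instance type are hypothetical placeholders, and the commands are printed rather than executed so that the snippet runs without AWS credentials:

```shell
# Hypothetical placeholders; substitute your own instance ID and target type.
INSTANCE_ID="i-1234567890abcdef0"
NEW_TYPE="m5.xlarge"

# Print each step of the stop -> change type -> start sequence.
echo "aws ec2 stop-instances --instance-ids $INSTANCE_ID"
echo "aws ec2 wait instance-stopped --instance-ids $INSTANCE_ID"
echo "aws ec2 modify-instance-attribute --instance-id $INSTANCE_ID --instance-type Value=$NEW_TYPE"
echo "aws ec2 start-instances --instance-ids $INSTANCE_ID"
```

Waiting for the instance-stopped state before calling modify-instance-attribute mirrors the console requirement that the instance must be stopped before its type can be changed.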
New console
1. (Optional) If the new instance type requires drivers that are not installed on the existing
instance, you must connect to your instance and install the drivers first. For more information,
see Compatibility for changing the instance type (p. 372).
2. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
3. In the navigation pane, choose Instances.
4. Select the instance and choose Instance state, Stop instance. When prompted for confirmation,
choose Stop. It can take a few minutes for the instance to stop.
5. With the instance still selected, choose Actions, Instance settings, Change instance type. This
option is grayed out if the instance state is not stopped.
6. On the Change instance type page, do the following:
a. For Instance type, select the instance type that you want.
If the instance type is not in the list, then it's not compatible with the configuration of your
instance. Instead, use the following instructions: Change the instance type by launching a
new instance (p. 370).
b. (Optional) If the instance type that you selected supports EBS optimization, select
EBS-optimized to enable EBS optimization, or deselect EBS-optimized to disable EBS
optimization. If the instance type that you selected is EBS optimized by default, EBS-
optimized is selected and you can't deselect it.
c. Choose Apply to accept the new settings.
7. To start the instance, select the instance and choose Instance state, Start instance. It can take
a few minutes for the instance to enter the running state. If your instance won't start, see
Troubleshoot changing the instance type (p. 373).
Old console
1. (Optional) If the new instance type requires drivers that are not installed on the existing
instance, you must connect to your instance and install the drivers first. For more information,
see Compatibility for changing the instance type (p. 372).
2. Open the Amazon EC2 console.
3. In the navigation pane, choose Instances.
4. Select the instance and choose Actions, Instance State, Stop. When prompted for confirmation,
choose Yes, Stop.
5. With the instance still selected, choose Actions, Instance Settings, Change Instance Type. This
option is disabled if the instance state is not stopped.
6. In the Change Instance Type dialog box, do the following:
a. From Instance Type, select the instance type that you want.
If the instance type that you want does not appear in the list, then it is not compatible
with the configuration of your instance. Instead, use the following instructions: Change the
instance type by launching a new instance (p. 370).
b. (Optional) If the instance type that you selected supports EBS optimization, select
EBS-optimized to enable EBS optimization, or deselect EBS-optimized to disable EBS
optimization. If the instance type that you selected is EBS optimized by default, EBS-
optimized is selected and you can't deselect it.
c. Choose Apply to accept the new settings.
7. To restart the stopped instance, select the instance and choose Actions, Instance State, Start.
8. In the confirmation dialog box, choose Yes, Start. It can take a few minutes for the instance to
enter the running state. If your instance won't start, see Troubleshoot changing the instance
type (p. 373).
Change the instance type by launching a new instance
New console
1. Back up any data that you need to keep, as follows:
• For data on your instance store volumes, back up the data to persistent storage.
• For data on your EBS volumes, take a snapshot of the volumes (p. 1385) or detach the
volumes from the instance (p. 1378) so that you can attach them to the new instance later.
2. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
3. In the navigation pane, choose Instances.
4. Choose Launch instances. When you configure the instance, do the following:
a. Select an AMI that will support the instance type that you want. Note that current
generation instance types require an HVM AMI.
b. Select the new instance type that you want. If the instance type that you want isn't
available, then it's not compatible with the configuration of the AMI that you selected.
c. If you're using an Elastic IP address, select the VPC that the original instance is currently
running in.
d. If you want to allow the same traffic to reach the new instance, select the security group
that is associated with the original instance.
e. When you're done configuring your new instance, complete the steps to select a key pair
and launch your instance. It can take a few minutes for the instance to enter the running
state.
5. If required, attach any new EBS volumes (p. 1353) based on the snapshots that you created, or
any EBS volumes that you detached from the original instance, to the new instance.
6. Install your application and any required software on the new instance.
7. Restore any data that you backed up from the instance store volumes of the original instance.
8. If you are using an Elastic IP address, assign it to the new instance as follows:
Old console
1. Back up any data on your instance store volumes that you need to keep to persistent storage.
To migrate data on your EBS volumes that you need to keep, create a snapshot of the volumes
(see Create Amazon EBS snapshots (p. 1385)) or detach the volume from the instance so that
you can attach it to the new instance later (see Detach an Amazon EBS volume from a Linux
instance (p. 1378)).
2. Launch a new instance, selecting the following:
• An HVM AMI.
• The HVM-only instance type that you want.
• If you are using an Elastic IP address, select the VPC that the original instance is currently
running in.
• Any EBS volumes that you detached from the original instance and want to attach to the new
instance, or new EBS volumes based on the snapshots that you created.
• If you want to allow the same traffic to reach the new instance, select the security group that
is associated with the original instance.
3. Install your application and any required software on the instance.
4. Restore any data that you backed up from the instance store volumes of the original instance.
5. If you are using an Elastic IP address, assign it to the newly launched instance as follows:
Compatibility for changing the instance type
Virtualization type
Linux AMIs use one of two types of virtualization: paravirtual (PV) or hardware virtual machine
(HVM). If an instance was launched from a PV AMI, you can't change to an instance type that is HVM
only. For more information, see Linux AMI virtualization types (p. 98). To check the virtualization
type of your instance, check the Virtualization value on the details pane of the Instances screen in
the Amazon EC2 console.
Architecture
AMIs are specific to the architecture of the processor, so you must select an instance type with the
same processor architecture as the current instance type. For example:
• If the current instance type has a processor based on the Arm architecture, you are limited to the
instance types that support a processor based on the Arm architecture, such as C6g and M6g.
• The following instance types are the only instance types that support 32-bit AMIs: t2.nano,
t2.micro, t2.small, t2.medium, c3.large, t1.micro, m1.small, m1.medium, and
c1.medium. If you are changing the instance type of a 32-bit instance, you are limited to these
instance types.
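One way to check the processor architecture from inside a running instance is uname. A minimal sketch (the instance-type suggestions in the messages are illustrative):

```shell
# 'uname -m' prints the machine architecture: x86_64 on Intel/AMD-based
# instances, aarch64 on Arm-based (Graviton) instances such as C6g and M6g.
ARCH=$(uname -m)
case "$ARCH" in
  aarch64) echo "Arm architecture: choose an Arm-based instance type (for example, C6g or M6g)" ;;
  x86_64)  echo "x86 architecture: choose an x86-based instance type" ;;
  *)       echo "Other architecture: $ARCH" ;;
esac
```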
Network
Newer instance types must be launched in a VPC. Therefore, if the instance is in the EC2-Classic
platform, you can't change the instance type to one that is available only in a VPC unless you have
a nondefault VPC. To check whether your instance is in a VPC, check the VPC ID value on the details
pane of the Instances screen in the Amazon EC2 console. For more information, see Migrate from
EC2-Classic to a VPC (p. 1201).
Network cards
Some instance types support multiple network cards (p. 1069). You must select an instance type
that supports the same number of network cards as the current instance type.
Enhanced networking
Instance types that support enhanced networking (p. 1100) require the necessary drivers installed.
For example, instances based on the Nitro System (p. 232) require EBS-backed AMIs with the
Elastic Network Adapter (ENA) drivers installed. To change from an instance type that does not
support enhanced networking to an instance type that supports enhanced networking, you must
install the ENA drivers (p. 1101) or ixgbevf drivers (p. 1110) on the instance, as appropriate.
NVMe
EBS volumes are exposed as NVMe block devices on instances built on the Nitro System (p. 232).
If you change from an instance type that does not support NVMe to an instance type that supports
NVMe, you must first install the NVMe drivers (p. 1552) on your instance. Also, the device names for
devices that you specify in the block device mapping are renamed using NVMe device names (/dev/
nvme[0-26]n1). Therefore, to mount file systems at boot time using /etc/fstab, you must use
UUID/Label instead of device names.
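As a sketch of the fstab guidance above, assuming an EBS volume that appears as /dev/nvme1n1 and a hypothetical UUID (blkid, from util-linux, prints the real one):

```shell
# Look up the filesystem UUID (output shown below is illustrative):
#   sudo blkid /dev/nvme1n1
#   /dev/nvme1n1: UUID="aebf131c-6957-451e-8d34-ec978d9581ae" TYPE="xfs"

# Example /etc/fstab entry keyed by UUID rather than the NVMe device name;
# the 'nofail' option lets the instance boot even if the volume is absent.
echo 'UUID=aebf131c-6957-451e-8d34-ec978d9581ae  /data  xfs  defaults,nofail  0  2'
```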
AMI
For information about the AMIs required by instance types that support enhanced networking and
NVMe, see the Release Notes in the following documentation:
Troubleshoot changing the instance type
Instance won't boot after changing the instance type
If your instance won't boot, it is possible that one of the requirements for the new instance type was
not met. For more information, see Why is my Linux instance not booting after I changed its type?
Possible cause: AMI does not support instance type
If you use the EC2 console to change the instance type, only the instance types that are supported
by the selected AMI are available. However, if you use the AWS CLI to launch an instance, you can
specify an incompatible AMI and instance type. If the AMI and instance type are incompatible,
the instance can't start. For more information, see Compatibility for changing the instance
type (p. 372).
Possible cause: Instance is in cluster placement group
If your instance is in a cluster placement group (p. 1168) and, after changing the instance type, the
instance fails to start, try the following:
1. Stop all the instances in the cluster placement group.
2. Change the instance type of the affected instance.
3. Start all the instances in the cluster placement group.
Application or website not reachable from the internet after changing instance
type
Possible cause: Public IPv4 address is released
When you change the instance type, you must first stop the instance. When you stop an instance, we
release the public IPv4 address and give your instance a new public IPv4 address.
To retain the public IPv4 address between instance stops and starts, we recommend that you use
an Elastic IP address, which incurs no extra cost while your instance is running. For more
information, see Elastic IP addresses (p. 1059).
Change the instance type of an instance store-backed instance
New console
1. Back up any data that you need to keep, as follows:
• For data on your instance store volumes, back up the data to persistent storage.
• For data on your EBS volumes, take a snapshot of the volumes (p. 1385) or detach the volume
from the instance (p. 1378) so that you can attach it to the new instance later.
2. Create an AMI from your instance by satisfying the prerequisites and following the procedures in
Create an instance store-backed Linux AMI (p. 139). When you are finished creating an AMI from
your instance, return to this procedure.
3. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
4. In the navigation pane, choose AMIs. From the filter lists, choose Owned by me, and select the
image that you created in Step 2. Notice that AMI name is the name that you specified when
you registered the image and Source is your Amazon S3 bucket.
Note
If you do not see the AMI that you created in Step 2, make sure that you have selected
the Region in which you created your AMI.
5. With the AMI selected, choose Launch instance from image. When you configure the instance,
do the following:
a. Select the new instance type that you want. If the instance type that you want isn't
available, then it's not compatible with the configuration of the AMI that you created. For
more information, see Compatibility for changing the instance type (p. 372).
b. If you're using an Elastic IP address, select the VPC that the original instance is currently
running in.
c. If you want to allow the same traffic to reach the new instance, select the security group
that is associated with the original instance.
d. When you're done configuring your new instance, complete the steps to select a key pair
and launch your instance. It can take a few minutes for the instance to enter the running
state.
6. If required, attach any new EBS volumes (p. 1353) based on the snapshots that you created, or
any EBS volumes that you detached from the original instance, to the new instance.
7. Install your application and any required software on the new instance.
8. If you are using an Elastic IP address, assign it to the new instance as follows:
f. (Optional) For Private IP address, specify a private IP address with which to associate the
Elastic IP address.
g. Choose Associate.
9. (Optional) You can terminate the original instance if it's no longer needed. Select the instance,
verify that you are about to terminate the original instance and not the new instance (for
example, check the name or launch time), and then choose Instance state, Terminate instance.
Old console
1. Back up any data on your instance store volumes that you need to keep to persistent storage.
To migrate data on your EBS volumes that you need to keep, take a snapshot of the volumes
(see Create Amazon EBS snapshots (p. 1385)) or detach the volume from the instance so that
you can attach it to the new instance later (see Detach an Amazon EBS volume from a Linux
instance (p. 1378)).
2. Create an AMI from your instance store-backed instance by satisfying the prerequisites and
following the procedures in Create an instance store-backed Linux AMI (p. 139). When you are
finished creating an AMI from your instance, return to this procedure.
3. Open the Amazon EC2 console and in the navigation pane, choose AMIs. From the filter lists,
choose Owned by me, and choose the image that you created in the previous step. Notice that
AMI Name is the name that you specified when you registered the image and Source is your
Amazon S3 bucket.
Note
If you do not see the AMI that you created in the previous step, make sure that you
have selected the Region in which you created your AMI.
4. Choose Launch. When you specify options for the instance, be sure to select the new
instance type that you want. If the instance type that you want can't be selected, then it
is not compatible with the configuration of the AMI that you created (for example, because of the
virtualization type). You can also specify any EBS volumes that you detached from the original
instance.
It can take a few minutes for the instance to enter the running state.
5. (Optional) You can terminate the instance that you started with, if it's no longer needed.
Select the instance and verify that you are about to terminate the original instance, not the
new instance (for example, check the name or launch time). Choose Actions, Instance State,
Terminate.
Instance purchasing options
Amazon EC2 provides the following purchasing options to enable you to optimize your costs based on
your needs:
• On-Demand Instances – Pay, by the second, for the instances that you launch.
• Savings Plans – Reduce your Amazon EC2 costs by making a commitment to a consistent amount of
usage, in USD per hour, for a term of 1 or 3 years.
• Reserved Instances – Reduce your Amazon EC2 costs by making a commitment to a consistent
instance configuration, including instance type and Region, for a term of 1 or 3 years.
• Spot Instances – Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly.
• Dedicated Hosts – Pay for a physical host that is fully dedicated to running your instances, and bring
your existing per-socket, per-core, or per-VM software licenses to reduce costs.
• Dedicated Instances – Pay, by the hour, for instances that run on single-tenant hardware.
• Capacity Reservations – Reserve capacity for your EC2 instances in a specific Availability Zone for any
duration.
If you require a capacity reservation, purchase Reserved Instances or Capacity Reservations for a specific
Availability Zone. Spot Instances are a cost-effective choice if you can be flexible about when your
applications run and if they can be interrupted. Dedicated Hosts or Dedicated Instances can help you
address compliance requirements and reduce costs by using your existing server-bound software
licenses. For more information, see Amazon EC2 Pricing.
For more information about Savings Plans, see the Savings Plans User Guide.
Contents
• Determine the instance lifecycle (p. 376)
• On-Demand Instances (p. 377)
• Reserved Instances (p. 381)
• Scheduled Reserved Instances (p. 423)
• Spot Instances (p. 424)
• Dedicated Hosts (p. 483)
• Dedicated Instances (p. 516)
• On-Demand Capacity Reservations (p. 522)
Determine the instance lifecycle
To determine the lifecycle of an instance with the AWS CLI, use the describe-instances command and
check the tenancy and lifecycle values in the output.
If the instance is running on a Dedicated Host, the output contains the following information:
"Tenancy": "host"
If the instance is a Dedicated Instance, the output contains the following information:
"Tenancy": "dedicated"
If the instance is a Spot Instance, the output contains the following information:
"InstanceLifecycle": "spot"
On-Demand Instances
With On-Demand Instances, you pay for compute capacity by the second with no long-term
commitments. You have full control over its lifecycle—you decide when to launch, stop, hibernate, start,
reboot, or terminate it.
There is no long-term commitment required when you purchase On-Demand Instances. You pay only
for the seconds that your On-Demand Instances are in the running state. The price per second for a
running On-Demand Instance is fixed, and is listed on the Amazon EC2 Pricing, On-Demand Pricing page.
We recommend that you use On-Demand Instances for applications with short-term, irregular workloads
that cannot be interrupted.
For significant savings over On-Demand Instances, use AWS Savings Plans, Spot Instances (p. 424), or
Reserved Instances (p. 381).
Contents
• Work with On-Demand Instances (p. 378)
• On-Demand Instance limits (p. 378)
• Monitor On-Demand Instance limits and usage (p. 379)
If you're new to Amazon EC2, see How to get started with Amazon EC2 (p. 1).
On-Demand Instance limits
Each limit specifies the vCPU limit for one or more instance families. For information about the different
instance families, generations, and sizes, see Amazon EC2 Instance Types.
You can launch any combination of instance types that meet your changing application needs, as long as
the number of vCPUs does not exceed your account limit. For example, with a Standard instance limit of
256 vCPUs, you could launch 32 m5.2xlarge instances (32 x 8 vCPUs) or 16 c5.4xlarge instances (16
x 16 vCPUs). For more information, see EC2 On-Demand Instance limits.
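The vCPU arithmetic in the example above can be sketched as follows; the per-size vCPU counts (8 for m5.2xlarge, 16 for c5.4xlarge) are taken from the example:

```shell
LIMIT=256   # example All Standard instances vCPU limit

# Maximum number of instances of a given size under the vCPU limit.
instances_allowed() {  # $1 = vCPUs per instance
  echo $(( LIMIT / $1 ))
}

echo "m5.2xlarge: $(instances_allowed 8) instances"    # prints: m5.2xlarge: 32 instances
echo "c5.4xlarge: $(instances_allowed 16) instances"   # prints: c5.4xlarge: 16 instances
```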
Topics
• Monitor On-Demand Instance limits and usage (p. 379)
For more information, see Amazon EC2 service quotas (p. 1680) in the Amazon EC2 User Guide, Viewing
service quotas in the Service Quotas User Guide, and AWS Trusted Advisor.
With Amazon CloudWatch metrics integration, you can monitor EC2 usage against limits. You can also
configure alarms to warn about approaching limits. For more information, see Service Quotas and
Amazon CloudWatch alarms in the Service Quotas User Guide.
When using the calculator, keep the following in mind:
• The calculator assumes that you have reached your current limit.
• The value that you enter for Instance count is the number of instances that you need to launch in
addition to what is permitted by your current limit.
• The calculator adds your current limit to the Instance count to arrive at a new limit.
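In other words, the calculator computes vCPUs needed as Instance count multiplied by the vCPUs per instance, and the new limit as the current limit plus vCPUs needed. A sketch with illustrative numbers:

```shell
CURRENT_LIMIT=256      # illustrative current vCPU limit
INSTANCE_COUNT=10      # additional instances needed beyond the current limit
VCPUS_PER_INSTANCE=8   # for example, m5.2xlarge

VCPUS_NEEDED=$(( INSTANCE_COUNT * VCPUS_PER_INSTANCE ))
NEW_LIMIT=$(( CURRENT_LIMIT + VCPUS_NEEDED ))
echo "vCPUs needed: $VCPUS_NEEDED"   # prints: vCPUs needed: 80
echo "New limit: $NEW_LIMIT"         # prints: New limit: 336
```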
You can view and use the following controls and information:
• Instance type – The instance types that you add to the vCPU limits calculator.
• Instance count – The number of instances that you require for the selected instance type.
• vCPU count – The number of vCPUs that corresponds to the Instance count.
• Current limit – Your current limit for the limit type to which the instance type belongs. The limit
applies to all instance types of the same limit type. For example, if the current limit for
m5.2xlarge and c5.4xlarge is 2,016 vCPUs, that limit applies to all the instance types that
belong to the All Standard instances limit.
• New limit – The new limit, in number of vCPUs, which is calculated by adding vCPU count and Current
limit.
• X – Choose the X to remove the row.
• Add instance type – Choose Add instance type to add another instance type to the calculator.
• Limits calculation – Displays the current limit, vCPUs needed, and new limit for the limit types.
• Instance limit name – The limit type for the instance types that you selected.
• Current limit – The current limit for the limit type.
• vCPUs needed – The number of vCPUs that corresponds to the number of instances that you
specified in Instance count. For the All Standard instances limit type, the vCPUs needed is calculated
by adding the values for vCPU count for all the instance types of this limit type.
• New limit – The new limit is calculated by adding Current limit and vCPUs needed.
• Options – Choose Request limit increase to request a limit increase for the corresponding limit
type.
1. Open the Create case, Service limit increase form in the Support Center console at
https://console.aws.amazon.com/support/home#/case/create. Alternatively, do one of the following:
• From the Limits Calculator, choose one or more instance types and specify the number of
instances, and then choose Request on-demand limit increase.
• On the Limits page, choose a limit, and then choose Request limit increase.
2. For Limit type, choose EC2 Instances.
3. For Region, select the required Region.
4. For Primary instance type, select the On-Demand Instance limit for which you want to request a
limit increase.
5. For New limit value, enter the total number of vCPUs that you want to run concurrently. To
determine the total number of vCPUs that you need, use the value that appears in the New limit
column in the vCPU limits calculator, or see Amazon EC2 Instance Types to find the number of vCPUs
of each instance type.
6. (Conditional) You must create a separate limit request for each On-Demand Instance limit. To
request an increase for another On-Demand Instance limit, choose Add another request and repeat
steps 3 through 5 in this procedure.
7. For Use case description, enter your use case, and then choose Submit.
For more information about requesting a limit increase, see Amazon EC2 service quotas (p. 1680).
Reserved Instances
Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to On-
Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing discount
applied to the use of On-Demand Instances in your account. These On-Demand Instances must match
certain attributes, such as instance type and Region, in order to benefit from the billing discount.
Savings Plans also offer significant savings on your Amazon EC2 costs compared to On-Demand Instance
pricing. With Savings Plans, you make a commitment to a consistent usage amount, measured in USD per
hour. This provides you with the flexibility to use the instance configurations that best meet your needs
and continue to save money, instead of making a commitment to a specific instance configuration. For
more information, see the AWS Savings Plans User Guide.
In this scenario, you have a running On-Demand Instance (T2) in your account, for which you're currently
paying On-Demand rates. You purchase a Reserved Instance that matches the attributes of your running
instance, and the billing benefit is immediately applied. Next, you purchase a Reserved Instance for
a C4 instance. You do not have any running instances in your account that match the attributes of
this Reserved Instance. In the final step, you launch an instance that matches the attributes of the C4
Reserved Instance, and the billing benefit is immediately applied.
Instance attributes
A Reserved Instance has four instance attributes that determine its price.
• Instance type: For example, m4.large. This is composed of the instance family (for example, m4) and
the instance size (for example, large).
• Region: The Region in which the Reserved Instance is purchased.
• Tenancy: Whether your instance runs on shared (default) or single-tenant (dedicated) hardware. For
more information, see Dedicated Instances (p. 516).
• Platform: The operating system; for example, Windows or Linux/Unix. For more information, see
Choosing a platform (p. 397).
Term commitment
You can purchase a Reserved Instance for a one-year or three-year commitment, with the three-year
commitment offering a bigger discount.
Reserved Instances do not renew automatically; when they expire, you can continue using the EC2
instance without interruption, but you are charged On-Demand rates. In the above example, when the
Reserved Instances that cover the T2 and C4 instances expire, you go back to paying the On-Demand
rates until you terminate the instances or purchase new Reserved Instances that match the instance
attributes.
Payment options
The following payment options are available for Reserved Instances:
• All Upfront: Full payment is made at the start of the term, with no other costs or additional hourly
charges incurred for the remainder of the term, regardless of hours used.
• Partial Upfront: A portion of the cost must be paid upfront and the remaining hours in the term are
billed at a discounted hourly rate, regardless of whether the Reserved Instance is being used.
• No Upfront: You are billed a discounted hourly rate for every hour within the term, regardless of
whether the Reserved Instance is being used. No upfront payment is required.
Note
No Upfront Reserved Instances are based on a contractual obligation to pay monthly for the
entire term of the reservation. For this reason, a successful billing history is required before
you can purchase No Upfront Reserved Instances.
Generally speaking, you can save more money by making a higher upfront payment for Reserved Instances.
You can also find Reserved Instances offered by third-party sellers at lower prices and shorter term
lengths on the Reserved Instance Marketplace. For more information, see Sell in the Reserved Instance
Marketplace (p. 404).
Offering class
If your computing needs change, you might be able to modify or exchange your Reserved Instance,
depending on the offering class.
• Standard: These provide the most significant discount, but can only be modified. Standard Reserved
Instances can't be exchanged.
• Convertible: These provide a lower discount than Standard Reserved Instances, but can be exchanged
for another Convertible Reserved Instance with different instance attributes. Convertible Reserved
Instances can also be modified.
For more information, see Types of Reserved Instances (offering classes) (p. 384).
After you purchase a Reserved Instance, you cannot cancel your purchase. However, you might be able to
modify (p. 411), exchange (p. 418), or sell (p. 404) your Reserved Instance if your needs change.
For more information, see the Amazon EC2 Reserved Instances Pricing page.
When you purchase a Reserved Instance, you determine whether its scope is regional or zonal.
• Regional: When you purchase a Reserved Instance for a Region, it's referred to as a regional Reserved
Instance.
• Zonal: When you purchase a Reserved Instance for a specific Availability Zone, it's referred to as a
zonal Reserved Instance.
The scope does not affect the price. You pay the same price for a regional or zonal Reserved Instance. For
more information about Reserved Instance pricing, see Key variables that determine Reserved Instance
pricing (p. 382) and Amazon EC2 Reserved Instances Pricing.
The key differences between regional and zonal Reserved Instances are as follows:
• Availability Zone flexibility – Regional: the Reserved Instance discount applies to instance usage
in any Availability Zone in the specified Region. Zonal: no Availability Zone flexibility; the
Reserved Instance discount applies to instance usage in the specified Availability Zone only.
• Instance size flexibility – Regional: the Reserved Instance discount applies to instance usage
within the instance family, regardless of size. Instance size flexibility is only supported on
Amazon Linux/Unix Reserved Instances with default tenancy. For more information, see Instance
size flexibility determined by normalization factor (p. 386). Zonal: no instance size flexibility;
the Reserved Instance discount applies to instance usage for the specified instance type and size
only.
• Queuing a purchase – Regional: you can queue purchases for regional Reserved Instances. Zonal: you
can't queue purchases for zonal Reserved Instances.
For more information and examples, see How Reserved Instances are applied (p. 385).
The configuration of a Reserved Instance comprises a single instance type, platform, scope, and tenancy
over a term. If your computing needs change, you might be able to modify or exchange your Reserved
Instance.
• Sell in the Reserved Instance Marketplace – Standard: can be sold in the Reserved Instance
Marketplace. Convertible: can't be sold in the Reserved Instance Marketplace.
• Buy in the Reserved Instance Marketplace – Standard: can be bought in the Reserved Instance
Marketplace. Convertible: can't be bought in the Reserved Instance Marketplace.
If you purchase a Reserved Instance and you already have a running On-Demand Instance that matches
the specifications of the Reserved Instance, the billing discount is applied immediately and automatically.
You do not have to restart your instances. If you do not have an eligible running On-Demand Instance,
launch an On-Demand Instance with the same specifications as your Reserved Instance. For more
information, see Use your Reserved Instances (p. 391).
The offering class (Standard or Convertible) of the Reserved Instance does not affect how the billing
discount is applied.
Topics
• How zonal Reserved Instances are applied (p. 385)
• How regional Reserved Instances are applied (p. 385)
• Instance size flexibility (p. 386)
• Examples of applying Reserved Instances (p. 388)
How zonal Reserved Instances are applied
• The Reserved Instance discount applies to matching instance usage in that Availability Zone.
• The attributes (tenancy, platform, Availability Zone, instance type, and instance size) of the running
instances must match those of the Reserved Instances.
For example, if you purchase two c4.xlarge default tenancy Linux/Unix Standard Reserved Instances
for Availability Zone us-east-1a, then up to two c4.xlarge default tenancy Linux/Unix instances
running in the Availability Zone us-east-1a can benefit from the Reserved Instance discount.
How regional Reserved Instances are applied
• The Reserved Instance discount applies to instance usage in any Availability Zone in that Region.
• The Reserved Instance discount applies to instance usage within the instance family, regardless of size;
this is known as instance size flexibility (p. 386).
Limitations
Instance size flexibility does not apply to the following Reserved Instances:
• Reserved Instances that are purchased for a specific Availability Zone (zonal Reserved Instances)
• Reserved Instances with dedicated tenancy
• Reserved Instances for Windows Server, Windows Server with SQL Standard, Windows Server with SQL
Server Enterprise, Windows Server with SQL Server Web, RHEL, and SUSE Linux Enterprise Server
• Reserved Instances for G4dn instances
Instance size flexibility is determined by the normalization factor of the instance size. The discount
applies either fully or partially to running instances of the same instance family, depending on the
instance size of the reservation, in any Availability Zone in the Region. The only attributes that must be
matched are the instance family, tenancy, and platform.
The following table lists the different sizes within an instance family, and the corresponding
normalization factor. This scale is used to apply the discounted rate of Reserved Instances to the
normalized usage of the instance family.
Instance size    Normalization factor
nano             0.25
micro            0.5
small            1
medium           2
large            4
xlarge           8
2xlarge          16
3xlarge          24
4xlarge          32
6xlarge          48
8xlarge          64
9xlarge          72
10xlarge         80
12xlarge         96
16xlarge         128
18xlarge         144
24xlarge         192
32xlarge         256
56xlarge         448
112xlarge        896
For example, a t2.medium instance has a normalization factor of 2. If you purchase a t2.medium
default tenancy Amazon Linux/Unix Reserved Instance in the US East (N. Virginia) and you have two
running t2.small instances in your account in that Region, the billing benefit is applied in full to both
instances.
Or, if you have one t2.large instance running in your account in the US East (N. Virginia) Region, the
billing benefit is applied to 50% of the usage of the instance.
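The arithmetic in the two examples above can be sketched in a few lines. This is only an illustration of how normalization factors combine, not AWS billing code; the coverage helper and its name are invented for this example:

```python
# Normalization factors from the table above (subset).
NORMALIZATION = {"small": 1, "medium": 2, "large": 4, "xlarge": 8}

def coverage(ri_size, ri_count, running_size, running_count):
    """Fraction of the running instances' usage covered by the reservation,
    based on normalized units (factor x count)."""
    ri_units = NORMALIZATION[ri_size] * ri_count
    used_units = NORMALIZATION[running_size] * running_count
    return min(1.0, ri_units / used_units)

# One t2.medium RI (2 units) against two t2.small instances (2 units): full coverage.
print(coverage("medium", 1, "small", 2))   # 1.0
# One t2.medium RI against one t2.large (4 units): 50% of usage discounted.
print(coverage("medium", 1, "large", 1))   # 0.5
```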
The normalization factor is also applied when modifying Reserved Instances. For more information, see
Modify Reserved Instances (p. 411).
Instance size flexibility also applies to bare metal instances within the instance family. If you have
regional Amazon Linux/Unix Reserved Instances with shared tenancy on bare metal instances, you can
benefit from the Reserved Instance savings within the same instance family. The opposite is also true: if
you have regional Amazon Linux/Unix Reserved Instances with shared tenancy on instances in the same
family as a bare metal instance, you can benefit from the Reserved Instance savings on the bare metal
instance.
There is no single normalization factor for the metal instance size; it varies by instance family. A bare
metal instance has the same normalization factor as the equivalent virtualized instance size within the
same instance family. For example, an i3.metal instance has the same normalization factor as an
i3.16xlarge instance.
Bare metal size          Normalization factor
a1.metal                 32
m5zn.metal | z1d.metal   96
c5n.metal                144
u-*.metal                896
For example, an i3.metal instance has a normalization factor of 128. If you purchase an i3.metal
default tenancy Amazon Linux/Unix Reserved Instance in the US East (N. Virginia), the billing benefit can
apply as follows:
• If you have one running i3.16xlarge in your account in that Region, the billing benefit is applied in
full to the i3.16xlarge instance (i3.16xlarge normalization factor = 128).
• Or, if you have two running i3.8xlarge instances in your account in that Region, the billing benefit is
applied in full to both i3.8xlarge instances (i3.8xlarge normalization factor = 64).
• Or, if you have four running i3.4xlarge instances in your account in that Region, the billing benefit
is applied in full to all four i3.4xlarge instances (i3.4xlarge normalization factor = 32).
The opposite is also true. For example, if you purchase two i3.8xlarge default tenancy Amazon Linux/
Unix Reserved Instances in the US East (N. Virginia), and you have one running i3.metal instance in
that Region, the billing benefit is applied in full to the i3.metal instance.
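The metal equivalences above can be checked with the same normalized-unit arithmetic. This is an illustrative sketch only; I3_FACTORS is a hand-copied subset of the factors for the i3 family, not an AWS API:

```python
# Normalization factors for the i3 family; the metal size takes the factor
# of the equivalent virtualized size (i3.16xlarge in this case).
I3_FACTORS = {"4xlarge": 32, "8xlarge": 64, "16xlarge": 128, "metal": 128}

def normalized_units(size, count):
    return I3_FACTORS[size] * count

# One i3.metal reservation (128 units) exactly covers two i3.8xlarge instances.
print(normalized_units("metal", 1) == normalized_units("8xlarge", 2))   # True
# The opposite direction: two i3.8xlarge reservations cover one i3.metal instance.
print(normalized_units("8xlarge", 2) == normalized_units("metal", 1))   # True
```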
• 4 x m3.large Linux, default tenancy Reserved Instances in Availability Zone us-east-1a (capacity is
reserved)
• 4 x m4.large Amazon Linux, default tenancy Reserved Instances in Region us-east-1
• 1 x c4.large Amazon Linux, default tenancy Reserved Instances in Region us-east-1
• The discount and capacity reservation of the four m3.large zonal Reserved Instances is used by the
four m3.large instances because the attributes (instance size, Availability Zone, platform, tenancy)
between them match.
• The m4.large regional Reserved Instances provide Availability Zone and instance size flexibility,
because they are regional Amazon Linux Reserved Instances with default tenancy.
You've purchased four m4.large regional Reserved Instances, and in total, they are equal to 16
normalized units/hour (4x4). Account A has two m4.xlarge instances running, which is equivalent to
16 normalized units/hour (2x8). In this case, the four m4.large regional Reserved Instances provide
the billing benefit to an entire hour of usage of the two m4.xlarge instances.
• The c4.large regional Reserved Instance in us-east-1 provides Availability Zone and instance size
flexibility, because it is a regional Amazon Linux Reserved Instance with default tenancy, and applies
to the c4.xlarge instance. A c4.large instance is equivalent to 4 normalized units/hour and a
c4.xlarge is equivalent to 8 normalized units/hour.
In this case, the c4.large regional Reserved Instance provides partial benefit to c4.xlarge usage.
This is because the c4.large Reserved Instance is equivalent to 4 normalized units/hour of usage,
but the c4.xlarge instance requires 8 normalized units/hour. Therefore, the c4.large Reserved
Instance billing discount applies to 50% of c4.xlarge usage. The remaining c4.xlarge usage is
charged at the On-Demand rate.
• The m3.2xlarge regional Reserved Instance in us-east-1 provides Availability Zone and instance size
flexibility, because it is a regional Amazon Linux Reserved Instance with default tenancy. It applies first
to the m3.large instances and then to the m3.xlarge instances, because it applies from the smallest
to the largest instance size within the instance family based on the normalization factor.
The m3.2xlarge regional Reserved Instance provides full benefit to 2 x m3.large usage, because
together these instances account for 8 normalized units/hour. This leaves 8 normalized units/hour to
apply to the m3.xlarge instances.
With the remaining 8 normalized units/hour, the m3.2xlarge regional Reserved Instance provides
full benefit to 1 x m3.xlarge usage, because each m3.xlarge instance is equivalent to 8 normalized
units/hour. The remaining m3.xlarge usage is charged at the On-Demand rate.
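The smallest-to-largest order described above can be sketched as a simple allocation loop. This is an illustration only; AWS computes billing per second, and apply_ri is a hypothetical helper invented for this example:

```python
def apply_ri(ri_units, running, factors):
    """Return the fraction of each size's usage that receives the discount,
    applying the reservation's normalized units to the smallest size first."""
    covered, remaining = {}, ri_units
    # Sort running instances by normalization factor, smallest first.
    for size, count in sorted(running, key=lambda r: factors[r[0]]):
        needed = factors[size] * count
        applied = min(remaining, needed)
        covered[size] = applied / needed
        remaining -= applied
    return covered

factors = {"large": 4, "xlarge": 8}
# One m3.2xlarge regional RI = 16 normalized units/hour, applied against
# 2 x m3.large (8 units needed) and 2 x m3.xlarge (16 units needed):
print(apply_ri(16, [("xlarge", 2), ("large", 2)], factors))
# -> {'large': 1.0, 'xlarge': 0.5}
```

The m3.large pair gets the full benefit (8 units), and the remaining 8 units cover one of the two m3.xlarge instances; the rest is charged at the On-Demand rate, matching the walkthrough above.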
Reserved Instances are first applied to usage within the purchasing account, followed by qualifying usage
in any other account in the organization. For more information, see Reserved Instances and consolidated
billing (p. 394). For regional Reserved Instances that offer instance size flexibility, the benefit is applied
from the smallest to the largest instance size within the instance family.
You're running the following On-Demand Instances in account A (the purchasing account):
Another customer is running the following On-Demand Instances in account B—a linked account:
The regional Reserved Instance benefits are applied in the following way:
• The discount of the four m4.xlarge Reserved Instances is used by the two m4.xlarge instances
and the single m4.2xlarge instance in account A (purchasing account). All three instances match
the attributes (instance family, Region, platform, tenancy). The discount is applied to instances in the
purchasing account (account A) first, even though account B (linked account) has two m4.xlarge that
also match the Reserved Instances. There is no capacity reservation because the Reserved Instances are
regional Reserved Instances.
• The discount of the two c4.xlarge Reserved Instances applies to the two c4.xlarge instances,
because they are a smaller instance size than the c4.2xlarge instance. There is no capacity
reservation because the Reserved Instances are regional Reserved Instances.
In general, Reserved Instances that are owned by an account are applied first to usage in that account.
However, if there are qualifying, unused Reserved Instances for a specific Availability Zone (zonal
Reserved Instances) in other accounts in the organization, they are applied to the account before regional
Reserved Instances owned by the account. This is done to ensure maximum Reserved Instance utilization
and a lower bill. For billing purposes, all the accounts in the organization are treated as one account. The
following example might help explain this.
You're running the following On-Demand Instance in account A (the purchasing account):
A customer also purchases the following zonal Reserved Instances in linked account C:
• The discount of the m4.xlarge zonal Reserved Instance owned by account C is applied to the
m4.xlarge usage in account A.
• The discount of the m4.xlarge regional Reserved Instance owned by account A is applied to the
m4.xlarge usage in account B.
• If the regional Reserved Instance owned by account A was first applied to the usage in account A, the
zonal Reserved Instance owned by account C remains unused and usage in account B is charged at On-
Demand rates.
For more information, see Reserved Instances in the Billing and Cost Management Report.
If you're launching an On-Demand Instance to take advantage of the billing benefit of a Reserved
Instance, ensure that you specify the following information when you configure your On-Demand
Instance:
Platform
You must specify an Amazon Machine Image (AMI) that matches the platform (product description)
of your Reserved Instance. For example, if you specified Linux/UNIX for your Reserved Instance,
you can launch an instance from an Amazon Linux AMI or an Ubuntu AMI.
Instance type
If you purchased a zonal Reserved Instance, you must specify the same instance type as your
Reserved Instance; for example, t3.large. For more information, see How zonal Reserved Instances
are applied (p. 385).
If you purchased a regional Reserved Instance, you must specify an instance type from the same
instance family as the instance type of your Reserved Instance. For example, if you specified
t3.xlarge for your Reserved Instance, you must launch your instance from the T3 family, but you
can specify any size, for example, t3.medium. For more information, see How regional Reserved
Instances are applied (p. 385).
Availability Zone
If you purchased a zonal Reserved Instance for a specific Availability Zone, you must launch the
instance into the same Availability Zone.
If you purchased a regional Reserved Instance, you can launch the instance into any Availability Zone
in the Region that you specified for the Reserved Instance.
Tenancy
The tenancy (dedicated or shared) of the instance must match the tenancy of your Reserved
Instance. For more information, see Dedicated Instances (p. 516).
For examples of how Reserved Instances are applied to your running On-Demand Instances, see How
Reserved Instances are applied (p. 385). For more information, see Why aren't my Amazon EC2
Reserved Instances applying to my AWS billing in the way that I expected?
You can use various methods to launch the On-Demand Instances that use your Reserved Instance
discount. For information about the different launch methods, see Launch your instance (p. 563). You
can also use Amazon EC2 Auto Scaling to launch an instance. For more information, see the Amazon EC2
Auto Scaling User Guide.
When Reserved Instances expire, you are charged On-Demand rates for EC2 instance usage. You can
queue a Reserved Instance for purchase up to three years in advance. This can help you ensure that you
have uninterrupted coverage. For more information, see Queue your purchase (p. 397).
The AWS Free Tier is available for new AWS accounts. If you are using the AWS Free Tier to run Amazon
EC2 instances, and you purchase a Reserved Instance, you are charged under standard pricing guidelines.
For information, see AWS Free Tier.
Contents
• Usage billing (p. 392)
• Viewing your bill (p. 393)
• Reserved Instances and consolidated billing (p. 394)
• Reserved Instance discount pricing tiers (p. 394)
Usage billing
Reserved Instances are billed for every clock-hour during the term that you select, regardless of whether
an instance is running. Each clock-hour starts on the hour (zero minutes and zero seconds past the hour)
of a standard 24-hour clock. For example, 1:00:00 to 1:59:59 is one clock-hour. For more information
about instance states, see Instance lifecycle (p. 559).
A Reserved Instance billing benefit can be applied to a running instance on a per-second basis. Per-
second billing is available for instances using an open-source Linux distribution, such as Amazon Linux
and Ubuntu. Per-hour billing is used for commercial Linux distributions, such as Red Hat Enterprise Linux
and SUSE Linux Enterprise Server.
A Reserved Instance billing benefit can apply to a maximum of 3600 seconds (one hour) of instance
usage per clock-hour. You can run multiple instances concurrently, but can only receive the benefit of the
Reserved Instance discount for a total of 3600 seconds per clock-hour; instance usage that exceeds 3600
seconds in a clock-hour is billed at the On-Demand rate.
For example, if you purchase one m4.xlarge Reserved Instance and run four m4.xlarge instances
concurrently for one hour, one instance is charged at one hour of Reserved Instance usage and the other
three instances are charged at three hours of On-Demand usage.
However, if you purchase one m4.xlarge Reserved Instance and run four m4.xlarge instances for 15
minutes (900 seconds) each within the same hour, the total running time for the instances is one hour,
which results in one hour of Reserved Instance usage and 0 hours of On-Demand usage.
If multiple eligible instances are running concurrently, the Reserved Instance billing benefit is applied
to all the instances at the same time up to a maximum of 3600 seconds in a clock-hour; thereafter, On-
Demand rates apply.
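The 3600-seconds-per-clock-hour cap in the two examples above can be sketched as follows. This is a simplified illustration of the rule as stated, not AWS billing code; split_usage is a hypothetical helper:

```python
RI_CAP = 3600  # seconds of discounted usage per clock-hour, per Reserved Instance

def split_usage(ri_count, instance_seconds):
    """Return (ri_seconds, on_demand_seconds) for one clock-hour, given the
    seconds each matching instance ran within that hour."""
    total = sum(instance_seconds)
    ri_seconds = min(total, ri_count * RI_CAP)
    return ri_seconds, total - ri_seconds

# Four instances each running the full hour against one RI:
# one hour discounted, three hours at On-Demand rates.
print(split_usage(1, [3600] * 4))   # (3600, 10800)
# Four instances running 15 minutes (900 seconds) each: the whole hour is discounted.
print(split_usage(1, [900] * 4))    # (3600, 0)
```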
Cost Explorer on the Billing and Cost Management console enables you to analyze the savings against
running On-Demand Instances. The Reserved Instances FAQ includes an example of a list value
calculation.
If you close your AWS account, On-Demand billing for your resources stops. However, if you have any
Reserved Instances in your account, you continue to receive a bill for these until they expire.
You can view the charges online, or you can download a CSV file.
You can also track your Reserved Instance utilization using the AWS Cost and Usage Report. For
more information, see Reserved Instances under Cost and Usage Report in the AWS Billing and Cost
Management User Guide.
If you close the account that purchased the Reserved Instance, the payer account is charged for the
Reserved Instance until the Reserved Instance expires. After the closed account is permanently deleted in
90 days, the member accounts no longer benefit from the Reserved Instance billing discount.
• Pricing tiers and related discounts apply only to purchases of Amazon EC2 Standard Reserved
Instances.
• Pricing tiers do not apply to Reserved Instances for Windows with SQL Server Standard, SQL Server
Web, and SQL Server Enterprise.
• Pricing tiers do not apply to Reserved Instances for Linux with SQL Server Standard, SQL Server Web,
and SQL Server Enterprise.
• Pricing tier discounts only apply to purchases made from AWS. They do not apply to purchases of
third-party Reserved Instances.
• Discount pricing tiers are currently not applicable to Convertible Reserved Instance purchases.
Topics
• Calculate Reserved Instance pricing discounts (p. 394)
• Buy with a discount tier (p. 395)
• Crossing pricing tiers (p. 396)
• Consolidated billing for pricing tiers (p. 396)
You can determine the pricing tier for your account by calculating the list value for all of your Reserved
Instances in a Region. Multiply the hourly recurring price for each reservation by the total number of
hours for the term and add the undiscounted upfront price (also known as the fixed price) at the time of
purchase. Because the list value is based on undiscounted (public) pricing, it is not affected if you qualify
for a volume discount or if the price drops after you buy your Reserved Instances.
List value = fixed price + (undiscounted recurring hourly price * hours in term)
For example, for a 1-year Partial Upfront t2.small Reserved Instance, assume the upfront price is
$60.00 and the hourly rate is $0.007. This provides a list value of $121.32.
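Under the stated assumptions ($60.00 upfront, $0.007 per hour, and a 1-year term of 8,760 hours), the list-value formula works out as follows. This is a quick arithmetic check, not an AWS tool:

```python
def list_value(fixed_price, hourly_price, hours_in_term):
    """List value = fixed price + (undiscounted recurring hourly price * hours in term)."""
    return fixed_price + hourly_price * hours_in_term

# 1-year Partial Upfront t2.small example from the text: 365 days * 24 hours = 8,760 hours.
print(round(list_value(60.00, 0.007, 365 * 24), 2))   # 121.32
```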
New console
To view the fixed price values for Reserved Instances using the Amazon EC2 console
Old console
To view the fixed price values for Reserved Instances using the Amazon EC2 console
To view the fixed price values for Reserved Instances using the command line
When you buy Reserved Instances, Amazon EC2 automatically applies any discounts to the part of your
purchase that falls within a discount pricing tier. You don't need to do anything differently, and you can
buy Reserved Instances using any of the Amazon EC2 tools. For more information, see Buy Reserved
Instances (p. 396).
After the list value of your active Reserved Instances in a Region crosses into a discount pricing tier,
any future purchase of Reserved Instances in that Region are charged at a discounted rate. If a single
purchase of Reserved Instances in a Region takes you over the threshold of a discount tier, then the
portion of the purchase that is above the price threshold is charged at the discounted rate. For more
information about the temporary Reserved Instance IDs that are created during the purchase process,
see Crossing pricing tiers (p. 396).
If your list value falls below the price point for that discount pricing tier—for example, if some of your
Reserved Instances expire—future purchases of Reserved Instances in the Region are not discounted.
However, you continue to get the discount applied against any Reserved Instances that were originally
purchased within the discount pricing tier.
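The split of a purchase that crosses a tier can be sketched as follows. The $500,000 threshold used here is purely illustrative, not a published AWS figure; see Amazon EC2 Reserved Instances Pricing for the actual tiers:

```python
def split_purchase(current_list_value, purchase, threshold=500_000.0):
    """Split a purchase (in list value) into the portion below the tier
    threshold (charged at the regular rate) and the portion above it
    (charged at the tier's discounted rate)."""
    below = max(0.0, min(purchase, threshold - current_list_value))
    return below, purchase - below

# With $400,000 of active list value, a $200,000 purchase crosses the
# hypothetical $500,000 tier: half regular price, half discounted.
print(split_purchase(400_000, 200_000))   # (100000.0, 100000.0)
```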
When you buy Reserved Instances, one of four possible scenarios occurs:
If your purchase crosses into a discounted pricing tier, you see multiple entries for that purchase: one for
that part of the purchase charged at the regular price, and another for that part of the purchase charged
at the applicable discounted rate.
The Reserved Instance service generates several Reserved Instance IDs because your purchase crossed
from an undiscounted tier, or from one discounted tier to another. There is an ID for each set of
reservations in a tier. Consequently, the ID returned by your purchase CLI command or API action is
different from the actual ID of the new Reserved Instances.
A consolidated billing account aggregates the list value of member accounts within a Region. When
the list value of all active Reserved Instances for the consolidated billing account reaches a discount
pricing tier, any Reserved Instances purchased after this point by any member of the consolidated
billing account are charged at the discounted rate (as long as the list value for that consolidated account
stays above the discount pricing tier threshold). For more information, see Reserved Instances and
consolidated billing (p. 394).
When you search for Reserved Instances to buy, you receive a quote on the cost of the returned offerings.
When you proceed with the purchase, AWS automatically places a limit price on the purchase price. The
total cost of your Reserved Instances won't exceed the amount that you were quoted.
If the price rises or changes for any reason, the purchase is not completed. If, at the time of purchase,
there are offerings similar to your choice but at a lower price, AWS sells you the offerings at the lower
price.
Before you confirm your purchase, review the details of the Reserved Instance that you plan to buy, and
make sure that all the parameters are accurate. After you purchase a Reserved Instance (either from a
third-party seller in the Reserved Instance Marketplace or from AWS), you cannot cancel your purchase.
Note
To purchase and modify Reserved Instances, ensure that your IAM user account has the
appropriate permissions, such as the ability to describe Availability Zones. For information,
see Example Policies for Working With the AWS CLI or an AWS SDK and Example Policies for
Working in the Amazon EC2 Console.
Topics
• Choosing a platform (p. 397)
• Queue your purchase (p. 397)
• Buy Standard Reserved Instances (p. 398)
Choosing a platform
Amazon EC2 supports the following Linux platforms for Reserved Instances:
• Linux/UNIX
• Linux with SQL Server Standard
• Linux with SQL Server Web
• Linux with SQL Server Enterprise
• SUSE Linux
• Red Hat Enterprise Linux
• Red Hat Enterprise Linux with HA
When you purchase a Reserved Instance, you must choose an offering for a platform that represents the
operating system for your instance.
• For SUSE Linux and RHEL distributions, you must choose an offering for the specific platform; that is,
the SUSE Linux or Red Hat Enterprise Linux platform.
• For all other Linux distributions (including Ubuntu), choose an offering for the Linux/UNIX platform.
• If you bring your existing RHEL subscription, you must choose an offering for the Linux/UNIX
platform, not an offering for the Red Hat Enterprise Linux platform.
Important
If you plan to purchase a Reserved Instance to apply to an On-Demand Instance that was
launched from an AWS Marketplace AMI, first check the PlatformDetails field of the AMI.
The PlatformDetails field indicates which Reserved Instance to purchase. The platform
details of the AMI must match the platform of the Reserved Instance, otherwise the Reserved
Instance will not be applied to the On-Demand Instance. For information about how to view the
platform details of the AMI, see Understand AMI billing information (p. 193).
For information about the supported platforms for Windows, see Choosing a platform in the Amazon
EC2 User Guide for Windows Instances.
You can queue purchases for regional Reserved Instances, but not zonal Reserved Instances or Reserved
Instances from other sellers. You can queue a purchase up to three years in advance. On the scheduled
date and time, the purchase is made using the default payment method. After the payment is successful,
the billing benefit is applied.
You can view your queued purchases in the Amazon EC2 console. The status of a queued purchase is
queued. You can cancel a queued purchase any time before its scheduled time. For details, see Cancel a
queued purchase (p. 403).
New console
To purchase a regional Reserved Instance, toggle off this setting. When you toggle off this
setting, the Availability Zone field disappears.
5. Select other configurations as needed, and then choose Search.
6. For each Reserved Instance that you want to purchase, enter the desired quantity, and choose
Add to cart.
To purchase a Standard Reserved Instance from the Reserved Instance Marketplace, look for 3rd
party in the Seller column in the search results. The Term column displays non-standard terms.
For more information, see Buy from the Reserved Instance Marketplace (p. 402).
7. To see a summary of the Reserved Instances that you selected, choose View cart.
8. If Order on is Now, the purchase is completed immediately after you choose Order all. To
queue a purchase, choose Now and select a date. You can select a different date for each
eligible offering in the cart. The purchase is queued until 00:00 UTC on the selected date.
9. To complete the order, choose Order all.
If, at the time of placing the order, there are offerings similar to your choice but with a lower
price, AWS sells you the offerings at the lower price.
10. Choose Close.
The status of your order is listed in the State column. When your order is complete, the State
value changes from Payment-pending to Active. When the Reserved Instance is Active, it is
ready to use.
Note
If the status goes to Retired, AWS might not have received your payment.
Old console
To purchase a Standard Reserved Instance from the Reserved Instance Marketplace, look for 3rd
Party in the Seller column in the search results. The Term column displays non-standard terms.
6. For each Reserved Instance that you want to purchase, enter the quantity, and choose Add to
Cart.
7. To see a summary of the Reserved Instances that you selected, choose View Cart.
8. If Order On is Now, the purchase is completed immediately. To queue a purchase, choose Now
and select a date. You can select a different date for each eligible offering in the cart. The
purchase is queued until 00:00 UTC on the selected date.
9. To complete the order, choose Order.
If, at the time of placing the order, there are offerings similar to your choice but with a lower
price, AWS sells you the offerings at the lower price.
10. Choose Close.
The status of your order is listed in the State column. When your order is complete, the State
value changes from payment-pending to active. When the Reserved Instance is active, it is
ready to use.
Note
If the status goes to retired, AWS might not have received your payment.
To find Reserved Instances on the Reserved Instance Marketplace only, use the marketplace filter
and do not specify a duration in the request, as the term might be shorter than a 1- or 3-year term.
When you find a Reserved Instance that meets your needs, take note of the offering ID. For example:
"ReservedInstancesOfferingId": "bec624df-a8cc-4aad-a72f-4f8abc34caf2"
2. Use the purchase-reserved-instances-offering command to buy your Reserved Instance. You must
specify the Reserved Instance offering ID that you obtained in the previous step, and you must specify
the number of instances for the reservation.
By default, the purchase is completed immediately. Alternatively, to queue the purchase, add the
following parameter to the previous call.
--purchase-time "2020-12-01T00:00:00Z"
3. Use the describe-reserved-instances command to get the status of your Reserved Instance.
Alternatively, use the following AWS Tools for Windows PowerShell commands:
• Get-EC2ReservedInstancesOffering
• New-EC2ReservedInstance
• Get-EC2ReservedInstance
After the purchase is complete, if you already have a running instance that matches the specifications
of the Reserved Instance, the billing benefit is immediately applied. You do not have to restart your
instances. If you do not have a suitable running instance, launch an instance and ensure that you match
the same criteria that you specified for your Reserved Instance. For more information, see Use your
Reserved Instances (p. 391).
For examples of how Reserved Instances are applied to your running instances, see How Reserved
Instances are applied (p. 385).
New console
To purchase a regional Reserved Instance, toggle off this setting. When you toggle off this
setting, the Availability Zone field disappears.
5. Select other configurations as needed and choose Search.
6. For each Convertible Reserved Instance that you want to purchase, enter the quantity, and
choose Add to cart.
7. To see a summary of your selection, choose View cart.
8. If Order on is Now, the purchase is completed immediately after you choose Order all. To
queue a purchase, choose Now and select a date. You can select a different date for each
eligible offering in the cart. The purchase is queued until 00:00 UTC on the selected date.
9. To complete the order, choose Order all.
If, at the time of placing the order, there are offerings similar to your choice but with a lower
price, AWS sells you the offerings at the lower price.
10. Choose Close.
The status of your order is listed in the State column. When your order is complete, the State
value changes from Payment-pending to Active. When the Reserved Instance is Active, it is
ready to use.
Note
If the status goes to Retired, AWS might not have received your payment.
Old console
If, at the time of placing the order, there are offerings similar to your choice but with a lower
price, AWS sells you the offerings at the lower price.
10. Choose Close.
The status of your order is listed in the State column. When your order is complete, the State
value changes from payment-pending to active. When the Reserved Instance is active, it is
ready to use.
Note
If the status goes to retired, AWS might not have received your payment.
When you find a Reserved Instance that meets your needs, take note of the offering ID. For example:
"ReservedInstancesOfferingId": "bec624df-a8cc-4aad-a72f-4f8abc34caf2"
2. Use the purchase-reserved-instances-offering command to buy your Reserved Instance. You must
specify the Reserved Instance offering ID that you obtained in the previous step, and you must specify
the number of instances for the reservation.
By default, the purchase is completed immediately. Alternatively, to queue the purchase, add the
following parameter to the previous call.
--purchase-time "2020-12-01T00:00:00Z"
3. Use the describe-reserved-instances command to get the status of your Reserved Instance.
Alternatively, use the following AWS Tools for Windows PowerShell commands:
• Get-EC2ReservedInstancesOffering
• New-EC2ReservedInstance
• Get-EC2ReservedInstance
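The same flow can be sketched with boto3, the AWS SDK for Python. The `purchase_reserved_instances_offering` call and its `ReservedInstancesOfferingId`, `InstanceCount`, and `PurchaseTime` parameters are real SDK names; the `build_purchase_request` helper is our own illustration, not part of any SDK:

```python
def build_purchase_request(offering_id, instance_count, purchase_time=None):
    """Build the parameters for purchase_reserved_instances_offering.

    purchase_time (an ISO 8601 UTC string) queues the purchase instead
    of completing it immediately, mirroring --purchase-time in the CLI.
    """
    params = {
        "ReservedInstancesOfferingId": offering_id,
        "InstanceCount": instance_count,
    }
    if purchase_time is not None:
        params["PurchaseTime"] = purchase_time
    return params

# Example usage (requires AWS credentials; shown only as a sketch):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.purchase_reserved_instances_offering(
#     **build_purchase_request(
#         "bec624df-a8cc-4aad-a72f-4f8abc34caf2", 1,
#         purchase_time="2020-12-01T00:00:00Z"))
```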
If you already have a running instance that matches the specifications of the Reserved Instance, the
billing benefit is immediately applied. You do not have to restart your instances. If you do not have
a suitable running instance, launch an instance and ensure that you match the same criteria that you
specified for your Reserved Instance. For more information, see Use your Reserved Instances (p. 391).
For examples of how Reserved Instances are applied to your running instances, see How Reserved
Instances are applied (p. 385).
There are a few differences between Reserved Instances purchased in the Reserved Instance Marketplace
and Reserved Instances purchased directly from AWS:
• Term – Reserved Instances that you purchase from third-party sellers have less than a full standard
term remaining. Full standard terms from AWS run for one year or three years.
• Upfront price – Third-party Reserved Instances can be sold at different upfront prices. The usage or
recurring fees remain the same as the fees set when the Reserved Instances were originally purchased
from AWS.
• Types of Reserved Instances – Only Amazon EC2 Standard Reserved Instances can be purchased
from the Reserved Instance Marketplace. Convertible Reserved Instances, Amazon RDS, and Amazon
ElastiCache Reserved Instances are not available for purchase on the Reserved Instance Marketplace.
Basic information about you is shared with the seller, for example, your ZIP code and country
information.
This information enables sellers to calculate any necessary transaction taxes that they have to remit to
the government (such as sales tax or value-added tax) and is provided as a disbursement report. In rare
circumstances, AWS might have to provide the seller with your email address, so that they can contact
you regarding questions related to the sale (for example, tax questions).
For similar reasons, AWS shares the legal entity name of the seller on the buyer's purchase invoice. If you
need additional information about the seller for tax or related reasons, contact AWS Support.
As soon as you list your Reserved Instances in the Reserved Instance Marketplace, they are available
for potential buyers to find. All Reserved Instances are grouped according to the duration of the term
remaining and the hourly price.
To fulfill a buyer's request, AWS first sells the Reserved Instance with the lowest upfront price in the
specified grouping. Then, AWS sells the Reserved Instance with the next lowest price, until the buyer's
entire order is fulfilled. AWS then processes the transactions and transfers ownership of the Reserved
Instances to the buyer.
You own your Reserved Instance until it's sold. After the sale, you've given up the capacity reservation
and the discounted recurring fees. If you continue to use your instance, AWS charges you the On-
Demand price starting from the time that your Reserved Instance was sold.
If you want to sell your unused Reserved Instances on the Reserved Instance Marketplace, you must meet
certain eligibility criteria.
For information about buying Reserved Instances on the Reserved Instance Marketplace, see Buy from
the Reserved Instance Marketplace (p. 402).
Contents
• Restrictions and limitations (p. 405)
• Register as a seller (p. 406)
• Bank account for disbursement (p. 406)
• Tax information (p. 407)
• Price your Reserved Instances (p. 407)
• List your Reserved Instances (p. 408)
• Reserved Instance listing states (p. 409)
• Lifecycle of a listing (p. 409)
• After your Reserved Instance is sold (p. 410)
• Getting paid (p. 410)
• Information shared with the buyer (p. 410)
Restrictions and limitations
The following limitations and restrictions apply when selling Reserved Instances:
• Only Amazon EC2 Standard Reserved Instances can be sold in the Reserved Instance Marketplace.
Amazon EC2 Convertible Reserved Instances cannot be sold. Reserved Instances for other AWS
services, such as Amazon RDS and Amazon ElastiCache, cannot be sold.
• There must be at least one month remaining in the term of the Standard Reserved Instance.
• You cannot sell a Standard Reserved Instance in a Region that is disabled by default.
• The minimum price allowed in the Reserved Instance Marketplace is $0.00.
• You can sell No Upfront, Partial Upfront, or All Upfront Reserved Instances in the Reserved Instance
Marketplace. If there is an upfront payment on a Reserved Instance, it can be sold only after AWS has
received the upfront payment and the reservation has been active (you've owned it) for at least 30
days.
• You cannot modify your listing in the Reserved Instance Marketplace directly. However, you can
change your listing by first canceling it and then creating another listing with new parameters. For
information, see Price your Reserved Instances (p. 407). You can also modify your Reserved Instances
before listing them. For information, see Modify Reserved Instances (p. 411).
• To list a regional Reserved Instance in the marketplace, you must first modify its scope to zonal,
because regional Reserved Instances cannot be sold through the console.
• AWS charges a service fee of 12 percent of the total upfront price of each Standard Reserved Instance
you sell in the Reserved Instance Marketplace. The upfront price is the price the seller is charging for
the Standard Reserved Instance.
• When you register as a seller, the bank you specify must have a US address. For more information, see
Additional seller requirements for paid products in the AWS Marketplace Seller Guide.
• Amazon Internet Services Private Limited (AISPL) customers can't sell Reserved Instances in the
Reserved Instance Marketplace even if they have a US bank account. For more information, see What
are the differences between AWS accounts and AISPL accounts?
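The 12 percent service fee described above determines the seller's proceeds from a sale. A minimal arithmetic sketch (the function name is ours):

```python
SERVICE_FEE_RATE = 0.12  # AWS charges 12% of the total upfront price

def seller_proceeds(upfront_price, instance_count=1):
    """Return (service_fee, proceeds) for a Marketplace sale."""
    total_upfront = upfront_price * instance_count
    fee = total_upfront * SERVICE_FEE_RATE
    return fee, total_upfront - fee

# Selling one Standard Reserved Instance at a $2,000 upfront price
# yields a fee of about $240 and proceeds of about $1,760.
fee, proceeds = seller_proceeds(2000.00)
```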
Register as a seller
Note
Only the AWS account root user can register an account as a seller.
To sell in the Reserved Instance Marketplace, you must first register as a seller. During registration, you
provide the following information:
• Bank information—AWS must have your bank information in order to disburse funds collected when
you sell your reservations. The bank you specify must have a US address. For more information, see
Bank account for disbursement (p. 406).
• Tax information—All sellers are required to complete a tax information interview to determine any
necessary tax reporting obligations. For more information, see Tax information (p. 407).
After AWS receives your completed seller registration, you receive an email confirming your registration
and informing you that you can get started selling in the Reserved Instance Marketplace.
1. Open the Reserved Instance Marketplace Seller Registration page and sign in using your AWS
credentials.
2. On the Manage Bank Account page, provide the following information about the bank through
which you will receive payment:
Note
If you are using a corporate bank account, you are prompted to send the information about
the bank account via fax (1-206-765-3424).
After registration, the bank account provided is set as the default, pending verification with the
bank. It can take up to two weeks to verify a new bank account, during which time you can't receive
disbursements. For an established account, it usually takes about two days for disbursements to
complete.
1. On the Reserved Instance Marketplace Seller Registration page, sign in with the account that you
used when you registered.
2. On the Manage Bank Account page, add a new bank account or modify the default bank account as
needed.
Tax information
Your sale of Reserved Instances might be subject to a transaction-based tax, such as sales tax or value-
added tax. You should check with your business's tax, legal, finance, or accounting department to
determine if transaction-based taxes are applicable. You are responsible for collecting and sending the
transaction-based taxes to the appropriate tax authority.
As part of the seller registration process, you must complete a tax interview in the Seller Registration
Portal. The interview collects your tax information and populates an IRS form W-9, W-8BEN, or W-8BEN-
E, which is used to determine any necessary tax reporting obligations.
The tax information you enter as part of the tax interview might differ depending on whether you
operate as an individual or business, and whether you or your business are a US or non-US person or
entity. As you fill out the tax interview, keep in mind the following:
• Information provided by AWS, including the information in this topic, does not constitute tax, legal, or
other professional advice. To find out how the IRS reporting requirements might affect your business,
or if you have other questions, contact your tax, legal, or other professional advisor.
• To fulfill the IRS reporting requirements as efficiently as possible, answer all questions and enter all
information requested during the interview.
• Check your answers. Avoid misspellings or entering incorrect tax identification numbers. They can
result in an invalidated tax form.
Based on your tax interview responses and IRS reporting thresholds, Amazon might file Form 1099-K.
Amazon mails a copy of your Form 1099-K on or before January 31 in the year following the year that
your tax account reaches the threshold levels. For example, if your account reaches the threshold in
2018, your Form 1099-K is mailed on or before January 31, 2019.
For more information about IRS requirements and Form 1099-K, see the IRS website.
Price your Reserved Instances
• You can sell up to $50,000 in Reserved Instances. To increase this limit, complete the EC2 Reserved
Instance Sales form.
• You can sell up to 5,000 Reserved Instances. To increase this limit, complete the EC2 Reserved
Instance Sales form.
• The minimum allowed price in the Reserved Instance Marketplace is $0.00.
You cannot modify your listing directly. However, you can change your listing by first canceling it and
then creating another listing with new parameters.
You can cancel your listing at any time, as long as it's in the active state. You cannot cancel the listing
if it's already matched or being processed for a sale. If some of the instances in your listing are matched
and you cancel the listing, only the remaining unmatched instances are removed from the listing.
Because the value of Reserved Instances decreases over time, by default, AWS can set prices to decrease
in equal increments month over month. However, you can set different upfront prices based on when
your reservation sells.
For example, if your Reserved Instance has nine months of its term remaining, you can specify the
amount that you would accept if a customer were to purchase that Reserved Instance with nine months
remaining. You could set another price with five months remaining, and yet another price with one
month remaining.
The console determines a suggested price. It checks for offerings that match your Reserved Instance and
selects the one with the lowest price. Otherwise, it calculates a suggested price based on the cost of
the Reserved Instance for its remaining time. If the calculated value is less than $1.01, the suggested
price is $1.01.
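The fallback calculation can be sketched as follows. The linear proration over the remaining term and the function names are our assumptions for illustration, not the console's exact logic; only the $1.01 floor is stated in this guide:

```python
MIN_SUGGESTED_PRICE = 1.01  # the console never suggests less than $1.01

def suggested_price(original_upfront, term_months, months_remaining,
                    lowest_matching_offer=None):
    """Approximate the console's suggested listing price.

    If an offering matching the Reserved Instance exists, its lowest
    price wins; otherwise the price is prorated over the remaining
    term (an illustrative assumption), subject to the $1.01 floor.
    """
    if lowest_matching_offer is not None:
        price = lowest_matching_offer
    else:
        price = original_upfront * months_remaining / term_months
    return max(price, MIN_SUGGESTED_PRICE)

# A $720 three-year (36-month) reservation with 9 months remaining
# prorates to 720 * 9 / 36 = 180.0.
```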
If you cancel your listing and a portion of that listing has already been sold, the cancellation is not
effective on the portion that has been sold. Only the unsold portion of the listing is no longer available
in the Reserved Instance Marketplace.
To list a Reserved Instance in the Reserved Instance Marketplace using the AWS Management
Console
To manage Reserved Instances in the Reserved Instance Marketplace using the AWS CLI
Reserved Instance listing states
The information displayed by Listing State is about the status of your listing in the Reserved Instance
Marketplace. It is different from the status information that is displayed by the State column in the
Reserved Instances page. This State information is about your reservation.
Lifecycle of a listing
When all the instances in your listing are matched and sold, the My Listings tab shows that the Total
instance count matches the count listed under Sold. Also, there are no Available instances left for your
listing, and its Status is closed.
When only a portion of your listing is sold, AWS retires the Reserved Instances in the listing and creates
new Reserved Instances equal to the number remaining in the count. The listing ID, and the listing that
it represents, which now has fewer reservations for sale, remain active.
Any future sales of Reserved Instances in this listing are processed this way. When all the Reserved
Instances in the listing are sold, AWS marks the listing as closed.
For example, suppose you list a reservation with a count of five on the My Listings tab of the Reserved
Instance console page. A buyer purchases two of the reservations, which leaves a count of three
reservations still available for sale. Because of this partial sale, AWS creates a new reservation with a
count of three to represent the remaining reservations that are still for sale.
The My Listings tab contains the Listing State value. It also contains information about the term,
listing price, and a breakdown of how many instances in the listing are available, pending, sold, and
canceled.
You can also use the describe-reserved-instances-listings command with the appropriate filter to obtain
information about your listings.
Getting paid
As soon as AWS receives funds from the buyer, a message is sent to the registered owner account email
for the sold Reserved Instance.
AWS sends an Automated Clearing House (ACH) wire transfer to your specified bank account.
Typically, this transfer occurs between one to three days after your Reserved Instance has been sold.
Disbursements take place once a day. You will receive an email with a disbursement report after the
funds are released. Keep in mind that you can't receive disbursements until AWS receives verification
from your bank. This can take up to two weeks.
The Reserved Instance that you sold continues to appear when you describe your Reserved Instances.
You receive a cash disbursement for your Reserved Instances through a wire transfer directly into your
bank account. AWS charges a service fee of 12 percent of the total upfront price of each Reserved
Instance you sell in the Reserved Instance Marketplace.
For similar reasons, the buyer's ZIP code and country information are provided to the seller in the
disbursement report. As a seller, you might need this information to accompany any necessary
transaction taxes that you remit to the government (such as sales tax and value-added tax).
AWS cannot offer tax advice, but if your tax specialist determines that you need specific additional
information, contact AWS Support.
Modify Reserved Instances
You can modify all or a subset of your Reserved Instances. You can separate your original Reserved
Instances into two or more new Reserved Instances. For example, if you have a reservation for 10
instances in us-east-1a and decide to move 5 instances to us-east-1b, the modification request
results in two new reservations: one for 5 instances in us-east-1a and the other for 5 instances in us-
east-1b.
You can also merge two or more Reserved Instances into a single Reserved Instance. For example, if
you have four t2.small Reserved Instances of one instance each, you can merge them to create one
t2.large Reserved Instance. For more information, see Support for modifying instance sizes (p. 413).
After modification, the benefit of the Reserved Instances is applied only to instances that match the new
parameters. For example, if you change the Availability Zone of a reservation, the capacity reservation
and pricing benefits are automatically applied to instance usage in the new Availability Zone. Instances
that no longer match the new parameters are charged at the On-Demand rate, unless your account has
other applicable reservations.
• The modified reservation becomes effective immediately and the pricing benefit is applied to the new
instances beginning at the hour of the modification request. For example, if you successfully modify
your reservations at 9:15PM, the pricing benefit transfers to your new instance at 9:00PM. You can
get the effective date of the modified Reserved Instances by using the describe-reserved-instances
command.
• The original reservation is retired. Its end date is the start date of the new reservation, and the end
date of the new reservation is the same as the end date of the original Reserved Instance. If you
modify a three-year reservation that had 16 months left in its term, the resulting modified reservation
is a 16-month reservation with the same end date as the original one.
• The modified reservation lists a $0 fixed price and not the fixed price of the original reservation.
• The fixed price of the modified reservation does not affect the discount pricing tier calculations
applied to your account, which are based on the fixed price of the original reservation.
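The hour-of-modification rule in the first bullet above can be illustrated with a small standard-library sketch (the function name is ours):

```python
from datetime import datetime

def pricing_benefit_start(modification_time):
    """The modified reservation's pricing benefit applies from the top
    of the hour in which the modification request succeeded."""
    return modification_time.replace(minute=0, second=0, microsecond=0)

# A modification that succeeds at 9:15 PM is billed from 9:00 PM:
start = pricing_benefit_start(datetime(2023, 6, 1, 21, 15))
# start == datetime(2023, 6, 1, 21, 0)
```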
If your modification request fails, your Reserved Instances maintain their original configuration, and are
immediately available for another modification request.
There is no fee for modification, and you do not receive any new bills or invoices.
You can modify your reservations as frequently as you like, but you cannot change or cancel a pending
modification request after you submit it. After the modification has completed successfully, you can
submit another modification request to roll back any changes you made, if needed.
Contents
• Requirements and restrictions for modification (p. 412)
• Support for modifying instance sizes (p. 413)
• Submit modification requests (p. 416)
• Troubleshoot modification requests (p. 417)
Modification: Change the scope from Availability Zone to Region and vice versa
Supported platforms: Linux and Windows
Limitations: If you change the scope from Availability Zone to Region, you lose the capacity reservation
benefit.

Modification: Change the instance size within the same instance family
Supported platforms: Linux/UNIX only. Instance size flexibility is not available for Reserved Instances
on the other platforms, which include Linux with SQL Server Standard, Linux with SQL Server Web, Linux
with SQL Server Enterprise, Red Hat Enterprise Linux, SUSE Linux, Windows, Windows with SQL Standard,
Windows with SQL Server Enterprise, and Windows with SQL Server Web.
Limitations: The reservation must use default tenancy. Some instance families are not supported, because
there are no other sizes available. For more information, see Support for modifying instance
sizes (p. 413).

Modification: Change the network from EC2-Classic to Amazon VPC and vice versa
Supported platforms: Linux and Windows
Limitations: The network platform must be available in your AWS account. If you created your AWS account
after 2013-12-04, it does not support EC2-Classic.
Requirements
Amazon EC2 processes your modification request if there is sufficient capacity for your new configuration
(if applicable), and if the following conditions are met:
• The Reserved Instance cannot be modified before or at the same time that you purchase it
• The Reserved Instance must be active
• There cannot be a pending modification request
• The Reserved Instance is not listed in the Reserved Instance Marketplace
• There must be a match between the instance size footprint of the original reservation and the new
configuration. For more information, see Support for modifying instance sizes (p. 413).
• The original Reserved Instances are all Standard Reserved Instances or all Convertible Reserved
Instances, not some of each type
• The original Reserved Instances must expire within the same hour, if they are Standard Reserved
Instances
• The Reserved Instance is not a G4 instance.
Support for modifying instance sizes
Requirements
You cannot modify the instance size of Reserved Instances for the following instances, because each of
these instance families has only one size:
• cc2.8xlarge
• cr1.8xlarge
• hs1.8xlarge
• t1.micro
• The original and new Reserved Instance must have the same instance size footprint.
Contents
• Instance size footprint (p. 413)
• Normalization factors for bare metal instances (p. 415)
Each Reserved Instance has an instance size footprint, which is determined by the normalization factor
of the instance size and the number of instances in the reservation. When you modify the instance
sizes in a Reserved Instance, the footprint of the new configuration must match that of the original
configuration; otherwise, the modification request is not processed.
To calculate the instance size footprint of a Reserved Instance, multiply the number of instances by
the normalization factor. In the Amazon EC2 console, the normalization factor is measured in units.
The following table describes the normalization factor for the instance sizes in an instance family. For
example, t2.medium has a normalization factor of 2, so a reservation for four t2.medium instances has
a footprint of 8 units.
Instance size Normalization factor
nano 0.25
micro 0.5
small 1
medium 2
large 4
xlarge 8
2xlarge 16
3xlarge 24
4xlarge 32
6xlarge 48
8xlarge 64
9xlarge 72
10xlarge 80
12xlarge 96
16xlarge 128
18xlarge 144
24xlarge 192
32xlarge 256
56xlarge 448
112xlarge 896
You can allocate your reservations into different instance sizes across the same instance family as
long as the instance size footprint of your reservation remains the same. For example, you can divide
a reservation for one t2.large (1 @ 4 units) instance into four t2.small (4 @ 1 unit) instances.
Similarly, you can combine a reservation for four t2.small instances into one t2.large instance.
However, you cannot change your reservation for two t2.small instances into one t2.large instance
because the footprint of the new reservation (4 units) is larger than the footprint of the original
reservation (2 units).
In the following example, you have a reservation with two t2.micro instances (1 unit) and a reservation
with one t2.small instance (1 unit). If you merge both of these reservations to a single reservation
with one t2.medium instance (2 units), the footprint of the new reservation equals the footprint of the
combined reservations.
You can also modify a reservation to divide it into two or more reservations. In the following example,
you have a reservation with a t2.medium instance (2 units). You can divide the reservation into two
reservations, one with two t2.nano instances (0.5 units) and the other with three t2.micro instances
(1.5 units).
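The footprint rule can be checked programmatically. A minimal sketch using the normalization factors from the table above (the helper names are ours):

```python
# Normalization factors from the table above (units per instance size).
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2, "large": 4,
    "xlarge": 8, "2xlarge": 16, "4xlarge": 32, "8xlarge": 64,
    "16xlarge": 128,
}

def footprint(config):
    """Total units for a list of (size, count) pairs in one family."""
    return sum(NORMALIZATION[size] * count for size, count in config)

def modification_allowed(original, new):
    """A modification is processed only if the footprints match."""
    return footprint(original) == footprint(new)

# One t2.large (4 units) can become four t2.small (4 x 1 unit) ...
assert modification_allowed([("large", 1)], [("small", 4)])
# ... but two t2.small (2 units) cannot become one t2.large (4 units).
assert not modification_allowed([("small", 2)], [("large", 1)])
```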
The following table describes the normalization factor for the bare metal instance sizes in the instance
families that have bare metal instances. The normalization factor for metal instances depends on the
instance family, unlike the other instance sizes.
Instance size Normalization factor
a1.metal 32
m5zn.metal | z1d.metal 96
c5n.metal 144
u-*.metal 896
For example, an i3.metal instance has a normalization factor of 128. If you purchase an i3.metal
default tenancy Amazon Linux/Unix Reserved Instance, you can divide the reservation as follows:
• An i3.16xlarge is the same size as an i3.metal instance, so its normalization factor is 128 (128/1).
The reservation for one i3.metal instance can be modified into one i3.16xlarge instance.
• An i3.8xlarge is half the size of an i3.metal instance, so its normalization factor is 64 (128/2). The
reservation for one i3.metal instance can be divided into two i3.8xlarge instances.
• An i3.4xlarge is a quarter the size of an i3.metal instance, so its normalization factor is 32
(128/4). The reservation for one i3.metal instance can be divided into four i3.4xlarge instances.
New console
• Scope: Choose whether the configuration applies to an Availability Zone or to the whole
Region.
• Availability Zone: Choose the required Availability Zone. Not applicable for regional Reserved
Instances.
• Instance type: Select the required instance type. The combined configurations must equal the
instance size footprint of your original configurations.
• Count: Specify the number of instances. To split the Reserved Instances into multiple
configurations, reduce the count, choose Add, and specify a count for the additional
configuration. For example, if you have a single configuration with a count of 10, you can
change its count to 6 and add a configuration with a count of 4. This process retires the
original Reserved Instance after the new Reserved Instances are activated.
4. Choose Continue.
5. To confirm your modification choices when you finish specifying your target configurations,
choose Submit modifications.
6. You can determine the status of your modification request by looking at the State column in the
Reserved Instances screen. The following are the possible states.
Old console
2. On the Reserved Instances page, select one or more Reserved Instances to modify, and choose
Actions, Modify Reserved Instances.
Note
If your Reserved Instances are not in the active state or cannot be modified, Modify
Reserved Instances is disabled.
3. The first entry in the modification table displays attributes of selected Reserved Instances, and
at least one target configuration beneath it. The Units column displays the total instance size
footprint. Choose Add for each new configuration to add. Modify the attributes as needed for
each configuration, and then choose Continue:
• Scope: Choose whether the configuration applies to an Availability Zone or to the whole
Region.
• Availability Zone: Choose the required Availability Zone. Not applicable for regional Reserved
Instances.
• Instance Type: Select the required instance type. The combined configurations must equal the
instance size footprint of your original configurations.
• Count: Specify the number of instances. To split the Reserved Instances into multiple
configurations, reduce the count, choose Add, and specify a count for the additional
configuration. For example, if you have a single configuration with a count of 10, you can
change its count to 6 and add a configuration with a count of 4. This process retires the
original Reserved Instance after the new Reserved Instances are activated.
4. To confirm your modification choices when you finish specifying your target configurations,
choose Submit Modifications.
5. You can determine the status of your modification request by looking at the State column in the
Reserved Instances screen. The following are the possible states.
1. To modify your Reserved Instances, you can use one of the following commands:
• modify-reserved-instances (AWS CLI)
• Edit-EC2ReservedInstance (AWS Tools for Windows PowerShell)
2. To get the status of your modification request (processing, fulfilled, or failed), use one of
the following commands:
• describe-reserved-instances-modifications (AWS CLI)
• Get-EC2ReservedInstancesModification (AWS Tools for Windows PowerShell)
In some situations, you might get a message indicating incomplete or failed modification requests
instead of a confirmation. Use the information in such messages as a starting point for resubmitting
another modification request. Ensure that you have read the applicable restrictions (p. 412) before
submitting the request.
Amazon EC2 identifies and lists the Reserved Instances that cannot be modified. If you receive a message
like this, go to the Reserved Instances page in the Amazon EC2 console and check the information for
the Reserved Instances.
You submitted one or more Reserved Instances for modification and none of your requests can be
processed. Depending on the number of reservations you are modifying, you can get different versions of
the message.
Amazon EC2 displays the reasons why your request cannot be processed. For example, you might have
specified the same target configuration—a combination of Availability Zone and platform—for one or
more subsets of the Reserved Instances you are modifying. Try submitting the modification requests
again, but ensure that the instance details of the reservations match, and that the target configurations
for all subsets being modified are unique.
When you exchange your Convertible Reserved Instance, the number of instances in your current
reservation is exchanged for a number of instances that covers an equal or higher value of the
configuration of the new Convertible Reserved Instance. Amazon EC2 calculates the number of Reserved
Instances that you can receive as a result of the exchange.
You can't exchange Standard Reserved Instances, but you can modify them. For more information, see
Modify Reserved Instances (p. 411).
Contents
• Requirements for exchanging Convertible Reserved Instances (p. 418)
• Calculate Convertible Reserved Instances exchanges (p. 419)
• Merge Convertible Reserved Instances (p. 420)
• Exchange a portion of a Convertible Reserved Instance (p. 421)
• Submit exchange requests (p. 422)
Convertible Reserved Instances that you exchange must be:
• Active
• Not pending a previous exchange request
• Convertible Reserved Instances can only be exchanged for other Convertible Reserved Instances
currently offered by AWS.
• Convertible Reserved Instances are associated with a specific Region, which is fixed for the duration
of the reservation's term. You cannot exchange a Convertible Reserved Instance for a Convertible
Reserved Instance in a different Region.
• You can exchange one or more Convertible Reserved Instances at a time for one Convertible Reserved
Instance only.
• To exchange a portion of a Convertible Reserved Instance, you can modify it into two or
more reservations, and then exchange one or more of the reservations for a new Convertible
Reserved Instance. For more information, see Exchange a portion of a Convertible Reserved
Instance (p. 421). For more information about modifying your Reserved Instances, see Modify
Reserved Instances (p. 411).
• All Upfront Convertible Reserved Instances can be exchanged for Partial Upfront Convertible Reserved
Instances, and vice versa.
Note
If the total upfront payment required for the exchange (true-up cost) is less than $0.00, AWS
automatically gives you a quantity of instances in the Convertible Reserved Instance that
ensures that true-up cost is $0.00 or more.
Note
If the total value (upfront price + hourly price * number of remaining hours) of the new
Convertible Reserved Instance is less than the total value of the exchanged Convertible
Reserved Instance, AWS automatically gives you a quantity of instances in the Convertible
Reserved Instance that ensures that the total value is the same or higher than that of the
exchanged Convertible Reserved Instance.
• To benefit from better pricing, you can exchange a No Upfront Convertible Reserved Instance for an All
Upfront or Partial Upfront Convertible Reserved Instance.
• You cannot exchange All Upfront and Partial Upfront Convertible Reserved Instances for No Upfront
Convertible Reserved Instances.
• You can exchange a No Upfront Convertible Reserved Instance for another No Upfront Convertible
Reserved Instance only if the new Convertible Reserved Instance's hourly price is the same or higher
than the exchanged Convertible Reserved Instance's hourly price.
Note
If the total value (hourly price * number of remaining hours) of the new Convertible Reserved
Instance is less than the total value of the exchanged Convertible Reserved Instance, AWS
automatically gives you a quantity of instances in the Convertible Reserved Instance that
ensures that the total value is the same or higher than that of the exchanged Convertible
Reserved Instance.
• If you exchange multiple Convertible Reserved Instances that have different expiration dates, the
expiration date for the new Convertible Reserved Instance is the date that's furthest in the future.
• If you exchange a single Convertible Reserved Instance, it must have the same term (1-year or 3-year) as the new Convertible Reserved Instance. If you merge multiple Convertible Reserved Instances
with different term lengths, the new Convertible Reserved Instance has a 3-year term. For more
information, see Merge Convertible Reserved Instances (p. 420).
• After you exchange a Convertible Reserved Instance, the original reservation is retired. Its end date is
the start date of the new reservation, and the end date of the new reservation is the same as the end
date of the original Convertible Reserved Instance. For example, if you exchange a three-year reservation
that had 16 months left in its term, the resulting new reservation is a 16-month reservation with
the same end date as the original one.
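The value rule in the notes above can be sketched as arithmetic. The following is a minimal illustration with hypothetical prices; the function name and inputs are invented for this sketch, and AWS performs this calculation for you during an exchange:

```python
import math

def new_quantity(old_upfront, old_hourly, new_upfront, new_hourly, hours_remaining):
    """Minimum quantity of the new Convertible Reserved Instance whose
    total value (upfront price + hourly price * remaining hours) is at
    least the total value of the exchanged reservation."""
    old_total = old_upfront + old_hourly * hours_remaining
    new_unit = new_upfront + new_hourly * hours_remaining
    return math.ceil(old_total / new_unit)

# Exchanged reservation: $300 remaining upfront, $0.50/hour, 1,000 hours
# left -> total value $800. Target configuration: $50 upfront, $0.25/hour
# -> $300 per instance. AWS rounds the quantity up: 800 / 300 -> 3.
qty = new_quantity(300, 0.50, 50, 0.25, 1000)   # -> 3
```

Because the quantity is rounded up, the total value of the new reservation is always the same as or higher than that of the exchanged reservation, as the notes state.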
Each Convertible Reserved Instance has a list value. This list value is compared to the list value of the
Convertible Reserved Instances that you want in order to determine how many instance reservations you
can receive from the exchange.
For example: You have 1 x $35-list value Convertible Reserved Instance that you want to exchange for a
new instance type with a list value of $10.
$35/$10 = 3.5
You can exchange your Convertible Reserved Instance for three $10 Convertible Reserved Instances.
It's not possible to purchase half reservations; therefore you must purchase an additional Convertible
Reserved Instance to cover the remainder:
3.5 = 3 whole Convertible Reserved Instances + 1 additional Convertible Reserved Instance
The fourth Convertible Reserved Instance has the same end date as the other three. If you are
exchanging Partial or All Upfront Convertible Reserved Instances, you pay the true-up cost for the fourth
reservation. If the remaining upfront cost of your Convertible Reserved Instances is $500, and the new
reservation would normally cost $600 on a prorated basis, you are charged $100.
$600 prorated upfront cost of new reservations - $500 remaining upfront cost of original
reservations = $100 difference
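The calculation above can be expressed as a short sketch. The helper below uses invented names and the hypothetical dollar amounts from this example; the actual quantity and true-up cost come from the exchange quote:

```python
import math

def exchange(old_list_value, new_list_value, old_remaining_upfront,
             new_prorated_upfront):
    """Quantity of new Convertible Reserved Instances received, plus the
    true-up cost charged for Partial or All Upfront reservations
    (floored at $0, per the notes earlier in this section)."""
    quantity = math.ceil(old_list_value / new_list_value)
    true_up = max(0.0, new_prorated_upfront - old_remaining_upfront)
    return quantity, true_up

# $35 list value exchanged into $10-list-value reservations:
# 35 / 10 = 3.5, rounded up to 4 reservations. With $500 remaining
# upfront against a $600 prorated cost, the true-up charge is $100.
qty, true_up = exchange(35, 10, 500.0, 600.0)   # -> (4, 100.0)
```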
For example, you have the following Convertible Reserved Instances in your account:
• You can merge aaaa1111 and bbbb2222 and exchange them for a 1-year Convertible Reserved
Instance. You cannot exchange them for a 3-year Convertible Reserved Instance. The expiration date of
the new Convertible Reserved Instance is 2018-12-31.
• You can merge bbbb2222 and cccc3333 and exchange them for a 3-year Convertible Reserved
Instance. You cannot exchange them for a 1-year Convertible Reserved Instance. The expiration date of
the new Convertible Reserved Instance is 2018-07-31.
• You can merge cccc3333 and dddd4444 and exchange them for a 3-year Convertible Reserved
Instance. You cannot exchange them for a 1-year Convertible Reserved Instance. The expiration date of
the new Convertible Reserved Instance is 2019-12-31.
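The merge rules illustrated by these bullets can be sketched as follows, representing each reservation as a (term, expiration date) pair. The function name and the dates in the example are hypothetical, not the reservations from the bullets above:

```python
from datetime import date

def merge_convertible_ris(reservations):
    """Term and expiration of the Convertible Reserved Instance you can
    exchange merged reservations for: the longest term among them (so
    mixing 1-year and 3-year terms yields a 3-year term) and the
    expiration date that is furthest in the future."""
    term = max(t for t, _ in reservations)
    expiry = max(e for _, e in reservations)
    return term, expiry

# Hypothetical reservations: a 1-year reservation expiring 2021-12-31
# merged with a 3-year reservation expiring 2021-06-30.
term, expiry = merge_convertible_ris([(1, date(2021, 12, 31)),
                                      (3, date(2021, 6, 30))])
# term -> 3, expiry -> date(2021, 12, 31)
```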
In this example, you have a t2.micro Convertible Reserved Instance with four instances in the
reservation. To exchange two t2.micro instances for an m4.xlarge instance:
1. Modify the t2.micro Convertible Reserved Instance by splitting it into two t2.micro Convertible
Reserved Instances with two instances each.
2. Exchange one of the new t2.micro Convertible Reserved Instances for an m4.xlarge Convertible
Reserved Instance.
In this example, you have a t2.large Convertible Reserved Instance. To change it to a smaller
t2.medium instance and an m3.medium instance:
1. Modify the t2.large Convertible Reserved Instance by splitting it into two t2.medium Convertible
Reserved Instances. A single t2.large instance has the same instance size footprint as two
t2.medium instances.
2. Exchange one of the new t2.medium Convertible Reserved Instances for an m3.medium Convertible
Reserved Instance.
For more information, see Support for modifying instance sizes (p. 413) and Submit exchange
requests (p. 422).
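The size-footprint reasoning in these examples can be sketched with the per-size normalization factors used for instance size modifications. The helper names are invented for this sketch, and only a subset of factors is shown:

```python
# Normalization factors for instance sizes, as documented for Reserved
# Instance modifications (subset; e.g. a "large" counts as 4, a "medium" as 2).
FACTORS = {"nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
           "large": 4, "xlarge": 8, "2xlarge": 16}

def footprint(instance_type, count):
    """Normalized size footprint of `count` instances of one type."""
    size = instance_type.split(".")[1]
    return FACTORS[size] * count

def valid_split(original, parts):
    """A modification that splits a reservation is valid only if the
    total footprint is unchanged."""
    return footprint(*original) == sum(footprint(t, n) for t, n in parts)

# One t2.large (footprint 4) splits into two t2.medium (footprint 2 each).
valid_split(("t2.large", 1), [("t2.medium", 2)])   # -> True
```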
You can search for Convertible Reserved Instances offerings and select your new configuration from the
choices provided.
To exchange a Convertible Reserved Instance, first find a new Convertible Reserved Instance that meets
your needs. Then get a quote for the exchange, which includes the number of Reserved Instances you get
from the exchange, and the true-up cost for the exchange.
The Reserved Instances that were exchanged are retired, and the new Reserved Instances are displayed in
the Amazon EC2 console. This process can take a few minutes to propagate.
• For each Region, you can purchase 20 regional (p. 385) Reserved Instances per month.
• Plus, for each Availability Zone, you can purchase an additional 20 zonal (p. 385) Reserved Instances
per month.
For example, in a Region with three Availability Zones, the limit is 80 Reserved Instances per month:
20 regional Reserved Instances for the Region plus 20 zonal Reserved Instances for each of the three
Availability Zones (20 + 20*3 = 80).
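The limit calculation above reduces to a one-line formula (hypothetical helper name):

```python
def ri_purchase_limit(availability_zones):
    """Monthly Reserved Instance purchase limit for one Region:
    20 regional Reserved Instances plus 20 zonal Reserved Instances
    per Availability Zone."""
    return 20 + 20 * availability_zones

ri_purchase_limit(3)   # -> 80 for a Region with three Availability Zones
```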
A regional Reserved Instance applies a discount to a running On-Demand Instance. The default On-
Demand Instance limit is 20. You cannot exceed your running On-Demand Instance limit by purchasing
regional Reserved Instances. For example, if you already have 20 running On-Demand Instances, and you
purchase 20 regional Reserved Instances, the 20 regional Reserved Instances are used to apply a discount
to the 20 running On-Demand Instances. If you purchase more regional Reserved Instances, you will not
be able to launch more instances because you have reached your On-Demand Instance limit.
Before purchasing regional Reserved Instances, make sure your On-Demand Instance limit matches or
exceeds the number of regional Reserved Instances you intend to own. If required, make sure you request
an increase to your On-Demand Instance limit before purchasing more regional Reserved Instances.
A zonal Reserved Instance—a Reserved Instance that is purchased for a specific Availability Zone—
provides capacity reservation as well as a discount. You can exceed your running On-Demand Instance
limit by purchasing zonal Reserved Instances. For example, if you already have 20 running On-Demand
Instances, and you purchase 20 zonal Reserved Instances, you can launch a further 20 On-Demand
Instances that match the specifications of your zonal Reserved Instances, giving you a total of 40 running
instances.
The Amazon EC2 console provides quota information. For more information, see View your current
limits (p. 1680).
Spot Instances
A Spot Instance is an instance that uses spare EC2 capacity that is available for less than the On-Demand
price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can
lower your Amazon EC2 costs significantly. The hourly price for a Spot Instance is called a Spot price. The
Spot price of each instance type in each Availability Zone is set by Amazon EC2, and is adjusted gradually
based on the long-term supply of and demand for Spot Instances. Your Spot Instance runs whenever
capacity is available and the maximum price per hour for your request exceeds the Spot price.
Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if
your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch
jobs, background processing, and optional tasks. For more information, see Amazon EC2 Spot Instances.
Concepts
Before you get started with Spot Instances, you should be familiar with the following concepts:
• Spot capacity pool – A set of unused EC2 instances with the same instance type (for example,
m5.large) and Availability Zone.
• Spot price – The current price of a Spot Instance per hour.
• Spot Instance request – Requests a Spot Instance. The request provides the maximum price per hour
that you are willing to pay for a Spot Instance. If you don't specify a maximum price, the default
maximum price is the On-Demand price. When the maximum price per hour for your request exceeds
the Spot price, Amazon EC2 fulfills your request if capacity is available. A Spot Instance request is
either one-time or persistent. Amazon EC2 automatically resubmits a persistent Spot Instance request
after the Spot Instance associated with the request is terminated.
• EC2 instance rebalance recommendation – Amazon EC2 emits an instance rebalance recommendation
signal to notify you that a Spot Instance is at an elevated risk of interruption. This signal gives you the
opportunity to proactively rebalance your workloads across existing or new Spot Instances without
having to wait for the two-minute Spot Instance interruption notice.
• Spot Instance interruption – Amazon EC2 terminates, stops, or hibernates your Spot Instance when
Amazon EC2 needs the capacity back or the Spot price exceeds the maximum price for your request.
Amazon EC2 provides a Spot Instance interruption notice, which gives the instance a two-minute
warning before it is interrupted.
Launch time
• Spot Instances: Can only be launched immediately if the Spot Instance request is active and capacity is
available.
• On-Demand Instances: Can only be launched immediately if you make a manual launch request and
capacity is available.
Available capacity
• Spot Instances: If capacity is not available, the Spot Instance request continues to automatically make
the launch request until capacity becomes available.
• On-Demand Instances: If capacity is not available when you make a launch request, you get an
insufficient capacity error (ICE).
Hourly price
• Spot Instances: The hourly price for Spot Instances varies based on demand.
• On-Demand Instances: The hourly price for On-Demand Instances is static.
Rebalance recommendation
• Spot Instances: The signal that Amazon EC2 emits for a running Spot Instance when the instance is at
an elevated risk of interruption.
• On-Demand Instances: You determine when an On-Demand Instance is interrupted (stopped,
hibernated, or terminated).
Instance interruption
• Spot Instances: You can stop and start an Amazon EBS-backed Spot Instance. In addition, the Amazon
EC2 Spot service can interrupt (p. 460) an individual Spot Instance if capacity is no longer available, the
Spot price exceeds your maximum price, or demand for Spot Instances increases.
• On-Demand Instances: You determine when an On-Demand Instance is interrupted (stopped,
hibernated, or terminated).
Spot basics
Related services
You can provision Spot Instances directly using Amazon EC2. You can also provision Spot Instances using
other services in AWS. For more information, see the following documentation.
You can create launch templates or configurations with the maximum price that you are willing
to pay, so that Amazon EC2 Auto Scaling can launch Spot Instances. For more information, see
Requesting Spot Instances for fault-tolerant and flexible applications and Auto Scaling groups with
multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide.
Amazon EMR and Spot Instances
There are scenarios where it can be useful to run Spot Instances in an Amazon EMR cluster. For
more information, see Spot Instances and When Should You Use Spot Instances in the Amazon EMR
Management Guide.
AWS CloudFormation templates
AWS CloudFormation enables you to create and manage a collection of AWS resources using
a template in JSON format. AWS CloudFormation templates can include the maximum price
you are willing to pay. For more information, see EC2 Spot Instance Updates - Auto Scaling and
CloudFormation Integration.
AWS SDK for Java
You can use the Java programming language to manage your Spot Instances. For more information,
see Tutorial: Amazon EC2 Spot Instances and Tutorial: Advanced Amazon EC2 Spot Request
Management.
AWS SDK for .NET
You can use the .NET programming environment to manage your Spot Instances. For more
information, see Tutorial: Amazon EC2 Spot Instances.
If you or Amazon EC2 interrupts a running Spot Instance, you are charged for the seconds used or the full
hour, or you receive no charge, depending on the operating system used and who interrupted the Spot
Instance. For more information, see Billing for interrupted Spot Instances (p. 468).
View prices
To view the current (updated every five minutes) lowest Spot price per AWS Region and instance type,
see the Amazon EC2 Spot Instances Pricing page.
To view the Spot price history for the past three months, use the Amazon EC2 console or the
describe-spot-price-history command (AWS CLI). For more information, see Spot Instance pricing
history (p. 431).
We independently map Availability Zones to codes for each AWS account. Therefore, you can get
different results for the same Availability Zone code (for example, us-west-2a) between different
accounts.
View savings
You can view the savings made from using Spot Instances for a single Spot Fleet or for all Spot Instances.
You can view the savings made in the last hour or the last three days, and you can view the average cost
per vCPU hour and per memory (GiB) hour. Savings are estimated and may differ from actual savings
because they do not include the billing adjustments for your usage. For more information about viewing
savings information, see Savings from purchasing Spot Instances (p. 432).
View billing
Your bill provides details about your service usage. For more information, see Viewing your bill in the
AWS Billing and Cost Management User Guide.
Spot Instances are recommended for stateless, fault-tolerant, flexible applications. For example,
Spot Instances work well for big data, containerized workloads, CI/CD, stateless web servers, high
performance computing (HPC), and rendering workloads.
While running, Spot Instances are exactly the same as On-Demand Instances. However, Spot does
not guarantee that you can keep your running instances long enough to finish your workloads. Spot
also does not guarantee that you can get immediate availability of the instances that you are looking
for, or that you can always get the aggregate capacity that you requested. Moreover, Spot Instance
interruptions and capacity can change over time because Spot Instance availability varies based on
supply and demand, and past performance isn’t a guarantee of future results.
Spot Instances are not suitable for workloads that are inflexible, stateful, fault-intolerant, or tightly
coupled between instance nodes. They are also not recommended for workloads that are intolerant of
occasional periods when the target capacity is not completely available. We strongly warn against
using Spot Instances for these workloads or attempting to fail over to On-Demand Instances to handle
interruptions.
Regardless of whether you're an experienced Spot user or new to Spot Instances, if you are currently
experiencing issues with Spot Instance interruptions or availability, we recommend that you follow these
best practices to have the best experience using the Spot service.
An EC2 instance rebalance recommendation is a signal that notifies you when a Spot Instance is
at elevated risk of interruption. The signal gives you the opportunity to proactively manage the Spot
Instance in advance of the two-minute Spot Instance interruption notice. You can decide to rebalance
your workload to new or existing Spot Instances that are not at an elevated risk of interruption. You can
act on this signal by using the Capacity Rebalancing feature in Auto Scaling
groups and Spot Fleet. For more information, see Use proactive capacity rebalancing (p. 429).
A Spot Instance interruption notice is a warning that is issued two minutes before Amazon EC2 interrupts
a Spot Instance. If your workload is "time-flexible," you can configure your Spot Instances to be stopped
or hibernated, instead of being terminated, when they are interrupted. Amazon EC2 automatically stops
or hibernates your Spot Instances on interruption, and automatically resumes the instances when we
have available capacity.
We recommend that you create a rule in Amazon EventBridge that captures the rebalance
recommendations and interruption notifications, and then triggers a checkpoint for the progress of
your workload or gracefully handles the interruption. For more information, see Monitor rebalance
recommendation signals (p. 457). For a detailed example that walks you through how to create and use
event rules, see Taking Advantage of Amazon EC2 Spot Instance Interruption Notices.
For more information, see EC2 instance rebalance recommendations (p. 456) and Spot Instance
interruptions (p. 460).
Depending on your specific needs, you can evaluate which instance types you can be flexible across
to fulfill your compute requirements. If a workload can be vertically scaled, you should include larger
instance types (more vCPUs and memory) in your requests. If you can only scale horizontally, you should
include older generation instance types because they are less in demand from On-Demand customers.
A good rule of thumb is to be flexible across at least 10 instance types for each workload. In addition,
make sure that all Availability Zones are configured for use in your VPC and selected for your workload.
Use EC2 Auto Scaling groups or Spot Fleet to manage your aggregate capacity
Spot enables you to think in terms of aggregate capacity—in units that include vCPUs, memory, storage,
or network throughput—rather than thinking in terms of individual instances. Auto Scaling groups and
Spot Fleet enable you to launch and maintain a target capacity, and to automatically request resources
to replace any that are disrupted or manually terminated. When you configure an Auto Scaling group
or a Spot Fleet, you need only specify the instance types and target capacity based on your application
needs. For more information, see Auto Scaling groups in the Amazon EC2 Auto Scaling User Guide and
Create a Spot Fleet request (p. 854) in this user guide.
In your Auto Scaling groups, we recommend that you use the capacity optimized allocation
strategy because this strategy automatically provisions instances from the most-available
Spot capacity pools. You can also take advantage of the capacity optimized allocation strategy
in Spot Fleet. Because your Spot Instance capacity is sourced from pools with optimal capacity, this
decreases the possibility that your Spot Instances are reclaimed. For more information about allocation
strategies, see Spot Instances in the Amazon EC2 Auto Scaling User Guide and Configure Spot Fleet for
capacity optimization (p. 825) in this user guide.
Capacity Rebalancing complements the capacity optimized allocation strategy (which is designed to
help find the most optimal spare capacity) and the mixed instances policy (which is designed to enhance
availability by deploying instances across multiple instance types running in multiple Availability Zones).
You can launch a Spot Instance using several different services. For more information, see Getting
Started with Amazon EC2 Spot Instances. In this user guide, we describe the following ways to launch a
Spot Instance using EC2:
• You can create a Spot Instance request. For more information, see Create a Spot Instance
request (p. 437).
• You can create an EC2 Fleet, in which you specify the desired number of Spot Instances. Amazon EC2
creates a Spot Instance request on your behalf for every Spot Instance that is specified in the EC2
Fleet. For more information, see Create an EC2 Fleet (p. 812).
• You can create a Spot Fleet request, in which you specify the desired number of Spot Instances.
Amazon EC2 creates a Spot Instance request on your behalf for every Spot Instance that is specified in
the Spot Fleet request. For more information, see Create a Spot Fleet request (p. 854).
The Spot Instance request must include the maximum price that you're willing to pay per hour per
instance. If you don't specify a price, the price defaults to the On-Demand price. The request can include
other constraints such as the instance type and Availability Zone.
Your Spot Instance launches if the maximum price that you're willing to pay exceeds the Spot price, and
if there is available capacity. If the maximum price you're willing to pay is lower than the Spot price, then
your instance does not launch. However, because Amazon EC2 gradually adjusts the Spot price based on
the long-term supply of and demand for Spot Instances, the maximum price you're willing to pay might
eventually exceed the Spot price, in which case your instance will launch.
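The launch condition described above can be sketched as a predicate. This is a simplification with invented names; actual fulfillment also depends on the other constraints in your request, such as instance type and Availability Zone:

```python
def spot_request_fulfilled(max_price, spot_price, capacity_available):
    """Whether a Spot request launches an instance now: the maximum
    price you are willing to pay must not be lower than the current
    Spot price, and capacity must be available. Otherwise the request
    waits; the Spot price adjusts gradually, so the request may be
    fulfilled later."""
    return capacity_available and max_price >= spot_price

spot_request_fulfilled(0.05, 0.03, True)    # -> True
spot_request_fulfilled(0.02, 0.03, True)    # -> False (price below Spot price)
```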
Your Spot Instance runs until you stop or terminate it, or until Amazon EC2 interrupts it (known as a Spot
Instance interruption).
When you use Spot Instances, you must be prepared for interruptions. Amazon EC2 can interrupt
your Spot Instance when the demand for Spot Instances rises, when the supply of Spot Instances
decreases, or when the Spot price exceeds your maximum price. When Amazon EC2 interrupts a Spot
Instance, it provides a Spot Instance interruption notice, which gives the instance a two-minute warning
before Amazon EC2 interrupts it. You can't enable termination protection for Spot Instances. For more
information, see Spot Instance interruptions (p. 460).
You can stop, start, reboot, or terminate an Amazon EBS-backed Spot Instance. The Spot service can
stop, terminate, or hibernate a Spot Instance when it interrupts it.
Contents
• Launch Spot Instances in a launch group (p. 430)
• Launch Spot Instances in an Availability Zone group (p. 430)
• Launch Spot Instances in a VPC (p. 431)
Although this option can be useful, adding this constraint can decrease the chances that your Spot
Instance request is fulfilled and increase the chances that your Spot Instances are terminated. For
example, your launch group includes instances in multiple Availability Zones. If capacity in one of these
Availability Zones decreases and is no longer available, then Amazon EC2 terminates all instances for the
launch group.
If you create another successful Spot Instance request that specifies the same (existing) launch group
as an earlier successful request, then the new instances are added to the launch group. Subsequently, if
an instance in this launch group is terminated, all instances in the launch group are terminated, which
includes instances launched by the first and second requests.
Although this option can be useful, adding this constraint can lower the chances that your Spot Instance
request is fulfilled.
If you specify an Availability Zone group but don't specify an Availability Zone in the Spot Instance
request, the result depends on the network you specified.
Default VPC
Amazon EC2 uses the Availability Zone for the specified subnet. If you don't specify a subnet, it selects
an Availability Zone and its default subnet, but not necessarily the lowest-priced zone. If you deleted the
default subnet for an Availability Zone, then you must specify a different subnet.
Nondefault VPC
Amazon EC2 uses the Availability Zone for the specified subnet.
• You should use the default maximum price (the On-Demand price), or base your maximum price on the
Spot price history of Spot Instances in a VPC.
• [Default VPC] If you want your Spot Instance launched in a specific low-priced Availability Zone, you
must specify the corresponding subnet in your Spot Instance request. If you do not specify a subnet,
Amazon EC2 selects one for you, and the Availability Zone for this subnet might not have the lowest
Spot price.
• [Nondefault VPC] You must specify the subnet for your Spot Instance.
When you request Spot Instances, we recommend that you use the default maximum price (the On-
Demand price). When your request is fulfilled, your Spot Instances launch at the current Spot price,
not exceeding the On-Demand price. If you want to specify a maximum price, we recommend that you
first review the Spot price history. You can view the Spot price history for the last 90 days, filtering by
instance type, operating system, and Availability Zone.
For the current Spot Instance prices, see Amazon EC2 Spot Instances Pricing.
• If you choose Availability Zones, then choose the Instance type, operating system (Platform),
and Date range for which to view the price history.
• If you choose Instance Types, then choose up to five Instance type(s), the Availability Zone,
operating system (Platform), and Date range for which to view the price history.
The following screenshot shows a price comparison for different instance types.
5. Move your pointer over the graph to display the prices at specific times in the selected date range.
The prices are displayed in the information blocks above the graph. The price displayed in the top
row shows the price on a specific date. The price displayed in the second row shows the average
price over the selected date range.
6. To display the price per vCPU, toggle on Display normalized prices. To display the price for the
instance type, toggle off Display normalized prices.
You can use one of the following commands. For more information, see Access Amazon EC2 (p. 3).
The following screenshot from the Savings section shows the Spot usage and savings information for a
Spot Fleet.
• Spot Instances – The number of Spot Instances launched and terminated by the Spot Fleet. When
viewing the savings summary, the number represents all your running Spot Instances.
• vCPU-hours – The number of vCPU hours used across all the Spot Instances for the selected time
frame.
• Mem(GiB)-hours – The number of GiB hours used across all the Spot Instances for the selected time
frame.
• On-Demand total – The total amount you would've paid for the selected time frame had you launched
these instances as On-Demand Instances.
• Spot total – The total amount to pay for the selected time frame.
• Savings – The percentage that you are saving by not paying the On-Demand price.
• Average cost per vCPU-hour – The average hourly cost of using the vCPUs across all the Spot
Instances for the selected time frame, calculated as follows: Average cost per vCPU-hour = Spot total
/ vCPU-hours.
• Average cost per mem(GiB)-hour – The average hourly cost of using the GiBs across all the Spot
Instances for the selected time frame, calculated as follows: Average cost per mem(GiB)-hour = Spot
total / Mem(GiB)-hours.
• Details table – The different instance types (the number of instances per instance type is in
parentheses) that comprise the Spot Fleet. When viewing the savings summary, these comprise all
your running Spot Instances.
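The calculated fields above can be sketched as follows; the function name is hypothetical, and the console computes these values for you:

```python
def savings_metrics(on_demand_total, spot_total, vcpu_hours, mem_gib_hours):
    """Estimated savings metrics as defined in the list above: the
    savings percentage, and the average cost per vCPU-hour and per
    mem(GiB)-hour (Spot total divided by the hours used)."""
    return {
        "savings_pct": 100 * (1 - spot_total / on_demand_total),
        "avg_cost_per_vcpu_hour": spot_total / vcpu_hours,
        "avg_cost_per_mem_gib_hour": spot_total / mem_gib_hours,
    }

m = savings_metrics(on_demand_total=100.0, spot_total=30.0,
                    vcpu_hours=200, mem_gib_hours=800)
# m["savings_pct"] -> 70.0; m["avg_cost_per_vcpu_hour"] -> 0.15
```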
Savings information can only be viewed using the Amazon EC2 console.
Alternatively, select the check box next to the Spot Fleet request ID and choose the Savings tab.
4. By default, the page displays usage and savings information for the last three days. You can choose
the last hour or the last three days. For Spot Fleets that were launched less than an hour ago, the page
shows the estimated savings for the hour.
To view the savings information for all running Spot Instances (console)
The following illustration shows how Spot Instance requests work. Notice that the request type (one-
time or persistent) determines whether the request is opened again when Amazon EC2 interrupts a Spot
Instance or if you stop a Spot Instance. If the request is persistent, the request is opened again after your
Spot Instance is interrupted. If the request is persistent and you stop your Spot Instance, the request only
opens after you start your Spot Instance.
Contents
• Spot Instance request states (p. 434)
• Define a duration for your Spot Instances (p. 435)
• Specify a tenancy for your Spot Instances (p. 435)
• Service-linked role for Spot Instance requests (p. 436)
• Create a Spot Instance request (p. 437)
• Find running Spot Instances (p. 440)
• Tag Spot Instance requests (p. 441)
• Cancel a Spot Instance request (p. 446)
• Stop a Spot Instance (p. 446)
• Start a Spot Instance (p. 447)
• Terminate a Spot Instance (p. 448)
• Spot Instance request example launch specifications (p. 449)
The following illustration represents the transitions between the request states. Notice that the
transitions depend on the request type (one-time or persistent).
A one-time Spot Instance request remains active until Amazon EC2 launches the Spot Instance, the
request expires, or you cancel the request. If the Spot price exceeds your maximum price or capacity is
not available, your Spot Instance is terminated and the Spot Instance request is closed.
A persistent Spot Instance request remains active until it expires or you cancel it, even if the request is
fulfilled. If the Spot price exceeds your maximum price or capacity is not available, your Spot Instance
is interrupted. After your instance is interrupted, when your maximum price exceeds the Spot price or
capacity becomes available again, the Spot Instance is started if stopped or resumed if hibernated. You
can stop a Spot Instance and start it again if capacity is available and your maximum price exceeds the
current Spot price. If the Spot Instance is terminated (irrespective of whether the Spot Instance is in a
stopped or running state), the Spot Instance request is opened again and Amazon EC2 launches a new
Spot Instance. For more information, see Stop a Spot Instance (p. 446), Start a Spot Instance (p. 447),
and Terminate a Spot Instance (p. 448).
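The lifecycle described above can be sketched as a simplified transition function. The event and state names here are invented for this sketch and are not the full set of status codes; see Spot request status (p. 451) for the actual statuses:

```python
def next_state(request_type, event):
    """Simplified sketch of the request transitions described above."""
    if event in ("expired", "cancelled"):
        return "cancelled"
    if request_type == "one-time":
        # Interruption or termination closes a one-time request for good.
        return "closed"
    # A persistent request reopens after an interruption or termination
    # (Amazon EC2 launches a replacement), but a user-initiated stop holds
    # the request until you start the instance again.
    return "disabled" if event == "user-stopped" else "open"

next_state("one-time", "interrupted")    # -> "closed"
next_state("persistent", "interrupted")  # -> "open"
```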
You can track the status of your Spot Instance requests, as well as the status of the Spot Instances
launched, through the Spot request status. For more information, see Spot request status (p. 451).
• Specify a tenancy of dedicated when you create the Spot Instance request. For more information,
see Create a Spot Instance request (p. 437).
• Request a Spot Instance in a VPC with an instance tenancy of dedicated. For more information, see
Create a VPC with an instance tenancy of dedicated (p. 519). You cannot request a Spot Instance with
a tenancy of default if you request it in a VPC with an instance tenancy of dedicated.
All instance families support Dedicated Spot Instances except T instances. For each supported instance
family, only the largest instance size or metal size supports Dedicated Spot Instances.
Amazon EC2 uses the service-linked role named AWSServiceRoleForEC2Spot to launch and manage
Spot Instances on your behalf.
Under most circumstances, you don't need to manually create a service-linked role. Amazon EC2 creates
the AWSServiceRoleForEC2Spot service-linked role the first time you request a Spot Instance using the
console.
If you had an active Spot Instance request before October 2017, when Amazon EC2 began supporting
this service-linked role, Amazon EC2 created the AWSServiceRoleForEC2Spot role in your AWS account.
For more information, see A New Role Appeared in My Account in the IAM User Guide.
If you use the AWS CLI or an API to request a Spot Instance, you must first ensure that this role exists.
If you no longer need to use Spot Instances, we recommend that you delete the
AWSServiceRoleForEC2Spot role. After this role is deleted from your account, Amazon EC2 will create
the role again if you request Spot Instances.
Grant access to customer managed keys for use with encrypted AMIs and EBS snapshots
If you specify an encrypted AMI (p. 189) or an encrypted Amazon EBS snapshot (p. 1536) for
your Spot Instances and you use a customer managed key for encryption, you must grant the
AWSServiceRoleForEC2Spot role permission to use the customer managed key so that Amazon EC2 can
launch Spot Instances on your behalf. To do this, you must add a grant to the customer managed key, as
shown in the following procedure.
When providing permissions, grants are an alternative to key policies. For more information, see Using
Grants and Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.
To grant the AWSServiceRoleForEC2Spot role permissions to use the customer managed key
• Use the create-grant command to add a grant to the customer managed key and to specify the
principal (the AWSServiceRoleForEC2Spot service-linked role) that is given permission to perform
the operations that the grant permits. The customer managed key is specified by the key-id
parameter and the ARN of the customer managed key. The principal is specified by the grantee-
principal parameter and the ARN of the AWSServiceRoleForEC2Spot service-linked role.
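A create-grant invocation of the following shape can add such a grant. The key ARN, account ID, and operations list below are placeholders, not values from this guide; the operations your workflow needs may differ:

```shell
# Add a grant on the customer managed key so that the Spot service-linked role
# can use the key. All ARNs are placeholders.
aws kms create-grant \
    --key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab \
    --grantee-principal arn:aws:iam::111122223333:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot \
    --operations Decrypt
```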
• To request a Spot Instance using the console, use the launch instance wizard. For more information,
see To create a Spot Instance request (console) (p. 438).
• To request a Spot Instance using the CLI, use the request-spot-instances command or the run-instances
command. For more information, see To create a Spot Instance request using request-spot-instances
(CLI) and To create a Spot Instance request using run-instances (CLI).
After you've submitted your Spot Instance request, you can't change the parameters of the request. This
means that you can't make changes to the maximum price that you're willing to pay.
If you request multiple Spot Instances at one time, Amazon EC2 creates separate Spot Instance requests
so that you can track the status of each request separately. For more information about tracking Spot
Instance requests, see Spot request status (p. 451).
To launch a fleet that includes Spot Instances and On-Demand Instances, see Create a Spot Fleet
request (p. 854).
Note
You can't launch a Spot Instance and an On-Demand Instance in the same call using the launch
instance wizard or the run-instances command.
Prerequisites
Before you begin, decide on your maximum price, how many Spot Instances you'd like, and what instance
type to use. To review Spot price trends, see Spot Instance pricing history (p. 431).
For more information about configuring your Spot Instance, see Step 3: Configure Instance
Details (p. 567).
7. The AMI you selected includes one or more volumes of storage, including the root device volume. On
the Add Storage page, you can specify additional volumes to attach to the instance by choosing Add
New Volume. For more information, see Step 4: Add Storage (p. 570).
8. On the Add Tags page, specify tags (p. 1666) by providing key and value combinations. For more
information, see Step 5: Add Tags (p. 570).
9. On the Configure Security Group page, use a security group to define firewall rules for your
instance. These rules specify which incoming network traffic is delivered to your instance. All other
traffic is ignored. (For more information about security groups, see Amazon EC2 security groups for
Linux instances (p. 1303).) Select or create a security group, and then choose Review and Launch.
For more information, see Step 6: Configure Security Group (p. 570).
10. On the Review Instance Launch page, check the details of your instance, and make any necessary
changes by choosing the appropriate Edit link. When you are ready, choose Launch. For more
information, see Step 7: Review Instance Launch and Select Key Pair (p. 571).
11. In the Select an existing key pair or create a new key pair dialog box, you can choose an existing
key pair, or create a new one. For example, choose Choose an existing key pair, then select the key
pair that you created when getting set up. For more information, see Amazon EC2 key pairs and
Linux instances (p. 1288).
Important
If you choose the Proceed without key pair option, you won't be able to connect to the
instance unless you choose an AMI that is configured to allow users another way to log in.
12. To launch your instance, select the acknowledgment check box, then choose Launch Instances.
If the instance fails to launch or the state immediately goes to terminated instead of running, see
Troubleshoot instance launch issues (p. 1683).
For example launch specification files to use with these commands, see Spot Instance request example
launch specifications (p. 449). If you download a launch specification file from the console, you must
use the request-spot-fleet command instead (the console specifies a Spot Instance request using a Spot
Fleet).
Use the run-instances command and specify the Spot Instance options in the --instance-market-
options parameter.
The following is the data structure to specify in the JSON file for --instance-market-options. You
can also specify ValidUntil and InstanceInterruptionBehavior. If you do not specify a field in
the data structure, the default value is used. This example creates a one-time request and specifies
0.02 as the maximum price you're willing to pay for the Spot Instance.
{
    "MarketType": "spot",
    "SpotOptions": {
        "MaxPrice": "0.02",
        "SpotInstanceType": "one-time"
    }
}
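A run-instances call that references this data structure might look like the following sketch; the AMI ID, instance type, count, and file name are placeholders:

```shell
# Request a Spot Instance via run-instances; spot-options.json contains the
# --instance-market-options data structure shown above.
aws ec2 run-instances \
    --image-id ami-1a2b3c4d \
    --count 1 \
    --instance-type t2.micro \
    --instance-market-options file://spot-options.json
```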
Alternatively, in the navigation pane, choose Instances. In the top right corner, choose the settings
icon, and then under Attribute columns, select Instance lifecycle. For each instance, Instance
lifecycle is either normal, spot, or scheduled.
To enumerate your Spot Instances, use the describe-spot-instance-requests command with the --query
option.
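For example, a command of the following shape produces the output shown below; the --query expression is an assumption inferred from that output:

```shell
# List the instance IDs of fulfilled Spot Instance requests.
aws ec2 describe-spot-instance-requests \
    --query "SpotInstanceRequests[*].{ID:InstanceId}" \
    --output json
```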
[
    {
        "ID": "i-1234567890abcdef0"
    },
    {
        "ID": "i-0598c7d356eba48d7"
    }
]
Alternatively, you can enumerate your Spot Instances using the describe-instances command with the --
filters option.
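For example, the instance-lifecycle filter restricts the output to Spot Instances:

```shell
# Describe only instances whose lifecycle is "spot".
aws ec2 describe-instances \
    --filters "Name=instance-lifecycle,Values=spot"
```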
To describe a single Spot Instance, use the describe-spot-instance-requests command with the
--spot-instance-request-ids option.
When you tag a Spot Instance request, the instances and volumes that are launched by the Spot Instance
request are not automatically tagged. You need to explicitly tag the instances and volumes launched
by the Spot Instance request. You can assign a tag to a Spot Instance and volumes during launch, or
afterward.
For more information about how tags work, see Tag your Amazon EC2 resources (p. 1666).
Contents
• Prerequisites (p. 441)
• Tag a new Spot Instance request (p. 443)
• Tag an existing Spot Instance request (p. 443)
• View Spot Instance request tags (p. 444)
Prerequisites
Grant the IAM user the permission to tag resources. For more information about IAM policies and
example policies, see Example: Tag resources (p. 1258).
The IAM policy you create is determined by which method you use for creating a Spot Instance request.
• If you use the launch instance wizard or run-instances to request Spot Instances, see To grant an
IAM user the permission to tag resources when using the launch instance wizard or run-instances.
• If you use the request-spot-instances command to request Spot Instances, see To grant an IAM
user the permission to tag resources when using request-spot-instances.
To grant an IAM user the permission to tag resources when using the launch instance wizard or run-
instances
• The ec2:RunInstances action. This grants the IAM user permission to launch an instance.
• For Resource, specify spot-instances-request. This allows users to create Spot Instance
requests, which request Spot Instances.
• The ec2:CreateTags action. This grants the IAM user permission to create tags.
• For Resource, specify *. This allows users to tag all resources that are created during instance launch.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLaunchInstances",
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1::image/*",
                "arn:aws:ec2:us-east-1:*:subnet/*",
                "arn:aws:ec2:us-east-1:*:network-interface/*",
                "arn:aws:ec2:us-east-1:*:security-group/*",
                "arn:aws:ec2:us-east-1:*:key-pair/*",
                "arn:aws:ec2:us-east-1:*:volume/*",
                "arn:aws:ec2:us-east-1:*:instance/*",
                "arn:aws:ec2:us-east-1:*:spot-instances-request/*"
            ]
        },
        {
            "Sid": "TagSpotInstanceRequests",
            "Effect": "Allow",
            "Action": "ec2:CreateTags",
            "Resource": "*"
        }
    ]
}
Note
When you use the RunInstances action to create Spot Instance requests and tag the Spot
Instance requests on create, you need to be aware of how Amazon EC2 evaluates the spot-
instances-request resource in the RunInstances statement.
The spot-instances-request resource is evaluated in the IAM policy as follows:
• If you don't tag a Spot Instance request on create, Amazon EC2 does not evaluate the spot-
instances-request resource in the RunInstances statement.
• If you tag a Spot Instance request on create, Amazon EC2 evaluates the spot-instances-
request resource in the RunInstances statement.
Therefore, for the spot-instances-request resource, the following rules apply to the IAM
policy:
• If you use RunInstances to create a Spot Instance request and you don't intend to tag the Spot
Instance request on create, you don’t need to explicitly allow the spot-instances-request
resource; the call will succeed.
• If you use RunInstances to create a Spot Instance request and intend to tag the Spot Instance
request on create, you must include the spot-instances-request resource in the
RunInstances allow statement, otherwise the call will fail.
• If you use RunInstances to create a Spot Instance request and intend to tag the Spot Instance
request on create, you must specify the spot-instances-request resource or include a *
wildcard in the CreateTags allow statement, otherwise the call will fail.
For example IAM policies, including policies that are not supported for Spot Instance requests,
see Work with Spot Instances (p. 1252).
To grant an IAM user the permission to tag resources when using request-spot-instances
• The ec2:RequestSpotInstances action. This grants the IAM user permission to create a Spot
Instance request.
• The ec2:CreateTags action. This grants the IAM user permission to create tags.
• For Resource, specify spot-instances-request. This allows users to tag only the Spot Instance
request.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TagSpotInstanceRequest",
            "Effect": "Allow",
            "Action": [
                "ec2:RequestSpotInstances",
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:us-east-1:111122223333:spot-instances-request/*"
        }
    ]
}
For each tag, you can tag the Spot Instance request, the Spot Instances, and the volumes with
the same tag. To tag all three, ensure that Instances, Volumes, and Spot Instance Requests are
selected. To tag only one or two, ensure that the resources you want to tag are selected, and the
other resources are cleared.
3. Complete the required fields to create a Spot Instance request, and then choose Launch. For more
information, see Create a Spot Instance request (p. 437).
To tag a Spot Instance request when you create it, configure the request as follows:
• Specify the tags for the Spot Instance request using the --tag-specifications parameter.
• For ResourceType, specify spot-instances-request. If you specify another value, the Spot
Instance request will fail.
• For Tags, specify the key-value pair. You can specify more than one key-value pair.
In the following example, the Spot Instance request is tagged with two tags: Key=Environment and
Value=Production, and Key=Cost-Center and Value=123.
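A request of that shape might look like the following sketch; the launch specification file name and instance count are placeholders:

```shell
# Tag the Spot Instance request at creation time with two tags.
aws ec2 request-spot-instances \
    --instance-count 1 \
    --launch-specification file://specification.json \
    --tag-specifications 'ResourceType=spot-instances-request,Tags=[{Key=Environment,Value=Production},{Key=Cost-Center,Value=123}]'
```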
After you have created a Spot Instance request, you can add tags to the Spot Instance request using the
console.
After your Spot Instance request has launched your Spot Instance, you can add tags to the instance using
the console. For more information, see Add and delete tags on an individual resource (p. 1673).
To tag an existing Spot Instance request or Spot Instance using the AWS CLI
Use the create-tags command to tag existing resources. In the following example, the existing Spot
Instance request and the Spot Instance are tagged with Key=purpose and Value=test.
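For example (the request and instance IDs are placeholders):

```shell
# Apply the same tag to an existing Spot Instance request and its Spot Instance.
aws ec2 create-tags \
    --resources sir-11112222-3333-4444-5555-66666EXAMPLE i-1234567890abcdef0 \
    --tags Key=purpose,Value=test
```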
Use the describe-tags command to view the tags for the specified resource. In the following example,
you describe the tags for the specified request.
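For example, filtering by the request ID that appears in the output below:

```shell
# View the tags attached to a specific Spot Instance request.
aws ec2 describe-tags \
    --filters "Name=resource-id,Values=sir-11112222-3333-4444-5555-66666EXAMPLE"
```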
{
    "Tags": [
        {
            "Key": "Environment",
            "ResourceId": "sir-11112222-3333-4444-5555-66666EXAMPLE",
            "ResourceType": "spot-instances-request",
            "Value": "Production"
        },
        {
            "Key": "Another key",
            "ResourceId": "sir-11112222-3333-4444-5555-66666EXAMPLE",
            "ResourceType": "spot-instances-request",
            "Value": "Another value"
        }
    ]
}
You can also view the tags of a Spot Instance request by describing the Spot Instance request.
Use the describe-spot-instance-requests command to view the configuration of the specified Spot
Instance request, which includes any tags that were specified for the request.
aws ec2 describe-spot-instance-requests --spot-instance-request-ids sir-11112222-3333-4444-5555-66666EXAMPLE
{
    "SpotInstanceRequests": [
        {
            "CreateTime": "2020-06-24T14:22:11+00:00",
            "InstanceId": "i-1234567890EXAMPLE",
            "LaunchSpecification": {
                "SecurityGroups": [
                    {
                        "GroupName": "launch-wizard-6",
                        "GroupId": "sg-1234567890EXAMPLE"
                    }
                ],
                "BlockDeviceMappings": [
                    {
                        "DeviceName": "/dev/xvda",
                        "Ebs": {
                            "DeleteOnTermination": true,
                            "VolumeSize": 8,
                            "VolumeType": "gp2"
                        }
                    }
                ],
                "ImageId": "ami-1234567890EXAMPLE",
                "InstanceType": "t2.micro",
                "KeyName": "my-key-pair",
                "NetworkInterfaces": [
                    {
                        "DeleteOnTermination": true,
                        "DeviceIndex": 0,
                        "SubnetId": "subnet-11122233"
                    }
                ],
                "Placement": {
                    "AvailabilityZone": "eu-west-1c",
                    "Tenancy": "default"
                },
                "Monitoring": {
                    "Enabled": false
                }
            },
            "LaunchedAvailabilityZone": "eu-west-1c",
            "ProductDescription": "Linux/UNIX",
            "SpotInstanceRequestId": "sir-1234567890EXAMPLE",
            "SpotPrice": "0.012600",
            "State": "active",
            "Status": {
                "Code": "fulfilled",
                "Message": "Your spot request is fulfilled.",
                "UpdateTime": "2020-06-25T18:30:21+00:00"
            },
            "Tags": [
                {
                    "Key": "Environment",
                    "Value": "Production"
                },
                {
                    "Key": "Another key",
                    "Value": "Another value"
                }
            ],
            "Type": "one-time",
            "InstanceInterruptionBehavior": "terminate"
        }
    ]
}
• Your Spot Instance request is open when your request has not yet been fulfilled and no instances have
been launched.
• Your Spot Instance request is active when your request has been fulfilled and Spot Instances have
launched as a result.
• Your Spot Instance request is disabled when you stop your Spot Instance.
If your Spot Instance request is active and has an associated running Spot Instance, canceling the
request does not terminate the instance. For more information about terminating a Spot Instance, see
Terminate a Spot Instance (p. 448).
• Use the cancel-spot-instance-requests command to cancel the specified Spot Instance request.
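For example (the request ID is a placeholder):

```shell
# Cancel a Spot Instance request. A running Spot Instance associated with the
# request is not terminated by this call.
aws ec2 cancel-spot-instance-requests \
    --spot-instance-request-ids sir-11112222-3333-4444-5555-66666EXAMPLE
```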
Limitations
• You can only stop a Spot Instance if the Spot Instance was launched from a persistent Spot
Instance request.
• You can't stop a Spot Instance if the associated Spot Instance request is cancelled. When the Spot
Instance request is cancelled, you can only terminate the Spot Instance.
• You can't stop a Spot Instance if it is part of a fleet or launch group, or Availability Zone group.
AWS CLI
• Use the stop-instances command to manually stop one or more Spot Instances.
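For example (the instance ID is a placeholder):

```shell
# Stop a running Spot Instance that was launched from a persistent request.
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
```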
Prerequisites
Limitations
• You can't start a Spot Instance if it is part of a fleet or launch group, or Availability Zone group.
AWS CLI
• Use the start-instances command to manually start one or more Spot Instances.
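For example (the instance ID is a placeholder):

```shell
# Start a stopped Spot Instance if capacity is available and your maximum
# price exceeds the current Spot price.
aws ec2 start-instances --instance-ids i-1234567890abcdef0
```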
If you cancel an active Spot Instance request that has a running Spot Instance, the running Spot
Instance is not automatically terminated; you must manually terminate the Spot Instance.
If you cancel a disabled Spot Instance request that has a stopped Spot Instance, the stopped Spot
Instance is automatically terminated by the Amazon EC2 Spot service. There might be a short lag
between when you cancel the Spot Instance request and when the Spot service terminates the Spot
Instance.
For information about canceling a Spot Instance request, see Cancel a Spot Instance request (p. 446).
New console
1. Before you terminate an instance, verify that you won't lose any data by checking that your
Amazon EBS volumes won't be deleted on termination and that you've copied any data that you
need from your instance store volumes to persistent storage, such as Amazon EBS or Amazon
S3.
2. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
3. In the navigation pane, choose Instances.
4. To confirm that the instance is a Spot Instance, check that spot appears in the Instance lifecycle
column.
5. Select the instance, and choose Actions, Instance state, Terminate instance.
6. Choose Terminate when prompted for confirmation.
Old console
1. Before you terminate an instance, verify that you won't lose any data by checking that your
Amazon EBS volumes won't be deleted on termination and that you've copied any data that you
need from your instance store volumes to persistent storage, such as Amazon EBS or Amazon
S3.
2. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
3. In the navigation pane, choose Instances.
4. To confirm that the instance is a Spot Instance, check that spot appears in the Lifecycle column.
5. Select the instance, and choose Actions, Instance State, Terminate.
6. Choose Yes, Terminate when prompted for confirmation.
AWS CLI
• Use the terminate-instances command to manually terminate Spot Instances.
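A terminate call for a Spot Instance might look like the following; the instance ID is a placeholder:

```shell
# Terminate a Spot Instance. If its request is persistent and not cancelled,
# the request reopens and Amazon EC2 can launch a new Spot Instance.
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
```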
The following example does not include an Availability Zone or subnet. Amazon EC2 selects an
Availability Zone for you. Amazon EC2 launches the instances in the default subnet of the selected
Availability Zone.
{
    "ImageId": "ami-1a2b3c4d",
    "KeyName": "my-key-pair",
    "SecurityGroupIds": [ "sg-1a2b3c4d" ],
    "InstanceType": "m3.medium",
    "IamInstanceProfile": {
        "Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
    }
}
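A launch specification file like the one above is passed to the request-spot-instances command; the instance count, request type, and file name below are placeholders:

```shell
# Create a one-time Spot Instance request from a launch specification file.
aws ec2 request-spot-instances \
    --instance-count 1 \
    --type "one-time" \
    --launch-specification file://specification.json
```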
The following example includes an Availability Zone. Amazon EC2 launches the instances in the default
subnet of the specified Availability Zone.
{
    "ImageId": "ami-1a2b3c4d",
    "KeyName": "my-key-pair",
    "SecurityGroupIds": [ "sg-1a2b3c4d" ],
    "InstanceType": "m3.medium",
    "Placement": {
        "AvailabilityZone": "us-west-2a"
    },
    "IamInstanceProfile": {
        "Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
    }
}
The following example includes a subnet. Amazon EC2 launches the instances in the specified subnet. If
the VPC is a nondefault VPC, the instance does not receive a public IPv4 address by default.
{
    "ImageId": "ami-1a2b3c4d",
    "SecurityGroupIds": [ "sg-1a2b3c4d" ],
    "InstanceType": "m3.medium",
    "SubnetId": "subnet-1a2b3c4d",
    "IamInstanceProfile": {
        "Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
    }
}
The following example specifies a network interface so that the instance launched in the specified
subnet receives a public IPv4 address.
{
    "ImageId": "ami-1a2b3c4d",
    "KeyName": "my-key-pair",
    "InstanceType": "m3.medium",
    "NetworkInterfaces": [
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-1a2b3c4d",
            "Groups": [ "sg-1a2b3c4d" ],
            "AssociatePublicIpAddress": true
        }
    ],
    "IamInstanceProfile": {
        "Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
    }
}
The following example requests a Spot Instance with a tenancy of dedicated. A Dedicated Spot Instance
must be launched in a VPC.
{
    "ImageId": "ami-1a2b3c4d",
    "KeyName": "my-key-pair",
    "SecurityGroupIds": [ "sg-1a2b3c4d" ],
    "InstanceType": "c3.8xlarge",
    "SubnetId": "subnet-1a2b3c4d",
    "Placement": {
        "Tenancy": "dedicated"
    }
}
At each step of the process—also called the Spot request lifecycle—specific events determine successive
request states.
Contents
• Lifecycle of a Spot request (p. 451)
• Get request status information (p. 454)
• Spot request status codes (p. 455)
Pending evaluation
As soon as you create a Spot Instance request, it goes into the pending-evaluation state unless one
or more request parameters are not valid (bad-parameters).
Holding
If one or more request constraints are valid but can't be met yet, or if there is not enough capacity, the
request goes into a holding state waiting for the constraints to be met. The request options affect the
likelihood of the request being fulfilled. For example, if you specify a maximum price below the current
Spot price, your request stays in a holding state until the Spot price goes below your maximum price.
If you specify an Availability Zone group, the request stays in a holding state until the Availability Zone
constraint is met.
In the event of an outage of one of the Availability Zones, there is a chance that the spare EC2 capacity
available for Spot Instance requests in other Availability Zones can be affected.
Pending evaluation/fulfillment-terminal
Your Spot Instance request can go to a terminal state if you create a request that is valid only during
a specific time period and this time period expires before your request reaches the pending fulfillment
phase. It might also happen if you cancel the request, or if a system error occurs.
Pending fulfillment
When the constraints you specified (if any) are met and your maximum price is equal to or higher than
the current Spot price, your Spot request goes into the pending-fulfillment state.
At this point, Amazon EC2 is getting ready to provision the instances that you requested. If the process
stops at this point, it is likely to be because it was canceled by the user before a Spot Instance was
launched. It might also be because an unexpected system error occurred.
Fulfilled
When all the specifications for your Spot Instances are met, your Spot request is fulfilled. Amazon
EC2 launches the Spot Instances, which can take a few minutes. If a Spot Instance is hibernated or
stopped when interrupted, it remains in this state until the request can be fulfilled again or the request is
canceled.
If you stop a Spot Instance, your Spot request goes into the marked-for-stop or instance-
stopped-by-user state until the Spot Instance can be started again or the request is cancelled.
* A Spot Instance goes into the instance-stopped-by-user state if you stop the instance or run the
shutdown command from the instance. After you've stopped the instance, you can start it again. On
restart, the Spot Instance request returns to the pending-evaluation state and then Amazon EC2
launches a new Spot Instance when the constraints are met.
** The Spot request state is disabled if you stop the Spot Instance but do not cancel the request. The
request state is cancelled if your Spot Instance is stopped and the request expires.
Fulfilled-terminal
Your Spot Instances continue to run as long as your maximum price is at or above the Spot price, there
is available capacity for your instance type, and you don't terminate the instance. If a change in the
Spot price or available capacity requires Amazon EC2 to terminate your Spot Instances, the Spot request
goes into a terminal state. A request also goes into the terminal state if you cancel the Spot request or
terminate the Spot Instances.
* The request state is closed if you terminate the instance but do not cancel the request. The request
state is cancelled if you terminate the instance and cancel the request. Even if you terminate a Spot
Instance before you cancel its request, there might be a delay before Amazon EC2 detects that your Spot
Instance was terminated. In this case, the request state can either be closed or cancelled.
Persistent requests
When your Spot Instances are terminated (either by you or Amazon EC2), if the Spot request is a
persistent request, it returns to the pending-evaluation state and then Amazon EC2 can launch a
new Spot Instance when the constraints are met.
You can use one of the following commands to get Spot request status information. For more information
about these command line interfaces, see Access Amazon EC2 (p. 3).
• describe-spot-instance-requests (AWS CLI)
• Get-EC2SpotInstanceRequest (AWS Tools for Windows PowerShell)
az-group-constraint
Amazon EC2 cannot launch all the instances you requested in the same Availability Zone.
bad-parameters
One or more parameters for your Spot request are not valid (for example, the AMI you specified does
not exist). The status message indicates which parameter is not valid.
canceled-before-fulfillment
The user canceled the Spot request before it was fulfilled.
capacity-not-available
There is not enough capacity available for the instances that you requested.
constraint-not-fulfillable
The Spot request can't be fulfilled because one or more constraints are not valid (for example, the
Availability Zone does not exist). The status message indicates which constraint is not valid.
fulfilled
The Spot request is active, and Amazon EC2 is launching your Spot Instances.
instance-stopped-by-price
Your instance was stopped because the Spot price exceeded your maximum price.
instance-stopped-by-user
Your instance was stopped because a user stopped the instance or ran the shutdown command from
the instance.
instance-stopped-no-capacity
Your instance was stopped due to a lack of capacity.
instance-terminated-by-price
Your instance was terminated because the Spot price exceeded your maximum price. If your request
is persistent, the process restarts, so your request is pending evaluation.
instance-terminated-by-schedule
Your Spot Instance was terminated at the end of its scheduled duration.
instance-terminated-by-service
Your instance was terminated while it was stopped.
instance-terminated-by-user
You terminated a Spot Instance that had been fulfilled, so the request state is closed (unless it's a
persistent request) and the instance state is terminated.
instance-terminated-launch-group-constraint
One or more of the instances in your launch group was terminated, so the launch group constraint is
no longer fulfilled.
instance-terminated-no-capacity
Your instance was terminated due to a lack of capacity.
launch-group-constraint
Amazon EC2 cannot launch all the instances that you requested at the same time. All instances in a
launch group are started and terminated together.
limit-exceeded
The limit on the number of EBS volumes or total volume storage was exceeded. For more
information about these limits and how to request an increase, see Amazon EBS Limits in the
Amazon Web Services General Reference.
marked-for-stop
The Spot Instance is marked for stopping.
pending-evaluation
After you make a Spot Instance request, it goes into the pending-evaluation state while the
system evaluates the parameters of your request.
pending-fulfillment
Amazon EC2 is trying to provision your Spot Instances.
placement-group-constraint
The Spot request can't be fulfilled yet because a Spot Instance can't be added to the placement
group at this time.
price-too-low
The request can't be fulfilled yet because your maximum price is below the Spot price. In this case,
no instance is launched and your request remains open.
request-canceled-and-instance-running
You canceled the Spot request while the Spot Instances are still running. The request is cancelled,
but the instances remain running.
schedule-expired
The Spot request expired because it was not fulfilled before the specified date.
system-error
There was an unexpected system error. If this is a recurring issue, please contact AWS Support for
assistance.
interruption notice (p. 465), giving you the opportunity to proactively manage the Spot Instance. You
can decide to rebalance your workload to new or existing Spot Instances that are not at an elevated risk
of interruption.
It is not always possible for Amazon EC2 to send the rebalance recommendation signal before the two-
minute Spot Instance interruption notice. Therefore, the rebalance recommendation signal can arrive
along with the two-minute interruption notice.
Rebalance recommendations are made available as a CloudWatch event and as an item in the instance
metadata (p. 710) on the Spot Instance. Events are emitted on a best effort basis.
Note
Rebalance recommendations are only supported for Spot Instances that are launched after
November 5, 2020 00:00 UTC.
Topics
• Rebalance actions you can take (p. 457)
• Monitor rebalance recommendation signals (p. 457)
• Services that use the rebalance recommendation signal (p. 459)
Graceful shutdown
When you receive the rebalance recommendation signal for a Spot Instance, you can start your
instance shutdown procedures, which might include ensuring that processes are completed before
stopping them. For example, you can upload system or application logs to Amazon Simple Storage
Service (Amazon S3), you can shut down Amazon SQS workers, or you can complete deregistration
from the Domain Name System (DNS). You can also save your work in external storage and resume it
at a later time.
Prevent new work from being scheduled
When you receive the rebalance recommendation signal for a Spot Instance, you can prevent new
work from being scheduled on the instance, while continuing to use the instance until the scheduled
work is completed.
Proactively launch new replacement instances
You can configure Auto Scaling groups, EC2 Fleet, or Spot Fleet to automatically launch replacement
Spot Instances when a rebalance recommendation signal is emitted. For more information, see
Amazon EC2 Auto Scaling Capacity Rebalancing in the Amazon EC2 Auto Scaling User Guide, and
Capacity Rebalancing (p. 800) for EC2 Fleet and Capacity Rebalancing (p. 843) for Spot Fleet in
this user guide.
{
    "version": "0",
    "id": "12345678-1234-1234-1234-123456789012",
    "detail-type": "EC2 Instance Rebalance Recommendation",
    "source": "aws.ec2",
    "account": "123456789012",
    "time": "yyyy-mm-ddThh:mm:ssZ",
    "region": "us-east-2",
    "resources": ["arn:aws:ec2:us-east-2:123456789012:instance/i-1234567890abcdef0"],
    "detail": {
        "instance-id": "i-1234567890abcdef0"
    }
}
The source and detail-type fields form the event pattern that is defined in the rule.
The following example creates an EventBridge rule to send an email, text message, or mobile push
notification every time Amazon EC2 emits a rebalance recommendation signal. The signal is emitted as
an EC2 Instance Rebalance Recommendation event, which triggers the action defined by the rule.
A rule can't have the same name as another rule in the same Region and on the same event bus.
4. For Define pattern, choose Event pattern.
5. Under Event matching pattern, choose Custom pattern.
6. In the Event pattern box, add the following pattern to match the EC2 Instance Rebalance
Recommendation event, and then choose Save.
{
  "source": [ "aws.ec2" ],
  "detail-type": [ "EC2 Instance Rebalance Recommendation" ]
}
7. For Select event bus, choose AWS default event bus. When an AWS service in your account emits an
event, it always goes to your account's default event bus.
8. Confirm that Enable the rule on the selected event bus is toggled on.
9. For Target, choose SNS topic to send an email, text message, or mobile push notification when the
event occurs.
10. For Topic, choose an existing topic. You first need to create an Amazon SNS topic using the Amazon
SNS console. For more information, see Using Amazon SNS for application-to-person (A2P)
messaging in the Amazon Simple Notification Service Developer Guide.
11. For Configure input, choose the input for the email, text message, or mobile push notification.
12. Choose Create.
For more information, see Creating a rule for an AWS service and Event Patterns in the Amazon
EventBridge User Guide.
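To see why the two-field pattern from step 6 selects the rebalance event, the following sketch (Python, purely illustrative, not part of the console procedure) mimics the matching rule: every field listed in the pattern must contain the event's value for that field. The real EventBridge matcher supports much more (nested fields, prefix matching, anything-but), so treat this as a simplified model.

```python
def matches(pattern, event):
    # Minimal EventBridge-style matching: each pattern key must list
    # the event's value for that top-level field. (The real matcher
    # also supports nested fields, prefixes, and other operators.)
    return all(event.get(field) in allowed for field, allowed in pattern.items())

pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance Rebalance Recommendation"],
}

rebalance_event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance Rebalance Recommendation",
    "detail": {"instance-id": "i-1234567890abcdef0"},
}

other_event = {"source": "aws.s3", "detail-type": "Object Created"}

print(matches(pattern, rebalance_event))  # True
print(matches(pattern, other_event))      # False
```

Fields that appear in the event but not in the pattern (such as detail) are ignored, which is why the two-field pattern is enough to match every rebalance recommendation in the account.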
We recommend that you check for rebalance recommendation signals every 5 seconds so that you don't
miss an opportunity to act on the rebalance recommendation.
If a Spot Instance receives a rebalance recommendation, the time that the signal was emitted is present
in the instance metadata. You can retrieve the time that the signal was emitted as follows.
IMDSv2

[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/events/recommendations/rebalance

IMDSv1

[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/events/recommendations/rebalance
The following is example output, which indicates the time, in UTC, that the rebalance recommendation
signal was emitted for the Spot Instance.
{"noticeTime": "2020-10-27T08:22:00Z"}
If the signal has not been emitted for the instance, events/recommendations/rebalance is not
present and you receive an HTTP 404 error when you try to retrieve it.
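A polling loop that honors the recommended 5-second interval and the 404-when-absent behavior described above might be sketched as follows (Python; the fetch function is injected so the loop itself contains no AWS-specific code, and the helper names are illustrative, not part of any AWS SDK):

```python
import json
import time
import urllib.error

def poll_for_rebalance(fetch, on_signal, interval=5, max_checks=60, sleep=time.sleep):
    # Poll the injected `fetch` callable every `interval` seconds.
    # `fetch` should return the body of events/recommendations/rebalance,
    # or raise urllib.error.HTTPError with code 404 while no signal has
    # been emitted. Returns True once a signal has been handled.
    for _ in range(max_checks):
        try:
            body = fetch()
        except urllib.error.HTTPError as err:
            if err.code != 404:
                raise  # unexpected error; don't swallow it
        else:
            on_signal(json.loads(body))  # e.g. {"noticeTime": "..."}
            return True
        sleep(interval)
    return False
```

On an instance, fetch would issue an IMDSv2 request to http://169.254.169.254/latest/meta-data/events/recommendations/rebalance; injecting it keeps the loop testable off-instance.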
• Amazon EC2 Auto Scaling Capacity Rebalancing in the Amazon EC2 Auto Scaling User Guide
• Capacity Rebalancing (p. 800) in the EC2 Fleet topic in this user guide
• Capacity Rebalancing (p. 843) in the Spot Fleet topic in this user guide
When Amazon EC2 interrupts a Spot Instance, it either terminates, stops, or hibernates the instance,
depending on what you specified when you created the Spot request.
Demand for Spot Instances can vary significantly from moment to moment, and the availability of Spot
Instances can also vary significantly depending on how many unused EC2 instances are available. It is
always possible that your Spot Instance might be interrupted.
Contents
• Reasons for interruption (p. 460)
• Interruption behavior (p. 460)
• Stop interrupted Spot Instances (p. 461)
• Hibernate interrupted Spot Instances (p. 462)
• Terminate interrupted Spot Instances (p. 465)
• Prepare for interruptions (p. 465)
• Spot Instance interruption notices (p. 465)
• Find interrupted Spot Instances (p. 467)
• Determine whether Amazon EC2 interrupted a Spot Instance (p. 468)
• Billing for interrupted Spot Instances (p. 468)
Capacity
Amazon EC2 can interrupt your Spot Instance when it needs it back. Amazon EC2 reclaims your
instance mainly to repurpose capacity, but interruptions can also occur for other reasons, such as
host maintenance or hardware decommissioning.
Constraints
If your Spot request includes a constraint such as a launch group or an Availability Zone group, the
Spot Instances are terminated as a group when the constraint can no longer be met.
You can see the historical interruption rates for your instance type in the Spot Instance Advisor.
Interruption behavior
You can specify that Amazon EC2 must do one of the following when it interrupts a Spot Instance:
The way in which you specify the interruption behavior is different depending on how you request Spot
Instances.
• If you request Spot Instances using the launch instance wizard (p. 565), you can specify the
interruption behavior as follows: Select the Persistent request check box and then, from Interruption
behavior, choose an interruption behavior.
• If you request Spot Instances using the Spot console (p. 854), you can specify the interruption
behavior as follows: Select the Maintain target capacity check box and then, from Interruption
behavior, choose an interruption behavior.
• If you configure Spot Instances in a launch template (p. 581), you can specify the interruption
behavior as follows: In the launch template, expand Advanced details and select the Request Spot
Instances check box. Choose Customize and then, from Interruption behavior, choose an interruption
behavior.
• If you configure Spot Instances in the request configuration when using the create-fleet CLI, you can
specify the interruption behavior as follows: For InstanceInterruptionBehavior, specify an
interruption behavior.
• If you configure Spot Instances in the request configuration when using the request-spot-fleet CLI, you
can specify the interruption behavior as follows: For InstanceInterruptionBehavior, specify an
interruption behavior.
• If you configure Spot Instances using the request-spot-instances CLI, you can specify the interruption
behavior as follows: For --instance-interruption-behavior, specify an interruption behavior.
Considerations
For example, consider a Spot Fleet with the lowestPrice allocation strategy. At initial launch,
a c3.large pool meets the lowestPrice criteria for the launch specification. Later, when the
c3.large instances are interrupted, Amazon EC2 stops the instances and replenishes capacity from
another pool that fits the lowestPrice strategy. This time, the pool happens to be a c4.large pool
and Amazon EC2 launches c4.large instances to meet the target capacity. Similarly, Spot Fleet could
move to a c5.large pool the next time. In each of these transitions, Amazon EC2 does not prioritize
pools with earlier stopped instances, but rather chooses pools purely according to the specified allocation strategy.
The lowestPrice strategy can lead back to pools with earlier stopped instances. For example, if
instances are interrupted in the c5.large pool and the lowestPrice strategy leads it back to the
c3.large or c4.large pools, the earlier stopped instances are restarted to fulfill target capacity.
• While a Spot Instance is stopped, you can modify some of its instance attributes, but not the instance
type. If you detach or delete an EBS volume, it is not attached when the Spot Instance is started. If you
detach the root volume and Amazon EC2 attempts to start the Spot Instance, the instance will fail to
start and Amazon EC2 will terminate the stopped instance.
• You can terminate a Spot Instance while it is stopped.
• If you cancel a Spot Instance request, an EC2 Fleet, or a Spot Fleet, Amazon EC2 terminates any
associated Spot Instances that are stopped.
• While an interrupted Spot Instance is stopped, you are charged only for the EBS volumes, which are
preserved. With EC2 Fleet and Spot Fleet, if you have many stopped instances, you can exceed the
limit on the number of EBS volumes for your account. For more information about how you're charged
when a Spot Instance is interrupted, see Billing for interrupted Spot Instances (p. 468).
• Make sure that you are familiar with the implications of stopping an instance. For information about
what happens when an instance is stopped, see Differences between reboot, stop, hibernate, and
terminate (p. 562).
Prerequisites
Spot Instance request type – Must be persistent. You can't specify a launch group in the Spot
Instance request.
• When the instance receives a signal from Amazon EC2, the agent prompts the operating system
to hibernate. If the agent is not installed, or the underlying operating system doesn't support
hibernation, or there isn't enough volume space to save the instance memory, hibernation fails and
Amazon EC2 stops the instance instead.
• The instance memory (RAM) is preserved on the root volume.
• The EBS volumes and private IP addresses of the instance are preserved.
• Instance store volumes and public IP addresses, other than Elastic IP addresses, are not preserved.
For information about hibernating On-Demand Instances, see Hibernate your On-Demand or Reserved
Linux instance (p. 626).
Considerations
• While the instance is in the process of hibernating, instance health checks might fail.
• When the hibernation process completes, the state of the instance is stopped.
• While the instance is hibernated, you are charged only for the EBS volumes. With EC2 Fleet and Spot
Fleet, if you have many hibernated instances, you can exceed the limit on the number of EBS volumes
for your account.
• Make sure that you are familiar with the implications of hibernating an instance. For information about
what happens when an instance is hibernated, see Differences between reboot, stop, hibernate, and
terminate (p. 562).
Prerequisites
Spot Instance request type – Must be persistent. You can't specify a launch group in the Spot
Instance request.
If you use one of the following operating systems, you must install the hibernation agent (p. 464).
Alternatively, use a supported AMI, which already includes the hibernation agent.
• Amazon Linux 2
• Amazon Linux AMI
• Ubuntu with an AWS-tuned Ubuntu kernel (linux-aws) greater than 4.4.0-1041
Supported Linux AMIs
For information about the supported Windows AMIs, see the prerequisites in the Amazon EC2 User
Guide for Windows Instances.
Start the hibernation agent
We recommend that you use user data to start the hibernation agent at instance launch.
Alternatively, you could start the agent manually. For more information, see Start the hibernation
agent at launch (p. 464).
Supported instance families
Your instance must belong to an instance family that supports hibernation.
Root volume
Must be large enough to store the instance memory (RAM) during hibernation.
EBS root volume encryption – recommended, but not a prerequisite for Spot Instance hibernation
We strongly recommend that you use an encrypted EBS volume as the root volume, because
instance memory is stored on the root volume during hibernation. This ensures that the contents
of memory (RAM) are encrypted when the data is at rest on the volume and when data is moving
between the instance and volume.
Use one of the following three options to ensure that the root volume is an encrypted EBS volume:
• EBS encryption by default – You can enable EBS encryption by default to ensure that all new
EBS volumes created in your AWS account are encrypted. This way, you can enable hibernation for
your instances without specifying encryption intent at instance launch. For more information, see
Encryption by default (p. 1539).
• EBS "single-step" encryption – You can launch encrypted EBS-backed EC2 instances from an
unencrypted AMI and also enable hibernation at the same time. For more information, see Use
encryption with EBS-backed AMIs (p. 189).
• Encrypted AMI – You can enable EBS encryption by using an encrypted AMI to launch your
instance. If your AMI does not have an encrypted root snapshot, you can copy it to a new AMI and
request encryption. For more information, see Encrypt an unencrypted image during copy (p. 193)
and Copy an AMI (p. 172).
You must install the hibernation agent on your AMI, unless you plan to use an AMI that already includes
the agent.
The following instructions describe how to install the hibernation agent on a Linux AMI. For the
instructions to install the hibernation agent on a Windows AMI, see Install the hibernation agent on your
Windows AMI in the Amazon EC2 User Guide for Windows Instances.
1. Verify that your kernel supports hibernation and update the kernel if necessary.
If your AMI doesn't include the hibernation agent, install it. The agent is available only on
Ubuntu 16.04 or later.
The hibernation agent must run at instance startup, whether the agent was included in your AMI or you
installed it yourself.
The following instructions describe how to start the hibernation agent on a Linux instance. For the
instructions to start the hibernation agent on a Windows instance, see Start the hibernation agent at
launch in the Amazon EC2 User Guide for Windows Instances.
Follow the steps to request a Spot Instance using your preferred launch method (p. 563), and add the
following to the user data.
#!/bin/bash
/usr/bin/enable-ec2-spot-hibernation
We recommend that you follow these best practices so that you're prepared for a Spot Instance
interruption.
• Create your Spot request using an Auto Scaling group. If your Spot Instances are interrupted, the Auto
Scaling group will automatically launch replacement instances. For more information, see Auto Scaling
groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide.
• When you create your Spot request, specify the default maximum price, which is the On-Demand price.
When your Spot Instances launch, you'll only pay the Spot price.
• Ensure that your instance is ready to go as soon as the request is fulfilled by using an Amazon Machine
Image (AMI) that contains the required software configuration. You can also use user data to run
commands at startup.
• Store important data regularly in a place that isn't affected if the Spot Instance terminates. For
example, you can use Amazon S3, Amazon EBS, or DynamoDB.
• Divide the work into small tasks (using a Grid, Hadoop, or queue-based architecture) or use
checkpoints so that you can save your work frequently.
• Amazon EC2 emits a rebalance recommendation signal to the Spot Instance when the instance is at an
elevated risk of interruption. You can rely on the rebalance recommendation to proactively manage
Spot Instance interruptions without having to wait for the two-minute Spot Instance interruption
notice. For more information, see EC2 instance rebalance recommendations (p. 456).
• Use the two-minute Spot Instance interruption notices to monitor the status of your Spot Instances.
For more information, see Spot Instance interruption notices (p. 465).
• While we make every effort to provide these warnings as soon as possible, it is possible that your Spot
Instance is interrupted before the warnings can be made available. Test your application to ensure that
it handles an unexpected instance interruption gracefully, even if you are monitoring for rebalance
recommendation signals and interruption notices. You can do this by running the application using an
On-Demand Instance and then terminating the On-Demand Instance yourself.
• Run a controlled fault injection experiment with AWS Fault Injection Simulator to test how your
application responds when your Spot Instance is interrupted. For more information, see the Tutorial:
Test Spot Instance interruptions using AWS FIS in the AWS Fault Injection Simulator User Guide.
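The storage and checkpointing practices above can be as simple as writing progress atomically to a local file that you then copy to durable storage. A minimal sketch (Python; the S3 upload step is omitted and the function names are illustrative):

```python
import json
import os
import tempfile

def save_checkpoint(path, state):
    # Write the checkpoint atomically so an interruption mid-write
    # can't leave a corrupt file behind. In a real workload you would
    # then copy the file to durable storage such as Amazon S3.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def load_checkpoint(path, default=None):
    # Resume from the last saved state, or start fresh if no
    # checkpoint exists yet (for example, on the first launch).
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return default
```

Saving small, frequent checkpoints like this means a replacement Spot Instance can resume near where the interrupted one stopped, instead of repeating completed work.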
If your Spot Instance is configured to hibernate when interrupted, you receive an interruption
notice, but you do not receive a two-minute warning because the hibernation process begins
immediately.
The best way for you to gracefully handle Spot Instance interruptions is to architect your application to
be fault-tolerant. To accomplish this, you can take advantage of Spot Instance interruption notices. We
recommend that you check for these interruption notices every 5 seconds.
The interruption notices are made available as a CloudWatch event and as items in the instance
metadata (p. 710) on the Spot Instance. Events are emitted on a best effort basis.
When Amazon EC2 is going to interrupt your Spot Instance, it emits an event two minutes prior to the
actual interruption (except for hibernation, which gets the interruption notice, but not two minutes in
advance, because hibernation begins immediately). This event can be detected by Amazon CloudWatch
Events. For more information about CloudWatch events, see the Amazon CloudWatch Events User Guide.
For a detailed example that walks you through how to create and use event rules, see Taking Advantage
of Amazon EC2 Spot Instance Interruption Notices.
The following is an example of the event for Spot Instance interruption. The possible values for
instance-action are hibernate, stop, or terminate.
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "EC2 Spot Instance Interruption Warning",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "yyyy-mm-ddThh:mm:ssZ",
  "region": "us-east-2",
  "resources": ["arn:aws:ec2:us-east-2:123456789012:instance/i-1234567890abcdef0"],
  "detail": {
    "instance-id": "i-1234567890abcdef0",
    "instance-action": "action"
  }
}
instance-action
If your Spot Instance is marked to be stopped or terminated by Amazon EC2, the instance-action
item is present in your instance metadata (p. 710). Otherwise, it is not present. You can retrieve
instance-action as follows.
IMDSv2

[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/spot/instance-action

IMDSv1

[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/spot/instance-action
The instance-action item specifies the action and the approximate time, in UTC, when the action will
occur.
The following example output indicates the time at which this instance will be stopped.

{"action": "stop", "time": "2017-09-18T08:22:00Z"}
The following example output indicates the time at which this instance will be terminated.

{"action": "terminate", "time": "2017-09-18T08:22:00Z"}
If Amazon EC2 is not preparing to stop or terminate the instance, or if you terminated the instance
yourself, instance-action is not present in the instance metadata and you receive an HTTP 404 error
when you try to retrieve it.
termination-time
This item is maintained for backward compatibility; you should use instance-action instead.
If your Spot Instance is marked for termination by Amazon EC2, the termination-time item is present
in your instance metadata. Otherwise, it is not present. You can retrieve termination-time as follows.
IMDSv2

[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/spot/termination-time

IMDSv1

[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/spot/termination-time
The termination-time item specifies the approximate time in UTC when the instance receives the
shutdown signal. The following is example output.
2015-01-05T18:02:00Z
If Amazon EC2 is not preparing to terminate the instance, or if you terminated the Spot Instance yourself,
the termination-time item is either not present in the instance metadata (so you receive an HTTP 404
error) or contains a value that is not a time value.
If Amazon EC2 fails to terminate the instance, the request status is set to fulfilled. The
termination-time value remains in the instance metadata with the original approximate time, which
is now in the past.
Alternatively, in the navigation pane, choose Spot Requests. You can see both Spot Instance
requests and Spot Fleet requests. To view the IDs of the instances, select a Spot Instance request or a
Spot Fleet request and choose the Instances tab. Choose an instance ID to display the instance in the
Instances pane.
3. For each Spot Instance, you can view its state in the Instance State column.
You can list your interrupted Spot Instances using the describe-instances command with the --filters
parameter. To list only the instance IDs in the output, include the --query parameter. For example:

aws ec2 describe-instances \
    --filters Name=instance-lifecycle,Values=spot Name=instance-state-name,Values=terminated,stopped \
    --query "Reservations[*].Instances[*].InstanceId"
For more information about using CloudTrail, see Log Amazon EC2 and Amazon EBS API calls with AWS
CloudTrail (p. 1001).
Instance usage

If you stop or terminate the Spot Instance:
• Windows and Linux (excluding RHEL and SUSE) – Charged for the seconds used, whether the instance
is interrupted in the first hour or in any hour after the first hour.
• RHEL and SUSE – If interrupted in the first hour, charged for the full hour even if you used a
partial hour. If interrupted in any hour after the first hour, charged for the full hours used, and
charged a full hour for the interrupted partial hour.

If Amazon EC2 interrupts the Spot Instance:
• Windows and Linux (excluding RHEL and SUSE) – If interrupted in the first hour, no charge. If
interrupted in any hour after the first hour, charged for the seconds used.
While an interrupted Spot Instance is stopped, you are charged only for the EBS volumes, which are
preserved.
With EC2 Fleet and Spot Fleet, if you have many stopped instances, you can exceed the limit on the
number of EBS volumes for your account.
Benefits
You can use the Spot placement score feature for the following:
• To relocate and scale Spot compute capacity in a different Region, as needed, in response to increased
capacity needs or decreased available capacity in the current Region.
• To identify the optimal Availability Zone in which to run single-Availability Zone workloads.
• To simulate future Spot capacity needs so that you can pick an optimal Region for the expansion of
your Spot-based workloads.
Topics
• Costs (p. 470)
• How Spot placement score works (p. 470)
• Limitations (p. 472)
• Required IAM permission (p. 473)
• Calculate a Spot placement score (p. 473)
• Example configurations (p. 477)
Costs
There is no additional charge for using the Spot placement score feature.
First, you specify your desired target Spot capacity and your compute requirements, as follows:
1. Specify the target Spot capacity, and optionally the target capacity unit.
You can specify your desired target Spot capacity in terms of the number of instances or vCPUs, or in
terms of the amount of memory in MiB. To specify the target capacity in number of vCPUs or amount
of memory, you must specify the target capacity unit as vcpu or memory-mib. Otherwise, it defaults
to number of instances.
By specifying your target capacity in terms of the number of vCPUs or the amount of memory, you can
use these units when counting the total capacity. For example, if you want to use a mix of instances
of different sizes, you can specify the target capacity as a total number of vCPUs. The Spot placement
score feature then considers each instance type in the request by its number of vCPUs, and counts the
total number of vCPUs rather than the total number of instances when totaling up the target capacity.
For example, say you specify a total target capacity of 30 vCPUs, and your instance type list consists of
c5.xlarge (4 vCPUs), m5.2xlarge (8 vCPUs), and r5.large (2 vCPUs). To achieve a total of 30 vCPUs, you
could get a mix of 2 c5.xlarge (2*4 vCPUs), 2 m5.2xlarge (2*8 vCPUs), and 3 r5.large (3*2 vCPUs).
2. Specify instance types or instance attributes.
You can either specify the instance types to use, or you can specify the instance attributes that you
need for your compute requirements, and then let Amazon EC2 identify the instance types that have
those attributes. This is known as attribute-based instance type selection.
You can't specify both instance types and instance attributes in the same Spot placement score
request.
If you specify instance types, you must specify at least three different instance types; otherwise,
Amazon EC2 returns a low Spot placement score. Similarly, if you specify instance attributes, they
must resolve to at least three different instance types.
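The vCPU arithmetic in step 1 can be checked with a short sketch (Python; the per-type vCPU counts are the ones quoted in the example above):

```python
# vCPUs per instance type, as quoted in the example above.
VCPUS = {"c5.xlarge": 4, "m5.2xlarge": 8, "r5.large": 2}

def total_vcpus(mix):
    # Count target capacity in vCPUs rather than instances: each
    # instance type contributes its vCPU count times the number of
    # instances of that type in the mix.
    return sum(VCPUS[itype] * count for itype, count in mix.items())

mix = {"c5.xlarge": 2, "m5.2xlarge": 2, "r5.large": 3}
print(total_vcpus(mix))  # 30: 2*4 + 2*8 + 3*2
```

This is the same accounting the Spot placement score feature applies when you set the target capacity unit to vcpu: the target is met by total vCPUs, not by instance count.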
For examples of different ways to specify your Spot requirements, see Example configurations (p. 477).
Amazon EC2 calculates the Spot placement score for each Region or Availability Zone, and returns either
the top 10 Regions or the top 10 Availability Zones where your Spot request is likely to succeed. The
default is to return a list of scored Regions. If you plan to launch all of your Spot capacity into a single
Availability Zone, then it's useful to request a list of scored Availability Zones.
You can specify a Region filter to narrow down the Regions that will be returned in the response.
You can combine the Region filter and a request for scored Availability Zones. In this way, the scored
Availability Zones are confined to the Regions for which you've filtered. To find the highest-scored
Availability Zone in a Region, specify only that Region, and the response will return a scored list of all of
the Availability Zones in that Region.
The Spot placement score for each Region or Availability Zone is calculated based on the target capacity,
the composition of the instance types, the historical and current Spot usage trends, and the time of the
request. Because Spot capacity is constantly fluctuating, the same Spot placement score request can
yield different scores when calculated at different times.
Regions and Availability Zones are scored on a scale from 1 to 10. A score of 10 indicates that your Spot
request is highly likely—but not guaranteed—to succeed. A score of 1 indicates that your Spot request is
not likely to succeed at all. The same score might be returned for different Regions or Availability Zones.
If low scores are returned, you can edit your compute requirements and recalculate the score. You can
also request Spot placement score recommendations for the same compute requirements at different
times of the day.
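As a sketch of how you might act on the scored results (Python; the entry shape with Region and Score fields is an assumption modeled on the GetSpotPlacementScores response, simplified for illustration):

```python
def best_scored(scored_entries):
    # Pick the highest-scoring Region (or Availability Zone) from a
    # list of {"Region": ..., "Score": 1-10} entries. Scores range
    # from 1 (unlikely to succeed) to 10 (highly likely); with ties,
    # max() keeps the first entry encountered in the list.
    return max(scored_entries, key=lambda entry: entry["Score"])

scores = [
    {"Region": "us-east-1", "Score": 7},
    {"Region": "us-west-2", "Score": 9},
    {"Region": "eu-west-1", "Score": 9},
]
print(best_scored(scores)["Region"])  # us-west-2 (first of the tied 9s)
```

Because scores fluctuate with Spot capacity, recompute and re-select immediately before each capacity request rather than caching a previous result.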
A Spot placement score is only relevant if your Spot request has exactly the same configuration as the
Spot placement score configuration (target capacity, target capacity unit, and instance types or instance
attributes), and is configured to use the capacity-optimized allocation strategy. Otherwise, the
likelihood of getting available Spot capacity will not align with the score.
While a Spot placement score serves as a guideline, and no score guarantees that your Spot request will
be fully or partially fulfilled, you can use the following information to get the best results:
• Use the same configuration – The Spot placement score is relevant only if the Spot request
configuration (target capacity, target capacity unit, and instance types or instance attributes) in
your Auto Scaling group, EC2 Fleet, or Spot Fleet is the same as what you entered to get the Spot
placement score.
If you used attribute-based instance type selection in your Spot placement score request, you can
use attribute-based instance type selection to configure your Auto Scaling group, EC2 Fleet, or Spot
Fleet. For more information, see Creating an Auto Scaling group with a set of requirements on the
instance types used, Attribute-based instance type selection for EC2 Fleet (p. 785), and Attribute-
based instance type selection for Spot Fleet (p. 825).
Note
If you specified your target capacity in terms of the number of vCPUs or the amount of
memory, and you specified instance types in your Spot placement score configuration,
note that you can’t currently create this configuration in your Auto Scaling group, EC2
Fleet, or Spot Fleet. Instead, you must manually set the instance weighting by using the
WeightedCapacity parameter.
• Use the capacity-optimized allocation strategy – Any score assumes that your fleet request
will be configured to use all Availability Zones (for requesting capacity across Regions) or a single
Availability Zone (if requesting capacity in one Availability Zone) and the capacity-optimized Spot
allocation strategy for your request for Spot capacity to succeed. If you use other allocation strategies,
such as lowest-price, the likelihood of getting available Spot capacity will not align with the score.
• Act on a score immediately – The Spot placement score recommendation reflects the available
Spot capacity at the time of the request, and the same configuration can yield different scores when
calculated at different times due to Spot capacity fluctuations. While a score of 10 means that your
Spot capacity request is highly likely—but not guaranteed—to succeed, for best results we recommend
that you act on a score immediately. We also recommend that you get a fresh score each time you
attempt a capacity request.
Limitations
• Target capacity limit – Your Spot placement score target capacity limit is based on your recent Spot
usage, while accounting for potential usage growth. If you have no recent Spot usage, we provide you
with a low default limit aligned with your Spot request limit.
• Request configurations limit – We can limit the number of new request configurations within a 24-
hour period if we detect patterns not associated with the intended use of the Spot placement score
feature. If you reach the limit, you can retry the request configurations that you've already used, but
you can't specify new request configurations until the next 24-hour period.
• Minimum number of instance types – If you specify instance types, you must specify at least three
different instance types; otherwise, Amazon EC2 returns a low Spot placement score. Similarly, if
you specify instance attributes, they must resolve to at least three different instance types.
Instance types are considered different if they have different names. For example, m5.8xlarge,
m5a.8xlarge, and m5.12xlarge are all considered different.
The following is an example IAM policy that grants permission to use the
ec2:GetSpotPlacementScores EC2 API action.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:GetSpotPlacementScores",
      "Resource": "*"
    }
  ]
}
For information about editing an IAM policy, see Editing IAM policies in the IAM User Guide.
Topics
• Calculate a Spot placement score by specifying instance attributes (console) (p. 473)
• Calculate a Spot placement score by specifying instance types (console) (p. 474)
• Calculate the Spot placement score (AWS CLI) (p. 474)
10. (Optional) To view the instance types with your specified attributes, expand Preview matching
instance types. To exclude instance types from being used in the placement evaluation, select the
instances and then choose Exclude selected instance types.
11. Choose Load placement scores, and review the results.
12. (Optional) To display the Spot placement score for specific Regions, for Regions to evaluate, select
the Regions to evaluate, and then choose Calculate placement scores.
13. (Optional) To display the Spot placement score for the Availability Zones in the displayed Regions,
select the Provide placement scores per Availability Zone check box. A list of scored Availability
Zones is useful if you want to launch all of your Spot capacity into a single Availability Zone.
14. (Optional) To edit your compute requirements and get a new placement score, choose Edit, make
the necessary adjustments, and then choose Calculate placement scores.
1. (Optional) To generate all of the possible parameters that can be specified for the Spot placement
score configuration, use the get-spot-placement-scores command and the --generate-cli-
skeleton parameter.
Expected output
{
"InstanceTypes": [
""
474
Amazon Elastic Compute Cloud
User Guide for Linux Instances
Spot Instances
],
"TargetCapacity": 0,
"TargetCapacityUnitType": "vcpu",
"SingleAvailabilityZone": true,
"RegionNames": [
""
],
"InstanceRequirementsWithMetadata": {
"ArchitectureTypes": [
"x86_64_mac"
],
"VirtualizationTypes": [
"hvm"
],
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 0
},
"MemoryMiB": {
"Min": 0,
"Max": 0
},
"CpuManufacturers": [
"amd"
],
"MemoryGiBPerVCpu": {
"Min": 0.0,
"Max": 0.0
},
"ExcludedInstanceTypes": [
""
],
"InstanceGenerations": [
"previous"
],
"SpotMaxPricePercentageOverLowestPrice": 0,
"OnDemandMaxPricePercentageOverLowestPrice": 0,
"BareMetal": "excluded",
"BurstablePerformance": "excluded",
"RequireHibernateSupport": true,
"NetworkInterfaceCount": {
"Min": 0,
"Max": 0
},
"LocalStorage": "included",
"LocalStorageTypes": [
"hdd"
],
"TotalLocalStorageGB": {
"Min": 0.0,
"Max": 0.0
},
"BaselineEbsBandwidthMbps": {
"Min": 0,
"Max": 0
},
"AcceleratorTypes": [
"fpga"
],
"AcceleratorCount": {
"Min": 0,
"Max": 0
},
"AcceleratorManufacturers": [
"amd"
],
"AcceleratorNames": [
"vu9p"
],
"AcceleratorTotalMemoryMiB": {
"Min": 0,
"Max": 0
}
}
},
"DryRun": true,
"MaxResults": 0,
"NextToken": ""
}
2. Create a JSON configuration file using the output from the previous step, and configure it as follows:
a. For TargetCapacity, enter your desired Spot capacity in terms of the number of instances or
vCPUs, or the amount of memory (MiB).
b. For TargetCapacityUnitType, enter the unit for the target capacity. If you omit this
parameter, it defaults to units.
c. For SingleAvailabilityZone, specify true to return a list of scored Availability Zones rather
than scored Regions. If you omit this parameter, it defaults to false.
d. (Optional) For RegionNames, specify the Regions to use as a filter.
With a Region filter, the response returns only the Regions that you specify. If you specified
true for SingleAvailabilityZone, the response returns only the Availability Zones in the
specified Regions.
e. You can include either InstanceTypes or InstanceRequirements, but not both in the same
configuration.
• To specify a list of instance types, specify the instance types in the InstanceTypes
parameter. Specify at least three different instance types. If you specify only one or two
instance types, Spot placement score returns a low score. For the list of instance types, see
Amazon EC2 Instance Types.
• To specify the instance attributes so that Amazon EC2 will identify the instance
types that match those attributes, specify the attributes that are located in the
InstanceRequirements structure.
You must provide values for VCpuCount and MemoryMiB. You can omit the other attributes;
when omitted, the default values are used. For a description of each attribute and its default
values, see get-spot-placement-scores in the Amazon EC2 Command Line Reference.
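The rules in this step can be sketched as a small pre-flight check, written here in Python purely for illustration (the helper name and structure are not part of the AWS CLI):

```python
# Illustrative pre-flight check for a get-spot-placement-scores JSON
# configuration. This helper is a sketch, not part of the AWS CLI.

def validate_config(config: dict) -> list:
    """Return a list of problems found in a placement score configuration."""
    problems = []
    has_types = "InstanceTypes" in config
    has_reqs = "InstanceRequirementsWithMetadata" in config
    # You can include InstanceTypes or InstanceRequirements, but not both.
    if has_types and has_reqs:
        problems.append("Specify InstanceTypes or InstanceRequirements, not both")
    # Fewer than three instance types returns a low placement score.
    if has_types and len(config["InstanceTypes"]) < 3:
        problems.append("Specify at least three instance types")
    if has_reqs:
        reqs = config["InstanceRequirementsWithMetadata"].get("InstanceRequirements", {})
        for attr in ("VCpuCount", "MemoryMiB"):  # always required
            if attr not in reqs:
                problems.append("Missing required attribute: " + attr)
    if config.get("TargetCapacity", 0) <= 0:
        problems.append("TargetCapacity must be a positive number")
    return problems

config = {
    "InstanceTypes": ["m5.4xlarge", "r5.2xlarge", "m4.4xlarge"],
    "TargetCapacity": 500,
}
print(validate_config(config))  # []
```

An empty list means the configuration passes the checks described in this step.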
3. To get the Spot placement score for the requirements that you specified in the JSON file, use
the get-spot-placement-scores command, and specify the name and path to your JSON file by
using the --cli-input-json parameter.

aws ec2 get-spot-placement-scores --cli-input-json file://file_name.json

If you omit SingleAvailabilityZone or set it to false, the response returns a scored list of
Regions, similar to the following:
"Recommendation": [
{
"Region": "us-east-1",
"Score": 7
},
{
"Region": "us-west-1",
"Score": 5
},
...
If you specified true for SingleAvailabilityZone, the response returns a scored list of
Availability Zones, similar to the following:
"Recommendation": [
{
"Region": "us-east-1",
"AvailabilityZoneId": "use1-az1",
"Score": 8
},
{
"Region": "us-west-2",
"AvailabilityZoneId": "usw2-az3",
"Score": 6
},
...
Example configurations
When using the AWS CLI, you can use the following example configurations.
Example configurations
• Example: Specify instance types and target capacity (p. 477)
• Example: Specify instance types, and target capacity in terms of memory (p. 478)
• Example: Specify attributes for attribute-based instance type selection (p. 478)
• Example: Specify attributes for attribute-based instance type selection and return a scored list of
Availability Zones (p. 479)
Example: Specify instance types and target capacity
The following example configuration specifies three different instance types and a target Spot capacity
of 500 Spot Instances.
{
"InstanceTypes": [
"m5.4xlarge",
"r5.2xlarge",
"m4.4xlarge"
],
"TargetCapacity": 500
}
Example: Specify instance types, and target capacity in terms of memory
The following example configuration specifies three different instance types and a target Spot capacity
of 500,000 MiB of memory.
{
"InstanceTypes": [
"m5.4xlarge",
"r5.2xlarge",
"m4.4xlarge"
],
"TargetCapacity": 500000,
"TargetCapacityUnitType": "memory-mib"
}
Example: Specify attributes for attribute-based instance type selection
The following example configuration uses attribute-based instance type selection, with a target Spot
capacity of 5000 vCPUs.
{
"TargetCapacity": 5000,
"TargetCapacityUnitType": "vcpu",
"InstanceRequirementsWithMetadata": {
"ArchitectureTypes": ["arm64"],
"VirtualizationTypes": ["hvm"],
"InstanceRequirements": {
"VCpuCount": {
"Min": 1,
"Max": 12
},
"MemoryMiB": {
"Min": 512
}
}
}
}
InstanceRequirementsWithMetadata
In the preceding example, the following required instance attributes are specified:
• ArchitectureTypes – arm64
• VirtualizationTypes – hvm
• VCpuCount – A minimum of 1 and a maximum of 12 vCPUs
• MemoryMiB – A minimum of 512 MiB of memory, with no maximum
Note that there are several other optional attributes that you can specify. For the list of attributes, see
get-spot-placement-scores in the Amazon EC2 Command Line Reference.
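As an illustration of how these attributes act as a filter, the following Python sketch tests a candidate instance type's specs against the VCpuCount and MemoryMiB ranges from the example (the helper and the sample spec values are hypothetical):

```python
# Sketch: does a candidate instance type satisfy the VCpuCount and
# MemoryMiB ranges from the example configuration above?
REQS = {"VCpuCount": {"Min": 1, "Max": 12}, "MemoryMiB": {"Min": 512}}

def matches(vcpus: int, memory_mib: int, reqs: dict = REQS) -> bool:
    """An omitted Max means no upper bound, as in the example."""
    v, m = reqs["VCpuCount"], reqs["MemoryMiB"]
    if not v.get("Min", 0) <= vcpus <= v.get("Max", float("inf")):
        return False
    return m.get("Min", 0) <= memory_mib <= m.get("Max", float("inf"))

print(matches(vcpus=4, memory_mib=16384))   # True
print(matches(vcpus=16, memory_mib=32768))  # False: 16 vCPUs exceeds Max of 12
```

Amazon EC2 performs this kind of matching across all instance types when it evaluates an attribute-based request.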
TargetCapacityUnitType
The TargetCapacityUnitType parameter specifies the unit for the target capacity. In the example,
the target capacity is 5000 and the target capacity unit type is vcpu, which together specify a desired
target capacity of 5000 vCPUs, where the number of Spot Instances to launch must provide a total of
5000 vCPUs.
Example: Specify attributes for attribute-based instance type selection and return a scored list of
Availability Zones
The following example configuration uses attribute-based instance type selection. By
specifying "SingleAvailabilityZone": true, the response returns a list of scored Availability
Zones.
{
"TargetCapacity": 1000,
"TargetCapacityUnitType": "vcpu",
"SingleAvailabilityZone": true,
"InstanceRequirementsWithMetadata": {
"ArchitectureTypes": ["arm64"],
"VirtualizationTypes": ["hvm"],
"InstanceRequirements": {
"VCpuCount": {
"Min": 1,
"Max": 12
},
"MemoryMiB": {
"Min": 512
}
}
}
}
Spot Instance data feed
To help you understand the charges for your Spot Instances, Amazon EC2 provides a data feed that
describes your Spot Instance usage and pricing. The data feed is sent to an Amazon S3 bucket that you
specify when you subscribe to it.
Data feed files arrive in your bucket typically once an hour, and each hour of usage is typically covered in
a single data file. These files are compressed (gzip) before they are delivered to your bucket. Amazon EC2
can write multiple files for a given hour of usage where files are large (for example, when file contents
for the hour exceed 50 MB before compression).
Note
If you don't have a Spot Instance running during a certain hour, you don't receive a data feed file
for that hour.
Spot Instance data feed is supported in all AWS Regions except China (Beijing), China (Ningxia), AWS
GovCloud (US), and the Regions that are disabled by default.
Contents
• Data feed file name and format (p. 479)
• Amazon S3 bucket requirements (p. 480)
• Subscribe to your Spot Instance data feed (p. 481)
• Describe your Spot Instance data feed (p. 481)
• Delete your Spot Instance data feed (p. 481)
Data feed file name and format
The Spot Instance data feed file name uses the following format (with the date and hour in UTC):
bucket-name.s3.amazonaws.com/optional-prefix/aws-account-id.YYYY-MM-DD-HH.n.unique-id.gz
For example, if your bucket name is my-bucket-name and your prefix is my-prefix, your file names are
similar to the following:
my-bucket-name.s3.amazonaws.com/my-prefix/111122223333.2019-03-17-20.001.pwBdGTJG.gz
For more information about bucket names, see Rules for bucket naming in the Amazon Simple Storage
Service User Guide.
The Spot Instance data feed files are tab-delimited. Each line in the data file corresponds to one instance
hour and contains the fields listed in the following table.
Field Description
Timestamp The timestamp used to determine the price charged for this instance usage.
UsageType The type of usage and instance type being charged for. For m1.small Spot
Instances, this field is set to SpotUsage. For all other instance types, this field is
set to SpotUsage:{instance-type}. For example, SpotUsage:c1.medium.
Operation The product being charged for. For Linux Spot Instances, this field is
set to RunInstances. For Windows Spot Instances, this field is set to
RunInstances:0002. Spot usage is grouped according to Availability Zone.
InstanceID The ID of the Spot Instance that generated this instance usage.
MyBidID The ID for the Spot Instance request that generated this instance usage.
MarketPrice The Spot price at the time specified in the Timestamp field.
Version The version included in the data feed file name for this record.
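As a sketch of consuming these files, the following Python reads a gzipped, tab-delimited feed into records keyed by the field names above (the sample line and its values are hypothetical):

```python
import gzip
import io

# Field order as described in the table above.
FIELDS = ["Timestamp", "UsageType", "Operation", "InstanceID",
          "MyBidID", "MarketPrice", "Version"]

def parse_feed(raw_gz: bytes) -> list:
    """Parse a gzipped, tab-delimited Spot data feed file into dict records."""
    records = []
    with gzip.open(io.BytesIO(raw_gz), mode="rt") as f:
        for line in f:
            if line.startswith("#"):  # skip comment/header lines, if any
                continue
            values = line.rstrip("\n").split("\t")
            records.append(dict(zip(FIELDS, values)))
    return records

# Hypothetical single-record file for illustration.
sample = ("2019-03-17 20:00:00 UTC\tSpotUsage:c1.medium\tRunInstances\t"
          "i-0123456789abcdef0\tsir-example1\t0.013\t1\n")
feed = parse_feed(gzip.compress(sample.encode()))
print(feed[0]["UsageType"])  # SpotUsage:c1.medium
```

Because each line corresponds to one instance hour, summing MarketPrice across records approximates the Spot charges for the period covered by the file.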
Amazon S3 bucket requirements
When you subscribe to the data feed, you must specify an Amazon S3 bucket to store the data feed
files. Before you choose a bucket for the data feed, consider the following:
• You must have FULL_CONTROL permission to the bucket, which includes permission for the
s3:GetBucketAcl and s3:PutBucketAcl actions.
If you're the bucket owner, you have this permission by default. Otherwise, the bucket owner must
grant your AWS account this permission.
• When you subscribe to a data feed, these permissions are used to update the bucket ACL to give the
AWS data feed account FULL_CONTROL permission. The AWS data feed account writes data feed files
to the bucket. If your account doesn't have the required permissions, the data feed files cannot be
written to the bucket.
Note
If you update the ACL and remove the permissions for the AWS data feed account, the data
feed files cannot be written to the bucket. You must resubscribe to the data feed to receive
the data feed files.
• Each data feed file has its own ACL (separate from the ACL for the bucket). The bucket owner
has FULL_CONTROL permission to the data files. The AWS data feed account has read and write
permissions.
• If you delete your data feed subscription, Amazon EC2 doesn't remove the read and write permissions
for the AWS data feed account on either the bucket or the data files. You must remove these
permissions yourself.
Subscribe to your Spot Instance data feed
To subscribe to your data feed, use the create-spot-datafeed-subscription command and specify the
Amazon S3 bucket and an optional prefix.

aws ec2 create-spot-datafeed-subscription --bucket my-bucket-name --prefix my-prefix

Example output
{
"SpotDatafeedSubscription": {
"OwnerId": "111122223333",
"Bucket": "my-bucket-name",
"Prefix": "my-prefix",
"State": "Active"
}
}
Describe your Spot Instance data feed
To describe your data feed subscription, use the describe-spot-datafeed-subscription command.

aws ec2 describe-spot-datafeed-subscription

Example output
{
"SpotDatafeedSubscription": {
"OwnerId": "123456789012",
"Prefix": "spotdata",
"Bucket": "my-s3-bucket",
"State": "Active"
}
}
Delete your Spot Instance data feed
To delete your data feed, use the delete-spot-datafeed-subscription command.

aws ec2 delete-spot-datafeed-subscription
If you terminate your Spot Instances but do not cancel the Spot Instance requests, the requests
count against your Spot Instance vCPU limit until Amazon EC2 detects the Spot Instance terminations
and closes the requests.
Each limit specifies the vCPU limit for one or more instance families. For information about the different
instance families, generations, and sizes, see Amazon EC2 Instance Types.
With vCPU limits, your limit is expressed in terms of the number of vCPUs required to launch any
combination of instance types that meets your changing application needs. For example, if your All
Standard Spot Instance Requests limit is 256 vCPUs, you can request 32 m5.2xlarge Spot Instances
(32 x 8 vCPUs), 16 c5.4xlarge Spot Instances (16 x 16 vCPUs), or any combination of Standard
Spot Instance types and sizes that totals 256 vCPUs.
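The arithmetic in this example can be checked with a few lines of Python (a sketch; the vCPU counts are those quoted above):

```python
# Check Spot Instance requests against a vCPU-based limit, using the
# vCPU counts from the example above (m5.2xlarge = 8, c5.4xlarge = 16).
VCPUS = {"m5.2xlarge": 8, "c5.4xlarge": 16}
LIMIT = 256  # All Standard Spot Instance Requests limit, in vCPUs

def within_limit(request: dict) -> bool:
    """Return True if the requested combination stays within the vCPU limit."""
    used = sum(VCPUS[itype] * count for itype, count in request.items())
    return used <= LIMIT

print(within_limit({"m5.2xlarge": 32}))                    # True: 32 x 8 = 256
print(within_limit({"c5.4xlarge": 16}))                    # True: 16 x 16 = 256
print(within_limit({"m5.2xlarge": 16, "c5.4xlarge": 16}))  # False: 128 + 256 = 384
```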
Topics
• Monitor Spot Instance limits and usage (p. 482)
• Request a Spot Instance limit increase (p. 482)
For more information, see Amazon EC2 service quotas (p. 1680) in the Amazon EC2 User Guide for Linux
Instances and Viewing service quotas in the Service Quotas User Guide.
With Amazon CloudWatch metrics integration, you can monitor EC2 usage against limits. You can also
configure alarms to warn about approaching limits. For more information, see Service Quotas and
Amazon CloudWatch alarms in the Service Quotas User Guide.
1. Open the Create case, Service limit increase form in the Support Center console at https://
console.aws.amazon.com/support/home#/case/create.
For more information about viewing limits and requesting a limit increase, see Amazon EC2 service
quotas (p. 1680).
Unlimited mode is suitable for burstable performance Spot Instances only if the instance runs long
enough to accrue CPU credits for bursting. Otherwise, paying for surplus credits makes burstable
performance Spot Instances more expensive than using other instances. For more information, see When
to use unlimited mode versus fixed CPU (p. 261).
Launch credits are meant to provide a productive initial launch experience for T2 instances by providing
sufficient compute resources to configure the instance. Repeatedly launching T2 instances to access new
launch credits is not permitted. If you require sustained CPU, you can earn credits (by idling over some
period), use Unlimited mode (p. 260) for T2 Spot Instances, or use an instance type with dedicated CPU.
Dedicated Hosts
An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your
use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses,
including Windows Server, Microsoft SQL Server, and SUSE Linux Enterprise Server.
For information about the configurations supported on Dedicated Hosts, see Dedicated Hosts
Configuration.
Contents
• Differences between Dedicated Hosts and Dedicated Instances (p. 484)
• Bring your own license (p. 484)
• Dedicated Host instance capacity (p. 485)
• Burstable T3 instances on Dedicated Hosts (p. 485)
• Dedicated Hosts restrictions (p. 486)
• Pricing and billing (p. 487)
• Work with Dedicated Hosts (p. 488)
• Work with shared Dedicated Hosts (p. 506)
There are no performance, security, or physical differences between Dedicated Instances and instances
on Dedicated Hosts. However, there are some differences between the two. The following table
highlights some of the key differences between Dedicated Hosts and Dedicated Instances:
These are the general steps to follow in order to bring your own volume licensed machine image into
Amazon EC2.
1. Verify that the license terms controlling the use of your machine images allow usage in a virtualized
cloud environment.
2. After you have verified that your machine image can be used within Amazon EC2, import it using VM
Import/Export. For information about how to import your machine image, see the VM Import/Export
User Guide.
3. After you import your machine image, you can launch instances from it onto active Dedicated Hosts in
your account.
4. When you run these instances, depending on the operating system, you might be required to activate
these instances against your own KMS server.
Note
To track how your images are used in AWS, enable host recording in AWS Config. You can use
AWS Config to record configuration changes to a Dedicated Host and use the output as a data
source for license reporting. For more information, see Track configuration changes (p. 515).
For example, when you allocate an R5 Dedicated Host, it has 2 sockets and 48 physical cores on which
you can run different instance sizes, such as r5.2xlarge and r5.4xlarge, up to the core capacity
associated with the host. However, for each instance family, there is a limit on the number of instances
that can be run for each instance size. For example, an R5 Dedicated Host supports up to 2 r5.8xlarge
instances, which use 32 of the physical cores. Additional R5 instances of other sizes can then be used
to fill the host to core capacity. For the supported number of instance sizes for each instance family, see
Dedicated Hosts Configuration.
The following table shows examples of different instance size combinations that you can run on a
Dedicated Host.
For more information about the instance families and instance size configurations supported on
Dedicated Hosts, see the Dedicated Hosts Configuration Table.
T3 Dedicated Hosts are best suited for running BYOL software with low to moderate CPU utilization. This
includes eligible per-socket, per-core, or per-VM software licenses, such as Windows Server, Windows
Desktop, SQL Server, SUSE Enterprise Linux Server, Red Hat Enterprise Linux, and Oracle Database.
Examples of workloads suited for T3 Dedicated Hosts are small and medium databases, virtual desktops,
development and test environments, code repositories, and product prototypes. T3 Dedicated Hosts are
not recommended for workloads with sustained high CPU utilization or for workloads that experience
correlated CPU bursts simultaneously.
T3 instances on Dedicated Hosts use the same credit model as T3 instances on shared tenancy hardware.
However, they support the standard credit mode only; they do not support the unlimited credit
mode. In standard mode, T3 instances on Dedicated Hosts earn, spend, and accrue credits in the same
way as burstable instances on shared tenancy hardware. They provide a baseline CPU performance with
the ability to burst above the baseline level. To burst above the baseline, the instance spends credits
that it has accrued in its CPU credit balance. When the accrued credits are depleted, CPU utilization is
lowered to the baseline level. For more information about standard mode, see How standard burstable
performance instances work (p. 268).
T3 Dedicated Hosts support all of the features offered by Amazon EC2 Dedicated Hosts, including
multiple instance sizes on a single host, Host resource groups, and BYOL.
T3 Dedicated Hosts run general purpose burstable T3 instances that share CPU resources of the host
by providing a baseline CPU performance and the ability to burst to a higher level when needed. This
enables T3 Dedicated Hosts, which have 48 cores, to support up to 192 instances per
host. In order to efficiently utilize the host’s resources and to provide the best instance performance, the
Amazon EC2 instance placement algorithm automatically calculates the supported number of instances
and instance size combinations that can be launched on the host.
T3 Dedicated Hosts support multiple instance types on the same host. All T3 instance sizes are
supported on Dedicated Hosts. You can run different combinations of T3 instances up to the CPU limit of
the host.
The following table lists the supported instance types, summarizes the performance of each instance
type, and indicates the maximum number of instances of each size that can be launched.
Instance type  vCPUs  Memory  Baseline CPU       Network burst      Amazon EBS burst   Max number of
                      (GiB)   utilization        bandwidth (Gbps)   bandwidth (Mbps)   instances per
                              per vCPU                                                 Dedicated Host
t3.nano        2      0.5     5%                 5                  Up to 2,085        192
t3.micro       2      1       10%                5                  Up to 2,085        192
t3.small       2      2       20%                5                  Up to 2,085        192
t3.medium      2      4       20%                5                  Up to 2,085        192
t3.large       2      8       30%                5                  2,780              96
t3.xlarge      4      16      40%                5                  2,780              48
t3.2xlarge     8      32      40%                5                  2,780              24
You can use the DedicatedHostCPUUtilization Amazon CloudWatch metric to monitor the vCPU
utilization of a Dedicated Host. The metric is available in the EC2 namespace and Per-Host-Metrics
dimension. For more information, see Dedicated Host metrics (p. 965).
• To run RHEL, SUSE Linux, and SQL Server on Dedicated Hosts, you must bring your own AMIs. RHEL,
SUSE Linux, and SQL Server AMIs that are offered by AWS or that are available on AWS Marketplace
can't be used with Dedicated Hosts. For more information on how to create your own AMI, see Bring
your own license (p. 484).
This restriction does not apply to hosts allocated for high memory instances (u-6tb1.metal,
u-9tb1.metal, u-12tb1.metal, u-18tb1.metal, and u-24tb1.metal). RHEL and SUSE Linux
AMIs that are offered by AWS or that are available on AWS Marketplace can be used with these hosts.
• You can allocate up to two On-Demand Dedicated Hosts per instance family, per Region. You can
request a limit increase: Request to Raise Allocation Limit on Amazon EC2 Dedicated Hosts.
• The instances that run on a Dedicated Host can only be launched in a VPC.
• Auto Scaling groups are supported when using a launch template that specifies a host resource group.
For more information, see Creating a Launch Template for an Auto Scaling Group in the Amazon EC2
Auto Scaling User Guide.
• Amazon RDS instances are not supported.
• The AWS Free Usage tier is not available for Dedicated Hosts.
• Instance placement control refers to managing instance launches onto Dedicated Hosts. You cannot
launch Dedicated Hosts into placement groups.
Payment Options
• On-Demand Dedicated Hosts (p. 487)
• Dedicated Host Reservations (p. 487)
• Savings Plans (p. 488)
• Pricing for Windows Server on Dedicated Hosts (p. 488)
On-Demand billing is automatically activated when you allocate a Dedicated Host to your account.
The On-Demand price for a Dedicated Host varies by instance family and Region. You pay per second
(with a minimum of 60 seconds) for each active Dedicated Host, regardless of the quantity or the size of
instances that you choose to launch on it. For more information about On-Demand pricing, see Amazon
EC2 Dedicated Hosts On-Demand Pricing.
You can release an On-Demand Dedicated Host at any time to stop accruing charges for it. For
information about releasing a Dedicated Host, see Release Dedicated Hosts (p. 502).
• No Upfront—No Upfront Reservations provide you with a discount on your Dedicated Host usage over
a term and do not require an upfront payment. Available in one-year and three-year terms. Only some
instance families support the three-year term for No Upfront Reservations.
• Partial Upfront—A portion of the reservation must be paid upfront and the remaining hours in the
term are billed at a discounted rate. Available in one-year and three-year terms.
• All Upfront—Provides the lowest effective price. Available in one-year and three-year terms and
covers the entire cost of the term upfront, with no additional future charges.
You must have active Dedicated Hosts in your account before you can purchase reservations. Each
reservation can cover one or more hosts that support the same instance family in a single Availability
Zone. Reservations are applied to the instance family on the host, not the instance size. If you have
three Dedicated Hosts with different instance sizes (m4.xlarge, m4.medium, and m4.large), you can
associate a single m4 reservation with all those Dedicated Hosts. The instance family and Availability
Zone of the reservation must match that of the Dedicated Hosts you want to associate it with.
When a reservation is associated with a Dedicated Host, the Dedicated Host can't be released until the
reservation's term is over.
For more information about reservation pricing, see Amazon EC2 Dedicated Hosts Pricing.
Savings Plans
Savings Plans are a flexible pricing model that offers significant savings over On-Demand Instances. With
Savings Plans, you make a commitment to a consistent amount of usage, in USD per hour, for a term of
one or three years. This provides you with the flexibility to use the Dedicated Hosts that best meet your
needs and continue to save money, instead of making a commitment to a specific Dedicated Host. For
more information, see the AWS Savings Plans User Guide.
In addition, you can also use Windows Server AMIs provided by Amazon to run the latest versions of
Windows Server on Dedicated Hosts. This is common for scenarios where you have existing SQL Server
licenses eligible to run on Dedicated Hosts, but need Windows Server to run the SQL Server workload.
Windows Server AMIs provided by Amazon are supported on current generation instance types (p. 227)
only. For more information, see Amazon EC2 Dedicated Hosts Pricing.
If you no longer need an On-Demand host, you can stop the instances running on the host, direct them
to launch on a different host, and then release the host.
Dedicated Hosts are also integrated with AWS License Manager. With License Manager, you can create a
host resource group, which is a collection of Dedicated Hosts that are managed as a single entity. When
creating a host resource group, you specify the host management preferences, such as auto-allocate and
auto-release, for the Dedicated Hosts. This allows you to launch instances onto Dedicated Hosts without
manually allocating and managing those hosts. For more information, see Host Resource Groups in the
AWS License Manager User Guide.
Contents
• Allocate Dedicated Hosts (p. 489)
• Launch instances onto a Dedicated Host (p. 491)
• Launch instances into a host resource group (p. 493)
Support for multiple instance sizes of the same instance family on the same Dedicated Host is available
for the following instance families: c5, m5, r5, c5n, r5n, and m5n. Other instance families support only
one instance size on the same Dedicated Host.
Due to a hardware limitation with N-type Dedicated Hosts, such as C5n, M5n, and R5n, you cannot mix
smaller instance sizes (large, xlarge, and 2xlarge) with larger instance sizes (4xlarge, 9xlarge,
18xlarge, and .metal). If you require smaller and larger instance sizes on N-type hosts at the same
time, you must allocate separate hosts for the smaller and larger instance sizes.
New console
• To configure the Dedicated Host to support multiple instance types in the selected instance
family, for Support multiple instance types, choose Enable. Enabling this allows you to
launch different instance sizes from the same instance family onto the Dedicated Host.
For example, if you choose the m5 instance family and choose this option, you can launch
m5.xlarge and m5.4xlarge instances onto the Dedicated Host.
• To configure the Dedicated Host to support a single instance type within the selected instance
family, clear Support multiple instance types, and then for Instance type, choose the
instance type to support. This allows you to launch a single instance type on the Dedicated
Host. For example, if you choose this option and specify m5.4xlarge as the supported
instance type, you can launch only m5.4xlarge instances onto the Dedicated Host.
5. For Availability Zone, choose the Availability Zone in which to allocate the Dedicated Host.
6. To allow the Dedicated Host to accept untargeted instance launches that match its instance
type, for Instance auto-placement, choose Enable. For more information about auto-
placement, see Understand auto-placement and affinity (p. 494).
7. To enable host recovery for the Dedicated Host, for Host recovery, choose Enable. For more
information, see Host recovery (p. 510).
8. For Quantity, enter the number of Dedicated Hosts to allocate.
9. (Optional) Choose Add new tag and enter a tag key and a tag value.
10. Choose Allocate.
Old console
• To configure the Dedicated Host to support multiple instance types in the selected instance
family, select Support multiple instance types. Enabling this allows you to launch different
instance sizes from the same instance family onto the Dedicated Host. For example, if you
choose the m5 instance family and choose this option, you can launch m5.xlarge and
m5.4xlarge instances onto the Dedicated Host. The instance family must be powered by the
Nitro System.
• To configure the Dedicated Host to support a single instance type within the selected instance
family, clear Support multiple instance types, and then for Instance type, choose the
instance type to support. This allows you to launch a single instance type on the Dedicated
Host. For example, if you choose this option and specify m5.4xlarge as the supported
instance type, you can launch only m5.4xlarge instances onto the Dedicated Host.
5. For Availability Zone, choose the Availability Zone in which to allocate the Dedicated Host.
6. To allow the Dedicated Host to accept untargeted instance launches that match its instance
type, for Instance auto-placement, choose Enable. For more information about auto-
placement, see Understand auto-placement and affinity (p. 494).
7. To enable host recovery for the Dedicated Host, for Host recovery choose Enable. For more
information, see Host recovery (p. 510).
8. For Quantity, enter the number of Dedicated Hosts to allocate.
9. (Optional) Choose Add Tag and enter a tag key and a tag value.
10. Choose Allocate host.
AWS CLI
Use the allocate-hosts AWS CLI command. The following command allocates a Dedicated Host that
supports multiple instance types from the m5 instance family in the us-east-1a Availability Zone.
The host also has host recovery enabled and auto-placement disabled.

aws ec2 allocate-hosts --instance-family "m5" --availability-zone "us-east-1a" --auto-placement "off" --host-recovery "on" --quantity 1

The following command allocates a Dedicated Host that supports untargeted m4.large instance
launches in the eu-west-1a Availability Zone, enables host recovery, and applies a tag with a key of
purpose and a value of production.

aws ec2 allocate-hosts --instance-type "m4.large" --availability-zone "eu-west-1a" --auto-placement "on" --host-recovery "on" --quantity 1 --tag-specifications 'ResourceType=dedicated-host,Tags=[{Key=purpose,Value=production}]'
PowerShell
Use the New-EC2Host AWS Tools for Windows PowerShell command. The following command
allocates a Dedicated Host that supports multiple instance types from the m5 instance family in the
us-east-1a Availability Zone. The host also has host recovery enabled and auto-placement disabled.

PS C:\> New-EC2Host -InstanceFamily m5 -AvailabilityZone us-east-1a -AutoPlacement off -HostRecovery on -Quantity 1

The following commands allocate a Dedicated Host that supports untargeted m4.large instance
launches in the eu-west-1a Availability Zone, enable host recovery, and apply a tag with a key of
purpose and a value of production.

The TagSpecification parameter used to tag a Dedicated Host on creation requires an object
that specifies the type of resource to be tagged, the tag key, and the tag value. The following
commands create the required object.

PS C:\> $tag = @{ Key="purpose"; Value="production" }
PS C:\> $tagspec = New-Object Amazon.EC2.Model.TagSpecification
PS C:\> $tagspec.ResourceType = "dedicated-host"
PS C:\> $tagspec.Tags.Add($tag)

The following command allocates the Dedicated Host and applies the tag specified in the $tagspec
object.

PS C:\> New-EC2Host -InstanceType m4.large -AvailabilityZone eu-west-1a -AutoPlacement on -HostRecovery on -Quantity 1 -TagSpecification $tagspec
Before you launch your instances, take note of the limitations. For more information, see Dedicated
Hosts restrictions (p. 486).
You can launch an instance onto a Dedicated Host using the following methods.
Console
To launch an instance onto a specific Dedicated Host from the Dedicated Hosts page
3. On the Dedicated Hosts page, select a host and choose Actions, Launch Instance(s) onto Host.
4. Select an AMI from the list. SQL Server, SUSE, and RHEL AMIs provided by Amazon EC2 can't be
used with Dedicated Hosts.
5. On the Choose an Instance Type page, select the instance type to launch and then choose Next:
Configure Instance Details.
If the Dedicated Host supports a single instance type only, the supported instance type is
selected by default and can't be changed.
If the Dedicated Host supports multiple instance types, you must select an instance type within
the supported instance family based on the available instance capacity of the Dedicated Host.
We recommend that you launch the larger instance sizes first, and then fill the remaining
instance capacity with the smaller instance sizes as needed.
6. On the Configure Instance Details page, configure the instance settings to suit your needs, and
then for Affinity, choose one of the following options:
• Off—The instance launches onto the specified host, but it is not guaranteed to restart on the
same Dedicated Host if stopped.
• Host—If stopped, the instance always restarts on this specific host.
For more information about Affinity, see Understand auto-placement and affinity (p. 494).
The Tenancy and Host options are pre-configured based on the host that you selected.
7. Choose Review and Launch.
8. On the Review Instance Launch page, choose Launch.
9. When prompted, select an existing key pair or create a new one, and then choose Launch
Instances.
To launch an instance onto a Dedicated Host using the Launch Instance wizard
For more information, see Understand auto-placement and affinity (p. 494).
If you are unable to see these settings, check that you have selected a VPC in the Network
menu.
6. Choose Review and Launch.
AWS CLI
Use the run-instances AWS CLI command and specify the instance affinity, tenancy, and host in the
Placement request parameter.
PowerShell
Use the New-EC2Instance AWS Tools for Windows PowerShell command and specify the instance
affinity, tenancy, and host in the Placement request parameter.
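As a sketch, the Placement structure those commands take might be assembled like this (the host ID shown is a hypothetical placeholder):

```python
# Build the Placement structure for a targeted launch onto a Dedicated
# Host. The host ID below is a hypothetical placeholder.
def host_placement(host_id: str, affinity: str = "host") -> dict:
    """Placement for host tenancy; affinity 'host' pins the instance
    to this host across stop/start, while 'default' does not."""
    if affinity not in ("default", "host"):
        raise ValueError("affinity must be 'default' or 'host'")
    return {"Tenancy": "host", "HostId": host_id, "Affinity": affinity}

placement = host_placement("h-0123456789abcdef0")
print(placement["Affinity"])  # host
```

Omitting HostId while keeping host tenancy corresponds to an untargeted launch, which lands on any matching host that has auto-placement enabled.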
• You must associate a core- or socket-based license configuration with the AMI.
• You can't use SQL Server, SUSE, or RHEL AMIs provided by Amazon EC2 with Dedicated Hosts.
• You can't target a specific host by choosing a host ID, and you can't enable instance affinity when
launching an instance into a host resource group.
You can launch an instance into a host resource group using the following methods.
New console
Old console
AWS CLI
Use the run-instances AWS CLI command, and in the Placement request parameter, omit the
Tenancy option and specify the host resource group ARN.
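As a sketch (the AMI ID and host resource group ARN are placeholders):

```shell
# Launch into a host resource group. Tenancy is omitted from --placement,
# as required when specifying a host resource group ARN.
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type m5.large \
    --placement "HostResourceGroupArn=arn:aws:resource-groups:us-east-1:123456789012:group/my-host-group"
```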
PowerShell
Use the New-EC2Instance AWS Tools for Windows PowerShell command, and in the Placement
request parameter, omit the Tenancy option and specify the host resource group ARN.
Auto-placement
Auto-placement is configured at the host level. It allows you to manage whether instances that you
launch are launched onto a specific host, or onto any available host that has matching configurations.
When the auto-placement of a Dedicated Host is disabled, it only accepts Host tenancy instance launches
that specify its unique host ID. This is the default setting for new Dedicated Hosts.
When the auto-placement of a Dedicated Host is enabled, it accepts any untargeted instance launches
that match its instance type configuration.
When launching an instance, you need to configure its tenancy. Launching an instance onto a Dedicated
Host without providing a specific HostId enables it to launch on any Dedicated Host that has auto-
placement enabled and that matches its instance type.
Host affinity
Host affinity is configured at the instance level. It establishes a launch relationship between an instance
and a Dedicated Host.
When affinity is set to Host, an instance launched onto a specific host always restarts on the same host
if stopped. This applies to both targeted and untargeted launches.
When affinity is set to Off, and you stop and restart the instance, it can be restarted on any available
host. However, it tries to launch back onto the last Dedicated Host on which it ran (on a best-effort
basis).
New console
Old console
AWS CLI
Use the modify-hosts AWS CLI command. The following example enables auto-placement for the
specified Dedicated Host.
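As a sketch (the host ID is a placeholder):

```shell
# Enable auto-placement for the specified Dedicated Host.
aws ec2 modify-hosts --host-ids h-0abcdef1234567890 --auto-placement on
```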
PowerShell
Use the Edit-EC2Host AWS Tools for Windows PowerShell command. The following example enables
auto-placement for the specified Dedicated Host.
You can modify a Dedicated Host to change the instance types that it supports. If it currently supports
a single instance type, you can modify it to support multiple instance types within that instance family.
Similarly, if it currently supports multiple instance types, you can modify it to support a specific instance
type only.
To modify a Dedicated Host to support multiple instance types, you must first stop all running instances
on the host. The modification takes approximately 10 minutes to complete. The Dedicated Host
transitions to the pending state while the modification is in progress. You can't start stopped instances
or launch new instances on the Dedicated Host while it is in the pending state.
To modify a Dedicated Host that supports multiple instance types to support only a single instance type,
the host must either have no running instances, or the running instances must be of the instance type
that you want the host to support. For example, to modify a host that supports multiple instance types
in the m5 instance family to support only m5.large instances, the Dedicated Host must either have no
running instances, or it must have only m5.large instances running on it.
You can modify the supported instance types using one of the following methods.
New console
• If the Dedicated Host currently supports a specific instance type, Support multiple instance
types is not enabled, and Instance type lists the supported instance type. To modify the host
to support multiple types in the current instance family, for Support multiple instance types,
choose Enable.
You must first stop all instances running on the host before modifying it to support multiple
instance types.
• If the Dedicated Host currently supports multiple instance types in an instance family,
Enabled is selected for Support multiple instance types. To modify the host to support
a specific instance type, for Support multiple instance types, clear Enable, and then for
Instance type, select the specific instance type to support.
You can't change the instance family supported by the Dedicated Host.
5. Choose Save.
Old console
3. Select the Dedicated Host to modify and choose Actions, Modify Supported Instance Types.
4. Do one of the following, depending on the current configuration of the Dedicated Host:
• If the Dedicated Host currently supports a specific instance type, No is selected for Support
multiple instance types. To modify the host to support multiple types in the current instance
family, for Support multiple instance types, select Yes.
You must first stop all instances running on the host before modifying it to support multiple
instance types.
• If the Dedicated Host currently supports multiple instance types in an instance family, Yes is
selected for Support multiple instance types, and Instance family displays the supported
instance family. To modify the host to support a specific instance type, for Support multiple
instance types, select No, and then for Instance type, select the specific instance type to
support.
You can't change the instance family supported by the Dedicated Host.
5. Choose Save.
AWS CLI
The following command modifies a Dedicated Host to support multiple instance types within the m5
instance family.
The following command modifies a Dedicated Host to support m5.xlarge instances only.
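As a sketch of both modifications (the host ID is a placeholder):

```shell
# Modify the host to support multiple instance types in the m5 family.
aws ec2 modify-hosts --host-ids h-0abcdef1234567890 --instance-family m5

# Modify the host to support m5.xlarge instances only.
aws ec2 modify-hosts --host-ids h-0abcdef1234567890 --instance-type m5.xlarge
```

Note that --instance-family and --instance-type are mutually exclusive within a single call, so the two modifications are separate commands.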
PowerShell
The following command modifies a Dedicated Host to support multiple instance types within the m5
instance family.
The following command modifies a Dedicated Host to support m5.xlarge instances only.
Note
For T3 instances, you can't change the tenancy from dedicated to host, or from host to
dedicated. Attempting to make one of these unsupported tenancy changes results in the
InvalidTenancy error code.
You can modify an instance's tenancy and affinity using the following methods.
Console
For more information, see Understand auto-placement and affinity (p. 494).
6. Choose Save.
AWS CLI
Use the modify-instance-placement AWS CLI command. The following example changes the
specified instance's affinity from default to host, and specifies the Dedicated Host that the
instance has affinity with.
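As a sketch (the instance ID and host ID are placeholders; the instance must be stopped before you modify its placement):

```shell
# Change the instance's affinity to host and target a specific Dedicated Host.
aws ec2 modify-instance-placement \
    --instance-id i-0abcdef1234567890 \
    --affinity host \
    --host-id h-0abcdef1234567890
```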
PowerShell
Use the Edit-EC2InstancePlacement AWS Tools for Windows PowerShell command. The following
example changes the specified instance's affinity from default to host, and specifies the
Dedicated Host that the instance has affinity with.
New console
Available vCPUs indicates the vCPUs that are available on the Dedicated Host for new instance
launches. For example, a Dedicated Host that supports multiple instance types within the c5
instance family, and that has no instances running on it, has 72 available vCPUs. This means that
you can launch different combinations of instance types onto the Dedicated Host to consume
the 72 available vCPUs.
For information about instances running on the host, choose Running instances.
Old console
AWS CLI
The following example uses the describe-hosts (AWS CLI) command to view the available instance
capacity for a Dedicated Host that supports multiple instance types within the c5 instance family.
The Dedicated Host already has two c5.4xlarge instances and four c5.2xlarge instances running
on it.
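As a sketch of the command whose response contains the AvailableInstanceCapacity excerpt that follows (the host ID is a placeholder):

```shell
# View the available capacity for a specific Dedicated Host.
aws ec2 describe-hosts --host-ids h-0abcdef1234567890
```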
"AvailableInstanceCapacity": [
    {
        "AvailableCapacity": 2,
        "InstanceType": "c5.xlarge",
        "TotalCapacity": 18
    },
    {
        "AvailableCapacity": 4,
        "InstanceType": "c5.large",
        "TotalCapacity": 36
    }
],
"AvailableVCpus": 8
PowerShell
You can also apply tags to Dedicated Hosts at the time of creation. For more information, see Allocate
Dedicated Hosts (p. 489).
New console
Old console
AWS CLI
The following command tags the specified Dedicated Host with Owner=TeamA.
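As a sketch (the host ID is a placeholder):

```shell
# Tag the specified Dedicated Host with Owner=TeamA.
aws ec2 create-tags --resources h-0abcdef1234567890 --tags Key=Owner,Value=TeamA
```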
PowerShell
The New-EC2Tag command needs a Tag object, which specifies the key and value pair to be used
for the Dedicated Host tag. The following commands create a Tag object named $tag, with a key
and value pair of Owner and TeamA respectively.
The following command tags the specified Dedicated Host with the $tag object.
Console
AWS CLI
Use the describe-hosts AWS CLI command and then review the state property in the hostSet
response element.
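As a sketch, the following command lists the ID and state of each Dedicated Host in the account:

```shell
# List host IDs and their current state.
aws ec2 describe-hosts --query "Hosts[].[HostId,State]" --output text
```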
PowerShell
Use the Get-EC2Host AWS Tools for Windows PowerShell command and then review the state
property in the hostSet response element.
State Description
available AWS hasn't detected an issue with the Dedicated Host. No maintenance or
repairs are scheduled. Instances can be launched onto this Dedicated Host.
released The Dedicated Host has been released. The host ID is no longer in use.
Released hosts can't be reused.
under-assessment AWS is exploring a possible issue with the Dedicated Host. If action must be
taken, you are notified via the AWS Management Console or email. Instances
can't be launched onto a Dedicated Host in this state.
pending The Dedicated Host cannot be used for new instance launches. It is either
being modified to support multiple instance types (p. 496), or a host
recovery (p. 510) is in progress.
permanent-failure An unrecoverable failure has been detected. You receive an eviction notice
through your instances and by email. Your instances might continue to run.
If you stop or terminate all instances on a Dedicated Host with this state,
AWS retires the host. AWS does not restart instances in this state. Instances
can't be launched onto Dedicated Hosts in this state.
released-permanent-failure AWS permanently releases Dedicated Hosts that have failed and no longer
have running instances on them. The Dedicated Host ID is no longer
available for use.
New console
Old console
AWS CLI
PowerShell
After you release a Dedicated Host, you can't reuse the same host or host ID again, and you are no longer
charged On-Demand billing rates for it. The state of the Dedicated Host is changed to released, and
you are not able to launch any instances onto that host.
Note
If you have recently released Dedicated Hosts, it can take some time for them to stop counting
towards your limit. During this time, you might experience LimitExceeded errors when trying
to allocate new Dedicated Hosts. If this is the case, try allocating new hosts again after a few
minutes.
The instances that were stopped are still available for use and are listed on the Instances page. They
retain their host tenancy setting.
Console
To purchase reservations
• Host instance family—The options listed correspond with the Dedicated Hosts in your
account that are not already assigned to a reservation.
• Availability Zone—The Availability Zone of the Dedicated Hosts in your account that aren't
already assigned to a reservation.
• Payment option—The payment option for the offering.
• Term—The term of the reservation, which can be one or three years.
4. Choose Find offering and select an offering that matches your requirements.
5. Choose the Dedicated Hosts to associate with the reservation, and then choose Review.
6. Review your order and choose Order.
AWS CLI
To purchase reservations
1. Use the describe-host-reservation-offerings AWS CLI command to list the available offerings
that match your needs. The following example lists the offerings that support instances in the
m4 instance family and have a one-year term.
Note
The term is specified in seconds. A one-year term includes 31,536,000 seconds, and a
three-year term includes 94,608,000 seconds.
The command returns a list of offerings that match your criteria. Note the offeringId of the
offering to purchase.
2. Use the purchase-host-reservation AWS CLI command to purchase the offering and provide
the offeringId noted in the previous step. The following example purchases the specified
reservation and associates it with a specific Dedicated Host that is already allocated in the AWS
account, and it applies a tag with a key of purpose and a value of production.
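As a sketch of both steps (the offering ID, host ID, and tag values are placeholders):

```shell
# Step 1: List one-year offerings (31,536,000 seconds) for the m4 family.
aws ec2 describe-host-reservation-offerings \
    --filter Name=instance-family,Values=m4 \
    --max-duration 31536000

# Step 2: Purchase the offering, associate it with an allocated host,
# and tag the reservation with purpose=production.
aws ec2 purchase-host-reservation \
    --offering-id hro-0abcdef1234567890 \
    --host-id-set h-0abcdef1234567890 \
    --tag-specifications 'ResourceType=host-reservation,Tags=[{Key=purpose,Value=production}]'
```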
PowerShell
To purchase reservations
1. Use the Get-EC2HostReservationOffering AWS Tools for Windows PowerShell command to list
the available offerings that match your needs. The following examples list the offerings that
support instances in the m4 instance family and have a one-year term.
Note
The term is specified in seconds. A one-year term includes 31,536,000 seconds, and a
three-year term includes 94,608,000 seconds.
The command returns a list of offerings that match your criteria. Note the offeringId of the
offering to purchase.
2. Use the New-EC2HostReservation AWS Tools for Windows PowerShell command to purchase
the offering and provide the offeringId noted in the previous step. The following example
purchases the specified reservation and associates it with a specific Dedicated Host that is
already allocated in the AWS account.
You can view details of your Dedicated Host reservations using the following methods.
Console
AWS CLI
PowerShell
PS C:\> Get-EC2HostReservation
You can tag a Dedicated Host Reservation using the command line tools only.
AWS CLI
PowerShell
The New-EC2Tag command needs a Tag parameter, which specifies the key and value pair to be
used for the Dedicated Host Reservation tag. The following commands create the Tag parameter.
Dedicated Host sharing enables you to share your Dedicated Hosts across AWS accounts. In this
model, the AWS account that owns the Dedicated Host (owner) shares it with other AWS accounts
(consumers). Consumers can launch instances onto Dedicated Hosts that are shared with them in
the same way that they would launch instances onto Dedicated Hosts that they allocate in their own
account. The owner is responsible for managing the Dedicated Host and the instances that they launch
onto it. Owners can't modify instances that consumers launch onto shared Dedicated Hosts. Consumers
are responsible for managing the instances that they launch onto Dedicated Hosts shared with them.
Consumers can't view or modify instances owned by other consumers or by the Dedicated Host owner,
and they can't modify Dedicated Hosts that are shared with them.
Contents
• Prerequisites for sharing Dedicated Hosts (p. 507)
• Limitations for sharing Dedicated Hosts (p. 507)
• Related services (p. 507)
• Share across Availability Zones (p. 507)
• Share a Dedicated Host (p. 507)
• Unshare a shared Dedicated Host (p. 508)
• Identify a shared Dedicated Host (p. 509)
• View instances running on a shared Dedicated Host (p. 509)
• Shared Dedicated Host permissions (p. 510)
• Billing and metering (p. 510)
• Dedicated Host limits (p. 510)
• Host recovery and Dedicated Host sharing (p. 510)
Related services
AWS Resource Access Manager
Dedicated Host sharing integrates with AWS Resource Access Manager (AWS RAM). AWS RAM is a service
that enables you to share your AWS resources with any AWS account or through AWS Organizations.
With AWS RAM, you share resources that you own by creating a resource share. A resource share specifies
the resources to share, and the consumers with whom to share them. Consumers can be individual AWS
accounts, or organizational units or an entire organization from AWS Organizations.
For more information about AWS RAM, see the AWS RAM User Guide.
To identify the location of your Dedicated Hosts relative to your accounts, you must use the Availability
Zone ID (AZ ID). The Availability Zone ID is a unique and consistent identifier for an Availability Zone
across all AWS accounts. For example, use1-az1 is an Availability Zone ID for the us-east-1 Region
and it is the same location in every AWS account.
To view the Availability Zone IDs for the Availability Zones in your account
If you share a Dedicated Host with auto-placement enabled, keep the following in mind as it could lead
to unintended Dedicated Host usage:
• If consumers launch instances with Dedicated Host tenancy and they do not have capacity on a
Dedicated Host that they own in their account, the instance is automatically launched onto the shared
Dedicated Host.
To share a Dedicated Host, you must add it to a resource share. A resource share is an AWS RAM resource
that lets you share your resources across AWS accounts. A resource share specifies the resources to share,
and the consumers with whom they are shared. You can add the Dedicated Host to an existing resource
share, or you can add it to a new resource share.
If you are part of an organization in AWS Organizations and sharing within your organization is enabled,
consumers in your organization are automatically granted access to the shared Dedicated Host.
Otherwise, consumers receive an invitation to join the resource share and are granted access to the
shared Dedicated Host after accepting the invitation.
Note
After you share a Dedicated Host, it could take a few minutes for consumers to have access to it.
You can share a Dedicated Host that you own by using one of the following methods.
To share a Dedicated Host that you own using the Amazon EC2 console
It could take a few minutes for consumers to get access to the shared host.
To share a Dedicated Host that you own using the AWS RAM console
To share a Dedicated Host that you own using the AWS CLI
• Consumers with whom the Dedicated Host was shared can no longer launch new instances onto it.
• Instances owned by consumers that were running on the Dedicated Host at the time of unsharing
continue to run but are scheduled for retirement. Consumers receive retirement notifications for
the instances and they have two weeks to take action on the notifications. However, if the Dedicated
Host is reshared with the consumer within the retirement notice period, the instance retirements are
cancelled.
To unshare a shared Dedicated Host that you own, you must remove it from the resource share. You can
do this by using one of the following methods.
To unshare a shared Dedicated Host that you own using the Amazon EC2 console
To unshare a shared Dedicated Host that you own using the AWS RAM console
To unshare a shared Dedicated Host that you own using the AWS CLI
Command line
Use the describe-hosts command. The command returns the Dedicated Hosts that you own and
Dedicated Hosts that are shared with you.
To view the instances running on a shared Dedicated Host using the Amazon EC2 console
Command line
To view the instances running on a shared Dedicated Host using the AWS CLI
Use the describe-hosts command. The command returns the instances running on each Dedicated
Host. Owners see all of the instances running on the host. Consumers only see running instances
that they launched on the shared hosts. InstanceOwnerId shows the AWS account ID of the
instance owner.
Owners are billed for Dedicated Hosts that they share. Consumers are not billed for instances that they
launch onto shared Dedicated Hosts.
Dedicated Host Reservations continue to provide billing discounts for shared Dedicated Hosts. Only
Dedicated Host owners can purchase Dedicated Host Reservations for shared Dedicated Hosts that they
own.
Host recovery
Dedicated Host auto recovery restarts your instances on to a new replacement host when certain
problematic conditions are detected on your Dedicated Host. Host recovery reduces the need for
manual intervention and lowers the operational burden in the event of an unexpected Dedicated Host
failure caused by system power or network connectivity issues. Other Dedicated Host issues require
manual intervention to recover.
Contents
• Host recovery basics (p. 511)
When a system power or network connectivity failure is detected on your Dedicated Host, Dedicated
Host auto recovery is initiated and Amazon EC2 automatically allocates a replacement Dedicated Host.
The replacement Dedicated Host receives a new host ID, but retains the same attributes as the original
Dedicated Host, including:
• Availability Zone
• Instance type
• Tags
• Auto placement settings
• Reservation
When the replacement Dedicated Host is allocated, the instances are recovered on to the replacement
Dedicated Host. The recovered instances retain the same attributes as the original instances, including:
• Instance ID
• Private IP addresses
• Elastic IP addresses
• EBS volume attachments
• All instance metadata
Additionally, the built-in integration with AWS License Manager automates the tracking and
management of your licenses.
Note
AWS License Manager integration is supported only in Regions in which AWS License Manager is
available.
If instances have a host affinity relationship with the impaired Dedicated Host, the recovered instances
establish host affinity with the replacement Dedicated Host.
When all of the instances have been recovered on to the replacement Dedicated Host, the impaired
Dedicated Host is released, and the replacement Dedicated Host becomes available for use.
When host recovery is initiated, the AWS account owner is notified by email and by an AWS Personal
Health Dashboard event. A second notification is sent after the host recovery has been successfully
completed.
If you are using AWS License Manager to track your licenses, AWS License Manager allocates new licenses
for the replacement Dedicated Host based on the license configuration limits. If the license configuration
has hard limits that will be breached as a result of the host recovery, the recovery process is not allowed
and you are notified of the host recovery failure through an Amazon SNS notification (if notification
settings have been configured for AWS License Manager). If the license configuration has soft limits
that will be breached as a result of the host recovery, the recovery is allowed to continue and you are
notified of the limit breach through an Amazon SNS notification. For more information, see Using
License Configurations and Settings in License Manager in the AWS License Manager User Guide.
Dedicated Host auto recovery does not occur when hardware or software issues impact the physical
host and manual intervention is required. You will receive a retirement notification in the AWS Personal
Health Dashboard, an Amazon CloudWatch event, and the AWS account owner email address receives a
message regarding the Dedicated Host failure.
Stopped instances are not recovered on to the replacement Dedicated Host. If you attempt to start
a stopped instance that targets the impaired Dedicated Host, the instance start fails. We recommend
that you modify the stopped instance to either target a different Dedicated Host, or to launch on any
available Dedicated Host with matching configurations and auto-placement enabled.
Instances with instance storage are not recovered on to the replacement Dedicated Host. As a remedial
measure, the impaired Dedicated Host is marked for retirement and you receive a retirement notification
after the host recovery is complete. Follow the remedial steps described in the retirement notification
within the specified time period to manually recover the remaining instances on the impaired Dedicated
Host.
To recover instances that are not supported, see Manually recover unsupported instances (p. 514).
Note
For supported metal instance types, Dedicated Host auto recovery takes longer to detect failures
and recover than it does for non-metal instance types.
Contents
• Enable host recovery (p. 512)
• Disable host recovery (p. 513)
• View the host recovery configuration (p. 513)
You can enable host recovery at the time of Dedicated Host allocation or after allocation.
For more information about enabling host recovery at the time of Dedicated Host allocation, see Allocate
Dedicated Hosts (p. 489).
To view the host recovery configuration for a Dedicated Host using the console
To view the host recovery configuration for a Dedicated Host using the AWS CLI
The HostRecovery response element indicates whether host recovery is enabled or disabled.
After the replacement Dedicated Host is allocated, it enters the pending state. It remains in this state
until the host recovery process is complete. You can't launch instances on to the replacement Dedicated
Host while it is in the pending state. Recovered instances on the replacement Dedicated Host remain in
the impaired state during the recovery process.
After the host recovery is complete, the replacement Dedicated Host enters the available state,
and the recovered instances return to the running state. You can launch instances on to the
replacement Dedicated Host after it enters the available state. The original impaired Dedicated Host is
permanently released and it enters the released-permanent-failure state.
If the impaired Dedicated Host has instances that do not support host recovery, such as instances with
instance store-backed volumes, the Dedicated Host is not released. Instead, it is marked for retirement
and enters the permanent-failure state.
For EBS-backed instances that could not be automatically recovered, we recommend that you manually
stop and start the instances to recover them onto a new Dedicated Host. For more information about
stopping your instance, and about the changes that occur in your instance configuration when it's
stopped, see Stop and start your instance (p. 622).
For instance store-backed instances that could not be automatically recovered, we recommend that you
do the following:
1. Launch a replacement instance on a new Dedicated Host from your most recent AMI.
2. Migrate all of the necessary data to the replacement instance.
3. Terminate the original instance on the impaired Dedicated Host.
Related services
Dedicated Host integrates with the following services:
• AWS License Manager—Tracks licenses across your Amazon EC2 Dedicated Hosts (supported only
in Regions in which AWS License Manager is available). For more information, see the AWS License
Manager User Guide.
Pricing
There are no additional charges for using host recovery, but the usual Dedicated Host charges apply. For
more information, see Amazon EC2 Dedicated Hosts Pricing.
As soon as host recovery is initiated, you are no longer billed for the impaired Dedicated Host. Billing for
the replacement Dedicated Host begins only after it enters the available state.
If the impaired Dedicated Host was billed using the On-Demand rate, the replacement Dedicated Host
is also billed using the On-Demand rate. If the impaired Dedicated Host had an active Dedicated Host
Reservation, it is transferred to the replacement Dedicated Host.
AWS Config records configuration information for Dedicated Hosts and instances individually, and pairs
this information through relationships. There are three reporting conditions:
• AWS Config recording status—When On, AWS Config is recording one or more AWS resource types,
which can include Dedicated Hosts and Dedicated Instances. To capture the information required for
license reporting, verify that hosts and instances are being recorded with the following fields.
• Host recording status—When Enabled, the configuration information for Dedicated Hosts is recorded.
• Instance recording status—When Enabled, the configuration information for Dedicated Instances is
recorded.
If any of these three conditions are disabled, the icon in the Edit Config Recording button is red. To
derive the full benefit of this tool, ensure that all three recording methods are enabled. When all three
are enabled, the icon is green. To edit the settings, choose Edit Config Recording. You are directed to
the Set up AWS Config page in the AWS Config console, where you can set up AWS Config and start
recording for your hosts, instances, and other supported resource types. For more information, see
Setting up AWS Config using the Console in the AWS Config Developer Guide.
Note
AWS Config records your resources after it discovers them, which might take several minutes.
After AWS Config starts recording configuration changes to your hosts and instances, you can get the
configuration history of any host that you have allocated or released and any instance that you have
launched, stopped, or terminated. For example, at any point in the configuration history of a Dedicated
Host, you can look up how many instances are launched on that host, along with the number of sockets
and cores on the host. For any of those instances, you can also look up the ID of its Amazon Machine
Image (AMI). You can use this information to report on licensing for your own server-bound software
that is licensed per-socket or per-core.
• By using the AWS Config console. For each recorded resource, you can view a timeline page, which
provides a history of configuration details. To view this page, choose the gray icon in the Config
Timeline column of the Dedicated Hosts page. For more information, see Viewing Configuration
Details in the AWS Config Console in the AWS Config Developer Guide.
• By running AWS CLI commands. First, you can use the list-discovered-resources command to get a
list of all hosts and instances. Then, you can use the get-resource-config-history command to get the
configuration details of a host or instance for a specific time interval. For more information, see View
Configuration Details Using the CLI in the AWS Config Developer Guide.
• By using the AWS Config API in your applications. First, you can use the ListDiscoveredResources action
to get a list of all hosts and instances. Then, you can use the GetResourceConfigHistory action to get
the configuration details of a host or instance for a specific time interval.
For example, to get a list of all of your Dedicated Hosts from AWS Config, run a CLI command such as the
following.
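As a sketch:

```shell
# List the Dedicated Hosts that AWS Config has discovered and recorded.
aws configservice list-discovered-resources --resource-type AWS::EC2::Host
```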
To obtain the configuration history of a Dedicated Host from AWS Config, run a CLI command such as
the following.
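As a sketch (the host ID is a placeholder):

```shell
# Retrieve the configuration history of a specific Dedicated Host.
aws configservice get-resource-config-history \
    --resource-type AWS::EC2::Host \
    --resource-id h-0abcdef1234567890
```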
For more information, see Viewing Configuration Details in the AWS Config Console.
• AWS CLI: Viewing Configuration Details (AWS CLI) in the AWS Config Developer Guide.
• Amazon EC2 API: GetResourceConfigHistory.
Dedicated Instances
Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware
that's dedicated to a single customer. Dedicated Instances that belong to different AWS accounts are
physically isolated at a hardware level, even if those accounts are linked to a single payer account.
However, Dedicated Instances might share hardware with other instances from the same AWS account
that are not Dedicated Instances.
Note
A Dedicated Host is also a physical server that's dedicated for your use. With a Dedicated
Host, you have visibility and control over how instances are placed on the server. For more
information, see Dedicated Hosts (p. 483).
Topics
• Dedicated Instance basics (p. 516)
• Supported features (p. 517)
• Differences between Dedicated Instances and Dedicated Hosts (p. 518)
• Dedicated Instances limitations (p. 518)
• Pricing for Dedicated Instances (p. 519)
• Work with Dedicated Instances (p. 519)
When you launch an instance, the instance's tenancy attribute determines the hardware that it runs on.
To launch a Dedicated Instance, you must specify an instance tenancy of dedicated.
Note
Instances with a tenancy value of default run on shared tenancy hardware. Instances with
a tenancy value of host run on a Dedicated Host. For more information about working with
Dedicated Hosts, see Dedicated Hosts (p. 483).
The tenancy of the VPC into which you launch the instance can also determine the instance's tenancy. A
VPC can have a tenancy of either default or dedicated. If you launch an instance into a VPC that has
a tenancy of default, the instance runs on shared tenancy hardware by default, unless you specify a
different tenancy for the instance. If you launch an instance into a VPC that has a tenancy of dedicated,
the instance runs as a Dedicated Instance by default, unless you specify a different tenancy for the
instance.
You can launch Dedicated Instances in either of the following ways:
• Create a VPC with a tenancy of dedicated and launch all instances as Dedicated Instances by default.
For more information, see Create a VPC with an instance tenancy of dedicated (p. 519).
• Create a VPC with a tenancy of default and manually specify a tenancy of dedicated for the
instances that you want to run as Dedicated Instances. For more information, see Launch Dedicated
Instances into a VPC (p. 520).
Supported features
Dedicated Instances support the following features and AWS service integrations:
Topics
• Reserved Instances (p. 517)
• Automatic scaling (p. 517)
• Automatic recovery (p. 517)
• Dedicated Spot Instances (p. 517)
• Burstable performance instances (p. 518)
Reserved Instances
To guarantee that sufficient capacity is available to launch Dedicated Instances, you can purchase
Dedicated Reserved Instances. For more information, see Reserved Instances (p. 381).
When you purchase a Dedicated Reserved Instance, you are purchasing the capacity to launch a
Dedicated Instance into a VPC at a much reduced usage fee; the price break in the usage charge applies
only if you launch an instance with dedicated tenancy. When you purchase a Reserved Instance with
default tenancy, it applies only to a running instance with default tenancy; it does not apply to a
running instance with dedicated tenancy.
You can't use the modification process to change the tenancy of a Reserved Instance after you've
purchased it. However, you can exchange a Convertible Reserved Instance for a new Convertible
Reserved Instance with a different tenancy.
Automatic scaling
You can use Amazon EC2 Auto Scaling to launch Dedicated Instances. For more information, see
Launching Auto Scaling Instances in a VPC in the Amazon EC2 Auto Scaling User Guide.
Automatic recovery
You can configure automatic recovery for a Dedicated Instance if it becomes impaired due to an
underlying hardware failure or a problem that requires AWS involvement to repair. For more information,
see Recover your instance (p. 653).
Burstable performance instances
Amazon EC2 has systems in place to identify and correct variability in performance. However, it is still
possible to experience short-term variability if you launch multiple T3 Dedicated Instances that have
correlated CPU usage patterns. For these more demanding or correlated workloads, we recommend
using M5 or M5a Dedicated Instances rather than T3 Dedicated Instances.
Differences between Dedicated Instances and Dedicated Hosts
There are no performance, security, or physical differences between Dedicated Instances and instances
on Dedicated Hosts. However, there are some differences between the two. The following table
highlights some of the key differences between Dedicated Instances and Dedicated Hosts:
For more information about Dedicated Hosts, see Dedicated Hosts (p. 483).
Dedicated Instances limitations
• Some AWS services or their features are not supported with a VPC with the instance tenancy set to
dedicated. Refer to the respective service's documentation to confirm if there are any limitations.
• Some instance types can't be launched into a VPC with the instance tenancy set to dedicated. For
more information about supported instance types, see Amazon EC2 Dedicated Instances.
• When you launch a Dedicated Instance backed by Amazon EBS, the EBS volume doesn't run on single-
tenant hardware.
Work with Dedicated Instances
Topics
• Create a VPC with an instance tenancy of dedicated (p. 519)
• Launch Dedicated Instances into a VPC (p. 520)
• Display tenancy information (p. 520)
• Change the tenancy of an instance (p. 521)
• Change the tenancy of a VPC (p. 522)
Create a VPC with an instance tenancy of dedicated
If you launch an instance into a VPC that has an instance tenancy of dedicated, your instance is
automatically a Dedicated Instance, regardless of the tenancy of the instance.
Console
To create a VPC with an instance tenancy of dedicated (Create VPC dialog box)
Command line
To set the tenancy option when you create a VPC using the command line
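The example command was dropped during conversion; it likely resembles the following (the CIDR block is a placeholder):

```shell
# Create a VPC with dedicated instance tenancy. Instances launched into
# this VPC run as Dedicated Instances by default.
aws ec2 create-vpc \
    --cidr-block 10.0.0.0/16 \
    --instance-tenancy dedicated
```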
Console
To launch a Dedicated Instance into a default tenancy VPC using the console
Command line
To set the tenancy option for an instance during launch using the command line
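The example command is missing from this copy; a sketch based on the AWS CLI, with placeholder AMI, subnet, and key pair values:

```shell
# Launch a Dedicated Instance into a default tenancy VPC by specifying
# dedicated tenancy in the placement.
aws ec2 run-instances \
    --image-id ami-1a2b3c4d \
    --instance-type m5.large \
    --subnet-id subnet-1a2b3c4d \
    --key-name MyKeyPair \
    --placement Tenancy=dedicated
```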
For more information about launching an instance with a tenancy of host, see Launch instances onto a
Dedicated Host (p. 491).
Display tenancy information
Console
• Choose the settings icon in the top-right corner, turn on Tenancy, and choose Confirm.
• Select the instance. On the Details tab near the bottom of the page, under Host and
placement group, check the value for Tenancy.
Command line
To describe the tenancy value of a Reserved Instance using the command line
To describe the tenancy value of a Reserved Instance offering using the command line
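The example commands appear to have been lost in conversion; they likely resemble the following sketch:

```shell
# Describe your Reserved Instances; the tenancy appears in the
# InstanceTenancy field of the output.
aws ec2 describe-reserved-instances

# Describe Reserved Instance offerings that have dedicated tenancy.
aws ec2 describe-reserved-instances-offerings \
    --instance-tenancy dedicated
```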
Change the tenancy of an instance
• You can't change the tenancy of an instance from default to dedicated or host after
launch, and you can't change the tenancy of an instance from dedicated or host to
default after launch.
• For T3 instances, you can't change the tenancy from dedicated to host, or from host to
dedicated. Attempting to make one of these unsupported tenancy changes results in the
InvalidTenancy error code.
Console
5. For Tenancy, choose whether to run your instance on dedicated hardware or on a Dedicated
Host. Choose Save.
Change the tenancy of a VPC
You can modify the instance tenancy of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API
only.
Command line
To modify the instance tenancy attribute of a VPC using the AWS CLI
Use the modify-vpc-tenancy command and specify the ID of the VPC and instance tenancy value.
The only supported value is default.
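The example command is not shown in this copy; it likely resembles the following (the VPC ID is a placeholder):

```shell
# Change a VPC's instance tenancy from dedicated to default.
aws ec2 modify-vpc-tenancy \
    --vpc-id vpc-1a2b3c4d \
    --instance-tenancy default
```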
On-Demand Capacity Reservations
On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2
instances in a specific Availability Zone for any duration. By creating Capacity Reservations, you ensure
that you always have access to EC2 capacity when you
need it, for as long as you need it. You can create Capacity Reservations at any time, without entering
into a one-year or three-year term commitment, and the capacity is available immediately. Billing starts
as soon as the capacity is provisioned and the Capacity Reservation enters the active state. When you no
longer need it, cancel the Capacity Reservation to stop incurring charges.
Capacity Reservations can only be used by instances that match their attributes. By default, they are
automatically used by running instances that match the attributes. If you don't have any running
instances that match the attributes of the Capacity Reservation, it remains unused until you launch an
instance with matching attributes.
In addition, you can use Savings Plans and Regional Reserved Instances with your Capacity Reservations
to benefit from billing discounts. AWS automatically applies your discount when the attributes of a
Capacity Reservation match the attributes of a Savings Plan or Regional Reserved Instance. For more
information, see Billing discounts (p. 525).
Contents
• Differences between Capacity Reservations, Reserved Instances, and Savings Plans (p. 523)
• Supported platforms (p. 524)
• Quotas (p. 524)
• Limitations (p. 524)
• Capacity Reservation pricing and billing (p. 525)
• Work with Capacity Reservations (p. 526)
• Work with Capacity Reservation groups (p. 532)
• Capacity Reservations in cluster placement groups (p. 536)
• Capacity Reservations in Local Zones (p. 540)
• Capacity Reservations in Wavelength Zones (p. 540)
• Capacity Reservations on AWS Outposts (p. 541)
• Work with shared Capacity Reservations (p. 542)
• Capacity Reservation Fleets (p. 546)
• CloudWatch metrics for On-Demand Capacity Reservations (p. 557)
Supported platforms
You must create the Capacity Reservation with the correct platform to ensure that it properly matches
with your instances. Capacity Reservations support the following platforms:
• Linux/UNIX
• Linux with SQL Server Standard
• Linux with SQL Server Web
• Linux with SQL Server Enterprise
• SUSE Linux
• Red Hat Enterprise Linux
• RHEL with SQL Server Standard
• RHEL with SQL Server Enterprise
• RHEL with SQL Server Web
• RHEL with HA
• RHEL with HA and SQL Server Standard
• RHEL with HA and SQL Server Enterprise
When you create a Capacity Reservation, you must specify the platform that represents the operating
system for your instance.
• For SUSE Linux and RHEL distributions, excluding BYOL, you must choose the specific platform. For
example, the SUSE Linux or Red Hat Enterprise Linux platform.
• For all other Linux distributions (including Ubuntu), choose the Linux/UNIX platform.
• If you bring your existing RHEL subscription (BYOL), you must choose the Linux/UNIX platform.
For more information about the supported Windows platforms, see Supported platforms in the Amazon
EC2 User Guide for Windows Instances.
Quotas
The number of instances for which you are allowed to reserve capacity is based on your account's On-
Demand Instance quota. You can reserve capacity for as many instances as that quota allows, minus the
number of instances that are already running.
Limitations
Before you create Capacity Reservations, take note of the following limitations and restrictions.
• Active and unused Capacity Reservations count toward your On-Demand Instance limits.
• Capacity Reservations are not transferable from one AWS account to another. However, you can share
Capacity Reservations with other AWS accounts. For more information, see Work with shared Capacity
Reservations (p. 542).
• Zonal Reserved Instance billing discounts do not apply to Capacity Reservations.
• Capacity Reservations can be created in cluster placement groups. Spread and partition placement
groups are not supported.
Pricing
When the Capacity Reservation enters the active state, you are charged the equivalent On-Demand
rate whether you run instances in the reserved capacity or not. If you do not use the reservation, this
shows up as unused reservation on your EC2 bill. When you run an instance that matches the attributes
of a reservation, you just pay for the instance and nothing for the reservation. There are no upfront or
additional charges.
For example, if you create a Capacity Reservation for 20 m4.large Linux instances and run 15
m4.large Linux instances in the same Availability Zone, you will be charged for 15 active instances and
for 5 unused instances in the reservation.
Billing discounts for Savings Plans and Regional Reserved Instances apply to Capacity Reservations. For
more information, see Billing discounts (p. 525).
Billing
Billing starts as soon as the capacity is provisioned and the Capacity Reservation enters the active
state, and it continues while the Capacity Reservation remains in the active state.
Capacity Reservations are billed at per-second granularity. This means that you are charged for partial
hours. For example, if a reservation remains active in your account for 24 hours and 15 minutes, you will
be billed for 24.25 reservation hours.
The following example shows how a Capacity Reservation is billed. The Capacity Reservation is created
for one m4.large Linux instance, which has an On-Demand rate of $0.10 per usage hour. In this
example, the Capacity Reservation is active in the account for five hours. The Capacity Reservation is
unused for the first hour, so it is billed for one unused hour at the m4.large instance type's standard
On-Demand rate. In hours two through five, the Capacity Reservation is occupied by an m4.large
instance. During this time, the Capacity Reservation accrues no charges, and the account is instead billed
for the m4.large instance occupying it. In the sixth hour, the Capacity Reservation is canceled and the
m4.large instance runs normally outside of the reserved capacity. For that hour, it is charged at the On-
Demand rate of the m4.large instance type.
Billing discounts
Billing discounts for Savings Plans and Regional Reserved Instances apply to Capacity Reservations. AWS
automatically applies these discounts to Capacity Reservations that have matching attributes. When
a Capacity Reservation is used by an instance, the discount is applied to the instance. Discounts are
preferentially applied to instance usage before covering unused Capacity Reservations.
Billing discounts for zonal Reserved Instances do not apply to Capacity Reservations.
You can view the charges online, or you can download a CSV file. For more information, see Capacity
Reservation Line Items in the AWS Billing and Cost Management User Guide.
Work with Capacity Reservations
By default, Capacity Reservations automatically match new instances and running instances that have
matching attributes (instance type, platform, and Availability Zone). This means that any instance with
matching attributes automatically runs in the Capacity Reservation. However, you can also target a
Capacity Reservation for specific workloads. This enables you to explicitly control which instances are
allowed to run in that reserved capacity.
You can specify how the reservation ends. You can choose to cancel the Capacity Reservation or end it
automatically at a specified time. If you specify an end time, the Capacity Reservation is canceled within
an hour of the specified time. For example, if you specify 5/31/2019, 13:30:55, the Capacity Reservation
is guaranteed to end between 13:30:55 and 14:30:55 on 5/31/2019. After a reservation ends, you can no
longer target instances to the Capacity Reservation. Instances running in the reserved capacity continue
to run uninterrupted. If instances targeting a Capacity Reservation are stopped, you cannot restart them
until you remove their Capacity Reservation targeting preference or configure them to target a different
Capacity Reservation.
Contents
• Create a Capacity Reservation (p. 526)
• Launch instances into an existing Capacity Reservation (p. 528)
• Modify a Capacity Reservation (p. 529)
• Modify an instance's Capacity Reservation settings (p. 529)
• View a Capacity Reservation (p. 530)
• Cancel a Capacity Reservation (p. 532)
Create a Capacity Reservation
You can create a Capacity Reservation at any time. If the Capacity Reservation is open, new instances
and existing instances that have matching attributes automatically run in the reserved capacity. If the
Capacity Reservation is targeted, instances must specifically target it to run in the reserved capacity.
Your request to create a Capacity Reservation could fail if one of the following is true:
• Amazon EC2 does not have sufficient capacity to fulfill the request. Either try again at a later time,
try a different Availability Zone, or try a smaller capacity. If your application is flexible across instance
types and sizes, try different instance attributes.
• The requested quantity exceeds your On-Demand Instance limit for the selected instance family.
Increase your On-Demand Instance limit for the instance family and try again. For more information,
see On-Demand Instance limits (p. 378).
• open—(Default) The Capacity Reservation matches any instance that has matching attributes
(instance type, platform, and Availability Zone). If you launch an instance with matching
attributes, it is placed into the reserved capacity automatically.
• targeted—The Capacity Reservation only accepts instances that have matching attributes
(instance type, platform, and Availability Zone), and that explicitly target the reservation.
For example, the following command creates a Capacity Reservation that reserves capacity for three
m5.2xlarge instances running Red Hat Enterprise Linux AMIs in the us-east-1a Availability Zone.
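The command itself is missing from this copy; a sketch based on the AWS CLI:

```shell
# Reserve capacity for three m5.2xlarge Red Hat Enterprise Linux
# instances in the us-east-1a Availability Zone.
aws ec2 create-capacity-reservation \
    --availability-zone us-east-1a \
    --instance-type m5.2xlarge \
    --instance-platform "Red Hat Enterprise Linux" \
    --instance-count 3
```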
Launch instances into an existing Capacity Reservation
Launching an instance into a Capacity Reservation reduces its available capacity by the number of
instances launched. For example, if you launch three instances, the available capacity of the Capacity
Reservation is reduced by three.
1. Open the Launch Instance wizard by choosing Launch Instances from Dashboard or Instances.
2. Select an Amazon Machine Image (AMI) and an instance type.
3. Complete the Configure Instance Details page. For Capacity Reservation, choose one of the
following options:
• None — Prevents the instances from launching into a Capacity Reservation. The instances run in
On-Demand capacity.
• Open — Launches the instances into any Capacity Reservation that has matching attributes and
sufficient capacity for the number of instances you selected. If there is no matching Capacity
Reservation with sufficient capacity, the instance uses On-Demand capacity.
• Target by ID — Launches the instances into the selected Capacity Reservation. If the selected
Capacity Reservation does not have sufficient capacity for the number of instances you selected,
the instance launch fails.
• Target by group — Launches the instances into any Capacity Reservation with matching
attributes and available capacity in the selected Capacity Reservation group. If the selected
group does not have a Capacity Reservation with matching attributes and available capacity, the
instances launch into On-Demand capacity.
4. Complete the remaining steps to launch the instances.
To launch an instance into an existing Capacity Reservation using the AWS CLI
The following example launches a t2.micro instance into any open Capacity Reservation that has
matching attributes and available capacity:
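The command appears to have been dropped in conversion; it likely resembles this sketch (the AMI ID is a placeholder):

```shell
# Launch into any open Capacity Reservation with matching attributes.
aws ec2 run-instances \
    --image-id ami-abc12345 \
    --instance-type t2.micro \
    --capacity-reservation-specification CapacityReservationPreference=open
```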
The following example launches a t2.micro instance into a targeted Capacity Reservation:
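The command is missing here; a sketch with placeholder AMI and reservation IDs:

```shell
# Launch into a specific Capacity Reservation, targeted by ID.
aws ec2 run-instances \
    --image-id ami-abc12345 \
    --instance-type t2.micro \
    --capacity-reservation-specification \
    'CapacityReservationTarget={CapacityReservationId=cr-1234567890abcdef1}'
```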
The following example launches a t2.micro instance into a Capacity Reservation group:
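The command is missing here as well; a sketch with a placeholder group ARN:

```shell
# Launch into any Capacity Reservation in the specified group.
aws ec2 run-instances \
    --image-id ami-abc12345 \
    --instance-type t2.micro \
    --capacity-reservation-specification \
    'CapacityReservationTarget={CapacityReservationResourceGroupArn=arn:aws:resource-groups:us-east-1:123456789012:group/MyCRGroup}'
```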
Modify a Capacity Reservation
When you modify a Capacity Reservation, you can only increase or decrease the quantity and change
the way in which it is released. You cannot change the instance type, EBS optimization, instance store
settings, platform, Availability Zone, or instance eligibility of a Capacity Reservation. If you need to
modify any of these attributes, we recommend that you cancel the reservation, and then create a new
one with the required attributes.
If you specify a new quantity that exceeds your remaining On-Demand Instance limit for the selected
instance type, the update fails.
For example, the following command modifies a Capacity Reservation to reserve capacity for eight
instances.
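The command itself was lost in conversion; it likely resembles the following (the reservation ID is a placeholder):

```shell
# Modify a Capacity Reservation to reserve capacity for eight instances.
aws ec2 modify-capacity-reservation \
    --capacity-reservation-id cr-1234567890abcdef1 \
    --instance-count 8
```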
Modify an instance's Capacity Reservation settings
You can modify an instance's Capacity Reservation settings so that the instance does one of the
following:
• Start in any Capacity Reservation that has matching attributes (instance type, platform, and
Availability Zone) and available capacity.
• Start the instance in a specific Capacity Reservation.
• Start in any Capacity Reservation that has matching attributes and available capacity in a Capacity
Reservation group.
• Prevent the instance from starting in a Capacity Reservation.
• Open — Launches the instances into any Capacity Reservation that has matching attributes and
sufficient capacity for the number of instances you selected. If there is no matching Capacity
Reservation with sufficient capacity, the instance uses On-Demand capacity.
• None — Prevents the instances from launching into a Capacity Reservation. The instances run in
On-Demand capacity.
• Specify Capacity Reservation — Launches the instances into the selected Capacity Reservation.
If the selected Capacity Reservation does not have sufficient capacity for the number of instances
you selected, the instance launch fails.
• Specify Capacity Reservation group — Launches the instances into any Capacity Reservation
with matching attributes and available capacity in the selected Capacity Reservation group. If
the selected group does not have a Capacity Reservation with matching attributes and available
capacity, the instances launch into On-Demand capacity.
For example, the following command changes an instance's Capacity Reservation setting to open or
none.
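The command is not shown in this copy; a sketch with a placeholder instance ID:

```shell
# Set a stopped instance's Capacity Reservation preference to open
# (use CapacityReservationPreference=none to prevent reservation use).
aws ec2 modify-instance-capacity-reservation-attributes \
    --instance-id i-1234567890abcdef0 \
    --capacity-reservation-specification CapacityReservationPreference=open
```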
For example, the following command modifies an instance to target a specific Capacity Reservation.
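The command is missing here; a sketch with placeholder instance and reservation IDs:

```shell
# Modify a stopped instance to target a specific Capacity Reservation.
aws ec2 modify-instance-capacity-reservation-attributes \
    --instance-id i-1234567890abcdef0 \
    --capacity-reservation-specification \
    'CapacityReservationTarget={CapacityReservationId=cr-1234567890abcdef1}'
```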
For example, the following command modifies an instance to target a specific Capacity Reservation
group.
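The command is missing here as well; a sketch with a placeholder group ARN:

```shell
# Modify a stopped instance to target a Capacity Reservation group.
aws ec2 modify-instance-capacity-reservation-attributes \
    --instance-id i-1234567890abcdef0 \
    --capacity-reservation-specification \
    'CapacityReservationTarget={CapacityReservationResourceGroupArn=arn:aws:resource-groups:us-east-1:123456789012:group/MyCRGroup}'
```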
View a Capacity Reservation
A Capacity Reservation can be in one of the following states:
• active—The capacity is provisioned and available for your use.
• expired—The Capacity Reservation expired automatically at the date and time specified in your
reservation request. The reserved capacity is no longer available for your use.
• cancelled—The Capacity Reservation was canceled. The reserved capacity is no longer available for
your use.
• pending—The Capacity Reservation request was successful but the capacity provisioning is still
pending.
• failed—The Capacity Reservation request has failed. A request can fail due to request parameters
that are not valid, capacity constraints, or instance limit constraints. You can view a failed request for
60 minutes.
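The describe command that produced the output below appears to be missing; it likely resembles:

```shell
# List your Capacity Reservations. Add --capacity-reservation-ids to
# describe specific reservations.
aws ec2 describe-capacity-reservations
```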
Example output.
{
"CapacityReservations": [
{
"CapacityReservationId": "cr-1234abcd56EXAMPLE",
"EndDateType": "unlimited",
"AvailabilityZone": "eu-west-1a",
"InstanceMatchCriteria": "open",
"Tags": [],
"EphemeralStorage": false,
"CreateDate": "2019-08-16T09:03:18.000Z",
"AvailableInstanceCount": 1,
"InstancePlatform": "Linux/UNIX",
"TotalInstanceCount": 1,
"State": "active",
"Tenancy": "default",
"EbsOptimized": true,
"InstanceType": "a1.medium",
"PlacementGroupArn": "arn:aws:ec2:us-east-1:123456789012:placement-group/MyPG"
},
{
"CapacityReservationId": "cr-abcdEXAMPLE9876ef",
"EndDateType": "unlimited",
"AvailabilityZone": "eu-west-1a",
"InstanceMatchCriteria": "open",
"Tags": [],
"EphemeralStorage": false,
"CreateDate": "2019-08-07T11:34:19.000Z",
"AvailableInstanceCount": 3,
"InstancePlatform": "Linux/UNIX",
"TotalInstanceCount": 3,
"State": "cancelled",
"Tenancy": "default",
"EbsOptimized": true,
"InstanceType": "m5.large"
}
]
}
Cancel a Capacity Reservation
You can cancel empty Capacity Reservations and Capacity Reservations that have running instances. If
you cancel a Capacity Reservation that has running instances, the instances continue to run normally
outside of the capacity reservation at standard On-Demand Instance rates or at a discounted rate if you
have a matching Savings Plan or Regional Reserved Instance.
After you cancel a Capacity Reservation, instances that target it can no longer launch. Modify these
instances so that they either target a different Capacity Reservation, launch into any open Capacity
Reservation with matching attributes and sufficient capacity, or avoid launching into a Capacity
Reservation. For more information, see Modify an instance's Capacity Reservation settings (p. 529).
Work with Capacity Reservation groups
You can include multiple Capacity Reservations that have different attributes (instance type, platform,
and Availability Zone) in a single resource group.
When you create resource groups for your Capacity Reservations, you can target instances to a group
of Capacity Reservations instead of an individual Capacity Reservation. Instances that target a group of
Capacity Reservations match with any Capacity Reservation in the group that has matching attributes
(instance type, platform, and Availability Zone) and available capacity. If the group does not have a
Capacity Reservation with matching attributes and available capacity, the instances run using On-
Demand capacity. If a matching Capacity Reservation is added to the targeted group at a later stage, the
instance is automatically matched with and moved into its reserved capacity.
To prevent unintended use of Capacity Reservations in a group, configure the Capacity Reservations in
the group to accept only instances that explicitly target the capacity reservation. To do this, set Instance
eligibility to targeted (old console) or Only instances that specify this reservation (new console) when
creating the Capacity Reservation using the Amazon EC2 console. When using the AWS CLI, specify --
instance-match-criteria targeted when creating the Capacity Reservation. Doing this ensures
that only instances that explicitly target the group, or a Capacity Reservation in the group, can run in the
group.
If a Capacity Reservation in a group is canceled or expires while it has running instances, the instances
are automatically moved to another Capacity Reservation in the group that has matching attributes
and available capacity. If there are no remaining Capacity Reservations in the group that have matching
attributes and available capacity, the instances run in On-Demand capacity. If a matching Capacity
Reservation is added to the targeted group at a later stage, the instance is automatically moved into its
reserved capacity.
To create a group
Use the create-group AWS CLI command. For name, provide a descriptive name for the group, and for
configuration, specify two Type request parameters:
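The command itself appears to have been lost in conversion; a sketch based on the two configuration types shown in the output that follows:

```shell
# Create a resource group that can hold only Capacity Reservations.
aws resource-groups create-group \
    --name MyCRGroup \
    --configuration '{"Type":"AWS::EC2::CapacityReservationPool"}' '{"Type":"AWS::ResourceGroups::Generic","Parameters":[{"Name":"allowed-resource-types","Values":["AWS::EC2::CapacityReservation"]}]}'
```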
{
"GroupConfiguration": {
"Status": "UPDATE_COMPLETE",
"Configuration": [
{
"Type": "AWS::EC2::CapacityReservationPool"
},
{
"Type": "AWS::ResourceGroups::Generic",
"Parameters": [
{
"Values": [
"AWS::EC2::CapacityReservation"
],
"Name": "allowed-resource-types"
}
]
}
]
},
"Group": {
"GroupArn": "arn:aws:resource-groups:sa-east-1:123456789012:group/MyCRGroup",
"Name": "MyCRGroup"
}
}
To add Capacity Reservations to a group
Use the group-resources AWS CLI command. For group, specify the name of the group to which to add
the Capacity Reservations, and for resources, specify ARNs of the Capacity Reservations to add. To
add multiple Capacity Reservations, separate the ARNs with a space. To get the ARNs of the Capacity
Reservations to add, use the describe-capacity-reservations AWS CLI command and specify the IDs of
the Capacity Reservations.
For example, the following command adds two Capacity Reservations to a group named MyCRGroup.
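The command is missing from this copy; a sketch using the reservation ARNs from the example output that follows:

```shell
# Add two Capacity Reservations to the group MyCRGroup.
aws resource-groups group-resources \
    --group MyCRGroup \
    --resource-arns \
    arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/cr-1234567890abcdef1 \
    arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/cr-54321abcdef567890
```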
{
"Failed": [],
"Succeeded": [
"arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/cr-1234567890abcdef1",
"arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/cr-54321abcdef567890"
]
}
To view the Capacity Reservations in a group
Use the list-group-resources AWS CLI command. For group, specify the name of the group.
For example, the following command lists the Capacity Reservations in a group named MyCRGroup.
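The command itself is not shown; it likely resembles:

```shell
# List the Capacity Reservations in the group MyCRGroup.
aws resource-groups list-group-resources --group MyCRGroup
```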
{
"QueryErrors": [],
"ResourceIdentifiers": [
{
"ResourceType": "AWS::EC2::CapacityReservation",
"ResourceArn": "arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/
cr-1234567890abcdef1"
},
{
"ResourceType": "AWS::EC2::CapacityReservation",
"ResourceArn": "arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/
cr-54321abcdef567890"
}
]
}
To view the groups to which a specific Capacity Reservation has been added (AWS CLI)
For example, the following command lists the groups to which Capacity Reservation
cr-1234567890abcdef1 has been added.
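The command is missing from this copy; a sketch based on the EC2 CLI:

```shell
# List the groups that contain this Capacity Reservation.
aws ec2 get-groups-for-capacity-reservation \
    --capacity-reservation-id cr-1234567890abcdef1
```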
{
"CapacityReservationGroups": [
{
"OwnerId": "123456789012",
"GroupArn": "arn:aws:resource-groups:sa-east-1:123456789012:group/MyCRGroup"
}
]
}
To view the groups to which a specific Capacity Reservation has been added (console)
The groups to which the Capacity Reservation has been added are listed in the Groups card.
To remove Capacity Reservations from a group
Use the ungroup-resources AWS CLI command. For group, specify the ARN of the group from which to
remove the Capacity Reservation, and for resources specify the ARNs of the Capacity Reservations to
remove. To remove multiple Capacity Reservations, separate the ARNs with a space.
The following example removes two Capacity Reservations from a group named MyCRGroup.
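The command itself is not shown; a sketch using the reservation ARNs from the example output that follows:

```shell
# Remove two Capacity Reservations from the group MyCRGroup.
aws resource-groups ungroup-resources \
    --group MyCRGroup \
    --resource-arns \
    arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/cr-0e154d26a16094dd \
    arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/cr-54321abcdef567890
```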
{
"Failed": [],
"Succeeded": [
"arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/cr-0e154d26a16094dd",
"arn:aws:ec2:sa-east-1:123456789012:capacity-reservation/cr-54321abcdef567890"
]
}
To delete a group
Use the delete-group AWS CLI command. For group, provide the name of the group to delete.
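The command itself is not shown; it likely resembles the following:

```shell
# Delete the group MyCRGroup. The Capacity Reservations in the group
# are not affected by the deletion.
aws resource-groups delete-group --group MyCRGroup
```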
{
"Group": {
"GroupArn": "arn:aws:resource-groups:sa-east-1:123456789012:group/MyCRGroup",
"Name": "MyCRGroup"
    }
}
Capacity Reservations in cluster placement groups
Creating a Capacity Reservation in a cluster placement group ensures that you have access to compute
capacity in your cluster placement groups when you need it, for as long as you need it. This is ideal for
reserving capacity for high performance computing (HPC) workloads that require compute scaling. It allows you to
scale your cluster down while ensuring that the capacity remains available for your use so that you can
scale back up when needed.
Topics
• Limitations (p. 536)
• Work with Capacity Reservations in cluster placement groups (p. 536)
Limitations
Keep the following in mind when creating Capacity Reservations in cluster placement groups:
• You can't modify an existing Capacity Reservation that is not in a placement group to reserve capacity
in a placement group. To reserve capacity in a placement group, you must create the Capacity
Reservation in the placement group.
• After you create a Capacity Reservation in a placement group, you can't modify it to reserve capacity
outside of the placement group.
• You can increase your reserved capacity in a placement group by modifying an existing Capacity
Reservation in the placement group, or by creating additional Capacity Reservations in the placement
group. However, you increase your chances of getting an insufficient capacity error.
• You can't share Capacity Reservations that have been created in a cluster placement group.
Topics
• Step 1: (Conditional) Create a cluster placement group for use with a Capacity Reservation (p. 536)
• Step 2: Create a Capacity Reservation in a cluster placement group (p. 537)
• Step 3: Launch instances into the cluster placement group (p. 538)
Step 1: (Conditional) Create a cluster placement group for use with a Capacity Reservation
Perform this step only if you need to create a new cluster placement group. To use an existing cluster
placement group, skip this step and then for Steps 2 and 3, use the ARN of that cluster placement group.
For more information about how to find the ARN of your existing cluster placement group, see Describe a
placement group (p. 1177).
You can create the cluster placement group using one of the following methods.
Console
Make a note of the placement group ARN, because you'll need it for the next step.
AWS CLI
Use the create-placement-group command. For --group-name, specify a descriptive name for the
placement group, and for --strategy, specify cluster.
The following example creates a placement group named MyPG that uses the cluster placement
strategy.
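Based on the description above, the command looks like this (the name MyPG and the cluster strategy are taken from the text):

```shell
# Create a cluster placement group named MyPG
aws ec2 create-placement-group \
    --group-name MyPG \
    --strategy cluster
```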
Make a note of the placement group ARN returned in the command output, because you'll need it
for the next step.
Step 2: Create a Capacity Reservation in a cluster placement group
You create a Capacity Reservation in a cluster placement group in the same way that you create any
Capacity Reservation. However, you must also specify the ARN of the cluster placement group in which
to create the Capacity Reservation. For more information, see Create a Capacity Reservation (p. 526).
Considerations
• The specified cluster placement group must be in the available state. If the cluster placement group
is in the pending, deleting, or deleted state, the request fails.
• The Capacity Reservation and the cluster placement group must be in the same Availability Zone. If
the request to create the Capacity Reservation specifies an Availability Zone that is different from that
of the cluster placement group, the request fails.
• You can create Capacity Reservations only for instance types that are supported by cluster placement
groups. If you specify an unsupported instance type, the request fails. For more information, see
Cluster placement group rules and limitations (p. 1170).
• If you create an open Capacity Reservation in a cluster placement group and there are existing running
instances that have matching attributes (placement group ARN, instance type, Availability Zone,
platform, and tenancy), those instances automatically run in the Capacity Reservation.
• Your request to create a Capacity Reservation could fail if one of the following is true:
• Amazon EC2 does not have sufficient capacity to fulfill the request. Either try again at a later time,
try a different Availability Zone, or try a smaller capacity. If your workload is flexible across instance
types and sizes, try different instance attributes.
• The requested quantity exceeds your On-Demand Instance limit for the selected instance family.
Increase your On-Demand Instance limit for the instance family and try again. For more information,
see On-Demand Instance limits (p. 378).
You can create the Capacity Reservation in the cluster placement group using one of the following
methods.
Console
Step 3: Launch instances into the cluster placement group
You launch an instance into a Capacity Reservation in a cluster placement group in the same way that
you launch an instance into any Capacity Reservation. However, you must also specify the ARN of the
cluster placement group in which to launch the instance. For more information, see Launch instances
into an existing Capacity Reservation (p. 528).
Considerations
• If the Capacity Reservation is open, you do not need to specify the Capacity Reservation in the
instance launch request. If the instance has attributes (placement group ARN, instance type,
Availability Zone, platform, and tenancy) that match a Capacity Reservation in the specified placement
group, the instance automatically runs in the Capacity Reservation.
• If the Capacity Reservation accepts only targeted instance launches, you must specify the target
Capacity Reservation in addition to the cluster placement group in the request.
• If the Capacity Reservation is in a Capacity Reservation group, you must specify the target Capacity
Reservation group in addition to the cluster placement group in the request. For more information, see
Work with Capacity Reservation groups (p. 532).
You can launch an instance into a Capacity Reservation in a cluster placement group using one of the
following methods.
Console
1. Open the Launch Instance wizard by choosing Launch Instances from the Dashboard or from
the Instances screen.
2. Select an Amazon Machine Image (AMI) and an instance type.
3. Complete the Configure Instance Details page:
a. For Placement group, select Add instance to placement group, choose Add to existing
placement group, and then select the cluster placement group in which to launch the
instance.
b. For Capacity Reservation, choose one of the following options depending on the
configuration of the Capacity Reservation:
• Open — To launch the instances into any open Capacity Reservation in the cluster
placement group that has matching attributes and sufficient capacity.
• Target by ID — To launch the instances into a Capacity Reservation that accepts only
targeted instance launches.
• Target by group — To launch the instances into any Capacity Reservation with matching
attributes and available capacity in the selected Capacity Reservation group.
4. Complete the remaining steps to launch the instances.
For more information, see Launch instances into an existing Capacity Reservation (p. 528).
AWS CLI
To launch instances into an existing Capacity Reservation using the AWS CLI
Use the run-instances command. If you need to target a specific Capacity Reservation or a Capacity
Reservation group, specify the --capacity-reservation-specification parameter. For --
placement, specify the GroupName parameter and then specify the name of the placement group
that you created in the previous steps.
The following command launches an instance into a targeted Capacity Reservation in a cluster
placement group.
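A sketch of such a command; the AMI, key pair, subnet, and Capacity Reservation IDs below are illustrative placeholders, and the placement group name assumes the MyPG group created earlier:

```shell
# Launch one instance into a targeted Capacity Reservation in a cluster placement group
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --count 1 \
    --instance-type m5.xlarge \
    --key-name MyKeyPair \
    --subnet-id subnet-1234567890abcdef0 \
    --placement "GroupName=MyPG" \
    --capacity-reservation-specification \
      "CapacityReservationTarget={CapacityReservationId=cr-1234567890abcdef0}"
```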
For more information, see Launch instances into an existing Capacity Reservation (p. 528).
Capacity Reservations in Local Zones
You can extend a VPC from its parent AWS Region into a Local Zone by creating a new subnet in that
Local Zone. When you create a subnet in a Local Zone, your VPC is extended to that Local Zone. The
subnet in the Local Zone operates the same as the other subnets in your VPC.
By using Local Zones, you can place Capacity Reservations in multiple locations that are closer to your
users. You create and use Capacity Reservations in Local Zones in the same way that you create and use
Capacity Reservations in regular Availability Zones. The same features and instance matching behavior
apply. For more information about the pricing models that are supported in Local Zones, see AWS Local
Zones FAQs.
To use a Capacity Reservation in a Local Zone
1. Enable the Local Zone for use in your AWS account. For more information, see Opt in to Local
Zones (p. 1015).
2. Create a Capacity Reservation in the Local Zone. For Availability Zone, choose the Local Zone.
The Local Zone is represented by an AWS Region code followed by an identifier that indicates
the location, for example us-west-2-lax-1a. For more information, see Create a Capacity
Reservation (p. 526).
3. Create a subnet in the Local Zone. For Availability Zone, choose the Local Zone. For more
information, see Creating a subnet in your VPC in the Amazon VPC User Guide.
4. Launch an instance. For Subnet, choose the subnet in the Local Zone (for example subnet-123abc
| us-west-2-lax-1a), and for Capacity Reservation, choose the specification (either open or
target it by ID) that's required for the Capacity Reservation that you created in the Local Zone. For
more information, see Launch instances into an existing Capacity Reservation (p. 528).
Capacity Reservations in Wavelength Zones
When you create On-Demand Capacity Reservations, you can choose the Wavelength Zone and you can
launch instances into a Capacity Reservation in a Wavelength Zone by specifying the subnet associated
with the Wavelength Zone. A Wavelength Zone is represented by an AWS Region code followed by an
identifier that indicates the location, for example us-east-1-wl1-bos-wlz-1.
Wavelength Zones are not available in every Region. For information about the Regions that support
Wavelength Zones, see Available Wavelength Zones in the AWS Wavelength Developer Guide.
To use a Capacity Reservation in a Wavelength Zone
1. Enable the Wavelength Zone for use in your AWS account. For more information, see Enable
Wavelength Zones in the Amazon EC2 User Guide for Linux Instances.
2. Create a Capacity Reservation in the Wavelength Zone. For Availability Zone, choose the
Wavelength Zone. The Wavelength Zone is represented by an AWS Region code followed by an identifier that
indicates the location, for example us-east-1-wl1-bos-wlz-1. For more information, see Create
a Capacity Reservation (p. 526).
3. Create a subnet in the Wavelength Zone. For Availability Zone, choose the Wavelength Zone. For
more information, see Creating a subnet in your VPC in the Amazon VPC User Guide.
4. Launch an instance. For Subnet, choose the subnet in the Wavelength Zone (for example
subnet-123abc | us-east-1-wl1-bos-wlz-1), and for Capacity Reservation, choose the
specification (either open or target it by ID) that's required for the Capacity Reservation that you
created in the Wavelength Zone. For more information, see Launch instances into an existing Capacity
Reservation (p. 528).
Capacity Reservations on AWS Outposts
An Outpost is a pool of AWS compute and storage capacity deployed at a customer site. AWS operates,
monitors, and manages this capacity as part of an AWS Region.
You can create Capacity Reservations on Outposts that you have created in your account. This allows you
to reserve compute capacity on an Outpost at your site. You create and use Capacity Reservations on
Outposts in the same way that you create and use Capacity Reservations in regular Availability Zones.
The same features and instance matching behavior apply.
You can also share Capacity Reservations on Outposts with other AWS accounts within your organization
using AWS Resource Access Manager. For more information about sharing Capacity Reservations, see
Work with shared Capacity Reservations (p. 542).
Prerequisite
You must have an Outpost installed at your site. For more information, see Create an Outpost and order
Outpost capacity in the AWS Outposts User Guide.
To use a Capacity Reservation on an Outpost
1. Create a subnet on the Outpost. For more information, see Create a subnet in the AWS Outposts User
Guide.
2. Create a Capacity Reservation on the Outpost.
Note
The Instance Type drop-down lists only instance types that are supported by the
selected Outpost, and the Availability Zone drop-down lists only the Availability Zone
with which the selected Outpost is associated.
3. Launch an instance into the Capacity Reservation. For Subnet choose the subnet that you created in
Step 1, and for Capacity Reservation, select the Capacity Reservation that you created in Step 2. For
more information, see Launch an instance on the Outpost in the AWS Outposts User Guide.
Work with shared Capacity Reservations
In this model, the AWS account that owns the Capacity Reservation (owner) shares it with other AWS
accounts (consumers). Consumers can launch instances into Capacity Reservations that are shared with
them in the same way that they launch instances into Capacity Reservations that they own in their own
account. The Capacity Reservation owner is responsible for managing the Capacity Reservation and the
instances that they launch into it. Owners cannot modify instances that consumers launch into Capacity
Reservations that they have shared. Consumers are responsible for managing the instances that they
launch into Capacity Reservations shared with them. Consumers cannot view or modify instances owned
by other consumers or by the Capacity Reservation owner.
Contents
• Prerequisites for sharing Capacity Reservations (p. 542)
• Related services (p. 543)
• Share across Availability Zones (p. 543)
• Share a Capacity Reservation (p. 543)
• Stop sharing a Capacity Reservation (p. 544)
• Identify a shared Capacity Reservation (p. 544)
• View shared Capacity Reservation usage (p. 545)
• Shared Capacity Reservation permissions (p. 545)
• Billing and metering (p. 545)
• Instance limits (p. 546)
Prerequisites for sharing Capacity Reservations
• To share a Capacity Reservation with your AWS organization or an organizational unit in your AWS
organization, you must enable sharing with AWS Organizations. For more information, see Enable
Sharing with AWS Organizations in the AWS RAM User Guide.
Related services
Capacity Reservation sharing integrates with AWS Resource Access Manager (AWS RAM). AWS RAM
is a service that enables you to share your AWS resources with any AWS account or through AWS
Organizations. With AWS RAM, you share resources that you own by creating a resource share. A resource
share specifies the resources to share, and the consumers with whom to share them. Consumers can be
individual AWS accounts, or organizational units or an entire organization from AWS Organizations.
For more information about AWS RAM, see the AWS RAM User Guide.
Share across Availability Zones
To identify the location of your Capacity Reservations relative to your accounts, you must use the
Availability Zone ID (AZ ID). The AZ ID is a unique and consistent identifier for an Availability Zone across
all AWS accounts. For example, use1-az1 is an AZ ID for the us-east-1 Region and it is the same
location in every AWS account.
• If consumers have running instances that match the attributes of the Capacity Reservation, have the
CapacityReservationPreference parameter set to open, and are not yet running in reserved
capacity, they automatically use the shared Capacity Reservation.
• If consumers launch instances that have matching attributes (instance type, platform, and Availability
Zone) and have the CapacityReservationPreference parameter set to open, they automatically
launch into the shared Capacity Reservation.
Share a Capacity Reservation
To share a Capacity Reservation, you must add it to a resource share. A resource share is an AWS RAM
resource that lets you share your resources across AWS accounts. A resource share specifies the resources
to share, and the consumers with whom they are shared. When you share a Capacity Reservation using
the Amazon EC2 console, you add it to an existing resource share. To add the Capacity Reservation to a
new resource share, you must create the resource share using the AWS RAM console.
If you are part of an organization in AWS Organizations and sharing within your organization is enabled,
consumers in your organization are automatically granted access to the shared Capacity Reservation.
Otherwise, consumers receive an invitation to join the resource share and are granted access to the
shared Capacity Reservation after accepting the invitation.
You can share a Capacity Reservation that you own using the Amazon EC2 console, AWS RAM console, or
the AWS CLI.
To share a Capacity Reservation that you own using the Amazon EC2 console
It could take a few minutes for consumers to get access to the shared Capacity Reservation.
To share a Capacity Reservation that you own using the AWS RAM console
To share a Capacity Reservation that you own using the AWS CLI
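A sketch using the AWS RAM create-resource-share command; the share name, Capacity Reservation ARN, and principal account ID below are illustrative placeholders:

```shell
# Create a resource share containing the Capacity Reservation
# and share it with another AWS account
aws ram create-resource-share \
    --name MyCapacityReservationShare \
    --resource-arns arn:aws:ec2:us-east-1:123456789012:capacity-reservation/cr-1234567890abcdef0 \
    --principals 111122223333
```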
Stop sharing a Capacity Reservation
When you stop sharing a Capacity Reservation:
• Instances owned by consumers that were running in the shared capacity at the time sharing stops
continue to run normally outside of the reserved capacity, and the capacity is restored to the Capacity
Reservation subject to Amazon EC2 capacity availability.
• Consumers with whom the Capacity Reservation was shared can no longer launch new instances into
the reserved capacity.
To stop sharing a Capacity Reservation that you own, you must remove it from the resource share. You
can do this using the Amazon EC2 console, AWS RAM console, or the AWS CLI.
To stop sharing a Capacity Reservation that you own using the Amazon EC2 console
To stop sharing a Capacity Reservation that you own using the AWS RAM console
To stop sharing a Capacity Reservation that you own using the AWS CLI
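A sketch using the AWS RAM disassociate-resource-share command; both ARNs below are illustrative placeholders:

```shell
# Remove the Capacity Reservation from the resource share to stop sharing it
aws ram disassociate-resource-share \
    --resource-share-arn arn:aws:ram:us-east-1:123456789012:resource-share/EXAMPLE1-1234-abcd-5678-000000000000 \
    --resource-arns arn:aws:ec2:us-east-1:123456789012:capacity-reservation/cr-1234567890abcdef0
```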
Identify a shared Capacity Reservation
Use the describe-capacity-reservations command. The command returns the Capacity Reservations that
you own and Capacity Reservations that are shared with you. OwnerId shows the AWS account ID of the
Capacity Reservation owner.
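For example, the following invocation lists each reservation's ID and owner; the --query expression is an illustrative convenience, not part of the original procedure:

```shell
# List all Capacity Reservations visible to this account, showing ID and owner
aws ec2 describe-capacity-reservations \
    --query "CapacityReservations[*].{Id:CapacityReservationId,Owner:OwnerId}"
```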
View shared Capacity Reservation usage
The AWS account ID column shows the account IDs of the consumers currently using the Capacity
Reservation. The Launched instances column shows the number of instances each consumer
currently has running in the reserved capacity.
Use the get-capacity-reservation-usage command. AccountId shows the account ID of the account
using the Capacity Reservation. UsedInstanceCount shows the number of instances the consumer
currently has running in the reserved capacity.
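A sketch of the command; the Capacity Reservation ID below is an illustrative placeholder:

```shell
# Show per-account usage of a Capacity Reservation
aws ec2 get-capacity-reservation-usage \
    --capacity-reservation-id cr-1234567890abcdef0
```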
Shared Capacity Reservation permissions
Owners are responsible for managing and canceling their shared Capacity Reservations. Owners cannot
modify instances running in the shared Capacity Reservation that are owned by other accounts. Owners
remain responsible for managing instances that they launch into the shared Capacity Reservation.
Consumers are responsible for managing their instances that are running in the shared Capacity
Reservation. Consumers cannot modify the shared Capacity Reservation in any way, and they cannot
view or modify instances that are owned by other consumers or the Capacity Reservation owner.
Billing and metering
The Capacity Reservation owner is billed for instances that they run inside the Capacity Reservation
and for unused reserved capacity. Consumers are billed for the instances that they run inside the shared
Capacity Reservation.
Instance limits
All Capacity Reservation usage counts toward the Capacity Reservation owner's On-Demand Instance
limits. Instances launched into the shared capacity by consumers count towards the Capacity Reservation
owner's On-Demand Instance limit. Consumers' instance limits are a sum of their own On-Demand
Instance limits and the capacity available in the shared Capacity Reservations to which they have access.
Capacity Reservation Fleets
A Capacity Reservation Fleet request contains all of the configuration information that's needed to
launch a Capacity Reservation Fleet. Using a single request, you can reserve large amounts of Amazon
EC2 capacity for your workload across multiple instance types, up to a target capacity that you specify.
After you create a Capacity Reservation Fleet, you can manage the Capacity Reservations in the fleet
collectively by modifying or canceling the Capacity Reservation Fleet.
Topics
• How Capacity Reservation Fleets work (p. 178)
• Considerations (p. 367)
• Pricing (p. 547)
• Capacity Reservation Fleet concepts (p. 547)
• Work with Capacity Reservation Fleets (p. 549)
• Example Capacity Reservation Fleet configurations (p. 554)
• Using Service-Linked Roles for Capacity Reservation Fleet (p. 555)
How Capacity Reservation Fleets work
The number of instances for which the Fleet reserves capacity depends on the total target
capacity (p. 547) and the instance type weights (p. 548) that you specify. The instance type for which it
reserves capacity depends on the allocation strategy (p. 548) and instance type priorities (p. 548) that
you use.
If there is insufficient capacity at the time the Fleet is created, and it is unable to immediately meet its
total target capacity, the Fleet asynchronously attempts to create Capacity Reservations until it has
reserved the requested amount of capacity.
When the Fleet reaches its total target capacity, it attempts to maintain that capacity. If a Capacity
Reservation in the Fleet is cancelled, the Fleet automatically creates one or more Capacity Reservations,
depending on your Fleet configuration, to replace the lost capacity and to maintain its total target
capacity.
The Capacity Reservations in the Fleet can't be managed individually. They must be managed
collectively by modifying the Fleet. When you modify a Fleet, the Capacity Reservations in the Fleet are
automatically updated to reflect the changes.
Currently, Capacity Reservation Fleets support the open instance matching criteria, and all Capacity
Reservations launched by a Fleet automatically use this instance matching criteria. With this criteria, new
instances and existing instances that have matching attributes (instance type, platform, and Availability
Zone) automatically run in the Capacity Reservations created by a Fleet. Capacity Reservation Fleets do
not support target instance matching criteria.
Considerations
Keep the following in mind when working with Capacity Reservation Fleets:
• A Capacity Reservation Fleet can be created, modified, viewed, and cancelled using the AWS CLI and
AWS API.
• The Capacity Reservations in a Fleet can't be managed individually. They must be managed collectively
by modifying or cancelling the Fleet.
• A Capacity Reservation Fleet can't span across Regions.
• A Capacity Reservation Fleet can't span across Availability Zones.
• Capacity Reservations created by a Capacity Reservation Fleet are automatically tagged with the
following AWS generated tag:
• Key — aws:ec2-capacity-reservation-fleet
• Value — fleet_id
You can use this tag to identify Capacity Reservations that were created by a Capacity Reservation
Fleet.
Pricing
There are no additional charges for using Capacity Reservation Fleets. You are billed for the individual
Capacity Reservations that are created by your Capacity Reservation Fleets. For more information about
how Capacity Reservations are billed, see Capacity Reservation pricing and billing (p. 525).
Capacity Reservation Fleet concepts
Topics
• Total target capacity (p. 547)
• Allocation strategy (p. 548)
• Instance type weight (p. 548)
• Instance type priority (p. 548)
Total target capacity
The total target capacity defines the total amount of compute capacity that the Capacity Reservation
Fleet reserves. You specify the total target capacity when you create the Capacity Reservation Fleet.
After the Fleet has been created, Amazon EC2 automatically creates Capacity Reservations to reserve
capacity up to the total target capacity.
The number of instances for which the Capacity Reservation Fleet reserves capacity is determined by the
total target capacity and the instance type weight that you specify for each instance type in the Capacity
Reservation Fleet (total target capacity/instance type weight=number of instances).
You can assign a total target capacity based on units that are meaningful to your workload. For example,
if your workload requires a certain number of vCPUs, you can assign the total target capacity based on
the number of vCPUs required. If your workload requires 2048 vCPUs, specify a total target capacity of
2048 and then assign instance type weights based on the number of vCPUs provided by the instance
types in the Fleet. For an example, see Instance type weight (p. 548).
Allocation strategy
The allocation strategy for your Capacity Reservation Fleet determines how it fulfills your request for
reserved capacity from the instance type specifications in the Capacity Reservation Fleet configuration.
Currently, only the prioritized allocation strategy is supported. With this strategy, the Capacity
Reservation Fleet creates Capacity Reservations using the priorities that you have assigned to each of
the instance type specifications in the Capacity Reservation Fleet configuration. Lower priority values
indicate higher priority for use. For example, say you create a Capacity Reservation Fleet that uses the
following instance types and priorities:
• m4.16xlarge — priority = 1
• m5.16xlarge — priority = 3
• m5.24xlarge — priority = 2
The Fleet first attempts to create Capacity Reservations for m4.16xlarge. If Amazon EC2 has
insufficient m4.16xlarge capacity, the Fleet attempts to create Capacity Reservations for
m5.24xlarge. If Amazon EC2 has insufficient m5.24xlarge capacity, the Fleet creates Capacity
Reservations for m5.16xlarge.
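The prioritized order described above can be sketched by sorting the specifications on their priority values; this is an illustration of the ordering rule, not part of the AWS CLI:

```shell
# Sort instance types by priority (ascending); lower values are tried first.
printf '%s\n' "1 m4.16xlarge" "3 m5.16xlarge" "2 m5.24xlarge" \
  | sort -n \
  | cut -d' ' -f2
```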
Instance type weight
The instance type weight is a weight that you assign to each instance type in the Capacity Reservation
Fleet. The weight determines how many units of capacity each instance of that specific instance type
counts toward the Fleet's total target capacity.
You can assign weights based on units that are meaningful to your workload. For example, if your
workload requires a certain number of vCPUs, you can assign weights based on the number of vCPUs
provided by each instance type in the Capacity Reservation Fleet. In this case, if you create a Capacity
Reservation Fleet using m4.16xlarge and m5.24xlarge instances, you would assign weights that
correspond to the number of vCPUs for each instance as follows:
• m4.16xlarge — 64 vCPUs — weight = 64 units
• m5.24xlarge — 96 vCPUs — weight = 96 units
The instance type weight determines the number of instances for which the Capacity Reservation
Fleet reserves capacity. For example, if a Capacity Reservation Fleet with a total target capacity of 384
units uses the instance types and weights in the preceding example, the Fleet could reserve capacity
for 6 m4.16xlarge instances (384 total target capacity/64 instance type weight=6 instances), or 4
m5.24xlarge instances (384 / 96 = 4).
If you do not assign instance type weights, or if you assign an instance type weight of 1, the total
target capacity is based purely on instance count. For example, if a Capacity Reservation Fleet with
a total target capacity of 384 units uses the instance types in the preceding example, but omits the
weights or specifies a weight of 1 for both instance types, the Fleet could reserve capacity for either 384
m4.16xlarge instances or 384 m5.24xlarge instances.
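The arithmetic above (total target capacity divided by instance type weight) can be checked directly, using the weights from the example:

```shell
# number of instances = total target capacity / instance type weight
echo "m4.16xlarge: $((384 / 64)) instances"   # weight 64
echo "m5.24xlarge: $((384 / 96)) instances"   # weight 96
echo "weight of 1: $((384 / 1)) instances"    # instance count only
```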
Instance type priority
The instance type priority is a value that you assign to the instance types in the Fleet. The priorities are
used to determine which of the instance types specified for the Fleet should be prioritized for use.
Work with Capacity Reservation Fleets
Topics
• Before you begin (p. 549)
• Capacity Reservation Fleet states (p. 549)
• Create a Capacity Reservation Fleet (p. 549)
• View a Capacity Reservation Fleet (p. 551)
• Modify a Capacity Reservation Fleet (p. 552)
• Cancel a Capacity Reservation Fleet (p. 553)
Capacity Reservation Fleet states
A Capacity Reservation Fleet can be in one of the following states:
• submitted — The Capacity Reservation Fleet request has been submitted and Amazon EC2 is
preparing to create the Capacity Reservations.
• modifying — The Capacity Reservation Fleet is being modified. The Fleet remains in this state until
the modification is complete.
• active — The Capacity Reservation Fleet has fulfilled its total target capacity and it is attempting to
maintain this capacity. The Fleet remains in this state until it is modified or deleted.
• partially_fulfilled — The Capacity Reservation Fleet has partially fulfilled its total target
capacity. There is insufficient Amazon EC2 capacity to fulfill the total target capacity. The Fleet is
attempting to asynchronously fulfill its total target capacity.
• expiring — The Capacity Reservation Fleet has reached its end date and it is in the process of
expiring. One or more of its Capacity Reservations might still be active.
• expired — The Capacity Reservation Fleet has reached its end date. The Fleet and its Capacity
Reservations are expired. The Fleet can't create new Capacity Reservations.
• cancelling — The Capacity Reservation Fleet is in the process of being cancelled. One or more of its
Capacity Reservations might still be active.
• cancelled — The Capacity Reservation Fleet has been manually cancelled. The Fleet and its Capacity
Reservations are cancelled and the Fleet can't create new Capacity Reservations.
• failed — The Capacity Reservation Fleet failed to reserve capacity for the specified instance types.
Create a Capacity Reservation Fleet
The number of instances for which the Capacity Reservation Fleet reserves capacity depends on the total
target capacity and instance type weights that you specify in the request. For more information, see
Instance type weight (p. 548) and Total target capacity (p. 547).
When you create the Fleet, you must specify the instance types to use and a priority for each of
those instance types. For more information, see Allocation strategy (p. 548) and Instance type
priority (p. 548).
Note
The AWSServiceRoleForEC2CapacityReservationFleet service-linked role is automatically
created in your account the first time that you create a Capacity Reservation Fleet. For more
information, see Using Service-Linked Roles for Capacity Reservation Fleet (p. 555).
Currently, Capacity Reservation Fleets support the open instance matching criteria only.
You can create a Capacity Reservation Fleet using the command line only. Use the
create-capacity-reservation-fleet command, specifying the instance types in the following format:
{
"InstanceType": "instance_type",
"InstancePlatform": "platform",
"Weight": instance_type_weight,
"AvailabilityZone": "availability_zone",
"AvailabilityZoneId": "az_id",
"EbsOptimized": true|false,
"Priority": instance_type_priority
}
Expected output.
{
"Status": "status",
"TotalFulfilledCapacity": fulfilled_capacity,
"CapacityReservationFleetId": "cr_fleet_id",
"TotalTargetCapacity": capacity_units
}
Example
instanceTypeSpecification.json
[
{
"InstanceType": "m5.xlarge",
"InstancePlatform": "Linux/UNIX",
"Weight": 3.0,
"AvailabilityZone":"us-east-1a",
"EbsOptimized": true,
"Priority" : 1
}
]
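With a specification file like the one above, the Fleet could be created as follows; the tenancy, end date, and target capacity mirror the example output below, but treat the exact invocation as a sketch:

```shell
# Create a Capacity Reservation Fleet from the example specification file
aws ec2 create-capacity-reservation-fleet \
    --total-target-capacity 24 \
    --allocation-strategy prioritized \
    --instance-match-criteria open \
    --tenancy default \
    --end-date 2021-12-31T23:59:59.000Z \
    --instance-type-specifications file://instanceTypeSpecification.json
```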
Example output.
{
"Status": "submitted",
"TotalFulfilledCapacity": 0.0,
"CapacityReservationFleetId": "crf-abcdef01234567890",
"TotalTargetCapacity": 24
}
View a Capacity Reservation Fleet
You can view a Capacity Reservation Fleet using the command line only.
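A sketch using the describe-capacity-reservation-fleets command; the Fleet ID matches the example output below:

```shell
# Describe a specific Capacity Reservation Fleet
aws ec2 describe-capacity-reservation-fleets \
    --capacity-reservation-fleet-ids crf-abcdef01234567890
```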
Expected output
{
"CapacityReservationFleets": [
{
"Status": "status",
"EndDate": "yyyy-mm-ddThh:mm:ss.000Z",
"InstanceMatchCriteria": "open",
"Tags": [],
"CapacityReservationFleetId": "cr_fleet_id",
"Tenancy": "dedicated|default",
"InstanceTypeSpecifications": [
{
"CapacityReservationId": "cr1_id",
"AvailabilityZone": "cr1_availability_zone",
"FulfilledCapacity": cr1_used_capacity,
"Weight": cr1_instance_type_weight,
"CreateDate": "yyyy-mm-ddThh:mm:ss.000Z",
"InstancePlatform": "cr1_platform",
"TotalInstanceCount": cr1_number_of_instances,
"Priority": cr1_instance_type_priority,
"EbsOptimized": true|false,
"InstanceType": "cr1_instance_type"
},
{
"CapacityReservationId": "cr2_id",
"AvailabilityZone": "cr2_availability_zone",
"FulfilledCapacity": cr2_used_capacity,
"Weight": cr2_instance_type_weight,
"CreateDate": "yyyy-mm-ddThh:mm:ss.000Z",
"InstancePlatform": "cr2_platform",
"TotalInstanceCount": cr2_number_of_instances,
"Priority": cr2_instance_type_priority,
"EbsOptimized": true|false,
"InstanceType": "cr2_instance_type"
},
],
"TotalTargetCapacity": total_target_capacity,
"TotalFulfilledCapacity": total_fulfilled_capacity,
"CreateTime": "yyyy-mm-ddThh:mm:ss.000Z",
"AllocationStrategy": "prioritized"
}
]
}
Example
Example output
{
"CapacityReservationFleets": [
{
"Status": "active",
"EndDate": "2021-12-31T23:59:59.000Z",
"InstanceMatchCriteria": "open",
"Tags": [],
"CapacityReservationFleetId": "crf-abcdef01234567890",
"Tenancy": "default",
"InstanceTypeSpecifications": [
{
"CapacityReservationId": "cr-1234567890abcdef0",
"AvailabilityZone": "us-east-1a",
"FulfilledCapacity": 5.0,
"Weight": 1.0,
"CreateDate": "2021-07-02T08:34:33.398Z",
"InstancePlatform": "Linux/UNIX",
"TotalInstanceCount": 5,
"Priority": 1,
"EbsOptimized": true,
"InstanceType": "m5.xlarge"
}
],
"TotalTargetCapacity": 5,
"TotalFulfilledCapacity": 5.0,
"CreateTime": "2021-07-02T08:34:33.397Z",
"AllocationStrategy": "prioritized"
}
]
}
Modify a Capacity Reservation Fleet
You can modify the total target capacity and end date of a Capacity Reservation Fleet at any time. When you
modify the total target capacity of a Capacity Reservation Fleet, the Fleet automatically creates new
Capacity Reservations, or modifies or cancels existing Capacity Reservations in the Fleet to meet the new
total target capacity. When you modify the end date for the Fleet, the end dates for all of the individual
Capacity Reservations are updated accordingly.
After you modify a Fleet, its status transitions to modifying. You can't attempt additional modifications
to a Fleet while it is in the modifying state.
You can't modify the tenancy, Availability Zone, instance types, instance platforms, priorities, or weights
used by a Capacity Reservation Fleet. If you need to change any of these parameters, you might need to
cancel the existing Fleet and create a new one with the required parameters.
You can modify a Capacity Reservation Fleet using the command line only.
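For example, such a modification can be sketched with the AWS CLI as follows; the Fleet ID and the new total target capacity are illustrative values:

```shell
# Increase the Fleet's total target capacity; the Fleet then creates or
# modifies Capacity Reservations to meet the new target.
aws ec2 modify-capacity-reservation-fleet \
    --capacity-reservation-fleet-id crf-abcdef01234567890 \
    --total-target-capacity 160
```

To change the end date instead, replace --total-target-capacity with --end-date (for example, --end-date 2022-12-31T23:59:59.000Z).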
Expected output
{
"Return": true
}
Example output
{
"Return": true
}
When you no longer need a Capacity Reservation Fleet and the capacity it reserves, you can cancel it.
When you cancel a Fleet, its status changes to cancelled and it can no longer create new Capacity
Reservations. Additionally, all of the individual Capacity Reservations in the Fleet are cancelled and
the instances that were previously running in the reserved capacity continue to run normally in shared
capacity.
You can cancel a Capacity Reservation Fleet using the command line only.
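For example, a cancellation can be sketched with the AWS CLI as follows; the Fleet ID is an illustrative value:

```shell
# Cancel one or more Capacity Reservation Fleets by ID. All individual
# Capacity Reservations in each Fleet are cancelled as well.
aws ec2 cancel-capacity-reservation-fleets \
    --capacity-reservation-fleet-ids crf-abcdef01234567890
```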
Expected output
{
"SuccessfulFleetCancellations": [
{
"CurrentFleetState": "state",
"PreviousFleetState": "state",
"CapacityReservationFleetId": "cr_fleet_id_1"
},
{
"CurrentFleetState": "state",
"PreviousFleetState": "state",
"CapacityReservationFleetId": "cr_fleet_id_2"
}
],
"FailedFleetCancellations": [
{
"CapacityReservationFleetId": "cr_fleet_id_3",
"CancelCapacityReservationFleetError": [
{
"Code": "code",
"Message": "message"
}
]
}
]
}
Example output
{
"SuccessfulFleetCancellations": [
{
"CurrentFleetState": "cancelling",
"PreviousFleetState": "active",
"CapacityReservationFleetId": "crf-abcdef01234567890"
}
],
"FailedFleetCancellations": []
}
This example creates a Capacity Reservation Fleet that uses a weighting system based on the number of
vCPUs provided by the specified instance types. The total target capacity is 480 vCPUs. The m5.4xlarge
provides 16 vCPUs and gets a weight of 16, while the m5.12xlarge provides 48 vCPUs and gets a weight
of 48. This weighting system configures the Capacity Reservation Fleet to reserve capacity for either 30
m5.4xlarge instances (480/16=30) or 10 m5.12xlarge instances (480/48=10).
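The arithmetic behind those instance counts can be checked directly; this sketch simply divides the total target capacity by each instance type's weight:

```shell
# Instances reserved per type = total target capacity / instance type weight
total_target_capacity=480
echo $(( total_target_capacity / 16 ))   # m5.4xlarge weight (16 vCPUs): prints 30
echo $(( total_target_capacity / 48 ))   # m5.12xlarge weight (48 vCPUs): prints 10
```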
The Fleet is configured to prioritize the m5.12xlarge capacity, which gets a priority of 1, while the
m5.4xlarge gets a lower priority of 2. This means that the Fleet attempts to reserve the m5.12xlarge
capacity first, and attempts to reserve the m5.4xlarge capacity only if Amazon EC2 has insufficient
m5.12xlarge capacity.
The Fleet reserves the capacity for Windows instances and the reservation automatically expires on
October 31, 2021 at 23:59:59 UTC.
[
{
"InstanceType": "m5.4xlarge",
"InstancePlatform":"Windows",
"Weight": 16,
"AvailabilityZone":"us-east-1",
"EbsOptimized": true,
"Priority" : 2
},
{
"InstanceType": "m5.12xlarge",
"InstancePlatform":"Windows",
"Weight": 48,
"AvailabilityZone":"us-east-1",
"EbsOptimized": true,
"Priority" : 1
}
]
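A specification like the one above is typically saved to a JSON file and passed to the create call. The following AWS CLI sketch assumes the file is named instanceTypeSpecification.json (the file name is an assumption):

```shell
# Create the Fleet: 480 vCPUs of target capacity, prioritized allocation,
# expiring on October 31, 2021 at 23:59:59 UTC.
aws ec2 create-capacity-reservation-fleet \
    --total-target-capacity 480 \
    --allocation-strategy prioritized \
    --instance-match-criteria open \
    --tenancy default \
    --end-date 2021-10-31T23:59:59.000Z \
    --instance-type-specifications file://instanceTypeSpecification.json
```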
A service-linked role makes setting up Capacity Reservation Fleet easier because you don’t have to
manually add the necessary permissions. Capacity Reservation Fleet defines the permissions of its
service-linked roles, and unless defined otherwise, only Capacity Reservation Fleet can assume its roles.
The defined permissions include the trust policy and the permissions policy, and that permissions policy
cannot be attached to any other IAM entity.
You can delete a service-linked role only after first deleting its related resources. This protects your
Capacity Reservation Fleet resources because you can't inadvertently remove permission to access the
resources.
The role uses the AWSEC2CapacityReservationFleetRolePolicy policy, which includes the following
permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeCapacityReservations",
"ec2:DescribeInstances"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateCapacityReservation",
"ec2:CancelCapacityReservation",
"ec2:ModifyCapacityReservation"
],
"Resource": [
"arn:aws:ec2:*:*:capacity-reservation/*"
],
"Condition": {
"StringLike": {
"ec2:CapacityReservationFleet": "arn:aws:ec2:*:*:capacity-reservation-
fleet/crf-*"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": [
"arn:aws:ec2:*:*:capacity-reservation/*"
],
"Condition": {
"StringEquals": {
"ec2:CreateAction": "CreateCapacityReservation"
}
}
}
]
}
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or
delete a service-linked role. For more information, see Service-Linked Role Permissions in the IAM User
Guide.
If you delete this service-linked role, and then need to create it again, you can use the same process to
recreate the role in your account. When you create a Capacity Reservation Fleet, Capacity Reservation
Fleet creates the service-linked role for you again.
On-Demand Capacity Reservations send metric data to CloudWatch every five minutes. Metrics are not
supported for Capacity Reservations that are active for less than five minutes.
For more information about viewing metrics in the CloudWatch console, see Using Amazon CloudWatch
Metrics. For more information about creating alarms, see Creating Amazon CloudWatch Alarms.
Metric Description
UsedInstanceCount The number of instances that are currently in use.
Unit: Count
AvailableInstanceCount The number of instances that are currently available.
Unit: Count
TotalInstanceCount The total number of instances for which capacity is reserved.
Unit: Count
InstanceUtilization The percentage of reserved capacity that is currently in use.
Unit: Percent
Dimension Description
CapacityReservationId This globally unique dimension filters the data you request for the
identified capacity reservation only.
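As a sketch, you can retrieve these metrics with the AWS CLI; the Capacity Reservation ID and the time range are placeholders, and AWS/EC2CapacityReservations is the namespace used for these metrics:

```shell
# Average utilization of a Capacity Reservation, in 5-minute periods
# (matching the 5-minute metric publication interval).
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2CapacityReservations \
    --metric-name InstanceUtilization \
    --dimensions Name=CapacityReservationId,Value=cr-1234567890abcdef0 \
    --statistics Average \
    --period 300 \
    --start-time 2021-07-02T00:00:00Z \
    --end-time 2021-07-02T01:00:00Z
```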
Instance lifecycle
An Amazon EC2 instance transitions through different states from the moment you launch it through to
its termination.
The following illustration represents the transitions between instance states. Notice that you can't
stop and start an instance store-backed instance. For more information about instance store-backed
instances, see Storage for the root device (p. 96).
The following list provides a brief description of each instance state and indicates whether instance
usage is billed.
• pending: The instance is preparing to enter the running state. An instance enters the pending state
when it launches for the first time, or when it is started after being in the stopped state. Not billed.
• running: The instance is running and ready for use. Billed.
• stopping: The instance is preparing to be stopped or hibernated. Not billed if preparing to stop;
billed if preparing to hibernate.
• stopped: The instance is shut down and cannot be used. The instance can be started at any time. Not
billed.
• shutting-down: The instance is preparing to be terminated. Not billed.
• terminated: The instance has been permanently deleted and cannot be started. Not billed.
Note
The table indicates billing for instance usage only. Some AWS resources, such as Amazon EBS
volumes and Elastic IP addresses, incur charges regardless of the instance's state. For more
information, see Avoiding Unexpected Charges in the AWS Billing and Cost Management User
Guide.
Note
Rebooting an instance doesn't start a new instance billing period because the instance stays in
the running state.
Instance launch
When you launch an instance, it enters the pending state. The instance type that you specified at launch
determines the hardware of the host computer for your instance. We use the Amazon Machine Image
(AMI) you specified at launch to boot the instance. After the instance is ready for you, it enters the
running state. You can connect to your running instance and use it the way that you'd use a computer
sitting in front of you.
As soon as your instance transitions to the running state, you're billed for each second, with a one-
minute minimum, that you keep the instance running, even if the instance remains idle and you don't
connect to it.
For more information, see Launch your instance (p. 563) and Connect to your Linux instance (p. 596).
Instance stop and start (Amazon EBS-backed instances only)
When you stop your instance, it enters the stopping state, and then the stopped state. We don't
charge usage or data transfer fees for your instance after you stop it, but we do charge for the storage
for any Amazon EBS volumes. While your instance is in the stopped state, you can modify certain
attributes of the instance, including the instance type.
When you start your instance, it enters the pending state, and we move the instance to a new host
computer (though in some cases, it remains on the current host). When you stop and start your instance,
you lose any data on the instance store volumes on the previous host computer.
Your instance retains its private IPv4 address, which means that an Elastic IP address associated with the
private IPv4 address or network interface is still associated with your instance. If your instance has an
IPv6 address, it retains its IPv6 address.
Each time you transition an instance from stopped to running, we charge per second when the
instance is running, with a minimum of one minute every time you start your instance.
For more information, see Stop and start your instance (p. 622).
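A stop and start can be sketched with the AWS CLI as follows; the instance ID is a placeholder:

```shell
# Stop the instance (it enters stopping, then stopped).
aws ec2 stop-instances --instance-ids i-1234567890abcdef0

# Later, start it again (it enters pending, then running).
aws ec2 start-instances --instance-ids i-1234567890abcdef0
```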
Instance hibernate (Amazon EBS-backed instances only)
When you hibernate your instance, it enters the stopping state, and then the stopped state. We don't
charge usage for a hibernated instance when it is in the stopped state, but we do charge while it is in
the stopping state, unlike when you stop an instance (p. 560) without hibernating it. We don't charge
usage for data transfer fees, but we do charge for the storage for any Amazon EBS volumes, including
storage for the RAM data.
When you start your hibernated instance, it enters the pending state, and we move the instance to a
new host computer (though in some cases, it remains on the current host).
Your instance retains its private IPv4 address, which means that an Elastic IP address associated with the
private IPv4 address or network interface is still associated with your instance. If your instance has an
IPv6 address, it retains its IPv6 address.
For more information, see Hibernate your On-Demand or Reserved Linux instance (p. 626).
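Hibernation is requested through the same stop call with the --hibernate flag; this sketch assumes the instance meets the hibernation prerequisites, and the instance ID is a placeholder:

```shell
# Hibernate an instance: the RAM contents are saved to the root volume.
aws ec2 stop-instances --instance-ids i-1234567890abcdef0 --hibernate
```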
Instance reboot
You can reboot your instance using the Amazon EC2 console, a command line tool, or the Amazon EC2
API. We recommend that you use Amazon EC2 to reboot your instance instead of running the operating
system reboot command from your instance.
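For example, a reboot through Amazon EC2 can be sketched with the AWS CLI; the instance ID is a placeholder:

```shell
# Request a reboot through Amazon EC2 rather than from within the guest OS.
aws ec2 reboot-instances --instance-ids i-1234567890abcdef0
```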
Rebooting an instance is equivalent to rebooting an operating system. The instance remains on the same
host computer and maintains its public DNS name, private IP address, and any data on its instance store
volumes. It typically takes a few minutes for the reboot to complete, but the time it takes to reboot
depends on the instance configuration.
Rebooting an instance doesn't start a new instance billing period; per second billing continues without a
further one-minute minimum charge.
Instance retirement
An instance is scheduled to be retired when AWS detects the irreparable failure of the underlying
hardware hosting the instance. When an instance reaches its scheduled retirement date, it is stopped or
terminated by AWS. If your instance root device is an Amazon EBS volume, the instance is stopped, and
you can start it again at any time. If your instance root device is an instance store volume, the instance is
terminated, and cannot be used again.
Instance termination
When you've decided that you no longer need an instance, you can terminate it. As soon as the status of
an instance changes to shutting-down or terminated, you stop incurring charges for that instance.
If you enable termination protection, you can't terminate the instance using the console, CLI, or API.
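Enabling termination protection can be sketched with the AWS CLI; the instance ID is a placeholder:

```shell
# Enable termination protection; terminate-instances calls now fail
# until protection is disabled with --no-disable-api-termination.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --disable-api-termination
```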
After you terminate an instance, it remains visible in the console for a short while, and then the entry
is automatically deleted. You can also describe a terminated instance using the CLI and API. Resources
(such as tags) are gradually disassociated from the terminated instance, and therefore might no longer
be visible on the terminated instance after a short while. You can't connect to or recover a terminated
instance.
Each Amazon EBS volume supports the DeleteOnTermination attribute, which controls whether the
volume is deleted or preserved when you terminate the instance it is attached to. The default is to delete
the root device volume and preserve any other EBS volumes.
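To preserve the root volume instead, you can change its DeleteOnTermination flag on a running instance; this sketch assumes the root device name is /dev/xvda, which varies by AMI, and the instance ID is a placeholder:

```shell
# Keep the root EBS volume when the instance is terminated.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --block-device-mappings '[{"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": false}}]'
```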
The following list shows, for each characteristic, the effect of rebooting, stopping and starting,
hibernating, or terminating an instance.
Host computer
• Reboot: The instance stays on the same host computer.
• Stop/start: We move the instance to a new host computer (though in some cases, it remains on the
current host).
• Hibernate: We move the instance to a new host computer (though in some cases, it remains on the
current host).
• Terminate: None.
Private and public IPv4 addresses
• Reboot: These addresses stay the same.
• Stop/start: The instance keeps its private IPv4 address. The instance gets a new public IPv4 address.
• Hibernate: The instance keeps its private IPv4 address. The instance gets a new public IPv4 address.
• Terminate: None.
Elastic IP addresses (IPv4)
• Reboot: The Elastic IP address remains associated with the instance.
• Stop/start: The Elastic IP address remains associated with the instance.
• Hibernate: The Elastic IP address remains associated with the instance.
• Terminate: The Elastic IP address is disassociated from the instance.
IPv6 address
• Reboot: The address stays the same.
• Stop/start: The instance keeps its IPv6 address.
• Hibernate: The instance keeps its IPv6 address.
• Terminate: None.
Instance store volumes
• Reboot: The data is preserved.
• Stop/start: The data is erased.
• Hibernate: The data is erased.
• Terminate: The data is erased.
Root device volume
• Reboot: The volume is preserved.
• Stop/start: The volume is preserved.
• Hibernate: The volume is preserved.
• Terminate: The volume is deleted by default.
RAM (contents of memory)
• Reboot: The RAM is erased.
• Stop/start: The RAM is erased.
• Hibernate: The RAM is saved to a file on the root volume.
• Terminate: The RAM is erased.
Billing
• Reboot: The instance billing hour doesn't change.
• Stop/start: You stop incurring charges for an instance as soon as its state changes to stopping.
Each time an instance transitions from stopped to running, we start a new instance billing period,
billing a minimum of one minute every time you start your instance.
• Hibernate: You incur charges while the instance is in the stopping state, but stop incurring charges
when the instance is in the stopped state. Each time an instance transitions from stopped to
running, we start a new instance billing period, billing a minimum of one minute every time you start
your instance.
• Terminate: You stop incurring charges for an instance as soon as its state changes to
shutting-down.
Operating system shutdown commands always terminate an instance store-backed instance. You can
control whether operating system shutdown commands stop or terminate an Amazon EBS-backed
instance. For more information, see Change the instance initiated shutdown behavior (p. 650).
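Changing that behavior can be sketched with the AWS CLI; the instance ID is a placeholder:

```shell
# Make OS-level shutdown commands terminate this EBS-backed instance
# instead of stopping it (the default is "stop").
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --instance-initiated-shutdown-behavior '{"Value": "terminate"}'
```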
When you sign up for AWS, you can get started with Amazon EC2 for free using the AWS Free Tier. You
can use the free tier to launch and use a t2.micro instance for free for 12 months (in Regions where
t2.micro is unavailable, you can use a t3.micro instance under the free tier). If you launch an instance
that is not within the free tier, you incur the standard Amazon EC2 usage fees for the instance. For more
information, see Amazon EC2 pricing.
You can launch an instance using the following methods; see the referenced documentation for each.
• [Amazon EC2 console] Use the launch instance wizard to specify the launch parameters. See Launch an
instance using the Launch Instance Wizard (p. 565).
• [Amazon EC2 console] Create a launch template and launch the instance from the launch template. See
Launch an instance from a launch template (p. 579).
• [Amazon EC2 console] Use an existing instance as the base. See Launch an instance using parameters
from an existing instance (p. 593).
• [Amazon EC2 console] Use an AMI that you purchased from the AWS Marketplace. See Launch an AWS
Marketplace instance (p. 595).
• [AWS CLI] Use an AMI that you select. See Using Amazon EC2 through the AWS CLI.
• [AWS Tools for Windows PowerShell] Use an AMI that you select. See Amazon EC2 from the AWS Tools
for Windows PowerShell.
• [AWS CLI] Use EC2 Fleet to provision capacity across different EC2 instance types and Availability
Zones, and across On-Demand Instance, Reserved Instance, and Spot Instance purchase models. See EC2
Fleet (p. 762).
• [AWS SDK] Use a language-specific AWS SDK to launch an instance. See AWS SDK for .NET and AWS SDK
for C++.
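As a minimal sketch of the AWS CLI method, the following launches a single free tier eligible instance; the AMI, key pair, and security group IDs are placeholders:

```shell
# Launch one t2.micro instance from an AMI that you select.
aws ec2 run-instances \
    --image-id ami-1234567890abcdef0 \
    --instance-type t2.micro \
    --count 1 \
    --key-name MyKeyPair \
    --security-group-ids sg-1234567890abcdef0
```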
Note
To launch an EC2 instance into an IPv6-only subnet, you must use Instances built on the Nitro
System (p. 232).
Note
When launching an IPv6-only instance, DHCPv6 might not immediately provide the instance with
the IPv6 DNS name server. During this initial delay, the instance might not be able to resolve
public domains.
For instances running on Amazon Linux 2, if you want to immediately update the /etc/resolv.conf
file with the IPv6 DNS name server, run the following cloud-init directive at launch:
#cloud-config
bootcmd:
- /usr/bin/sed -i -E 's,^nameserver\s+[\.[:digit:]]+$,nameserver fd00:ec2::253,' /etc/resolv.conf
Another option is to change the configuration file and re-image your AMI so that the file has the
IPv6 DNS name server address immediately on booting.
When you launch your instance, you can launch your instance in a subnet that is associated with one of
the following resources:
• An Availability Zone
• A Local Zone
• A Wavelength Zone
• An Outpost
After you launch your instance, you can connect to it and use it. To begin, the instance state is pending.
When the instance state is running, the instance has started booting. There might be a short time
before you can connect to the instance. Note that bare metal instance types might take longer to launch.
For more information about bare metal instances, see Instances built on the Nitro System (p. 232).
The instance receives a public DNS name that you can use to contact the instance from the internet.
The instance also receives a private DNS name that other instances within the same VPC can use to
contact the instance. For more information about connecting to your instance, see Connect to your Linux
instance (p. 596).
When you are finished with an instance, be sure to terminate it. For more information, see Terminate
your instance (p. 646).
Before you launch your instance, be sure that you are set up. For more information, see Set up to use
Amazon EC2 (p. 5).
Important
When you launch an instance that's not within the AWS Free Tier, you are charged for the time
that the instance is running, even if it remains idle.
When you launch an instance, you can either select an AMI from the list, or you can select a Systems
Manager parameter that points to an AMI ID. For more information, see Using a Systems Manager
parameter to find an AMI.
On the Choose an Amazon Machine Image (AMI) page, use one of two options to choose an AMI. Either
search the list of AMIs (p. 566), or search by Systems Manager parameter (p. 567).
Quick Start
A selection of popular AMIs to help you get started quickly. To select an AMI that is eligible for
the free tier, choose Free tier only in the left pane. These AMIs are marked Free tier eligible.
My AMIs
The private AMIs that you own, or private AMIs that have been shared with you. To view AMIs
that are shared with you, choose Shared with me in the left pane.
AWS Marketplace
An online store where you can buy software that runs on AWS, including AMIs. For more
information about launching an instance from the AWS Marketplace, see Launch an AWS
Marketplace instance (p. 595).
Community AMIs
The AMIs that AWS community members have made available for others to use. To filter the list
of AMIs by operating system, choose the appropriate check box under Operating system. You
can also filter by architecture and root device type.
2. Check the Root device type listed for each AMI. Notice which AMIs are the type that you need, either
ebs (backed by Amazon EBS) or instance-store (backed by instance store). For more information,
see Storage for the root device (p. 96).
3. Check the Virtualization type listed for each AMI. Notice which AMIs are the type that you need,
either hvm or paravirtual. For example, some instance types require HVM. For more information,
see Linux AMI virtualization types (p. 98).
4. Check the Boot mode listed for each AMI. Notice which AMIs use the boot mode that you need,
either legacy-bios or uefi. For more information, see Boot modes (p. 100).
5. Choose an AMI that meets your needs, and then choose Select.
To remain eligible for the free tier, choose the t2.micro instance type (or the t3.micro instance
type in Regions where t2.micro is unavailable). For more information, see Burstable performance
instances (p. 251).
By default, the wizard displays current generation instance types, and selects the first available instance
type based on the AMI that you selected. To view previous generation instance types, choose All
generations from the filter list.
Note
To set up an instance quickly for testing purposes, choose Review and Launch to accept the
default configuration settings, and launch your instance. Otherwise, to configure your instance
further, choose Next: Configure Instance Details.
• Subnet: You can launch an instance in a subnet associated with an Availability Zone, Local Zone,
Wavelength Zone, or Outpost.
To launch the instance in an Availability Zone, select the subnet into which to launch your instance.
You can select No preference to let AWS choose a default subnet in any Availability Zone. To create a
new subnet, choose Create new subnet to go to the Amazon VPC console. When you are done, return
to the wizard and choose Refresh to load your subnet in the list.
To launch the instance in a Local Zone, select a subnet that you created in the Local Zone.
To launch an instance in an Outpost, select a subnet in a VPC that you associated with an Outpost.
• Auto-assign Public IP: Specify whether your instance receives a public IPv4 address. By default,
instances in a default subnet receive a public IPv4 address and instances in a nondefault subnet do not.
You can select Enable or Disable to override the subnet's default setting. For more information, see
Public IPv4 addresses (p. 1019).
• Auto-assign IPv6 IP: Specify whether your instance receives an IPv6 address from the range of the
subnet. Select Enable or Disable to override the subnet's default setting. This option is only available
if you've associated an IPv6 CIDR block with your VPC and subnet. For more information, see Your VPC
and Subnets in the Amazon VPC User Guide.
• Hostname type: Select whether you want the guest OS hostname of the EC2 instance to be the Resource
name or the IP name. For more information about hostname types and these options, see Amazon EC2
instance hostname types (p. 1034).
• DNS Hostname: Determines whether DNS queries to the IP name and/or the Resource name respond
with the IPv4 address (A record), the IPv6 address (AAAA record), or both. For more information about
these options, see Amazon EC2 instance hostname types (p. 1034).
• Domain join directory: Select the AWS Directory Service directory (domain) to which your Linux
instance is joined after launch. If you select a domain, you must select an IAM role with the required
permissions. For more information, see Seamlessly Join a Linux EC2 Instance to Your AWS Managed
Microsoft AD Directory.
• Placement group: A placement group determines the placement strategy of your instances. Select
an existing placement group, or create a new one. This option is only available if you've selected an
instance type that supports placement groups. For more information, see Placement groups (p. 1167).
• Capacity Reservation: Specify whether to launch the instance into shared capacity, any open Capacity
Reservation, a specific Capacity Reservation, or a Capacity Reservation group. For more information,
see Launch instances into an existing Capacity Reservation (p. 528).
• IAM role: Select an AWS Identity and Access Management (IAM) role to associate with the instance. For
more information, see IAM roles for Amazon EC2 (p. 1275).
• CPU options: Choose Specify CPU options to specify a custom number of vCPUs during launch.
Set the number of CPU cores and threads per core. For more information, see Optimize CPU
options (p. 676).
• Shutdown behavior: Select whether the instance should stop or terminate when shut down. For more
information, see Change the instance initiated shutdown behavior (p. 650).
• Stop - Hibernate behavior: To enable hibernation, select this check box. This option is only available
if your instance meets the hibernation prerequisites. For more information, see Hibernate your On-
Demand or Reserved Linux instance (p. 626).
• Enable termination protection: To prevent accidental termination, select this check box. For more
information, see Enable termination protection (p. 649).
• Monitoring: Select this check box to enable detailed monitoring of your instance using Amazon
CloudWatch. Additional charges apply. For more information, see Monitor your instances using
CloudWatch (p. 958).
• EBS-optimized instance: An Amazon EBS-optimized instance uses an optimized configuration stack
and provides additional, dedicated capacity for Amazon EBS I/O. If the instance type supports this
feature, select this check box to enable it. Additional charges apply. For more information, see Amazon
EBS–optimized instances (p. 1556).
• Tenancy: If you are launching your instance into a VPC, you can choose to run your instance on
isolated, dedicated hardware (Dedicated) or on a Dedicated Host (Dedicated host). Additional charges
may apply. For more information, see Dedicated Instances (p. 516) and Dedicated Hosts (p. 483).
• T2/T3 Unlimited: Select this check box to enable applications to burst beyond the baseline for as
long as needed. Additional charges may apply. For more information, see Burstable performance
instances (p. 251).
• File systems: To create a new file system to mount to your instance, choose Create new file system,
enter a name for the new file system, and then choose Create. The file system is created using Amazon
EFS Quick Create, which applies the service recommended settings. The security groups required to
enable access to the file system are automatically created and attached to the instance and the mount
targets of the file system. You can also choose to manually create and attach the required security
groups. For more information, see Create an EFS file system using Amazon EFS Quick Create (p. 1633).
To mount one or more existing Amazon EFS file systems to your instance, choose Add file system and
then choose the file systems to mount and the mount points to use. For more information, see Create
an EFS file system and mount it to your instance (p. 1634).
• Network interfaces: If you selected a specific subnet, you can specify up to two network interfaces for
your instance:
• For Network Interface, select New network interface to let AWS create a new interface, or select an
existing, available network interface.
• For Primary IP, enter a private IPv4 address from the range of your subnet, or leave Auto-assign to
let AWS choose a private IPv4 address for you.
• For Secondary IP addresses, choose Add IP to assign more than one private IPv4 address to the
selected network interface.
• (IPv6-only) For IPv6 IPs, choose Add IP, and enter an IPv6 address from the range of the subnet, or
leave Auto-assign to let AWS choose one for you.
• Network Card Index: The index of the network card. The primary network interface must be
assigned to network card index 0. Some instance types support multiple network cards.
• Choose Add Device to add a secondary network interface. A secondary network interface can reside
in a different subnet of the VPC, provided it's in the same Availability Zone as your instance.
For more information, see Elastic network interfaces (p. 1067). If you specify more than one network
interface, your instance cannot receive a public IPv4 address. Additionally, if you specify an existing
network interface for eth0, you cannot override the subnet's public IPv4 setting using Auto-assign
Public IP. For more information, see Assign a public IPv4 address during instance launch (p. 1023).
• Kernel ID: (Only valid for paravirtual (PV) AMIs) Select Use default unless you want to use a specific
kernel.
• RAM disk ID: (Only valid for paravirtual (PV) AMIs) Select Use default unless you want to use a specific
RAM disk. If you have selected a kernel, you may need to select a specific RAM disk with the drivers to
support it.
• Enclave: Select Enable to enable the instance for AWS Nitro Enclaves. For more information, see What
is AWS Nitro Enclaves? in the AWS Nitro Enclaves User Guide.
• Metadata accessible: You can enable or disable access to the instance metadata. For more
information, see Use IMDSv2 (p. 711).
• Metadata transport: You can enable or disable the access method to the instance metadata service
that's available for this EC2 instance based on the IP address type (IPv4, IPv6, or IPv4 and IPv6) of the
instance. For more information, see Retrieve instance metadata (p. 718).
• Metadata version: If you enable access to the instance metadata, you can choose to require the use of
Instance Metadata Service Version 2 when requesting instance metadata. For more information, see
Configure instance metadata options for new instances (p. 715).
• Metadata token response hop limit: If you enable instance metadata, you can set the allowable
number of network hops for the metadata token. For more information, see Use IMDSv2 (p. 711).
• User data: You can specify user data to configure an instance during launch, or to run a configuration
script. To attach a file, select the As file option and browse for the file to attach.
• Type: Select instance store or Amazon EBS volumes to associate with your instance. The types of
volume available in the list depend on the instance type you've chosen. For more information, see
Amazon EC2 instance store (p. 1613) and Amazon EBS volumes (p. 1327).
• Device: Select from the list of available device names for the volume.
• Snapshot: Enter the name or ID of the snapshot from which to restore a volume. You can also search
for available shared and public snapshots by typing text into the Snapshot field. Snapshot descriptions
are case-sensitive.
• Size: For EBS volumes, you can specify a storage size. Even if you have selected an AMI and
instance that are eligible for the free tier, to stay within the free tier, you must stay under 30 GiB
of total storage. For more information, see Constraints on the size and configuration of an EBS
volume (p. 1346).
• Volume Type: For EBS volumes, select a volume type. For more information, see Amazon EBS volume
types (p. 1329).
• IOPS: If you have selected a Provisioned IOPS SSD volume type, then you can enter the number of I/O
operations per second (IOPS) that the volume can support.
• Delete on Termination: For Amazon EBS volumes, select this check box to delete the volume when
the instance is terminated. For more information, see Preserve Amazon EBS volumes on instance
termination (p. 650).
• Encrypted: If the instance type supports EBS encryption, you can specify the encryption state of the
volume. If you have enabled encryption by default in this Region, the default customer managed key
is selected for you. You can select a different key or disable encryption. For more information, see
Amazon EBS encryption (p. 1536).
• To select an existing security group, choose Select an existing security group, and select your security
group. You can't edit the rules of an existing security group, but you can copy them to a new group by
choosing Copy to new. Then you can add rules as described in the next step.
• To create a new security group, choose Create a new security group. The wizard automatically defines
the launch-wizard-x security group and creates an inbound rule to allow you to connect to your
instance over SSH (port 22).
• You can add rules to suit your needs. For example, if your instance is a web server, open ports 80
(HTTP) and 443 (HTTPS) to allow internet traffic.
570
Amazon Elastic Compute Cloud
User Guide for Linux Instances
Launch
To add a rule, choose Add Rule, select the protocol to open to network traffic, and then specify the
source. Choose My IP from the Source list to let the wizard add your computer's public IP address.
However, if you are connecting through an ISP or from behind your firewall without a static IP address,
you need to find out the range of IP addresses used by client computers.
Warning
Rules that enable all IP addresses (0.0.0.0/0) to access your instance over SSH or RDP are
acceptable for this short exercise, but are unsafe for production environments. You should
authorize only a specific IP address or range of addresses to access your instance.
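The restricted rule the warning recommends can be sketched as an entry in the `IpPermissions` structure used by the `AuthorizeSecurityGroupIngress` API. The address below is a documentation placeholder (from the RFC 5737 range), standing in for your own static IP.

```python
# Sketch: an SSH ingress rule restricted to a single IPv4 address (/32),
# expressed in the IpPermissions structure used by
# AuthorizeSecurityGroupIngress. 203.0.113.25 is a placeholder address.

def ssh_rule_for(ip_address):
    """Allow SSH (port 22) only from the given IPv4 address."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": f"{ip_address}/32",
                      "Description": "SSH from my IP only"}],
    }

rule = ssh_rule_for("203.0.113.25")
```

Compare this with the unsafe alternative: a `CidrIp` of `0.0.0.0/0` would deliver SSH traffic from any address.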
In the Select an existing key pair or create a new key pair dialog box, you can choose an existing
key pair, or create a new one. For example, choose Choose an existing key pair, then select the key
pair you created when getting set up. For more information, see Amazon EC2 key pairs and Linux
instances (p. 1288).
Important
If you choose the Proceed without key pair option, you won't be able to connect to the instance
unless you choose an AMI that is configured to allow users another way to log in.
To launch your instance, select the acknowledgment check box, then choose Launch Instances.
(Optional) You can create a status check alarm for the instance (additional fees may apply). If you're
not sure, you can always add one later. On the confirmation screen, choose Create status check alarms and
follow the directions. For more information, see Create and edit status check alarms (p. 932).
If the instance fails to launch or the state immediately goes to terminated instead of running, see
Troubleshoot instance launch issues (p. 1683).
This is prerelease documentation for the new launch instance wizard, which is in beta release. The
documentation and the feature are both subject to change. For beta terms and conditions, see the
Betas and Previews section in the AWS Service Terms.
You can launch an instance using the launch instance wizard. The launch instance wizard specifies
the launch parameters that are required for launching an instance. Where the launch instance wizard
provides a default value, you can accept the default or specify your own value. If you accept the default
values, then it's possible to launch an instance by selecting only a key pair.
Before you launch your instance, be sure that you are set up. For more information, see Set up to use
Amazon EC2 (p. 5).
Important
When you launch an instance that's not within the AWS Free Tier, you are charged for the time
that the instance is running, even if it remains idle.
Topics
• About the new launch instance wizard (p. 572)
• Quickly launch an instance (p. 572)
Current improvements
Quickly get up and running with our new one-page design. See all of your settings in one location.
No need to navigate back and forth between steps to ensure your configuration is correct. Use the
Summary panel for an overview and to easily navigate the page.
• Improved AMI selector
New users – Use the Quick Start AMI selector to select an operating system so that you can quickly
launch an instance.
Experienced users – The AMI selector displays your recently used AMIs and the AMIs that you own,
making it easier to select the AMIs that you care about. You can still browse the full catalog to find
new AMIs.
Work in progress
We’re working continuously to improve the experience. Here’s what we’re currently working on:
• Missing features
• Missing features to be added in the future: Domain join, CPU options, EFS file system integration,
and AMI selection by using an Amazon EC2 Systems Manager parameter.
• Defaults and dependency assistance
• Default values will be provided for all fields.
• Additional logic will be added to help you set up your instance configuration correctly (for example,
we’ll disable parameters that are not available with your current settings).
• Further simplified designs
• Simplified views and summaries and a more responsive design will be added to make the one-page
experience more scalable.
• Simplified networking features will be added to help you to configure your firewall rules quickly
and easily (for example, we’ll select common preset rules).
There will be many more improvements to the launch experience in the months ahead.
We're in the process of rolling out the new launch instance wizard to several AWS Regions. If it's not
available in your currently selected Region, you can select a different Region to check if it's available
there.
We’d appreciate your feedback on this early release. We’ll use your feedback to continue improving the
experience over the next few months. You can send us feedback directly from the EC2 console, or use the
Provide feedback link at the bottom of this page.
Except for the key pair, the launch instance wizard provides default values for all of the parameters. You
can accept any or all of the defaults, or configure an instance by specifying your own values for each
parameter. The parameters are grouped in the launch instance wizard. The following instructions take
you through each parameter group.
The instance name is a tag, where the key is Name, and the value is the name that you specify. You can
tag the instance, the volumes, and elastic graphics. For Spot Instances, you can tag the Spot Instance
request only. For information about tags, see Tag your Amazon EC2 resources (p. 1666).
• For Name, enter a descriptive name for the instance. If you don't specify a name, the instance can be
identified by its ID, which is automatically generated when you launch the instance.
• To add additional tags, choose Add additional tags. Choose Add tag, and then enter a key and value,
and select the resource type to tag. Choose Add tag again for each additional tag to add.
An Amazon Machine Image (AMI) contains the information required to create an instance. For example,
an AMI might contain the software that's required to act as a web server, such as Linux, Apache, and your
website.
You can find a suitable AMI as follows. With each option for finding an AMI, you can choose Cancel (at
top right) to return to the launch instance wizard without choosing an AMI.
Search bar
To search through all available AMIs, enter a keyword in the AMI search bar and then press Enter. To
select an AMI, choose Select.
Recents
Choose Recently launched or Currently in use, and then, from Amazon Machine Image (AMI),
select an AMI.
My AMIs
The private AMIs that you own, or private AMIs that have been shared with you.
Choose Owned by me or Shared with me, and then, from Amazon Machine Image (AMI), select an
AMI.
Quick Start
AMIs are grouped by operating system (OS) to help you get started quickly.
First select the OS that you need, and then, from Amazon Machine Image (AMI), select an AMI. To
select an AMI that is eligible for the free tier, make sure that the AMI is marked Free tier eligible.
Browse more AMIs
The AWS Marketplace is an online store where you can buy software that runs on AWS, including
AMIs. For more information about launching an instance from the AWS Marketplace, see Launch
an AWS Marketplace instance (p. 595). In Community AMIs, you can find AMIs that AWS
community members have made available for others to use.
• To filter the list of AMIs, select one or more check boxes under Refine results on the left of the
screen. The filter options are different depending on the selected search category.
• Check the Root device type listed for each AMI. Notice which AMIs are the type that you need:
either ebs (backed by Amazon EBS) or instance-store (backed by instance store). For more
information, see Storage for the root device (p. 96).
• Check the Virtualization type listed for each AMI. Notice which AMIs are the type that you need:
either hvm or paravirtual. For example, some instance types require HVM. For more information,
see Linux AMI virtualization types (p. 98).
• Check the Boot mode listed for each AMI. Notice which AMIs use the boot mode that you need:
either legacy-bios or uefi. For more information, see Boot modes (p. 100).
• Choose an AMI that meets your needs, and then choose Select.
Instance type
The instance type defines the hardware configuration and size of the instance. Larger instance types have
more CPU and memory. For more information, see Instance types.
• For Instance type, select the instance type for the instance.
If your AWS account is less than 12 months old, you can use EC2 under the Free Tier by selecting the
t2.micro instance type (or the t3.micro instance type in Regions where t2.micro is unavailable). For
more information about t2.micro and t3.micro, see Burstable performance instances (p. 251).
• Compare instance types: You can compare different instance types by the following attributes:
number of vCPUs, architecture, amount of memory (GiB), amount of storage (GB), storage type, and
network performance.
For Key pair name, choose an existing key pair, or choose Create new key pair to create a new one. For
more information, see Amazon EC2 key pairs and Linux instances (p. 1288).
Important
If you choose the Proceed without key pair (Not recommended) option, you won't be able to
connect to the instance unless you choose an AMI that is configured to allow users another way
to log in.
Network settings
• Networking platform: If applicable, whether to launch the instance into a VPC or EC2-Classic. If you
choose Virtual Private Cloud (VPC), specify the subnet in the Network interfaces section. If you
choose EC2-Classic, ensure that the specified instance type is supported in EC2-Classic and then
specify the Availability Zone for the instance. Note that we are retiring EC2-Classic on August 15, 2022.
• VPC: Select an existing VPC in which to create the security group.
• Subnet: You can launch an instance in a subnet associated with an Availability Zone, Local Zone,
Wavelength Zone, or Outpost.
To launch the instance in an Availability Zone, select the subnet in which to launch your instance. To
create a new subnet, choose Create new subnet to go to the Amazon VPC console. When you are
done, return to the wizard and choose the Refresh icon to load your subnet in the list.
To launch the instance in a Local Zone, select a subnet that you created in the Local Zone.
To launch an instance in an Outpost, select a subnet in a VPC that you associated with the Outpost.
• Auto-assign Public IP: Specify whether your instance receives a public IPv4 address. By default,
instances in a default subnet receive a public IPv4 address, and instances in a nondefault subnet do
not. You can select Enable or Disable to override the subnet's default setting. For more information,
see Public IPv4 addresses (p. 1019).
• Firewall (security groups): Use a security group to define firewall rules for your instance. These
rules specify which incoming network traffic is delivered to your instance. All other traffic is
ignored. For more information about security groups, see Amazon EC2 security groups for Linux
instances (p. 1303).
If you add a network interface, you must specify the same security group in the network interface.
You can add rules to suit your needs. For example, if your instance is a web server, open ports 80
(HTTP) and 443 (HTTPS) to allow internet traffic.
To add a rule, choose Add security group rule. For Type, select the network traffic type. The
Protocol field is automatically filled in with the protocol to open to network traffic. For Source
type, select the source type. To let the wizard add your computer's public IP address, choose My
IP. However, if you are connecting through an ISP or from behind your firewall without a static IP
address, you need to find out the range of IP addresses used by client computers.
Warning
Rules that enable all IP addresses (0.0.0.0/0) to access your instance over SSH or RDP are
acceptable if you are briefly launching a test instance and will stop or terminate it soon, but
are unsafe for production environments. You should authorize only a specific IP address or
range of addresses to access your instance.
• Advanced network configuration – Available only if you choose a subnet.
Network interface
• Device index: The index of the network card. The primary network interface must be assigned to
network card index 0. Some instance types support multiple network cards.
• Network interface: Select New interface to let Amazon EC2 create a new interface, or select an
existing, available network interface.
• Description: (Optional) A description for the new network interface.
• Subnet: The subnet in which to create the new network interface. For the primary network interface
(eth0), this is the subnet in which the instance is launched. If you've entered an existing network
interface for eth0, the instance is launched in the subnet in which the network interface is located.
• Security groups: One or more security groups in your VPC with which to associate the network
interface.
• Primary IP: A private IPv4 address from the range of your subnet. Leave blank to let Amazon EC2
choose a private IPv4 address for you.
• Secondary IP: One or more additional private IPv4 addresses from the range of your subnet. Choose
Manually assign and enter an IP address. Choose Add IP to add another IP address. Alternatively,
choose Automatically assign to let Amazon EC2 choose one for you, and enter a value to indicate
the number of IP addresses to add.
• (IPv6-only) IPv6 IPs: An IPv6 address from the range of the subnet. Choose Manually assign and
enter an IP address. Choose Add IP to add another IP address. Alternatively, choose Automatically
assign to let Amazon EC2 choose one for you, and enter a value to indicate the number of IP
addresses to add.
• IPv4 Prefixes: The IPv4 prefixes for the network interface.
• IPv6 Prefixes: The IPv6 prefixes for the network interface.
• Delete on termination: Whether the network interface is deleted when the instance is deleted.
• Elastic Fabric Adapter: Indicates whether the network interface is an Elastic Fabric Adapter. For
more information, see Elastic Fabric Adapter.
Choose Add network interface to add a secondary network interface. A secondary network interface
can reside in a different subnet of the VPC, provided it's in the same Availability Zone as your instance.
For more information, see Elastic network interfaces (p. 1067). If you specify more than one network
interface, your instance cannot receive a public IPv4 address. Additionally, if you specify an existing
network interface for eth0, you cannot override the subnet's public IPv4 setting using Auto-assign
Public IP. For more information, see Assign a public IPv4 address during instance launch (p. 1023).
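The network interface fields above correspond to the `NetworkInterfaces` parameter of the `RunInstances` API (`DeviceIndex`, `SubnetId`, `Groups`, and so on). A minimal sketch of one interface specification, with placeholder subnet and security group IDs:

```python
# Sketch: one entry in the RunInstances NetworkInterfaces parameter.
# The subnet and security group IDs are placeholders.

def network_interface_spec(device_index, subnet_id, security_group_ids,
                           private_ip=None, delete_on_termination=True):
    """Build a network interface specification for instance launch."""
    spec = {
        "DeviceIndex": device_index,       # 0 = primary interface (eth0)
        "SubnetId": subnet_id,
        "Groups": security_group_ids,
        "DeleteOnTermination": delete_on_termination,
    }
    if private_ip is not None:             # omit to let Amazon EC2 choose one
        spec["PrivateIpAddress"] = private_ip
    return spec

eni = network_interface_spec(0, "subnet-EXAMPLE", ["sg-EXAMPLE"])
```

A secondary interface would be a second entry with `DeviceIndex` 1 and, if you like, a different subnet in the same Availability Zone.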
Configure storage
The AMI you selected includes one or more volumes of storage, including the root volume. You can
specify additional volumes to attach to the instance.
You can use the Simple or Advanced view. With the Simple view, you specify the size and type of
volume. To specify all volume parameters, choose the Advanced view (at top right of the card).
By using the Advanced view, you can configure each volume as follows:
• Volume type: Select Amazon EBS or instance store volumes to associate with your instance.
The volume types available in the list depend on the instance type that you've chosen. For more
information, see Amazon EC2 instance store (p. 1613) and Amazon EBS volumes (p. 1327).
• Device name: Select from the list of available device names for the volume.
• Snapshot: Select the snapshot from which to restore the volume. You can search for available shared
and public snapshots by entering text into the Snapshot field.
• Size (GiB): For EBS volumes, you can specify a storage size. If you have selected an AMI and instance
that are eligible for the free tier, keep in mind that to stay within the free tier, you must stay under 30
GiB of total storage. For more information, see Constraints on the size and configuration of an EBS
volume (p. 1346).
• Volume type: For EBS volumes, select a volume type. For more information, see Amazon EBS volume
types (p. 1329).
• IOPS: If you have selected a Provisioned IOPS SSD volume type, then you can enter the number of I/O
operations per second (IOPS) that the volume can support.
• Delete on termination: For Amazon EBS volumes, choose Yes to delete the volume when the instance
is terminated, or choose No to keep the volume. For more information, see Preserve Amazon EBS
volumes on instance termination (p. 650).
• Encrypted: If the instance type supports EBS encryption, you can choose Yes to enable encryption for
the volume. If you have enabled encryption by default in this Region, encryption is enabled for you.
For more information, see Amazon EBS encryption (p. 1536).
• Key: If you selected Yes for Encrypted, then you must select a customer managed key to use to
encrypt the volume. If you have enabled encryption by default in this Region, the default customer
managed key is selected for you. You can select a different key or specify the ARN of any customer
managed key that you created.
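The free tier constraint mentioned under Size (GiB) is a simple sum across all volumes. A sketch of the check, assuming the 30 GiB figure stated above:

```python
# Sketch: the free-tier storage constraint described above — total EBS
# storage across all volumes must stay within 30 GiB.

FREE_TIER_EBS_LIMIT_GIB = 30

def within_free_tier_storage(volume_sizes_gib):
    """Return True if the combined volume size stays within the limit."""
    return sum(volume_sizes_gib) <= FREE_TIER_EBS_LIMIT_GIB

within_free_tier_storage([8, 20])   # root volume + one extra: 28 GiB, OK
within_free_tier_storage([8, 30])   # 38 GiB total: exceeds the limit
```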
Advanced details
For Advanced details, expand the section to view the fields and specify any additional parameters for
the instance.
• Purchasing option: Choose Request Spot Instances to request Spot Instances at the Spot price,
capped at the On-Demand price, and choose Customize to change the default Spot Instance settings.
You can set your maximum price, and change the request type, request duration, and interruption
behavior. If you do not request a Spot Instance, EC2 launches an On-Demand Instance by default. For
more information, see Spot Instances (p. 424).
• IAM instance profile: Select an AWS Identity and Access Management (IAM) instance profile to
associate with the instance. For more information, see IAM roles for Amazon EC2 (p. 1275).
• Hostname type: Select whether you want the guest OS hostname of the EC2 instance to be the Resource
name (RBN) or the IP name (IPBN). For more information about hostname types and these options, see
Amazon EC2 instance hostname types (p. 1034).
• DNS Hostname: Determines whether DNS queries to the IP name and/or the Resource name respond
with the IPv4 address (A record), the IPv6 address (AAAA record), or both. For more information about
these options, see Amazon EC2 instance hostname types (p. 1034).
• Shutdown behavior: Select whether the instance should stop or terminate when shut down. For more
information, see Change the instance initiated shutdown behavior (p. 650).
• Stop - Hibernate behavior: To enable hibernation, choose Enable. This field is available only if your
instance meets the hibernation prerequisites. For more information, see Hibernate your On-Demand or
Reserved Linux instance (p. 626).
• Termination protection: To prevent accidental termination, choose Enable. For more information, see
Enable termination protection (p. 649).
• Detailed CloudWatch monitoring: Choose Enable to enable detailed monitoring of your instance
using Amazon CloudWatch. Additional charges apply. For more information, see Monitor your
instances using CloudWatch (p. 958).
• Credit specification: Choose Unlimited to enable applications to burst beyond the baseline for as long
as needed. This field is only valid for T instances. Additional charges may apply. For more information,
see Burstable performance instances (p. 251).
• Placement group name: Specify a placement group in which to launch the instance. You can select an
existing placement group, or create a new one. Not all instance types support launching an instance in
a placement group. For more information, see Placement groups (p. 1167).
• EBS-optimized instance: An instance that's optimized for Amazon EBS uses an optimized
configuration stack and provides additional, dedicated capacity for Amazon EBS I/O. If the instance
type supports this feature, choose Enable to enable it. Additional charges apply. For more information,
see Amazon EBS–optimized instances (p. 1556).
• Capacity Reservation: Specify whether to launch the instance into any open Capacity Reservation
(Open), a specific Capacity Reservation (Target by ID), or a Capacity Reservation group (Target
by group). To specify that a Capacity Reservation should not be used, choose None. For more
information, see Launch instances into an existing Capacity Reservation (p. 528).
• Tenancy: Choose whether to run your instance on shared hardware (Shared), isolated, dedicated
hardware (Dedicated), or on a Dedicated Host (Dedicated host). If you choose to launch the instance
onto a Dedicated Host, you can specify whether to launch the instance into a host resource group or
you can target a specific Dedicated Host. Additional charges may apply. For more information, see
Dedicated Instances (p. 516) and Dedicated Hosts (p. 483).
• RAM disk ID: (Only valid for paravirtual (PV) AMIs) Select a RAM disk for the instance. If you have
selected a kernel, you might need to select a specific RAM disk with the drivers to support it.
• Kernel ID: (Only valid for paravirtual (PV) AMIs) Select a kernel for the instance.
• Nitro Enclave: Allows you to create isolated execution environments, called enclaves, from Amazon
EC2 instances. Select Enable to enable the instance for AWS Nitro Enclaves. For more information, see
What is AWS Nitro Enclaves? in the AWS Nitro Enclaves User Guide.
• Metadata accessible: You can enable or disable access to the instance metadata. For more
information, see Configure instance metadata options for new instances (p. 715).
• Metadata version: If you enable access to the instance metadata, you can choose to require the use of
Instance Metadata Service Version 2 when requesting instance metadata. For more information, see
Configure instance metadata options for new instances (p. 715).
• Metadata response hop limit: If you enable instance metadata, you can set the allowable number of
network hops for the metadata token. For more information, see Configure instance metadata options
for new instances (p. 715).
• User data: You can specify user data to configure an instance during launch, or to run a configuration
script. For more information, see Run commands on your Linux instance at launch (p. 704).
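User data is typically a shell script run at first boot (for example, by cloud-init on Amazon Linux). The underlying API expects it base64-encoded; the console and CLI normally encode it for you. A sketch of a minimal web server bootstrap script and the encoding step:

```python
import base64

# Sketch: a user data script (run once at first boot) and the base64
# encoding the EC2 API expects. The console and CLI encode it for you;
# this shows what happens under the hood. The package names assume an
# Amazon Linux-style AMI.

user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")
decoded = base64.b64decode(encoded).decode("utf-8")  # round-trips exactly
```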
Summary
Use the Summary panel to specify the number of instances to launch, to review your instance
configuration, and to launch your instances.
• Number of instances: Enter the number of instances to launch. All of the instances will launch with
the same configuration.
Tip
To ensure faster instance launches, break up large requests into smaller batches. For example,
create five separate launch requests for 100 instances each instead of one launch request for
500 instances.
• Review the details of your instance, and make any necessary changes. You can navigate directly to a
section by choosing its link in the Summary panel.
• When you're ready to launch your instance, choose Launch instance.
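The batching tip above can be sketched as a simple split of the total count into per-request batch sizes:

```python
# Sketch: splitting one large launch request into smaller batches, as the
# tip above suggests (e.g. 500 instances as five requests of 100).

def batch_counts(total, batch_size):
    """Split a total instance count into launch-request batch sizes."""
    full, remainder = divmod(total, batch_size)
    return [batch_size] * full + ([remainder] if remainder else [])

batch_counts(500, 100)   # -> [100, 100, 100, 100, 100]
```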
If the instance fails to launch or the state immediately goes to terminated instead of running, see
Troubleshoot instance launch issues (p. 1683).
(Optional) You can create a billing alert for the instance. If you're not sure, you can always add one
later. On the confirmation screen, under Next Steps, choose Create billing alerts and follow the
directions.
For each launch template, you can create one or more numbered launch template versions. Each version
can have different launch parameters. When you launch an instance from a launch template, you can
use any version of the launch template. If you do not specify a version, the default version is used. You
can set any version of the launch template as the default version—by default, it's the first version of the
launch template.
The following diagram shows a launch template with three versions. The first version specifies the
instance type, AMI ID, subnet, and key pair to use to launch the instance. The second version is based
on the first version and also specifies a security group for the instance. The third version uses different
values for some of the parameters. Version 2 is set as the default version. If you launched an instance
from this launch template, the launch parameters from version 2 would be used if no other version were
specified.
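The versioning behavior described above (including the three-version example) can be sketched as follows. The class and field names are illustrative only, not the launch template API; the AMI and security group IDs are placeholders.

```python
# Sketch of launch template versioning as described above: versions are
# numbered in creation order, a version can be based on an earlier one,
# and the default version is used when no version is specified.
# Class and parameter names are illustrative, not the real EC2 API.

class LaunchTemplate:
    def __init__(self):
        self.versions = {}          # version number -> parameter dict
        self.default_version = None

    def create_version(self, params, source_version=None):
        """Version numbers are assigned in order; you can't choose them."""
        number = len(self.versions) + 1
        base = dict(self.versions.get(source_version, {}))
        base.update(params)         # new values override the source version
        self.versions[number] = base
        if self.default_version is None:
            self.default_version = number   # first version is the default
        return number

    def launch_params(self, version=None):
        """Return the parameters for a version (the default if unspecified)."""
        return self.versions[version or self.default_version]

lt = LaunchTemplate()
v1 = lt.create_version({"InstanceType": "t2.micro", "ImageId": "ami-EXAMPLE"})
v2 = lt.create_version({"SecurityGroupIds": ["sg-EXAMPLE"]}, source_version=v1)
lt.default_version = v2   # set version 2 as the default, as in the diagram
```

Launching with no version now uses version 2's parameters, mirroring the diagram's behavior.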
Contents
• Launch template restrictions (p. 580)
• Use launch templates to control launch parameters (p. 580)
• Control the use of launch templates (p. 581)
• Create a launch template (p. 581)
• Modify a launch template (manage launch template versions) (p. 588)
• Launch an instance from a launch template (p. 591)
• Use launch templates with Amazon EC2 Auto Scaling (p. 592)
• Use launch templates with EC2 Fleet (p. 592)
• Use launch templates with Spot Fleet (p. 593)
• Delete a launch template (p. 593)
• You are limited to creating 5,000 launch templates per Region and 10,000 versions per launch
template.
• Launch template parameters are optional. However, you must ensure that your request to launch an
instance includes all required parameters. For example, if your launch template does not include an
AMI ID, you must specify both the launch template and an AMI ID when you launch an instance.
• Launch template parameters are not fully validated when you create the launch template. If you
specify incorrect values for parameters, or if you do not use supported parameter combinations, no
instances can launch using this launch template. Ensure that you specify the correct values for the
parameters and that you use supported parameter combinations. For example, to launch an instance in
a placement group, you must specify a supported instance type.
• You can tag a launch template, but you cannot tag a launch template version.
• Launch templates are immutable. To modify a launch template, you must create a new version of the
launch template.
• Launch template versions are numbered in the order in which they are created. When you create a
launch template version, you cannot specify the version number yourself.
Note
You cannot remove launch template parameters during launch (for example, you cannot specify
a null value for the parameter). To remove a parameter, create a new version of the launch
template without the parameter and use that version to launch the instance.
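The override behavior described above can be sketched as a parameter merge: values supplied at launch replace template values, a parameter cannot be removed at launch, and required parameters (such as the AMI ID) must come from one source or the other. The function name and required-parameter list are illustrative.

```python
# Sketch of launch-time overrides: launch parameters win over template
# parameters, and anything required must be present after the merge.
# Function name and defaults here are illustrative, not the EC2 API.

def effective_launch_params(template_params, overrides, required=("ImageId",)):
    merged = {**template_params, **overrides}   # launch-time values win
    missing = [key for key in required if key not in merged]
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    return merged

# A template with no AMI ID: the launch request must supply one.
params = effective_launch_params({"InstanceType": "t2.micro"},
                                 {"ImageId": "ami-EXAMPLE"})
```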
To launch instances, IAM users must have permissions to use the ec2:RunInstances action. IAM
users must also have permissions to create or use the resources that are created or associated with the
instance. You can use resource-level permissions for the ec2:RunInstances action to control the
launch parameters that users can specify. Alternatively, you can grant users permissions to launch an
instance using a launch template. This enables you to manage launch parameters in a launch template
rather than in an IAM policy, and to use a launch template as an authorization vehicle for launching
instances. For example, you can specify that users can only launch instances using a launch template, and
that they can only use a specific launch template. You can also control the launch parameters that users
can override in the launch template. For example policies, see Launch templates (p. 1250).
Take care when granting users permissions to use the ec2:CreateLaunchTemplate and
ec2:CreateLaunchTemplateVersion actions. You cannot use resource-level permissions to control
which resources users can specify in the launch template. To restrict the resources that are used to
launch an instance, ensure that you grant permissions to create launch templates and launch template
versions only to appropriate administrators.
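As a rough illustration of the approach described above, an identity-based policy can name a specific launch template ARN as a resource for `ec2:RunInstances`. This is a simplified sketch, not a complete working policy; the ARN is a placeholder, and a real policy needs additional resource ARNs and condition keys, as shown in the policy examples referenced above.

```python
# Simplified sketch (not a complete, tested IAM policy): the shape of a
# statement restricting ec2:RunInstances to one specific launch template.
# The Region, account ID, and template ID in the ARN are placeholders.

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:RunInstances",
        "Resource": (
            "arn:aws:ec2:us-east-1:123456789012:"
            "launch-template/lt-EXAMPLE"
        ),
    }],
}
```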
Tasks
• Create a new launch template using parameters you define (p. 581)
• Create a launch template from an existing launch template (p. 586)
• Create a launch template from an instance (p. 586)
Console
To create a new launch template using defined parameters using the console
• AMI: An AMI from which to launch the instance. To search through all available AMIs, choose
Search for AMI. To select a commonly used AMI, choose Quick Start. Or, choose AWS
Marketplace or Community AMIs. You can use an AMI that you own or find a suitable AMI.
7. For Instance type, you can either select an instance type, or you can specify instance attributes
and let Amazon EC2 identify the instance types with those attributes.
Note
Specifying instance attributes is supported only when using Auto Scaling groups, EC2
Fleet, and Spot Fleet to launch instances. For more information, see Creating an Auto
Scaling group using attribute-based instance type selection, Attribute-based instance
type selection for EC2 Fleet (p. 785), and Attribute-based instance type selection for
Spot Fleet (p. 825).
If you plan to use the launch template in the launch instance wizard or with the
RunInstances API, you must select an instance type.
• Instance type: Ensure that the instance type is compatible with the AMI that you've specified.
For more information, see Instance types (p. 226).
• Advanced: To specify instance attributes and let Amazon EC2 identify the instance types with
those attributes, choose Advanced, and then choose Specify instance type attributes.
• Number of vCPUs: Enter the minimum and maximum number of vCPUs for your compute
requirements. To indicate no limits, enter a minimum of 0, and leave the maximum blank.
• Amount of memory (MiB): Enter the minimum and maximum amount of memory, in MiB,
for your compute requirements. To indicate no limits, enter a minimum of 0, and leave the
maximum blank.
• Expand Optional instance type attributes and choose Add attribute to express
your compute requirements in more detail. For information about each attribute, see
InstanceRequirementsRequest in the Amazon EC2 API Reference.
• Resulting instance types: You can preview the instance types that match the specified
attributes. To exclude instance types, choose Add attribute, and from the Attribute list,
choose Excluded instance types. From the Attribute value list, select the instance types to
exclude.
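Attribute-based selection, as described in the bullets above, amounts to filtering the instance type catalog by vCPU and memory ranges with optional exclusions. A sketch, using an illustrative in-memory catalog rather than a live instance type list:

```python
# Sketch of attribute-based instance type selection: filter a catalog by
# vCPU and memory ranges, with optional excluded types. The catalog below
# is illustrative; real attribute values come from Amazon EC2.

def match_instance_types(catalog, min_vcpus=0, max_vcpus=None,
                         min_memory_mib=0, max_memory_mib=None,
                         excluded=()):
    """Return instance types whose attributes fall inside the given ranges."""
    matches = []
    for name, attrs in catalog.items():
        if name in excluded:
            continue
        if not (min_vcpus <= attrs["vcpus"] <= (max_vcpus or float("inf"))):
            continue
        if not (min_memory_mib <= attrs["memory_mib"]
                <= (max_memory_mib or float("inf"))):
            continue
        matches.append(name)
    return sorted(matches)

catalog = {
    "t3.micro":  {"vcpus": 2, "memory_mib": 1024},
    "t3.large":  {"vcpus": 2, "memory_mib": 8192},
    "m5.xlarge": {"vcpus": 4, "memory_mib": 16384},
}
match_instance_types(catalog, min_vcpus=2, max_vcpus=2)
```

Leaving a maximum unset means no upper limit, matching the "leave the maximum blank" behavior described above.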
8. For Key pair (login), provide the following information:
• Key pair name: The key pair for the instance. For more information, see Amazon EC2 key pairs
and Linux instances (p. 1288).
9. For Network settings, provide the following information:
• Networking platform: If applicable, whether to launch the instance into a VPC or EC2-Classic.
If you choose Virtual Private Cloud (VPC), specify the subnet in the Network interfaces
section. If you choose EC2-Classic, ensure that the specified instance type is supported in
EC2-Classic and specify the Availability Zone for the instance. Note that we are retiring
EC2-Classic on August 15, 2022.
• Security groups: One or more security groups to associate with the instance. If you add a
network interface to the launch template, omit this setting and specify the security groups
as part of the network interface specification. You cannot launch an instance from a launch
template that specifies security groups and a network interface. For more information, see
Amazon EC2 security groups for Linux instances (p. 1303).
10. For Storage (volumes), specify volumes to attach to the instance besides the volumes specified
by the AMI (Volume 1 (AMI Root)). To add a new volume, choose Add new volume.
• Volume type: The instance store or Amazon EBS volumes with which to associate your
instance. The type of volume depends on the instance type that you've chosen. For more
information, see Amazon EC2 instance store (p. 1613) and Amazon EBS volumes (p. 1327).
• Device name: A device name for the volume.
582
Amazon Elastic Compute Cloud
User Guide for Linux Instances
Launch
12. For Network interfaces, to add a network interface, choose Add network interface, and then
provide the following information:
• Device index: The device number for the network interface, for example, eth0 for the
primary network interface. If you leave the field blank, AWS creates the primary network
interface.
• Network interface: The ID of the network interface, or leave blank to let AWS create a new
network interface.
• Description: (Optional) A description for the new network interface.
• Subnet: The subnet in which to create a new network interface. For the primary network
interface (eth0), this is the subnet in which the instance is launched. If you've entered an
existing network interface for eth0, the instance is launched in the subnet in which the
network interface is located.
• Security groups: One or more security groups in your VPC with which to associate the
network interface.
• Auto-assign public IP: Whether to automatically assign a public IP address to the network
interface with the device index of eth0. This setting can only be enabled for a single, new
network interface.
• Primary IP: A private IPv4 address from the range of your subnet. Leave blank to let AWS
choose a private IPv4 address for you.
• Secondary IP: A secondary private IPv4 address from the range of your subnet. Leave blank to
let AWS choose one for you.
• (IPv6-only) IPv6 IPs: An IPv6 address from the range of the subnet.
• IPv4 Prefixes: The IPv4 prefixes for the network interface.
• IPv6 Prefixes: The IPv6 prefixes for the network interface.
• Delete on termination: Whether the network interface is deleted when the instance is
deleted.
• Elastic Fabric Adapter: Indicates whether the network interface is an Elastic Fabric Adapter.
For more information, see Elastic Fabric Adapter.
• Network card index: The index of the network card. The primary network interface must be
assigned to network card index 0. Some instance types support multiple network cards.
13. For Advanced details, expand the section to view the fields and specify any additional
parameters for the instance.
• Purchasing option: The purchasing model. Choose Request Spot Instances to request Spot
Instances at the Spot price, capped at the On-Demand price, and choose Customize to change
the default Spot Instance settings. If you do not request a Spot Instance, EC2 launches an On-
Demand Instance by default. For more information, see Spot Instances (p. 424).
• IAM instance profile: An AWS Identity and Access Management (IAM) instance profile to
associate with the instance. For more information, see IAM roles for Amazon EC2 (p. 1275).
• Hostname type: Determines whether the guest OS hostname of the EC2 instance is the
Resource name (RBN) or IP name (IPBN). For more information, see Amazon EC2 instance
hostname types (p. 1034).
• DNS Hostname: Determines whether DNS queries to the IP name and/or the Resource name
respond with the IPv4 address (A record), the IPv6 address (AAAA record), or both. For more
information, see Amazon EC2 instance hostname types (p. 1034).
• Shutdown behavior: Whether the instance should stop or terminate when shut down. For
more information, see Change the instance initiated shutdown behavior (p. 650).
• Stop - Hibernate behavior: Whether the instance is enabled for hibernation. This field is
only valid for instances that meet the hibernation prerequisites. For more information, see
Hibernate your On-Demand or Reserved Linux instance (p. 626).
• Termination protection: Whether to prevent accidental termination. For more information,
see Enable termination protection (p. 649).
• Detailed CloudWatch monitoring: Whether to enable detailed monitoring of the instance
using Amazon CloudWatch. Additional charges apply. For more information, see Monitor your
instances using CloudWatch (p. 958).
• Elastic inference: An elastic inference accelerator to attach to your EC2 CPU instance. For
more information, see Working with Amazon Elastic Inference in the Amazon Elastic Inference
Developer Guide.
• Credit specification: Whether to enable applications to burst beyond the baseline for as long
as needed. This field is only valid for T instances. Additional charges may apply. For more
information, see Burstable performance instances (p. 251).
• Placement group name: Specify a placement group in which to launch the instance. Not all
instance types can be launched in a placement group. For more information, see Placement
groups (p. 1167).
• EBS-optimized instance: Provides additional, dedicated capacity for Amazon EBS I/O. Not all
instance types support this feature, and additional charges apply. For more information, see
Amazon EBS–optimized instances (p. 1556).
• Capacity Reservation: Specify whether to launch the instance into any open Capacity
Reservation (Open), a specific Capacity Reservation (Target by ID), or a Capacity
Reservation group (Target by group). To specify that a Capacity Reservation should not
be used, choose None. For more information, see Launch instances into an existing Capacity
Reservation (p. 528).
• Tenancy: Choose whether to run your instance on shared hardware (Shared), isolated,
dedicated hardware (Dedicated), or on a Dedicated Host (Dedicated host). If you choose
to launch the instance onto a Dedicated Host, you can specify whether to launch the
instance into a host resource group or you can target a specific Dedicated Host. Additional
charges may apply. For more information, see Dedicated Instances (p. 516) and Dedicated
Hosts (p. 483).
• RAM disk ID: (Only valid for paravirtual (PV) AMIs) A RAM disk for the instance. If you have
specified a kernel, you may need to specify a specific RAM disk with the drivers to support it.
• Kernel ID: (Only valid for paravirtual (PV) AMIs) A kernel for the instance.
• Nitro Enclave: Allows you to create isolated execution environments, called enclaves, from
Amazon EC2 instances. For more information, see What is AWS Nitro Enclaves? in the AWS
Nitro Enclaves User Guide.
• License configurations: You can launch instances against the specified license configuration
to track your license usage. For more information, see Create a License Configuration in the
AWS License Manager User Guide.
• Metadata accessible: Whether to enable or disable access to the instance metadata. For more
information, see Use IMDSv2 (p. 711).
• Metadata transport: Whether to enable or disable the access method to the instance
metadata service that's available for this EC2 instance based on the IP address type
(IPv4, IPv6, or IPv4 and IPv6) of the instance. For more information, see Retrieve instance
metadata (p. 718).
• Metadata version: If you enable access to the instance metadata, you can choose to require
the use of Instance Metadata Service Version 2 when requesting instance metadata. For more
information, see Configure instance metadata options for new instances (p. 715).
• Metadata response hop limit: If you enable instance metadata, you can set the allowable
number of network hops for the metadata token. For more information, see Use
IMDSv2 (p. 711).
• User data: You can specify user data to configure an instance during launch, or to run a
configuration script. For more information, see Run commands on your Linux instance at
launch (p. 704).
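Several of these Advanced details fields also appear in launch template data for the AWS CLI. As a sketch (the values are illustrative), requiring IMDSv2 with a response hop limit of 2 corresponds to the following MetadataOptions fragment:

```json
{
    "MetadataOptions": {
        "HttpEndpoint": "enabled",
        "HttpTokens": "required",
        "HttpPutResponseHopLimit": 2
    }
}
```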
14. Choose Create launch template.
AWS CLI
• Use the create-launch-template command. The following example creates a launch template
that assigns a public IP address and an IPv6 address to the primary network interface, creates
a tag for the instance (Name=webserver), and specifies 4 CPU cores with 2 threads per core.
{
    "NetworkInterfaces": [{
        "AssociatePublicIpAddress": true,
        "DeviceIndex": 0,
        "Ipv6AddressCount": 1,
        "SubnetId": "subnet-7b16de0c"
    }],
    "ImageId": "ami-8c1be5f6",
    "InstanceType": "r4.4xlarge",
    "TagSpecifications": [{
        "ResourceType": "instance",
        "Tags": [{
            "Key": "Name",
            "Value": "webserver"
        }]
    }],
    "CpuOptions": {
        "CoreCount": 4,
        "ThreadsPerCore": 2
    }
}
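Launch template data like the above is typically saved to a file and passed with the --launch-template-data parameter. A possible invocation, sketched with an illustrative file name:

```shell
aws ec2 create-launch-template \
    --launch-template-name TemplateForWebServer \
    --version-description WebVersion1 \
    --launch-template-data file://template-data.json
```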
The following is example output.
{
    "LaunchTemplate": {
        "LatestVersionNumber": 1,
        "LaunchTemplateId": "lt-01238c059e3466abc",
        "LaunchTemplateName": "TemplateForWebServer",
        "DefaultVersionNumber": 1,
        "CreatedBy": "arn:aws:iam::123456789012:root",
        "CreateTime": "2017-11-27T09:13:24.000Z"
    }
}
To create a launch template from an existing launch template using the console
Console
AWS CLI
You can use the AWS CLI to create a launch template from an existing instance by first getting
the launch template data from an instance, and then creating a launch template using the launch
template data.
To get launch template data from an instance using the AWS CLI
• Use the get-launch-template-data command and specify the instance ID. You can use the output
as a base to create a new launch template or launch template version. By default, the output
includes a top-level LaunchTemplateData object, which cannot be specified in your launch
template data. Use the --query option to exclude this object.
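A possible invocation, sketched with an illustrative instance ID:

```shell
aws ec2 get-launch-template-data \
    --instance-id i-0123d646e8048babc \
    --query "LaunchTemplateData"
```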
The following is example output.
{
    "Monitoring": {},
    "ImageId": "ami-8c1be5f6",
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/xvda",
            "Ebs": {
                "DeleteOnTermination": true
            }
        }
    ],
    "EbsOptimized": false,
    "Placement": {
        "Tenancy": "default",
        "GroupName": "",
        "AvailabilityZone": "us-east-1a"
    },
    "InstanceType": "t2.micro",
    "NetworkInterfaces": [
        {
            "Description": "",
            "NetworkInterfaceId": "eni-35306abc",
            "PrivateIpAddresses": [
                {
                    "Primary": true,
                    "PrivateIpAddress": "10.0.0.72"
                }
            ],
            "SubnetId": "subnet-7b16de0c",
            "Groups": [
                "sg-7c227019"
            ],
            "Ipv6Addresses": [
                {
                    "Ipv6Address": "2001:db8:1234:1a00::123"
                }
            ],
            "PrivateIpAddress": "10.0.0.72"
        }
    ]
}
Use the create-launch-template command to create a launch template using the output from the
previous procedure. For more information about creating a launch template using the AWS CLI, see
Create a new launch template using parameters you define (p. 581).
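Putting the two steps together, a possible sketch (the instance ID and template name are illustrative):

```shell
# Save the instance's launch template data, excluding the top-level wrapper object.
aws ec2 get-launch-template-data \
    --instance-id i-0123d646e8048babc \
    --query "LaunchTemplateData" > instance-data.json

# Create a launch template from the saved data.
aws ec2 create-launch-template \
    --launch-template-name TemplateFromInstance \
    --launch-template-data file://instance-data.json
```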
You can create different versions of a launch template, set the default version, describe a launch
template version, and delete versions that you no longer require.
Tasks
• Create a launch template version (p. 588)
• Set the default launch template version (p. 589)
• Describe a launch template version (p. 589)
• Delete a launch template version (p. 590)
When you create a launch template version, you can specify new launch parameters or use an existing
version as the base for the new version. For more information about the launch parameters, see Create a
launch template (p. 581).
Console
AWS CLI
• Use the create-launch-template-version command. You can specify a source version on which to
base the new version. The new version inherits the launch parameters from this version, and you
can override parameters using --launch-template-data. The following example creates a
new version based on version 1 of the launch template and specifies a different AMI ID.
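A sketch of such a command; the launch template ID, version description, and AMI ID are illustrative:

```shell
aws ec2 create-launch-template-version \
    --launch-template-id lt-0abcd290751193123 \
    --version-description WebVersion2 \
    --source-version 1 \
    --launch-template-data "ImageId=ami-c998b6b2"
```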
You can set the default version for the launch template. When you launch an instance from a launch
template and do not specify a version, the instance is launched using the parameters of the default
version.
Console
AWS CLI
To set the default launch template version using the AWS CLI
• Use the modify-launch-template command and specify the version that you want to set as the
default.
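A sketch of such a command, with an illustrative launch template ID, setting version 2 as the default:

```shell
aws ec2 modify-launch-template \
    --launch-template-id lt-0abcd290751193123 \
    --default-version 2
```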
Using the console, you can view all the versions of the selected launch template, or get a list of the
launch templates whose latest or default version matches a specific version number. Using the AWS CLI,
you can describe all versions, individual versions, or a range of versions of a specified launch template.
You can also describe all the latest versions or all the default versions of all the launch templates in your
account.
Console
• To get a list of all the launch templates whose default version matches a specific version
number: From the search bar, choose Default version, and then choose a version number.
AWS CLI
• Use the describe-launch-template-versions command and specify the version numbers. In the
following example, versions 1 and 3 are specified.
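A sketch of such a command, with an illustrative launch template ID:

```shell
aws ec2 describe-launch-template-versions \
    --launch-template-id lt-0abcd290751193123 \
    --versions 1 3
```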
To describe all the latest and default launch template versions in your account using the
AWS CLI
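A sketch of that command. The $Latest and $Default tokens are literal values interpreted by the service, so quote them to prevent shell variable expansion:

```shell
aws ec2 describe-launch-template-versions \
    --versions "$Latest,$Default"
```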
If you no longer require a launch template version, you can delete it. You cannot replace the version
number after you delete it. You cannot delete the default version of the launch template; you must first
assign a different version as the default.
Console
AWS CLI
• Use the delete-launch-template-versions command and specify the version numbers to delete.
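A sketch of such a command, deleting version 1 of an illustrative launch template:

```shell
aws ec2 delete-launch-template-versions \
    --launch-template-id lt-0abcd290751193123 \
    --versions 1
```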
Instances that are launched using a launch template are automatically assigned two tags with the keys
aws:ec2launchtemplate:id and aws:ec2launchtemplate:version. You cannot remove or edit
these tags.
Console
AWS CLI
• Use the run-instances command and specify the --launch-template parameter. Optionally
specify the launch template version to use. If you don't specify the version, the default version is
used.
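A sketch of such a command, launching an instance from version 1 of an illustrative launch template:

```shell
aws ec2 run-instances \
    --launch-template LaunchTemplateId=lt-0abcd290751193123,Version=1
```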
• To override a launch template parameter, specify the parameter in the run-instances command.
The following example overrides the instance type that's specified in the launch template (if any).
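A sketch of such an override; the launch template ID and instance type are illustrative:

```shell
aws ec2 run-instances \
    --launch-template LaunchTemplateId=lt-0abcd290751193123 \
    --instance-type t2.small
```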
• If you specify a nested parameter that's part of a complex structure, the instance is launched using
the complex structure as specified in the launch template plus any additional nested parameters
that you specify.
In the following example, the instance is launched with the tag Owner=TeamA as well as any other
tags that are specified in the launch template. If the launch template has an existing tag with a
key of Owner, the value is replaced with TeamA.
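A sketch of such a command, using shorthand syntax for --tag-specifications (the launch template ID is illustrative):

```shell
aws ec2 run-instances \
    --launch-template LaunchTemplateId=lt-0abcd290751193123 \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Owner,Value=TeamA}]"
```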
In the following example, the instance is launched with a volume with the device name /dev/
xvdb as well as any other block device mappings that are specified in the launch template. If the
launch template has an existing volume defined for /dev/xvdb, its values are replaced with the
specified values.
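A sketch of such a command, using shorthand syntax for --block-device-mappings (the launch template ID, volume size, and volume type are illustrative):

```shell
aws ec2 run-instances \
    --launch-template LaunchTemplateId=lt-0abcd290751193123 \
    --block-device-mappings "DeviceName=/dev/xvdb,Ebs={VolumeSize=20,VolumeType=gp2}"
```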
If the instance fails to launch or the state immediately goes to terminated instead of running, see
Troubleshoot instance launch issues (p. 1683).
Before you can create an Auto Scaling group using a launch template, you must create a launch template
that includes the parameters required to launch an instance in an Auto Scaling group, such as the ID
of the AMI. The console provides guidance to help you create a template that you can use with Auto
Scaling.
To create a launch template to use with Auto Scaling using the console
To create or update an Amazon EC2 Auto Scaling group with a launch template using the
AWS CLI
To create an EC2 Fleet with a launch template using the AWS CLI
• Use the create-fleet command. Use the --launch-template-configs parameter to specify the
launch template and any overrides for the launch template.
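A sketch of such a command; the launch template ID and target capacity are illustrative, and a real fleet request typically adds overrides and other options:

```shell
aws ec2 create-fleet \
    --target-capacity-specification "TotalTargetCapacity=2,DefaultTargetCapacityType=spot" \
    --launch-template-configs '[{"LaunchTemplateSpecification":{"LaunchTemplateId":"lt-0abcd290751193123","Version":"1"}}]'
```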
For more information, see Spot Fleet request types (p. 822).
To create a Spot Fleet request with a launch template using the AWS CLI
• Use the request-spot-fleet command. Use the LaunchTemplateConfigs parameter to specify the
launch template and any overrides for the launch template.
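The request configuration is typically supplied as a JSON file passed with --spot-fleet-request-config file://config.json. A sketch of the relevant fragment of such a file; the role ARN, template ID, and target capacity are illustrative:

```json
{
    "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
    "TargetCapacity": 2,
    "LaunchTemplateConfigs": [{
        "LaunchTemplateSpecification": {
            "LaunchTemplateId": "lt-0abcd290751193123",
            "Version": "1"
        }
    }]
}
```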
Console
AWS CLI
• Use the delete-launch-template (AWS CLI) command and specify the launch template.
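A sketch of such a command, with an illustrative launch template ID:

```shell
aws ec2 delete-launch-template \
    --launch-template-id lt-0abcd290751193123
```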
The following configuration details are copied from the selected instance into the launch wizard:
• AMI ID
• Instance type
• Availability Zone, or the VPC and subnet in which the selected instance is located
• Public IPv4 address. If the selected instance currently has a public IPv4 address, the new instance
receives a public IPv4 address, regardless of the selected instance's default public IPv4 address
setting. For more information about public IPv4 addresses, see Public IPv4 addresses (p. 1019).
• Placement group, if applicable
• IAM role associated with the instance, if applicable
• Shutdown behavior setting (stop or terminate)
• Termination protection setting (true or false)
• CloudWatch monitoring (enabled or disabled)
• Amazon EBS-optimization setting (true or false)
• Tenancy setting, if launching into a VPC (shared or dedicated)
• Kernel ID and RAM disk ID, if applicable
• User data, if specified
• Tags associated with the instance, if applicable
• Security groups associated with the instance
The following configuration details are not copied from your selected instance. Instead, the wizard
applies their default settings or behavior:
• Number of network interfaces: The default is one network interface, which is the primary network
interface (eth0).
• Storage: The default storage configuration is determined by the AMI and the instance type.
New console
When you are ready, choose Launch to select a key pair and launch your instance.
5. If the instance fails to launch or the state immediately goes to terminated instead of
running, see Troubleshoot instance launch issues (p. 1683).
Old console
4. The launch wizard opens on the Review Instance Launch page. You can make any necessary
changes by choosing the appropriate Edit link.
When you are ready, choose Launch to select a key pair and launch your instance.
5. If the instance fails to launch or the state immediately goes to terminated instead of
running, see Troubleshoot instance launch issues (p. 1683).
To launch an instance from the AWS Marketplace using the launch wizard
The wizard creates a new security group according to the vendor's specifications for the product. The
security group may include rules that allow all IPv4 addresses (0.0.0.0/0) access on SSH (port 22)
on Linux or RDP (port 3389) on Windows. We recommend that you adjust these rules to allow only a
specific address or range of addresses to access your instance over those ports.
Important
Check the vendor's usage instructions carefully, as you may need to use a specific user name
to log in to the instance. For more information about accessing your subscription details,
see Manage your AWS Marketplace subscriptions (p. 133).
10. If the instance fails to launch or the state immediately goes to terminated instead of running, see
Troubleshoot instance launch issues (p. 1683).
Launch an AWS Marketplace AMI instance using the API and CLI
To launch instances from AWS Marketplace products using the API or command line tools, first ensure
that you are subscribed to the product. You can then launch an instance with the product's AMI ID using
the following methods:
Method Documentation
AWS CLI Use the run-instances command, or see the following topic for more
information: Launching an Instance.
AWS Tools for Windows Use the New-EC2Instance command, or see the following topic for
PowerShell more information: Launch an Amazon EC2 Instance Using Windows
PowerShell
To connect to a Windows instance, see Connect to your Windows instance in the Amazon EC2 User Guide
for Windows Instances.
Connection options
The operating system of your local computer determines the options that you have to connect from your
local computer to your Linux instance.
You can get the ID of your instance using the Amazon EC2 console (from the Instance ID column). If
you prefer, you can use the describe-instances (AWS CLI) or Get-EC2Instance (AWS Tools for Windows
PowerShell) command.
• Get the public DNS name of the instance.
You can get the public DNS for your instance using the Amazon EC2 console. Check the Public
IPv4 DNS column. If this column is hidden, choose the settings icon in the top-right corner of
the screen and select Public IPv4 DNS. If you prefer, you can use the describe-instances (AWS
CLI) or Get-EC2Instance (AWS Tools for Windows PowerShell) command.
• (IPv6 only) Get the IPv6 address of the instance.
If you've assigned an IPv6 address to your instance, you can optionally connect to the instance using
its IPv6 address instead of a public IPv4 address or public IPv4 DNS hostname. Your local computer
must have an IPv6 address and must be configured to use IPv6. You can get the IPv6 address of
your instance using the Amazon EC2 console. Check the IPv6 IPs field. If you prefer, you can use the
describe-instances (AWS CLI) or Get-EC2Instance (AWS Tools for Windows PowerShell) command. For
more information about IPv6, see IPv6 addresses (p. 1020).
• Get the user name for your instance.
You can connect to your instance using the user name for your user account or the default user name
for the AMI that you used to launch your instance.
• Get the user name for your user account.
For more information about how to create a user account, see Manage user accounts on your
Amazon Linux instance (p. 660).
• Get the default user name for the AMI that you used to launch your instance:
• For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
• For a CentOS AMI, the user name is centos or ec2-user.
• For a Debian AMI, the user name is admin.
• For a Fedora AMI, the user name is fedora or ec2-user.
• For a RHEL AMI, the user name is ec2-user or root.
• For a SUSE AMI, the user name is ec2-user or root.
• For an Ubuntu AMI, the user name is ubuntu.
• For an Oracle AMI, the user name is ec2-user.
• For a Bitnami AMI, the user name is bitnami.
• Otherwise, check with the AMI provider.
Ensure that the security group associated with your instance allows incoming SSH traffic from your
IP address. The default security group for the VPC does not allow incoming SSH traffic by default.
The security group created by the launch instance wizard enables SSH traffic by default. For more
information, see Authorize inbound traffic for your Linux instances (p. 1285).
Get the fully-qualified path to the location on your computer of the .pem file for the key pair that you
specified when you launched the instance. For more information, see Identify the key pair that was
specified at launch. If you can't find your private key file, see Connect to your Linux instance if you
lose your private key.
• Set the permissions of your private key
If you will use an SSH client on a macOS or Linux computer to connect to your Linux instance, use the
following command to set the permissions of your private key file so that only you can read it.
If you do not set these permissions, then you cannot connect to your instance using this key pair. For
more information, see Error: Unprotected private key file (p. 1693).
First you get the instance fingerprint. Then, when you connect to the instance, you are prompted to
verify the fingerprint. You can compare the fingerprint you obtained with the fingerprint displayed for
verification. If these fingerprints don't match, someone might be attempting a "man-in-the-middle"
attack. If they match, you can confidently connect to your instance.
• To get the instance fingerprint, you must use the AWS CLI. For information about installing the AWS
CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.
• The instance must not be in the pending state. The fingerprint is available only after the first boot of
the instance is complete.
1. On your local computer (not on the instance), use the get-console-output (AWS CLI) command as
follows to obtain the fingerprint:
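A sketch of such a command, with an illustrative instance ID:

```shell
aws ec2 get-console-output \
    --instance-id i-0123d646e8048babc \
    --output text
```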
2. Here is an example of what you should look for in the output. The exact output can vary by the
operating system, AMI version, and whether you had AWS create the key.
ec2: #############################################################
ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
ec2: 1024 SHA256:7HItIgTONZ/b0CH9c5Dq1ijgqQ6kFn86uQhQ5E/F9pU root@ip-10-0-2-182 (DSA)
ec2: 256 SHA256:l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY root@ip-10-0-2-182 (ECDSA)
ec2: 256 SHA256:kpEa+rw/Uq3zxaYZN8KT501iBtJOIdHG52dFi66EEfQ no comment (ED25519)
ec2: 2048 SHA256:L8l6pepcA7iqW/jBecQjVZClUrKY+o2cHLI0iHerbVc root@ip-10-0-2-182 (RSA)
ec2: -----END SSH HOST KEY FINGERPRINTS-----
ec2: #############################################################
The following instructions explain how to connect to your instance using an SSH client. If you
receive an error while attempting to connect to your instance, see Troubleshoot connecting to your
instance (p. 1686). For more connection options, see Connect to your Linux instance (p. 596).
Prerequisites
Before you connect to your Linux instance, complete the following prerequisites.
After you launch an instance, it can take a few minutes for the instance to be ready so that you can
connect to it. Check that your instance has passed its status checks. You can view this information in
the Status check column on the Instances page.
Get the public DNS name and user name to connect to your instance
To find the public DNS name or IP address of your instance and the user name that you should use
to connect to your instance, see Prerequisites for connecting to your instance (p. 596).
Locate the private key and set the permissions
To locate the private key that is required to connect to your instance, and to set the key permissions,
see Locate the private key and set the permissions (p. 598).
Install an SSH client on your local computer as needed
Your local computer might have an SSH client installed by default. You can verify this by typing
ssh at the command line. If your computer doesn't recognize the command, you can install an SSH
client.
• Recent versions of Windows Server 2019 and Windows 10 - OpenSSH is included as an installable
component. For more information, see OpenSSH in Windows.
• Earlier versions of Windows - Download and install OpenSSH. For more information, see Win32-
OpenSSH.
• Linux and macOS - Download and install OpenSSH. For more information, see
https://www.openssh.com.
1. In a terminal window, use the ssh command to connect to the instance. You specify the path and file
name of the private key (.pem), the user name for your instance, and the public DNS name or IPv6
address for your instance. For more information about how to find the private key, the user name
for your instance, and the DNS name or IPv6 address for an instance, see Locate the private key and
set the permissions (p. 598) and Get information about your instance (p. 597). To connect to your
instance, use one of the following commands.
• (Public DNS) To connect using your instance's public DNS name, enter the following command.
• (IPv6) Alternatively, if your instance has an IPv6 address, to connect using your instance's IPv6
address, enter the following command.
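As sketches, using the placeholder key path and user name from this guide, and the example IPv6 address shown earlier in this section:

```shell
# Connect using the instance's public DNS name.
ssh -i /path/my-key-pair.pem ec2-user@my-instance-public-dns-name

# Or, connect using the instance's IPv6 address.
ssh -i /path/my-key-pair.pem ec2-user@2001:db8:1234:1a00::123
```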
2. (Optional) Verify that the fingerprint in the security alert matches the fingerprint that you previously
obtained in (Optional) Get the instance fingerprint (p. 599). If these fingerprints don't match,
someone might be attempting a "man-in-the-middle" attack. If they match, continue to the next
step.
3. Enter yes.
Prerequisites
The general prerequisites for transferring files to an instance are the same as the general prerequisites
for connecting to an instance. For more information, see General prerequisites for connecting to your
instance (p. 596).
• Install an SCP client
Most Linux, Unix, and Apple computers include an SCP client by default. If yours doesn't, the OpenSSH
project provides a free implementation of the full suite of SSH tools, including an SCP client. For more
information, see https://www.openssh.com.
The following procedure steps you through using SCP to transfer a file using the instance's public DNS
name, or the IPv6 address if your instance has one.
To use SCP to transfer files between your computer and your instance
1. Determine the location of the source file on your computer and the destination path on the instance.
In the following examples, the name of the private key file is my-key-pair.pem, the file to transfer
is my-file.txt, the user name for the instance is ec2-user, the public DNS name of the instance is
my-instance-public-dns-name, and the IPv6 address of the instance is my-instance-IPv6-
address.
• (Public DNS) To transfer a file to the destination on the instance, enter the following command
from your computer.
• (IPv6) To transfer a file to the destination on the instance if the instance has an IPv6 address,
enter the following command from your computer. The IPv6 address must be enclosed in square
brackets ([ ]), which must be escaped (\).
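As sketches, using the placeholder names from this procedure; the destination path on the instance is illustrative:

```shell
# Copy my-file.txt to the instance using the public DNS name.
scp -i my-key-pair.pem my-file.txt ec2-user@my-instance-public-dns-name:/home/ec2-user/my-file.txt

# Copy using the instance's IPv6 address; the brackets must be escaped.
scp -i my-key-pair.pem my-file.txt ec2-user@\[my-instance-IPv6-address\]:/home/ec2-user/my-file.txt
```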
2. If you haven't already connected to the instance using SSH, you see a response like the following:
(Optional) You can optionally verify that the fingerprint in the security alert matches the instance
fingerprint. For more information, see (Optional) Get the instance fingerprint (p. 599).
Enter yes.
3. If the transfer is successful, the response is similar to the following:
4. To transfer a file in the other direction (from your Amazon EC2 instance to your computer), reverse
the order of the host parameters. For example, you can transfer my-file.txt from your EC2
instance to a destination on your local computer as my-file2.txt, as shown in the following
examples.
• (Public DNS) To transfer a file to a destination on your computer, enter the following command
from your computer.
• (IPv6) To transfer a file to a destination on your computer if the instance has an IPv6 address,
enter the following command from your computer. The IPv6 address must be enclosed in square
brackets ([ ]), which must be escaped (\).
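As sketches, again using the placeholder names from this procedure; the source path on the instance is illustrative:

```shell
# Copy my-file.txt from the instance to the local computer as my-file2.txt.
scp -i my-key-pair.pem ec2-user@my-instance-public-dns-name:/home/ec2-user/my-file.txt my-file2.txt

# Copy using the instance's IPv6 address; the brackets must be escaped.
scp -i my-key-pair.pem ec2-user@\[my-instance-IPv6-address\]:/home/ec2-user/my-file.txt my-file2.txt
```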
Troubleshoot
If you receive an error while attempting to connect to your instance, see Troubleshoot connecting to your
instance (p. 1686).
You can use EC2 Instance Connect to connect to your instances using the Amazon EC2 console (browser-
based client), the Amazon EC2 Instance Connect CLI, or the SSH client of your choice.
When you connect to an instance using EC2 Instance Connect, the Instance Connect API pushes a
one-time-use SSH public key to the instance metadata (p. 710) where it remains for 60 seconds. An
IAM policy attached to your IAM user authorizes your IAM user to push the public key to the instance
metadata. The SSH daemon uses AuthorizedKeysCommand and AuthorizedKeysCommandUser,
which are configured when Instance Connect is installed, to look up the public key from the instance
metadata for authentication, and connects you to the instance.
You can use EC2 Instance Connect to connect to instances that have public or private IP addresses. For
more information, see Connect using EC2 Instance Connect (p. 609).
Tip
If you are connecting to a Linux instance from a local computer running Windows, see the following
documentation instead:
• Connect to your Linux instance from Windows using PuTTY (p. 612)
• Connect to your Linux instance using SSH (p. 599)
• Connect to your Linux instance from Windows using Windows Subsystem for Linux (p. 618)
Contents
• Set up EC2 Instance Connect (p. 603)
• Connect using EC2 Instance Connect (p. 609)
• Uninstall EC2 Instance Connect (p. 612)
For more information about setting up EC2 Instance Connect, see Securing your bastion hosts with
Amazon EC2 Instance Connect.
Limitations
• You can install EC2 Instance Connect on the following supported Linux distributions:
• Amazon Linux 2 (any version)
• Ubuntu 16.04 or later
• If you configured the AuthorizedKeysCommand and AuthorizedKeysCommandUser settings
for SSH authentication, the EC2 Instance Connect installation will not update them. As a result, you
cannot use Instance Connect.
• Verify the general prerequisites for connecting to your instance using SSH.
For more information, see General prerequisites for connecting to your instance (p. 596).
• Install an SSH client on your local computer.
Your local computer most likely has an SSH client installed by default. You can check for an SSH client
by typing ssh at the command line. If your local computer doesn't recognize the command, you can
install an SSH client. For information about installing an SSH client on Linux or macOS, see
http://www.openssh.com. For information about installing an SSH client on Windows 10, see OpenSSH in
Windows.
• Install the AWS CLI on your local computer.
To configure the IAM permissions, you must use the AWS CLI. For more information about installing
the AWS CLI, see Installing the AWS CLI in the AWS Command Line Interface User Guide.
• [Ubuntu] Install the AWS CLI on your instance.
To install EC2 Instance Connect on an Ubuntu instance, you must use the AWS CLI on the instance. For
more information about installing the AWS CLI, see Installing the AWS CLI in the AWS Command Line
Interface User Guide.
You must configure the following network access so that your users can connect to your instance using
EC2 Instance Connect:
• If your users will access your instance over the internet, then your instance must have a public IP
address and be in a public subnet. For more information, see Enable internet access in the Amazon VPC
User Guide.
• If your users will access your instance through the instance's private IP address, then you must
establish private network connectivity to your VPC, such as by using AWS Direct Connect, AWS Site-to-
Site VPN, or VPC peering, so that your users can reach the instance's private IP address.
• Ensure that the security group associated with your instance allows inbound SSH traffic (p. 1286)
on port 22 from your IP address or from your network. The default security group for the VPC does
not allow incoming SSH traffic by default. The security group created by the launch wizard allows
incoming SSH traffic by default. For more information, see Authorize inbound traffic for your Linux
instances (p. 1285).
• (Amazon EC2 console browser-based client) Ensure that the security group associated with your
instance allows inbound SSH traffic from the IP address range for this service. To identify the address
range, download the JSON file provided by AWS and filter for the subset for EC2 Instance Connect,
using EC2_INSTANCE_CONNECT as the service value. For more information about downloading the
JSON file and filtering by service, see AWS IP address ranges in the Amazon Web Services General
Reference.
You can skip this task if you used one of the following AMIs to launch your instance because they come
preinstalled with EC2 Instance Connect:
For earlier versions of these AMIs, you must install Instance Connect on every instance that will support
connecting using Instance Connect.
Installing Instance Connect configures the SSH daemon on the instance. The procedure for installing
Instance Connect is different for instances launched using Amazon Linux 2 and Ubuntu.
Amazon Linux 2
Use the SSH key pair that was assigned to your instance when you launched it and the default
user name of the AMI that you used to launch your instance. For Amazon Linux 2, the default
user name is ec2-user.
For example, if your instance was launched using Amazon Linux 2, your instance's public
DNS name is ec2-a-b-c-d.us-west-2.compute.amazonaws.com, and the key pair is
my_ec2_private_key.pem, use the following command to SSH into your instance:
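Using those example values (the DNS name and key pair name are placeholders from the text above), the command might look like this:

```shell
# Connect to an Amazon Linux 2 instance as ec2-user.
ssh -i my_ec2_private_key.pem ec2-user@ec2-a-b-c-d.us-west-2.compute.amazonaws.com
```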
For more information about connecting to your instance, see Connect to your Linux instance
using SSH (p. 599).
2. Install the EC2 Instance Connect package on your instance. The installation places the following
files on the instance:
eic_curl_authorized_keys
eic_harvest_hostkeys
eic_parse_authorized_keys
eic_run_authorized_keys
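On Amazon Linux 2, the package is typically installed with yum, as sketched below (the ec2-instance-connect package name is assumed here):

```shell
# Install the EC2 Instance Connect package on an Amazon Linux 2 instance.
sudo yum install -y ec2-instance-connect
```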
3. (Optional) Verify that Instance Connect was successfully installed on your instance.
Use the sudo less command to check that the /etc/ssh/sshd_config file was correctly
updated as follows:
AuthorizedKeysCommand /opt/aws/bin/eic_run_authorized_keys %u %f
AuthorizedKeysCommandUser ec2-instance-connect
Note
If you previously configured AuthorizedKeysCommand and
AuthorizedKeysCommandUser, the Instance Connect installation will not change the
values and you will not be able to use Instance Connect.
Ubuntu
To install EC2 Instance Connect on an instance launched with Ubuntu 16.04 or later
Use the SSH key pair that was assigned to your instance when you launched it and use the
default user name of the AMI that you used to launch your instance. For an Ubuntu AMI, the
user name is ubuntu.
For example, if your instance was launched using Ubuntu, your instance's public DNS name is
ec2-a-b-c-d.us-west-2.compute.amazonaws.com, and the key pair is my_ec2_private_key.pem,
use the following command to SSH into your instance:
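Using those example values (placeholders; substitute your own DNS name and key pair), the command might look like this:

```shell
# Connect to an Ubuntu instance as the default user, ubuntu.
ssh -i my_ec2_private_key.pem ubuntu@ec2-a-b-c-d.us-west-2.compute.amazonaws.com
```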
For more information about connecting to your instance, see Connect to your Linux instance
using SSH (p. 599).
2. (Optional) Ensure your instance has the latest Ubuntu AMI.
For Ubuntu, use the following commands to update all the packages on your instance.
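A sketch of the standard Ubuntu update commands referred to above:

```shell
# Refresh the package lists, then upgrade all installed packages (Ubuntu).
sudo apt-get update && sudo apt-get upgrade -y
```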
After installation, the following files are present on the instance:
eic_curl_authorized_keys
eic_harvest_hostkeys
eic_parse_authorized_keys
eic_run_authorized_keys
4. (Optional) Verify that Instance Connect was successfully installed on your instance.
Note
If you previously configured AuthorizedKeysCommand and
AuthorizedKeysCommandUser, the Instance Connect installation will not change the
values and you will not be able to use Instance Connect.
For more information about the EC2 Instance Connect package, see aws/aws-ec2-instance-connect-config
on the GitHub website.
Task 3: (Optional) Install the EC2 Instance Connect CLI on your computer
The EC2 Instance Connect CLI provides a simplified experience to connect to EC2 instances through
a single command, mssh instance_id. For more information, see Connect using the EC2 Instance
Connect CLI (p. 610).
Note
There is no need to install the EC2 Instance Connect CLI if users will only use the Amazon EC2
console (browser-based client) or an SSH client to connect to an instance.
Use pip to install the ec2instanceconnectcli package. For more information, see
aws/aws-ec2-instance-connect-cli on the GitHub website, and https://pypi.org/project/ec2instanceconnectcli/
on the Python Package Index (PyPI) website.
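The pip installation referred to above might look like the following (the package name is taken from the text; pip must already be installed on your computer):

```shell
# Install the EC2 Instance Connect CLI, which provides the mssh command.
pip install ec2instanceconnectcli
```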
For your IAM principals to connect to an instance using EC2 Instance Connect, you must grant them
permission to push the public key to the instance. You grant them the permission by creating an IAM
policy and attaching the policy to the IAM principals that require the permission. For more information,
see Actions, resources, and condition keys for Amazon EC2 Instance Connect.
The following instructions explain how to create the policy and attach it to an IAM user using the AWS
CLI. The same policy could be applied to other IAM principals, such as IAM roles. For instructions that
use the AWS Management Console, see Creating IAM policies (console), Adding permissions by attaching
policies directly to the user, and Creating IAM roles in the IAM User Guide.
To grant an IAM principal permission for EC2 Instance Connect (AWS CLI)
The following is an example policy document. You can omit the statement for the
ec2:DescribeInstances action if your users will only use an SSH client to connect to your
instances. You can replace the specified instances in Resource with the wildcard * to grant users
access to all EC2 instances using EC2 Instance Connect.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2-instance-connect:SendSSHPublicKey",
      "Resource": [
        "arn:aws:ec2:region:account-id:instance/i-1234567890abcdef0",
        "arn:aws:ec2:region:account-id:instance/i-0598c7d356eba48d7"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:osuser": "ami-username"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
The preceding policy allows access to specific instances, identified by their instance ID. Alternatively,
you can use resource tags to control access to an instance. Attribute-based access control is an
authorization strategy that defines permissions based on tags that can be attached to users and
AWS resources. For example, the following policy allows an IAM user to access an instance only if
that instance has a resource tag with key=tag-key and value=tag-value. For more information
about using tags to control access to your AWS resources, see Controlling access to AWS resources in
the IAM User Guide.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2-instance-connect:SendSSHPublicKey",
      "Resource": "arn:aws:ec2:region:account-id:instance/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/tag-key": "tag-value"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
2. Use the create-policy command to create a new managed policy, and specify the JSON document
that you created to use as the content for the new policy.
3. Use the attach-user-policy command to attach the managed policy to the specified IAM user. For the
--user-name parameter, specify the friendly name (not the ARN) of the IAM user.
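Steps 2 and 3 above might look like the following. The file name my-policy.json, the policy name EC2InstanceConnectPolicy, and the user name my-user are hypothetical examples; substitute your own.

```shell
# Create a managed policy from the JSON document saved as my-policy.json.
aws iam create-policy \
    --policy-name EC2InstanceConnectPolicy \
    --policy-document file://my-policy.json

# Attach the policy to the IAM user, using the policy ARN returned by create-policy.
aws iam attach-user-policy \
    --user-name my-user \
    --policy-arn arn:aws:iam::account-id:policy/EC2InstanceConnectPolicy
```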
If you receive an error while attempting to connect to your instance, see Troubleshoot connecting to your
instance (p. 1686) and How do I troubleshoot issues connecting to my EC2 instance using EC2 Instance
Connect?.
Topics
• Limitations (p. 609)
• Prerequisites (p. 609)
• Connect using EC2 Instance Connect (p. 609)
Limitations
Prerequisites
For more information, see Set up EC2 Instance Connect (p. 603).
• (Optional) Install an SSH client on your local computer.
There is no need to install an SSH client if users only use the Amazon EC2 console (browser-based
client) or the EC2 Instance Connect CLI to connect to an instance. Your local computer most likely has
an SSH client installed by default. You can check for an SSH client by typing ssh at the command line.
If your local computer doesn't recognize the command, you can install an SSH client. For information
about installing an SSH client on Linux or macOS, see http://www.openssh.com. For information
about installing an SSH client on Windows 10, see OpenSSH in Windows.
• (Optional) Install the EC2 Instance Connect CLI on your local computer.
There is no need to install the EC2 Instance Connect CLI if users only use the Amazon EC2 console
(browser-based client) or an SSH client to connect to an instance. For more information, see Task 3:
(Optional) Install the EC2 Instance Connect CLI on your computer (p. 607). This connection method
works for instances with public IP addresses.
Options
• Connect using the Amazon EC2 console (browser-based client) (p. 610)
• Connect using the EC2 Instance Connect CLI (p. 610)
• Connect using your own key and SSH client (p. 611)
You can connect to an instance using the Amazon EC2 console (browser-based client) by selecting
the instance from the console and choosing to connect using EC2 Instance Connect. Instance Connect
handles the permissions and provides a successful connection.
To connect to your instance using the browser-based client from the Amazon EC2 console
You can connect to an instance using the EC2 Instance Connect CLI by providing only the instance ID,
while the Instance Connect CLI performs the following three actions in one call: it generates a one-time-
use SSH public key, pushes the key to the instance where it remains for 60 seconds, and connects the
user to the instance. You can use basic SSH/SFTP commands with the Instance Connect CLI.
This connection method works for instances with public and private IP addresses. When connecting to an
instance that only has private IP addresses, the local computer from which you are initiating the session
must have connectivity to the EC2 Instance Connect service endpoint (to push your SSH public key to the
instance) as well as network connectivity to the instance's private IP address. The EC2 Instance Connect
service endpoint is reachable over the internet or over an AWS Direct Connect public virtual interface. To
connect to the instance's private IP address, you can leverage services such as AWS Direct Connect, AWS
Site-to-Site VPN, or VPC peering.
Note
-i is not supported when using mssh. When using the mssh command to connect to your
instance, you do not need to specify any kind of identity file because Instance Connect manages
the key pair.
Amazon Linux 2
Use the mssh command with the instance ID as follows. You do not need to specify the user name
for the AMI.
$ mssh i-001234a4bf70dec41EXAMPLE
Ubuntu
Use the mssh command with the instance ID and the default user name for the Ubuntu AMI as
follows. You must specify the user name for the AMI or you get the following error: Authentication
failed.
$ mssh ubuntu@i-001234a4bf70dec41EXAMPLE
You can use your own SSH key and connect to your instance from the SSH client of your choice while
using the EC2 Instance Connect API. This enables you to benefit from the Instance Connect capability to
push a public key to the instance. This connection method works for instances with public and private IP
addresses.
Requirements
To connect to your instance using your own key and any SSH client
You can generate new SSH private and public keys, my_key and my_key.pub, using the following
command:
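A sketch of that key-generation step, using ssh-keygen with the my_key file name from the text (the key type, size, and empty passphrase are example choices):

```shell
# Generate a 2048-bit RSA key pair with no passphrase, producing
# my_key (private key) and my_key.pub (public key) in the current directory.
ssh-keygen -t rsa -b 2048 -f my_key -N ''
```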
Use the send-ssh-public-key command to push your SSH public key to the instance. If you launched
your instance using Amazon Linux 2, the default user name for the AMI is ec2-user. If you launched
your instance using Ubuntu, the default user name for the AMI is ubuntu.
The following example pushes the public key to the specified instance in the specified Availability
Zone, to authenticate ec2-user.
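That push might look like the following. The instance ID reuses the example ID from this section, and the Availability Zone us-west-2b is a placeholder:

```shell
# Push the public key to the instance metadata; it remains valid for 60 seconds.
aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-001234a4bf70dec41EXAMPLE \
    --availability-zone us-west-2b \
    --instance-os-user ec2-user \
    --ssh-public-key file://my_key.pub
```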
Use the ssh command to connect to the instance using the private key before the public key is
removed from the instance metadata (you have 60 seconds before it is removed). Specify the private
key that corresponds to the public key, the default user name for the AMI that you used to launch
your instance, and the instance's public DNS name (if connecting over a private network, specify the
private DNS name or IP address). Add the IdentitiesOnly=yes option to ensure that only the
files in the ssh config and the specified key are used for the connection.
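Putting those pieces together, the connection step might look like this (the DNS name is a placeholder, and ec2-user assumes an Amazon Linux 2 instance):

```shell
# Connect within 60 seconds of pushing the public key.
ssh -o "IdentitiesOnly=yes" -i my_key \
    ec2-user@ec2-a-b-c-d.us-west-2.compute.amazonaws.com
```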
Amazon Linux 2
You can uninstall EC2 Instance Connect on Amazon Linux 2 2.0.20190618 or later, where EC2
Instance Connect is preconfigured.
1. Connect to your instance using SSH. Specify the SSH key pair you used for your instance when
you launched it and the default user name for the Amazon Linux 2 AMI, which is ec2-user.
For example, the following ssh command connects to the instance with the public DNS
name ec2-a-b-c-d.us-west-2.compute.amazonaws.com, using the key pair
my_ec2_private_key.pem.
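A sketch of those steps, followed by the package removal (the ec2-instance-connect package name is assumed; the key file and DNS name are the example values above):

```shell
# Connect to the Amazon Linux 2 instance.
ssh -i my_ec2_private_key.pem ec2-user@ec2-a-b-c-d.us-west-2.compute.amazonaws.com

# On the instance, remove the EC2 Instance Connect package.
sudo yum remove ec2-instance-connect
```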
Ubuntu
1. Connect to your instance using SSH. Specify the SSH key pair you used for your instance when
you launched it and the default user name for the Ubuntu AMI, which is ubuntu.
For example, the following ssh command connects to the instance with the public DNS
name ec2-a-b-c-d.us-west-2.compute.amazonaws.com, using the key pair
my_ec2_private_key.pem.
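A sketch of those steps for Ubuntu, followed by the package removal (this assumes the package was installed as ec2-instance-connect; the key file and DNS name are the example values above):

```shell
# Connect to the Ubuntu instance.
ssh -i my_ec2_private_key.pem ubuntu@ec2-a-b-c-d.us-west-2.compute.amazonaws.com

# On the instance, remove the EC2 Instance Connect package.
sudo apt-get remove ec2-instance-connect
```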
The following instructions explain how to connect to your instance using PuTTY, a free SSH client
for Windows. If you receive an error while attempting to connect to your instance, see Troubleshoot
connecting to your instance (p. 1686).
Prerequisites
Before you connect to your Linux instance using PuTTY, complete the following prerequisites.
After you launch an instance, it can take a few minutes for the instance to be ready so that you can
connect to it. Check that your instance has passed its status checks. You can view this information in
the Status check column on the Instances page.
Verify the general prerequisites for connecting to your instance
To find the public DNS name or IP address of your instance and the user name that you should use
to connect to your instance, see General prerequisites for connecting to your instance (p. 596).
Install PuTTY on your local computer
Download and install PuTTY from the PuTTY download page. If you already have an older version of
PuTTY installed, we recommend that you download the latest version. Be sure to install the entire
suite.
Convert your private key using PuTTYgen
Locate the private key (.pem file) for the key pair that you specified when you launched the instance.
Convert the .pem file to a .ppk file for use with PuTTY. For more information, follow the steps in the
next section.
PuTTY does not natively support the private key format (.pem) generated by Amazon EC2. PuTTY provides
a tool named PuTTYgen, which converts keys to the required format for PuTTY. You must convert your
private key (.pem file) into this format (.ppk file) as follows in order to connect to your instance using PuTTY.
3. Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem
file, choose the option to display files of all types.
4. Select your .pem file for the key pair that you specified when you launched your instance and
choose Open. PuTTYgen displays a notice that the .pem file was successfully imported. Choose OK.
5. To save the key in the format that PuTTY can use, choose Save private key. PuTTYgen displays a
warning about saving the key without a passphrase. Choose Yes.
Note
A passphrase on a private key is an extra layer of protection. Even if your private key is
discovered, it can't be used without the passphrase. The downside to using a passphrase
is that it makes automation harder because human intervention is needed to log on to an
instance, or to copy files to an instance.
6. Specify the same name for the key that you used for the key pair (for example, my-key-pair) and
choose Save. PuTTY automatically adds the .ppk file extension.
Your private key is now in the correct format for use with PuTTY. You can now connect to your instance
using PuTTY's SSH client.
1. Start PuTTY (from the Start menu, choose All Programs, PuTTY, PuTTY).
2. In the Category pane, choose Session and complete the following fields:
a. In the Host Name box, enter user_name@public_dns_name, using the user name for your AMI
and the public DNS name or IPv6 address of your instance. For information about how to get
the user name for your instance, and the public DNS name or IPv6 address of your instance,
see Get information about your instance (p. 597).
b. Ensure that the Port value is 22.
c. Under Connection type, select SSH.
3. (Optional) You can configure PuTTY to automatically send 'keepalive' data at regular intervals to
keep the session active. This is useful to avoid disconnecting from your instance due to session
inactivity. In the Category pane, choose Connection, and then enter the required interval in the
Seconds between keepalives field. For example, if your session disconnects after 10 minutes of
inactivity, enter 180 to configure PuTTY to send keepalive data every 3 minutes.
4. In the Category pane, expand Connection, expand SSH, and then choose Auth. Complete the
following:
a. Choose Browse.
b. Select the .ppk file that you generated for your key pair and choose Open.
c. (Optional) If you plan to start this session again later, you can save the session information for
future use. Under Category, choose Session, enter a name for the session in Saved Sessions,
and then choose Save.
d. Choose Open.
5. If this is the first time you have connected to this instance, PuTTY displays a security alert dialog box
that asks whether you trust the host to which you are connecting.
a. (Optional) Verify that the fingerprint in the security alert dialog box matches the fingerprint
that you previously obtained in (Optional) Get the instance fingerprint (p. 599). If these
fingerprints don't match, someone might be attempting a "man-in-the-middle" attack. If they
match, continue to the next step.
b. Choose Yes. A window opens and you are connected to your instance.
Note
If you specified a passphrase when you converted your private key to PuTTY's format,
you must provide that passphrase when you log in to the instance.
If you receive an error while attempting to connect to your instance, see Troubleshoot connecting to your
instance (p. 1686).
Transfer files to your Linux instance using the PuTTY Secure Copy client
The PuTTY Secure Copy client (PSCP) is a command line tool that you can use to transfer files between
your Windows computer and your Linux instance. If you prefer a graphical user interface (GUI), you
can use an open source GUI tool named WinSCP. For more information, see Transfer files to your Linux
instance using WinSCP (p. 616).
To use PSCP, you need the private key you generated in Convert your private key using
PuTTYgen (p. 613). You also need the public DNS name of your Linux instance, or the IPv6 address if
your instance has one.
The following example transfers the file Sample_file.txt from the C:\ drive on a Windows computer
to the my-instance-user-name home directory on an Amazon Linux instance. To transfer a file, use
one of the following commands.
• (Public DNS) To transfer a file using your instance's public DNS name, enter the following command.
• (IPv6) Alternatively, if your instance has an IPv6 address, to transfer a file using your instance's IPv6
address, enter the following command. The IPv6 address must be enclosed in square brackets ([ ]).
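The two PSCP commands described above might look like the following (the key path, IPv6 address, and my-instance-public-dns-name are placeholders for your own values):

```shell
# (Public DNS) Copy Sample_file.txt from C:\ to the user's home directory on the instance.
pscp -i C:\path\my-key-pair.ppk C:\Sample_file.txt my-instance-user-name@my-instance-public-dns-name:/home/my-instance-user-name/Sample_file.txt

# (IPv6) The IPv6 address must be enclosed in square brackets.
pscp -i C:\path\my-key-pair.ppk C:\Sample_file.txt my-instance-user-name@[2001:db8:1234:1a00:9691:9503:25ad:1761]:/home/my-instance-user-name/Sample_file.txt
```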
Requirements
• You must have the private key that you generated in Convert your private key using
PuTTYgen (p. 613).
• You must have the public DNS name of your Linux instance.
• Your Linux instance must have scp installed. For some operating systems, you install the
openssh-clients package. For others, such as the Amazon ECS-optimized AMI, you install the scp package.
Check the documentation for your Linux distribution.
1. Download and install WinSCP from http://winscp.net/eng/download.php. For most users, the
default installation options are OK.
2. Start WinSCP.
3. At the WinSCP login screen, for Host name, enter one of the following:
• (Public DNS or IPv4 address) To log in using your instance's public DNS name or public IPv4
address, enter the public DNS name or public IPv4 address for your instance.
• (IPv6) Alternatively, if your instance has an IPv6 address, to log in using your instance's IPv6
address, enter the IPv6 address for your instance.
4. For User name, enter the default user name for your AMI.
• For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
• For a CentOS AMI, the user name is centos or ec2-user.
• For a Debian AMI, the user name is admin.
• For a Fedora AMI, the user name is fedora or ec2-user.
• For a RHEL AMI, the user name is ec2-user or root.
• For a SUSE AMI, the user name is ec2-user or root.
• For an Ubuntu AMI, the user name is ubuntu.
• For an Oracle AMI, the user name is ec2-user.
• For a Bitnami AMI, the user name is bitnami.
• Otherwise, check with the AMI provider.
5. Specify the private key for your instance. For Private key, enter the path to your private key,
or choose the "..." button to browse for the file. To open the advanced site settings, for newer
versions of WinSCP, choose Advanced. To find the Private key file setting, under SSH, choose
Authentication.
WinSCP requires a PuTTY private key file (.ppk). You can convert a .pem security key file to
the .ppk format using PuTTYgen. For more information, see Convert your private key using
PuTTYgen (p. 613).
6. (Optional) In the left panel, choose Directories. For Remote directory, enter the path for the
directory to which to add files. To open the advanced site settings for newer versions of WinSCP,
choose Advanced. To find the Remote directory setting, under Environment, choose Directories.
7. Choose Login. To add the host fingerprint to the host cache, choose Yes.
8. After the connection is established, in the connection window your Linux instance is on the right
and your local machine is on the left. You can drag and drop files between the remote file system
and your local machine. For more information on WinSCP, see the project documentation at
http://winscp.net/eng/docs/start.
If you receive an error that you cannot run SCP to start the transfer, verify that you installed scp on
the Linux instance.
The following instructions explain how to connect to your instance using a Linux distribution on the
Windows Subsystem for Linux (WSL). WSL is a free download and enables you to run native Linux
command line tools directly on Windows, alongside your traditional Windows desktop, without the
overhead of a virtual machine.
By installing WSL, you can use a native Linux environment to connect to your Linux EC2 instances instead
of using PuTTY or PuTTYgen. The Linux environment makes it easier to connect to your Linux instances
because it comes with a native SSH client that you can use to connect to your Linux instances and change
the permissions of the .pem key file. The Amazon EC2 console provides the SSH command for connecting
to the Linux instance, and you can get verbose output from the SSH command for troubleshooting. For
more information, see the Windows Subsystem for Linux Documentation.
Note
After you've installed the WSL, all the prerequisites and steps are the same as those described in
Connect to your Linux instance using SSH (p. 599), and the experience is just like using native
Linux.
If you receive an error while attempting to connect to your instance, see Troubleshoot connecting to your
instance (p. 1686).
Contents
• Prerequisites (p. 599)
• Connect to your Linux instance using WSL (p. 619)
• Transfer files to Linux instances from Linux using SCP (p. 620)
• Uninstall WSL (p. 622)
Prerequisites
Before you connect to your Linux instance, complete the following prerequisites.
After you launch an instance, it can take a few minutes for the instance to be ready so that you can
connect to it. Check that your instance has passed its status checks. You can view this information in
the Status check column on the Instances page.
Verify the general prerequisites for connecting to your instance
To find the public DNS name or IP address of your instance and the user name that you should use
to connect to your instance, see General prerequisites for connecting to your instance (p. 596).
Install the Windows Subsystem for Linux (WSL) and a Linux distribution on your local computer
Install the WSL and a Linux distribution using the instructions in the Windows 10 Installation Guide.
The example in the instructions installs the Ubuntu distribution of Linux, but you can install any
distribution. You are prompted to restart your computer for the changes to take effect.
Copy the private key from Windows to WSL
In a WSL terminal window, copy the .pem file (for the key pair that you specified when you launched
the instance) from Windows to WSL. Note the fully-qualified path to the .pem file on WSL to use
when connecting to your instance. For information about how to specify the path to your Windows
hard drive, see How do I access my C drive?. For more information about key pairs and Windows
instances, see Amazon EC2 key pairs and Windows instances.
1. In a terminal window, use the ssh command to connect to the instance. You specify the path and file
name of the private key (.pem), the user name for your instance, and the public DNS name or IPv6
address for your instance. For more information about how to find the private key, the user name
for your instance, and the DNS name or IPv6 address for an instance, see Locate the private key and
set the permissions (p. 598) and Get information about your instance (p. 597). To connect to your
instance, use one of the following commands.
• (Public DNS) To connect using your instance's public DNS name, enter the following command.
• (IPv6) Alternatively, if your instance has an IPv6 address, you can connect to the instance using
its IPv6 address. Specify the ssh command with the path to the private key (.pem) file, the
appropriate user name, and the IPv6 address.
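The two connection commands described above might look like the following in a WSL terminal. The /mnt/c path reflects the WSL convention for the Windows C: drive; the key path, user name, DNS name, and IPv6 address are placeholders:

```shell
# (Public DNS)
ssh -i /mnt/c/path/my-key-pair.pem my-instance-user-name@my-instance-public-dns-name

# (IPv6)
ssh -i /mnt/c/path/my-key-pair.pem my-instance-user-name@2001:db8:1234:1a00:9691:9503:25ad:1761
```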
2. (Optional) Verify that the fingerprint in the security alert matches the fingerprint that you previously
obtained in (Optional) Get the instance fingerprint (p. 599). If these fingerprints don't match,
someone might be attempting a "man-in-the-middle" attack. If they match, continue to the next
step.
3. Enter yes.
Prerequisites
The general prerequisites for transferring files to an instance are the same as the general prerequisites
for connecting to an instance. For more information, see General prerequisites for connecting to your
instance (p. 596).
• Install an SCP client
Most Linux, Unix, and Apple computers include an SCP client by default. If yours doesn't, the OpenSSH
project provides a free implementation of the full suite of SSH tools, including an SCP client. For more
information, see https://www.openssh.com.
The following procedure steps you through using SCP to transfer a file. If you've already connected to
the instance with SSH and have verified its fingerprints, you can start with the step that contains the SCP
command (step 4).
1. Transfer a file to your instance using the instance's public DNS name. For example, if the name of
the private key file is my-key-pair, the file to transfer is SampleFile.txt, and the user name is
my-instance-user-name, use one of the following commands.
• (IPv6) Alternatively, if your instance has an IPv6 address, you can transfer a file using the
instance's IPv6 address. The IPv6 address must be enclosed in square brackets ([ ]), which must
be escaped (\).
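A sketch of this transfer step from a WSL terminal, using the placeholder names above (substitute your own key path, user name, and addresses):

```shell
# (Public DNS) Copy SampleFile.txt to the user's home directory on the instance.
scp -i /mnt/c/path/my-key-pair.pem SampleFile.txt my-instance-user-name@my-instance-public-dns-name:~

# (IPv6) The square brackets around the IPv6 address must be escaped.
scp -i /mnt/c/path/my-key-pair.pem SampleFile.txt my-instance-user-name@\[2001:db8:1234:1a00:9691:9503:25ad:1761\]:~
```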
2. (Optional) Verify that the fingerprint in the security alert matches the fingerprint that you previously
obtained in (Optional) Get the instance fingerprint (p. 599). If these fingerprints don't match,
someone might be attempting a "man-in-the-middle" attack. If they match, continue to the next
step.
3. Enter yes.
If you receive a "bash: scp: command not found" error, you must first install scp on your Linux
instance. For some operating systems, scp is included in the openssh-clients package. For
Amazon Linux variants, such as the Amazon ECS-optimized AMI, use the following command to
install scp:
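On yum-based Amazon Linux variants, a sketch of the install command:

```shell
# Install the package that provides scp.
sudo yum install -y openssh-clients
```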
4. To transfer files in the other direction (from your Amazon EC2 instance to your local computer),
reverse the order of the host parameters. For example, to transfer the SampleFile.txt file from
your EC2 instance back to the home directory on your local computer as SampleFile2.txt, use
one of the following commands on your local computer.
• (Public DNS) To transfer a file using your instance's public DNS name, enter the following
command.
• (IPv6) Alternatively, if your instance has an IPv6 address, to transfer files in the other direction
using the instance's IPv6 address, enter the following command.
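Sketches of both forms of the reverse transfer, again using placeholder values:

```shell
# Public DNS name: copy SampleFile.txt from the instance to the local home directory.
scp -i /path/my-key-pair.pem ec2-user@my-instance-public-dns-name:~/SampleFile.txt ~/SampleFile2.txt

# IPv6 address: brackets around the address, escaped.
scp -i /path/my-key-pair.pem ec2-user@\[2001:db8:1234:1a00:9691:9503:25ad:1761\]:~/SampleFile.txt ~/SampleFile2.txt
```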
Uninstall WSL
For information about uninstalling Windows Subsystem for Linux, see How do I uninstall a WSL
Distribution?.
Before attempting to connect to an instance using Session Manager, ensure that the necessary setup
steps have been completed. For more information and instructions, see Setting up Session Manager.
To connect to a Linux instance with Session Manager using the Amazon EC2 console
Troubleshooting
If you receive an error that you’re not authorized to perform one or more Systems Manager actions
(ssm:command-name), then you must update your policies to allow you to start sessions from the
Amazon EC2 console. For more information, see Quickstart default IAM policies for Session Manager in
the AWS Systems Manager User Guide.
When you stop an instance, we shut it down. We don't charge usage or data transfer fees for a
stopped instance, but we do charge for the storage of any Amazon EBS volumes. Each time you start a
stopped instance, we charge a minimum of one minute for usage. After one minute, we charge only for
the seconds you use. For example, if you run an instance for 20 seconds and then stop it, we charge for
one full minute. If you run an instance for 3 minutes and 40 seconds, we charge for exactly 3 minutes
and 40 seconds of usage.
While the instance is stopped, you can treat its root volume like any other volume, and modify it (for
example, repair file system problems or update software). You just detach the volume from the stopped
instance, attach it to a running instance, make your changes, detach it from the running instance, and
then reattach it to the stopped instance. Make sure that you reattach it using the storage device name
that's specified as the root device in the block device mapping for the instance.
If you decide that you no longer need an instance, you can terminate it. As soon as the state of an
instance changes to shutting-down or terminated, we stop charging for that instance. For more
information, see Terminate your instance (p. 646). If you'd rather hibernate the instance, see Hibernate
your On-Demand or Reserved Linux instance (p. 626). For more information, see Differences between
reboot, stop, hibernate, and terminate (p. 562).
Contents
• Overview (p. 623)
• What happens when you stop an instance (p. 624)
• Stop and start your instances (p. 624)
• Modify a stopped instance (p. 625)
• Troubleshoot stopping your instance (p. 625)
Overview
You can only stop an Amazon EBS-backed instance. To verify the root device type of your instance,
describe the instance and check whether the device type of its root volume is ebs (Amazon EBS-backed
instance) or instance store (instance store-backed instance). For more information, see Determine
the root device type of your AMI (p. 97).
• The instance performs a normal shutdown and stops running; its status changes to stopping and
then stopped.
• Any Amazon EBS volumes remain attached to the instance, and their data persists.
• Any data stored in the RAM of the host computer or the instance store volumes of the host computer
is gone.
• In most cases, the instance is migrated to a new underlying host computer when it's started (though in
some cases, it remains on the current host).
• The instance retains its private IPv4 addresses and any IPv6 addresses when stopped and started. We
release the public IPv4 address and assign a new one when you start it.
• The instance retains its associated Elastic IP addresses. You're charged for any Elastic IP addresses
associated with a stopped instance. With EC2-Classic, an Elastic IP address is disassociated from your
instance when you stop it. For more information, see EC2-Classic (p. 1183).
• When you stop and start a Windows instance, the EC2Config service performs tasks on the instance,
such as changing the drive letters for any attached Amazon EBS volumes. For more information
about these defaults and how you can change them, see Configuring a Windows instance using the
EC2Config service in the Amazon EC2 User Guide for Windows Instances.
• If your instance is in an Auto Scaling group, the Amazon EC2 Auto Scaling service marks the stopped
instance as unhealthy, and may terminate it and launch a replacement instance. For more information,
see Health Checks for Auto Scaling Instances in the Amazon EC2 Auto Scaling User Guide.
• When you stop a ClassicLink instance, it's unlinked from the VPC to which it was linked. You must
link the instance to the VPC again after starting it. For more information about ClassicLink, see
ClassicLink (p. 1190).
For more information, see Differences between reboot, stop, hibernate, and terminate (p. 562).
You can modify the following attributes of an instance only when it is stopped:
• Instance type
• User data
• Kernel
• RAM disk
If you try to modify these attributes while the instance is running, Amazon EC2 returns the
IncorrectInstanceState error.
By default, when you initiate a shutdown from an Amazon EBS-backed instance (for example, using the
shutdown or poweroff command), the instance stops. You can change this behavior so that it terminates
instead. For more information, see Change the instance initiated shutdown behavior (p. 650).
Using the halt command from an instance does not initiate a shutdown. If used, the instance does not
terminate; instead, it places the CPU into the HLT state and the instance remains running.
New console
1. When you stop an instance, the data on any instance store volumes is erased. Before you stop an
instance, verify that you've copied any data that you need from your instance store volumes to
persistent storage, such as Amazon EBS or Amazon S3.
2. In the navigation pane, choose Instances and select the instance.
3. Choose Instance state, Stop instance. If this option is disabled, either the instance is already
stopped or its root device is an instance store volume.
4. When prompted for confirmation, choose Stop. It can take a few minutes for the instance to
stop.
5. (Optional) While your instance is stopped, you can modify certain instance attributes. For more
information, see Modify a stopped instance (p. 625).
6. To start the stopped instance, select the instance, and choose Instance state, Start instance.
7. It can take a few minutes for the instance to enter the running state.
Old console
1. When you stop an instance, the data on any instance store volumes is erased. Before you stop an
instance, verify that you've copied any data that you need from your instance store volumes to
persistent storage, such as Amazon EBS or Amazon S3.
To stop and start an Amazon EBS-backed instance using the command line
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
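With the AWS CLI, the stop and start operations look like the following (the instance ID is a placeholder):

```shell
# Stop an EBS-backed instance, then start it again.
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 start-instances --instance-ids i-1234567890abcdef0
```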
You can use AWS Fault Injection Simulator to test how your application responds when your instance is
stopped and started. For more information, see the AWS Fault Injection Simulator User Guide.
• To change the instance type, see Change the instance type (p. 367).
• To change the user data for your instance, see Work with instance user data (p. 726).
• To enable or disable EBS–optimization for your instance, see Modifying EBS–Optimization (p. 1580).
• To change the DeleteOnTermination attribute of the root volume for your instance, see Update
the block device mapping of a running instance (p. 1654). You are not required to stop the instance to
change this attribute.
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
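For example, with the AWS CLI you can change the instance type of a stopped instance using modify-instance-attribute (the instance ID and target type are placeholders):

```shell
# Change the instance type of a stopped instance.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --instance-type "{\"Value\": \"m5.large\"}"
```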
You can hibernate an instance only if it's enabled for hibernation (p. 635) and it meets the hibernation
prerequisites (p. 627).
If an instance or application takes a long time to bootstrap and build a memory footprint in order to
become fully productive, you can use hibernation to pre-warm the instance. To pre-warm the instance,
you launch it with hibernation enabled, bring it to a desired state, and then hibernate it so that it's
ready to be resumed to that state whenever needed.
You're not charged for instance usage for a hibernated instance when it is in the stopped state.
However, you are charged for instance usage while the instance is in the stopping state, while the
contents of the RAM are transferred to the EBS root volume. (This is different from when you stop
an instance (p. 622) without hibernating it.) You're not charged for data transfer. However, you are
charged for storage of any EBS volumes, including storage for the RAM contents.
If you no longer need an instance, you can terminate it at any time, including when it is in a stopped
(hibernated) state. For more information, see Terminate your instance (p. 646).
Note
For information about using hibernation on Windows instances, see Hibernate Your Windows
Instance in the Amazon EC2 User Guide for Windows Instances.
For information about hibernating Spot Instances, see Hibernate interrupted Spot
Instances (p. 462).
Contents
• Overview of hibernation (p. 627)
• Hibernation prerequisites (p. 627)
• Limitations (p. 630)
• Configure an existing AMI to support hibernation (p. 631)
• Enable hibernation for an instance (p. 635)
• Disable KASLR on an instance (Ubuntu only) (p. 638)
• Hibernate an instance (p. 639)
• Start a hibernated instance (p. 641)
• Troubleshoot hibernation (p. 641)
Overview of hibernation
The following diagram shows a basic overview of the hibernation process.
• When you initiate hibernation, the instance moves to the stopping state. Amazon EC2 signals
the operating system to perform hibernation (suspend-to-disk). The hibernation freezes all of the
processes, saves the contents of the RAM to the EBS root volume, and then performs a regular
shutdown.
• After the shutdown is complete, the instance moves to the stopped state.
• Any EBS volumes remain attached to the instance, and their data persists, including the saved contents
of the RAM.
• Any Amazon EC2 instance store volumes remain attached to the instance, but the data on the instance
store volumes is lost.
• In most cases, the instance is migrated to a new underlying host computer when it's started. This is
also what happens when you stop and start an instance.
• When you start the instance, the instance boots up and the operating system reads in the contents of
the RAM from the EBS root volume, before unfreezing processes to resume its state.
• The instance retains its private IPv4 addresses and any IPv6 addresses. When you start the instance,
the instance continues to retain its private IPv4 addresses and any IPv6 addresses.
• Amazon EC2 releases the public IPv4 address. When you start the instance, Amazon EC2 assigns a new
public IPv4 address to the instance.
• The instance retains its associated Elastic IP addresses. You're charged for any Elastic IP addresses that
are associated with a hibernated instance. With EC2-Classic, an Elastic IP address is disassociated from
your instance when you hibernate it. For more information, see EC2-Classic (p. 1183).
• When you hibernate a ClassicLink instance, it's unlinked from the VPC to which it was linked. You must
link the instance to the VPC again after starting it. For more information, see ClassicLink (p. 1190).
For information about how hibernation differs from reboot, stop, and terminate, see Differences between
reboot, stop, hibernate, and terminate (p. 562).
Hibernation prerequisites
To hibernate an On-Demand Instance or Reserved Instance, the following prerequisites must be in
place:
• Supported Linux AMIs (p. 628)
• Supported instance families (p. 629)
• Instance size (p. 629)
• Instance RAM size (p. 629)
* For CentOS, Fedora, and Red Hat Enterprise Linux, hibernation is supported on Nitro-based instances
only.
† We recommend disabling KASLR on instances with Ubuntu 20.04 LTS – Focal, Ubuntu 18.04 LTS –
Bionic, and Ubuntu 16.04 LTS – Xenial. For more information, see Disable KASLR on an instance (Ubuntu
only) (p. 638).
To configure your own AMI to support hibernation, see Configure an existing AMI to support
hibernation (p. 631).
Support for other versions of Ubuntu and other operating systems is coming soon.
For information about the supported Windows AMIs, see Supported Windows AMIs in the Amazon EC2
User Guide for Windows Instances.
To see the available instance types that support hibernation in a specific Region
The available instance types vary by Region. To see the available instance types that support hibernation
in a Region, use the describe-instance-types command with the --region parameter. Include the --
filters parameter to scope the results to the instance types that support hibernation and the --
query parameter to scope the output to the value of InstanceType.
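A sketch of the command (the Region is a placeholder):

```shell
# List instance types in the Region that support hibernation.
aws ec2 describe-instance-types \
    --region us-east-1 \
    --filters Name=hibernation-supported,Values=true \
    --query "InstanceTypes[*].InstanceType" \
    --output text
```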
Example output
c3.2xlarge
c3.4xlarge
c3.8xlarge
c3.large
c3.xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
...
Instance size
Not supported for bare metal instances.
If you choose a Provisioned IOPS SSD volume type, you must provision the EBS volume with the
appropriate IOPS to achieve optimum performance for hibernation. For more information, see Amazon
EBS volume types (p. 1329).
Use one of the following three options to ensure that the root volume is an encrypted EBS volume:
• EBS encryption by default – You can enable EBS encryption by default to ensure that all new
EBS volumes created in your AWS account are encrypted. This way, you can enable hibernation for
your instances without specifying encryption intent at instance launch. For more information, see
Encryption by default (p. 1539).
• EBS "single-step" encryption – You can launch encrypted EBS-backed EC2 instances from an
unencrypted AMI and also enable hibernation at the same time. For more information, see Use
encryption with EBS-backed AMIs (p. 189).
• Encrypted AMI – You can enable EBS encryption by using an encrypted AMI to launch your instance.
If your AMI does not have an encrypted root snapshot, you can copy it to a new AMI and request
encryption. For more information, see Encrypt an unencrypted image during copy (p. 193) and Copy an
AMI (p. 172).
Purchasing options
This feature is available for On-Demand Instances and Reserved Instances. It is not available for
Spot Instances. For information about hibernating a Spot Instance, see Hibernate interrupted Spot
Instances (p. 462).
Limitations
• When you hibernate an instance, the data on any instance store volumes is lost.
• You can't hibernate an instance that has more than 150 GB of RAM.
• If you create a snapshot or AMI from an instance that is hibernated or has hibernation enabled, you
might not be able to connect to the instance.
• You can't change the instance type or size of an instance when hibernation is enabled.
• You can't hibernate an instance that is in an Auto Scaling group or used by Amazon ECS. If your
instance is in an Auto Scaling group and you try to hibernate it, the Amazon EC2 Auto Scaling service
marks the stopped instance as unhealthy, and might terminate it and launch a replacement instance.
For more information, see Health Checks for Auto Scaling Instances in the Amazon EC2 Auto Scaling
User Guide.
• You can't hibernate an instance that is configured to boot in UEFI mode.
• If you hibernate an instance that was launched into a Capacity Reservation, the Capacity Reservation
does not ensure that the hibernated instance can resume after you try to start it.
• We do not support keeping an instance hibernated for more than 60 days. To keep the instance for
longer than 60 days, you must start the hibernated instance, stop the instance, and start it.
• We constantly update our platform with upgrades and security patches, which can conflict with
existing hibernated instances. We notify you about critical updates that require a start for hibernated
instances so that we can perform a shutdown or a reboot to apply the necessary upgrades and security
patches.
For more information, see Update instance software on your Amazon Linux instance (p. 656).
No additional configuration is required for the following AMIs because they're already
configured to support hibernation:
5. Stop the instance and create an AMI. For more information, see Create a Linux AMI from an
instance (p. 136).
5. Stop the instance and create an AMI. For more information, see Create a Linux AMI from an
instance (p. 136).
2. Install the Fedora Extra Packages for Enterprise Linux (EPEL) repository.
2. Install the Fedora Extra Packages for Enterprise Linux (EPEL) repository.
Note
The linux-aws-hwe kernel package is fully supported by Canonical. The package will
continue to receive regular updates until standard support for Ubuntu 16.04 LTS ends
in April 2021, and will receive additional security updates until the Extended Security
Maintenance support ends in 2024. For more information, see Amazon EC2 Hibernation for
Ubuntu 16.04 LTS now available on the Canonical Ubuntu Blog.
2. Install the ec2-hibinit-agent package from the repositories.
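A sketch of the install step on Ubuntu:

```shell
# Install the hibernation setup agent from the Ubuntu repositories.
sudo apt-get install -y ec2-hibinit-agent
```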
Console
1. Follow the Launch an instance using the Launch Instance Wizard (p. 565) procedure.
2. On the Choose an Amazon Machine Image (AMI) page, select an AMI that
supports hibernation. For more information about supported AMIs, see Hibernation
prerequisites (p. 627).
3. On the Choose an Instance Type page, select a supported instance type, and choose Next:
Configure Instance Details. For information about supported instance types, see Hibernation
prerequisites (p. 627).
4. On the Configure Instance Details page, for Stop - Hibernate Behavior, select the Enable
hibernation as an additional stop behavior check box.
5. On the Add Storage page, for the root volume, specify the following information:
• For Size (GiB), enter the EBS root volume size. The volume must be large enough to store the
RAM contents and accommodate your expected usage.
• For Volume Type, select a supported EBS volume type, General Purpose SSD (gp2 and gp3) or
Provisioned IOPS SSD (io1 and io2).
• For Encryption, select the encryption key for the volume. If you enabled encryption by
default in this AWS Region, the default encryption key is selected.
For more information about the prerequisites for the root volume, see Hibernation
prerequisites (p. 627).
6. Continue as prompted by the wizard. When you've finished reviewing your options on the
Review Instance Launch page, choose Launch. For more information, see Launch an instance
using the Launch Instance Wizard (p. 565).
AWS CLI
Use the run-instances command to launch an instance. Specify the EBS root volume parameters
using the --block-device-mappings file://mapping.json parameter, and enable
hibernation using the --hibernation-options Configured=true parameter.
[
  {
    "DeviceName": "/dev/xvda",
    "Ebs": {
      "VolumeSize": 30,
      "VolumeType": "gp2",
      "Encrypted": true
    }
  }
]
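Putting it together, a run-instances sketch (the AMI ID, instance type, and key name are placeholders):

```shell
# Launch an instance with hibernation enabled, using the block device
# mapping defined in mapping.json for the EBS root volume.
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type m5.large \
    --block-device-mappings file://mapping.json \
    --hibernation-options Configured=true \
    --count 1 \
    --key-name MyKeyPair
```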
Note
The value for DeviceName must match the root device name that's associated with the
AMI. To find the root device name, use the describe-images command.
If you enabled encryption by default in this AWS Region, you can omit "Encrypted":
true.
PowerShell
Use the New-EC2Instance command to launch an instance. Specify the EBS root
volume by first defining the block device mapping, and then adding it to the command
using the -BlockDeviceMappings parameter. Enable hibernation using the -
HibernationOptions_Configured $true parameter.
PS C:\> New-EC2Instance `
-ImageId ami-0abcdef1234567890 `
-InstanceType m5.large `
-BlockDeviceMappings $ebs_encrypt `
-HibernationOptions_Configured $true `
-MinCount 1 `
-MaxCount 1 `
-KeyName MyKeyPair
Note
The value for DeviceName must match the root device name associated with the AMI. To
find the root device name, use the Get-EC2Image command.
If you enabled encryption by default in this AWS Region, you can omit Encrypted =
$true from the block device mapping.
New console
Old console
AWS CLI
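With the AWS CLI, you can filter on the hibernation-options.configured attribute:

```shell
# List instances that are enabled for hibernation.
aws ec2 describe-instances \
    --filters Name=hibernation-options.configured,Values=true
```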
The following field in the output indicates that the instance is enabled for hibernation.
"HibernationOptions": {
    "Configured": true
}
PowerShell
To view if an instance is enabled for hibernation using the AWS Tools for Windows PowerShell
Get-EC2Instance `
-Filter @{ Name="hibernation-options.configured"; Value="true"}
The output lists the EC2 instances that are enabled for hibernation.
KASLR is a standard Linux kernel security feature that helps to mitigate exposure to and ramifications
of yet-undiscovered memory access vulnerabilities by randomizing the base address value of the
kernel. With KASLR enabled, there is a possibility that the instance might not resume after it has been
hibernated.
1. Connect to your instance using SSH. For more information, see Connect to your Linux instance using
SSH (p. 599).
2. Open the /etc/default/grub.d/50-cloudimg-settings.cfg file in your editor of choice. Edit
the GRUB_CMDLINE_LINUX_DEFAULT line to append the nokaslr option to its end, as shown in
the following example.
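For example, the edited line might look like the following; the existing kernel parameters shown here are illustrative, so keep whatever is already on the line in your file and append nokaslr to the end:

```shell
# /etc/default/grub.d/50-cloudimg-settings.cfg (illustrative contents)
GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 nvme_core.io_timeout=4294967295 nokaslr"
```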
6. Run the following command to confirm that nokaslr has been added.
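One way to confirm is to inspect the running kernel's command line after the GRUB configuration has been regenerated and the instance rebooted:

```shell
# Show the parameters the running kernel was booted with.
cat /proc/cmdline
```

If the change took effect, nokaslr appears in the output.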
Hibernate an instance
You can hibernate an instance if the instance is enabled for hibernation (p. 635) and meets the
hibernation prerequisites (p. 627). If an instance cannot hibernate successfully, a normal shutdown
occurs.
New console
Old console
AWS CLI
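With the AWS CLI, hibernation is requested through stop-instances with the --hibernate flag (the instance ID is a placeholder):

```shell
# Hibernate an instance instead of performing a plain stop.
aws ec2 stop-instances \
    --instance-ids i-1234567890abcdef0 \
    --hibernate
```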
PowerShell
To hibernate an Amazon EBS-backed instance using the AWS Tools for Windows PowerShell
Use the Stop-EC2Instance command and specify the -Hibernate $true parameter.
Stop-EC2Instance `
-InstanceId i-1234567890abcdef0 `
-Hibernate $true
New console
Old console
AWS CLI
Use the describe-instances command and specify the state-reason-code filter to see the
instances on which hibernation was initiated.
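A sketch of the command:

```shell
# List instances whose most recent state change was a user-initiated hibernate.
aws ec2 describe-instances \
    --filters Name=state-reason-code,Values=Client.UserInitiatedHibernate
```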
The following field in the output indicates that hibernation was initiated on the instance.
"StateReason": {
    "Code": "Client.UserInitiatedHibernate"
}
PowerShell
To view if hibernation was initiated on an instance using the AWS Tools for Windows PowerShell
Use the Get-EC2Instance command and specify the state-reason-code filter to see the instances
on which hibernation was initiated.
Get-EC2Instance `
-Filter @{Name="state-reason-code";Value="Client.UserInitiatedHibernate"}
The output lists the EC2 instances on which hibernation was initiated.
New console
Old console
AWS CLI
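With the AWS CLI, starting a hibernated instance uses the same start-instances command as starting a stopped instance (the instance ID is a placeholder):

```shell
# Resume a hibernated instance.
aws ec2 start-instances --instance-ids i-1234567890abcdef0
```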
PowerShell
To start a hibernated instance using the AWS Tools for Windows PowerShell
Start-EC2Instance `
-InstanceId i-1234567890abcdef0
Troubleshoot hibernation
Use this information to help diagnose and fix issues that you might encounter when hibernating an
instance.
You must wait for about two minutes after launch before hibernating.
Takes too long to transition from stopping to stopped, and memory state not
restored after start
If it takes a long time for your hibernating instance to transition from the stopping state to stopped,
and if the memory state is not restored after you start, this could indicate that hibernation was not
properly configured.
Check the instance system log and look for messages that are related to hibernation. To access the
system log, connect (p. 596) to the instance or use the get-console-output command. Find the log lines
from the hibinit-agent. If the log lines indicate a failure or the log lines are missing, there was most
likely a failure configuring hibernation at launch.
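For example, you might retrieve the console output and filter it for hibinit-agent messages (the instance ID is a placeholder):

```shell
# Fetch the system log and show only hibernation-agent lines.
aws ec2 get-console-output \
    --instance-id i-1234567890abcdef0 \
    --output text | grep -i hibinit
```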
For example, the following message indicates that the instance root volume is not large enough:
hibinit-agent: Insufficient disk space. Cannot create setup for hibernation.
Please allocate a larger root device.
If the last log line from the hibinit-agent is hibinit-agent: Running: swapoff /swap,
hibernation was successfully configured.
If you do not see any logs from these processes, your AMI might not support hibernation. For
information about supported AMIs, see Hibernation prerequisites (p. 627). If you used your
own AMI, make sure that you followed the instructions for Configure an existing AMI to support
hibernation (p. 631).
Rebooting an instance doesn't start a new instance billing period (with a minimum one-minute charge),
unlike stopping and starting your instance.
We might schedule your instance for a reboot for necessary maintenance, such as to apply updates that
require a reboot. No action is required on your part; we recommend that you wait for the reboot to occur
within its scheduled window. For more information, see Scheduled events for your instances (p. 935).
We recommend that you use the Amazon EC2 console, a command line tool, or the Amazon EC2 API to
reboot your instance instead of running the operating system reboot command from your instance. If
you use the Amazon EC2 console, a command line tool, or the Amazon EC2 API to reboot your instance,
we perform a hard reboot if the instance does not cleanly shut down within a few minutes. If you use
AWS CloudTrail, then using Amazon EC2 to reboot your instance also creates an API record of when your
instance was rebooted.
New console
Old console
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
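With the AWS CLI, a reboot looks like the following (the instance ID is a placeholder):

```shell
# Request a reboot; EC2 performs a hard reboot if the OS doesn't shut down cleanly.
aws ec2 reboot-instances --instance-ids i-1234567890abcdef0
```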
You can use AWS Fault Injection Simulator to test how your application responds when your instance is
rebooted. For more information, see the AWS Fault Injection Simulator User Guide.
Instance retirement
An instance is scheduled to be retired when AWS detects irreparable failure of the underlying hardware
that hosts the instance. When an instance reaches its scheduled retirement date, it is stopped or
terminated by AWS.
• If your instance root device is an Amazon EBS volume, the instance is stopped, and you can start it
again at any time. Starting the stopped instance migrates it to new hardware.
• If your instance root device is an instance store volume, the instance is terminated, and cannot be used
again.
For more information about the types of instance events, see Scheduled events for your
instances (p. 935).
Contents
• Identify instances scheduled for retirement (p. 643)
• Actions to take for EBS-backed instances scheduled for retirement (p. 645)
• Actions to take for instance-store backed instances scheduled for retirement (p. 645)
Important
If an instance is scheduled for retirement, we recommend that you take action as soon as
possible because the instance might be unreachable. (The email notification you receive states
the following: "Due to this degradation your instance could already be unreachable.") For more
information about the recommended action you should take, see Check if your instance is
reachable.
Email notification
If your instance is scheduled for retirement, you receive an email prior to the event with the instance ID
and retirement date.
The email is sent to the primary account holder and the operations contact. For more information, see
Adding, changing, or removing alternate contacts in the AWS Billing and Cost Management User Guide.
Console identification
If you use an email account that you do not check regularly for instance retirement notifications, you can
use the Amazon EC2 console or the command line to determine if any of your instances are scheduled for
retirement.
3. If you have an instance with a scheduled event listed, select its link below the Region name to go to
the Events page.
4. The Events page lists all resources that have events associated with them. To view instances that are
scheduled for retirement, select Instance resources from the first filter list, and then Instance stop
or retirement from the second filter list.
5. If the filter results show that an instance is scheduled for retirement, select it, and note the date and
time in the Start time field in the details pane. This is your instance retirement date.
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
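For example, describe-instance-status returns any scheduled events for an instance, including retirement (the instance ID is a placeholder):

```shell
# Show scheduled events, such as instance stop or retirement.
aws ec2 describe-instance-status \
    --instance-ids i-1234567890abcdef0 \
    --query "InstanceStatuses[].Events"
```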
If you are not sure whether your instance is backed by EBS or instance store, see Determine the root
device type of your instance (p. 1641).
When you are notified that your instance is scheduled for retirement, we recommend that you take the
following action as soon as possible:
• Check if your instance is reachable by either connecting (p. 596) to or pinging your instance.
• If your instance is reachable, you should plan to stop/start your instance at an appropriate time before
the scheduled retirement date, when the impact is minimal. For more information about stopping
and starting your instance, and what to expect when your instance is stopped, such as the effect on
public, private, and Elastic IP addresses that are associated with your instance, see Stop and start your
instance (p. 622). Note that data on instance store volumes is lost when you stop and start your
instance.
• If your instance is unreachable, you should take immediate action and perform a stop/start (p. 622)
to recover your instance.
• Alternatively, if you want to terminate (p. 646) your instance, plan to do so as soon as possible so
that you stop incurring charges for the instance.
Create an EBS-backed AMI from your instance so that you have a backup. To ensure data integrity,
stop the instance before you create the AMI. You can wait for the scheduled retirement date when the
instance is stopped, or stop the instance yourself before the retirement date. You can start the instance
again at any time. For more information, see Create an Amazon EBS-backed Linux AMI (p. 134).
After you create an AMI from your instance, you can use the AMI to launch a replacement instance. From
the Amazon EC2 console, select your new AMI and then choose Actions, Launch. Follow the wizard to
launch your instance. For more information about each step in the wizard, see Launch an instance using
the Launch Instance Wizard (p. 565).
When you are notified that your instance is scheduled for retirement, we recommend that you take the
following action as soon as possible:
• Check if your instance is reachable by either connecting to (p. 596) or pinging your instance.
• If your instance is unreachable, there is likely very little that can be done to recover your instance.
For more information, see Troubleshoot an unreachable instance (p. 1721). AWS will terminate your
instance on the scheduled retirement date, so, for an unreachable instance, you can immediately
terminate (p. 646) the instance yourself.
Create an instance store-backed AMI from your instance using the AMI tools, as described in Create an
instance store-backed Linux AMI (p. 139). From the Amazon EC2 console, select your new AMI and then
choose Actions, Launch. Follow the wizard to launch your instance. For more information about each
step in the wizard, see Launch an instance using the Launch Instance Wizard (p. 565).
Transfer your data to an EBS volume, take a snapshot of the volume, and then create an AMI from the
snapshot. You can launch a replacement instance from your new AMI. For more information, see Convert
your instance store-backed AMI to an Amazon EBS-backed AMI (p. 150).
You can't connect to or start an instance after you've terminated it. However, you can launch additional
instances using the same AMI. If you'd rather stop and start your instance, or hibernate it, see Stop and
start your instance (p. 622) or Hibernate your On-Demand or Reserved Linux instance (p. 626). For
more information, see Differences between reboot, stop, hibernate, and terminate (p. 562).
Contents
• Instance termination (p. 646)
• Terminating multiple instances with termination protection across Availability Zones (p. 647)
• What happens when you terminate an instance (p. 648)
• Terminate an instance (p. 648)
• Enable termination protection (p. 649)
• Change the instance initiated shutdown behavior (p. 650)
• Preserve Amazon EBS volumes on instance termination (p. 650)
• Troubleshoot instance termination (p. 652)
Instance termination
After you terminate an instance, it remains visible in the console for a short while, and then the entry
is automatically deleted. You cannot delete the terminated instance entry yourself. After an instance is
terminated, resources such as tags and volumes are gradually disassociated from the instance and may
no longer be visible on the terminated instance after a short while.
When an instance terminates, the data on any instance store volumes associated with that instance is
deleted.
By default, Amazon EBS root device volumes are automatically deleted when the instance terminates.
However, by default, any additional EBS volumes that you attach at launch, or any EBS volumes that
you attach to an existing instance persist even after the instance terminates. This behavior is controlled
by the volume's DeleteOnTermination attribute, which you can modify. For more information, see
Preserve Amazon EBS volumes on instance termination (p. 650).
You can prevent an instance from being terminated accidentally by someone using the AWS Management Console, the CLI, or the API. This feature is available for both Amazon EC2 instance store-backed and Amazon EBS-backed instances. Each instance has a DisableApiTermination attribute
with the default value of false (the instance can be terminated through Amazon EC2). You can modify
this instance attribute while the instance is running or stopped (in the case of Amazon EBS-backed
instances). For more information, see Enable termination protection (p. 649).
You can control whether an instance should stop or terminate when shutdown is initiated from the
instance using an operating system command for system shutdown. For more information, see Change
the instance initiated shutdown behavior (p. 650).
If you run a script on instance termination, your instance might terminate abnormally, because we have no way to ensure that shutdown scripts run. Amazon EC2 attempts to shut an instance down
cleanly and run any system shutdown scripts; however, certain events (such as hardware failure) may
prevent these system shutdown scripts from running.
• The specified instances that are in the same Availability Zone as the protected instance are not
terminated.
• The specified instances that are in different Availability Zones, where no other specified instances are
protected, are successfully terminated.
For example, suppose the following four instances span two Availability Zones, with termination protection enabled only on Instance C:

Instance     Availability Zone   Termination protection
Instance A   us-east-1a          Disabled
Instance B   us-east-1a          Disabled
Instance C   us-east-1b          Enabled
Instance D   us-east-1b          Disabled
If you attempt to terminate all of these instances in the same request, the request reports failure with
the following results:
• Instance A and Instance B are successfully terminated because none of the specified instances in us-
east-1a are enabled for termination protection.
• Instance C and Instance D fail to terminate because at least one of the specified instances in us-
east-1b (Instance C) is enabled for termination protection.
• The API request will send a button press event to the guest.
• Various system services will be stopped as a result of the button press event. systemd handles a
graceful shutdown of the system. Graceful shutdown is triggered by the ACPI shutdown button press
event from the hypervisor.
• ACPI shutdown will be initiated.
• The instance will shut down when the graceful shutdown process exits. There is no configurable OS
shutdown time.
Terminate an instance
You can terminate an instance using the AWS Management Console or the command line.
By default, when you initiate a shutdown from an Amazon EBS-backed instance (using the shutdown or
poweroff commands), the instance stops. The halt command does not initiate a shutdown. If used, the
instance does not terminate; instead, it places the CPU into HLT and the instance remains running.
New console
1. Before you terminate an instance, verify that you won't lose any data by checking that your
Amazon EBS volumes won't be deleted on termination and that you've copied any data that you
need from your instance store volumes to persistent storage, such as Amazon EBS or Amazon
S3.
2. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
3. In the navigation pane, choose Instances.
4. Select the instance, and choose Instance state, Terminate instance.
5. Choose Terminate when prompted for confirmation.
Old console
1. Before you terminate an instance, verify that you won't lose any data by checking that your
Amazon EBS volumes won't be deleted on termination and that you've copied any data that you
need from your instance store volumes to persistent storage, such as Amazon EBS or Amazon
S3.
2. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
3. In the navigation pane, choose Instances.
4. Select the instance, and choose Actions, Instance State, Terminate.
5. Choose Yes, Terminate when prompted for confirmation.
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
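With the AWS CLI, for example (the instance ID is a placeholder):

```shell
# Terminate the instance. This cannot be undone; data on instance store
# volumes, and on EBS volumes with DeleteOnTermination set to true, is lost.
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
```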
You can use AWS Fault Injection Simulator to test how your application responds when your instance is
terminated. For more information, see the AWS Fault Injection Simulator User Guide.
The DisableApiTermination attribute does not prevent you from terminating an instance by
initiating shutdown from the instance (using an operating system command for system shutdown) when
the InstanceInitiatedShutdownBehavior attribute is set. For more information, see Change the
instance initiated shutdown behavior (p. 650).
Limitations
You can't enable termination protection for Spot Instances—a Spot Instance is terminated when the
Spot price exceeds the amount you're willing to pay for Spot Instances. However, you can prepare
your application to handle Spot Instance interruptions. For more information, see Spot Instance
interruptions (p. 460).
The DisableApiTermination attribute does not prevent Amazon EC2 Auto Scaling from terminating
an instance. For instances in an Auto Scaling group, use the following Amazon EC2 Auto Scaling features
instead of Amazon EC2 termination protection:
• To prevent instances that are part of an Auto Scaling group from terminating on scale in, use instance
scale-in protection. For more information, see Using instance scale-in protection in the Amazon EC2
Auto Scaling User Guide.
• To prevent Amazon EC2 Auto Scaling from terminating unhealthy instances, suspend the
ReplaceUnhealthy process. For more information, see Suspending and Resuming Scaling Processes
in the Amazon EC2 Auto Scaling User Guide.
• To specify which instances Amazon EC2 Auto Scaling should terminate first, choose a termination
policy. For more information, see Customizing the Termination Policy in the Amazon EC2 Auto Scaling
User Guide.
1. Select the instance, and choose Actions, Instance Settings, Change Termination Protection.
2. Choose Yes, Enable.
1. Select the instance, and choose Actions, Instance Settings, Change Termination Protection.
2. Choose Yes, Disable.
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
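With the AWS CLI, for example, termination protection is toggled through the DisableApiTermination attribute; the instance ID is a placeholder:

```shell
# Enable termination protection.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --disable-api-termination

# Disable termination protection.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --no-disable-api-termination
```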
You can update the InstanceInitiatedShutdownBehavior attribute using the Amazon EC2 console
or the command line. The InstanceInitiatedShutdownBehavior attribute only applies when you
perform a shutdown from the operating system of the instance itself; it does not apply when you stop an
instance using the StopInstances API or the Amazon EC2 console.
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
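For example, with the AWS CLI; the instance ID is a placeholder, and the valid values are stop and terminate:

```shell
# Make an OS-initiated shutdown terminate the instance instead of stopping it.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --instance-initiated-shutdown-behavior '{"Value": "terminate"}'
```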
The default value for the DeleteOnTermination attribute differs depending on whether the volume is
the root volume of the instance or a non-root volume attached to the instance.
Root volume
By default, the DeleteOnTermination attribute for the root volume of an instance is set to true.
Therefore, the default is to delete the root volume of the instance when the instance terminates. The
DeleteOnTermination attribute can be set by the creator of an AMI as well as by the person who
launches an instance. When the attribute is changed by the creator of an AMI or by the person who
launches an instance, the new setting overrides the original AMI default setting. We recommend that
you verify the default setting for the DeleteOnTermination attribute after you launch an instance
with an AMI.
Non-root volume
By default, when you attach a non-root EBS volume to an instance (p. 1353), its
DeleteOnTermination attribute is set to false. Therefore, the default is to preserve these
volumes. After the instance terminates, you can take a snapshot of the preserved volume or attach
it to another instance. You must delete a volume to avoid incurring further charges. For more
information, see Delete an Amazon EBS volume (p. 1380).
To verify the value of the DeleteOnTermination attribute for an EBS volume that is in use, look at the
instance's block device mapping. For more information, see View the EBS volumes in an instance block
device mapping (p. 1654).
You can change the value of the DeleteOnTermination attribute for a volume when you launch the
instance or while the instance is running.
Examples
• Change the root volume to persist at launch using the console (p. 651)
• Change the root volume to persist at launch using the command line (p. 652)
• Change the root volume of a running instance to persist using the command line (p. 652)
To change the root volume of an instance to persist at launch using the console
In the new console experience, you can verify the setting by viewing details for the root device volume
on the instance's details pane. On the Storage tab, under Block devices, scroll right to view the Delete
on termination setting for the volume. By default, Delete on termination is Yes. If you change the
default behavior, Delete on termination is No.
In the old console experience, you can verify the setting by viewing details for the root device volume
on the instance's details pane. Next to Block devices, choose the entry for the root device volume. By
default, Delete on termination is True. If you change the default behavior, Delete on termination is
False.
Change the root volume to persist at launch using the command line
When you launch an EBS-backed instance, you can use one of the following commands to change the
root device volume to persist. For more information about these command line interfaces, see Access
Amazon EC2 (p. 3).
--block-device-mappings file://mapping.json
[
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": false,
"SnapshotId": "snap-1234567890abcdef0",
"VolumeType": "gp2"
}
}
]
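A full launch command using this mapping file might look like the following; the AMI ID, instance type, and key name are placeholders:

```shell
# Launch an instance whose root volume persists after termination,
# using the mapping.json file shown above.
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --key-name my-key-pair \
    --block-device-mappings file://mapping.json
```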
Change the root volume of a running instance to persist using the command line
You can use one of the following commands to change the root device volume of a running EBS-backed
instance to persist. For more information about these command line interfaces, see Access Amazon
EC2 (p. 3).
[
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": false
}
}
]
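For example, with the AWS CLI the mapping above can be applied to a running instance; the instance ID is a placeholder:

```shell
# Change DeleteOnTermination for the root volume of a running instance,
# using the mapping.json file shown above.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --block-device-mappings file://mapping.json
```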
If your instance is in the shutting-down state for longer than usual, it should be cleaned up
(terminated) by automated processes within the Amazon EC2 service. For more information, see Delayed
instance termination (p. 1700).
Recover your instance

A recovered instance is identical to the original instance, including the instance ID, private IP addresses,
Elastic IP addresses, and all instance metadata. If the impaired instance has a public IPv4 address, the
instance retains the public IPv4 address after recovery. If the impaired instance is in a placement group,
the recovered instance runs in the placement group.
When the StatusCheckFailed_System alarm is triggered, and the recover action is initiated, you
will be notified by the Amazon SNS topic that you selected when you created the alarm and associated
the recover action. During instance recovery, the instance is migrated during an instance reboot, and
any data that is in-memory is lost. When the process is complete, information is published to the SNS
topic you've configured for the alarm. Anyone who is subscribed to this SNS topic will receive an email
notification that includes the status of the recovery attempt and any further instructions. You will notice
an instance reboot on the recovered instance.
Topics
• Requirements (p. 653)
• Create an Amazon CloudWatch alarm to recover an instance (p. 653)
• Troubleshoot instance recovery failures (p. 654)
Requirements
The recover action is supported only on instances with the following characteristics:
• Uses one of the following instance types: A1, C3, C4, C5, C5a, C5n, C6g, C6gn, Inf1, C6i, M3, M4, M5,
M5a, M5n, M5zn, M6g, M6i, P3, R3, R4, R5, R5a, R5b, R5n, R6g, R6i, T2, T3, T3a, T4g, high memory
(virtualized only), X1, X1e
• Runs in a virtual private cloud (VPC)
• Uses default or dedicated instance tenancy
• Has only EBS volumes (do not configure instance store volumes)
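As a sketch, a recover alarm can be created from the AWS CLI; the Region, instance ID, and alarm name are placeholders:

```shell
# Recover the instance after two consecutive failed system status checks
# (two 60-second periods), using the built-in EC2 recover alarm action.
aws cloudwatch put-metric-alarm \
    --alarm-name recover-i-1234567890abcdef0 \
    --namespace AWS/EC2 \
    --metric-name StatusCheckFailed_System \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
    --statistic Maximum \
    --period 60 \
    --evaluation-periods 2 \
    --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:automate:us-east-1:ec2:recover
```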
The automatic recovery process attempts to recover your instance for up to three separate failures per
day. If the instance system status check failure persists, we recommend that you manually stop and start
the instance. For more information, see Stop and start your instance (p. 622).
Your instance may subsequently be retired if automatic recovery fails and a hardware degradation is
determined to be the root cause for the original system status check failure.
Configure instances

Contents
• Common configuration scenarios (p. 654)
• Manage software on your Amazon Linux instance (p. 655)
• Manage user accounts on your Amazon Linux instance (p. 660)
• Processor state control for your EC2 instance (p. 663)
• I/O scheduler (p. 669)
• Set the time for your Linux instance (p. 670)
• Optimize CPU options (p. 676)
• Change the hostname of your Amazon Linux instance (p. 699)
• Set up dynamic DNS on Your Amazon Linux instance (p. 702)
• Run commands on your Linux instance at launch (p. 704)
• Instance metadata and user data (p. 710)
Amazon Linux instances come pre-configured with an ec2-user account, but you may want to add
other user accounts that do not have super-user privileges. For more information on adding and
removing user accounts, see Manage user accounts on your Amazon Linux instance (p. 660).
The default time configuration for Amazon Linux instances uses Amazon Time Sync Service to set the
system time on an instance. The default time zone is UTC. For more information on setting the time zone
for an instance or using your own time server, see Set the time for your Linux instance (p. 670).
If you have your own network with a domain name registered to it, you can change the hostname
of an instance to identify itself as part of that domain. You can also change the system prompt to
show a more meaningful name without changing the hostname settings. For more information, see
Change the hostname of your Amazon Linux instance (p. 699). You can configure an instance to use
a dynamic DNS service provider. For more information, see Set up dynamic DNS on Your Amazon Linux
instance (p. 702).
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance
that can be used to perform common configuration tasks and even run scripts after the instance starts.
You can pass two types of user data to Amazon EC2: cloud-init directives and shell scripts. For more
information, see Run commands on your Linux instance at launch (p. 704).
Manage software on your Amazon Linux instance

Contents
• Update instance software on your Amazon Linux instance (p. 656)
• Add repositories on an Amazon Linux instance (p. 657)
• Find software packages on an Amazon Linux instance (p. 658)
• Install software packages on an Amazon Linux instance (p. 659)
• Prepare to compile software on an Amazon Linux instance (p. 660)
It is important to keep software up to date. Many packages in a Linux distribution are updated frequently
to fix bugs, add features, and protect against security exploits. For more information, see Update
instance software on your Amazon Linux instance (p. 656).
By default, Amazon Linux instances launch with the following repositories enabled:
While there are many packages available in these repositories that are updated by Amazon Web
Services, there might be a package that you want to install that is contained in another repository.
For more information, see Add repositories on an Amazon Linux instance (p. 657). For help finding
packages in enabled repositories, see Find software packages on an Amazon Linux instance (p. 658). For
information about installing software on an Amazon Linux instance, see Install software packages on an
Amazon Linux instance (p. 659).
Not all software is available in software packages stored in repositories; some software must be
compiled on an instance from its source code. For more information, see Prepare to compile software on
an Amazon Linux instance (p. 660).
Amazon Linux instances manage their software using the yum package manager. The yum package
manager can install, remove, and update software, as well as manage all of the dependencies for each
package. Debian-based Linux distributions, like Ubuntu, use the apt-get command and dpkg package
manager, so the yum examples in the following sections do not work for those distributions.
1. (Optional) Start a screen session in your shell window. Sometimes you might experience a network
interruption that can disconnect the SSH connection to your instance. If this happens during a
long software update, it can leave the instance in a recoverable, although confused state. A screen
session allows you to continue running the update even if your connection is interrupted, and you
can reconnect to the session later without problems.
b. If your session is disconnected, log back into your instance and list the available screens.
c. Reconnect to the screen using the screen -r command and the process ID from the previous
command.
d. When you are finished using screen, use the exit command to close the session.
2. Run the yum update command. Optionally, you can add the --security flag to apply only security
updates.
3. Review the packages listed, enter y, and press Enter to accept the updates. Updating all of the
packages on a system can take several minutes. The yum output shows the status of the update
while it is running.
4. (Optional) Reboot your instance to ensure that you are using the latest packages and libraries from
your update; kernel updates are not loaded until a reboot occurs. Updates to any glibc libraries
should also be followed by a reboot. For updates to packages that control services, it might be
sufficient to restart the services to pick up the updates, but a system reboot ensures that all previous
package and library updates are complete.
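The procedure above can be sketched as follows on Amazon Linux:

```shell
# (Optional) Run the update inside a screen session so a dropped SSH
# connection does not interrupt it.
screen

# Apply all updates, or only security updates with the --security flag.
sudo yum update
# sudo yum update --security

# (Optional) Reboot to load kernel and glibc updates.
sudo reboot
```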
Use this procedure to update a single package (and its dependencies) and not the entire system.
1. Run the yum update command with the name of the package to update.
2. Review the package information listed, enter y, and press Enter to accept the update or updates.
Sometimes there will be more than one package listed if there are package dependencies that must
be resolved. The yum output shows the status of the update while it is running.
3. (Optional) Reboot your instance to ensure that you are using the latest packages and libraries from
your update; kernel updates are not loaded until a reboot occurs. Updates to any glibc libraries
should also be followed by a reboot. For updates to packages that control services, it might be
sufficient to restart the services to pick up the updates, but a system reboot ensures that all previous
package and library updates are complete.
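For example, to update a single package; openssl is used here purely as an illustration:

```shell
# Update only the named package and any dependencies it requires.
sudo yum update openssl
```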
While there are many packages available in these repositories that are updated by Amazon Web Services,
there might be a package that you want to install that is contained in another repository.
Important
This information applies to Amazon Linux. For information about other distributions, see their
specific documentation.
To install a package from a different repository with yum, you need to add the repository information
to the /etc/yum.conf file or to its own repository.repo file in the /etc/yum.repos.d directory.
You can do this manually, but most yum repositories provide their own repository.repo file at their
repository URL.
The resulting output lists the installed repositories and reports the status of each. Enabled
repositories display the number of packages they contain.
1. Find the location of the .repo file. This will vary depending on the repository you are adding. In this
example, the .repo file is at https://www.example.com/repository.repo.
2. Add the repository with the yum-config-manager command.
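The step above can be sketched as follows, using the example URL from step 1:

```shell
# Add the repository definition file to /etc/yum.repos.d.
sudo yum-config-manager --add-repo https://www.example.com/repository.repo
```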
After you install a repository, you must enable it as described in the next procedure.
• Use the yum-config-manager command with the --enable repository flag. The following
command enables the Extra Packages for Enterprise Linux (EPEL) repository from the Fedora project.
By default, this repository is present in /etc/yum.repos.d on Amazon Linux AMI instances, but it
is not enabled.
Note
To enable the EPEL repository on Amazon Linux 2, use the following command:
For information on enabling the EPEL repository on other distributions, such as Red Hat and
CentOS, see the EPEL documentation at https://fedoraproject.org/wiki/EPEL.
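The two variants referenced above look like this:

```shell
# Amazon Linux AMI: enable the EPEL repository definition that is
# already present (but disabled) in /etc/yum.repos.d.
sudo yum-config-manager --enable epel

# Amazon Linux 2: install and enable EPEL through amazon-linux-extras.
sudo amazon-linux-extras install epel
```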
Multiple word search queries in quotation marks only return results that match the exact query. If you
don't see the expected package, simplify your search to one keyword and then scan the results. You can
also try keyword synonyms to broaden your search.
For more information about packages for Amazon Linux 2 and Amazon Linux, see the following:
Use the yum install package command, replacing package with the name of the software to install. For
example, to install the links text-based web browser, enter the following command.
You can also use yum install to install RPM package files that you have downloaded from the internet.
To do this, append the path name of an RPM file to the installation command instead of a repository
package name.
To view a list of installed packages on your instance, use the following command.
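For example (the RPM file name is a placeholder):

```shell
# Install the links text-based web browser from the enabled repositories.
sudo yum install links

# Install a downloaded RPM package file directly.
sudo yum install my-package.rpm

# List the packages installed on the instance.
sudo yum list installed
```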
Because software compilation is not a task that every Amazon EC2 instance requires, these tools are not
installed by default, but they are available in a package group called "Development Tools" that is easily
added to an instance with the yum groupinstall command.
Software source code packages are often available for download (from websites such as https://github.com/ and http://sourceforge.net/) as a compressed archive file, called a tarball. These tarballs will usually have the .tar.gz file extension. You can decompress these archives with the tar command.
After you have decompressed and unarchived the source code package, you should look for a README or
INSTALL file in the source code directory that can provide you with further instructions for compiling
and installing the source code.
• Run the yumdownloader --source package command to download the source code for package.
For example, to download the source code for the htop package, enter the following command.
The location of the source RPM is in the directory from which you ran the command.
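For example (yumdownloader is provided by the yum-utils package):

```shell
# Download the source RPM for htop into the current directory.
yumdownloader --source htop
```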
Manage user accounts on your Amazon Linux instance
• For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
• For a CentOS AMI, the user name is centos or ec2-user.
• For a Debian AMI, the user name is admin.
• For a Fedora AMI, the user name is fedora or ec2-user.
• For a RHEL AMI, the user name is ec2-user or root.
• For a SUSE AMI, the user name is ec2-user or root.
• For an Ubuntu AMI, the user name is ubuntu.
• For an Oracle AMI, the user name is ec2-user.
• For a Bitnami AMI, the user name is bitnami.
• Otherwise, check with the AMI provider.
Note
Linux system users should not be confused with AWS Identity and Access Management (IAM)
users. For more information, see IAM users in the IAM User Guide.
Contents
• Considerations (p. 661)
• Create a user account (p. 661)
• Remove a user account (p. 663)
Considerations
Using the default user account is adequate for many applications. However, you may choose to add
user accounts so that individuals can have their own files and workspaces. Furthermore, creating user
accounts for new users is much more secure than granting multiple (possibly inexperienced) users access
to the default user account, because the default user account can cause a lot of damage to a system
when used improperly. For more information, see Tips for Securing Your EC2 Instance.
To give users SSH access to your EC2 instance through a Linux system user account, you must share the SSH key with the user. Alternatively, you can use EC2 Instance Connect to provide access to users without
the need to share and manage SSH keys. For more information, see Connect to your Linux instance using
EC2 Instance Connect (p. 602).
1. Create a new key pair (p. 1289). You must provide the .pem file to the user for whom you are
creating the user account. They must use this file to connect to the instance.
2. Retrieve the public key from the key pair that you created in the previous step.
$ ssh-keygen -y -f /path_to_key_pair/key-pair-name.pem
The command returns the public key, as shown in the following example.
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6Vhz2ItxCih
+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/
d6RJhJOI0iBXrlsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/
cQk+0FzZqaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi
+z7wB3RbBQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
• Ubuntu
Include the --disabled-password parameter to create the user account without a password.
5. Switch to the new account so that the directory and file that you create will have the proper
ownership.
The prompt changes from ec2-user to newuser to indicate that you have switched the shell
session to the new account.
6. Add the SSH public key to the user account. First create a directory in the user's home directory
for the SSH key file, then create the key file, and finally paste the public key into the key file, as
described in the following sub-steps.
a. Create a .ssh directory in the newuser home directory and change its file permissions to 700
(only the owner can read, write, or open the directory).
Important
Without these exact file permissions, the user will not be able to log in.
b. Create a file named authorized_keys in the .ssh directory and change its file permissions to
600 (only the owner can read or write to the file).
Important
Without these exact file permissions, the user will not be able to log in.
c. Open the authorized_keys file using your favorite text editor (such as vim or nano).
Paste the public key that you retrieved in Step 2 into the file and save the changes.
Important
Ensure that you paste the public key in one continuous line. The public key must not be
split over multiple lines.
The user should now be able to log into the newuser account on your instance, using the
private key that corresponds to the public key that you added to the authorized_keys file.
For more information about the different methods of connecting to a Linux instance, see
Connect to your Linux instance (p. 596).
Use the userdel command to remove the user account from the system. When you specify the -r
parameter, the user's home directory and mail spool are deleted. To keep the user's home directory and
mail spool, omit the -r parameter.
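End to end, the user-management steps above can be sketched as follows; newuser is a placeholder account name:

```shell
# Create the account (on Ubuntu, add the --disabled-password parameter).
sudo adduser newuser

# Switch to the new account so the files get the proper ownership.
sudo su - newuser
mkdir .ssh && chmod 700 .ssh
touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys
# Paste the public key into .ssh/authorized_keys (one continuous line),
# then return to your own account.
exit

# Later, to remove the account along with its home directory and mail spool:
sudo userdel -r newuser
```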
P-states control the desired performance (in CPU frequency) from a core. P-states are numbered starting
from P0 (the highest performance setting where the core is allowed to use Intel Turbo Boost Technology
to increase frequency if possible), and they go from P1 (the P-state that requests the maximum baseline
frequency) to P15 (the lowest possible frequency).
The following instance types provide the ability for an operating system to control processor C-states
and P-states:
The following instance types provide the ability for an operating system to control processor C-states:
AWS Graviton processors have built-in power saving modes and operate at a fixed frequency. Therefore,
they do not provide the ability for the operating system to control C-states and P-states.
You might want to change the C-state or P-state settings to increase processor performance consistency,
reduce latency, or tune your instance for a specific workload. The default C-state and P-state settings
provide maximum performance, which is optimal for most workloads. However, if your application would
benefit from reduced latency at the cost of higher single- or dual-core frequencies, or from consistent
performance at lower frequencies as opposed to bursty Turbo Boost frequencies, consider experimenting
with the C-state or P-state settings that are available to these instances.
The following sections describe the different processor state configurations and how to monitor the
effects of your configuration. These procedures were written for, and apply to, Amazon Linux; however,
they may also work for other Linux distributions with a Linux kernel version of 3.9 or newer. For more
information about other Linux distributions and processor state control, see your system-specific
documentation.
Note
The examples on this page use the following:
• The turbostat utility to display processor frequency and C-state information. The turbostat
utility is available on Amazon Linux by default.
• The stress command to simulate a workload. To install stress, first enable the EPEL repository
by running sudo amazon-linux-extras install epel, and then run sudo yum install -y stress.
If the output does not display the C-state information, include the --debug option in the
command (sudo turbostat --debug stress <options>).
Contents
• Highest performance with maximum Turbo Boost frequency (p. 664)
• High performance and low latency by limiting deeper C-states (p. 665)
• Baseline performance with the lowest variability (p. 667)
Highest performance with maximum Turbo Boost frequency
The following example shows a c4.8xlarge instance with two cores actively performing work reaching
their maximum processor Turbo Boost frequency.
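A hedged way to reproduce this kind of measurement, using the turbostat and stress tools from the Note above (the worker count and duration are illustrative):

```shell
# Run two CPU-bound workers for 60 seconds and record per-CPU
# frequency and C-state residency while they run.
sudo turbostat stress -c 2 -t 60
```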
In this example, vCPUs 21 and 28 are running at their maximum Turbo Boost frequency because
the other cores have entered the C6 sleep state to save power and provide both power and thermal
headroom for the working cores. vCPUs 3 and 10 (each sharing a processor core with vCPUs 21 and 28)
are in the C1 state, waiting for instruction.
In the following example, all 18 cores are actively performing work, so there is no headroom for
maximum Turbo Boost, but they are all running at the "all core Turbo Boost" speed of 3.2 GHz.
High performance and low latency by limiting deeper C-states
A common scenario for disabling deeper sleep states is a Redis database application, which stores the
database in system memory for the fastest possible query response time.
2. Edit the kernel line of the first entry and add the intel_idle.max_cstate=1 option to set C1 as
the deepest C-state for idle cores.
# created by imagebuilder
default=0
timeout=1
hiddenmenu
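For context, the surrounding file is the legacy GRUB configuration (/boot/grub/grub.conf on the Amazon Linux AMI). A sketch of the edited entry, with an illustrative kernel path and the C-state limit appended to the kernel line:

```
# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux
        root (hd0,0)
        kernel /boot/vmlinuz ro console=ttyS0 intel_idle.max_cstate=1
        initrd /boot/initramfs.img
```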
The following example shows a c4.8xlarge instance with two cores actively performing work at the "all
core Turbo Boost" core frequency.
...
1 1 10 0.02 1.97 2.90 0 99.98 0.00 0.00 0.00
1 1 28 99.67 3.20 2.90 0 0.33
1 2 11 0.04 2.63 2.90 0 99.96 0.00 0.00 0.00
1 2 29 0.02 2.11 2.90 0 99.98
...
In this example, the cores for vCPUs 21 and 28 are running at 3.2 GHz, and the other cores are in the C1
C-state, awaiting instruction. Although the working cores are not reaching their maximum Turbo Boost
frequency, the inactive cores will be much faster to respond to new requests than they would be in the
deeper C6 C-state.
Baseline performance with the lowest variability
Intel Advanced Vector Extensions (AVX or AVX2) workloads can perform well at lower frequencies, and
AVX instructions can use more power. Running the processor at a lower frequency, by disabling Turbo
Boost, can reduce the amount of power used and keep the speed more consistent. For more information
about optimizing your instance configuration and workload for AVX, see http://www.intel.com/
content/dam/www/public/us/en/documents/white-papers/performance-xeon-e5-v3-advanced-vector-
extensions-paper.pdf.
This section describes how to limit deeper sleep states and disable Turbo Boost (by requesting the P1 P-
state) to provide low latency and the lowest processor speed variability for these types of workloads.
To limit deeper sleep states and disable Turbo Boost on Amazon Linux 2
6. When you need the low processor speed variability that the P1 P-state provides, run the following
command to disable Turbo Boost.
7. When your workload is finished, you can re-enable Turbo Boost with the following command.
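A sketch of the disable and re-enable commands in steps 6 and 7, assuming the intel_pstate driver is active (the sysfs path is Intel-specific):

```shell
# Step 6: request the P1 P-state by turning Turbo Boost off
sudo sh -c "echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo"

# Step 7: re-enable Turbo Boost when the workload is finished
sudo sh -c "echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo"
```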
To limit deeper sleep states and disable Turbo Boost on Amazon Linux AMI
2. Edit the kernel line of the first entry and add the intel_idle.max_cstate=1 option to set C1 as
the deepest C-state for idle cores.
# created by imagebuilder
default=0
timeout=1
hiddenmenu
5. When you need the low processor speed variability that the P1 P-state provides, run the following
command to disable Turbo Boost.
6. When your workload is finished, you can re-enable Turbo Boost with the following command.
The following example shows a c4.8xlarge instance with two vCPUs actively performing work at the
baseline core frequency, with no Turbo Boost.
The cores for vCPUs 21 and 28 are actively performing work at the baseline processor speed of 2.9
GHz, and all inactive cores are also running at the baseline speed in the C1 C-state, ready to accept
instructions.
I/O scheduler
The I/O scheduler is a part of the Linux operating system that sorts and merges I/O requests and
determines the order in which they are processed.
I/O schedulers are particularly beneficial for devices such as magnetic hard drives, where seek time can
be expensive and where it is optimal to merge co-located requests. I/O schedulers have less of an effect
with solid state devices and virtualized environments. This is because for solid state devices, sequential
and random access don't differ, and for virtualized environments, the host provides its own layer of
scheduling.
This topic discusses the Amazon Linux I/O scheduler. For more information about the I/O scheduler used
by other Linux distributions, refer to their respective documentation.
Topics
• Supported schedulers (p. 669)
• Default scheduler (p. 669)
• Change the scheduler (p. 670)
Supported schedulers
Amazon Linux supports the following I/O schedulers:
• deadline — The Deadline I/O scheduler sorts I/O requests and handles them in the most efficient
order. It guarantees a start time for each I/O request. It also gives I/O requests that have been pending
for too long a higher priority.
• cfq — The Completely Fair Queueing (CFQ) I/O scheduler attempts to fairly allocate I/O resources
between processes. It sorts and inserts I/O requests into per-process queues.
• noop — The No Operation (noop) I/O scheduler inserts all I/O requests into a FIFO queue and then
merges them into a single request. This scheduler does not do any request sorting.
Default scheduler
No Operation (noop) is the default I/O scheduler for Amazon Linux. This scheduler is used for the
following reasons:
• Many instance types use virtualized devices where the underlying host performs scheduling for the
instance.
• Solid state devices are used in many instance types where the benefits of an I/O scheduler have less
effect.
• It is the least invasive I/O scheduler, and it can be customized if needed.
Change the scheduler
You can view the I/O scheduler for a device using the following command, which uses nvme0n1 as an
example.
$ cat /sys/block/nvme0n1/queue/scheduler
To set the I/O scheduler for the device, use the following command.
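As a sketch, selecting the deadline scheduler for nvme0n1 might look like the following (a sysfs write; the change applies until the next reboot):

```shell
# Write the desired scheduler name into the device's sysfs queue entry
sudo sh -c 'echo deadline > /sys/block/nvme0n1/queue/scheduler'

# The active scheduler is shown in brackets
cat /sys/block/nvme0n1/queue/scheduler
```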
Set the time
Amazon provides the Amazon Time Sync Service, which is accessible from all EC2 instances, and is also
used by other AWS services. This service uses a fleet of satellite-connected and atomic reference clocks
in each AWS Region to deliver accurate current time readings of the Coordinated Universal Time (UTC)
global standard through Network Time Protocol (NTP). The Amazon Time Sync Service automatically
smooths any leap seconds that are added to UTC.
The Amazon Time Sync Service is available through NTP at the 169.254.169.123 IPv4 address or the
fd00:ec2::123 IPv6 address for any instance running in a VPC. The IPv6 address is only accessible
on instances built on the Nitro System (p. 232). Your instance does not require access to the internet,
and you do not have to configure your security group rules or your network ACL rules to allow access.
The latest versions of Amazon Linux 2 and Amazon Linux AMIs synchronize with the Amazon Time Sync
Service by default.
Use the following procedures to configure the Amazon Time Sync Service on your instance using the
chrony client. Alternatively, you can use external NTP sources. For more information about NTP and
public time sources, see http://www.ntp.org/. An instance needs access to the internet for the external
NTP time sources to work.
For Windows instances, see Set the time for a Windows instance.
Topics
• Configure the time for EC2 instances with IPv4 addresses (p. 670)
• Configure the time for EC2 instances with IPv6 addresses (p. 674)
• Change the time zone on Amazon Linux (p. 674)
• Compare timestamps (p. 676)
Configure the time for EC2 instances with IPv4 addresses
Topics
• Configure the Amazon Time Sync Service on Amazon Linux AMI (p. 671)
• Configure the Amazon Time Sync Service on Ubuntu (p. 672)
• Configure the Amazon Time Sync Service on SUSE Linux (p. 674)
Configure the Amazon Time Sync Service on Amazon Linux AMI
With the Amazon Linux AMI, you must edit the chrony configuration file to add a server entry for the
Amazon Time Sync Service.
3. Open the /etc/chrony.conf file using a text editor (such as vim or nano). Verify that the file
includes the following line:
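The entry to look for is the Amazon Time Sync Service server line, which typically looks like this:

```
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4
```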
If the line is present, then the Amazon Time Sync Service is already configured and you can go to the
next step. If not, add the line after any other server or pool statements that are already present in
the file, and save your changes.
4. Restart the chrony daemon (chronyd).
Starting chronyd: [ OK ]
Note
On RHEL and CentOS (up to version 6), the service name is chrony instead of chronyd.
5. Use the chkconfig command to configure chronyd to start at each system boot.
6. Verify that chrony is using the 169.254.169.123 IP address to synchronize the time.
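Taken together, steps 4 through 6 can be sketched as follows (service and command names as used in this section):

```shell
sudo service chronyd restart
sudo chkconfig chronyd on
chronyc sources -v
```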
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 169.254.169.123               3   6    17    43    -30us[ -226us] +/-  287us
^- ec2-12-34-231-12.eu-west>     2   6    17    43   -388us[ -388us] +/-   11ms
^- tshirt.heanet.ie              1   6    17    44   +178us[  +25us] +/- 1959us
^? tbag.heanet.ie                0   6     0     -     +0ns[   +0ns] +/-    0ns
^? bray.walcz.net                0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 2a05:d018:c43:e312:ce77:>     0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 2a05:d018:dab:2701:b70:b>     0   6     0     -     +0ns[   +0ns] +/-    0ns
Configure the Amazon Time Sync Service on Ubuntu
1. Connect to your instance and use apt to install the chrony package.
Note
If necessary, update your instance first by running sudo apt update.
2. Open the /etc/chrony/chrony.conf file using a text editor (such as vim or nano). Add the
following line before any other server or pool statements that are already present in the file, and
save your changes:
4. Verify that chrony is using the 169.254.169.123 IP address to synchronize the time.
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 169.254.169.123               3   6    17    12    +15us[  +57us] +/-  320us
^- tbag.heanet.ie                1   6    17    13  -3488us[-3446us] +/- 1779us
^- ec2-12-34-231-12.eu-west-     2   6    17    13   +893us[ +935us] +/- 7710us
^? 2a05:d018:c43:e312:ce77:6     0   6     0   10y     +0ns[   +0ns] +/-    0ns
^? 2a05:d018:d34:9000:d8c6:5     0   6     0   10y     +0ns[   +0ns] +/-    0ns
^? tshirt.heanet.ie              0   6     0   10y     +0ns[   +0ns] +/-    0ns
^? bray.walcz.net                0   6     0   10y     +0ns[   +0ns] +/-    0ns
Configure the Amazon Time Sync Service on SUSE Linux
Open the /etc/chrony.conf file using a text editor (such as vim or nano). Verify that the file contains
the following line:
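The line to verify is the Amazon Time Sync Service server entry, which typically looks like this:

```
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4
```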
If this line is not present, add it. Comment out any other server or pool lines. Open YaST and enable the
chrony service.
Configure the time for EC2 instances with IPv6 addresses
Depending on the Linux distribution you are using, when you reach the step to edit the chrony.conf file,
you'll be using the IPv6 endpoint of the Amazon Time Sync Service (fd00:ec2::123) rather than the
IPv4 endpoint (169.254.169.123):
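For example, the chrony.conf entry would use the IPv6 endpoint, with the same options as the IPv4 line and only the address swapped:

```
server fd00:ec2::123 prefer iburst minpoll 4 maxpoll 4
```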
Save the file and verify that chrony is using the fd00:ec2::123 IPv6 address to synchronize time:
In the output, if you see the fd00:ec2::123 IPv6 address, the configuration is complete.
Change the time zone on Amazon Linux
1. Identify the time zone to use on the instance. The /usr/share/zoneinfo directory contains a
hierarchy of time zone data files. Browse the directory structure at that location to find a file for
your time zone.
Some of the entries at this location are directories (such as America), and these directories contain
time zone files for specific cities. Find your city (or a city in your time zone) to use for the instance.
2. Update the /etc/sysconfig/clock file with the new time zone. In this example, we use the time
zone data file for Los Angeles, /usr/share/zoneinfo/America/Los_Angeles.
a. Open the /etc/sysconfig/clock file with your favorite text editor (such as vim or nano).
You need to use sudo with your editor command because /etc/sysconfig/clock is owned
by root.
b. Locate the ZONE entry, and change it to the time zone file (omitting the /usr/share/
zoneinfo section of the path). For example, to change to the Los Angeles time zone, change
the ZONE entry to the following:
ZONE="America/Los_Angeles"
Note
Do not change the UTC=true entry to another value. This entry is for the hardware
clock, and does not need to be adjusted when you're setting a different time zone on
your instance.
c. Save the file and exit the text editor.
3. Create a symbolic link between /etc/localtime and the time zone file so that the instance finds
the time zone file when it references local time information.
4. Reboot the system to pick up the new time zone information in all services and applications.
5. (Optional) Confirm that the current time zone is updated to the new time zone by using the date
command. The current time zone appears in the output. In the following example, the current time
zone is PDT, which refers to the Los Angeles time zone.
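Steps 3 through 5 can be sketched as follows, using the Los Angeles example from step 2:

```shell
# Step 3: point /etc/localtime at the time zone file
sudo ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime

# Step 4: reboot so all services and applications pick up the new time zone
sudo reboot

# Step 5 (after reboot): confirm the time zone in the date output
date
```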
Compare timestamps
If you're using the Amazon Time Sync Service, you can compare the timestamps on your Amazon EC2
instances with ClockBound to determine the true time of an event. ClockBound measures the clock
accuracy of your EC2 instance, and allows you to check if a given timestamp is in the past or future
with respect to your instance's current clock. This information is valuable for determining the order and
consistency of events and transactions across EC2 instances, independent of each instance's geographic
location.
ClockBound is an open source daemon and library. To learn more about ClockBound, including
installation instructions, see ClockBound on GitHub.
Optimize CPU options
In most cases, there is an Amazon EC2 instance type that has a combination of memory and number
of vCPUs to suit your workloads. However, you can specify the following CPU options to optimize your
instance for specific workloads or business needs:
• Number of CPU cores: You can customize the number of CPU cores for the instance. You might do
this to potentially optimize the licensing costs of your software with an instance that has sufficient
amounts of RAM for memory-intensive workloads but fewer CPU cores.
• Threads per core: You can disable multithreading by specifying a single thread per CPU core. You
might do this for certain workloads, such as high performance computing (HPC) workloads.
You can specify these CPU options during instance launch. There is no additional or reduced charge
for specifying CPU options. You're charged the same as instances that are launched with default CPU
options.
Contents
• Rules for specifying CPU options (p. 676)
• CPU cores and threads per CPU core per instance type (p. 677)
• Specify CPU options for your instance (p. 697)
• View the CPU options for your instance (p. 698)
Rules for specifying CPU options
• CPU options can only be specified during instance launch and cannot be modified after launch.
• When you launch an instance, you must specify both the number of CPU cores and threads per core in
the request. For example requests, see Specify CPU options for your instance (p. 697).
• The number of vCPUs for the instance is the number of CPU cores multiplied by the threads per core.
To specify a custom number of vCPUs, you must specify a valid number of CPU cores and threads per
core for the instance type. You cannot exceed the default number of vCPUs for the instance. For more
information, see CPU cores and threads per CPU core per instance type (p. 677).
• To disable multithreading, specify one thread per core.
• When you change the instance type (p. 367) of an existing instance, the CPU options automatically
change to the default CPU options for the new instance type.
• The specified CPU options persist after you stop, start, or reboot an instance.
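The rules above can be illustrated with a hedged AWS CLI sketch that launches an r5.4xlarge with multithreading disabled (8 cores x 1 thread = 8 vCPUs instead of the default 16; the AMI ID and key name are placeholders):

```shell
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type r5.4xlarge \
    --cpu-options "CoreCount=8,ThreadsPerCore=1" \
    --key-name MyKeyPair
```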
CPU cores and threads per CPU core per instance type
The following tables list the instance types that support specifying CPU options.
Contents
• Accelerated computing instances (p. 677)
• Compute optimized instances (p. 679)
• General purpose instances (p. 683)
• Memory optimized instances (p. 689)
• Storage optimized instances (p. 695)
Accelerated computing instances
Instance type | Default vCPUs | Default CPU cores | Default threads per core | Valid CPU cores | Valid threads per core
f1.2xlarge 8 4 2 1 to 4 1, 2
f1.4xlarge 16 8 2 1 to 8 1, 2
f1.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
g3.4xlarge 16 8 2 1 to 8 1, 2
g3.8xlarge 32 16 2 1 to 16 1, 2
g3.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
g3s.xlarge 4 2 2 1, 2 1, 2
g4ad.xlarge 4 2 2 2 1, 2
g4ad.2xlarge 8 4 2 2, 4 1, 2
g4ad.4xlarge 16 8 2 2, 4, 8 1, 2
g4ad.8xlarge 32 16 2 2, 4, 8, 16 1, 2
g4ad.16xlarge 64 32 2 2, 4, 8, 16, 32 1, 2
g4dn.xlarge 4 2 2 1, 2 1, 2
g4dn.2xlarge 8 4 2 1 to 4 1, 2
g4dn.4xlarge 16 8 2 1 to 8 1, 2
g4dn.8xlarge 32 16 2 1 to 16 1, 2
g4dn.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
g5g.xlarge 4 4 1 1 to 4 1
g5g.2xlarge 8 8 1 1 to 8 1
g5g.4xlarge 16 16 1 1 to 16 1
g5g.8xlarge 32 32 1 1 to 32 1
g5g.16xlarge 64 64 1 1 to 64 1
inf1.xlarge 4 2 2 2 1, 2
inf1.2xlarge 8 4 2 2, 4 1, 2
inf1.6xlarge 24 12 2 2, 4, 6, 8, 10, 1, 2
12
inf1.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48
p2.xlarge 4 2 2 1, 2 1, 2
p2.8xlarge 32 16 2 1 to 16 1, 2
p2.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
p3.2xlarge 8 4 2 1 to 4 1, 2
p3.8xlarge 32 16 2 1 to 16 1, 2
p3.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
p4d.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48
vt1.3xlarge 12 6 2 6 1, 2
vt1.6xlarge 24 12 2 6, 12 1, 2
vt1.24xlarge 96 48 2 6, 12, 24 1, 2
Compute optimized instances
Instance type | Default vCPUs | Default CPU cores | Default threads per core | Valid CPU cores | Valid threads per core
c4.large 2 1 2 1 1, 2
c4.xlarge 4 2 2 1, 2 1, 2
c4.2xlarge 8 4 2 1 to 4 1, 2
c4.4xlarge 16 8 2 1 to 8 1, 2
c4.8xlarge 36 18 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18
c5.large 2 1 2 1 1, 2
c5.xlarge 4 2 2 2 1, 2
c5.2xlarge 8 4 2 2, 4 1, 2
c5.4xlarge 16 8 2 2, 4, 6, 8 1, 2
c5a.large 2 1 2 1 1, 2
c5a.xlarge 4 2 2 1, 2 1, 2
c5a.2xlarge 8 4 2 1 to 4 1, 2
c5a.4xlarge 16 8 2 1, 2, 3, 4, 8 1, 2
c5a.8xlarge 32 16 2 1, 2, 3, 4, 8, 12, 1, 2
16
c5a.12xlarge 48 24 2 1, 2, 3, 4, 8, 12, 1, 2
16, 20, 24
c5a.16xlarge 64 32 2 1, 2, 3, 4, 8, 12, 1, 2
16, 20, 24, 28,
32
c5a.24xlarge 96 48 2 1, 2, 3, 4, 8, 12, 1, 2
16, 20, 24, 28,
32, 36, 40, 44,
48
c5ad.large 2 1 2 1 1, 2
c5ad.xlarge 4 2 2 1, 2 1, 2
c5ad.2xlarge 8 4 2 1 to 4 1, 2
c5ad.4xlarge 16 8 2 1, 2, 3, 4, 8 1, 2
c5ad.8xlarge 32 16 2 1, 2, 3, 4, 8, 12, 1, 2
16
c5ad.12xlarge 48 24 2 1, 2, 3, 4, 8, 12, 1, 2
16, 20, 24
c5ad.16xlarge 64 32 2 1, 2, 3, 4, 8, 12, 1, 2
16, 20, 24, 28,
32
c5ad.24xlarge 96 48 2 1, 2, 3, 4, 8, 12, 1, 2
16, 20, 24, 28,
32, 36, 40, 44,
48
c5d.large 2 1 2 1 1, 2
c5d.xlarge 4 2 2 2 1, 2
c5d.2xlarge 8 4 2 2, 4 1, 2
c5d.4xlarge 16 8 2 2, 4, 6, 8 1, 2
c5d.9xlarge 36 18 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18
c5n.large 2 1 2 1 1, 2
c5n.xlarge 4 2 2 2 1, 2
c5n.2xlarge 8 4 2 2, 4 1, 2
c5n.4xlarge 16 8 2 2, 4, 6, 8 1, 2
c5n.9xlarge 36 18 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18
c6g.medium 1 1 1 1 1
c6g.large 2 2 1 1, 2 1
c6g.xlarge 4 4 1 1 to 4 1
c6g.2xlarge 8 8 1 1 to 8 1
c6g.4xlarge 16 16 1 1 to 16 1
c6g.8xlarge 32 32 1 1 to 32 1
c6g.12xlarge 48 48 1 1 to 48 1
c6g.16xlarge 64 64 1 1 to 64 1
c6gd.medium 1 1 1 1 1
c6gd.large 2 2 1 1, 2 1
c6gd.xlarge 4 4 1 1 to 4 1
c6gd.2xlarge 8 8 1 1 to 8 1
c6gd.4xlarge 16 16 1 1 to 16 1
c6gd.8xlarge 32 32 1 1 to 32 1
c6gd.12xlarge 48 48 1 1 to 48 1
c6gd.16xlarge 64 64 1 1 to 64 1
c6gn.medium 1 1 1 1 1
c6gn.large 2 2 1 1, 2 1
c6gn.xlarge 4 4 1 1 to 4 1
c6gn.2xlarge 8 8 1 1 to 8 1
c6gn.4xlarge 16 16 1 1 to 16 1
c6gn.8xlarge 32 32 1 1 to 32 1
c6gn.12xlarge 48 48 1 1 to 48 1
c6gn.16xlarge 64 64 1 1 to 64 1
c6i.large 2 1 2 1 1, 2
c6i.xlarge 4 2 2 1, 2 1, 2
c6i.2xlarge 8 4 2 2, 4 1, 2
c6i.4xlarge 16 8 2 2, 4, 6, 8 1, 2
c6i.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
c6i.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
c6i.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
c6i.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48
General purpose instances
Instance type | Default vCPUs | Default CPU cores | Default threads per core | Valid CPU cores | Valid threads per core
m4.large 2 1 2 1 1, 2
m4.xlarge 4 2 2 1, 2 1, 2
m4.2xlarge 8 4 2 1 to 4 1, 2
m4.4xlarge 16 8 2 1 to 8 1, 2
m4.10xlarge 40 20 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20
m4.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
m5.large 2 1 2 1 1, 2
m5.xlarge 4 2 2 2 1, 2
m5.2xlarge 8 4 2 2, 4 1, 2
m5.4xlarge 16 8 2 2, 4, 6, 8 1, 2
m5.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
m5.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
m5a.large 2 1 2 1 1, 2
m5a.xlarge 4 2 2 2 1, 2
m5a.2xlarge 8 4 2 2, 4 1, 2
m5a.4xlarge 16 8 2 2, 4, 6, 8 1, 2
m5ad.large 2 1 2 1 1, 2
m5ad.xlarge 4 2 2 2 1, 2
m5ad.2xlarge 8 4 2 2, 4 1, 2
m5ad.4xlarge 16 8 2 2, 4, 6, 8 1, 2
m5ad.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
m5d.large 2 1 2 1 1, 2
m5d.xlarge 4 2 2 2 1, 2
m5d.2xlarge 8 4 2 2, 4 1, 2
m5d.4xlarge 16 8 2 2, 4, 6, 8 1, 2
m5d.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
m5d.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
m5dn.large 2 1 2 1 1, 2
m5dn.xlarge 4 2 2 2 1, 2
m5dn.2xlarge 8 4 2 2, 4 1, 2
m5dn.4xlarge 16 8 2 2, 4, 6, 8 1, 2
m5dn.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
m5dn.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
m5n.large 2 1 2 1 1, 2
m5n.xlarge 4 2 2 2 1, 2
m5n.2xlarge 8 4 2 2, 4 1, 2
m5n.4xlarge 16 8 2 2, 4, 6, 8 1, 2
m5n.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
m5n.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
m5zn.large 2 1 2 1 1, 2
m5zn.xlarge 4 2 2 1, 2 1, 2
m5zn.2xlarge 8 4 2 2, 4 1, 2
m5zn.3xlarge 12 6 2 2, 4, 6 1, 2
m5zn.6xlarge 24 12 2 2, 4, 6, 8, 10, 1, 2
12
m5zn.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
m6a.large 2 1 2 1 1, 2
m6a.xlarge 4 2 2 1, 2 1, 2
m6a.2xlarge 8 4 2 1 to 4 1, 2
m6a.4xlarge 16 8 2 1 to 8 1, 2
m6a.8xlarge 32 16 2 1, 2, 3, 4, 5, 6, 1, 2
7, 8, 16
m6a.12xlarge 48 24 2 1, 2, 3, 4, 5, 6, 1, 2
7, 8, 16, 24
m6a.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 32
m6a.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 32,
48
m6g.medium 1 1 1 1 1
m6g.large 2 2 1 1, 2 1
m6g.xlarge 4 4 1 1 to 4 1
m6g.2xlarge 8 8 1 1 to 8 1
m6g.4xlarge 16 16 1 1 to 16 1
m6g.8xlarge 32 32 1 1 to 32 1
m6g.12xlarge 48 48 1 1 to 48 1
m6g.16xlarge 64 64 1 1 to 64 1
m6gd.medium 1 1 1 1 1
m6gd.large 2 2 1 1, 2 1
m6gd.xlarge 4 4 1 1 to 4 1
m6gd.2xlarge 8 8 1 1 to 8 1
m6gd.4xlarge 16 16 1 1 to 16 1
m6gd.8xlarge 32 32 1 1 to 32 1
m6gd.12xlarge 48 48 1 1 to 48 1
m6gd.16xlarge 64 64 1 1 to 64 1
m6i.large 2 1 2 1 1, 2
m6i.xlarge 4 2 2 1, 2 1, 2
m6i.2xlarge 8 4 2 2, 4 1, 2
m6i.4xlarge 16 8 2 2, 4, 6, 8 1, 2
m6i.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
m6i.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
m6i.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
m6i.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48
t2.nano 1 1 1 1 1
t2.micro 1 1 1 1 1
t2.small 1 1 1 1 1
t2.medium 2 2 1 1, 2 1
t2.large 2 2 1 1, 2 1
t2.xlarge 4 4 1 1 to 4 1
t2.2xlarge 8 8 1 1 to 8 1
t3.nano 2 1 2 1 1, 2
t3.micro 2 1 2 1 1, 2
t3.small 2 1 2 1 1, 2
t3.medium 2 1 2 1 1, 2
t3.large 2 1 2 1 1, 2
t3.xlarge 4 2 2 2 1, 2
t3.2xlarge 8 4 2 2, 4 1, 2
t3a.nano 2 1 2 1 1, 2
t3a.micro 2 1 2 1 1, 2
t3a.small 2 1 2 1 1, 2
t3a.medium 2 1 2 1 1, 2
t3a.large 2 1 2 1 1, 2
t3a.xlarge 4 2 2 2 1, 2
t3a.2xlarge 8 4 2 2, 4 1, 2
Memory optimized instances
Instance type | Default vCPUs | Default CPU cores | Default threads per core | Valid CPU cores | Valid threads per core
r4.large 2 1 2 1 1, 2
r4.xlarge 4 2 2 1, 2 1, 2
r4.2xlarge 8 4 2 1 to 4 1, 2
r4.4xlarge 16 8 2 1 to 8 1, 2
r4.8xlarge 32 16 2 1 to 16 1, 2
r4.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32
r5.large 2 1 2 1 1, 2
r5.xlarge 4 2 2 2 1, 2
r5.2xlarge 8 4 2 2, 4 1, 2
r5.4xlarge 16 8 2 2, 4, 6, 8 1, 2
r5.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
r5.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
r5a.large 2 1 2 1 1, 2
r5a.xlarge 4 2 2 2 1, 2
r5a.2xlarge 8 4 2 2, 4 1, 2
r5a.4xlarge 16 8 2 2, 4, 6, 8 1, 2
r5ad.large 2 1 2 1 1, 2
r5ad.xlarge 4 2 2 2 1, 2
r5ad.2xlarge 8 4 2 2, 4 1, 2
r5ad.4xlarge 16 8 2 2, 4, 6, 8 1, 2
r5ad.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
r5b.large 2 1 2 1 1, 2
r5b.xlarge 4 2 2 2 1, 2
r5b.2xlarge 8 4 2 2, 4 1, 2
r5b.4xlarge 16 8 2 2, 4, 6, 8 1, 2
r5b.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
r5b.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
r5d.large 2 1 2 1 1, 2
r5d.xlarge 4 2 2 2 1, 2
r5d.2xlarge 8 4 2 2, 4 1, 2
r5d.4xlarge 16 8 2 2, 4, 6, 8 1, 2
r5d.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
r5d.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
r5dn.large 2 1 2 1 1, 2
r5dn.xlarge 4 2 2 2 1, 2
r5dn.2xlarge 8 4 2 2, 4 1, 2
r5dn.4xlarge 16 8 2 2, 4, 6, 8 1, 2
r5dn.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
r5dn.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
r5n.large 2 1 2 1 1, 2
r5n.xlarge 4 2 2 2 1, 2
r5n.2xlarge 8 4 2 2, 4 1, 2
r5n.4xlarge 16 8 2 2, 4, 6, 8 1, 2
r5n.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16
r5n.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24
r6g.medium 1 1 1 1 1
r6g.large 2 2 1 1, 2 1
r6g.xlarge 4 4 1 1 to 4 1
r6g.2xlarge 8 8 1 1 to 8 1
r6g.4xlarge 16 16 1 1 to 16 1
r6g.8xlarge 32 32 1 1 to 32 1
r6g.12xlarge 48 48 1 1 to 48 1
r6g.16xlarge 64 64 1 1 to 64 1
r6gd.medium 1 1 1 1 1
r6gd.large 2 2 1 1, 2 1
r6gd.xlarge 4 4 1 1 to 4 1
r6gd.2xlarge 8 8 1 1 to 8 1
r6gd.4xlarge 16 16 1 1 to 16 1
r6gd.8xlarge 32 32 1 1 to 32 1
r6gd.12xlarge 48 48 1 1 to 48 1
r6gd.16xlarge 64 64 1 1 to 64 1
r6i.large 2 1 2 1 1, 2
r6i.xlarge | 4 | 2 | 2 | 1, 2 | 1, 2
r6i.2xlarge | 8 | 4 | 2 | 2, 4 | 1, 2
r6i.4xlarge | 16 | 8 | 2 | 2, 4, 6, 8 | 1, 2
r6i.8xlarge | 32 | 16 | 2 | 2, 4, 6, 8, 10, 12, 14, 16 | 1, 2
r6i.12xlarge | 48 | 24 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 | 1, 2
r6i.16xlarge | 64 | 32 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32 | 1, 2
r6i.24xlarge | 96 | 48 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48 | 1, 2
u-6tb1.56xlarge | 224 | 224 | 1 | 1 to 224 | 1
u-6tb1.112xlarge | 448 | 224 | 2 | 1 to 224 | 1, 2
u-9tb1.112xlarge | 448 | 224 | 2 | 1 to 224 | 1, 2
u-12tb1.112xlarge | 448 | 224 | 2 | 1 to 224 | 1, 2
x1.16xlarge | 64 | 32 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32 | 1, 2
x1e.xlarge | 4 | 2 | 2 | 1, 2 | 1, 2
x1e.2xlarge | 8 | 4 | 2 | 1 to 4 | 1, 2
x1e.4xlarge | 16 | 8 | 2 | 1 to 8 | 1, 2
x1e.8xlarge | 32 | 16 | 2 | 1 to 16 | 1, 2
x1e.16xlarge | 64 | 32 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32 | 1, 2
x2gd.medium | 1 | 1 | 1 | 1 | 1
x2gd.large | 2 | 2 | 1 | 1, 2 | 1
x2gd.xlarge | 4 | 4 | 1 | 1 to 4 | 1
x2gd.2xlarge | 8 | 8 | 1 | 1 to 8 | 1
x2gd.4xlarge | 16 | 16 | 1 | 1 to 16 | 1
x2gd.8xlarge | 32 | 32 | 1 | 1 to 32 | 1
x2gd.12xlarge | 48 | 48 | 1 | 1 to 48 | 1
x2gd.16xlarge | 64 | 64 | 1 | 1 to 64 | 1
x2iezn.2xlarge | 8 | 4 | 2 | 2, 4 | 1, 2
x2iezn.4xlarge | 16 | 8 | 2 | 2, 4, 6, 8 | 1, 2
x2iezn.6xlarge | 24 | 12 | 2 | 2, 4, 6, 8, 10, 12 | 1, 2
x2iezn.8xlarge | 32 | 16 | 2 | 2, 4, 6, 8, 10, 12, 14, 16 | 1, 2
x2iezn.12xlarge | 48 | 24 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 | 1, 2
z1d.large | 2 | 1 | 2 | 1 | 1, 2
z1d.xlarge | 4 | 2 | 2 | 2 | 1, 2
z1d.2xlarge | 8 | 4 | 2 | 2, 4 | 1, 2
z1d.3xlarge | 12 | 6 | 2 | 2, 4, 6 | 1, 2
z1d.6xlarge | 24 | 12 | 2 | 2, 4, 6, 8, 10, 12 | 1, 2
Instance type | Default vCPUs | Default CPU cores | Default threads per core | Valid CPU cores | Valid threads per core
d2.xlarge | 4 | 2 | 2 | 1, 2 | 1, 2
d2.2xlarge | 8 | 4 | 2 | 1 to 4 | 1, 2
d2.4xlarge | 16 | 8 | 2 | 1 to 8 | 1, 2
d2.8xlarge | 36 | 18 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18 | 1, 2
d3.xlarge | 4 | 2 | 2 | 1, 2 | 1, 2
d3.2xlarge | 8 | 4 | 2 | 2, 4 | 1, 2
d3.4xlarge | 16 | 8 | 2 | 2, 4, 6, 8 | 1, 2
d3.8xlarge | 32 | 16 | 2 | 2, 4, 6, 8, 10, 12, 14, 16 | 1, 2
d3en.large | 2 | 1 | 2 | 1 | 1, 2
d3en.xlarge | 4 | 2 | 2 | 1, 2 | 1, 2
d3en.2xlarge | 8 | 4 | 2 | 2, 4 | 1, 2
d3en.4xlarge | 16 | 8 | 2 | 2, 4, 6, 8 | 1, 2
d3en.6xlarge | 24 | 12 | 2 | 2, 4, 6, 8, 10, 12 | 1, 2
d3en.8xlarge | 32 | 16 | 2 | 2, 4, 6, 8, 10, 12, 14, 16 | 1, 2
d3en.12xlarge | 48 | 24 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 | 1, 2
h1.2xlarge | 8 | 4 | 2 | 1 to 4 | 1, 2
h1.4xlarge | 16 | 8 | 2 | 1 to 8 | 1, 2
h1.8xlarge | 32 | 16 | 2 | 1 to 16 | 1, 2
h1.16xlarge | 64 | 32 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32 | 1, 2
i3.large | 2 | 1 | 2 | 1 | 1, 2
i3.xlarge | 4 | 2 | 2 | 1, 2 | 1, 2
i3.2xlarge | 8 | 4 | 2 | 1 to 4 | 1, 2
i3.4xlarge | 16 | 8 | 2 | 1 to 8 | 1, 2
i3.8xlarge | 32 | 16 | 2 | 1 to 16 | 1, 2
i3.16xlarge | 64 | 32 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32 | 1, 2
i3en.large | 2 | 1 | 2 | 1 | 1, 2
i3en.xlarge | 4 | 2 | 2 | 2 | 1, 2
i3en.2xlarge | 8 | 4 | 2 | 2, 4 | 1, 2
i3en.3xlarge | 12 | 6 | 2 | 2, 4, 6 | 1, 2
i3en.6xlarge | 24 | 12 | 2 | 2, 4, 6, 8, 10, 12 | 1, 2
i3en.12xlarge | 48 | 24 | 2 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 | 1, 2
im4gn.large | 2 | 2 | 1 | 1, 2 | 1
im4gn.xlarge | 4 | 4 | 1 | 1 to 4 | 1
im4gn.2xlarge | 8 | 8 | 1 | 1 to 8 | 1
im4gn.4xlarge | 16 | 16 | 1 | 1 to 16 | 1
im4gn.8xlarge | 32 | 32 | 1 | 1 to 32 | 1
im4gn.16xlarge | 64 | 64 | 1 | 1 to 64 | 1
is4gen.medium | 1 | 1 | 1 | 1 | 1
is4gen.large | 2 | 2 | 1 | 1, 2 | 1
is4gen.xlarge | 4 | 4 | 1 | 1 to 4 | 1
is4gen.2xlarge | 8 | 8 | 1 | 1 to 8 | 1
is4gen.4xlarge | 16 | 16 | 1 | 1 to 16 | 1
is4gen.8xlarge | 32 | 32 | 1 | 1 to 32 | 1
Disable multithreading
To disable multithreading, specify one thread per core.
1. Follow the Launch an instance using the Launch Instance Wizard (p. 565) procedure.
2. On the Configure Instance Details page, for CPU options, choose Specify CPU options.
3. For Core count, choose the number of required CPU cores. In this example, to specify the default
CPU core count for an r4.4xlarge instance, choose 8.
4. To disable multithreading, for Threads per core, choose 1.
5. Continue as prompted by the wizard. When you've finished reviewing your options on the Review
Instance Launch page, choose Launch. For more information, see Launch an instance using the
Launch Instance Wizard (p. 565).
Use the run-instances AWS CLI command and specify a value of 1 for ThreadsPerCore for the --cpu-options parameter. For CoreCount, specify the number of CPU cores. In this example, to specify the default CPU core count for an r4.4xlarge instance, specify a value of 8.
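As a sketch, the options could be built and passed like this; the AMI ID and key pair name below are placeholders, not values from this guide.

```shell
# Build the CPU options for an r4.4xlarge with multithreading disabled.
# 8 cores x 1 thread per core = 8 vCPUs (the default is 8 cores x 2 threads = 16 vCPUs).
CORE_COUNT=8
THREADS_PER_CORE=1
CPU_OPTIONS="CoreCount=${CORE_COUNT},ThreadsPerCore=${THREADS_PER_CORE}"
echo "$CPU_OPTIONS"    # CoreCount=8,ThreadsPerCore=1
# Pass the options at launch (AMI ID and key name are placeholders):
#   aws ec2 run-instances --image-id ami-0abcdef1234567890 \
#       --instance-type r4.4xlarge --key-name MyKeyPair \
#       --cpu-options "$CPU_OPTIONS"
```

The resulting instance reports 8 vCPUs instead of 16, because each core exposes a single thread.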
1. Follow the Launch an instance using the Launch Instance Wizard (p. 565) procedure.
2. On the Configure Instance Details page, for CPU options, choose Specify CPU options.
3. To get six vCPUs, specify three CPU cores and two threads per core, as follows:
Use the run-instances AWS CLI command and specify the number of CPU cores and number of threads
in the --cpu-options parameter. You can specify three CPU cores and two threads per core to get six
vCPUs.
Alternatively, specify six CPU cores and one thread per core (disable multithreading) to get six vCPUs:
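A sketch of the two equivalent settings (either string can be passed to run-instances in --cpu-options):

```shell
# Two equivalent ways to get six vCPUs on an instance that supports CPU options.
WITH_MT="CoreCount=3,ThreadsPerCore=2"      # 3 cores x 2 threads = 6 vCPUs
WITHOUT_MT="CoreCount=6,ThreadsPerCore=1"   # 6 cores x 1 thread  = 6 vCPUs
echo "$WITH_MT"
echo "$WITHOUT_MT"
# For example: aws ec2 run-instances ... --cpu-options "$WITH_MT"
[ $((3 * 2)) -eq $((6 * 1)) ] && echo "both yield 6 vCPUs"
```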
Change the hostname
...
"Instances": [
{
"Monitoring": {
"State": "disabled"
},
"PublicDnsName": "ec2-198-51-100-5.eu-central-1.compute.amazonaws.com",
"State": {
"Code": 16,
"Name": "running"
},
"EbsOptimized": false,
"LaunchTime": "2018-05-08T13:40:33.000Z",
"PublicIpAddress": "198.51.100.5",
"PrivateIpAddress": "172.31.2.206",
"ProductCodes": [],
"VpcId": "vpc-1a2b3c4d",
"CpuOptions": {
"CoreCount": 34,
"ThreadsPerCore": 1
},
"StateTransitionReason": "",
...
}
]
...
In the output that's returned, the CoreCount field indicates the number of cores for the instance. The
ThreadsPerCore field indicates the number of threads per core.
Alternatively, connect to your instance and use a tool such as lscpu to view the CPU information for your
instance.
You can use AWS Config to record, assess, audit, and evaluate configuration changes for instances,
including terminated instances. For more information, see Getting Started with AWS Config in the AWS
Config Developer Guide.
A typical Amazon EC2 private DNS name for an EC2 instance configured to use IP-based naming with
an IPv4 address looks something like this: ip-12-34-56-78.us-west-2.compute.internal, where
the name consists of the internal domain, the service (in this case, compute), the region, and a form of
the private IPv4 address. Part of this hostname is displayed at the shell prompt when you log into your
instance (for example, ip-12-34-56-78). Each time you stop and restart your Amazon EC2 instance
(unless you are using an Elastic IP address), the public IPv4 address changes, and so does your public DNS
name, system hostname, and shell prompt.
Important
This information applies to Amazon Linux. For information about other distributions, see their
specific documentation.
If you have a public DNS name registered for your instance, you can configure the system hostname so that your instance identifies itself as part of that domain. This also changes the shell prompt so that it displays the first portion of this name instead of the hostname supplied by AWS (for example, ip-12-34-56-78). If you do not have a public DNS name registered, you can still change the hostname, but the process is a little different.
In order for your hostname update to persist, you must verify that the preserve_hostname cloud-init
setting is set to true. You can run the following command to edit or add this setting:
sudo vi /etc/cloud/cloud.cfg
If the preserve_hostname setting is not listed, add the following line of text to the end of the file:
preserve_hostname: true
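This check can be scripted; the following sketch demonstrates it on a scratch file (on a real instance, you would target /etc/cloud/cloud.cfg with sudo):

```shell
# Idempotently ensure preserve_hostname is set (demonstrated on a scratch copy).
CFG=$(mktemp)
printf 'cloud_final_modules:\n - scripts-user\n' > "$CFG"   # stand-in content
grep -q '^preserve_hostname:' "$CFG" || echo 'preserve_hostname: true' >> "$CFG"
grep -q '^preserve_hostname:' "$CFG" || echo 'preserve_hostname: true' >> "$CFG"  # second run is a no-op
grep -c '^preserve_hostname: true' "$CFG"    # prints 1
```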
Follow this procedure if you already have a public DNS name registered.
1. • For Amazon Linux 2: Use the hostnamectl command to set your hostname to reflect the fully
qualified domain name (such as webserver.mydomain.com).
• For Amazon Linux AMI: On your instance, open the /etc/sysconfig/network configuration file
in your favorite text editor and change the HOSTNAME entry to reflect the fully qualified domain
name (such as webserver.mydomain.com).
HOSTNAME=webserver.mydomain.com
2. Reboot the instance to pick up the new hostname. Alternatively, you can reboot using the Amazon EC2 console (on the Instances page, select the instance and choose Instance state, Reboot instance).
3. Log into your instance and verify that the hostname has been updated. Your prompt should show
the new hostname (up to the first ".") and the hostname command should show the fully-qualified
domain name.
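On Amazon Linux 2, step 1 might look like the following sketch; the domain name is a placeholder, and the set-hostname call must run on the instance itself.

```shell
# Set the hostname to a fully qualified domain name (run on the instance):
#   sudo hostnamectl set-hostname webserver.mydomain.com
# Then verify it with:
#   hostnamectl --static
# The hostname command shows the current name on any Linux system:
hostname
```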
1. • For Amazon Linux 2: Use the hostnamectl command to set your hostname to reflect the desired
system hostname (such as webserver).
• For Amazon Linux AMI: On your instance, open the /etc/sysconfig/network configuration
file in your favorite text editor and change the HOSTNAME entry to reflect the desired system
hostname (such as webserver).
HOSTNAME=webserver.localdomain
2. Open the /etc/hosts file in your favorite text editor and change the entry beginning with
127.0.0.1 to match the example below, substituting your own hostname.
3. Reboot the instance to pick up the new hostname. Alternatively, you can reboot using the Amazon EC2 console (on the Instances page, select the instance and choose Instance state, Reboot instance).
4. Log into your instance and verify that the hostname has been updated. Your prompt should show
the new hostname (up to the first ".") and the hostname command should show the fully-qualified
domain name.
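The /etc/hosts edit in step 2 can be sketched like this on a scratch copy; the exact set of aliases your distribution expects may differ.

```shell
# Rewrite the 127.0.0.1 entry to include the new hostname (scratch copy shown;
# on a real instance, edit /etc/hosts with sudo).
HOSTS=$(mktemp)
echo '127.0.0.1   localhost localhost.localdomain' > "$HOSTS"
NEW=webserver
sed -i "s/^127\.0\.0\.1.*/127.0.0.1   ${NEW}.localdomain ${NEW} localhost localhost.localdomain/" "$HOSTS"
cat "$HOSTS"
```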
1. Create a file in /etc/profile.d that sets the environment variable called NICKNAME to the value
you want in the shell prompt. For example, to set the system nickname to webserver, run the
following command.
2. Open the /etc/bashrc (Red Hat) or /etc/bash.bashrc (Debian/Ubuntu) file in your favorite text
editor (such as vim or nano). You need to use sudo with the editor command because /etc/bashrc
and /etc/bash.bashrc are owned by root.
3. Edit the file and change the shell prompt variable (PS1) to display your nickname instead of the hostname. Find the following line that sets the shell prompt in /etc/bashrc or /etc/bash.bashrc (several surrounding lines are shown below for context; look for the line that starts with [ "$PS1"):
# Turn on checkwinsize
shopt -s checkwinsize
[ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "
# You might want to have e.g. tty in prompt (e.g. more virtual machines)
# and console windows
Change the \h (the symbol for hostname) in that line to the value of the NICKNAME variable.
# Turn on checkwinsize
shopt -s checkwinsize
[ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@$NICKNAME \W]\\$ "
# You might want to have e.g. tty in prompt (e.g. more virtual machines)
# and console windows
4. (Optional) To set the title on shell windows to the new nickname, complete the following steps.
c. Open the /etc/sysconfig/bash-prompt-xterm file in your favorite text editor (such as vim or nano). You need to use sudo with the editor command because /etc/sysconfig/bash-prompt-xterm is owned by root.
d. Add the following line to the file.
5. Log out and then log back in to pick up the new nickname value.
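Step 1's profile.d snippet can be sketched as follows; the file name prompt.sh is an assumption, and on an instance it would live under /etc/profile.d rather than a temporary directory.

```shell
# A profile.d snippet that exports the NICKNAME variable used by the prompt.
DIR=$(mktemp -d)
echo 'export NICKNAME=webserver' > "$DIR/prompt.sh"
. "$DIR/prompt.sh"    # on an instance, login shells source /etc/profile.d/*.sh
echo "$NICKNAME"      # webserver
```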
• How do I assign a static hostname to a private Amazon EC2 instance running RHEL 7 or CentOS 7?
Set up dynamic DNS
Dynamic DNS services provide custom DNS host names within their domain area that can be easy to
remember and that can also be more relevant to your host's use case; some of these services are also free
of charge. You can use a dynamic DNS provider with Amazon EC2 and configure the instance to update
the IP address associated with a public DNS name each time the instance starts. There are many different
providers to choose from, and the specific details of choosing a provider and registering a name with
them are outside the scope of this guide.
Important
This information applies to Amazon Linux. For information about other distributions, see their
specific documentation.
1. Sign up with a dynamic DNS service provider and register a public DNS name with their service. This
procedure uses the free service from noip.com/free as an example.
2. Configure the dynamic DNS update client. After you have a dynamic DNS service provider and
a public DNS name registered with their service, point the DNS name to the IP address for your
instance. Many providers (including noip.com) allow you to do this manually from your account page
on their website, but many also support software update clients. If an update client is running on
your EC2 instance, your dynamic DNS record is updated each time the IP address changes, as after
a shutdown and restart. In this example, you install the noip2 client, which works with the service
provided by noip.com.
a. Enable the Extra Packages for Enterprise Linux (EPEL) repository to gain access to the noip2
client.
Note
Amazon Linux instances have the GPG keys and repository information for the EPEL repository installed by default; however, Red Hat and CentOS instances must first install the epel-release package before you can enable the EPEL repository. For more information and to download the latest version of this package, see https://fedoraproject.org/wiki/EPEL.
c. Create the configuration file. Enter the login and password information when prompted and
answer the subsequent questions to configure the client.
This command starts the client, which reads the configuration file (/etc/no-ip2.conf) that you
created earlier and updates the IP address for the public DNS name that you chose.
5. Verify that the update client has set the correct IP address for your dynamic DNS name. Allow a few
minutes for the DNS records to update, and then try to connect to your instance using SSH with the
public DNS name that you configured in this procedure.
Run commands at launch
If you are interested in more complex automation scenarios, consider using AWS CloudFormation and
AWS OpsWorks. For more information, see the AWS CloudFormation User Guide and the AWS OpsWorks
User Guide.
For information about running commands on your Windows instance at launch, see Run commands on
your Windows instance at launch and Manage Windows instance configuration in the Amazon EC2 User
Guide for Windows Instances.
In the following examples, the commands from Install a LAMP Web Server on Amazon Linux 2 (p. 15) are converted to a shell script and a set of cloud-init directives that run when the instance launches. In each example, the same tasks are performed by the user data.
Contents
• Prerequisites (p. 704)
• User data and shell scripts (p. 704)
• User data and the console (p. 705)
• User data and cloud-init directives (p. 707)
• User data and the AWS CLI (p. 708)
Prerequisites
The following examples assume that your instance has a public DNS name that is reachable from the
internet. For more information, see Step 1: Launch an instance (p. 10). You must also configure your
security group to allow SSH (port 22), HTTP (port 80), and HTTPS (port 443) connections. For more
information about these prerequisites, see Set up to use Amazon EC2 (p. 5).
Also, these instructions are intended for use with Amazon Linux 2, and the commands and directives
may not work for other Linux distributions. For more information about other distributions, such as their
support for cloud-init, see their specific documentation.
Adding these tasks at boot time adds to the amount of time it takes to boot the instance. You should allow a few minutes of extra time for the tasks to complete before you test that the user script has finished successfully.
Important
By default, user data scripts and cloud-init directives run only during the boot cycle when you
first launch an instance. You can update your configuration to ensure that your user data scripts
and cloud-init directives run every time you restart your instance. For more information, see
How can I utilize user data to automatically run a script with every restart of my Amazon EC2
Linux instance? in the AWS Knowledge Center.
User data shell scripts must start with the #! characters and the path to the interpreter that you want to read the script (commonly /bin/bash). For an introduction to shell scripting, see the BASH Programming HOW-TO at the Linux Documentation Project (tldp.org).
Scripts entered as user data are run as the root user, so do not use the sudo command in the script.
Remember that any files you create will be owned by root; if you need non-root users to have file
access, you should modify the permissions accordingly in the script. Also, because the script is not run
interactively, you cannot include commands that require user feedback (such as yum update without the
-y flag).
If you use an AWS API, including the AWS CLI, in a user data script, you must use an instance profile when
launching the instance. An instance profile provides the appropriate AWS credentials required by the user
data script to issue the API call. For more information, see Using instance profiles in the IAM User Guide.
The permissions you assign to the IAM role depend on which services you are calling with the API. For
more information, see IAM roles for Amazon EC2.
When a user data script is processed, it is copied to and run from /var/lib/cloud/instances/instance-id/. The script is not deleted after it is run. Be sure to delete the user data scripts from /var/lib/cloud/instances/instance-id/ before you create an AMI from the instance. Otherwise, the script will exist in this directory on any instance launched from the AMI.
The following example script creates and configures our web server.
#!/bin/bash
yum update -y
amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
yum install -y httpd mariadb-server
systemctl start httpd
systemctl enable httpd
usermod -a -G apache ec2-user
chown -R ec2-user:apache /var/www
chmod 2775 /var/www
find /var/www -type d -exec chmod 2775 {} \;
find /var/www -type f -exec chmod 0664 {} \;
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
Allow enough time for the instance to launch and run the commands in your script, and then check to
see that your script has completed the tasks that you intended.
For our example, in a web browser, enter the URL of the PHP test file the script created. This URL is the
public DNS address of your instance followed by a forward slash and the file name.
http://my.public.dns.amazonaws.com/phpinfo.php
You should see the PHP information page. If you are unable to see the PHP information page, check that
the security group you are using contains a rule to allow HTTP (port 80) traffic. For more information, see
Add rules to a security group (p. 1311).
(Optional) If your script did not accomplish the tasks you were expecting it to, or if you just want to verify that your script completed without errors, examine the cloud-init output log file at /var/log/cloud-init-output.log and look for error messages in the output.
For additional debugging information, you can create a MIME multipart archive that includes a cloud-init data section with the following directive:
This directive sends command output from your script to /var/log/cloud-init-output.log. For more information about cloud-init data formats and creating MIME multipart archives, see cloud-init Formats.
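The directive referred to above is typically cloud-init's output setting; a minimal sketch of the cloud-config part might look like this (the key names follow standard cloud-init configuration and are not taken verbatim from this guide):

```yaml
#cloud-config
# Redirect all cloud-init command output into the standard output log.
output: { all: '| tee -a /var/log/cloud-init-output.log' }
```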
The cloud-init user directives can be passed to an instance at launch the same way that a script is passed, although the syntax is different. For more information about cloud-init, see http://cloudinit.readthedocs.org/en/latest/index.html.
Important
By default, user data scripts and cloud-init directives run only during the boot cycle when you
first launch an instance. You can update your configuration to ensure that your user data scripts
and cloud-init directives run every time you restart your instance. For more information, see
How can I utilize user data to automatically run a script with every restart of my Amazon EC2
Linux instance? in the AWS Knowledge Center.
Adding these tasks at boot time adds to the amount of time it takes to boot an instance. You should
allow a few minutes of extra time for the tasks to complete before you test that your user data directives
have completed.
1. Follow the procedure for launching an instance at Launch an instance using the Launch
Instance Wizard (p. 565), but when you get to the section called “Step 3: Configure Instance
Details” (p. 567) in that procedure, enter your cloud-init directive text in the User data field, and
then complete the launch procedure.
In the example below, the directives create and configure a web server on Amazon Linux 2. The
#cloud-config line at the top is required in order to identify the commands as cloud-init
directives.
#cloud-config
repo_update: true
repo_upgrade: all
packages:
- httpd
- mariadb-server
runcmd:
- [ sh, -c, "amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2" ]
- systemctl start httpd
- sudo systemctl enable httpd
- [ sh, -c, "usermod -a -G apache ec2-user" ]
- [ sh, -c, "chown -R ec2-user:apache /var/www" ]
- chmod 2775 /var/www
2. Allow enough time for the instance to launch and run the directives in your user data, and then
check to see that your directives have completed the tasks you intended.
For our example, in a web browser, enter the URL of the PHP test file the directives created. This URL
is the public DNS address of your instance followed by a forward slash and the file name.
http://my.public.dns.amazonaws.com/phpinfo.php
You should see the PHP information page. If you are unable to see the PHP information page,
check that the security group you are using contains a rule to allow HTTP (port 80) traffic. For more
information, see Add rules to a security group (p. 1311).
3. (Optional) If your directives did not accomplish the tasks you were expecting them to, or if you just want to verify that your directives completed without errors, examine the output log file at /var/log/cloud-init-output.log and look for error messages in the output. For additional debugging information, you can add the following line to your directives:
On Windows, you can use the AWS Tools for Windows PowerShell instead of using the AWS CLI. For more
information, see User data and the Tools for Windows PowerShell in the Amazon EC2 User Guide for
Windows Instances.
To specify user data when you launch your instance, use the run-instances command with the --user-data parameter. With run-instances, the AWS CLI performs base64 encoding of the user data for you. The following example shows how to specify a script as a string on the command line:
The following example shows how to specify a script using a text file. Be sure to use the file:// prefix
to specify the file.
#!/bin/bash
yum update -y
service httpd start
chkconfig httpd on
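Sketched invocations for both cases might look like the following; the AMI ID, instance type, and key pair name are placeholders.

```shell
# Write the user data script shown above to a file.
SCRIPT=$(mktemp)
cat > "$SCRIPT" <<'EOF'
#!/bin/bash
yum update -y
service httpd start
chkconfig httpd on
EOF
wc -l < "$SCRIPT"    # 4
# As a string on the command line:
#   aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro \
#       --key-name MyKeyPair --user-data 'echo hello'
# From the file (note the file:// prefix):
#   aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro \
#       --key-name MyKeyPair --user-data "file://$SCRIPT"
```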
You can modify the user data of a stopped instance using the modify-instance-attribute command. With
modify-instance-attribute, the AWS CLI does not perform base64 encoding of the user data for you.
• On a Linux computer, use the base64 command to encode the user data.
• On a Windows computer, use the certutil command to encode the user data. Before you can use this
file with the AWS CLI, you must remove the first (BEGIN CERTIFICATE) and last (END CERTIFICATE)
lines.
Use the --attribute and --value parameters to use the encoded text file to specify the user data. Be
sure to use the file:// prefix to specify the file.
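On Linux, the encoding step and the subsequent call might look like the following sketch; the instance ID is a placeholder.

```shell
# Encode the user data yourself; modify-instance-attribute does not do it for you.
USERDATA=$(mktemp)
printf '#!/bin/bash\nyum update -y\nservice httpd start\nchkconfig httpd on' > "$USERDATA"
base64 "$USERDATA" > "${USERDATA}.b64"
head -c 12 "${USERDATA}.b64"; echo    # IyEvYmluL2Jh  (the encoding of "#!/bin/ba")
# Then point --value at the encoded file:
#   aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 \
#       --attribute userData --value "file://${USERDATA}.b64"
```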
To delete the existing user data, use the modify-instance-attribute command as follows:
To retrieve the user data for an instance, use the describe-instance-attribute command. With describe-instance-attribute, the AWS CLI does not perform base64 decoding of the user data for you. The following is example output with the user data base64 encoded.
{
    "UserData": {
        "Value": "IyEvYmluL2Jhc2gKeXVtIHVwZGF0ZSAteQpzZXJ2aWNlIGh0dHBkIHN0YXJ0CmNoa2NvbmZpZyBodHRwZCBvbg=="
    },
    "InstanceId": "i-1234567890abcdef0"
}
• On a Linux computer, use the --query option to get the encoded user data and the base64 command to decode it.
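Decoding the Value field from the example output above is fully reproducible locally:

```shell
# The base64 string from the example describe-instance-attribute output.
ENCODED="IyEvYmluL2Jhc2gKeXVtIHVwZGF0ZSAteQpzZXJ2aWNlIGh0dHBkIHN0YXJ0CmNoa2NvbmZpZyBodHRwZCBvbg=="
echo "$ENCODED" | base64 --decode; echo
# On an instance, you could fetch and decode in one pipeline, for example:
#   aws ec2 describe-instance-attribute --instance-id i-1234567890abcdef0 \
#       --attribute userData --output text --query "UserData.Value" | base64 --decode
```

The decoded text is the four-line shell script shown in the output that follows.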
Instance metadata and user data
• On a Windows computer, use the --query option to get the encoded user data and the certutil command to decode it. Note that the encoded output is stored in a file and the decoded output is stored in another file.
#!/bin/bash
yum update -y
service httpd start
chkconfig httpd on
You can also use instance metadata to access user data that you specified when launching your instance.
For example, you can specify parameters for configuring your instance, or include a simple script. You
can build generic AMIs and use user data to modify the configuration files supplied at launch time. For
example, if you run web servers for various small businesses, they can all use the same generic AMI and
retrieve their content from the Amazon S3 bucket that you specify in the user data at launch. To add a
new customer at any time, create a bucket for the customer, add their content, and launch your AMI with
the unique bucket name provided to your code in the user data. If you launch more than one instance
at the same time, the user data is available to all instances in that reservation. Each instance that is part
of the same reservation has a unique ami-launch-index number, allowing you to write code that
controls what to do. For example, the first host might elect itself as the original node in a cluster. For a
detailed AMI launch example, see Example: AMI launch index value (p. 737).
EC2 instances can also include dynamic data, such as an instance identity document that is generated
when the instance is launched. For more information, see Dynamic data categories (p. 736).
Important
Although you can only access instance metadata and user data from within the instance itself,
the data is not protected by authentication or cryptographic methods. Anyone who has direct
access to the instance, and potentially any software running on the instance, can view its
metadata. Therefore, you should not store sensitive data, such as passwords or long-lived
encryption keys, as user data.
Note
The examples in this section use the IPv4 address of the instance metadata service:
169.254.169.254. If you are retrieving instance metadata for EC2 instances over the IPv6
address, ensure that you enable and use the IPv6 address instead: fd00:ec2::254. The IPv6
address of the instance metadata service is compatible with IMDSv2 commands. The IPv6
address is only accessible on instances built on the Nitro System (p. 232).
Contents
• Use IMDSv2 (p. 711)
• Configure the instance metadata options (p. 714)
• Retrieve instance metadata (p. 718)
• Work with instance user data (p. 726)
Use IMDSv2
You can access instance metadata from a running instance using one of the following methods:
• Instance Metadata Service Version 1 (IMDSv1) – a request/response method
• Instance Metadata Service Version 2 (IMDSv2) – a session-oriented method
By default, you can use either IMDSv1 or IMDSv2, or both. The instance metadata service distinguishes
between IMDSv1 and IMDSv2 requests based on whether, for any given request, either the PUT or GET
headers, which are unique to IMDSv2, are present in that request. For more information, see Add defense
in depth against open firewalls, reverse proxies, and SSRF vulnerabilities with enhancements to the EC2
Instance Metadata Service.
You can configure the instance metadata service on each instance such that local code or users must use
IMDSv2. When you specify that IMDSv2 must be used, IMDSv1 no longer works. For more information,
see Configure the instance metadata options (p. 714).
The following example uses a Linux shell script and IMDSv2 to retrieve the top-level instance metadata
items. The example:
• Creates a session token lasting six hours (21,600 seconds) using the PUT request
• Stores the session token header in a variable named TOKEN
• Requests the top-level metadata items using the token
Separate commands
Then, use the token to generate top-level metadata items using the following command.
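Reconstructed as a sketch, the two separate commands might look like this; they only succeed from within an EC2 instance, where 169.254.169.254 is reachable, and elsewhere the token request simply times out.

```shell
# Step 1: create a session token valid for six hours (21,600 seconds).
TTL=21600
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: ${TTL}" --connect-timeout 2) || TOKEN=""
# Step 2: use the token to request the top-level metadata items.
if [ -n "$TOKEN" ]; then
    curl -s -H "X-aws-ec2-metadata-token: ${TOKEN}" http://169.254.169.254/latest/meta-data/
else
    echo "instance metadata service not reachable (not on an EC2 instance?)"
fi
```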
Combined commands
You can store the token and combine the commands. The following example combines the above two
commands and stores the session token header in a variable named TOKEN.
Note
If there is an error in creating the token, instead of a valid token, an error message is stored in
the variable, and the command will not work.
After you've created a token, you can reuse it until it expires. In the following example command, which
gets the ID of the AMI used to launch the instance, the token that is stored in $TOKEN in the previous
example is reused.
When you use IMDSv2 to request instance metadata, the request must include the following:
1. Use a PUT request to initiate a session to the instance metadata service. The PUT request returns a
token that must be included in subsequent GET requests to the instance metadata service. The token
is required to access metadata using IMDSv2.
2. Include the token in all GET requests to the instance metadata service. When token usage is set to required, requests without a valid token or with an expired token receive a 401 - Unauthorized HTTP error code. For information about changing the token usage requirement, see modify-instance-metadata-options in the AWS CLI Command Reference.
• The token is an instance-specific key. The token is not valid on other EC2 instances and will be
rejected if you attempt to use it outside of the instance on which it was generated.
• The PUT request must include a header that specifies the time to live (TTL) for the token, in seconds,
up to a maximum of six hours (21,600 seconds). The token represents a logical session. The TTL
specifies the length of time that the token is valid and, therefore, the duration of the session.
• After a token expires, to continue accessing instance metadata, you must create a new session using
another PUT.
• You can choose to reuse a token or create a new token with every request. For a small number of
requests, it might be easier to generate and immediately use a token each time you need to access
the instance metadata service. But for efficiency, you can specify a longer duration for the token
and reuse it rather than having to write a PUT request every time you need to request instance
metadata. There is no practical limit on the number of concurrent tokens, each representing its own
session. IMDSv2 is, however, still constrained by normal instance metadata service connection and
throttling limits. For more information, see Query throttling (p. 725).
HTTP GET and HEAD methods are allowed in IMDSv2 instance metadata requests. PUT requests are
rejected if they contain an X-Forwarded-For header.
By default, the response to PUT requests has a response hop limit (time to live) of 1 at the IP protocol
level. You can adjust the hop limit using the modify-instance-metadata-options command if you
need to make it larger. For example, you might need a larger hop limit for backward compatibility with
container services running on the instance. For more information, see modify-instance-metadata-options
in the AWS CLI Command Reference.
If your software uses IMDSv1, use the following tools to help reconfigure your software to use IMDSv2.
• AWS software: The latest versions of the AWS SDKs and CLIs support IMDSv2. To use IMDSv2, make
sure that your EC2 instances have the latest versions of the AWS SDKs and CLIs. For information about
updating the CLI, see Installing, updating, and uninstalling the AWS CLI in the AWS Command Line
Interface User Guide.
• CloudWatch: IMDSv2 uses token-backed sessions, while IMDSv1 does not. The MetadataNoToken
CloudWatch metric tracks the number of calls to the instance metadata service that are using IMDSv1.
By tracking this metric to zero, you can determine if and when all of your software has been upgraded
to use IMDSv2. For more information, see Instance metrics (p. 961).
• Updates to EC2 APIs and CLIs: For existing instances, you can use the modify-instance-metadata-
options CLI command (or the ModifyInstanceMetadataOptions API) to require the use of IMDSv2.
For new instances, you can use the run-instances CLI command (or the RunInstances API) and the
metadata-options parameter to launch new instances that require the use of IMDSv2.
To require the use of IMDSv2 on all new instances launched by Auto Scaling groups, your Auto Scaling
groups can use either a launch template or a launch configuration. When you create a launch template
or create a launch configuration, you must configure the MetadataOptions parameters to require
the use of IMDSv2. After you configure the launch template or launch configuration, the Auto Scaling
group launches new instances using the new launch template or launch configuration, but existing
instances are not affected.
Furthermore, you can add a layer of protection to enforce the change from IMDSv1 to IMDSv2 at the
access management layer, for APIs called with EC2 role credentials. You can use a condition key in
either IAM policies or AWS Organizations service control policies (SCPs). Specifically, by using the
policy condition key ec2:RoleDelivery with a value of 2.0 in your IAM policies, API calls made with
EC2 role credentials obtained from IMDSv1 receive an UnauthorizedOperation response. The same
enforcement can be applied more broadly by requiring that condition in an SCP. This ensures that
credentials delivered via IMDSv1 cannot actually be used to call APIs, because any API call that does
not match the specified condition receives an UnauthorizedOperation error. For example IAM
policies, see Work with instance metadata (p. 1262). For more information, see Service Control Policies
in the AWS Organizations User Guide.
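A minimal sketch of such a policy follows, written to a local file. The statement shape (a Deny on all actions when ec2:RoleDelivery is numerically less than 2.0) is an assumption about how the condition key is typically applied; see the referenced example IAM policies for the authoritative form.

```shell
# Write a hypothetical deny-IMDSv1-credentials policy to a local file.
cat > deny-imdsv1-credentials.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCallsWithIMDSv1Credentials",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NumericLessThan": { "ec2:RoleDelivery": "2.0" }
      }
    }
  ]
}
EOF
grep -c '"ec2:RoleDelivery"' deny-imdsv1-credentials.json
```

Attach a policy of this shape to a role (or an SCP to an organizational unit) only after confirming that no workload still depends on IMDSv1-delivered credentials.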
Using the above tools, we recommend that you follow this path for transitioning to IMDSv2:

1. Update the SDKs, CLIs, and your software that use role credentials on their EC2 instances to IMDSv2-
compatible versions. For information about updating the CLI, see Upgrading to the latest version of the
AWS CLI in the AWS Command Line Interface User Guide.
2. Change your software that directly accesses instance metadata (in other words, that does not use
an SDK) to use IMDSv2 requests.
3. Track your transition progress by using the CloudWatch metric MetadataNoToken. This metric shows
the number of calls to the instance metadata service that are using IMDSv1 on your instances. For more
information, see Instance metrics (p. 961).
4. When the CloudWatch metric MetadataNoToken records zero IMDSv1 usage, everything is ready on
all instances. At this stage, you can do the following:
• For existing instances: You can require IMDSv2 use through the modify-instance-metadata-options
command. You can make these changes on running instances; you do not need to restart your
instances.
• For new instances: When launching a new instance, you can do one of the following:
• In the Amazon EC2 console launch instance wizard, set Metadata accessible to Enabled and
Metadata version to V2. For more information, see Step 3: Configure Instance Details (p. 567).
• Use the run-instances command to specify that only IMDSv2 is to be used.
Updating instance metadata options for existing instances is available only through the API or AWS CLI.
It is currently not available in the Amazon EC2 console. For more information, see Configure the instance
metadata options (p. 714).
You can also use IAM condition keys in an IAM policy or SCP to do the following:
• Allow an instance to launch only if it's configured to require the use of IMDSv2
• Restrict the number of allowed hops
• Turn off access to instance metadata
Note
You should proceed cautiously and conduct careful testing before making any changes. Take
note of the following:
• If you enforce the use of IMDSv2, applications or agents that use IMDSv1 for instance
metadata access will break.
• If you turn off all access to instance metadata, applications or agents that rely on instance
metadata access to function will break.
• For IMDSv2, you must use /latest/api/token when retrieving the token.
Topics
• Configure instance metadata options for new instances (p. 715)
• Modify instance metadata options for existing instances (p. 717)
Console
• When launching a new instance in the Amazon EC2 console, select the following options on the
Configure Instance Details page:
For more information, see Step 3: Configure Instance Details (p. 567).
AWS CLI
--instance-type c3.large
...
--metadata-options "HttpEndpoint=enabled,HttpTokens=required"
AWS CloudFormation
To specify the metadata options for an instance using AWS CloudFormation, see the
AWS::EC2::LaunchTemplate MetadataOptions property in the AWS CloudFormation User Guide.
To ensure that IAM users can only launch instances that require the use of IMDSv2 when requesting
instance metadata, you can specify that the condition to require IMDSv2 must be met before an instance
can be launched. For the example IAM policy, see Work with instance metadata (p. 1262).
By default, the IPv4 endpoint is enabled and the IPv6 endpoint is disabled. This is true even if you are
launching an instance into an IPv6-only subnet. You can choose to enable or disable these endpoints
at instance launch. The IPv6 endpoint for IMDS is only accessible on Instances built on the Nitro
System (p. 232). For more information on the metadata options, see run-instances in the AWS CLI
command reference. The following example shows you how to enable both IPv4 and IPv6 endpoints for
IMDS:
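The example body does not appear on this page; a sketch of the likely command follows. HttpProtocolIpv6 is the run-instances metadata option for the IPv6 endpoint; the image ID and instance type are placeholders, and the command is echoed here rather than executed (it would require AWS credentials).

```shell
# Build (but do not run) a run-instances command that enables both endpoints.
cmd='aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t3.micro --metadata-options "HttpEndpoint=enabled,HttpProtocolIpv6=enabled"'
echo "$cmd"
```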
Console
• To ensure that access to your instance metadata is turned off, regardless of which version of the
instance metadata service you are using, launch the instance in the Amazon EC2 console with
the following option selected on the Configure Instance Details page:
For more information, see Step 3: Configure Instance Details (p. 567).
AWS CLI
To ensure that access to your instance metadata is turned off, regardless of which version of the
instance metadata service you are using, launch the instance with --metadata-options set to
HttpEndpoint=disabled. You can turn access on later by using the modify-instance-metadata-
options command.
AWS CloudFormation
To specify the metadata options for an instance using AWS CloudFormation, see the
AWS::EC2::LaunchTemplate MetadataOptions property in the AWS CloudFormation User Guide.
Currently, only the AWS SDKs and the AWS CLI support modifying the instance metadata options on
existing instances. You can't use the Amazon EC2 console to modify instance metadata options.
You can opt in to require that IMDSv2 is used when requesting instance metadata. Use the modify-
instance-metadata-options CLI command and set the http-tokens parameter to required. When you
specify a value for http-tokens, you must also set http-endpoint to enabled.
For existing instances, you can change the settings of the PUT response hop limit. Use the modify-
instance-metadata-options CLI command and set the http-put-response-hop-limit parameter
to the required number of hops. In the following example, the hop limit is set to 3. Note that when
specifying a value for http-put-response-hop-limit, you must also set http-endpoint to
enabled.
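The example referenced above is missing from this page; a sketch follows, with a placeholder instance ID. The command is echoed rather than executed, since running it requires AWS credentials and a real instance.

```shell
# Build (but do not run) the hop-limit change described above.
cmd='aws ec2 modify-instance-metadata-options --instance-id i-1234567890abcdef0 --http-put-response-hop-limit 3 --http-endpoint enabled'
echo "$cmd"
```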
You can use the modify-instance-metadata-options CLI command with http-tokens set to optional
to restore the use of IMDSv1 when requesting instance metadata.
By default, the IPv4 endpoint is enabled and the IPv6 endpoint is disabled. This is true even if you
have launched an instance into an IPv6-only subnet. The IPv6 endpoint for IMDS is only accessible on
Instances built on the Nitro System (p. 232). For more information about the metadata options, see
modify-instance-metadata-options in the AWS CLI command reference. The following example shows you
how to turn on the IPv6 endpoint for the instance metadata service.
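The example body does not appear on this page; a sketch follows, with a placeholder instance ID, echoed rather than executed.

```shell
# Build (but do not run) the command that turns on the IPv6 IMDS endpoint.
cmd='aws ec2 modify-instance-metadata-options --instance-id i-1234567890abcdef0 --http-protocol-ipv6 enabled'
echo "$cmd"
```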
You can turn off access to your instance metadata by disabling the HTTP endpoint of the instance
metadata service, regardless of which version of the instance metadata service you are using. You can
reverse this change at any time by enabling the HTTP endpoint. Use the modify-instance-metadata-
options CLI command and set the http-endpoint parameter to disabled.
To control which IAM users can modify the instance metadata options, specify a policy that prevents
all users other than those with a specified role from using the ModifyInstanceMetadataOptions API. For
the example IAM policy, see Work with instance metadata (p. 1262).
Instance metadata is divided into categories. For a description of each instance metadata category, see
Instance metadata categories (p. 728).
To view all categories of instance metadata from within a running instance, use the following IPv4 or
IPv6 URIs:
http://169.254.169.254/latest/meta-data/
This IPv4 address is a link-local address and is valid only from the instance. For more information, see
Link-local address on Wikipedia.
http://[fd00:ec2::254]/latest/meta-data/
This IPv6 address is a unique local address and is valid only from the instance. For more information,
see Unique local address on Wikipedia.
Note
The examples in this section use the IPv4 address of the instance metadata service:
169.254.169.254. If you are retrieving instance metadata for EC2 instances over the IPv6
address, ensure that you enable and use the IPv6 address instead: fd00:ec2::254. The IPv6
address of the instance metadata service is compatible with IMDSv2 commands. The IPv6
address is only accessible on Instances built on the Nitro System (p. 232).
The command format is different, depending on whether you use IMDSv1 or IMDSv2. By default, you can
use both instance metadata services. To require the use of IMDSv2, see Use IMDSv2 (p. 711).
You can use a tool such as cURL, as shown in the following example.
IMDSv2
IMDSv1
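A sketch of the two request forms, guarded so that it is safe to run off-instance (the 169.254.169.254 endpoint exists only on EC2):

```shell
IMDS=http://169.254.169.254
if TOKEN=$(curl -s -m 2 -X PUT "$IMDS/latest/api/token" \
     -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" 2>/dev/null) && [ -n "$TOKEN" ]; then
  # IMDSv2: a session token accompanies every GET request.
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "$IMDS/latest/meta-data/"
  # IMDSv1: a plain GET with no token.
  curl -s "$IMDS/latest/meta-data/"
  where=on-instance
else
  where=off-instance   # no IMDS endpoint reachable; nothing to query
fi
echo "$where"
```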
Note that you are not billed for HTTP requests used to retrieve instance metadata and user data.
Considerations
To avoid problems with instance metadata retrieval, consider the following:
• The AWS SDKs use IMDSv2 calls by default. If the IMDSv2 call receives no response, the SDK retries
the call and, if still unsuccessful, uses IMDSv1. This can result in a delay. In a container environment, if
the hop limit is 1, the IMDSv2 response does not return because going to the container is considered
an additional network hop. To avoid the process of falling back to IMDSv1 and the resultant delay, in
a container environment we recommend that you set the hop limit to 2. For more information, see
Configure the instance metadata options (p. 714).
• For IMDSv2, you must use /latest/api/token when retrieving the token. Issuing PUT requests to
any version-specific path, for example /2021-03-23/api/token, will result in the metadata service
returning 403 Forbidden errors. This behavior is intended.
A request for a specific metadata resource returns the appropriate value, or a 404 - Not Found HTTP
error code if the resource is not available.
A request for a general metadata resource (the URI ends with a /) returns a list of available resources, or
a 404 - Not Found HTTP error code if there is no such resource. The list items are on separate lines,
terminated by line feeds (ASCII 10).
For requests made using Instance Metadata Service Version 2, the following HTTP error codes can be
returned:
This example gets the available versions of the instance metadata. These versions do not necessarily
correlate with an Amazon EC2 API version. The earlier versions are available to you in case you have
scripts that rely on the structure and information present in a previous version.
IMDSv2
IMDSv1
latest
This example gets the top-level metadata items. For more information, see Instance metadata
categories (p. 728).
IMDSv2
IMDSv1
security-groups
services/
The following examples get the values of some of the top-level metadata items that were obtained in
the preceding example. The IMDSv2 requests use the stored token that was created in the preceding
example command, assuming it has not expired.
IMDSv2
IMDSv1
IMDSv2
IMDSv1
IMDSv2
IMDSv1
IMDSv2
IMDSv1
IMDSv2
IMDSv1
IMDSv2
IMDSv1
IMDSv2
21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9T
rDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpE
Ibb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4
nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0Fkb
FFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTb
NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE my-public-key
IMDSv1
IMDSv2
IMDSv1
In the following examples, the sample instance has tags on instance metadata enabled (p. 1678) and the
instance tags Name=MyInstance and Environment=Dev.
This example gets all the instance tag keys for an instance.
IMDSv2
IMDSv1
The following example gets the value of the Name key that was obtained in the preceding example. The
IMDSv2 request uses the stored token that was created in the preceding example command, assuming it
has not expired.
IMDSv2
IMDSv1
Query throttling
We throttle queries to the instance metadata service on a per-instance basis, and we place limits on the
number of simultaneous connections from an instance to the instance metadata service.
If you're using the instance metadata service to retrieve AWS security credentials, avoid querying for
credentials during every transaction or concurrently from a high number of threads or processes, as
this might lead to throttling. Instead, we recommend that you cache the credentials until they start
approaching their expiry time.
If you are throttled while accessing the instance metadata service, retry your query with an exponential
backoff strategy.
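The retry-with-exponential-backoff recommendation can be sketched as follows. fetch_metadata is a hypothetical stand-in for the real curl call, wired to fail twice so that the backoff loop is exercised.

```shell
# Stand-in for an IMDS query: fails on the first two attempts, then succeeds.
attempts=0
fetch_metadata() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

delay=1   # initial backoff in seconds
tries=0
until fetch_metadata; do
  tries=$((tries + 1))
  if [ "$tries" -ge 5 ]; then
    echo "giving up" >&2
    break
  fi
  sleep "$delay"          # back off before retrying
  delay=$((delay * 2))    # 1s, 2s, 4s, ...
done
echo "succeeded after $attempts attempts"
```

In practice you would also cap the delay and add jitter so that many clients do not retry in lockstep.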
The following example uses Linux iptables and its owner module to prevent the Apache webserver
(based on its default installation user ID of apache) from accessing 169.254.169.254. It uses a deny rule
to reject all instance metadata requests (whether IMDSv1 or IMDSv2) from any process running as that
user.
$ sudo iptables --append OUTPUT --proto tcp --destination 169.254.169.254 --match owner --uid-owner apache --jump REJECT
Or, you can consider allowing access only to particular users or groups, by using allow rules. Allow
rules might be easier to manage from a security perspective, because they require you to decide what
software needs access to instance metadata. If you use allow rules and later change the software or
configuration on an instance, it's less likely that you will accidentally grant metadata access to software
that you did not intend to have it. You can also combine group usage with allow rules, so that you can
add and remove users from a permitted group without needing to change the firewall rule.
The following example prevents access to the instance metadata service by all processes, except for
processes running in the user account trustworthy-user.
$ sudo iptables --append OUTPUT --proto tcp --destination 169.254.169.254 --match owner ! --uid-owner trustworthy-user --jump REJECT
Note
• To use local firewall rules, you need to adapt the preceding example commands to suit your
needs.
• By default, iptables rules are not persistent across system reboots. They can be made to be
persistent by using OS features, not described here.
• The iptables owner module only matches group membership if the group is the primary
group of a given local user. Other groups are not matched.
If you are using FreeBSD or OpenBSD, you can also consider using PF or IPFW. The following examples
limit access to the instance metadata service to just the root user.
PF
block out inet proto tcp from any to 169.254.169.254
pass out inet proto tcp from any to 169.254.169.254 user root
IPFW
Note
The order of the PF and IPFW rules matters. PF defaults to the last matching rule, and IPFW
defaults to the first matching rule.
• User data must be base64-encoded. The Amazon EC2 console can perform the base64-encoding for
you or accept base64-encoded input.
• User data is limited to 16 KB, in raw form, before it is base64-encoded. The size of a string of length n
after base64-encoding is ceil(n/3)*4.
• User data must be base64-decoded when you retrieve it. If you retrieve the data using instance
metadata or the console, it's decoded for you automatically.
• User data is treated as opaque data: what you give is what you get back. It is up to the instance to be
able to interpret it.
• If you stop an instance, modify its user data, and start the instance, the updated user data is not run
when you start the instance.
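The ceil(n/3)*4 size formula above can be checked directly against base64 output; in integer shell arithmetic, ceil(n/3) is (n + 2) / 3. The sample string is arbitrary.

```shell
s="user-data-example"                      # 17 bytes of raw user data
n=$(printf %s "$s" | wc -c)
predicted=$(( (n + 2) / 3 * 4 ))           # integer form of ceil(n/3)*4
actual=$(printf %s "$s" | base64 | tr -d '\n' | wc -c)
echo "n=$n predicted=$predicted actual=$actual"
```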
To retrieve user data from within a running instance, use the following URI.
http://169.254.169.254/latest/user-data
A request for user data returns the data as it is (content type application/octet-stream).
This example returns user data that was provided as comma-separated text.
IMDSv2
IMDSv1
IMDSv2
IMDSv1
To retrieve user data for an instance from your own computer, see User data and the AWS CLI (p. 708).
http://169.254.169.254/latest/dynamic/
Note
The examples in this section use the IPv4 address of the instance metadata service:
169.254.169.254. If you are retrieving instance metadata for EC2 instances over the IPv6
address, ensure that you enable and use the IPv6 address instead: fd00:ec2::254. The IPv6
address of the instance metadata service is compatible with IMDSv2 commands. The IPv6
address is only accessible on Instances built on the Nitro System (p. 232).
This example shows how to retrieve the high-level instance identity categories.
IMDSv2
IMDSv1
For more information about dynamic data and examples of how to retrieve it, see Instance identity
documents (p. 740).
When Amazon EC2 releases a new instance metadata category, the instance metadata for the new
category might not be available for existing instances. With instances built on the Nitro system (p. 232),
you can retrieve instance metadata only for the categories that were available at launch. For instances
with the Xen hypervisor, you can stop and then start (p. 622) the instance to update the categories that
are available for the instance.
The following table lists the categories of instance metadata. Some of the category names include
placeholders for data that is unique to your instance. For example, mac represents the MAC address
for the network interface. You must replace the placeholders with actual values when you retrieve the
instance metadata.
fws/instance-monitoring — Value showing whether the customer has enabled detailed one-minute
monitoring in CloudWatch. Valid values: enabled | disabled. Introduced in version 2009-04-04.
Alice wants to launch four instances of her favorite database AMI, with the first acting as the original
instance and the remaining three acting as replicas. When she launches them, she wants to add user
data about the replication strategy for each replica. She is aware that this data will be available to all
four instances, so she needs to structure the user data in a way that allows each instance to recognize
which parts are applicable to it. She can do this using the ami-launch-index instance metadata value,
which will be unique for each instance. If she starts more than one instance at the same time, the ami-
launch-index indicates the order in which the instances were launched. The value of the first instance
launched is 0.
Alice launches four instances using the run-instances command, specifying the user data.
After they're launched, all instances have a copy of the user data and the common metadata shown here:
Instance 1
Metadata Value
instance-id i-1234567890abcdef0
ami-launch-index 0
public-hostname ec2-203-0-113-25.compute-1.amazonaws.com
public-ipv4 67.202.51.223
local-hostname ip-10-251-50-12.ec2.internal
local-ipv4 10.251.50.35
Instance 2
Metadata Value
instance-id i-0598c7d356eba48d7
ami-launch-index 1
public-hostname ec2-67-202-51-224.compute-1.amazonaws.com
public-ipv4 67.202.51.224
local-hostname ip-10-251-50-36.ec2.internal
local-ipv4 10.251.50.36
Instance 3
Metadata Value
instance-id i-0ee992212549ce0e7
ami-launch-index 2
public-hostname ec2-67-202-51-225.compute-1.amazonaws.com
public-ipv4 67.202.51.225
local-hostname ip-10-251-50-37.ec2.internal
local-ipv4 10.251.50.37
Instance 4
Metadata Value
instance-id i-1234567890abcdef0
ami-launch-index 3
public-hostname ec2-67-202-51-226.compute-1.amazonaws.com
public-ipv4 67.202.51.226
local-hostname ip-10-251-50-38.ec2.internal
local-ipv4 10.251.50.38
Alice can use the ami-launch-index value to determine which portion of the user data is applicable to
a particular instance.
1. She connects to one of the instances, and retrieves the ami-launch-index for that instance to
ensure it is one of the replicas:
IMDSv2
For the following steps, the IMDSv2 requests use the stored token from the preceding IMDSv2
command, assuming the token has not expired.
IMDSv1
IMDSv1
IMDSv1
4. Finally, Alice uses the cut command to extract the portion of the user data that is applicable to that
instance.
IMDSv2
IMDSv1
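Step 4 can be sketched as follows. The user data string and launch index are hypothetical stand-ins for the values retrieved in the earlier steps; note that cut fields are 1-based while ami-launch-index starts at 0.

```shell
# Hypothetical comma-separated user data, one field per instance.
user_data="replicate-origin,replicate-once,replicate-twice,replicate-all"
ami_launch_index=2   # hypothetical value retrieved in step 1

# Field number = launch index + 1, because cut counts fields from 1.
portion=$(printf %s "$user_data" | cut -d, -f $(( ami_launch_index + 1 )))
echo "$portion"
```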
The instance identity document is generated when the instance is stopped and started, restarted, or
launched. The instance identity document is exposed (in plaintext JSON format) through the Instance
Metadata Service. The IPv4 address 169.254.169.254 is a link-local address and is valid only from the
instance. For more information, see Link-local address on Wikipedia. The IPv6 address fd00:ec2::254
is a unique local address and is valid only from the instance. For more information, see Unique local
address on Wikipedia.
Note
The examples in this section use the IPv4 address of the instance metadata service:
169.254.169.254. If you are retrieving instance metadata for EC2 instances over the IPv6
address, ensure that you enable and use the IPv6 address instead: fd00:ec2::254. The IPv6
address of the instance metadata service is compatible with IMDSv2 commands. The IPv6
address is only accessible on Instances built on the Nitro System (p. 232).
You can retrieve the instance identity document from a running instance at any time. The instance
identity document includes the following information:
Data Description
devpayProductCodes Deprecated.
marketplaceProductCodes The AWS Marketplace product code of the AMI used to launch the instance.
pendingTime The date and time that the instance was launched.
architecture The architecture of the AMI used to launch the instance (i386 | x86_64 | arm64).
ramdiskId The ID of the RAM disk associated with the instance, if applicable.
Connect to the instance and run one of the following commands depending on the Instance Metadata
Service (IMDS) version used by the instance.
IMDSv2
IMDSv1
$ curl http://169.254.169.254/latest/dynamic/instance-identity/document
{
"devpayProductCodes" : null,
"marketplaceProductCodes" : [ "1abc2defghijklm3nopqrs4tu" ],
"availabilityZone" : "us-west-2b",
"privateIp" : "10.158.112.84",
"version" : "2017-09-30",
"instanceId" : "i-1234567890abcdef0",
"billingProducts" : null,
"instanceType" : "t2.micro",
"accountId" : "123456789012",
"imageId" : "ami-5fb8c835",
"pendingTime" : "2016-11-19T16:32:11Z",
"architecture" : "x86_64",
"kernelId" : null,
"ramdiskId" : null,
"region" : "us-west-2"
}
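Individual fields can be pulled out of the identity document. The following uses a fragment of the sample JSON above as a stand-in for the live curl output; a real JSON parser such as jq is preferable when available, and sed is used here only to keep the sketch dependency-free.

```shell
# Stand-in for: curl -s http://169.254.169.254/latest/dynamic/instance-identity/document
doc='{ "region" : "us-west-2", "instanceId" : "i-1234567890abcdef0" }'

# Extract the value of the "region" key.
region=$(printf %s "$doc" | sed -n 's/.*"region"[^"]*"\([^"]*\)".*/\1/p')
echo "$region"
```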
The plaintext instance identity document is accompanied by three hashed and encrypted signatures. You
can use these signatures to verify the origin and authenticity of the instance identity document and the
information that it includes. The following signatures are provided:
Each signature is available at a different endpoint in the instance metadata. You can use any one of these
signatures depending on your hashing and encryption requirements. To verify the signatures, you must
use the corresponding AWS public certificate.
The following topics provide detailed steps for validating the instance identity document using each
signature.
• Use the PKCS7 signature to verify the instance identity document (p. 742)
• Use the base64-encoded signature to verify the instance identity document (p. 745)
• Use the RSA-2048 signature to verify the instance identity document (p. 749)
To verify the instance identity document using the PKCS7 signature and the AWS DSA public
certificate
IMDSv2
IMDSv1
3. Add the contents of the instance identity document from the instance metadata to a file named
document. Use one of the following commands depending on the IMDS version used by the
instance.
IMDSv2
IMDSv1
$ curl -s http://169.254.169.254/latest/dynamic/instance-identity/document
>> document
4. Add the AWS DSA public certificate to a new file named certificate. Use one of the following
commands depending on the Region of your instance.
The following AWS public certificate is for all AWS Regions, except Hong Kong, Bahrain, China,
and GovCloud.
The AWS public certificate for the Hong Kong Region is as follows.
Bahrain Region
12AztQ8bFWsrTgTzPE3p6U5ckcgV1TAJBgcqhkjOOAQDAy8AMCwCFB2NZGWm5EDl
86ayV3c1PEDukgQIAhQow38rQkN/VwHVeSW9DqEshXHjuQ==
-----END CERTIFICATE-----" >> certificate
The AWS public certificate for the Cape Town Region is as follows.
Milan Region
China Regions
The AWS public certificate for the China (Beijing) and China (Ningxia) Regions is as follows.
gsw9+QSnEJeYWnmivJWOBdn9CyDpN7cpHVmeGgNJL2fvImWyWe2f2Kq/BL9l7N7C
P2ZT52/sH9orlck1n2zO8xPi7MItgPHQwu3OxsGQsAdWucdxjHGtdchulpo1uJ31
jsTAPKZ3p1/sxPXBBAgBMatPHhRBqhwHO/Twm4J3GmTLWN7oVDds4W3bPKQfnw3r
vtBj/SM4/IgQ3xJslFcl90TZbQbgxIi88R/gWTbs7GsyT2PzstU30yLdJhKfdZKz
/aIzraHvoDTWFaOdy0+OOaECAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAdSzN2+0E
V1BfR3DPWJHWRf1b7zl+1X/ZseW2hYE5r6YxrLv+1VPf/L5I6kB7GEtqhZUqteY7
zAceoLrVu/7OynRyfQetJVGichaaxLNM3lcr6kcxOowb+WQQ84cwrB3keykH4gRX
KHB2rlWSxta+2panSEO1JX2q5jhcFP90rDOtZjlpYv57N/Z9iQ+dvQPJnChdq3BK
5pZlnIDnVVxqRike7BFy8tKyPj7HzoPEF5mh9Kfnn1YoSVu+61lMVv/qRjnyKfS9
c96nE98sYFj0ZVBzXw8Sq4Gh8FiVmFHbQp1peGC19idOUqxPxWsasWxQXO0azYsP
9RyWLHKxH1dMuA==
-----END CERTIFICATE-----" >> certificate
GovCloud Regions
The AWS public certificate for the AWS GovCloud Regions is as follows.
5. Use the OpenSSL smime command to verify the signature. Include the -verify option to indicate
that the signature needs to be verified, and the -noverify option to indicate that the certificate
does not need to be verified.
$ openssl smime -verify -in pkcs7 -inform PEM -content document -certfile certificate -
noverify
If the signature is valid, the Verification successful message appears. If the signature cannot
be verified, contact AWS Support.
To validate the instance identity document using the base64-encoded signature and the AWS
RSA public certificate
IMDSv2
IMDSv1
$ curl -s http://169.254.169.254/latest/dynamic/instance-identity/signature |
base64 -d >> signature
3. Retrieve the plaintext instance identity document from the instance metadata and add it to a file
named document. Use one of the following commands depending on the IMDS version used by the
instance.
IMDSv2
IMDSv1
$ curl -s http://169.254.169.254/latest/dynamic/instance-identity/document
>> document
4. Add the AWS RSA public certificate to a new file named certificate. Use one of the following
commands, depending on the Region of your instance.
The following AWS public certificate is for all AWS Regions, except Hong Kong, Bahrain, China,
and GovCloud.
The AWS public certificate for the Hong Kong Region is as follows.
Bahrain Region
The AWS public certificate for the Cape Town Region is as follows.
Milan Region
China Regions
The AWS public certificate for the China (Beijing) and China (Ningxia) Regions is as follows.
GovCloud Regions
The AWS public certificate for the AWS GovCloud Regions is as follows.
5. Extract the public key from the AWS RSA public certificate and save it to a file named key.
$ openssl x509 -pubkey -noout -in certificate >> key
6. Use the OpenSSL dgst command to verify the instance identity document.
$ openssl dgst -sha256 -verify key -signature signature document
If the signature is valid, the Verified OK message appears. If the signature cannot be verified,
contact AWS Support.
To verify the instance identity document using the RSA-2048 signature and the AWS
RSA-2048 public certificate
2. Retrieve the RSA-2048 signature from the instance metadata and add it to a file named rsa2048.
Use one of the following commands depending on the IMDS version used by the instance.
IMDSv2
$ TOKEN=`curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/rsa2048 >> rsa2048
IMDSv1
$ curl -s http://169.254.169.254/latest/dynamic/instance-identity/rsa2048 >> rsa2048
3. Add the contents of the instance identity document from the instance metadata to a file named
document. Use one of the following commands depending on the IMDS version used by the
instance.
IMDSv2
$ TOKEN=`curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/document >> document
IMDSv1
$ curl -s http://169.254.169.254/latest/dynamic/instance-identity/document >> document
4. Add the AWS RSA-2048 public certificate for your Region to a new file named certificate. Use
one of the following commands depending on the Region of your instance.
• Northern Virginia
• Ohio
• Oregon
• Northern California
• Canada (Central)
YXpvbiBXZWIgU2VydmljZXMgTExDMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAhDUh6j1ACSt057nSxAcwMaGr8Ez87VA2RW2HyY8l9XoHndnxmP50Cqld
+26AJtltlqHpI1YdtnZ6OrVgVhXcVtbvte0lZ3ldEzC3PMvmISBhHs6A3SWHA9ln
InHbToLX/SWqBHLOX78HkPRaG2k0COHpRy+fG9gvz8HCiQaXCbWNFDHZev9OToNI
xhXBVzIa3AgUnGMalCYZuh5AfVRCEeALG60kxMMC8IoAN7+HG+pMdqAhJxGUcMO0
LBvmTGGeWhi04MUZWfOkwn9JjQZuyLg6B1OD4Y6s0LB2P1MovmSJKGY4JcF8Qu3z
xxUbl7Bh9pvzFR5gJN1pjM2n3gJEPwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQAJ
UNKM+gIIHNk0G0tzv6vZBT+o/vt+tIp8lEoZwaPQh1121iw/I7ZvhMLAigx7eyvf
IxUt9/nf8pxWaeGzi98RbSmbap+uxYRynqe1p5rifTamOsguuPrhVpl12OgRWLcT
rjg/K60UMXRsmg2w/cxV45pUBcyVb5h6Op5uEVAVq+CVns13ExiQL6kk3guG4+Yq
LvP1p4DZfeC33a2Rfre2IHLsJH5D4SdWcYqBsfTpf3FQThH0l0KoacGrXtsedsxs
9aRd7OzuSEJ+mBxmzxSjSwM84Ooh78DjkdpQgv967p3d+8NiSLt3/n7MgnUy6WwB
KtDujDnB+ttEHwRRngX7
-----END CERTIFICATE-----" >> certificate
• São Paulo
• Frankfurt
BgNVHQ4EFgQUxC2l6pvJaRflgu3MUdN6zTuP6YcwgY4GA1UdIwSBhjCBg4AUxC2l
6pvJaRflgu3MUdN6zTuP6YehYKReMFwxCzAJBgNVBAYTAlVTMRkwFwYDVQQIExBX
YXNoaW5ndG9uIFN0YXRlMRAwDgYDVQQHEwdTZWF0dGxlMSAwHgYDVQQKExdBbWF6
b24gV2ViIFNlcnZpY2VzIExMQ4IJAKD+v6LeR/WrMBIGA1UdEwEB/wQIMAYBAf8C
AQAwDQYJKoZIhvcNAQELBQADggEBAIK+DtbUPppJXFqQMv1f2Gky5/82ZwgbbfXa
HBeGSii55b3tsyC3ZW5ZlMJ7Dtnr3vUkiWbV1EUaZGOUlndUFtXUMABCb/coDndw
CAr53XTv7UwGVNe/AFO/6pQDdPxXn3xBhF0mTKPrOGdvYmjZUtQMSVb9lbMWCFfs
w+SwDLnm5NF4yZchIcTs2fdpoyZpOHDXy0xgxO1gWhKTnYbaZOxkJvEvcckxVAwJ
obF8NyJla0/pWdjhlHafEXEN8lyxyTTyOa0BGTuYOBD2cTYYynauVKY4fqHUkr3v
Z6fboaHEd4RFamShM8uvSu6eEFD+qRmvqlcodbpsSOhuGNLzhOQ=
-----END CERTIFICATE-----" >> certificate
• London
• Paris
• Ireland
OTA2MTlaGA8yMTk1MDQwMzA5MDYxOVowXDELMAkGA1UEBhMCVVMxGTAXBgNVBAgT
EFdhc2hpbmd0b24gU3RhdGUxEDAOBgNVBAcTB1NlYXR0bGUxIDAeBgNVBAoTF0Ft
YXpvbiBXZWIgU2VydmljZXMgTExDMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAjE7nVu+aHLtzp9FYV25Qs1mvJ1JXD7J0iQ1Gs/RirW9a5ZECCtc4ssnf
zQHq2JRVr0GRchvDrbm1HaP/avtFQR/Thvfltwu9AROVT22dUOTvERdkNzveoFCy
hf52Rqf0DMrLXG8ZmQPPXPDFAv+sVMWCDftcChxRYZ6mP9O+TpgYNT1krD5PdvJU
7HcXrkNHDYqbsg8A+Mu2hzl0QkvUET83Csg1ibeK54HP9w+FsD6F5W+6ZSHGJ88l
FI+qYKs7xsjJQYgXWfEt6bbckWs1kZIaIOyMzYdPF6ClYzEec/UhIe/uJyUUNfpT
VIsI5OltBbcPF4c7Y20jOIwwI2SgOQIDAQABo4HUMIHRMAsGA1UdDwQEAwIHgDAd
BgNVHQ4EFgQUF2DgPUZivKQR/Zl8mB/MxIkjZDUwgY4GA1UdIwSBhjCBg4AUF2Dg
PUZivKQR/Zl8mB/MxIkjZDWhYKReMFwxCzAJBgNVBAYTAlVTMRkwFwYDVQQIExBX
YXNoaW5ndG9uIFN0YXRlMRAwDgYDVQQHEwdTZWF0dGxlMSAwHgYDVQQKExdBbWF6
b24gV2ViIFNlcnZpY2VzIExMQ4IJAOrmqHuaUt0vMBIGA1UdEwEB/wQIMAYBAf8C
AQAwDQYJKoZIhvcNAQELBQADggEBAGm6+57W5brzJ3+T8/XsIdLTuiBSe5ALgSqI
qnO5usUKAeQsa+kZIJPyEri5i8LEodh46DAF1RlXTMYgXXxl0YggX88XPmPtok17
l4hib/D9/lu4IaFIyLzYNSzsETYWKWoGVe7ZFz60MTRTwY2u8YgJ5dec7gQgPSGj
avB0vTIgoW41G58sfw5b+wjXCsh0nROon79RcQFFhGnvup0MZ+JbljyhZUYFzCli
31jPZiKzqWa87xh2DbAyvj2KZrZtTe2LQ48Z4G8wWytJzxEeZdREe4NoETf+Mu5G
4CqoaPR05KWkdNUdGNwXewydb3+agdCgfTs+uAjeXKNdSpbhMYg=
-----END CERTIFICATE-----" >> certificate
• Milan
• Stockholm
FEyxIdEjoeO1jhTsck3R
-----END CERTIFICATE-----" >> certificate
• Bahrain
• Cape Town
• Sydney
Ma5IRGj4YbRmJkBybw+AAV9Icb5LJNOMWPi34OWM+2tMh+8L234v/JA6ogpdPuDr
sM6YFHMZ0NWo58MQ0FnEj2D7H58Ti//vFPl0TaaPWaAIRF85zBiJtKcFJ6vPidqK
f2/SDuAvZmyHC8ZBHg1moX9bR5FsU3QazfbW+c+JzAQWHj2AaQrGSCITxCMlS9sJ
l51DeoZBjnx8cnRe+HCaC4YoRBiqIQIDAQABo4HUMIHRMAsGA1UdDwQEAwIHgDAd
BgNVHQ4EFgQU/wHIo+r5U31VIsPoWoRVsNXGxowwgY4GA1UdIwSBhjCBg4AU/wHI
o+r5U31VIsPoWoRVsNXGxoyhYKReMFwxCzAJBgNVBAYTAlVTMRkwFwYDVQQIExBX
YXNoaW5ndG9uIFN0YXRlMRAwDgYDVQQHEwdTZWF0dGxlMSAwHgYDVQQKExdBbWF6
b24gV2ViIFNlcnZpY2VzIExMQ4IJAL2bOgb+dq9rMBIGA1UdEwEB/wQIMAYBAf8C
AQAwDQYJKoZIhvcNAQELBQADggEBACobLvj8IxlQyORTz/9q7/VJL509/p4HAeve
92riHp6+Moi0/dSEYPeFTgdWB9W3YCNc34Ss9TJq2D7t/zLGGlbI4wYXU6VJjL0S
hCjWeIyBXUZOZKFCb0DSJeUElsTRSXSFuVrZ9EAwjLvHni3BaC9Ve34iP71ifr75
8Tpk6PEj0+JwiijFH8E4GhcV5chB0/iooU6ioQqJrMwFYnwo1cVZJD5v6D0mu9bS
TMIJLJKv4QQQqPsNdjiB7G9bfkB6trP8fUVYLHLsVlIy5lGx+tgwFEYkG1N8IOO/
2LCawwaWm8FYAFd3IZl04RImNs/IMG7VmH1bf4swHOBHgCN1uYo=
-----END CERTIFICATE-----" >> certificate
• Tokyo
• Seoul
• Osaka
• Mumbai
• Hong Kong
lMZoQXjffkVZZ97J7RNDW06oB7kj3WVE8a7U4WEOfnO/CbMUf/x99CckNDwpjgW+
K8V8SzAsQDvYZs2KaE+18GFfLVF1TGUYK2rPSZMHyX+v/TIlc/qUceBycrIQ/kke
jDFsihUMLqgmOV2hXKUpIsmiWMGrFQV4AeV0iXP8L/ZhcepLf1t5SbsGdUA3AUY1
3If8s81uTheiQjwY5t9nM0SY/1Th/tL3+RaEI79VNEVfG1FQ8mgqCK0ar4m0oZJl
tmmEJM7xeURdpBBx36Di
-----END CERTIFICATE-----" >> certificate
• Singapore
• Ningxia
• Beijing
EFdhc2hpbmd0b24gU3RhdGUxEDAOBgNVBAcTB1NlYXR0bGUxIDAeBgNVBAoTF0Ft
YXpvbiBXZWIgU2VydmljZXMgTExDMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAvVBz+WQNdPiM9S+aUULOQEriTmNDUrjLWLr7SfaOJScBzis5D5ju0jh1
+qJdkbuGKtFX5OTWTm8pWhInX+hIOoS3exC4BaANoa1A3o6quoG+Rsv72qQf8LLH
sgEi6+LMlCN9TwnRKOToEabmDKorss4zFl7VSsbQJwcBSfOcIwbdRRaW9Ab6uJHu
79L+mBR3Ea+G7vSDrVIA8goAPkae6jY9WGw9KxsOrcvNdQoEkqRVtHo4bs9fMRHU
Etphj2gh4ObXlFN92VtvzD6QBs3CcoFWgyWGvzg+dNG5VCbsiiuRdmii3kcijZ3H
Nv1wCcZoEAqH72etVhsuvNRC/xAP8wIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQA8
ezx5LRjzUU9EYWYhyYIEShFlP1qDHs7F4L46/5lc4pL8FPoQm5CZuAF31DJhYi/b
fcV7i3n++/ymQbCLC6kAg8DUB7NrcROll5ag8d/JXGzcTCnlDXLXx1905fPNa+jI
0q5quTmdmiSi0taeaKZmyUdhrB+a7ohWdSdlokEIOtbH1P+g5yll3bI2leYE6Tm8
LKbyfK/532xJPqO9abx4Ddn89ZEC6vvWVNDgTsxERg992Wi+/xoSw3XxkgAryIv1
zQ4dQ6irFmXwCWJqc6kHg/M5W+z60S/94+wGTXmp+19U6Rkq5jVMLh16XJXrXwHe
4KcgIS/aQGVgjM6wivVA
-----END CERTIFICATE-----" >> certificate
5. Use the OpenSSL smime command to verify the signature. Include the -verify option to indicate
that the signature needs to be verified, and the -noverify option to indicate that the certificate
does not need to be verified.
$ openssl smime -verify -in rsa2048 -inform PEM -content document -certfile certificate -noverify
If the signature is valid, the Verification successful message appears. If the signature cannot
be verified, contact AWS Support.
Amazon Elastic Inference
Amazon EI distributes model operations defined by TensorFlow, Apache MXNet, and the Open Neural
Network Exchange (ONNX) format through MXNet between low-cost, DL inference accelerators and the
CPU of the instance.
For more information about Amazon Elastic Inference, see the Amazon EI Developer Guide.
For information about identifying Windows instances, see Identify EC2 Windows instances in the Amazon
EC2 User Guide for Windows Instances.
Inspect the system UUID
$ sudo dmidecode --string system-uuid
In the following example output, the UUID starts with "EC2", which indicates that the system is probably
an EC2 instance.
EC2E1916-9099-7CAF-FD21-012345ABCDEF
For instances that use SMBIOS 2.4, the UUID might be represented in little-endian format; for example:
45E12AEC-DCD1-B213-94ED-012345ABCDEF
Alternatively, for instances built on the Nitro system, you can use the following command:
$ cat /sys/devices/virtual/dmi/id/board_asset_tag
If the output is an instance ID, as in the following example output, the system is an EC2 instance:
i-0af01c0123456789a
Example: Get the UUID from the hypervisor (PV AMIs only)
Use the following command to get the UUID from the hypervisor:
$ cat /sys/hypervisor/uuid
In the following example output, the UUID starts with "ec2", which indicates that the system is probably
an EC2 instance.
ec2e1916-9099-7caf-fd21-012345abcdef
EC2 Fleet
Topics
• EC2 Fleet (p. 762)
• Spot Fleet (p. 822)
• Monitor fleet events using Amazon EventBridge (p. 875)
• Tutorials for EC2 Fleet and Spot Fleet (p. 889)
• Example configurations for EC2 Fleet and Spot Fleet (p. 900)
• Fleet quotas (p. 924)
EC2 Fleet
An EC2 Fleet contains the configuration information to launch a fleet—or group—of instances. In a single
API call, a fleet can launch multiple instance types across multiple Availability Zones, using the On-
Demand Instance, Reserved Instance, and Spot Instance purchasing options together. Using EC2 Fleet,
you can:
• Define separate On-Demand and Spot capacity targets and the maximum amount you’re willing to pay
per hour
• Specify the instance types that work best for your applications
• Specify how Amazon EC2 should distribute your fleet capacity within each purchasing option
You can also set a maximum amount per hour that you’re willing to pay for your fleet, and EC2 Fleet
launches instances until it reaches the maximum amount. When the maximum amount you're willing to
pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity.
The EC2 Fleet attempts to launch the number of instances that are required to meet the target capacity
specified in your request. If you specified a total maximum price per hour, it fulfills the capacity until it
reaches the maximum amount that you’re willing to pay. The fleet can also attempt to maintain its target
Spot capacity if your Spot Instances are interrupted. For more information, see How Spot Instances
work (p. 429).
You can specify an unlimited number of instance types per EC2 Fleet. Those instance types can be
provisioned using both On-Demand and Spot purchasing options. You can also specify multiple
Availability Zones, specify different maximum Spot prices for each instance, and choose additional
Spot options for each fleet. Amazon EC2 uses the specified options to provision capacity when the fleet
launches.
While the fleet is running, if Amazon EC2 reclaims a Spot Instance because of a price increase or instance
failure, EC2 Fleet can try to replace the instances with any of the instance types that you specify. This
makes it easier to regain capacity during a spike in Spot pricing. You can develop a flexible and elastic
resourcing strategy for each fleet. For example, within specific fleets, your primary capacity can be On-
Demand supplemented with less-expensive Spot capacity if available.
If you have Reserved Instances and you specify On-Demand Instances in your fleet, EC2 Fleet uses your
Reserved Instances. For example, if your fleet specifies an On-Demand Instance as c4.large, and you
have Reserved Instances for c4.large, you receive the Reserved Instance pricing.
There is no additional charge for using EC2 Fleet. You pay only for the EC2 instances that the fleet
launches for you.
Contents
• EC2 Fleet limitations (p. 763)
• Burstable performance instances (p. 763)
• EC2 Fleet request types (p. 764)
• EC2 Fleet configuration strategies (p. 782)
• Work with EC2 Fleets (p. 805)
Unlimited mode is suitable for burstable performance Spot Instances only if the instance runs long
enough to accrue CPU credits for bursting. Otherwise, paying for surplus credits makes burstable
performance Spot Instances more expensive than using other instances. For more information, see When
to use unlimited mode versus fixed CPU (p. 261).
Launch credits are meant to provide a productive initial launch experience for T2 instances by providing
sufficient compute resources to configure the instance. Repeated launches of T2 instances to access new
launch credits is not permitted. If you require sustained CPU, you can earn credits (by idling over some
period), use Unlimited mode (p. 260) for T2 Spot Instances, or use an instance type with dedicated CPU.
EC2 Fleet request types
instant
If you configure the request type as instant, EC2 Fleet places a synchronous one-time request for
your desired capacity. In the API response, it returns the instances that launched, along with errors
for those instances that could not be launched. For more information, see Use an EC2 Fleet of type
'instant' (p. 764).
request
If you configure the request type as request, EC2 Fleet places an asynchronous one-time request
for your desired capacity. Thereafter, if capacity is diminished because of Spot interruptions, the
fleet does not attempt to replenish Spot Instances, nor does it submit requests in alternative Spot
capacity pools if capacity is unavailable.
maintain
(Default) If you configure the request type as maintain, EC2 Fleet places an asynchronous request
for your desired capacity, and maintains capacity by automatically replenishing any interrupted Spot
Instances.
All three types of requests benefit from an allocation strategy. For more information, see Allocation
strategies for Spot Instances (p. 783).
For workloads that need a launch-only API to launch EC2 instances, you can use the RunInstances API.
However, with RunInstances, you can only launch On-Demand Instances or Spot Instances, but not
both in the same request. Furthermore, when you use RunInstances to launch Spot Instances, your Spot
Instance request is limited to one instance type and one Availability Zone. This targets a single Spot
capacity pool (a set of unused instances with the same instance type and Availability Zone). If the Spot
capacity pool does not have sufficient Spot Instance capacity for your request, the RunInstances call fails.
Instead of using RunInstances to launch Spot Instances, we recommend that you use the
CreateFleet API with the type parameter set to instant, for the following benefits:
• Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch On-
Demand Instances, Spot Instances, or both. The request for Spot Instances is fulfilled if there is
available capacity and the maximum price per hour for your request exceeds the Spot price.
• Increase the availability of Spot Instances. By using an EC2 Fleet of type instant, you can launch
Spot Instances following Spot best practices with the resulting benefits:
• Spot best practice: Be flexible about instance types and Availability Zones.
Benefit: By specifying several instance types and Availability Zones, you increase the number of Spot
capacity pools. This gives the Spot service a better chance of finding and allocating your desired
Spot compute capacity. A good rule of thumb is to be flexible across at least 10 instance types for
each workload and make sure that all Availability Zones are configured for use in your VPC.
• Spot best practice: Use the capacity-optimized allocation strategy.
Benefit: The capacity-optimized allocation strategy automatically provisions instances from the
most-available Spot capacity pools. Because your Spot Instance capacity is sourced from pools with
optimal capacity, this decreases the possibility that your Spot Instances will be interrupted when
Amazon EC2 needs the capacity back.
• Get access to a wider set of capabilities. For workloads that need a launch-only API, and where you
prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2
Fleet of type instant instead of the RunInstances API. EC2 Fleet provides a wider set of capabilities
than RunInstances, as demonstrated in the following examples. For all other workloads, you should
use Amazon EC2 Auto Scaling because it supplies a more comprehensive feature set for a wide variety
of workloads, like ELB-backed applications, containerized workloads, and queue processing jobs.
AWS services like Amazon EC2 Auto Scaling and Amazon EMR use EC2 Fleet of type instant to launch EC2
instances.
1. Configure the CreateFleet request type as instant. For more information, see Create an EC2
Fleet (p. 812). Note that after you make the API call, you can't modify it.
2. When you make the API call, EC2 Fleet places a synchronous one-time request for your desired
capacity.
3. The API response lists the instances that launched, along with errors for those instances that could not
be launched.
4. You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history
of your EC2 Fleet.
5. After your instances have launched, you can delete the fleet request. When deleting the fleet request,
you can also choose to terminate the associated instances, or leave them running.
6. You can terminate the instances at any time.
Examples
The following examples show how to use EC2 Fleet of type instant for different use cases. For more
information about using the EC2 CreateFleet API parameters, see CreateFleet in the Amazon EC2 API
Reference.
Examples
• Example 1: Launch Spot Instances with the capacity-optimized allocation strategy (p. 766)
• Example 2: Launch a single Spot Instance with the capacity-optimized allocation strategy (p. 767)
• Example 3: Launch Spot Instances using instance weighting (p. 768)
• Example 4: Launch Spot Instances within a single Availability Zone (p. 770)
• Example 5: Launch Spot Instances of a single instance type within a single Availability Zone (p. 771)
• Example 6: Launch Spot Instances only if minimum target capacity can be launched (p. 772)
• Example 7: Launch Spot Instances only if minimum target capacity can be launched of same Instance
Type in a single Availability Zone (p. 774)
• Example 8: Launch instances with multiple Launch Templates (p. 775)
• Example 9: Launch Spot Instance with a base of On-Demand Instances (p. 777)
• Example 10: Launch Spot Instances using capacity-optimized allocation strategy with a base of On-
Demand Instances using Capacity Reservations and the prioritized allocation strategy (p. 778)
• Example 11: Launch Spot Instances using capacity-optimized-prioritized allocation strategy (p. 780)
Example 1: Launch Spot Instances with the capacity-optimized allocation strategy
The following example specifies the parameters required in an EC2 Fleet of type instant: a launch
template, target capacity, default purchasing option, and launch template overrides.
• The launch template is identified by its launch template name and version number.
• The 12 launch template overrides specify 4 different instance types and 3 different subnets, each in a
separate Availability Zone. Each instance type and subnet combination defines a Spot capacity pool,
resulting in 12 Spot capacity pools.
• The target capacity for the fleet is 20 instances.
• The default purchasing option is spot, which results in the fleet attempting to launch 20 Spot
Instances into the Spot capacity pool with optimal capacity for the number of instances that are
launching.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized"
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-49e41922"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
Example 2: Launch a single Spot Instance with the capacity-optimized allocation strategy
You can optimally launch one Spot Instance at a time by making multiple EC2 Fleet API calls of type
instant and setting the TotalTargetCapacity to 1.
The following example specifies the parameters required in an EC2 Fleet of type instant: a launch
template, target capacity, default purchasing option, and launch template overrides. The launch
template is identified by its launch template name and version number. The 12 launch template
overrides have 4 different instance types and 3 different subnets, each in a separate Availability Zone.
The target capacity for the fleet is 1 instance, and the default purchasing option is spot, which results
in the fleet attempting to launch a Spot Instance from one of the 12 Spot capacity pools based on the
capacity-optimized allocation strategy, to launch a Spot Instance from the most-available capacity pool.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized"
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-49e41922"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 1,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
Example 3: Launch Spot Instances using instance weighting
The following example uses instance weighting, which means that the price is per unit hour instead of
per instance hour. Each launch configuration lists a different instance type and a different weight based
on how many units of the workload can run on the instance, assuming that a unit of the workload requires
15 GB of memory and 4 vCPUs. For example, an m5.xlarge (4 vCPUs and 16 GB of memory) can run
one unit and is weighted 1, an m5.2xlarge (8 vCPUs and 32 GB of memory) can run two units and is weighted
2, and so on. The total target capacity is set to 40 units. The default purchasing option is spot, and the
allocation strategy is capacity-optimized, which results in either 40 m5.xlarge (40 divided by 1), 20
m5.2xlarge (40 divided by 2), 10 m5.4xlarge (40 divided by 4), 5 m5.8xlarge (40 divided by 8), or a mix
of the instance types with weights adding up to the desired capacity, based on the capacity-optimized
allocation strategy.
For more information, see EC2 Fleet instance weighting (p. 803).
{
"SpotOptions":{
"AllocationStrategy":"capacity-optimized"
},
"LaunchTemplateConfigs":[
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"m5.xlarge",
"SubnetId":"subnet-fae8c380",
"WeightedCapacity":1
},
{
"InstanceType":"m5.xlarge",
"SubnetId":"subnet-e7188bab",
"WeightedCapacity":1
},
{
"InstanceType":"m5.xlarge",
"SubnetId":"subnet-49e41922",
"WeightedCapacity":1
},
{
"InstanceType":"m5.2xlarge",
"SubnetId":"subnet-fae8c380",
"WeightedCapacity":2
},
{
"InstanceType":"m5.2xlarge",
"SubnetId":"subnet-e7188bab",
"WeightedCapacity":2
},
{
"InstanceType":"m5.2xlarge",
"SubnetId":"subnet-49e41922",
"WeightedCapacity":2
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-fae8c380",
"WeightedCapacity":4
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-e7188bab",
"WeightedCapacity":4
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-49e41922",
"WeightedCapacity":4
},
{
"InstanceType":"m5.8xlarge",
"SubnetId":"subnet-fae8c380",
"WeightedCapacity":8
},
{
"InstanceType":"m5.8xlarge",
"SubnetId":"subnet-e7188bab",
"WeightedCapacity":8
},
{
"InstanceType":"m5.8xlarge",
"SubnetId":"subnet-49e41922",
"WeightedCapacity":8
}
]
}
],
"TargetCapacitySpecification":{
"TotalTargetCapacity":40,
"DefaultTargetCapacityType":"spot"
},
"Type":"instant"
}
Example 4: Launch Spot Instances within a single Availability Zone
You can configure a fleet to launch all instances in a single Availability Zone by setting the Spot option
SingleAvailabilityZone to true.
The 12 launch template overrides have different instance types and subnets (each in a separate
Availability Zone) but the same weighted capacity. The total target capacity is 20 instances, the default
purchasing option is spot, and the Spot allocation strategy is capacity-optimized. The EC2 Fleet launches
20 Spot Instances all in a single Availability Zone, from the Spot capacity pool(s) with optimal capacity,
using the launch specifications.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized",
"SingleAvailabilityZone": true
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-49e41922"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
Example 5: Launch Spot Instances of a single instance type within a single Availability Zone
You can configure a fleet to launch all instances of the same instance type and in a single Availability
Zone by setting the Spot options SingleInstanceType and SingleAvailabilityZone to true.
The 12 launch template overrides have different instance types and subnets (each in a separate
Availability Zone) but the same weighted capacity. The total target capacity is 20 instances, the default
purchasing option is spot, and the Spot allocation strategy is capacity-optimized. The EC2 Fleet launches
20 Spot Instances of the same instance type, all in a single Availability Zone, from the Spot Instance pool
with optimal capacity, using the launch specifications.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized",
"SingleInstanceType": true,
"SingleAvailabilityZone": true
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-49e41922"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
Example 6: Launch Spot Instances only if minimum target capacity can be launched
You can configure a fleet to launch instances only if the minimum target capacity can be launched
by setting the Spot option MinTargetCapacity to the minimum target capacity that you want to launch
together.
The 12 launch template overrides have different instance types and subnets (each in a separate
Availability Zone) but the same weighted capacity. The total target capacity and the minimum target
capacity are both set to 20 instances, the default purchasing option is spot, and the Spot allocation
strategy is capacity-optimized. The EC2 Fleet launches 20 Spot Instances from the Spot capacity pool
with optimal capacity using the launch template overrides, and only if it can launch all 20 instances at
the same time.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized",
"MinTargetCapacity": 20
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-49e41922"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
773
Amazon Elastic Compute Cloud
User Guide for Linux Instances
EC2 Fleet request types
Example 7: Launch Spot Instances only if minimum target capacity can be launched of same
Instance Type in a single Availability Zone
You can configure a fleet to launch instances only if the minimum target capacity can be launched
with a single instance type in a single Availability Zone by setting the Spot option MinTargetCapacity
to the minimum target capacity that you want to launch together, along with the SingleInstanceType and
SingleAvailabilityZone options.
The 12 launch specifications, which override the launch template, have different instance types and
subnets (each in a separate Availability Zone) but the same weighted capacity. The total target capacity
and the minimum target capacity are both set to 20 instances, the default purchasing option is spot, the
Spot allocation strategy is capacity-optimized, and both SingleInstanceType and SingleAvailabilityZone
are true. The EC2 Fleet launches 20 Spot Instances of the same instance type, all in a single Availability
Zone, from the Spot capacity pool with optimal capacity, using the launch specifications, and only if it
can launch all 20 instances at the same time.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized",
"SingleInstanceType": true,
"SingleAvailabilityZone": true,
"MinTargetCapacity": 20
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5d.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5.4xlarge",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5d.4xlarge",
"SubnetId":"subnet-49e41922"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
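The all-or-nothing behavior in Example 7 can be pictured as a feasibility check over the candidate pools. The pool names and capacities below are hypothetical, and this is an illustrative sketch, not the actual fleet algorithm:

```python
# Hypothetical available Spot capacity per (instance type, Availability Zone) pool.
pool_capacity = {
    ("c5.4xlarge", "us-east-1a"): 12,
    ("c5d.4xlarge", "us-east-1b"): 25,
    ("m5.4xlarge", "us-east-1c"): 18,
}
min_target_capacity = 20

# With SingleInstanceType and SingleAvailabilityZone both true, the entire
# minimum target capacity must come from one pool, or nothing launches.
viable = [pool for pool, cap in pool_capacity.items() if cap >= min_target_capacity]
if viable:
    print("launch", min_target_capacity, "instances from", viable[0])
else:
    print("launch nothing")
```

Only the c5d.4xlarge pool in us-east-1b can hold all 20 instances in this sketch, so the fleet would launch entirely from that pool, or not at all.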
You can configure a fleet to launch instances with different launch specifications for different instance
types, or for a group of instance types, by specifying multiple launch templates. In this example, we want
different EBS volume sizes for different instance types, and we have configured them in the launch
templates ec2-fleet-lt-4xl, ec2-fleet-lt-9xl, and ec2-fleet-lt-18xl.
In this example, we are using 3 different launch templates for the 3 instance types based on their size.
The launch specification overrides on all the launch templates use instance weights based on the vCPUs
of the instance type. The total target capacity is 144 units, the default purchasing option is spot, and
the Spot allocation strategy is capacity-optimized. The EC2 Fleet can launch 9 c5n.4xlarge instances
(144 divided by 16) using the launch template ec2-fleet-lt-4xl, or 4 c5n.9xlarge instances (144 divided
by 36) using the launch template ec2-fleet-lt-9xl, or 2 c5n.18xlarge instances (144 divided by 72) using
the launch template ec2-fleet-lt-18xl, or a mix of the instance types with weights adding up to the
desired capacity, based on the capacity-optimized allocation strategy.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized"
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt-18xl",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5n.18xlarge",
"SubnetId":"subnet-fae8c380",
"WeightedCapacity":72
},
{
"InstanceType":"c5n.18xlarge",
"SubnetId":"subnet-e7188bab",
"WeightedCapacity":72
},
{
"InstanceType":"c5n.18xlarge",
"SubnetId":"subnet-49e41922",
"WeightedCapacity":72
}
]
},
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt-9xl",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5n.9xlarge",
"SubnetId":"subnet-fae8c380",
"WeightedCapacity":36
},
{
"InstanceType":"c5n.9xlarge",
"SubnetId":"subnet-e7188bab",
"WeightedCapacity":36
},
{
"InstanceType":"c5n.9xlarge",
"SubnetId":"subnet-49e41922",
"WeightedCapacity":36
}
]
},
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt-4xl",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5n.4xlarge",
"SubnetId":"subnet-fae8c380",
"WeightedCapacity":16
},
{
"InstanceType":"c5n.4xlarge",
"SubnetId":"subnet-e7188bab",
"WeightedCapacity":16
},
{
"InstanceType":"c5n.4xlarge",
"SubnetId":"subnet-49e41922",
"WeightedCapacity":16
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 144,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
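The instance counts in this example follow directly from dividing the target capacity by each override's weight, rounding up so the target is always met. A short sketch of that arithmetic, using the weights from the configuration above:

```python
import math

# Weighted capacity per instance type, as in the launch template overrides above.
weights = {"c5n.4xlarge": 16, "c5n.9xlarge": 36, "c5n.18xlarge": 72}
total_target_capacity = 144

# Instances needed if the fleet were filled from a single instance type.
counts = {itype: math.ceil(total_target_capacity / w) for itype, w in weights.items()}
print(counts)  # {'c5n.4xlarge': 9, 'c5n.9xlarge': 4, 'c5n.18xlarge': 2}
```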
The following example specifies a total target capacity of 20 instances for the fleet, and a target
capacity of 5 On-Demand Instances. The default purchasing option is spot. The fleet launches
5 On-Demand Instances as specified, but needs to launch 15 more instances to fulfill the total
target capacity. The capacity for the difference is calculated as TotalTargetCapacity –
OnDemandTargetCapacity and is launched using the DefaultTargetCapacityType, which results in the
fleet launching 15 Spot Instances from one of the 12 Spot capacity pools based on the
capacity-optimized allocation strategy.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized"
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-49e41922"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-fae8c380"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-e7188bab"
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-49e41922"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"OnDemandTargetCapacity": 5,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
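The split between purchasing options described above is simple arithmetic, sketched here with the values from the example:

```python
total_target_capacity = 20
on_demand_target_capacity = 5

# Capacity not covered by the On-Demand target falls back to the
# DefaultTargetCapacityType ("spot" in this example).
spot_count = total_target_capacity - on_demand_target_capacity
print(spot_count)  # 15
```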
Example 10: Launch Spot Instances using capacity-optimized allocation strategy with a base of
On-Demand Instances using Capacity Reservations and the prioritized allocation strategy
You can configure a fleet to use On-Demand Capacity Reservations first when launching a base of
On-Demand Instances with the default target capacity type of spot by setting the usage strategy for
Capacity Reservations to use-capacity-reservations-first. If multiple instance pools have unused
Capacity Reservations, the chosen On-Demand allocation strategy is applied. In this example, the On-
Demand allocation strategy is prioritized.
In this example, there are 6 available unused Capacity Reservations. This is less than the fleet's target
On-Demand capacity of 10 On-Demand Instances.
The account has the following 6 unused Capacity Reservations in 2 pools. The number of Capacity
Reservations in each pool is indicated by AvailableInstanceCount.
{
"CapacityReservationId": "cr-111",
"InstanceType": "m5.large",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount": 3,
"InstanceMatchCriteria": "open",
"State": "active"
}
{
"CapacityReservationId": "cr-222",
"InstanceType": "c5.large",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount": 3,
"InstanceMatchCriteria": "open",
"State": "active"
}
The following fleet configuration shows only the pertinent configurations for this example. The On-
Demand allocation strategy is prioritized, and the usage strategy for Capacity Reservations is use-
capacity-reservations-first. The Spot allocation strategy is capacity-optimized. The total target capacity
is 20, the On-Demand target capacity is 10, and the default target capacity type is spot.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized"
},
"OnDemandOptions":{
"CapacityReservationOptions": {
"UsageStrategy": "use-capacity-reservations-first"
},
"AllocationStrategy":"prioritized"
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.large",
"SubnetId":"subnet-fae8c380",
"Priority": 1.0
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-e7188bab",
"Priority": 2.0
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-49e41922",
"Priority": 3.0
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-fae8c380",
"Priority": 4.0
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-e7188bab",
"Priority": 5.0
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-49e41922",
"Priority": 6.0
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-fae8c380",
"Priority": 7.0
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-e7188bab",
"Priority": 8.0
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-49e41922",
"Priority": 9.0
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-fae8c380",
"Priority": 10.0
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-e7188bab",
"Priority": 11.0
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-49e41922",
"Priority": 12.0
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"OnDemandTargetCapacity": 10,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
After you create the instant fleet using the preceding configuration, the following 20 instances are
launched to meet the target capacity:
• 7 c5.large On-Demand Instances in us-east-1a – c5.large in us-east-1a is prioritized first, and there are
3 available unused c5.large Capacity Reservations. The Capacity Reservations are used first to launch
3 On-Demand Instances, and then 4 additional On-Demand Instances are launched according to the On-
Demand allocation strategy, which is prioritized in this example.
• 3 m5.large On-Demand Instances in us-east-1a – m5.large in us-east-1a is prioritized second, and there
are 3 available unused m5.large Capacity Reservations.
• 10 Spot Instances from one of the 12 Spot capacity pools that has the optimal capacity according to
the capacity-optimized allocation strategy.
After the fleet is launched, you can run describe-capacity-reservations to see how many unused Capacity
Reservations are remaining. In this example, you should see the following response, which shows that all
of the c5.large and m5.large Capacity Reservations were used.
{
"CapacityReservationId": "cr-111",
"InstanceType": "m5.large",
"AvailableInstanceCount": 0
}
{
"CapacityReservationId": "cr-222",
"InstanceType": "c5.large",
"AvailableInstanceCount": 0
}
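The use-capacity-reservations-first behavior in this example can be modeled with a short sketch. The reservation counts and priority order mirror the example above; this is a simplified illustration, not the actual allocation algorithm:

```python
# Unused Capacity Reservations from the example (instance type -> available count).
reservations = {"c5.large": 3, "m5.large": 3}

# Instance types ordered by Priority (lowest number = highest priority).
priority_order = ["c5.large", "c5d.large", "m5.large", "m5d.large"]

on_demand_target = 10
total_target = 20

launched = {}
remaining = on_demand_target
# use-capacity-reservations-first: draw down unused reservations before
# launching regular On-Demand capacity.
for itype in priority_order:
    take = min(reservations.get(itype, 0), remaining)
    if take:
        launched[itype] = launched.get(itype, 0) + take
        remaining -= take
# The rest of the On-Demand target follows the prioritized strategy,
# so it goes to the highest-priority instance type.
if remaining:
    launched[priority_order[0]] = launched.get(priority_order[0], 0) + remaining
spot_count = total_target - on_demand_target

print(launched)    # {'c5.large': 7, 'm5.large': 3}
print(spot_count)  # 10
```

The sketch reproduces the outcome above: the reservations cover 3 c5.large and 3 m5.large instances, the remaining 4 On-Demand Instances go to the highest-priority pool, and the final 10 instances are launched as Spot.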
The following example specifies the parameters required in an EC2 Fleet of type instant: a launch
template, target capacity, default purchasing option, and launch template overrides. The launch
template is identified by its launch template name and version number. The 12 launch specifications,
which override the launch template, have 4 different instance types, each with a priority assigned, and 3
different subnets, each in a separate Availability Zone. The target capacity for the fleet is 20 instances,
and the default purchasing option is spot, which results in the fleet attempting to launch 20 Spot
Instances from one of the 12 Spot capacity pools based on the capacity-optimized-prioritized allocation
strategy, which implements priorities on a best-effort basis but optimizes for capacity first.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized-prioritized"
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification":{
"LaunchTemplateName":"ec2-fleet-lt1",
"Version":"$Latest"
},
"Overrides":[
{
"InstanceType":"c5.large",
"SubnetId":"subnet-fae8c380",
"Priority": 1.0
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-e7188bab",
"Priority": 1.0
},
{
"InstanceType":"c5.large",
"SubnetId":"subnet-49e41922",
"Priority": 1.0
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-fae8c380",
"Priority": 2.0
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-e7188bab",
"Priority": 2.0
},
{
"InstanceType":"c5d.large",
"SubnetId":"subnet-49e41922",
"Priority": 2.0
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-fae8c380",
"Priority": 3.0
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-e7188bab",
"Priority": 3.0
},
{
"InstanceType":"m5.large",
"SubnetId":"subnet-49e41922",
"Priority": 3.0
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-fae8c380",
"Priority": 4.0
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-e7188bab",
"Priority": 4.0
},
{
"InstanceType":"m5d.large",
"SubnetId":"subnet-49e41922",
"Priority": 4.0
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"DefaultTargetCapacityType": "spot"
},
"Type": "instant"
}
The EC2 Fleet attempts to launch the number of instances that are required to meet the target capacity
that you specify in the fleet request. The fleet can comprise only On-Demand Instances, only Spot
Instances, or a combination of both On-Demand Instances and Spot Instances. The request for Spot
Instances is fulfilled if there is available capacity and the maximum price per hour for your request
exceeds the Spot price. The fleet also attempts to maintain its target capacity if your Spot Instances are
interrupted.
You can also set a maximum amount per hour that you’re willing to pay for your fleet, and EC2 Fleet
launches instances until it reaches the maximum amount. When the maximum amount you're willing to
pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity.
A Spot capacity pool is a set of unused EC2 instances with the same instance type and Availability Zone.
When you create an EC2 Fleet, you can include multiple launch specifications, which vary by instance
type, Availability Zone, subnet, and maximum price. The fleet selects the Spot capacity pools that
are used to fulfill the request, based on the launch specifications included in your request, and the
configuration of the request. The Spot Instances come from the selected pools.
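The "12 Spot capacity pools" referenced throughout the examples in this guide are simply the cross product of the instance types and Availability Zones in the launch specifications, which can be sketched as:

```python
from itertools import product

instance_types = ["c5.large", "c5d.large", "m5.large", "m5d.large"]
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]

# Each (instance type, Availability Zone) pair is a distinct Spot capacity pool.
pools = list(product(instance_types, availability_zones))
print(len(pools))  # 12
```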
An EC2 Fleet enables you to provision large amounts of EC2 capacity that makes sense for your
application based on number of cores or instances, or amount of memory. For example, you can specify
an EC2 Fleet to launch a target capacity of 200 instances, of which 130 are On-Demand Instances and
the rest are Spot Instances.
Use the appropriate configuration strategies to create an EC2 Fleet that meets your needs.
Contents
• Plan an EC2 Fleet (p. 782)
• Allocation strategies for Spot Instances (p. 783)
• Attribute-based instance type selection for EC2 Fleet (p. 785)
• Configure EC2 Fleet for On-Demand backup (p. 799)
• Capacity Rebalancing (p. 800)
• Maximum price overrides (p. 802)
• Control spending (p. 802)
• EC2 Fleet instance weighting (p. 803)
• Determine whether you want to create an EC2 Fleet that submits a synchronous or asynchronous one-
time request for the desired target capacity, or one that maintains a target capacity over time. For
more information, see EC2 Fleet request types (p. 764).
lowest-price
The Spot Instances come from the Spot capacity pool with the lowest price. This is the default
strategy.
diversified
The Spot Instances are distributed across all Spot capacity pools.
capacity-optimized
The Spot Instances come from the Spot capacity pool with optimal capacity for the number of
instances that are launching. You can optionally set a priority for each instance type in your fleet
using capacity-optimized-prioritized. EC2 Fleet optimizes for capacity first, but honors
instance type priorities on a best-effort basis.
With Spot Instances, pricing changes slowly over time based on long-term trends in supply and
demand, but capacity fluctuates in real time. The capacity-optimized strategy automatically
launches Spot Instances into the most available pools by looking at real-time capacity data and
predicting which are the most available. This works well for workloads such as big data and
analytics, image and media rendering, machine learning, and high performance computing that may
have a higher cost of interruption associated with restarting work and checkpointing. By offering the
possibility of fewer interruptions, the capacity-optimized strategy can lower the overall cost of
your workload.
InstancePoolsToUseCount
The Spot Instances are distributed across the number of Spot capacity pools that you specify. This
parameter is valid only when used in combination with lowest-price.
If your fleet runs workloads that may have a higher cost of interruption associated with restarting work
and checkpointing, then use the capacity-optimized strategy. This strategy offers the possibility of
fewer interruptions, which can lower the overall cost of your workload. Use the capacity-optimized-
prioritized strategy for workloads where the possibility of disruption must be minimized and the
preference for certain instance types matters.
If your fleet is small or runs for a short time, the probability that your Spot Instances will be interrupted
is low, even with all of the instances in a single Spot capacity pool. Therefore, the lowest-price
strategy is likely to meet your needs while providing the lowest cost.
If your fleet is large or runs for a long time, you can improve the availability of your fleet by distributing
the Spot Instances across multiple pools using the diversified strategy. For example, if your EC2 Fleet
specifies 10 pools and a target capacity of 100 instances, the fleet launches 10 Spot Instances in each
pool. If the Spot price for one pool exceeds your maximum price for this pool, only 10% of your fleet is
affected. Using this strategy also makes your fleet less sensitive to increases in the Spot price in any one
pool over time. With the diversified strategy, the EC2 Fleet does not launch Spot Instances into any
pools with a Spot price that is equal to or higher than the On-Demand price.
To create a cheap and diversified fleet, use the lowest-price strategy in combination with
InstancePoolsToUseCount. You can use a low or high number of Spot capacity pools across which
to allocate your Spot Instances. For example, if you run batch processing, we recommend specifying a
low number of Spot capacity pools (for example, InstancePoolsToUseCount=2) to ensure that your
queue always has compute capacity while maximizing savings. If you run a web service, we recommend
specifying a high number of Spot capacity pools (for example, InstancePoolsToUseCount=10) to
minimize the impact if a Spot capacity pool becomes temporarily unavailable.
For On-Demand Instance target capacity, EC2 Fleet always selects the cheapest instance type based on
the public On-Demand price, while continuing to follow the allocation strategy (either lowest-price,
capacity-optimized, or diversified) for Spot Instances.
For example, if your target capacity is 10 Spot Instances, and you specify 2 Spot capacity pools (for
InstancePoolsToUseCount), EC2 Fleet will draw on the two cheapest pools to fulfill your Spot
capacity.
Note that EC2 Fleet attempts to draw Spot Instances from the number of pools that you specify on
a best effort basis. If a pool runs out of Spot capacity before fulfilling your target capacity, EC2 Fleet
will continue to fulfill your request by drawing from the next cheapest pool. To ensure that your
target capacity is met, you might receive Spot Instances from more than the number of pools that you
specified. Similarly, if most of the pools have no Spot capacity, you might receive your full target capacity
from fewer than the number of pools that you specified.
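The lowest-price strategy combined with InstancePoolsToUseCount can be sketched as ranking pools by price and spreading capacity over the cheapest N. The prices below are hypothetical, and the real fleet also accounts for available capacity and best-effort pool counts as described above:

```python
# Hypothetical Spot prices per (instance type, Availability Zone) pool, in $/hour.
pools = {
    ("c5.large", "us-east-1a"): 0.035,
    ("m5.large", "us-east-1a"): 0.033,
    ("c5.large", "us-east-1b"): 0.041,
    ("m5.large", "us-east-1b"): 0.038,
}
target_capacity = 10
instance_pools_to_use_count = 2

# lowest-price: rank pools by price and allocate across the N cheapest.
cheapest = sorted(pools, key=pools.get)[:instance_pools_to_use_count]
per_pool = target_capacity // instance_pools_to_use_count
plan = {pool: per_pool for pool in cheapest}
print(plan)  # 5 instances each in the two cheapest pools
```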
You can also express your pool priorities by using the capacity-optimized-prioritized
allocation strategy and then setting the order of instance types to use from highest to lowest priority.
Using priorities is supported only if your fleet uses a launch template. Note that when you set
priorities for capacity-optimized-prioritized, the same priorities are also applied to your
On-Demand Instances if the On-Demand AllocationStrategy is set to prioritized. For an
example configuration, see Example 10: Launch Spot Instances in a capacity-optimized fleet with
priorities (p. 912).
Attribute-based instance type selection is ideal for workloads and frameworks that can be flexible about
what instance types they use, such as when running containers or web fleets, processing big data, and
implementing continuous integration and deployment (CI/CD) tooling.
Benefits
• With so many instance types available, finding the right instance types for your workload can be
time consuming. When you specify instance attributes, the instance types will automatically have the
required attributes for your workload.
• To manually specify multiple instance types for an EC2 Fleet, you must create a separate launch
template override for each instance type. But with attribute-based instance type selection, to provide
multiple instance types, you need only specify the instance attributes in the launch template or in a
launch template override.
• When you specify instance attributes rather than instance types, your fleet can use newer generation
instance types as they’re released, "future proofing" the fleet's configuration.
• When you specify instance attributes rather than instance types, EC2 Fleet can select from a wide
range of instance types for launching Spot Instances, which adheres to the Spot best practice of
instance type flexibility (p. 428).
Topics
• How attribute-based instance type selection works (p. 786)
• Considerations (p. 787)
• Create an EC2 Fleet with attribute-based instance type selection (p. 788)
• Examples of configurations that are valid and not valid (p. 790)
• Preview instance types with specified attributes (p. 796)
Topics
• Types of instance attributes (p. 786)
• Where to configure attribute-based instance type selection (p. 786)
• How EC2 Fleet uses attribute-based instance type selection when provisioning a fleet (p. 787)
• Understand price protection (p. 787)
There are several instance attributes that you can specify to express your compute requirements. For a
description of each attribute and the default values, see InstanceRequirements in the Amazon EC2 API
Reference.
Depending on whether you use the console or the AWS CLI, you can specify the instance attributes for
attribute-based instance type selection as follows:
In the console, you can specify the instance attributes in one or both of the following fleet configuration
components:
• In a launch template, and reference the launch template in the fleet request
• In the fleet request
In the AWS CLI, you can specify the instance attributes in one or all of the following fleet configuration
components:
• In a launch template, and reference the launch template in the fleet request
• In a launch template override
If you want a mix of instances that use different AMIs, you can specify instance attributes in multiple
launch template overrides. For example, different instance types can use x86 and Arm-based
processors.
• In a launch specification
How EC2 Fleet uses attribute-based instance type selection when provisioning a fleet
• EC2 Fleet identifies the instance types that have the specified attributes.
• EC2 Fleet uses price protection to determine which instance types to exclude.
• EC2 Fleet determines the capacity pools from which it will consider launching the instances based on
the AWS Regions or Availability Zones that have matching instance types.
• EC2 Fleet applies the specified allocation strategy to determine from which capacity pools to launch
the instances.
Note that attribute-based instance type selection does not pick the capacity pools from which to
provision the fleet; that's the job of the allocation strategies. There might be a large number of
instance types with the specified attributes, and some of them might be expensive. The default
allocation strategy of lowest-price for Spot and On-Demand guarantees that EC2 Fleet will launch
instances from the least expensive capacity pools.
If you specify an allocation strategy, EC2 Fleet will launch instances according to the specified
allocation strategy.
• For Spot Instances, attribute-based instance type selection supports the capacity-optimized and
lowest-price allocation strategies.
• For On-Demand Instances, attribute-based instance type selection supports the lowest-price
allocation strategy.
• If there is no capacity for the instance types with the specified instance attributes, no instances can be
launched, and the fleet returns an error.
Price protection is a feature that prevents your EC2 Fleet from using instance types that you would
consider too expensive even if they happen to fit the attributes that you specified. When you create a
fleet with attribute-based instance type selection, price protection is enabled by default, with separate
thresholds for On-Demand Instances and Spot Instances. When Amazon EC2 selects instance types with
your attributes, it excludes instance types priced above your threshold. The thresholds represent the
maximum you'll pay, expressed as a percentage above the least expensive M, C, or R instance type with
your specified attributes.
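The price-protection filter described above can be sketched as follows. The prices and the 20 percent threshold are illustrative values, not the service defaults:

```python
# Hypothetical On-Demand prices for instance types that match the attributes.
prices = {
    "m5.large": 0.096,
    "c5.large": 0.085,
    "r5.large": 0.126,
    "z1d.large": 0.186,  # specialty type, priced well above the M/C/R baseline
}
# Baseline: the least expensive matching M, C, or R instance type.
baseline = min(p for t, p in prices.items() if t[0] in "mcr")
threshold_pct = 20  # illustrative OnDemandMaxPricePercentageOverLowestPrice

# Exclude instance types priced above the threshold over the baseline.
allowed = [t for t, p in prices.items() if p <= baseline * (1 + threshold_pct / 100)]
print(sorted(allowed))  # ['c5.large', 'm5.large']
```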
If you don't specify a threshold, a default threshold is used for each purchasing option.
Considerations
• You can specify either instance types or instance attributes in an EC2 Fleet, but not both at the same
time.
When using the CLI, the launch template overrides will override the launch template. For example,
if the launch template contains an instance type and the launch template override contains instance
attributes, the instances that are identified by the instance attributes will override the instance type in
the launch template.
• When using the CLI, when you specify instance attributes as overrides, you can't also specify weights or
priorities.
• Use the create-fleet (AWS CLI) command to create an EC2 Fleet. Specify the fleet configuration in a
JSON file.
The following JSON file contains all of the parameters that can be specified when configuring
an EC2 Fleet. The parameters for attribute-based instance type selection are located in the
InstanceRequirements structure. For a description of each attribute and the default values, see
InstanceRequirements in the Amazon EC2 API Reference.
Note
When InstanceRequirements is included in the fleet configuration, InstanceType and
WeightedCapacity must be excluded; they can't be specified in the fleet configuration at the
same time as the instance attributes.
{
"DryRun": true,
"ClientToken": "",
"SpotOptions": {
"AllocationStrategy": "capacity-optimized",
"MaintenanceStrategies": {
"CapacityRebalance": {
"ReplacementStrategy": "launch"
}
},
"InstanceInterruptionBehavior": "stop",
"InstancePoolsToUseCount": 0,
"SingleInstanceType": true,
"SingleAvailabilityZone": true,
"MinTargetCapacity": 0,
"MaxTotalPrice": ""
},
"OnDemandOptions": {
"AllocationStrategy": "prioritized",
"CapacityReservationOptions": {
"UsageStrategy": "use-capacity-reservations-first"
},
"SingleInstanceType": true,
"SingleAvailabilityZone": true,
"MinTargetCapacity": 0,
"MaxTotalPrice": ""
},
"ExcessCapacityTerminationPolicy": "no-termination",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "",
"LaunchTemplateName": "",
"Version": ""
},
"Overrides": [
{
"InstanceType": "r5ad.large",
"MaxPrice": "",
"SubnetId": "",
"AvailabilityZone": "",
"WeightedCapacity": 0.0,
"Priority": 0.0,
"Placement": {
"AvailabilityZone": "",
"Affinity": "",
"GroupName": "",
"PartitionNumber": 0,
"HostId": "",
"Tenancy": "host",
"SpreadDomain": "",
"HostResourceGroupArn": ""
},
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 0
},
"MemoryMiB": {
"Min": 0,
"Max": 0
},
"CpuManufacturers": [
"amd"
],
"MemoryGiBPerVCpu": {
"Min": 0.0,
"Max": 0.0
},
"ExcludedInstanceTypes": [
""
],
"InstanceGenerations": [
"previous"
],
"SpotMaxPricePercentageOverLowestPrice": 0,
"OnDemandMaxPricePercentageOverLowestPrice": 0,
"BareMetal": "excluded",
"BurstablePerformance": "required",
"RequireHibernateSupport": true,
"NetworkInterfaceCount": {
"Min": 0,
"Max": 0
},
"LocalStorage": "required",
"LocalStorageTypes": [
"hdd"
],
"TotalLocalStorageGB": {
"Min": 0.0,
"Max": 0.0
},
"BaselineEbsBandwidthMbps": {
"Min": 0,
"Max": 0
},
"AcceleratorTypes": [
"fpga"
],
"AcceleratorCount": {
"Min": 0,
"Max": 0
},
"AcceleratorManufacturers": [
"xilinx"
],
"AcceleratorNames": [
"vu9p"
],
"AcceleratorTotalMemoryMiB": {
"Min": 0,
"Max": 0
}
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 0,
"OnDemandTargetCapacity": 0,
"SpotTargetCapacity": 0,
"DefaultTargetCapacityType": "spot",
"TargetCapacityUnitType": "vcpu"
},
"TerminateInstancesWithExpiration": true,
"Type": "instant",
"ValidFrom": "1970-01-01T00:00:00",
"ValidUntil": "1970-01-01T00:00:00",
"ReplaceUnhealthyInstances": true,
"TagSpecifications": [
{
"ResourceType": "route-table",
"Tags": [
{
"Key": "",
"Value": ""
}
]
}
],
"Context": ""
}
Configurations are considered not valid when they contain the following:
• Both InstanceRequirements and InstanceType in the same Overrides structure
• Overlapping attribute values across InstanceRequirements structures
Example configurations
• Valid configuration: Single launch template with overrides (p. 791)
• Valid configuration: Single launch template with multiple InstanceRequirements (p. 792)
• Valid configuration: Two launch templates, each with overrides (p. 792)
• Configuration not valid: Overrides contain InstanceRequirements and InstanceType (p. 793)
• Configuration not valid: Two Overrides contain InstanceRequirements and InstanceType (p. 794)
• Valid configuration: Only InstanceRequirements specified, no overlapping attribute values (p. 795)
• Configuration not valid: Overlapping attribute values (p. 795)
The following configuration is valid. It contains one launch template and one Overrides structure
containing one InstanceRequirements structure. A text explanation of the example configuration
follows.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "My-launch-template",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 2,
"Max": 8
},
"MemoryMiB": {
"Min": 0,
"Max": 10240
},
"MemoryGiBPerVCpu": {
"Max": 10000
},
"RequireHibernateSupport": true
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 5000,
"DefaultTargetCapacityType": "spot",
"TargetCapacityUnitType": "vcpu"
}
}
InstanceRequirements
To use attribute-based instance selection, you must include the InstanceRequirements structure in
your fleet configuration, and specify the desired attributes for the instances in the fleet.
• VCpuCount – The instance types must have a minimum of 2 and a maximum of 8 vCPUs.
• MemoryMiB – The instance types must have a maximum of 10240 MiB of memory. A minimum of 0
indicates no minimum limit.
• MemoryGiBPerVCpu – The instance types must have a maximum of 10,000 GiB of memory per vCPU.
The Min parameter is optional. By omitting it, you indicate no minimum limit.
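The effect of these ranges can be sketched in a few lines of Python. This is an illustration only, not the actual EC2 matching logic, and the candidate instance types and their attribute values are invented for the example:

```python
# Simplified sketch of attribute-based filtering (illustrative only; the
# real matching is performed by the EC2 Fleet service).
def matches(instance, requirements):
    """Return True if an instance type satisfies every attribute range."""
    for attr, bounds in requirements.items():
        value = instance[attr]
        if "Min" in bounds and value < bounds["Min"]:
            return False
        if "Max" in bounds and value > bounds["Max"]:
            return False
    return True

requirements = {
    "VCpuCount": {"Min": 2, "Max": 8},
    "MemoryMiB": {"Min": 0, "Max": 10240},  # Min of 0 means no minimum limit
}

# Hypothetical instance-type attributes, for illustration only.
candidates = {
    "example.large":   {"VCpuCount": 2, "MemoryMiB": 8192},
    "example.2xlarge": {"VCpuCount": 8, "MemoryMiB": 16384},
}
eligible = [name for name, attrs in candidates.items()
            if matches(attrs, requirements)]
print(eligible)  # only example.large satisfies both ranges
```

An attribute range is satisfied when the value is at or above Min (if present) and at or below Max (if present), which is why a Min of 0 behaves like no minimum.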
TargetCapacityUnitType
The TargetCapacityUnitType parameter specifies the unit for the target capacity. In the example,
the target capacity is 5000 and the target capacity unit type is vcpu, which together specify a desired
target capacity of 5,000 vCPUs. EC2 Fleet will launch enough instances so that the total number of
vCPUs in the fleet is 5,000 vCPUs.
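Because capacity is counted in vCPUs rather than instances, the number of instances launched from a pool depends on the instance type's vCPU count. A rough sketch, assuming for illustration that the fleet is filled from a single hypothetical 8-vCPU instance type:

```python
import math

def instances_for_vcpu_target(target_vcpus, vcpus_per_instance):
    # EC2 Fleet launches whole instances until the fleet's total vCPU
    # count reaches the target, so the instance count rounds up.
    return math.ceil(target_vcpus / vcpus_per_instance)

# 5,000 vCPUs of target capacity from a hypothetical 8-vCPU type:
print(instances_for_vcpu_target(5000, 8))  # 625 instances
```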
The following configuration is valid. It contains one launch template and one Overrides
structure containing two InstanceRequirements structures. The attributes specified
in InstanceRequirements are valid because the values do not overlap—the first
InstanceRequirements structure specifies a VCpuCount of 0-2 vCPUs, while the second
InstanceRequirements structure specifies 4-8 vCPUs.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
},
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 4,
"Max": 8
},
"MemoryMiB": {
"Min": 0
}
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 1,
"DefaultTargetCapacityType": "spot"
}
}
The following configuration is valid. It contains two launch templates, each with one Overrides
structure containing one InstanceRequirements structure. This configuration is useful for arm and
x86 architecture support in the same fleet.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "armLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
},
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "x86LaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 1,
"DefaultTargetCapacityType": "spot"
}
}
The following configuration is not valid. The Overrides structure contains both
InstanceRequirements and InstanceType. For the Overrides, you can specify either
InstanceRequirements or InstanceType, but not both.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
},
{
"InstanceType": "m5.large"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 1,
"DefaultTargetCapacityType": "spot"
}
}
The following configuration is not valid. The Overrides structures contain both
InstanceRequirements and InstanceType. You can specify either InstanceRequirements or
InstanceType, but not both, even if they're in different Overrides structures.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
},
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyOtherLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "m5.large"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 1,
"DefaultTargetCapacityType": "spot"
}
}
The following configuration is valid. Only InstanceRequirements is specified in the Overrides, and the attribute values do not overlap: the first InstanceRequirements structure specifies a VCpuCount of 0-2 vCPUs, while the second specifies 4-8 vCPUs.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
},
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyOtherLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 4,
"Max": 8
},
"MemoryMiB": {
"Min": 0
}
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 1,
"DefaultTargetCapacityType": "spot"
}
}
The following configuration is not valid. The attribute values overlap: both InstanceRequirements structures specify a VCpuCount of 0-2 vCPUs.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
},
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 1,
"DefaultTargetCapacityType": "spot"
}
}
To preview a list of instance types by specifying attributes using the AWS CLI
1. (Optional) To generate all of the possible attributes that can be specified, use the get-instance-types-from-instance-requirements command with the --generate-cli-skeleton parameter, and direct the output to a file to save it:
aws ec2 get-instance-types-from-instance-requirements --generate-cli-skeleton input > attributes.json
Expected output
{
"DryRun": true,
"ArchitectureTypes": [
"x86_64_mac"
],
"VirtualizationTypes": [
"paravirtual"
],
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 0
},
"MemoryMiB": {
"Min": 0,
"Max": 0
},
"CpuManufacturers": [
"intel"
],
"MemoryGiBPerVCpu": {
"Min": 0.0,
"Max": 0.0
},
"ExcludedInstanceTypes": [
""
],
"InstanceGenerations": [
"current"
],
"SpotMaxPricePercentageOverLowestPrice": 0,
"OnDemandMaxPricePercentageOverLowestPrice": 0,
"BareMetal": "included",
"BurstablePerformance": "excluded",
"RequireHibernateSupport": true,
"NetworkInterfaceCount": {
"Min": 0,
"Max": 0
},
"LocalStorage": "required",
"LocalStorageTypes": [
"hdd"
],
"TotalLocalStorageGB": {
"Min": 0.0,
"Max": 0.0
},
"BaselineEbsBandwidthMbps": {
"Min": 0,
"Max": 0
},
"AcceleratorTypes": [
"inference"
],
"AcceleratorCount": {
"Min": 0,
"Max": 0
},
"AcceleratorManufacturers": [
"xilinx"
],
"AcceleratorNames": [
"t4"
],
"AcceleratorTotalMemoryMiB": {
"Min": 0,
"Max": 0
}
},
"MaxResults": 0,
"NextToken": ""
}
2. Create a JSON configuration file using the output from the previous step, and configure it as follows:
Note
You must provide values for ArchitectureTypes, VirtualizationTypes, VCpuCount,
and MemoryMiB. You can omit the other attributes; when omitted, the default values are
used.
For a description of each attribute and their default values, see get-instance-types-from-
instance-requirements in the Amazon EC2 Command Line Reference.
In this example, the required attributes are included in the JSON file. They are
ArchitectureTypes, VirtualizationTypes, VCpuCount, and MemoryMiB. In addition, the
optional InstanceGenerations attribute is also included. Note that for MemoryMiB, the Max
value can be omitted to indicate that there is no limit.
{
"ArchitectureTypes": [
"x86_64"
],
"VirtualizationTypes": [
"hvm"
],
"InstanceRequirements": {
"VCpuCount": {
"Min": 4,
"Max": 6
},
"MemoryMiB": {
"Min": 2048
},
"InstanceGenerations": [
"current"
]
}
}
3. To preview the instance types that have the attributes that you specified in the JSON file, use the get-instance-types-from-instance-requirements command with the --cli-input-json parameter. You can format the output to appear in a table:
aws ec2 get-instance-types-from-instance-requirements --cli-input-json file://attributes.json --output table
Example output
------------------------------------------
|GetInstanceTypesFromInstanceRequirements|
+----------------------------------------+
|| InstanceTypes ||
|+--------------------------------------+|
|| InstanceType ||
|+--------------------------------------+|
|| c4.xlarge ||
|| c5.xlarge ||
|| c5a.xlarge ||
|| c5ad.xlarge ||
|| c5d.xlarge ||
|| c5n.xlarge ||
|| d2.xlarge ||
...
4. After identifying instance types that meet your needs, make note of the instance attributes that you
used so that you can use them when configuring your fleet request.
For example, you have configured three launch template overrides, each with a different instance type:
c3.large, c4.large, and c5.large. The On-Demand price for c5.large is less than the price for
c4.large. c3.large is the cheapest. If you do not use priority to determine the order, the fleet fulfills
On-Demand capacity by starting with c3.large, and then c5.large. Because you often have unused
Reserved Instances for c4.large, you can set the launch template override priority so that the order is
c4.large, c3.large, and then c5.large.
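The difference between the two orderings can be sketched as follows. The per-hour prices are invented; only their relative order (c3.large cheapest, then c5.large, then c4.large) is taken from the example above:

```python
# Hypothetical per-hour On-Demand prices; only the relative order matters.
pools = {"c3.large": 0.105, "c4.large": 0.120, "c5.large": 0.110}

# lowest-price: fulfill On-Demand capacity starting with the cheapest pool.
lowest_price_order = sorted(pools, key=pools.get)

# prioritized: fulfill in the priority order set on the launch template
# overrides (lower number = higher priority), for example to use up
# unused c4.large Reserved Instances first.
priority = {"c4.large": 1, "c3.large": 2, "c5.large": 3}
prioritized_order = sorted(pools, key=priority.get)

print(lowest_price_order)  # c3.large first, then c5.large, then c4.large
print(prioritized_order)   # c4.large first, then c3.large, then c5.large
```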
Capacity Reservations are configured as either open or targeted. EC2 Fleet can launch On-Demand
Instances into either open or targeted Capacity Reservations, as follows:
• If a Capacity Reservation is open, On-Demand Instances that have matching attributes automatically
run in the reserved capacity.
• If a Capacity Reservation is targeted, On-Demand Instances must specifically target it to run in the
reserved capacity. This is useful for using up specific Capacity Reservations or for controlling when to
use specific Capacity Reservations.
If you use targeted Capacity Reservations in your EC2 Fleet, there must be enough Capacity
Reservations to fulfil the target On-Demand capacity; otherwise the launch fails. To avoid a failed
launch, add the targeted Capacity Reservations to a resource group, and then target the resource
group. The resource group doesn't need to have enough Capacity Reservations; if it runs out of Capacity
Reservations before the target On-Demand capacity is fulfilled, the fleet can launch the remaining target
capacity into regular On-Demand capacity.
1. Configure the fleet as type instant. You can't use Capacity Reservations for fleets of other types.
2. Configure the usage strategy for Capacity Reservations as use-capacity-reservations-first.
3. In the launch template, for Capacity reservation, choose either Open or Target by group. If you
choose Target by group, specify the Capacity Reservations resource group ID.
When the fleet attempts to fulfil the On-Demand capacity, if it finds that multiple instance pools have
unused matching Capacity Reservations, it determines the pools in which to launch the On-Demand
Instances based on the On-Demand allocation strategy (lowest-price or prioritized).
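Put together, the settings above might look like the following fragment of a create-fleet JSON configuration. This is a sketch, not a complete request; the Open or Target by group choice itself is made in the launch template, not here:

```json
{
    "Type": "instant",
    "OnDemandOptions": {
        "AllocationStrategy": "lowest-price",
        "CapacityReservationOptions": {
            "UsageStrategy": "use-capacity-reservations-first"
        }
    }
}
```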
For examples of how to configure a fleet to use Capacity Reservations to fulfil On-Demand capacity, see
EC2 Fleet example configurations (p. 900), specifically Examples 5 through 7.
For information about configuring Capacity Reservations, see On-Demand Capacity Reservations (p. 522)
and the On-Demand Capacity Reservation FAQs.
Capacity Rebalancing
You can configure EC2 Fleet to launch a replacement Spot Instance when Amazon EC2 emits a rebalance
recommendation to notify you that a Spot Instance is at an elevated risk of interruption. Capacity
Rebalancing helps you maintain workload availability by proactively augmenting your fleet with a new
Spot Instance before a running instance is interrupted by Amazon EC2. For more information, see EC2
instance rebalance recommendations (p. 456).
To configure EC2 Fleet to launch a replacement Spot Instance, use the create-fleet (AWS CLI) command
and the relevant parameters in the MaintenanceStrategies structure. For more information, see the
example launch configuration (p. 910).
Limitations
• Capacity Rebalancing is available only for fleets of type maintain.
• When the fleet is running, you can't modify the Capacity Rebalancing setting. To change the Capacity
Rebalancing setting, you must delete the fleet and create a new fleet.
Configuration options
The ReplacementStrategy for EC2 Fleet supports the following two values:
launch-before-terminate
EC2 Fleet terminates the Spot Instances that receive a rebalance notification after new replacement
Spot Instances are launched. When you specify launch-before-terminate, you must also specify
a value for termination-delay. After the new replacement instances are launched, EC2 Fleet
waits for the duration of the termination-delay, and then terminates the old instances. For
termination-delay, the minimum is 120 seconds (2 minutes), and the maximum is 7200 seconds
(2 hours).
We recommend that you use launch-before-terminate only if you can predict how long your
instance shutdown procedures will take to complete. This will ensure that the old instances are
terminated only after the shutdown procedures are completed. Note that Amazon EC2 can interrupt
the old instances with a two-minute warning before the termination-delay.
launch
EC2 Fleet launches replacement Spot Instances when a rebalance notification is emitted for existing
Spot Instances. EC2 Fleet does not terminate the instances that receive a rebalance notification. You
can terminate the old instances, or you can leave them running. You are charged for all instances
while they are running.
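In a create-fleet request, the replacement strategy is set in the MaintenanceStrategies structure of SpotOptions. A minimal sketch for launch-before-terminate follows; the 120-second TerminationDelay is just an example value, and for the launch strategy you omit TerminationDelay:

```json
{
    "SpotOptions": {
        "MaintenanceStrategies": {
            "CapacityRebalance": {
                "ReplacementStrategy": "launch-before-terminate",
                "TerminationDelay": 120
            }
        }
    }
}
```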
Considerations
If you configure an EC2 Fleet for Capacity Rebalancing, consider the following:
EC2 Fleet can launch new replacement Spot Instances until fulfilled capacity is double target
capacity
When an EC2 Fleet is configured for Capacity Rebalancing, the fleet attempts to launch a new
replacement Spot Instance for every Spot Instance that receives a rebalance recommendation. After
a Spot Instance receives a rebalance recommendation, it is no longer counted as part of the fulfilled
capacity. Depending on the replacement strategy, EC2 Fleet either terminates the instance after a
preconfigured termination delay, or leaves it running. This gives you the opportunity to perform
rebalancing actions (p. 457) on the instance.
If your fleet reaches double its target capacity, it stops launching new replacement instances even if
the replacement instances themselves receive a rebalance recommendation.
For example, you create an EC2 Fleet with a target capacity of 100 Spot Instances. All of the Spot
Instances receive a rebalance recommendation, which causes EC2 Fleet to launch 100 replacement
Spot Instances. This raises the number of fulfilled Spot Instances to 200, which is double the target
capacity. Some of the replacement instances receive a rebalance recommendation, but no more
replacement instances are launched because the fleet cannot exceed double its target capacity.
Note that you are charged for all of the instances while they are running.
We recommend that you configure EC2 Fleet to terminate Spot Instances that receive a rebalance
recommendation
If you configure your EC2 Fleet for Capacity Rebalancing, we recommend that you choose launch-
before-terminate with an appropriate termination delay only if you can predict how long your
instance shutdown procedures will take to complete. This will ensure that the old instances are
terminated only after the shutdown procedures are completed.
If you choose to terminate the instances that are recommended for rebalance yourself, we
recommend that you monitor the rebalance recommendation signal that is received by the Spot
Instances in the fleet. By monitoring the signal, you can quickly perform rebalancing actions (p. 457)
on the affected instances before Amazon EC2 interrupts them, and then you can manually terminate
them. If you do not terminate the instances, you continue paying for them while they are running.
EC2 Fleet does not automatically terminate the instances that receive a rebalance recommendation.
You can set up notifications using Amazon EventBridge or instance metadata. For more information,
see Monitor rebalance recommendation signals (p. 457).
EC2 Fleet does not count instances that receive a rebalance recommendation when calculating
fulfilled capacity during scale in or out
If your EC2 Fleet is configured for Capacity Rebalancing, and you change the target capacity to either
scale in or scale out, the fleet does not count the instances that are marked for rebalance as part of
the fulfilled capacity, as follows:
• Scale in – If you decrease your desired target capacity, the fleet terminates instances that are
not marked for rebalance until the desired capacity is reached. The instances that are marked for
rebalance are not counted towards the fulfilled capacity.
For example, you create an EC2 Fleet with a target capacity of 100 Spot Instances. 10 instances
receive a rebalance recommendation, so the fleet launches 10 new replacement instances,
resulting in a fulfilled capacity of 110 instances. You then reduce the target capacity to 50 (scale
in), but the fulfilled capacity is actually 60 instances because the 10 instances that are marked for
rebalance are not terminated by the fleet. You need to manually terminate these instances, or you
can leave them running.
• Scale out – If you increase your desired target capacity, the fleet launches new instances until the
desired capacity is reached. The instances that are marked for rebalance are not counted towards
the fulfilled capacity.
For example, you create an EC2 Fleet with a target capacity of 100 Spot Instances. 10 instances
receive a rebalance recommendation, so the fleet launches 10 new replacement instances,
resulting in a fulfilled capacity of 110 instances. You then increase the target capacity to 200
(scale out), but the fulfilled capacity is actually 210 instances because the 10 instances that are
marked for rebalance are not counted by the fleet as part of the target capacity. You need to
manually terminate these instances, or you can leave them running.
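Both scenarios follow the same arithmetic: instances marked for rebalance keep running but drop out of the fulfilled-capacity count, so the fleet maintains the new target in unmarked instances alongside them. A sketch:

```python
def running_after_scaling(new_target, marked_for_rebalance):
    # Marked instances are not terminated by the fleet and are not
    # counted toward fulfilled capacity, so the fleet keeps new_target
    # unmarked instances plus the marked ones still running.
    return new_target + marked_for_rebalance

print(running_after_scaling(50, 10))   # scale in:  60 running instances
print(running_after_scaling(200, 10))  # scale out: 210 running instances
```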
Provide as many Spot capacity pools in the request as possible
Configure your EC2 Fleet to use multiple instance types and Availability Zones. This provides the
flexibility to launch Spot Instances in various Spot capacity pools. For more information, see Be
flexible about instance types and Availability Zones (p. 428).
Avoid an elevated risk of interruption of replacement Spot Instances
Your replacement Spot Instances may be at an elevated risk of interruption if you use the lowest-
price allocation strategy. This is because Amazon EC2 will always launch instances in the lowest-
priced pool that has available capacity at that moment, even if your replacement Spot Instances
are likely to be interrupted soon after being launched. To avoid an elevated risk of interruption, we
strongly recommend against using the lowest-price allocation strategy, and instead recommend
the capacity-optimized or capacity-optimized-prioritized allocation strategy. These
strategies ensure that replacement Spot Instances are launched in the most optimal Spot capacity
pools, and are therefore less likely to be interrupted in the near future. For more information, see
Use the capacity optimized allocation strategy (p. 428).
You can optionally specify a maximum price in one or more launch specifications. This price is specific
to the launch specification. If a launch specification includes a specific price, the EC2 Fleet uses this
maximum price, overriding the global maximum price. Any other launch specifications that do not
include a specific maximum price still use the global maximum price.
Control spending
EC2 Fleet stops launching instances when it has met one of the following parameters: the
TotalTargetCapacity or the MaxTotalPrice (the maximum amount you’re willing to pay). To
control the amount you pay per hour for your fleet, you can specify the MaxTotalPrice. When the
maximum total price is reached, EC2 Fleet stops launching instances even if it hasn’t met the target
capacity.
The following examples show two different scenarios. In the first, EC2 Fleet stops launching instances
when it has met the target capacity. In the second, EC2 Fleet stops launching instances when it has
reached the maximum amount you’re willing to pay (MaxTotalPrice).
EC2 Fleet launches 10 On-Demand Instances because the total of $1.00 (10 instances x $0.10) does not
exceed the MaxTotalPrice of $1.50 for On-Demand Instances.
If EC2 Fleet launches the On-Demand target capacity (10 On-Demand Instances), the total cost per
hour would be $1.00. This is more than the amount ($0.80) specified for MaxTotalPrice for On-
Demand Instances. To prevent spending more than you're willing to pay, EC2 Fleet launches only 8 On-
Demand Instances (below the On-Demand target capacity) because launching more would exceed the
MaxTotalPrice for On-Demand Instances.
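Both scenarios apply the same rule: launch whole instances until the next one would exceed either the target capacity or MaxTotalPrice. A sketch using integer cents to avoid floating-point rounding, with the $0.10 per-hour price implied by the scenarios:

```python
def on_demand_instances_launched(target, price_cents, max_total_cents):
    # The fleet stops at whichever limit is reached first: the
    # On-Demand target capacity or the hourly MaxTotalPrice.
    affordable = max_total_cents // price_cents
    return min(target, affordable)

print(on_demand_instances_launched(10, 10, 150))  # 10: target reached first
print(on_demand_instances_launched(10, 10, 80))   # 8: price cap reached first
```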
By default, the price that you specify is per instance hour. When you use the instance weighting feature,
the price that you specify is per unit hour. You can calculate your price per unit hour by dividing your
price for an instance type by the number of units that it represents. EC2 Fleet calculates the number of
instances to launch by dividing the target capacity by the instance weight. If the result isn't an integer,
the fleet rounds it up to the next integer, so that the size of your fleet is not below its target capacity.
The fleet can select any pool that you specify in your launch specification, even if the capacity of the
instances launched exceeds the requested target capacity.
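The two calculations described above, the rounded-up instance count and the price per unit hour, can be sketched as:

```python
import math

def instances_to_launch(target_capacity, weight):
    # Target capacity divided by the instance weight, rounded up so the
    # fleet never ends up below its target capacity.
    return math.ceil(target_capacity / weight)

def price_per_unit_hour(price_per_instance_hour, weight):
    # The per-unit price is the instance-hour price spread over the
    # number of units that the instance type represents.
    return price_per_instance_hour / weight

print(instances_to_launch(10, 2))    # 5 instances
print(price_per_unit_hour(0.05, 2))  # 0.025 per unit hour
```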
The following example shows the calculation to determine the price per unit for an EC2 Fleet with a target capacity of 10. An instance type with an instance weight of 2 and a price of $.05 per instance hour results in 5 instances launched (10 divided by 2), at a price per unit hour of $.025 (.05 divided by 2).
Use EC2 Fleet instance weighting as follows to provision the target capacity that you want in the pools
with the lowest price per unit at the time of fulfillment:
1. Set the target capacity for your EC2 Fleet either in instances (the default) or in the units of your
choice, such as virtual CPUs, memory, storage, or throughput.
2. Set the price per unit.
3. For each launch specification, specify the weight, which is the number of units that the instance type
represents toward the target capacity.
• A target capacity of 24
• A launch specification with an instance type r3.2xlarge and a weight of 6
• A launch specification with an instance type c3.xlarge and a weight of 5
The weights represent the number of units that instance type represents toward the target capacity. If
the first launch specification provides the lowest price per unit (price for r3.2xlarge per instance hour
divided by 6), the EC2 Fleet would launch four of these instances (24 divided by 6).
If the second launch specification provides the lowest price per unit (price for c3.xlarge per instance
hour divided by 5), the EC2 Fleet would launch five of these instances (24 divided by 5, result rounded
up).
In another example, with a target capacity of 30, a launch specification with a weight of 8, and three
Spot capacity pools to choose from, the EC2 Fleet would launch four instances (30 divided by 8, result
rounded up). With the lowest-price strategy, all four instances come from the pool that provides the
lowest price per unit. With the diversified strategy, the fleet launches one instance in each of the three
pools, and the fourth instance in whichever of the three pools provides the lowest price per unit.
Work with EC2 Fleets
If your fleet includes Spot Instances, Amazon EC2 can attempt to maintain your fleet target capacity as
Spot prices change.
An EC2 Fleet request of type maintain or request remains active until it expires or you delete it. When
you delete a fleet of type maintain or request, you can specify whether deletion terminates the
instances in that fleet.
Contents
• EC2 Fleet request states (p. 805)
• EC2 Fleet prerequisites (p. 806)
• EC2 Fleet health checks (p. 809)
• Generate an EC2 Fleet JSON configuration file (p. 809)
• Create an EC2 Fleet (p. 812)
• Tag an EC2 Fleet (p. 814)
• Monitor your EC2 Fleet (p. 816)
• Modify an EC2 Fleet (p. 817)
• Delete an EC2 Fleet (p. 818)
EC2 Fleet request states
An EC2 Fleet request can be in one of the following states:
submitted
The EC2 Fleet request is being evaluated and Amazon EC2 is preparing to launch the target number
of instances. The request can include On-Demand Instances, Spot Instances, or both.
active
The EC2 Fleet request has been validated and Amazon EC2 is attempting to maintain the target
number of running instances. The request remains in this state until it is modified or deleted.
modifying
The EC2 Fleet request is being modified. The request remains in this state until the modification is
fully processed or the request is deleted. Only a maintain fleet type can be modified. This state
does not apply to other request types.
deleted_running
The EC2 Fleet request is deleted and does not launch additional instances. Its existing instances
continue to run until they are interrupted or terminated manually. The request remains in this state
until all instances are interrupted or terminated. Only an EC2 Fleet of type maintain or request
can have running instances after the EC2 Fleet request is deleted. A deleted instant fleet with
running instances is not supported. This state does not apply to instant fleets.
deleted_terminating
The EC2 Fleet request is deleted and its instances are terminating. The request remains in this state
until all instances are terminated.
deleted
The EC2 Fleet is deleted and has no running instances. The request is deleted two days after its
instances are terminated.
The following illustration represents the transitions between the EC2 Fleet request states. If you exceed
your fleet limits, the request is deleted immediately.
EC2 Fleet prerequisites
Launch template
A launch template includes information about the instances to launch, such as the instance type,
Availability Zone, and the maximum price that you are willing to pay. For more information, see Launch
an instance from a launch template (p. 579).
Service-linked role for EC2 Fleet
Ensure that this role exists before you use the AWS CLI or an API to create an EC2 Fleet.
Note
An instant EC2 Fleet does not require this role.
If you no longer need to use EC2 Fleet, we recommend that you delete the AWSServiceRoleForEC2Fleet
role. After this role is deleted from your account, you can create the role again if you create another fleet.
For more information, see Using service-linked roles in the IAM User Guide.
Grant access to customer managed keys for use with encrypted AMIs and EBS
snapshots
If you specify an encrypted AMI (p. 189) or an encrypted Amazon EBS snapshot (p. 1536) in your EC2
Fleet and you use an AWS KMS key for encryption, you must grant the AWSServiceRoleForEC2Fleet role
permission to use the customer managed key so that Amazon EC2 can launch instances on your behalf.
To do this, you must add a grant to the customer managed key, as shown in the following procedure.
When providing permissions, grants are an alternative to key policies. For more information, see Using
grants and Using key policies in AWS KMS in the AWS Key Management Service Developer Guide.
To grant the AWSServiceRoleForEC2Fleet role permissions to use the customer managed key
• Use the create-grant command to add a grant to the customer managed key and to specify the
principal (the AWSServiceRoleForEC2Fleet service-linked role) that is given permission to perform
the operations that the grant permits. The customer managed key is specified by the key-id
parameter and the ARN of the customer managed key. The principal is specified by the grantee-
principal parameter and the ARN of the AWSServiceRoleForEC2Fleet service-linked role.
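A sketch of the command follows. The Region, key ARN, and account ID are placeholders, and the operations list is an assumption about the permissions that instance launches need; adjust these for your key:

```shell
aws kms create-grant \
    --region us-east-1 \
    --key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab \
    --grantee-principal arn:aws:iam::111122223333:role/aws-service-role/ec2fleet.amazonaws.com/AWSServiceRoleForEC2Fleet \
    --operations Decrypt Encrypt GenerateDataKey GenerateDataKeyWithoutPlaintext CreateGrant DescribeKey ReEncryptFrom ReEncryptTo
```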
Grant permission to IAM users for EC2 Fleet
If your IAM users will create or manage an EC2 Fleet, be sure to grant them the required permissions. The following example policy grants a user the permissions required to use EC2 Fleet:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"iam:ListRoles",
"iam:PassRole",
"iam:ListInstanceProfiles"
],
"Resource":"arn:aws:iam::123456789012:role/DevTeam*"
}
]
}
The ec2:* grants an IAM user permission to call all Amazon EC2 API actions. To limit the user to
specific Amazon EC2 API actions, specify those actions instead.
An IAM user must have permission to call the iam:ListRoles action to enumerate
existing IAM roles, the iam:PassRole action to specify the EC2 Fleet role, and the
iam:ListInstanceProfiles action to enumerate existing instance profiles.
(Optional) To enable an IAM user to create roles or instance profiles using the IAM console, you must
also add the following actions to the policy:
• iam:AddRoleToInstanceProfile
• iam:AttachRolePolicy
• iam:CreateInstanceProfile
• iam:CreateRole
• iam:GetRole
• iam:ListPolicies
5. On the Review policy page, enter a policy name and description, and choose Create policy.
6. In the navigation pane, choose Users and select the user.
7. On the Permissions tab, choose Add permissions.
8. Choose Attach existing policies directly. Select the policy that you created earlier and choose Next:
Review.
EC2 Fleet health checks
EC2 Fleet determines the health status of an instance by using the status checks provided by Amazon
EC2. An instance is determined as unhealthy when the status of either the instance status check or the
system status check is impaired for three consecutive health status checks. For more information, see
Status checks for your instances (p. 928).
You can configure your fleet to replace unhealthy Spot Instances. After setting
ReplaceUnhealthyInstances to true, a Spot Instance is replaced when it is reported as unhealthy.
The fleet can go below its target capacity for up to a few minutes while an unhealthy Spot Instance is
being replaced.
Requirements
• Health check replacement is supported only for EC2 Fleets that maintain a target capacity (fleets of
type maintain), and not for fleets of type request or instant.
• Health check replacement is supported only for Spot Instances. This feature is not supported for On-
Demand Instances.
• You can configure your EC2 Fleet to replace unhealthy instances only when you create it.
• IAM users can use health check replacement only if they have permission to call the
ec2:DescribeInstanceStatus action.
1. Follow the steps for creating an EC2 Fleet. For more information, see Create an EC2 Fleet (p. 812).
2. To configure the fleet to replace unhealthy Spot Instances, in the JSON file, for
ReplaceUnhealthyInstances, enter true.
Generate an EC2 Fleet JSON configuration file
To generate a JSON file with all possible EC2 Fleet parameters using the command line
• Use the create-fleet (AWS CLI) command with the --generate-cli-skeleton parameter to
generate an EC2 Fleet JSON file, and direct the output to a file to save it:
aws ec2 create-fleet --generate-cli-skeleton input > ec2fleet-config.json
Example output
{
"DryRun": true,
"ClientToken": "",
"SpotOptions": {
"AllocationStrategy": "capacity-optimized",
"MaintenanceStrategies": {
"CapacityRebalance": {
"ReplacementStrategy": "launch"
}
},
"InstanceInterruptionBehavior": "hibernate",
"InstancePoolsToUseCount": 0,
"SingleInstanceType": true,
"SingleAvailabilityZone": true,
"MinTargetCapacity": 0,
"MaxTotalPrice": ""
},
"OnDemandOptions": {
"AllocationStrategy": "prioritized",
"CapacityReservationOptions": {
"UsageStrategy": "use-capacity-reservations-first"
},
"SingleInstanceType": true,
"SingleAvailabilityZone": true,
"MinTargetCapacity": 0,
"MaxTotalPrice": ""
},
"ExcessCapacityTerminationPolicy": "termination",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "",
"LaunchTemplateName": "",
"Version": ""
},
"Overrides": [
{
"InstanceType": "r5.metal",
"MaxPrice": "",
"SubnetId": "",
"AvailabilityZone": "",
"WeightedCapacity": 0.0,
"Priority": 0.0,
"Placement": {
"AvailabilityZone": "",
"Affinity": "",
"GroupName": "",
"PartitionNumber": 0,
"HostId": "",
"Tenancy": "dedicated",
"SpreadDomain": "",
"HostResourceGroupArn": ""
},
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 0
},
"MemoryMiB": {
"Min": 0,
"Max": 0
},
"CpuManufacturers": [
"amd"
],
"MemoryGiBPerVCpu": {
"Min": 0.0,
"Max": 0.0
},
"ExcludedInstanceTypes": [
""
],
"InstanceGenerations": [
"previous"
],
"SpotMaxPricePercentageOverLowestPrice": 0,
"OnDemandMaxPricePercentageOverLowestPrice": 0,
"BareMetal": "included",
"BurstablePerformance": "required",
"RequireHibernateSupport": true,
"NetworkInterfaceCount": {
"Min": 0,
"Max": 0
},
"LocalStorage": "excluded",
"LocalStorageTypes": [
"ssd"
],
"TotalLocalStorageGB": {
"Min": 0.0,
"Max": 0.0
},
"BaselineEbsBandwidthMbps": {
"Min": 0,
"Max": 0
},
"AcceleratorTypes": [
"inference"
],
"AcceleratorCount": {
"Min": 0,
"Max": 0
},
"AcceleratorManufacturers": [
"amd"
],
"AcceleratorNames": [
"a100"
],
"AcceleratorTotalMemoryMiB": {
"Min": 0,
"Max": 0
}
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 0,
"OnDemandTargetCapacity": 0,
"SpotTargetCapacity": 0,
"DefaultTargetCapacityType": "on-demand",
"TargetCapacityUnitType": "memory-mib"
},
"TerminateInstancesWithExpiration": true,
"Type": "instant",
"ValidFrom": "1970-01-01T00:00:00",
"ValidUntil": "1970-01-01T00:00:00",
"ReplaceUnhealthyInstances": true,
"TagSpecifications": [
{
"ResourceType": "fleet",
"Tags": [
{
"Key": "",
"Value": ""
}
]
}
],
"Context": ""
}
You can specify multiple launch specifications that override the launch template. The launch
specifications can vary by instance type, Availability Zone, subnet, and maximum price, and can include
a different weighted capacity. Alternatively, you can specify the attributes that an instance must have,
and Amazon EC2 will identify all the instance types with those attributes. For more information, see
Attribute-based instance type selection for EC2 Fleet (p. 785).
If you do not specify a parameter, the fleet uses the default value for the parameter.
Specify the fleet parameters in a JSON file. For more information, see Generate an EC2 Fleet JSON
configuration file (p. 809).
• Use the create-fleet (AWS CLI) command to create an EC2 Fleet and specify the JSON file that
contains the fleet configuration parameters.
For example configuration files, see EC2 Fleet example configurations (p. 900).
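A sketch of the command, assuming the fleet configuration was saved to a file named ec2-fleet-config.json:

```shell
# Create an EC2 Fleet from a saved configuration file
# (file name is illustrative)
aws ec2 create-fleet \
    --cli-input-json file://ec2-fleet-config.json
```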
The following is example output for a fleet of type request or maintain.
{
"FleetId": "fleet-12a34b55-67cd-8ef9-ba9b-9208dEXAMPLE"
}
The following is example output for a fleet of type instant that launched the target capacity.
{
"FleetId": "fleet-12a34b55-67cd-8ef9-ba9b-9208dEXAMPLE",
"Errors": [],
"Instances": [
{
"LaunchTemplateAndOverrides": {
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-01234a567b8910abcEXAMPLE",
"Version": "1"
},
"Overrides": {
"InstanceType": "c5.large",
"AvailabilityZone": "us-east-1a"
}
},
"Lifecycle": "on-demand",
"InstanceIds": [
"i-1234567890abcdef0",
"i-9876543210abcdef9"
],
"InstanceType": "c5.large",
"Platform": null
},
{
"LaunchTemplateAndOverrides": {
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-01234a567b8910abcEXAMPLE",
"Version": "1"
},
"Overrides": {
"InstanceType": "c4.large",
"AvailabilityZone": "us-east-1a"
}
},
"Lifecycle": "on-demand",
"InstanceIds": [
"i-5678901234abcdef0",
"i-5432109876abcdef9"
]
}
]
}
The following is example output for a fleet of type instant that launched part of the target capacity
with errors for instances that were not launched.
{
"FleetId": "fleet-12a34b55-67cd-8ef9-ba9b-9208dEXAMPLE",
"Errors": [
{
"LaunchTemplateAndOverrides": {
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-01234a567b8910abcEXAMPLE",
"Version": "1"
},
"Overrides": {
"InstanceType": "c4.xlarge",
"AvailabilityZone": "us-east-1a"
}
},
"Lifecycle": "on-demand",
"ErrorCode": "InsufficientInstanceCapacity",
"ErrorMessage": ""
}
],
"Instances": [
{
"LaunchTemplateAndOverrides": {
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-01234a567b8910abcEXAMPLE",
"Version": "1"
},
"Overrides": {
"InstanceType": "c5.large",
"AvailabilityZone": "us-east-1a"
}
},
"Lifecycle": "on-demand",
"InstanceIds": [
"i-1234567890abcdef0",
"i-9876543210abcdef9"
]
}
]
}
The following is example output for a fleet of type instant that launched no instances.
{
"FleetId": "fleet-12a34b55-67cd-8ef9-ba9b-9208dEXAMPLE",
"Errors": [
{
"LaunchTemplateAndOverrides": {
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-01234a567b8910abcEXAMPLE",
"Version": "1"
},
"Overrides": {
"InstanceType": "c4.xlarge",
"AvailabilityZone": "us-east-1a"
}
},
"Lifecycle": "on-demand",
"ErrorCode": "InsufficientCapacity",
"ErrorMessage": ""
},
{
"LaunchTemplateAndOverrides": {
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-01234a567b8910abcEXAMPLE",
"Version": "1"
},
"Overrides": {
"InstanceType": "c5.large",
"AvailabilityZone": "us-east-1a"
}
},
"Lifecycle": "on-demand",
"ErrorCode": "InsufficientCapacity",
"ErrorMessage": ""
}
],
"Instances": []
}
When you tag a fleet request, the instances and volumes that are launched by the fleet are not
automatically tagged. You must explicitly tag the instances and volumes launched by the fleet. You
can assign tags to the fleet request only, to the instances only, to the volumes only, or to any
combination of the three.
Note
For instant fleet types, you can tag volumes that are attached to On-Demand Instances
and Spot Instances. For request or maintain fleet types, you can only tag volumes that are
attached to On-Demand Instances.
For more information about how tags work, see Tag your Amazon EC2 resources (p. 1666).
Prerequisite
Grant the IAM user the permission to tag resources. Create an IAM policy that includes the following.
For more information, see Example: Tag resources (p. 1258).
• The ec2:CreateTags action. This grants the IAM user permission to create tags.
• The ec2:CreateFleet action. This grants the IAM user permission to create an EC2 Fleet request.
• For Resource, we recommend that you specify "*". This allows users to tag all resource types.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TagEC2FleetRequest",
"Effect": "Allow",
"Action": [
"ec2:CreateTags",
"ec2:CreateFleet"
],
"Resource": "*"
}
]
}
Important
We currently do not support resource-level permissions for the create-fleet resource. If you
specify create-fleet as a resource, you will get an unauthorized exception when you try to
tag the fleet. The following example illustrates how not to set the policy.
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags",
"ec2:CreateFleet"
],
"Resource": "arn:aws:ec2:us-east-1:111122223333:create-fleet/*"
}
To tag an EC2 Fleet request when you create it, specify the key-value pair in the JSON file (p. 809) used
to create the fleet. The value for ResourceType must be fleet. If you specify another value, the fleet
request fails.
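A minimal sketch of the relevant fragment of such a JSON file (the tag key and value are examples, not required names):

```shell
# Write a TagSpecifications fragment for an EC2 Fleet request;
# ResourceType must be "fleet" (tag key/value are illustrative)
cat > tag-spec-snippet.json <<'EOF'
{
    "TagSpecifications": [
        {
            "ResourceType": "fleet",
            "Tags": [
                { "Key": "Environment", "Value": "Production" }
            ]
        }
    ]
}
EOF
```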
To tag instances and volumes when they are launched by the fleet, specify the tags in the launch
template (p. 581) that is referenced in the EC2 Fleet request.
Note
You can't tag volumes attached to Spot Instances that are launched by a request or maintain
fleet type.
To tag an existing EC2 Fleet request, instance, and volume (AWS CLI)
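A sketch of the create-tags command, using the example fleet, instance, and volume IDs as placeholders for your own resource IDs (the tag key and value are illustrative):

```shell
# Tag an existing fleet request, an instance, and a volume in one call
# (resource IDs and the tag key/value are illustrative)
aws ec2 create-tags \
    --resources fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE i-1234567890abcdef0 vol-1234567890abcdef0 \
    --tags Key=purpose,Value=test
```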
Use the describe-fleets command to describe the specified EC2 Fleet. The returned list of running
instances is refreshed periodically and might be out of date.
{
"Fleets": [
{
"Type": "maintain",
"FulfilledCapacity": 2.0,
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"Version": "2",
"LaunchTemplateId": "lt-07b3bc7625cdab851"
}
}
],
"TerminateInstancesWithExpiration": false,
"TargetCapacitySpecification": {
"OnDemandTargetCapacity": 0,
"SpotTargetCapacity": 2,
"TotalTargetCapacity": 2,
"DefaultTargetCapacityType": "spot"
},
"FulfilledOnDemandCapacity": 0.0,
"ActivityStatus": "fulfilled",
"FleetId": "fleet-76e13e99-01ef-4bd6-ba9b-9208de883e7f",
"ReplaceUnhealthyInstances": false,
"SpotOptions": {
"InstanceInterruptionBehavior": "terminate",
"InstancePoolsToUseCount": 1,
"AllocationStrategy": "lowest-price"
},
"FleetState": "active",
"ExcessCapacityTerminationPolicy": "termination",
"CreateTime": "2018-04-10T16:46:03.000Z"
}
]
}
Use the describe-fleet-instances command to describe the instances for the specified EC2 Fleet.
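For example, using the fleet ID from the output that follows:

```shell
# Describe the running instances for the specified EC2 Fleet
aws ec2 describe-fleet-instances \
    --fleet-id fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
```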
{
"ActiveInstances": [
{
"InstanceId": "i-09cd595998cb3765e",
"InstanceHealth": "healthy",
"InstanceType": "m4.large",
"SpotInstanceRequestId": "sir-86k84j6p"
},
{
"InstanceId": "i-09cf95167ca219f17",
"InstanceHealth": "healthy",
"InstanceType": "m4.large",
"SpotInstanceRequestId": "sir-dvxi7fsm"
}
],
"FleetId": "fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE"
}
Use the describe-fleet-history command to describe the history for the specified EC2 Fleet for the
specified time.
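For example, using the fleet ID and start time from the output that follows (the start time is an assumption taken from that example):

```shell
# Describe the history for the specified EC2 Fleet starting at the given time
aws ec2 describe-fleet-history \
    --fleet-id fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE \
    --start-time 2018-04-09T23:53:20Z
```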
{
"HistoryRecords": [],
"FleetId": "fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE",
"LastEvaluatedTime": "1970-01-01T00:00:00.000Z",
"StartTime": "2018-04-09T23:53:20.000Z"
}
You can only modify an EC2 Fleet that is of type maintain. You cannot modify an EC2 Fleet of type
request or instant.
When you increase the target capacity, the EC2 Fleet launches the additional instances according to the
instance purchasing option specified for DefaultTargetCapacityType, which are either On-Demand
Instances or Spot Instances.
If the DefaultTargetCapacityType is spot, the EC2 Fleet launches the additional Spot Instances
according to its allocation strategy. If the allocation strategy is lowest-price, the fleet launches
the instances from the lowest-priced Spot capacity pool in the request. If the allocation strategy is
diversified, the fleet distributes the instances across the pools in the request.
When you decrease the target capacity, the EC2 Fleet deletes any open requests that exceed the new
target capacity. You can request that the fleet terminate instances until the size of the fleet reaches
the new target capacity. If the allocation strategy is lowest-price, the fleet terminates the instances
with the highest price per unit. If the allocation strategy is diversified, the fleet terminates instances
across the pools. Alternatively, you can request that EC2 Fleet keep the fleet at its current size, but not
replace any Spot Instances that are interrupted or any instances that you terminate manually.
When an EC2 Fleet terminates a Spot Instance because the target capacity was decreased, the instance
receives a Spot Instance interruption notice.
Use the modify-fleet command to update the target capacity of the specified EC2 Fleet.
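A sketch of the command (the fleet ID and capacity value are illustrative):

```shell
# Update the target capacity of a maintain-type EC2 Fleet
aws ec2 modify-fleet \
    --fleet-id fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE \
    --target-capacity-specification TotalTargetCapacity=20
```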
If you are decreasing the target capacity but want to keep the fleet at its current size, you can modify the
previous command as follows.
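For example, to lower the target capacity without terminating any running instances (fleet ID and capacity value are illustrative):

```shell
# Decrease the target capacity but keep the fleet at its current size
aws ec2 modify-fleet \
    --fleet-id fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE \
    --excess-capacity-termination-policy no-termination \
    --target-capacity-specification TotalTargetCapacity=10
```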
When you delete an EC2 Fleet, you must specify if you want to also terminate its instances. If you specify
that the instances must be terminated when the fleet is deleted, it enters the deleted_terminating
state. Otherwise, it enters the deleted_running state, and the instances continue to run until they are
interrupted or you terminate them manually.
Restrictions
• You can delete up to 25 instant fleets in a single request. If you exceed this number, no instant
fleets are deleted and an error is returned. There is no restriction on the number of fleets of type
maintain or request that can be deleted in a single request.
• Up to 1000 instances can be terminated in a single request to delete instant fleets.
Use the delete-fleets command and the --terminate-instances parameter to delete the specified
EC2 Fleet and terminate the instances.
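A sketch of the command, using the fleet ID from the output that follows:

```shell
# Delete the specified EC2 Fleet and terminate its instances
aws ec2 delete-fleets \
    --fleet-ids fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE \
    --terminate-instances
```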
{
"UnsuccessfulFleetDeletions": [],
"SuccessfulFleetDeletions": [
{
"CurrentFleetState": "deleted_terminating",
"PreviousFleetState": "active",
"FleetId": "fleet-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE"
}
]
}
You can modify the previous command using the --no-terminate-instances parameter to delete
the specified EC2 Fleet without terminating the instances.
Note
--no-terminate-instances is not supported for instant fleets.
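A sketch of the command, using the fleet ID from the output that follows:

```shell
# Delete the specified EC2 Fleet without terminating its instances
aws ec2 delete-fleets \
    --fleet-ids fleet-4b8aaae8-dfb5-436d-a4c6-3dafa4c6b7dcEXAMPLE \
    --no-terminate-instances
```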
{
"UnsuccessfulFleetDeletions": [],
"SuccessfulFleetDeletions": [
{
"CurrentFleetState": "deleted_running",
"PreviousFleetState": "active",
"FleetId": "fleet-4b8aaae8-dfb5-436d-a4c6-3dafa4c6b7dcEXAMPLE"
}
]
}
If a fleet fails to be deleted, the output includes one of the following error codes:
• ExceededInstantFleetNumForDeletion
• fleetIdDoesNotExist
• fleetIdMalformed
• fleetNotInDeletableState
• NoTerminateInstancesNotSupported
• UnauthorizedOperation
• unexpectedError
Troubleshooting ExceededInstantFleetNumForDeletion
If you try to delete more than 25 instant fleets in a single request, the
ExceededInstantFleetNumForDeletion error is returned. The following is example output for this
error.
{
"UnsuccessfulFleetDeletions": [
{
"FleetId": "fleet-5d130460-0c26-bfd9-2c32-0100a098f625",
"Error": {
"Message": "Can’t delete more than 25 instant fleets in a single
request.",
"Code": "ExceededInstantFleetNumForDeletion"
}
},
{
"FleetId": "fleet-9a941b23-0286-5bf4-2430-03a029a07e31",
"Error": {
"Message": "Can’t delete more than 25 instant fleets in a single
request.",
"Code": "ExceededInstantFleetNumForDeletion"
}
}
...
],
"SuccessfulFleetDeletions": []
}
Troubleshoot NoTerminateInstancesNotSupported
If you specify that the instances in an instant fleet must not be terminated when you delete the fleet,
the NoTerminateInstancesNotSupported error is returned. --no-terminate-instances is not
supported for instant fleets. The following is example output for this error.
{
"UnsuccessfulFleetDeletions": [
{
"FleetId": "fleet-5d130460-0c26-bfd9-2c32-0100a098f625",
"Error": {
"Message": "NoTerminateInstances option is not supported for
instant fleet",
"Code": "NoTerminateInstancesNotSupported"
}
}
],
"SuccessfulFleetDeletions": []
}
Troubleshoot UnauthorizedOperation
If you do not have permission to terminate instances, you get the UnauthorizedOperation error when
deleting a fleet that must terminate its instances. The following is the error response.
<Response><Errors><Error><Code>UnauthorizedOperation</Code><Message>You are not authorized to
perform this operation. Encoded authorization failure message: VPiU5v2s-
UgZ7h0p2yth6ysUdhlONg6dBYu8_y_HtEI54invCj4CoK0qawqzMNe6rcmCQHvtCxtXsbkgyaEbcwmrm2m01-
EMhekLFZeJLr
DtYOpYcEl4_nWFX1wtQDCnNNCmxnJZAoJvb3VMDYpDTsxjQv1PxODZuqWHs23YXWVywzgnLtHeRf2o4lUhGBw17mXsS07k7XAfdPMP_
PT9vrHtQiILor5VVTsjSPWg7edj__1rsnXhwPSu8gI48ZLRGrPQqFq0RmKO_QIE8N8s6NWzCK4yoX-9gDcheurOGpkprPIC9YPGMLK9
</Message></Error></Errors><RequestID>89b1215c-7814-40ae-a8db-41761f43f2b0</RequestID></Response>
To resolve the error, you must add the ec2:TerminateInstances action to the IAM policy, as shown in
the following example.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DeleteFleetsAndTerminateInstances",
"Effect": "Allow",
"Action": [
"ec2:DeleteFleets",
"ec2:TerminateInstances"
],
"Resource": "*"
}
]
}
Spot Fleet
A Spot Fleet is a set of Spot Instances and optionally On-Demand Instances that is launched based on
criteria that you specify. The Spot Fleet selects the Spot capacity pools that meet your needs and
launches Spot Instances to meet the target capacity for the fleet. By default, Spot Fleets are set to
maintain target capacity by launching replacement instances after Spot Instances in the fleet are
terminated. You can submit a Spot Fleet as a one-time request, which does not persist after the instances
have been terminated. You can include On-Demand Instance requests in a Spot Fleet request.
Topics
• Spot Fleet request types (p. 822)
• Spot Fleet configuration strategies (p. 822)
• Work with Spot Fleets (p. 848)
• CloudWatch metrics for Spot Fleet (p. 867)
• Automatic scaling for Spot Fleet (p. 869)
request
If you configure the request type as request, Spot Fleet places an asynchronous one-time request
for your desired capacity. Thereafter, if capacity is diminished because of Spot interruptions, the
fleet does not attempt to replenish Spot Instances, nor does it submit requests in alternative Spot
capacity pools if capacity is unavailable.
maintain
If you configure the request type as maintain, Spot Fleet places an asynchronous request for
your desired capacity, and maintains capacity by automatically replenishing any interrupted Spot
Instances.
To specify the type of request in the Amazon EC2 console, do the following when creating a Spot Fleet
request:
• To create a Spot Fleet of type request, clear the Maintain target capacity check box.
• To create a Spot Fleet of type maintain, select the Maintain target capacity check box.
For more information, see Create a Spot Fleet request using defined parameters (console) (p. 855).
Both types of requests benefit from an allocation strategy. For more information, see Allocation strategy
for Spot Instances (p. 823).
The Spot Fleet attempts to launch the number of Spot Instances and On-Demand Instances to meet the
target capacity that you specified in the Spot Fleet request. The request for Spot Instances is fulfilled
if there is available capacity and the maximum price you specified in the request exceeds the current
Spot price. The Spot Fleet also attempts to maintain its target capacity fleet if your Spot Instances are
interrupted.
You can also set a maximum amount per hour that you’re willing to pay for your fleet, and Spot Fleet
launches instances until it reaches the maximum amount. When the maximum amount you're willing to
pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity.
A Spot capacity pool is a set of unused EC2 instances with the same instance type (for example,
m5.large), operating system, Availability Zone, and network platform. When you make a Spot Fleet
request, you can include multiple launch specifications that vary by instance type, AMI, Availability Zone,
or subnet. The Spot Fleet selects the Spot capacity pools that are used to fulfill the request, based on
the launch specifications included in your Spot Fleet request, and the configuration of the Spot Fleet
request. The Spot Instances come from the selected pools.
Contents
• Plan a Spot Fleet request (p. 823)
• Allocation strategy for Spot Instances (p. 823)
• Attribute-based instance type selection for Spot Fleet (p. 825)
• On-Demand in Spot Fleet (p. 843)
• Capacity Rebalancing (p. 843)
• Spot price overrides (p. 845)
• Control spending (p. 846)
• Spot Fleet instance weighting (p. 846)
• Determine whether you want to create a Spot Fleet that submits a one-time request for the desired
target capacity, or one that maintains a target capacity over time.
• Determine the instance types that meet your application requirements.
• Determine the target capacity for your Spot Fleet request. You can set the target capacity in instances
or in custom units. For more information, see Spot Fleet instance weighting (p. 846).
• Determine what portion of the Spot Fleet target capacity must be On-Demand capacity. You can
specify 0 for On-Demand capacity.
• Determine your price per unit, if you are using instance weighting. To calculate the price per unit,
divide the price per instance hour by the number of units (or weight) that this instance represents. If
you are not using instance weighting, the default price per unit is the price per instance hour.
• Review the possible options for your Spot Fleet request. For more information, see the
request-spot-fleet command in the AWS CLI Command Reference. For additional examples, see Spot Fleet example
configurations (p. 913).
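The per-unit price calculation described above can be sketched as follows (the instance price and weight are hypothetical values, not AWS pricing):

```shell
# Price per unit = price per instance hour / weighted capacity.
# Example: a hypothetical instance priced at $0.50 per hour that
# counts as 8 units of target capacity.
price_per_hour=0.50
weight=8
price_per_unit=$(awk "BEGIN { printf \"%.4f\", $price_per_hour / $weight }")
echo "$price_per_unit"    # prints 0.0625
```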
lowestPrice
The Spot Instances come from the pool with the lowest price. This is the default strategy.
diversified
The Spot Instances are distributed across all Spot capacity pools.
capacityOptimized
The Spot Instances come from the pools with optimal capacity for the number of instances
that are launching. You can optionally set a priority for each instance type in your fleet using
capacityOptimizedPrioritized. Spot Fleet optimizes for capacity first, but honors instance
type priorities on a best-effort basis.
With Spot Instances, pricing changes slowly over time based on long-term trends in supply and
demand, but capacity fluctuates in real time. The capacityOptimized strategy automatically
launches Spot Instances into the most available pools by looking at real-time capacity data and
predicting which are the most available. This works well for workloads such as big data and
analytics, image and media rendering, machine learning, and high performance computing that may
have a higher cost of interruption associated with restarting work and checkpointing. By offering the
possibility of fewer interruptions, the capacityOptimized strategy can lower the overall cost of
your workload.
InstancePoolsToUseCount
The Spot Instances are distributed across the number of Spot capacity pools that you specify. This
parameter is valid only when used in combination with lowestPrice.
If your fleet runs workloads that may have a higher cost of interruption associated with restarting work
and checkpointing, then use the capacityOptimized strategy. This strategy offers the possibility
of fewer interruptions, which can lower the overall cost of your workload. This is the recommended
strategy. Use the capacityOptimizedPrioritized strategy for workloads where the possibility of
disruption must be minimized and the preference for certain instance types matters.
If your fleet is small or runs for a short time, the probability that your Spot Instances may be interrupted
is low, even with all the instances in a single Spot capacity pool. Therefore, the lowestPrice strategy is
likely to meet your needs while providing the lowest cost.
If your fleet is large or runs for a long time, you can improve the availability of your fleet by distributing
the Spot Instances across multiple pools. For example, if your Spot Fleet request specifies 10 pools and
a target capacity of 100 instances, the fleet launches 10 Spot Instances in each pool. If the Spot price
for one pool exceeds your maximum price for this pool, only 10% of your fleet is affected. Using this
strategy also makes your fleet less sensitive to increases in the Spot price in any one pool over time. With
the diversified strategy, the Spot Fleet does not launch Spot Instances into any pools with a Spot
price that is equal to or higher than the On-Demand price.
To create a cheap and diversified fleet, use the lowestPrice strategy in combination with
InstancePoolsToUseCount. You can use a low or high number of Spot pools across which to allocate
your Spot Instances. For example, if you run batch processing, we recommend specifying a low number
of Spot pools (for example, InstancePoolsToUseCount=2) to ensure that your queue always has
compute capacity while maximizing savings. If you run a web service, we recommend specifying a high
number of Spot pools (for example, InstancePoolsToUseCount=10) to minimize the impact if a Spot
capacity pool becomes temporarily unavailable.
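The batch-processing case above can be sketched as a fragment of a Spot Fleet request configuration (a hypothetical snippet, not a complete request):

```shell
# Write a hypothetical Spot Fleet request fragment that combines the
# lowestPrice strategy with a low number of Spot pools for batch work
cat > spot-fleet-snippet.json <<'EOF'
{
    "AllocationStrategy": "lowestPrice",
    "InstancePoolsToUseCount": 2,
    "TargetCapacity": 100
}
EOF
```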
For On-Demand Instance target capacity, Spot Fleet always selects the least expensive instance type
based on the public On-Demand price, while continuing to follow the allocation strategy (either
lowestPrice, capacityOptimized, or diversified) for Spot Instances.
For example, if your target capacity is 10 Spot Instances, and you specify 2 Spot capacity pools (for
InstancePoolsToUseCount), Spot Fleet will draw on the two cheapest pools to fulfill your Spot
capacity.
Note that Spot Fleet attempts to draw Spot Instances from the number of pools that you specify
on a best effort basis. If a pool runs out of Spot capacity before fulfilling your target capacity, Spot
Fleet will continue to fulfill your request by drawing from the next cheapest pool. To ensure that your
target capacity is met, you might receive Spot Instances from more than the number of pools that you
specified. Similarly, if most of the pools have no Spot capacity, you might receive your full target capacity
from fewer than the number of pools that you specified.
You can also express your pool priorities by using the capacityOptimizedPrioritized allocation
strategy and then setting the order of instance types to use from highest to lowest priority. Using
priorities is supported only if your fleet uses a launch template. Note that when you set priorities for
capacityOptimizedPrioritized, the same priorities are also applied to your On-Demand Instances
if the OnDemandAllocationStrategy is set to prioritized. For an example configuration, see
Example 10: Launch Spot Instances in a capacity-optimized fleet with priorities (p. 923).
Attribute-based instance type selection for Spot Fleet
When you create a Spot Fleet, instead of manually specifying the instance
types, you can specify the attributes that an instance must have, and Amazon EC2 will identify all
the instance types with those attributes. This is known as attribute-based instance type selection. For
example, you can specify the minimum and maximum number of vCPUs required for your instances,
and Spot Fleet will launch the instances using any available instance types that meet those vCPU
requirements.
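The vCPU example above can be sketched as a fragment of a launch template override (a hypothetical snippet; the attribute values are examples, not defaults):

```shell
# Write a hypothetical launch template override fragment that selects
# instance types by attributes instead of naming them explicitly
cat > instance-requirements-snippet.json <<'EOF'
{
    "InstanceRequirements": {
        "VCpuCount": { "Min": 2, "Max": 8 },
        "MemoryMiB": { "Min": 4096 }
    }
}
EOF
```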
Attribute-based instance type selection is ideal for workloads and frameworks that can be flexible about
what instance types they use, such as when running containers or web fleets, processing big data, and
implementing continuous integration and deployment (CI/CD) tooling.
Benefits
• With so many instance types available, finding the right instance types for your workload can be
time consuming. When you specify instance attributes, the instance types will automatically have the
required attributes for your workload.
• To manually specify multiple instance types for a Spot Fleet, you must create a separate launch
template override for each instance type. But with attribute-based instance type selection, to provide
multiple instance types, you need only specify the instance attributes in the launch template or in a
launch template override.
• When you specify instance attributes rather than instance types, your fleet can use newer generation
instance types as they’re released, "future proofing" the fleet's configuration.
• When you specify instance attributes rather than instance types, Spot Fleet can select from a wide
range of instance types for launching Spot Instances, which adheres to the Spot best practice of
instance type flexibility (p. 428).
Topics
• How attribute-based instance type selection works (p. 826)
• Considerations (p. 828)
• Create a Spot Fleet with attribute-based instance type selection (p. 828)
• Examples of configurations that are valid and not valid (p. 834)
• Preview instance types with specified attributes (p. 840)
Topics
• Types of instance attributes (p. 826)
• Where to configure attribute-based instance type selection (p. 827)
• How Spot Fleet uses attribute-based instance type selection when provisioning a fleet (p. 827)
• Understand price protection (p. 827)
There are several instance attributes that you can specify to express your compute requirements. For a
description of each attribute and the default values, see InstanceRequirements in the Amazon EC2 API
Reference.
Depending on whether you use the console or the AWS CLI, you can specify the instance attributes for
attribute-based instance type selection as follows:
In the console, you can specify the instance attributes in one or both of the following fleet configuration
components:
• In a launch template, and reference the launch template in the fleet request
• In the fleet request
In the AWS CLI, you can specify the instance attributes in one or all of the following fleet configuration
components:
• In a launch template, and reference the launch template in the fleet request
• In a launch template override
If you want a mix of instances that use different AMIs, you can specify instance attributes in multiple
launch template overrides. For example, different instance types can use x86 and Arm-based
processors.
• In a launch specification
How Spot Fleet uses attribute-based instance type selection when provisioning a fleet
• Spot Fleet identifies the instance types that have the specified attributes.
• Spot Fleet uses price protection to determine which instance types to exclude.
• Spot Fleet determines the capacity pools from which it will consider launching the instances based on
the AWS Regions or Availability Zones that have matching instance types.
• Spot Fleet applies the specified allocation strategy to determine from which capacity pools to launch
the instances.
Note that attribute-based instance type selection does not pick the capacity pools from which to
provision the fleet; that's the job of the allocation strategies. There might be a large number of
instance types with the specified attributes, and some of them might be expensive. The default
allocation strategy of lowest-price for Spot and On-Demand guarantees that Spot Fleet will launch
instances from the least expensive capacity pools.
If you specify an allocation strategy, Spot Fleet will launch instances according to the specified
allocation strategy.
• For Spot Instances, attribute-based instance type selection supports the capacity-optimized and
lowest-price allocation strategies.
• For On-Demand Instances, attribute-based instance type selection supports the lowest-price
allocation strategy.
• If there is no capacity for the instance types with the specified instance attributes, no instances can be
launched, and the fleet returns an error.
Price protection is a feature that prevents your Spot Fleet from using instance types that you would
consider too expensive even if they happen to fit the attributes that you specified. When you create a
fleet with attribute-based instance type selection, price protection is enabled by default, with separate
thresholds for On-Demand Instances and Spot Instances. When Amazon EC2 selects instance types with
your attributes, it excludes instance types priced above your threshold. The thresholds represent the
maximum you'll pay, expressed as a percentage above the least expensive M, C, or R instance type with
your specified attributes.
If you don't specify a threshold, the following thresholds are used by default: 20 percent for On-
Demand Instances, and 100 percent for Spot Instances.
Considerations
• You can specify either instance types or instance attributes in a Spot Fleet, but not both at the same
time.
When using the CLI, the launch template overrides will override the launch template. For example,
if the launch template contains an instance type and the launch template override contains instance
attributes, the instances that are identified by the instance attributes will override the instance type in
the launch template.
• When using the CLI, when you specify instance attributes as overrides, you can't also specify weights or
priorities.
• You can specify a maximum of three InstanceRequirements structures in a request configuration.
Topics
• Create a Spot Fleet using the console (p. 828)
• Create a Spot Fleet using the AWS CLI (p. 829)
While creating the Spot Fleet, configure the fleet for attribute-based instance type selection as
follows:
a. For Instance type requirements, choose Specify instance attributes that match your compute
requirements.
b. For vCPUs, enter the desired minimum and maximum number of vCPUs. To specify no limit,
select No minimum, No maximum, or both.
c. For Memory (GiB), enter the desired minimum and maximum amount of memory. To specify no
limit, select No minimum, No maximum, or both.
d. (Optional) For Additional instance attributes, you can optionally specify one or more attributes
to express your compute requirements in more detail. Each additional attribute adds further
constraints to your request.
e. (Optional) Expand Preview matching instance types to view the instance types that have your
specified attributes.
• Use the request-spot-fleet (AWS CLI) command to create a Spot Fleet. Specify the fleet
configuration in a JSON file.
The following JSON file contains all of the parameters that can be specified when configuring
a Spot Fleet. The parameters for attribute-based instance type selection are located in the
InstanceRequirements structure. For a description of each attribute and the default values, see
InstanceRequirements in the Amazon EC2 API Reference.
Note
When InstanceRequirements is included in the fleet configuration, InstanceType and
WeightedCapacity must be excluded; they can't be specified in the same fleet configuration as
instance attributes.
{
"DryRun": true,
"SpotFleetRequestConfig": {
"AllocationStrategy": "diversified",
"OnDemandAllocationStrategy": "lowestPrice",
"SpotMaintenanceStrategies": {
"CapacityRebalance": {
"ReplacementStrategy": "launch"
}
},
"ClientToken": "",
"ExcessCapacityTerminationPolicy": "default",
"FulfilledCapacity": 0.0,
"OnDemandFulfilledCapacity": 0.0,
"IamFleetRole": "",
"LaunchSpecifications": [
{
"SecurityGroups": [
{
"GroupName": "",
"GroupId": ""
}
],
"AddressingType": "",
"BlockDeviceMappings": [
{
"DeviceName": "",
"VirtualName": "",
"Ebs": {
"DeleteOnTermination": true,
"Iops": 0,
"SnapshotId": "",
"VolumeSize": 0,
"VolumeType": "st1",
"KmsKeyId": "",
"Throughput": 0,
"OutpostArn": "",
"Encrypted": true
},
"NoDevice": ""
}
],
"EbsOptimized": true,
"IamInstanceProfile": {
"Arn": "",
"Name": ""
},
"ImageId": "",
"InstanceType": "vt1.24xlarge",
"KernelId": "",
"KeyName": "",
"Monitoring": {
"Enabled": true
},
"NetworkInterfaces": [
{
"AssociatePublicIpAddress": true,
"DeleteOnTermination": true,
"Description": "",
"DeviceIndex": 0,
"Groups": [
""
],
"Ipv6AddressCount": 0,
"Ipv6Addresses": [
{
"Ipv6Address": ""
}
],
"NetworkInterfaceId": "",
"PrivateIpAddress": "",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": ""
}
],
"SecondaryPrivateIpAddressCount": 0,
"SubnetId": "",
"AssociateCarrierIpAddress": true,
"InterfaceType": "",
"NetworkCardIndex": 0,
"Ipv4Prefixes": [
{
"Ipv4Prefix": ""
}
],
"Ipv4PrefixCount": 0,
"Ipv6Prefixes": [
{
"Ipv6Prefix": ""
}
],
"Ipv6PrefixCount": 0
}
],
"Placement": {
"AvailabilityZone": "",
"GroupName": "",
"Tenancy": "dedicated"
},
"RamdiskId": "",
"SpotPrice": "",
"SubnetId": "",
"UserData": "",
"WeightedCapacity": 0.0,
"TagSpecifications": [
{
"ResourceType": "placement-group",
"Tags": [
{
"Key": "",
"Value": ""
}
]
}
],
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 0
},
"MemoryMiB": {
"Min": 0,
"Max": 0
},
"CpuManufacturers": [
"intel"
],
"MemoryGiBPerVCpu": {
"Min": 0.0,
"Max": 0.0
},
"ExcludedInstanceTypes": [
""
],
"InstanceGenerations": [
"previous"
],
"SpotMaxPricePercentageOverLowestPrice": 0,
"OnDemandMaxPricePercentageOverLowestPrice": 0,
"BareMetal": "included",
"BurstablePerformance": "excluded",
"RequireHibernateSupport": true,
"NetworkInterfaceCount": {
"Min": 0,
"Max": 0
},
"LocalStorage": "required",
"LocalStorageTypes": [
"ssd"
],
"TotalLocalStorageGB": {
"Min": 0.0,
"Max": 0.0
},
"BaselineEbsBandwidthMbps": {
"Min": 0,
"Max": 0
},
"AcceleratorTypes": [
"fpga"
],
"AcceleratorCount": {
"Min": 0,
"Max": 0
},
"AcceleratorManufacturers": [
"amd"
],
"AcceleratorNames": [
"t4"
],
"AcceleratorTotalMemoryMiB": {
"Min": 0,
"Max": 0
}
}
}
],
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "",
"LaunchTemplateName": "",
"Version": ""
},
"Overrides": [
{
"InstanceType": "t4g.large",
"SpotPrice": "",
"SubnetId": "",
"AvailabilityZone": "",
"WeightedCapacity": 0.0,
"Priority": 0.0,
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 0
},
"MemoryMiB": {
"Min": 0,
"Max": 0
},
"CpuManufacturers": [
"amd"
],
"MemoryGiBPerVCpu": {
"Min": 0.0,
"Max": 0.0
},
"ExcludedInstanceTypes": [
""
],
"InstanceGenerations": [
"current"
],
"SpotMaxPricePercentageOverLowestPrice": 0,
"OnDemandMaxPricePercentageOverLowestPrice": 0,
"BareMetal": "excluded",
"BurstablePerformance": "excluded",
"RequireHibernateSupport": true,
"NetworkInterfaceCount": {
"Min": 0,
"Max": 0
},
"LocalStorage": "included",
"LocalStorageTypes": [
"ssd"
],
"TotalLocalStorageGB": {
"Min": 0.0,
"Max": 0.0
},
"BaselineEbsBandwidthMbps": {
"Min": 0,
"Max": 0
},
"AcceleratorTypes": [
"gpu"
],
"AcceleratorCount": {
"Min": 0,
"Max": 0
},
"AcceleratorManufacturers": [
"xilinx"
],
"AcceleratorNames": [
"vu9p"
],
"AcceleratorTotalMemoryMiB": {
"Min": 0,
"Max": 0
}
}
}
]
}
],
"SpotPrice": "",
"TargetCapacity": 0,
"OnDemandTargetCapacity": 0,
"OnDemandMaxTotalPrice": "",
"SpotMaxTotalPrice": "",
"TerminateInstancesWithExpiration": true,
"Type": "request",
"ValidFrom": "1970-01-01T00:00:00",
"ValidUntil": "1970-01-01T00:00:00",
"ReplaceUnhealthyInstances": true,
"InstanceInterruptionBehavior": "hibernate",
"LoadBalancersConfig": {
"ClassicLoadBalancersConfig": {
"ClassicLoadBalancers": [
{
"Name": ""
}
]
},
"TargetGroupsConfig": {
"TargetGroups": [
{
"Arn": ""
}
]
}
},
"InstancePoolsToUseCount": 0,
"Context": "",
"TargetCapacityUnitType": "memory-mib",
"TagSpecifications": [
{
"ResourceType": "instance",
"Tags": [
{
"Key": "",
"Value": ""
}
]
}
]
}
}
Example configurations
The following example configurations show valid and not valid uses of attribute-based instance type
selection. A configuration is not valid when the Overrides contain both InstanceRequirements and
InstanceType, or when two InstanceRequirements structures contain overlapping attribute values.
• Valid configuration: Single launch template with overrides (p. 834)
• Valid configuration: Single launch template with multiple InstanceRequirements (p. 835)
• Valid configuration: Two launch templates, each with overrides (p. 836)
• Configuration not valid: Overrides contain InstanceRequirements and InstanceType (p. 837)
• Configuration not valid: Two Overrides contain InstanceRequirements and InstanceType (p. 837)
• Valid configuration: Only InstanceRequirements specified, no overlapping attribute values (p. 838)
• Configuration not valid: Overlapping attribute values (p. 839)
The following configuration is valid. It contains one launch template and one Overrides structure
containing one InstanceRequirements structure. A text explanation of the example configuration
follows.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::000000000000:role/aws-ec2-spot-fleet-tagging-role",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "My-launch-template",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 2,
"Max": 8
},
"MemoryMiB": {
"Min": 0,
"Max": 10240
},
"MemoryGiBPerVCpu": {
"Max": 10000
},
"RequireHibernateSupport": true
}
}
]
}
],
"TargetCapacity": 5000,
"OnDemandTargetCapacity": 0,
"TargetCapacityUnitType": "vcpu"
}
}
InstanceRequirements
To use attribute-based instance selection, you must include the InstanceRequirements structure in
your fleet configuration, and specify the desired attributes for the instances in the fleet.
• VCpuCount – The instance types must have a minimum of 2 and a maximum of 8 vCPUs.
• MemoryMiB – The instance types must have a maximum of 10240 MiB of memory. A minimum of 0
indicates no minimum limit.
• MemoryGiBPerVCpu – The instance types must have a maximum of 10,000 GiB of memory per vCPU.
The Min parameter is optional. By omitting it, you indicate no minimum limit.
TargetCapacityUnitType
The TargetCapacityUnitType parameter specifies the unit for the target capacity. In the example,
the target capacity is 5000 and the target capacity unit type is vcpu, which together specify a desired
target capacity of 5,000 vCPUs. Spot Fleet will launch enough instances so that the total number of
vCPUs in the fleet is 5,000 vCPUs.
The following configuration is valid. It contains a single launch template with one Overrides
structure that contains two InstanceRequirements structures with non-overlapping attribute values.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::000000000000:role/aws-ec2-spot-fleet-tagging-role",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
},
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 4,
"Max": 8
},
"MemoryMiB": {
"Min": 0
}
}
}
]
}
],
"TargetCapacity": 1,
"OnDemandTargetCapacity": 0,
"Type": "maintain"
}
}
The following configuration is valid. It contains two launch templates, each with one Overrides
structure containing one InstanceRequirements structure. This configuration is useful for arm and
x86 architecture support in the same fleet.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::000000000000:role/aws-ec2-spot-fleet-tagging-role",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "armLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
},
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "x86LaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
}
],
"TargetCapacity": 1,
"OnDemandTargetCapacity": 0,
"Type": "maintain"
}
}
The following configuration is not valid. The Overrides structure contains both
InstanceRequirements and InstanceType. For the Overrides, you can specify either
InstanceRequirements or InstanceType, but not both.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::000000000000:role/aws-ec2-spot-fleet-tagging-role",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
},
{
"InstanceType": "m5.large"
}
]
}
],
"TargetCapacity": 1,
"OnDemandTargetCapacity": 0,
"Type": "maintain"
}
}
The following configuration is not valid. The Overrides structures contain both
InstanceRequirements and InstanceType. You can specify either InstanceRequirements or
InstanceType, but not both, even if they're in different Overrides structures.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::000000000000:role/aws-ec2-spot-fleet-tagging-role",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
},
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyOtherLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "m5.large"
}
]
}
],
"TargetCapacity": 1,
"OnDemandTargetCapacity": 0,
"Type": "maintain"
}
}
The following configuration is valid. Only InstanceRequirements is specified in each Overrides
structure, and the attribute values do not overlap (the VCpuCount ranges are 0-2 and 4-8).
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::000000000000:role/aws-ec2-spot-fleet-tagging-role",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
},
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyOtherLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 4,
"Max": 8
},
"MemoryMiB": {
"Min": 0
}
}
}
]
}
],
"TargetCapacity": 1,
"OnDemandTargetCapacity": 0,
"Type": "maintain"
}
}
The following configuration is not valid. The two InstanceRequirements structures each contain
"VCpuCount": {"Min": 0, "Max": 2}. The values for these attributes overlap, which will result in
duplicate capacity pools.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::000000000000:role/aws-ec2-spot-fleet-tagging-role",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "MyLaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
},
{
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 2
},
"MemoryMiB": {
"Min": 0
}
}
}
]
}
],
"TargetCapacity": 1,
"OnDemandTargetCapacity": 0,
"Type": "maintain"
}
}
To preview a list of instance types by specifying attributes using the AWS CLI
1. (Optional) To generate all of the possible attributes that can be specified, use the get-instance-
types-from-instance-requirements command and the --generate-cli-skeleton parameter. You
can optionally direct the output to a file to save it by appending input > attributes.json to the
command.
Expected output
{
"DryRun": true,
"ArchitectureTypes": [
"x86_64_mac"
],
"VirtualizationTypes": [
"paravirtual"
],
"InstanceRequirements": {
"VCpuCount": {
"Min": 0,
"Max": 0
},
"MemoryMiB": {
"Min": 0,
"Max": 0
},
"CpuManufacturers": [
"intel"
],
"MemoryGiBPerVCpu": {
"Min": 0.0,
"Max": 0.0
},
"ExcludedInstanceTypes": [
""
],
"InstanceGenerations": [
"current"
],
"SpotMaxPricePercentageOverLowestPrice": 0,
"OnDemandMaxPricePercentageOverLowestPrice": 0,
"BareMetal": "included",
"BurstablePerformance": "excluded",
"RequireHibernateSupport": true,
"NetworkInterfaceCount": {
"Min": 0,
"Max": 0
},
"LocalStorage": "required",
"LocalStorageTypes": [
"hdd"
],
"TotalLocalStorageGB": {
"Min": 0.0,
"Max": 0.0
},
"BaselineEbsBandwidthMbps": {
"Min": 0,
"Max": 0
},
"AcceleratorTypes": [
"inference"
],
"AcceleratorCount": {
"Min": 0,
"Max": 0
},
"AcceleratorManufacturers": [
"xilinx"
],
"AcceleratorNames": [
"t4"
],
"AcceleratorTotalMemoryMiB": {
"Min": 0,
"Max": 0
}
},
"MaxResults": 0,
"NextToken": ""
}
2. Create a JSON configuration file using the output from the previous step, and configure it as follows:
Note
You must provide values for ArchitectureTypes, VirtualizationTypes, VCpuCount,
and MemoryMiB. You can omit the other attributes; when omitted, the default values are
used.
For a description of each attribute and their default values, see get-instance-types-from-
instance-requirements in the Amazon EC2 Command Line Reference.
d. For MemoryMiB, specify the minimum and maximum amount of memory in MiB. To specify no
minimum limit, for Min, specify 0. To specify no maximum limit, omit the Max parameter.
e. You can optionally specify one or more of the other attributes to further constrain the list of
instance types that are returned.
3. To preview the instance types that have the attributes that you specified in the JSON file, use the
get-instance-types-from-instance-requirements command, and specify the name and path to your
JSON file by using the --cli-input-json parameter. You can optionally format the output to
appear in a table format.
In this example, the required attributes are included in the JSON file. They are
ArchitectureTypes, VirtualizationTypes, VCpuCount, and MemoryMiB. In addition, the
optional InstanceGenerations attribute is also included. Note that for MemoryMiB, the Max
value can be omitted to indicate that there is no limit.
{
"ArchitectureTypes": [
"x86_64"
],
"VirtualizationTypes": [
"hvm"
],
"InstanceRequirements": {
"VCpuCount": {
"Min": 4,
"Max": 6
},
"MemoryMiB": {
"Min": 2048
},
"InstanceGenerations": [
"current"
]
}
}
Example output
------------------------------------------
|GetInstanceTypesFromInstanceRequirements|
+----------------------------------------+
|| InstanceTypes ||
|+--------------------------------------+|
|| InstanceType ||
|+--------------------------------------+|
|| c4.xlarge ||
|| c5.xlarge ||
|| c5a.xlarge ||
|| c5ad.xlarge ||
|| c5d.xlarge ||
|| c5n.xlarge ||
|| d2.xlarge ||
...
4. After identifying instance types that meet your needs, make note of the instance attributes that you
used so that you can use them when configuring your fleet request.
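The attribute filtering that this command performs can be sketched locally (a simplified illustration with a hypothetical two-entry catalog; the real command evaluates the full EC2 instance-type list and many more attributes):

```python
def matches(instance, req):
    """Check one instance type against VCpuCount / MemoryMiB requirements.
    A missing Max means no upper limit, as in the JSON file above."""
    vcpu, mem = req["VCpuCount"], req["MemoryMiB"]
    no_limit = float("inf")
    return (vcpu.get("Min", 0) <= instance["vcpus"] <= vcpu.get("Max", no_limit)
            and mem.get("Min", 0) <= instance["memory_mib"] <= mem.get("Max", no_limit))

# Hypothetical catalog entries; the real command queries EC2's instance-type list.
catalog = [
    {"type": "c5.xlarge", "vcpus": 4, "memory_mib": 8192},
    {"type": "t3.micro", "vcpus": 2, "memory_mib": 1024},
]
req = {"VCpuCount": {"Min": 4, "Max": 6}, "MemoryMiB": {"Min": 2048}}
print([i["type"] for i in catalog if matches(i, req)])  # ['c5.xlarge']
```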
For example, you have configured three launch template overrides, each with a different instance type:
c3.large, c4.large, and c5.large. The On-Demand price for c5.large is less than for c4.large.
c3.large is the cheapest. If you do not use priority to determine the order, the fleet fulfills On-Demand
capacity by starting with c3.large, and then c5.large. Because you often have unused Reserved
Instances for c4.large, you can set the launch template override priority so that the order is c4.large,
c3.large, and then c5.large.
Capacity Rebalancing
You can configure Spot Fleet to launch a replacement Spot Instance when Amazon EC2 emits a rebalance
recommendation to notify you that a Spot Instance is at an elevated risk of interruption. Capacity
Rebalancing helps you maintain workload availability by proactively augmenting your fleet with a new
Spot Instance before a running instance is interrupted by Amazon EC2. For more information, see EC2
instance rebalance recommendations (p. 456).
To configure Spot Fleet to launch a replacement Spot Instance, you can use the Amazon EC2 console or
the AWS CLI.
• Amazon EC2 console: You must select the Capacity rebalance check box when you create the Spot
Fleet. For more information, see step 6.d. in Create a Spot Fleet request using defined parameters
(console) (p. 855).
• AWS CLI: Use the request-spot-fleet command and the relevant parameters in the
SpotMaintenanceStrategies structure. For more information, see the example launch
configuration (p. 921).
Limitations
• Capacity Rebalancing is available only for fleets of type maintain.
• When the fleet is running, you can't modify the Capacity Rebalancing setting. To change the Capacity
Rebalancing setting, you must delete the fleet and create a new fleet.
Configuration options
The ReplacementStrategy for Spot Fleet supports the following two values:
launch-before-terminate
Spot Fleet terminates the Spot Instances that receive a rebalance notification after new replacement
Spot Instances are launched. When you specify launch-before-terminate, you must also specify
a value for termination-delay. After the new replacement instances are launched, Spot Fleet
waits for the duration of the termination-delay, and then terminates the old instances. For
termination-delay, the minimum is 120 seconds (2 minutes), and the maximum is 7200 seconds
(2 hours).
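A minimal sketch of the constraint on termination-delay (the function name is illustrative; the real validation is performed by the request-spot-fleet API):

```python
def validate_termination_delay(seconds):
    """launch-before-terminate requires a termination-delay between
    120 seconds (2 minutes) and 7200 seconds (2 hours), inclusive."""
    if not 120 <= seconds <= 7200:
        raise ValueError("termination-delay must be between 120 and 7200 seconds")
    return seconds
```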
We recommend that you use launch-before-terminate only if you can predict how long your
instance shutdown procedures will take to complete. This ensures that the old instances are
terminated only after the shutdown procedures are complete. Note that Amazon EC2 can still
interrupt the old instances with a two-minute warning before the termination-delay elapses.
We strongly recommend against using the lowestPrice allocation strategy in combination with
launch-before-terminate to avoid having replacement Spot Instances that are also at an
elevated risk of interruption.
launch
Spot Fleet launches replacement Spot Instances when a rebalance notification is emitted for existing
Spot Instances. Spot Fleet does not terminate the instances that receive a rebalance notification. You
can terminate the old instances, or you can leave them running. You are charged for all instances
while they are running.
Considerations
If you configure a Spot Fleet for Capacity Rebalancing, consider the following:
Spot Fleet can launch new replacement Spot Instances until fulfilled capacity is double target
capacity
When a Spot Fleet is configured for Capacity Rebalancing, the fleet attempts to launch a new
replacement Spot Instance for every Spot Instance that receives a rebalance recommendation. After
a Spot Instance receives a rebalance recommendation, it is no longer counted as part of the fulfilled
capacity. Depending on the replacement strategy, Spot Fleet either terminates the instance after
a preconfigured termination delay, or leaves it running. This gives you the opportunity to perform
rebalancing actions (p. 457) on the instance.
If your fleet reaches double its target capacity, it stops launching new replacement instances even if
the replacement instances themselves receive a rebalance recommendation.
For example, you create a Spot Fleet with a target capacity of 100 Spot Instances. All of the Spot
Instances receive a rebalance recommendation, which causes Spot Fleet to launch 100 replacement
Spot Instances. This raises the number of fulfilled Spot Instances to 200, which is double the target
capacity. Some of the replacement instances receive a rebalance recommendation, but no more
replacement instances are launched because the fleet cannot exceed double its target capacity.
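The capacity cap described above can be sketched as follows (an illustrative simplification; the function name is hypothetical and the real accounting is handled by Spot Fleet):

```python
def replacements_to_launch(target, fulfilled, rebalance_recommended):
    """Replacement Spot Instances the fleet may still launch: one per instance
    marked for rebalance, but never pushing fulfilled capacity past 2x target."""
    headroom = max(0, 2 * target - fulfilled)
    return min(rebalance_recommended, headroom)

# 100-instance fleet, all 100 marked for rebalance: 100 replacements launch,
# raising fulfilled capacity to 200 (double the target).
print(replacements_to_launch(100, 100, 100))  # 100
# At 200 fulfilled (2x target), further recommendations launch nothing.
print(replacements_to_launch(100, 200, 10))   # 0
```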
Note that you are charged for all of the instances while they are running.
We recommend that you configure Spot Fleet to terminate Spot Instances that receive a rebalance
recommendation
If you configure your Spot Fleet for Capacity Rebalancing, we recommend that you choose launch-
before-terminate with an appropriate termination delay only if you can predict how long your
instance shutdown procedures will take to complete. This will ensure that the old instances are
terminated only after the shutdown procedures are completed.
If you choose to terminate the instances that are recommended for rebalance yourself, we
recommend that you monitor the rebalance recommendation signal that is received by the Spot
Instances in the fleet. By monitoring the signal, you can quickly perform rebalancing actions (p. 457)
on the affected instances before Amazon EC2 interrupts them, and then you can manually terminate
them. If you do not terminate the instances, you continue paying for them while they are running.
Spot Fleet does not automatically terminate the instances that receive a rebalance recommendation.
You can set up notifications using Amazon EventBridge or instance metadata. For more information,
see Monitor rebalance recommendation signals (p. 457).
Spot Fleet does not count instances that receive a rebalance recommendation when calculating
fulfilled capacity during scale in or out
If your Spot Fleet is configured for Capacity Rebalancing, and you change the target capacity to
either scale in or scale out, the fleet does not count the instances that are marked for rebalance as
part of the fulfilled capacity, as follows:
• Scale in – If you decrease your desired target capacity, the fleet terminates instances that are
not marked for rebalance until the desired capacity is reached. The instances that are marked for
rebalance are not counted towards the fulfilled capacity.
For example, you create a Spot Fleet with a target capacity of 100 Spot Instances. 10 instances
receive a rebalance recommendation, so the fleet launches 10 new replacement instances,
resulting in a fulfilled capacity of 110 instances. You then reduce the target capacity to 50 (scale
in), but the fulfilled capacity is actually 60 instances because the 10 instances that are marked for
rebalance are not terminated by the fleet. You need to manually terminate these instances, or you
can leave them running.
• Scale out – If you increase your desired target capacity, the fleet launches new instances until the
desired capacity is reached. The instances that are marked for rebalance are not counted towards
the fulfilled capacity.
For example, you create a Spot Fleet with a target capacity of 100 Spot Instances. 10 instances
receive a rebalance recommendation, so the fleet launches 10 new replacement instances,
resulting in a fulfilled capacity of 110 instances. You then increase the target capacity to 200
(scale out), but the fulfilled capacity is actually 210 instances because the 10 instances that are
marked for rebalance are not counted by the fleet as part of the target capacity. You need to
manually terminate these instances, or you can leave them running.
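The scale-in and scale-out arithmetic above can be sketched as follows (illustrative only; the function name is hypothetical):

```python
def running_after_scale(new_target, marked_for_rebalance):
    """Fleet-managed capacity moves to the new target, but instances marked
    for rebalance keep running outside that count until you terminate them."""
    return new_target + marked_for_rebalance

# Scale in:  target 100 -> 50 with 10 marked instances leaves 60 running.
print(running_after_scale(50, 10))   # 60
# Scale out: target 100 -> 200 with 10 marked instances leaves 210 running.
print(running_after_scale(200, 10))  # 210
```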
Provide as many Spot capacity pools in the request as possible
Configure your Spot Fleet to use multiple instance types and Availability Zones. This provides the
flexibility to launch Spot Instances in various Spot capacity pools. For more information, see Be
flexible about instance types and Availability Zones (p. 428).
Avoid an elevated risk of interruption of replacement Spot Instances
Your replacement Spot Instances may be at an elevated risk of interruption if you use the lowest-
price allocation strategy. This is because Amazon EC2 will always launch instances in the lowest-
priced pool that has available capacity at that moment, even if your replacement Spot Instances
are likely to be interrupted soon after being launched. To avoid an elevated risk of interruption, we
strongly recommend against using the lowest-price allocation strategy, and instead recommend
the capacity-optimized or capacity-optimized-prioritized allocation strategy. These
strategies ensure that replacement Spot Instances are launched in the most optimal Spot capacity
pools, and are therefore less likely to be interrupted in the near future. For more information, see
Use the capacity optimized allocation strategy (p. 428).
You can optionally specify a maximum price in one or more launch specifications. This price is specific
to the launch specification. If a launch specification includes a specific price, the Spot Fleet uses this
maximum price, overriding the global maximum price. Any other launch specifications that do not
include a specific maximum price still use the global maximum price.
Control spending
Spot Fleet stops launching instances when it has either reached the target capacity or the maximum
amount you’re willing to pay. To control the amount you pay per hour for your fleet, you can specify the
SpotMaxTotalPrice for Spot Instances and the OnDemandMaxTotalPrice for On-Demand Instances.
When the maximum total price is reached, Spot Fleet stops launching instances even if it hasn’t met the
target capacity.
The following examples show two different scenarios. In the first, Spot Fleet stops launching instances
when it has met the target capacity. In the second, Spot Fleet stops launching instances when it has
reached the maximum amount you’re willing to pay.
In both scenarios, the fleet requests an On-Demand target capacity of 10 instances at $0.10 per
instance hour. In the first scenario, Spot Fleet launches all 10 On-Demand Instances because the total
of $1.00 (10 instances x $0.10) does not exceed the OnDemandMaxTotalPrice of $1.50.
If Spot Fleet launches the On-Demand target capacity (10 On-Demand Instances), the total cost per
hour would be $1.00. This is more than the amount ($0.80) specified for OnDemandMaxTotalPrice.
To prevent spending more than you're willing to pay, Spot Fleet launches only 8 On-Demand
Instances (below the On-Demand target capacity) because launching more would exceed the
OnDemandMaxTotalPrice.
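The budget arithmetic in these two scenarios can be sketched as follows (a minimal illustration; the function name is hypothetical, and prices are handled in integer cents to avoid floating-point division surprises):

```python
def instances_within_budget(target_capacity, price_cents_per_hour, max_total_cents):
    """Instances the fleet launches: capped by both the target capacity and the
    hourly budget (OnDemandMaxTotalPrice or SpotMaxTotalPrice)."""
    affordable = max_total_cents // price_cents_per_hour
    return min(target_capacity, affordable)

print(instances_within_budget(10, 10, 150))  # 10: target reached under the $1.50 cap
print(instances_within_budget(10, 10, 80))   # 8: the $0.80 cap limits the fleet
```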
By default, the price that you specify is per instance hour. When you use the instance weighting feature,
the price that you specify is per unit hour. You can calculate your price per unit hour by dividing your
price for an instance type by the number of units that it represents. Spot Fleet calculates the number
of Spot Instances to launch by dividing the target capacity by the instance weight. If the result isn't an
integer, the Spot Fleet rounds it up to the next integer, so that the size of your fleet is not below its
target capacity. Spot Fleet can select any pool that you specify in your launch specification, even if the
capacity of the instances launched exceeds the requested target capacity.
The following tables provide examples of calculations to determine the price per unit for a Spot Fleet
request with a target capacity of 10.
(Table columns: Instance type, Instance weight, Price per instance hour, Price per unit hour, Number
of instances launched.)
Use Spot Fleet instance weighting as follows to provision the target capacity that you want in the pools
with the lowest price per unit at the time of fulfillment:
1. Set the target capacity for your Spot Fleet either in instances (the default) or in the units of your
choice, such as virtual CPUs, memory, storage, or throughput.
2. Set the price per unit.
3. For each launch configuration, specify the weight, which is the number of units that the instance
type represents toward the target capacity.
For example, consider a Spot Fleet request with the following configuration:
• A target capacity of 24
• A launch specification with an instance type r3.2xlarge and a weight of 6
• A launch specification with an instance type c3.xlarge and a weight of 5
The weights represent the number of units that each instance type represents toward the target capacity. If
the first launch specification provides the lowest price per unit (price for r3.2xlarge per instance hour
divided by 6), the Spot Fleet would launch four of these instances (24 divided by 6).
If the second launch specification provides the lowest price per unit (price for c3.xlarge per instance
hour divided by 5), the Spot Fleet would launch five of these instances (24 divided by 5, result rounded
up).
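The calculations above can be sketched as follows (a minimal illustration; the function names are hypothetical):

```python
import math

def price_per_unit_hour(price_per_instance_hour, weight):
    # Price per unit hour: instance-hour price divided by the instance weight.
    return price_per_instance_hour / weight

def instances_to_launch(target_capacity, weight):
    # Spot Fleet rounds up so the fleet never falls below its target capacity.
    return math.ceil(target_capacity / weight)

print(instances_to_launch(24, 6))  # 4 (the r3.2xlarge specification)
print(instances_to_launch(24, 5))  # 5 (the c3.xlarge specification, 4.8 rounded up)
```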
As another example, consider a Spot Fleet request with the following configuration:
• A target capacity of 30
• A launch specification with an instance type c3.2xlarge and a weight of 8
• A launch specification with an instance type m3.xlarge and a weight of 8
• A launch specification with an instance type r3.xlarge and a weight of 8
The Spot Fleet would launch four instances (30 divided by 8, result rounded up). With the lowestPrice
strategy, all four instances come from the pool that provides the lowest price per unit. With the
diversified strategy, the Spot Fleet launches one instance in each of the three pools, and the fourth
instance in whichever pool provides the lowest price per unit.
Work with Spot Fleets
If your fleet includes Spot Instances, Amazon EC2 can attempt to maintain your fleet target capacity as
Spot prices change.
It is not possible to modify the target capacity of a one-time request after it's been submitted. To change
the target capacity, cancel the request and submit a new one.
A Spot Fleet request remains active until it expires or you cancel it. When you cancel a fleet request, you
can specify whether canceling the request terminates the Spot Instances in that fleet.
Contents
• Spot Fleet request states (p. 848)
• Spot Fleet health checks (p. 849)
• Spot Fleet permissions (p. 850)
• Create a Spot Fleet request (p. 854)
• Tag a Spot Fleet (p. 858)
• Monitor your Spot Fleet (p. 864)
• Modify a Spot Fleet request (p. 864)
• Cancel a Spot Fleet request (p. 866)
A Spot Fleet request can be in one of the following states:
• submitted – The Spot Fleet request is being evaluated and Amazon EC2 is preparing to launch the
target number of instances.
• active – The Spot Fleet has been validated and Amazon EC2 is attempting to maintain the target
number of running Spot Instances. The request remains in this state until it is modified or canceled.
• modifying – The Spot Fleet request is being modified. The request remains in this state until the
modification is fully processed or the Spot Fleet is canceled. A one-time request cannot be modified,
and this state does not apply to such Spot requests.
• cancelled_running – The Spot Fleet is canceled and does not launch additional Spot Instances. Its
existing Spot Instances continue to run until they are interrupted or terminated. The request remains
in this state until all instances are interrupted or terminated.
• cancelled_terminating – The Spot Fleet is canceled and its Spot Instances are terminating. The
request remains in this state until all instances are terminated.
• cancelled – The Spot Fleet is canceled and has no running Spot Instances. The Spot Fleet request is
deleted two days after its instances were terminated.
The following illustration represents the transitions between the request states. If you exceed your Spot
Fleet limits, the request is canceled immediately.
Spot Fleet determines the health status of an instance by using the status checks provided by Amazon
EC2. An instance is determined as unhealthy when the status of either the instance status check or the
system status check is impaired for three consecutive health checks. For more information, see Status
checks for your instances (p. 928).
You can configure your fleet to replace unhealthy Spot Instances. After enabling health check
replacement, a Spot Instance is replaced when it is reported as unhealthy. The fleet could go below its
target capacity for up to a few minutes while an unhealthy Spot Instance is being replaced.
Requirements
• Health check replacement is supported only for Spot Fleets that maintain a target capacity (fleets of
type maintain), not for one-time Spot Fleets (fleets of type request).
• Health check replacement is supported only for Spot Instances. This feature is not supported for On-
Demand Instances.
• You can configure your Spot Fleet to replace unhealthy instances only when you create it.
• IAM users can use health check replacement only if they have permission to call the
ec2:DescribeInstanceStatus action.
Console
To configure a Spot Fleet to replace unhealthy Spot Instances using the console
1. Follow the steps for creating a Spot Fleet. For more information, see Create a Spot Fleet request
using defined parameters (console) (p. 855).
2. To configure the fleet to replace unhealthy Spot Instances, for Health check, choose Replace
unhealthy instances. To enable this option, you must first choose Maintain target capacity.
AWS CLI
To configure a Spot Fleet to replace unhealthy Spot Instances using the AWS CLI
1. Follow the steps for creating a Spot Fleet. For more information, see Create a Spot Fleet using
the AWS CLI (p. 857).
2. To configure the fleet to replace unhealthy Spot Instances, for ReplaceUnhealthyInstances,
enter true.
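As a sketch, the relevant parameter sits at the top level of the Spot Fleet request configuration (the file name, AMI ID, account ID, and role ARN below are illustrative placeholders in the style of the examples later in this section):

```shell
# Illustrative: create a request configuration with ReplaceUnhealthyInstances
# enabled. The fleet type must be "maintain" for health check replacement.
cat > config.json <<'EOF'
{
  "IamFleetRole": "arn:aws:iam::111122223333:role/aws-ec2-spot-fleet-tagging-role",
  "LaunchSpecifications": [
    { "ImageId": "ami-0123456789EXAMPLE", "InstanceType": "c4.large" }
  ],
  "TargetCapacity": 2,
  "Type": "maintain",
  "ReplaceUnhealthyInstances": true
}
EOF
aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json
```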
If you use the Amazon EC2 console to create a Spot Fleet, it creates two service-linked roles named
AWSServiceRoleForEC2SpotFleet and AWSServiceRoleForEC2Spot, and a role named aws-ec2-
spot-fleet-tagging-role. These roles grant the Spot Fleet the permissions to request, launch, terminate,
and tag resources on your behalf. If you use the AWS CLI or an API, you must ensure that these roles
exist.
Use the following instructions to grant the required permissions and create the roles.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances",
"ec2:CreateTags",
"ec2:RequestSpotFleet",
"ec2:ModifySpotFleetRequest",
"ec2:CancelSpotFleetRequests",
"ec2:DescribeSpotFleetRequests",
"ec2:DescribeSpotFleetInstances",
"ec2:DescribeSpotFleetRequestHistory"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::*:role/aws-ec2-spot-fleet-tagging-role"
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole",
"iam:ListRoles",
"iam:ListInstanceProfiles"
],
"Resource": "*"
}
]
}
The preceding example policy grants an IAM user the permissions required for most Spot Fleet use
cases. To limit the user to specific API actions, specify only those API actions instead.
Important
If you specify a role for the IAM instance profile in the launch specification or launch
template, you must grant the IAM user the permission to pass the role to the service. To do
this, in the IAM policy include "arn:aws:iam::*:role/IamInstanceProfile-role"
as a resource for the iam:PassRole action. For more information, see Granting a user
permissions to pass a role to an AWS service in the IAM User Guide.
Add the following Spot Fleet API actions to your policy, as needed:
• ec2:RequestSpotFleet
• ec2:ModifySpotFleetRequest
• ec2:CancelSpotFleetRequests
• ec2:DescribeSpotFleetRequests
• ec2:DescribeSpotFleetInstances
• ec2:DescribeSpotFleetRequestHistory
(Optional) To enable an IAM user to create roles or instance profiles using the IAM console, you must
add the following actions to the policy:
• iam:AddRoleToInstanceProfile
• iam:AttachRolePolicy
• iam:CreateInstanceProfile
• iam:CreateRole
• iam:GetRole
• iam:ListPolicies
4. Choose Review policy.
5. On the Review policy page, enter a policy name and description, and choose Create policy.
6. In the navigation pane, choose Users and select the user.
7. Choose Permissions, Add permissions.
8. Choose Attach existing policies directly. Select the policy that you created earlier and choose Next:
Review.
9. Choose Add permissions.
Amazon EC2 uses the service-linked role named AWSServiceRoleForEC2SpotFleet to launch and
manage instances on your behalf.
Important
If you specify an encrypted AMI (p. 189) or an encrypted Amazon EBS snapshot (p. 1536) in your
Spot Fleet, you must grant the AWSServiceRoleForEC2SpotFleet role permission to use the
CMK so that Amazon EC2 can launch instances on your behalf. For more information, see Grant
access to CMKs for use with encrypted AMIs and EBS snapshots (p. 853).
Under most circumstances, you don't need to manually create a service-linked role. Amazon EC2 creates
the AWSServiceRoleForEC2SpotFleet service-linked role the first time you create a Spot Fleet using the
console.
If you had an active Spot Fleet request before October 2017, when Amazon EC2 began supporting this
service-linked role, Amazon EC2 created the AWSServiceRoleForEC2SpotFleet role in your AWS account.
For more information, see A new role appeared in my AWS account in the IAM User Guide.
If you use the AWS CLI or an API to create a Spot Fleet, you must first ensure that this role exists.
If you no longer need to use Spot Fleet, we recommend that you delete the
AWSServiceRoleForEC2SpotFleet role. After this role is deleted from your account, Amazon EC2 will
create the role again if you request a Spot Fleet using the console. For more information, see Deleting a
Service-Linked Role in the IAM User Guide.
Grant access to CMKs for use with encrypted AMIs and EBS snapshots
If you specify an encrypted AMI (p. 189) or an encrypted Amazon EBS snapshot (p. 1536) in your Spot
Fleet request and you use a customer managed customer master key (CMK) for encryption, you must
grant the AWSServiceRoleForEC2SpotFleet role permission to use the CMK so that Amazon EC2 can
launch instances on your behalf. To do this, you must add a grant to the CMK, as shown in the following
procedure.
When providing permissions, grants are an alternative to key policies. For more information, see Using
Grants and Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.
• Use the create-grant command to add a grant to the CMK and to specify the principal (the
AWSServiceRoleForEC2SpotFleet service-linked role) that is given permission to perform the
operations that the grant permits. The CMK is specified by the key-id parameter and the ARN of
the CMK. The principal is specified by the grantee-principal parameter and the ARN of the
AWSServiceRoleForEC2SpotFleet service-linked role.
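A sketch of the command (the key ARN and account ID are placeholders; the grantee path shown is the standard service-linked role path for AWSServiceRoleForEC2SpotFleet, and the set of operations to grant depends on your use case):

```shell
# Illustrative: grant the AWSServiceRoleForEC2SpotFleet service-linked role
# permission to use the customer managed CMK. Add further operations as your
# use case requires.
aws kms create-grant \
    --key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab \
    --grantee-principal arn:aws:iam::111122223333:role/aws-service-role/spotfleet.amazonaws.com/AWSServiceRoleForEC2SpotFleet \
    --operations Decrypt
```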
a. To define the launch parameters in the Spot console, choose Manually configure launch
parameters.
b. For AMI, choose one of the basic AMIs provided by AWS, or choose Search for AMI to use an
AMI from our user community, the AWS Marketplace, or one of your own.
c. (Optional) For Key pair name, choose an existing key pair or create a new one.
[New key pair] Choose Create new key pair to go to the Key Pairs page. When you are done,
return to the Spot Requests page and refresh the list.
d. (Optional) Expand Additional launch parameters, and do the following:
i. (Optional) To enable Amazon EBS optimization, for EBS-optimized, select Launch EBS-
optimized instances.
ii. (Optional) To add temporary block-level storage for your instances, for Instance store,
choose Attach at launch.
iii. (Optional) To add storage, choose Add new volume, and specify additional instance store
volumes or Amazon EBS volumes, depending on the instance type.
iv. (Optional) By default, basic monitoring is enabled for your instances. To enable detailed
monitoring, for Monitoring, select Enable CloudWatch detailed monitoring.
v. (Optional) To run a Dedicated Spot Instance, for Tenancy, choose Dedicated - run a
dedicated instance.
vi. (Optional) For Security groups, choose one or more security groups or create a new one.
[New security group] Choose Create new security group to go to the Security Groups page.
When you are done, return to the Spot Requests page and refresh the list.
vii. (Optional) To make your instances reachable from the internet, for Auto-assign IPv4 Public
IP, choose Enable.
viii. (Optional) To launch your Spot Instances with an IAM role, for IAM instance profile, choose
the role.
ix. (Optional) To run a start-up script, copy it to User data.
x. (Optional) To add a tag, choose Create tag and enter the key and value for the tag, and
choose Create. Repeat for each tag.
For each tag, to tag the instances and the Spot Fleet request with the same tag, ensure that
both Instances and Fleet are selected. To tag only the instances launched by the fleet, clear
Fleet. To tag only the Spot Fleet request, clear Instances.
4. For Additional request details, do the following:
a. Review the additional request details. To make changes, clear Apply defaults.
b. (Optional) For IAM fleet role, you can use the default role or choose a different role. To use the
default role after changing the role, choose Use default role.
c. (Optional) For Maximum price, you can use the default maximum price (the On-Demand price)
or specify the maximum price you are willing to pay. If your maximum price is lower than the
Spot price for the instance types that you selected, your Spot Instances are not launched.
d. (Optional) To create a request that is valid only during a specific time period, edit Request valid
from and Request valid until.
e. (Optional) By default, we terminate your Spot Instances when the Spot Fleet request expires. To
keep them running after your request expires, clear Terminate the instances when the request
expires.
f. (Optional) To register your Spot Instances with a load balancer, choose Receive traffic from one
or more load balancers and choose one or more Classic Load Balancers or target groups.
5. For Minimum compute unit, choose the minimum hardware specifications (vCPUs, memory, and
storage) that you need for your application or task, either as specs or as an instance type.
• For as specs, specify the required number of vCPUs and amount of memory.
• For as an instance type, accept the default instance type, or choose Change instance type to
choose a different instance type.
6. For Target capacity, do the following:
a. For Total target capacity, specify the number of units to request. For the type of unit, you can
choose Instances, vCPUs, or Memory (MiB). To specify a target capacity of 0 so that you can
add capacity later, choose Maintain target capacity.
b. (Optional) For Include On-Demand base capacity, specify the number of On-Demand units to
request. The number must be less than the Total target capacity. Amazon EC2 calculates the
difference and allocates it as the number of Spot units to request.
Important
To specify optional On-Demand capacity, you must first choose a launch template.
c. (Optional) By default, the Spot service terminates Spot Instances when they are interrupted.
To maintain the target capacity, select Maintain target capacity. You can then specify that the
Spot service terminates, stops, or hibernates Spot Instances when they are interrupted. To do
so, choose the corresponding option from Interruption behavior.
d. (Optional) To allow Spot Fleet to launch a replacement Spot Instance when an instance
rebalance notification is emitted for an existing Spot Instance in the fleet, select Capacity
rebalance, and then choose an instance replacement strategy. If you choose Launch before
terminate, specify the delay (in seconds) before Spot Fleet terminates the old instances. For
more information, see Capacity Rebalancing (p. 843).
e. (Optional) To control the amount you pay per hour for all the Spot Instances in your fleet,
select Set maximum cost for Spot Instances and then enter the maximum total amount you're
willing to pay per hour. When the maximum total amount is reached, Spot Fleet stops launching
Spot Instances even if it hasn’t met the target capacity. For more information, see Control
spending (p. 846).
7. For Network, do the following:
[New VPC] Choose Create new VPC to go to the Amazon VPC console. When you are done, return
to the wizard and refresh the list.
b. (Optional) For Availability Zone, let AWS choose the Availability Zones for your Spot Instances,
or specify one or more Availability Zones.
If you have more than one subnet in an Availability Zone, choose the appropriate subnet from
Subnet. To add subnets, choose Create new subnet to go to the Amazon VPC console. When
you are done, return to the wizard and refresh the list.
8. For Instance type requirements, you can either specify instance attributes and let Amazon EC2
identify the optimal instance types with these attributes, or you can specify a list of instances. For
more information, see Attribute-based instance type selection for Spot Fleet (p. 825).
a. If you choose Specify instance attributes that match your compute requirements, specify your
instance attributes as follows:
i. For vCPUs, enter the desired minimum and maximum number of vCPUs. To specify no limit,
select No minimum, No maximum, or both.
ii. For Memory (GiB), enter the desired minimum and maximum amount of memory. To
specify no limit, select No minimum, No maximum, or both.
iii. (Optional) For Additional instance attributes, you can optionally specify one or more
attributes to express your compute requirements in more detail. Each additional attribute
adds a further constraint to your request. You can omit the additional attributes; when
omitted, the default values are used. For a description of each attribute and their default
values, see get-spot-placement-scores in the Amazon EC2 Command Line Reference.
iv. (Optional) To view the instance types with your specified attributes, expand Preview
matching instance types. To exclude instance types from being used in your request, select
the instances and then choose Exclude selected instance types.
b. If you choose Manually select instance types, Spot Fleet provides a default list of instance
types. To select more instance types, choose Add instance types, select the instance types to
use in your request, and choose Select. To delete instance types, select the instance types and
choose Delete.
9. For Allocation strategy, choose the strategy that meets your needs. For more information, see
Allocation strategy for Spot Instances (p. 823).
10. For Your fleet request at a glance, review your fleet configuration, and make any adjustments if
necessary.
11. (Optional) To download a copy of the launch configuration for use with the AWS CLI, choose JSON
config.
12. Choose Launch.
The Spot Fleet request type is fleet. When the request is fulfilled, requests of type instance are
added, where the state is active and the status is fulfilled.
For example configuration files, see Spot Fleet example configurations (p. 913).
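Output like the following comes from submitting the request with the request-spot-fleet command; a sketch (the configuration file name is illustrative):

```shell
# Submit a Spot Fleet request defined in a local JSON configuration file.
aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json
```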
{
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE"
}
When you tag a Spot Fleet request, the instances and volumes that are launched by the Spot Fleet are
not automatically tagged. You need to explicitly tag the instances and volumes launched by the Spot
Fleet. You can choose to assign tags to only the Spot Fleet request, or to only the instances launched by
the fleet, or to only the volumes attached to the instances launched by the fleet, or to all three.
Note
Volume tags are only supported for volumes that are attached to On-Demand Instances. You
can't tag volumes that are attached to Spot Instances.
For more information about how tags work, see Tag your Amazon EC2 resources (p. 1666).
Contents
• Prerequisite (p. 858)
• Tag a new Spot Fleet (p. 859)
• Tag a new Spot Fleet and the instances and volumes that it launches (p. 860)
• Tag an existing Spot Fleet (p. 862)
• View Spot Fleet request tags (p. 863)
Prerequisite
Grant the IAM user the permission to tag resources. For more information, see Example: Tag
resources (p. 1258).
The IAM policy must include the following:
• The ec2:CreateTags action. This grants the IAM user permission to create tags.
• The ec2:RequestSpotFleet action. This grants the IAM user permission to create a Spot Fleet
request.
• For Resource, you must specify "*". This allows users to tag all resource types.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TagSpotFleetRequest",
"Effect": "Allow",
"Action": [
"ec2:CreateTags",
"ec2:RequestSpotFleet"
],
"Resource": "*"
}
]
}
Important
We currently do not support resource-level permissions for the spot-fleet-request
resource. If you specify spot-fleet-request as a resource, you will get an unauthorized
exception when you try to tag the fleet. The following example illustrates how not to set the
policy.
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags",
"ec2:RequestSpotFleet"
],
"Resource": "arn:aws:ec2:us-east-1:111122223333:spot-fleet-request/*"
}
To tag a new Spot Fleet request (console)
1. Follow the Create a Spot Fleet request using defined parameters (console) (p. 855) procedure.
2. To add a tag, expand Additional configurations, choose Add new tag, and enter the key and value
for the tag. Repeat for each tag.
For each tag, you can tag the Spot Fleet request and the instances with the same tag. To tag both,
ensure that both Instance tags and Fleet tags are selected. To tag only the Spot Fleet request, clear
Instance tags. To tag only the instances launched by the fleet, clear Fleet tags.
3. Complete the required fields to create a Spot Fleet request, and then choose Launch. For more
information, see Create a Spot Fleet request using defined parameters (console) (p. 855).
To tag a new Spot Fleet request using the AWS CLI
To tag a Spot Fleet request when you create it, configure the Spot Fleet request configuration as follows:
In the following example, the Spot Fleet request is tagged with two tags: Key=Environment and
Value=Production, and Key=Cost-Center and Value=123.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::111122223333:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-0123456789EXAMPLE",
"InstanceType": "c4.large"
}
],
"SpotPrice": "5",
"TargetCapacity": 2,
"TerminateInstancesWithExpiration": true,
"Type": "maintain",
"ReplaceUnhealthyInstances": true,
"InstanceInterruptionBehavior": "terminate",
"InstancePoolsToUseCount": 1,
"TagSpecifications": [
{
"ResourceType": "spot-fleet-request",
"Tags": [
{
"Key": "Environment",
"Value":"Production"
},
{
"Key": "Cost-Center",
"Value":"123"
}
]
}
]
}
}
Tag a new Spot Fleet and the instances and volumes that it launches
To tag a new Spot Fleet request and the instances and volumes that it launches using the AWS CLI
To tag a Spot Fleet request when you create it, and to tag the instances and volumes when they are
launched by the fleet, configure the Spot Fleet request configuration as follows:
Instance tags:
• Specify the tags for the instances in TagSpecifications in the LaunchSpecifications of the Spot
Fleet request configuration, as shown in the following example.
• Alternatively, you can specify the tags for the instances in the launch template (p. 581) that is
referenced in the Spot Fleet request.
Volume tags:
• Specify the tags for the volumes in the launch template (p. 581) that is referenced in the Spot Fleet
request. Volume tagging in LaunchSpecifications is not supported.
In the following example, the Spot Fleet request is tagged with two tags: Key=Environment and
Value=Production, and Key=Cost-Center and Value=123. The instances that are launched by the fleet are
tagged with one tag (which is the same as one of the tags for the Spot Fleet request): Key=Cost-Center
and Value=123.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::111122223333:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-0123456789EXAMPLE",
"InstanceType": "c4.large",
"TagSpecifications": [
{
"ResourceType": "instance",
"Tags": [
{
"Key": "Cost-Center",
"Value": "123"
}
]
}
]
}
],
"SpotPrice": "5",
"TargetCapacity": 2,
"TerminateInstancesWithExpiration": true,
"Type": "maintain",
"ReplaceUnhealthyInstances": true,
"InstanceInterruptionBehavior": "terminate",
"InstancePoolsToUseCount": 1,
"TagSpecifications": [
{
"ResourceType": "spot-fleet-request",
"Tags": [
{
"Key": "Environment",
"Value":"Production"
},
{
"Key": "Cost-Center",
"Value":"123"
}
]
}
]
}
}
To tag instances when they are launched by the fleet, you can either specify the tags in the launch
template (p. 581) that is referenced in the Spot Fleet request, or you can specify the tags in the Spot
Fleet request configuration as follows:
In the following example, the instances that are launched by the fleet are tagged with one tag:
Key=Cost-Center and Value=123.
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "default",
"IamFleetRole": "arn:aws:iam::111122223333:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-0123456789EXAMPLE",
"InstanceType": "c4.large",
"TagSpecifications": [
{
"ResourceType": "instance",
"Tags": [
{
"Key": "Cost-Center",
"Value": "123"
}
]
}
]
}
],
"SpotPrice": "5",
"TargetCapacity": 2,
"TerminateInstancesWithExpiration": true,
"Type": "maintain",
"ReplaceUnhealthyInstances": true,
"InstanceInterruptionBehavior": "terminate",
"InstancePoolsToUseCount": 1
}
}
To tag volumes attached to On-Demand Instances launched by a Spot Fleet using the AWS CLI
To tag volumes when they are created by the fleet, you must specify the tags in the launch
template (p. 581) that is referenced in the Spot Fleet request.
Note
Volume tags are only supported for volumes that are attached to On-Demand Instances. You
can't tag volumes that are attached to Spot Instances.
Volume tagging in LaunchSpecifications is not supported.
After you have created a Spot Fleet request, you can add tags to the fleet request using the console.
You can use the create-tags command to tag existing resources. In the following example, the existing
Spot Fleet request is tagged with Key=purpose and Value=test.
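A sketch of that command, using an illustrative request ID:

```shell
# Tag an existing Spot Fleet request with Key=purpose, Value=test.
aws ec2 create-tags \
    --resources sfr-11112222-3333-4444-5555-66666EXAMPLE \
    --tags Key=purpose,Value=test
```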
Use the describe-tags command to view the tags for the specified resource. In the following example,
you describe the tags for the specified Spot Fleet request.
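A sketch of the command that produces output like the following (the request ID is illustrative):

```shell
# List the tags attached to the specified Spot Fleet request.
aws ec2 describe-tags \
    --filters "Name=resource-id,Values=sfr-11112222-3333-4444-5555-66666EXAMPLE"
```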
{
"Tags": [
{
"Key": "Environment",
"ResourceId": "sfr-11112222-3333-4444-5555-66666EXAMPLE",
"ResourceType": "spot-fleet-request",
"Value": "Production"
},
{
"Key": "Another key",
"ResourceId": "sfr-11112222-3333-4444-5555-66666EXAMPLE",
"ResourceType": "spot-fleet-request",
"Value": "Another value"
}
]
}
You can also view the tags of a Spot Fleet request by describing the Spot Fleet request.
Use the describe-spot-fleet-requests command to view the configuration of the specified Spot Fleet
request, which includes any tags that were specified for the fleet request.
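A sketch of the command (the request ID is illustrative):

```shell
# Describe the configuration and tags of the specified Spot Fleet request.
aws ec2 describe-spot-fleet-requests \
    --spot-fleet-request-ids sfr-11112222-3333-4444-5555-66666EXAMPLE
```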
{
"SpotFleetRequestConfigs": [
{
"ActivityStatus": "fulfilled",
"CreateTime": "2020-02-13T02:49:19.709Z",
"SpotFleetRequestConfig": {
"AllocationStrategy": "capacityOptimized",
"OnDemandAllocationStrategy": "lowestPrice",
"ExcessCapacityTerminationPolicy": "Default",
"FulfilledCapacity": 2.0,
"OnDemandFulfilledCapacity": 0.0,
"IamFleetRole": "arn:aws:iam::111122223333:role/aws-ec2-spot-fleet-tagging-
role",
"LaunchSpecifications": [
{
"ImageId": "ami-0123456789EXAMPLE",
"InstanceType": "c4.large"
}
],
"TargetCapacity": 2,
"OnDemandTargetCapacity": 0,
"Type": "maintain",
"ReplaceUnhealthyInstances": false,
"InstanceInterruptionBehavior": "terminate"
},
"SpotFleetRequestId": "sfr-11112222-3333-4444-5555-66666EXAMPLE",
"SpotFleetRequestState": "active",
"Tags": [
{
"Key": "Environment",
"Value": "Production"
},
{
"Key": "Another key",
"Value": "Another value"
}
]
}
]
}
Use the describe-spot-fleet-instances command to describe the Spot Instances for the specified Spot
Fleet.
Use the describe-spot-fleet-request-history command to describe the history for the specified Spot Fleet
request.
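Sketches of both commands (the request ID and start time are illustrative; describe-spot-fleet-request-history requires a start time):

```shell
# List the Spot Instances launched by the specified Spot Fleet.
aws ec2 describe-spot-fleet-instances \
    --spot-fleet-request-id sfr-11112222-3333-4444-5555-66666EXAMPLE

# List events for the specified Spot Fleet, starting at the given time.
aws ec2 describe-spot-fleet-request-history \
    --spot-fleet-request-id sfr-11112222-3333-4444-5555-66666EXAMPLE \
    --start-time 2020-02-13T00:00:00Z
```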
Note
You can't modify a one-time Spot Fleet request. You can only modify a Spot Fleet request if you
selected Maintain target capacity when you created the Spot Fleet request.
When you increase the target capacity, the Spot Fleet launches additional Spot Instances. When you
increase the On-Demand portion, the Spot Fleet launches additional On-Demand Instances.
When you increase the target capacity, the Spot Fleet launches the additional Spot Instances according
to the allocation strategy for its Spot Fleet request. If the allocation strategy is lowestPrice, the Spot
Fleet launches the instances from the lowest-priced Spot capacity pool in the Spot Fleet request. If the
allocation strategy is diversified, the Spot Fleet distributes the instances across the pools in the Spot
Fleet request.
When you decrease the target capacity, the Spot Fleet cancels any open requests that exceed the new
target capacity. You can request that the Spot Fleet terminate Spot Instances until the size of the fleet
reaches the new target capacity. If the allocation strategy is lowestPrice, the Spot Fleet terminates
the instances with the highest price per unit. If the allocation strategy is diversified, the Spot Fleet
terminates instances across the pools. Alternatively, you can request that the Spot Fleet keep the fleet at
its current size, but not replace any Spot Instances that are interrupted or that you terminate manually.
When a Spot Fleet terminates an instance because the target capacity was decreased, the instance
receives a Spot Instance interruption notice.
Use the modify-spot-fleet-request command to update the target capacity of the specified Spot Fleet
request.
You can modify the previous command as follows to decrease the target capacity of the specified Spot
Fleet without terminating any Spot Instances as a result.
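Sketches of both forms of the command (the request ID and target capacities are illustrative; noTermination keeps excess Spot Instances running when the capacity decreases):

```shell
# Update the target capacity of the specified Spot Fleet request.
aws ec2 modify-spot-fleet-request \
    --spot-fleet-request-id sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE \
    --target-capacity 20

# Decrease the target capacity without terminating any Spot Instances.
aws ec2 modify-spot-fleet-request \
    --spot-fleet-request-id sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE \
    --target-capacity 10 \
    --excess-capacity-termination-policy noTermination
```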
Use the cancel-spot-fleet-requests command to cancel the specified Spot Fleet request and terminate
the instances.
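A sketch, using the example request ID from the output that follows:

```shell
# Cancel the Spot Fleet request and terminate its instances.
aws ec2 cancel-spot-fleet-requests \
    --spot-fleet-request-ids sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE \
    --terminate-instances
```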
{
"SuccessfulFleetRequests": [
{
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE",
"CurrentSpotFleetRequestState": "cancelled_terminating",
"PreviousSpotFleetRequestState": "active"
}
],
"UnsuccessfulFleetRequests": []
}
You can modify the previous command as follows to cancel the specified Spot Fleet request without
terminating the instances.
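A sketch, using the example request ID from the output that follows:

```shell
# Cancel the Spot Fleet request but leave its instances running.
aws ec2 cancel-spot-fleet-requests \
    --spot-fleet-request-ids sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE \
    --no-terminate-instances
```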
{
"SuccessfulFleetRequests": [
{
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE",
"CurrentSpotFleetRequestState": "cancelled_running",
"PreviousSpotFleetRequestState": "active"
}
],
    "UnsuccessfulFleetRequests": []
}
CloudWatch metrics for Spot Fleet
For more information about CloudWatch metrics provided by Amazon EC2, see Monitor your instances
using CloudWatch (p. 958).
Metric Description
AvailableInstancePoolsCount – The Spot capacity pools specified in the Spot Fleet request. Units: Count
BidsSubmittedForCapacity – The capacity for which Amazon EC2 has submitted Spot Fleet requests.
Units: Count
EligibleInstancePoolCount – The Spot capacity pools specified in the Spot Fleet request where Amazon
EC2 can fulfill requests. Amazon EC2 does not fulfill requests in pools where the maximum price you're
willing to pay for Spot Instances is less than the Spot price, or where the Spot price is greater than the
price for On-Demand Instances. Units: Count
If the unit of measure for a metric is Count, the most useful statistic is Average.
Metrics are grouped first by namespace, and then by the various combinations of dimensions within each
namespace. For example, you can view all Spot Fleet metrics, or Spot Fleet metrics grouped by Spot Fleet
request ID, instance type, or Availability Zone.
Automatic scaling for Spot Fleet
If you are using instance weighting (p. 846), keep in mind that Spot Fleet can exceed the target capacity
as needed. Fulfilled capacity can be a floating-point number but target capacity must be an integer,
so Spot Fleet rounds up to the next integer. You must take these behaviors into account when you
look at the outcome of a scaling policy when an alarm is triggered. For example, suppose that the
target capacity is 30, the fulfilled capacity is 30.1, and the scaling policy subtracts 1. When the alarm is
triggered, the automatic scaling process subtracts 1 from 30.1 to get 29.1 and then rounds it up to 30, so
no scaling action is taken. As another example, suppose that you selected instance weights of 2, 4, and 8,
and a target capacity of 10, but no weight 2 instances were available so Spot Fleet provisioned instances
of weights 4 and 8 for a fulfilled capacity of 12. If the scaling policy decreases target capacity by 20%
and an alarm is triggered, the automatic scaling process subtracts 12*0.2 from 12 to get 9.6 and then
rounds it up to 10, so no scaling action is taken.
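The rounding behavior described above can be sketched in a few lines of Python. This is purely illustrative and not part of any AWS tooling; the function name is made up.

```python
import math

def apply_scaling(fulfilled_capacity, adjustment):
    """Apply a scaling adjustment to the fulfilled capacity and round up
    to the next integer, mirroring the weighted Spot Fleet behavior
    described above."""
    return math.ceil(fulfilled_capacity + adjustment)

# Example 1: target capacity 30, fulfilled capacity 30.1, policy subtracts 1.
print(apply_scaling(30.1, -1))        # 30, so no scaling action is taken

# Example 2: target capacity 10, fulfilled capacity 12, policy removes 20%.
print(apply_scaling(12, -12 * 0.2))   # 10, so no scaling action is taken
```

In both cases the rounded result equals the current target capacity, which is why no scaling action occurs.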
The scaling policies that you create for Spot Fleet support a cooldown period. This is the number of
seconds after a scaling activity completes during which previous trigger-related scaling activities can
influence future scaling events. For scale-out policies, while the cooldown period is in effect, the capacity
added by the previous scale-out event that initiated the cooldown is calculated as part of the desired
capacity for the next scale out. The intention is to continuously (but not excessively) scale out. For
scale-in policies, the cooldown period is used to block subsequent scale-in requests until it has expired.
The intention is to scale in conservatively to protect your application's availability. However, if another
alarm triggers a scale-out policy during the cooldown period after a scale-in, automatic scaling scales out
your scalable target immediately.
We recommend that you scale based on instance metrics with a 1-minute frequency because that
ensures a faster response to utilization changes. Scaling on metrics with a 5-minute frequency can
result in slower response time and scaling on stale metric data. To send metric data for your instances
to CloudWatch in 1-minute periods, you must specifically enable detailed monitoring. For more
information, see Enable or turn off detailed monitoring for your instances (p. 959) and Create a Spot
Fleet request using defined parameters (console) (p. 855).
For more information about configuring scaling for Spot Fleet, see the following resources:
In addition to the IAM permissions for Spot Fleet (p. 850) and Amazon EC2, the IAM user that accesses
fleet scaling settings must have the appropriate permissions for the services that support dynamic
scaling. IAM users must have permissions to use the actions shown in the following example policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"application-autoscaling:*",
"ec2:DescribeSpotFleetRequests",
"ec2:ModifySpotFleetRequest",
"cloudwatch:DeleteAlarms",
"cloudwatch:DescribeAlarmHistory",
"cloudwatch:DescribeAlarms",
"cloudwatch:DescribeAlarmsForMetric",
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListMetrics",
"cloudwatch:PutMetricAlarm",
"cloudwatch:DisableAlarmActions",
"cloudwatch:EnableAlarmActions",
"iam:CreateServiceLinkedRole",
"sns:CreateTopic",
"sns:Subscribe",
"sns:Get*",
"sns:List*"
],
"Resource": "*"
}
]
}
You can also create your own IAM policies that allow more fine-grained permissions for calls to the
Application Auto Scaling API. For more information, see Authentication and Access Control in the
Application Auto Scaling User Guide.
The Application Auto Scaling service also needs permission to describe your Spot Fleet and
CloudWatch alarms, and permission to modify your Spot Fleet target capacity on your behalf.
If you enable automatic scaling for your Spot Fleet, Application Auto Scaling creates a service-linked
role for you.
You can create multiple target tracking scaling policies for a Spot Fleet, provided that each of them
uses a different metric. The fleet scales based on the policy that provides the largest fleet capacity. This
enables you to cover multiple scenarios and ensure that there is always enough capacity to process your
application workloads.
To ensure application availability, the fleet scales out proportionally to the metric as fast as it can, but
scales in more gradually.
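As a rough illustration (not AWS code), choosing among multiple target tracking policies amounts to taking the largest capacity any policy calls for. The policy names and capacity values below are hypothetical.

```python
# Capacity each target tracking policy would set, based on its metric.
# These names and numbers are made up for illustration.
policy_capacity = {
    "cpu-utilization": 8,
    "network-in": 12,
    "custom-metric": 10,
}

# The fleet scales to the largest capacity requested by any policy.
fleet_capacity = max(policy_capacity.values())
print(fleet_capacity)  # 12
```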
When a Spot Fleet terminates an instance because the target capacity was decreased, the instance
receives a Spot Instance interruption notice.
Do not edit or delete the CloudWatch alarms that Spot Fleet manages for a target tracking scaling policy.
Spot Fleet deletes the alarms automatically when you delete the target tracking scaling policy.
Limitation
The Spot Fleet request must have a request type of maintain. Automatic scaling is not supported for
requests of type request or for Spot blocks.
To configure a target tracking scaling policy using the AWS CLI
1. Register the Spot Fleet request as a scalable target using the register-scalable-target command.
2. Create a scaling policy using the put-scaling-policy command.
When you create a step scaling policy, you must specify one of the following scaling adjustment types:
• Add – Increase the target capacity of the fleet by a specified number of capacity units or a specified
percentage of the current capacity.
• Remove – Decrease the target capacity of the fleet by a specified number of capacity units or a
specified percentage of the current capacity.
• Set to – Set the target capacity of the fleet to the specified number of capacity units.
When an alarm is triggered, the automatic scaling process calculates the new target capacity using the
fulfilled capacity and the scaling policy, and then updates the target capacity accordingly. For example,
suppose that the target capacity and fulfilled capacity are 10 and the scaling policy adds 1. When
the alarm is triggered, the automatic scaling process adds 1 to 10 to get 11, so Spot Fleet launches 1
instance.
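The calculation that the automatic scaling process performs can be sketched in Python. This is an illustrative sketch only; percentage-based adjustments and the fleet's own rounding of weighted capacity are simplified, and the function name is made up.

```python
import math

def new_target_capacity(fulfilled, adjustment_type, value):
    """Illustrative sketch of the three step scaling adjustment types
    described above: Add, Remove, and Set to."""
    if adjustment_type == "Add":
        return math.ceil(fulfilled + value)
    if adjustment_type == "Remove":
        return math.ceil(fulfilled - value)
    if adjustment_type == "SetTo":
        return value
    raise ValueError(f"unknown adjustment type: {adjustment_type}")

# The example above: target and fulfilled capacity are 10, the policy adds 1.
print(new_target_capacity(10, "Add", 1))  # 11, so Spot Fleet launches 1 instance
```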
When a Spot Fleet terminates an instance because the target capacity was decreased, the instance
receives a Spot Instance interruption notice.
Limitation
The Spot Fleet request must have a request type of maintain. Automatic scaling is not supported for
requests of type request or for Spot blocks.
Prerequisites
• Consider which CloudWatch metrics are important to your application. You can create CloudWatch
alarms based on metrics provided by AWS or your own custom metrics.
• For the AWS metrics that you will use in your scaling policies, enable CloudWatch metrics collection if
the service that provides the metrics does not enable it by default.
The Specify metric and conditions page appears, showing a graph and other information about the
metric you selected.
6. For Period, choose the evaluation period for the alarm, for example, 1 minute. When evaluating the
alarm, each period is aggregated into one data point.
Note
A shorter period creates a more sensitive alarm.
7. For Conditions, define the alarm by defining the threshold condition. For example, you can define
a threshold to trigger the alarm whenever the value of the metric is greater than or equal to 80
percent.
8. Under Additional configuration, for Datapoints to alarm, specify how many datapoints (evaluation
periods) must be in the ALARM state to trigger the alarm, for example, 1 evaluation period or 2 out
of 3 evaluation periods. This creates an alarm that goes to ALARM state if that many consecutive
periods are breaching. For more information, see Evaluating an Alarm in the Amazon CloudWatch
User Guide.
9. For Missing data treatment, choose one of the options (or leave the default of Treat missing data
as missing). For more information, see Configuring How CloudWatch Alarms Treat Missing Data in
the Amazon CloudWatch User Guide.
10. Choose Next.
11. (Optional) To receive notification of a scaling event, for Notification, you can choose or create
the Amazon SNS topic you want to use to receive notifications. Otherwise, you can delete the
notification now and add one later as needed.
12. Choose Next.
13. Under Add a description, enter a name and description for the alarm and choose Next.
14. Choose Create alarm.
To configure step scaling policies for your Spot Fleet using the AWS CLI
1. Register the Spot Fleet request as a scalable target using the register-scalable-target command.
2. Create a scaling policy using the put-scaling-policy command.
3. Create an alarm that triggers the scaling policy using the put-metric-alarm command.
With scheduled scaling, Spot Fleet performs scaling activities at specific times. When you create a
scheduled action, you specify an existing Spot Fleet, when the scaling activity should occur, the minimum
capacity, and the maximum capacity. You can create scheduled actions that scale one time only or that
scale on a recurring schedule.
You can only create a scheduled action for Spot Fleets that already exist. You can't create a scheduled
action at the same time that you create a Spot Fleet.
Limitation
The Spot Fleet request must have a request type of maintain. Automatic scaling is not supported for
requests of type request or for Spot blocks.
Monitor fleet events
To manage scheduled scaling using the AWS CLI, use the following commands:
• put-scheduled-action
• describe-scheduled-actions
• delete-scheduled-action
With Amazon EventBridge, you can create rules that trigger programmatic actions in response to
an event. For example, you can create two EventBridge rules, one that's triggered when a fleet state
changes, and one that's triggered when an instance in the fleet is terminated. You can configure the first
rule so that, if the fleet state changes, the rule invokes an SNS topic to send an email notification to
you. You can configure the second rule so that, if an instance is terminated, the rule invokes a Lambda
function to launch a new instance.
Topics
• EC2 Fleet event types (p. 875)
• Spot Fleet event types (p. 880)
• Create Amazon EventBridge rules (p. 884)
There are five EC2 Fleet event types. For each event type, there are several sub-types.
The events are sent to EventBridge in JSON format. The following field in the event forms the event
pattern that is defined in the rule and that triggers an action:
"source": "aws.ec2fleet"
Event types
EC2 Fleet event types
{
"version": "0",
"id": "715ed6b3-b8fc-27fe-fad6-528c7b8bf8a2",
"detail-type": "EC2 Fleet State Change",
"source": "aws.ec2fleet",
"account": "123456789012",
"time": "2020-11-09T09:00:20Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:fleet/fleet-598fb973-87b7-422d-
be4d-6b0809bfff0a"
],
"detail": {
"sub-type": "active"
}
}
active
The EC2 Fleet request has been validated and Amazon EC2 is attempting to maintain the target
number of running instances.
cancelled
The EC2 Fleet request is canceled and has no running instances. The EC2 Fleet will be deleted two
days after its instances are terminated.
cancelled_running
The EC2 Fleet request is canceled and does not launch additional instances. Its existing instances
continue to run until they are interrupted or terminated. The request remains in this state until all
instances are interrupted or terminated.
cancelled_terminating
The EC2 Fleet request is canceled and its instances are terminating. The request remains in this state
until all instances are terminated.
expired
The EC2 Fleet request has expired. If the request was created with
TerminateInstancesWithExpiration set, a subsequent terminated event indicates that the
instances are terminated.
modify_in_progress
The EC2 Fleet request is being modified. The request remains in this state until the modification is
fully processed.
modify_succeeded
The EC2 Fleet request was modified.
progress
The EC2 Fleet request is in the process of being fulfilled.
submitted
The EC2 Fleet request is being evaluated and Amazon EC2 is preparing to launch the target number
of instances.
{
"version": "0",
"id": "19331f74-bf4b-a3dd-0f1b-ddb1422032b9",
"detail-type": "EC2 Fleet Spot Instance Request Change",
"source": "aws.ec2fleet",
"account": "123456789012",
"time": "2020-11-09T09:00:05Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:fleet/
fleet-83fd4e48-552a-40ef-9532-82a3acca5f10"
],
"detail": {
"spot-instance-request-id": "sir-rmqske6h",
"description": "SpotInstanceRequestId sir-rmqske6h, PreviousState:
cancelled_running",
"sub-type": "cancelled"
}
}
active
The Spot Instance request is fulfilled and has an associated Spot Instance.
cancelled
You cancelled the Spot Instance request, or the Spot Instance request expired.
disabled
You stopped the Spot Instance.
{
"version": "0",
"id": "542ce428-c8f1-0608-c015-e8ed6522c5bc",
"detail-type": "EC2 Fleet Instance Change",
"source": "aws.ec2fleet",
"account": "123456789012",
"time": "2020-11-09T09:00:23Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:fleet/fleet-598fb973-87b7-422d-
be4d-6b0809bfff0a"
],
"detail": {
"instance-id": "i-0c594155dd5ff1829",
"description": "{\"instanceType\":\"c5.large\",\"image\":\"ami-6057e21a\",
\"productDescription\":\"Linux/UNIX\",\"availabilityZone\":\"us-east-1d\"}",
"sub-type": "launched"
}
}
launched
A new instance was launched.
terminated
The instance was terminated.
termination_notified
An instance termination notification was sent when a Spot Instance was terminated by Amazon EC2
during scale-down, when the target capacity of the fleet was modified down, for example, from a
target capacity of 4 to a target capacity of 3.
{
"version": "0",
"id": "76529817-d605-4571-7224-d36cc1b2c0c4",
"detail-type": "EC2 Fleet Information",
"source": "aws.ec2fleet",
"account": "123456789012",
"time": "2020-11-09T08:17:07Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:fleet/fleet-8becf5fe-
bb9e-415d-8f54-3fa5a8628b91"
],
"detail": {
"description": "c4.xlarge, ami-0947d2ba12ee1ff75, Linux/UNIX, us-east-1a,
Spot price in either SpotFleetRequestConfigData or SpotFleetLaunchSpecification or
LaunchTemplate or LaunchTemplateOverrides is less than Spot market price $0.0619",
"sub-type": "launchSpecUnusable"
}
}
fleetProgressHalted
The price in every launch specification is not valid because it is below the Spot price (all the launch
specifications have produced launchSpecUnusable events). A launch specification might become
valid if the Spot price changes.
launchSpecTemporarilyBlacklisted
The configuration is not valid and several attempts to launch instances have failed. For more
information, see the description of the event.
launchSpecUnusable
The price in a launch specification is not valid because it is below the Spot price.
registerWithLoadBalancersFailed
An attempt to register instances with load balancers failed. For more information, see the
description of the event.
{
"version": "0",
"id": "69849a22-6d0f-d4ce-602b-b47c1c98240e",
"detail-type": "EC2 Fleet Error",
"source": "aws.ec2fleet",
"account": "123456789012",
"time": "2020-10-07T01:44:24Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:fleet/fleet-9bb19bc6-60d3-4fd2-ae47-
d33e68eafa08"
],
"detail": {
"description": "m3.large, ami-00068cd7555f543d5, Linux/UNIX: IPv6 is not supported
for the instance type 'm3.large'. ",
"sub-type": "spotFleetRequestConfigurationInvalid"
}
}
Spot Fleet event types
iamFleetRoleInvalid
The EC2 Fleet does not have the required permissions to either launch or terminate an instance.
allLaunchSpecsTemporarilyBlacklisted
None of the configurations are valid, and several attempts to launch instances have failed. For more
information, see the description of the event.
spotInstanceCountLimitExceeded
You’ve reached the limit on the number of Spot Instances that you can launch.
spotFleetRequestConfigurationInvalid
The configuration is not valid. For more information, see the description of the event.
The events are sent to EventBridge in JSON format. The following field in the event forms the event
pattern that is defined in the rule and that triggers an action:
"source": "aws.ec2spotfleet"
Event types
• EC2 Spot Fleet State Change (p. 880)
• EC2 Spot Fleet Spot Instance Request Change (p. 881)
• EC2 Spot Fleet Instance Change (p. 882)
• EC2 Spot Fleet Information (p. 883)
• EC2 Spot Fleet Error (p. 884)
{
"version": "0",
"id": "d1af1091-6cc3-2e24-203a-3b870e455d5b",
"detail-type": "EC2 Spot Fleet State Change",
"source": "aws.ec2spotfleet",
"account": "123456789012",
"time": "2020-11-09T08:57:06Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:spot-fleet-request/sfr-4b6d274d-0cea-4b2c-
b3be-9dc627ad1f55"
],
"detail": {
"sub-type": "submitted"
}
}
active
The Spot Fleet request has been validated and Amazon EC2 is attempting to maintain the target
number of running instances.
cancelled
The Spot Fleet request is canceled and has no running instances. The Spot Fleet will be deleted two
days after its instances are terminated.
cancelled_running
The Spot Fleet request is canceled and does not launch additional instances. Its existing instances
continue to run until they are interrupted or terminated. The request remains in this state until all
instances are interrupted or terminated.
cancelled_terminating
The Spot Fleet request is canceled and its instances are terminating. The request remains in this
state until all instances are terminated.
expired
The Spot Fleet request has expired. If the request was created with
TerminateInstancesWithExpiration set, a subsequent terminated event indicates that the
instances are terminated.
modify_in_progress
The Spot Fleet request is being modified. The request remains in this state until the modification is
fully processed.
modify_succeeded
The Spot Fleet request was modified.
progress
The Spot Fleet request is in the process of being fulfilled.
submitted
The Spot Fleet request is being evaluated and Amazon EC2 is preparing to launch the target number
of instances.
{
"version": "0",
"id": "cd141ef0-14af-d670-a71d-fe46e9971bd2",
"detail-type": "EC2 Spot Fleet Spot Instance Request Change",
"source": "aws.ec2spotfleet",
"account": "123456789012",
"time": "2020-11-09T08:53:21Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:spot-fleet-request/sfr-
a98d2133-941a-47dc-8b03-0f94c6852ad1"
],
"detail": {
"spot-instance-request-id": "sir-a2w9gc5h",
"description": "SpotInstanceRequestId sir-a2w9gc5h, PreviousState:
cancelled_running",
"sub-type": "cancelled"
}
}
active
The Spot Instance request is fulfilled and has an associated Spot Instance.
cancelled
You cancelled the Spot Instance request, or the Spot Instance request expired.
disabled
You stopped the Spot Instance.
{
"version": "0",
"id": "11591686-5bd7-bbaa-eb40-d46529c2710f",
"detail-type": "EC2 Spot Fleet Instance Change",
"source": "aws.ec2spotfleet",
"account": "123456789012",
"time": "2020-11-09T07:25:02Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:spot-fleet-request/sfr-c8a764a4-bedc-4b62-
af9c-0095e6e3ba61"
],
"detail": {
"instance-id": "i-08b90df1e09c30c9b",
"description": "{\"instanceType\":\"r4.2xlarge\",\"image\":\"ami-032930428bf1abbff
\",\"productDescription\":\"Linux/UNIX\",\"availabilityZone\":\"us-east-1a\"}",
"sub-type": "launched"
}
}
launched
A new instance was launched.
terminated
The instance was terminated.
termination_notified
An instance termination notification was sent when a Spot Instance was terminated by Amazon EC2
during scale-down, when the target capacity of the fleet was modified down, for example, from a
target capacity of 4 to a target capacity of 3.
{
"version": "0",
"id": "73a60f70-3409-a66c-635c-7f66c5f5b669",
"detail-type": "EC2 Spot Fleet Information",
"source": "aws.ec2spotfleet",
"account": "123456789012",
"time": "2020-11-08T20:56:12Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:spot-fleet-request/sfr-2531ea06-
af18-4647-8757-7d69c94971b1"
],
"detail": {
"description": "r3.8xlarge, ami-032930428bf1abbff, Linux/UNIX, us-east-1a, Spot bid
price is less than Spot market price $0.5291",
"sub-type": "launchSpecUnusable"
}
}
fleetProgressHalted
The price in every launch specification is not valid because it is below the Spot price (all the launch
specifications have produced launchSpecUnusable events). A launch specification might become
valid if the Spot price changes.
launchSpecTemporarilyBlacklisted
The configuration is not valid and several attempts to launch instances have failed. For more
information, see the description of the event.
launchSpecUnusable
The price in a launch specification is not valid because it is below the Spot price.
registerWithLoadBalancersFailed
An attempt to register instances with load balancers failed. For more information, see the
description of the event.
{
"version": "0",
"id": "10adc4e7-675c-643e-125c-5bfa1b1ba5d2",
"detail-type": "EC2 Spot Fleet Error",
"source": "aws.ec2spotfleet",
"account": "123456789012",
"time": "2020-11-09T06:56:07Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:spot-fleet-request/
sfr-38725d30-25f1-4f30-83ce-2907c56dba17"
],
"detail": {
"description": "r4.2xlarge, ami-032930428bf1abbff, Linux/UNIX: The
associatePublicIPAddress parameter can only be specified for the network interface with
DeviceIndex 0. ",
"sub-type": "spotFleetRequestConfigurationInvalid"
}
}
iamFleetRoleInvalid
The Spot Fleet does not have the required permissions to either launch or terminate an instance.
allLaunchSpecsTemporarilyBlacklisted
None of the configurations are valid, and several attempts to launch instances have failed. For more
information, see the description of the event.
spotInstanceCountLimitExceeded
You’ve reached the limit on the number of Spot Instances that you can launch.
spotFleetRequestConfigurationInvalid
The configuration is not valid. For more information, see the description of the event.
Create EventBridge rules
You can write an EventBridge rule and automate what actions to take when an event matches the
rule's event pattern.
Topics
• Create Amazon EventBridge rules to monitor EC2 Fleet events (p. 885)
• Create Amazon EventBridge rules to monitor Spot Fleet events (p. 887)
The following fields form the event pattern that is defined in the rule:
"source": "aws.ec2fleet"
For the list of EC2 Fleet events and example event data, see the section called “EC2 Fleet event
types” (p. 875).
Examples
• Create an EventBridge rule to send a notification (p. 885)
• Create an EventBridge rule to trigger a Lambda function (p. 886)
To create an EventBridge rule to send a notification when an EC2 Fleet state changes
A rule can't have the same name as another rule in the same Region and on the same event bus.
4. For Define pattern, choose Event pattern.
5. Under Event matching pattern, you can choose Pre-defined pattern by service or Custom pattern.
The Custom pattern allows you to create a more detailed rule.
• In the Event pattern box, add the following pattern to match the EC2 Fleet Instance
Change event for this example, and then choose Save.
{
"source": ["aws.ec2fleet"],
"detail-type": ["EC2 Fleet Instance Change"]
}
6. For Select event bus, choose AWS default event bus. When an AWS service in your account emits an
event, it always goes to your account's default event bus.
7. Confirm that Enable the rule on the selected event bus is toggled on.
8. For Target, choose SNS topic to send an email, text message, or mobile push notification when the
event occurs.
9. For Topic, choose an existing topic. You first need to create an Amazon SNS topic using the Amazon
SNS console. For more information, see Using Amazon SNS for application-to-person (A2P)
messaging in the Amazon Simple Notification Service Developer Guide.
10. For Configure input, choose the input for the email, text message, or mobile push notification.
11. Choose Create.
For more information, see Amazon EventBridge rules and Amazon EventBridge event patterns in the
Amazon EventBridge User Guide.
To create an EventBridge rule to trigger a Lambda function when an instance in an EC2 Fleet
changes state
For more information about using Lambda, see Create a Lambda function with the console in the
AWS Lambda Developer Guide.
4. Open the Amazon EventBridge console at https://console.aws.amazon.com/events/.
5. Choose Create rule.
6. Enter a Name for the rule, and, optionally, a description.
A rule can't have the same name as another rule in the same Region and on the same event bus.
7. For Define pattern, choose Event pattern.
8. Under Event matching pattern, you can choose Pre-defined pattern by service or Custom pattern.
The Custom pattern allows you to create a more detailed rule.
• In the Event pattern box, add the following pattern to match the EC2 Fleet Instance
Change event and launched sub-type for this example, and then choose Save.
{
"source": ["aws.ec2fleet"],
"detail-type": ["EC2 Fleet Instance Change"],
"detail": {
"sub-type": ["launched"]
}
}
9. For Target, choose Lambda function, and for Function, choose the function that you created to
respond when the event occurs.
10. Choose Create.
In this example, the Lambda function will be triggered when the EC2 Fleet Instance Change
event with the sub-type launched occurs.
For a tutorial on how to create a Lambda function and an EventBridge rule that runs the Lambda
function, see Tutorial: Log the State of an Amazon EC2 Instance Using EventBridge in the AWS Lambda
Developer Guide.
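The event patterns used in the procedures above match an event when each field named in the pattern has a value that appears in the pattern's list, with nested matching for detail. The following Python sketch illustrates that matching logic; it is a simplification of EventBridge's real matcher (exact-match lists and nested objects only, no prefix or numeric matching).

```python
def matches(pattern, event):
    """Minimal sketch of EventBridge event-pattern matching for the
    patterns shown above: every key in the pattern must be present in
    the event, with a value listed in the pattern (nested for dicts)."""
    for key, expected in pattern.items():
        if isinstance(expected, dict):
            if not isinstance(event.get(key), dict) or not matches(expected, event[key]):
                return False
        else:
            if event.get(key) not in expected:
                return False
    return True

pattern = {
    "source": ["aws.ec2fleet"],
    "detail-type": ["EC2 Fleet Instance Change"],
    "detail": {"sub-type": ["launched"]},
}
event = {
    "source": "aws.ec2fleet",
    "detail-type": "EC2 Fleet Instance Change",
    "detail": {"sub-type": "launched", "instance-id": "i-0c594155dd5ff1829"},
}
print(matches(pattern, event))  # True
```

An event from a different source, such as "aws.ec2spotfleet", would not match this pattern and would not trigger the rule.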
The following fields form the event pattern that is defined in the rule:
"source": "aws.ec2spotfleet"
For the list of Spot Fleet events and example event data, see the section called “Spot Fleet event
types” (p. 880).
Examples
• Create an EventBridge rule to send a notification (p. 885)
• Create an EventBridge rule to trigger a Lambda function (p. 886)
To create an EventBridge rule to send a notification when a Spot Fleet state changes
A rule can't have the same name as another rule in the same Region and on the same event bus.
4. For Define pattern, choose Event pattern.
5. Under Event matching pattern, you can choose Pre-defined pattern by service or Custom pattern.
The Custom pattern allows you to create a more detailed rule.
• In the Event pattern box, add the following pattern to match the EC2 Spot Fleet
Instance Change event for this example, and then choose Save.
{
"source": ["aws.ec2spotfleet"],
"detail-type": ["EC2 Spot Fleet Instance Change"]
}
6. For Select event bus, choose AWS default event bus. When an AWS service in your account emits an
event, it always goes to your account's default event bus.
7. Confirm that Enable the rule on the selected event bus is toggled on.
8. For Target, choose SNS topic to send an email, text message, or mobile push notification when the
event occurs.
9. For Topic, choose an existing topic. You first need to create an Amazon SNS topic using the Amazon
SNS console. For more information, see Using Amazon SNS for application-to-person (A2P)
messaging in the Amazon Simple Notification Service Developer Guide.
10. For Configure input, choose the input for the email, text message, or mobile push notification.
11. Choose Create.
For more information, see Amazon EventBridge rules and Amazon EventBridge event patterns in the
Amazon EventBridge User Guide.
A matching event triggers the action defined by the rule. Before creating the EventBridge rule, you
must create the Lambda function.
To create an EventBridge rule to trigger a Lambda function when an instance in a Spot Fleet
changes state
For more information about using Lambda, see Create a Lambda function with the console in the
AWS Lambda Developer Guide.
4. Open the Amazon EventBridge console at https://console.aws.amazon.com/events/.
5. Choose Create rule.
6. Enter a Name for the rule, and, optionally, a description.
A rule can't have the same name as another rule in the same Region and on the same event bus.
7. For Define pattern, choose Event pattern.
8. Under Event matching pattern, you can choose Pre-defined pattern by service or Custom pattern.
The Custom pattern allows you to create a more detailed rule.
• In the Event pattern box, add the following pattern to match the EC2 Spot Fleet
Instance Change event and launched sub-type for this example, and then choose Save.
{
"source": ["aws.ec2spotfleet"],
"detail-type": ["EC2 Spot Fleet Instance Change"],
"detail": {
"sub-type": ["launched"]
}
}
9. For Target, choose Lambda function, and for Function, choose the function that you created to
respond when the event occurs.
10. Choose Create.
In this example, the Lambda function will be triggered when the EC2 Spot Fleet Instance Change
event with the sub-type launched occurs.
For a tutorial on how to create a Lambda function and an EventBridge rule that runs the Lambda
function, see Tutorial: Log the State of an Amazon EC2 Instance Using EventBridge in the AWS Lambda
Developer Guide.
Tutorials
• Tutorial: Use EC2 Fleet with instance weighting (p. 890)
• Tutorial: Use EC2 Fleet with On-Demand as the primary capacity (p. 892)
• Tutorial: Launch On-Demand Instances using targeted Capacity Reservations (p. 893)
• Tutorial: Use Spot Fleet with instance weighting (p. 898)
Tutorial: Use EC2 Fleet with instance weighting
Objective
Example Corp, a pharmaceutical company, wants to use the computational power of Amazon EC2 for
screening chemical compounds that might be used to fight cancer.
Planning
Example Corp first reviews Spot Best Practices. Next, Example Corp determines the requirements for
their EC2 Fleet.
Instance types
Example Corp has a compute- and memory-intensive application that performs best with at least 60 GB
of memory and eight virtual CPUs (vCPUs). They want to maximize these resources for the application at
the lowest possible price. Example Corp decides that any of the following EC2 instance types would meet
their needs:
Instance type Memory (GiB) vCPUs
r3.2xlarge 61 8
r3.4xlarge 122 16
r3.8xlarge 244 32
With instance weighting, target capacity can equal a number of instances (the default) or a combination
of factors such as cores (vCPUs), memory (GiBs), and storage (GBs). By considering the base for their
application (60 GB of RAM and eight vCPUs) as one unit, Example Corp decides that 20 times this
amount would meet their needs. So the company sets the target capacity of their EC2 Fleet request to
20.
Instance weights
After determining the target capacity, Example Corp calculates instance weights. To calculate the
instance weight for each instance type, they determine the units of each instance type that are required
to reach the target capacity as follows:
Therefore, Example Corp assigns instance weights of 1, 2, and 4 to the respective launch configurations
in their EC2 Fleet request.
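The weight calculation above can be sketched in Python. This is an illustration of the arithmetic only, using the memory figures from the table (one unit = 61 GiB of memory and 8 vCPUs, i.e. one r3.2xlarge).

```python
# One application unit: the r3.2xlarge baseline (61 GiB memory, 8 vCPUs).
base_memory_gib = 61
instance_memory = {"r3.2xlarge": 61, "r3.4xlarge": 122, "r3.8xlarge": 244}

# Each instance type's weight is the number of units it provides.
weights = {itype: mem // base_memory_gib for itype, mem in instance_memory.items()}
print(weights)  # {'r3.2xlarge': 1, 'r3.4xlarge': 2, 'r3.8xlarge': 4}
```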
Example Corp uses the On-Demand price per instance hour as a starting point for their price. They could
also use recent Spot prices, or a combination of the two. To calculate the price per unit hour, they divide
their starting price per instance hour by the weight. For example:
Instance type On-Demand price Instance weight Price per unit hour
Example Corp could use a global price per unit hour of $0.7 and be competitive for all three instance
types. They could also use a global price per unit hour of $0.7 and a specific price per unit hour of $0.9 in
the r3.8xlarge launch specification.
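The division can be sketched as follows. The On-Demand prices below are placeholders chosen only to make the arithmetic concrete; they are not actual AWS pricing.

```python
# Hypothetical On-Demand prices per instance hour -- placeholders, not
# actual AWS prices.
on_demand_price = {"r3.2xlarge": 0.70, "r3.4xlarge": 1.40, "r3.8xlarge": 2.80}
weights = {"r3.2xlarge": 1, "r3.4xlarge": 2, "r3.8xlarge": 4}

# Price per unit hour = starting price per instance hour / instance weight.
price_per_unit_hour = {t: on_demand_price[t] / weights[t] for t in weights}
# With these placeholder prices, every pool works out to $0.70 per unit
# hour, which is why a single global unit price can be competitive for
# all three instance types.
```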
Verify permissions
Before creating an EC2 Fleet, Example Corp verifies that it has an IAM role with the required permissions.
For more information, see EC2 Fleet prerequisites (p. 806).
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-07b3bc7625cdab851",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "r3.2xlarge",
"SubnetId": "subnet-482e4972",
"WeightedCapacity": 1
},
{
"InstanceType": "r3.4xlarge",
"SubnetId": "subnet-482e4972",
"WeightedCapacity": 2
},
{
"InstanceType": "r3.8xlarge",
"MaxPrice": "0.90",
"SubnetId": "subnet-482e4972",
"WeightedCapacity": 4
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"DefaultTargetCapacityType": "spot"
}
}
Example Corp creates the EC2 Fleet by passing the preceding configuration to the create-fleet command.
Fulfillment
The allocation strategy determines which Spot capacity pools your Spot Instances come from.
With the lowest-price strategy (which is the default strategy), the Spot Instances come from the pool
with the lowest price per unit at the time of fulfillment. To provide 20 units of capacity, the EC2 Fleet
launches either 20 r3.2xlarge instances (20 divided by 1), 10 r3.4xlarge instances (20 divided by 2),
or 5 r3.8xlarge instances (20 divided by 4).
If Example Corp used the diversified strategy, the Spot Instances would come from all three pools.
The EC2 Fleet would launch 6 r3.2xlarge instances (which provide 6 units), 3 r3.4xlarge instances
(which provide 6 units), and 2 r3.8xlarge instances (which provide 8 units), for a total of 20 units.
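The diversified launch counts quoted above can be checked with a short calculation:

```python
# Check that the diversified launch counts quoted above add up to the
# 20-unit target capacity.
weights = {"r3.2xlarge": 1, "r3.4xlarge": 2, "r3.8xlarge": 4}
launched = {"r3.2xlarge": 6, "r3.4xlarge": 3, "r3.8xlarge": 2}

units_per_pool = {t: weights[t] * n for t, n in launched.items()}
total_units = sum(units_per_pool.values())  # 6 + 6 + 8 = 20
```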
Tutorial: Use EC2 Fleet with On-Demand as the primary capacity
Objective
ABC Online, a restaurant delivery company, wants to be able to provision Amazon EC2 capacity across
EC2 instance types and purchasing options to achieve their desired scale, performance, and cost.
Plan
ABC Online requires a fixed capacity to operate during peak periods, but would like to benefit from
increased capacity at a lower price. ABC Online determines the following requirements for their EC2
Fleet:
• On-Demand Instance capacity – ABC Online requires 15 On-Demand Instances to ensure that they can
accommodate traffic at peak periods.
• Spot Instance capacity – ABC Online would like to improve performance, but at a lower price, by
provisioning 5 Spot Instances.
Verify permissions
Before creating an EC2 Fleet, ABC Online verifies that it has an IAM role with the required permissions.
For more information, see EC2 Fleet prerequisites (p. 806).
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-07b3bc7625cdab851",
"Version": "2"
}
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 20,
"OnDemandTargetCapacity":15,
"DefaultTargetCapacityType": "spot"
}
}
ABC Online creates the EC2 Fleet by passing the preceding configuration to the create-fleet command.
Fulfillment
The On-Demand portion of the target capacity is always fulfilled first, while the balance of
the target capacity is fulfilled as Spot capacity if there is available capacity.
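The capacity split in the preceding configuration can be stated as simple arithmetic (a sketch of the relationship between TotalTargetCapacity and OnDemandTargetCapacity):

```python
# ABC Online's fleet: 20 total units, 15 of them pinned to On-Demand.
# Whatever On-Demand does not cover falls to the default capacity type,
# which is spot in the configuration above.
total_target_capacity = 20
on_demand_target_capacity = 15

spot_target_capacity = total_target_capacity - on_demand_target_capacity
```

This yields the 5 Spot Instances from ABC Online's requirements.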
Tutorial: Launch On-Demand Instances using targeted Capacity Reservations
You will learn how to configure a fleet to use targeted On-Demand Capacity Reservations first when
launching On-Demand Instances. You will also learn how to configure the fleet so that, when the total
On-Demand target capacity exceeds the number of available unused Capacity Reservations, the fleet
uses the specified allocation strategy for selecting the instance pools in which to launch the remaining
target capacity.
In this tutorial, the fleet configuration is as follows: the total target capacity is 10 On-Demand
Instances, the On-Demand allocation strategy is lowest-price, and the usage strategy for Capacity
Reservations is use-capacity-reservations-first.
Note that you can also use the prioritized allocation strategy instead of the lowest-price
allocation strategy.
To launch On-Demand Instances into targeted Capacity Reservations, you must perform a number of
steps, as follows:
• Step 1: Create Capacity Reservations (p. 894)
• Step 2: Create a Capacity Reservation resource group (p. 895)
• Step 3: Add the Capacity Reservations to the Capacity Reservation resource group (p. 895)
• (Optional) Step 4: View the Capacity Reservations in the resource group (p. 895)
• Step 5: Create a launch template that specifies that the Capacity Reservation targets a specific
resource group (p. 896)
• (Optional) Step 6: Describe the launch template (p. 896)
• Step 7: Create an EC2 Fleet (p. 897)
• (Optional) Step 8: View the number of remaining unused Capacity Reservations (p. 898)
Example output for Step 3, adding the Capacity Reservations to the resource group:
{
"Failed": [],
"Succeeded": [
"arn:aws:ec2:us-east-1:123456789012:capacity-reservation/cr-1234567890abcdef1",
"arn:aws:ec2:us-east-1:123456789012:capacity-reservation/cr-54321abcdef567890"
]
}
Example output for Step 4, viewing the Capacity Reservations in the resource group:
{
"ResourceIdentifiers": [
{
"ResourceType": "AWS::EC2::CapacityReservation",
"ResourceArn": "arn:aws:ec2:us-east-1:123456789012:capacity-reservation/
cr-1234567890abcdef1"
},
{
"ResourceType": "AWS::EC2::CapacityReservation",
"ResourceArn": "arn:aws:ec2:us-east-1:123456789012:capacity-reservation/
cr-54321abcdef567890"
}
]
}
Example output for Step 6, describing the launch template:
{
"LaunchTemplateVersions": [
{
"LaunchTemplateId": "lt-01234567890example",
"LaunchTemplateName": "my-launch-template",
"VersionNumber": 1,
"CreateTime": "2021-01-19T20:50:19.000Z",
"CreatedBy": "arn:aws:iam::123456789012:user/Admin",
"DefaultVersion": true,
"LaunchTemplateData": {
"ImageId": "ami-0947d2ba12ee1ff75",
"CapacityReservationSpecification": {
"CapacityReservationTarget": {
"CapacityReservationResourceGroupArn": "arn:aws:resource-groups:us-
east-1:123456789012:group/my-cr-group"
}
}
}
}
]
}
The following configuration is used in Step 7 to create the EC2 Fleet:
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "my-launch-template",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "c5.xlarge",
"AvailabilityZone": "us-east-1a"
},
{
"InstanceType": "c5.xlarge",
"AvailabilityZone": "us-east-1b"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 10,
"DefaultTargetCapacityType": "on-demand"
},
"OnDemandOptions": {
"AllocationStrategy": "lowest-price",
"CapacityReservationOptions": {
"UsageStrategy": "use-capacity-reservations-first"
}
},
"Type": "instant"
}
After you create the instant fleet using the preceding configuration, the following 10 instances are
launched to meet the target capacity:
• The Capacity Reservations are used first to launch 6 On-Demand Instances as follows:
• 3 On-Demand Instances are launched into the 3 c5.xlarge targeted Capacity Reservations in
us-east-1a
• 3 On-Demand Instances are launched into the 3 c5.xlarge targeted Capacity Reservations in
us-east-1b
• To meet the target capacity, 4 additional On-Demand Instances are launched into regular On-Demand
capacity according to the On-Demand allocation strategy, which is lowest-price in this example.
However, because the pools are the same price (because price is per Region and not per Availability
Zone), the fleet launches the remaining 4 On-Demand Instances into either of the pools.
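The fulfillment described above can be sketched as a toy allocation loop: unused targeted Capacity Reservations are consumed first, then the remainder goes to regular On-Demand capacity. This is an illustration of the behavior, not AWS code.

```python
# Unused targeted c5.xlarge Capacity Reservations per Availability Zone,
# from the tutorial setup above.
unused_reservations = {"us-east-1a": 3, "us-east-1b": 3}
target = 10  # total On-Demand target capacity

launched_into_reservations = {}
remaining = target
for zone, available in unused_reservations.items():
    used = min(available, remaining)
    launched_into_reservations[zone] = used
    remaining -= used

# Instances not covered by reservations are launched into regular
# On-Demand capacity per the allocation strategy (lowest-price here).
regular_on_demand = remaining
```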
{ "CapacityReservationId": "cr-111",
"InstanceType": "c5.xlarge",
"AvailableInstanceCount": 0
}
{ "CapacityReservationId": "cr-222",
"InstanceType": "c5.xlarge",
"AvailableInstanceCount": 0
}
Tutorial: Use Spot Fleet with instance weighting
Objective
Example Corp, a pharmaceutical company, wants to use the computational power of Amazon EC2
for screening chemical compounds that might be used to fight cancer.
Planning
Example Corp first reviews Spot Best Practices. Next, Example Corp determines the following
requirements for their Spot Fleet.
Instance types
Example Corp has a compute- and memory-intensive application that performs best with at least 60 GB
of memory and eight virtual CPUs (vCPUs). They want to maximize these resources for the application at
the lowest possible price. Example Corp decides that any of the following EC2 instance types would meet
their needs:
Instance type    Memory (GiB)    vCPUs
r3.2xlarge       61              8
r3.4xlarge       122             16
r3.8xlarge       244             32
With instance weighting, target capacity can equal a number of instances (the default) or a combination
of factors such as cores (vCPUs), memory (GiBs), and storage (GBs). By considering the base for their
application (60 GB of RAM and eight vCPUs) as 1 unit, Example Corp decides that 20 times this amount
would meet their needs. So the company sets the target capacity of their Spot Fleet request to 20.
Instance weights
After determining the target capacity, Example Corp calculates instance weights. To calculate the
instance weight for each instance type, they determine the units of each instance type that are required
to reach the target capacity as follows:
Therefore, Example Corp assigns instance weights of 1, 2, and 4 to the respective launch configurations
in their Spot Fleet request.
Example Corp uses the On-Demand price per instance hour as a starting point for their price. They could
also use recent Spot prices, or a combination of the two. To calculate the price per unit hour, they divide
their starting price per instance hour by the weight. For example:
Instance type On-Demand price Instance weight Price per unit hour
Example Corp could use a global price per unit hour of $0.7 and be competitive for all three instance
types. They could also use a global price per unit hour of $0.7 and a specific price per unit hour of $0.9 in
the r3.8xlarge launch specification.
Verify permissions
Before creating a Spot Fleet request, Example Corp verifies that it has an IAM role with the required
permissions. For more information, see Spot Fleet permissions (p. 850).
{
"SpotPrice": "0.70",
"TargetCapacity": 20,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.2xlarge",
"SubnetId": "subnet-482e4972",
"WeightedCapacity": 1
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.4xlarge",
"SubnetId": "subnet-482e4972",
"WeightedCapacity": 2
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.8xlarge",
"SubnetId": "subnet-482e4972",
"SpotPrice": "0.90",
"WeightedCapacity": 4
}
]
}
Example Corp creates the Spot Fleet request using the request-spot-fleet command.
For more information, see Spot Fleet request types (p. 822).
Fulfillment
The allocation strategy determines which Spot capacity pools your Spot Instances come from.
With the lowestPrice strategy (which is the default strategy), the Spot Instances come from the pool
with the lowest price per unit at the time of fulfillment. To provide 20 units of capacity, the Spot Fleet
launches either 20 r3.2xlarge instances (20 divided by 1), 10 r3.4xlarge instances (20 divided by 2),
or 5 r3.8xlarge instances (20 divided by 4).
If Example Corp used the diversified strategy, the Spot Instances would come from all three pools.
The Spot Fleet would launch 6 r3.2xlarge instances (which provide 6 units), 3 r3.4xlarge instances
(which provide 6 units), and 2 r3.8xlarge instances (which provide 8 units), for a total of 20 units.
Example configurations
Topics
• EC2 Fleet example configurations (p. 900)
• Spot Fleet example configurations (p. 913)
EC2 Fleet example configurations
Examples
• Example 1: Launch Spot Instances as the default purchasing option (p. 901)
• Example 2: Launch On-Demand Instances as the default purchasing option (p. 901)
• Example 3: Launch On-Demand Instances as the primary capacity (p. 902)
• Example 4: Launch Spot Instances using the lowest-price allocation strategy (p. 902)
• Example 5: Launch On-Demand Instances using multiple Capacity Reservations (p. 903)
• Example 6: Launch On-Demand Instances using Capacity Reservations when the total target capacity
exceeds the number of unused Capacity Reservations (p. 905)
• Example 7: Launch On-Demand Instances using targeted Capacity Reservations (p. 908)
• Example 8: Configure Capacity Rebalancing to launch replacement Spot Instances (p. 910)
• Example 9: Launch Spot Instances in a capacity-optimized fleet (p. 911)
• Example 10: Launch Spot Instances in a capacity-optimized fleet with priorities (p. 912)
Example 1: Launch Spot Instances as the default purchasing option
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-0e8c754449b27161c",
"Version": "1"
}
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 2,
"DefaultTargetCapacityType": "spot"
}
}
Example 2: Launch On-Demand Instances as the default purchasing option
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-0e8c754449b27161c",
"Version": "1"
}
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 2,
"DefaultTargetCapacityType": "on-demand"
}
}
Example 3: Launch On-Demand Instances as the primary capacity
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-0e8c754449b27161c",
"Version": "1"
}
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 2,
"OnDemandTargetCapacity": 1,
"DefaultTargetCapacityType": "spot"
}
}
Example 4: Launch Spot Instances using the lowest-price allocation strategy
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-0e8c754449b27161c",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "c4.large",
"WeightedCapacity": 1,
"SubnetId": "subnet-a4f6c5d3"
},
{
"InstanceType": "c3.large",
"WeightedCapacity": 1,
"SubnetId": "subnet-a4f6c5d3"
},
{
"InstanceType": "c5.large",
"WeightedCapacity": 1,
"SubnetId": "subnet-a4f6c5d3"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 2,
"DefaultTargetCapacityType": "spot"
}
}
Example 5: Launch On-Demand Instances using multiple Capacity Reservations
Note that you can also use the prioritized allocation strategy instead of the lowest-price
allocation strategy.
Capacity Reservations
The account has the following 15 unused Capacity Reservations in 3 different pools. The number of
Capacity Reservations in each pool is indicated by AvailableInstanceCount.
{
"CapacityReservationId": "cr-111",
"InstanceType": "m5.large",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount": 5,
"InstanceMatchCriteria": "open",
"State": "active"
}
{
"CapacityReservationId": "cr-222",
"InstanceType": "m4.xlarge",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount": 5,
"InstanceMatchCriteria": "open",
"State": "active"
}
{
"CapacityReservationId": "cr-333",
"InstanceType": "m4.2xlarge",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount":5,
"InstanceMatchCriteria": "open",
"State": "active"
}
Fleet configuration
The following fleet configuration shows only the pertinent configurations for this example. The
total target capacity is 12, and the default target capacity type is on-demand. The On-Demand
allocation strategy is lowest-price. The usage strategy for Capacity Reservations is
use-capacity-reservations-first.
Note
The fleet type must be instant. Other fleet types do not support
use-capacity-reservations-first.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-abc1234567example",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "m5.large",
"AvailabilityZone": "us-east-1a",
"WeightedCapacity": 1
},
{
"InstanceType": "m4.xlarge",
"AvailabilityZone": "us-east-1a",
"WeightedCapacity": 1
},
{
"InstanceType": "m4.2xlarge",
"AvailabilityZone": "us-east-1a",
"WeightedCapacity": 1
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 12,
"DefaultTargetCapacityType": "on-demand"
},
"OnDemandOptions": {
"AllocationStrategy": "lowest-price"
"CapacityReservationOptions": {
"UsageStrategy": "use-capacity-reservations-first"
}
},
"Type": "instant",
}
After you create the instant fleet using the preceding configuration, the following 12 instances are
launched to meet the target capacity:
• 5 On-Demand Instances are launched into the 5 unused m5.large Capacity Reservations
• 5 On-Demand Instances are launched into the 5 unused m4.xlarge Capacity Reservations
• 2 On-Demand Instances are launched into 2 of the 5 unused m4.2xlarge Capacity Reservations
After the fleet is launched, you can run describe-capacity-reservations to see how many unused Capacity
Reservations are remaining. In this example, you should see the following response, which shows that
all of the m5.large and m4.xlarge Capacity Reservations were used, with 3 m4.2xlarge Capacity
Reservations remaining unused.
{
"CapacityReservationId": "cr-111",
"InstanceType": "m5.large",
"AvailableInstanceCount": 0
}
{
"CapacityReservationId": "cr-222",
"InstanceType": "m4.xlarge",
"AvailableInstanceCount": 0
}
{
"CapacityReservationId": "cr-333",
"InstanceType": "m4.2xlarge",
"AvailableInstanceCount": 3
}
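The draw order can be sketched as a toy model: with use-capacity-reservations-first and the lowest-price strategy, unused reservations are consumed in ascending price order until the target is met. The hourly prices below are placeholders chosen only so that m5.large sorts cheapest; they are not AWS pricing.

```python
# Placeholder per-hour prices -- for ordering illustration only.
hourly_price = {"m5.large": 0.10, "m4.xlarge": 0.20, "m4.2xlarge": 0.40}
unused = {"m5.large": 5, "m4.xlarge": 5, "m4.2xlarge": 5}
target = 12  # total On-Demand target capacity

launched = {}
remaining = target
for itype in sorted(unused, key=hourly_price.get):  # cheapest pool first
    used = min(unused[itype], remaining)
    launched[itype] = used
    remaining -= used

# Reservations left over after the fleet is fulfilled.
leftover = {t: unused[t] - launched[t] for t in unused}
```

The leftover count (3 unused m4.2xlarge reservations) matches the describe-capacity-reservations output shown above.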
Example 6: Launch On-Demand Instances using Capacity Reservations when the total target capacity
exceeds the number of unused Capacity Reservations
• On-Demand allocation strategy: lowest-price (When the number of unused Capacity Reservations
is less than the On-Demand target capacity, the fleet determines the pools in which to launch the
remaining On-Demand capacity based on the On-Demand allocation strategy.)
Note that you can also use the prioritized allocation strategy instead of the lowest-price
allocation strategy.
Capacity Reservations
The account has the following 15 unused Capacity Reservations in 3 different pools. The number of
Capacity Reservations in each pool is indicated by AvailableInstanceCount.
{
"CapacityReservationId": "cr-111",
"InstanceType": "m5.large",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount": 5,
"InstanceMatchCriteria": "open",
"State": "active"
}
{
"CapacityReservationId": "cr-222",
"InstanceType": "m4.xlarge",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount": 5,
"InstanceMatchCriteria": "open",
"State": "active"
}
{
"CapacityReservationId": "cr-333",
"InstanceType": "m4.2xlarge",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount":5,
"InstanceMatchCriteria": "open",
"State": "active"
}
Fleet configuration
The following fleet configuration shows only the pertinent configurations for this example. The
total target capacity is 16, and the default target capacity type is on-demand. The On-Demand
allocation strategy is lowest-price. The usage strategy for Capacity Reservations is
use-capacity-reservations-first.
Note
The fleet type must be instant. Other fleet types do not support
use-capacity-reservations-first.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-0e8c754449b27161c",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "m5.large",
"AvailabilityZone": "us-east-1a",
"WeightedCapacity": 1
},
{
"InstanceType": "m4.xlarge",
"AvailabilityZone": "us-east-1a",
"WeightedCapacity": 1
},
{
"InstanceType": "m4.2xlarge",
"AvailabilityZone": "us-east-1a",
"WeightedCapacity": 1
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 16,
"DefaultTargetCapacityType": "on-demand"
},
"OnDemandOptions": {
"AllocationStrategy": "lowest-price"
"CapacityReservationOptions": {
"UsageStrategy": "use-capacity-reservations-first"
}
},
"Type": "instant",
}
After you create the instant fleet using the preceding configuration, the following 16 instances are
launched to meet the target capacity:
• The Capacity Reservations are used first to launch 15 On-Demand Instances into the 15 unused
Capacity Reservations across the three pools
• To meet the target capacity, 1 additional On-Demand Instance is launched into regular On-Demand
capacity according to the On-Demand allocation strategy, which is lowest-price in this example
After the fleet is launched, you can run describe-capacity-reservations to see how many unused Capacity
Reservations are remaining. In this example, you should see the following response, which shows that all
of the Capacity Reservations in all of the pools were used.
{
"CapacityReservationId": "cr-111",
"InstanceType": "m5.large",
"AvailableInstanceCount": 0
}
{
"CapacityReservationId": "cr-222",
"InstanceType": "m4.xlarge",
"AvailableInstanceCount": 0
}
{
"CapacityReservationId": "cr-333",
"InstanceType": "m4.2xlarge",
"AvailableInstanceCount": 0
}
Example 7: Launch On-Demand Instances using targeted Capacity Reservations
Note that you can also use the prioritized allocation strategy instead of the lowest-price
allocation strategy.
For a walkthrough of the procedures that you must perform to accomplish this example, see Tutorial:
Launch On-Demand Instances using targeted Capacity Reservations (p. 893).
Capacity Reservations
The account has the following 6 unused Capacity Reservations in 2 different pools. In this example, the
pools differ by their Availability Zones. The number of Capacity Reservations in each pool is indicated by
AvailableInstanceCount.
{
"CapacityReservationId": "cr-111",
"InstanceType": "c5.xlarge",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1a",
"AvailableInstanceCount": 3,
"InstanceMatchCriteria": "open",
"State": "active"
}
{
"CapacityReservationId": "cr-222",
"InstanceType": "c5.xlarge",
"InstancePlatform": "Linux/UNIX",
"AvailabilityZone": "us-east-1b",
"AvailableInstanceCount": 3,
"InstanceMatchCriteria": "open",
"State": "active"
}
Fleet configuration
The following fleet configuration shows only the pertinent configurations for this example. The
total target capacity is 10, and the default target capacity type is on-demand. The On-Demand
allocation strategy is lowest-price. The usage strategy for Capacity Reservations is
use-capacity-reservations-first.
In this example, the On-Demand Instance price for c5.xlarge in us-east-1 is $0.17 per hour.
Note
The fleet type must be instant. Other fleet types do not support
use-capacity-reservations-first.
{
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "my-launch-template",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "c5.xlarge",
"AvailabilityZone": "us-east-1a"
},
{
"InstanceType": "c5.xlarge",
"AvailabilityZone": "us-east-1b"
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 10,
"DefaultTargetCapacityType": "on-demand"
},
"OnDemandOptions": {
"AllocationStrategy": "lowest-price",
"CapacityReservationOptions": {
"UsageStrategy": "use-capacity-reservations-first"
}
},
"Type": "instant"
}
After you create the instant fleet using the preceding configuration, the following 10 instances are
launched to meet the target capacity:
• The Capacity Reservations are used first to launch 6 On-Demand Instances as follows:
• 3 On-Demand Instances are launched into the 3 c5.xlarge targeted Capacity Reservations in
us-east-1a
• 3 On-Demand Instances are launched into the 3 c5.xlarge targeted Capacity Reservations in
us-east-1b
• To meet the target capacity, 4 additional On-Demand Instances are launched into regular On-Demand
capacity according to the On-Demand allocation strategy, which is lowest-price in this example.
However, because the pools are the same price (because price is per Region and not per Availability
Zone), the fleet launches the remaining 4 On-Demand Instances into either of the pools.
After the fleet is launched, you can run describe-capacity-reservations to see how many unused Capacity
Reservations are remaining. In this example, you should see the following response, which shows that all
of the Capacity Reservations in all of the pools were used.
{
"CapacityReservationId": "cr-111",
"InstanceType": "c5.xlarge",
"AvailableInstanceCount": 0
}
{
"CapacityReservationId": "cr-222",
"InstanceType": "c5.xlarge",
"AvailableInstanceCount": 0
}
Example 8: Configure Capacity Rebalancing to launch replacement Spot Instances
The effectiveness of the Capacity Rebalancing strategy depends on the number of Spot capacity pools
specified in the EC2 Fleet request. We recommend that you configure the fleet with a diversified set of
instance types and Availability Zones, and for AllocationStrategy, specify capacity-optimized.
For more information about what you should consider when configuring an EC2 Fleet for Capacity
Rebalancing, see Capacity Rebalancing (p. 800).
The following EC2 Fleet configuration uses the launch-before-terminate replacement strategy with a
termination delay of 720 seconds.
{
"ExcessCapacityTerminationPolicy": "termination",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "LaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "c3.large",
"WeightedCapacity": 1,
"Placement": {
"AvailabilityZone": "us-east-1a"
}
},
{
"InstanceType": "c4.large",
"WeightedCapacity": 1,
"Placement": {
"AvailabilityZone": "us-east-1a"
}
},
{
"InstanceType": "c5.large",
"WeightedCapacity": 1,
"Placement": {
"AvailabilityZone": "us-east-1a"
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 5,
"DefaultTargetCapacityType": "spot"
},
"SpotOptions": {
"AllocationStrategy": "capacity-optimized",
"MaintenanceStrategies": {
"CapacityRebalance": {
"ReplacementStrategy": "launch-before-terminate",
"TerminationDelay": "720"
}
}
}
}
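The launch-before-terminate flow that this configuration requests can be sketched as a toy timeline. The fleet service, not your code, performs these actions; the instance IDs below are hypothetical.

```python
# With launch-before-terminate, a rebalance recommendation triggers a
# replacement launch immediately; the recommended instance is terminated
# only after the configured TerminationDelay has elapsed.
TERMINATION_DELAY_SECONDS = 720

def rebalance_timeline(instance_id: str, recommended_at: int):
    """Return (time_in_seconds, action, instance) events for one instance."""
    replacement_id = f"replacement-of-{instance_id}"  # hypothetical ID
    return [
        (recommended_at, "launch", replacement_id),
        (recommended_at + TERMINATION_DELAY_SECONDS, "terminate", instance_id),
    ]

events = rebalance_timeline("i-0abc123", recommended_at=0)
```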
Example 9: Launch Spot Instances in a capacity-optimized fleet
In the following example, the three launch specifications specify three Spot capacity pools. The target
capacity is 50 Spot Instances. The EC2 Fleet attempts to launch 50 Spot Instances into the Spot capacity
pool with optimal capacity for the number of instances that are launching.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized",
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "my-launch-template",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "r4.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2a"
}
},
{
"InstanceType": "m4.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
},
{
"InstanceType": "c5.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 50,
"DefaultTargetCapacityType": "spot"
}
}
Example 10: Launch Spot Instances in a capacity-optimized fleet with priorities
When using the capacity-optimized-prioritized allocation strategy, you can use the Priority
parameter to specify the priorities of the Spot capacity pools, where the lower the number the higher
priority. You can also set the same priority for several Spot capacity pools if you favor them equally. If
you do not set a priority for a pool, the pool will be considered last in terms of priority.
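The ordering rule can be sketched as a sort in which pools without a Priority are pushed to the end. This is a sketch of the rule described above, not the fleet's actual implementation.

```python
# Lower Priority numbers are considered first; pools with no Priority
# are considered last.
overrides = [
    {"InstanceType": "c5.2xlarge"},                 # no priority -> last
    {"InstanceType": "r4.2xlarge", "Priority": 1},  # highest priority
    {"InstanceType": "m4.2xlarge", "Priority": 2},
]

ordered = sorted(overrides, key=lambda o: o.get("Priority", float("inf")))
preference = [o["InstanceType"] for o in ordered]
```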
In the following example, the three launch specifications specify three Spot capacity pools. Each pool is
prioritized, where the lower the number the higher priority. The target capacity is 50 Spot Instances. The
EC2 Fleet attempts to launch 50 Spot Instances into the Spot capacity pool with the highest priority on a
best-effort basis, but optimizes for capacity first.
{
"SpotOptions": {
"AllocationStrategy": "capacity-optimized-prioritized"
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "my-launch-template",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "r4.2xlarge",
"Priority": 1,
"Placement": {
"AvailabilityZone": "us-west-2a"
}
},
{
"InstanceType": "m4.2xlarge",
"Priority": 2,
"Placement": {
"AvailabilityZone": "us-west-2b"
}
},
{
"InstanceType": "c5.2xlarge",
"Priority": 3,
"Placement": {
"AvailabilityZone": "us-west-2b"
}
}
]
}
],
"TargetCapacitySpecification": {
"TotalTargetCapacity": 50,
"DefaultTargetCapacityType": "spot"
}
}
Spot Fleet example configurations
Examples
• Example 1: Launch Spot Instances using the lowest-priced Availability Zone or subnet in the
Region (p. 913)
• Example 2: Launch Spot Instances using the lowest-priced Availability Zone or subnet in a specified
list (p. 914)
• Example 3: Launch Spot Instances using the lowest-priced instance type in a specified list (p. 915)
• Example 4. Override the price for the request (p. 916)
• Example 5: Launch a Spot Fleet using the diversified allocation strategy (p. 918)
• Example 6: Launch a Spot Fleet using instance weighting (p. 920)
• Example 7: Launch a Spot Fleet with On-Demand capacity (p. 921)
• Example 8: Configure Capacity Rebalancing to launch replacement Spot Instances (p. 921)
• Example 9: Launch Spot Instances in a capacity-optimized fleet (p. 922)
• Example 10: Launch Spot Instances in a capacity-optimized fleet with priorities (p. 923)
Example 1: Launch Spot Instances using the lowest-priced Availability Zone or subnet in the Region
{
"TargetCapacity": 20,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "m3.medium",
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
]
}
Example 2: Launch Spot Instances using the lowest-priced Availability Zone or subnet in a specified list
Availability Zones
The Spot Fleet launches the instances in the default subnet of the lowest-priced Availability Zone that
you specified.
{
"TargetCapacity": 20,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "m3.medium",
"Placement": {
"AvailabilityZone": "us-west-2a, us-west-2b"
},
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
]
}
Subnets
You can specify default subnets or nondefault subnets, and the nondefault subnets can be from a
default VPC or a nondefault VPC. The Spot service launches the instances in whichever subnet is in the
lowest-priced Availability Zone.
You can't specify different subnets from the same Availability Zone in a Spot Fleet request.
"TargetCapacity": 20,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "m3.medium",
"SubnetId": "subnet-a61dafcf, subnet-65ea5f08",
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
]
}
If the instances are launched in a default VPC, they receive a public IPv4 address by default. If the
instances are launched in a nondefault VPC, they do not receive a public IPv4 address by default. Use
a network interface in the launch specification to assign a public IPv4 address to instances launched in
a nondefault VPC. When you specify a network interface, you must include the subnet ID and security
group ID using the network interface.
...
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"InstanceType": "m3.medium",
"NetworkInterfaces": [
{
"DeviceIndex": 0,
"SubnetId": "subnet-1a2b3c4d",
"Groups": [ "sg-1a2b3c4d" ],
"AssociatePublicIpAddress": true
}
],
"IamInstanceProfile": {
"Arn": "arn:aws:iam::880185128111:instance-profile/my-iam-role"
}
}
...
Example 3: Launch Spot Instances using the lowest-priced instance type in a specified list
Availability Zone
{
"TargetCapacity": 20,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "cc2.8xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
},
{
"ImageId": "ami-1a2b3c4d",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "r3.8xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
}
]
}
Subnet
{
"TargetCapacity": 20,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "cc2.8xlarge",
"SubnetId": "subnet-1a2b3c4d"
},
{
"ImageId": "ami-1a2b3c4d",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "r3.8xlarge",
"SubnetId": "subnet-1a2b3c4d"
}
]
}
The following examples specify a maximum price for the fleet request and maximum prices for two
of the three launch specifications. The maximum price for the fleet request is used for any launch
specification that does not specify a maximum price. The Spot Fleet launches the instances using the
instance type with the lowest price.
Availability Zone
{
"SpotPrice": "1.00",
"TargetCapacity": 30,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c3.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
},
"SpotPrice": "0.10"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c3.4xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
},
"SpotPrice": "0.20"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c3.8xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
}
]
}
Subnet
{
"SpotPrice": "1.00",
"TargetCapacity": 30,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c3.2xlarge",
"SubnetId": "subnet-1a2b3c4d",
"SpotPrice": "0.10"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c3.4xlarge",
"SubnetId": "subnet-1a2b3c4d",
"SpotPrice": "0.20"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c3.8xlarge",
"SubnetId": "subnet-1a2b3c4d"
}
]
}
The following examples use the diversified allocation strategy. Each launch specification has a
different instance type, but the same AMI and Availability Zone or subnet.
Availability Zone
{
"SpotPrice": "0.70",
"TargetCapacity": 30,
"AllocationStrategy": "diversified",
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c4.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "m3.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
}
]
}
Subnet
{
"SpotPrice": "0.70",
"TargetCapacity": 30,
"AllocationStrategy": "diversified",
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c4.2xlarge",
"SubnetId": "subnet-1a2b3c4d"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "m3.2xlarge",
"SubnetId": "subnet-1a2b3c4d"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.2xlarge",
"SubnetId": "subnet-1a2b3c4d"
}
]
}
To increase the chance that your Spot request can be fulfilled by EC2 capacity in the event of an
outage in one of the Availability Zones, diversify across zones. For this scenario, include each
Availability Zone available to you in the launch specifications. Instead of using the same subnet each
time, use three unique subnets, each mapping to a different Availability Zone.
Availability Zone
{
"SpotPrice": "0.70",
"TargetCapacity": 30,
"AllocationStrategy": "diversified",
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c4.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2a"
}
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "m3.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
}
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2c"
}
}
]
}
Subnet
{
"SpotPrice": "0.70",
"TargetCapacity": 30,
"AllocationStrategy": "diversified",
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c4.2xlarge",
"SubnetId": "subnet-1a2b3c4d"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "m3.2xlarge",
"SubnetId": "subnet-2a2b3c4d"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.2xlarge",
"SubnetId": "subnet-3a2b3c4d"
}
]
}
The following examples use instance weighting. The target capacity is 20 instances, and the launch
specifications include an instance type with a weighted capacity of 6 and another with a weighted
capacity of 3.
If the r3.2xlarge request is successful, Spot provisions 4 of these instances. Divide 20 by 6 for a total
of 3.33 instances, then round up to 4 instances.
If the c3.xlarge request is successful, Spot provisions 7 of these instances. Divide 20 by 3 for a total of
6.66 instances, then round up to 7 instances.
For more information, see Spot Fleet instance weighting (p. 846).
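The rounding arithmetic described above can be reproduced with a short sketch (pure arithmetic, no AWS calls):

```python
import math

def instances_needed(target_capacity, weighted_capacity):
    """Each instance fulfills `weighted_capacity` units, so the number of
    instances is the target capacity divided by the weight, rounded up."""
    return math.ceil(target_capacity / weighted_capacity)

print(instances_needed(20, 6))  # r3.2xlarge: 20 / 6 = 3.33, rounded up to 4
print(instances_needed(20, 3))  # c3.xlarge:  20 / 3 = 6.66, rounded up to 7
```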
Availability Zone
{
"SpotPrice": "0.70",
"TargetCapacity": 20,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.2xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
},
"WeightedCapacity": 6
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c3.xlarge",
"Placement": {
"AvailabilityZone": "us-west-2b"
},
"WeightedCapacity": 3
}
]
}
Subnet
{
"SpotPrice": "0.70",
"TargetCapacity": 20,
"IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.2xlarge",
"SubnetId": "subnet-1a2b3c4d",
"WeightedCapacity": 6
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c3.xlarge",
"SubnetId": "subnet-1a2b3c4d",
"WeightedCapacity": 3
}
]
}
The following example specifies a desired target capacity of 10, of which 5 must be On-Demand
capacity. Spot capacity is not specified; it is implied as the balance of the target capacity minus the
On-Demand capacity. Amazon EC2 launches 5 capacity units as On-Demand, and 5 capacity units
(10 - 5 = 5) as Spot, provided that Spot capacity is available.
{
"IamFleetRole": "arn:aws:iam::781603563322:role/aws-ec2-spot-fleet-tagging-role",
"AllocationStrategy": "lowestPrice",
"TargetCapacity": 10,
"SpotPrice": null,
"ValidFrom": "2018-04-04T15:58:13Z",
"ValidUntil": "2019-04-04T15:58:13Z",
"TerminateInstancesWithExpiration": true,
"LaunchSpecifications": [],
"Type": "maintain",
"OnDemandTargetCapacity": 5,
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateId": "lt-0dbb04d4a6cca5ad1",
"Version": "2"
},
"Overrides": [
{
"InstanceType": "t2.medium",
"WeightedCapacity": 1,
"SubnetId": "subnet-d0dc51fb"
}
]
}
]
}
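The arithmetic behind the implied Spot capacity in the example above can be sketched as:

```python
def capacity_split(target_capacity, on_demand_target):
    """Spot capacity is implied: it is the balance of the target capacity
    after the On-Demand portion is subtracted."""
    return {"OnDemand": on_demand_target, "Spot": target_capacity - on_demand_target}

print(capacity_split(10, 5))  # prints {'OnDemand': 5, 'Spot': 5}
```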
Note
We recommend using launch-before-terminate only if you can predict how long your
instance shutdown procedures will take to complete. This ensures that the old instances are
terminated only after the shutdown procedures are completed. You are charged for all instances
while they are running.
The effectiveness of the Capacity Rebalancing strategy depends on the number of Spot capacity pools
specified in the Spot Fleet request. We recommend that you configure the fleet with a diversified set of
instance types and Availability Zones, and for AllocationStrategy, specify capacityOptimized.
For more information about what you should consider when configuring a Spot Fleet for Capacity
Rebalancing, see Capacity Rebalancing (p. 843).
{
"SpotFleetRequestConfig": {
"AllocationStrategy": "capacityOptimized",
"IamFleetRole": "arn:aws:iam::000000000000:role/aws-ec2-spot-fleet-tagging-role",
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "LaunchTemplate",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "c3.large",
"WeightedCapacity": 1,
"Placement": {
"AvailabilityZone": "us-east-1a"
}
},
{
"InstanceType": "c4.large",
"WeightedCapacity": 1,
"Placement": {
"AvailabilityZone": "us-east-1a"
}
},
{
"InstanceType": "c5.large",
"WeightedCapacity": 1,
"Placement": {
"AvailabilityZone": "us-east-1a"
}
}
]
}
],
"TargetCapacity": 5,
"SpotMaintenanceStrategies": {
"CapacityRebalance": {
"ReplacementStrategy": "launch-before-terminate",
"TerminationDelay": "720"
}
}
}
}
In the following example, the three launch specifications specify three Spot capacity pools. The target
capacity is 50 Spot Instances. The Spot Fleet attempts to launch 50 Spot Instances into the Spot capacity
pool with optimal capacity for the number of instances that are launching.
{
"TargetCapacity": "50",
"SpotFleetRequestConfig": {
"AllocationStrategy": "capacityOptimized",
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "my-launch-template",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "r4.2xlarge",
"AvailabilityZone": "us-west-2a"
},
{
"InstanceType": "m4.2xlarge",
"AvailabilityZone": "us-west-2b"
},
{
"InstanceType": "c5.2xlarge",
"AvailabilityZone": "us-west-2b"
}
]
}
]
}
When using the capacityOptimizedPrioritized allocation strategy, you can use the Priority
parameter to specify the priorities of the Spot capacity pools, where a lower number indicates a higher
priority. You can also set the same priority for several Spot capacity pools if you favor them equally. If
you do not set a priority for a pool, the pool is considered last in terms of priority.
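The priority rules described above can be sketched as a sort; the pool entries are illustrative:

```python
import math

def order_by_priority(pools):
    """Sort Spot capacity pools by the Priority field. A lower number means
    a higher priority; pools without a Priority are considered last."""
    return sorted(pools, key=lambda p: p.get("Priority", math.inf))

pools = [
    {"InstanceType": "c5.2xlarge", "Priority": 3},
    {"InstanceType": "m4.2xlarge"},               # no priority: considered last
    {"InstanceType": "r4.2xlarge", "Priority": 1},
]
print([p["InstanceType"] for p in order_by_priority(pools)])
# prints ['r4.2xlarge', 'c5.2xlarge', 'm4.2xlarge']
```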
In the following example, the three launch specifications specify three Spot capacity pools. Each pool is
assigned a priority, where a lower number indicates a higher priority. The target capacity is 50 Spot
Instances. The Spot Fleet attempts to launch 50 Spot Instances into the Spot capacity pool with the
highest priority on a best-effort basis, but optimizes for capacity first.
{
"TargetCapacity": "50",
"SpotFleetRequestConfig": {
"AllocationStrategy": "capacityOptimizedPrioritized"
},
"LaunchTemplateConfigs": [
{
"LaunchTemplateSpecification": {
"LaunchTemplateName": "my-launch-template",
"Version": "1"
},
"Overrides": [
{
"InstanceType": "r4.2xlarge",
"Priority": 1,
"AvailabilityZone": "us-west-2a"
},
{
"InstanceType": "m4.2xlarge",
"Priority": 2,
"AvailabilityZone": "us-west-2b"
},
{
"InstanceType": "c5.2xlarge",
"Priority": 3,
"AvailabilityZone": "us-west-2b"
}
]
}
]
}
Fleet quotas
The usual Amazon EC2 quotas apply to instances launched by an EC2 Fleet or a Spot Fleet, such as Spot
Instance limits (p. 481) and volume limits (p. 1637). In addition, the following limits apply:
• The number of active EC2 Fleets and Spot Fleets per Region: 1,000* †
• The number of Spot capacity pools (unique combination of instance type and subnet): 300* ‡
• The size of the user data in a launch specification: 16 KB †
• The target capacity per EC2 Fleet or Spot Fleet: 10,000
• The target capacity across all EC2 Fleets and Spot Fleets in a Region: 100,000*
• An EC2 Fleet request or a Spot Fleet request can't span Regions.
• An EC2 Fleet request or a Spot Fleet request can't span different subnets from the same Availability
Zone.
* These limits apply to both your EC2 Fleets and your Spot Fleets.
† These are hard limits. You cannot request a limit increase for these limits.
‡ This limit only applies to fleets of type request or maintain. This limit does not apply to instant
fleets.
If you need more than the default limits for target capacity, complete the AWS Support Center Create
case form to request a limit increase. For Limit type, choose EC2 Fleet, choose a Region, and then choose
Target Fleet Capacity per Fleet (in units) or Target Fleet Capacity per Region (in units), or both.
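A pre-flight sanity check against the per-fleet limits listed above can be sketched as follows; this is an illustrative local check, not an AWS API:

```python
# Limits taken from the quota list above: 10,000 capacity units per fleet,
# and 16 KB of user data per launch specification.
MAX_TARGET_CAPACITY_PER_FLEET = 10_000
MAX_USER_DATA_BYTES = 16 * 1024

def validate_fleet_request(target_capacity, user_data_sizes):
    """Return a list of quota problems found before submitting the request."""
    problems = []
    if target_capacity > MAX_TARGET_CAPACITY_PER_FLEET:
        problems.append("target capacity exceeds the per-fleet limit")
    for i, size in enumerate(user_data_sizes):
        if size > MAX_USER_DATA_BYTES:
            problems.append(f"user data in launch specification {i} exceeds 16 KB")
    return problems

print(validate_fleet_request(20_000, [1024]))
# prints ['target capacity exceeds the per-fleet limit']
```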
Monitor Amazon EC2
After you have defined your monitoring goals and have created your monitoring plan, the next step is
to establish a baseline for normal Amazon EC2 performance in your environment. You should measure
Amazon EC2 performance at various times and under different load conditions. As you monitor Amazon
EC2, you should store a history of monitoring data that you collect. You can compare current Amazon
EC2 performance to this historical data to help you to identify normal performance patterns and
performance anomalies, and devise methods to address them. For example, you can monitor CPU
utilization, disk I/O, and network utilization for your EC2 instances. When performance falls outside your
established baseline, you might need to reconfigure or optimize the instance to reduce CPU utilization,
improve disk I/O, or reduce network traffic.
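The baselining approach described above can be sketched as a simple statistical check; the sample CPU values and the three-sigma cutoff are illustrative choices, not an AWS prescription:

```python
from statistics import mean, stdev

def outside_baseline(history, current, n_sigma=3):
    """Flag a reading that falls more than n_sigma standard deviations
    from the mean of the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > n_sigma * sigma

cpu_history = [22.0, 25.0, 24.0, 23.0, 26.0, 24.5, 23.5]  # percent CPU, invented
print(outside_baseline(cpu_history, 95.0))  # prints True  (anomalous spike)
print(outside_baseline(cpu_history, 25.0))  # prints False (within baseline)
```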
Automated and manual monitoring
Monitoring tools
• Automated monitoring tools (p. 926)
• Manual monitoring tools (p. 927)
• System status checks – monitor the AWS systems required to use your instance to ensure that they
are working properly. These checks detect problems with your instance that require AWS involvement
to repair. When a system status check fails, you can choose to wait for AWS to fix the issue or you can
resolve it yourself (for example, by stopping and restarting or terminating and replacing an instance).
Examples of problems that cause system status checks to fail include:
• Loss of network connectivity
• Loss of system power
• Software issues on the physical host
• Hardware issues on the physical host that impact network reachability
For more information, see Status checks for your instances (p. 928).
• Instance status checks – monitor the software and network configuration of your individual instance.
These checks detect problems that require your involvement to repair. When an instance status check
fails, typically you will need to address the problem yourself (for example, by rebooting the instance
or by making modifications in your operating system). Examples of problems that may cause instance
status checks to fail include:
• Failed system status checks
• Misconfigured networking or startup configuration
• Exhausted memory
• Corrupted file system
• Incompatible kernel
For more information, see Status checks for your instances (p. 928).
• Amazon CloudWatch alarms – watch a single metric over a time period you specify, and perform
one or more actions based on the value of the metric relative to a given threshold over a number
of time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon
SNS) topic or Amazon EC2 Auto Scaling policy. Alarms invoke actions for sustained state changes only.
CloudWatch alarms will not invoke actions simply because they are in a particular state; the state
must have changed and been maintained for a specified number of periods. For more information, see
Monitor your instances using CloudWatch (p. 958).
• Amazon EventBridge – automate your AWS services and respond automatically to system events.
Events from AWS services are delivered to EventBridge in near real time, and you can specify
automated actions to take when an event matches a rule you write. For more information, see What is
Amazon EventBridge?.
• Amazon CloudWatch Logs – monitor, store, and access your log files from Amazon EC2 instances, AWS
CloudTrail, or other sources. For more information, see the Amazon CloudWatch Logs User Guide.
• CloudWatch agent – collect logs and system-level metrics from both hosts and guests on your
EC2 instances and on-premises servers. For more information, see Collecting Metrics and Logs
from Amazon EC2 Instances and On-Premises Servers with the CloudWatch Agent in the Amazon
CloudWatch User Guide.
• AWS Management Pack for Microsoft System Center Operations Manager – links Amazon EC2
instances and the Windows or Linux operating systems running inside them. The AWS Management
Pack is an extension to Microsoft System Center Operations Manager. It uses a designated computer in
your datacenter (called a watcher node) and the Amazon Web Services APIs to remotely discover and
collect information about your AWS resources. For more information, see AWS Management Pack for
Microsoft System Center.
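The sustained-state behavior described for CloudWatch alarms above can be sketched with a simplified model (this is not the actual CloudWatch evaluation algorithm):

```python
def alarm_state(readings, threshold, periods):
    """A CloudWatch-style alarm fires only on a sustained breach: the metric
    must exceed the threshold for the given number of consecutive periods."""
    if len(readings) < periods:
        return "OK"
    return "ALARM" if all(r > threshold for r in readings[-periods:]) else "OK"

# A single spike does not fire the alarm; a sustained breach does.
print(alarm_state([10, 95, 12], threshold=90, periods=2))  # prints OK
print(alarm_state([10, 95, 96], threshold=90, periods=2))  # prints ALARM
```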
• Make monitoring a priority to head off small problems before they become big ones.
• Create and implement a monitoring plan that collects monitoring data from all of the parts in your
AWS solution so that you can more easily debug a multi-point failure if one occurs. Your monitoring
plan should address, at a minimum, the following questions:
• What are your goals for monitoring?
• What resources will you monitor?
• How often will you monitor these resources?
• What monitoring tools will you use?
• Who will perform the monitoring tasks?
• Who should be notified when something goes wrong?
• Automate monitoring tasks as much as possible.
• Check the log files on your EC2 instances.
Monitor the status of your instances
A status check gives you the information that results from automated checks performed by Amazon EC2.
These automated checks detect whether specific issues are affecting your instances. The status check
information, together with the data provided by Amazon CloudWatch, gives you detailed operational
visibility into each of your instances.
You can also see the status of specific events that are scheduled for your instances. The status of events
provides information about upcoming activities that are planned for your instances, such as rebooting or
retirement, and includes the scheduled start and end time of each event.
Contents
• Status checks for your instances (p. 928)
• Scheduled events for your instances (p. 935)
Status checks are performed every minute, returning a pass or a fail status. If all checks pass, the overall
status of the instance is OK. If one or more checks fail, the overall status is impaired. Status checks are
built into Amazon EC2, so they cannot be disabled or deleted.
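The overall-status rule described above (OK only when every check passes) can be sketched as:

```python
def overall_status(check_results):
    """Each status check returns pass (True) or fail (False); the instance is
    'ok' only if every check passes, and 'impaired' if one or more fail."""
    return "ok" if all(check_results.values()) else "impaired"

print(overall_status({"system": True, "instance": True}))   # prints ok
print(overall_status({"system": True, "instance": False}))  # prints impaired
```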
When a status check fails, the corresponding CloudWatch metric for status checks is incremented. For
more information, see Status check metrics (p. 967). You can use these metrics to create CloudWatch
alarms that are triggered based on the result of the status checks. For example, you can create an alarm
to warn you if status checks fail on a specific instance. For more information, see Create and edit status
check alarms (p. 932).
You can also create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and
automatically recovers the instance if it becomes impaired due to an underlying issue. For more
information, see Recover your instance (p. 653).
Contents
• Types of status checks (p. 929)
• View status checks (p. 930)
• Report instance status (p. 932)
• Create and edit status check alarms (p. 932)
The following are examples of problems that can cause system status checks to fail:
• Loss of network connectivity
• Loss of system power
• Software issues on the physical host
• Hardware issues on the physical host that impact network reachability
Note
If you perform a restart from the operating system on a bare metal instance, the system status
check might temporarily return a fail status. When the instance becomes available, the system
status check should return a pass status.
The following are examples of problems that can cause instance status checks to fail:
• Failed system status checks
• Misconfigured networking or startup configuration
• Exhausted memory
• Corrupted file system
• Incompatible kernel
Note
If you perform a restart from the operating system on a bare metal instance, the instance status
check might temporarily return a fail status. When the instance becomes available, the instance
status check should return a pass status.
New console
If your instance has a failed status check, you typically must address the problem yourself (for
example, by rebooting the instance or by making instance configuration changes). However, if
your instance has a failed status check and has been unreachable for over 20 minutes, choose
Open support case to submit a request for assistance. To troubleshoot system or instance
status check failures yourself, see Troubleshoot instances with failed status checks (p. 1700).
5. To review the CloudWatch metrics for status checks, select the instance, and then choose the
Monitoring tab. Scroll until you see the graphs for the following metrics:
Old console
If you have an instance with a failed status check and the instance has been unreachable for
over 20 minutes, choose AWS Support to submit a request for assistance. To troubleshoot
system or instance status check failures yourself, see Troubleshoot instances with failed status
checks (p. 1700).
5. To review the CloudWatch metrics for status checks, select the instance, and then choose the
Monitoring tab. Scroll until you see the graphs for the following metrics:
To get the status of all instances with an instance status of impaired, use the describe-instance-status
command with a filter on the impaired status.
If you have an instance with a failed status check, see Troubleshoot instances with failed status
checks (p. 1700).
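The filtering that this lookup performs can be mirrored locally over results shaped like the describe-instance-status output; the instance IDs here are invented:

```python
def impaired_instances(statuses):
    """Keep only the IDs of instances whose instance status is 'impaired'."""
    return [
        s["InstanceId"]
        for s in statuses
        if s["InstanceStatus"]["Status"] == "impaired"
    ]

statuses = [
    {"InstanceId": "i-1234567890abcdef0", "InstanceStatus": {"Status": "ok"}},
    {"InstanceId": "i-0598c7d356eba48d7", "InstanceStatus": {"Status": "impaired"}},
]
print(impaired_instances(statuses))  # prints ['i-0598c7d356eba48d7']
```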
We use reported feedback to identify issues impacting multiple customers, but do not respond to
individual account issues. Providing feedback does not change the status check results that you currently
see for the instance.
Old console
New console
If you add an email address to the list of recipients or create a new topic, Amazon SNS sends a
subscription confirmation email message to each new address. Each recipient must confirm the
subscription by choosing the link contained in that message. Alert notifications are sent only to
confirmed addresses.
6. For Alarm action, turn the toggle on to specify an action to take when the alarm is triggered.
Select the action.
7. For Alarm thresholds, specify the metric and criteria for the alarm.
You can leave the default settings for Group samples by (Average) and Type of data to sample
(Status check failed:either), or you can change them to suit your needs.
For Consecutive period, set the number of periods to evaluate and, in Period, enter the
evaluation period duration before triggering the alarm and sending an email.
8. (Optional) For Sample metric data, choose Add to dashboard.
9. Choose Create.
Old console
If you selected Recover this instance in the previous step, select Status Check Failed (System).
7. In For at least, set the number of periods you want to evaluate and in consecutive periods,
select the evaluation period duration before triggering the alarm and sending an email.
8. (Optional) In Name of alarm, replace the default name with another name for the alarm.
9. Choose Create Alarm.
Important
If you added an email address to the list of recipients or created a new topic, Amazon
SNS sends a subscription confirmation email message to each new address. Each
recipient must confirm the subscription by choosing the link contained in that message.
Alert notifications are sent only to confirmed addresses.
If you need to make changes to an instance status alarm, you can edit it.
New console
Old console
1. Select an existing SNS topic or create a new one. For more information, see Using the AWS CLI with
Amazon SNS in the AWS Command Line Interface User Guide.
2. Use the following list-metrics command to view the available Amazon CloudWatch metrics for
Amazon EC2.
The period is the time frame, in seconds, in which Amazon CloudWatch metrics are collected. This
example uses 300, which is 60 seconds multiplied by 5 minutes. The evaluation period is the number
of consecutive periods for which the value of the metric must be compared to the threshold. This
example uses 2. The alarm actions are the actions to perform when this alarm is triggered. This
example configures the alarm to send an email using Amazon SNS.
Scheduled events
Scheduled events are managed by AWS; you cannot schedule events for your instances. You can view the
events scheduled by AWS, customize scheduled event notifications to include or remove tags from the
email notification, and perform actions when an instance is scheduled to reboot, retire, or stop.
To update the contact information for your account so that you can be sure to be notified about
scheduled events, go to the Account Settings page.
Contents
• Types of scheduled events (p. 935)
• View scheduled events (p. 935)
• Customize scheduled event notifications (p. 939)
• Work with instances scheduled to stop or retire (p. 941)
• Work with instances scheduled for reboot (p. 942)
• Work with instances scheduled for maintenance (p. 944)
• Reschedule a scheduled event (p. 944)
• Define event windows for scheduled events (p. 946)
• Instance stop: At the scheduled time, the instance is stopped. When you start it again, it's migrated to
a new host. Applies only to instances backed by Amazon EBS.
• Instance retirement: At the scheduled time, the instance is stopped if it is backed by Amazon EBS, or
terminated if it is backed by instance store.
• Instance reboot: At the scheduled time, the instance is rebooted.
• System reboot: At the scheduled time, the host for the instance is rebooted.
• System maintenance: At the scheduled time, the instance might be temporarily affected by network
maintenance or power maintenance.
New console
• Alternatively, in the navigation pane, choose EC2 Dashboard. Any resources with an
associated event are displayed under Scheduled events.
Old console
• Alternatively, in the navigation pane, choose EC2 Dashboard. Any resources with an
associated event are displayed under Scheduled Events.
• Some events are also shown for affected resources. For example, in the navigation pane,
choose Instances and select an instance. If the instance has an associated instance stop or
instance retirement event, it is displayed in the lower pane.
AWS CLI
To view scheduled events for your instances using the AWS CLI
{
"Events": [
{
"InstanceEventId": "instance-event-0d59937288b749b32",
"Code": "system-reboot",
"Description": "The instance is scheduled for a reboot",
"NotAfter": "2019-03-15T22:00:00.000Z",
"NotBefore": "2019-03-14T20:00:00.000Z",
"NotBeforeDeadline": "2019-04-05T11:00:00.000Z"
}
]
}
{
"Events": [
{
"InstanceEventId": "instance-event-0e439355b779n26",
"Code": "instance-stop",
"Description": "The instance is running on degraded hardware",
"NotBefore": "2015-05-23T00:00:00.000Z"
}
]
}
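The event timestamps in the output above can be parsed and reasoned about locally; a minimal sketch, assuming the ISO-8601 format shown in the examples:

```python
from datetime import datetime, timezone

def event_window(event):
    """Parse an event's NotBefore/NotAfter timestamps into datetimes.
    Some events (for example, instance-stop) have no NotAfter."""
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    start = datetime.strptime(event["NotBefore"], fmt).replace(tzinfo=timezone.utc)
    end = event.get("NotAfter")
    if end is not None:
        end = datetime.strptime(end, fmt).replace(tzinfo=timezone.utc)
    return start, end

event = {
    "Code": "system-reboot",
    "NotBefore": "2019-03-14T20:00:00.000Z",
    "NotAfter": "2019-03-15T22:00:00.000Z",
}
start, end = event_window(event)
print((end - start).total_seconds() / 3600)  # prints 26.0 (hours in the window)
```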
PowerShell
To view scheduled events for your instances using the AWS Tools for Windows PowerShell
Code : instance-stop
Description : The instance is running on degraded hardware
NotBefore : 5/23/2015 12:00:00 AM
Instance metadata
You can retrieve information about active maintenance events for your instances from the instance
metadata (p. 710) by using Instance Metadata Service Version 2 or Instance Metadata Service
Version 1.
IMDSv2
IMDSv1
The following is example output with information about a scheduled system reboot event, in JSON
format.
[
{
"NotBefore" : "21 Jan 2019 09:00:43 GMT",
"Code" : "system-reboot",
"Description" : "scheduled reboot",
"EventId" : "instance-event-0d59937288b749b32",
"NotAfter" : "21 Jan 2019 09:17:23 GMT",
"State" : "active"
}
]
To view event history about completed or canceled events for your instances using instance
metadata
You can retrieve information about completed or canceled events for your instances from instance
metadata (p. 710) by using Instance Metadata Service Version 2 or Instance Metadata Service
Version 1.
IMDSv2
IMDSv1
The following is example output with information about a system reboot event that was canceled,
and a system reboot event that was completed, in JSON format.
[
{
"NotBefore" : "21 Jan 2019 09:00:43 GMT",
"Code" : "system-reboot",
"Description" : "[Canceled] scheduled reboot",
"EventId" : "instance-event-0d59937288b749b32",
"NotAfter" : "21 Jan 2019 09:17:23 GMT",
"State" : "canceled"
},
{
"NotBefore" : "29 Jan 2019 09:00:43 GMT",
"Code" : "system-reboot",
"Description" : "[Completed] scheduled reboot",
"EventId" : "instance-event-0d59937288b749b32",
"NotAfter" : "29 Jan 2019 09:17:23 GMT",
"State" : "completed"
}
]
AWS Health
You can use the AWS Personal Health Dashboard to learn about events that can affect your instance.
The AWS Personal Health Dashboard organizes issues in three groups: open issues, scheduled
changes, and other notifications. The scheduled changes group contains items that are ongoing or
upcoming.
For more information, see Getting started with the AWS Personal Health Dashboard in the AWS
Health User Guide.
When you customize event notifications to include tags, you can choose to include:
• All of the tags that are associated with the affected resource
• Only specific tags that are associated with the affected resource
For example, suppose that you assign application, costcenter, project, and owner tags to all
of your instances. You can choose to include all of the tags in event notifications. Alternatively, if you'd
like to see only the owner and project tags in event notifications, then you can choose to include only
those tags.
After you select the tags to include, the event notifications will include the resource ID (instance ID or
Dedicated Host ID) and the tag key and value pairs that are associated with the affected resource.
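The tag-selection behavior described above can be sketched as a simple filter; the tag values are invented:

```python
def notification_tags(resource_tags, include_keys=None):
    """Select the tags to show in an event notification: either all tags on
    the affected resource, or only those with the chosen keys."""
    if include_keys is None:  # include all resource tags
        return dict(resource_tags)
    return {k: v for k, v in resource_tags.items() if k in include_keys}

tags = {"application": "web", "costcenter": "123", "project": "x", "owner": "jdoe"}
print(notification_tags(tags, include_keys={"owner", "project"}))
# prints {'project': 'x', 'owner': 'jdoe'}
```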
Topics
• Include tags in event notifications (p. 940)
• Remove tags from event notifications (p. 940)
You can include tags in event notifications by using one of the following methods.
New console
• To include all of the tags associated with the affected instance or Dedicated Host, select
Include all resource tags.
• To manually select the tags to include, select Choose the tags to include, and then for
Choose the tags to include, enter the tag key and press Enter.
6. Choose Save.
AWS CLI
Use the register-instance-event-notification-attributes AWS CLI command and specify the tags to
include by using the InstanceTagKeys parameter.
New console
• To remove all tags from event notifications, clear Include resource tags in event
notifications.
• To remove specific tags from event notifications, choose Remove (X) for the tags listed below
the Choose the tags to include field.
5. Choose Save.
AWS CLI
Use the deregister-instance-event-notification-attributes AWS CLI command and specify the tags to
remove by using the InstanceTagKeys parameter.
New console
AWS CLI
If the root device is an Amazon EBS volume, the instance is scheduled to stop. If the root device is an
instance store volume, the instance is scheduled to terminate. For more information, see Instance
retirement (p. 643).
Important
Any data stored on instance store volumes is lost when an instance is stopped, hibernated, or
terminated. This includes instance store volumes that are attached to an instance that has an
EBS volume as the root device. Be sure to save data from your instance store volumes that you
might need later before the instance is stopped, hibernated, or terminated.
You can wait for the instance to stop as scheduled. Alternatively, you can stop and start the instance
yourself, which migrates it to a new host. For more information about stopping your instance, in addition
to information about the changes to your instance configuration when it's stopped, see Stop and start
your instance (p. 622).
You can automate an immediate stop and start in response to a scheduled instance stop event. For more
information, see Automating Actions for EC2 Instances in the AWS Health User Guide.
If your instance is part of an Auto Scaling group with health checks enabled, then the instance is replaced
when a scheduled event is created for that instance. The Auto Scaling group does not wait for the
scheduled event to complete, but replaces the instance immediately upon receiving the notification.
We recommend that you launch a replacement instance from your most recent AMI and migrate all
necessary data to the replacement instance before the instance is scheduled to terminate. Then, you can
terminate the original instance, or wait for it to terminate as scheduled.
If you stop your linked EC2-Classic instance (p. 1193), it is automatically unlinked from the VPC and the
VPC security groups are no longer associated with the instance. You can link your instance to the VPC
again after you've restarted it.
New console
Old console
AWS CLI
To view the type of scheduled reboot event using the AWS CLI
For scheduled reboot events, the value for Code is either system-reboot or instance-reboot.
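The event details can be retrieved with the describe-instance-status command; the instance ID below is a placeholder:

```shell
aws ec2 describe-instance-status \
    --instance-ids i-1234567890abcdef0
```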
The following example output shows a system-reboot event.
{
    "Events": [
        {
            "InstanceEventId": "instance-event-0d59937288b749b32",
            "Code": "system-reboot",
            "Description": "The instance is scheduled for a reboot",
            "NotAfter": "2019-03-14T22:00:00.000Z",
            "NotBefore": "2019-03-14T20:00:00.000Z",
            "NotBeforeDeadline": "2019-04-05T11:00:00.000Z"
        }
    ]
}
You can wait for the instance reboot to occur within its scheduled maintenance window,
reschedule (p. 944) the instance reboot to a date and time that suits you, or reboot (p. 642) the
instance yourself at a time that is convenient for you.
After your instance is rebooted, the scheduled event is cleared and the event's description is updated.
The pending maintenance to the underlying host is completed, and you can begin using your instance
again after it has fully booted.
It is not possible for you to reboot the system yourself. You can wait for the system reboot to occur
during its scheduled maintenance window, or you can reschedule (p. 944) the system reboot to a date
and time that suits you. A system reboot typically completes in a matter of minutes. After the system
reboot has occurred, the instance retains its IP address and DNS name, and any data on local instance
store volumes is preserved. After the system reboot is complete, the scheduled event for the instance is
cleared, and you can verify that the software on your instance is operating as expected.
Alternatively, if it is necessary to maintain the instance at a different time and you can't reschedule the
system reboot, then you can stop and start an Amazon EBS-backed instance, which migrates it to a new
host. However, the data on the local instance store volumes is not preserved. You can also automate
an immediate instance stop and start in response to a scheduled system reboot event. For more
information, see Automating Actions for EC2 Instances in the AWS Health User Guide. For an instance
store-backed instance, if you can't reschedule the system reboot, then you can launch a replacement
instance from your most recent AMI, migrate all necessary data to the replacement instance before the
scheduled maintenance window, and then terminate the original instance.
During network maintenance, scheduled instances lose network connectivity for a brief period of time.
Normal network connectivity to your instance is restored after maintenance is complete.
During power maintenance, scheduled instances are taken offline for a brief period, and then rebooted.
When a reboot is performed, all of your instance's configuration settings are retained.
After your instance has rebooted (this normally takes a few minutes), verify that your application is
working as expected. At this point, your instance should no longer have a scheduled event associated
with it, or if it does, the description of the scheduled event begins with [Completed]. It sometimes takes
up to 1 hour for the instance status description to refresh. Completed maintenance events are displayed
on the Amazon EC2 console dashboard for up to a week.
You can wait for the maintenance to occur as scheduled. Alternatively, you can stop and start the
instance, which migrates it to a new host. For more information about stopping your instance, in addition
to information about the changes to your instance configuration when it's stopped, see Stop and start
your instance (p. 622).
You can automate an immediate stop and start in response to a scheduled maintenance event. For more
information, see Automating Actions for EC2 Instances in the AWS Health User Guide.
You can wait for the maintenance to occur as scheduled. Alternatively, if you want to maintain normal
operation during a scheduled maintenance window, you can launch a replacement instance from
your most recent AMI, migrate all necessary data to the replacement instance before the scheduled
maintenance window, and then terminate the original instance.
New console
Only events that have an event deadline date, indicated by a value for Deadline, can be
rescheduled. If one of the selected events does not have a deadline date, Actions, Schedule
event is disabled.
5. For New start time, enter a new date and time for the event. The new date and time must occur
before the Event deadline.
6. Choose Save.
It might take a minute or two for the updated event start time to be reflected in the console.
Old console
Only events that have an event deadline date, indicated by a value for Event Deadline, can be
rescheduled.
5. For Event start time, enter a new date and time for the event. The new date and time must
occur before the Event Deadline.
6. Choose Schedule Event.
It might take a minute or two for the updated event start time to be reflected in the console.
AWS CLI
1. Only events that have an event deadline date, indicated by a value for NotBeforeDeadline,
can be rescheduled. Use the describe-instance-status command to view the
NotBeforeDeadline parameter value.
The following example output shows a system-reboot event that can be rescheduled because
NotBeforeDeadline contains a value.
{
    "Events": [
        {
            "InstanceEventId": "instance-event-0d59937288b749b32",
            "Code": "system-reboot",
            "Description": "The instance is scheduled for a reboot",
            "NotAfter": "2019-03-14T22:00:00.000Z",
            "NotBefore": "2019-03-14T20:00:00.000Z",
            "NotBeforeDeadline": "2019-04-05T11:00:00.000Z"
        }
    ]
}
2. To reschedule the event, use the modify-instance-event-start-time command. Specify the new
event start time by using the not-before parameter. The new event start time must fall before
the NotBeforeDeadline.
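A sketch of the reschedule call, reusing the event ID from the example output; the new start time is an arbitrary placeholder that falls before the deadline:

```shell
aws ec2 modify-instance-event-start-time \
    --instance-id i-1234567890abcdef0 \
    --instance-event-id instance-event-0d59937288b749b32 \
    --not-before "2019-03-25T10:00:00.000Z"
```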
It might take a minute or two before the describe-instance-status command returns the updated
not-before parameter value.
Limitations
• Only events with an event deadline date can be rescheduled. The event can be rescheduled up to the
event deadline date. The Deadline column in the console and the NotBeforeDeadline field in the
AWS CLI indicate if the event has a deadline date.
• Only events that have not yet started can be rescheduled. The Start time column in the console and
the NotBefore field in the AWS CLI indicate the event start time. Events that are scheduled to start in
the next 5 minutes cannot be rescheduled.
• The new event start time must be at least 60 minutes from the current time.
• If you reschedule multiple events using the console, the event deadline date is determined by the
event with the earliest event deadline date.
You can use event windows to maximize workload availability by specifying event windows that occur
during off-peak periods for your workload. You can also align the event windows with your internal
maintenance schedules.
You define an event window by specifying a set of time ranges. The minimum time range is 2 hours. The
combined time ranges must total at least 4 hours.
You can associate one or more instances with an event window by using either instance IDs or instance
tags. You can also associate Dedicated Hosts with an event window by using the host ID.
Warning
Event windows are applicable only for scheduled events that stop, reboot, or terminate
instances.
Event windows are not applicable for:
• Expedited scheduled events and network maintenance events.
• Unscheduled maintenance, such as AutoRecovery and unplanned reboots.
Considerations
• All event window times are in UTC.
• The minimum weekly event window duration is 4 hours.
• The time ranges within an event window must each be at least 2 hours.
• Only one target type (instance ID, Dedicated Host ID, or instance tag) can be associated with an event
window.
• A target (instance ID, Dedicated Host ID, or instance tag) can only be associated with one event
window.
• A maximum of 100 instance IDs, 50 Dedicated Host IDs, or 50 instance tags can be associated
with an event window. The instance tags can be associated with any number of instances.
• A maximum of 200 event windows can be created per AWS Region.
• Multiple instances that are associated with event windows can potentially have scheduled events occur
at the same time.
• If AWS has already scheduled an event, modifying an event window won't change the time of the
scheduled event. If the event has a deadline date, you can reschedule the event (p. 944).
• You can stop and start an instance prior to the scheduled event, which migrates the instance to a new
host, and the scheduled event will no longer take place.
Console
AWS CLI
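The output below comes from the describe-instance-event-windows command, sketched here without filters:

```shell
aws ec2 describe-instance-event-windows
```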
Expected output
{
"InstanceEventWindows": [
{
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"AssociationTarget": {
"InstanceIds": [
"i-1234567890abcdef0",
"i-0598c7d356eba48d7"
],
"Tags": [],
"DedicatedHostIds": []
},
"State": "active",
"Tags": []
}
...
],
"NextToken": "9d624e0c-388b-4862-a31e-a85c64fc1d4a"
}
To describe event windows that match one or more filters using the AWS CLI
When a filter is used, it performs a direct match. However, the instance-id filter is different. If
there is no direct match to the instance ID, then it falls back to indirect associations with the event
window, such as the instance's tags or Dedicated Host ID (if the instance is on a Dedicated Host).
For the list of supported filters, see describe-instance-event-windows in the AWS CLI Reference.
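For example, a sketch of a filtered call using the instance-id filter described above (instance ID is a placeholder):

```shell
aws ec2 describe-instance-event-windows \
    --filters Name=instance-id,Values=i-1234567890abcdef0
```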
Expected output
In the following example, the instance is on a Dedicated Host, which is associated with the event
window.
{
"InstanceEventWindows": [
{
"InstanceEventWindowId": "iew-0dbc0adb66f235982",
"TimeRanges": [
{
"StartWeekDay": "sunday",
"StartHour": 2,
"EndWeekDay": "sunday",
"EndHour": 8
}
],
"Name": "myEventWindowName",
"AssociationTarget": {
"InstanceIds": [],
"Tags": [],
"DedicatedHostIds": [
"h-0140d9a7ecbd102dd"
]
},
"State": "active",
"Tags": []
}
]
}
For the event window constraints, see Considerations (p. 947) earlier in this topic.
Console
Note that you can create the event window without associating a target with the window. Later,
you can modify the window to associate one or more targets.
7. (Optional) For Event window tags, choose Add tag, and enter the key and value for the tag.
Repeat for each tag.
8. Choose Create event window.
AWS CLI
To create an event window using the AWS CLI, you first create the event window, and then you
associate one or more targets with the event window.
You can define either a set of time ranges or a cron expression when creating the event window, but
not both.
To create an event window with a time range using the AWS CLI
Use the create-instance-event-window command and specify the --time-range parameter. You
can't also specify the --cron-expression parameter.
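A sketch of such a call; the name, time range, and tag values match the expected output that follows:

```shell
aws ec2 create-instance-event-window \
    --time-range StartWeekDay=monday,StartHour=2,EndWeekDay=wednesday,EndHour=8 \
    --tag-specifications "ResourceType=instance-event-window,Tags=[{Key=K1,Value=V1}]" \
    --name myEventWindowName
```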
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"TimeRanges": [
{
"StartWeekDay": "monday",
"StartHour": 2,
"EndWeekDay": "wednesday",
"EndHour": 8
}
],
"Name": "myEventWindowName",
"State": "creating",
"Tags": [
{
"Key": "K1",
"Value": "V1"
}
]
}
}
To create an event window with a cron expression using the AWS CLI
Use the create-instance-event-window command and specify the --cron-expression parameter. You
can't also specify the --time-range parameter.
aws ec2 create-instance-event-window \
    --cron-expression "* 21-23 * * 2,3" \
    --tag-specifications "ResourceType=instance-event-window,Tags=[{Key=K1,Value=V1}]" \
    --name myEventWindowName
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"State": "creating",
"Tags": [
{
"Key": "K1",
"Value": "V1"
}
]
}
}
You can associate only one type of target (instance IDs, Dedicated Host IDs, or instance tags) with an
event window.
To associate instance tags with an event window using the AWS CLI
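Use the associate-instance-event-window command with the --association-target parameter. A sketch, reusing the window ID and tags from the expected output:

```shell
aws ec2 associate-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --association-target "InstanceTags=[{Key=k1,Value=v1},{Key=k2,Value=v2}]"
```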
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"AssociationTarget": {
"InstanceIds": [],
"Tags": [
{
"Key": "k2",
"Value": "v2"
},
{
"Key": "k1",
"Value": "v1"
}
],
"DedicatedHostIds": []
},
"State": "creating"
}
}
To associate one or more instances with an event window using the AWS CLI
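A sketch of the associate-instance-event-window call with instance IDs, matching the expected output below:

```shell
aws ec2 associate-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --association-target "InstanceIds=i-1234567890abcdef0,i-0598c7d356eba48d7"
```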
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"AssociationTarget": {
"InstanceIds": [
"i-1234567890abcdef0",
"i-0598c7d356eba48d7"
],
"Tags": [],
"DedicatedHostIds": []
},
"State": "creating"
}
}
To associate a Dedicated Host with an event window using the AWS CLI
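A sketch of the same call with a Dedicated Host ID placeholder:

```shell
aws ec2 associate-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --association-target "DedicatedHostIds=h-029fa35a02b99801d"
```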
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"AssociationTarget": {
"InstanceIds": [],
"Tags": [],
"DedicatedHostIds": [
"h-029fa35a02b99801d"
]
},
"State": "creating"
}
}
Console
AWS CLI
To modify an event window using the AWS CLI, you can modify the time range or cron expression,
and associate or disassociate one or more targets with the event window.
You can modify either a time range or a cron expression when modifying the event window, but not
both.
To modify the time range of an event window using the AWS CLI
Use the modify-instance-event-window command and specify the event window to modify. Specify
the --time-range parameter to modify the time range. You can't also specify the
--cron-expression parameter.
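A sketch of the call; the window ID and time range match the expected output below:

```shell
aws ec2 modify-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --time-range "StartWeekDay=monday,StartHour=2,EndWeekDay=wednesday,EndHour=8"
```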
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"TimeRanges": [
{
"StartWeekDay": "monday",
"StartHour": 2,
"EndWeekDay": "wednesday",
"EndHour": 8
}
],
"Name": "myEventWindowName",
"AssociationTarget": {
"InstanceIds": [
"i-0abcdef1234567890",
"i-0be35f9acb8ba01f0"
],
"Tags": [],
"DedicatedHostIds": []
},
"State": "creating",
"Tags": [
{
"Key": "K1",
"Value": "V1"
}
]
}
}
To modify a set of time ranges for an event window using the AWS CLI
Use the modify-instance-event-window command and specify the event window to modify. Specify
the --time-range parameter to modify the time range. You can't also specify the
--cron-expression parameter in the same call.
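A sketch passing two time ranges as a JSON list; the exact quoting and shorthand accepted may vary by shell and CLI version:

```shell
aws ec2 modify-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --time-range '[{"StartWeekDay":"monday","StartHour":2,"EndWeekDay":"wednesday","EndHour":8},
      {"StartWeekDay":"thursday","StartHour":2,"EndWeekDay":"friday","EndHour":8}]'
```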
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"TimeRanges": [
{
"StartWeekDay": "monday",
"StartHour": 2,
"EndWeekDay": "wednesday",
"EndHour": 8
},
{
"StartWeekDay": "thursday",
"StartHour": 2,
"EndWeekDay": "friday",
"EndHour": 8
}
],
"Name": "myEventWindowName",
"AssociationTarget": {
"InstanceIds": [
"i-0abcdef1234567890",
"i-0be35f9acb8ba01f0"
],
"Tags": [],
"DedicatedHostIds": []
},
"State": "creating",
"Tags": [
{
"Key": "K1",
"Value": "V1"
}
]
}
}
To modify the cron expression of an event window using the AWS CLI
Use the modify-instance-event-window command and specify the event window to modify. Specify
the --cron-expression parameter to modify the cron expression. You can't also specify the
--time-range parameter.
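A sketch of the call, using the cron expression from the expected output:

```shell
aws ec2 modify-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --cron-expression "* 21-23 * * 2,3"
```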
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"AssociationTarget": {
"InstanceIds": [
"i-0abcdef1234567890",
"i-0be35f9acb8ba01f0"
],
"Tags": [],
"DedicatedHostIds": []
},
"State": "creating",
"Tags": [
{
"Key": "K1",
"Value": "V1"
}
]
}
}
You can associate additional targets with an event window. You can also disassociate existing targets
from an event window. However, only one type of target (instance IDs, Dedicated Host IDs, or
instance tags) can be associated with an event window.
For the instructions on how to associate targets with an event window, see Associate a target with an
event window.
To disassociate instance tags from an event window using the AWS CLI
aws ec2 disassociate-instance-event-window \
--region us-east-1 \
--instance-event-window-id iew-0abcdef1234567890 \
--association-target "InstanceTags=[{Key=k2,Value=v2},{Key=k1,Value=v1}]"
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"AssociationTarget": {
"InstanceIds": [],
"Tags": [],
"DedicatedHostIds": []
},
"State": "creating"
}
}
To disassociate one or more instances from an event window using the AWS CLI
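A sketch of the disassociate-instance-event-window call with instance IDs (placeholders reused from earlier examples):

```shell
aws ec2 disassociate-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --association-target "InstanceIds=i-1234567890abcdef0,i-0598c7d356eba48d7"
```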
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"AssociationTarget": {
"InstanceIds": [],
"Tags": [],
"DedicatedHostIds": []
},
"State": "creating"
}
}
To disassociate a Dedicated Host from an event window using the AWS CLI
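A sketch of the same call with a Dedicated Host ID placeholder:

```shell
aws ec2 disassociate-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --association-target "DedicatedHostIds=h-029fa35a02b99801d"
```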
Expected output
{
"InstanceEventWindow": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"Name": "myEventWindowName",
"CronExpression": "* 21-23 * * 2,3",
"AssociationTarget": {
"InstanceIds": [],
"Tags": [],
"DedicatedHostIds": []
},
"State": "creating"
}
}
Console
AWS CLI
Use the delete-instance-event-window command and specify the event window to delete.
Use the --force-delete parameter if the event window is currently associated with targets.
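A sketch of the delete call, using the window ID placeholder from earlier examples:

```shell
aws ec2 delete-instance-event-window \
    --instance-event-window-id iew-0abcdef1234567890 \
    --force-delete
```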
Expected output
{
"InstanceEventWindowState": {
"InstanceEventWindowId": "iew-0abcdef1234567890",
"State": "deleting"
}
}
Monitor your instances using CloudWatch
To tag an event window when you create it, see Create event windows (p. 949).
Console
AWS CLI
Use the create-tags command to tag existing resources. In the following example, the existing event
window is tagged with Key=purpose and Value=test.
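A sketch of that call, with a window ID placeholder:

```shell
aws ec2 create-tags \
    --resources iew-0abcdef1234567890 \
    --tags Key=purpose,Value=test
```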
By default, Amazon EC2 sends metric data to CloudWatch in 5-minute periods. To send metric data for
your instance to CloudWatch in 1-minute periods, you can enable detailed monitoring on the instance.
For more information, see Enable or turn off detailed monitoring for your instances (p. 959).
The Amazon EC2 console displays a series of graphs based on the raw data from Amazon CloudWatch.
Depending on your needs, you might prefer to get data for your instances from Amazon CloudWatch
instead of the graphs in the console.
For more information about Amazon CloudWatch, see the Amazon CloudWatch User Guide.
Contents
• Enable or turn off detailed monitoring for your instances (p. 959)
• List the available CloudWatch metrics for your instances (p. 961)
• Get statistics for metrics for your instances (p. 972)
Enable detailed monitoring
The following describes the data interval and charge for basic and detailed monitoring for instances.
Basic monitoring: Data is available automatically in 5-minute periods at no charge.
Detailed monitoring: Data is available in 1-minute periods. To get this level of data, you must
specifically enable it for the instance. For the instances where you've enabled detailed
monitoring, you can also get aggregated data across groups of similar instances. You are
charged per metric that is sent to CloudWatch. You are not charged for data storage. For more
information, see Paid tier and Example 1 - EC2 Detailed Monitoring on the Amazon CloudWatch
pricing page.
Topics
• Required IAM permissions (p. 959)
• Enable detailed monitoring (p. 959)
• Turn off detailed monitoring (p. 960)
New console
When launching an instance using the AWS Management Console, select the Monitoring check box
on the Configure Instance Details page.
Old console
When launching an instance using the AWS Management Console, select the Monitoring check box
on the Configure Instance Details page.
AWS CLI
Use the monitor-instances command to enable detailed monitoring for the specified instances.
Use the run-instances command with the --monitoring flag to enable detailed monitoring.
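Sketches of both calls; the instance ID, AMI ID, and instance type are placeholders:

```shell
# Enable detailed monitoring on a running instance.
aws ec2 monitor-instances --instance-ids i-1234567890abcdef0

# Enable detailed monitoring at launch (other run-instances options omitted).
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --monitoring Enabled=true
```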
New console
List available metrics
5. Choose Save.
Old console
AWS CLI
Use the unmonitor-instances command to turn off detailed monitoring for the specified instances.
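A sketch, with an instance ID placeholder:

```shell
aws ec2 unmonitor-instances --instance-ids i-1234567890abcdef0
```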
For information about getting the statistics for these metrics, see Get statistics for metrics for your
instances (p. 972).
Contents
• Instance metrics (p. 961)
• CPU credit metrics (p. 964)
• Dedicated Host metrics (p. 965)
• Amazon EBS metrics for Nitro-based instances (p. 965)
• Status check metrics (p. 967)
• Traffic mirroring metrics (p. 968)
• Amazon EC2 metric dimensions (p. 968)
• Amazon EC2 usage metrics (p. 968)
• List metrics using the console (p. 969)
• List metrics using the AWS CLI (p. 971)
Instance metrics
The AWS/EC2 namespace includes the following instance metrics.
Metric Description
CPUUtilization The percentage of allocated EC2 compute units that are currently
in use on the instance. This metric identifies the processing power
required to run an application on a selected instance.
Units: Percent
DiskReadOps Completed read operations from all instance store volumes available
to the instance in a specified period of time.
To calculate the average I/O operations per second (IOPS) for the
period, divide the total operations in the period by the number of
seconds in that period.
Units: Count
DiskWriteOps Completed write operations to all instance store volumes available
to the instance in a specified period of time.
To calculate the average I/O operations per second (IOPS) for the
period, divide the total operations in the period by the number of
seconds in that period.
Units: Count
DiskReadBytes Bytes read from all instance store volumes available to the instance.
Units: Bytes
DiskWriteBytes Bytes written to all instance store volumes available to the instance.
The number reported is the number of bytes written during the period. If you are using basic
(5-minute) monitoring, you can divide this number by 300 to find Bytes/second. If you have
detailed (1-minute) monitoring, divide it by 60.
Units: Bytes
MetadataNoToken The number of times the instance metadata service was successfully
accessed using a method that does not use a token.
Units: Count
NetworkIn The number of bytes received by the instance on all network
interfaces. This metric identifies the volume of incoming network
traffic to a single instance.
Units: Bytes
NetworkOut The number of bytes sent out by the instance on all network
interfaces. This metric identifies the volume of outgoing network
traffic from a single instance.
The number reported is the number of bytes sent during the period.
If you are using basic (5-minute) monitoring and the statistic is
Sum, you can divide this number by 300 to find Bytes/second. If
you have detailed (1-minute) monitoring and the statistic is Sum,
divide it by 60.
Units: Bytes
NetworkPacketsIn The number of packets received by the instance on all network
interfaces. This metric identifies the volume of incoming traffic in
terms of the number of packets on a single instance.
Units: Count
NetworkPacketsOut The number of packets sent out by the instance on all network
interfaces. This metric identifies the volume of outgoing traffic in
terms of the number of packets on a single instance.
Units: Count
Metric Description
CPUCreditUsage The number of CPU credits spent by the instance for CPU
utilization. One CPU credit equals one vCPU running at 100%
utilization for one minute or an equivalent combination of vCPUs,
utilization, and time (for example, one vCPU running at 50%
utilization for two minutes or two vCPUs running at 25% utilization
for two minutes).
Credits are accrued in the credit balance after they are earned,
and removed from the credit balance when they are spent. The
credit balance has a maximum limit, determined by the instance
size. After the limit is reached, any new credits that are earned are
discarded. For T2 Standard, launch credits do not count towards the
limit.
Units: Credits (vCPU-minutes)
CPUSurplusCreditsCharged The number of spent surplus credits that are not paid down by
earned CPU credits, and which thus incur an additional charge.
Spent surplus credits are charged when the spent surplus credits exceed the maximum number
of credits that the instance can earn in a 24-hour period, when the instance is stopped or
terminated, or when the instance is switched from the unlimited to the standard credit option.
Metric values for Nitro-based instances will always be integers (whole numbers), whereas values for Xen-
based instances support decimals. Therefore, low instance CPU utilization on Nitro-based instances may
appear to be rounded down to 0.
For information about the metrics provided for your EBS volumes, see Amazon EBS metrics (p. 1596).
For information about the metrics provided for your Spot fleets, see CloudWatch metrics for Spot
Fleet (p. 867).
Metric Description
StatusCheckFailed Reports whether the instance has passed both the instance status
check and the system status check in the last minute.
Units: Count
StatusCheckFailed_Instance Reports whether the instance has passed the instance status check
in the last minute.
Units: Count
StatusCheckFailed_System Reports whether the instance has passed the system status check in
the last minute.
By default, this metric is available at a 1-minute frequency at no
charge.
Units: Count
Dimension Description
AutoScalingGroupName This dimension filters the data you request for all instances in a
specified capacity group. An Auto Scaling group is a collection of
instances you define if you're using Auto Scaling. This dimension is
available only for Amazon EC2 metrics when the instances are in
such an Auto Scaling group. Available for instances with Detailed or
Basic Monitoring enabled.
ImageId This dimension filters the data you request for all instances running
this Amazon EC2 Amazon Machine Image (AMI). Available for
instances with Detailed Monitoring enabled.
InstanceId This dimension filters the data you request for the identified
instance only. This helps you pinpoint an exact instance from which
to monitor data.
InstanceType This dimension filters the data you request for all instances
running with this specified instance type. This helps you categorize
your data by the type of instance running. For example, you
might compare data from an m1.small instance and an m1.large
instance to determine which has the better business value for
your application. Available for instances with Detailed Monitoring
enabled.
Amazon EC2 usage metrics correspond to AWS service quotas. You can configure alarms that alert you
when your usage approaches a service quota. For more information about CloudWatch integration with
service quotas, see Service Quotas Integration and Usage Metrics.
Metric Description
ResourceCount The number of the specified resources running in your account. The
resources are defined by the dimensions associated with the metric.
The following dimensions are used to refine the usage metrics that are published by Amazon EC2.
Dimension Description
Service The name of the AWS service containing the resource. For Amazon
EC2 usage metrics, the value for this dimension is EC2.
Type The type of entity that is being reported. Currently, the only valid
value for Amazon EC2 usage metrics is Resource.
Resource The type of resource that is running. Currently, the only valid value
for Amazon EC2 usage metrics is vCPU, which returns information
on instances that are running.
Class The class of resource being tracked. For Amazon EC2 usage metrics
with vCPU as the value of the Resource dimension, the valid
values are Standard/OnDemand, F/OnDemand, G/OnDemand,
Inf/OnDemand, P/OnDemand, and X/OnDemand.
The values for this dimension define the first letter of the instance
types that are reported by the metric. For example, Standard/
OnDemand returns information about all running instances with
types that start with A, C, D, H, I, M, R, T, and Z, and G/OnDemand
returns information about all running instances with types that
start with G.
5. To sort the metrics, use the column heading. To graph a metric, select the check box next to the
metric. To filter by resource, choose the resource ID and then choose Add to search. To filter by
metric, choose the metric name and then choose Add to search.
To list all the available metrics for Amazon EC2 (AWS CLI)
The following example specifies the AWS/EC2 namespace to view all the metrics for Amazon EC2.
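The command that produces output like the following:

```shell
aws cloudwatch list-metrics --namespace AWS/EC2
```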
{
"Metrics": [
{
"Namespace": "AWS/EC2",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-1234567890abcdef0"
}
],
"MetricName": "NetworkOut"
},
{
"Namespace": "AWS/EC2",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-1234567890abcdef0"
}
],
"MetricName": "CPUUtilization"
},
{
"Namespace": "AWS/EC2",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-1234567890abcdef0"
}
],
"MetricName": "NetworkIn"
},
...
]
}
The following example specifies the AWS/EC2 namespace and the InstanceId dimension to view the
results for the specified instance only.
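A sketch of that call with the dimension filter (instance ID is a placeholder):

```shell
aws cloudwatch list-metrics --namespace AWS/EC2 \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0
```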
The following example specifies the AWS/EC2 namespace and a metric name to view the results for the
specified metric only.
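A sketch of that call with a metric name:

```shell
aws cloudwatch list-metrics --namespace AWS/EC2 --metric-name CPUUtilization
```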
Contents
• Statistics overview (p. 972)
• Get statistics for a specific instance (p. 973)
• Aggregate statistics across instances (p. 976)
• Aggregate statistics by Auto Scaling group (p. 978)
• Aggregate statistics by AMI (p. 979)
Statistics overview
Statistics are metric data aggregations over specified periods of time. CloudWatch provides statistics based on the metric data points that your custom data or other AWS services provide to CloudWatch. Aggregations are made using the namespace, metric name, dimensions, and the data point unit of measure, within the time period that you specify. The following table describes the available statistics.
Statistic Description
Minimum The lowest value observed during the specified period. You can use this value to
determine low volumes of activity for your application.
Maximum The highest value observed during the specified period. You can use this value to
determine high volumes of activity for your application.
Sum All values submitted for the matching metric added together. This statistic can be
useful for determining the total volume of a metric.
Average The value of Sum / SampleCount during the specified period. By comparing this
statistic with the Minimum and Maximum, you can determine the full scope of a metric
and how close the average use is to the Minimum and Maximum. This comparison
helps you to know when to increase or decrease your resources as needed.
SampleCount The count (number) of data points used for the statistical calculation.
pNN.NN The value of the specified percentile. You can specify any percentile, using up to two
decimal places (for example, p95.45).
Requirements
• You must have the ID of the instance. You can get the instance ID using the AWS Management Console
or the describe-instances command.
• By default, basic monitoring is enabled, but you can enable detailed monitoring. For more information,
see Enable or turn off detailed monitoring for your instances (p. 959).
5. In the search field, enter CPUUtilization and press Enter. Choose the row for the specific
instance, which displays a graph for the CPUUtilization metric for the instance. To name the graph,
choose the pencil icon. To change the time range, select one of the predefined values or choose
custom.
6. To change the statistic or the period for the metric, choose the Graphed metrics tab. Choose the
column heading or an individual value, and then choose a different value.
Use the following get-metric-statistics command to get the CPUUtilization metric for the specified
instance, using the specified period and time interval:
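The command body was lost in extraction. A likely form is shown below; the instance ID matches the earlier examples, and the start time, end time, and period are illustrative values chosen to be consistent with the timestamps in the example output that follows:

```shell
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
    --statistics Maximum \
    --start-time 2016-10-18T23:18:00Z \
    --end-time 2016-10-19T23:18:00Z \
    --period 360
```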
The following is example output. Each value represents the maximum CPU utilization percentage for a
single EC2 instance.
{
"Datapoints": [
{
"Timestamp": "2016-10-19T00:18:00Z",
"Maximum": 0.33000000000000002,
"Unit": "Percent"
},
{
"Timestamp": "2016-10-19T03:18:00Z",
"Maximum": 99.670000000000002,
"Unit": "Percent"
},
{
"Timestamp": "2016-10-19T07:18:00Z",
"Maximum": 0.34000000000000002,
"Unit": "Percent"
},
{
"Timestamp": "2016-10-19T12:18:00Z",
"Maximum": 0.34000000000000002,
"Unit": "Percent"
},
...
],
"Label": "CPUUtilization"
}
Note that Amazon CloudWatch cannot aggregate data across AWS Regions. Metrics are completely
separate between Regions.
This example shows you how to use detailed monitoring to get the average CPU usage for your EC2
instances. Because no dimension is specified, CloudWatch returns statistics for all dimensions in the AWS/
EC2 namespace.
Important
This technique for retrieving all dimensions across an AWS namespace does not work for custom
namespaces that you publish to Amazon CloudWatch. With custom namespaces, you must
specify the complete set of dimensions that are associated with any given data point to retrieve
statistics that include the data point.
5. To change the statistic or the period for the metric, choose the Graphed metrics tab. Choose the
column heading or an individual value, and then choose a different value.
Use the get-metric-statistics command as follows to get the average of the CPUUtilization metric across
your instances.
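The command itself is missing from the extracted text. A likely form is the following; because no --dimensions option is given, CloudWatch aggregates across all instances, and the time range and period are illustrative values matching the timestamps in the example output:

```shell
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --statistics "Average" "SampleCount" \
    --start-time 2016-10-11T23:18:00Z \
    --end-time 2016-10-12T23:18:00Z \
    --period 3600
```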
{
"Datapoints": [
{
"SampleCount": 238.0,
"Timestamp": "2016-10-12T07:18:00Z",
"Average": 0.038235294117647062,
"Unit": "Percent"
},
{
"SampleCount": 240.0,
"Timestamp": "2016-10-12T09:18:00Z",
"Average": 0.16670833333333332,
"Unit": "Percent"
},
{
"SampleCount": 238.0,
"Timestamp": "2016-10-11T23:18:00Z",
"Average": 0.041596638655462197,
"Unit": "Percent"
},
...
],
"Label": "CPUUtilization"
}
This example shows you how to retrieve the total bytes written to disk for one Auto Scaling group. The
total is computed for 1-minute periods for a 24-hour interval across all EC2 instances in the specified
Auto Scaling group.
To display DiskWriteBytes for the instances in an Auto Scaling group (AWS CLI)
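The command was lost in extraction. It likely took the following form; the Auto Scaling group name (my-asg) is a hypothetical placeholder, and the time range is illustrative:

```shell
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name DiskWriteBytes \
    --dimensions Name=AutoScalingGroupName,Value=my-asg \
    --statistics "Sum" "SampleCount" \
    --start-time 2016-10-19T00:00:00Z \
    --end-time 2016-10-20T00:00:00Z \
    --period 60
```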
{
"Datapoints": [
{
"SampleCount": 18.0,
"Timestamp": "2016-10-19T21:36:00Z",
"Sum": 0.0,
"Unit": "Bytes"
},
{
"SampleCount": 5.0,
"Timestamp": "2016-10-19T21:42:00Z",
"Sum": 0.0,
"Unit": "Bytes"
}
],
"Label": "DiskWriteBytes"
Note that Amazon CloudWatch cannot aggregate data across AWS Regions. Metrics are completely
separate between Regions.
This example shows you how to determine average CPU utilization for all instances that use a specific
Amazon Machine Image (AMI). The average is over 60-second time intervals for a one-day period.
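The command did not survive extraction. A likely form is shown below; the AMI ID is a hypothetical placeholder, and the time range and period are illustrative values consistent with the hourly timestamps in the example output:

```shell
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=ImageId,Value=ami-0abcdef1234567890 \
    --statistics Average \
    --start-time 2016-10-10T00:00:00Z \
    --end-time 2016-10-11T00:00:00Z \
    --period 3600
```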
The following is example output. Each value represents an average CPU utilization percentage for the
EC2 instances running the specified AMI.
{
"Datapoints": [
{
"Timestamp": "2016-10-10T07:00:00Z",
"Average": 0.041000000000000009,
"Unit": "Percent"
},
{
"Timestamp": "2016-10-10T14:00:00Z",
"Average": 0.079579831932773085,
"Unit": "Percent"
},
{
"Timestamp": "2016-10-10T06:00:00Z",
"Average": 0.036000000000000011,
"Unit": "Percent"
},
...
],
"Label": "CPUUtilization"
Graph metrics
For more information about the metrics and the data they provide to the graphs, see List the available
CloudWatch metrics for your instances (p. 961).
You can also use the CloudWatch console to graph metric data generated by Amazon EC2 and other AWS
services. For more information, see Graph Metrics in the Amazon CloudWatch User Guide.
For examples, see Creating Amazon CloudWatch Alarms in the Amazon CloudWatch User Guide.
New console
Create an alarm
7. For Alarm thresholds, select the metric and criteria for the alarm. For example, you can
leave the default settings for Group samples by (Average) and Type of data to sample (CPU
utilization). For Alarm when, choose >= and enter 0.80. For Consecutive period, enter 1. For
Period, select 5 minutes.
8. (Optional) For Sample metric data, choose Add to dashboard.
9. Choose Create.
Old console
a. Choose create topic. For Send a notification to, enter a name for the SNS topic. For With
these recipients, enter one or more email addresses to receive notification.
b. Specify the metric and the criteria for the policy. For example, you can leave the default
settings for Whenever (Average of CPU Utilization). For Is, choose >= and enter 80 percent.
For For at least, enter 1 consecutive period of 5 Minutes.
c. Choose Create Alarm.
You can edit your CloudWatch alarm settings from the Amazon EC2 console or the CloudWatch console.
If you want to delete your alarm, you can do so from the CloudWatch console. For more information, see
Editing or Deleting a CloudWatch Alarm in the Amazon CloudWatch User Guide.
Create alarms that stop, terminate,
reboot, or recover an instance
There are a number of scenarios in which you might want to automatically stop or terminate your
instance. For example, you might have instances dedicated to batch payroll processing jobs or scientific
computing tasks that run for a period of time and then complete their work. Rather than letting those
instances sit idle (and accrue charges), you can stop or terminate them, which can help you to save
money. The main difference between using the stop and the terminate alarm actions is that you can
easily start a stopped instance if you need to run it again later, and you can keep the same instance
ID and root volume. However, you cannot start a terminated instance. Instead, you must launch a new
instance.
You can add the stop, terminate, reboot, or recover actions to any alarm that is set on an Amazon EC2
per-instance metric, including basic and detailed monitoring metrics provided by Amazon CloudWatch
(in the AWS/EC2 namespace), as well as any custom metrics that include the InstanceId dimension, as
long as its value refers to a valid running Amazon EC2 instance.
Console support
You can create alarms using the Amazon EC2 console or the CloudWatch console. The procedures in
this documentation use the Amazon EC2 console. For procedures that use the CloudWatch console, see
Create Alarms That Stop, Terminate, Reboot, or Recover an Instance in the Amazon CloudWatch User
Guide.
Permissions
If you are an AWS Identity and Access Management (IAM) user, you must have the
iam:CreateServiceLinkedRole permission to create or modify an alarm that performs EC2 alarm actions.
Contents
• Add stop actions to Amazon CloudWatch alarms (p. 982)
• Add terminate actions to Amazon CloudWatch alarms (p. 984)
• Add reboot actions to Amazon CloudWatch alarms (p. 985)
• Add recover actions to Amazon CloudWatch alarms (p. 987)
• Use the Amazon CloudWatch console to view alarm and action history (p. 989)
• Amazon CloudWatch alarm action scenarios (p. 989)
Instances that use an Amazon EBS volume as the root device can be stopped or terminated, whereas
instances that use the instance store as the root device can only be terminated.
New console
Alternatively, you can choose the plus sign (+) in the Alarm status column.
4. On the Manage CloudWatch alarms page, do the following:
Old console
a. To receive an email when the alarm is triggered, for Send a notification to, choose an
existing Amazon SNS topic, or choose create topic to create a new one.
To create a new topic, for Send a notification to, enter a name for the topic, and then for
With these recipients, enter the email addresses of the recipients (separated by commas).
After you create the alarm, you will receive a subscription confirmation email that you must
accept before you can get notifications for this topic.
b. Choose Take the action, Stop this instance.
c. For Whenever, choose the statistic you want to use and then choose the metric. In this
example, choose Average and CPU Utilization.
d. For Is, specify the metric threshold. In this example, enter 10 percent.
e. For For at least, specify the evaluation period for the alarm. In this example, enter 24
consecutive period(s) of 1 Hour.
f. To change the name of the alarm, for Name of alarm, enter a new name. Alarm names
must contain only ASCII characters.
If you don't enter a name for the alarm, Amazon CloudWatch automatically creates one for
you.
Note
You can adjust the alarm configuration based on your own requirements before
creating the alarm, or you can edit them later. This includes the metric, threshold,
duration, action, and notification settings. However, after you create an alarm, you
cannot edit its name later.
g. Choose Create Alarm.
New console
Alternatively, you can choose the plus sign (+) in the Alarm status column.
4. On the Manage CloudWatch alarms page, do the following:
Note
You can adjust the alarm configuration based on your own requirements before
creating the alarm, or you can edit them later. This includes the metric, threshold,
duration, action, and notification settings. However, after you create an alarm, you
cannot edit its name later.
h. Choose Create.
Old console
a. To receive an email when the alarm is triggered, for Send a notification to, choose an
existing Amazon SNS topic, or choose create topic to create a new one.
To create a new topic, for Send a notification to, enter a name for the topic, and then for
With these recipients, enter the email addresses of the recipients (separated by commas).
After you create the alarm, you will receive a subscription confirmation email that you must
accept before you can get notifications for this topic.
b. Choose Take the action, Terminate this instance.
c. For Whenever, choose a statistic and then choose the metric. In this example, choose
Average and CPU Utilization.
d. For Is, specify the metric threshold. In this example, enter 10 percent.
e. For For at least, specify the evaluation period for the alarm. In this example, enter 24
consecutive period(s) of 1 Hour.
f. To change the name of the alarm, for Name of alarm, enter a new name. Alarm names
must contain only ASCII characters.
If you don't enter a name for the alarm, Amazon CloudWatch automatically creates one for
you.
Note
You can adjust the alarm configuration based on your own requirements before
creating the alarm, or you can edit them later. This includes the metric, threshold,
duration, action, and notification settings. However, after you create an alarm, you
cannot edit its name later.
g. Choose Create Alarm.
Rebooting an instance doesn't start a new instance billing period (with a minimum one-minute charge),
unlike stopping and restarting your instance. For more information, see Reboot your instance (p. 642).
Important
To avoid a race condition between the reboot and recover actions, avoid setting the same
number of evaluation periods for a reboot alarm and a recover alarm. We recommend that you
set reboot alarms to three evaluation periods of one minute each. For more information, see
Evaluating an alarm in the Amazon CloudWatch User Guide.
New console
Alternatively, you can choose the plus sign (+) in the Alarm status column.
4. On the Manage CloudWatch alarms page, do the following:
Old console
a. To receive an email when the alarm is triggered, for Send a notification to, choose an
existing Amazon SNS topic, or choose create topic to create a new one.
To create a new topic, for Send a notification to, enter a name for the topic, and for With
these recipients, enter the email addresses of the recipients (separated by commas). After
you create the alarm, you will receive a subscription confirmation email that you must
accept before you can get notifications for this topic.
b. Select Take the action, Reboot this instance.
c. For Whenever, choose Status Check Failed (Instance).
d. For For at least, specify the evaluation period for the alarm. In this example, enter 3
consecutive period(s) of 5 Minutes.
e. To change the name of the alarm, for Name of alarm, enter a new name. Alarm names
must contain only ASCII characters.
If you don't enter a name for the alarm, Amazon CloudWatch automatically creates one for
you.
f. Choose Create Alarm.
CloudWatch prevents you from adding a recovery action to an alarm that is on an instance which does
not support recovery actions.
When the StatusCheckFailed_System alarm is triggered, and the recover action is initiated, you are
notified by the Amazon SNS topic that you chose when you created the alarm and associated the recover
action. During instance recovery, the instance is migrated during an instance reboot, and any data that
is in-memory is lost. When the process is complete, information is published to the SNS topic you've
configured for the alarm. Anyone who is subscribed to this SNS topic receives an email notification that
includes the status of the recovery attempt and any further instructions. You will notice an instance reboot
on the recovered instance.
Note
The recover action can be used only with StatusCheckFailed_System, not with
StatusCheckFailed_Instance.
The recover action is supported only on instances with the following characteristics:
• Use one of the following instance types: A1, C3, C4, C5, C5a, C5n, C6g, C6gn, Inf1, C6i, M3, M4, M5,
M5a, M5n, M5zn, M6g, M6i, P3, R3, R4, R5, R5a, R5b, R5n, R6g, R6i, T2, T3, T3a, T4g, high memory
(virtualized only), X1, X1e
• Use default or dedicated instance tenancy
• Use EBS volumes only (do not configure instance store volumes). For more information, see 'Recover
this instance' is disabled.
If your instance has a public IP address, it retains the public IP address after recovery.
Important
To avoid a race condition between the reboot and recover actions, avoid setting the same
number of evaluation periods for a reboot alarm and a recover alarm. We recommend that you
set recover alarms to two evaluation periods of one minute each. For more information, see
Evaluating an Alarm in the Amazon CloudWatch User Guide.
New console
Alternatively, you can choose the plus sign (+) in the Alarm status column.
4. On the Manage CloudWatch alarms page, do the following:
Old console
a. To receive an email when the alarm is triggered, for Send a notification to, choose an
existing Amazon SNS topic, or choose create topic to create a new one.
To create a new topic, for Send a notification to, enter a name for the topic, and for With
these recipients, enter the email addresses of the recipients (separated by commas). After
you create the alarm, you will receive a subscription confirmation email that you must
accept before you can get email notifications for this topic.
Note
• Users must subscribe to the specified SNS topic to receive email notifications
when the alarm is triggered.
• The AWS account root user always receives email notifications when automatic
instance recovery actions occur, even if an SNS topic is not specified.
• The AWS account root user always receives email notifications when automatic
instance recovery actions occur, even if it is not subscribed to the specified SNS
topic.
b. Select Take the action, Recover this instance.
c. For Whenever, choose Status Check Failed (System).
d. For For at least, specify the evaluation period for the alarm. In this example, enter 2
consecutive period(s) of 5 Minutes.
e. To change the name of the alarm, for Name of alarm, enter a new name. Alarm names
must contain only ASCII characters.
If you don't enter a name for the alarm, Amazon CloudWatch automatically creates one for
you.
f. Choose Create Alarm.
New console
Old console
Scenario 1: Stop idle development and test instances
Create an alarm that stops an instance used for software development or testing when it has been idle
for at least an hour.
Setting Value
1 Stop
2 Maximum
3 CPU Utilization
4 <=
5 10%
6 1
7 1 Hour
Setting Value
2 Average
3 CPU Utilization
4 <=
5 5%
6 24
7 1 Hour
Scenario 3: Send email about web servers with unusually high traffic
Create an alarm that sends email when an instance exceeds 10 GB of outbound network traffic per day.
Setting Value
1 Email
2 Sum
3 Network Out
4 >
5 10 GB
6 24
Setting Value
7 1 Hour
Setting Value
2 Sum
3 Network Out
4 >
5 1 GB
6 1
7 1 Hour
Setting Value
1 Stop
2 Average
4 -
5 -
6 1
7 15 Minutes
Setting Value
1 Terminate
Setting Value
2 Maximum
3 Network Out
4 <=
5 100,000 bytes
6 1
7 5 Minutes
Collect metrics using the CloudWatch agent
Deprecated: Collect metrics using the CloudWatch monitoring scripts
The monitoring scripts demonstrate how to produce and consume custom metrics for Amazon
CloudWatch. These sample Perl scripts comprise a fully functional example that reports memory, swap,
and disk space utilization metrics for a Linux instance.
Standard Amazon CloudWatch usage charges for custom metrics apply to your use of these scripts. For
more information, see the Amazon CloudWatch pricing page.
Contents
• Supported systems (p. 994)
• Required permissions (p. 995)
• Install required packages (p. 995)
• Install monitoring scripts (p. 996)
• mon-put-instance-data.pl (p. 997)
• mon-get-instance-stats.pl (p. 1000)
• View your custom metrics in the console (p. 1001)
• Troubleshoot (p. 1001)
Supported systems
The monitoring scripts were tested on instances using the following systems. Using the monitoring
scripts on any other operating system is unsupported.
• Amazon Linux 2
• Amazon Linux AMI 2014.09.2 and later
• Red Hat Enterprise Linux 6.9 and 7.4
• SUSE Linux Enterprise Server 12
• Ubuntu Server 14.04 and 16.04
Required permissions
Ensure that the scripts have permission to call the following actions by associating an IAM role with your
instance:
• cloudwatch:PutMetricData
• cloudwatch:GetMetricStatistics
• cloudwatch:ListMetrics
• ec2:DescribeTags
For more information, see Work with IAM roles (p. 1278).
To install the required packages on Amazon Linux 2 and Amazon Linux AMI
1. Log on to your instance. For more information, see Connect to your Linux instance (p. 596).
2. At a command prompt, install packages as follows:
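The install command was lost in extraction. On Amazon Linux 2 and the Amazon Linux AMI, the required Perl packages are typically installed with yum; the exact package list below is an assumption based on the Perl modules the scripts depend on:

```shell
# Assumed package list for the monitoring scripts' Perl dependencies
sudo yum install -y perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https perl-Digest-SHA
```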
1. Log on to your instance. For more information, see Connect to your Linux instance (p. 596).
2. At a command prompt, install packages as follows:
1. Log on to your instance. For more information, see Connect to your Linux instance (p. 596).
2. At a command prompt, install packages as follows:
1. Log on to your instance. For more information, see Connect to your Linux instance (p. 596).
2. At a command prompt, install packages as follows:
995
Amazon Elastic Compute Cloud
User Guide for Linux Instances
Deprecated: Collect metrics using
the CloudWatch monitoring scripts
3. Run CPAN as an elevated user:
sudo cpan
Press ENTER through the prompts until you see the following prompt:
cpan[1]>
4. At the CPAN prompt, run each of the following commands one at a time. After a module finishes
installing and you return to the CPAN prompt, run the next command. As before, press ENTER when
prompted to continue through the process:
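The module names themselves were lost in extraction. The commands at the cpan prompt were likely the following; treat this module list as an assumption based on the scripts' documented dependencies:

```shell
# Run one at a time at the cpan[N]> prompt
install YAML
install LWP::Protocol::https
install Sys::Syslog
install Switch
```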
1. Log on to your instance. For more information, see Connect to your Linux instance (p. 596).
2. On servers running SUSE Linux Enterprise Server 12, you might need to download the perl-
Switch package. You can download and install this package using the following commands:
wget http://download.opensuse.org/repositories/devel:/languages:/perl/SLE_12_SP3/noarch/perl-Switch-2.17-32.1.noarch.rpm
sudo rpm -i perl-Switch-2.17-32.1.noarch.rpm
1. At a command prompt, move to a folder where you want to store the monitoring scripts and run the
following command to download them:
curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O
2. Run the following commands to install the monitoring scripts you downloaded:
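The install commands are missing from the extracted text. They likely unpacked the zip file and removed it, roughly as follows; the aws-scripts-mon directory name is an assumption based on the package contents described below:

```shell
unzip CloudWatchMonitoringScripts-1.2.2.zip && \
rm CloudWatchMonitoringScripts-1.2.2.zip && \
cd aws-scripts-mon
```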
The package for the monitoring scripts contains the following files:
• CloudWatchClient.pm – Shared Perl module that simplifies calling Amazon CloudWatch from other
scripts.
• mon-put-instance-data.pl – Collects system metrics on an Amazon EC2 instance (memory, swap, disk
space utilization) and sends them to Amazon CloudWatch.
• mon-get-instance-stats.pl – Queries Amazon CloudWatch and displays the most recent utilization
statistics for the EC2 instance on which this script is run.
• awscreds.template – File template for AWS credentials that stores your access key ID and secret access
key.
• LICENSE.txt – Text file containing the Apache 2.0 license.
• NOTICE.txt – Copyright notice.
mon-put-instance-data.pl
This script collects memory, swap, and disk space utilization data on the current system. It then makes a
remote call to Amazon CloudWatch to report the collected data as custom metrics.
Options
Name Description
--mem-used-incl-cache-buff If you include this option, memory currently used for cache and buffers is counted as "used" when the metrics are reported for --mem-util, --mem-used, and --mem-avail.
--disk-path=/ --disk-path=/home
--disk-space-util Collects and sends the DiskSpaceUtilization metric for the selected disks. The metric is reported in percentages. Note that the disk utilization metrics calculated by this script differ from the values calculated by the df -k -l command. If you find the values from df -k -l more useful, you can change the calculations in the script.
--disk-space-used Collects and sends the DiskSpaceUsed metric for the selected disks. The metric is reported by default in gigabytes.
--disk-space-avail Collects and sends the DiskSpaceAvailable metric for the selected disks. The metric is reported in gigabytes.
--disk-space-units=UNITS Specifies units in which to report disk space usage. If not specified, disk space is reported in gigabytes. UNITS may be one of the following: bytes, kilobytes, megabytes, gigabytes.
--aws-access-key-id=VALUE Specifies the AWS access key ID to use to identify the caller. Must be used together with the --aws-secret-key option. Do not use this option with the --aws-credential-file parameter.
--aws-secret-key=VALUE Specifies the AWS secret access key to use to sign the request to CloudWatch. Must be used together with the --aws-access-key-id option. Do not use this option with the --aws-credential-file parameter.
--aws-iam-role=VALUE Specifies the IAM role used to provide AWS credentials. The value =VALUE is required. If no credentials are specified, the default IAM role associated with the EC2 instance is applied. Only one IAM role can be used. If no IAM roles are found, or if more than one IAM role is found, the script will return an error.
--aggregated[=only] Adds aggregated metrics for instance type, AMI ID, and overall for the Region. The value =only is optional; if specified, the script reports only aggregated metrics.
--auto-scaling[=only] Adds aggregated metrics for the Auto Scaling group. The value =only is optional; if specified, the script reports only Auto Scaling metrics. The IAM policy associated with the IAM account or role using the scripts needs to have permissions to call the EC2 action DescribeTags.
--verify Performs a test run of the script that collects the metrics and prepares a complete HTTP request, but does not actually call CloudWatch to report the data. This option also checks that credentials are provided. When run in verbose mode, this option outputs the metrics that will be sent to CloudWatch.
--from-cron Use this option when calling the script from cron. When this option is used, all diagnostic output is suppressed, but error messages are sent to the local system log of the user account.
Examples
The following examples assume that you provided an IAM role or awscreds.conf file. Otherwise, you
must provide credentials using the --aws-access-key-id and --aws-secret-key parameters for
these commands.
The following example performs a simple test run without posting data to CloudWatch.
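The command is missing from the extracted text; given the --verify option documented above, it was likely the following (the choice of --mem-util as the metric to verify is an assumption):

```shell
./mon-put-instance-data.pl --mem-util --verify --verbose
```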
The following example collects all available memory metrics and sends them to CloudWatch, counting
cache and buffer memory as used.
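The command itself was lost in extraction; based on the options documented above, it likely combined the three memory metrics with the cache/buffer flag:

```shell
./mon-put-instance-data.pl --mem-used-incl-cache-buff --mem-util --mem-used --mem-avail
```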
The following example collects aggregated metrics for an Auto Scaling group and sends them to Amazon
CloudWatch without reporting individual instance metrics.
The following example collects aggregated metrics for instance type, AMI ID, and Region, and sends them
to Amazon CloudWatch without reporting individual instance metrics.
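The command is missing; given the --aggregated=only behavior documented above, it likely took this form (the choice of memory metrics is an assumption):

```shell
./mon-put-instance-data.pl --mem-util --mem-used --mem-avail --aggregated=only
```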
To set a cron schedule for metrics reported to CloudWatch, start editing the crontab using the crontab
-e command. Add the following command to report memory and disk space utilization to CloudWatch
every five minutes:
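The crontab entry was lost in extraction. A likely form, matching the memory and disk space metrics the sentence describes (the --disk-path=/ choice and script location are assumptions), is:

```shell
# crontab entry: report memory and root-disk utilization every 5 minutes
*/5 * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-used-incl-cache-buff --mem-util --disk-space-util --disk-path=/ --from-cron
```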
If the script encounters an error, it writes the error message in the system log.
mon-get-instance-stats.pl
This script queries CloudWatch for statistics on memory, swap, and disk space metrics over the specified
number of most recent hours. This data is provided for the Amazon EC2 instance on which this script is
run.
Options
Name Description
--aws-access-key-id=VALUE Specifies the AWS access key ID to use to identify the caller. Must be used together with the --aws-secret-key option. Do not use this option with the --aws-credential-file option.
--aws-secret-key=VALUE Specifies the AWS secret access key to use to sign the request to CloudWatch. Must be used together with the --aws-access-key-id option. Do not use this option with the --aws-credential-file option.
--aws-iam-role=VALUE Specifies the IAM role used to provide AWS credentials. The value =VALUE is required. If no credentials are specified, the default IAM role associated with the EC2 instance is applied. Only one IAM role can be used. If no IAM roles are found, or if more than one IAM role is found, the script will return an error.
--verify Performs a test run of the script. This option also checks that credentials are provided.
Example
To get utilization statistics for the last 12 hours, run the following command:
./mon-get-instance-stats.pl --recent-hours=12
CPU Utilization
Average: 1.06%, Minimum: 0.00%, Maximum: 15.22%
Memory Utilization
Swap Utilization
Average: N/A, Minimum: N/A, Maximum: N/A
Troubleshoot
The CloudWatchClient.pm module caches instance metadata locally. If you create an AMI from an
instance where you have run the monitoring scripts, any instances launched from the AMI within the
cache TTL (default: six hours, 24 hours for Auto Scaling groups) emit metrics using the instance ID of
the original instance. After the cache TTL time period passes, the script retrieves fresh data and the
monitoring scripts use the instance ID of the current instance. To immediately correct this, remove the
cached data using the following command:
rm /var/tmp/aws-mon/instance-id
Log API calls with AWS CloudTrail
To learn more about CloudTrail, see the AWS CloudTrail User Guide.
Understand Amazon EC2 and Amazon EBS log file entries
in Event history. You can view, search, and download recent events in your AWS account. For more
information, see Viewing Events with CloudTrail Event History.
For an ongoing record of events in your AWS account, including events for Amazon EC2 and Amazon
EBS, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when
you create a trail in the console, the trail applies to all Regions. The trail logs events from all Regions in
the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you
can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the AWS CloudTrail User Guide.
All Amazon EC2 actions, and Amazon EBS management actions, are logged by CloudTrail and
are documented in the Amazon EC2 API Reference. For example, calls to the RunInstances,
DescribeInstances, or CreateImage actions generate entries in the CloudTrail log files.
Understand Amazon EC2 and Amazon EBS log file entries
Every event or log entry contains information about who generated the request. The identity information helps you determine the following:
• Whether the request was made with root or IAM user credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
The following log file record shows that a user terminated an instance.
{
    "Records": [
        {
            "eventVersion": "1.03",
            "userIdentity": {
                "type": "Root",
                "principalId": "123456789012",
                "arn": "arn:aws:iam::123456789012:root",
                "accountId": "123456789012",
                "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
                "userName": "user"
            },
            "eventTime": "2016-05-20T08:27:45Z",
            "eventSource": "ec2.amazonaws.com",
            "eventName": "TerminateInstances",
            "awsRegion": "us-west-2",
            "sourceIPAddress": "198.51.100.1",
            "userAgent": "aws-cli/1.10.10 Python/2.7.9 Windows/7 botocore/1.4.1",
            "requestParameters": {
                "instancesSet": {
                    "items": [{
                        "instanceId": "i-1a2b3c4d"
                    }]
                }
            },
            "responseElements": {
                "instancesSet": {
                    "items": [{
                        "instanceId": "i-1a2b3c4d",
                        "currentState": {
                            "code": 32,
                            "name": "shutting-down"
                        },
                        "previousState": {
                            "code": 16,
                            "name": "running"
                        }
                    }]
                }
            },
            "requestID": "be112233-1ba5-4ae0-8e2b-1c302EXAMPLE",
            "eventID": "6e12345-2a4e-417c-aa78-7594fEXAMPLE",
            "eventType": "AwsApiCall",
            "recipientAccountId": "123456789012"
        }
    ]
}
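The identity fields listed above can be extracted from a record programmatically. A short Python sketch, using an abbreviated copy of the record above (the summarize_event helper is illustrative and not part of any AWS SDK):

```python
import json

def summarize_event(record):
    """Extract who made the request and what it did from a CloudTrail record."""
    identity = record["userIdentity"]
    items = record.get("requestParameters", {}).get("instancesSet", {}).get("items", [])
    return {
        "who": identity.get("userName", identity["type"]),
        "credential_type": identity["type"],  # e.g. Root, IAMUser, AssumedRole
        "action": record["eventName"],
        "instance_ids": [i["instanceId"] for i in items],
    }

# The TerminateInstances record shown above, abbreviated to the relevant fields.
sample = json.loads("""
{"eventName": "TerminateInstances",
 "userIdentity": {"type": "Root", "userName": "user"},
 "requestParameters": {"instancesSet": {"items": [{"instanceId": "i-1a2b3c4d"}]}}}
""")
print(summarize_event(sample))
```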
Audit users that connect via EC2 Instance Connect
The following example shows a CloudTrail event for the SendSSHPublicKey action, which is logged when a user connects to an instance using EC2 Instance Connect.
{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "ABCDEFGONGNOMOOCB6XYTQEXAMPLE",
        "arn": "arn:aws:iam::123456789012:user/IAM-friendly-name",
        "accountId": "123456789012",
        "accessKeyId": "ABCDEFGUKZHNAW4OSN2AEXAMPLE",
        "userName": "IAM-friendly-name",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2018-09-21T21:37:58Z"
            }
        }
    },
    "eventTime": "2018-09-21T21:38:00Z",
    "eventSource": "ec2-instance-connect.amazonaws.com",
    "eventName": "SendSSHPublicKey",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "198.51.100.1",
    "userAgent": "aws-cli/1.15.61 Python/2.7.10 Darwin/16.7.0 botocore/1.10.60",
    "requestParameters": {
        "instanceId": "i-0123456789EXAMPLE",
        "osUser": "ec2-user",
        "SSHKey": {
            "publicKey": "ssh-rsa ABCDEFGHIJKLMNO01234567890EXAMPLE"
        }
    },
    "responseElements": null,
    "requestID": "1a2s3d4f-bde6-11e8-a892-f7ec64543add",
    "eventID": "1a2w3d4r5-a88f-4e28-b3bf-30161f75be34",
    "eventType": "AwsApiCall",
    "recipientAccountId": "123456789012"
}
If you have configured your AWS account to collect CloudTrail events in an S3 bucket, you can
download and audit the information programmatically. For more information, see Getting and
Viewing Your CloudTrail Log Files in the AWS CloudTrail User Guide.
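After syncing the log files locally (for example, with aws s3 sync), each CloudTrail log file is a gzipped JSON document with a top-level Records list. The following Python sketch filters the Amazon EC2 events from a directory of downloaded files (the ec2_events helper is illustrative):

```python
import gzip
import json
from pathlib import Path

def ec2_events(log_dir):
    """Yield Amazon EC2 events from CloudTrail log files (*.json.gz) under log_dir.

    CloudTrail log files contain a top-level "Records" list; this filters
    for events emitted by the ec2.amazonaws.com event source.
    """
    for path in sorted(Path(log_dir).glob("*.json.gz")):
        with gzip.open(path, "rt") as f:
            for record in json.load(f).get("Records", []):
                if record.get("eventSource") == "ec2.amazonaws.com":
                    yield record
```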
Networking in Amazon EC2
You can control whether the instance receives a public IP address from Amazon's pool of public IP
addresses. The public IP address of an instance is associated with your instance only until it is stopped
or terminated. If you require a persistent public IP address, you can allocate an Elastic IP address for
your AWS account and associate it with an instance or a network interface. An Elastic IP address remains
associated with your AWS account until you release it, and you can move it from one instance to another
as needed. You can bring your own IP address range to your AWS account, where it appears as an address
pool, and then allocate Elastic IP addresses from your address pool.
To increase network performance and reduce latency, you can launch instances in a placement group.
You can get significantly higher packet per second (PPS) performance using enhanced networking. You
can accelerate high performance computing and machine learning applications using an Elastic Fabric
Adapter (EFA), which is a network device that you can attach to a supported instance type.
Features
• Regions and Zones (p. 1005)
• Amazon EC2 instance IP addressing (p. 1018)
• Amazon EC2 instance hostname types (p. 1034)
• Bring your own IP addresses (BYOIP) in Amazon EC2 (p. 1039)
• Assigning prefixes to Amazon EC2 network interfaces (p. 1048)
• Elastic IP addresses (p. 1059)
• Elastic network interfaces (p. 1067)
• Amazon EC2 instance network bandwidth (p. 1098)
• Enhanced networking on Linux (p. 1100)
• Elastic Fabric Adapter (p. 1128)
• Placement groups (p. 1167)
• Network maximum transmission unit (MTU) for your EC2 instance (p. 1179)
• Virtual private clouds (p. 1183)
• EC2-Classic (p. 1183)
Regions and Zones
Amazon EC2 is hosted in multiple locations world-wide. These locations are composed of Regions, Availability Zones, Local Zones, AWS Outposts, and Wavelength Zones.
• Wavelength Zones allow developers to build applications that deliver ultra-low latencies to 5G devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers' 5G networks.
AWS operates state-of-the-art, highly available data centers. Although rare, failures can occur that affect
the availability of instances that are in the same location. If you host all of your instances in a single
location that is affected by a failure, none of your instances would be available.
To help you determine which deployment is best for you, see AWS Wavelength FAQs.
Contents
• Regions (p. 1006)
• Availability Zones (p. 1010)
• Local Zones (p. 1012)
• Wavelength Zones (p. 1015)
• AWS Outposts (p. 1017)
Regions
Each Amazon EC2 Region is designed to be isolated from the other Amazon EC2 Regions. This achieves
the greatest possible fault tolerance and stability.
When you view your resources, you see only the resources that are tied to the Region that you specified.
This is because Regions are isolated from each other, and we don't automatically replicate resources
across Regions.
When you launch an instance, you must select an AMI that's in the same Region. If the AMI is in another
Region, you can copy the AMI to the Region you're using. For more information, see Copy an AMI (p. 170).
Note that there is a charge for data transfer between Regions. For more information, see Amazon EC2
Pricing - Data Transfer.
Contents
• Available Regions (p. 1006)
• Regions and endpoints (p. 1008)
• Describe your Regions (p. 1008)
• Get the Region name (p. 1009)
• Specify the Region for a resource (p. 1009)
Available Regions
Your account determines the Regions that are available to you.
• An AWS account provides multiple Regions so that you can launch Amazon EC2 instances in locations
that meet your requirements. For example, you might want to launch instances in Europe to be closer
to your European customers or to meet legal requirements.
• An AWS GovCloud (US-West) account provides access to the AWS GovCloud (US-West) Region and the
AWS GovCloud (US-East) Region. For more information, see AWS GovCloud (US).
• An Amazon AWS (China) account provides access to the Beijing and Ningxia Regions only. For more
information, see AWS in China.
The following table lists the Regions provided by an AWS account. You can't describe or access additional
Regions from an AWS account, such as AWS GovCloud (US) Region or the China Regions. To use a Region
introduced after March 20, 2019, you must enable the Region. For more information, see Managing AWS
Regions in the AWS General Reference.
For information about available Wavelength Zones, see Available Wavelength Zones in the AWS
Wavelength Developer Guide. For information about available Local Zones, see the section called
“Available Local Zones” (p. 1013).
The number and mapping of Availability Zones per Region may vary between AWS accounts. To get a list
of the Availability Zones that are available to your account, you can use the Amazon EC2 console or the
command line interface. For more information, see Describe your Regions (p. 1008).
For more information about endpoints and protocols in AWS GovCloud (US-West), see AWS GovCloud
(US-West) Endpoints in the AWS GovCloud (US) User Guide.
To find your Regions using the console
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. From the navigation bar, choose the Regions selector.
3. Your EC2 resources for this Region are displayed on the EC2 Dashboard in the Resources section.
To find your Regions using the AWS CLI
• Use the describe-regions command as follows to describe the Regions that are enabled for your account.

aws ec2 describe-regions

To describe all Regions, including any Regions that are disabled for your account, add the --all-regions option as follows.

aws ec2 describe-regions --all-regions
To find your Regions using the AWS Tools for Windows PowerShell
• Use the Get-EC2Region command as follows to describe the Regions for your account.
PS C:\> Get-EC2Region
Get the Region name
• Use the get-parameter command as follows to get the name of the specified Region.

aws ssm get-parameter --name /aws/service/global-infrastructure/regions/us-east-2/longName --query "Parameter.Value" --output text

Ohio
Considerations
Some AWS resources might not be available in all Regions. Ensure that you can create the resources that
you need in the desired Regions before you launch an instance.
You can set the value of an environment variable to the desired Regional endpoint (for example,
https://ec2.us-east-2.amazonaws.com):
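Regional endpoints follow the pattern shown in the example, ec2.region.amazonaws.com. A small Python sketch that derives the endpoint from the conventional AWS_DEFAULT_REGION environment variable (the helper is illustrative; Regions in the AWS GovCloud and China partitions use different domain suffixes):

```python
import os

def ec2_endpoint(region=None):
    """Build the EC2 regional endpoint URL for a commercial-partition Region."""
    region = region or os.environ.get("AWS_DEFAULT_REGION", "us-east-1")
    return f"https://ec2.{region}.amazonaws.com"

print(ec2_endpoint("us-east-2"))
```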
Alternatively, you can use the --region (AWS CLI) or -Region (AWS Tools for Windows PowerShell)
command line option with each individual command. For example, --region us-east-2.
For more information about the endpoints for Amazon EC2, see Amazon Elastic Compute Cloud
Endpoints.
Availability Zones
Each Region has multiple, isolated locations known as Availability Zones. When you launch an instance,
you can select an Availability Zone or let us choose one for you. If you distribute your instances across multiple Availability Zones, you can design your application so that, if one instance fails, an instance in another Availability Zone can handle requests.
You can also use Elastic IP addresses to mask the failure of an instance in one Availability Zone by rapidly
remapping the address to an instance in another Availability Zone. For more information, see Elastic IP
addresses (p. 1059).
An Availability Zone is represented by a Region code followed by a letter identifier; for example,
us-east-1a. To ensure that resources are distributed across the Availability Zones for a Region, we
independently map Availability Zones to names for each AWS account. For example, the Availability Zone
us-east-1a for your AWS account might not be the same location as us-east-1a for another AWS
account.
To coordinate Availability Zones across accounts, you must use the AZ ID, which is a unique and
consistent identifier for an Availability Zone. For example, use1-az1 is an AZ ID for the us-east-1
Region and it has the same location in every AWS account.
You can view AZ IDs to determine the location of resources in one account relative to the resources in another account. For example, if you share a subnet in the Availability Zone with the AZ ID use1-az2 with another account, this subnet is available to that account in the Availability Zone whose AZ ID is also use1-az2. The AZ ID for each VPC and subnet is displayed in the Amazon VPC console. For more information, see Working with Shared VPCs in the Amazon VPC User Guide.
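The account-specific mapping can be pictured as a per-account lookup table keyed by Availability Zone name. The following Python sketch uses invented mappings purely for illustration; real mappings come from the ZoneName and ZoneId fields returned by describe-availability-zones:

```python
# Hypothetical per-account mappings of AZ name -> AZ ID. In reality you would
# read these from the ZoneName/ZoneId fields of describe-availability-zones.
ACCOUNT_A = {"us-east-1a": "use1-az2", "us-east-1b": "use1-az1"}
ACCOUNT_B = {"us-east-1a": "use1-az1", "us-east-1b": "use1-az2"}

def same_location(name_a, name_b):
    """Two AZ names refer to the same physical location only if the AZ IDs match."""
    return ACCOUNT_A[name_a] == ACCOUNT_B[name_b]

# "us-east-1a" in account A is NOT the same location as "us-east-1a" in account B...
print(same_location("us-east-1a", "us-east-1a"))
# ...but it IS the same location as account B's "us-east-1b".
print(same_location("us-east-1a", "us-east-1b"))
```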
As Availability Zones grow over time, our ability to expand them can become constrained. If this
happens, we might restrict you from launching an instance in a constrained Availability Zone unless you
already have an instance in that Availability Zone. Eventually, we might also remove the constrained
Availability Zone from the list of Availability Zones for new accounts. Therefore, your account might have
a different number of available Availability Zones in a Region than another account.
Contents
• Describe your Availability Zones (p. 1011)
• Launch instances in an Availability Zone (p. 1011)
• Migrate an instance to another Availability Zone (p. 1011)
To find your Availability Zones using the AWS CLI
1. Use the describe-availability-zones command as follows to describe the Availability Zones within the specified Region.

aws ec2 describe-availability-zones --region region-name

2. Use the describe-availability-zones command as follows to describe the Availability Zones regardless of the opt-in status.

aws ec2 describe-availability-zones --all-availability-zones
To find your Availability Zones using the AWS Tools for Windows PowerShell
Use the Get-EC2AvailabilityZone command as follows to describe the Availability Zones within the
specified Region.
When you launch an instance, you can optionally specify an Availability Zone in the Region that you are
using. If you do not specify an Availability Zone, we select an Availability Zone for you. When you launch
your initial instances, we recommend that you accept the default Availability Zone, because this allows us
to select the best Availability Zone for you based on system health and available capacity. If you launch
additional instances, specify a Zone only if your new instances must be close to, or separated from, your
running instances.
To migrate an instance to another Availability Zone
1. Create an AMI from the instance. The procedure depends on your operating system and the type of root device volume for the instance. For more information, see the documentation that corresponds to your operating system and root device volume:
Local Zones
A Local Zone is an extension of an AWS Region in geographic proximity to your users. Local Zones have
their own connections to the internet and support AWS Direct Connect, so that resources created in a
Local Zone can serve local users with low-latency communications. For more information, see AWS Local
Zones.
A Local Zone is represented by a Region code followed by an identifier that indicates the location, for
example, us-west-2-lax-1a. For more information, see Available Local Zones (p. 1013).
To use a Local Zone, you must first enable it. For more information, see the section called “Opt in to
Local Zones” (p. 1015). Next, create a subnet in the Local Zone. Finally, launch any of the following
resources in the Local Zone subnet, so that your applications are close to your end users:
In addition to the preceding list, the following resources are available in the Los Angeles Local Zones:
Contents
• Available Local Zones (p. 1013)
• Describe your Local Zones (p. 1014)
• Opt in to Local Zones (p. 1015)
• Launch instances in a Local Zone (p. 1015)
To find your Local Zones using the AWS CLI
1. Use the describe-availability-zones command as follows to describe the Local Zones in the specified Region.

aws ec2 describe-availability-zones --region region-name --filters Name=zone-type,Values=local-zone

2. Use the describe-availability-zones command as follows to describe the Local Zones regardless of whether they are enabled.

aws ec2 describe-availability-zones --region region-name --filters Name=zone-type,Values=local-zone --all-availability-zones
To find your Local Zones using the AWS Tools for Windows PowerShell
Use the Get-EC2AvailabilityZone command as follows to describe the Local Zones in the specified
Region.
Consideration
Some AWS resources might not be available in all Regions. Make sure that you can create the resources
that you need in the desired Regions or Local Zones before launching an instance in a specific Local Zone.
You can allocate the following IP addresses from a network border group:
1. Enable Local Zones. For more information, see Opt in to Local Zones (p. 1015).
2. Create a VPC in a Region that supports the Local Zone. For more information, see Creating a VPC in
the Amazon VPC User Guide.
3. Create a subnet. Select the Local Zone when you create the subnet. For more information, see
Creating a subnet in your VPC in the Amazon VPC User Guide.
4. Launch an instance, and select the subnet that you created in the Local Zone. For more information,
see Launch your instance (p. 563).
Wavelength Zones
AWS Wavelength enables developers to build applications that deliver ultra-low latencies to mobile
devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of
telecommunication carriers' 5G networks. Developers can extend a virtual private cloud (VPC) to one or
more Wavelength Zones, and then use AWS resources like Amazon EC2 instances to run applications that
require ultra-low latency and a connection to AWS services in the Region.
A Wavelength Zone is an isolated zone in the carrier location where the Wavelength infrastructure is
deployed. Wavelength Zones are tied to a Region. A Wavelength Zone is a logical extension of a Region,
and is managed by the control plane in the Region.
A Wavelength Zone is represented by a Region code followed by an identifier that indicates the
Wavelength Zone, for example, us-east-1-wl1-bos-wlz-1.
To use a Wavelength Zone, you must first opt in to the Zone. For more information, see the section called
“Enable Wavelength Zones” (p. 1017). Next, create a subnet in the Wavelength Zone. Finally, launch your
resources in the Wavelength Zone subnet, so that your applications are closer to your end users.
Wavelength Zones are not available in every Region. For information about the Regions that support
Wavelength Zones, see Available Wavelength Zones in the AWS Wavelength Developer Guide.
Contents
• Describe your Wavelength Zones (p. 1016)
• Enable Wavelength Zones (p. 1017)
• Launch instances in a Wavelength Zone (p. 1017)
To find your Wavelength Zones using the AWS CLI
1. Use the describe-availability-zones command as follows to describe the Wavelength Zones within the specified Region.

aws ec2 describe-availability-zones --region region-name --filters Name=zone-type,Values=wavelength-zone
To find your Wavelength Zone using the AWS Tools for Windows PowerShell
Use the Get-EC2AvailabilityZone command as follows to describe the Wavelength Zone within the
specified Region.
Considerations
• Some AWS resources are not available in all Regions. Make sure that you can create the resources
that you need in the desired Region or Wavelength Zone before launching an instance in a specific
Wavelength Zone.
For information about how to launch an instance in a Wavelength Zone, see Get started with AWS
Wavelength in the AWS Wavelength Developer Guide.
AWS Outposts
AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to
customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables
customers to build and run applications on premises using the same programming interfaces as in AWS
Regions, while using local compute and storage resources for lower latency and local data processing
needs.
An Outpost is a pool of AWS compute and storage capacity deployed at a customer site. AWS operates,
monitors, and manages this capacity as part of an AWS Region. You can create subnets on your Outpost
and specify them when you create AWS resources such as EC2 instances, EBS volumes, ECS clusters, and
RDS instances. Instances in Outpost subnets communicate with other instances in the AWS Region using
private IP addresses, all within the same VPC.
To begin using AWS Outposts, you must create an Outpost and order Outpost capacity. For more
information about Outposts configurations, see our catalog. After your Outpost equipment is installed,
the compute and storage capacity is available for you when you launch Amazon EC2 instances and create
Amazon EBS volumes on your Outpost.
The root volume must be 30 GB or smaller. You can specify data volumes in the block device mapping of
the AMI or the instance to provide additional storage. To trim unused blocks from the boot volume, see
How to Build Sparse EBS Volumes in the AWS Partner Network Blog.
We recommend that you increase the NVMe timeout for the root volume. For more information, see I/O
operation timeout (p. 1556).
For information about how to create an Outpost, see Get started with AWS Outposts in the AWS
Outposts User Guide.
The following create-volume command creates an empty 50 GB volume on the specified Outpost.

aws ec2 create-volume --availability-zone availability-zone --outpost-arn outpost-arn --size 50
You can dynamically modify the size of your Amazon EBS gp2 volumes without detaching them. For
more information about modifying a volume without detaching it, see Request modifications to your EBS
volumes (p. 1525).
Amazon EC2 instance IP addressing
Contents
• Private IPv4 addresses (p. 1019)
• Public IPv4 addresses (p. 1019)
• Elastic IP addresses (IPv4) (p. 1020)
Private IPv4 addresses
• IPv4-only subnets: You can only create resources in these subnets with IPv4 addresses assigned to
them.
• IPv6-only subnets: You can only create resources in these subnets with IPv6 addresses assigned to
them.
• IPv4 and IPv6 subnets: You can create resources in these subnets with either IPv4 or IPv6 addresses
assigned to them.
When you launch an EC2 instance into an IPv4-only or dual stack (IPv4 and IPv6) subnet, the instance
receives a primary private IP address from the IPv4 address range of the subnet. For more information,
see VPC and subnet sizing in the Amazon VPC User Guide. If you don't specify a primary private IP
address when you launch the instance, we select an available IP address in the subnet's IPv4 range
for you. Each instance has a default network interface (eth0) that is assigned the primary private
IPv4 address. You can also specify additional private IPv4 addresses, known as secondary private IPv4
addresses. Unlike primary private IP addresses, secondary private IP addresses can be reassigned from
one instance to another. For more information, see Multiple IP addresses (p. 1026).
A private IPv4 address, regardless of whether it is a primary or secondary address, remains associated
with the network interface when the instance is stopped and started, or hibernated and started, and is
released when the instance is terminated.
Public IPv4 addresses
When you launch an instance in a default VPC, we assign it a public IP address by default. When you
launch an instance into a nondefault VPC, the subnet has an attribute that determines whether instances
launched into that subnet receive a public IP address from the public IPv4 address pool. By default, we
don't assign a public IP address to instances launched in a nondefault subnet.
You can control whether your instance receives a public IP address as follows:
• Modifying the public IP addressing attribute of your subnet. For more information, see Modifying the
public IPv4 addressing attribute for your subnet in the Amazon VPC User Guide.
• Enabling or disabling the public IP addressing feature during launch, which overrides the subnet's
public IP addressing attribute. For more information, see Assign a public IPv4 address during instance
launch (p. 1023).
A public IP address is assigned to your instance from Amazon's pool of public IPv4 addresses, and is not
associated with your AWS account. When a public IP address is disassociated from your instance, it is
released back into the public IPv4 address pool, and you cannot reuse it.
You cannot manually associate or disassociate a public IP (IPv4) address from your instance. Instead, in
certain cases, we release the public IP address from your instance, or assign it a new one:
• We release your instance's public IP address when it is stopped, hibernated, or terminated. Your
stopped or hibernated instance receives a new public IP address when it is started.
• We release your instance's public IP address when you associate an Elastic IP address with it. When you
disassociate the Elastic IP address from your instance, it receives a new public IP address.
• If the public IP address of your instance in a VPC has been released, it will not receive a new one if
there is more than one network interface attached to your instance.
• If your instance's public IP address is released while it has a secondary private IP address that is
associated with an Elastic IP address, the instance does not receive a new public IP address.
If you require a persistent public IP address that can be associated to and from instances as you require,
use an Elastic IP address instead.
If you use dynamic DNS to map an existing DNS name to a new instance's public IP address, it might take
up to 24 hours for the IP address to propagate through the Internet. As a result, new instances might
not receive traffic while terminated instances continue to receive requests. To solve this problem, use an
Elastic IP address. You can allocate your own Elastic IP address, and associate it with your instance. For
more information, see Elastic IP addresses (p. 1059).
Note
Instances that access other instances through their public NAT IP address are charged for
regional or Internet data transfer, depending on whether the instances are in the same Region.
IPv6 addresses
You can optionally associate an IPv6 CIDR block with your VPC and associate IPv6 CIDR blocks with
your subnets. The IPv6 CIDR block for your VPC is automatically assigned from Amazon's pool of IPv6
addresses; you cannot choose the range yourself. For more information, see the following topics in the
Amazon VPC User Guide:
IPv6 addresses are globally unique and can be configured to remain private or reachable over the
Internet. For more information about IPv6, see IP Addressing in Your VPC in the Amazon VPC User Guide.
Your instance receives an IPv6 address if an IPv6 CIDR block is associated with your VPC and subnet, and
if one of the following is true:
• Your subnet is configured to automatically assign an IPv6 address to an instance during launch. For
more information, see Modifying the IPv6 addressing attribute for your subnet.
• You assign an IPv6 address to your instance during launch.
• You assign an IPv6 address to the primary network interface of your instance after launch.
• You assign an IPv6 address to a network interface in the same subnet, and attach the network
interface to your instance after launch.
When your instance receives an IPv6 address during launch, the address is associated with the primary
network interface (eth0) of the instance. You can disassociate the IPv6 address from the network
interface.
An IPv6 address persists when you stop and start, or hibernate and start, your instance, and is released
when you terminate your instance. You cannot reassign an IPv6 address while it's assigned to another
network interface—you must first unassign it.
You can assign additional IPv6 addresses to your instance by assigning them to a network interface
attached to your instance. The number of IPv6 addresses you can assign to a network interface and
the number of network interfaces you can attach to an instance varies per instance type. For more
information, see IP addresses per network interface per instance type (p. 1070).
Work with the IPv4 addresses for your instances
Contents
• View the IPv4 addresses (p. 1021)
• Assign a public IPv4 address during instance launch (p. 1023)
The public IPv4 address is displayed as a property of the network interface in the console, but it's
mapped to the primary private IPv4 address through NAT. Therefore, if you inspect the properties of your
network interface on your instance, for example, through ifconfig (Linux) or ipconfig (Windows), the
public IPv4 address is not displayed. To determine your instance's public IPv4 address from an instance,
use instance metadata.
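With IMDSv2, retrieving the public IPv4 address from instance metadata is a two-step exchange: an HTTP PUT to obtain a session token, then a GET for the metadata path with the token attached. The following Python sketch only builds the two requests without sending them, since the metadata service is reachable only from the instance itself (the helper names are illustrative; the header names and paths are the documented IMDS ones):

```python
import urllib.request

IMDS = "http://169.254.169.254"

def build_token_request(ttl_seconds=21600):
    """PUT request that obtains an IMDSv2 session token."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def build_metadata_request(token, path="public-ipv4"):
    """GET request for a metadata path, authenticated with the session token."""
    return urllib.request.Request(
        f"{IMDS}/latest/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )

# On an instance you would then do:
#   token = urllib.request.urlopen(build_token_request()).read().decode()
#   ip = urllib.request.urlopen(build_metadata_request(token)).read().decode()
```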
New console
• Public IPv4 address — The public IPv4 address. If you associated an Elastic IP address with
the instance or the primary network interface, this is the Elastic IP address.
To view the IPv4 addresses for an instance using the command line
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
1. Connect to your instance. For more information, see Connect to your Linux instance (p. 596).
2. Use the following command to access the private IP address:

IMDSv2
[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/local-ipv4

IMDSv1
[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/local-ipv4

3. Use the following command to access the public IP address:

IMDSv2
[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/public-ipv4

IMDSv1
[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/public-ipv4
If an Elastic IP address is associated with the instance, the value returned is that of the Elastic IP
address.
Considerations
• You can't manually disassociate the public IP address from your instance after launch. Instead, it's
automatically released in certain cases, after which you cannot reuse it. For more information, see
Public IPv4 addresses (p. 1019). If you require a persistent public IP address that you can associate
or disassociate at will, assign an Elastic IP address to the instance after launch instead. For more
information, see Elastic IP addresses (p. 1059).
• You cannot auto-assign a public IP address if you specify more than one network interface.
Additionally, you cannot override the subnet setting using the auto-assign public IP feature if you
specify an existing network interface for eth0.
• The public IP addressing feature is only available during launch. However, whether you assign a public
IP address to your instance during launch or not, you can associate an Elastic IP address with your
instance after it's launched. For more information, see Elastic IP addresses (p. 1059). You can also
modify your subnet's public IPv4 addressing behavior. For more information, see Modifying the public
IPv4 addressing attribute for your subnet.
To enable or disable the public IP addressing feature using the command line
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
Work with the IPv6 addresses for your instances
Contents
• View the IPv6 addresses (p. 1024)
• Assign an IPv6 address to an instance (p. 1025)
• Unassign an IPv6 address from an instance (p. 1026)
To view the IPv6 addresses for an instance using the command line
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
1. Connect to your instance. For more information, see Connect to your Linux instance (p. 596).
2. Use the following command to view the IPv6 address (you can get the MAC address from
http://169.254.169.254/latest/meta-data/network/interfaces/macs/).
IMDSv2
[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/network/interfaces/macs/mac-address/ipv6s

IMDSv1
[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/mac-address/ipv6s
3. On the Configure Instance Details page, for Network, select a VPC and for Subnet, select a subnet.
For Auto-assign IPv6 IP, choose Enable.
4. Follow the remaining steps in the wizard to launch your instance.
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
• Use the --ipv6-addresses option with the run-instances command (AWS CLI)
• Use the Ipv6Addresses property for -NetworkInterface in the New-EC2Instance command (AWS
Tools for Windows PowerShell)
• assign-ipv6-addresses (AWS CLI)
• Register-EC2Ipv6AddressList (AWS Tools for Windows PowerShell)
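As a sketch of the AWS CLI commands listed above (all IDs and addresses are placeholders, and the commands require AWS credentials):

```shell
# Launch-time: assign a specific IPv6 address to the primary interface (eth0).
aws ec2 run-instances --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro --subnet-id subnet-0abcdef1234567890 \
    --ipv6-addresses '[{"Ipv6Address":"2001:db8:1234:1a00::123"}]'

# Existing network interface: assign an additional IPv6 address.
# (Use --ipv6-address-count instead to let Amazon choose the addresses.)
aws ec2 assign-ipv6-addresses --network-interface-id eni-0abcdef1234567890 \
    --ipv6-addresses 2001:db8:1234:1a00::456
```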
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
Multiple IP addresses
You can specify multiple private IPv4 and IPv6 addresses for your instances. The number of network
interfaces and private IPv4 and IPv6 addresses that you can specify for an instance depends on the
instance type. For more information, see IP addresses per network interface per instance type (p. 1070).
It can be useful to assign multiple IP addresses to an instance in your VPC to do the following:
• Host multiple websites on a single server by using multiple SSL certificates on a single server and
associating each certificate with a specific IP address.
• Operate network appliances, such as firewalls or load balancers, that have multiple IP addresses for
each network interface.
• Redirect internal traffic to a standby instance in case your instance fails, by reassigning the secondary
IP address to the standby instance.
Contents
• How multiple IP addresses work (p. 1027)
• Work with multiple IPv4 addresses (p. 1028)
• Work with multiple IPv6 addresses (p. 1031)

How multiple IP addresses work

The following list explains how multiple IP addresses work with network interfaces:
• You can assign a secondary private IPv4 address to any network interface. The network interface does
not need to be attached to the instance.
• You can assign multiple IPv6 addresses to a network interface that's in a subnet that has an associated
IPv6 CIDR block.
• You must choose a secondary IPv4 address from the IPv4 CIDR block range of the subnet for the
network interface.
• You must choose IPv6 addresses from the IPv6 CIDR block range of the subnet for the network
interface.
• You associate security groups with network interfaces, not individual IP addresses. Therefore, each IP
address you specify in a network interface is subject to the security group of its network interface.
• Multiple IP addresses can be assigned and unassigned to network interfaces attached to running or
stopped instances.
• Secondary private IPv4 addresses that are assigned to a network interface can be reassigned to
another one if you explicitly allow it.
• An IPv6 address cannot be reassigned to another network interface; you must first unassign the IPv6
address from the existing network interface.
• When assigning multiple IP addresses to a network interface using the command line tools or API, the
entire operation fails if one of the IP addresses can't be assigned.
• Primary private IPv4 addresses, secondary private IPv4 addresses, Elastic IP addresses, and IPv6
addresses remain with a secondary network interface when it is detached from an instance or attached
to an instance.
• Although you can't detach the primary network interface from an instance, you can reassign the
secondary private IPv4 address of the primary network interface to another network interface.
The following list explains how multiple IP addresses work with Elastic IP addresses (IPv4 only):
• Each private IPv4 address can be associated with a single Elastic IP address, and vice versa.
• When a secondary private IPv4 address is reassigned to another interface, the secondary private IPv4
address retains its association with an Elastic IP address.
• When a secondary private IPv4 address is unassigned from an interface, an associated Elastic IP
address is automatically disassociated from the secondary private IPv4 address.
Contents
• Assign a secondary private IPv4 address (p. 1028)
• Configure the operating system on your instance to recognize secondary private IPv4
addresses (p. 1029)
• Associate an Elastic IP address with the secondary private IPv4 address (p. 1030)
• View your secondary private IPv4 addresses (p. 1030)
• Unassign a secondary private IPv4 address (p. 1031)
• To assign a secondary private IPv4 address when launching an instance (p. 1028)
• To assign a secondary IPv4 address during launch using the command line (p. 1029)
• To assign a secondary private IPv4 address to a network interface (p. 1029)
• To assign a secondary private IPv4 to an existing instance using the command line (p. 1029)
• To add another network interface, choose Add Device. The console enables you to specify up
to two network interfaces when you launch an instance. After you launch the instance, choose
Network Interfaces in the navigation pane to add additional network interfaces. The total
number of network interfaces that you can attach varies by instance type. For more information,
see IP addresses per network interface per instance type (p. 1070).
Important
When you add a second network interface, the system can no longer auto-assign a public
IPv4 address. You will not be able to connect to the instance over IPv4 unless you assign
an Elastic IP address to the primary network interface (eth0). You can assign the Elastic
IP address after you complete the Launch wizard. For more information, see Work with
Elastic IP addresses (p. 1060).
• For each network interface, under Secondary IP addresses, choose Add IP, and then enter a
private IP address from the subnet range, or accept the default Auto-assign value to let Amazon
select an address.
6. On the Add Storage page, you can specify volumes to attach to the instance in addition to the
volumes specified by the AMI (such as the root device volume), and then choose Next: Add Tags.
7. On the Add Tags page, specify tags for the instance, such as a user-friendly name, and then choose
Next: Configure Security Group.
8. On the Configure Security Group page, select an existing security group or create a new one.
Choose Review and Launch.
9. On the Review Instance Launch page, review your settings, and then choose Launch to choose a key
pair and launch your instance. If you're new to Amazon EC2 and haven't created any key pairs, the
wizard prompts you to create one.
Important
After you have added a secondary private IP address to a network interface, you must connect
to the instance and configure the secondary private IP address on the instance itself. For more
information, see Configure the operating system on your instance to recognize secondary
private IPv4 addresses (p. 1029).
To assign a secondary IPv4 address during launch using the command line
• You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
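The commands are not reproduced in this extract. A sketch using the AWS CLI run-instances command (placeholder IDs; requires AWS credentials):

```shell
# Launch an instance whose primary interface (eth0) gets two additional
# secondary private IPv4 addresses chosen by Amazon from the subnet range:
aws ec2 run-instances --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro --subnet-id subnet-0abcdef1234567890 \
    --secondary-private-ip-address-count 2
```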
Alternatively, you can assign a secondary private IPv4 address to an instance. Choose Instances in the
navigation pane, select the instance, and then choose Actions, Networking, Manage IP Addresses. You
can configure the same information as you did in the steps above. The IP address is assigned to the
primary network interface (eth0) for the instance.
To assign a secondary private IPv4 to an existing instance using the command line
• You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
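As a sketch of the assign-private-ip-addresses command (placeholder IDs and addresses; requires AWS credentials):

```shell
# Assign a specific secondary private IPv4 address to a network interface:
aws ec2 assign-private-ip-addresses --network-interface-id eni-0abcdef1234567890 \
    --private-ip-addresses 10.0.0.82

# Or let Amazon choose the addresses from the subnet range:
aws ec2 assign-private-ip-addresses --network-interface-id eni-0abcdef1234567890 \
    --secondary-private-ip-address-count 2
```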
• If you are using Amazon Linux, the ec2-net-utils package can take care of this step for you. It
configures additional network interfaces that you attach while the instance is running, refreshes
secondary IPv4 addresses during DHCP lease renewal, and updates the related routing rules. You can
immediately refresh the list of interfaces by using the command sudo service network restart
and then view the up-to-date list using ip addr li. If you require manual control over your network
configuration, you can remove the ec2-net-utils package. For more information, see Configure your
network interface using ec2-net-utils (p. 1096).
• If you are using another Linux distribution, see the documentation for your Linux distribution. Search
for information about configuring additional network interfaces and secondary IPv4 addresses. If the
instance has two or more interfaces on the same subnet, search for information about using routing
rules to work around asymmetric routing.
For information about configuring a Windows instance, see Configuring a secondary private IP address
for your Windows instance in a VPC in the Amazon EC2 User Guide for Windows Instances.
To associate an Elastic IP address with a secondary private IPv4 address using the command
line
• You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
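A sketch of the associate-address command for this task (placeholder IDs and addresses; requires AWS credentials):

```shell
# Associate an Elastic IP address (identified by its allocation ID) with a
# secondary private IPv4 address on a specific network interface:
aws ec2 associate-address --allocation-id eipalloc-0abcdef1234567890 \
    --network-interface-id eni-0abcdef1234567890 \
    --private-ip-address 10.0.0.82
```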
• You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
Contents
• Assign multiple IPv6 addresses (p. 1031)
• View your IPv6 addresses (p. 1033)
• Unassign an IPv6 address (p. 1033)
4. On the Configure Instance Details page, select a VPC from the Network list, and a subnet from the
Subnet list.
5. In the Network Interfaces section, do the following, and then choose Next: Add Storage:
• To assign a single IPv6 address to the primary network interface (eth0), under IPv6 IPs, choose
Add IP. To add a secondary IPv6 address, choose Add IP again. You can enter an IPv6 address from
the range of the subnet, or leave the default Auto-assign value to let Amazon choose an IPv6
address from the subnet for you.
• Choose Add Device to add another network interface and repeat the steps above to add one
or more IPv6 addresses to the network interface. The console enables you to specify up to two
network interfaces when you launch an instance. After you launch the instance, choose Network
Interfaces in the navigation pane to add additional network interfaces. The total number of
network interfaces that you can attach varies by instance type. For more information, see IP
addresses per network interface per instance type (p. 1070).
6. Follow the next steps in the wizard to attach volumes and tag your instance.
7. On the Configure Security Group page, select an existing security group or create a new one. If
you want your instance to be reachable over IPv6, ensure that your security group has rules that
allow access from IPv6 addresses. For more information, see Security group rules for different use
cases (p. 1318). Choose Review and Launch.
8. On the Review Instance Launch page, review your settings, and then choose Launch to choose a key
pair and launch your instance. If you're new to Amazon EC2 and haven't created any key pairs, the
wizard prompts you to create one.
You can use the Instances screen of the Amazon EC2 console to assign multiple IPv6 addresses to an existing
instance. This assigns the IPv6 addresses to the primary network interface (eth0) for the instance. To
assign a specific IPv6 address to the instance, ensure that the IPv6 address is not already assigned to
another instance or network interface.
Alternatively, you can assign multiple IPv6 addresses to an existing network interface. The network
interface must have been created in a subnet that has an associated IPv6 CIDR block. To assign a specific
IPv6 address to the network interface, ensure that the IPv6 address is not already assigned to another
network interface.
CLI overview
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
CLI overview
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
4. Under IPv6 Addresses, choose Unassign for the IPv6 address to unassign.
5. Choose Yes, Update.
CLI overview
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
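A sketch of the unassign-ipv6-addresses command for this task (placeholder ID and address; requires AWS credentials):

```shell
# Unassign an IPv6 address from a network interface:
aws ec2 unassign-ipv6-addresses --network-interface-id eni-0abcdef1234567890 \
    --ipv6-addresses 2001:db8:1234:1a00::456
```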
EC2 instance hostnames

Contents
• Types of EC2 hostnames (p. 1034)
• Where you see RBN and IPBN (p. 1036)
• Modify RBN configurations (p. 1038)
• IP address-based naming (IPBN): The legacy naming scheme where, when you launch an instance,
the private IPv4 address of the instance is included in the hostname of the instance. The IP address-
based name exists for the life of the EC2 instance. When used as the Private DNS hostname, it will only
return the private IPv4 address (A record).
• Resource-based naming (RBN): When you launch an instance, the EC2 instance ID is included in the
hostname of the instance. The resource-based name exists for the life of the EC2 instance. When used
as the Private DNS hostname, it can return both the Private IPv4 address (A record) and/or the IPv6
Global Unicast Address (AAAA record).
Types of EC2 hostnames
The EC2 instance guest OS hostname (of type IPBN or RBN) depends on the subnet settings:
• If the instance is launched into an IPv4-only subnet, you can select either IPBN or RBN.
• If the instance is launched into a dual-stack (IPv4+IPv6) subnet, you can select either IPBN or RBN.
• If the instance is launched into an IPv6-only subnet, RBN is used automatically.
Contents
• IP address-based naming (p. 1035)
• Resource-based naming (p. 1035)
• The difference between IPBN and RBN (p. 1035)
IP address-based naming
When you launch an EC2 instance with IP address-based naming (IPBN), the guest OS hostname is
configured to use the private IPv4 address, in the form ip-private-ipv4-address.ec2.internal (in us-east-1)
or ip-private-ipv4-address.region.compute.internal (in other Regions).
Resource-based naming
Resource-based naming (RBN) is used automatically when you launch EC2 instances in IPv6-only subnets.
RBN is not selected by default when you launch an instance in dual-stack (IPv4+IPv6) subnets, but
it is an option that you can select depending on the subnet settings. After you launch an instance,
you can manage the guest OS hostname configuration. For more information, see Modify RBN
configurations (p. 1038).
When you launch an EC2 instance with a resource-based hostname type, the guest OS hostname is
configured to use the EC2 instance ID.
Where you see RBN and IPBN
Contents
• When creating an EC2 instance (p. 1036)
• When viewing the details of an existing EC2 instance (p. 1037)
Scenario 1
You create an EC2 instance in the wizard and, when you configure the details, you choose a subnet that
you configured to be IPv6-only.
In this case, the Hostname type Resource name is selected automatically and is not modifiable. DNS
Hostname options are displayed, but Enable resource-based IPv6 (AAAA record) DNS requests is
selected automatically and is not modifiable here. This means that the hostname for your EC2 instance is
an RBN, and DNS requests to the RBN will resolve to the IPv6 address (AAAA record) of this EC2 instance.
Scenario 2
You create an EC2 instance in the wizard and, when you configure the details, you choose a subnet
configured with an IPv4 CIDR block or both an IPv4 and IPv6 CIDR block ("dual stack").
In this case, the RBN options Hostname type and Resource-based DNS are visible. Enable IP name IPV4
(A record) DNS requests is selected automatically and can't be changed. This means that requests to the
IP name will resolve to the IPv4 address (A record) of this EC2 instance.
The RBN options default to the configurations of the subnet, but you can modify the options for this
instance depending on the subnet settings:
• Hostname type: Determines whether you want the guest OS hostname of the EC2 instance to be the
resource name (RBN) or IP name (IPBN).
• Enable resource-based IPV4 (A record) DNS requests: Determines whether requests to your resource
name resolve to the private IPv4 address (A record) of this EC2 instance.
• Enable resource-based IPV6 (AAAA record) DNS requests: Determines whether requests to your
resource name resolve to the IPv6 GUA address (AAAA record) of this EC2 instance.
You can see the following details related to IPBN and RBN:
In addition, if you connect to your EC2 instance directly over SSH and enter the hostname command,
you'll see the hostname in either the IPBN or RBN format.
Modify RBN configurations
Contents
• Subnets (p. 1038)
• EC2 instances (p. 1038)
Subnets
Modify the RBN configurations for a subnet by selecting a subnet in the VPC console and choosing
Actions, Edit subnet settings.
• Hostname type: Determines whether you want the default setting of the guest OS hostname of the
EC2 instance launched in the subnet to be the resource name (RBN) or IP name (IPBN).
• Enable DNS hostname IPv4 (A record) requests: Determines whether DNS requests/queries to your
resource name resolve to the IPv4 address (A record) of this EC2 instance.
• Enable DNS hostname IPv6 (AAAA record) requests: Determines whether DNS requests/queries to
your resource name resolve to the IPv6 address (AAAA record) of this EC2 instance.
EC2 instances
Follow the steps in this section to modify the RBN configurations for an EC2 instance.
Important
• To change the Use RBN as guest OS hostname setting, you must first stop the instance. To
change the Answer DNS hostname IPv4 (A record) request or Answer DNS hostname IPv6
(AAAA record) requests settings, you don't have to stop the instance.
• Instances that are not EBS-backed cannot be stopped. To modify any of the RBN settings for
these instance types, you must terminate the instance and launch a new instance with the desired
RBN configurations.
To stop the instance, select the instance and choose Instance state, Stop instance.
3. Select the instance and choose Actions, Instance settings, Change resource based naming options.
• Use RBN as guest OS hostname: Determines whether you want the guest OS hostname of the
EC2 instance to be the resource name (RBN) or IP name (IPBN).
• Answer DNS hostname IPv4 (A record) requests: Determines whether DNS requests/queries to
your resource name resolve to the IPv4 address of this EC2 instance.
• Answer DNS hostname IPv6 (AAAA record) requests: Determines whether DNS requests/queries
to your resource name resolve to the IPv6 address (AAAA record) of this EC2 instance.
4. Choose Save.
5. If you stopped the instance, start it again.
Bring your own IP addresses
BYOIP is not available in all Regions and for all resources. For a list of supported Regions and resources,
see the FAQ for Bring Your Own IP.
Note
The following steps describe how to bring your own IP address range for use in Amazon EC2
only. For steps to bring your own IP address range for use in AWS Global Accelerator, see Bring
your own IP addresses (BYOIP) in the AWS Global Accelerator Developer Guide.
Contents
• Requirements and quotas (p. 1039)
• Configure your BYOIP address range (p. 1039)
• Work with your address range (p. 1046)
• Learn more (p. 1047)

Configure your BYOIP address range
• Preparation
For authentication purposes, create an RSA key pair and use it to generate a self-signed X.509
certificate.
• RIR configuration
Register with the Resource Public Key Infrastructure (RPKI) of your RIR, and file a Route Origin
Authorization (ROA) that defines the desired address range, the autonomous system numbers (ASNs)
allowed to advertise the address range, and an expiration date. Upload the self-signed certificate to
your RDAP record comments.
• Amazon configuration
Sign a CIDR authorization context message with the private RSA key that you created, and upload the
message and signature to Amazon using the AWS Command Line Interface.
To bring on multiple address ranges, you must repeat this process with each address range. Bringing on
an address range has no effect on any address ranges that you brought on previously.
To configure BYOIP, complete the following tasks. For some tasks, you run Linux commands. On
Windows, you can use the Windows Subsystem for Linux to run the Linux commands.
Tasks
• Create a key pair and certificate (p. 1040)
• Create an ROA object in your RIR (p. 1043)
• Update the RDAP record in your RIR (p. 1044)
• Provision the address range in AWS (p. 1044)
• Advertise the address range through AWS (p. 1045)
• Deprovision the address range (p. 1046)
Copy the commands below and replace only the placeholder values.
This procedure follows the best practice of encrypting your private RSA key and requiring a pass phrase
to access it.
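The key-generation command itself is missing from this extract. The following is a hedged reconstruction consistent with the description that follows; openssl prompts you to set a pass phrase for the encrypted key:

```shell
# Hypothetical reconstruction: create a 2048-bit RSA key pair, encrypting
# the private key with AES-256. You are prompted to set a pass phrase.
openssl genpkey -aes256 -algorithm RSA \
    -pkeyopt rsa_keygen_bits:2048 \
    -out private-key.pem
```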
The -aes256 parameter specifies the algorithm used to encrypt the private key. The command
returns the following output, including prompts to set a pass phrase:
......+++
.+++
Enter PEM pass phrase: xxxxxxx
Verifying - Enter PEM pass phrase: xxxxxxx
This returns a pass-phrase prompt and the contents of the key, which should be similar to the
following:
01:9d:cc:44:06:4f:8c:71:e0:a5:89:00:02:c5:16:
28:06:c2:07:05:50:71:58:c6:3b:9f:56:8d:f6:63:
cd:35:f9:a5:0b:55:54:7e:bc:ae:e7:22:1f:cf:03:
4d:90:b0:8c:29:23:06:1c:60:f8:e2:24:24:12:c4:
e7:09:21:f3:68:c8:1d:28:af:67:ad:df:97:02:f0:
cf:e1:34:f8:78:44:2d:26:49:ae:7d:8c:63:a2:71:
9a:29:37:a8:d3:54:38:5f:d9:fb:79:ac:76:3d:a5:
b9
prime1:
00:e3:c2:50:bf:de:3c:69:f3:32:72:e8:ff:28:25:
02:af:ed:37:6f:33:05:23:e1:54:96:38:76:41:1c:
bb:f8:7a:f2:5a:6a:26:b4:b9:08:c8:a3:55:03:6b:
c0:18:8a:da:a1:5f:53:66:08:27:a1:18:7f:32:b9:
78:ff:bf:a5:77:0b:33:0a:0e:49:91:af:53:6b:38:
d9:d2:cf:94:2c:9d:d4:34:e1:9e:a2:84:04:25:3e:
62:7d:ea:0e:30:2a:d8:28:0b:b0:18:a7:23:f4:83:
56:be:e3:fb:23:6f:5f:a8:dd:84:08:e2:90:ff:17:
bd:5c:fa:a6:b3:b4:7e:cf:47
prime2:
00:dd:73:6d:f2:36:64:f7:f8:9c:a9:b5:fd:1f:2a:
31:2f:38:d2:be:c7:05:0a:ce:2f:5c:2f:f3:b3:06:
ae:72:38:80:b5:3f:3d:93:f3:98:0e:7b:58:bc:93:
06:70:b3:ec:65:a4:6e:ae:05:3e:a5:98:82:44:2d:
dd:24:e7:d1:72:ba:93:6e:e1:d3:ef:5f:94:83:e8:
61:aa:77:1e:23:93:d2:af:23:be:2e:b0:67:8e:06:
88:66:17:4a:61:4c:79:2b:58:a0:71:5e:2c:93:d2:
84:bc:ce:39:c9:94:49:fc:ca:c2:29:1a:03:b6:f2:
38:eb:2e:96:87:35:9f:cc:5d
exponent1:
00:df:2c:d7:27:4b:42:f3:a6:c4:b6:68:ad:2d:cf:
26:54:f1:23:32:a9:51:ce:18:cc:63:ee:ab:a1:9d:
e0:6a:d9:3e:85:6e:22:c3:4f:d4:d5:95:86:86:35:
9d:23:ef:5b:d0:68:b2:35:f6:a3:ae:6d:6c:a6:6d:
ab:ad:1f:43:a9:e4:a5:7c:a3:07:5f:e3:e6:df:d7:
f3:49:68:f2:0e:ce:10:d4:48:88:c3:42:8d:35:59:
6d:f5:67:d5:c3:49:18:4a:15:39:d6:ce:60:a3:05:
d7:88:71:a8:f2:cd:fd:74:60:ab:32:71:a0:16:f6:
52:2d:bb:c6:81:ac:c9:dd:9d
exponent2:
00:db:9c:da:7f:27:24:70:aa:33:ab:36:58:e4:ec:
31:c4:b3:e4:83:df:d9:07:43:3c:c2:7e:a7:7e:76:
74:cf:bf:6b:1c:d3:af:9c:a7:29:b7:ca:e9:50:71:
ba:24:50:ba:72:7e:64:68:dd:b8:a7:fe:9b:c9:43:
76:99:5f:f0:5d:87:dc:28:4d:7a:a1:5c:37:6b:ad:
2c:16:22:75:58:31:03:f2:3e:4f:1f:fc:3f:66:20:
e2:69:e4:55:16:33:01:c3:53:ec:21:21:94:b1:b0:
47:84:fa:3b:62:c6:55:ad:85:e2:91:62:44:26:cd:
06:57:6d:67:48:85:8c:88:dd
coefficient:
3f:85:ff:ac:1c:67:ce:50:5b:c9:c0:53:29:00:dd:
6a:d2:23:1f:f7:73:00:c6:76:6e:0d:44:67:2d:f1:
93:99:8d:31:e3:8b:2f:68:8c:c3:83:d4:be:e2:32:
14:50:ff:79:37:85:4b:22:9f:92:c3:32:9f:eb:c9:
61:86:c7:8b:88:68:b6:ad:e3:49:22:0b:b4:f8:23:
ae:83:33:b3:f9:f5:eb:aa:77:3d:f0:d0:f0:fe:55:
4f:a1:ec:64:a2:be:fb:05:0d:dc:92:52:de:db:34:
ad:00:51:52:e1:74:c2:5f:5b:10:cd:f1:05:74:6f:
9a:77:5a:e5:87:d5:4f:01
$ cat public-key.pem
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxQVx0SOB1SgIYd7HonIq
KIswkU2yXtfmLMTU42uF8isqVRiBVgxoWbOOBQh5TzjklSfjaj++MPeqDOwz0t8a
PZGkMmQRZ9mBKdhAaub399OyhzUZmWVJpJ9MxzkhKTZmNnzMSEgcXsJcURQJ4sJk
nf/Ew7xyTGPRbwCL1rk7L+ZdLSSpPmvdSuPrTt1HQ0e0p6OVlxMX7Aa1t4NcnaN0
wbMfIuf2IlTnDQKcu4HtvxYsGN2glyQeq+p7heh/JkYCOK+L5DEbDpQISQ52TzXs
Hm6KPit0N5cG4G5jig/8/bL5PDf/oVEwbSF9H0bWxvjyyMN8VkRxqzEp9gc7D1bg
ywIDAQAB
-----END PUBLIC KEY-----
3. Generate an X.509 certificate using the key pair created in the previous step. In this example, the
certificate expires in 365 days, after which time it cannot be trusted. Be sure to set the expiration
appropriately. The tr -d "\n" command strips newline characters (line breaks) from the output.
You need to provide a Common Name when prompted, but the other fields can be left blank.
$ openssl req -new -x509 -key private-key.pem -days 365 | tr -d "\n" > certificate.pem
$ cat certificate.pem
The output should be a long, PEM-encoded string without line breaks, prefaced by -----BEGIN
CERTIFICATE----- and followed by -----END CERTIFICATE-----.
When you migrate advertisements from an on-premises workload to AWS, you must create an ROA for
your existing ASN before creating the ROAs for Amazon's ASNs. Otherwise, you might see an impact to
your existing routing and advertisements.
• For ARIN, add the certificate in the "Public Comments" section for your address range. Do not add it to
the comments section for your organization.
• For RIPE, add the certificate as a new "descr" field for your address range. Do not add it to the
comments section for your organization.
• For APNIC, email the public key to [email protected] to manually add it to the "remarks" field for
your address range. Send the email using the APNIC authorized contact for the IP addresses.
1. Compose message
Compose the plaintext authorization message. The format of the message is as follows, where the
date is the expiry date of the message:
1|aws|account|cidr|YYYYMMDD|SHA256|RSAPSS
Replace the account number, address range, and expiry date with your own values to create a
message resembling the following:
text_message="1|aws|0123456789AB|198.51.100.0/24|20211231|SHA256|RSAPSS"
This is not to be confused with an ROA message, which has a similar appearance.
2. Sign message
Sign the plaintext message using the private key that you created previously. The signature returned
by this command is a long string that you need to use in the next step.
Important
We recommend that you copy and paste this command. Except for the message content, do
not modify or replace any of the values.
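The signing command itself is missing from this extract. The following is a hedged reconstruction of the documented pipeline shape (RSA-PSS over SHA-256, base64-encoded, then transliterated to URL-safe characters). So that the sketch is self-contained, it generates a throwaway, unencrypted key; in the real procedure you sign with the pass-phrase-protected private-key.pem created earlier:

```shell
# Throwaway key for illustration only; use your encrypted private-key.pem
# in the real procedure.
KEYFILE=$(mktemp)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out "$KEYFILE" 2>/dev/null

text_message="1|aws|0123456789AB|198.51.100.0/24|20211231|SHA256|RSAPSS"

# Sign with RSA-PSS over SHA-256, base64-encode, and replace the characters
# +, =, / with -, _, ~ so the signature is safe to pass on the command line.
signed_message=$( printf '%s' "$text_message" | \
    openssl dgst -sha256 -sigopt rsa_padding_mode:pss -keyform PEM -sign "$KEYFILE" | \
    openssl base64 | \
    tr -- '+=/' '-_~' )
echo "$signed_message"
```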
3. Provision address
Use the AWS CLI provision-byoip-cidr command to provision the address range. The --cidr-
authorization-context option uses the message and signature strings that you created
previously.
It can take up to one week to complete the provisioning process for publicly advertisable ranges. Use
the describe-byoip-cidrs command to monitor progress, as in this example:
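The example invocations are not included in this extract. A sketch (placeholder CIDR; $text_message and $signed_message are the strings created in the previous steps, and the commands require AWS credentials):

```shell
# Provision the address range, proving ownership with the signed message:
aws ec2 provision-byoip-cidr --cidr 198.51.100.0/24 \
    --cidr-authorization-context Message="$text_message",Signature="$signed_message"

# Monitor the provisioning status:
aws ec2 describe-byoip-cidrs --max-results 5
```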
If there are issues during provisioning and the status goes to failed-provision, you must run the
provision-byoip-cidr command again after the issues have been resolved.
To provision an IPv6 address range that will not be publicly advertised, use the following provision-
byoip-cidr command.
If you provisioned an IPv6 address range that will not be publicly advertised, you do not need to
complete this step.
We recommend that you stop advertising the address range from other locations before you advertise
it through AWS. If you keep advertising your IP address range from other locations, we can't reliably
support it or troubleshoot issues. Specifically, we can't guarantee that traffic to the address range will
enter our network.
To minimize down time, you can configure your AWS resources to use an address from your address
pool before it is advertised, and then simultaneously stop advertising it from the current location and
start advertising it through AWS. For more information about allocating an Elastic IP address from your
address pool, see Allocate an Elastic IP address (p. 1060).
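The advertise command is not reproduced in this extract. A sketch (placeholder CIDR; requires AWS credentials):

```shell
# Begin advertising the provisioned address range through AWS:
aws ec2 advertise-byoip-cidr --cidr 198.51.100.0/24
```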
Limitations
• You can run the advertise-byoip-cidr command at most once every 10 seconds, even if you specify
different address ranges each time.
• You can run the withdraw-byoip-cidr command at most once every 10 seconds, even if you specify
different address ranges each time.
To stop advertising the address range, use the following withdraw-byoip-cidr command.
You cannot deprovision a portion of the address range. If you want to use a more specific address range
with AWS, deprovision the entire address range and provision a more specific address range.
(IPv4) To release each Elastic IP address, use the following release-address command.
(IPv6) To disassociate an IPv6 CIDR block, use the following disassociate-vpc-cidr-block command.
To stop advertising the address range, use the following withdraw-byoip-cidr command.
Work with your address range
To view information about the IPv4 address pools that you've provisioned in your account, use the
following describe-public-ipv4-pools command.
To create an Elastic IP address from your IPv4 address pool, use the allocate-address command. You can
use the --public-ipv4-pool option to specify the ID of the address pool returned by describe-
byoip-cidrs. Or you can use the --address option to specify an address from the address range that
you provisioned.
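As a sketch of the two allocate-address variants described above (placeholder pool ID and address; requires AWS credentials):

```shell
# Allocate an Elastic IP address from your BYOIP pool, letting Amazon
# choose the address:
aws ec2 allocate-address --public-ipv4-pool ipv4pool-ec2-1234567890abcdef0

# Or request a specific address from your provisioned range:
aws ec2 allocate-address --address 198.51.100.10
```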
To create a VPC and specify an IPv6 CIDR from your IPv6 address pool, use the following create-vpc
command. To let Amazon choose the IPv6 CIDR from your IPv6 address pool, omit the --ipv6-cidr-
block option.
To associate an IPv6 CIDR block from your IPv6 address pool with a VPC, use the following associate-
vpc-cidr-block command. To let Amazon choose the IPv6 CIDR from your IPv6 address pool, omit the --
ipv6-cidr-block option.
To view your VPCs and the associated IPv6 address pool information, use the describe-vpcs command. To
view information about associated IPv6 CIDR blocks from a specific IPv6 address pool, use the following
get-associated-ipv6-pool-cidrs command.
If you disassociate the IPv6 CIDR block from your VPC, it's released back into your IPv6 address pool.
For more information about working with IPv6 CIDR blocks in the VPC console, see Working with VPCs
and Subnets in the Amazon VPC User Guide.
Learn more
For more information, see the AWS Online Tech talk Deep Dive on Bring Your Own IP.
Assigning prefixes
• Automatic assignment — AWS chooses the prefix from your VPC subnet's IPv4 or IPv6 CIDR and
assigns it to your network interface.
• Manual assignment — You specify the prefix from your VPC subnet's IPv4 or IPv6 CIDR, and AWS
verifies that the prefix is not already assigned to other resources before assigning it to your network
interface.
• Increased IP addresses on a network interface — When you use a prefix, you assign a block of IP
addresses as opposed to individual IP addresses. This increases the number of IP addresses on a
network interface.
• Simplified VPC management for containers — In container applications, each container requires a
unique IP address. Assigning prefixes to your instance simplifies the management of your VPCs, as
you can launch and terminate containers without having to call Amazon EC2 APIs for individual IP
assignments.
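The address-count arithmetic behind prefix assignment can be sketched in shell. The /28 (IPv4) and /80 (IPv6) prefix lengths are the ones used throughout the examples in this section:

```shell
# Each IPv4 prefix delegated to an interface is a /28, i.e. a block of
# 2^(32-28) = 16 addresses assigned in a single operation.
ipv4_prefix_len=28
echo $((2 ** (32 - ipv4_prefix_len)))   # 16

# Each IPv6 prefix is a /80, leaving 128-80 = 48 bits of address space
# per prefix -- 2^48 individual /128 addresses.
ipv6_prefix_len=80
echo $((128 - ipv6_prefix_len))         # 48
```

So one IPv4 prefix assignment delivers 16 addresses in a single API call, instead of 16 separate address assignments.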
Topics
• Basics for assigning prefixes (p. 1048)
• Considerations and limits for prefixes (p. 1049)
• Work with prefixes (p. 1049)
Considerations and limits for prefixes
ip-private-ipv4-address.ec2.internal
ip-private-ipv4-address.region.compute.internal
After you have created the network interface, use the attach-network-interface AWS CLI command to
attach the network interface to your instance. You must configure your operating system to work with
network interfaces with prefixes. For more information, see Configure your operating system for network
interfaces with prefixes (p. 1056).
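A minimal attach call can be sketched as follows; the instance ID and device index are placeholder values:

```
# Attach the new interface to an instance as a secondary interface
# (device index 1; index 0 is the primary network interface).
aws ec2 attach-network-interface \
    --network-interface-id eni-02b80b4668EXAMPLE \
    --instance-id i-1234567890abcdef0 \
    --device-index 1
```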
Topics
• Assign automatic prefixes during network interface creation (p. 1049)
• Assign specific prefixes during network interface creation (p. 1052)
Work with prefixes
Console
a. To automatically assign an IPv4 prefix, for IPv4 prefix delegation, choose Auto-assign.
Then for Number of IPv4 prefixes, specify the number of prefixes to assign.
b. To automatically assign an IPv6 prefix, for IPv6 prefix delegation, choose Auto-assign.
Then for Number of IPv6 prefixes, specify the number of prefixes to assign.
Note
IPv6 prefix delegation appears only if the selected subnet is enabled for IPv6.
5. Select the security groups to associate with the network interface and assign resource tags if
needed.
6. Choose Create network interface.
AWS CLI
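The command that produces output like the examples below is create-network-interface with a prefix count. The following is a sketch; the subnet ID matches the example output and the count of 1 is illustrative:

```
# Let AWS automatically choose one /28 IPv4 prefix for the new interface.
aws ec2 create-network-interface \
    --subnet-id subnet-047cfed18eEXAMPLE \
    --description "IPv4 automatic example" \
    --ipv4-prefix-count 1

# For IPv6, use --ipv6-prefix-count instead; the subnet must be IPv6-enabled.
```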
Example output
{
"NetworkInterface": {
"AvailabilityZone": "us-west-2a",
"Description": "IPv4 automatic example",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-044c2de2c4EXAMPLE"
}
],
"InterfaceType": "interface",
"Ipv6Addresses": [],
"MacAddress": "02:98:65:dd:18:47",
"NetworkInterfaceId": "eni-02b80b4668EXAMPLE",
"OwnerId": "123456789012",
"PrivateIpAddress": "10.0.0.62",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "10.0.0.62"
}
],
"Ipv4Prefixes": [
{
"Ipv4Prefix": "10.0.0.208/28"
}
],
"RequesterId": "AIDAIV5AJI5LXF5XXDPCO",
"RequesterManaged": false,
"SourceDestCheck": true,
"Status": "pending",
"SubnetId": "subnet-047cfed18eEXAMPLE",
"TagSet": [],
"VpcId": "vpc-0e12f52b21EXAMPLE"
}
}
Example output
{
"NetworkInterface": {
"AvailabilityZone": "us-west-2a",
"Description": "IPv6 automatic example",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-044c2de2c4EXAMPLE"
}
],
"InterfaceType": "interface",
"Ipv6Addresses": [],
"MacAddress": "02:bb:e4:31:fe:09",
"NetworkInterfaceId": "eni-006edbcfa4EXAMPLE",
"OwnerId": "123456789012",
"PrivateIpAddress": "10.0.0.73",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "10.0.0.73"
}
],
"Ipv6Prefixes": [
{
"Ipv6Prefix": "2600:1f13:fc2:a700:1768::/80"
}
],
"RequesterId": "AIDAIV5AJI5LXF5XXDPCO",
"RequesterManaged": false,
"SourceDestCheck": true,
"Status": "pending",
"SubnetId": "subnet-047cfed18eEXAMPLE",
"TagSet": [],
"VpcId": "vpc-0e12f52b21EXAMPLE"
}
}
Console
a. To assign a specific IPv4 prefix, for IPv4 prefix delegation, choose Custom. Then choose
Add new prefix and enter the prefix to use.
b. To assign a specific IPv6 prefix, for IPv6 prefix delegation, choose Custom. Then choose
Add new prefix and enter the prefix to use.
Note
IPv6 prefix delegation appears only if the selected subnet is enabled for IPv6.
5. Select the security groups to associate with the network interface and assign resource tags if
needed.
6. Choose Create network interface.
AWS CLI
Use the create-network-interface command and set --ipv4-prefixes to the prefixes. AWS selects
IP addresses from this range. In the following example, the prefix CIDR is 10.0.0.208/28.
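A sketch of that command, using the subnet ID from the example output:

```
# Assign the specific prefix 10.0.0.208/28 at interface creation time.
aws ec2 create-network-interface \
    --subnet-id subnet-047cfed18eEXAMPLE \
    --description "IPv4 manual example" \
    --ipv4-prefixes Ipv4Prefix=10.0.0.208/28
```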
Example output
{
"NetworkInterface": {
"AvailabilityZone": "us-west-2a",
"Description": "IPv4 manual example",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-044c2de2c4EXAMPLE"
}
],
"InterfaceType": "interface",
"Ipv6Addresses": [],
"MacAddress": "02:98:65:dd:18:47",
"NetworkInterfaceId": "eni-02b80b4668EXAMPLE",
"OwnerId": "123456789012",
"PrivateIpAddress": "10.0.0.62",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "10.0.0.62"
}
],
"Ipv4Prefixes": [
{
"Ipv4Prefix": "10.0.0.208/28"
}
],
"RequesterId": "AIDAIV5AJI5LXF5XXDPCO",
"RequesterManaged": false,
"SourceDestCheck": true,
"Status": "pending",
"SubnetId": "subnet-047cfed18eEXAMPLE",
"TagSet": [],
"VpcId": "vpc-0e12f52b21EXAMPLE"
}
}
Example output
{
"NetworkInterface": {
"AvailabilityZone": "us-west-2a",
"Description": "IPv6 manual example",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-044c2de2c4EXAMPLE"
}
],
"InterfaceType": "interface",
"Ipv6Addresses": [],
"MacAddress": "02:bb:e4:31:fe:09",
"NetworkInterfaceId": "eni-006edbcfa4EXAMPLE",
"OwnerId": "123456789012",
"PrivateIpAddress": "10.0.0.73",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "10.0.0.73"
}
],
"Ipv6Prefixes": [
{
"Ipv6Prefix": "2600:1f13:fc2:a700:1768::/80"
}
],
"RequesterId": "AIDAIV5AJI5LXF5XXDPCO",
"RequesterManaged": false,
"SourceDestCheck": true,
"Status": "pending",
"SubnetId": "subnet-047cfed18eEXAMPLE",
"TagSet": [],
"VpcId": "vpc-0e12f52b21EXAMPLE"
}
}
Console
AWS CLI
You can use the assign-ipv6-addresses command to assign IPv6 prefixes and the assign-private-ip-addresses command to assign IPv4 prefixes to existing network interfaces.
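Sketches of both calls, letting AWS choose the prefixes; the interface IDs match the example outputs:

```
# Let AWS choose one IPv4 prefix for an existing interface.
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-081fbb4095EXAMPLE \
    --ipv4-prefix-count 1

# Let AWS choose one IPv6 prefix for an existing interface.
aws ec2 assign-ipv6-addresses \
    --network-interface-id eni-00d577338cEXAMPLE \
    --ipv6-prefix-count 1
```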
Example output
{
"NetworkInterfaceId": "eni-081fbb4095EXAMPLE",
"AssignedIpv4Prefixes": [
{
"Ipv4Prefix": "10.0.0.176/28"
}
]
}
Example output
{
"AssignedIpv6Prefixes": [
"2600:1f13:fc2:a700:18bb::/80"
],
"NetworkInterfaceId": "eni-00d577338cEXAMPLE"
}
Console
AWS CLI
Use the assign-private-ip-addresses command and set --ipv4-prefixes to the prefix. AWS
selects IPv4 addresses from this range. In the following example, the prefix CIDR is 10.0.0.208/28.
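A sketch of that command, with the interface ID from the example output:

```
# Assign the specific prefix 10.0.0.208/28 to an existing interface.
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-081fbb4095EXAMPLE \
    --ipv4-prefixes 10.0.0.208/28
```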
Example output
{
"NetworkInterfaceId": "eni-081fbb4095EXAMPLE",
"AssignedIpv4Prefixes": [
{
"Ipv4Prefix": "10.0.0.208/28"
}
]
}
Example output
{
"NetworkInterfaceId": "eni-00d577338cEXAMPLE",
"AssignedIpv6Prefixes": [
{
"Ipv6Prefix": "2600:1f13:fc2:a700:18bb::/80"
}
]
}
If you are not using Amazon Linux, you can use a Container Network Interface (CNI) plug-in for Kubernetes, or dockerd if you use Docker to manage your containers.
Console
AWS CLI
You can use the describe-network-interfaces AWS CLI command to view the prefixes assigned to
your network interfaces.
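A sketch of that command, using the interface IDs from the example outputs:

```
# Show the prefixes assigned to specific network interfaces.
aws ec2 describe-network-interfaces \
    --network-interface-ids eni-02b80b4668EXAMPLE eni-006edbcfa4EXAMPLE
```

The Ipv4Prefixes and Ipv6Prefixes fields in the response show the assigned prefixes, as in the example output below.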
Example output
{
"NetworkInterfaces": [
{
"AvailabilityZone": "us-west-2a",
"Description": "IPv4 automatic example",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-044c2de2c4EXAMPLE"
}
],
"InterfaceType": "interface",
"Ipv6Addresses": [],
"MacAddress": "02:98:65:dd:18:47",
"NetworkInterfaceId": "eni-02b80b4668EXAMPLE",
"OwnerId": "123456789012",
"PrivateIpAddress": "10.0.0.62",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "10.0.0.62"
}
],
"Ipv4Prefixes": [
{
"Ipv4Prefix": "10.0.0.208/28"
}
],
"Ipv6Prefixes": [],
"RequesterId": "AIDAIV5AJI5LXF5XXDPCO",
"RequesterManaged": false,
"SourceDestCheck": true,
"Status": "available",
"SubnetId": "subnet-05eef9fb78EXAMPLE",
"TagSet": [],
"VpcId": "vpc-0e12f52b2146bf252"
},
{
"AvailabilityZone": "us-west-2a",
"Description": "IPv6 automatic example",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-044c2de2c411c91b5"
}
],
"InterfaceType": "interface",
"Ipv6Addresses": [],
"MacAddress": "02:bb:e4:31:fe:09",
"NetworkInterfaceId": "eni-006edbcfa4EXAMPLE",
"OwnerId": "123456789012",
"PrivateIpAddress": "10.0.0.73",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "10.0.0.73"
}
],
"Ipv4Prefixes": [],
"Ipv6Prefixes": [
{
"Ipv6Prefix": "2600:1f13:fc2:a700:1768::/80"
}
],
"RequesterId": "AIDAIV5AJI5LXF5XXDPCO",
"RequesterManaged": false,
"SourceDestCheck": true,
"Status": "available",
"SubnetId": "subnet-05eef9fb78EXAMPLE",
"TagSet": [],
"VpcId": "vpc-0e12f52b21EXAMPLE"
}
]
}
Console
• To remove all assigned prefixes, for IPv4 prefix delegation and IPv6 prefix delegation,
choose Do not assign.
• To remove specific assigned prefixes, for IPv4 prefix delegation or IPv6 prefix delegation,
choose Custom and then choose Unassign next to the prefixes to remove.
Note
IPv6 prefix delegation appears only if the selected subnet is enabled for IPv6.
5. Choose Save.
AWS CLI
You can use the unassign-ipv6-addresses command to remove IPv6 prefixes and the unassign-private-ip-addresses command to remove IPv4 prefixes from your existing network interfaces.
Use the unassign-private-ip-addresses command and set --ipv4-prefixes to the prefix that you want to remove.
Use the unassign-ipv6-addresses command and set --ipv6-prefixes to the prefix that you want to remove.
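Sketches of both calls, using the interface IDs and prefixes from the example outputs:

```
# Remove a specific IPv4 prefix from an interface.
aws ec2 unassign-private-ip-addresses \
    --network-interface-id eni-081fbb4095EXAMPLE \
    --ipv4-prefixes 10.0.0.208/28

# Remove a specific IPv6 prefix from an interface.
aws ec2 unassign-ipv6-addresses \
    --network-interface-id eni-00d577338cEXAMPLE \
    --ipv6-prefixes 2600:1f13:fc2:a700:18bb::/80
```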
Elastic IP addresses
An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address
is allocated to your AWS account, and is yours until you release it. By using an Elastic IP address, you
can mask the failure of an instance or software by rapidly remapping the address to another instance
in your account. Alternatively, you can specify the Elastic IP address in a DNS record for your domain, so
that your domain points to your instance. For more information, see the documentation for your domain
registrar, or Set up dynamic DNS on your Amazon Linux instance (p. 702).
An Elastic IP address is a public IPv4 address, which is reachable from the internet. If your instance does
not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable
communication with the internet. For example, this allows you to connect to your instance from your
local computer.
Contents
• Elastic IP address pricing (p. 1059)
• Elastic IP address basics (p. 1059)
• Work with Elastic IP addresses (p. 1060)
• Use reverse DNS for email applications (p. 1066)
• Elastic IP address limit (p. 1067)
For more information, see Elastic IP Addresses on the Amazon EC2 Pricing, On-Demand Pricing page.
• You can disassociate an Elastic IP address from a resource, and then associate it with a different
resource. To avoid unexpected behavior, ensure that all active connections to the resource named in
the existing association are closed before you make the change. After you have associated your Elastic
IP address to a different resource, you can reopen your connections to the newly associated resource.
• A disassociated Elastic IP address remains allocated to your account until you explicitly release it. We
impose a small hourly charge for Elastic IP addresses that are not associated with a running instance.
• When you associate an Elastic IP address with an instance that previously had a public IPv4 address,
the public DNS host name of the instance changes to match the Elastic IP address.
• We resolve a public DNS host name to the public IPv4 address or the Elastic IP address of the instance
outside the network of the instance, and to the private IPv4 address of the instance from within the
network of the instance.
• An Elastic IP address comes from Amazon's pool of IPv4 addresses, or from a custom IP address pool
that you have brought to your AWS account.
• When you allocate an Elastic IP address from an IP address pool that you have brought to your AWS
account, it does not count toward your Elastic IP address limits. For more information, see Elastic IP
address limit (p. 1067).
• When you allocate an Elastic IP address, you can associate it with a network border group. This is the location from which we advertise the CIDR block. Setting the network border group limits the CIDR block to this group. If you do not specify the network border group, we set it to the group that contains all of the Availability Zones in the Region (for example, us-west-2).
• An Elastic IP address is for use in a specific network border group only.
• An Elastic IP address is for use in a specific Region only, and cannot be moved to a different Region.
Tasks
• Allocate an Elastic IP address (p. 1060)
• Describe your Elastic IP addresses (p. 1061)
• Tag an Elastic IP address (p. 1062)
• Associate an Elastic IP address with an instance or network interface (p. 1063)
• Disassociate an Elastic IP address (p. 1064)
• Release an Elastic IP address (p. 1065)
• Recover an Elastic IP address (p. 1065)
You can allocate an Elastic IP address using one of the following methods.
New console
• Amazon's pool of IPv4 addresses—If you want an IPv4 address to be allocated from
Amazon's pool of IPv4 addresses.
• My pool of public IPv4 addresses—If you want to allocate an IPv4 address from an IP address
pool that you have brought to your AWS account. This option is disabled if you do not have
any IP address pools.
• Customer owned pool of IPv4 addresses—If you want to allocate an IPv4 address from a
pool created from your on-premises network for use with an AWS Outpost. This option is
disabled if you do not have an AWS Outpost.
5. (Optional) Add or remove a tag.
[Remove a tag] Choose Remove to the right of the tag’s Key and Value.
6. Choose Allocate.
Old console
AWS CLI
New console
Old console
AWS CLI
You can tag an Elastic IP address using one of the following methods.
New console
Old console
7. Choose Save.
AWS CLI
PowerShell
The New-EC2Tag command needs a Tag parameter, which specifies the key and value pair to be
used for the Elastic IP address tag. The following commands create the Tag parameter.
You can associate an Elastic IP address with an instance or network interface using one of the following
methods.
New console
3. Select the Elastic IP address to associate and choose Actions, Associate Elastic IP address.
4. For Resource type, choose Network interface.
5. For Network interface, choose the network interface with which to associate the Elastic IP
address. You can also enter text to search for a specific network interface.
6. (Optional) For Private IP address, specify a private IP address with which to associate the Elastic
IP address.
7. Choose Associate.
Old console
AWS CLI
You can disassociate an Elastic IP address using one of the following methods.
New console
Old console
AWS CLI
New console
Old console
AWS CLI
• You cannot recover an Elastic IP address if it has been allocated to another AWS account, or if recovering it would cause you to exceed your Elastic IP address limit.
• You cannot recover tags associated with an Elastic IP address.
• You can recover an Elastic IP address using the Amazon EC2 API or a command line tool only.
Use reverse DNS for email applications
AWS CLI
Use the allocate-address AWS CLI command and specify the IP address using the --address
parameter as follows.
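A sketch of the recovery call; the address is a documentation placeholder, not a real Elastic IP address:

```
# Recover a previously released Elastic IP address.
aws ec2 allocate-address --address 203.0.113.3
```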
PowerShell
Use the New-EC2Address AWS Tools for Windows PowerShell command and specify the IP address
using the -Address parameter as follows.
Considerations
• Before you create a reverse DNS record, you must set a corresponding forward DNS record (record type
A) that points to your Elastic IP address.
• If a reverse DNS record is associated with an Elastic IP address, the Elastic IP address is locked to your
account and cannot be released from your account until the record is removed.
You can't create a reverse DNS record using the methods above. AWS must assign the static reverse DNS
records for you. Open Request to remove reverse DNS and email sending limitations and provide us with
your Elastic IP addresses and reverse DNS records.
To see the Elastic IP addresses that are in use, open the Amazon EC2 console at https://console.aws.amazon.com/ec2/ and choose Elastic IPs from the navigation pane.
You can verify your limit in either the Amazon EC2 console or the Service Quotas console. Do one of the following:
• In the Amazon EC2 console, choose Limits from the navigation pane, and then enter IP in the search field. The limit is EC2-VPC Elastic IPs. If you have access to EC2-Classic, there is an additional limit, EC2-Classic Elastic IPs.
• Open the Service Quotas console at https://console.aws.amazon.com/servicequotas/.
On the Dashboard, choose Amazon Elastic Compute Cloud (Amazon EC2). If Amazon Elastic Compute
Cloud (Amazon EC2) is not listed on the Dashboard, choose AWS services, enter EC2 in the search
field, and then choose Amazon Elastic Compute Cloud (Amazon EC2).
On the Amazon EC2 service quotas page, enter IP in the search field. The limit is EC2-VPC Elastic
IPs. If you have access to EC2-Classic, there is an additional limit, EC2-Classic Elastic IPs. For more
information, choose the limit.
If you think your architecture warrants additional Elastic IP addresses, you can request a quota increase
directly from the Service Quotas console.
• A primary private IPv4 address from the IPv4 address range of your VPC
• One or more secondary private IPv4 addresses from the IPv4 address range of your VPC
• One Elastic IP address (IPv4) per private IPv4 address
You can create and configure network interfaces and attach them to instances in the same Availability
Zone. Your account might also have requester-managed network interfaces, which are created and
managed by AWS services to enable you to use other resources and services. You cannot manage these
network interfaces yourself. For more information, see Requester-managed network interfaces (p. 1097).
This AWS resource is referred to as a network interface in the AWS Management Console and the Amazon
EC2 API. Therefore, we use "network interface" in this documentation instead of "elastic network
interface". The term "network interface" in this documentation always means "elastic network interface".
Contents
• Network interface basics (p. 1068)
• Network cards (p. 1069)
• IP addresses per network interface per instance type (p. 1070)
• Work with network interfaces (p. 1086)
• Scenarios for network interfaces (p. 1094)
• Best practices for configuring network interfaces (p. 1096)
• Requester-managed network interfaces (p. 1097)
Each instance has a default network interface, called the primary network interface. You cannot detach
a primary network interface from an instance. You can create and attach additional network interfaces.
The maximum number of network interfaces that you can use varies by instance type. For more
information, see IP addresses per network interface per instance type (p. 1070).
In a VPC, all subnets have a modifiable attribute that determines whether network interfaces created in
that subnet (and therefore instances launched into that subnet) are assigned a public IPv4 address. For
more information, see IP addressing behavior for your subnet in the Amazon VPC User Guide. The public
IPv4 address is assigned from Amazon's pool of public IPv4 addresses. When you launch an instance, the
IP address is assigned to the primary network interface that's created.
When you create a network interface, it inherits the public IPv4 addressing attribute from the subnet.
If you later modify the public IPv4 addressing attribute of the subnet, the network interface keeps the
setting that was in effect when it was created. If you launch an instance and specify an existing network
interface as the primary network interface, the public IPv4 address attribute is determined by this
network interface.
If you have an Elastic IP address, you can associate it with one of the private IPv4 addresses for the
network interface. You can associate one Elastic IP address with each private IPv4 address.
If you disassociate an Elastic IP address from a network interface, you can release it back to the address
pool. This is the only way to associate an Elastic IP address with an instance in a different subnet or VPC,
as network interfaces are specific to subnets.
If you associate IPv6 CIDR blocks with your VPC and subnet, you can assign one or more IPv6 addresses
from the subnet range to a network interface. Each IPv6 address can be assigned to one network
interface.
All subnets have a modifiable attribute that determines whether network interfaces created in that
subnet (and therefore instances launched into that subnet) are automatically assigned an IPv6 address
from the range of the subnet. For more information, see IP addressing behavior for your subnet in the
Amazon VPC User Guide. When you launch an instance, the IPv6 address is assigned to the primary
network interface that's created.
Prefix Delegation
A Prefix Delegation prefix is a reserved private IPv4 or IPv6 CIDR range that you allocate for automatic
or manual assignment to network interfaces that are associated with an instance. By using Delegated
Prefixes, you can launch services faster by assigning a range of IP addresses as a single prefix.
Termination behavior
You can set the termination behavior for a network interface that's attached to an instance. You can
specify whether the network interface should be automatically deleted when you terminate the instance
to which it's attached.
Source/destination checking
You can enable or disable source/destination checks, which ensure that the instance is either the source
or the destination of any traffic that it receives. Source/destination checks are enabled by default. You
must disable source/destination checks if the instance runs services such as network address translation,
routing, or firewalls.
Monitoring IP traffic
You can enable a VPC flow log on your network interface to capture information about the IP traffic
going to and from a network interface. After you've created a flow log, you can view and retrieve its data
in Amazon CloudWatch Logs. For more information, see VPC Flow Logs in the Amazon VPC User Guide.
Network cards
Instances with multiple network cards provide higher network performance, including bandwidth
capabilities above 100 Gbps and improved packet rate performance. Each network interface is attached
to a network card. The primary network interface must be assigned to network card index 0.
If you enable Elastic Fabric Adapter (EFA) when you launch an instance that supports multiple network
cards, all network cards are available. You can assign up to one EFA per network card. An EFA counts as a
network interface.
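When you attach an interface to an instance with multiple network cards, you can choose the target card with the --network-card-index option of attach-network-interface. A sketch with placeholder IDs:

```
# Attach a secondary interface to network card 1 on a multi-card instance.
# Card index 0 is reserved for the primary network interface.
aws ec2 attach-network-interface \
    --network-interface-id eni-0123456789EXAMPLE \
    --instance-id i-1234567890abcdef0 \
    --device-index 1 \
    --network-card-index 1
```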
The following instances support multiple network cards. All other instance types support one network
card.
dl1.24xlarge 4
p4d.24xlarge 4
IP addresses per network interface per instance type
Instance type | Maximum network interfaces | Private IPv4 addresses per interface | IPv6 addresses per interface
a1.medium 2 4 4
a1.large 3 10 10
a1.xlarge 4 15 15
a1.2xlarge 4 15 15
a1.4xlarge 8 30 30
a1.metal 8 30 30
c3.large 3 10 10
c3.xlarge 4 15 15
c3.2xlarge 4 15 15
c3.4xlarge 8 30 30
c3.8xlarge 8 30 30
c4.large 3 10 10
c4.xlarge 4 15 15
c4.2xlarge 4 15 15
c4.4xlarge 8 30 30
c4.8xlarge 8 30 30
c5.large 3 10 10
c5.xlarge 4 15 15
c5.2xlarge 4 15 15
c5.4xlarge 8 30 30
c5.9xlarge 8 30 30
c5.12xlarge 8 30 30
c5.18xlarge 15 50 50
c5.24xlarge 15 50 50
c5.metal 15 50 50
c5a.large 3 10 10
c5a.xlarge 4 15 15
c5a.2xlarge 4 15 15
c5a.4xlarge 8 30 30
c5a.8xlarge 8 30 30
c5a.12xlarge 8 30 30
c5a.16xlarge 15 50 50
c5a.24xlarge 15 50 50
c5ad.large 3 10 10
c5ad.xlarge 4 15 15
c5ad.2xlarge 4 15 15
c5ad.4xlarge 8 30 30
c5ad.8xlarge 8 30 30
c5ad.12xlarge 8 30 30
c5ad.16xlarge 15 50 50
c5ad.24xlarge 15 50 50
c5d.large 3 10 10
c5d.xlarge 4 15 15
c5d.2xlarge 4 15 15
c5d.4xlarge 8 30 30
c5d.9xlarge 8 30 30
c5d.12xlarge 8 30 30
c5d.18xlarge 15 50 50
c5d.24xlarge 15 50 50
c5d.metal 15 50 50
c5n.large 3 10 10
c5n.xlarge 4 15 15
c5n.2xlarge 4 15 15
c5n.4xlarge 8 30 30
c5n.9xlarge 8 30 30
c5n.18xlarge 15 50 50
c5n.metal 15 50 50
c6g.medium 2 4 4
c6g.large 3 10 10
c6g.xlarge 4 15 15
c6g.2xlarge 4 15 15
c6g.4xlarge 8 30 30
c6g.8xlarge 8 30 30
c6g.12xlarge 8 30 30
c6g.16xlarge 15 50 50
c6g.metal 15 50 50
c6gd.medium 2 4 4
c6gd.large 3 10 10
c6gd.xlarge 4 15 15
c6gd.2xlarge 4 15 15
c6gd.4xlarge 8 30 30
c6gd.8xlarge 8 30 30
c6gd.12xlarge 8 30 30
c6gd.16xlarge 15 50 50
c6gd.metal 15 50 50
c6gn.medium 2 4 4
c6gn.large 3 10 10
c6gn.xlarge 4 15 15
c6gn.2xlarge 4 15 15
c6gn.4xlarge 8 30 30
c6gn.8xlarge 8 30 30
c6gn.12xlarge 8 30 30
c6gn.16xlarge 15 50 50
c6i.large 3 10 10
c6i.xlarge 4 15 15
c6i.2xlarge 4 15 15
c6i.4xlarge 8 30 30
c6i.8xlarge 8 30 30
c6i.12xlarge 8 30 30
c6i.16xlarge 15 50 50
c6i.24xlarge 15 50 50
c6i.32xlarge 15 50 50
c6i.metal 15 50 50
d2.xlarge 4 15 15
d2.2xlarge 4 15 15
d2.4xlarge 8 30 30
d2.8xlarge 8 30 30
d3.xlarge 4 3 3
d3.2xlarge 4 5 5
d3.4xlarge 4 10 10
d3.8xlarge 3 20 20
d3en.large 4 2 2
d3en.xlarge 4 3 3
d3en.2xlarge 4 5 5
d3en.4xlarge 4 10 10
d3en.6xlarge 4 15 15
d3en.8xlarge 4 20 20
d3en.12xlarge 3 30 30
f1.2xlarge 4 15 15
f1.4xlarge 8 30 30
f1.16xlarge 8 50 50
g3s.xlarge 4 15 15
g3.4xlarge 8 30 30
g3.8xlarge 8 30 30
g3.16xlarge 15 50 50
g4ad.xlarge 2 4 4
g4ad.2xlarge 2 4 4
g4ad.4xlarge 3 10 10
g4ad.8xlarge 4 15 15
g4ad.16xlarge 8 30 30
g4dn.xlarge 3 10 10
g4dn.2xlarge 3 10 10
g4dn.4xlarge 3 10 10
g4dn.8xlarge 4 15 15
g4dn.12xlarge 8 30 30
g4dn.16xlarge 4 15 15
g4dn.metal 15 50 50
g5.xlarge 4 15 15
g5.2xlarge 4 15 15
g5.4xlarge 8 30 30
g5.8xlarge 8 30 30
g5.12xlarge 15 50 50
g5.16xlarge 8 30 30
g5.24xlarge 15 50 50
g5.48xlarge 15 50 50
g5g.xlarge 4 15 15
g5g.2xlarge 4 15 15
g5g.4xlarge 8 30 30
g5g.8xlarge 8 30 30
g5g.16xlarge 15 50 50
h1.2xlarge 4 15 15
h1.4xlarge 8 30 30
h1.8xlarge 8 30 30
h1.16xlarge 15 50 50
hpc6a.48xlarge 2 50 50
i2.xlarge 4 15 15
i2.2xlarge 4 15 15
i2.4xlarge 8 30 30
i2.8xlarge 8 30 30
i3.large 3 10 10
i3.xlarge 4 15 15
i3.2xlarge 4 15 15
i3.4xlarge 8 30 30
i3.8xlarge 8 30 30
i3.16xlarge 15 50 50
i3.metal 15 50 50
i3en.large 3 10 10
i3en.xlarge 4 15 15
i3en.2xlarge 4 15 15
i3en.3xlarge 4 15 15
i3en.6xlarge 8 30 30
i3en.12xlarge 8 30 30
i3en.24xlarge 15 50 50
i3en.metal 15 50 50
im4gn.large 3 10 10
im4gn.xlarge 4 15 15
im4gn.2xlarge 4 15 15
im4gn.4xlarge 8 30 30
im4gn.8xlarge 8 30 30
im4gn.16xlarge 15 50 50
inf1.xlarge 4 10 10
inf1.2xlarge 4 10 10
inf1.6xlarge 8 30 30
inf1.24xlarge 15 30 30
is4gen.medium 2 4 4
is4gen.large 3 10 10
is4gen.xlarge 4 15 15
is4gen.2xlarge 4 15 15
is4gen.4xlarge 8 30 30
is4gen.8xlarge 8 30 30
m4.large 2 10 10
m4.xlarge 4 15 15
m4.2xlarge 4 15 15
m4.4xlarge 8 30 30
m4.10xlarge 8 30 30
m4.16xlarge 8 30 30
m5.large 3 10 10
m5.xlarge 4 15 15
m5.2xlarge 4 15 15
m5.4xlarge 8 30 30
m5.8xlarge 8 30 30
m5.12xlarge 8 30 30
m5.16xlarge 15 50 50
m5.24xlarge 15 50 50
m5.metal 15 50 50
m5a.large 3 10 10
m5a.xlarge 4 15 15
m5a.2xlarge 4 15 15
m5a.4xlarge 8 30 30
m5a.8xlarge 8 30 30
m5a.12xlarge 8 30 30
m5a.16xlarge 15 50 50
m5a.24xlarge 15 50 50
m5ad.large 3 10 10
m5ad.xlarge 4 15 15
m5ad.2xlarge 4 15 15
m5ad.4xlarge 8 30 30
m5ad.8xlarge 8 30 30
m5ad.12xlarge 8 30 30
m5ad.16xlarge 15 50 50
m5ad.24xlarge 15 50 50
m5d.large 3 10 10
m5d.xlarge 4 15 15
m5d.2xlarge 4 15 15
m5d.4xlarge 8 30 30
m5d.8xlarge 8 30 30
m5d.12xlarge 8 30 30
m5d.16xlarge 15 50 50
m5d.24xlarge 15 50 50
m5d.metal 15 50 50
m5dn.large 3 10 10
m5dn.xlarge 4 15 15
m5dn.2xlarge 4 15 15
m5dn.4xlarge 8 30 30
m5dn.8xlarge 8 30 30
m5dn.12xlarge 8 30 30
m5dn.16xlarge 15 50 50
m5dn.24xlarge 15 50 50
m5dn.metal 15 50 50
m5n.large 3 10 10
m5n.xlarge 4 15 15
m5n.2xlarge 4 15 15
m5n.4xlarge 8 30 30
m5n.8xlarge 8 30 30
m5n.12xlarge 8 30 30
m5n.16xlarge 15 50 50
m5n.24xlarge 15 50 50
m5n.metal 15 50 50
m5zn.large 3 10 10
m5zn.xlarge 4 15 15
m5zn.2xlarge 4 15 15
m5zn.3xlarge 8 30 30
m5zn.6xlarge 8 30 30
m5zn.12xlarge 15 50 50
m5zn.metal 15 50 50
m6a.large 3 10 10
m6a.xlarge 4 15 15
m6a.2xlarge 4 15 15
m6a.4xlarge 8 30 30
m6a.8xlarge 8 30 30
m6a.12xlarge 8 30 30
m6a.16xlarge 15 50 50
m6a.24xlarge 15 50 50
m6a.32xlarge 15 50 50
m6a.48xlarge 15 50 50
m6g.medium 2 4 4
m6g.large 3 10 10
m6g.xlarge 4 15 15
m6g.2xlarge 4 15 15
m6g.4xlarge 8 30 30
m6g.8xlarge 8 30 30
m6g.12xlarge 8 30 30
m6g.16xlarge 15 50 50
m6g.metal 15 50 50
m6gd.medium 2 4 4
m6gd.large 3 10 10
m6gd.xlarge 4 15 15
m6gd.2xlarge 4 15 15
m6gd.4xlarge 8 30 30
m6gd.8xlarge 8 30 30
m6gd.12xlarge 8 30 30
m6gd.16xlarge 15 50 50
m6gd.metal 15 50 50
m6i.large 3 10 10
m6i.xlarge 4 15 15
m6i.2xlarge 4 15 15
m6i.4xlarge 8 30 30
m6i.8xlarge 8 30 30
m6i.12xlarge 8 30 30
m6i.16xlarge 15 50 50
m6i.24xlarge 15 50 50
m6i.32xlarge 15 50 50
m6i.metal 15 50 50
mac1.metal 8 30 30
p2.xlarge 4 15 15
p2.8xlarge 8 30 30
p2.16xlarge 8 30 30
p3.2xlarge 4 15 15
p3.8xlarge 8 30 30
p3.16xlarge 8 30 30
p3dn.24xlarge 15 50 50
r3.large 3 10 10
r3.xlarge 4 15 15
r3.2xlarge 4 15 15
r3.4xlarge 8 30 30
r3.8xlarge 8 30 30
r4.large 3 10 10
r4.xlarge 4 15 15
r4.2xlarge 4 15 15
r4.4xlarge 8 30 30
r4.8xlarge 8 30 30
r4.16xlarge 15 50 50
r5.large 3 10 10
r5.xlarge 4 15 15
r5.2xlarge 4 15 15
r5.4xlarge 8 30 30
r5.8xlarge 8 30 30
r5.12xlarge 8 30 30
r5.16xlarge 15 50 50
r5.24xlarge 15 50 50
r5.metal 15 50 50
r5a.large 3 10 10
r5a.xlarge 4 15 15
r5a.2xlarge 4 15 15
r5a.4xlarge 8 30 30
r5a.8xlarge 8 30 30
r5a.12xlarge 8 30 30
r5a.16xlarge 15 50 50
r5a.24xlarge 15 50 50
r5ad.large 3 10 10
r5ad.xlarge 4 15 15
r5ad.2xlarge 4 15 15
r5ad.4xlarge 8 30 30
r5ad.8xlarge 8 30 30
r5ad.12xlarge 8 30 30
r5ad.16xlarge 15 50 50
r5ad.24xlarge 15 50 50
r5b.large 3 10 10
r5b.xlarge 4 15 15
r5b.2xlarge 4 15 15
r5b.4xlarge 8 30 30
r5b.8xlarge 8 30 30
r5b.12xlarge 8 30 30
r5b.16xlarge 15 50 50
r5b.24xlarge 15 50 50
r5b.metal 15 50 50
r5d.large 3 10 10
r5d.xlarge 4 15 15
r5d.2xlarge 4 15 15
r5d.4xlarge 8 30 30
r5d.8xlarge 8 30 30
r5d.12xlarge 8 30 30
r5d.16xlarge 15 50 50
r5d.24xlarge 15 50 50
r5d.metal 15 50 50
r5dn.large 3 10 10
r5dn.xlarge 4 15 15
r5dn.2xlarge 4 15 15
r5dn.4xlarge 8 30 30
r5dn.8xlarge 8 30 30
r5dn.12xlarge 8 30 30
r5dn.16xlarge 15 50 50
r5dn.24xlarge 15 50 50
r5dn.metal 15 50 50
r5n.large 3 10 10
r5n.xlarge 4 15 15
r5n.2xlarge 4 15 15
r5n.4xlarge 8 30 30
r5n.8xlarge 8 30 30
r5n.12xlarge 8 30 30
r5n.16xlarge 15 50 50
r5n.24xlarge 15 50 50
r5n.metal 15 50 50
r6g.medium 2 4 4
r6g.large 3 10 10
r6g.xlarge 4 15 15
r6g.2xlarge 4 15 15
r6g.4xlarge 8 30 30
r6g.8xlarge 8 30 30
r6g.12xlarge 8 30 30
r6g.16xlarge 15 50 50
r6g.metal 15 50 50
r6gd.medium 2 4 4
r6gd.large 3 10 10
r6gd.xlarge 4 15 15
r6gd.2xlarge 4 15 15
r6gd.4xlarge 8 30 30
r6gd.8xlarge 8 30 30
r6gd.12xlarge 8 30 30
r6gd.16xlarge 15 50 50
r6gd.metal 15 50 50
r6i.large 3 10 10
r6i.xlarge 4 15 15
r6i.2xlarge 4 15 15
r6i.4xlarge 8 30 30
r6i.8xlarge 8 30 30
r6i.12xlarge 8 30 30
r6i.16xlarge 15 50 50
r6i.24xlarge 15 50 50
r6i.32xlarge 15 50 50
r6i.metal 15 50 50
t2.nano 2 2 2
t2.micro 2 2 2
t2.small 3 4 4
t2.medium 3 6 6
t2.large 3 12 12
t2.xlarge 3 15 15
t2.2xlarge 3 15 15
t3.nano 2 2 2
t3.micro 2 2 2
t3.small 3 4 4
t3.medium 3 6 6
t3.large 3 12 12
t3.xlarge 4 15 15
t3.2xlarge 4 15 15
t3a.nano 2 2 2
t3a.micro 2 2 2
t3a.small 2 4 4
t3a.medium 3 6 6
t3a.large 3 12 12
t3a.xlarge 4 15 15
t3a.2xlarge 4 15 15
t4g.nano 2 2 2
t4g.micro 2 2 2
t4g.small 3 4 4
t4g.medium 3 6 6
t4g.large 3 12 12
t4g.xlarge 4 15 15
t4g.2xlarge 4 15 15
u-6tb1.56xlarge 15 50 50
u-6tb1.112xlarge 15 50 50
u-6tb1.metal 15 50 50
u-9tb1.112xlarge 15 50 50
u-9tb1.metal 15 50 50
u-12tb1.112xlarge 15 50 50
u-12tb1.metal 15 50 50
u-18tb1.metal 15 50 50
u-24tb1.metal 15 50 50
vt1.3xlarge 4 15 15
vt1.6xlarge 8 30 30
vt1.24xlarge 15 50 50
x1.16xlarge 8 30 30
x1.32xlarge 8 30 30
x1e.xlarge 3 10 10
x1e.2xlarge 4 15 15
x1e.4xlarge 4 15 15
x1e.8xlarge 4 15 15
x1e.16xlarge 8 30 30
x1e.32xlarge 8 30 30
x2gd.medium 2 4 4
x2gd.large 3 10 10
x2gd.xlarge 4 15 15
x2gd.2xlarge 4 15 15
x2gd.4xlarge 8 30 30
x2gd.8xlarge 8 30 30
x2gd.12xlarge 8 30 30
x2gd.16xlarge 15 50 50
x2gd.metal 15 50 50
x2iezn.2xlarge 4 15 15
x2iezn.4xlarge 8 30 30
x2iezn.6xlarge 8 30 30
x2iezn.8xlarge 8 30 30
x2iezn.12xlarge 15 50 50
x2iezn.metal 15 50 50
z1d.large 3 10 10
z1d.xlarge 4 15 15
z1d.2xlarge 4 15 15
z1d.3xlarge 8 30 30
z1d.6xlarge 8 30 30
z1d.12xlarge 15 50 50
z1d.metal 15 50 50
You can use the describe-instance-types AWS CLI command to display information about an instance
type, such as the supported network interfaces and IP addresses per interface. The following example
displays this information for all C5 instances.
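A minimal sketch of that command: the wildcard filter selects all C5 types, and the query pulls the interface and address limits from the `NetworkInfo` structure returned by `describe-instance-types`.

```shell
aws ec2 describe-instance-types \
  --filters "Name=instance-type,Values=c5.*" \
  --query "InstanceTypes[].[InstanceType, NetworkInfo.MaximumNetworkInterfaces, NetworkInfo.Ipv4AddressesPerInterface, NetworkInfo.Ipv6AddressesPerInterface]" \
  --output table
```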
Work with network interfaces
Contents
• Create a network interface (p. 1087)
• View details about a network interface (p. 1088)
• Attach a network interface to an instance (p. 1088)
• Detach a network interface from an instance (p. 1089)
• Manage IP addresses (p. 1090)
• Modify network interface attributes (p. 1091)
New console
Old console
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
New console
Old console
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
Important
For EC2 instances in an IPv6-only subnet, if you attach a secondary network interface to the
instance, the private DNS hostname of the second network interface will resolve to the first
IPv6 address on the instance's first network interface. For more information about EC2 instance
private DNS hostnames, see Amazon EC2 instance hostname types (p. 1034).
If the public IPv4 address on your instance is released, it does not receive a new one if there is more than
one network interface attached to the instance. For more information about the behavior of public IPv4
addresses, see Public IPv4 addresses (p. 1019).
Instances page
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
If you try to detach a network interface that is attached to a resource from another service, such as
an Elastic Load Balancing load balancer, a Lambda function, a WorkSpace, or a NAT gateway, you get
an error that you do not have permission to access the resource. To find which service created the
resource attached to a network interface, check the description of the network interface. If you delete
the resource, then its network interface is deleted.
Instances page
To detach a network interface from an instance using the Network Interfaces page
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
Manage IP addresses
You can manage the following IP addresses for your network interfaces:
To manage the IPv4 and IPv6 addresses of a network interface using the console
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
• assign-ipv6-addresses
• associate-address
• disassociate-address
• unassign-ipv6-addresses
To manage the IP addresses of a network interface using the Tools for Windows PowerShell
• Register-EC2Address
• Register-EC2Ipv6AddressList
• Unregister-EC2Address
• Unregister-EC2Ipv6AddressList
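As a sketch of the AWS CLI route, the following assigns two additional IPv6 addresses to an interface; the interface ID shown is a placeholder.

```shell
aws ec2 assign-ipv6-addresses \
  --network-interface-id eni-0123456789abcdef0 \
  --ipv6-address-count 2
```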
The security group and network interface must be created for the same VPC. To change the security
group for interfaces owned by other services, such as Elastic Load Balancing, do so through that
service.
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
New console
Old console
To add or edit tags for a network interface using the command line
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
You cannot delete a network interface that is in use. First, you must detach the network
interface (p. 1089).
New console
Old console
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
Scenarios for network interfaces
• The primary network interface (eth0) on the instance handles public traffic.
• The secondary network interface (eth1) handles backend management traffic, and is connected to a
separate subnet in your VPC that has more restrictive access controls.
The public interface, which may or may not be behind a load balancer, has an associated security
group that allows access to the server from the internet (for example, allow TCP port 80 and 443 from
0.0.0.0/0, or from the load balancer).
The private-facing interface has an associated security group that allows SSH access only from an
allowed range of IP addresses, either within the VPC or from the internet, from a private subnet within
the VPC, or through a virtual private gateway.
To ensure failover capabilities, consider using a secondary private IPv4 address for incoming traffic on a
network interface. In the event of an instance failure, you can move the interface and/or secondary
private IPv4 address to a standby instance.
Best practices for configuring network interfaces
If the instance fails, you (or the code running on your behalf) can attach the network interface to a hot standby instance. Because
the interface maintains its private IP addresses, Elastic IP addresses, and MAC address, network traffic
begins flowing to the standby instance as soon as you attach the network interface to the replacement
instance. Users experience a brief loss of connectivity between the time the instance fails and the time
that the network interface is attached to the standby instance, but no changes to the route table or your
DNS server are required.
Use the following command to install the package on Amazon Linux if it's not already installed, or
update it if it's installed and additional updates are available:
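A sketch of that command, assuming the package name is ec2-net-utils (the package referenced later in this section); `yum install` also applies any available update if the package is already present.

```shell
sudo yum install ec2-net-utils
```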
udev rules
Identifies network interfaces when they are attached, detached, or reattached to a running instance,
and ensures that the hotplug script runs (53-ec2-network-interfaces.rules). Maps the MAC address
to a device name (75-persistent-net-generator.rules, which generates 70-persistent-net.rules).
hotplug script
Generates an interface configuration file suitable for use with DHCP
(/etc/sysconfig/network-scripts/ifcfg-ethN). Also generates a route configuration file
(/etc/sysconfig/network-scripts/route-ethN).
DHCP script
Whenever the network interface receives a new DHCP lease, this script queries the instance
metadata for Elastic IP addresses. For each Elastic IP address, it adds a rule to the routing policy
database to ensure that outbound traffic from that address uses the correct network interface. It
also adds each private IP address to the network interface as a secondary address.
ec2ifup ethN
Extends the functionality of the standard ifup. After this script rewrites the configuration files
ifcfg-ethN and route-ethN, it runs ifup.
ec2ifdown ethN
Extends the functionality of the standard ifdown. After this script removes any rules for the network
interface from the routing policy database, it runs ifdown.
ec2ifscan
Checks for network interfaces that have not been configured and configures them.
To list any configuration files that were generated by ec2-net-utils, use the following command:
$ ls -l /etc/sysconfig/network-scripts/*-eth?
To disable the automation on a per-instance basis, you can add EC2SYNC=no to the corresponding
ifcfg-ethN file. For example, use the following command to disable the automation for the eth1
interface:
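A minimal sketch of that edit, switching the EC2SYNC flag in the eth1 configuration file in place:

```shell
# Set EC2SYNC=no in the configuration file for eth1
sudo sed -i -e 's/^EC2SYNC=yes/EC2SYNC=no/' /etc/sysconfig/network-scripts/ifcfg-eth1
```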
To disable the automation completely, you can remove the package using the following command:
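For example, assuming the package name is ec2-net-utils:

```shell
sudo yum remove ec2-net-utils
```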
Requester-managed network interfaces
You cannot modify or detach a requester-managed network interface. If you delete the resource that
the network interface represents, the AWS service detaches and deletes the network interface for
you. To change the security groups for a requester-managed network interface, you might have to
use the console or command line tools for that service. For more information, see the service-specific
documentation.
You can tag a requester-managed network interface. For more information, see Add or edit
tags (p. 1093).
You can view the requester-managed network interfaces that are in your account.
• Attachment owner: If you created the network interface, this field displays your AWS account ID.
Otherwise, it displays an alias or ID for the principal or service that created the network interface.
• Description: Provides information about the purpose of the network interface; for example, "VPC
Endpoint Interface".
1. Use the describe-network-interfaces AWS CLI command to describe the network interfaces in your
account.
2. In the output, the RequesterManaged field displays true if the network interface is managed by
another AWS service.
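Step 1 might look like the following; the requester-managed filter narrows the output to interfaces created on your behalf by other AWS services.

```shell
aws ec2 describe-network-interfaces \
  --filters "Name=requester-managed,Values=true"
```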
{
"Status": "in-use",
...
"Description": "VPC Endpoint Interface vpce-089f2123488812123",
"NetworkInterfaceId": "eni-c8fbc27e",
"VpcId": "vpc-1a2b3c4d",
"PrivateIpAddresses": [
{
"PrivateDnsName": "ip-10-0-2-227.ec2.internal",
"Primary": true,
"PrivateIpAddress": "10.0.2.227"
}
],
"RequesterManaged": true,
...
}
Network bandwidth
Bandwidth for aggregate multi-flow traffic available to an instance depends on the destination of the
traffic.
Within the Region
Traffic can utilize the full network bandwidth available to the instance.
To other Regions, an internet gateway, or Direct Connect
Traffic can utilize up to 50% of the network bandwidth available to a current generation
instance (p. 227) with a minimum of 32 vCPUs. Bandwidth for a current generation instance with less
than 32 vCPUs is limited to 5 Gbps.
Bandwidth for single-flow (5-tuple) traffic is limited to 5 Gbps, regardless of the destination of the
traffic. For use cases that require low latency and high single-flow bandwidth, use a cluster placement
group (p. 1168) to achieve up to 10 Gbps for instances in the same placement group. Alternatively, set
up multiple paths between two endpoints to achieve higher bandwidth using Multipath TCP (MPTCP).
Typically, instances with 16 vCPUs or fewer (size 4xlarge and smaller) are documented as having "up
to" a specified bandwidth; for example, "up to 10 Gbps". These instances have a baseline bandwidth. To
meet additional demand, they can use a network I/O credit mechanism to burst beyond their baseline
bandwidth. Instances can use burst bandwidth for a limited time, typically from 5 to 60 minutes,
depending on the instance size.
An instance receives the maximum number of network I/O credits at launch. If the instance exhausts its
network I/O credits, it returns to its baseline bandwidth. A running instance earns network I/O credits
whenever it uses less network bandwidth than its baseline bandwidth. A stopped instance does not earn
network I/O credits. Instance burst is on a best effort basis, even when the instance has credits available,
as burst bandwidth is a shared resource.
The following documentation describes the network performance for all instances, plus the baseline
network bandwidth available for instances that can use burst bandwidth.
You can use the describe-instance-types AWS CLI command to display information about an instance
type. The following example displays network performance information for all C5 instances.
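A sketch of that command, querying the `NetworkPerformance` field for all C5 types:

```shell
aws ec2 describe-instance-types \
  --filters "Name=instance-type,Values=c5.*" \
  --query "InstanceTypes[].[InstanceType, NetworkInfo.NetworkPerformance]" \
  --output table
```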
Monitor instance bandwidth
You can configure whether Amazon EC2 sends metric data for the instance to CloudWatch using one-
minute periods or five-minute periods. It is possible that the network performance metrics would show
that an allowance was exceeded and packets were dropped while the CloudWatch instance metrics do
not. This can happen when the instance has a short spike in demand for network resources (known as a
microburst), but the CloudWatch metrics are not granular enough to reflect these microsecond spikes.
Learn more
For information about the supported network speed for each instance type, see Amazon EC2 Instance
Types.
Contents
• Enhanced networking support (p. 1100)
• Enable enhanced networking on your instance (p. 1101)
• Enable enhanced networking with the Elastic Network Adapter (ENA) on Linux instances (p. 1101)
• Enable enhanced networking with the Intel 82599 VF interface on Linux instances (p. 1110)
• Operating system optimizations (p. 1116)
• Monitor network performance for your EC2 instance (p. 1116)
• Troubleshoot the Elastic Network Adapter (ENA) (p. 1120)
You can enable enhanced networking using one of the following mechanisms:
The Elastic Network Adapter (ENA) supports network speeds of up to 100 Gbps for supported
instance types.
The current generation instances use ENA for enhanced networking, except for C4, D2, and M4
instances smaller than m4.16xlarge.
The Intel 82599 Virtual Function interface supports network speeds of up to 10 Gbps for supported
instance types.
The following instance types use the Intel 82599 VF interface for enhanced networking: C3, C4, D2,
I2, M4 (excluding m4.16xlarge), and R3.
For a summary of the enhanced networking mechanisms by instance type, see Summary of networking
and storage features (p. 234).
Enable enhanced networking on your instance
If your instance type supports the Intel 82599 VF interface for enhanced networking, follow
the procedures in Enable enhanced networking with the Intel 82599 VF interface on Linux
instances (p. 1110).
Contents
• Requirements (p. 1101)
• Enhanced networking performance (p. 1102)
• Test whether enhanced networking is enabled (p. 1102)
• Enable enhanced networking on the Amazon Linux AMI (p. 1104)
• Enable enhanced networking on Ubuntu (p. 1105)
• Enable enhanced networking on Linux (p. 1106)
• Enable enhanced networking on Ubuntu with DKMS (p. 1108)
• Driver release notes (p. 1110)
• Troubleshoot (p. 1110)
Requirements
To prepare for enhanced networking using the ENA, set up your instance as follows:
• Launch the instance using a current generation (p. 227) instance type, other than C4, D2, M4 instances
smaller than m4.16xlarge, or T2.
• Launch the instance using a supported version of the Linux kernel and a supported distribution, so that
ENA enhanced networking is enabled for your instance automatically. For more information, see ENA
Linux Kernel Driver Release Notes.
• Ensure that the instance has internet connectivity.
• Use AWS CloudShell from the AWS Management Console, or install and configure the AWS CLI or
the AWS Tools for Windows PowerShell on any computer you choose, preferably your local desktop
or laptop. For more information, see Access Amazon EC2 (p. 3) or the AWS CloudShell User Guide.
Enhanced networking cannot be managed from the Amazon EC2 console.
• If you have important data on the instance that you want to preserve, you should back that data up
now by creating an AMI from your instance. Updating kernels and kernel modules, as well as enabling
the enaSupport attribute, might render incompatible instances or operating systems unreachable. If
you have a recent backup, your data will still be retained if this happens.
• Amazon Linux 2
• Amazon Linux AMI 2018.03
• Ubuntu 14.04 (with linux-aws kernel) or later
• Red Hat Enterprise Linux 7.4 or later
• SUSE Linux Enterprise Server 12 SP2 or later
• CentOS 7.4.1708 or later
• FreeBSD 11.1 or later
• Debian GNU/Linux 9 or later
To test whether enhanced networking is already enabled, verify that the ena module is installed on your
instance and that the enaSupport attribute is set. If your instance satisfies these two conditions, then
the ethtool -i ethn command should show that the module is in use on the network interface.
To verify that the ena module is installed, use the modinfo ena command as shown in the following
example.
retpoline: Y
intree: Y
name: ena
...
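On an instance where the module is not installed, modinfo instead reports an error. A sketch of what that looks like (the exact wording varies by kmod version):

```shell
$ modinfo ena
modinfo: ERROR: Module ena not found.
```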
In the above Ubuntu instance, the module is not installed, so you must first install it. For more
information, see Enable enhanced networking on Ubuntu (p. 1105).
To check whether an instance has the enhanced networking enaSupport attribute set, use one of the
following commands. If the attribute is set, the response is true.
To check whether an AMI has the enhanced networking enaSupport attribute set, use one of the
following commands. If the attribute is set, the response is true.
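With the AWS CLI, the checks might look like the following; the instance and AMI IDs are placeholders.

```shell
# Check the enaSupport attribute of a running instance
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].EnaSupport"

# Check the enaSupport attribute of an AMI
aws ec2 describe-images \
  --image-ids ami-0123456789abcdef0 \
  --query "Images[].EnaSupport"
```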
Use the following command to verify that the ena module is being used on a particular interface,
substituting the interface name that you want to check. If you are using a single interface (the default),
this is eth0. If the operating system supports predictable network names (p. 1107), this could be a name
like ens5.
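For the default interface, for example:

```shell
ethtool -i eth0
```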
In the following example, the ena module is not loaded, because the listed driver is vif.
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
In this example, the ena module is loaded and at the minimum recommended version. This instance has
enhanced networking properly configured.
If you launched your instance using an older Amazon Linux AMI and it does not have enhanced
networking enabled already, use the following procedure to enable enhanced networking.
3. From your local computer, reboot your instance using the Amazon EC2 console or one of the
following commands: reboot-instances (AWS CLI), Restart-EC2Instance (AWS Tools for Windows
PowerShell).
4. Connect to your instance again and verify that the ena module is installed and at the minimum
recommended version using the modinfo ena command from Test whether enhanced networking is
enabled (p. 1102).
5. [EBS-backed instance] From your local computer, stop the instance using the Amazon EC2 console
or one of the following commands: stop-instances (AWS CLI), Stop-EC2Instance (AWS Tools for
Windows PowerShell). If your instance is managed by AWS OpsWorks, you should stop the instance
in the AWS OpsWorks console so that the instance state remains in sync.
[Instance store-backed instance] You can't stop the instance to modify the attribute. Instead,
proceed to this procedure: To enable enhanced networking on Amazon Linux AMI (instance store-
backed instances) (p. 1105).
6. From your local computer, enable the enhanced networking attribute using one of the following
commands:
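With the AWS CLI, for example (the instance ID is a placeholder; run this while the instance is stopped):

```shell
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --ena-support
```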
7. (Optional) Create an AMI from the instance, as described in Create an Amazon EBS-backed Linux
AMI (p. 134). The AMI inherits the enhanced networking enaSupport attribute from the instance.
Therefore, you can use this AMI to launch another instance with enhanced networking enabled by
default.
8. From your local computer, start the instance using the Amazon EC2 console or one of the following
commands: start-instances (AWS CLI), Start-EC2Instance (AWS Tools for Windows PowerShell). If
your instance is managed by AWS OpsWorks, you should start the instance in the AWS OpsWorks
console so that the instance state remains in sync.
9. Connect to your instance and verify that the ena module is installed and loaded on your network
interface using the ethtool -i ethn command from Test whether enhanced networking is
enabled (p. 1102).
If you are unable to connect to your instance after enabling enhanced networking, see Troubleshoot
the Elastic Network Adapter (ENA) (p. 1120).
Follow the previous procedure until the step where you stop the instance. Create a new AMI as described
in Create an instance store-backed Linux AMI (p. 139), making sure to enable the enhanced networking
attribute when you register the AMI.
If you launched your instance using an older AMI and it does not have enhanced networking enabled
already, you can install the linux-aws kernel package to get the latest enhanced networking drivers
and update the required attribute.
Ubuntu 16.04 and 18.04 ship with the Ubuntu custom kernel (linux-aws kernel package). To use a
different kernel, contact AWS Support.
Important
If during the update process you are prompted to install grub, use /dev/xvda to install
grub onto, and then choose to keep the current version of /boot/grub/menu.lst.
3. [EBS-backed instance] From your local computer, stop the instance using the Amazon EC2 console
or one of the following commands: stop-instances (AWS CLI), Stop-EC2Instance (AWS Tools for
Windows PowerShell). If your instance is managed by AWS OpsWorks, you should stop the instance
in the AWS OpsWorks console so that the instance state remains in sync.
[Instance store-backed instance] You can't stop the instance to modify the attribute. Instead,
proceed to this procedure: To enable enhanced networking on Ubuntu (instance store-backed
instances) (p. 1106).
4. From your local computer, enable the enhanced networking attribute using one of the following
commands:
5. (Optional) Create an AMI from the instance, as described in Create an Amazon EBS-backed Linux
AMI (p. 134). The AMI inherits the enhanced networking enaSupport attribute from the instance.
Therefore, you can use this AMI to launch another instance with enhanced networking enabled by
default.
6. From your local computer, start the instance using the Amazon EC2 console or one of the following
commands: start-instances (AWS CLI), Start-EC2Instance (AWS Tools for Windows PowerShell). If
your instance is managed by AWS OpsWorks, you should start the instance in the AWS OpsWorks
console so that the instance state remains in sync.
Follow the previous procedure until the step where you stop the instance. Create a new AMI as described
in Create an instance store-backed Linux AMI (p. 139), making sure to enable the enhanced networking
attribute when you register the AMI.
If you launch an instance with the latest AMI on a supported instance type, enhanced networking is
already enabled for your instance. For more information, see Test whether enhanced networking is
enabled (p. 1102).
The following procedure provides the general steps for enabling enhanced networking on a Linux
distribution other than Amazon Linux AMI or Ubuntu. For more information, such as detailed syntax
for commands, file locations, or package and tool support, see the documentation for your Linux
distribution.
3. Compile and install the ena module on your instance. These steps depend on the Linux distribution.
For more information about compiling the module on Red Hat Enterprise Linux, see the AWS
Knowledge Center article.
4. Run the sudo depmod command to update module dependencies.
5. Update initramfs on your instance to ensure that the new module loads at boot time. For
example, if your distribution supports dracut, you can use the following command.
dracut -f -v
6. Determine if your system uses predictable network interface names by default. Systems that use
systemd or udev versions 197 or greater can rename Ethernet devices and they do not guarantee
that a single network interface will be named eth0. This behavior can cause problems connecting to
your instance. For more information and to see other configuration options, see Predictable Network
Interface Names on the freedesktop.org website.
a. You can check the systemd or udev versions on RPM-based systems with the following
command.
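One way to query the installed versions on an RPM-based system:

```shell
rpm -qa | grep -E '^(systemd|udev)-[0-9]'
```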
In the above Red Hat Enterprise Linux 7 example, the systemd version is 208, so predictable
network interface names must be disabled.
b. Disable predictable network interface names by adding the net.ifnames=0 option to the
GRUB_CMDLINE_LINUX line in /etc/default/grub.
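A sketch of that edit; after changing /etc/default/grub you must also regenerate the GRUB configuration, and the output path shown is typical for a BIOS-boot RPM-based system (it varies by distribution and boot mode).

```shell
# Append net.ifnames=0 to the GRUB_CMDLINE_LINUX value
sudo sed -i '/^GRUB_CMDLINE_LINUX=/s/"$/ net.ifnames=0"/' /etc/default/grub

# Regenerate the GRUB configuration
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```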
7. [EBS-backed instance] From your local computer, stop the instance using the Amazon EC2 console
or one of the following commands: stop-instances (AWS CLI), Stop-EC2Instance (AWS Tools for
Windows PowerShell). If your instance is managed by AWS OpsWorks, you should stop the instance
in the AWS OpsWorks console so that the instance state remains in sync.
[Instance store-backed instance] You can't stop the instance to modify the attribute. Instead,
proceed to this procedure: To enable enhanced networking on Linux (instance store–backed
instances) (p. 1108).
8. From your local computer, enable the enhanced networking enaSupport attribute using one of the
following commands:
9. (Optional) Create an AMI from the instance, as described in Create an Amazon EBS-backed Linux
AMI (p. 134) . The AMI inherits the enhanced networking enaSupport attribute from the instance.
Therefore, you can use this AMI to launch another instance with enhanced networking enabled by
default.
Important
If your instance operating system contains an /etc/udev/rules.d/70-persistent-
net.rules file, you must delete it before creating the AMI. This file contains the MAC
address for the Ethernet adapter of the original instance. If another instance boots with
this file, the operating system will be unable to find the device and eth0 might fail, causing
boot issues. This file is regenerated at the next boot cycle, and any instances launched from
the AMI create their own version of the file.
10. From your local computer, start the instance using the Amazon EC2 console or one of the following
commands: start-instances (AWS CLI), Start-EC2Instance (AWS Tools for Windows PowerShell). If
your instance is managed by AWS OpsWorks, you should start the instance in the AWS OpsWorks
console so that the instance state remains in sync.
11. (Optional) Connect to your instance and verify that the module is installed.
If you are unable to connect to your instance after enabling enhanced networking, see Troubleshoot
the Elastic Network Adapter (ENA) (p. 1120).
Follow the previous procedure until the step where you stop the instance. Create a new AMI as described
in Create an instance store-backed Linux AMI (p. 139), making sure to enable the enhanced networking
attribute when you register the AMI.
Important
Using DKMS voids the support agreement for your subscription. It should not be used for
production deployments.
3. Clone the source for the ena module on your instance from GitHub at https://github.com/amzn/
amzn-drivers.
4. Move the amzn-drivers package to the /usr/src/ directory so DKMS can find it and build it for
each kernel update. Append the version number of the source code to the directory name (you can
find the current version number in the release notes), as shown in the following example for
version 1.0.0.
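The example command itself was lost in extraction; a sketch, using the placeholder version 1.0.0, is:

```shell
sudo mv amzn-drivers /usr/src/amzn-drivers-1.0.0
```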
5. Create the DKMS configuration file with the following values, substituting your version of ena.
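The configuration listing was dropped here. A sketch of /usr/src/amzn-drivers-1.0.0/dkms.conf, based on the upstream amzn-drivers packaging (substitute your version number), is:

```
PACKAGE_NAME="ena"
PACKAGE_VERSION="1.0.0"
CLEAN="make -C kernel/linux/ena clean"
MAKE="make -C kernel/linux/ena/ BUILD_KERNEL=${kernelver}"
BUILT_MODULE_NAME[0]="ena"
BUILT_MODULE_LOCATION[0]="kernel/linux/ena"
DEST_MODULE_LOCATION[0]="/updates"
DEST_MODULE_NAME[0]="ena"
AUTOINSTALL="yes"
```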
6. Add, build, and install the ena module on your instance using DKMS.
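The DKMS commands were lost in extraction; with the placeholder version 1.0.0, they follow the standard dkms add/build/install sequence:

```shell
sudo dkms add -m amzn-drivers -v 1.0.0
sudo dkms build -m amzn-drivers -v 1.0.0
sudo dkms install -m amzn-drivers -v 1.0.0
```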
8. Verify that the ena module is installed using the modinfo ena command from Test whether
enhanced networking is enabled (p. 1102).
Troubleshoot
For troubleshooting information, see Troubleshoot the Elastic Network Adapter (ENA) (p. 1120).
Contents
• Requirements (p. 1111)
• Test whether enhanced networking is enabled (p. 1111)
• Enable enhanced networking on Amazon Linux (p. 1112)
• Enable enhanced networking on Ubuntu (p. 1114)
• Enable enhanced networking on other Linux distributions (p. 1114)
Requirements
To prepare for enhanced networking using the Intel 82599 VF interface, set up your instance as follows:
• Select from the following supported instance types: C3, C4, D2, I2, M4 (excluding m4.16xlarge), and
R3.
• Launch the instance from an HVM AMI with Linux kernel version 2.6.32 or later. The latest Amazon
Linux HVM AMIs have the modules required for enhanced networking installed and have the required
attributes set. Therefore, if you launch an Amazon EBS–backed, enhanced networking–supported
instance using a current Amazon Linux HVM AMI, enhanced networking is already enabled for your
instance.
Warning
Enhanced networking is supported only for HVM instances. Enabling enhanced networking
with a PV instance can make it unreachable. Setting this attribute without the proper module
or module version can also make your instance unreachable.
• Ensure that the instance has internet connectivity.
• Use AWS CloudShell from the AWS Management Console, or install and configure the AWS CLI or
the AWS Tools for Windows PowerShell on any computer you choose, preferably your local desktop
or laptop. For more information, see Access Amazon EC2 (p. 3) or the AWS CloudShell User Guide.
Enhanced networking cannot be managed from the Amazon EC2 console.
• If you have important data on the instance that you want to preserve, you should back that data
up now by creating an AMI from your instance. Updating kernels and kernel modules, as well as
enabling the sriovNetSupport attribute, might render incompatible instances or operating systems
unreachable. If you have a recent backup, your data will still be retained if this happens.
To check whether an instance has the enhanced networking sriovNetSupport attribute set, use one of
the following commands:
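The command listing was dropped here. The AWS CLI form, with a placeholder instance ID, is:

```shell
aws ec2 describe-instance-attribute --instance-id i-1234567890abcdef0 --attribute sriovNetSupport
```

With the AWS Tools for Windows PowerShell, use the Get-EC2InstanceAttribute cmdlet.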
If the attribute isn't set, SriovNetSupport is empty. If the attribute is set, the value is simple, as shown
in the following example output.
"SriovNetSupport": {
"Value": "simple"
},
To check whether an AMI already has the enhanced networking sriovNetSupport attribute set, use
one of the following commands:
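The command listing was dropped here. The AWS CLI form, with a placeholder AMI ID, is:

```shell
aws ec2 describe-image-attribute --image-id ami-1234567890abcdef0 --attribute sriovNetSupport
```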
If the attribute isn't set, SriovNetSupport is empty. If the attribute is set, the value is simple.
Use the following command to verify that the module is being used on a particular interface,
substituting the interface name that you want to check. If you are using a single interface (default), this
is eth0. If the operating system supports predictable network names (p. 1115), this could be a name like
ens5.
In the following example, the ixgbevf module is not loaded, because the listed driver is vif.
In this example, the ixgbevf module is loaded. This instance has enhanced networking properly
configured.
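The output listings for these two cases were lost in extraction. The check itself is a single ethtool call; illustrative output for the enabled case (the driver version and bus address shown are examples only) looks like:

```
$ ethtool -i eth0
driver: ixgbevf
version: 4.0.3
firmware-version:
bus-info: 0000:00:03.0
```

In the disabled case, the driver field reports vif instead of ixgbevf.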
If you launched your instance using an older Amazon Linux AMI and it does not have enhanced
networking enabled already, use the following procedure to enable enhanced networking.
Warning
There is no way to disable the enhanced networking attribute after you've enabled it.
3. From your local computer, reboot your instance using the Amazon EC2 console or one of the
following commands: reboot-instances (AWS CLI), Restart-EC2Instance (AWS Tools for Windows
PowerShell).
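The command block was dropped here; the AWS CLI form, with a placeholder instance ID, is:

```shell
aws ec2 reboot-instances --instance-ids i-1234567890abcdef0
```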
4. Connect to your instance again and verify that the ixgbevf module is installed and at the
minimum recommended version using the modinfo ixgbevf command from Test whether enhanced
networking is enabled (p. 1111).
5. [EBS-backed instance] From your local computer, stop the instance using the Amazon EC2 console
or one of the following commands: stop-instances (AWS CLI), Stop-EC2Instance (AWS Tools for
Windows PowerShell). If your instance is managed by AWS OpsWorks, you should stop the instance
in the AWS OpsWorks console so that the instance state remains in sync.
[Instance store-backed instance] You can't stop the instance to modify the attribute.
Instead, proceed to this procedure: To enable enhanced networking (instance store-backed
instances) (p. 1113).
6. From your local computer, enable the enhanced networking attribute using one of the following
commands:
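The command listing was lost in extraction. The AWS CLI form, with a placeholder instance ID, is:

```shell
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --sriov-net-support simple
```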
7. (Optional) Create an AMI from the instance, as described in Create an Amazon EBS-backed Linux
AMI (p. 134) . The AMI inherits the enhanced networking attribute from the instance. Therefore, you
can use this AMI to launch another instance with enhanced networking enabled by default.
8. From your local computer, start the instance using the Amazon EC2 console or one of the following
commands: start-instances (AWS CLI), Start-EC2Instance (AWS Tools for Windows PowerShell). If
your instance is managed by AWS OpsWorks, you should start the instance in the AWS OpsWorks
console so that the instance state remains in sync.
9. Connect to your instance and verify that the ixgbevf module is installed and loaded on your
network interface using the ethtool -i ethn command from Test whether enhanced networking is
enabled (p. 1111).
The Quick Start Ubuntu HVM AMIs include the necessary drivers for enhanced networking. If you have a
version of ixgbevf earlier than 2.16.4, you can install the linux-aws kernel package to get the latest
enhanced networking drivers.
The following procedure provides the general steps for compiling the ixgbevf module on an Ubuntu
instance.
Important
If during the update process, you are prompted to install grub, use /dev/xvda to install
grub, and then choose to keep the current version of /boot/grub/menu.lst.
The following procedure provides the general steps if you need to enable enhanced networking with
the Intel 82599 VF interface on a Linux distribution other than Amazon Linux or Ubuntu. For more
information, such as detailed syntax for commands, file locations, or package and tool support, see the
specific documentation for your Linux distribution.
Versions of ixgbevf earlier than 2.16.4, including version 2.14.2, do not build properly on some
Linux distributions, including certain versions of Ubuntu.
3. Compile and install the ixgbevf module on your instance.
Warning
If you compile the ixgbevf module for your current kernel and then upgrade your
kernel without rebuilding the driver for the new kernel, your system might revert to the
distribution-specific ixgbevf module at the next reboot. This could make your system
unreachable if the distribution-specific version is incompatible with enhanced networking.
4. Run the sudo depmod command to update module dependencies.
5. Update initramfs on your instance to ensure that the new module loads at boot time.
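The commands were dropped here; the usual tools are update-initramfs on Debian-based systems and dracut on RPM-based systems (a sketch, not the guide's exact listing):

```shell
# Ubuntu and other Debian-based systems
sudo update-initramfs -u -k all

# RPM-based systems such as RHEL and CentOS
sudo dracut -f -v
```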
6. Determine if your system uses predictable network interface names by default. Systems that use
systemd or udev version 197 or greater can rename Ethernet devices, and they do not guarantee
that a single network interface is named eth0. This behavior can cause problems connecting to
your instance. For more information and to see other configuration options, see Predictable Network
Interface Names on the freedesktop.org website.
a. You can check the systemd or udev versions on RPM-based systems with the following
command:
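The command and its output were lost in extraction. A sketch that matches the Red Hat Enterprise Linux 7 example discussed next (the package version shown is illustrative):

```
$ rpm -qa | grep -e '^systemd-[0-9]\+\|^udev-[0-9]\+'
systemd-208-11.el7.x86_64
```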
In the above Red Hat Enterprise Linux 7 example, the systemd version is 208, so predictable
network interface names must be disabled.
b. Disable predictable network interface names by adding the net.ifnames=0 option to the
GRUB_CMDLINE_LINUX line in /etc/default/grub.
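The example configuration line was dropped. A sketch of the edited line in /etc/default/grub (the other kernel parameters shown are illustrative):

```
GRUB_CMDLINE_LINUX="console=tty0 crashkernel=auto console=ttyS0,115200 net.ifnames=0"
```

On many RPM-based systems you then regenerate the GRUB configuration, typically with grub2-mkconfig.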
7. [EBS-backed instance] From your local computer, stop the instance using the Amazon EC2 console or
one of the following commands: stop-instances (AWS CLI/AWS CloudShell), Stop-EC2Instance (AWS
Tools for Windows PowerShell). If your instance is managed by AWS OpsWorks, you should stop the
instance in the AWS OpsWorks console so that the instance state remains in sync.
[Instance store-backed instance] You can't stop the instance to modify the attribute.
Instead, proceed to this procedure: To enable enhanced networking (instance store–backed
instances) (p. 1116).
8. From your local computer, enable the enhanced networking attribute using one of the following
commands:
9. (Optional) Create an AMI from the instance, as described in Create an Amazon EBS-backed Linux
AMI (p. 134) . The AMI inherits the enhanced networking attribute from the instance. Therefore, you
can use this AMI to launch another instance with enhanced networking enabled by default.
Important
If your instance operating system contains an /etc/udev/rules.d/70-persistent-
net.rules file, you must delete it before creating the AMI. This file contains the MAC
address for the Ethernet adapter of the original instance. If another instance boots with
this file, the operating system will be unable to find the device and eth0 might fail, causing
boot issues. This file is regenerated at the next boot cycle, and any instances launched from
the AMI create their own version of the file.
10. From your local computer, start the instance using the Amazon EC2 console or one of the following
commands: start-instances (AWS CLI), Start-EC2Instance (AWS Tools for Windows PowerShell). If
your instance is managed by AWS OpsWorks, you should start the instance in the AWS OpsWorks
console so that the instance state remains in sync.
11. (Optional) Connect to your instance and verify that the module is installed.
Follow the previous procedure until the step where you stop the instance. Create a new AMI as described
in Create an instance store-backed Linux AMI (p. 139), making sure to enable the enhanced networking
attribute when you register the AMI.
If you enable enhanced networking for a PV instance or AMI, this can make your instance unreachable.
For more information, see How do I enable and configure enhanced networking on my EC2 instances?.
Amazon EC2 defines network maximums at the instance level to ensure a high-quality networking
experience, including consistent network performance across instance sizes. AWS provides maximums for
the following for each instance:
• Bandwidth capability – Each EC2 instance has a maximum bandwidth for aggregate inbound and
outbound traffic, based on instance type and size. Some instances use a network I/O credit mechanism
to allocate network bandwidth based on average bandwidth utilization. Amazon EC2 also has
maximum bandwidth for traffic to AWS Direct Connect and the internet.
• Packet-per-second (PPS) performance – Each EC2 instance has a maximum PPS performance, based
on instance type and size.
• Connections tracked – The security group tracks each connection established to ensure that return
packets are delivered as expected. There is a maximum number of connections that can be tracked per
instance.
• Link-local service access – Amazon EC2 provides a maximum PPS per network interface for traffic to
services such as the DNS service, the Instance Metadata Service, and the Amazon Time Sync Service.
When the network traffic for an instance exceeds a maximum, AWS shapes the traffic that exceeds the
maximum by queueing and then dropping network packets. You can monitor when traffic exceeds a
maximum using the network performance metrics. These metrics inform you, in real time, of impact to
network traffic and possible network performance issues.
Contents
• Requirements (p. 1117)
• Metrics for the ENA driver (p. 1117)
• View the network performance metrics for your Linux instance (p. 1118)
• Network performance metrics with the DPDK driver for ENA (p. 1118)
• Metrics on instances running FreeBSD (p. 1119)
Requirements
The following requirements apply to Linux instances.
• Install ENA driver version 2.2.10 or later. To verify the installed version, use the ethtool command. In
the following example, the version meets the minimum requirement.
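The example output was lost in extraction; an illustrative ethtool check looks like the following (the bus address shown is an example):

```
$ ethtool -i eth0
driver: ena
version: 2.2.10
firmware-version:
bus-info: 0000:00:05.0
```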
The following metrics are available on Linux instances, FreeBSD instances, and DPDK environments.
Metric Description
linklocal_allowance_exceeded
The number of packets dropped because the PPS of traffic to local proxy services exceeded the
maximum for the network interface. This impacts traffic to the DNS service, the Instance
Metadata Service, and the Amazon Time Sync Service.
You can also use ethtool to retrieve the metrics for each network interface, such as eth0, as follows.
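The command listing was dropped here; the call is simply:

```shell
ethtool -S eth0
```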
You can use an example application to view DPDK statistics. To start an interactive version of the
example application, run the following command.
./app/dpdk-testpmd -- -i
Within this interactive session, you can enter a command to retrieve extended statistics for a port. The
following example command retrieves the statistics for port 0.
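The command itself was lost in extraction; within the testpmd interactive session, the extended statistics for port 0 are retrieved with:

```
testpmd> show port xstats 0
```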
The following is an example of an interactive session with the DPDK example application.
For more information about the example application and using it to retrieve extended statistics, see
the Testpmd Application User Guide in the DPDK documentation.
sysctl dev.ena.network_interface.eni_metrics.sample_interval=interval
For example, the following command sets the driver to collect FreeBSD metrics on network interface 1
every 10 seconds:
sysctl dev.ena.1.eni_metrics.sample_interval=10
To turn off the collection of FreeBSD metrics, you can run the preceding command and specify 0 as the
interval.
Once you are collecting FreeBSD metrics, you can retrieve the latest set of collected metrics by running
the following command.
sysctl dev.ena.network_interface.eni_metrics
If you are unable to connect to your instance, start with the Troubleshoot connectivity issues (p. 1120)
section.
If you are able to connect to your instance, you can gather diagnostic information by using the failure
detection and recovery mechanisms that are covered in the later sections of this topic.
Contents
• Troubleshoot connectivity issues (p. 1120)
• Keep-alive mechanism (p. 1121)
• Register read timeout (p. 1122)
• Statistics (p. 1122)
• Driver error logs in syslog (p. 1127)
If you enable enhanced networking for a PV instance or AMI, this can also make your instance
unreachable.
If your instance becomes unreachable after enabling enhanced networking with ENA, you can disable the
enaSupport attribute for your instance and it will fall back to the stock network adapter.
1. From your local computer, stop the instance using the Amazon EC2 console or one of the following
commands: stop-instances (AWS CLI), Stop-EC2Instance (AWS Tools for Windows PowerShell). If your
instance is managed by AWS OpsWorks, you should stop the instance in the AWS OpsWorks console
so that the instance state remains in sync.
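The command block was dropped here; the AWS CLI form, with a placeholder instance ID, is:

```shell
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
```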
Important
If you are using an instance store-backed instance, you can't stop the instance.
Instead, proceed to To disable enhanced networking with ENA (instance store-backed
instances) (p. 1121).
2. From your local computer, disable the enhanced networking attribute using the following command.
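The command itself was lost in extraction. A sketch of the AWS CLI form, with a placeholder instance ID:

```shell
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --no-ena-support
```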
3. From your local computer, start the instance using the Amazon EC2 console or one of the following
commands: start-instances (AWS CLI), Start-EC2Instance (AWS Tools for Windows PowerShell). If
your instance is managed by AWS OpsWorks, you should start the instance in the AWS OpsWorks
console so that the instance state remains in sync.
4. (Optional) Connect to your instance and try reinstalling the ena module with your current kernel
version by following the steps in Enable enhanced networking with the Elastic Network Adapter
(ENA) on Linux instances (p. 1101).
If your instance is an instance store-backed instance, create a new AMI as described in Create an instance
store-backed Linux AMI (p. 139). Be sure to disable the enhanced networking enaSupport attribute
when you register the AMI.
Keep-alive mechanism
The ENA device posts keep-alive events at a fixed rate (usually once every second). The ENA driver
implements a watchdog mechanism, which checks for the presence of these keep-alive messages. If a
message or messages are present, the watchdog is rearmed; otherwise, the driver concludes that the
device experienced a failure and then does the following:
The above reset procedure may result in some traffic loss for a short period of time (TCP connections
should be able to recover), but should not otherwise affect the user.
The ENA device may also indirectly request a device reset by not sending a keep-alive
notification, for example, if the ENA device reaches an unknown state after loading an irrecoverable
configuration.
[18509.800135] ena 0000:00:07.0 eth1: Keep alive watchdog timeout. // The watchdog process
initiates a reset
If the driver logs (available in dmesg output) indicate failures of read operations, this may be caused by
an incompatible or incorrectly compiled driver, a busy hardware device, or hardware failure.
Intermittent log entries that indicate failures on read operations should not be considered an issue; the
driver retries them in this case. However, a sequence of log entries containing read failures indicates a
driver or hardware problem.
The following is an example of a driver log entry that indicates a read operation failure due to a timeout:
Statistics
If you experience insufficient network performance or latency issues, you should retrieve the device
statistics and examine them. These statistics can be obtained using ethtool as follows.
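The command listing was dropped here. A sketch that also filters for the counters most often relevant to performance issues (the grep filter is a convenience, not part of the guide):

```shell
ethtool -S eth0 | grep -E 'exceeded|timeout|err'
```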
tx_timeout: N
The number of times that the driver did not receive the keep-alive event in the preceding three
seconds.
interface_up: N
The number of times that the ENA interface was brought up.
interface_down: N
The number of times that the ENA interface was brought down.
admin_q_pause: N
The number of times the admin queue was not found in a running state.
bw_in_allowance_exceeded: N
The number of rx packets dropped because the bandwidth allowance limit was exceeded.
bw_out_allowance_exceeded: N
The number of tx packets dropped because the bandwidth allowance limit was exceeded.
pps_allowance_exceeded: N
The number of packets dropped because the pps (packets per second) allowance limit was exceeded.
conntrack_allowance_exceeded: N
The number of packets dropped because the connection count allowance limit was exceeded.
linklocal_allowance_exceeded: N
The number of proxy packets dropped because the pps (packets per second) allowance limit was
exceeded.
queue_N_tx_dma_mapping_err: N
Direct memory access error count. If this value is not 0, it indicates low system resources.
queue_N_tx_linearize: N
The number of times SKB linearization was attempted for this queue.
queue_N_tx_napi_comp: N
The number of times the napi handler called napi_complete for this queue.
queue_N_tx_tx_poll: N
The number of times the napi handler was scheduled for this queue.
queue_N_tx_bad_req_id: N
Invalid req_id for this queue. The valid req_id is from [0, queue_size - 1].
queue_N_tx_llq_buffer_copy: N
The number of packets whose header size is larger than the llq entry for this queue.
queue_N_tx_missed_tx: N
The number of packets that were left uncompleted for this queue.
queue_N_tx_unmask_interrupt: N
The number of times the tx interrupt was unmasked for this queue.
queue_N_rx_cnt: N
queue_N_rx_rx_copybreak_pkt: N
The number of times the rx queue received a packet that is less than the rx_copybreak packet size
for this queue.
queue_N_rx_csum_good: N
The number of times the rx queue received a packet where the checksum was checked and was
correct for this queue.
queue_N_rx_refil_partial: N
The number of times the driver did not succeed in refilling the empty portion of the rx queue with
the buffers for this queue. If this value is not zero, it indicates low memory resources.
queue_N_rx_bad_csum: N
The number of times the rx queue had a bad checksum for this queue (only if rx checksum offload is
supported).
queue_N_rx_page_alloc_fail: N
The number of times that page allocation failed for this queue. If this value is not zero, it indicates
low memory resources.
queue_N_rx_skb_alloc_fail: N
The number of times that SKB allocation failed for this queue. If this value is not zero, it indicates low
system resources.
queue_N_rx_dma_mapping_err: N
Direct memory access error count. If this value is not 0, it indicates low system resources.
queue_N_rx_bad_desc_num: N
Too many buffers per packet. If this value is not 0, it indicates the use of very small buffers.
queue_N_rx_bad_req_id: N
The req_id for this queue is not valid. The valid req_id is from [0, queue_size - 1].
queue_N_rx_empty_rx_ring: N
The number of times the rx queue was empty for this queue.
queue_N_rx_csum_unchecked: N
The number of times the rx queue received a packet whose checksum wasn't checked for this queue.
queue_N_rx_xdp_aborted: N
queue_N_rx_xdp_invalid: N
The number of times that the XDP return code for the packet was not valid.
queue_N_xdp_tx_queue_stop: N
The number of times that this queue was full and stopped.
queue_N_xdp_tx_queue_wakeup: N
The number of times that this queue resumed after being stopped.
queue_N_xdp_tx_dma_mapping_err: N
Direct memory access error count. If this value is not 0, it indicates low system resources.
queue_N_xdp_tx_linearize: N
The number of times XDP buffer linearization was attempted for this queue.
queue_N_xdp_tx_linearize_failed: N
The number of times XDP buffer linearization failed for this queue.
queue_N_xdp_tx_napi_comp: N
The number of times the napi handler called napi_complete for this queue.
queue_N_xdp_tx_tx_poll: N
The number of times the napi handler was scheduled for this queue.
queue_N_xdp_tx_prepare_ctx_err: N
The number of times ena_com_prepare_tx failed for this queue. This value should always be zero; if
not, see the driver logs.
queue_N_xdp_tx_bad_req_id: N
The req_id for this queue is not valid. The valid req_id is from [0, queue_size - 1 ].
queue_N_xdp_tx_llq_buffer_copy: N
The number of packets that had their headers copied using llq buffer copy for this queue.
queue_N_xdp_tx_missed_tx: N
The number of times a tx queue entry missed a completion timeout for this queue.
queue_N_xdp_tx_unmask_interrupt: N
The number of times the tx interrupt was unmasked for this queue.
ena_admin_q_aborted_cmd: N
The number of admin commands that were aborted. This usually happens during the auto-recovery
procedure.
ena_admin_q_out_of_space: N
The number of times that the driver tried to submit a new admin command, but the queue was full.
ena_admin_q_no_completion: N
The number of times that the driver did not get an admin completion for a command.
The following warnings that may appear in your system's error logs can be ignored for the Elastic
Network Adapter:
This is a recoverable error, and it indicates that there may have been a memory pressure issue when
the error was thrown.
Feature X isn't supported
The referenced feature is not supported by the Elastic Network Adapter. Possible values for X
include:
• 10: RSS Hash function configuration is not supported for this device.
• 12: RSS Indirection table configuration is not supported for this device.
• 18: RSS Hash Input configuration is not supported for this device.
• 20: Interrupt moderation is not supported for this device.
• 27: The Elastic Network Adapter driver does not support polling the Ethernet capabilities from
snmpd.
Failed to config AENQ
This error indicates an attempt to set an AENQ events group that is not supported by the Elastic
Network Adapter.
EFA provides lower and more consistent latency and higher throughput than the TCP transport
traditionally used in cloud-based HPC systems. It enhances the performance of inter-instance
communication that is critical for scaling HPC and machine learning applications. It is optimized to work
on the existing AWS network infrastructure and it can scale depending on application requirements.
EFA integrates with Libfabric 1.7.0 and later and it supports Open MPI 3.1.3 and later and Intel MPI
2019 Update 5 and later for HPC applications, and Nvidia Collective Communications Library (NCCL) for
machine learning applications.
Note
The OS-bypass capabilities of EFAs are not supported on Windows instances. If you attach an
EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the
added EFA capabilities.
Contents
• EFA basics (p. 1129)
• Supported interfaces and libraries (p. 1130)
• Supported instance types (p. 1130)
• Supported AMIs (p. 1131)
• EFA limitations (p. 1131)
• Get started with EFA and MPI (p. 1131)
• Get started with EFA and NCCL (p. 1139)
• Work with EFA (p. 1161)
EFA basics
An EFA is an Elastic Network Adapter (ENA) with added capabilities. It provides all of the functionality of
an ENA, with an additional OS-bypass functionality. OS-bypass is an access model that allows HPC and
machine learning applications to communicate directly with the network interface hardware to provide
low-latency, reliable transport functionality.
Traditionally, HPC applications use the Message Passing Interface (MPI) to interface with the system’s
network transport. In the AWS Cloud, this has meant that applications interface with MPI, which then
uses the operating system's TCP/IP stack and the ENA device driver to enable network communication
between instances.
With an EFA, HPC applications use MPI or NCCL to interface with the Libfabric API. The Libfabric API
bypasses the operating system kernel and communicates directly with the EFA device to put packets on
the network. This reduces overhead and enables the HPC application to run more efficiently.
Note
Libfabric is a core component of the OpenFabrics Interfaces (OFI) framework, which defines and
exports the user-space API of OFI. For more information, see the Libfabric OpenFabrics website.
The available instance types vary by Region. To see the available instance types that support EFA in
a Region, use the describe-instance-types command with the --region option and the appropriate
Region code.
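The example command was lost in extraction. A sketch using the AWS CLI (the Region and the network-info.efa-supported filter are assumptions based on the public CLI reference):

```shell
aws ec2 describe-instance-types --region us-east-1 \
    --filters Name=network-info.efa-supported,Values=true \
    --query "InstanceTypes[*].[InstanceType]" --output text | sort
```

The instance types listed next are the kind of output this returns.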
c5n.18xlarge
c5n.9xlarge
c5n.metal
c6gn.16xlarge
c6i.32xlarge
c6i.metal
dl1.24xlarge
g4dn.12xlarge
g4dn.8xlarge
g4dn.metal
g5.48xlarge
...
Supported AMIs
The following AMIs support EFA with Intel x86-based instance types:
• Amazon Linux 2
• CentOS 7 and 8
• RHEL 7 and 8
• Ubuntu 18.04 and 20.04
• SUSE Linux Enterprise 15 SP2 and later
• openSUSE Leap 15.2 and later
The following AMIs support EFA with Arm-based (Graviton 2) instance types:
• Amazon Linux 2
• CentOS 8
• RHEL 8
• Ubuntu 18.04 and 20.04
• SUSE Linux Enterprise 15 SP2 and later
EFA limitations
EFA has the following limitations:
• p4d.24xlarge instances support up to four EFAs. All other supported instance types support only
one EFA per instance.
• EFA OS-bypass traffic is limited to a single subnet. In other words, EFA traffic cannot be sent from one
subnet to another. Normal IP traffic from the EFA can be sent from one subnet to another.
• EFA OS-bypass traffic is not routable. Normal IP traffic from the EFA remains routable.
• The EFA must be a member of a security group that allows all inbound and outbound traffic to and
from the security group itself.
• EFA traffic between C6gn instances and other EFA-enabled instances is not supported.
Contents
• Step 1: Prepare an EFA-enabled security group (p. 1132)
• Step 2: Launch a temporary instance (p. 1132)
• Step 3: Install the EFA software (p. 1133)
• Step 4: Disable ptrace protection (p. 1135)
• Step 5: (Optional) Install Intel MPI (p. 1136)
• Step 6: Install your HPC application (p. 1137)
• Step 7: Create an EFA-enabled AMI (p. 1137)
• Step 8: Launch EFA-enabled instances into a cluster placement group (p. 1137)
• Step 9: Terminate the temporary instance (p. 1138)
a. For Security group name, enter a descriptive name for the security group, such as EFA-
enabled security group.
b. (Optional) For Description, enter a brief description of the security group.
c. For VPC, select the VPC into which you intend to launch your EFA-enabled instances.
d. Choose Create.
4. Select the security group that you created, and on the Description tab, copy the Group ID.
5. On the Inbound tab, do the following:
a. Choose Edit.
b. For Type, choose All traffic.
c. For Source, choose Custom and paste the security group ID that you copied into the field.
d. Choose Save.
6. On the Outbound tab, do the following:
a. Choose Edit.
b. For Type, choose All traffic.
c. For Destination, choose Custom and paste the security group ID that you copied into the field.
d. Choose Save.
6. On the Add Storage page, specify the volumes to attach to the instances in addition to the volumes
that are specified by the AMI (such as the root device volume). Then choose Next: Add Tags.
7. On the Add Tags page, specify a tag that you can use to identify the temporary instance, and then
choose Next: Configure Security Group.
8. On the Configure Security Group page, for Assign a security group, select Select an existing
security group, and then select the security group that you created in Step 1.
9. On the Review Instance Launch page, review the settings, and then choose Launch to choose a key
pair and to launch your instance.
The steps differ depending on whether you intend to use EFA with Open MPI, with Intel MPI, or with
Open MPI and Intel MPI.
1. Connect to the instance you launched. For more information, see Connect to your Linux
instance (p. 596).
2. To ensure that all of your software packages are up to date, perform a quick software update on
your instance. This process may take a few minutes.
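The update command was dropped here; on Amazon Linux 2, for example:

```shell
sudo yum update -y
```

On Debian-based AMIs, use the equivalent apt-get update and upgrade commands.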
3. Download the EFA software installation files. The software installation files are packaged into
a compressed tarball (.tar.gz) file. To download the latest stable version, use the following
command.
$ curl -O https://efa-installer.amazonaws.com/aws-efa-installer-1.14.1.tar.gz
You can also get the latest version by replacing the version number with latest in the preceding
command.
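The pinned-version and latest-version download options can be captured in a single variable so that provisioning scripts can switch between them. This is a sketch only; the version string 1.14.1 is simply the example used above, and you should confirm the URL against the current documentation before use.

```shell
# Build the EFA installer URL from a version variable; "latest" is also
# accepted by the download site, per the note above. Sketch only.
EFA_INSTALLER_VERSION="1.14.1"   # or: EFA_INSTALLER_VERSION="latest"
EFA_INSTALLER_URL="https://efa-installer.amazonaws.com/aws-efa-installer-${EFA_INSTALLER_VERSION}.tar.gz"
echo "$EFA_INSTALLER_URL"
# Download with: curl -O "$EFA_INSTALLER_URL"
```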
4. (Optional) Verify the authenticity and integrity of the EFA tarball (.tar.gz) file. We recommend
that you do this to verify the identity of the software publisher and to check that the file has not
been altered or corrupted since it was published. If you do not want to verify the tarball file, skip this
step.
Note
Alternatively, if you prefer to verify the tarball file by using an MD5 or SHA256 checksum
instead, see Verify the EFA installer using a checksum (p. 1164).
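The checksum route mentioned in the note works as follows. The published checksum values come from AWS and are not reproduced here; the sketch below verifies a locally created stand-in file so that the mechanics are runnable anywhere.

```shell
# Mechanics of SHA256 verification with sha256sum, demonstrated on a
# stand-in file (the real .tar.gz and its published checksum come from AWS).
printf 'example payload' > aws-efa-installer-demo.tar.gz
sha256sum aws-efa-installer-demo.tar.gz > demo.sha256

# Verification succeeds only if the file still matches the recorded checksum.
sha256sum -c demo.sha256
```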
a. Download the public GPG key and import it into your keyring.
The command should return a key value. Make a note of the key value, because you need it in
the next step.
b. Verify the GPG key's fingerprint. Run the following command and specify the key value from the
previous step.
The command should return a fingerprint that is identical to 4E90 91BC BB97 A96B 26B1
5E59 A054 80B1 DD2D 3CCC. If the fingerprint does not match, don't run the EFA installation
script, and contact AWS Support.
c. Download the signature file and verify the signature of the EFA tarball file.
gpg: Signature made Wed 29 Jul 2020 12:50:13 AM UTC using RSA key ID DD2D3CCC
gpg: Good signature from "Amazon EC2 EFA <[email protected]>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 4E90 91BC BB97 A96B 26B1 5E59 A054 80B1 DD2D 3CCC
If the result includes Good signature, and the fingerprint matches the fingerprint returned
in the previous step, proceed to the next step. If not, don't run the EFA installation script, and
contact AWS Support.
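The fingerprint comparison in the preceding steps is a spacing-insensitive string match; a small helper (hypothetical, not part of the AWS tooling) makes the check scriptable so that an automated install can abort on a mismatch.

```shell
# Hypothetical helper: compare a gpg-reported fingerprint against the
# expected value, ignoring the spacing that gpg inserts for readability.
EXPECTED_FPR="4E9091BCBB97A96B26B15E59A05480B1DD2D3CCC"

check_fingerprint() {
    [ "$(printf '%s' "$1" | tr -d ' ')" = "$EXPECTED_FPR" ]
}

# The fingerprint exactly as gpg prints it:
if check_fingerprint "4E90 91BC BB97 A96B 26B1 5E59 A054 80B1 DD2D 3CCC"; then
    echo "fingerprint matches"
else
    echo "fingerprint MISMATCH - do not run the installer"
fi
```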
5. Extract the files from the compressed .tar.gz file and navigate into the extracted directory.
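The commands for this step are not shown above. On current releases the tarball unpacks into a directory named aws-efa-installer; this is an assumption worth re-checking against the file you downloaded. The sketch below builds a stand-in tarball first so it runs anywhere.

```shell
# Stand-in tarball so the extraction step is demonstrable offline
# (the real file comes from the curl download in the earlier step).
mkdir -p aws-efa-installer && touch aws-efa-installer/efa_installer.sh
tar -czf aws-efa-installer-1.14.1.tar.gz aws-efa-installer
rm -r aws-efa-installer

# The extraction commands: extract, then enter the directory.
tar -xf aws-efa-installer-1.14.1.tar.gz
cd aws-efa-installer
ls efa_installer.sh
```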
6. Install the EFA software. Do one of the following depending on your use case.
Note
If you are using a SUSE Linux operating system, you must additionally specify the --skip-kmod
option to prevent kmod installation. By default, SUSE Linux does not allow out-of-tree kernel
modules. As a result, EFA and NVIDIA GPUDirect are not currently supported with SUSE Linux.
• Open MPI and Intel MPI
If you intend to use EFA with Open MPI and Intel MPI, you must install the EFA software with
Libfabric and Open MPI, and you must complete Step 5: (Optional) Install Intel MPI. To install the
EFA software with Libfabric and Open MPI, run the following command.
$ sudo ./efa_installer.sh -y
Libfabric is installed in the /opt/amazon/efa directory, while Open MPI is installed in the
/opt/amazon/openmpi directory.
• Open MPI only
If you intend to use EFA with Open MPI only, you must install the EFA software with Libfabric and
Open MPI, and you can skip Step 5: (Optional) Install Intel MPI. To install the EFA software with
Libfabric and Open MPI, run the following command.
$ sudo ./efa_installer.sh -y
Libfabric is installed in the /opt/amazon/efa directory, while Open MPI is installed in the
/opt/amazon/openmpi directory.
• Intel MPI only
If you intend to use EFA with Intel MPI only, you can install the EFA software without Libfabric and
Open MPI. In this case, Intel MPI uses its embedded Libfabric. If you choose to do this, you must
complete Step 5: (Optional) Install Intel MPI.
To install the EFA software without Libfabric and Open MPI, run the following command.
7. If the EFA installer prompts you to reboot the instance, do so and then reconnect to the instance.
Otherwise, log out of the instance and then log back in to complete the installation.
8. Confirm that the EFA software components were successfully installed.
The command should return information about the Libfabric EFA interfaces. The following example
shows the command output.
provider: efa
fabric: EFA-fe80::94:3dff:fe89:1b70
domain: efa_0-rdm
version: 2.0
type: FI_EP_RDM
protocol: FI_PROTO_EFA
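The command that produces the output above is elided from the text. On EFA installs it is typically the Libfabric fi_info utility; the path below is an assumption based on the installer's Libfabric install location, so verify it on your instance. The sketch keeps the command in a variable so it is runnable without an EFA instance; on the instance, execute the command directly.

```shell
# Assumed confirmation command (the EFA installer places Libfabric under
# /opt/amazon/efa). On the instance, run the command itself rather than echo.
FI_CHECK="/opt/amazon/efa/bin/fi_info -p efa -t FI_EP_RDM"
echo "$FI_CHECK"
```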
The shared memory feature uses Cross Memory Attach (CMA), which is not supported with ptrace
protection. If you are using a Linux distribution that has ptrace protection enabled by default, such
as Ubuntu, you must disable it. If your Linux distribution does not have ptrace protection enabled by
default, skip this step.
• To temporarily disable ptrace protection for testing purposes, run the following command.
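The command itself is elided above. On distributions that use the Yama security module, such as Ubuntu, ptrace protection is controlled by the kernel.yama.ptrace_scope sysctl (an assumption to verify for your distribution). The temporary form is `sudo sysctl -w kernel.yama.ptrace_scope=0`; the persistent form writes a sysctl fragment. The sketch below targets a temporary directory so it runs without root; on the instance, write the fragment to /etc/sysctl.d instead.

```shell
# Persistent variant of disabling ptrace protection (Yama assumption).
# Demo writes to a temp dir; on the instance use /etc/sysctl.d/10-ptrace.conf.
SYSCTL_DIR="$(mktemp -d)"
echo 'kernel.yama.ptrace_scope = 0' > "$SYSCTL_DIR/10-ptrace.conf"
cat "$SYSCTL_DIR/10-ptrace.conf"
```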
Prerequisites
Ensure that the user performing the following steps has sudo permissions.
1. To download the Intel MPI installation files, see the Intel Developer Zone website.
You must register before you can download the installation files. After you have registered, do the
following:
$ cd directory_name
3. Open silent.cfg using your preferred text editor. In line 10, change ACCEPT_EULA=decline to
ACCEPT_EULA=accept. Save the changes and close the file.
4. Run the installation script.
source /opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpivars.sh
• For csh and tcsh, add the following environment variable to /home/username/.cshrc.
source /opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpivars.csh
$ which mpicc
Note
If you no longer want to use Intel MPI, remove the environment variables from the shell startup
scripts.
4. On the Choose an Instance Type page, select one of the supported instance types (p. 1130) and
then choose Next: Configure Instance Details.
5. On the Configure Instance Details page, do the following:
a. For Number of instances, enter the number of EFA-enabled instances that you want to launch.
b. For Network and Subnet, select the VPC and subnet into which to launch the instances.
c. For Placement group, select Add instance to placement group.
d. For Placement group name, select Add to a new placement group, enter a descriptive name
for the placement group, and then for Placement group strategy, select cluster.
e. For EFA, choose Enable.
f. In the Network Interfaces section, for device eth0, choose New network interface. You can
optionally specify a primary IPv4 address and one or more secondary IPv4 addresses. If you're
launching the instance into a subnet that has an associated IPv6 CIDR block, you can optionally
specify a primary IPv6 address and one or more secondary IPv6 addresses.
g. Choose Next: Add Storage.
6. On the Add Storage page, specify the volumes to attach to the instances in addition to the volumes
specified by the AMI (such as the root device volume), and then choose Next: Add Tags.
7. On the Add Tags page, specify tags for the instances, such as a user-friendly name, and then choose
Next: Configure Security Group.
8. On the Configure Security Group page, for Assign a security group, select Select an existing
security group, and then select the security group that you created in Step 1.
9. Choose Review and Launch.
10. On the Review Instance Launch page, review the settings, and then choose Launch to choose a key
pair and to launch your instances.
1. Select one instance in the cluster as the leader node, and connect to it.
2. Disable StrictHostKeyChecking and enable ForwardAgent on the leader node. Open
~/.ssh/config using your preferred text editor and add the following.
Host *
    ForwardAgent yes
Host *
    StrictHostKeyChecking no
5. Open ~/.ssh/id_rsa.pub using your preferred text editor and copy the key.
6. For each member node in the cluster, do the following:
$ ssh member_node_private_ip
You should connect to the member node without being prompted for a key or password.
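The key-generation and distribution steps between editing ~/.ssh/config and copying ~/.ssh/id_rsa.pub are elided above. A hedged sketch of what they presumably involve, run against a scratch directory so it is safe to execute anywhere; on the leader node the key pair would live at ~/.ssh/id_rsa.

```shell
# Generate a passphrase-less RSA key pair in a scratch directory
# (stand-in for ~/.ssh on the leader node).
DEMO_SSH_DIR="$(mktemp -d)"
ssh-keygen -q -t rsa -N '' -f "$DEMO_SSH_DIR/id_rsa"

# On each member node, the public key is appended to ~/.ssh/authorized_keys,
# which is what makes the final ssh connection passwordless.
cat "$DEMO_SSH_DIR/id_rsa.pub" >> "$DEMO_SSH_DIR/authorized_keys"
ls "$DEMO_SSH_DIR"
```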
• NCCL with EFA is supported with p3dn.24xlarge and p4d.24xlarge instances only.
• Only NCCL 2.4.2 and later is supported with EFA.
The following tutorials help you to launch an EFA and NCCL-enabled instance cluster for machine
learning workloads.
• Only Amazon Linux 2, RHEL 7/8, CentOS 7/8, and Ubuntu 18.04 base AMIs are supported.
Contents
• Step 1: Prepare an EFA-enabled security group (p. 1140)
• Step 2: Launch a temporary instance (p. 1141)
• Step 3: Install Nvidia GPU drivers, Nvidia CUDA toolkit, and cuDNN (p. 1141)
• Step 4: Install the EFA software (p. 1149)
• Step 5: Install NCCL (p. 1151)
• Step 6: Install the aws-ofi-nccl plugin (p. 1151)
• Step 7: Install the NCCL tests (p. 1152)
• Step 8: Test your EFA and NCCL configuration (p. 1153)
• Step 9: Install your machine learning applications (p. 1154)
• Step 10: Create an EFA and NCCL-enabled AMI (p. 1154)
• Step 11: Terminate the temporary instance (p. 1154)
• Step 12: Launch EFA and NCCL-enabled instances into a cluster placement group (p. 1155)
• Step 13: Enable passwordless SSH (p. 1155)
a. For Security group name, enter a descriptive name for the security group, such as EFA-
enabled security group.
b. (Optional) For Description, enter a brief description of the security group.
c. For VPC, select the VPC into which you intend to launch your EFA-enabled instances.
d. Choose Create.
4. Select the security group that you created, and on the Description tab, copy the Group ID.
5. On the Inbound tab, do the following:
a. Choose Edit.
b. For Type, choose All traffic.
c. For Source, choose Custom and paste the security group ID that you copied into the field.
d. Choose Save.
6. On the Outbound tab, do the following:
a. Choose Edit.
b. For Type, choose All traffic.
c. For Destination, choose Custom and paste the security group ID that you copied into the field.
d. Choose Save.
Step 3: Install Nvidia GPU drivers, Nvidia CUDA toolkit, and cuDNN
Amazon Linux 2
To install the Nvidia GPU drivers, Nvidia CUDA toolkit, and cuDNN
1. Install the utilities that are needed to install the Nvidia GPU drivers and the Nvidia CUDA toolkit.
2. To use the Nvidia GPU driver, you must first disable the nouveau open source drivers.
a. Install the required utilities and the kernel headers package for the version of the kernel
that you are currently running.
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
EOF
c. Open /etc/default/grub using your preferred text editor and add the following.
GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
a. Install the EPEL repository for DKMS and enable any optional repos for your Linux
distribution.
$ distribution='rhel7'
c. Set up the CUDA network repository and update the repository cache.
$ ARCH=$( /bin/arch ) \
&& sudo yum-config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/$distribution/${ARCH}/cuda-$distribution.repo \
&& sudo yum clean expire-cache
7. Ensure that the CUDA paths are set each time that the instance starts.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
For csh and tcsh:
setenv PATH /usr/local/cuda/bin:$PATH
setenv LD_LIBRARY_PATH /usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
8. To confirm that the Nvidia GPU drivers are functional, run the following command.
$ nvidia-smi -q | head
The command should return information about the Nvidia GPUs, Nvidia GPU drivers, and Nvidia
CUDA toolkit.
CentOS 7/8
To install the Nvidia GPU drivers, Nvidia CUDA toolkit, and cuDNN
1. To ensure that all of your software packages are up to date, perform a quick software update on
your instance.
3. To use the Nvidia GPU driver, you must first disable the nouveau open source drivers.
a. Install the required utilities and the kernel headers package for the version of the kernel
that you are currently running.
c. Open /etc/default/grub using your preferred text editor and add the following.
GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
a. Install the EPEL repository for DKMS and enable any optional repos for your Linux
distribution.
• CentOS 7
$ distribution='rhel7'
• CentOS 8
$ distribution='rhel8'
c. Set up the CUDA network repository and update the repository cache.
$ ARCH=$( /bin/arch ) \
&& sudo yum-config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/$distribution/${ARCH}/cuda-$distribution.repo \
&& sudo yum clean expire-cache
8. Ensure that the CUDA paths are set each time that the instance starts.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
For csh and tcsh:
setenv PATH /usr/local/cuda/bin:$PATH
setenv LD_LIBRARY_PATH /usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
9. To confirm that the Nvidia GPU drivers are functional, run the following command.
$ nvidia-smi -q | head
The command should return information about the Nvidia GPUs, Nvidia GPU drivers, and Nvidia
CUDA toolkit.
RHEL 7/8
To install the Nvidia GPU drivers, Nvidia CUDA toolkit, and cuDNN
1. Install the utilities that are needed to install the Nvidia GPU drivers and the Nvidia CUDA toolkit.
2. To use the Nvidia GPU driver, you must first disable the nouveau open source drivers.
a. Install the required utilities and the kernel headers package for the version of the kernel
that you are currently running.
c. Open /etc/default/grub using your preferred text editor and add the following.
GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
a. Install the EPEL repository for DKMS and enable any optional repos for your Linux
distribution.
• RHEL 7
• RHEL 8
c. Set up the CUDA network repository and update the repository cache.
$ ARCH=$( /bin/arch ) \
&& sudo yum-config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/$distribution/${ARCH}/cuda-$distribution.repo \
&& sudo yum clean expire-cache
7. Ensure that the CUDA paths are set each time that the instance starts.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
For csh and tcsh:
setenv PATH /usr/local/cuda/bin:$PATH
setenv LD_LIBRARY_PATH /usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
8. To confirm that the Nvidia GPU drivers are functional, run the following command.
$ nvidia-smi -q | head
The command should return information about the Nvidia GPUs, Nvidia GPU drivers, and Nvidia
CUDA toolkit.
Ubuntu 18.04/20.04
To install the Nvidia GPU drivers, Nvidia CUDA toolkit, and cuDNN
1. Install the utilities that are needed to install the Nvidia GPU drivers and the Nvidia CUDA toolkit.
2. To use the Nvidia GPU driver, you must first disable the nouveau open source drivers.
a. Install the required utilities and the kernel headers package for the version of the kernel
that you are currently running.
c. Open /etc/default/grub using your preferred text editor and add the following.
GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
$ sudo update-grub
a. Download and install the additional dependencies and add the CUDA repository.
• Ubuntu 18.04
• Ubuntu 20.04
a. You must install the version of the Nvidia Fabric Manager that matches the version of the
Nvidia kernel module that you installed in the previous step.
Run the following command to determine the version of the Nvidia kernel module.
NVRM version: NVIDIA UNIX x86_64 Kernel Module 450.42.01 Tue Jun 15 21:26:37
UTC 2021
In the example above, major version 450 of the kernel module was installed. This means
that you need to install Nvidia Fabric Manager version 450.
b. Install the Nvidia Fabric Manager. Run the following command and specify the major
version identified in the previous step.
For example, if major version 450 of the kernel module was installed, use the following
command to install the matching version of Nvidia Fabric Manager.
c. Start the service, and ensure that it starts automatically when the instance starts. Nvidia
Fabric Manager is required for NVSwitch management.
7. Ensure that the CUDA paths are set each time that the instance starts.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
For csh and tcsh:
setenv PATH /usr/local/cuda/bin:$PATH
setenv LD_LIBRARY_PATH /usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
8. To confirm that the Nvidia GPU drivers are functional, run the following command.
$ nvidia-smi -q | head
The command should return information about the Nvidia GPUs, Nvidia GPU drivers, and Nvidia
CUDA toolkit.
1. Connect to the instance you launched. For more information, see Connect to your Linux
instance (p. 596).
2. To ensure that all of your software packages are up to date, perform a quick software update on
your instance. This process may take a few minutes.
3. Download the EFA software installation files. The software installation files are packaged into
a compressed tarball (.tar.gz) file. To download the latest stable version, use the following
command.
$ curl -O https://efa-installer.amazonaws.com/aws-efa-installer-1.14.1.tar.gz
You can also get the latest version by replacing the version number with latest in the preceding
command.
4. (Optional) Verify the authenticity and integrity of the EFA tarball (.tar.gz) file. We recommend
that you do this to verify the identity of the software publisher and to check that the file has not
been altered or corrupted since it was published. If you do not want to verify the tarball file, skip this
step.
Note
Alternatively, if you prefer to verify the tarball file by using an MD5 or SHA256 checksum
instead, see Verify the EFA installer using a checksum (p. 1164).
a. Download the public GPG key and import it into your keyring.
The command should return a key value. Make a note of the key value, because you need it in
the next step.
b. Verify the GPG key's fingerprint. Run the following command and specify the key value from the
previous step.
The command should return a fingerprint that is identical to 4E90 91BC BB97 A96B 26B1
5E59 A054 80B1 DD2D 3CCC. If the fingerprint does not match, don't run the EFA installation
script, and contact AWS Support.
c. Download the signature file and verify the signature of the EFA tarball file.
gpg: Signature made Wed 29 Jul 2020 12:50:13 AM UTC using RSA key ID DD2D3CCC
gpg: Good signature from "Amazon EC2 EFA <[email protected]>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 4E90 91BC BB97 A96B 26B1 5E59 A054 80B1 DD2D 3CCC
If the result includes Good signature, and the fingerprint matches the fingerprint returned
in the previous step, proceed to the next step. If not, don't run the EFA installation script, and
contact AWS Support.
5. Extract the files from the compressed .tar.gz file and navigate into the extracted directory.
$ sudo ./efa_installer.sh -y
Libfabric is installed in the /opt/amazon/efa directory, while Open MPI is installed in the
/opt/amazon/openmpi directory.
7. If the EFA installer prompts you to reboot the instance, do so and then reconnect to the instance.
Otherwise, log out of the instance and then log back in to complete the installation.
8. Confirm that the EFA software components were successfully installed.
The command should return information about the Libfabric EFA interfaces. The following example
shows the command output.
provider: efa
fabric: EFA-fe80::94:3dff:fe89:1b70
domain: efa_0-rdm
version: 2.0
type: FI_EP_RDM
protocol: FI_PROTO_EFA
provider: efa
fabric: EFA-fe80::c6e:8fff:fef6:e7ff
domain: efa_0-rdm
version: 111.0
type: FI_EP_RDM
protocol: FI_PROTO_EFA
provider: efa
fabric: EFA-fe80::c34:3eff:feb2:3c35
domain: efa_1-rdm
version: 111.0
type: FI_EP_RDM
protocol: FI_PROTO_EFA
provider: efa
fabric: EFA-fe80::c0f:7bff:fe68:a775
domain: efa_2-rdm
version: 111.0
type: FI_EP_RDM
protocol: FI_PROTO_EFA
provider: efa
fabric: EFA-fe80::ca7:b0ff:fea6:5e99
domain: efa_3-rdm
version: 111.0
type: FI_EP_RDM
protocol: FI_PROTO_EFA
To install NCCL
$ cd /opt
2. Clone the official NCCL repository to the instance and navigate into the local cloned repository.
3. Build and install NCCL and specify the CUDA installation directory.
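The build command is elided above. For the official NCCL repository, the usual invocation is make src.build with CUDA_HOME pointing at the toolkit, per the upstream NCCL README; verify this against the README of the version you cloned. The sketch keeps the command in a variable so it runs without the repository or CUDA present.

```shell
# Assumed NCCL build command (from the upstream NCCL README); on the
# instance, run it from inside the cloned nccl directory.
NCCL_BUILD_CMD='make -j src.build CUDA_HOME=/usr/local/cuda'
echo "$NCCL_BUILD_CMD"
```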
$ cd $HOME
2. (Ubuntu only) Install the utilities that are required to install the aws-ofi-nccl plugin. To install the
required utilities, run the following command.
3. Clone the aws branch of the official AWS aws-ofi-nccl repository to the instance and navigate into
the local cloned repository.
$ ./autogen.sh
5. To generate the make files, run the configure script and specify the MPI, Libfabric, NCCL, and
CUDA installation directories.
$ export PATH=/opt/amazon/openmpi/bin/:$PATH
$ make \
&& sudo make install
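The configure invocation in step 5 is elided above. The flags below are assumptions derived from the install directories used elsewhere in this guide (/opt/amazon/openmpi, /opt/amazon/efa, /opt/nccl/build, /usr/local/cuda); check ./configure --help in the cloned repository for the authoritative option names. The sketch keeps the command in a variable so it is runnable anywhere.

```shell
# Assumed configure line for the aws-ofi-nccl plugin; verify the option
# names against ./configure --help before running it in the repository.
CONFIGURE_CMD='./configure --prefix=/opt/aws-ofi-nccl
    --with-mpi=/opt/amazon/openmpi
    --with-libfabric=/opt/amazon/efa
    --with-nccl=/opt/nccl/build
    --with-cuda=/usr/local/cuda'
echo "$CONFIGURE_CMD"
```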
$ cd $HOME
2. Clone the official nccl-tests repository to the instance and navigate into the local cloned repository.
$ export LD_LIBRARY_PATH=/opt/amazon/efa/lib64:$LD_LIBRARY_PATH
• Ubuntu 18.04
$ export LD_LIBRARY_PATH=/opt/amazon/efa/lib:$LD_LIBRARY_PATH
4. Install the NCCL tests and specify the MPI, NCCL, and CUDA installation directories.
1. Create a host file that specifies the hosts on which to run the tests. The following command creates
a host file named my-hosts that includes a reference to the instance itself.
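The commands for this step are elided above; they query instance metadata (IMDS) for the instance's private IPv4 address. The host file format itself is one "host slots=N" entry per instance. The sketch uses localhost as a stand-in for the metadata-derived address so it runs anywhere.

```shell
# Host file format for mpirun: one "host slots=N" entry per instance.
# localhost stands in for the private IPv4 address normally fetched
# from instance metadata.
echo "localhost slots=8" > my-hosts
cat my-hosts
```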
2. Run the test and specify the host file (--hostfile) and the number of GPUs to use (-n). The
following command runs the all_reduce_perf test on 8 GPUs on the instance itself, and specifies
the following environment variables.
For more information about the NCCL test arguments, see the NCCL Tests README in the official
nccl-tests repository.
$ /opt/amazon/openmpi/bin/mpirun \
 -x FI_PROVIDER="efa" \
 -x FI_EFA_USE_DEVICE_RDMA=1 \
 -x RDMAV_FORK_SAFE=1 \
 -x LD_LIBRARY_PATH=/opt/nccl/build/lib:/usr/local/cuda/lib64:/opt/amazon/efa/lib64:/opt/amazon/openmpi/lib64:/opt/aws-ofi-nccl/lib:$LD_LIBRARY_PATH \
 -x NCCL_DEBUG=INFO \
 -x NCCL_ALGO=ring \
 -x NCCL_PROTO=simple \
 --hostfile my-hosts -n 8 -N 8 \
 --mca pml ^cm --mca btl tcp,self --mca btl_tcp_if_exclude lo,docker0 --bind-to none \
 $HOME/nccl-tests/build/all_reduce_perf -b 8 -e 1G -f 2 -g 1 -c 1 -n 100
3. You can confirm that EFA is active as the underlying provider for NCCL when the NCCL_DEBUG log is
printed.
Step 12: Launch EFA and NCCL-enabled instances into a cluster placement group
Launch your EFA and NCCL-enabled instances into a cluster placement group using the EFA-enabled AMI
and the EFA-enabled security group that you created earlier.
Note
To launch your EFA and NCCL-enabled instances into a cluster placement group
a. For Number of instances, enter the number of EFA and NCCL-enabled instances that you want
to launch.
b. For Network and Subnet, select the VPC and subnet into which to launch the instances.
c. For Placement group, select Add instance to placement group.
d. For Placement group name, select Add to a new placement group, and then enter a
descriptive name for the placement group. Then for Placement group strategy, select cluster.
e. For EFA, choose Enable.
f. In the Network Interfaces section, for device eth0, choose New network interface. You can
optionally specify a primary IPv4 address and one or more secondary IPv4 addresses. If you are
launching the instance into a subnet that has an associated IPv6 CIDR block, you can optionally
specify a primary IPv6 address and one or more secondary IPv6 addresses.
g. Choose Next: Add Storage.
6. On the Add Storage page, specify the volumes to attach to the instances in addition to the volumes
specified by the AMI (such as the root device volume). Then choose Next: Add Tags.
7. On the Add Tags page, specify tags for the instances, such as a user-friendly name, and then choose
Next: Configure Security Group.
8. On the Configure Security Group page, for Assign a security group, select Select an existing
security group, and then select the security group that you created earlier.
9. Choose Review and Launch.
10. On the Review Instance Launch page, review the settings, and then choose Launch to choose a key
pair and to launch your instances.
1. Select one instance in the cluster as the leader node, and connect to it.
2. Disable StrictHostKeyChecking and enable ForwardAgent on the leader node. Open
~/.ssh/config using your preferred text editor and add the following.
Host *
    ForwardAgent yes
Host *
    StrictHostKeyChecking no
5. Open ~/.ssh/id_rsa.pub using your preferred text editor and copy the key.
6. For each member node in the cluster, do the following:
$ ssh member_node_private_ip
You should connect to the member node without being prompted for a key or password.
For more information, see the AWS Deep Learning AMI User Guide.
Note
Only the p3dn.24xlarge and p4d.24xlarge instance types are supported.
Contents
• Step 1: Prepare an EFA-enabled security group (p. 1157)
• Step 2: Launch a temporary instance (p. 1157)
• Step 3: Test your EFA and NCCL configuration (p. 1158)
a. For Security group name, enter a descriptive name for the security group, such as EFA-
enabled security group.
b. (Optional) For Description, enter a brief description of the security group.
c. For VPC, select the VPC into which you intend to launch your EFA-enabled instances.
d. Choose Create.
4. Select the security group that you created, and on the Description tab, copy the Group ID.
5. On the Inbound tab, do the following:
a. Choose Edit.
b. For Type, choose All traffic.
c. For Source, choose Custom and paste the security group ID that you copied into the field.
d. Choose Save.
6. On the Outbound tab, do the following:
a. Choose Edit.
b. For Type, choose All traffic.
c. For Destination, choose Custom and paste the security group ID that you copied into the field.
d. Choose Save.
1. Create a host file that specifies the hosts on which to run the tests. The following command creates
a host file named my-hosts that includes a reference to the instance itself.
2. Run the test and specify the host file (--hostfile) and the number of GPUs to use (-n). The
following command runs the all_reduce_perf test on 8 GPUs on the instance itself, and specifies
the following environment variables.
For more information about the NCCL test arguments, see the NCCL Tests README in the official
nccl-tests repository.
$ /opt/amazon/openmpi/bin/mpirun \
 -x FI_PROVIDER="efa" \
 -x FI_EFA_USE_DEVICE_RDMA=1 \
 -x RDMAV_FORK_SAFE=1 \
 -x LD_LIBRARY_PATH=/opt/nccl/build/lib:/usr/local/cuda/lib64:/opt/amazon/efa/lib64:/opt/amazon/openmpi/lib64:/usr/local/cuda/efa/lib:$LD_LIBRARY_PATH \
 -x NCCL_DEBUG=INFO \
 -x NCCL_ALGO=ring \
 -x NCCL_PROTO=simple \
 --hostfile my-hosts -n 8 -N 8 \
 --mca pml ^cm --mca btl tcp,self --mca btl_tcp_if_exclude lo,docker0 --bind-to none \
 $HOME/src/bin/efa-tests/efa-cuda-10.0/all_reduce_perf -b 8 -e 1G -f 2 -g 1 -c 1 -n 100
3. You can confirm that EFA is active as the underlying provider for NCCL when the NCCL_DEBUG log is
printed.
Step 7: Launch EFA and NCCL-enabled instances into a cluster placement group
Launch your EFA and NCCL-enabled instances into a cluster placement group using the EFA-enabled AMI
and the EFA-enabled security group that you created earlier.
Note
To launch your EFA and NCCL-enabled instances into a cluster placement group
a. For Number of instances, enter the number of EFA and NCCL-enabled instances that you want
to launch.
b. For Network and Subnet, select the VPC and subnet into which to launch the instances.
c. For Placement group, select Add instance to placement group.
d. For Placement group name, select Add to a new placement group, and then enter a
descriptive name for the placement group. Then for Placement group strategy, select cluster.
e. For EFA, choose Enable.
f. In the Network Interfaces section, for device eth0, choose New network interface. You can
optionally specify a primary IPv4 address and one or more secondary IPv4 addresses. If you are
launching the instance into a subnet that has an associated IPv6 CIDR block, you can optionally
specify a primary IPv6 address and one or more secondary IPv6 addresses.
g. Choose Next: Add Storage.
6. On the Add Storage page, specify the volumes to attach to the instances in addition to the volumes
specified by the AMI (such as the root device volume). Then choose Next: Add Tags.
7. On the Add Tags page, specify tags for the instances, such as a user-friendly name, and then choose
Next: Configure Security Group.
8. On the Configure Security Group page, for Assign a security group, select Select an existing
security group, and then select the security group that you created earlier.
9. Choose Review and Launch.
10. On the Review Instance Launch page, review the settings, and then choose Launch to choose a key
pair and to launch your instances.
1. Select one instance in the cluster as the leader node, and connect to it.
2. Disable StrictHostKeyChecking and enable ForwardAgent on the leader node. Open
~/.ssh/config using your preferred text editor and add the following.
Host *
    ForwardAgent yes
Host *
    StrictHostKeyChecking no
5. Open ~/.ssh/id_rsa.pub using your preferred text editor and copy the key.
6. For each member node in the cluster, do the following:
$ ssh member_node_private_ip
You should connect to the member node without being prompted for a key or password.
EFA requirements
To use an EFA, you must do the following:
Contents
• Create an EFA (p. 1162)
• Attach an EFA to a stopped instance (p. 1162)
• Attach an EFA when launching an instance (p. 1163)
• Add an EFA to a launch template (p. 1163)
• Manage IP addresses for an EFA (p. 1163)
• Change the security group for an EFA (p. 1163)
• Detach an EFA (p. 1163)
• View EFAs (p. 1164)
• Delete an EFA (p. 1164)
Create an EFA
You can create an EFA in a subnet in a VPC. You can't move the EFA to another subnet after it's created,
and you can only attach it to stopped instances in the same Availability Zone.
Use the create-network-interface command and for interface-type, specify efa, as shown in the
following example.
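A sketch of such a command (the subnet and security group IDs are placeholders):

```
aws ec2 create-network-interface \
    --subnet-id subnet-0123456789abcdef0 \
    --description "EFA interface" \
    --groups sg-0123456789abcdef0 \
    --interface-type efa
```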
Attach an EFA to a stopped instance
You attach an EFA to an instance in the same way that you attach a network interface to an instance. For more information, see Attach a network interface to an instance (p. 1088).
Attach an EFA when launching an instance
Use the run-instances command and for NetworkInterfaceId, specify the ID of the EFA, as shown in the following example.
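A sketch of such a command (the AMI ID, instance type, and network interface ID are placeholders):

```
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5n.18xlarge \
    --network-interfaces "NetworkInterfaceId=eni-0123456789abcdef0,DeviceIndex=0"
```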
Use the run-instances command and for InterfaceType, specify efa, as shown in the following example.
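A sketch of such a command (all resource IDs are placeholders):

```
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5n.18xlarge \
    --network-interfaces "InterfaceType=efa,DeviceIndex=0,Groups=sg-0123456789abcdef0,SubnetId=subnet-0123456789abcdef0"
```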
Add an EFA to a launch template
You can use launch templates to launch EFA-enabled instances with other AWS services, such as AWS Batch. For more information about creating launch templates, see Create a launch template (p. 581).
Manage IP addresses for an EFA
You assign an Elastic IP address (IPv4) and IPv6 address to an EFA in the same way that you assign an IP address to an elastic network interface. For more information, see Managing IP addresses (p. 1090).
Change the security group for an EFA
You change the security group that is associated with an EFA in the same way that you change the security group that is associated with an elastic network interface. For more information, see Changing the security group (p. 1092).
Detach an EFA
To detach an EFA from an instance, you must first stop the instance. You cannot detach an EFA from an
instance that is in the running state.
You detach an EFA from an instance in the same way that you detach an elastic network interface from
an instance. For more information, see Detach a network interface from an instance (p. 1089).
View EFAs
You can view all of the EFAs in your account.
You view EFAs in the same way that you view elastic network interfaces. For more information, see View
details about a network interface (p. 1088).
Delete an EFA
To delete an EFA, you must first detach it from the instance. You cannot delete an EFA while it is
attached to an instance.
You delete EFAs in the same way that you delete elastic network interfaces. For more information, see
Delete a network interface (p. 1093).
Monitor an EFA
You can use the following features to monitor the performance of your Elastic Fabric Adapters.
Amazon VPC flow logs
You create a flow log for an EFA in the same way that you create a flow log for an elastic network interface. For more information, see Creating a Flow Log in the Amazon VPC User Guide.
In the flow log entries, EFA traffic is identified by the srcAddress and destAddress, which are both formatted as MAC addresses.
Amazon CloudWatch
Amazon CloudWatch provides metrics that enable you to monitor your EFAs in real time. You can collect
and track metrics, create customized dashboards, and set alarms that notify you or take actions when a
specified metric reaches a threshold that you specify. For more information, see Monitor your instances
using CloudWatch (p. 958).
Verify the EFA installer using a checksum
Use the md5sum utility for the MD5 checksum, or the sha256sum utility for the SHA256 checksum, and specify the tarball filename. You must run the command from the directory in which you saved the tarball file.
• MD5
$ md5sum tarball_filename.tar.gz
• SHA256
$ sha256sum tarball_filename.tar.gz
checksum_value tarball_filename.tar.gz
Compare the checksum value returned by the command with the checksum value provided in the table
below. If the checksums match, then it is safe to run the installation script. If the checksums do not
match, do not run the installation script, and contact AWS Support.
For example, the following command verifies the EFA 1.9.4 tarball using the SHA256 checksum.
$ sha256sum aws-efa-installer-1.9.4.tar.gz
1009b5182693490d908ef0ed2c1dd4f813cc310a5d2062ce9619c4c12b5a7f14 aws-efa-installer-1.9.4.tar.gz
The following table lists the checksums for recent versions of EFA.
[Table of per-version SHA256 checksums omitted; the values were truncated beyond recovery.]
Placement groups
When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that
all of your instances are spread out across underlying hardware to minimize correlated failures. You can
use placement groups to influence the placement of a group of interdependent instances to meet the
needs of your workload. Depending on the type of workload, you can create a placement group using
one of the following placement strategies:
• Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads
to achieve the low-latency network performance necessary for tightly-coupled node-to-node
communication that is typical of HPC applications. For more information, see Cluster placement
groups (p. 1168).
• Partition – spreads your instances across logical partitions such that groups of instances in one
partition do not share the underlying hardware with groups of instances in different partitions. This
strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra,
and Kafka. For more information, see Partition placement groups (p. 1168).
• Spread – strictly places a small group of instances across distinct underlying hardware to reduce
correlated failures. For more information, see Spread placement groups (p. 1169).
Contents
• Placement group strategies (p. 1167)
• Placement group rules and limitations (p. 1170)
• Working with placement groups (p. 1171)
Topics
• Cluster placement groups (p. 1168)
• Partition placement groups (p. 1168)
• Spread placement groups (p. 1169)
Cluster placement groups
A cluster placement group is a logical grouping of instances within a single Availability Zone. The following image shows instances that are placed into a cluster placement group.
Cluster placement groups are recommended for applications that benefit from low network latency,
high network throughput, or both. They are also recommended when the majority of the network
traffic is between the instances in the group. To provide the lowest latency and the highest packet-per-
second network performance for your placement group, choose an instance type that supports enhanced
networking. For more information, see Enhanced Networking (p. 1100).
We recommend that you launch your instances in the following way:
• Either use a single launch request to launch the number of instances that you need in the placement
group, or create a Capacity Reservation in the placement group to reserve capacity for your
entire workload. For more information, see Work with Capacity Reservations in cluster placement
groups (p. 536).
• Use the same instance type for all instances in the placement group.
If you try to add more instances to the placement group later, or if you try to launch more than one
instance type in the placement group, you increase your chances of getting an insufficient capacity error.
If you stop an instance in a placement group and then start it again, it still runs in the placement group.
However, if you are not using a Capacity Reservation for your cluster placement group, the instance start
fails if there is insufficient capacity.
If you receive a capacity error when launching an instance in a placement group that already has running
instances, stop and start all of the instances in the placement group, and try the launch again. Starting
the instances may migrate them to hardware that has capacity for all of the requested instances.
Partition placement groups
Partition placement groups help reduce the likelihood of correlated hardware failures for your application. When you use partition placement groups, Amazon EC2 divides each group into logical segments called partitions. Amazon EC2 ensures that each partition within a placement group has its own set of racks. Each rack has its own network and power source. No two partitions within a
placement group share the same racks, allowing you to isolate the impact of hardware failure within your
application.
The following image is a simple visual representation of a partition placement group in a single
Availability Zone. It shows instances that are placed into a partition placement group with three
partitions—Partition 1, Partition 2, and Partition 3. Each partition comprises multiple instances. The
instances in a partition do not share racks with the instances in the other partitions, allowing you to
contain the impact of a single hardware failure to only the associated partition.
Partition placement groups can be used to deploy large distributed and replicated workloads, such as
HDFS, HBase, and Cassandra, across distinct racks. When you launch instances into a partition placement
group, Amazon EC2 tries to distribute the instances evenly across the number of partitions that you
specify. You can also launch instances into a specific partition to have more control over where the
instances are placed.
A partition placement group can have partitions in multiple Availability Zones in the same Region. A
partition placement group can have a maximum of seven partitions per Availability Zone. The number
of instances that can be launched into a partition placement group is limited only by the limits of your
account.
In addition, partition placement groups offer visibility into the partitions — you can see which instances
are in which partitions. You can share this information with topology-aware applications, such as HDFS,
HBase, and Cassandra. These applications use this information to make intelligent data replication
decisions for increasing data availability and durability.
If you start or launch an instance in a partition placement group and there is insufficient unique
hardware to fulfill the request, the request fails. Amazon EC2 makes more distinct hardware available
over time, so you can try your request again later.
Spread placement groups
A spread placement group is a group of instances that are each placed on distinct racks, with each rack having its own network and power source.
The following image shows seven instances in a single Availability Zone that are placed into a spread
placement group. The seven instances are placed on seven different racks.
Spread placement groups are recommended for applications that have a small number of critical
instances that should be kept separate from each other. Launching instances in a spread placement
group reduces the risk of simultaneous failures that might occur when instances share the same racks.
Spread placement groups provide access to distinct racks, and are therefore suitable for mixing instance
types or launching instances over time.
A spread placement group can span multiple Availability Zones in the same Region. You can have a
maximum of seven running instances per Availability Zone per group.
If you start or launch an instance in a spread placement group and there is insufficient unique hardware
to fulfill the request, the request fails. Amazon EC2 makes more distinct hardware available over time, so
you can try your request again later.
Placement group rules and limitations
General rules and limitations
Before you use placement groups, be aware of the following rules:
• You can create a maximum of 500 placement groups per account in each Region.
• The name that you specify for a placement group must be unique within your AWS account for the
Region.
• You can't merge placement groups.
• An instance can be launched in one placement group at a time; it cannot span multiple placement
groups.
• Zonal Reserved Instances (p. 383) provide a capacity reservation for Amazon EC2 instances in a specific
Availability Zone. The capacity reservation can be used by instances in a placement group, but it does
not explicitly reserve capacity in a placement group.
• You cannot launch Dedicated Hosts in placement groups.
Cluster placement group rules and limitations
• You can create On-Demand Capacity Reservations in a cluster placement group to explicitly reserve
capacity in that placement group. For more information, see Work with Capacity Reservations in
cluster placement groups (p. 536).
Partition placement group rules and limitations
• A partition placement group supports a maximum of seven partitions per Availability Zone. The
number of instances that you can launch in a partition placement group is limited only by your account
limits.
• When instances are launched into a partition placement group, Amazon EC2 tries to evenly distribute
the instances across all partitions. Amazon EC2 doesn’t guarantee an even distribution of instances
across all partitions.
• A partition placement group with Dedicated Instances can have a maximum of two partitions.
• On-Demand Capacity Reservations provide a capacity reservation for Amazon EC2 instances in a
specific Availability Zone. The Capacity Reservation can be used by instances in a partition placement
group, but it does not explicitly reserve capacity in the partition placement group.
Spread placement group rules and limitations
• A spread placement group supports a maximum of seven running instances per Availability Zone. For
example, in a Region with three Availability Zones, you can run a total of 21 instances in the group
(seven per zone). If you try to start an eighth instance in the same Availability Zone and in the same
spread placement group, the instance will not launch. If you need to have more than seven instances
in an Availability Zone, then the recommendation is to use multiple spread placement groups. Using
multiple spread placement groups does not provide guarantees about the spread of instances between
groups, but it does ensure the spread for each group, thus limiting impact from certain classes of
failures.
• Spread placement groups are not supported for Dedicated Instances.
• On-Demand Capacity Reservations provide a capacity reservation for Amazon EC2 instances in a
specific Availability Zone. The Capacity Reservation can be used by instances in a spread placement
group, but it does not explicitly reserve capacity in the spread placement group.
Note
You can tag a placement group on creation using the command line tools only.
Console
AWS CLI
Use the create-placement-group command. The following example creates a placement group
named my-cluster that uses the cluster placement strategy, and it applies a tag with a key of
purpose and a value of production.
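A sketch of such a command (tag-specification syntax as accepted by recent AWS CLI versions):

```
aws ec2 create-placement-group \
    --group-name my-cluster \
    --strategy cluster \
    --tag-specifications 'ResourceType=placement-group,Tags=[{Key=purpose,Value=production}]'
```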
Use the create-placement-group command. Specify the --strategy parameter with the value
partition, and specify the --partition-count parameter with the desired number of
partitions. In this example, the partition placement group is named HDFS-Group-A and is created
with five partitions.
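A sketch of such a command:

```
aws ec2 create-placement-group \
    --group-name HDFS-Group-A \
    --strategy partition \
    --partition-count 5
```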
PowerShell
To create a placement group using the AWS Tools for Windows PowerShell
When you tag a placement group, the instances that are launched into the placement group are not
automatically tagged. You need to explicitly tag the instances that are launched into the placement
group. For more information, see Add a tag when you launch an instance (p. 1674).
You can view, add, and delete tags using the new console and the command line tools.
Console
• To add a tag, choose Add tag, and then enter the tag key and value. You can add up to 50
tags per placement group. For more information, see Tag restrictions (p. 1670).
• To delete a tag, choose Remove next to the tag that you want to delete.
5. Choose Save changes.
AWS CLI
Use the describe-tags command to view the tags for the specified resource. In the following
example, you describe the tags for all of your placement groups.
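A sketch of such a command, filtering by resource type:

```
aws ec2 describe-tags \
    --filters "Name=resource-type,Values=placement-group"
```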
{
    "Tags": [
        {
            "Key": "Environment",
            "ResourceId": "pg-0123456789EXAMPLE",
            "ResourceType": "placement-group",
            "Value": "Production"
        },
        {
            "Key": "Environment",
            "ResourceId": "pg-9876543210EXAMPLE",
            "ResourceType": "placement-group",
            "Value": "Production"
        }
    ]
}
You can also use the describe-tags command to view the tags for a placement group by specifying
its ID. In the following example, you describe the tags for pg-0123456789EXAMPLE.
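A sketch of such a command, filtering by resource ID:

```
aws ec2 describe-tags \
    --filters "Name=resource-id,Values=pg-0123456789EXAMPLE"
```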
{
    "Tags": [
        {
            "Key": "Environment",
            "ResourceId": "pg-0123456789EXAMPLE",
            "ResourceType": "placement-group",
            "Value": "Production"
        }
    ]
}
You can also view the tags of a placement group by describing the placement group.
Use the describe-placement-groups command to view the configuration of the specified placement
group, which includes any tags that were specified for the placement group.
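A sketch of such a command:

```
aws ec2 describe-placement-groups --group-names my-cluster
```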
{
    "PlacementGroups": [
        {
            "GroupName": "my-cluster",
            "State": "available",
            "Strategy": "cluster",
            "GroupId": "pg-0123456789EXAMPLE",
            "Tags": [
                {
                    "Key": "Environment",
                    "Value": "Production"
                }
            ]
        }
    ]
}
You can use the create-tags command to tag existing resources. In the following example, the
existing placement group is tagged with Key=Cost-Center and Value=CC-123.
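A sketch of such a command:

```
aws ec2 create-tags \
    --resources pg-0123456789EXAMPLE \
    --tags Key=Cost-Center,Value=CC-123
```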
You can use the delete-tags command to delete tags from existing resources. For examples, see
Examples in the AWS CLI Command Reference.
PowerShell
Console
• On the Choose an Instance Type page, select an instance type that can be launched into a
placement group.
• On the Configure Instance Details page, the following fields are applicable to placement
groups:
• For Number of instances, enter the total number of instances that you need in this
placement group, because you might not be able to add instances to the placement group
later.
• For Placement group, select the Add instance to placement group check box. If you do not
see Placement group on this page, verify that you have selected an instance type that can
be launched into a placement group. Otherwise, this option is not available.
• For Placement group name, you can choose to add the instances to an existing placement
group or to a new placement group that you create.
• For Placement group strategy, choose the appropriate strategy. If you choose partition,
for Target partition, choose Auto distribution to have Amazon EC2 do a best effort to
distribute the instances evenly across all the partitions in the group. Alternatively, specify
the partition in which to launch the instances.
AWS CLI
Use the run-instances command and specify the placement group name using the --placement
"GroupName = my-cluster" parameter. In this example, the placement group is named my-
cluster.
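A sketch of such a command (the AMI ID, instance type, and count are placeholders):

```
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5.large \
    --count 3 \
    --placement "GroupName=my-cluster"
```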
To launch instances into a specific partition of a partition placement group using the AWS CLI
Use the run-instances command and specify the placement group name and partition using the
--placement "GroupName = HDFS-Group-A, PartitionNumber = 3" parameter. In this
example, the placement group is named HDFS-Group-A and the partition number is 3.
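A sketch of such a command (the AMI ID and instance type are placeholders):

```
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type r4.large \
    --placement "GroupName=HDFS-Group-A,PartitionNumber=3"
```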
PowerShell
To launch instances into a placement group using AWS Tools for Windows PowerShell
Use the New-EC2Instance command and specify the placement group name using the -
Placement_GroupName parameter.
New console
To view the placement group and partition number of an instance using the console
Old console
To view the placement group and partition number of an instance using the console
AWS CLI
To view the partition number for an instance in a partition placement group using the AWS CLI
The response contains the placement information, which includes the placement group name and
the partition number for the instance.
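A sketch of such a command, using the describe-instances command (the instance ID is a placeholder):

```
aws ec2 describe-instances --instance-ids i-0123456789abcdef0
```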
"Placement": {
    "AvailabilityZone": "us-east-1c",
    "GroupName": "HDFS-Group-A",
    "PartitionNumber": 3,
    "Tenancy": "default"
}
To filter instances for a specific partition placement group and partition number using the AWS
CLI
Use the describe-instances command and specify the --filters parameter with the placement-
group-name and placement-partition-number filters. In this example, the placement group is
named HDFS-Group-A and the partition number is 7.
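A sketch of such a command:

```
aws ec2 describe-instances \
    --filters "Name=placement-group-name,Values=HDFS-Group-A" \
              "Name=placement-partition-number,Values=7"
```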
The response lists all the instances that are in the specified partition within the specified placement
group. The following is example output showing only the instance ID, instance type, and placement
information for the returned instances.
"Instances": [
    {
        "InstanceId": "i-0a1bc23d4567e8f90",
        "InstanceType": "r4.large",
        "Placement": {
            "AvailabilityZone": "us-east-1c",
            "GroupName": "HDFS-Group-A",
            "PartitionNumber": 7,
            "Tenancy": "default"
        }
    },
    {
        "InstanceId": "i-0a9b876cd5d4ef321",
        "InstanceType": "r4.large",
        "Placement": {
            "AvailabilityZone": "us-east-1c",
            "GroupName": "HDFS-Group-A",
            "PartitionNumber": 7,
            "Tenancy": "default"
        }
    }
],
Console
AWS CLI
To view information about all of your placement groups using the AWS CLI
To view information about a specific placement group using the AWS CLI
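Both procedures use the describe-placement-groups command; a sketch (the group name matches the example output below):

```
# All of your placement groups in the Region:
aws ec2 describe-placement-groups
# A specific placement group, by name:
aws ec2 describe-placement-groups --group-names MyClusterPG
```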
Example output
{
    "PlacementGroups": [
        {
            "GroupName": "MyClusterPG",
            "GroupArn": "arn:aws:ec2:us-east-1:123456789012:placement-group/MyClusterPG",
            "State": "available",
            "Strategy": "cluster"
        }
    ]
}
Before you move or remove the instance, the instance must be in the stopped state. You can move or
remove an instance using the AWS CLI or an AWS SDK.
AWS CLI
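To move a stopped instance into a placement group, the modify-instance-placement command can be used; a sketch with placeholder IDs:

```
aws ec2 modify-instance-placement \
    --instance-id i-0123456789abcdef0 \
    --group-name my-cluster
```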
PowerShell
To move an instance to a placement group using the AWS Tools for Windows PowerShell
AWS CLI
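To remove a stopped instance from its placement group, one approach is to specify an empty placement group name (an assumption; the instance ID is a placeholder):

```
aws ec2 modify-instance-placement \
    --instance-id i-0123456789abcdef0 \
    --group-name ""
```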
PowerShell
To remove an instance from a placement group using the AWS Tools for Windows
PowerShell
Requirement
Before you can delete a placement group, it must contain no instances. You can terminate (p. 648) all
instances that you launched into the placement group, move (p. 1178) them to another placement
group, or remove (p. 1178) them from the placement group.
Console
AWS CLI
Use the delete-placement-group command and specify the placement group name to delete the
placement group. In this example, the placement group name is my-cluster.
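A sketch of such a command:

```
aws ec2 delete-placement-group --group-name my-cluster
```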
PowerShell
To delete a placement group using the AWS Tools for Windows PowerShell
Network maximum transmission unit (MTU) for your EC2 instance
The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection. The larger the MTU of a connection, the more data that can be passed in a single packet. Ethernet packets consist of the frame, or the actual data you
are sending, and the network overhead information that surrounds it.
Ethernet frames can come in different formats, and the most common format is the standard Ethernet
v2 frame format. It supports 1500 MTU, which is the largest Ethernet packet size supported over most of
the internet. The maximum supported MTU for an instance depends on its instance type. All Amazon EC2
instance types support 1500 MTU, and many current instance sizes support 9001 MTU, or jumbo frames.
• Traffic that goes from one instance to another within a VPC in the same Wavelength Zone has an MTU
of 1300.
• Traffic that goes from one instance to another that uses the carrier IP within a Wavelength Zone has an
MTU of 1500.
• Traffic that goes from one instance to another between a Wavelength Zone and the Region that uses a
public IP address has an MTU of 1500.
• Traffic that goes from one instance to another between a Wavelength Zone and the Region that uses a
private IP address has an MTU of 1300.
For network MTU information for Windows instances, see Network maximum transmission unit (MTU) for your EC2 instance in the Amazon EC2 User Guide for Windows Instances.
Contents
• Jumbo frames (9001 MTU) (p. 1180)
• Path MTU Discovery (p. 1181)
• Check the path MTU between two hosts (p. 1181)
• Check and set the MTU on your Linux instance (p. 1182)
• Troubleshoot (p. 1183)
Jumbo frames (9001 MTU)
Jumbo frames allow more than 1500 bytes of data by increasing the payload size per packet, and thus increasing the percentage of the packet that is not packet overhead. If packets are over 1500 bytes, they are fragmented, or they are dropped if the Don't Fragment flag is
set in the IP header.
Jumbo frames should be used with caution for internet-bound traffic or any traffic that leaves a VPC.
Packets are fragmented by intermediate systems, which slows down this traffic. To use jumbo frames
inside a VPC and not slow traffic that's bound for outside the VPC, you can configure the MTU size by
route, or use multiple elastic network interfaces with different MTU sizes and different routes.
For instances that are collocated inside a cluster placement group, jumbo frames help to achieve the
maximum network throughput possible, and they are recommended in this case. For more information,
see Placement groups (p. 1167).
You can use jumbo frames for traffic between your VPCs and your on-premises networks over AWS Direct
Connect. For more information, and for how to verify Jumbo Frame capability, see Setting Network MTU
in the AWS Direct Connect User Guide.
All current generation instances (p. 234) support jumbo frames. The following previous generation
instances support jumbo frames: A1, C3, G2, I2, M3, and R3.
For more information about supported MTU sizes for transit gateways, see MTU in Amazon VPC Transit
Gateways.
Path MTU Discovery
Path MTU Discovery (PMTUD) is used to determine the path MTU between two devices, where the path MTU is the maximum packet size that's supported on the path between the originating host and the receiving host.
For IPv4, when a host sends a packet that's larger than the MTU of the receiving host or that's larger
than the MTU of a device along the path, the receiving host or device drops the packet, and then returns
the following ICMP message: Destination Unreachable: Fragmentation Needed and Don't
Fragment was Set (Type 3, Code 4). This instructs the transmitting host to split the payload into
multiple smaller packets, and then retransmit them.
The IPv6 protocol does not support fragmentation in the network. When a host sends a packet that's
larger than the MTU of the receiving host or that's larger than the MTU of a device along the path,
the receiving host or device drops the packet, and then returns the following ICMP message: ICMPv6
Packet Too Big (PTB) (Type 2). This instructs the transmitting host to split the payload into multiple
smaller packets, and then retransmit them.
By default, security groups do not allow any inbound ICMP traffic. If you don't explicitly configure an
ICMP inbound rule for your security group, PMTUD is blocked. For more information about configuring
ICMP rules in a network ACL, see Path MTU Discovery in the Amazon VPC User Guide.
Important
Path MTU Discovery does not guarantee that jumbo frames will not be dropped by some
routers. An internet gateway in your VPC will forward packets up to 1500 bytes only. 1500 MTU
packets are recommended for internet traffic.
Check the path MTU between two hosts
Use the following command to check the path MTU between your EC2 instance and another host. You
can use a DNS name or an IP address as the destination. If the destination is another EC2 instance, verify
that the security group allows inbound UDP traffic. This example checks the path MTU between an EC2
instance and amazon.com.
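A sketch using tracepath, from the iputils package (an assumed choice of tool):

```
tracepath amazon.com
```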
Check and set the MTU on your Linux instance
You can check the current MTU value using the following ip command. Note that in the example output,
mtu 9001 indicates that this instance uses jumbo frames.
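A sketch of such a command (eth0 is assumed as the primary network interface name):

```
ip link show eth0
```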
1. You can set the MTU value using the ip command. The following command sets the desired MTU
value to 1500, but you could use 9001 instead.
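A sketch of such a command (assuming eth0 as the interface; root privileges are required):

```
sudo ip link set dev eth0 mtu 1500
```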
2. (Optional) To persist your network MTU setting after a reboot, modify the following configuration
files, based on your operating system type.
• For Amazon Linux 2, add the following line to the /etc/sysconfig/network-scripts/ifcfg-eth0 file.
MTU=1500
• For Amazon Linux, add the following lines to your /etc/dhcp/dhclient-eth0.conf file.
interface "eth0" {
    supersede interface-mtu 1500;
}
Troubleshoot
If you experience connectivity issues between your EC2 instance and an Amazon Redshift cluster when using jumbo frames, see Queries Appear to Hang in the Amazon Redshift Cluster Management Guide.
When you create your AWS account, we create a default VPC in each Region. A default VPC is a VPC that
is already configured and ready for you to use. For example, there is a default subnet for each Availability
Zone in each default VPC, an internet gateway attached to the VPC, and there's a route in the main route
table that sends all traffic (0.0.0.0/0) to the internet gateway. Alternatively, you can create your own
nondefault VPC and configure the VPC, subnets, and routing to meet your needs.
Instances launched into a default subnet have access to the internet, as the VPC is configured to assign
public IP addresses and DNS hostnames, and the main route table is configured with a route to an
internet gateway attached to the VPC.
For the subnets that you create in your VPCs, do one of the following to ensure that instances that you
launch in these subnets have access to the internet:
• Configure public subnets for your instances. For more information, see Enable internet access in the
Amazon VPC User Guide.
• Configure a public NAT gateway. For more information, see Access the internet from a private subnet
in the Amazon VPC User Guide.
Learn more
For more information, see Amazon Virtual Private Cloud Documentation.
EC2-Classic
We are retiring EC2-Classic on August 15, 2022. We recommend that you migrate from EC2-Classic to a
VPC (p. 1201).
With EC2-Classic, your instances run in a single, flat network that you share with other customers. With
Amazon VPC, your instances run in a virtual private cloud (VPC) that's logically isolated to your AWS
account.
The EC2-Classic platform was introduced in the original release of Amazon EC2. If you created your
AWS account after 2013-12-04, it does not support EC2-Classic, so you must launch your Amazon EC2
instances in a VPC.
If your account supports EC2-Classic but you have not created a nondefault VPC, you can do one of the
following to launch instances that require a VPC:
• Create a nondefault VPC and launch your VPC-only instance into it by specifying a subnet ID or a
network interface ID in the request. Note that you must create a nondefault VPC if you do not have
a default VPC and you are using the AWS CLI, Amazon EC2 API, or AWS SDK to launch a VPC-only
instance.
• Launch your VPC-only instance using the Amazon EC2 console. The Amazon EC2 console creates a
nondefault VPC in your account and launches the instance into the subnet in the first Availability Zone.
The console creates the VPC with the following attributes:
• One subnet in each Availability Zone, with the public IPv4 addressing attribute set to true so that
instances receive a public IPv4 address. For more information, see IP Addressing in Your VPC in the
Amazon VPC User Guide.
• An Internet gateway, and a main route table that routes traffic in the VPC to the Internet gateway.
This enables the instances you launch in the VPC to communicate over the Internet. For more
information, see Internet Gateways in the Amazon VPC User Guide.
• A default security group for the VPC and a default network ACL that is associated with each subnet.
For more information, see Security Groups for Your VPC in the Amazon VPC User Guide.
If you have other resources in EC2-Classic, you can take steps to migrate them to a VPC. For more
information, see Migrate from EC2-Classic to a VPC (p. 1201).
Differences between instances in EC2-Classic and a VPC

The following summarizes the differences between instances launched in EC2-Classic, in a default VPC, and in a nondefault VPC.

Public IPv4 address (from Amazon's public IP address pool)
• EC2-Classic: Your instance receives a public IPv4 address from the EC2-Classic public IPv4 address pool.
• Default VPC: Your instance launched in a default subnet receives a public IPv4 address by default, unless you specify otherwise during launch, or you modify the subnet's public IPv4 address attribute.
• Nondefault VPC: Your instance doesn't receive a public IPv4 address by default, unless you specify otherwise during launch, or you modify the subnet's public IPv4 address attribute.

Private IPv4 address
• EC2-Classic: Your instance receives a private IPv4 address from the EC2-Classic range each time it's started.
• Default VPC: Your instance receives a static private IPv4 address from the address range of your default VPC.
• Nondefault VPC: Your instance receives a static private IPv4 address from the address range of your VPC.

Multiple private IPv4 addresses
• EC2-Classic: We select a single private IP address for your instance; multiple IP addresses are not supported.
• Default VPC: You can assign multiple private IPv4 addresses to your instance.
• Nondefault VPC: You can assign multiple private IPv4 addresses to your instance.

Reassociating an Elastic IP address
• EC2-Classic: If the Elastic IP address is already associated with another instance, the address is automatically associated with the new instance.
• Default VPC: If the Elastic IP address is already associated with another instance, the address is automatically associated with the new instance.
• Nondefault VPC: If the Elastic IP address is already associated with another instance, it succeeds only if you allowed reassociation.

Tagging Elastic IP addresses
• EC2-Classic: You cannot apply tags to an Elastic IP address.
• Default VPC: You can apply tags to an Elastic IP address.
• Nondefault VPC: You can apply tags to an Elastic IP address.

DNS hostnames
• EC2-Classic: DNS hostnames are enabled by default.
• Default VPC: DNS hostnames are enabled by default.
• Nondefault VPC: DNS hostnames are disabled by default.

Security group
• EC2-Classic: A security group can reference security groups that belong to other AWS accounts.
• Default VPC: A security group can reference security groups for your VPC, or for a peer VPC in a VPC peering connection.
• Nondefault VPC: A security group can reference security groups for your VPC only.

Security group association
• EC2-Classic: You can't change the security groups of your running instance. You can either modify the rules of the assigned security groups, or replace the instance with a new one (create an AMI from the instance, launch a new instance from this AMI with the security groups that you need, disassociate any Elastic IP address from the original instance and associate it with the new instance, and then terminate the original instance).
• Default VPC: You can assign up to 5 security groups to an instance. You can assign security groups to your instance when you launch it and while it's running.
• Nondefault VPC: You can assign up to 5 security groups to an instance. You can assign security groups to your instance when you launch it and while it's running.

Security group rules
• EC2-Classic: You can add rules for inbound traffic only.
• Default VPC: You can add rules for inbound and outbound traffic.
• Nondefault VPC: You can add rules for inbound and outbound traffic.

Tenancy
• EC2-Classic: Your instance runs on shared hardware.
• Default VPC: You can run your instance on shared hardware or single-tenant hardware.
• Nondefault VPC: You can run your instance on shared hardware or single-tenant hardware.

Accessing the Internet
• EC2-Classic: Your instance can access the Internet. Your instance automatically receives a public IP address, and can access the Internet directly through the AWS network edge.
• Default VPC: By default, your instance can access the Internet. Your instance receives a public IP address by default. An Internet gateway is attached to your default VPC, and your default subnet has a route to the Internet gateway.
• Nondefault VPC: By default, your instance cannot access the Internet. Your instance doesn't receive a public IP address by default. Your VPC may have an Internet gateway, depending on how it was created.

IPv6 addressing
• EC2-Classic: IPv6 addressing is not supported. You cannot assign IPv6 addresses to your instances.
• Default VPC: You can optionally associate an IPv6 CIDR block with your VPC, and assign IPv6 addresses to instances in your VPC.
• Nondefault VPC: You can optionally associate an IPv6 CIDR block with your VPC, and assign IPv6 addresses to instances in your VPC.
After you launch an instance in EC2-Classic, you can't change its security groups. However, you can
add rules to or remove rules from a security group, and those changes are automatically applied to all
instances that are associated with the security group after a short period.
Your AWS account automatically has a default security group per Region for EC2-Classic. If you try
to delete the default security group, you'll get the following error: Client.InvalidGroup.Reserved: The
security group 'default' is reserved.
You can create custom security groups. The security group name must be unique within your account for
the Region. To create a security group for use in EC2-Classic, choose No VPC for the VPC.
You can add inbound rules to your default and custom security groups. You can't change the outbound
rules for an EC2-Classic security group. When you create a security group rule, you can specify another
EC2-Classic security group in the same Region as the source or destination. To specify a security
group for another AWS account, add the AWS account ID as a prefix; for example, 111122223333/sg-edcd9784.
In EC2-Classic, you can have up to 500 security groups in each Region for each account. You can add
up to 100 rules to a security group. You can have up to 800 security group rules per instance,
calculated as the product of rules per security group and security groups per instance. If you reference
other security groups in your security group rules, we recommend that you use security group names
that are 22 characters or less in length.
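As a quick check of that arithmetic (illustrative numbers, not account-specific quotas): an instance associated with 8 security groups of 100 rules each reaches the 800-rule cap.

```shell
# Rules per instance = security groups per instance x rules per group.
groups_per_instance=8
rules_per_group=100
echo $((groups_per_instance * rules_per_group))
```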
If you create a custom firewall configuration in EC2-Classic, you must create a rule in your firewall that
allows inbound traffic from port 53 (DNS), with a destination port in the ephemeral range, from
the address of the Amazon DNS server; otherwise, internal DNS resolution from your instances fails. If
your firewall doesn't automatically allow DNS query responses, you need to allow traffic from the
IP address of the Amazon DNS server. To get the IP address of the Amazon DNS server, use the following
command from within your instance:
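A minimal sketch of that lookup, assuming the instance's resolver configuration lives in /etc/resolv.conf as on standard Linux AMIs (the sample file below stands in for it here):

```shell
# Sample resolver configuration; on a real instance, grep
# /etc/resolv.conf itself instead of this sample file.
cat > sample-resolv.conf <<'EOF'
; generated by /usr/sbin/dhclient-script
nameserver 172.16.0.23
EOF

# Print the IP address of the configured nameserver (the Amazon DNS
# server on an EC2-Classic instance).
grep '^nameserver' sample-resolv.conf | awk '{print $2}'
```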
Elastic IP addresses
If your account supports EC2-Classic, there's one pool of Elastic IP addresses for use with the EC2-Classic
platform and another for use with your VPCs. You can't associate an Elastic IP address that you allocated
for use with a VPC with an instance in EC2-Classic, and vice versa. However, you can migrate an Elastic
IP address you've allocated for use in the EC2-Classic platform for use with a VPC. You cannot migrate an
Elastic IP address to another Region.
After you've migrated an Elastic IP address to a VPC, you cannot use it with EC2-Classic. However, if
required, you can restore it to EC2-Classic. You cannot migrate an Elastic IP address that was originally
allocated for use with a VPC to EC2-Classic.
To migrate an Elastic IP address, it must not be associated with an instance. For more information about
disassociating an Elastic IP address from an instance, see Disassociate an Elastic IP address (p. 1064).
You can migrate all of the EC2-Classic Elastic IP addresses in your account. However,
when you migrate an Elastic IP address, it counts against your Elastic IP address limit for VPCs. You
cannot migrate an Elastic IP address if it will result in your exceeding your limit. Similarly, when you
restore an Elastic IP address to EC2-Classic, it counts against your Elastic IP address limit for EC2-Classic.
For more information, see Elastic IP address limit (p. 1067).
You cannot migrate an Elastic IP address that has been allocated to your account for less than 24 hours.
You can migrate an Elastic IP address from EC2-Classic using the Amazon EC2 console or the Amazon
VPC console. This option is only available if your account supports EC2-Classic.
You can restore an Elastic IP address to EC2-Classic using the Amazon EC2 console or the Amazon VPC
console.
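The AWS CLI also exposes these operations as move-address-to-vpc and restore-address-to-classic. A sketch with a placeholder address; the helper prints each command instead of executing it, since the calls require account credentials:

```shell
run() { echo "+ $*"; }   # dry-run helper: print instead of execute

EIP="198.51.100.0"   # placeholder Elastic IP address

# Migrate the address from EC2-Classic to the EC2-VPC platform.
run aws ec2 move-address-to-vpc --public-ip "$EIP"

# Restore the address to EC2-Classic if required.
run aws ec2 restore-address-to-classic --public-ip "$EIP"
```

Drop the echo from the helper to execute the calls for real.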
After you've performed the command to move or restore your Elastic IP address, the process of
migrating the Elastic IP address can take a few minutes. Use the describe-moving-addresses command to
check whether your Elastic IP address is still moving, or has completed moving.
After you've moved your Elastic IP address, you can view its allocation ID on the Elastic IPs page in the
Allocation ID field.
If the Elastic IP address is in a moving state for longer than 5 minutes, contact Premium Support.
Share and access resources between EC2-Classic and a VPC
To describe the status of your moving addresses using the command line
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
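With the AWS CLI, the relevant command is describe-moving-addresses. A sketch of reading the status from its output; the JSON below is a hypothetical sample standing in for the live call (which requires credentials), and the response shape is an assumption:

```shell
# Live call (needs credentials):
#   aws ec2 describe-moving-addresses --public-ips 198.51.100.0
# Hypothetical sample response:
cat > moving.json <<'EOF'
{
  "MovingAddressStatuses": [
    { "PublicIp": "198.51.100.0", "MoveStatus": "movingToVpc" }
  ]
}
EOF

# A MoveStatus of movingToVpc (or restoringToClassic) means the
# migration has not yet completed.
python3 -c 'import json; d = json.load(open("moving.json")); print(d["MovingAddressStatuses"][0]["MoveStatus"])'
```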
If your account supports EC2-Classic, you might have set up resources for use in EC2-Classic. If you
want to migrate from EC2-Classic to a VPC, you must recreate those resources in your VPC. For more
information about migrating from EC2-Classic to a VPC, see Migrate from EC2-Classic to a VPC (p. 1201).
The following resources can be shared or accessed between EC2-Classic and a VPC:

• AMI
• Bundle task
• EBS volume
• Elastic IP address (IPv4): You can migrate an Elastic IP address from EC2-Classic to a VPC. You can't migrate an Elastic IP address that was originally allocated for use in a VPC to EC2-Classic. For more information, see Migrate an Elastic IP Address from EC2-Classic (p. 1188).
• Key pair
• Placement group
• Reserved Instance: You can change the network platform for your Reserved Instances from EC2-Classic to a VPC. For more information, see Modify Reserved Instances (p. 411).
• Snapshot
The following resources can't be shared or moved between EC2-Classic and a VPC:
• Spot Instances
ClassicLink
We are retiring EC2-Classic on August 15, 2022. We recommend that you migrate from EC2-Classic to a
VPC (p. 1201).
ClassicLink allows you to link EC2-Classic instances to a VPC in your account, within the same Region. If
you associate the VPC security groups with an EC2-Classic instance, this enables communication between
your EC2-Classic instance and instances in your VPC using private IPv4 addresses. ClassicLink removes
the need to use public IPv4 addresses or Elastic IP addresses to enable communication between
instances in these platforms.
ClassicLink is available to all users with accounts that support the EC2-Classic platform, and can be used
with any EC2-Classic instance. There is no additional charge for using ClassicLink. Standard charges for
data transfer and instance usage apply.
Contents
• ClassicLink basics (p. 1191)
ClassicLink basics
There are two steps to linking an EC2-Classic instance to a VPC using ClassicLink. First, you must enable
the VPC for ClassicLink. By default, no VPCs in your account are enabled for ClassicLink; this preserves
their isolation. After you've enabled the VPC for ClassicLink, you can then link any running EC2-Classic
instance in the same Region in your account to that VPC. Linking your instance includes selecting security
groups from the VPC to associate with your EC2-Classic instance. After you've linked the instance, it
can communicate with instances in your VPC using their private IP addresses, provided the VPC security
groups allow it. Your EC2-Classic instance does not lose its private IP address when linked to the VPC.
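The two steps correspond to the enable-vpc-classic-link and attach-classic-link-vpc AWS CLI commands. A sketch with placeholder IDs; the helper prints each command instead of executing it, since the calls require credentials:

```shell
run() { echo "+ $*"; }   # dry-run helper: print instead of execute

VPC_ID="vpc-1a2b3c4d"      # placeholder VPC to enable for ClassicLink
INSTANCE_ID="i-1a2b3c4d"   # placeholder running EC2-Classic instance
SG_ID="sg-1122aabb"        # placeholder VPC security group to associate

# Step 1: enable the VPC for ClassicLink.
run aws ec2 enable-vpc-classic-link --vpc-id "$VPC_ID"

# Step 2: link the EC2-Classic instance, selecting VPC security groups.
run aws ec2 attach-classic-link-vpc --instance-id "$INSTANCE_ID" --vpc-id "$VPC_ID" --groups "$SG_ID"
```

Drop the echo from the helper to execute the calls for real.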
A linked EC2-Classic instance can communicate with instances in a VPC, but it does not form part of the
VPC. If you list your instances and filter by VPC, for example, through the DescribeInstances API
request, or by using the Instances screen in the Amazon EC2 console, the results do not return any EC2-
Classic instances that are linked to the VPC. For more information about viewing your linked EC2-Classic
instances, see View your ClassicLink-enabled VPCs and linked instances (p. 1196).
By default, if you use a public DNS hostname to address an instance in a VPC from a linked EC2-Classic
instance, the hostname resolves to the instance's public IP address. The same occurs if you use a public
DNS hostname to address a linked EC2-Classic instance from an instance in the VPC. If you want the
public DNS hostname to resolve to the private IP address, you can enable ClassicLink DNS support for
the VPC. For more information, see Enable ClassicLink DNS support (p. 1196).
If you no longer require a ClassicLink connection between your instance and the VPC, you can unlink
the EC2-Classic instance from the VPC. This disassociates the VPC security groups from the EC2-Classic
instance. A linked EC2-Classic instance is automatically unlinked from a VPC when it's stopped. After
you've unlinked all linked EC2-Classic instances from the VPC, you can disable ClassicLink for the VPC.
If you use Elastic Load Balancing, you can register your linked EC2-Classic instances with the load
balancer. You must create your load balancer in the ClassicLink-enabled VPC and enable the Availability
Zone in which the instance runs. If you terminate the linked EC2-Classic instance, the load balancer
deregisters the instance.
If you use Amazon EC2 Auto Scaling, you can create an Amazon EC2 Auto Scaling group with instances
that are automatically linked to a specified ClassicLink-enabled VPC at launch. For more information, see
Linking EC2-Classic Instances to a VPC in the Amazon EC2 Auto Scaling User Guide.
If you use Amazon RDS instances or Amazon Redshift clusters in your VPC, and they are publicly
accessible (accessible from the Internet), the endpoint you use to address those resources from a linked
EC2-Classic instance by default resolves to a public IP address. If those resources are not publicly
accessible, the endpoint resolves to a private IP address. To address a publicly accessible RDS instance
or Redshift cluster over private IP using ClassicLink, you must use their private IP address or private DNS
hostname, or you must enable ClassicLink DNS support for the VPC.
If you use a private DNS hostname or a private IP address to address an RDS instance, the linked EC2-
Classic instance cannot use the failover support available for Multi-AZ deployments.
You can use the Amazon EC2 console to find the private IP addresses of your Amazon Redshift, Amazon
ElastiCache, or Amazon RDS resources.
For more information about policies for working with ClassicLink, see the following example: Example
IAM policies for ClassicLink (p. 1197).
After you've linked your instance to a VPC, you cannot change which VPC security groups are associated
with the instance. To associate different security groups with your instance, you must first unlink the
instance, and then link it to the VPC again, choosing the required security groups.
VPCs that are in the 10.0.0.0/16 and 10.1.0.0/16 IP address ranges can be enabled for ClassicLink
only if they do not have any existing static routes in route tables in the 10.0.0.0/8 IP address range,
excluding the local routes that were automatically added when the VPC was created. Similarly, if you've
enabled a VPC for ClassicLink, you may not be able to add any more specific routes to your route tables
within the 10.0.0.0/8 IP address range.
Important
If your VPC CIDR block is a publicly routable IP address range, consider the security implications
before you link an EC2-Classic instance to your VPC. For example, if your linked EC2-Classic
instance receives an incoming Denial of Service (DoS) request flood attack from a source IP
address that falls within the VPC’s IP address range, the response traffic is sent into your VPC.
We strongly recommend that you create your VPC using a private IP address range as specified
in RFC 1918.
For more information about route tables and routing in your VPC, see Route Tables in the Amazon VPC
User Guide.
If you enable a local VPC to communicate with a linked EC2-Classic instance in a peer VPC, a static route
is automatically added to your route tables with a destination of 10.0.0.0/8 and a target of local.
For more information and examples, see Configurations With ClassicLink in the Amazon VPC Peering
Guide.
ClassicLink limitations
To use the ClassicLink feature, you need to be aware of the following limitations:
• If you link your EC2-Classic instance to a VPC in the 172.16.0.0/16 range, and you have a DNS
server running on the 172.16.0.23/32 IP address within the VPC, then your linked EC2-Classic
instance can't access the VPC DNS server. To work around this issue, run your DNS server on a different
IP address within the VPC.
• ClassicLink doesn't support transitive relationships out of the VPC. Your linked EC2-Classic instance
doesn't have access to any VPN connection, VPC gateway endpoint, NAT gateway, or Internet gateway
associated with the VPC. Similarly, resources on the other side of a VPN connection or an Internet
gateway don't have access to a linked EC2-Classic instance.
Tasks
• Enable a VPC for ClassicLink (p. 1194)
• Create a VPC with ClassicLink enabled (p. 1194)
• Link an instance to a VPC (p. 1195)
• Link an instance to a VPC at launch (p. 1195)
• View your ClassicLink-enabled VPCs and linked instances (p. 1196)
• Enable ClassicLink DNS support (p. 1196)
• Disable ClassicLink DNS support (p. 1196)
• Unlink an instance from a VPC (p. 1196)
• Disable ClassicLink for a VPC (p. 1197)
If you want the public DNS hostname to resolve to the private IP address, enable ClassicLink DNS
support for the VPC before you link the instance. For more information, see Enable ClassicLink DNS
support (p. 1196).
a. For Network, choose Launch into EC2-Classic. If this option is disabled, then the instance type
is not supported on EC2-Classic.
b. Expand Link to VPC (ClassicLink) and choose a VPC from Link to VPC. The console displays
only VPCs with ClassicLink enabled.
5. Complete the rest of the steps in the wizard to launch your instance. For more information, see
Launch an instance using the Launch Instance Wizard (p. 565).
Examples
• Full permissions to work with ClassicLink (p. 1197)
• Enable and disable a VPC for ClassicLink (p. 1198)
• Link instances (p. 1198)
• Unlink instances (p. 1199)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeClassicLinkInstances",
        "ec2:DescribeVpcClassicLink",
        "ec2:EnableVpcClassicLink",
        "ec2:DisableVpcClassicLink",
        "ec2:AttachClassicLinkVpc",
        "ec2:DetachClassicLinkVpc"
      ],
      "Resource": "*"
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*VpcClassicLink",
      "Resource": "arn:aws:ec2:region:account:vpc/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/purpose": "classiclink"
        }
      }
    }
  ]
}
Link instances
The following policy grants users permissions to link instances to a VPC only if the instance is an
m3.large instance type. The second statement allows users to use the VPC and security group
resources, which are required to link an instance to a VPC.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:AttachClassicLinkVpc",
      "Resource": "arn:aws:ec2:region:account:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:InstanceType": "m3.large"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:AttachClassicLinkVpc",
      "Resource": [
        "arn:aws:ec2:region:account:vpc/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
The following policy grants users permissions to link instances to a specific VPC (vpc-1a2b3c4d) only,
and to associate only specific security groups from the VPC to the instance (sg-1122aabb and sg-
aabb2233). Users cannot link an instance to any other VPC, and they cannot associate any other
VPC security groups with the instance in the request.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:AttachClassicLinkVpc",
      "Resource": [
        "arn:aws:ec2:region:account:vpc/vpc-1a2b3c4d",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:security-group/sg-1122aabb",
        "arn:aws:ec2:region:account:security-group/sg-aabb2233"
      ]
    }
  ]
}
Unlink instances
The following policy grants users permission to unlink any linked EC2-Classic instance from a VPC, but only if
the instance has the tag "unlink=true". The second statement grants users permissions to use the VPC
resource, which is required to unlink an instance from a VPC.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DetachClassicLinkVpc",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/unlink": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DetachClassicLinkVpc",
      "Resource": [
        "arn:aws:ec2:region:account:vpc/*"
      ]
    }
  ]
}
You want a security group configuration that allows traffic to flow only between these instances. You
have four security groups: two for your web server (sg-1a1a1a1a and sg-2b2b2b2b), one for your
application server (sg-3c3c3c3c), and one for your database server (sg-4d4d4d4d).
The following diagram displays the architecture of your instances, and their security group configuration.
You have one security group in EC2-Classic, and the other in your VPC. You associated the VPC security
group with your web server instance when you linked the instance to your VPC via ClassicLink. The VPC
security group enables you to control the outbound traffic from your web server to your application
server.
The following are the security group rules for the EC2-Classic security group (sg-1a1a1a1a).
Inbound
The following are the security group rules for the VPC security group (sg-2b2b2b2b).
Outbound
The following are the security group rules for the VPC security group that's associated with your
application server.
Inbound
Outbound
The following are the security group rules for the VPC security group that's associated with your
database server.
Inbound
Migrate from EC2-Classic to a VPC
To migrate from EC2-Classic to a VPC, you must migrate or recreate your EC2-Classic resources in a VPC.
You can migrate and recreate your resources in full, or you can perform an incremental migration over
time using ClassicLink.
Contents
• Migrate your resources to a VPC (p. 1202)
• Use the AWSSupport-MigrateEC2ClassicToVPC runbook (p. 1205)
• Use ClassicLink for an incremental migration (p. 1206)
• Example: Migrate a simple web application (p. 1207)
• Using a default VPC (p. 1209)
Prerequisites
Before you begin, you must have a VPC. If you don't have a default VPC, you can create a nondefault VPC
using one of these methods:
• In the Amazon VPC console, use the VPC wizard to create a new VPC. For more information, see
Amazon VPC Console Wizard Configurations. Use this option if you want to set up a VPC quickly, using
one of the available configuration options.
• In the Amazon VPC console, set up the components of a VPC according to your requirements. For more
information, see VPCs and Subnets. Use this option if you have specific requirements for your VPC,
such as a particular number of subnets.
Resources
• Security groups (p. 1202)
• Elastic IP addresses (p. 1203)
• AMIs and instances (p. 1203)
• Amazon RDS DB instances (p. 1205)
• Classic Load Balancers (p. 1205)
Security groups
If you want your instances in your VPC to have the same security group rules as your EC2-Classic
instances, you can use the Amazon EC2 console to copy your existing EC2-Classic security group rules to
a new VPC security group.
You can only copy security group rules to a new security group in the same AWS account in the same
Region. If you are using a different Region or a different AWS account, you must create a new security
group and manually add the rules yourself. For more information, see Amazon EC2 security groups for
Linux instances (p. 1303).
Note
To identify an EC2-Classic security group, check the VPC ID column. For each EC2-Classic
security group, the value in the column is blank or a - symbol.
4. In the Create Security Group dialog box, specify a name and description for your new security
group. Select your VPC from the VPC list.
5. The Inbound tab is populated with the rules from your EC2-Classic security group. You can modify
the rules as required. In the Outbound tab, a rule that allows all outbound traffic has automatically
been created for you. For more information about modifying security group rules, see Amazon EC2
security groups for Linux instances (p. 1303).
Note
If you've defined a rule in your EC2-Classic security group that references another security
group, you cannot use the same rule in your VPC security group. Modify the rule to
reference a security group in the same VPC.
6. Choose Create.
Elastic IP addresses
You can migrate an Elastic IP address that is allocated for use in EC2-Classic for use with a VPC. You
cannot migrate an Elastic IP address to another Region or AWS account. For more information, see
Migrate an Elastic IP Address from EC2-Classic (p. 1188).
In the Amazon EC2 console, choose Elastic IPs in the navigation pane. In the Scope column, the value is
standard.
Contents
• Identify EC2-Classic instances (p. 1203)
• Create an AMI (p. 1204)
• (Optional) Share or copy your AMI (p. 1204)
• (Optional) Store your data on Amazon EBS volumes (p. 1204)
• Launch an instance into your VPC (p. 1205)
If you have instances running in both EC2-Classic and a VPC, you can identify your EC2-Classic instances.
Choose Instances in the navigation pane. In the VPC ID column, the value for each EC2-Classic instance
is blank or a - symbol. If the VPC ID column is not present, choose the gear icon and make the column
visible.
AWS CLI
Use the following describe-instances AWS CLI command. The --query parameter displays only instances
where the value for VpcId is null.
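A sketch of that command; the JMESPath expression below is an assumption that matches the behavior described (instances with no VpcId). Because the live call needs credentials, the same filter logic is applied to a sample response here:

```shell
# Live command (needs credentials) would look something like:
#   aws ec2 describe-instances \
#       --query 'Reservations[*].Instances[?VpcId==`null`].InstanceId'
# Hypothetical sample response standing in for the live call:
cat > instances.json <<'EOF'
{ "Reservations": [
    { "Instances": [ { "InstanceId": "i-classic111" } ] },
    { "Instances": [ { "InstanceId": "i-vpc222", "VpcId": "vpc-1a2b3c4d" } ] }
] }
EOF

# Keep only instances with no VpcId: these are EC2-Classic instances.
python3 - <<'PY'
import json

data = json.load(open("instances.json"))
classic = [inst["InstanceId"]
           for res in data["Reservations"]
           for inst in res["Instances"]
           if inst.get("VpcId") is None]
print("\n".join(classic))
PY
```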
Create an AMI
After you've identified your EC2-Classic instance, you can create an AMI from it.
To create a Linux AMI

• EBS: Create an EBS-backed AMI from your instance. For more information, see Creating an Amazon EBS-backed Linux AMI.
• Instance store: Create an instance store-backed AMI from your instance using the AMI tools. For more information, see Creating an instance store-backed Linux AMI.
To use your AMI to launch an instance in a VPC in a different Region, you must first copy the AMI to that
Region. For more information, see Copy an AMI (p. 170).
For more information about Amazon EBS volumes, see the following topics:
To back up the data on your Amazon EBS volume, you can take periodic snapshots of your volume.
For more information, see Create Amazon EBS snapshots (p. 1385). If you need to, you can create
an Amazon EBS volume from your snapshot. For more information, see Create a volume from a
snapshot (p. 1351).
After you've created an AMI, you can use the Amazon EC2 launch wizard to launch an instance into your
VPC. The instance will have the same data and configurations as your existing EC2-Classic instance.
Note
You can use this opportunity to upgrade to a current generation instance type. However, verify
that the instance type supports the type of virtualization that your AMI offers (PV or HVM). For
more information about PV and HVM virtualization, see Linux AMI virtualization types (p. 98).
For more information about the parameters that you can configure in each step of the wizard, see
Launch an instance using the Launch Instance Wizard (p. 565).
Use this option if you cannot afford downtime during the migration, for example, if you have a multi-tier
application with processes that cannot be interrupted.
Tasks
• Step 1: Prepare your migration sequence (p. 1206)
• Step 2: Enable your VPC for ClassicLink (p. 1206)
• Step 3: Link your EC2-Classic instances to your VPC (p. 1206)
• Step 4: Complete the VPC migration (p. 1207)
For example, you have an application that relies on a presentation web server, a backend database
server, and authentication logic for transactions. You may decide to start the migration process with the
authentication logic, then the database server, and finally, the web server.
Then, you can start migrating or recreating your resources. For more information, see Migrate your
resources to a VPC (p. 1202).
After you've enabled internal communication between the EC2-Classic and VPC instances, you must
update your application to point to your migrated service in your VPC, instead of your service in the EC2-
Classic platform. The exact steps for this depend on your application’s design. Generally, this includes
updating your destination IP addresses to point to the IP addresses of your VPC instances instead of your
EC2-Classic instances.
After you've completed this step and you've tested that the application is functioning from your VPC, you
can terminate your EC2-Classic instances, and disable ClassicLink for your VPC. You can also clean up any
EC2-Classic resources that you no longer need to avoid incurring charges for them. For example, you can
release Elastic IP addresses and delete the volumes that were associated with your EC2-Classic instances.
The first part of migrating to a VPC is deciding what kind of VPC architecture suits your needs. In this
case, you've decided on the following: one public subnet for your web servers, and one private subnet for
your database server. As your website grows, you can add more web servers and database servers to your
subnets. By default, instances in the private subnet cannot access the internet; however, you can enable
internet access through a Network Address Translation (NAT) device in the public subnet. You might
want to set up a NAT device to support periodic updates and patches from the internet for your database
server. You'll migrate your Elastic IP addresses to a VPC, and create a load balancer in your public subnet
to load balance the traffic between your web servers.
To migrate your web application to a VPC, you can follow these steps:
• Create a VPC: In this case, you can use the VPC wizard in the Amazon VPC console to create your VPC
and subnets. The second wizard configuration creates a VPC with one private and one public subnet,
and launches and configures a NAT device in your public subnet for you. For more information, see VPC
with public and private subnets (NAT) in the Amazon VPC User Guide.
• Configure your security groups: In your EC2-Classic environment, you have one security group for
your web servers, and another security group for your database server. You can use the Amazon EC2
console to copy the rules from each security group into new security groups for your VPC. For more
information, see Security groups (p. 1202).
Tip
Create the security groups that are referenced by other security groups first.
• Create AMIs and launch new instances: Create an AMI from one of your web servers, and a second
AMI from your database server. Then, launch replacement web servers into your public subnet, and
launch your replacement database server into your private subnet. For more information, see Create
an AMI (p. 1204).
• Configure your NAT device: If you are using a NAT instance, you must create a security group for
it that allows HTTP and HTTPS traffic from your private subnet. For more information, see NAT
instances. If you are using a NAT gateway, traffic from your private subnet is automatically allowed.
• Configure your database: When you created an AMI from your database server in EC2-Classic, all
of the configuration information that was stored in that instance was copied to the AMI. You might
have to connect to your new database server and update the configuration details. For example, if you
configured your database to grant full read, write, and modification permissions to your web servers in
EC2-Classic, you need to update the configuration files to grant the same permissions to your new VPC
web servers instead.
• Configure your web servers: Your web servers will have the same configuration settings as your
instances in EC2-Classic. For example, if you configured your web servers to use the database in EC2-
Classic, update your web servers' configuration settings to point to your new database instance.
Note
By default, instances launched into a nondefault subnet are not assigned a public IP address,
unless you specify otherwise at launch. Your new database server might not have a public
IP address. In this case, you can update your web servers' configuration file to use your new
database server's private DNS name. Instances in the same VPC can communicate with each
other via private IP address.
• Migrate your Elastic IP addresses: Disassociate your Elastic IP addresses from your web servers in EC2-
Classic, and then migrate them to a VPC. After you've migrated them, you can associate them with
your new web servers in your VPC. For more information, see Migrate an Elastic IP Address from EC2-
Classic (p. 1188).
• Create a new load balancer: To continue using Elastic Load Balancing to load balance the traffic to
your instances, make sure you understand the various ways to configure your load balancer in VPC. For
more information, see the Elastic Load Balancing User Guide.
• Update your DNS records: After you've set up your load balancer in your public subnet, verify that
your www.garden.example.com domain points to your new load balancer. To do this, update your
DNS records and your alias record set in Route 53. For more information about using Route 53, see
Getting Started with Route 53.
• Shut down your EC2-Classic resources: After you've verified that your web application is working from
within the VPC architecture, you can shut down your EC2-Classic resources to stop incurring charges
for them.
The following are options for using a default VPC when you have an AWS account that supports EC2-
Classic.
Options
• Switch to a VPC-only Region (p. 1209)
• Create a new AWS account (p. 1209)
• Convert your existing AWS account to VPC-only (p. 1209)
1. Delete or migrate (if applicable) the resources that you have created for use in EC2-Classic. These
include the following:
Infrastructure security
Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in
the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors
regularly test and verify the effectiveness of our security as part of the AWS Compliance Programs.
To learn about the compliance programs that apply to Amazon EC2, see AWS Services in Scope by
Compliance Program.
• Security in the cloud – Your responsibility includes the following areas:
• Controlling network access to your instances, for example, through configuring your VPC and
security groups. For more information, see Controlling network traffic (p. 1212).
• Managing the credentials used to connect to your instances.
• Managing the guest operating system and software deployed to the guest operating system,
including updates and security patches. For more information, see Update management in Amazon
EC2 (p. 1323).
• Configuring the IAM roles that are attached to the instance and the permissions associated with
those roles. For more information, see IAM roles for Amazon EC2 (p. 1275).
This documentation helps you understand how to apply the shared responsibility model when using
Amazon EC2. It shows you how to configure Amazon EC2 to meet your security and compliance
objectives. You also learn how to use other AWS services that help you to monitor and secure your
Amazon EC2 resources.
Contents
• Infrastructure security in Amazon EC2 (p. 1211)
• Amazon EC2 and interface VPC endpoints (p. 1213)
• Resilience in Amazon EC2 (p. 1214)
• Data protection in Amazon EC2 (p. 1215)
• Identity and access management for Amazon EC2 (p. 1217)
• Amazon EC2 key pairs and Linux instances (p. 1288)
• Amazon EC2 security groups for Linux instances (p. 1303)
• Update management in Amazon EC2 (p. 1323)
• Compliance validation for Amazon EC2 (p. 1323)
Additionally, requests must be signed using an access key ID and a secret access key that is associated
with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary
security credentials to sign requests.
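The signing key used for each request is itself derived from the secret access key through a chain of HMAC-SHA256 operations (the SigV4 scheme). The following is a minimal sketch of that key derivation only, using the placeholder secret key that appears in AWS documentation examples; it is not a full request signer.

```python
import hashlib
import hmac

def sign(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step in the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: secret -> date -> region -> service -> aws4_request."""
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Placeholder credentials only; never embed real secret keys in code.
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                         "20220830", "us-east-1", "ec2")
print(key.hex())
```

The derived key, not the secret access key itself, is what signs the request's string-to-sign; temporary credentials from AWS STS follow the same scheme with an additional session token header.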
Network isolation
A virtual private cloud (VPC) is a virtual network in your own logically isolated area in the AWS Cloud.
Use separate VPCs to isolate infrastructure by workload or organizational entity.
A subnet is a range of IP addresses in a VPC. When you launch an instance, you launch it into a subnet
in your VPC. Use subnets to isolate the tiers of your application (for example, web, application, and
database) within a single VPC. Use private subnets for your instances if they should not be accessed
directly from the internet.
To call the Amazon EC2 API from your VPC without sending traffic over the public internet, use AWS
PrivateLink.
When you stop or terminate an instance, the memory allocated to it is scrubbed (set to zero) by the
hypervisor before it is allocated to a new instance, and every block of storage is reset. This ensures that
your data is not unintentionally exposed to another instance.
Network MAC addresses are dynamically assigned to instances by the AWS network infrastructure. IP
addresses are either dynamically assigned to instances by the AWS network infrastructure, or assigned
by an EC2 administrator through authenticated API requests. The AWS network allows instances to send
traffic only from the MAC and IP addresses assigned to them. Otherwise, the traffic is dropped.
By default, an instance cannot receive traffic that is not specifically addressed to it. If you need to run
network address translation (NAT), routing, or firewall services on your instance, you can disable source/
destination checking for the network interface.
• Restrict access to your instances using security groups (p. 1303). For example, you can allow traffic
only from the address ranges for your corporate network.
• Use private subnets for your instances if they should not be accessed directly from the internet. Use a
bastion host or NAT gateway for internet access from an instance in a private subnet.
• Use AWS Virtual Private Network or AWS Direct Connect to establish private connections from your
remote networks to your VPCs. For more information, see Network-to-Amazon VPC Connectivity
Options.
• Use VPC Flow Logs to monitor the traffic that reaches your instances.
• Use AWS Security Hub to check for unintended network accessibility from your instances.
• Use EC2 Instance Connect (p. 602) to connect to your instances using Secure Shell (SSH) without the
need to share and manage SSH keys.
• Use AWS Systems Manager Session Manager to access your instances remotely instead of opening
inbound SSH ports and managing SSH keys.
• Use AWS Systems Manager Run Command to automate common administrative tasks instead of
opening inbound SSH ports and managing SSH keys.
In addition to restricting network access to each Amazon EC2 instance, Amazon VPC supports
implementing additional network security controls like in-line gateways, proxy servers, and various
network monitoring options.
For more information, see the AWS Security Best Practices whitepaper.
Amazon EC2 and interface VPC endpoints
You are not required to configure AWS PrivateLink, but it's recommended. For more information about
AWS PrivateLink and VPC endpoints, see Interface VPC Endpoints (AWS PrivateLink).
Topics
• Create an interface VPC endpoint (p. 1213)
• Create an interface VPC endpoint policy (p. 1213)
For more information, see Creating an Interface Endpoint in the Amazon VPC User Guide.
Important
When a non-default policy is applied to an interface VPC endpoint for Amazon EC2, certain
failed API requests, such as those that fail with a RequestLimitExceeded error, might not be
logged to AWS CloudTrail or Amazon CloudWatch.
For more information, see Controlling Access to Services with VPC Endpoints in the Amazon VPC User
Guide.
The following example shows a VPC endpoint policy that denies permission to create unencrypted
volumes or to launch instances with unencrypted volumes. The example policy also grants permission to
perform all other Amazon EC2 actions.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "ec2:*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
        },
        {
            "Action": [
                "ec2:CreateVolume"
            ],
            "Effect": "Deny",
            "Resource": "*",
            "Principal": "*",
            "Condition": {
                "Bool": {
                    "ec2:Encrypted": "false"
                }
            }
        },
        {
            "Action": [
                "ec2:RunInstances"
            ],
            "Effect": "Deny",
            "Resource": "*",
            "Principal": "*",
            "Condition": {
                "Bool": {
                    "ec2:Encrypted": "false"
                }
            }
        }
    ]
}
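As a rough illustration of how such a policy is evaluated (an explicit Deny overrides an Allow, and a request matched by no statement is implicitly denied), the following is a local Python simulation. It is a sketch only: it abbreviates the two Deny statements into one, supports just the Bool condition operator, and real evaluation happens inside AWS, not in client code.

```python
import json
from fnmatch import fnmatch

# Abbreviated copy of the endpoint policy shown above (the two Deny
# statements are merged into one for brevity).
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Action": "ec2:*", "Effect": "Allow", "Resource": "*", "Principal": "*"},
    {"Action": ["ec2:CreateVolume", "ec2:RunInstances"], "Effect": "Deny",
     "Resource": "*", "Principal": "*",
     "Condition": {"Bool": {"ec2:Encrypted": "false"}}}
  ]
}
""")

def condition_matches(condition, context):
    # Supports only the Bool operator, which is all this policy uses.
    return all(context.get(key) == expected
               for key, expected in condition.get("Bool", {}).items())

def is_allowed(action, context):
    """Explicit Deny overrides Allow; no matching statement is an implicit deny."""
    allowed = False
    for stmt in policy["Statement"]:
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        if not any(fnmatch(action, pattern) for pattern in actions):
            continue
        if not condition_matches(stmt.get("Condition", {}), context):
            continue
        if stmt["Effect"] == "Deny":
            return False
        allowed = True
    return allowed

print(is_allowed("ec2:CreateVolume", {"ec2:Encrypted": "false"}))  # False
print(is_allowed("ec2:CreateVolume", {"ec2:Encrypted": "true"}))   # True
```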
Resilience in Amazon EC2
If you need to replicate your data or applications over greater geographic distances, use AWS Local
Zones. An AWS Local Zone is an extension of an AWS Region in geographic proximity to your users. Local
Zones have their own connections to the internet and support AWS Direct Connect. Like all AWS Regions,
AWS Local Zones are completely isolated from other AWS Zones.
If you need to replicate your data or applications in an AWS Local Zone, AWS recommends that you use
one of the following zones as the failover zone:
For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.
In addition to the AWS global infrastructure, Amazon EC2 offers the following features to support your
data resiliency:
Data protection in Amazon EC2
For data protection purposes, we recommend that you protect AWS account credentials and set up
individual user accounts with AWS Identity and Access Management (IAM). That way each user is given
only the permissions necessary to fulfill their job duties. We also recommend that you secure your data
in the following ways:
We strongly recommend that you never put confidential or sensitive information, such as your
customers' email addresses, into tags or free-form fields such as a Name field. This includes when you
work with Amazon EC2 or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data
that you enter into tags or free-form fields used for names may be used for billing or diagnostic logs.
If you provide a URL to an external server, we strongly recommend that you do not include credentials
information in the URL to validate your request to that server.
Encryption at rest
EBS volumes
Amazon EBS encryption is an encryption solution for your EBS volumes and snapshots. It uses AWS KMS
keys. For more information, see Amazon EBS encryption (p. 1536).
The data on NVMe instance store volumes is encrypted using an XTS-AES-256 cipher, implemented on a
hardware module on the instance. The keys used to encrypt data that's written to locally-attached NVMe
storage devices are per-customer, and per volume. The keys are generated by, and only reside within, the
hardware module, which is inaccessible to AWS personnel. The encryption keys are destroyed when the
instance is stopped or terminated and cannot be recovered. You cannot disable this encryption and you
cannot provide your own encryption key.
The data on HDD instance store volumes on H1, D3, and D3en instances is encrypted using XTS-AES-256
and one-time keys.
Memory
• Instances with AWS Graviton2 processors, such as M6g instances. These processors support always-on
memory encryption. The encryption keys are securely generated within the host system, do not leave
the host system, and are destroyed when the host is rebooted or powered down.
• Instances with Intel Xeon Scalable processors (Ice Lake), such as M6i instances. These processors
support always-on memory encryption using Intel Total Memory Encryption (TME).
Encryption in transit
Encryption at the physical layer
All data flowing across AWS Regions over the AWS global network is automatically encrypted at the
physical layer before it leaves AWS secured facilities. All traffic between AZs is encrypted. Additional
layers of encryption, including those listed in this section, may provide additional protections.
All cross-Region traffic that uses Amazon VPC and Transit Gateway peering is automatically bulk-
encrypted when it exits a Region. An additional layer of encryption is automatically provided at the
physical layer for all cross-Region traffic, as previously noted in this section.
AWS provides secure and private connectivity between EC2 instances of all types. In addition, some
instance types use the offload capabilities of the underlying Nitro System hardware to automatically
encrypt in-transit traffic between instances, using AEAD algorithms with 256-bit encryption. There is
no impact on network performance. To support this additional in-transit traffic encryption between
instances, the following requirements must be met:
An additional layer of encryption is automatically provided at the physical layer for all traffic before it
leaves AWS secured facilities, as previously noted in this section.
To view the instance types that encrypt in-transit traffic between instances using the AWS CLI
An Outpost creates special network connections called service links to its AWS home Region and,
optionally, private connectivity to a VPC subnet that you specify. All traffic over those connections is fully
encrypted. For more information, see Connectivity through service links and Encryption in transit in the
AWS Outposts User Guide.
SSH provides a secure communications channel for remote access to your Linux instances, whether
directly or through EC2 Instance Connect. Remote access to your instances using AWS Systems Manager
Session Manager or the Run Command is encrypted using TLS 1.2, and requests to create a connection
are signed using SigV4 and authenticated and authorized by AWS Identity and Access Management.
It is your responsibility to use an encryption protocol, such as Transport Layer Security (TLS), to encrypt
sensitive data in transit between clients and your Amazon EC2 instances.
Identity and access management for Amazon EC2
Contents
• Network access to your instance (p. 1217)
• Amazon EC2 permission attributes (p. 1218)
• IAM and Amazon EC2 (p. 1218)
• IAM policies for Amazon EC2 (p. 1219)
• AWS managed policies for Amazon Elastic Compute Cloud (p. 1274)
• IAM roles for Amazon EC2 (p. 1275)
• Authorize inbound traffic for your Linux instances (p. 1285)
For more information, see Authorize inbound traffic for your Linux instances (p. 1285).
Amazon EC2 permission attributes
Each AMI has a LaunchPermission attribute that controls which AWS accounts can access the AMI. For
more information, see Make an AMI public (p. 115).
Each Amazon EBS snapshot has a createVolumePermission attribute that controls which AWS
accounts can use the snapshot. For more information, see Share an Amazon EBS snapshot (p. 1419).
By using IAM with Amazon EC2, you can control whether users in your organization can perform a task
using specific Amazon EC2 API actions and whether they can use specific AWS resources.
• PowerUserAccess
• ReadOnlyAccess
• AmazonEC2FullAccess
• AmazonEC2ReadOnlyAccess
To create an IAM user, add the user to your group, and create a password for the user
• Autogenerated password. Each user gets a randomly generated password that meets the current
password policy in effect (if any). You can view or download the passwords when you get to the
Final page.
• Custom password. Each user is assigned the password that you enter in the box.
5. Choose Next: Permissions.
6. On the Set permissions page, choose Add user to group. Select the check box next to the group
that you created earlier and choose Next: Review.
7. Choose Create user.
8. To view the users' access keys (access key IDs and secret access keys), choose Show next to each
password and secret access key that you want to see. To save the access keys, choose Download .csv and
then save the file to a safe location.
Important
You cannot retrieve the secret access key after you complete this step; if you misplace it you
must create a new one.
9. Choose Close.
10. Give each user his or her credentials (access keys and password); this enables them to use services
based on the permissions you specified for the IAM group.
Related topics
For more information about IAM, see the following:
IAM policies for Amazon EC2
When you attach a policy to a user or group of users, it allows or denies the users permission to perform
the specified tasks on the specified resources. For more general information about IAM policies, see
Permissions and Policies in the IAM User Guide. For more information about managing and creating
custom IAM policies, see Managing IAM Policies.
Getting Started
An IAM policy must grant or deny permissions to use one or more Amazon EC2 actions. It must also
specify the resources that can be used with the action, which can be all resources, or in some cases,
specific resources. The policy can also include conditions that you apply to the resource.
Amazon EC2 partially supports resource-level permissions. This means that for some EC2 API actions,
you cannot specify which resource a user is allowed to work with for that action. Instead, you have to
allow users to work with all resources for that action.
Define actions in your policy: Actions for Amazon EC2 (p. 1221)
Define specific resources in your policy: Amazon Resource Names (ARNs) for Amazon EC2 (p. 1222)
Apply conditions to the use of the resources: Condition keys for Amazon EC2 (p. 1223)
Work with the available resource-level permissions for Amazon EC2: Actions, resources, and condition keys for Amazon EC2
Example policies for a CLI or SDK: Example policies for working with the AWS CLI or an AWS SDK (p. 1228)
Example policies for the Amazon EC2 console: Example policies for working in the Amazon EC2 console (p. 1265)
Policy structure
The following topics explain the structure of an IAM policy.
Contents
• Policy syntax (p. 1220)
• Actions for Amazon EC2 (p. 1221)
• Supported resource-level permissions for Amazon EC2 API actions (p. 1222)
• Amazon Resource Names (ARNs) for Amazon EC2 (p. 1222)
• Condition keys for Amazon EC2 (p. 1223)
• Check that users have the required permissions (p. 1225)
Policy syntax
An IAM policy is a JSON document that consists of one or more statements. Each statement is structured
as follows.
{
    "Statement": [
        {
            "Effect": "effect",
            "Action": "action",
            "Resource": "arn",
            "Condition": {
                "condition": {
                    "key": "value"
                }
            }
        }
    ]
}
• Effect: The effect can be Allow or Deny. By default, IAM users don't have permission to use resources
and API actions, so all requests are denied. An explicit allow overrides the default. An explicit deny
overrides any allows.
• Action: The action is the specific API action for which you are granting or denying permission. To learn
about specifying action, see Actions for Amazon EC2 (p. 1221).
• Resource: The resource that's affected by the action. Some Amazon EC2 API actions allow you to
include specific resources in your policy that can be created or modified by the action. You specify
a resource using an Amazon Resource Name (ARN) or using the wildcard (*) to indicate that the
statement applies to all resources. For more information, see Supported resource-level permissions for
Amazon EC2 API actions (p. 1222).
• Condition: Conditions are optional. They can be used to control when your policy is in effect. For
more information about specifying conditions for Amazon EC2, see Condition keys for Amazon
EC2 (p. 1223).
For more information about example IAM policy statements for Amazon EC2, see Example policies for
working with the AWS CLI or an AWS SDK (p. 1228).
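The four elements map directly onto keys of a JSON object, so a statement can also be assembled programmatically. The following sketch uses a hypothetical make_statement helper (not an AWS API) to build a small two-statement policy:

```python
import json

def make_statement(effect, action, resource, condition=None):
    """Assemble one policy statement from the four elements described above."""
    if effect not in ("Allow", "Deny"):
        raise ValueError("Effect must be Allow or Deny")
    statement = {"Effect": effect, "Action": action, "Resource": resource}
    if condition:
        statement["Condition"] = condition
    return statement

policy = {
    "Version": "2012-10-17",
    "Statement": [
        # Allow all read-only Describe actions on any resource.
        make_statement("Allow", "ec2:Describe*", "*"),
        # Deny TerminateInstances outside us-east-1.
        make_statement(
            "Deny", "ec2:TerminateInstances", "*",
            {"StringNotEquals": {"aws:RequestedRegion": "us-east-1"}},
        ),
    ],
}
print(json.dumps(policy, indent=4))
```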
To specify multiple actions in a single statement, separate them with commas as follows:
"Action": ["ec2:action1", "ec2:action2"]
You can also specify multiple actions using wildcards. For example, you can specify all actions whose
name begins with the word "Describe" as follows:
"Action": "ec2:Describe*"
Note
Currently, the Amazon EC2 Describe* API actions do not support resource-level permissions.
For more information about resource-level permissions for Amazon EC2, see IAM policies for
Amazon EC2 (p. 1219).
To specify all Amazon EC2 API actions, use the * wildcard as follows:
"Action": "ec2:*"
For a list of Amazon EC2 actions, see Actions defined by Amazon EC2 in the Service Authorization
Reference.
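The wildcard here behaves like ordinary glob-style matching over action names, which can be sketched with Python's fnmatch:

```python
from fnmatch import fnmatch

actions = ["ec2:DescribeInstances", "ec2:DescribeVolumes",
           "ec2:RunInstances", "ec2:TerminateInstances"]

# Which of these actions does "Action": "ec2:Describe*" cover?
matched = [a for a in actions if fnmatch(a, "ec2:Describe*")]
print(matched)  # ['ec2:DescribeInstances', 'ec2:DescribeVolumes']
```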
To specify a resource in an IAM policy statement, use its Amazon Resource Name (ARN). For more
information about specifying the ARN value, see Amazon Resource Names (ARNs) for Amazon
EC2 (p. 1222). If an API action does not support individual ARNs, you must use a wildcard (*) to specify
that all resources can be affected by the action.
To see tables that identify which Amazon EC2 API actions support resource-level permissions, and the
ARNs and condition keys that you can use in a policy, see Actions, resources, and condition keys for
Amazon EC2.
Keep in mind that you can apply tag-based resource-level permissions in the IAM policies you use for
Amazon EC2 API actions. This gives you better control over which resources a user can create, modify, or
use. For more information, see Grant permission to tag resources during creation (p. 1225).
arn:aws:[service]:[region]:[account-id]:resourceType/resourcePath
service
The service (for example, ec2).
region
The Region for the resource (for example, us-east-1).
account-id
The AWS account ID, with no hyphens (for example, 123456789012).
resourceType
The type of resource (for example, instance).
resourcePath
A path that identifies the resource. You can use the * wildcard in your paths.
For example, you can indicate a specific instance (i-1234567890abcdef0) in your statement using its
ARN as follows.
"Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0"
You can specify all instances that belong to a specific account by using the * wildcard as follows.
"Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*"
You can also specify all Amazon EC2 resources that belong to a specific account by using the * wildcard
as follows.
"Resource": "arn:aws:ec2:us-east-1:123456789012:*"
To specify all resources, or if a specific API action does not support ARNs, use the * wildcard in the
Resource element as follows.
"Resource": "*"
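Because ARNs are colon-delimited, the components described above can be pulled apart with a simple parser, and a wildcard Resource value behaves like a glob pattern over the full ARN string. A sketch (parse_arn is a hypothetical helper, not an AWS API):

```python
from fnmatch import fnmatch

def parse_arn(arn: str) -> dict:
    """Split arn:partition:service:region:account-id:resourceType/resourcePath."""
    _, partition, service, region, account, resource = arn.split(":", 5)
    resource_type, _, resource_path = resource.partition("/")
    return {"partition": partition, "service": service, "region": region,
            "account": account, "resource_type": resource_type,
            "resource_path": resource_path}

arn = "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0"
print(parse_arn(arn)["resource_type"])  # instance

# A wildcard Resource element covers any instance in the account:
print(fnmatch(arn, "arn:aws:ec2:us-east-1:123456789012:instance/*"))  # True
```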
Many Amazon EC2 API actions involve multiple resources. For example, AttachVolume attaches an
Amazon EBS volume to an instance, so an IAM user must have permissions to use the volume and the
instance. To specify multiple resources in a single statement, separate their ARNs with commas, as
follows.
"Resource": ["arn:aws:ec2:us-east-1:123456789012:instance/*", "arn:aws:ec2:us-east-1:123456789012:volume/*"]
For a list of ARNs for Amazon EC2 resources, see Resource types defined by Amazon EC2.
For a list of service-specific condition keys for Amazon EC2, see Condition keys for Amazon EC2. Amazon
EC2 also implements the AWS-wide condition keys. For more information, see Information available in all
requests in the IAM User Guide.
To use a condition key in your IAM policy, use the Condition statement. For example, the following
policy grants users permission to add and remove inbound and outbound rules for any security group. It
uses the ec2:Vpc condition key to specify that these actions can only be performed on security groups
in a specific VPC.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:RevokeSecurityGroupEgress"
            ],
            "Resource": "arn:aws:ec2:region:account:security-group/*",
            "Condition": {
                "StringEquals": {
                    "ec2:Vpc": "arn:aws:ec2:region:account:vpc/vpc-11223344556677889"
                }
            }
        }
    ]
}
If you specify multiple conditions, or multiple keys in a single condition, we evaluate them using a
logical AND operation. If you specify a single condition with multiple values for one key, we evaluate the
condition using a logical OR operation. For permissions to be granted, all conditions must be met.
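A sketch of those evaluation rules (logical AND across keys, logical OR across the values listed for a single key), assuming only the StringEquals operator:

```python
def string_equals_matches(condition_block, request_context):
    """AND across condition keys; OR across the values listed for one key."""
    for key, allowed in condition_block.items():
        if isinstance(allowed, str):
            allowed = [allowed]
        if request_context.get(key) not in allowed:
            return False  # one key failed, so the whole condition fails
    return True

condition = {
    "ec2:Vpc": "arn:aws:ec2:region:account:vpc/vpc-11223344556677889",
    "ec2:InstanceType": ["t3.micro", "t3.small"],  # either value satisfies the key
}
ctx = {
    "ec2:Vpc": "arn:aws:ec2:region:account:vpc/vpc-11223344556677889",
    "ec2:InstanceType": "t3.small",
}
print(string_equals_matches(condition, ctx))  # True
ctx["ec2:InstanceType"] = "m5.large"
print(string_equals_matches(condition, ctx))  # False
```

The ec2:InstanceType values shown are illustrative; the real condition keys available per action are listed in the Service Authorization Reference.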
You can also use placeholders when you specify conditions. For example, you can grant an IAM user
permission to use resources with a tag that specifies his or her IAM user name. For more information, see
IAM policy elements: Variables and tags in the IAM User Guide.
Important
Many condition keys are specific to a resource, and some API actions use multiple resources.
If you write a policy with a condition key, use the Resource element of the statement to
specify the resource to which the condition key applies. If not, the policy may prevent users
from performing the action at all, because the condition check fails for the resources to which
the condition key does not apply. If you do not want to specify a resource, or if you've written
the Action element of your policy to include multiple API actions, then you must use the
...IfExists condition type to ensure that the condition key is ignored for resources that do
not use it. For more information, see ...IfExists Conditions in the IAM User Guide.
All Amazon EC2 actions support the aws:RequestedRegion and ec2:Region condition keys. For more
information, see Example: Restrict access to a specific Region (p. 1229).
The ec2:SourceInstanceARN condition key can be used for conditions that specify the ARN of
the instance from which a request is made. This condition key is available AWS-wide and is not
service-specific. For policy examples, see Amazon EC2: Attach or detach volumes to an EC2 instance
and Example: Allow a specific instance to view resources in other AWS services (p. 1261). The
ec2:SourceInstanceARN key cannot be used as a variable to populate the ARN for the Resource
element in a statement.
For example policy statements for Amazon EC2, see Example policies for working with the AWS CLI or an
AWS SDK (p. 1228).
The ec2:Attribute condition key can be used for conditions that filter access by an attribute of a
resource. The condition key supports only properties that are of a primitive data type, such as a string
or integer, or complex objects that have only a Value property, such as the Description object of the
ModifyImageAttribute API action.
For example, the following policy uses the ec2:Attribute/Description condition key to filter access
by the complex Description object of the ModifyImageAttribute API action. The condition key allows
only requests that modify an image's description to either Production or Development.
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:ModifyImageAttribute",
            "Resource": "arn:aws:ec2:us-east-1::image/ami-*",
            "Condition": {
                "StringEquals": {
                    "ec2:Attribute/Description": ["Production", "Development"]
                }
            }
        }
    ]
}
The following example policy uses the ec2:Attribute condition key to filter access by the primitive
Attribute property of the ModifyImageAttribute API action. The condition key denies all requests that
attempt to modify an image's description.
{
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:ModifyImageAttribute",
            "Resource": "arn:aws:ec2:us-east-1::image/ami-*",
            "Condition": {
                "StringEquals": {
                    "ec2:Attribute": "Description"
                }
            }
        }
    ]
}
First, create an IAM user for testing purposes, and then attach the IAM policy that you created to the test
user. Then, make a request as the test user.
If the Amazon EC2 action that you are testing creates or modifies a resource, you should make the
request using the DryRun parameter (or run the AWS CLI command with the --dry-run option). In
this case, the call completes the authorization check, but does not complete the operation. For example,
you can check whether the user can terminate a particular instance without actually terminating it. If
the test user has the required permissions, the request returns DryRunOperation; otherwise, it returns
UnauthorizedOperation.
If the policy doesn't grant the user the permissions that you expected, or is overly permissive, you can
adjust the policy as needed and retest until you get the desired results.
Important
It can take several minutes for policy changes to propagate before they take effect. Therefore,
we recommend that you allow five minutes to pass before you test your policy updates.
If an authorization check fails, the request returns an encoded message with diagnostic information. You
can decode the message using the DecodeAuthorizationMessage action. For more information, see
DecodeAuthorizationMessage in the AWS Security Token Service API Reference, and decode-authorization-
message in the AWS CLI Command Reference.
To enable users to tag resources on creation, they must have permissions to use the action that creates
the resource, such as ec2:RunInstances or ec2:CreateVolume. If tags are specified in the resource-
creating action, Amazon EC2 performs additional authorization on the ec2:CreateTags action to verify
that users have permissions to create tags. Therefore, users must also have explicit permissions to use the
ec2:CreateTags action.
In the IAM policy definition for the ec2:CreateTags action, use the Condition element with the
ec2:CreateAction condition key to give tagging permissions to the action that creates the resource.
The following example demonstrates a policy that allows users to launch instances and apply any tags to
instances and volumes during launch. Users are not permitted to tag any existing resources (they cannot
call the ec2:CreateTags action directly).
{
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:region:account:*/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction" : "RunInstances"
}
}
}
]
}
Similarly, the following policy allows users to create volumes and apply any tags to the volumes
during volume creation. Users are not permitted to tag any existing resources (they cannot call the
ec2:CreateTags action directly).
{
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateVolume"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:region:account:*/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction" : "CreateVolume"
}
}
}
]
}
The ec2:CreateTags action is only evaluated if tags are applied during the resource-creating action.
Therefore, a user that has permissions to create a resource (assuming there are no tagging conditions)
does not require permissions to use the ec2:CreateTags action if no tags are specified in the request.
However, if the user attempts to create a resource with tags, the request fails if the user does not have
permissions to use the ec2:CreateTags action.
The ec2:CreateTags action is also evaluated if tags are provided in a launch template. For an example
policy, see Tags in a launch template (p. 1249).
The following condition keys can be used with the examples in the preceding section:
• aws:RequestTag: Indicates that a particular tag key, or tag key and value, must be present in a
request. Other tags can also be specified in the request.
• Use with the StringEquals condition operator to enforce a specific tag key and value combination,
for example, to enforce the tag cost-center=cc123.
• Use with the StringLike condition operator to enforce a specific tag key in the request, for
example, to enforce the tag key purpose.
• aws:TagKeys: Enforces the tag keys that are used in the request.
• Use with the ForAllValues modifier to enforce specific tag keys if they are provided in the request
(if tags are specified in the request, only the specified tag keys are allowed; no other tags are allowed).
For example, only the tag keys environment or cost-center are allowed.
• Use with the ForAnyValue modifier to enforce the presence of at least one of the specified tag
keys in the request. For example, at least one of the tag keys environment or webserver must be
present in the request.
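As a sketch, the four cases above might use Condition fragments like the following (these are fragments only, not complete policy statements, and the tag keys and values mirror the examples in the text):

```json
{"Condition": {"StringEquals": {"aws:RequestTag/cost-center": "cc123"}}}

{"Condition": {"StringLike": {"aws:RequestTag/purpose": "*"}}}

{"Condition": {"ForAllValues:StringEquals": {"aws:TagKeys": ["environment", "cost-center"]}}}

{"Condition": {"ForAnyValue:StringEquals": {"aws:TagKeys": ["environment", "webserver"]}}}
```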
These condition keys can be applied to resource-creating actions that support tagging, as well as the
ec2:CreateTags and ec2:DeleteTags actions. To learn whether an Amazon EC2 API action supports
tagging, see Actions, resources, and condition keys for Amazon EC2.
To force users to specify tags when they create a resource, you must use the aws:RequestTag condition
key or the aws:TagKeys condition key with the ForAnyValue modifier on the resource-creating action.
The ec2:CreateTags action is not evaluated if a user does not specify tags for the resource-creating
action.
For conditions, the condition key is not case-sensitive and the condition value is case-sensitive. Therefore,
to enforce the case-sensitivity of a tag key, use the aws:TagKeys condition key, where the tag key is
specified as a value in the condition.
For example IAM policies, see Example policies for working with the AWS CLI or an AWS SDK (p. 1228).
For more information about multi-value conditions, see Creating a Condition That Tests Multiple Key
Values in the IAM User Guide.
For example, you can create a policy that allows users to terminate an instance, but denies the action
if the instance has the tag environment=production. To do this, you use the aws:ResourceTag
condition key to allow or deny access to the resource based on the tags that are attached to the resource.
To learn whether an Amazon EC2 API action supports controlling access using the aws:ResourceTag
condition key, see Actions, resources, and condition keys for Amazon EC2. Note that the Describe
actions do not support resource-level permissions, so you must specify them in a separate statement
without conditions.
For example IAM policies, see Example policies for working with the AWS CLI or an AWS SDK (p. 1228).
If you allow or deny users access to resources based on tags, you must consider explicitly denying users
the ability to add those tags to or remove them from the same resources. Otherwise, it's possible for a
user to circumvent your restrictions and gain access to a resource by modifying its tags.
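For example, a sketch of a statement that explicitly denies adding or removing the environment tag on instances, so that users cannot retag an instance to escape a restriction based on environment=production:

```json
{
  "Effect": "Deny",
  "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
  "Resource": "arn:aws:ec2:region:account-id:instance/*",
  "Condition": {
    "ForAnyValue:StringEquals": {
      "aws:TagKeys": "environment"
    }
  }
}
```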
Example policies for working with the AWS CLI or an AWS SDK
The following examples show policy statements that you could use to control the permissions that IAM
users have to Amazon EC2. These policies are designed for requests that are made with the AWS CLI
or an AWS SDK. For example policies for working in the Amazon EC2 console, see Example policies for
working in the Amazon EC2 console (p. 1265). For examples of IAM policies specific to Amazon VPC, see
Identity and Access Management for Amazon VPC.
In the following examples, replace each user input placeholder with your own information.
Examples
• Example: Read-only access (p. 1228)
• Example: Restrict access to a specific Region (p. 1229)
• Work with instances (p. 1229)
• Work with volumes (p. 1231)
• Work with snapshots (p. 1233)
• Launch instances (RunInstances) (p. 1241)
• Work with Spot Instances (p. 1252)
• Example: Work with Reserved Instances (p. 1257)
• Example: Tag resources (p. 1258)
• Example: Work with IAM roles (p. 1259)
• Example: Work with route tables (p. 1261)
• Example: Allow a specific instance to view resources in other AWS services (p. 1261)
• Example: Work with launch templates (p. 1262)
• Work with instance metadata (p. 1262)
Example: Read-only access
The following policy grants users permission to use all Amazon EC2 API actions whose names begin with
Describe. Users don't have permission to perform any other actions on the resources (unless another
statement grants them permission to do so) because they're denied permission to use API actions by default.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
}
]
}
Example: Restrict access to a specific Region
The following policy denies users permission to use all Amazon EC2 API actions unless the Region is
eu-central-1. It uses the global condition key aws:RequestedRegion, which is supported by all Amazon
EC2 API actions.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "ec2:*",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"aws:RequestedRegion": "eu-central-1"
}
}
}
]
}
Alternatively, you can use the condition key ec2:Region, which is specific to Amazon EC2 and is
supported by all Amazon EC2 API actions.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "ec2:*",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"ec2:Region": "eu-central-1"
}
}
}
]
}
The following policy grants users permissions to use the API actions specified in the Action element.
The Resource element uses a * wildcard to indicate that users can specify all resources with these API
actions. The * wildcard is also necessary in cases where the API action does not support resource-level
permissions. For more information about which ARNs you can use with which Amazon EC2 API actions,
see Actions, resources, and condition keys for Amazon EC2.
The users don't have permission to use any other API actions (unless another statement grants them
permission to do so) because users are denied permission to use API actions by default.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeImages",
"ec2:DescribeKeyPairs",
"ec2:DescribeSecurityGroups",
"ec2:DescribeAvailabilityZones",
"ec2:RunInstances",
"ec2:TerminateInstances",
"ec2:StopInstances",
"ec2:StartInstances"
],
"Resource": "*"
}
]
}
Example: Describe all instances, and stop, start, and terminate only particular instances
The following policy allows users to describe all instances, to start and stop only instances
i-1234567890abcdef0 and i-0598c7d356eba48d7, and to terminate only instances in the US East (N.
Virginia) Region (us-east-1) with the resource tag "purpose=test".
The first statement uses a * wildcard for the Resource element to indicate that users can
specify all resources with the action; in this case, they can list all instances. The * wildcard is also
necessary in cases where the API action does not support resource-level permissions (in this case,
ec2:DescribeInstances). For more information about which ARNs you can use with which Amazon
EC2 API actions, see Actions, resources, and condition keys for Amazon EC2.
The second statement uses resource-level permissions for the StopInstances and StartInstances
actions. The specific instances are indicated by their ARNs in the Resource element.
The third statement allows users to terminate all instances in the US East (N. Virginia) Region
(us-east-1) that belong to the specified AWS account, but only where the instance has the tag
"purpose=test". The Condition element qualifies when the policy statement is in effect.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:DescribeInstances",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:StopInstances",
"ec2:StartInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1:account-id:instance/i-1234567890abcdef0",
"arn:aws:ec2:us-east-1:account-id:instance/i-0598c7d356eba48d7"
]
},
{
"Effect": "Allow",
"Action": "ec2:TerminateInstances",
"Resource": "arn:aws:ec2:us-east-1:account-id:instance/*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/purpose": "test"
}
}
}
]
}
When an API action requires a caller to specify multiple resources, you must create a policy statement
that allows users to access all required resources. If you need to use a Condition element with one or
more of these resources, you must create multiple statements as shown in this example.
The following policy allows users to attach volumes with the tag "volume_user=iam-user-name" to
instances with the tag "department=dev", and to detach those volumes from those instances. If you
attach this policy to an IAM group, the aws:username policy variable gives each IAM user in the group
permission to attach or detach volumes from the instances with a tag named volume_user that has his
or her IAM user name as a value.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AttachVolume",
"ec2:DetachVolume"
],
"Resource": "arn:aws:ec2:us-east-1:account-id:instance/*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/department": "dev"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:AttachVolume",
"ec2:DetachVolume"
],
"Resource": "arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/volume_user": "${aws:username}"
}
}
}
]
}
The following policy allows users to use the CreateVolume API action. The user is allowed to create a
volume only if the volume is encrypted and only if the volume size is less than 20 GiB.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateVolume"
],
"Resource": "arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition":{
"NumericLessThan": {
"ec2:VolumeSize" : "20"
},
"Bool":{
"ec2:Encrypted" : "true"
}
}
}
]
}
The following policy includes the aws:RequestTag condition key that requires users to tag any volumes
they create with the tags costcenter=115 and stack=prod. The aws:TagKeys condition key uses
the ForAllValues modifier to indicate that only the keys costcenter and stack are allowed in the
request (no other tags can be specified). If users don't pass these specific tags, or if they don't specify
tags at all, the request fails.
For resource-creating actions that apply tags, users must also have permissions to use the CreateTags
action. The second statement uses the ec2:CreateAction condition key to allow users to create tags
only in the context of CreateVolume. Users cannot tag existing volumes or any other resources. For
more information, see Grant permission to tag resources during creation (p. 1225).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCreateTaggedVolumes",
"Effect": "Allow",
"Action": "ec2:CreateVolume",
"Resource": "arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/costcenter": "115",
"aws:RequestTag/stack": "prod"
},
"ForAllValues:StringEquals": {
"aws:TagKeys": ["costcenter","stack"]
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction" : "CreateVolume"
}
}
}
]
}
The following policy allows users to create a volume without having to specify tags. The CreateTags
action is only evaluated if tags are specified in the CreateVolume request. If users do specify tags, the
tag must be purpose=test. No other tags are allowed in the request.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:CreateVolume",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/purpose": "test",
"ec2:CreateAction" : "CreateVolume"
},
"ForAllValues:StringEquals": {
"aws:TagKeys": "purpose"
}
}
}
]
}
Examples
• Example: Create a snapshot (p. 1234)
• Example: Create snapshots (p. 1234)
• Example: Create a snapshot with tags (p. 1235)
The following policy allows customers to use the CreateSnapshot API action. The customer can create
snapshots only if the volume is encrypted and only if the volume size is less than 20 GiB.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshot",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*"
},
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshot",
"Resource":"arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition":{
"NumericLessThan":{
"ec2:VolumeSize":"20"
},
"Bool":{
"ec2:Encrypted":"true"
}
}
}
]
}
The following policy allows customers to use the CreateSnapshots API action. The customer can create
snapshots only if all of the volumes attached to the instance are of type gp2.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshots",
"Resource":[
"arn:aws:ec2:us-east-1::snapshot/*",
"arn:aws:ec2:*:*:instance/*"
]
},
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshots",
"Resource":"arn:aws:ec2:us-east-1:*:volume/*",
"Condition":{
"StringLikeIfExists":{
"ec2:VolumeType":"gp2"
}
}
}
]
}
The following policy includes the aws:RequestTag condition key, which requires customers to tag
any snapshots they create with costcenter=115 and stack=prod; the aws:TagKeys condition key uses
the ForAllValues modifier to indicate that only these keys are allowed in the request.
For resource-creating actions that apply tags, customers must also have permissions to use the
CreateTags action. The third statement uses the ec2:CreateAction condition key to allow
customers to create tags only in the context of CreateSnapshot. Customers cannot tag existing
volumes or any other resources. For more information, see Grant permission to tag resources during
creation (p. 1225).
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshot",
"Resource":"arn:aws:ec2:us-east-1:account-id:volume/*"
},
{
"Sid":"AllowCreateTaggedSnapshots",
"Effect":"Allow",
"Action":"ec2:CreateSnapshot",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"aws:RequestTag/costcenter":"115",
"aws:RequestTag/stack":"prod"
},
"ForAllValues:StringEquals":{
"aws:TagKeys":[
"costcenter",
"stack"
]
}
}
},
{
"Effect":"Allow",
"Action":"ec2:CreateTags",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"ec2:CreateAction":"CreateSnapshot"
}
}
}
]
}
Similarly, the following policy for CreateSnapshots requires the tags costcenter=115 and
stack=prod on the snapshots, and uses the ec2:CreateAction condition key to allow customers
to create tags only in the context of CreateSnapshots.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshots",
"Resource":[
"arn:aws:ec2:us-east-1::snapshot/*",
"arn:aws:ec2:*:*:instance/*",
"arn:aws:ec2:*:*:volume/*"
]
},
{
"Sid":"AllowCreateTaggedSnapshots",
"Effect":"Allow",
"Action":"ec2:CreateSnapshots",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"aws:RequestTag/costcenter":"115",
"aws:RequestTag/stack":"prod"
},
"ForAllValues:StringEquals":{
"aws:TagKeys":[
"costcenter",
"stack"
]
}
}
},
{
"Effect":"Allow",
"Action":"ec2:CreateTags",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"ec2:CreateAction":"CreateSnapshots"
}
}
}
]
}
The following policy allows customers to create a snapshot without having to specify tags.
The CreateTags action is evaluated only if tags are specified in the CreateSnapshot or
CreateSnapshots request. If a tag is specified, the tag must be purpose=test. No other tags are
allowed in the request.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshot",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":"ec2:CreateTags",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"aws:RequestTag/purpose":"test",
"ec2:CreateAction":"CreateSnapshot"
},
"ForAllValues:StringEquals":{
"aws:TagKeys":"purpose"
}
}
}
]
}
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshots",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":"ec2:CreateTags",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"aws:RequestTag/purpose":"test",
"ec2:CreateAction":"CreateSnapshots"
},
"ForAllValues:StringEquals":{
"aws:TagKeys":"purpose"
}
}
}
]
}
The following policy allows snapshots to be created only if the source volume is tagged with
User:username for the customer, and the snapshot itself is tagged with Environment:Dev and
User:username. The customer can add additional tags to the snapshot.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshot",
"Resource":"arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition":{
"StringEquals":{
"aws:ResourceTag/User":"${aws:username}"
}
}
},
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshot",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"aws:RequestTag/Environment":"Dev",
"aws:RequestTag/User":"${aws:username}"
}
}
},
{
"Effect":"Allow",
"Action":"ec2:CreateTags",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*"
}
]
}
The following policy for CreateSnapshots allows snapshots to be created only if the source
volume is tagged with User:username for the customer, and the snapshot itself is tagged with
Environment:Dev and User:username.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshots",
"Resource":"arn:aws:ec2:us-east-1:*:instance/*",
},
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshots",
"Resource":"arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition":{
"StringEquals":{
"aws:ResourceTag/User":"${aws:username}"
}
}
},
{
"Effect":"Allow",
"Action":"ec2:CreateSnapshots",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"aws:RequestTag/Environment":"Dev",
"aws:RequestTag/User":"${aws:username}"
}
}
},
{
"Effect":"Allow",
"Action":"ec2:CreateTags",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*"
}
]
}
The following policy allows deletion of a snapshot only if the snapshot is tagged with User:username for
the customer.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:DeleteSnapshot",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"aws:ResourceTag/User":"${aws:username}"
}
}
}
]
}
The following policy allows a customer to create a snapshot but denies the action if the snapshot being
created has a tag with the key stack.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":[
"ec2:CreateSnapshot",
"ec2:CreateTags"
],
"Resource":"*"
},
{
"Effect":"Deny",
"Action":"ec2:CreateSnapshot",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"ForAnyValue:StringEquals":{
"aws:TagKeys":"stack"
}
}
}
]
}
The following policy allows a customer to create snapshots but denies the action if the snapshots being
created have a tag with the key stack.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":[
"ec2:CreateSnapshots",
"ec2:CreateTags"
],
"Resource":"*"
},
{
"Effect":"Deny",
"Action":"ec2:CreateSnapshots",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"ForAnyValue:StringEquals":{
"aws:TagKeys":"stack"
}
}
}
]
}
The following policy combines multiple actions into a single policy. You can create a
snapshot (in the context of CreateSnapshot) only when the snapshot is created in the Region us-east-1. You
can create snapshots (in the context of CreateSnapshots) only when the snapshots are being created
in the Region us-east-1 and when the instance type is t2*.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":[
"ec2:CreateSnapshots",
"ec2:CreateSnapshot",
"ec2:CreateTags"
],
"Resource": [
"arn:aws:ec2:*:*:instance/*",
"arn:aws:ec2:*:*:snapshot/*",
"arn:aws:ec2:*:*:volume/*"
],
"Condition":{
"StringEqualsIgnoreCase": {
"ec2:Region": "us-east-1"
},
"StringLikeIfExists": {
"ec2:InstanceType": ["t2.*"]
}
}
}
]
}
Resource-level permissions specified for the CopySnapshot action apply to the new snapshot only. They
cannot be specified for the source snapshot.
The following example policy allows principals to copy snapshots only if the new snapshot is created
with tag key of purpose and a tag value of production (purpose=production).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCopySnapshotWithTags",
"Effect": "Allow",
"Action": "ec2:CopySnapshot",
"Resource": "arn:aws:ec2:*:account-id:snapshot/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/purpose": "production"
}
}
}
]
}
The following policy allows modification of a snapshot only if the snapshot has the tag
user-name=<username>, where <username> is the customer's IAM user name. The request fails if this
condition is not met.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":"ec2:ModifySnapshotAttribute",
"Resource":"arn:aws:ec2:us-east-1::snapshot/*",
"Condition":{
"StringEquals":{
"aws:ResourceTag/user-name":"${aws:username}"
}
}
}
]
}
For more information about the resource-level permissions that are required to launch an instance, see
Actions, resources, and condition keys for Amazon EC2.
By default, users don't have permissions to describe, start, stop, or terminate the resulting instances. One
way to grant the users permission to manage the resulting instances is to create a specific tag for each
instance, and then create a statement that enables them to manage instances with that tag. For more
information, see Work with instances (p. 1229).
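As a sketch of that approach (the tag key created-by is a hypothetical choice, not a required name), the following statement allows users to stop, start, and terminate only instances tagged with their own IAM user name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StopInstances",
        "ec2:StartInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:region:account-id:instance/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/created-by": "${aws:username}"
        }
      }
    }
  ]
}
```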
Resources
• AMIs (p. 1241)
• Instance types (p. 1242)
• Subnets (p. 1243)
• EBS volumes (p. 1244)
• Tags (p. 1245)
• Tags in a launch template (p. 1249)
• Elastic GPUs (p. 1250)
• Launch templates (p. 1250)
AMIs
The following policy allows users to launch instances using only the specified AMIs, ami-9e1670f7 and
ami-45cf5c3c. The users can't launch an instance using other AMIs (unless another statement grants
the users permission to do so).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region::image/ami-9e1670f7",
"arn:aws:ec2:region::image/ami-45cf5c3c",
"arn:aws:ec2:region:account-id:instance/*",
"arn:aws:ec2:region:account-id:volume/*",
"arn:aws:ec2:region:account-id:key-pair/*",
"arn:aws:ec2:region:account-id:security-group/*",
"arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:network-interface/*"
]
}
]
}
Alternatively, the following policy allows users to launch instances from all AMIs owned by Amazon. The
Condition element of the first statement tests whether ec2:Owner is amazon. The users can't launch
an instance using other AMIs (unless another statement grants the users permission to do so).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region::image/ami-*"
],
"Condition": {
"StringEquals": {
"ec2:Owner": "amazon"
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region:account-id:instance/*",
"arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:volume/*",
"arn:aws:ec2:region:account-id:network-interface/*",
"arn:aws:ec2:region:account-id:key-pair/*",
"arn:aws:ec2:region:account-id:security-group/*"
]
}
]
}
Instance types
The following policy allows users to launch instances using only the t2.micro or t2.small instance
type, which you might do to control costs. The users can't launch larger instances because the
Condition element of the first statement tests whether ec2:InstanceType is either t2.micro or
t2.small.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region:account-id:instance/*"
],
"Condition": {
"StringEquals": {
"ec2:InstanceType": ["t2.micro", "t2.small"]
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region::image/ami-*",
"arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:network-interface/*",
"arn:aws:ec2:region:account-id:volume/*",
"arn:aws:ec2:region:account-id:key-pair/*",
"arn:aws:ec2:region:account-id:security-group/*"
]
}
]
}
Alternatively, you can create a policy that denies users permissions to launch any instances except
t2.micro and t2.small instance types.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region:account-id:instance/*"
],
"Condition": {
"StringNotEquals": {
"ec2:InstanceType": ["t2.micro", "t2.small"]
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region::image/ami-*",
"arn:aws:ec2:region:account-id:network-interface/*",
"arn:aws:ec2:region:account-id:instance/*",
"arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:volume/*",
"arn:aws:ec2:region:account-id:key-pair/*",
"arn:aws:ec2:region:account-id:security-group/*"
]
}
]
}
Subnets
The following policy allows users to launch instances using only the specified subnet,
subnet-12345678. Users can't launch instances into any other subnet (unless another statement
grants the users permission to do so).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region:account-id:subnet/subnet-12345678",
"arn:aws:ec2:region:account-id:network-interface/*",
"arn:aws:ec2:region:account-id:instance/*",
"arn:aws:ec2:region:account-id:volume/*",
"arn:aws:ec2:region::image/ami-*",
"arn:aws:ec2:region:account-id:key-pair/*",
"arn:aws:ec2:region:account-id:security-group/*"
]
}
]
}
Alternatively, you could create a policy that denies users permissions to launch an instance into any other
subnet. The statement does this by denying permission to create a network interface, except where
subnet subnet-12345678 is specified. This denial overrides any other policies that are created to allow
launching instances into other subnets.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region:account-id:network-interface/*"
],
"Condition": {
"ArnNotEquals": {
"ec2:Subnet": "arn:aws:ec2:region:account-id:subnet/subnet-12345678"
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:region::image/ami-*",
"arn:aws:ec2:region:account-id:network-interface/*",
"arn:aws:ec2:region:account-id:instance/*",
"arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:volume/*",
"arn:aws:ec2:region:account-id:key-pair/*",
"arn:aws:ec2:region:account-id:security-group/*"
]
}
]
}
EBS volumes
The following policy allows users to launch instances only if the EBS volumes for the instance are
encrypted. The user must launch an instance from an AMI that was created with encrypted snapshots, to
ensure that the root volume is encrypted. Any additional volume that the user attaches to the instance
during launch must also be encrypted.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:*:*:volume/*"
],
"Condition": {
"Bool": {
"ec2:Encrypted": "true"
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:*::image/ami-*",
"arn:aws:ec2:*:*:network-interface/*",
"arn:aws:ec2:*:*:instance/*",
"arn:aws:ec2:*:*:subnet/*",
"arn:aws:ec2:*:*:key-pair/*",
"arn:aws:ec2:*:*:security-group/*"
]
}
]
}
Tags
The following policy allows users to launch instances and tag the instances during creation. For resource-
creating actions that apply tags, users must have permissions to use the CreateTags action. The second
statement uses the ec2:CreateAction condition key to allow users to create tags only in the context
of RunInstances, and only for instances. Users cannot tag existing resources, and users cannot tag
volumes using the RunInstances request.
For more information, see Grant permission to tag resources during creation (p. 1225).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:us-east-1:account-id:instance/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction" : "RunInstances"
}
}
}
]
}
The following policy includes the aws:RequestTag condition key that requires users to tag any
instances and volumes that are created by RunInstances with the tags environment=production
and purpose=webserver. The aws:TagKeys condition key uses the ForAllValues modifier to
indicate that only the keys environment and purpose are allowed in the request (no other tags can be
specified). If no tags are specified in the request, the request fails.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:region::image/*",
"arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:network-interface/*",
"arn:aws:ec2:region:account-id:security-group/*",
"arn:aws:ec2:region:account-id:key-pair/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:region:account-id:volume/*",
"arn:aws:ec2:region:account-id:instance/*"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/environment": "production" ,
"aws:RequestTag/purpose": "webserver"
},
"ForAllValues:StringEquals": {
"aws:TagKeys": ["environment","purpose"]
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:region:account-id:*/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction" : "RunInstances"
}
}
}
]
}
Tag instances and volumes on creation with at least one specific tag
The following policy uses the ForAnyValue modifier on the aws:TagKeys condition to indicate that at
least one tag must be specified in the request, and it must contain the key environment or webserver.
The tag must be applied to both instances and volumes. Any tag values can be specified in the request.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:region::image/*",
"arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:network-interface/*",
"arn:aws:ec2:region:account-id:security-group/*",
"arn:aws:ec2:region:account-id:key-pair/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:region:account-id:volume/*",
"arn:aws:ec2:region:account-id:instance/*"
],
"Condition": {
"ForAnyValue:StringEquals": {
"aws:TagKeys": ["environment","webserver"]
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:region:account-id:*/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction" : "RunInstances"
}
}
}
]
}
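The ForAnyValue semantics can be sketched the same way (an illustrative model, not the real policy engine; the function name is ours):

```python
REQUIRED_ANY = {"environment", "webserver"}

def tags_satisfy_for_any_value(request_keys):
    # ForAnyValue:StringEquals on aws:TagKeys: at least one key in the request
    # must match one of the listed keys; tag values are unconstrained.
    return any(k in REQUIRED_ANY for k in request_keys)

print(tags_satisfy_for_any_value(["environment"]))   # True
print(tags_satisfy_for_any_value(["cost-center"]))   # False: no matching key
print(tags_satisfy_for_any_value([]))                # False: no tags specified
```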
If instances are tagged on creation, they must be tagged with a specific tag
In the following policy, users do not have to specify tags in the request, but if they do, the tag must be
purpose=test. No other tags are allowed. Users can apply the tags to any taggable resource in the
RunInstances request.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:region:account-id:*/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/purpose": "test",
"ec2:CreateAction" : "RunInstances"
},
"ForAllValues:StringEquals": {
"aws:TagKeys": "purpose"
}
}
}
]
}
The following policy allows users to launch instances, but denies them permission to tag any resources on create.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRun",
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1::image/*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:security-group/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:instance/*",
"arn:aws:ec2:us-east-1:*:spot-instances-request/*"
]
},
{
"Sid": "VisualEditor0",
"Effect": "Deny",
"Action": "ec2:CreateTags",
"Resource": "*"
}
]
}
The following policy allows specific tags only for the spot-instances-request resource. This is where the second inconsistency comes into play. Under normal circumstances, specifying no tags in the request results in an UnauthorizedOperation error. In the case of spot-instances-request, however, this policy is not evaluated if the Spot Instance request has no tags, so a RunInstances request that creates an untagged Spot Instance request succeeds.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRun",
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1::image/*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:security-group/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:instance/*",
]
},
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": "arn:aws:ec2:us-east-1:*:spot-instances-request/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/environment": "production"
}
}
}
]
}
In the following example, users can launch instances, but only if they use a specific launch template
(lt-09477bcd97b0d310e). The ec2:IsLaunchTemplateResource condition key prevents users from
overriding any of the resources specified in the launch template. The second part of the statement allows
users to tag instances on creation—this part of the statement is necessary if tags are specified for the
instance in the launch template.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": "*",
"Condition": {
"ArnLike": {
"ec2:LaunchTemplate": "arn:aws:ec2:region:account-id:launch-template/
lt-09477bcd97b0d310e"
},
"Bool": {
"ec2:IsLaunchTemplateResource": "true"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:region:account-id:instance/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction" : "RunInstances"
}
}
}
]
}
Elastic GPUs
In the following policy, users can launch an instance and specify an elastic GPU to attach to the instance.
Users can launch instances in any Region, but they can only attach an elastic GPU during a launch in the
us-east-2 Region.
The ec2:ElasticGpuType condition key uses the ForAnyValue modifier to indicate that only the
elastic GPU types eg1.medium and eg1.large are allowed in the request.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:*:account-id:elastic-gpu/*"
],
"Condition": {
"StringEquals": {
"ec2:Region": "us-east-2"
},
"ForAnyValue:StringLike": {
"ec2:ElasticGpuType": [
"eg1.medium",
"eg1.large"
]
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:*::image/ami-*",
"arn:aws:ec2:*:account-id:network-interface/*",
"arn:aws:ec2:*:account-id:instance/*",
"arn:aws:ec2:*:account-id:subnet/*",
"arn:aws:ec2:*:account-id:volume/*",
"arn:aws:ec2:*:account-id:key-pair/*",
"arn:aws:ec2:*:account-id:security-group/*"
]
}
]
}
Launch templates
In the following example, users can launch instances, but only if they use a specific launch template
(lt-09477bcd97b0d310e). Users can override any parameters in the launch template by specifying the
parameters in the RunInstances action.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": "*",
"Condition": {
"ArnLike": {
"ec2:LaunchTemplate": "arn:aws:ec2:region:account-id:launch-template/
lt-09477bcd97b0d310e"
}
}
}
]
}
In this example, users can launch instances only if they use a launch template. The policy uses the
ec2:IsLaunchTemplateResource condition key to prevent users from overriding any pre-existing
ARNs in the launch template.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": "*",
"Condition": {
"ArnLike": {
"ec2:LaunchTemplate": "arn:aws:ec2:region:account-id:launch-template/*"
},
"Bool": {
"ec2:IsLaunchTemplateResource": "true"
}
}
}
]
}
The following example policy allows users to launch instances, but only if they use a launch template.
Users cannot override the subnet and network interface parameters in the request; these parameters
can only be specified in the launch template. The first part of the statement uses the NotResource
element to allow all other resources except subnets and network interfaces. The second part of the
statement allows the subnet and network interface resources, but only if they are sourced from the
launch template.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"NotResource": ["arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:network-interface/*" ],
"Condition": {
"ArnLike": {
"ec2:LaunchTemplate": "arn:aws:ec2:region:account-id:launch-template/*"
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": ["arn:aws:ec2:region:account-id:subnet/*",
"arn:aws:ec2:region:account-id:network-interface/*" ],
"Condition": {
"ArnLike": {
"ec2:LaunchTemplate": "arn:aws:ec2:region:account-id:launch-template/*"
},
"Bool": {
"ec2:IsLaunchTemplateResource": "true"
}
}
}
]
}
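The NotResource split can be modeled in a short sketch (an illustration of the documented NotResource semantics; `fnmatchcase` is our stand-in for IAM's * wildcard matching):

```python
import fnmatch

EXCLUDED = ["arn:aws:ec2:region:account-id:subnet/*",
            "arn:aws:ec2:region:account-id:network-interface/*"]

def covered_by_not_resource(resource_arn):
    # A statement with NotResource applies to every resource that does NOT
    # match any of the listed ARN patterns. Subnets and network interfaces are
    # excluded here, so they must be allowed by the second statement instead.
    return not any(fnmatch.fnmatchcase(resource_arn, pat) for pat in EXCLUDED)

print(covered_by_not_resource("arn:aws:ec2:region:account-id:volume/vol-1"))       # True
print(covered_by_not_resource("arn:aws:ec2:region:account-id:subnet/subnet-1"))    # False
```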
The following example allows users to launch instances only if they use a launch template, and only
if the launch template has the tag Purpose=Webservers. Users cannot override any of the launch
template parameters in the RunInstances action.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"NotResource": "arn:aws:ec2:region:account-id:launch-template/*",
"Condition": {
"ArnLike": {
"ec2:LaunchTemplate": "arn:aws:ec2:region:account-id:launch-template/*"
},
"Bool": {
"ec2:IsLaunchTemplateResource": "true"
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": "arn:aws:ec2:region:account-id:launch-template/*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/Purpose": "Webservers"
}
}
}
]
}
• If you don't tag a Spot Instance request on create, Amazon EC2 does not evaluate the spot-
instances-request resource in the RunInstances statement.
• If you tag a Spot Instance request on create, Amazon EC2 evaluates the spot-instances-request
resource in the RunInstances statement.
Therefore, for the spot-instances-request resource, the following rules apply to the IAM policy:
• If you use RunInstances to create a Spot Instance request and you don't intend to tag the Spot Instance
request on create, you don’t need to explicitly allow the spot-instances-request resource; the call
will succeed.
• If you use RunInstances to create a Spot Instance request and intend to tag the Spot Instance request
on create, you must include the spot-instances-request resource in the RunInstances allow
statement, otherwise the call will fail.
• If you use RunInstances to create a Spot Instance request and intend to tag the Spot Instance request
on create, you must specify the spot-instances-request resource or * wildcard in the CreateTags
allow statement, otherwise the call will fail.
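These evaluation rules can be summarized in a short Python sketch (a simplified model of the documented behavior, not the actual IAM engine; the function names are ours):

```python
def spot_request_resource_evaluated(tags_on_spot_request):
    # Amazon EC2 evaluates the spot-instances-request resource in the
    # RunInstances statement only if the Spot Instance request is tagged on create.
    return bool(tags_on_spot_request)

def run_instances_succeeds(allowed_resources, tags_on_spot_request):
    # Simplified: when the resource is evaluated, it must be explicitly allowed.
    if spot_request_resource_evaluated(tags_on_spot_request):
        return "spot-instances-request" in allowed_resources
    return True  # not evaluated, so no explicit allow is needed

# Untagged request: succeeds even without spot-instances-request in the policy.
print(run_instances_succeeds({"instance", "volume"}, {}))                               # True
# Tagged request: fails unless spot-instances-request is explicitly allowed.
print(run_instances_succeeds({"instance", "volume"}, {"env": "prod"}))                  # False
print(run_instances_succeeds({"instance", "spot-instances-request"}, {"env": "prod"}))  # True
```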
You can request Spot Instances using RunInstances or RequestSpotInstances. The following example IAM
policies apply only when requesting Spot Instances using RunInstances.
The following policy allows users to request Spot Instances by using the RunInstances action. The spot-
instances-request resource, which is created by RunInstances, requests Spot Instances.
Note
To use RunInstances to create Spot Instance requests, you can omit spot-instances-
request from the Resource list if you do not intend to tag the Spot Instance requests on
create. This is because Amazon EC2 does not evaluate the spot-instances-request resource
in the RunInstances statement if the Spot Instance request is not tagged on create.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRun",
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1::image/*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:security-group/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:instance/*",
"arn:aws:ec2:us-east-1:*:spot-instances-request/*"
]
}
]
}
Warning
NOT SUPPORTED – Example: Deny users permission to request Spot Instances using
RunInstances
The following policy is not supported for the spot-instances-request resource.
The following policy is meant to give users the permission to launch On-Demand Instances, but
deny users the permission to request Spot Instances. The spot-instances-request resource,
which is created by RunInstances, is the resource that requests Spot Instances. The second
statement is meant to deny the RunInstances action for the spot-instances-request
resource. However, this condition is not supported because Amazon EC2 does not evaluate
the spot-instances-request resource in the RunInstances statement if the Spot Instance
request is not tagged on create.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRun",
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1::image/*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:security-group/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:instance/*"
]
},
{
"Sid": "DenySpotInstancesRequests - NOT SUPPORTED - DO NOT USE!",
"Effect": "Deny",
"Action": "ec2:RunInstances",
"Resource": "arn:aws:ec2:us-east-1:*:spot-instances-request/*"
}
]
}
The following policy allows users to tag all resources that are created during instance launch. The first
statement allows RunInstances to create the listed resources. The spot-instances-request resource,
which is created by RunInstances, is the resource that requests Spot Instances. The second statement
provides a * wildcard to allow all resources to be tagged when they are created at instance launch.
Note
If you tag a Spot Instance request on create, Amazon EC2 evaluates the spot-instances-
request resource in the RunInstances statement. Therefore, you must explicitly allow the
spot-instances-request resource for the RunInstances action, otherwise the call will fail.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRun",
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1::image/*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:security-group/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:instance/*",
"arn:aws:ec2:us-east-1:*:spot-instances-request/*"
]
},
{
"Sid": "TagResources",
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": "*"
}
]
}
The following policy denies users the permission to tag the resources that are created during instance
launch.
The first statement allows RunInstances to create the listed resources. The spot-instances-request
resource, which is created by RunInstances, is the resource that requests Spot Instances. The second
statement provides a * wildcard to deny all resources being tagged when they are created at instance
launch. If spot-instances-request or any other resource is tagged on create, the RunInstances call
will fail.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRun",
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1::image/*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:security-group/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:instance/*",
"arn:aws:ec2:us-east-1:*:spot-instances-request/*"
]
},
{
"Sid": "DenyTagResources",
"Effect": "Deny",
"Action": "ec2:CreateTags",
"Resource": "*"
}
]
}
Warning
NOT SUPPORTED – Example: Allow creating a Spot Instance request only if it is assigned a
specific tag
The following policy is not supported for the spot-instances-request resource.
The following policy is meant to grant RunInstances the permission to create a Spot Instance
request only if the request is tagged with a specific tag.
The first statement allows RunInstances to create the listed resources.
The second statement is meant to grant users the permission to create a Spot Instance request only if the request has the tag environment=production. When this condition is applied to other resources created by RunInstances, specifying no tags results in an UnauthorizedOperation error. However, if no tags are specified for the Spot Instance request, Amazon EC2 does not evaluate the spot-instances-request resource in the RunInstances statement, so RunInstances creates the untagged Spot Instance request anyway.
Note that specifying any tag other than environment=production results in an UnauthorizedOperation error, because if a user tags a Spot Instance request, Amazon EC2 evaluates the spot-instances-request resource in the RunInstances statement.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRun",
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1::image/*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:security-group/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:instance/*"
]
},
{
"Sid": "RequestSpotInstancesOnlyIfTagIs_environment=production - NOT
SUPPORTED - DO NOT USE!",
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": "arn:aws:ec2:us-east-1:*:spot-instances-request/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/environment": "production"
}
}
},
{
"Sid": "TagResources",
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": "*"
}
]
}
The following policy denies RunInstances the permission to create a Spot Instance request if the request
is tagged with environment=production.
The second statement denies users the permission to create a Spot Instance request if the request has
the tag environment=production. Specifying environment=production as a tag results in an UnauthorizedOperation error. Specifying other tags or specifying no tags results in the creation of a Spot Instance request.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRun",
"Effect": "Allow",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:us-east-1::image/*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:security-group/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:instance/*",
"arn:aws:ec2:us-east-1:*:spot-instances-request/*"
]
},
{
"Sid": "DenySpotInstancesRequests",
"Effect": "Deny",
"Action": "ec2:RunInstances",
"Resource": "arn:aws:ec2:us-east-1:*:spot-instances-request/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/environment": "production"
}
}
},
{
"Sid": "TagResources",
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": "*"
}
]
}
The following policy gives users permission to view, modify, and purchase Reserved Instances in your account. It is not possible to set resource-level permissions for individual Reserved Instances. This policy means that users have access to all the Reserved Instances in the account.
The Resource element uses a * wildcard to indicate that users can specify all resources with the action;
in this case, they can list and modify all Reserved Instances in the account. They can also purchase
Reserved Instances using the account credentials. The * wildcard is also necessary in cases where the API
action does not support resource-level permissions.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeReservedInstances",
"ec2:ModifyReservedInstances",
"ec2:PurchaseReservedInstancesOffering",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeReservedInstancesOfferings"
],
"Resource": "*"
}
]
}
The following policy allows users to view and modify the Reserved Instances in your account, but not to purchase new Reserved Instances.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeReservedInstances",
"ec2:ModifyReservedInstances",
"ec2:DescribeAvailabilityZones"
],
"Resource": "*"
}
]
}
The following policy allows users to tag instances, but only with the tag environment=production. The ForAllValues modifier indicates that environment is the only tag key allowed in the request.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:region:account-id:instance/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/environment": "production"
},
"ForAllValues:StringEquals": {
"aws:TagKeys": [
"environment"
]
}
}
}
]
}
The following policy allows users to tag any taggable resource that already has a tag with a key
of owner and a value of the IAM username. In addition, users must specify a tag with a key of
anycompany:environment-type and a value of either test or prod in the request. Users can specify
additional tags in the request.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:region:account-id:*/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/anycompany:environment-type": ["test","prod"],
"aws:ResourceTag/owner": "${aws:username}"
}
}
}
]
}
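The policy-variable substitution in this example can be sketched as follows (a simplified illustration of how ${aws:username} is resolved from the request context before the condition is evaluated; the function names are ours):

```python
def substitute_policy_variables(value, context):
    # Replace IAM policy variables such as ${aws:username} with the values
    # from the request context (illustrative only).
    for var, actual in context.items():
        value = value.replace("${" + var + "}", actual)
    return value

def owner_condition_passes(resource_tags, context):
    # aws:ResourceTag/owner must equal the caller's resolved username.
    expected = substitute_policy_variables("${aws:username}", context)
    return resource_tags.get("owner") == expected

print(owner_condition_passes({"owner": "alice"}, {"aws:username": "alice"}))  # True
print(owner_condition_passes({"owner": "bob"},   {"aws:username": "alice"}))  # False
```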
You can create an IAM policy that allows users to delete specific tags for a resource. For example, the
following policy allows users to delete tags for a volume if the tag keys specified in the request are
environment or cost-center. Any value can be specified for the tag but the tag key must match
either of the specified keys.
Note
If you delete a resource, all tags associated with the resource are also deleted. Users do not need
permissions to use the ec2:DeleteTags action to delete a resource that has tags; they only need permissions to perform the action that deletes the resource.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:DeleteTags",
"Resource": "arn:aws:ec2:us-east-1:account-id:volume/*",
"Condition": {
"ForAllValues:StringEquals": {
"aws:TagKeys": ["environment","cost-center"]
}
}
}
]
}
This policy allows users to delete only the environment=prod tag on any resource, and only if the
resource is already tagged with a key of owner and a value of the IAM username. Users cannot delete
any other tags for a resource.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DeleteTags"
],
"Resource": "arn:aws:ec2:region:account-id:*/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/environment": "prod",
"aws:ResourceTag/owner": "${aws:username}"
},
"ForAllValues:StringEquals": {
"aws:TagKeys": ["environment"]
}
}
}
]
}
The following policy allows users to attach, replace, and detach an IAM role for instances that have the tag department=test. IAM users must have permission to use the iam:PassRole action in order to pass the role to the instance.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AssociateIamInstanceProfile",
"ec2:ReplaceIamInstanceProfileAssociation",
"ec2:DisassociateIamInstanceProfile"
],
"Resource": "arn:aws:ec2:us-east-1:account-id:instance/*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/department":"test"
}
}
},
{
"Effect": "Allow",
"Action": "ec2:DescribeIamInstanceProfileAssociations",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::account-id:role/DevTeam*"
}
]
}
The following policy allows users to attach or replace an IAM role for any instance. Users can only attach
or replace IAM roles with names that begin with TestRole-. For the iam:PassRole action, ensure that
you specify the name of the IAM role and not the instance profile (if the names are different). For more
information, see Instance profiles (p. 1276).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AssociateIamInstanceProfile",
"ec2:ReplaceIamInstanceProfileAssociation"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:DescribeIamInstanceProfileAssociations",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::account-id:role/TestRole-*"
}
]
}
The following policy allows users to delete, create, and replace routes, but only for route tables that are associated with the specified VPC (vpc-ec43eb89).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DeleteRoute",
"ec2:CreateRoute",
"ec2:ReplaceRoute"
],
"Resource": [
"arn:aws:ec2:region:account-id:route-table/*"
],
"Condition": {
"StringEquals": {
"ec2:Vpc": "arn:aws:ec2:region:account-id:vpc/vpc-ec43eb89"
}
}
}
]
}
The following policy allows access to the listed actions only when the request is made from the specified instance (i-093452212644b0dd6). The ec2:SourceInstanceARN key is an AWS global condition key, so it can be used for the actions of other services, not just Amazon EC2.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVolumes",
"s3:ListAllMyBuckets",
"dynamodb:ListTables",
"rds:DescribeDBInstances"
],
"Resource": [
"*"
],
"Condition": {
"ArnEquals": {
"ec2:SourceInstanceARN": "arn:aws:ec2:region:account-id:instance/
i-093452212644b0dd6"
}
}
}
]
}
The following policy allows users to create a new version of a launch template and to modify a launch template, but only for the specified launch template (lt-09477bcd97b0d3abc).
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:CreateLaunchTemplateVersion",
"ec2:ModifyLaunchTemplate"
],
"Effect": "Allow",
"Resource": "arn:aws:ec2:region:account-id:launch-template/lt-09477bcd97b0d3abc"
}
]
}
The following policy allows users to delete any launch template and launch template version, provided
that the launch template has the tag Purpose=Testing.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:DeleteLaunchTemplate",
"ec2:DeleteLaunchTemplateVersions"
],
"Effect": "Allow",
"Resource": "arn:aws:ec2:region:account-id:launch-template/*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/Purpose": "Testing"
}
}
}
]
}
For more information, see the policies in Work with instances (p. 1229) and Launch instances (RunInstances) (p. 1241).
Important
If you use Auto Scaling groups and you need to require the use of IMDSv2 on all new instances,
your Auto Scaling groups must use launch templates.
When an Auto Scaling group uses a launch template, the ec2:RunInstances permissions of
the IAM principal are checked when a new Auto Scaling group is created. They are also checked
when an existing Auto Scaling group is updated to use a new launch template or a new version
of a launch template.
Restrictions on the use of IMDSv1 on IAM principals for RunInstances are only checked when an Auto Scaling group that uses a launch template is created or updated. For an
Auto Scaling group that is configured to use the Latest or Default launch template, the
permissions are not checked when a new version of the launch template is created. For
permissions to be checked, you must configure the Auto Scaling group to use a specific version
of the launch template.
To enforce the use of IMDSv2 on instances launched by Auto Scaling groups, the
following additional steps are required:
1. Disable the use of launch configurations for all accounts in your organization by using
either service control policies (SCPs) or IAM permissions boundaries for new principals
that are created. For existing IAM principals with Auto Scaling group permissions,
update their associated policies with this condition key. To disable the use of launch
configurations, create or modify the relevant SCP, permissions boundary, or IAM policy with
the "autoscaling:LaunchConfigurationName" condition key with the value specified as
null.
2. For new launch templates, configure the instance metadata options in the launch template.
For existing launch templates, create a new version of the launch template and configure the
instance metadata options in the new version.
3. In the policy that gives any principal the permission to use a launch
template, restrict association of $latest and $default by specifying
"autoscaling:LaunchTemplateVersionSpecified": "true". By restricting the
use to a specific version of a launch template, you can ensure that new instances will be
launched using the version in which the instance metadata options are configured. For more
information, see LaunchTemplateSpecification in the Amazon EC2 Auto Scaling API Reference,
specifically the Version parameter.
4. For an Auto Scaling group that uses a launch configuration, replace the launch configuration
with a launch template. For more information, see Replacing a Launch Configuration with a
Launch Template in the Amazon EC2 Auto Scaling User Guide.
5. For an Auto Scaling group that uses a launch template, make sure that it uses a new launch
template with the instance metadata options configured, or uses a new version of the current
launch template with the instance metadata options configured. For more information, see
update-auto-scaling-group in the AWS CLI Command Reference.
Examples
• Require the use of IMDSv2 (p. 1263)
• Specify maximum hop limit (p. 1264)
• Limit who can modify the instance metadata options (p. 1264)
• Require role credentials to be retrieved from IMDSv2 (p. 1265)
The following policy specifies that you can't launch an instance unless the instance is opted in to require the use of IMDSv2. If you don't specify that the instance requires IMDSv2, you get an UnauthorizedOperation error when you call the RunInstances API.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "RequireImdsV2",
"Effect": "Deny",
"Action": "ec2:RunInstances",
"Resource": "arn:aws:ec2:*:*:instance/*",
"Condition": {
"StringNotEquals": {
"ec2:MetadataHttpTokens": "required"
}
}
}
]
}
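The effect of this Deny statement can be sketched as follows (a simplified model for illustration: with StringNotEquals, a missing condition key also matches, which is why an instance that does not set HttpTokens at all is denied):

```python
def launch_denied(metadata_options):
    # The Deny statement matches (and the launch fails) whenever HttpTokens is
    # anything other than "required" -- including when it is not set at all.
    return metadata_options.get("HttpTokens") != "required"

print(launch_denied({"HttpTokens": "required"}))   # False: launch allowed
print(launch_denied({"HttpTokens": "optional"}))   # True: launch denied
print(launch_denied({}))                           # True: launch denied
```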
The following policy specifies that you can’t call the RunInstances API unless you also specify a hop limit,
and the hop limit can’t be more than 3. If you fail to do that, you get an UnauthorizedOperation
error when you call the RunInstances API.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "MaxImdsHopLimit",
"Effect": "Deny",
"Action": "ec2:RunInstances",
"Resource": "arn:aws:ec2:*:*:instance/*",
"Condition": {
"NumericGreaterThan": {
"ec2:MetadataHttpPutResponseHopLimit": "3"
}
}
}
]
}
The following policy removes the ability for the general population of administrators to modify instance
metadata options, and permits only users with the role ec2-imds-admins to make changes. If any
principal other than the ec2-imds-admins role tries to call the ModifyInstanceMetadataOptions API,
it will get an UnauthorizedOperation error. This statement could be used to control the use of the
ModifyInstanceMetadataOptions API; there are currently no fine-grained access controls (conditions) for
the ModifyInstanceMetadataOptions API.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowOnlyImdsAdminsToModifySettings",
"Effect": "Deny",
"Action": "ec2:ModifyInstanceMetadataOptions",
"Resource": "*",
"Condition": {
"StringNotLike": {
"aws:PrincipalARN": "arn:aws:iam::*:role/ec2-imds-admins"
}
}
}
]
}
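The wildcard matching used by StringNotLike can be approximated with a small helper (our sketch: '*' matches any sequence of characters and '?' matches a single character, per the documented StringLike operators):

```python
import re

def string_like(pattern, value):
    # Approximate IAM StringLike matching by translating the pattern to a regex.
    regex = "^" + re.escape(pattern).replace(r"\*", ".*").replace(r"\?", ".") + "$"
    return re.match(regex, value) is not None

PATTERN = "arn:aws:iam::*:role/ec2-imds-admins"

# StringNotLike denies the call when the principal does NOT match the pattern.
print(not string_like(PATTERN, "arn:aws:iam::123456789012:role/ec2-imds-admins"))  # False: allowed
print(not string_like(PATTERN, "arn:aws:iam::123456789012:role/other-role"))       # True: denied
```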
The following policy specifies that if this policy is applied to a role, and the role is assumed by
the EC2 service and the resulting credentials are used to sign a request, then the request must
be signed by EC2 role credentials retrieved from IMDSv2. Otherwise, all of its API calls will get an
UnauthorizedOperation error. This policy can be applied generally because, if the request is not signed by EC2 role credentials, it has no effect.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "RequireAllEc2RolesToUseV2",
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"NumericLessThan": {
"ec2:RoleDelivery": "2.0"
}
}
}
]
}
Examples
• Example: Read-only access (p. 1266)
• Example: Use the EC2 launch wizard (p. 1267)
• Example: Work with volumes (p. 1270)
• Example: Work with security groups (p. 1270)
• Example: Work with Elastic IP addresses (p. 1272)
For additional information about creating policies for the Amazon EC2 console, see the following AWS
Security Blog post: Granting Users Permission to Work in the Amazon EC2 Console.
Alternatively, you can provide read-only access to a subset of resources. To do this, replace the *
wildcard in the ec2:Describe API action with specific ec2:Describe actions for each resource. The
following policy allows users to view all instances, AMIs, and snapshots in the Amazon EC2 console.
The ec2:DescribeTags action allows users to view public AMIs. The console requires the tagging
information to display public AMIs; however, you can remove this action to allow users to view only
private AMIs.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeImages",
"ec2:DescribeTags",
"ec2:DescribeSnapshots"
],
"Resource": "*"
}
]
}
Note
The Amazon EC2 ec2:Describe* API actions do not support resource-level permissions, so
you cannot control which individual resources users can view in the console. Therefore, the *
wildcard is necessary in the Resource element of the above statement. For more information
about which ARNs you can use with which Amazon EC2 API actions, see Actions, resources, and
condition keys for Amazon EC2.
The following policy allows users to view instances in the Amazon EC2 console, as well as CloudWatch
alarms and metrics in the Monitoring tab of the Instances page. The Amazon EC2 console uses the
CloudWatch API to display the alarms and metrics, so you must grant users permission to use the
cloudwatch:DescribeAlarms and cloudwatch:GetMetricStatistics actions.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"cloudwatch:DescribeAlarms",
"cloudwatch:GetMetricStatistics"
],
"Resource": "*"
}
]
}
To complete a launch successfully, users must be given permission to use the ec2:RunInstances API
action, and at least the following API actions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeImages",
"ec2:DescribeInstanceTypes",
"ec2:DescribeKeyPairs",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:CreateSecurityGroup",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateKeyPair"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": "*"
}
]
}
You can add API actions to your policy to provide more options for users, for example:
• To add outbound rules to VPC security groups, users must be granted permission to use the
ec2:AuthorizeSecurityGroupEgress API action. To modify or delete existing rules, users must be
granted permission to use the relevant ec2:RevokeSecurityGroup* API action.
• ec2:CreateTags: To tag the resources that are created by RunInstances. For more information, see
Grant permission to tag resources during creation (p. 1225). If users do not have permission to use this
action and they attempt to apply tags on the tagging page of the launch wizard, the launch fails.
Important
Be careful about granting users permission to use the ec2:CreateTags action, because
doing so limits your ability to use the aws:ResourceTag condition key to restrict their use
of other resources. If you grant users permission to use the ec2:CreateTags action, they
can change a resource's tag in order to bypass those restrictions. For more information, see
Control access to EC2 resources using resource tags (p. 1227).
• To use Systems Manager parameters when selecting an AMI, you must add
ssm:DescribeParameters and ssm:GetParameters to your policy. ssm:DescribeParameters
grants your IAM users the permission to view and select Systems Manager parameters.
ssm:GetParameters grants your IAM users the permission to get the values of the Systems
Manager parameters. You can also restrict access to specific Systems Manager parameters. For more
information, see Restrict access to specific Systems Manager parameters later in this section.
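Collected into a single policy statement, the additions described in this list might look like the following sketch. The broad Resource value is an assumption you should tighten for your account, for example by scoping the SSM actions to specific parameter ARNs as shown later in this section.

```json
{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateTags",
    "ec2:AuthorizeSecurityGroupEgress",
    "ec2:RevokeSecurityGroupIngress",
    "ec2:RevokeSecurityGroupEgress",
    "ssm:DescribeParameters",
    "ssm:GetParameters"
  ],
  "Resource": "*"
}
```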
Currently, the Amazon EC2 Describe* API actions do not support resource-level permissions, so you
cannot restrict which individual resources users can view in the launch wizard. However, you can apply
resource-level permissions on the ec2:RunInstances API action to restrict which resources users can
use to launch an instance. The launch fails if users select options that they are not authorized to use.
The following policy allows users to launch t2.micro instances using AMIs owned by Amazon, and only
into a specific subnet (subnet-1a2b3c4d). Users can only launch in the sa-east-1 Region. If users select
a different Region, or select a different instance type, AMI, or subnet in the launch wizard, the launch
fails.
The first statement grants users permission to view the options in the launch wizard or to create new
ones, as explained in the example above. The second statement grants users permission to use the
network interface, volume, key pair, security group, and subnet resources for the ec2:RunInstances
action, which are required to launch an instance into a VPC. For more information about using the
ec2:RunInstances action, see Launch instances (RunInstances) (p. 1241). The third and fourth
statements grant users permission to use the instance and AMI resources respectively, but only if the
instance is a t2.micro instance, and only if the AMI is owned by Amazon.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeImages",
"ec2:DescribeInstanceTypes",
"ec2:DescribeKeyPairs",
"ec2:CreateKeyPair",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:CreateSecurityGroup",
"ec2:AuthorizeSecurityGroupIngress"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action":"ec2:RunInstances",
"Resource": [
"arn:aws:ec2:sa-east-1:111122223333:network-interface/*",
"arn:aws:ec2:sa-east-1:111122223333:volume/*",
"arn:aws:ec2:sa-east-1:111122223333:key-pair/*",
"arn:aws:ec2:sa-east-1:111122223333:security-group/*",
"arn:aws:ec2:sa-east-1:111122223333:subnet/subnet-1a2b3c4d"
]
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:sa-east-1:111122223333:instance/*"
],
"Condition": {
"StringEquals": {
"ec2:InstanceType": "t2.micro"
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:sa-east-1::image/ami-*"
],
"Condition": {
"StringEquals": {
"ec2:Owner": "amazon"
}
}
}
]
}
The following policy grants access to use Systems Manager parameters with a specific name.
The first statement grants users the permission to view Systems Manager parameters when selecting an
AMI in the launch wizard. The second statement grants users the permission to only use parameters that
are named prod-*.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ssm:DescribeParameters"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ssm:GetParameters"
],
"Resource": "arn:aws:ssm:us-east-2:123456123:parameter/prod-*"
}
]
}
The following policy allows users to attach any volume to instances that have the tag "purpose=test",
and to detach volumes from those instances. To attach a volume using the Amazon EC2 console, it is
helpful for users to have permission to use the ec2:DescribeInstances action, as this allows them to
select an instance from a pre-populated list in the Attach Volume dialog box. However, this also allows
users to view all instances on the Instances page in the console, so you can omit this action if you
prefer that users not see them.
In the first statement, the ec2:DescribeAvailabilityZones action is necessary to ensure that a user
can select an Availability Zone when creating a volume.
Users cannot tag the volumes that they create (either during or after volume creation).
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ec2:DescribeVolumes",
"ec2:DescribeAvailabilityZones",
"ec2:CreateVolume",
"ec2:DescribeInstances"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:AttachVolume",
"ec2:DetachVolume"
],
"Resource": "arn:aws:ec2:region:111122223333:instance/*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/purpose": "test"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:AttachVolume",
"ec2:DetachVolume"
],
"Resource": "arn:aws:ec2:region:111122223333:volume/*"
}
]
}
The following policy grants users permission to view security groups in the Amazon EC2 console, to add
and remove inbound and outbound rules, and to list and modify rule descriptions for existing security
groups that have the tag Department=Test.
In the first statement, the ec2:DescribeTags action allows users to view tags in the console, which
makes it easier for users to identify the security groups that they are allowed to modify.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ec2:DescribeSecurityGroups",
"ec2:DescribeSecurityGroupRules",
"ec2:DescribeTags"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupIngress",
"ec2:AuthorizeSecurityGroupEgress",
"ec2:RevokeSecurityGroupEgress",
"ec2:ModifySecurityGroupRules",
"ec2:UpdateSecurityGroupRuleDescriptionsIngress",
"ec2:UpdateSecurityGroupRuleDescriptionsEgress"
],
"Resource": [
"arn:aws:ec2:region:111122223333:security-group/*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/Department": "Test"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:ModifySecurityGroupRules"
],
"Resource": [
"arn:aws:ec2:region:111122223333:security-group-rule/*"
]
}
]}
You can create a policy that allows users to work with the Create Security Group dialog box in the
Amazon EC2 console. To use this dialog box, users must be granted permission to use at least the
following API actions:
With these permissions, users can create a new security group successfully, but they cannot add any rules
to it. To work with rules in the Create Security Group dialog box, you can add the following API actions
to your policy:
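A sketch of such a policy, inferred from the complete example that follows; verify the action lists against your console version. The first statement covers the dialog box itself, the second adds the rule actions:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:CreateSecurityGroup",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeVpcs"
    ],
    "Resource": "*"
  },
  {
    "Effect": "Allow",
    "Action": [
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:AuthorizeSecurityGroupEgress"
    ],
    "Resource": "arn:aws:ec2:region:111122223333:security-group/*"
  }]
}
```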
The following policy grants users permission to use the Create Security Group dialog box, and to create
inbound and outbound rules for security groups that are associated with a specific VPC (vpc-1a2b3c4d).
Users can create security groups for EC2-Classic or another VPC, but they cannot add any rules to them.
Similarly, users cannot add any rules to any existing security group that's not associated with VPC
vpc-1a2b3c4d. Users are also granted permission to view all security groups in the console. This makes
it easier for users to identify the security groups to which they can add inbound rules. This policy also
grants users permission to delete security groups that are associated with VPC vpc-1a2b3c4d.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ec2:DescribeSecurityGroups",
"ec2:CreateSecurityGroup",
"ec2:DescribeVpcs"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:DeleteSecurityGroup",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:AuthorizeSecurityGroupEgress"
],
"Resource": "arn:aws:ec2:region:111122223333:security-group/*",
"Condition":{
"ArnEquals": {
"ec2:Vpc": "arn:aws:ec2:region:111122223333:vpc/vpc-1a2b3c4d"
}
}
}
]
}
To allow users to work with Elastic IP addresses, you can add the following actions to your policy.
The following policy allows users to view, allocate, and associate Elastic IP addresses with instances.
Users cannot associate Elastic IP addresses with network interfaces, disassociate Elastic IP addresses, or
release them.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeAddresses",
"ec2:AllocateAddress",
"ec2:DescribeInstances",
"ec2:AssociateAddress"
],
"Resource": "*"
}
]
}
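With these permissions, a user could run commands like the following with the AWS CLI; the instance ID and allocation ID are placeholders:

```shell
# Allocate a new Elastic IP address for use in a VPC.
aws ec2 allocate-address --domain vpc

# Associate the allocated address with an instance.
aws ec2 associate-address \
    --instance-id i-1234567890abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0
```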
The following policy allows users to view, modify, and purchase Reserved Instances in the account, as
well as view On-Demand Instances. It's not possible to set resource-level permissions for individual
Reserved Instances.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ec2:DescribeReservedInstances",
"ec2:ModifyReservedInstances",
"ec2:PurchaseReservedInstancesOffering",
"ec2:DescribeInstances",
"ec2:DescribeInstanceTypes",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeReservedInstancesOfferings"
],
"Resource": "*"
}
]
}
The ec2:DescribeAvailabilityZones action is necessary to ensure that the Amazon EC2 console
can display information about the Availability Zones in which you can purchase Reserved Instances. The
ec2:DescribeInstances action is not required, but ensures that the user can view the instances in the
account and purchase reservations to match the correct specifications.
You can adjust the API actions to limit user access. For example, removing
ec2:ModifyReservedInstances and ec2:PurchaseReservedInstancesOffering leaves the user with
read-only access to the Reserved Instances in the account.
AWS managed policies
AWS services maintain and update AWS managed policies. You can't change the permissions in AWS
managed policies. Services occasionally add additional permissions to an AWS managed policy to
support new features. This type of update affects all identities (users, groups, and roles) where the policy
is attached. Services are most likely to update an AWS managed policy when a new feature is launched
or when new operations become available. Services do not remove permissions from an AWS managed
policy, so policy updates won't break your existing permissions.
Additionally, AWS supports managed policies for job functions that span multiple services. For example,
the ReadOnlyAccess AWS managed policy provides read-only access to all AWS services and resources.
When a service launches a new feature, AWS adds read-only permissions for new operations and
resources. For a list and descriptions of job function policies, see AWS managed policies for job functions
in the IAM User Guide.
To view the permissions for this policy, see AmazonEC2FullAccess in the AWS Management Console.
To view the permissions for this policy, see AmazonEC2ReadOnlyAccess in the AWS Management
Console.
To view the permissions for this policy, see AWSServiceRoleForEC2CapacityReservationFleet in the AWS
Management Console.
EC2FastLaunchServiceRolePolicy (p. 1275) – New policy: Amazon EC2 added the Windows faster launching
feature to enable Windows AMIs to launch instances faster by creating a set of pre-provisioned
snapshots. (November 26, 2021)
Amazon EC2 started tracking changes: Amazon EC2 started tracking changes to its AWS managed
policies. (March 1, 2021)
We designed IAM roles so that your applications can securely make API requests from your instances,
without requiring you to manage the security credentials that the applications use. Instead of creating
and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles
as follows:
For example, you can use IAM roles to grant permissions to applications running on your instances that
need to use a bucket in Amazon S3. You can specify permissions for IAM roles by creating a policy in
JSON format. These are similar to the policies that you create for IAM users. If you change a role, the
change is propagated to all instances.
When creating IAM roles, associate least privilege IAM policies that restrict access to the specific API calls
the application requires.
You can only attach one IAM role to an instance, but you can attach the same role to many instances. For
more information about creating and using IAM roles, see Roles in the IAM User Guide.
You can apply resource-level permissions to your IAM policies to control the users' ability to attach,
replace, or detach IAM roles for an instance. For more information, see Supported resource-level
permissions for Amazon EC2 API actions (p. 1222) and the following example: Example: Work with IAM
roles (p. 1259).
Contents
• Instance profiles (p. 1276)
• Retrieve security credentials from instance metadata (p. 1276)
• Grant an IAM user permission to pass an IAM role to an instance (p. 1277)
• Work with IAM roles (p. 1278)
Instance profiles
Amazon EC2 uses an instance profile as a container for an IAM role. When you create an IAM role using
the IAM console, the console creates an instance profile automatically and gives it the same name as the
role to which it corresponds. If you use the Amazon EC2 console to launch an instance with an IAM role
or to attach an IAM role to an instance, you choose the role based on a list of instance profile names.
If you use the AWS CLI, API, or an AWS SDK to create a role, you create the role and instance profile as
separate actions, with potentially different names. If you then use the AWS CLI, API, or an AWS SDK to
launch an instance with an IAM role or to attach an IAM role to an instance, specify the instance profile
name.
An instance profile can contain only one IAM role. This limit cannot be increased.
For more information, see Instance Profiles in the IAM User Guide.
Warning
If you use services that use instance metadata with IAM roles, ensure that you don't expose your
credentials when the services make HTTP calls on your behalf. The types of services that could
expose your credentials include HTTP proxies, HTML/CSS validator services, and XML processors
that support XML inclusion.
The following command retrieves the security credentials for an IAM role named s3access.
IMDSv2
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
IMDSv1
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
{
"Code" : "Success",
"LastUpdated" : "2012-04-26T16:39:16Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
"SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"Token" : "token",
"Expiration" : "2017-05-17T15:09:54Z"
}
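An application that consumes this document directly, rather than through an SDK, must refresh its credentials before the Expiration time passes. A minimal sketch of that check, using the sample timestamp from the response above (GNU date assumed):

```shell
# Compare the Expiration timestamp from the credentials document
# with the current time, and flag the credentials for refresh if stale.
EXPIRATION="2017-05-17T15:09:54Z"
NOW=$(date -u +%s)
EXP=$(date -u -d "$EXPIRATION" +%s)
if [ "$NOW" -ge "$EXP" ]; then
    echo "credentials expired - refresh from the instance metadata service"
fi
```

In practice an SDK does this for you; the sketch only shows why the Expiration field matters.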
For applications, AWS CLI, and Tools for Windows PowerShell commands that run on the instance, you
do not have to explicitly get the temporary security credentials—the AWS SDKs, AWS CLI, and Tools for
Windows PowerShell automatically get the credentials from the EC2 instance metadata service and use
them. To make a call outside of the instance using temporary security credentials (for example, to test
IAM policies), you must provide the access key, secret key, and the session token. For more information,
see Using Temporary Security Credentials to Request Access to AWS Resources in the IAM User Guide.
For more information about instance metadata, see Instance metadata and user data (p. 710). For
information about the instance metadata IP address, see Retrieve instance metadata (p. 718).
• iam:PassRole
• ec2:AssociateIamInstanceProfile
• ec2:ReplaceIamInstanceProfileAssociation
For example, the following IAM policy grants users permission to launch instances with an IAM role, or to
attach or replace an IAM role for an existing instance using the AWS CLI.
Note
If you want the policy to grant IAM users access to all of your roles, specify the resource as * in
the policy. However, consider the principle of least privilege as a best practice.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances",
"ec2:AssociateIamInstanceProfile",
"ec2:ReplaceIamInstanceProfileAssociation"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::123456789012:role/DevTeam*"
}
]
}
To grant users permission to launch instances with an IAM role, or to attach or replace an IAM role
for an existing instance using the Amazon EC2 console, you must grant them permission to use
iam:ListInstanceProfiles, iam:PassRole, ec2:AssociateIamInstanceProfile, and
ec2:ReplaceIamInstanceProfileAssociation in addition to any other permissions they might
need. For example policies, see Example policies for working in the Amazon EC2 console (p. 1265).
Contents
• Create an IAM role (p. 1278)
• Launch an instance with an IAM role (p. 1280)
• Attach an IAM role to an instance (p. 1281)
• Replace an IAM role (p. 1282)
• Detach an IAM role (p. 1284)
• Generate a policy for your IAM role based on access activity (p. 1285)
Alternatively, you can use the AWS CLI to create an IAM role. The following example creates an IAM role
with a policy that allows the role to use an Amazon S3 bucket.
1. Create the following trust policy and save it in a text file named ec2-role-trust-policy.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}
2. Create the s3access role and specify the trust policy that you created using the create-role
command.
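The create-role invocation for this step might look like the following sketch; it requires AWS credentials, so it is illustrative rather than runnable here:

```shell
# Create the role, using the trust policy file from step 1.
aws iam create-role \
    --role-name s3access \
    --assume-role-policy-document file://ec2-role-trust-policy.json
```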
Example response
{
"Role": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
},
"RoleId": "AROAIIZKPBKS2LEXAMPLE",
"CreateDate": "2013-12-12T23:46:37.247Z",
"RoleName": "s3access",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/s3access"
}
}
3. Create an access policy and save it in a text file named ec2-role-access-policy.json. For
example, this policy grants administrative permissions for Amazon S3 to applications running on the
instance.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": ["*"]
}
]
}
4. Attach the access policy to the role using the put-role-policy command.
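A sketch of the commands behind this step and the response that follows: put-role-policy attaches the access policy (the policy name S3-Permissions is an arbitrary choice, not from this guide), and the instance profile shown in the example response is produced by create-instance-profile, after which add-role-to-instance-profile adds the role to it:

```shell
# Attach the inline access policy from step 3 to the role.
aws iam put-role-policy \
    --role-name s3access \
    --policy-name S3-Permissions \
    --policy-document file://ec2-role-access-policy.json

# Create an instance profile and add the role to it.
aws iam create-instance-profile --instance-profile-name s3access-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name s3access-profile \
    --role-name s3access
```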
Example response
{
"InstanceProfile": {
"InstanceProfileId": "AIPAJTLBPJLEGREXAMPLE",
"Roles": [],
"CreateDate": "2013-12-12T23:53:34.093Z",
"InstanceProfileName": "s3access-profile",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:instance-profile/s3access-profile"
}
}
Alternatively, you can use the following AWS Tools for Windows PowerShell commands:
• New-IAMRole
• Register-IAMRolePolicy
• New-IAMInstanceProfile
was created for you and given the same name as the role. If you created your IAM role using
the AWS CLI, API, or an AWS SDK, you may have named your instance profile differently.
5. Configure any other details, then follow the instructions through the rest of the wizard, or choose
Review and Launch to accept default settings and go directly to the Review Instance Launch page.
6. Review your settings, then choose Launch to choose a key pair and launch your instance.
7. If you are using the Amazon EC2 API actions in your application, retrieve the AWS security
credentials made available on the instance and use them to sign the requests. The AWS SDK does
this for you.
IMDSv2
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/role_name
IMDSv1
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/role_name
Alternatively, you can use the AWS CLI to associate a role with an instance during launch. You must
specify the instance profile in the command.
1. Use the run-instances command to launch an instance using the instance profile. The following
example shows how to launch an instance with the instance profile.
New console
Old console
1. If required, describe your instances to get the ID of the instance to which to attach the role.
2. Use the associate-iam-instance-profile command to attach the IAM role to the instance by specifying
the instance profile. You can use the Amazon Resource Name (ARN) of the instance profile, or you
can use its name.
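The associate-iam-instance-profile call for this step might look like the following; the instance ID and profile name are taken from the example response that follows:

```shell
aws ec2 associate-iam-instance-profile \
    --instance-id i-1234567890abcdef0 \
    --iam-instance-profile Name="TestRole-1"
```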
Example response
{
"IamInstanceProfileAssociation": {
"InstanceId": "i-1234567890abcdef0",
"State": "associating",
"AssociationId": "iip-assoc-0dbd8529a48294120",
"IamInstanceProfile": {
"Id": "AIPAJLNLDX3AMYZNWYYAY",
"Arn": "arn:aws:iam::123456789012:instance-profile/TestRole-1"
}
}
}
• Get-EC2Instance
• Register-EC2IamInstanceProfile
New console
Old console
1. If required, describe your IAM instance profile associations to get the association ID for the IAM
instance profile to replace.
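A sketch of the commands for this step; the association ID shown here is a placeholder, so take the real one from the describe output:

```shell
# Find the current association ID for the instance profile to replace.
aws ec2 describe-iam-instance-profile-associations

# Replace the associated instance profile using that association ID.
aws ec2 replace-iam-instance-profile-association \
    --association-id iip-assoc-0123456789abcdef0 \
    --iam-instance-profile Name="TestRole-2"
```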
Example response
{
"IamInstanceProfileAssociation": {
"InstanceId": "i-087711ddaf98f9489",
"State": "associating",
"AssociationId": "iip-assoc-09654be48e33b91e0",
"IamInstanceProfile": {
"Id": "AIPAJCJEDKX7QYHWYK7GS",
"Arn": "arn:aws:iam::123456789012:instance-profile/TestRole-2"
}
}
}
• Get-EC2IamInstanceProfileAssociation
• Set-EC2IamInstanceProfileAssociation
New console
Old console
Example response
{
"IamInstanceProfileAssociations": [
{
"InstanceId": "i-088ce778fbfeb4361",
"State": "associated",
"AssociationId": "iip-assoc-0044d817db6c0a4ba",
"IamInstanceProfile": {
"Id": "AIPAJEDNCAA64SSD265D6",
"Arn": "arn:aws:iam::123456789012:instance-profile/TestRole-2"
}
}
]
}
2. Use the disassociate-iam-instance-profile command to detach the IAM instance profile using its
association ID.
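The command for this step, using the association ID from the describe response above:

```shell
aws ec2 disassociate-iam-instance-profile \
    --association-id iip-assoc-0044d817db6c0a4ba
```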
Example response
"IamInstanceProfileAssociation": {
"InstanceId": "i-087711ddaf98f9489",
"State": "disassociating",
"AssociationId": "iip-assoc-0044d817db6c0a4ba",
"IamInstanceProfile": {
"Id": "AIPAJEDNCAA64SSD265D6",
"Arn": "arn:aws:iam::123456789012:instance-profile/TestRole-2"
}
}
}
• Get-EC2IamInstanceProfileAssociation
• Unregister-EC2IamInstanceProfile
Network access
Your default security groups and newly created security groups include default rules that do not
enable you to access your instance from the internet. For more information, see Default security
groups (p. 1307) and Custom security groups (p. 1308). To enable network access to your instance, you
must allow inbound traffic to your instance. To open a port for inbound traffic, add a rule to a security
group that you associated with your instance when you launched it.
To connect to your instance, you must set up a rule to authorize SSH traffic from your computer's public
IPv4 address. To allow SSH traffic from additional IP address ranges, add another rule for each range you
need to authorize.
If you've enabled your VPC for IPv6 and launched your instance with an IPv6 address, you can connect to
your instance using its IPv6 address instead of a public IPv4 address. Your local computer must have an
IPv6 address and must be configured to use IPv6.
If you need to enable network access to a Windows instance, see Authorizing inbound traffic for your
Windows instances in the Amazon EC2 User Guide for Windows Instances.
Before you start
To find the public IPv4 address of your local computer, you can use the search phrase "what is my
IP address" in an internet browser, or use the following service: Check IP. If you are connecting
through an ISP or from behind your firewall without a static IP address, you need to find out the
range of IP addresses used by client computers.
Warning
If you use 0.0.0.0/0, you enable all IPv4 addresses to access your instance using SSH. If you
use ::/0, you enable all IPv6 addresses to access your instance. This is acceptable for a short time
in a test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your instance.
Decide whether you'll support SSH access to your instances using EC2 Instance Connect. If you will not
use EC2 Instance Connect, consider uninstalling it or denying the following action in your IAM policies:
ec2-instance-connect:SendSSHPublicKey. For more information, see Uninstall EC2 Instance
Connect (p. 612) and Configure IAM Permissions for EC2 Instance Connect (p. 607).
New console
To add a rule to a security group for inbound SSH traffic over IPv4 (console)
Alternatively, for Source, choose Custom and enter the public IPv4 address of your
computer or network in CIDR notation. For example, if your IPv4 address is 203.0.113.25,
enter 203.0.113.25/32 to list this single IPv4 address in CIDR notation. If your company
allocates addresses from a range, enter the entire range, such as 203.0.113.0/24.
For information about finding your IP address, see Before you start (p. 1285).
d. Choose Save rules.
Old console
To add a rule to a security group for inbound SSH traffic over IPv4 (console)
1. In the navigation pane of the Amazon EC2 console, choose Instances. Select your instance and
look at the Description tab; Security groups lists the security groups that are associated with
the instance. Choose view inbound rules to display a list of the rules that are in effect for the
instance.
2. In the navigation pane, choose Security Groups. Select one of the security groups associated
with your instance.
3. In the details pane, on the Inbound tab, choose Edit. In the dialog, choose Add Rule, and then
choose SSH from the Type list.
4. In the Source field, choose My IP to automatically populate the field with the public IPv4
address of your local computer. Alternatively, choose Custom and specify the public IPv4
address of your computer or network in CIDR notation. For example, if your IPv4 address
is 203.0.113.25, specify 203.0.113.25/32 to list this single IPv4 address in CIDR
notation. If your company allocates addresses from a range, specify the entire range, such as
203.0.113.0/24.
For information about finding your IP address, see Before you start (p. 1285).
5. Choose Save.
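The block sizes used in these steps follow from the rule that a /prefix IPv4 CIDR block covers 2^(32 - prefix) addresses; a quick arithmetic check:

```shell
# A /32 covers a single address; a /24 covers 256.
prefix=32; echo "/32 covers $(( 1 << (32 - prefix) )) address"
prefix=24; echo "/24 covers $(( 1 << (32 - prefix) )) addresses"
```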
If you launched an instance with an IPv6 address and want to connect to your instance using its IPv6
address, you must add rules that allow inbound IPv6 traffic over SSH.
New console
To add a rule to a security group for inbound SSH traffic over IPv6 (console)
Old console
To add a rule to a security group for inbound SSH traffic over IPv6 (console)
6. Choose Save.
Note
Be sure to run the following commands on your local system, not on the instance itself. For
more information about these command line interfaces, see Access Amazon EC2 (p. 3).
1. Find the security group that is associated with your instance using one of the following commands:
Both commands return a security group ID, which you use in the next step.
2. Add the rule to the security group using one of the following commands:
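As a sketch of both steps with the AWS CLI; the instance and group IDs are placeholders:

```shell
# Step 1: find the security group(s) associated with the instance.
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 \
    --query "Reservations[*].Instances[*].SecurityGroups[*]"

# Step 2: add an inbound SSH rule for a single IPv4 address.
aws ec2 authorize-security-group-ingress --group-id sg-1234567890abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.25/32
```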
After you launch an instance, you can change its security groups. For more information, see Changing an
instance's security groups in the Amazon VPC User Guide.
Amazon EC2 key pairs
A key pair, consisting of a public key and a private key, is a set of security credentials that you use
to prove your identity when connecting to an instance. Amazon EC2 stores the public key on your
instance, and you store the private key, which you use to SSH into your instance. Anyone who possesses
your private key can connect to your instances, so it's important that you store your private key in a
secure place.
When you launch an instance, you are prompted for a key pair (p. 571). If you plan to connect to the
instance using SSH, you must specify a key pair. You can choose an existing key pair or create a new
one. When your instance boots for the first time, the public key that you specified at launch is placed on
your Linux instance in an entry within ~/.ssh/authorized_keys. When you connect to your Linux
instance using SSH, to log in you must specify the private key that corresponds to the public key. For
more information about connecting to your instance, see Connect to your Linux instance (p. 596). For
more information about key pairs and Windows instances, see Amazon EC2 key pairs and Windows
instances in the Amazon EC2 User Guide for Windows Instances.
Because Amazon EC2 doesn't keep a copy of your private key, there is no way to recover a private key if
you lose it. However, there can still be a way to connect to instances for which you've lost the private key.
For more information, see Connect to your Linux instance if you lose your private key (p. 1299).
You can use Amazon EC2 to create your key pairs. You can also use a third-party tool to create your key
pairs, and then import the public keys to Amazon EC2.
The keys that Amazon EC2 uses are ED25519 or 2048-bit SSH-2 RSA keys.
Contents
• Create a key pair using Amazon EC2 (p. 1289)
• Create a key pair using a third-party tool and import the public key to Amazon EC2 (p. 1291)
• Tag a public key (p. 1292)
• Retrieve the public key from the private key (p. 1294)
• Retrieve the public key through instance metadata (p. 1294)
• Locate the public key on an instance (p. 1295)
• Identify the key pair that was specified at launch (p. 1296)
• Verify your key pair's fingerprint (p. 1296)
• Add or replace a key pair for your instance (p. 1297)
• Delete your key pair (p. 1298)
• Delete a public key from an instance (p. 1298)
• Connect to your Linux instance if you lose your private key (p. 1299)
Console
1289
Amazon Elastic Compute Cloud
User Guide for Linux Instances
Create a key pair using Amazon EC2
6. For Private key file format, choose the format in which to save the private key. To save the
private key in a format that can be used with OpenSSH, choose pem. To save the private key in a
format that can be used with PuTTY, choose ppk.
If you chose ED25519 in the previous step, the Private key file format options do not appear,
and the private key format defaults to pem.
7. To add a tag to the public key, choose Add tag, and enter the key and value for the tag. Repeat
for each tag.
8. Choose Create key pair.
9. The private key file is automatically downloaded by your browser. The base file name is the
name that you specified as the name of your key pair, and the file name extension is determined
by the file format that you chose. Save the private key file in a safe place.
Important
This is the only chance for you to save the private key file.
10. If you will use an SSH client on a macOS or Linux computer to connect to your Linux instance,
use the following command to set the permissions of your private key file so that only you can
read it.
If you do not set these permissions, then you cannot connect to your instance using this key pair.
For more information, see Error: Unprotected private key file (p. 1693).
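The permissions command referred to here is chmod. A minimal, self-contained sketch (the touch line stands in for the key file that your browser downloaded, and my-key-pair.pem is a placeholder name):

```shell
# Placeholder for the private key file that your browser downloaded.
touch my-key-pair.pem
# Allow only the file owner to read the file (mode 400).
chmod 400 my-key-pair.pem
```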
AWS CLI
1. Use the create-key-pair command as follows to generate the key pair and to save the private key
to a .pem file.
For --key-name, specify a name for the public key. The name can be up to 255 ASCII
characters.
For --key-type, specify either rsa or ed25519. If you do not include the --key-type
parameter, an rsa key is created by default. Note that ED25519 keys are not supported for
Windows instances.
--output text > my-key-pair.pem saves the private key material in a file with the .pem
extension. The private key can have a name that's different from the public key name, but for
ease of use, use the same name.
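A sketch of the full command under those assumptions (my-key-pair is a placeholder key name; the call requires an installed AWS CLI configured with credentials):

```shell
# Create the key pair in Amazon EC2 and save the returned private key
# material to a local .pem file. Assumes configured AWS CLI credentials.
aws ec2 create-key-pair \
    --key-name my-key-pair \
    --key-type rsa \
    --query "KeyMaterial" \
    --output text > my-key-pair.pem
```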
2. If you will use an SSH client on a macOS or Linux computer to connect to your Linux instance,
use the following command to set the permissions of your private key file so that only you can
read it.
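A minimal, self-contained sketch of that permissions command (the touch line stands in for the .pem file that the previous step created):

```shell
# Placeholder for the .pem file saved by create-key-pair.
touch my-key-pair.pem
# Allow only the file owner to read the file (mode 400).
chmod 400 my-key-pair.pem
```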
If you do not set these permissions, then you cannot connect to your instance using this key pair. For more information, see Error: Unprotected private key file (p. 1693).
PowerShell
Use the New-EC2KeyPair AWS Tools for Windows PowerShell command as follows to generate the
key and save it to a .pem file.
For -KeyName, specify a name for the public key. The name can be up to 255 ASCII characters.
For -KeyType, specify either rsa or ed25519. If you do not include the -KeyType parameter, an
rsa key is created by default. Note that ED25519 keys are not supported for Windows instances.
Create a key pair using a third-party tool and import the public key to Amazon EC2
Instead of using Amazon EC2 to create a key pair, you can create an RSA or ED25519 key pair using a third-party tool, and then import the public key to Amazon EC2. Amazon EC2 has the following requirements for imported key pairs:
• Supported types: RSA and ED25519. Amazon EC2 does not accept DSA keys.
• Note that ED25519 keys are not supported for Windows instances.
• Supported formats:
• OpenSSH public key format (the format in ~/.ssh/authorized_keys). If you connect using SSH
while using the EC2 Instance Connect API, the SSH2 format is also supported.
• SSH private key file format must be PEM
• (RSA only) Base64 encoded DER format
• (RSA only) SSH public key file format as specified in RFC 4716
• Supported lengths: 1024, 2048, and 4096. If you connect using SSH while using the EC2 Instance
Connect API, the supported lengths are 2048 and 4096.
1. Generate a key pair with a third-party tool of your choice. For example, you can use ssh-keygen
(a tool provided with the standard OpenSSH installation). Alternatively, Java, Ruby, Python, and
many other programming languages provide standard libraries that you can use to create an RSA or
ED25519 key pair.
Important
The private key must be in the PEM format. For example, use ssh-keygen -m PEM to
generate the OpenSSH key in the PEM format.
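For example, a self-contained sketch that generates a 2048-bit RSA key pair in PEM format (my-key-pair is a placeholder file name; -N "" skips the passphrase for brevity):

```shell
# Generate a 2048-bit RSA key pair; -m PEM forces the private key into
# the PEM format that Amazon EC2 requires. Writes my-key-pair (private)
# and my-key-pair.pub (public).
ssh-keygen -t rsa -b 2048 -m PEM -N "" -f my-key-pair
```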
2. Save the public key to a local file. For example, ~/.ssh/my-key-pair.pub. The file name
extension for this file is not important.
3. Save the private key to a local file that has the .pem extension. For example, ~/.ssh/my-key-pair.pem.
Important
Save the private key file in a safe place. You'll need to provide the name of your public key
when you launch an instance, and the corresponding private key each time you connect to
the instance.
After you have created the key pair, use one of the following methods to import your public key to
Amazon EC2.
Console
AWS CLI
Tag a public key
You can view, add, and delete tags using one of the following methods.
Console
• To add a tag, choose Add tag, and then enter the tag key and value. You can add up to 50
tags per key. For more information, see Tag restrictions (p. 1670).
• To delete a tag, choose Remove next to the tag to delete.
5. Choose Save.
AWS CLI
Use the describe-tags AWS CLI command. In the following example, you describe the tags for all of
your public keys.
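A sketch of that call (assumes a configured AWS CLI; the resource-type filter limits the output to key pairs):

```shell
# Describe the tags on all of your key pairs.
aws ec2 describe-tags --filters "Name=resource-type,Values=key-pair"
```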
{
"Tags": [
{
"Key": "Environment",
"ResourceId": "key-0123456789EXAMPLE",
"ResourceType": "key-pair",
"Value": "Production"
},
{
"Key": "Environment",
"ResourceId": "key-9876543210EXAMPLE",
"ResourceType": "key-pair",
"Value": "Production"
}]
}
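The tags also appear in the output of the describe-key-pairs command. A sketch of that call for the key named MyKeyPair shown in the following example output (assumes a configured AWS CLI):

```shell
# Describe a key pair; the output includes its tags.
aws ec2 describe-key-pairs --key-names MyKeyPair
```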
{
"KeyPairs": [
{
"KeyName": "MyKeyPair",
"KeyFingerprint":
"1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f",
"KeyPairId": "key-0123456789EXAMPLE",
"Tags": [
{
"Key": "Environment",
"Value": "Production"
}]
}]
}
Use the create-tags AWS CLI command. In the following example, the existing key is tagged with
Key=Cost-Center and Value=CC-123.
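A sketch of that call (assumes a configured AWS CLI; key-0123456789EXAMPLE is the example key pair ID used earlier in this section):

```shell
# Add the tag Cost-Center=CC-123 to an existing public key.
aws ec2 create-tags \
    --resources key-0123456789EXAMPLE \
    --tags Key=Cost-Center,Value=CC-123
```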
Use the delete-tags AWS CLI command. For examples, see Examples in the AWS CLI Command
Reference.
PowerShell
Retrieve the public key from the private key
On your local Linux or macOS computer, you can use the ssh-keygen command to retrieve the public key for your key pair.
ssh-keygen -y -f /path_to_key_pair/my-key-pair.pem
The command returns the public key, as shown in the following example.
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
If the command fails, run the following command to ensure that you've changed the permissions on your private key file so that only you can view it.
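A minimal, self-contained sketch of that permissions command (the touch line stands in for your .pem private key file):

```shell
# Placeholder for your .pem private key file.
touch my-key-pair.pem
# Allow only the file owner to read the file (mode 400).
chmod 400 my-key-pair.pem
```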
Retrieve the public key through instance metadata
You can retrieve the public key that you specified at launch from the instance metadata.
IMDSv2
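The public key is available at the public-keys/0/openssh-key metadata path. A sketch of the IMDSv2 retrieval, which first requests a session token (this only works when run from within the instance itself):

```shell
# Request an IMDSv2 session token, then use it to read the public key
# from the instance metadata service. Only works on an EC2 instance.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
```

The command returns output like the following example.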
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE my-key-pair
IMDSv1
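A sketch of the equivalent IMDSv1 retrieval, which needs no token (again, only from within the instance):

```shell
# Read the public key directly from the instance metadata service.
# Only works on an EC2 instance with IMDSv1 enabled.
curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
```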
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE my-key-pair
If you change the key pair that you use to connect to the instance, we don't update the instance
metadata to show the new public key. Instead, the instance metadata continues to show the public key
for the key pair that you specified when you launched the instance. For more information, see Retrieve
instance metadata (p. 718).
Locate the public key on an instance
When you connect to your instance, you can view the public key in the ~/.ssh/authorized_keys file. The file displays the public key followed by the name of the key pair.
The following is an example entry for the key pair named my-key-pair.
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE my-key-pair
Identify the key pair that was specified at launch
The Key pair name is displayed in the details of your instance in the Amazon EC2 console.
Note
The value of the Key pair name does not change even if you change the public key on the
instance, or add key pairs.
Verify your key pair's fingerprint
You can use the SSH2 fingerprint that's displayed on the Key Pairs page to verify that the private key
you have on your local machine matches the public key stored in AWS. From the computer where you
downloaded the private key file, generate an SSH2 fingerprint from the private key file. The output
should match the fingerprint that's displayed in the console.
If you're using a Windows local machine, you can run the following commands using the Windows
Subsystem for Linux (WSL). Install the WSL and a Linux distribution using the instructions in the
Windows 10 Installation Guide. The example in the instructions installs the Ubuntu distribution of Linux,
but you can install any distribution. You are prompted to restart your computer for the changes to take
effect.
If you created your key pair using AWS, you can use the OpenSSL tools to generate a fingerprint as
shown in the following example.
$ openssl pkcs8 -in path_to_private_key -inform PEM -outform DER -topk8 -nocrypt | openssl
sha1 -c
If you created a key pair using a third-party tool and uploaded the public key to AWS, you can use the
OpenSSL tools to generate the fingerprint as shown in the following example.
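A self-contained sketch, under the assumption that AWS displays the MD5 digest of the public key (in DER form) for imported keys; the genrsa line stands in for the private key you created with your third-party tool:

```shell
# Throwaway RSA key so the sketch is self-contained; in practice, use
# the private key file you created with your third-party tool.
openssl genrsa -out my-key-pair.pem 2048
# Compute the MD5 fingerprint of the public key in DER form.
openssl rsa -in my-key-pair.pem -pubout -outform DER | openssl md5 -c
```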
If you created an OpenSSH key pair using OpenSSH 7.8 or later and uploaded the public key to AWS, you
can use ssh-keygen to generate the fingerprint as shown in the following examples.
$ ssh-keygen -l -f path_to_private_key.pem
Add or replace a key pair for your instance
You can add or replace a key pair for your instance for the following reasons:
• If a user in your organization requires access to the system user account using a separate key pair, you can add the public key to your instance.
• If someone has a copy of the private key (.pem file) and you want to prevent them from connecting
to your instance (for example, if they've left your organization), you can delete the public key on the
instance and replace it with a new one.
The public keys are located in the .ssh/authorized_keys file on the instance.
To add or replace a key pair, you must be able to connect to your instance. If you've lost your existing
private key or you launched your instance without a key pair, you won't be able to connect to your instance
and therefore won't be able to add or replace a key pair. If you've lost your private key, you might be
able to retrieve it. For more information, see Connect to your Linux instance if you lose your private
key (p. 1299). If you launched your instance without a key pair, you won't be able to connect to the
instance unless you chose an AMI that is configured to allow users another way to log in.
Note
These procedures are for modifying the key pair for the default user account, such as ec2-user.
For information about adding user accounts to your instance, see Manage user accounts on your
Amazon Linux instance (p. 660).
1. Create a new key pair using the Amazon EC2 console (p. 1289) or a third-party tool (p. 1291).
2. Retrieve the public key from your new key pair. For more information, see Retrieve the public key
from the private key (p. 1294).
3. Connect to your instance (p. 596) using your existing private key.
4. Using a text editor of your choice, open the .ssh/authorized_keys file on the instance. Paste the
public key information from your new key pair underneath the existing public key information. Save
the file.
5. Disconnect from your instance, and test that you can connect to your instance using the new private
key file.
6. (Optional) If you're replacing an existing key pair, connect to your instance and delete the public key
information for the original key pair from the .ssh/authorized_keys file.
Note
If you're using an Auto Scaling group, ensure that the key pair you're replacing is not specified in
your launch template or launch configuration. If Amazon EC2 Auto Scaling detects an unhealthy
instance, it launches a replacement instance. However, the instance launch fails if the key pair
cannot be found. For more information, see Launch templates in the Amazon EC2 Auto Scaling
User Guide.
Delete your key pair
When you delete a key pair, you delete only Amazon EC2's copy of the public key. Doing so doesn't remove the public key from any instances that were launched with it, and it doesn't delete your local copy of the private key.
If you're using an Auto Scaling group (for example, in an Elastic Beanstalk environment), ensure that
the key pair you're deleting is not specified in an associated launch template or launch configuration. If
Amazon EC2 Auto Scaling detects an unhealthy instance, it launches a replacement instance. However,
the instance launch fails if the key pair cannot be found. For more information, see Launch templates in
the Amazon EC2 Auto Scaling User Guide.
You can delete a key pair using one of the following methods.
Console
AWS CLI
Delete a public key from an instance
Warning
After you delete the public key from the instance and disconnect from the instance, you can't
connect to it again unless the AMI provides another way of logging in.
Connect to your Linux instance if you lose your private key
This procedure is only supported for instances with EBS root volumes. If the root device is an instance
store volume, you cannot use this procedure to regain access to your instance; you must have the private
key to connect to the instance. To determine the root device type of your instance, open the Amazon EC2
console, choose Instances, select the instance, and check the value of Root device type in the details
pane. The value is either ebs or instance store.
In addition to the following steps, there are other ways to connect to your Linux instance if you lose your
private key. For more information, see How can I connect to my Amazon EC2 instance if I lost my SSH key
pair after its initial launch?
Step 2: Get information about the original instance and its root
volume
Make note of the following information because you'll need it to complete this procedure.
5. On the Storage tab, under Root device name, make note of the device name for the root volume
(for example, /dev/xvda). Then, under Block devices, find this device name and make note of the
volume ID (for example, vol-0a1234b5678c910de).
• On the Choose an AMI page, select the same AMI that you used to launch the original instance. If
this AMI is unavailable, you can create an AMI that you can use from the stopped instance. For more
information, see Create an Amazon EBS-backed Linux AMI (p. 134).
• On the Choose an Instance Type page, leave the default instance type that the wizard selects for you.
• On the Configure Instance Details page, specify the same Availability Zone as the original instance. If
you're launching an instance in a VPC, select a subnet in this Availability Zone.
• On the Add Tags page, add the tag Name=Temporary to the instance to indicate that this is a
temporary instance.
• On the Review page, choose Launch. Choose the key pair that you created in Step 1, then choose
Launch Instances.
Step 5: Detach the root volume from the original instance and
attach it to the temporary instance
1. In the navigation pane, choose Volumes and select the root device volume for the original instance
(you made note of its volume ID in a previous step). Choose Actions, Detach Volume, and then
select Yes, Detach. Wait for the state of the volume to become available. (You might need to
choose the Refresh icon.)
2. With the volume still selected, choose Actions, and then select Attach Volume. Select the instance
ID of the temporary instance, make note of the device name specified under Device (for example, /dev/sdf), and then choose Attach.
Note
If you launched your original instance from an AWS Marketplace AMI and your volume
contains AWS Marketplace codes, you must first stop the temporary instance before you can
attach the volume.
Note
The device name might appear differently on your instance. For example, devices mounted
as /dev/sdf might show up as /dev/xvdf on the instance. Some versions of Red Hat (or
its variants, such as CentOS) might even increment the trailing letter by 4 characters, where
/dev/sdf becomes /dev/xvdk.
In the preceding example, /dev/xvda and /dev/xvdf are partitioned volumes, and /dev/xvdg is not. If your volume is partitioned, you mount the partition (/dev/xvdf1) instead of
the raw device (/dev/xvdf) in the next steps.
b. Create a temporary directory to mount the volume.
c. Mount the volume (or partition) at the temporary mount point, using the volume name or
device name that you identified earlier. The required command depends on your operating
system's file system. Note that the device name might appear differently on your instance. See
note in this section for more information.
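For example, on the temporary instance, a sketch under the assumption that the volume's first partition appeared as /dev/xvdf1 (both the device name and the mount point are placeholders; requires root privileges):

```shell
# Create a temporary mount point and mount the attached partition there.
# /dev/xvdf1 is a hypothetical device name; substitute the name you
# identified earlier.
sudo mkdir /mnt/tempvol
sudo mount /dev/xvdf1 /mnt/tempvol
```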
Note
If you get an error stating that the file system is corrupt, run the following command to use
the fsck utility to check the file system and repair any issues:
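A sketch of that fsck invocation, assuming the same hypothetical device name (run it against the volume while it is unmounted; requires root privileges):

```shell
# Check the file system and repair any issues before mounting.
sudo fsck /dev/xvdf1
```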
3. From the temporary instance, use the following command to update authorized_keys on the
mounted volume with the new public key from the authorized_keys for the temporary instance.
Important
The following examples use the Amazon Linux user name ec2-user. You might need to
substitute a different user name, such as ubuntu for Ubuntu instances.
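A sketch of that copy, assuming the ec2-user account and the /mnt/tempvol mount point used earlier:

```shell
# Copy the temporary instance's authorized_keys (which contains the new
# public key) over the one on the mounted root volume of the original
# instance.
cp .ssh/authorized_keys /mnt/tempvol/home/ec2-user/.ssh/authorized_keys
```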
(Optional) Otherwise, if you don't have permission to edit files in /mnt/tempvol, you must update
the file using sudo and then check the permissions on the file to verify that you are able to log into
the original instance. Use the following command to check the permissions on the file.
In this example output, 222 is the user ID and 500 is the group ID. Next, use sudo to re-run the copy
command that failed.
Run the following command again to determine whether the permissions changed.
If the user ID and group ID have changed, use the following command to restore them.
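A sketch of the check-and-restore sequence under those assumptions (paths assume ec2-user and the /mnt/tempvol mount point; 222 and 500 are the example user and group IDs from the output described above):

```shell
# Check the owner, group, and permissions of the copied file.
ls -l /mnt/tempvol/home/ec2-user/.ssh/authorized_keys
# If the user ID and group ID changed, restore the original values.
sudo chown 222:500 /mnt/tempvol/home/ec2-user/.ssh/authorized_keys
```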
2. Detach the volume from the temporary instance (you unmounted it in the previous step): From the
Amazon EC2 console, select the root device volume for the original instance (you made note of the
volume ID in a previous step), choose Actions, Detach Volume, and then choose Yes, Detach. Wait
for the state of the volume to become available. (You might need to choose the Refresh icon.)
3. Reattach the volume to the original instance: With the volume still selected, choose Actions, Attach
Volume. Select the instance ID of the original instance, specify the device name that you noted
earlier in Step 2 (p. 1299) for the original root device attachment (/dev/sda1 or /dev/xvda), and
then choose Attach.
Important
If you don't specify the same device name as the original attachment, you cannot start the
original instance. Amazon EC2 expects the root device volume at sda1 or /dev/xvda.
Step 8: Connect to the original instance using the new key pair
Select the original instance, choose Instance state, Start instance. After the instance enters the
running state, you can connect to it using the private key file for your new key pair.
Note
If the name of your new key pair and corresponding private key file is different from the name
of the original key pair, ensure that you specify the name of the new private key file when you
connect to your instance.
Step 9: Clean up
(Optional) You can terminate the temporary instance if you have no further use for it. Select the
temporary instance, and choose Instance state, Terminate instance.
Amazon EC2 security groups for Linux instances
A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic.
When you launch an instance in a VPC, you must specify a security group that's created for that VPC.
After you launch an instance, you can change its security groups. Security groups are associated with
network interfaces. Changing an instance's security groups changes the security groups associated
with the primary network interface (eth0). For more information, see Changing an instance's security
groups in the Amazon VPC User Guide. You can also change the security groups associated with any other
network interface. For more information, see Modify network interface attributes (p. 1091).
Security is a shared responsibility between AWS and you. For more information, see Security in Amazon
EC2 (p. 1211). AWS provides security groups as one of the tools for securing your instances, and you
need to configure them to meet your security needs. If you have requirements that aren't fully met by
security groups, you can maintain your own firewall on any of your instances in addition to using security
groups.
To allow traffic to a Windows instance, see Amazon EC2 security groups for Windows instances in the
Amazon EC2 User Guide for Windows Instances.
Contents
• Security group rules (p. 1304)
• Security group connection tracking (p. 1305)
• Untracked connections (p. 1306)
• Example (p. 1306)
• Throttling (p. 1307)
• Default and custom security groups (p. 1307)
• Default security groups (p. 1307)
• Custom security groups (p. 1308)
• Work with security groups (p. 1309)
• Create a security group (p. 1309)
• Copy a security group (p. 1310)
• View your security groups (p. 1311)
• Add rules to a security group (p. 1311)
• Update security group rules (p. 1314)
Security group rules
• By default, security groups allow all outbound traffic. Note that Amazon EC2 blocks traffic on port 25
by default. For more information, see Restriction on email sent using port 25 (p. 1681).
• Security group rules are always permissive; you can't create rules that deny access.
• Security group rules enable you to filter traffic based on protocols and port numbers.
• Security groups are stateful—if you send a request from your instance, the response traffic for that
request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this
also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound
rules. For more information, see Security group connection tracking (p. 1305).
• You can add and remove rules at any time. Your changes are automatically applied to the instances
that are associated with the security group.
The effect of some rule changes can depend on how the traffic is tracked. For more information, see
Security group connection tracking (p. 1305).
• When you associate multiple security groups with an instance, the rules from each security group
are effectively aggregated to create one set of rules. Amazon EC2 uses this set of rules to determine
whether to allow access.
You can assign multiple security groups to an instance. Therefore, an instance can have hundreds of
rules that apply. This might cause problems when you access the instance. We recommend that you
condense your rules as much as possible.
• Name: The name for the security group (for example, "my-security-group").
A name can be up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*. When the name contains trailing spaces, we trim the spaces when we save the name. For example, if you enter "Test Security Group " for the name, we store it as "Test Security Group".
• Protocol: The protocol to allow. The most common protocols are 6 (TCP), 17 (UDP), and 1 (ICMP).
• Port range: For TCP, UDP, or a custom protocol, the range of ports to allow. You can specify a single
port number (for example, 22), or range of port numbers (for example, 7000-8000).
• ICMP type and code: For ICMP and ICMPv6, the ICMP type and code. For example, use type 8 for ICMP
Echo Request or type 128 for ICMPv6 Echo Request.
• Source or destination: The source (inbound rules) or destination (outbound rules) for the traffic.
Specify one of these options:
• An individual IPv4 address. You must use the /32 prefix length; for example, 203.0.113.1/32.
• An individual IPv6 address. You must use the /128 prefix length; for example,
2001:db8:1234:1a00::123/128.
• A range of IPv4 addresses, in CIDR block notation; for example, 203.0.113.0/24.
• A range of IPv6 addresses, in CIDR block notation; for example, 2001:db8:1234:1a00::/64.
• A prefix list ID, for example, pl-1234abc1234abc123. For more information, see Prefix lists in the
Amazon VPC User Guide.
• Another security group. This allows instances that are associated with the specified security group
to access instances associated with this security group. Choosing this option does not add rules
from the source security group to this security group. You can specify one of the following security
groups:
• The current security group
• A different security group for the same VPC
• A different security group for a peer VPC in a VPC peering connection
• (Optional) Description: You can add a description for the rule, which can help you identify it later. A description can be up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*.
When you create a security group rule, AWS assigns a unique ID to the rule. You can use the ID of a rule
when you use the API or CLI to modify or delete the rule.
When you specify a security group as the source or destination for a rule, the rule affects all instances
that are associated with the security group. Incoming traffic is allowed based on the private IP addresses
of the instances that are associated with the source security group (and not the public IP or Elastic IP
addresses). For more information about IP addresses, see Amazon EC2 instance IP addressing (p. 1018).
If your security group rule references a deleted security group in the same VPC or in a peer VPC, or if it
references a security group in a peer VPC for which the VPC peering connection has been deleted, the
rule is marked as stale. For more information, see Working with Stale Security Group Rules in the Amazon
VPC Peering Guide.
If there is more than one rule for a specific port, Amazon EC2 applies the most permissive rule. For
example, if you have a rule that allows access to TCP port 22 (SSH) from IP address 203.0.113.1, and
another rule that allows access to TCP port 22 from everyone, everyone has access to TCP port 22.
When you add, update, or remove rules, the changes are automatically applied to all instances associated
with the security group.
As an example, suppose that you initiate an ICMP ping command to your instance from your home computer, and your inbound security group rules allow ICMP traffic. Information about
the connection is tracked. Response traffic from the instance for the ping
command is not tracked as a new request, but rather as an established connection, and is allowed to flow
out of the instance, even if your outbound security group rules restrict outbound ICMP traffic.
For protocols other than TCP, UDP, or ICMP, only the IP address and protocol number is tracked. If your
instance sends traffic to another host (host B), and host B initiates the same type of traffic to your
instance in a separate request within 600 seconds of the original request or response, your instance
accepts it regardless of inbound security group rules. Your instance accepts it because it’s regarded as
response traffic.
To ensure that traffic is immediately interrupted when you remove a security group rule, or to ensure
that all inbound traffic is subject to firewall rules, you can use a network ACL for your subnet. Network
ACLs are stateless and therefore do not automatically allow response traffic. For more information, see
Network ACLs in the Amazon VPC User Guide.
Untracked connections
Not all flows of traffic are tracked. If a security group rule permits TCP or UDP flows for all traffic
(0.0.0.0/0 or ::/0) and there is a corresponding rule in the other direction that permits all response traffic
(0.0.0.0/0 or ::/0) for all ports (0-65535), then that flow of traffic is not tracked. The response traffic is
therefore allowed to flow based on the inbound or outbound rule that permits the response traffic, and
not on tracking information.
An untracked flow of traffic is immediately interrupted if the rule that enables the flow is removed or
modified. For example, if you have an open (0.0.0.0/0) outbound rule, and you remove a rule that allows
all (0.0.0.0/0) inbound SSH (TCP port 22) traffic to the instance (or modify it such that the connection
would no longer be permitted), your existing SSH connections to the instance are immediately dropped.
The connection was not previously being tracked, so the change will break the connection. On the other
hand, if you have a narrower inbound rule that initially allows the SSH connection (meaning that the
connection was tracked), but change that rule to no longer allow new connections from the address of
the current SSH client, the existing connection will not be broken by changing the rule.
Example
In the following example, the security group has specific inbound rules for TCP and ICMP traffic, and
outbound rules that allow all outbound IPv4 and IPv6 traffic.
Inbound rules: allow TCP traffic on port 22 (SSH) from 203.0.113.1/32, TCP traffic on port 80 (HTTP) from 0.0.0.0/0 and ::/0, and ICMP traffic.
Outbound rules: allow all IPv4 traffic (0.0.0.0/0) and all IPv6 traffic (::/0).
• TCP traffic on port 22 (SSH) to and from the instance is tracked, because the inbound rule allows traffic
from 203.0.113.1/32 only, and not all IP addresses (0.0.0.0/0).
• TCP traffic on port 80 (HTTP) to and from the instance is not tracked, because both the inbound and
outbound rules allow all traffic (0.0.0.0/0 or ::/0).
• ICMP traffic is always tracked, regardless of rules.
• If you remove the outbound rule from the security group, all traffic to and from the instance is tracked,
including traffic on port 80 (HTTP).
Throttling
Amazon EC2 defines the maximum number of connections that can be tracked per instance. After the
maximum is reached, any packets that are sent or received are dropped because a new connection
cannot be established. When this happens, applications that send and receive packets cannot
communicate properly.
To determine whether packets were dropped because the network traffic for your instance exceeded
the maximum number of connections that can be tracked, use the conntrack_allowance_exceeded
network performance metric. For more information, see Monitor network performance for your EC2
instance (p. 1116).
Connections made through certain AWS resources, such as Elastic Load Balancing load balancers and NAT gateways, are automatically tracked, even if the security group configuration does not otherwise require tracking.
With Elastic Load Balancing, if you exceed the maximum number of connections that can be tracked per
instance, we recommend that you scale either the number of instances registered with the load balancer
or the size of the instances registered with the load balancer.
Default and custom security groups
Topics
• Default security groups (p. 1307)
• Custom security groups (p. 1308)
Default security groups
A default security group is named "default", and it has an ID assigned by AWS. The following table
describes the default rules for a default security group.
Inbound rule: allows inbound traffic from other resources that are assigned to the same default security
group.
Outbound rules: allow all outbound traffic.
You can add or remove inbound and outbound rules for any default security group.
You can't delete a default security group. If you try to delete a default security group, you see the
following error: Client.CannotDelete: the specified group: "sg-51530134" name:
"default" cannot be deleted by a user.
Custom security groups
When you create a security group, you must provide it with a name and a description. Security group
names and descriptions can be up to 255 characters in length, and are limited to the following
characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*.
A security group name cannot start with sg-, and it must be unique within the VPC.
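These naming constraints can be checked client-side before calling the API. The following is a convenience sketch, assuming the character set a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$* that this guide lists for names and descriptions; the API performs the authoritative validation.

```python
import re

# Characters permitted in security group names and descriptions:
# a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*
_VALID = re.compile(r"^[a-zA-Z0-9 ._\-:/()#,@\[\]+=&;{}!$*]+$")

def valid_sg_name(name):
    """Client-side check of the naming constraints (a convenience sketch;
    the EC2 API remains the authoritative validator)."""
    return (
        0 < len(name) <= 255
        and not name.startswith("sg-")
        and bool(_VALID.match(name))
    )

print(valid_sg_name("web-servers"))  # True
print(valid_sg_name("sg-mygroup"))   # False: reserved prefix
print(valid_sg_name("bad|name"))     # False: '|' is not permitted
```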
The following are the default rules for a security group that you create: no inbound rules, and an
outbound rule that allows all outbound traffic.
After you've created a security group, you can change its inbound rules to reflect the type of inbound
traffic that you want to reach the associated instances. You can also change its outbound rules.
For more information about the rules you can add to a security group, see Security group rules for
different use cases (p. 1318).
After you launch an instance, you can change its security groups. For more information, see Change an
instance's security group (p. 1317).
You can create, view, update, and delete security groups and security group rules using the Amazon EC2
console and the command line tools.
Tasks
• Create a security group (p. 1309)
• Copy a security group (p. 1310)
• View your security groups (p. 1311)
• Add rules to a security group (p. 1311)
• Update security group rules (p. 1314)
• Delete rules from a security group (p. 1315)
• Delete a security group (p. 1316)
• Assign a security group to an instance (p. 1317)
• Change an instance's security group (p. 1317)
By default, new security groups start with only an outbound rule that allows all traffic to leave the
instances. You must add rules to enable any inbound traffic or to restrict the outbound traffic.
A security group can be used only in the VPC for which it is created.
New console
a. Enter a descriptive name and brief description for the security group. They can't be edited
after the security group is created. The name and description can be up to 255 characters
long. The valid characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*.
b. For VPC, choose the VPC.
5. You can add security group rules now, or you can add them later. For more information, see Add
rules to a security group (p. 1311).
6. You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter
the tag key and value.
7. Choose Create security group.
Old console
Command line
The copy receives a new unique security group ID and you must give it a name. You can also add a
description.
You can't copy a security group from one Region to another Region.
You can create a copy of a security group using one of the following methods.
New console
Old console
4. The Create Security Group dialog opens, and is populated with the rules from the existing
security group. Specify a name and description for your new security group. For VPC, choose the
ID of the VPC. When you are done, choose Create.
New console
Old console
Command line
You can use Amazon EC2 Global View to view your security groups across all Regions for which your
AWS account is enabled. For more information, see List and filter resources across Regions using
Amazon EC2 Global View (p. 1665).
New console
• For custom TCP or UDP, you must enter the port range to allow.
• For custom ICMP, you must choose the ICMP type from Protocol, and, if applicable, the
code from Port range. For example, to allow ping commands, choose Echo Request from
Protocol.
• For any other type, the protocol and port range are configured for you.
b. For Source, do one of the following to allow traffic.
• Choose Custom and then enter an IP address in CIDR notation, a CIDR block, another
security group, or a prefix list.
• Choose Anywhere to allow all traffic for the specified protocol to reach your instance.
This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This
is acceptable for a short time in a test environment, but it's unsafe in production
environments. In production, authorize only a specific IP address or range of addresses to
access your instances.
If your security group is in a VPC that's enabled for IPv6, this option automatically adds a
rule for the ::/0 IPv6 CIDR block.
• Choose My IP to allow inbound traffic from only your local computer's public IPv4
address.
c. For Description, optionally specify a brief description for the rule.
5. Choose Preview changes, Save rules.
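The source choices above map onto the IpPermissions structure that the EC2 API accepts for ingress rules. The following sketch builds an SSH rule limited to a single host, normalizing a bare address to a /32 the way the My IP option does; the helper name and default description are ours, and the structure is shown to scale, not as a drop-in client.

```python
import ipaddress

def ssh_ingress_rule(source_ip, description="SSH from admin workstation"):
    """Build an EC2-style IpPermissions entry allowing SSH from one host.

    Normalizes a bare IPv4 address to a /32 CIDR, as the console's
    My IP option does. (Helper name and defaults are illustrative.)
    """
    cidr = str(ipaddress.ip_network(f"{source_ip}/32"))
    return {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": cidr, "Description": description}],
    }

rule = ssh_ingress_rule("203.0.113.1")
print(rule["IpRanges"][0]["CidrIp"])  # 203.0.113.1/32
```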
• For custom TCP or UDP, you must enter the port range to allow.
• For custom ICMP, you must choose the ICMP type from Protocol, and, if applicable, the
code from Port range.
• For any other type, the protocol and port range are configured automatically.
b. For Destination, do one of the following.
• Choose Custom and then enter an IP address in CIDR notation, a CIDR block, another
security group, or a prefix list for which to allow outbound traffic.
• Choose Anywhere to allow outbound traffic to all IP addresses. This option automatically
adds the 0.0.0.0/0 IPv4 CIDR block as the destination.
If your security group is in a VPC that's enabled for IPv6, this option automatically adds a
rule for the ::/0 IPv6 CIDR block.
• Choose My IP to allow outbound traffic only to your local computer's public IPv4 address.
c. (Optional) For Description, specify a brief description for the rule.
5. Choose Preview changes, Confirm.
Old console
If your security group is in a VPC that's enabled for IPv6, the Anywhere option creates two
rules—one for IPv4 traffic (0.0.0.0/0) and one for IPv6 traffic (::/0).
• My IP: automatically adds the public IPv4 address of your local computer.
• For Description, you can optionally specify a description for the rule.
For more information about the types of rules that you can add, see Security group rules for
different use cases (p. 1318).
5. Choose Save.
6. You can also specify outbound rules. On the Outbound tab, choose Edit, Add Rule, and do the
following:
If your security group is in a VPC that's enabled for IPv6, the Anywhere option creates two
rules—one for IPv4 traffic (0.0.0.0/0) and one for IPv6 traffic (::/0).
• My IP: automatically adds the public IPv4 address of your local computer.
• For Description, you can optionally specify a description for the rule.
7. Choose Save.
Command line
New console
When you modify the protocol, port range, or source or destination of an existing security group rule
using the console, the console deletes the existing rule and adds a new one for you.
Old console
When you modify the protocol, port range, or source or destination of an existing security group rule
using the console, the console deletes the existing rule and adds a new one for you.
Command line
You cannot modify the protocol, port range, or source or destination of an existing rule using the
Amazon EC2 API or the command line tools. Instead, you must delete the existing rule and add a new
rule. You can, however, update the description of an existing rule.
To update a rule
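Because an update is really a delete plus an add, scripted updates are often expressed as a revoke/authorize pair. The following sketch formats the standard aws ec2 commands as strings for review; the helper name is ours, and nothing is executed.

```python
def update_ingress_commands(group_id, protocol, port, old_cidr, new_cidr):
    """Return the revoke/authorize AWS CLI command pair that replaces a rule.

    The CLI has no in-place "modify rule" operation, so an update is a
    delete followed by an add. (Sketch only: commands are returned as
    strings for review rather than executed.)
    """
    base = f"--group-id {group_id} --protocol {protocol} --port {port}"
    return [
        f"aws ec2 revoke-security-group-ingress {base} --cidr {old_cidr}",
        f"aws ec2 authorize-security-group-ingress {base} --cidr {new_cidr}",
    ]

for cmd in update_ingress_commands("sg-1234567890abcdef0", "tcp", 22,
                                   "0.0.0.0/0", "203.0.113.1/32"):
    print(cmd)
```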
You can delete rules from a security group using one of the following methods.
New console
Old console
Command line
New console
Old console
Command line
• To assign a security group to an instance when you launch the instance, see Step 6: Configure Security
Group (p. 570).
• To specify a security group in a launch template, see Step 6 of Create a new launch template using
parameters you define (p. 581).
New console
To remove an already associated security group, choose Remove for that security group.
5. Choose Save.
Old console
Command line
To change the security groups for an instance using the command line
The following are examples of the kinds of rules that you can add to security groups for specific kinds of
access.
Examples
• Web server rules (p. 1318)
• Database server rules (p. 1319)
• Rules to connect to instances from your computer (p. 1320)
• Rules to connect to instances from an instance with the same security group (p. 1320)
• Rules for ping/ICMP (p. 1320)
• DNS server rules (p. 1321)
• Amazon EFS rules (p. 1321)
• Elastic Load Balancing rules (p. 1322)
• VPC peering rules (p. 1323)
For the inbound rule on the database port, specify one of the following as the source:
• A specific IP address or range of IP addresses (in CIDR block notation) in your local network
• A security group ID for a group of instances that access the database
You can optionally restrict outbound traffic from your database servers. For example, you might want
to allow access to the internet for software updates, but restrict all other kinds of traffic. You must first
remove the default outbound rule that allows all outbound traffic.
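One way to express such a restricted outbound rule is the IpPermissions structure the EC2 API uses (for example, with authorize-security-group-egress). The following sketch allows only HTTPS out, e.g. for software updates; the variable name and description text are illustrative.

```python
# Outbound rule permitting only HTTPS to the internet, in the
# IpPermissions shape the EC2 API accepts. It replaces the default
# allow-all egress once that rule has been removed.
https_egress = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0",
                  "Description": "HTTPS out for software updates"}],
}
print(https_egress["IpRanges"][0]["CidrIp"])  # 0.0.0.0/0
```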
The following table describes the inbound rule for a security group that enables associated instances to
communicate with each other. The rule allows all types of traffic.
To use the ping6 command to ping the IPv6 address for your instance, you must add the following
inbound ICMPv6 rule.
TCP (protocol number 6), port 53: allows inbound DNS resolution over TCP.
UDP (protocol number 17), port 53: allows inbound DNS resolution over UDP.
TCP (protocol number 6), port 2049 (NFS), source: the ID of this security group. Allows inbound NFS
access from resources (including the mount target) associated with this security group.
To mount an Amazon EFS file system on your Amazon EC2 instance, you must connect to your instance.
Therefore, the security group associated with your instance must have rules that allow inbound SSH from
your local computer or local network.
Inbound: for an internal load balancer, allow traffic on the listener port from the IPv4 CIDR block of
the VPC.
Outbound: allow traffic to the instance security group on the instance listener port and the health
check port.
The security group rules for your instances must allow the load balancer to communicate with your
instances on both the listener port and the health check port.
Inbound: TCP (protocol number 6), the health check port, source: the ID of the load balancer security
group. Allows traffic from the load balancer on the health check port.
For more information, see Configure security groups for your Classic Load Balancer in the User Guide
for Classic Load Balancers, and Security groups for your Application Load Balancer in the User Guide for
Application Load Balancers.
To learn whether Amazon Elastic Compute Cloud or other AWS services are in scope of specific
compliance programs, see AWS Services in Scope by Compliance Program. For general information, see
AWS Compliance Programs.
You can download third-party audit reports using AWS Artifact. For more information, see Downloading
Reports in AWS Artifact.
Your compliance responsibility when using AWS services is determined by the sensitivity of your data,
your company's compliance objectives, and applicable laws and regulations. AWS provides the following
resources to help with compliance:
• Security and Compliance Quick Start Guides – These deployment guides discuss architectural
considerations and provide steps for deploying baseline environments on AWS that are security and
compliance focused.
• Architecting for HIPAA Security and Compliance Whitepaper – This whitepaper describes how
companies can use AWS to create HIPAA-compliant applications.
Note
Not all services are compliant with HIPAA.
• AWS Compliance Resources – This collection of workbooks and guides might apply to your industry
and location.
• Evaluating Resources with Rules in the AWS Config Developer Guide – The AWS Config service assesses
how well your resource configurations comply with internal practices, industry guidelines, and
regulations.
• AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS
that helps you check your compliance with security industry standards and best practices.
• AWS Audit Manager – This AWS service helps you continuously audit your AWS usage to simplify how
you manage risk and compliance with regulations and industry standards.
Storage
Amazon EC2 provides you with flexible, cost effective, and easy-to-use data storage options for your
instances. Each option has a unique combination of performance and durability. These storage options
can be used independently or in combination to suit your requirements.
After reading this section, you should have a good understanding of how you can use the data
storage options supported by Amazon EC2 to meet your specific requirements. These storage options
include Amazon EBS, Amazon EC2 instance store, Amazon EFS, and Amazon S3.
The following figure shows the relationship between these storage options and your instance.
Amazon EBS
Amazon EBS provides durable, block-level storage volumes that you can attach to a running instance.
You can use Amazon EBS as a primary storage device for data that requires frequent and granular
updates. For example, Amazon EBS is the recommended storage option when you run a database on an
instance.
An EBS volume behaves like a raw, unformatted, external block device that you can attach to a single
instance. The volume persists independently from the running life of an instance. After an EBS volume
is attached to an instance, you can use it like any other physical hard drive. As illustrated in the previous
figure, multiple volumes can be attached to an instance. You can also detach an EBS volume from one
instance and attach it to another instance. You can dynamically change the configuration of a volume
attached to an instance. EBS volumes can also be created as encrypted volumes using the Amazon EBS
encryption feature. For more information, see Amazon EBS encryption (p. 1536).
To keep a backup copy of your data, you can create a snapshot of an EBS volume, which is stored in
Amazon S3. You can create an EBS volume from a snapshot, and attach it to another instance. For more
information, see Amazon Elastic Block Store (p. 1325).
Amazon EC2 instance store
Many instances can access storage from disks that are physically attached to the host computer. This
disk storage is referred to as instance store. Instance store provides temporary block-level storage for
instances. The data on an instance store volume persists only during the life of the associated instance;
if you stop, hibernate, or terminate an instance, any data on instance store volumes is lost. For more
information, see Amazon EC2 instance store (p. 1613).
Amazon EFS
Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system
and configure your instances to mount the file system. You can use an EFS file system as a common data
source for workloads and applications running on multiple instances. For more information, see Use
Amazon EFS with Amazon EC2 (p. 1633).
Amazon S3
Amazon S3 provides access to reliable and inexpensive data storage infrastructure. It is designed to
make web-scale computing easier by enabling you to store and retrieve any amount of data, at any time,
from within Amazon EC2 or anywhere on the web. For example, you can use Amazon S3 to store backup
copies of your data and applications. Amazon EC2 uses Amazon S3 to store EBS snapshots and instance
store-backed AMIs. For more information, see Use Amazon S3 with Amazon EC2 (p. 1631).
Adding storage
Every time you launch an instance from an AMI, a root storage device is created for that instance. The
root storage device contains all the information necessary to boot the instance. You can specify storage
volumes in addition to the root device volume when you create an AMI or launch an instance using block
device mapping. For more information, see Block device mappings (p. 1647).
You can also attach EBS volumes to a running instance. For more information, see Attach an Amazon EBS
volume to an instance (p. 1353).
Storage pricing
For information about storage pricing, open AWS Pricing, scroll down to Services Pricing, choose
Storage, and then choose the storage option to open that storage option's pricing page. For information
about estimating the cost of storage, see the AWS Pricing Calculator.
We recommend Amazon EBS for data that must be quickly accessible and requires long-term persistence.
EBS volumes are particularly well-suited for use as the primary storage for file systems, databases, or for
any applications that require fine granular updates and access to raw, unformatted, block-level storage.
Amazon EBS is well suited to both database-style applications that rely on random reads and writes, and
to throughput-intensive applications that perform long, continuous reads and writes.
With Amazon EBS, you pay only for what you use. For more information about Amazon EBS pricing, see
the Projecting Costs section of the Amazon Elastic Block Store page.
Contents
• Features of Amazon EBS (p. 1326)
• Amazon EBS volumes (p. 1327)
• Amazon EBS snapshots (p. 1381)
• Recycle Bin for Amazon EBS snapshots (p. 1460)
• Amazon Data Lifecycle Manager (p. 1478)
• Amazon EBS data services (p. 1523)
• Amazon EBS and NVMe on Linux instances (p. 1552)
• Amazon EBS–optimized instances (p. 1556)
• Amazon EBS volume performance on Linux instances (p. 1581)
• Amazon CloudWatch metrics for Amazon EBS (p. 1596)
• Amazon CloudWatch Events for Amazon EBS (p. 1602)
• Amazon EBS quotas (p. 1613)
The following is a summary of performance and use cases for each volume type.
• General Purpose SSD volumes (gp2 and gp3) balance price and performance for a wide variety of
transactional workloads. These volumes are ideal for use cases such as boot volumes, medium-size
single instance databases, and development and test environments.
• Provisioned IOPS SSD volumes (io1 and io2) are designed to meet the needs of I/O-intensive
workloads that are sensitive to storage performance and consistency. They provide a consistent
IOPS rate that you specify when you create the volume. This enables you to predictably scale to tens
of thousands of IOPS per instance. Additionally, io2 volumes provide the highest levels of volume
durability.
• Throughput Optimized HDD volumes (st1) provide low-cost magnetic storage that defines
performance in terms of throughput rather than IOPS. These volumes are ideal for large, sequential
workloads such as Amazon EMR, ETL, data warehouses, and log processing.
• Cold HDD volumes (sc1) provide low-cost magnetic storage that defines performance in terms of
throughput rather than IOPS. These volumes are ideal for large, sequential, cold-data workloads.
If you require infrequent access to your data and are looking to save costs, these volumes provide
inexpensive block storage.
• You can create your EBS volumes as encrypted volumes, in order to meet a wide range of data-at-rest
encryption requirements for regulated/audited data and applications. When you create an encrypted
EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and
snapshots created from the volume are all encrypted. The encryption occurs on the servers that host
EC2 instances, providing encryption of data-in-transit from EC2 instances to EBS storage. For more
information, see Amazon EBS encryption (p. 1536).
• You can create point-in-time snapshots of EBS volumes, which are persisted to Amazon S3. Snapshots
protect data for long-term durability, and they can be used as the starting point for new EBS volumes.
The same snapshot can be used to instantiate as many volumes as you wish. These snapshots can be
copied across AWS Regions. For more information, see Amazon EBS snapshots (p. 1381).
• Performance metrics, such as bandwidth, throughput, latency, and average queue length, are available
through the AWS Management Console. These metrics, provided by Amazon CloudWatch, allow you to
monitor the performance of your volumes to make sure that you are providing enough performance
for your applications without paying for resources you don't need. For more information, see Amazon
EBS volume performance on Linux instances (p. 1581).
You can use EBS volumes as primary storage for data that requires frequent updates, such as the system
drive for an instance or storage for a database application. You can also use them for throughput-
intensive applications that perform continuous disk scans. EBS volumes persist independently from the
running life of an EC2 instance.
You can attach multiple EBS volumes to a single instance. The volume and instance must be in the same
Availability Zone. Depending on the volume and instance types, you can use Multi-Attach (p. 1355) to
mount a volume to multiple instances at the same time.
Amazon EBS provides the following volume types: General Purpose SSD (gp2 and gp3), Provisioned IOPS
SSD (io1 and io2), Throughput Optimized HDD (st1), Cold HDD (sc1), and Magnetic (standard). They
differ in performance characteristics and price, allowing you to tailor your storage performance and cost
to the needs of your applications. For more information, see Amazon EBS volume types (p. 1329).
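The trade-offs among these volume types can be summarized as a rough chooser. This is a heuristic sketch based on the descriptions in this guide (for example, that workloads needing more than 16,000 IOPS call for Provisioned IOPS volumes), not AWS sizing guidance; the function and its thresholds are ours.

```python
def suggest_volume_type(iops=0, throughput_mib=0, latency_sensitive=False,
                        sequential=False, cold=False):
    """Rough EBS volume-type chooser (heuristic sketch, not AWS guidance).

    Sequential, throughput-oriented workloads map to the HDD types;
    latency-sensitive or very high-IOPS workloads map to Provisioned
    IOPS SSD; everything else defaults to General Purpose SSD.
    """
    if sequential and not latency_sensitive:
        return "sc1" if cold else "st1"
    if latency_sensitive or iops > 16000:
        return "io2"
    return "gp3"

print(suggest_volume_type())                            # gp3
print(suggest_volume_type(iops=50000))                  # io2
print(suggest_volume_type(sequential=True))             # st1
print(suggest_volume_type(sequential=True, cold=True))  # sc1
```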
Your account has a limit on the number of EBS volumes that you can use, and the total storage available
to you. For more information about these limits, and how to request an increase in your limits, see
Amazon EC2 service quotas (p. 1680).
Contents
• Benefits of using EBS volumes (p. 1328)
• Amazon EBS volume types (p. 1329)
• Constraints on the size and configuration of an EBS volume (p. 1346)
• Create an Amazon EBS volume (p. 1349)
• Attach an Amazon EBS volume to an instance (p. 1353)
• Attach a volume to multiple instances with Amazon EBS Multi-Attach (p. 1355)
• Make an Amazon EBS volume available for use on Linux (p. 1360)
• View information about an Amazon EBS volume (p. 1364)
• Replace an Amazon EBS volume (p. 1366)
• Monitor the status of your volumes (p. 1370)
• Detach an Amazon EBS volume from a Linux instance (p. 1378)
• Delete an Amazon EBS volume (p. 1380)
Data availability
When you create an EBS volume, it is automatically replicated within its Availability Zone to prevent data
loss due to failure of any single hardware component. You can attach an EBS volume to any EC2 instance
in the same Availability Zone. After you attach a volume, it appears as a native block device similar to
a hard drive or other physical device. At that point, the instance can interact with the volume just as it
would with a local drive. You can connect to the instance and format the EBS volume with a file system,
such as ext3, and then install applications.
If you attach multiple volumes to a device that you have named, you can stripe data across the volumes
for increased I/O and throughput performance.
You can attach io1 and io2 EBS volumes to up to 16 Nitro-based instances. For more information, see
Attach a volume to multiple instances with Amazon EBS Multi-Attach (p. 1355). Otherwise, you can
attach an EBS volume to a single instance.
You can get monitoring data for your EBS volumes, including root device volumes for EBS-backed
instances, at no additional charge. For more information about monitoring metrics, see Amazon
CloudWatch metrics for Amazon EBS (p. 1596). For information about tracking the status of your
volumes, see Amazon CloudWatch Events for Amazon EBS (p. 1602).
Data persistence
An EBS volume is off-instance storage that can persist independently from the life of an instance. You
continue to pay for the volume usage as long as the data persists.
EBS volumes that are attached to a running instance automatically detach from the instance with
their data intact when the instance is terminated, provided that the Delete on Termination check
box was cleared when you configured the EBS volumes for your instance in the EC2 console. The
volume can then be reattached to a new instance, enabling quick recovery. If the Delete on
Termination check box is selected, the volumes are deleted when the EC2 instance is terminated.
If you are using an EBS-backed instance, you can stop and restart that instance without affecting
the data stored in the attached volume. The volume remains attached throughout the stop-start
cycle. This enables you to process
and store the data on your volume indefinitely, only using the processing and storage resources when
required. The data persists on the volume until the volume is deleted explicitly. The physical block
storage used by deleted EBS volumes is overwritten with zeroes before it is allocated to another account.
If you are dealing with sensitive data, you should consider encrypting your data manually or storing
the data on a volume protected by Amazon EBS encryption. For more information, see Amazon EBS
encryption (p. 1536).
By default, the root EBS volume that is created and attached to an instance at launch is deleted
when that instance is terminated. You can modify this behavior by changing the value of the flag
DeleteOnTermination to false when you launch the instance. This modified value causes the
volume to persist even after the instance is terminated, and enables you to attach the volume to another
instance.
By default, additional EBS volumes that are created and attached to an instance at launch are not
deleted when that instance is terminated. You can modify this behavior by changing the value of the flag
DeleteOnTermination to true when you launch the instance. This modified value causes the volumes
to be deleted when the instance is terminated.
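The DeleteOnTermination flag described above is set per volume through a block device mapping when you launch the instance. The following sketch shows the mapping shape accepted by RunInstances and launch templates; the helper function itself is just an illustrative convenience.

```python
def block_device_mapping(device_name, delete_on_termination):
    """Block device mapping entry controlling DeleteOnTermination.

    Matches the shape accepted by RunInstances / launch templates; the
    helper is illustrative, not part of any AWS SDK.
    """
    return {
        "DeviceName": device_name,
        "Ebs": {"DeleteOnTermination": delete_on_termination},
    }

# Keep the root volume after termination; delete the data volume with it.
mappings = [
    block_device_mapping("/dev/xvda", False),
    block_device_mapping("/dev/xvdf", True),
]
print(mappings[0]["Ebs"]["DeleteOnTermination"])  # False
```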
Data encryption
For simplified data encryption, you can create encrypted EBS volumes with the Amazon EBS encryption
feature. All EBS volume types support encryption. You can use encrypted EBS volumes to meet a wide
range of data-at-rest encryption requirements for regulated/audited data and applications. Amazon EBS
encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256) and an Amazon-managed
key infrastructure. The encryption occurs on the server that hosts the EC2 instance, providing encryption
of data-in-transit from the EC2 instance to Amazon EBS storage. For more information, see Amazon EBS
encryption (p. 1536).
Amazon EBS encryption uses AWS Key Management Service (AWS KMS) master keys when creating
encrypted volumes and any snapshots created from your encrypted volumes. The first time you create
an encrypted EBS volume in a region, a default master key is created for you automatically. This key
is used for Amazon EBS encryption unless you select a customer master key (CMK) that you created
separately using AWS KMS. Creating your own CMK gives you more flexibility, including the ability to
create, rotate, disable, define access controls, and audit the encryption keys used to protect your data.
For more information, see the AWS Key Management Service Developer Guide.
Snapshots
Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of
the data in the volume to Amazon S3, where it is stored redundantly in multiple Availability Zones. The
volume does not need to be attached to a running instance in order to take a snapshot. As you continue
to write data to a volume, you can periodically create a snapshot of the volume to use as a baseline for
new volumes. These snapshots can be used to create multiple new EBS volumes or move volumes across
Availability Zones. Snapshots of encrypted EBS volumes are automatically encrypted.
When you create a new volume from a snapshot, it's an exact copy of the original volume at the time
the snapshot was taken. EBS volumes that are created from encrypted snapshots are automatically
encrypted. By optionally specifying a different Availability Zone, you can use this functionality to create a
duplicate volume in that zone. The snapshots can be shared with specific AWS accounts or made public.
When you create snapshots, you incur charges in Amazon S3 based on the volume's total size. For a
successive snapshot of the volume, you are only charged for any additional data beyond the volume's
original size.
Snapshots are incremental backups, meaning that only the blocks on the volume that have changed
after your most recent snapshot are saved. If you have a volume with 100 GiB of data, but only 5 GiB of
data have changed since your last snapshot, only the 5 GiB of modified data is written to Amazon S3.
Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you
need to retain only the most recent snapshot.
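The incremental model above is easy to sanity-check with a little arithmetic. A sketch that ignores compression and pricing details:

```python
def snapshot_charged_gib(volume_gib, changed_gib_per_snapshot):
    """Approximate GiB stored per snapshot under incremental snapshots.

    The first snapshot stores the full volume; each later one stores only
    the blocks changed since the previous snapshot. Treat this as the
    arithmetic from the example above, not a billing calculation.
    """
    return [volume_gib] + list(changed_gib_per_snapshot)

# 100 GiB volume, then two snapshots with 5 GiB changed before each.
stored = snapshot_charged_gib(100, [5, 5])
print(stored)       # [100, 5, 5]
print(sum(stored))  # 110
```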
To help categorize and manage your volumes and snapshots, you can tag them with metadata of your
choice. For more information, see Tag your Amazon EC2 resources (p. 1666).
To back up your volumes automatically, you can use Amazon Data Lifecycle Manager (p. 1478) or AWS
Backup.
Flexibility
EBS volumes support live configuration changes while in production. You can modify volume type,
volume size, and IOPS capacity without service interruptions. For more information, see Amazon EBS
Elastic Volumes (p. 1523).
• Solid state drives (SSD) (p. 1330) — Optimized for transactional workloads involving frequent read/
write operations with small I/O size, where the dominant performance attribute is IOPS.
• Hard disk drives (HDD) (p. 1331) — Optimized for large streaming workloads where the dominant
performance attribute is throughput.
• Previous generation (p. 1332) — Hard disk drives that can be used for workloads with small datasets
where data is accessed infrequently and performance is not of primary importance. We recommend
that you consider a current generation volume type instead.
There are several factors that can affect the performance of EBS volumes, such as instance configuration,
I/O characteristics, and workload demand. To fully use the IOPS provisioned on an EBS volume, use EBS-
optimized instances (p. 1556). For more information about getting the most out of your EBS volumes,
see Amazon EBS volume performance on Linux instances (p. 1581).
• General Purpose SSD — Provides a balance of price and performance. We recommend these volumes
for most workloads.
• Provisioned IOPS SSD — Provides high performance for mission-critical, low-latency, or high-
throughput workloads.
The following is a summary of the use cases and characteristics of SSD-backed volumes. For information
about the maximum IOPS and throughput per instance, see Amazon EBS–optimized instances (p. 1556).
Use cases:
• General Purpose SSD (gp2 and gp3): low-latency interactive apps; development and test environments.
• Provisioned IOPS SSD (io1 and io2): workloads that require sustained IOPS performance or more than
16,000 IOPS; I/O-intensive database workloads.
• io2 Block Express: workloads that require sub-millisecond latency, sustained IOPS performance, or
more than 64,000 IOPS or 1,000 MiB/s of throughput.
Boot volume: Supported
* The throughput limit is between 128 MiB/s and 250 MiB/s, depending on the volume size. Volumes
smaller than or equal to 170 GiB deliver a maximum throughput of 128 MiB/s. Volumes larger than 170
GiB but smaller than 334 GiB deliver a maximum throughput of 250 MiB/s if burst credits are available.
Volumes larger than or equal to 334 GiB deliver 250 MiB/s regardless of burst credits. gp2 volumes that
were created before December 3, 2018 and that have not been modified since creation might not reach
full performance unless you modify the volume (p. 1523).
† Maximum IOPS and throughput are guaranteed only on Instances built on the Nitro System (p. 232)
provisioned with more than 32,000 IOPS. Other instances guarantee up to 32,000 IOPS and 500 MiB/s.
io1 volumes that were created before December 6, 2017 and that have not been modified since creation
might not reach full performance unless you modify the volume (p. 1523).
‡ io2 Block Express volumes are supported with R5b instances only. io2 volumes attached to an R5b
instance during or after launch automatically run on Block Express. For more information, see io2 Block
Express volumes (p. 1337).
• Throughput Optimized HDD — A low-cost HDD designed for frequently accessed, throughput-
intensive workloads.
• Cold HDD — The lowest-cost HDD design for less frequently accessed workloads.
The following is a summary of the use cases and characteristics of HDD-backed volumes. For information
about the maximum IOPS and throughput per instance, see Amazon EBS–optimized instances (p. 1556).
Durability: 99.8% - 99.9% (0.1% - 0.2% annual failure rate) for both st1 and sc1 volumes
The maximum ratio of provisioned IOPS to provisioned volume size is 500 IOPS per GiB. The maximum
ratio of provisioned throughput to provisioned IOPS is 0.25 MiB/s per IOPS. The following volume
configurations support provisioning either maximum IOPS or maximum throughput:
• A volume of 32 GiB or larger supports the maximum IOPS of 16,000 (500 IOPS per GiB × 32 GiB).
• A volume provisioned with 4,000 IOPS or more supports the maximum throughput of 1,000 MiB/s
(4,000 IOPS × 0.25 MiB/s per IOPS).
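As an illustration of these ratio caps, the following sketch computes the effective gp3 limits for a given volume size. The helper name is ours, not an AWS API, and it assumes the 500 IOPS/GiB and 0.25 MiB/s-per-IOPS ratios described above:

```python
# Illustrative helper (not an AWS API): gp3 caps IOPS at 500 per GiB and
# throughput at 0.25 MiB/s per provisioned IOPS, up to the 16,000 IOPS
# and 1,000 MiB/s volume maximums.
def gp3_max_limits(size_gib: int, provisioned_iops: int) -> tuple:
    max_iops = min(500 * size_gib, 16_000)
    iops = min(provisioned_iops, max_iops)
    max_throughput = min(0.25 * iops, 1_000)  # MiB/s
    return iops, max_throughput
```

For example, a 32 GiB volume can reach the full 16,000 IOPS, while a 4 GiB volume is capped at 2,000 IOPS and 500 MiB/s.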
Baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to
deliver their provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.
The performance of gp2 volumes is tied to volume size, which determines the baseline performance
level of the volume and how quickly it accumulates I/O credits; larger volumes have higher baseline
performance levels and accumulate I/O credits faster. I/O credits represent the available bandwidth
that your gp2 volume can use to burst large amounts of I/O when more than the baseline performance
is needed. The more credits your volume has for I/O, the more time it can burst beyond its baseline
performance level and the better it performs when more performance is needed. The following diagram
shows the burst-bucket behavior for gp2.
Each volume receives an initial I/O credit balance of 5.4 million I/O credits, which is enough to sustain
the maximum burst performance of 3,000 IOPS for at least 30 minutes. This initial credit balance is
designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping
experience for other applications. Volumes earn I/O credits at the baseline performance rate of 3 IOPS
per GiB of volume size. For example, a 100 GiB gp2 volume has a baseline performance of 300 IOPS.
When your volume requires more than the baseline performance I/O level, it draws on I/O credits in the
credit balance to burst to the required performance level, up to a maximum of 3,000 IOPS. When your
volume uses fewer I/O credits than it earns in a second, unused I/O credits are added to the I/O credit
balance. The maximum I/O credit balance for a volume is equal to the initial credit balance (5.4 million I/
O credits).
When the baseline performance of a volume is higher than maximum burst performance, I/O credits are
never spent. If the volume is attached to an instance built on the Nitro System (p. 232), the burst balance
is not reported. For other instances, the reported burst balance is 100%.
The burst duration of a volume is dependent on the size of the volume, the burst IOPS required, and the
credit balance when the burst begins. This is shown in the following equation:
(Credit balance)
Burst duration = ------------------------------------
(Burst IOPS) - 3(Volume size in GiB)
The following table lists several volume sizes and the associated baseline performance of the volume
(which is also the rate at which it accumulates I/O credits), the burst duration at the 3,000 IOPS
maximum (when starting with a full credit balance), and the time in seconds that the volume would take
to refill an empty credit balance.
Volume size (GiB) Baseline performance Burst duration when Seconds to fill empty
(IOPS) driving sustained credit balance when
3,000 IOPS (second) driving no IO
* The baseline performance of the volume exceeds the maximum burst performance.
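The burst-duration equation above can be sketched in a few lines. This is a minimal illustration using the constants from the text (3 IOPS per GiB baseline, a 5.4 million I/O credit bucket, and a 3,000 IOPS burst ceiling); the function names are ours:

```python
# gp2 burst-bucket arithmetic, per the equation above (illustrative helpers).
CREDITS = 5_400_000   # initial and maximum I/O credit balance
BURST_IOPS = 3_000    # maximum burst performance

def burst_duration_seconds(size_gib):
    baseline = 3 * size_gib  # credits are earned at 3 IOPS per GiB
    if baseline >= BURST_IOPS:
        return float("inf")  # baseline exceeds burst; credits are never spent
    return CREDITS / (BURST_IOPS - baseline)

def seconds_to_refill(size_gib):
    # When driving no I/O, credits accumulate at the baseline rate.
    return CREDITS / (3 * size_gib)
```

For a 100 GiB volume (300 IOPS baseline), this gives a 2,000-second burst at 3,000 IOPS and an 18,000-second refill of an empty bucket.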
If your gp2 volume uses all of its I/O credit balance, the maximum IOPS performance of the volume
remains at the baseline IOPS performance level (the rate at which your volume earns credits) and the
volume's maximum throughput is reduced to the baseline IOPS multiplied by the maximum I/O size.
Throughput can never exceed 250 MiB/s. When I/O demand drops below the baseline level and unused
credits are added to the I/O credit balance, the maximum IOPS performance of the volume again
exceeds the baseline. For example, a 100 GiB gp2 volume with an empty credit balance has a baseline
performance of 300 IOPS and a throughput limit of 75 MiB/s (300 I/O operations per second * 256 KiB
per I/O operation = 75 MiB/s). The larger a volume is, the greater the baseline performance is and the
faster it replenishes the credit balance. For more information about how IOPS are measured, see I/O
characteristics and monitoring (p. 1583).
If you notice that your volume performance is frequently limited to the baseline level (due to an empty I/
O credit balance), you should consider switching to a gp3 volume.
For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see
Monitor the burst bucket balance for volumes (p. 1346).
Throughput performance
Throughput for a gp2 volume can be calculated using the following formula, up to the throughput limit
of 250 MiB/s:
Throughput in MiB/s = ((Volume size in GiB) × (IOPS per GiB) × (I/O size in KiB))
Assuming V = volume size, I = I/O size, R = I/O rate, and T = throughput, this can be simplified to:
T = VIR
The smallest volume size that achieves the maximum throughput is given by:

         T            250 MiB/s
    V = ----- = ---------------------
         I R    (256 KiB)(3 IOPS/GiB)

                 [(250)(2^20)(Bytes)]/s
      = ---------------------------------------------
        (256)(2^10)(Bytes)([3 IOP/s]/[(2^30)(Bytes)])

        (250)(2^20)(2^30)(Bytes)
      = ------------------------
             (256)(2^10)(3)

      = 357,913,941,333 Bytes

      = 333⅓ GiB (334 GiB in practice because volumes are provisioned in whole gibibytes)
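The throughput formula and the derivation above can be checked numerically. This sketch uses only the gp2 constants from the text (256 KiB maximum I/O size, 3 IOPS per GiB, 250 MiB/s cap):

```python
# Numeric check of the gp2 throughput formula T = V × I × R (illustrative).
GIB, KIB, MIB = 2**30, 2**10, 2**20

def gp2_throughput_mib_s(size_gib):
    # volume size × IOPS/GiB × I/O size, capped at 250 MiB/s
    return min(size_gib * 3 * 256 * KIB / MIB, 250)

# Smallest volume size (in bytes) that reaches the 250 MiB/s cap
v_bytes = 250 * MIB * GIB // (256 * KIB * 3)
```

A 100 GiB volume yields 75 MiB/s, matching the example in the text, and `v_bytes` lands between 333 and 334 GiB as derived above.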
io1 volumes are designed to provide 99.8 to 99.9 percent volume durability with an annual failure
rate (AFR) no higher than 0.2 percent, which translates to a maximum of two volume failures per 1,000
running volumes over a one-year period. io2 volumes are designed to provide 99.999 percent volume
durability with an AFR no higher than 0.001 percent, which translates to a single volume failure per
100,000 running volumes over a one-year period.
Provisioned IOPS SSD io1 and io2 volumes are available for all Amazon EC2 instance types. Provisioned
IOPS SSD io2 volumes attached to R5b instances run on EBS Block Express. For more information, see
io2 Block Express volumes.
• Keep the following in mind when launching instances with io2 volumes:
• If you launch an R5b instance with an io2 volume, the volume automatically runs on Block
Express (p. 1337), regardless of the volume’s size and IOPS.
• You can't launch an instance type that does not support Block Express (p. 1337) with an io2 volume
that has a size greater than 16 TiB or IOPS greater than 64,000.
• You can't launch an R5b instance with an encrypted io2 volume that has a size greater than 16 TiB
or IOPS greater than 64,000 from an unencrypted AMI or a shared encrypted AMI. In this case, you
must first create an encrypted AMI in your account and then use that AMI to launch the instance.
• Keep the following in mind when creating io2 volumes:
• If you create an io2 volume with a size greater than 16 TiB or IOPS greater than 64,000 in a Region
where Block Express (p. 1337) is supported, the volume automatically runs on Block Express.
• You can't create an io2 volume with a size greater than 16 TiB or IOPS greater than 64,000 in a
Region where Block Express (p. 1337) is not supported.
• If you create an io2 volume with a size of 16 TiB or less and IOPS of 64,000 or less in a Region
where Block Express (p. 1337) is supported, the volume does not run on Block Express.
• You can't create an encrypted io2 volume that has a size greater than 16 TiB or IOPS greater than
64,000 from an unencrypted snapshot or a shared encrypted snapshot. In this case, you must first
create an encrypted snapshot in your account and then use that snapshot to create the volume.
• Keep the following in mind when attaching io2 volumes to instances:
• If you attach an io2 volume to an R5b instance, the volume automatically runs on Block
Express (p. 1337). It can take up to 48 hours to optimize the volume for Block Express. During this
time, the volume provides io2 latency. After the volume has been optimized, it provides the sub-
millisecond latency supported by Block Express.
• You can't attach an io2 volume with a size greater than 16 TiB or IOPS greater than 64,000 to an
instance type that does not support Block Express (p. 1337).
• If you detach an io2 volume with a size of 16 TiB or less and IOPS of 64,000 or less from an R5b
instance and attach it to an instance type that does not support Block Express (p. 1337), the volume
no longer runs on Block Express and it provides io2 latency.
• Keep the following in mind when modifying io2 volumes:
• You can't modify an io2 volume and increase its size beyond 16 TiB or its IOPS beyond 64,000 while
it is attached to an instance type that does not support Block Express (p. 1337).
• You can't modify the size or provisioned IOPS of an io2 volume that is attached to an R5b instance.
Performance
Provisioned IOPS SSD volumes can range in size from 4 GiB to 16 TiB and you can provision from 100
IOPS up to 64,000 IOPS per volume. You can achieve up to 64,000 IOPS only on Instances built on the
Nitro System (p. 232). On other instance families you can achieve performance up to 32,000 IOPS. The
maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1 for io1 volumes, and 500:1
for io2 volumes. For example, a 100 GiB io1 volume can be provisioned with up to 5,000 IOPS, while
a 100 GiB io2 volume can be provisioned with up to 50,000 IOPS. On a supported instance type, the
following volume sizes allow provisioning up to the 64,000 IOPS maximum:
• io1 volume 1,280 GiB in size or greater (50 × 1,280 GiB = 64,000 IOPS)
• io2 volume 128 GiB in size or greater (500 × 128 GiB = 64,000 IOPS)
Provisioned IOPS SSD volumes provisioned with up to 32,000 IOPS support a maximum I/O size of 256
KiB and yield as much as 500 MiB/s of throughput. With the I/O size at the maximum, peak throughput
is reached at 2,000 IOPS. Volumes provisioned with more than 32,000 IOPS (up to the maximum
of 64,000 IOPS) yield a linear increase in throughput at a rate of 16 KiB per provisioned IOPS. For
example, a volume provisioned with 48,000 IOPS can support up to 750 MiB/s of throughput (16 KiB
per provisioned IOPS × 48,000 provisioned IOPS = 750 MiB/s). To achieve the maximum throughput of
1,000 MiB/s, a volume must be provisioned with 64,000 IOPS (16 KiB per provisioned IOPS × 64,000
provisioned IOPS = 1,000 MiB/s). The following graph illustrates these performance characteristics:
Your per-I/O latency experience depends on the provisioned IOPS and on your workload profile. For the
best I/O latency experience, ensure that you provision IOPS to meet the I/O profile of your workload.
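The provisioning rules above (50:1 and 500:1 IOPS-to-size ratios, and throughput scaling at 16 KiB per IOPS beyond 32,000 IOPS) can be sketched as follows. The helper names are ours:

```python
# io1/io2 provisioning rules, per the text above (illustrative helpers).
def max_provisionable_iops(size_gib, volume_type):
    ratio = {"io1": 50, "io2": 500}[volume_type]  # IOPS per GiB
    return min(ratio * size_gib, 64_000)

def max_throughput_mib_s(provisioned_iops):
    if provisioned_iops <= 32_000:
        # 256 KiB max I/O size; the 500 MiB/s ceiling is reached at 2,000 IOPS
        return min(provisioned_iops * 256 / 1024, 500)
    # Above 32,000 IOPS, throughput scales at 16 KiB per provisioned IOPS
    return provisioned_iops * 16 / 1024
```

This reproduces the examples in the text: a 100 GiB io1 volume caps at 5,000 IOPS, and 48,000 provisioned IOPS supports up to 750 MiB/s.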
io2 Block Express is the next generation of Amazon EBS storage server architecture. It is purpose-built
to meet the performance requirements of the most demanding I/O-intensive applications that run on
Nitro-based Amazon EC2 instances.
Block Express architecture increases performance and scale. Block Express servers communicate with
Nitro-based instances using the Scalable Reliable Datagram (SRD) networking protocol. This interface
is implemented in the Nitro Card dedicated for Amazon EBS I/O function on the host hardware of the
instance. It minimizes I/O delay and latency variation (network jitter), which provides faster and more
consistent performance for your applications. For more information, see io2 Block Express volumes.
io2 Block Express volumes are suited for workloads that benefit from a single volume that provides sub-
millisecond latency, and supports higher IOPS, higher throughput, and larger capacity than io2 volumes.
io2 Block Express volumes support the same features as io2 volumes, including Multi-Attach and
encryption.
Topics
• Considerations (p. 1338)
• Performance (p. 1338)
• Quotas (p. 1339)
• Pricing and billing (p. 1339)
Considerations
• io2 Block Express volumes are currently supported with R5b instances only.
• io2 Block Express volumes are currently available in all Regions where R5b instances are available,
including us-east-1, us-east-2, us-west-2, ap-southeast-1, ap-northeast-1, and eu-
central-1. R5b instance availability might vary by Availability Zone. For more information about R5b
availability, see Find an Amazon EC2 instance type.
• io2 Block Express volumes do not support fast snapshot restore. We recommend that you initialize
these volumes to ensure that they deliver full performance. For more information, see Initialize
Amazon EBS volumes (p. 1586).
• io2 Block Express volumes do not support Elastic Volume operations.
Performance
With io2 Block Express volumes, you can provision volumes with:
• Sub-millisecond average latency
• Storage capacity up to 64 TiB
• Provisioned IOPS up to 256,000
• Volume throughput up to 4,000 MiB/s
Quotas
io2 Block Express volumes adhere to the same service quotas as io2 volumes. For more information, see
Amazon EBS quotas.
Pricing and billing
io2 volumes and io2 Block Express volumes are billed at the same rate. For more information, see
Amazon EBS pricing.
Usage reports do not distinguish between io2 Block Express volumes and io2 volumes. We recommend
that you use tags to help you identify costs associated with io2 Block Express volumes.
Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to
support frequently accessed data.
This volume type is optimized for workloads involving large, sequential I/O, and we recommend that
customers with workloads performing small, random I/O use gp2. For more information, see Inefficiency
of small read/writes on HDD (p. 1346).
Like gp2, st1 uses a burst-bucket model for performance. Volume size determines the baseline
throughput of your volume, which is the rate at which the volume accumulates throughput credits.
Volume size also determines the burst throughput of your volume, which is the rate at which you can
spend credits when they are available. Larger volumes have higher baseline and burst throughput. The
more credits your volume has, the longer it can drive I/O at the burst level.
Subject to throughput and throughput-credit caps, the available throughput of an st1 volume is
expressed by the following formula:

(Volume size) × (Credit accumulation rate per TiB) = Throughput

For a 1-TiB st1 volume, burst throughput is limited to 250 MiB/s, the bucket fills with credits at 40 MiB/
s, and it can hold up to 1 TiB-worth of credits.
Larger volumes scale these limits linearly, with throughput capped at a maximum of 500 MiB/s. After the
bucket is depleted, throughput is limited to the baseline rate of 40 MiB/s per TiB.
On volume sizes ranging from 0.125 TiB to 16 TiB, baseline throughput varies from 5 MiB/s to a cap of
500 MiB/s, which is reached at 12.5 TiB as follows:
40 MiB/s
12.5 TiB × ---------- = 500 MiB/s
1 TiB
Burst throughput varies from 31 MiB/s to a cap of 500 MiB/s, which is reached at 2 TiB as follows:
250 MiB/s
2 TiB × ---------- = 500 MiB/s
1 TiB
The following table states the full range of base and burst throughput values for st1:
Volume size (TiB) ST1 base throughput (MiB/s) ST1 burst throughput (MiB/s)
0.125 5 31
0.5 20 125
1 40 250
2 80 500
3 120 500
4 160 500
5 200 500
6 240 500
7 280 500
8 320 500
9 360 500
10 400 500
11 440 500
12 480 500
13 500 500
14 500 500
15 500 500
16 500 500
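The table above is a pair of linear functions with a 500 MiB/s cap, which can be sketched directly (illustrative helper names):

```python
# st1 throughput model, per the table above (illustrative helpers):
# baseline is 40 MiB/s per TiB, burst is 250 MiB/s per TiB, both capped
# at 500 MiB/s.
def st1_base_mib_s(size_tib):
    return min(40 * size_tib, 500)

def st1_burst_mib_s(size_tib):
    return min(250 * size_tib, 500)
```

For example, a 1-TiB volume has a 40 MiB/s baseline and a 250 MiB/s burst, and both values saturate at 500 MiB/s for the largest volumes.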
Note
When you create a snapshot of a Throughput Optimized HDD (st1) volume, performance may
drop as far as the volume's baseline value while the snapshot is in progress.
For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see
Monitor the burst bucket balance for volumes (p. 1346).
Cold HDD (sc1) volumes, though similar to Throughput Optimized HDD (st1) volumes, are designed to
support infrequently accessed data.
Note
This volume type is optimized for workloads involving large, sequential I/O, and we recommend
that customers with workloads performing small, random I/O use gp2. For more information,
see Inefficiency of small read/writes on HDD (p. 1346).
Subject to throughput and throughput-credit caps, the available throughput of an sc1 volume is
expressed by the following formula:

(Volume size) × (Credit accumulation rate per TiB) = Throughput

For a 1-TiB sc1 volume, burst throughput is limited to 80 MiB/s, the bucket fills with credits at 12 MiB/s,
and it can hold up to 1 TiB-worth of credits.
Larger volumes scale these limits linearly, with throughput capped at a maximum of 250 MiB/s. After the
bucket is depleted, throughput is limited to the baseline rate of 12 MiB/s per TiB.
On volume sizes ranging from 0.125 TiB to 16 TiB, baseline throughput varies from 1.5 MiB/s to a
maximum of 192 MiB/s, which is reached at 16 TiB as follows:
12 MiB/s
16 TiB × ---------- = 192 MiB/s
1 TiB
Burst throughput varies from 10 MiB/s to a cap of 250 MiB/s, which is reached at 3.125 TiB as follows:
80 MiB/s
3.125 TiB × ----------- = 250 MiB/s
1 TiB
The following table states the full range of base and burst throughput values for sc1:
Volume size (TiB) SC1 base throughput (MiB/s) SC1 burst throughput (MiB/s)
0.125 1.5 10
0.5 6 40
1 12 80
2 24 160
3 36 240
4 48 250
5 60 250
6 72 250
7 84 250
8 96 250
9 108 250
10 120 250
11 132 250
12 144 250
13 156 250
14 168 250
15 180 250
16 192 250
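As with st1, the sc1 table reduces to two capped linear functions (illustrative helper names):

```python
# sc1 throughput model, per the table above (illustrative helpers):
# baseline is 12 MiB/s per TiB (192 MiB/s at the 16-TiB maximum size),
# burst is 80 MiB/s per TiB capped at 250 MiB/s.
def sc1_base_mib_s(size_tib):
    return min(12 * size_tib, 192)

def sc1_burst_mib_s(size_tib):
    return min(80 * size_tib, 250)
```

A 1-TiB volume therefore has a 12 MiB/s baseline and an 80 MiB/s burst, with burst saturating at 250 MiB/s from 3.125 TiB upward.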
Note
When you create a snapshot of a Cold HDD (sc1) volume, performance may drop as far as the
volume's baseline value while the snapshot is in progress.
For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see
Monitor the burst bucket balance for volumes (p. 1346).
Magnetic volumes
Magnetic volumes are backed by magnetic drives and are suited for workloads where data is accessed
infrequently, and scenarios where low-cost storage for small volume sizes is important. These volumes
deliver approximately 100 IOPS on average, with burst capability of up to hundreds of IOPS, and they
can range in size from 1 GiB to 1 TiB.
Note
Magnetic is a previous generation volume type. For new applications, we recommend using one
of the newer volume types. For more information, see Previous Generation Volumes.
For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see
Monitor the burst bucket balance for volumes (p. 1346).
The st1 and sc1 bucket sizes vary according to volume size, and a full bucket contains enough tokens
for a full volume scan. However, larger st1 and sc1 volumes take longer for the volume scan to
complete due to per-instance and per-volume throughput limits. Volumes attached to smaller instances
are limited to the per-instance throughput rather than the st1 or sc1 throughput limits.
Both st1 and sc1 are designed for performance consistency of 90% of burst throughput 99% of the
time. Non-compliant periods are approximately uniformly distributed, targeting 99% of expected total
throughput each hour.
Volume size
------------ = Scan time
Throughput
For example, taking the performance consistency guarantees and other optimizations into account, an
st1 customer with a 5-TiB volume can expect to complete a full volume scan in 2.91 to 3.27 hours.
5 TiB 5 TiB
----------- = ------------------ = 10,486 seconds = 2.91 hours
500 MiB/s 0.00047684 TiB/s
2.91 hours
-------------- = 3.27 hours
(0.90)(0.99) <-- From expected performance of 90% of burst 99% of the time
Similarly, an sc1 customer with a 5-TiB volume can expect to complete a full volume scan in 5.83 to 6.54
hours.
5 TiB 5 TiB
----------- = ------------------- = 20,972 seconds = 5.83 hours
250 MiB/s 0.000238418 TiB/s
5.83 hours
-------------- = 6.54 hours
(0.90)(0.99)
The following table shows ideal scan times for volumes of various size, assuming full buckets and
sufficient instance throughput.
Volume size (TiB) ST1 scan time with burst SC1 scan time with burst
(hours)* (hours)*
1 1.17 3.64
2 1.17 3.64
3 1.75 3.64
4 2.33 4.66
5 2.91 5.83
6 3.50 6.99
7 4.08 8.16
8 4.66 9.32
9 5.24 10.49
10 5.83 11.65
11 6.41 12.82
12 6.99 13.98
13 7.57 15.15
14 8.16 16.31
15 8.74 17.48
16 9.32 18.64
* These scan times assume an average queue depth (rounded to the nearest whole number) of four or
more when performing 1 MiB of sequential I/O.
Therefore, if you have a throughput-oriented workload that needs to complete scans quickly (up to 500
MiB/s), or requires several full volume scans a day, use st1. If you are optimizing for cost, your data is
relatively infrequently accessed, and you don't need more than 250 MiB/s of scanning performance, then
use sc1.
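The scan-time arithmetic above can be captured in one small function. This is a sketch under the stated performance-consistency model (90% of burst throughput 99% of the time); the function name is ours:

```python
# Full-volume scan time, per the worked examples above (illustrative helper).
def scan_hours(size_tib, throughput_mib_s):
    ideal = size_tib * 2**20 / throughput_mib_s / 3600  # 1 TiB = 2^20 MiB
    worst = ideal / (0.90 * 0.99)  # 90% of burst, 99% of the time
    return ideal, worst
```

For a 5-TiB st1 volume at 500 MiB/s this gives roughly 2.91 to 3.27 hours, and for a 5-TiB sc1 volume at 250 MiB/s roughly 5.83 to 6.54 hours, matching the examples above.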
For example, an I/O request of 1 MiB or less counts as a 1 MiB I/O credit. However, if the I/Os are
sequential, they are merged into 1 MiB I/O blocks and count only as a 1 MiB I/O credit.
As for all Amazon EBS volumes, we recommend that you select an appropriate EBS-optimized EC2
instance in order to avoid network bottlenecks. For more information, see Amazon EBS–optimized
instances (p. 1556).
The following sections describe the most important factors that limit the usable size of an EBS volume
and offer recommendations for configuring your EBS volumes.
Contents
• Storage capacity (p. 1346)
• Service limitations (p. 1347)
• Partitioning schemes (p. 1347)
• Data block sizes (p. 1348)
Storage capacity
The following table summarizes the theoretical and implemented storage capacities for the most
commonly used file systems on Amazon EBS, assuming a 4,096 byte block size.
** https://access.redhat.com/solutions/1532
† io2 Block Express volumes support up to 64 TiB for GPT partitions. For more information, see io2
Block Express volumes (p. 1337).
Service limitations
Amazon EBS abstracts the massively distributed storage of a data center into virtual hard disk drives. To
an operating system installed on an EC2 instance, an attached EBS volume appears to be a physical hard
disk drive containing 512-byte disk sectors. The OS manages the allocation of data blocks (or clusters)
onto those virtual sectors through its storage management utilities. The allocation is in conformity with
a volume partitioning scheme, such as master boot record (MBR) or GUID partition table (GPT), and
within the capabilities of the installed file system (ext4, NTFS, and so on).
EBS is not aware of the data contained in its virtual disk sectors; it only ensures the integrity of the
sectors. This means that AWS actions and OS actions are independent of each other. When you are
selecting a volume size, be aware of the capabilities and limits of both, as in the following cases:
• EBS currently supports a maximum volume size of 64 TiB. This means that you can create an EBS
volume as large as 64 TiB, but whether the OS recognizes all of that capacity depends on its own
design characteristics and on how the volume is partitioned.
• Linux boot volumes may use either the MBR or GPT partitioning scheme. MBR supports boot volumes
up to 2047 GiB (2 TiB - 1 GiB). GPT with GRUB 2 supports boot volumes 2 TiB or larger. If your Linux
AMI uses MBR, your boot volume is limited to 2047 GiB, but your non-boot volumes do not have this
limit. For more information, see Make an Amazon EBS volume available for use on Linux (p. 1360).
Partitioning schemes
Among other impacts, the partitioning scheme determines how many logical data blocks can be uniquely
addressed in a single volume. For more information, see Data block sizes (p. 1348). The common
partitioning schemes in use are master boot record (MBR) and GUID partition table (GPT). The important
differences between these schemes can be summarized as follows.
MBR
MBR uses a 32-bit data structure to store block addresses. This means that each data block is mapped
with one of 2^32 possible integers. The maximum addressable size of a volume is given by the following
formula:

(2^32 data-block addresses) × (block size) = maximum addressable volume size

The block size for MBR volumes is conventionally limited to 512 bytes. Therefore:

(2^32 data-block addresses) × (512 bytes per address) = 2 TiB

Engineering workarounds to increase this 2-TiB limit for MBR volumes have not met with widespread
industry adoption. Consequently, Linux and Windows never detect an MBR volume as being larger than 2
TiB even if AWS shows its size to be larger.
GPT
GPT uses a 64-bit data structure to store block addresses. This means that each data block is mapped
with one of 2^64 possible integers. The maximum addressable size of a volume is given by the following
formula:

(2^64 data-block addresses) × (block size) = maximum addressable volume size

The block size for GPT volumes is commonly 4,096 bytes. Therefore:

(2^64 data-block addresses) × (4,096 bytes per address) = 64 ZiB
Real-world computer systems don't support anything close to this theoretical maximum. Implemented
file-system size is currently limited to 50 TiB for ext4 and 256 TiB for NTFS—both of which exceed the
16-TiB limit imposed by AWS.
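The addressing formulas above reduce to a single multiplication, which the following sketch verifies (helper name is ours):

```python
# Maximum addressable volume size = (number of addresses) × (block size),
# per the MBR and GPT formulas above (illustrative helper).
def max_addressable_bytes(address_bits, block_size):
    return 2**address_bits * block_size

mbr_limit = max_addressable_bytes(32, 512)    # 2 TiB
gpt_limit = max_addressable_bytes(64, 4096)   # 64 ZiB
```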
The industry default size for logical data blocks is currently 4,096 bytes (4 KiB). Because certain
workloads benefit from a smaller or larger block size, file systems support non-default block sizes
that can be specified during formatting. Scenarios in which non-default block sizes should be used are
outside the scope of this topic, but the choice of block size has consequences for the storage capacity of
the volume. The following table shows storage capacity as a function of block size:
Block size          Maximum volume size
4 KiB (default)     16 TiB
8 KiB               32 TiB
16 KiB              64 TiB
The EBS-imposed limit on volume size (16 TiB) is currently equal to the maximum size enabled by 4-KiB
data blocks.
If you are creating a volume for a high-performance storage scenario, you should make sure to use
a Provisioned IOPS SSD volume (io1 or io2) and attach it to an instance with enough bandwidth to
support your application, such as an EBS-optimized instance. The same advice holds for Throughput
Optimized HDD (st1) and Cold HDD (sc1) volumes. For more information, see Amazon EBS–optimized
instances (p. 1556).
Empty EBS volumes receive their maximum performance the moment that they are available and do not
require initialization (formerly known as pre-warming). However, storage blocks on volumes that were
created from snapshots must be initialized (pulled down from Amazon S3 and written to the volume)
before you can access the block. This preliminary action takes time and can cause a significant increase
in the latency of an I/O operation the first time each block is accessed. Volume performance is achieved
after all blocks have been downloaded and written to the volume. For most applications, amortizing this
cost over the lifetime of the volume is acceptable. To avoid this initial performance hit in a production
environment, you can force immediate initialization of the entire volume or enable fast snapshot restore.
For more information, see Initialize Amazon EBS volumes (p. 1586).
Important
If you create an io2 volume with a size greater than 16 TiB or with IOPS greater than 64,000 in
a Region where EBS Block Express is supported, the volume automatically runs on Block Express.
io2 Block Express volumes can be attached to R5b instances only. For more information, see
io2 Block Express volumes.
• Create and attach EBS volumes when you launch instances by specifying a block device mapping. For
more information, see Launch an instance using the Launch Instance Wizard (p. 565) and Block device
mappings (p. 1647).
• Create an empty EBS volume and attach it to a running instance. For more information, see Create an
empty volume (p. 1349) below.
• Create an EBS volume from a previously created snapshot and attach it to a running instance. For more
information, see Create a volume from a snapshot (p. 1351) below.
You can create an empty EBS volume using one of the following methods.
New console
6. (io1, io2, and gp3 only) For IOPS, enter the maximum number of input/output operations per
second (IOPS) that the volume should provide.
7. (gp3 only) For Throughput, enter the throughput that the volume should provide, in MiB/s.
8. For Availability Zone, choose the Availability Zone in which to create the volume. A volume can
be attached only to an instance that is in the same Availability Zone.
9. For Snapshot ID, keep the default value (Don't create volume from a snapshot).
10. (io1 and io2 only) To enable the volume for Amazon EBS Multi-Attach, select Enable Multi-
Attach. For more information, see Attach a volume to multiple instances with Amazon EBS
Multi-Attach (p. 1355).
11. Set the encryption status for the volume.
If your account is enabled for encryption by default (p. 1539), then encryption is automatically
enabled and you can't disable it. You can choose the KMS key to use to encrypt the volume.
If your account is not enabled for encryption by default, encryption is optional. To encrypt the
volume, for Encryption, choose Encrypt this volume and then select the KMS key to use to
encrypt the volume.
Note
Encrypted volumes can be attached only to instances that support Amazon EBS
encryption. For more information, see Amazon EBS encryption (p. 1536).
12. (Optional) To assign custom tags to the volume, in the Tags section, choose Add tag,
and then enter a tag key and value pair. For more information, see Tag your Amazon EC2
resources (p. 1666).
13. Choose Create volume.
Note
The volume is ready for use when the volume status is Available.
14. To use the volume, attach it to an instance. For more information, see Attach an Amazon EBS
volume to an instance (p. 1353).
Old console
EBS encryption is enabled and the default CMK for EBS encryption is chosen. You can choose a
different CMK from Master Key or paste the full ARN of any key that you can access. For more
information, see Amazon EBS encryption (p. 1536).
11. (Optional) Choose Create additional tags to add tags to the volume. For each tag, provide a tag
key and a tag value. For more information, see Tag your Amazon EC2 resources (p. 1666).
12. Choose Create Volume. The volume is ready for use when the volume status is Available.
13. To use your new volume, attach it to an instance, format it, and mount it. For more information,
see Attach an Amazon EBS volume to an instance (p. 1353).
AWS CLI
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
New EBS volumes that are created from encrypted snapshots are automatically encrypted. You can also
encrypt a volume on-the-fly while restoring it from an unencrypted snapshot. Encrypted volumes can
only be attached to instance types that support EBS encryption. For more information, see Supported
instance types (p. 1538).
You can create a volume from a snapshot using one of the following methods.
New console
9. For Snapshot ID, select the snapshot from which to create the volume.
10. Set the encryption status for the volume.
If the selected snapshot is unencrypted and your account is not enabled for encryption by
default, encryption is optional. To encrypt the volume, for Encryption, choose Encrypt this
volume and then select the KMS key to use to encrypt the volume.
Note
Encrypted volumes can be attached only to instances that support Amazon EBS
encryption. For more information, see Amazon EBS encryption (p. 1536).
11. (Optional) To assign custom tags to the volume, in the Tags section, choose Add tag,
and then enter a tag key and value pair. For more information, see Tag your Amazon EC2
resources (p. 1666).
12. Choose Create Volume.
Note
The volume is ready for use when the volume status is Available.
13. To use the volume, attach it to an instance. For more information, see Attach an Amazon EBS
volume to an instance (p. 1353).
Old console
To use the snapshot to create a volume in a different Region, copy your snapshot to that
Region and then use it to create a volume in that Region. For more information, see Copy an
Amazon EBS snapshot (p. 1391).
3. In the navigation pane, choose ELASTIC BLOCK STORE, Volumes.
4. Choose Create Volume.
5. For Volume Type, choose a volume type. For more information, see Amazon EBS volume
types (p. 1329).
6. For Snapshot ID, start typing the ID or description of the snapshot from which you are restoring
the volume, and choose it from the list of suggested options.
7. (Optional) Select Encrypt this volume to change the encryption state of your volume. This is
optional if encryption by default (p. 1539) is enabled. Select a CMK from Master Key to specify
a CMK other than the default CMK for EBS encryption.
8. For Size, verify that the default size of the snapshot meets your needs or enter the size of the
volume, in GiB.
If you specify both a volume size and a snapshot, the size must be equal to or greater than the
snapshot size. When you select a volume type and a snapshot, the minimum and maximum sizes
for the volume are shown next to Size. For more information, see Constraints on the size and
configuration of an EBS volume (p. 1346).
9. For IOPS, enter the maximum number of input/output operations per second (IOPS) that the
volume should provide. You can specify IOPS only for gp3, io1, and io2 volumes.
10. For Throughput, enter the throughput that the volume should provide, in MiB/s. You can
specify throughput only for gp3 volumes.
11. For Availability Zone, choose the Availability Zone in which to create the volume. An EBS
volume must be attached to an EC2 instance that is in the same Availability Zone as the volume.
12. (Optional) Choose Create additional tags to add tags to the volume. For each tag, provide a tag
key and a tag value.
13. Choose Create Volume.
14. To use your new volume, attach it to an instance and mount it. For more information, see Attach
an Amazon EBS volume to an instance (p. 1353).
15. If you created a volume that is larger than the snapshot, you must extend the file system on
the volume to take advantage of the extra space. For more information, see Amazon EBS Elastic
Volumes (p. 1523).
AWS CLI
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
For information about adding EBS volumes to your instance at launch, see Instance block device
mapping (p. 1652).
Prerequisites
• Determine how many volumes that you can attach to your instance. For more information, see
Instance volume limits (p. 1637).
• Determine whether you can attach your volume to multiple instances and enable Multi-Attach. For
more information, see Attach a volume to multiple instances with Amazon EBS Multi-Attach (p. 1355).
• If a volume is encrypted, you can attach it only to an instance that supports Amazon EBS encryption.
For more information, see Supported instance types (p. 1538).
• If a volume has an AWS Marketplace product code:
• You can attach a volume only to a stopped instance.
• You must be subscribed to the AWS Marketplace code that is on the volume.
• The instance's configuration, such as its type and operating system, must support that specific AWS
Marketplace code. For example, you cannot take a volume from a Windows instance and attach it to
a Linux instance.
• AWS Marketplace product codes are copied from the volume to the instance.
Important
If you attach an io2 volume to an R5b instance, the volume always runs on EBS Block Express.
Currently, only R5b instances support io2 Block Express volumes. For more information, see
io2 Block Express volumes.
You can attach a volume to an instance using one of the following methods.
New console
Old console
AWS CLI
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
Note
In some situations, you may find that a volume other than the volume attached to /dev/xvda
or /dev/sda has become the root volume of your instance. This can happen when you have
attached the root volume of another instance, or a volume created from the snapshot of a root
volume, to an instance with an existing root volume. For more information, see Boot from the
wrong volume.
Contents
• Considerations and limitations (p. 367)
• Performance (p. 1356)
• Work with Multi-Attach (p. 1356)
• Monitor a Multi-Attach enabled volume (p. 1360)
• Pricing and billing (p. 547)
Multi-Attach for io2 and io2 Block Express volumes is available in all Regions that support those
volume types.
• Standard file systems, such as XFS and EXT4, are not designed to be accessed simultaneously by
multiple servers, such as EC2 instances. Using Multi-Attach with a standard file system can result in
data corruption or loss, so this is not safe for production workloads. You can use a clustered file system
to ensure data resiliency and reliability for production workloads.
• Multi-Attach enabled volumes do not support I/O fencing. I/O fencing protocols control write access
in a shared storage environment to maintain data consistency. Your applications must provide write
ordering for the attached instances to maintain data consistency.
• Multi-Attach enabled volumes can't be created as boot volumes.
• Multi-Attach enabled volumes can be attached to one block device mapping per instance.
• Multi-Attach can't be enabled during instance launch using either the Amazon EC2 console or
RunInstances API.
• Multi-Attach enabled volumes that have an issue at the Amazon EBS infrastructure layer are
unavailable to all attached instances. Issues at the Amazon EC2 or networking layer might impact only
some attached instances.
• The following table shows volume modification support for Multi-Attach enabled io1 and io2
volumes after creation.
Modification               io2    io1
Modify volume type          ✗      ✗
Modify volume size          ✓      ✗
Modify provisioned IOPS     ✓      ✗
Enable Multi-Attach         ✓*     ✗
Disable Multi-Attach        ✓*     ✗
* You can't enable or disable Multi-Attach while the volume is attached to an instance.
Performance
Each attached instance is able to drive its maximum IOPS performance up to the volume's maximum
provisioned performance. However, the aggregate performance of all of the attached instances can't
exceed the volume's maximum provisioned performance. If the attached instances' demand for IOPS is
higher than the volume's Provisioned IOPS, the volume will not exceed its provisioned performance.
For example, say you create an io2 Multi-Attach enabled volume with 50,000 Provisioned IOPS and you
attach it to an m5.8xlarge instance and a c5.12xlarge instance. The m5.8xlarge and c5.12xlarge
instances support a maximum of 30,000 and 40,000 IOPS respectively. Each instance can drive its
maximum IOPS as it is less than the volume's Provisioned IOPS of 50,000. However, if both instances
drive I/O to the volume simultaneously, their combined IOPS can't exceed the volume's provisioned
performance of 50,000 IOPS. The volume will not exceed 50,000 IOPS.
To achieve consistent performance, it is best practice to balance I/O driven from attached instances
across the sectors of a Multi-Attach enabled volume.
Contents
• Enable Multi-Attach (p. 1356)
• Disable Multi-Attach (p. 1358)
• Attach a volume to instances (p. 1359)
• Delete on termination (p. 1359)
Enable Multi-Attach
You can enable Multi-Attach for io1 and io2 volumes during creation.
Use one of the following methods to enable Multi-Attach for an io1 or io2 volume during creation.
New console
If the selected snapshot is unencrypted and your account is not enabled for encryption by
default, encryption is optional. To encrypt the volume, for Encryption, choose Encrypt this
volume and then select the KMS key to use to encrypt the volume.
Note
You can attach encrypted volumes only to instances that support Amazon EBS
encryption. For more information, see Amazon EBS encryption (p. 1536).
10. (Optional) To assign custom tags to the volume, in the Tags section, choose Add tag,
and then enter a tag key and value pair. For more information, see Tag your Amazon EC2
resources (p. 1666).
11. Choose Create volume.
Old console
Command line
$ aws ec2 create-volume --volume-type io2 --multi-attach-enabled --size 100 --iops 2000 \
    --region us-west-2 --availability-zone us-west-2b
You can also enable Multi-Attach for io2 volumes after creation, but only if they are not
attached to any instances.
Note
You can't enable Multi-Attach for io1 volumes after creation.
Use one of the following methods to enable Multi-Attach for an Amazon EBS volume after it has been
created.
New console
Old console
Command line
Disable Multi-Attach
You can disable Multi-Attach for an io2 volume only if it is attached to no more than one instance.
Note
You can't disable Multi-Attach for io1 volumes after creation.
Use one of the following methods to disable Multi-Attach for an io2 volume.
New console
Old console
To disable Multi-Attach
Command line
You attach a Multi-Attach enabled volume to an instance in the same way that you attach any other EBS
volume. For more information, see Attach an Amazon EBS volume to an instance (p. 1353).
Delete on termination
Multi-Attach enabled volumes are deleted on instance termination if the last attached instance is
terminated and if that instance is configured to delete the volume on termination. If the volume is
attached to multiple instances that have different delete on termination settings in their volume block
device mappings, the last attached instance's block device mapping setting determines the delete on
termination behavior.
To ensure predictable delete on termination behavior, enable or disable delete on termination for all of
the instances to which the volume is attached.
By default, when a volume is attached to an instance, the delete on termination setting for the block
device mapping is set to false. If you want to turn on delete on termination for a Multi-Attach enabled
volume, modify the block device mapping.
If you want the volume to be deleted when the attached instances are terminated, enable delete on
termination in the block device mapping for all of the attached instances. If you want to retain the
volume after the attached instances have been terminated, disable delete on termination in the block
device mapping for all of the attached instances. For more information, see Preserve Amazon EBS
volumes on instance termination (p. 650).
You can modify an instance's delete on termination setting at launch or after it has launched. If you
enable or disable delete on termination during instance launch, the settings apply only to volumes that
are attached at launch. If you attach a volume to an instance after launch, you must explicitly set the
delete on termination behavior for that volume.
You can modify an instance's delete on termination setting using the command line tools only.
Use the modify-instance-attribute command and specify the DeleteOnTermination attribute in the
--block-device-mappings option.
[
    {
        "DeviceName": "/dev/sdf",
        "Ebs": {
            "DeleteOnTermination": true|false
        }
    }
]
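For example, assuming the mapping above is saved to a file named mapping.json (the file name and instance ID here are placeholders), the command might look like the following:

```shell
# Set DeleteOnTermination for the volume attached at /dev/sdf
# by passing the block device mapping as a JSON file
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --block-device-mappings file://mapping.json
```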
Data is aggregated across all of the attached instances. You can't monitor metrics for individual attached
instances.
You can take snapshots of your EBS volume for backup purposes or to use as a baseline when you create
another volume. For more information, see Amazon EBS snapshots (p. 1381).
You can get directions for volumes on a Windows instance from Make a volume available for use on
Windows in the Amazon EC2 User Guide for Windows Instances.
1. Connect to your instance using SSH. For more information, see Connect to your Linux
instance (p. 596).
2. The device could be attached to the instance with a different device name than you specified in the
block device mapping. For more information, see Device names on Linux instances (p. 1645). Use the
lsblk command to view your available disk devices and their mount points (if applicable) to help you
determine the correct device name to use. The output of lsblk removes the /dev/ prefix from full
device paths.
The following is example output for an instance built on the Nitro System (p. 232), which exposes
EBS volumes as NVMe block devices. The root device is /dev/nvme0n1, which has two partitions
named nvme0n1p1 and nvme0n1p128. The attached volume is /dev/nvme1n1, which has no
partitions and is not yet mounted.
The following is example output for a T2 instance. The root device is /dev/xvda, which has one
partition named xvda1. The attached volume is /dev/xvdf, which has no partitions and is not yet
mounted.
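The outputs described above might resemble the following sketches; device sizes and minor numbers are illustrative, not taken from a real instance.

```shell
# Nitro-based instance (EBS volumes exposed as NVMe block devices)
$ lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1       259:0    0   8G  0 disk
├─nvme0n1p1   259:1    0   8G  0 part /
└─nvme0n1p128 259:2    0   1M  0 part
nvme1n1       259:3    0  10G  0 disk

# T2 instance (Xen block devices)
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0  10G  0 disk
```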
3. Determine whether there is a file system on the volume. New volumes are raw block devices, and
you must create a file system on them before you can mount and use them. Volumes that were
created from snapshots likely have a file system on them already; if you create a new file system on
top of an existing file system, the operation overwrites your data.
Use one or both of the following methods to determine whether there is a file system on the
volume:
• Use the file -s command to get information about a specific device, such as its file system type. If
the output shows simply data, as in the following example output, there is no file system on the
device.
If the device has a file system, the command shows information about the file system type. For
example, the following output shows a root device with the XFS file system.
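The two cases might look like the following; the device names and the exact details in the output are illustrative.

```shell
# No file system: the output reports only "data"
$ sudo file -s /dev/xvdf
/dev/xvdf: data

# With a file system, the type is reported (here, XFS on the root partition)
$ sudo file -s /dev/xvda1
/dev/xvda1: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
```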
• Use the lsblk -f command to get information about all of the devices attached to the instance.
For example, the following output shows that there are three devices attached to the instance:
nvme1n1, nvme0n1, and nvme2n1. The first column lists the devices and their partitions. The
FSTYPE column shows the file system type for each device. If the column is empty for a specific
device, it means that the device does not have a file system. In this case, device nvme1n1 and
partition nvme0n1p1 on device nvme0n1 are both formatted using the XFS file system, while
device nvme2n1 and partition nvme0n1p128 on device nvme0n1 do not have file systems.
nvme0n1
├─nvme0n1p1   xfs   /   90e29211-2de8-4967-b0fb-16f51a6e464c   /
└─nvme0n1p128
nvme2n1
If the output from these commands shows that there is no file system on the device, you must create
one.
4. (Conditional) If you discovered that there is a file system on the device in the previous step, skip this
step. If you have an empty volume, use the mkfs -t command to create a file system on the volume.
Warning
Do not use this command if you're mounting a volume that already has data on it (for
example, a volume that was created from a snapshot). Otherwise, you'll format the volume
and delete the existing data.
If you get an error that mkfs.xfs is not found, use the following command to install the XFS tools
and then repeat the previous command:
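The two commands might look like the following; the device name is an example, and the package manager shown assumes a yum-based distribution such as Amazon Linux.

```shell
# Create an XFS file system on the empty volume
# (destroys any data already on the device!)
sudo mkfs -t xfs /dev/xvdf

# If mkfs.xfs is not found, install the XFS tools first
sudo yum install xfsprogs
```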
5. Use the mkdir command to create a mount point directory for the volume. The mount point is
where the volume is located in the file system tree and where you read and write files to after you
mount the volume. The following example creates a directory named /data.
6. Use the following command to mount the volume at the directory you created in the previous step.
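Assuming the /dev/xvdf device and /data mount point from the preceding examples, the command might be:

```shell
# Mount the volume at the mount point
sudo mount /dev/xvdf /data
```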
7. Review the file permissions of your new volume mount to make sure that your users and
applications can write to the volume. For more information about file permissions, see File security
at The Linux Documentation Project.
8. The mount point is not automatically preserved after rebooting your instance. To automatically
mount this EBS volume after reboot, see Automatically mount an attached volume after
reboot (p. 1362).
You can use the device name, such as /dev/xvdf, in /etc/fstab, but we recommend using the
device's 128-bit universally unique identifier (UUID) instead. Device names can change, but the UUID
persists throughout the life of the partition. By using the UUID, you reduce the chances that the system
becomes unbootable after a hardware reconfiguration. For more information, see Identify the EBS
device (p. 1553).
1. (Optional) Create a backup of your /etc/fstab file that you can use if you accidentally destroy or
delete this file while editing it.
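A simple way to do this (the backup file name is only a suggestion):

```shell
# Keep a copy of /etc/fstab to restore if an edit goes wrong
sudo cp /etc/fstab /etc/fstab.orig
```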
2. Use the blkid command to find the UUID of the device. Make a note of the UUID of the device that
you want to mount after reboot. You'll need it in the following step.
For example, the following command shows that there are two devices mounted to the instance, and
it shows the UUIDs for both devices.
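The output might resemble the following; the UUIDs shown here are placeholders, not values you should reuse.

```shell
$ sudo blkid
/dev/xvda1: LABEL="/" UUID="ca774df7-756d-4269-a3f1-76038323e572" TYPE="xfs"
/dev/xvdf: UUID="aebf131c-6957-451e-8d34-ec978d9581ae" TYPE="xfs"
```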
3. Open the /etc/fstab file using any text editor, such as nano or vim.
4. Add the following entry to /etc/fstab to mount the device at the specified mount point. The
fields are the UUID value returned by blkid (or lsblk for Ubuntu 18.04), the mount point, the file
system, and the recommended file system mount options. For more information about the required
fields, run man fstab to open the fstab manual.
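Using the /data mount point and an XFS file system from the earlier examples, an entry of the following shape (the UUID is a placeholder) mounts the device at boot:

```
UUID=aebf131c-6957-451e-8d34-ec978d9581ae  /data  xfs  defaults,nofail  0  2
```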
Note
If you ever boot your instance without this volume attached (for example, after moving the
volume to another instance), the nofail mount option enables the instance to boot even if
there are errors mounting the volume. Debian derivatives, including Ubuntu versions earlier
than 16.04, must also add the nobootwait mount option.
5. To verify that your entry works, run the following commands to unmount the device and then
mount all file systems in /etc/fstab. If there are no errors, the /etc/fstab file is OK and your
file system will mount automatically after it is rebooted.
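Assuming the /data mount point from the earlier examples, the verification commands would be:

```shell
# Unmount the device, then remount everything listed in /etc/fstab
# to validate the new entry
sudo umount /data
sudo mount -a
```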
If you are unsure how to correct errors in /etc/fstab and you created a backup file in the first step
of this procedure, you can restore from your backup file using the following command.
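Assuming the backup was saved as /etc/fstab.orig (a placeholder name), the restore command would be:

```shell
# Restore the backup created at the start of this procedure
sudo mv /etc/fstab.orig /etc/fstab
```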
You can get additional information about your EBS volumes, such as how much disk space is available,
from the operating system on the instance.
New console
To view the EBS volumes that are attached to an instance using the new console
Old console
To view the EBS volumes that are attached to an instance using the old console
AWS CLI
You can use one of the following commands to view volume attributes. For more information, see
Access Amazon EC2 (p. 3).
You can use Amazon EC2 Global View to view your volumes across all Regions for which your AWS
account is enabled. For more information, see List and filter resources across Regions using Amazon
EC2 Global View (p. 1665).
Volume state
Volume state describes the availability of an Amazon EBS volume. You can view the volume state in the
State column on the Volumes page in the console, or by using the describe-volumes AWS CLI command.
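For example, the following command sketch (the volume ID is a placeholder) returns the state of a single volume:

```shell
# View the state of a specific volume
aws ec2 describe-volumes \
    --volume-ids vol-1234567890abcdef0 \
    --query "Volumes[*].State"
```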
creating
The volume is being created.
available
The volume is not attached to an instance.
in-use
The volume is attached to an instance.
deleting
The volume is being deleted.
deleted
The volume is deleted.
error
The underlying hardware related to your EBS volume has failed, and the data associated with the
volume is unrecoverable. For information about how to restore the volume or recover the data on
the volume, see My EBS volume has a status of "error".
For information about viewing free disk space on a Windows instance, see View free disk space in the
Amazon EC2 User Guide for Windows Instances.
The procedure for replacing a volume differs depending on whether the volume is the root volume or a
data volume.
Topics
• Replace a root volume (p. 1366)
• Replace a data volume (p. 1369)
• Data stored on instance store volumes — Instance store volumes remain attached to the instance after
the root volume has been replaced.
• Network configuration — All network interfaces remain attached to the instance and they retain
their IP addresses, identifiers, and attachment IDs. When the instance becomes available, all pending
network traffic is flushed. Additionally, the instance remains on the same physical host, so it retains its
public and private IP addresses and DNS name.
• IAM policies — IAM profiles and policies (such as tag-based policies) that are associated with the
instance are retained and enforced.
When you replace the root volume for an instance, a new volume is restored to the original volume's
launch state, or using a specific snapshot. The original volume is detached from the instance, and the
new volume is attached to the instance in its place. The original volume is not automatically deleted.
If you no longer need it, you can delete it manually after the root volume replacement task completes.
For more information about root volume replacement task states, see View root volume replacement
tasks (p. 1368).
Topics
• Considerations (p. 367)
• Replace a root volume (p. 1367)
• View root volume replacement tasks (p. 1368)
Considerations
• The instance is automatically rebooted when the root volume is replaced. The contents of the memory
(RAM) are erased during the reboot.
• You can't replace the root volume if it is an instance store volume.
• You can't replace the root volume for metal instances.
• You can only use snapshots that belong to the same lineage as the instance's current root volume. You
can't use snapshot copies created from snapshots that were taken from the root volume. Additionally,
after successfully completing a root volume replacement task, snapshots taken from the previous root
volume can't be used to create a root volume replacement task for the new volume.
When you replace the root volume for an instance, you can choose to restore the volume to its initial
launch state, or you can choose to restore the volume to a specific snapshot. If you choose to restore
the volume to a specific snapshot, then you must select a snapshot that was taken of that root volume.
If you choose to restore the root volume to its initial launch state, the root volume is restored from the
snapshot that was used to create the volume.
You can replace the root volume for an instance using one of the following methods. If you use the
Amazon EC2 console, note that replacing the root volume is only available in the new console.
New console
• To restore the instance's root volume to its initial launch state, choose Create replacement
task without selecting a snapshot.
• To restore the instance's root volume to a specific snapshot, for Snapshot, select the snapshot
to use, and then choose Create replacement task.
AWS CLI
Use the create-replace-root-volume-task command. Specify the ID of the instance for which to
replace the root volume and omit the --snapshot-id parameter.
For example:
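A call of the following shape (the instance ID is a placeholder) restores the root volume to its initial launch state:

```shell
# No --snapshot-id parameter: restore to the initial launch state
aws ec2 create-replace-root-volume-task --instance-id i-1234567890abcdef0
```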
Use the create-replace-root-volume-task command. Specify the ID of the instance for which to
replace the root volume and the ID of the snapshot to use.
For example:
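A call of the following shape (both IDs are placeholders) restores the root volume from a specific snapshot:

```shell
# Restore the root volume from a snapshot of that root volume
aws ec2 create-replace-root-volume-task \
    --instance-id i-1234567890abcdef0 \
    --snapshot-id snap-9876543210abcdef0
```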
After you start a root volume replacement task, the task enters the following states:
You can view the root volume replacement tasks for an instance using one of the following methods. If
you use the console, note that this functionality is only available in the new console.
New console
AWS CLI
Use the describe-replace-root-volume-tasks command and specify the IDs of the root volume
replacement tasks to view.
For example:
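A call of the following shape (the task ID is a placeholder matching the sample output that follows) could produce output like the JSON below:

```shell
# Describe a specific root volume replacement task
aws ec2 describe-replace-root-volume-tasks \
    --replace-root-volume-task-ids replacevol-1234567890abcdef0
```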
{
"ReplaceRootVolumeTasks": [
{
"ReplaceRootVolumeTaskId": "replacevol-1234567890abcdef0",
"InstanceId": "i-1234567890abcdef0",
"TaskState": "succeeded",
"StartTime": "2020-11-06 13:09:54.0",
"CompleteTime": "2020-11-06 13:10:14.0"
}]
}
Note that EBS volumes can only be attached to EC2 instances in the same Availability Zone.
New console
1. Create a volume from the snapshot and write down the ID of the new volume. For more
information, see Create a volume from a snapshot (p. 1351).
2. On the Instances page, select the instance on which to replace the volume and write down the
instance ID.
With the instance still selected, choose the Storage tab. In the Block devices section, find the
volume to replace and write down the device name for the volume, for example /dev/sda1.
For Instance and Device name, enter the instance ID and device name that you wrote down in
Step 2, and then choose Attach volume.
5. Connect to your instance and mount the volume. For more information, see Make an Amazon
EBS volume available for use on Linux (p. 1360).
Old console
1. Create a volume from the snapshot and write down the ID of the new volume. For more
information, see Create a volume from a snapshot (p. 1351).
2. On the Volumes page, select the check box for the volume to replace. On the Description tab,
find Attachment information and write down the device name of the volume (for example,
/dev/sda1) and the ID of the instance.
3. With the volume still selected, choose Actions, Detach Volume. When prompted for
confirmation, choose Yes, Detach. Clear the check box for this volume.
4. Select the check box for the new volume that you created in step 1. Choose Actions, Attach
Volume. Enter the instance ID and device name that you wrote down in step 2, and then choose
Attach.
5. Connect to your instance and mount the volume. For more information, see Make an Amazon
EBS volume available for use on Linux (p. 1360).
Contents
• EBS volume status checks (p. 1370)
• EBS volume events (p. 1372)
• Work with an impaired volume (p. 1373)
• Work with the Auto-Enabled IO volume attribute (p. 1375)
For additional monitoring information, see Amazon CloudWatch metrics for Amazon EBS (p. 1596) and
Amazon CloudWatch Events for Amazon EBS (p. 1602).
Volume status checks are automated tests that run every 5 minutes and return a pass or fail status. If
all checks pass, the status of the volume is ok. If a check fails, the status of the volume is impaired. If
the status is insufficient-data, the checks may still be in progress on the volume. You can view the
results of volume status checks to identify any impaired volumes and take any necessary actions.
When Amazon EBS determines that a volume's data is potentially inconsistent, the default is that it
disables I/O to the volume from any attached EC2 instances, which helps to prevent data corruption.
After I/O is disabled, the next volume status check fails, and the volume status is impaired. In addition,
you'll see an event that lets you know that I/O is disabled, and that you can resolve the impaired status
of the volume by enabling I/O to the volume. We wait until you enable I/O to give you the opportunity
to decide whether to continue to let your instances use the volume, or to run a consistency check using a
command, such as fsck, before doing so.
Note
Volume status is based on the volume status checks, and does not reflect the volume state.
Therefore, volume status does not indicate volumes in the error state (for example, when
a volume is incapable of accepting I/O). For information about volume states, see Volume
state (p. 1365).
If the consistency of a particular volume is not a concern, and you'd prefer that the volume be made
available immediately if it's impaired, you can override the default behavior by configuring the volume
to automatically enable I/O. If you enable the Auto-Enable IO volume attribute (autoEnableIO in the
API), the volume status check continues to pass. In addition, you'll see an event that lets you know that
the volume was determined to be potentially inconsistent, but that its I/O was automatically enabled.
This enables you to check the volume's consistency or replace it at a later time.
The I/O performance status check compares actual volume performance to the expected performance of
a volume. It alerts you if the volume is performing below expectations. This status check is available only
for Provisioned IOPS SSD (io1 and io2) and General Purpose SSD (gp3) volumes that are attached to an
instance. The status check is not valid for General Purpose SSD (gp2), Throughput Optimized HDD (st1),
Cold HDD (sc1), or Magnetic (standard) volumes. The I/O performance status check is performed once
every minute, and CloudWatch collects this data every 5 minutes. It might take up to 5 minutes from
the moment that you attach an io1 or io2 volume to an instance for the status check to report the I/O
performance status.
Important
While initializing Provisioned IOPS SSD volumes that were restored from snapshots, the
performance of the volume may drop below 50 percent of its expected level, which causes the
volume to display a warning state in the I/O Performance status check. This is expected, and
you can ignore the warning state on Provisioned IOPS SSD volumes while you are initializing
them. For more information, see Initialize Amazon EBS volumes (p. 1586).
You can view and work with status checks using the following methods.
The Volume status column displays the operational status of each volume.
3. To view the status details of a specific volume, select it in the grid and choose the Status checks
tab.
4. If you have a volume with a failed status check (status is impaired), see Work with an impaired
volume (p. 1373).
Alternatively, you can choose Events in the navigator to view all the events for your instances and
volumes. For more information, see EBS volume events (p. 1372).
AWS CLI
For more information about these command line interfaces, see Access Amazon EC2 (p. 3).
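For example, you can view volume status checks with the describe-volume-status command. The volume ID below is a placeholder; substitute your own.

```shell
# Describe the status checks for a volume (hypothetical volume ID).
aws ec2 describe-volume-status \
    --volume-ids vol-1234567890abcdef0
```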
To automatically enable I/O on a volume with potential data inconsistencies, change the setting of the
Auto-Enabled IO volume attribute (autoEnableIO in the API). For more information about changing
this attribute, see Work with an impaired volume (p. 1373).
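As a sketch, the attribute can be changed and verified with modify-volume-attribute and describe-volume-attribute; the volume ID is a placeholder.

```shell
# Enable automatic I/O re-enablement on a volume (hypothetical volume ID).
aws ec2 modify-volume-attribute \
    --volume-id vol-1234567890abcdef0 \
    --auto-enable-io

# Verify the current setting of the attribute.
aws ec2 describe-volume-attribute \
    --volume-id vol-1234567890abcdef0 \
    --attribute autoEnableIO
```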
Each event includes a start time that indicates the time at which the event occurred, and a duration that
indicates how long I/O for the volume was disabled. The end time is added to the event when I/O for the
volume is enabled.
Volume data is potentially inconsistent. I/O is disabled for the volume until you explicitly enable it.
The event description changes to IO Enabled after you explicitly enable I/O.
IO Enabled
I/O operations were automatically enabled on this volume after an event occurred. We recommend
that you check for data inconsistencies before continuing to use the data.
Normal
For io1, io2, and gp3 volumes only. Volume performance is as expected.
Degraded
For io1, io2, and gp3 volumes only. Volume performance is below expectations.
Severely Degraded
For io1, io2, and gp3 volumes only. Volume performance is well below expectations.
Stalled
For io1, io2, and gp3 volumes only. Volume performance is severely impacted.
You can view events for your volumes using the following methods.
AWS CLI
For more information about these command line interfaces, see Access Amazon EC2 (p. 3).
If you have a volume where I/O is disabled, see Work with an impaired volume (p. 1373). If you have a
volume where I/O performance is below normal, this might be a temporary condition due to an action
you have taken (for example, creating a snapshot of a volume during peak usage, running the volume on
an instance that cannot support the I/O bandwidth required, accessing data on the volume for the first
time, etc.).
Options
• Option 1: Perform a consistency check on the volume attached to its instance (p. 1373)
• Option 2: Perform a consistency check on the volume using another instance (p. 1374)
• Option 3: Delete the volume if you no longer need it (p. 1375)
New console
Old console
AWS CLI
You can use one of the following commands to view event information for your Amazon EBS
volumes. For more information about these command line interfaces, see Access Amazon
EC2 (p. 3).
Use the following procedure to check the volume outside your production environment.
Important
This procedure may cause the loss of write I/Os that were suspended when volume I/O was
disabled.
New console
Old console
AWS CLI
If you have a recent snapshot that backs up the data on the volume, you can create a new volume from
the snapshot. For more information, see Create a volume from a snapshot (p. 1351).
If Amazon EBS determines that a volume's data is potentially inconsistent, by default it disables I/O to the volume from any attached instances and creates a volume status event that indicates the cause of the failure. If the consistency of a particular
volume is not a concern, and you prefer that the volume be made available immediately if it's impaired,
you can override the default behavior by configuring the volume to automatically enable I/O. If you
enable the Auto-Enabled IO volume attribute (autoEnableIO in the API), I/O between the volume and
the instance is automatically re-enabled and the volume's status check will pass. In addition, you'll see
an event that lets you know that the volume was in a potentially inconsistent state, but that its I/O was
automatically enabled. When this event occurs, you should check the volume's consistency and replace it
if necessary. For more information, see EBS volume events (p. 1372).
You can view and modify the Auto-Enabled IO attribute of a volume using one of the following
methods.
New console
The Auto-enabled I/O field displays the current setting (Enabled or Disabled) for the selected
volume.
Old console
3. Select the volume and choose Actions, Change Auto-Enable IO Setting. Alternatively, choose
the Status Checks tab, and for Auto-Enabled IO, choose Edit.
4. Select the Auto-Enable Volume IO check box to automatically enable I/O for an impaired
volume. To disable the feature, clear the check box.
5. Choose Save.
AWS CLI
For more information about these command line interfaces, see Access Amazon EC2 (p. 3).
For information about detaching volumes from a Windows instance, see Detach a volume from a
Windows instance in the Amazon EC2 User Guide for Windows Instances.
Topics
• Considerations (p. 367)
• Unmount and detach a volume (p. 1378)
• Troubleshoot (p. 1379)
Considerations
• You can detach an Amazon EBS volume from an instance explicitly or by terminating the instance.
However, if the instance is running, you must first unmount the volume from the instance.
• If an EBS volume is the root device of an instance, you must stop the instance before you can detach
the volume.
• If you detach a volume without first unmounting it, you can reattach it, but it might not get the same mount point. If there were writes to the volume in progress when it was detached, the data on the volume might be out of sync.
• After you detach a volume, you are still charged for volume storage as long as the storage amount
exceeds the limit of the AWS Free Tier. You must delete a volume to avoid incurring further charges.
For more information, see Delete an Amazon EBS volume (p. 1380).
Steps
• Step 1: Unmount the volume (p. 1378)
• Step 2: Detach the volume from the instance (p. 1378)
From your Linux instance, use the following command to unmount the /dev/sdh device.
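A typical unmount command for this device looks like the following; the device name /dev/sdh matches the example in the text.

```shell
# Unmount the device; -d also detaches an associated loop device, if any.
sudo umount -d /dev/sdh
```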
To detach the volume from the instance, use one of the following methods:
New console
Old console
Command line
After unmounting the volume, you can use one of the following commands to detach it. For more
information about these command line interfaces, see Access Amazon EC2 (p. 3).
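For example, the detach-volume command detaches a volume from its instance; the volume ID below is a placeholder.

```shell
# Detach a volume (hypothetical volume ID).
aws ec2 detach-volume --volume-id vol-1234567890abcdef0

# As a last resort for a stuck volume, force the detachment. The instance
# gets no chance to flush file system caches, so check the file system afterward.
aws ec2 detach-volume --volume-id vol-1234567890abcdef0 --force
```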
Troubleshoot
The following are common problems encountered when detaching volumes, and how to resolve them.
Note
To guard against the possibility of data loss, take a snapshot of your volume before attempting
to unmount it. Forced detachment of a stuck volume can damage the file system or the data it contains, or leave you unable to attach a new volume using the same device name, unless you reboot the instance.
• If you encounter problems while detaching a volume through the Amazon EC2 console, it can be
helpful to use the describe-volumes CLI command to diagnose the issue. For more information, see
describe-volumes.
• If your volume stays in the detaching state, you can force the detachment by choosing Force Detach.
Use this option only as a last resort to detach a volume from a failed instance, or if you are detaching
a volume with the intention of deleting it. The instance doesn't get an opportunity to flush file system
caches or file system metadata. If you use this option, you must perform the file system check and
repair procedures.
• If you've tried to force the volume to detach multiple times over several minutes and it stays in the
detaching state, you can post a request for help to the Amazon EC2 forum. To help expedite a
resolution, include the volume ID and describe the steps that you've already taken.
• When you attempt to detach a volume that is still mounted, the volume can become stuck in the busy
state while it is trying to detach. The following output from describe-volumes shows an example of
this condition:
"Volumes": [
    {
        "AvailabilityZone": "us-west-2b",
        "Attachments": [
            {
                "AttachTime": "2016-07-21T23:44:52.000Z",
                "InstanceId": "i-fedc9876",
                "VolumeId": "vol-1234abcd",
                "State": "busy",
                "DeleteOnTermination": false,
                "Device": "/dev/sdf"
            }
        ],
        ...
    }
]
When you encounter this state, detachment can be delayed indefinitely until you unmount the volume,
force detachment, reboot the instance, or all three.
You can delete an EBS volume using one of the following methods.
New console
Old console
AWS CLI
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
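For example, the delete-volume command deletes a detached volume; the volume ID below is a placeholder.

```shell
# Delete an available (detached) volume (hypothetical volume ID).
aws ec2 delete-volume --volume-id vol-1234567890abcdef0
```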
When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of
the original volume that was used to create the snapshot. The replicated volume loads data in the
background so that you can begin using it immediately. If you access data that hasn't been loaded
yet, the volume immediately downloads the requested data from Amazon S3, and then continues
loading the rest of the volume's data in the background. For more information, see Create Amazon EBS
snapshots (p. 1385).
When you delete a snapshot, only the data unique to that snapshot is removed. For more information,
see Delete an Amazon EBS snapshot (p. 1389).
Snapshot events
You can track the status of your EBS snapshots through CloudWatch Events. For more information, see
EBS snapshot events (p. 1606).
Multi-volume snapshots
Snapshots can be used to create a backup of critical workloads, such as a large database or a file system
that spans across multiple EBS volumes. Multi-volume snapshots allow you to take exact point-in-
time, data coordinated, and crash-consistent snapshots across multiple EBS volumes attached to an
EC2 instance. You are no longer required to stop your instance or to coordinate between volumes to
ensure crash consistency, because snapshots are automatically taken across multiple EBS volumes. For
more information, see the steps for creating a multi-volume EBS snapshot under Create Amazon EBS
snapshots (p. 1385) .
Snapshot pricing
Charges for your snapshots are based on the amount of data stored. Because snapshots are incremental,
deleting a snapshot might not reduce your data storage costs. Data referenced exclusively by a snapshot
is removed when that snapshot is deleted, but data referenced by other snapshots is preserved. For
more information, see Amazon Elastic Block Store Volumes and Snapshots in the AWS Billing and Cost
Management User Guide.
Contents
• How incremental snapshots work (p. 1382)
• Copy and share snapshots (p. 1384)
• Encryption support for snapshots (p. 1385)
The diagram in this section shows Volume 1 at three points in time. A snapshot is taken of each of these
three volume states. The diagram specifically shows the following:
• In State 1, the volume has 10 GiB of data. Because Snap A is the first snapshot taken of the volume,
the entire 10 GiB of data must be copied.
• In State 2, the volume still contains 10 GiB of data, but 4 GiB have changed. Snap B needs to copy
and store only the 4 GiB that changed after Snap A was taken. The other 6 GiB of unchanged data,
which are already copied and stored in Snap A, are referenced by Snap B rather than being copied
again. This is indicated by the dashed arrow.
• In State 3, 2 GiB of data have been added to the volume, for a total of 12 GiB. Snap C needs to
copy the 2 GiB that were added after Snap B was taken. As shown by the dashed arrows, Snap C also
references 4 GiB of data stored in Snap B, and 6 GiB of data stored in Snap A.
• The total storage required for the three snapshots is 16 GiB.
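The storage totals in the diagram can be checked with simple arithmetic: each snapshot stores only the blocks that changed since the previous snapshot.

```shell
# Storage consumed by the three incremental snapshots in the diagram.
snap_a=10   # Snap A: full copy of the initial 10 GiB
snap_b=4    # Snap B: only the 4 GiB changed since Snap A
snap_c=2    # Snap C: only the 2 GiB added since Snap B
total=$((snap_a + snap_b + snap_c))
echo "Total snapshot storage: ${total} GiB"   # prints 16 GiB
```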
The diagram in this section shows how incremental snapshots can be taken from different volumes.
Important
The diagram assumes that you own Vol 1 and that you have created Snap A. If Vol 1 was owned
by another AWS account and that account took Snap A and shared it with you, then Snap B
would be a full snapshot.
1. Vol 1 has 10 GiB of data. Because Snap A is the first snapshot taken of the volume, the entire 10
GiB of data is copied and stored.
2. Vol 2 is created from Snap A, so it is an exact replica of Vol 1 at the time the snapshot was taken.
3. Over time, 4 GiB of data is added to Vol 2 and its total size becomes 14 GiB.
4. Snap B is taken from Vol 2. For Snap B, only the 4 GiB of data that was added after the volume was
created from Snap A is copied and stored. The other 10 GiB of unchanged data, which is already
stored in Snap A, is referenced by Snap B instead of being copied and stored again.
Snap B is an incremental snapshot of Snap A, even though it was created from a different volume.
For more information about how data is managed when you delete a snapshot, see Delete an Amazon
EBS snapshot (p. 1389).
A snapshot is constrained to the AWS Region where it was created. After you create a snapshot of an EBS
volume, you can use it to create new volumes in the same Region. For more information, see Create a
volume from a snapshot (p. 1351). You can also copy snapshots across Regions, making it possible to use
multiple Regions for geographical expansion, data center migration, and disaster recovery. You can copy
any accessible snapshot that has a completed status. For more information, see Copy an Amazon EBS
snapshot (p. 1391).
Complete documentation of possible snapshot encryption scenarios is provided in Create Amazon EBS
snapshots (p. 1385) and in Copy an Amazon EBS snapshot (p. 1391).
Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of
the snapshot is pending until the snapshot is complete (when all of the modified blocks have been
transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent
snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not
affected by ongoing reads and writes to the volume.
You can take a snapshot of an attached volume that is in use. However, snapshots only capture data
that has been written to your Amazon EBS volume at the time the snapshot command is issued. This
might exclude any data that has been cached by any applications or the operating system. If you can
pause any file writes to the volume long enough to take a snapshot, your snapshot should be complete.
However, if you can't pause all file writes to the volume, you should unmount the volume from within
the instance, issue the snapshot command, and then remount the volume to ensure a consistent and
complete snapshot. You can remount and use your volume while the snapshot status is pending.
To make snapshot management easier, you can tag your snapshots during creation or add tags
afterward. For example, you can apply tags describing the original volume from which the snapshot
was created, or the device name that was used to attach the original volume to an instance. For more
information, see Tag your Amazon EC2 resources (p. 1666).
Snapshot encryption
Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created
from encrypted snapshots are also automatically encrypted. The data in your encrypted volumes and
any associated snapshots is protected both at rest and in motion. For more information, see Amazon EBS
encryption (p. 1536).
By default, only you can create volumes from snapshots that you own. However, you can share your
unencrypted snapshots with specific AWS accounts, or you can share them with the entire AWS
community by making them public. For more information, see Share an Amazon EBS snapshot (p. 1419).
You can share an encrypted snapshot only with specific AWS accounts. For others to use your shared,
encrypted snapshot, you must also share the CMK that was used to encrypt it. Users with access to
your encrypted snapshot must create their own personal copy of it and then use that copy. Your copy of
a shared, encrypted snapshot can also be re-encrypted using a different key. For more information, see
Share an Amazon EBS snapshot (p. 1419).
Multi-volume snapshots
You can create multi-volume snapshots, which are point-in-time snapshots for all EBS volumes attached
to an EC2 instance. You can also create lifecycle policies to automate the creation and retention of multi-
volume snapshots. For more information, see Amazon Data Lifecycle Manager (p. 1478).
After the snapshots are created, each snapshot is treated as an individual snapshot. You can perform all
snapshot operations, such as restore, delete, and copy across Regions or accounts, just as you would with
a single volume snapshot. You can also tag your multi-volume snapshots as you would a single volume
snapshot. We recommend you tag your multiple volume snapshots to manage them collectively during
restore, copy, or retention.
After the snapshots are created, they appear in your EC2 console, each created at the same point in time.
If any one snapshot for the multi-volume snapshot set fails, all of the other snapshots display an error
status and a createSnapshots CloudWatch event with a result of failed is sent to your AWS account.
For more information, see Create snapshots (createSnapshots) (p. 1606).
Considerations
The following considerations apply to creating snapshots:
• When you create a snapshot for an EBS volume that serves as a root device, you should stop the
instance before taking the snapshot.
• You cannot create snapshots from instances for which hibernation is enabled.
• You cannot create snapshots from hibernated instances.
• Although you can take a snapshot of a volume while a previous snapshot of that volume is in the
pending status, having multiple pending snapshots of a volume can result in reduced volume
performance until the snapshots complete.
• There is a limit of one pending snapshot for a single st1 or sc1 volume, and five pending snapshots for a single volume of the other volume types. If you receive a ConcurrentSnapshotLimitExceeded error while trying to create multiple concurrent snapshots of the same volume, wait for one or more of the pending snapshots to complete before creating another snapshot of that volume.
Create a snapshot
To create a snapshot from the specified volume, use one of the following methods.
New console
The Encryption field indicates the selected volume's encryption status. If the selected volume
is encrypted, the snapshot is automatically encrypted using the same KMS key. If the selected
volume is unencrypted, the snapshot is not encrypted.
5. (Optional) For Description, enter a brief description for the snapshot.
6. (Optional) To assign custom tags to the snapshot, in the Tags section, choose Add tag, and then
enter the key-value pair. You can add up to 50 tags.
7. Choose Create snapshot.
Old console
AWS CLI
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
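For example, the create-snapshot command creates a snapshot of a single volume and can tag it at creation; the volume ID and tag values below are placeholders.

```shell
# Create a snapshot of a volume, with a description and a custom tag
# (volume ID and tag values are placeholders).
aws ec2 create-snapshot \
    --volume-id vol-1234567890abcdef0 \
    --description "Daily backup of the data volume" \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Name,Value=data-volume-backup}]'
```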
New console
The Attached volumes section lists all of the volumes that are attached to the selected
instance, along with their encryption statuses. Snapshots get the same encryption status as
their source volume.
5. For Description, enter a brief description for the snapshots. This description is applied to all of
the snapshots.
6. To create snapshots from all of the instance's volumes, including its root volume, for Root
volume, choose Include. To create snapshots from the instance's data volumes only, for Root
volume, choose Exclude.
7. (Optional) To automatically copy tags from the source volumes to the corresponding snapshots,
for Copy tags from source volume, select Enable. This sets snapshot metadata, such as access policies, attachment information, and cost allocation, to match the source volume.
8. (Optional) To assign custom tags to the snapshots, in the Tags section, choose Add tag, and
then enter the key-value pair. You can add up to 50 tags.
9. Choose Create snapshot.
During snapshot creation, the snapshots are managed together. If one of the snapshots in the
volume set fails, the other snapshots are moved to error status for the volume set. You can
monitor the progress of your snapshots using CloudWatch Events. After the snapshot creation
process completes, CloudWatch generates an event that contains the status and all of the
relevant snapshot details for the affected instance.
Old console
AWS CLI
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
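For example, the create-snapshots command creates a multi-volume snapshot set from an instance; the instance ID below is a placeholder.

```shell
# Create crash-consistent snapshots of all volumes attached to an instance,
# excluding the root volume and copying tags from the source volumes
# (instance ID is a placeholder).
aws ec2 create-snapshots \
    --instance-specification InstanceId=i-1234567890abcdef0,ExcludeBootVolume=true \
    --description "Multi-volume backup" \
    --copy-tags-from-source volume
```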
If all of the snapshots complete successfully, a createSnapshots CloudWatch event with a result
of succeeded is sent to your AWS account. If any one snapshot for the multi-volume snapshot set
fails, all of the other snapshots display an error status and a createSnapshots CloudWatch event
with a result of failed is sent to your AWS account. For more information, see Create snapshots
(createSnapshots) (p. 1606).
If data was present on a volume when an earlier snapshot or series of snapshots was taken, and that data is later deleted from the volume, the data is still considered to be unique data of the earlier snapshots. Unique data is deleted from the sequence of snapshots only when all of the snapshots that reference it are deleted.
When you delete a snapshot, only the data that is referenced exclusively by that snapshot is removed.
Unique data is only deleted if all of the snapshots that reference it are deleted. Deleting previous
snapshots of a volume does not affect your ability to create volumes from later snapshots of that
volume.
Deleting a snapshot might not reduce your organization's data storage costs. Other snapshots might
reference that snapshot's data, and referenced data is always preserved. If you delete a snapshot
containing data being used by a later snapshot, costs associated with the referenced data are allocated
to the later snapshot. For more information about how snapshots store data, see How incremental
snapshots work (p. 1382) and the following example.
In the following diagram, Volume 1 is shown at three points in time. A snapshot has captured each of the
first two states, and in the third, a snapshot has been deleted.
• In State 1, the volume has 10 GiB of data. Because Snap A is the first snapshot taken of the volume,
the entire 10 GiB of data must be copied.
• In State 2, the volume still contains 10 GiB of data, but 4 GiB have changed. Snap B needs to copy and
store only the 4 GiB that changed after Snap A was taken. The other 6 GiB of unchanged data, which
are already copied and stored in Snap A, are referenced by Snap B rather than (again) copied. This is
indicated by the dashed arrow.
• In State 3, the volume has not changed since State 2, but Snap A has been deleted. The 6 GiB of data stored in Snap A that were referenced by Snap B have been moved to Snap B, as shown by the heavy arrow. As a result, you are still charged for storing 10 GiB of data: 6 GiB of unchanged data preserved from Snap A, and 4 GiB of changed data from Snap B.
Considerations
The following considerations apply to deleting snapshots:
• You can't delete a snapshot of the root device of an EBS volume used by a registered AMI. You must
first deregister the AMI before you can delete the snapshot. For more information, see Deregister your
Linux AMI (p. 185).
• You can't delete a snapshot that is managed by the AWS Backup service using Amazon EC2. Instead,
use AWS Backup to delete the corresponding recovery points in the backup vault.
• You can create, retain, and delete snapshots manually, or you can use Amazon Data Lifecycle
Manager to manage your snapshots for you. For more information, see Amazon Data Lifecycle
Manager (p. 1478).
• Although you can delete a snapshot that is still in progress, the snapshot must complete before the
deletion takes effect. This might take a long time. If you are also at your concurrent snapshot limit, and
you attempt to take an additional snapshot, you might get a ConcurrentSnapshotLimitExceeded
error. For more information, see Service Quotas for Amazon EBS in the Amazon Web Services
General Reference.
• If you delete a snapshot that matches a Recycle Bin retention rule for Amazon EBS snapshots, the
snapshot is retained in the Recycle Bin instead of being immediately deleted. For more information,
see Recycle Bin for Amazon EBS snapshots (p. 1460).
Delete a snapshot
To delete a snapshot, use one of the following methods.
New console
Old console
AWS CLI
You will not be prevented from deleting individual snapshots in the multi-volume snapshot set. If you
delete a snapshot while it is in the pending state, only that snapshot is deleted. The other snapshots
in the multi-volume snapshot set still complete successfully.
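For example, the delete-snapshot command deletes a single snapshot; the snapshot ID below is a placeholder.

```shell
# Delete a snapshot (hypothetical snapshot ID).
aws ec2 delete-snapshot --snapshot-id snap-1234567890abcdef0
```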
After you create a snapshot and it has finished copying to Amazon S3 (when its status is completed), you can copy it from one AWS Region to another, or within the same Region. Amazon S3
server-side encryption (256-bit AES) protects a snapshot's data in transit during a copy operation. The
snapshot copy receives an ID that is different from the ID of the original snapshot.
To copy multi-volume snapshots to another AWS Region, retrieve the snapshots using the tag you
applied to the multi-volume snapshot set when you created it. Then individually copy the snapshots to
another Region.
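As a sketch, you can retrieve the snapshot set by tag with describe-snapshots and then copy each snapshot individually; the tag key, tag value, and Regions below are placeholders.

```shell
# Find the snapshot IDs in a multi-volume snapshot set by tag, then copy
# each snapshot to another Region (tag key/value and Regions are placeholders).
for snap_id in $(aws ec2 describe-snapshots \
        --region us-west-2 \
        --filters "Name=tag:backup-set,Values=nightly" \
        --query "Snapshots[].SnapshotId" \
        --output text); do
    aws ec2 copy-snapshot \
        --region us-east-1 \
        --source-region us-west-2 \
        --source-snapshot-id "$snap_id"
done
```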
If you would like another account to be able to copy your snapshot, you must either modify the snapshot
permissions to allow access to that account or make the snapshot public so that all AWS accounts can
copy it. For more information, see Share an Amazon EBS snapshot (p. 1419).
For information about copying an Amazon RDS snapshot, see Copying a DB Snapshot in the Amazon RDS
User Guide.
Prerequisites
• You can copy any accessible snapshots that have a completed status, including shared snapshots and
snapshots that you have created.
• You can copy AWS Marketplace, VM Import/Export, and Storage Gateway snapshots, but you must
verify that the snapshot is supported in the destination Region.
Considerations
• Each account can have up to twenty concurrent snapshot copy requests to a single destination Region.
• User-defined tags are not copied from the source snapshot to the new snapshot. You can add user-
defined tags during or after the copy operation. For more information, see Tag your Amazon EC2
resources (p. 1666).
• Snapshots created by a snapshot copy operation have an arbitrary volume ID that should not be used
for any purpose.
• Resource-level permissions specified for the snapshot copy operation apply only to the new snapshot.
You cannot specify resource-level permissions for the source snapshot. For an example, see Example:
Copying snapshots (p. 1240).
Pricing
• For pricing information about copying snapshots across AWS Regions and accounts, see Amazon EBS
Pricing.
• Snapshot copy operations within a single account and Region do not copy any actual data and
therefore are cost-free as long as the encryption status of the snapshot copy does not change.
• If you copy a snapshot and encrypt it to a new KMS key, a complete (non-incremental) copy is created.
This results in additional storage costs.
• If you copy a snapshot to a new Region, a complete (non-incremental) copy is created. This results in
additional storage costs. Subsequent copies of the same snapshot are incremental.
If the most recent snapshot copy was deleted, the next copy is a full copy, not an incremental copy. If
a copy is still pending when you start another copy, the second copy starts only after the first copy
finishes.
We recommend that you tag your snapshots with the volume ID and creation time so that you can keep
track of the most recent snapshot copy of a volume in the destination Region or account.
To see whether your snapshot copies are incremental, check the copySnapshot (p. 1608) CloudWatch
event.
To copy an encrypted snapshot shared from another AWS account, you must have permissions to use
the snapshot and the customer master key (CMK) that was used to encrypt the snapshot. When using
an encrypted snapshot that was shared with you, we recommend that you re-encrypt the snapshot by
copying it using a KMS key that you own. This protects you if the original KMS key is compromised, or if
the owner revokes it, which could cause you to lose access to any encrypted volumes that you created
using the snapshot. For more information, see Share an Amazon EBS snapshot (p. 1419).
You apply encryption to EBS snapshot copies by setting the Encrypted parameter to true. (The
Encrypted parameter is optional if encryption by default (p. 1539) is enabled).
Optionally, you can use KmsKeyId to specify a custom key to use to encrypt the snapshot copy. (The
Encrypted parameter must also be set to true, even if encryption by default is enabled.) If KmsKeyId
is not specified, the key that is used for encryption depends on the encryption state of the source
snapshot and its ownership.
The following tables describe the encryption outcome for each possible combination of settings.
Topics
• Encryption outcomes: Copying snapshots that you own (p. 1394)
• Encryption outcomes: Copying snapshots that are shared with you (p. 1394)
[Table: Encryption outcomes when copying snapshots that you own. Columns: Encryption by default; Is Encrypted parameter set?; Source snapshot encryption status; Default (no KMS key specified); Custom (KMS key specified).]
** This is a customer managed key specified for the copy action. This customer managed key is used instead of the default customer managed key for the AWS account and Region.
[Table: Encryption outcomes when copying snapshots that are shared with you. Columns: Encryption by default; Is Encrypted parameter set?; Source snapshot encryption status; Default (no KMS key specified); Custom (KMS key specified).]
** This is a customer managed key specified for the copy action. This customer managed key is used instead of the default customer managed key for the AWS account and Region.
Copy a snapshot
To copy a snapshot, use one of the following methods.
New console
By default, the description includes information about the source snapshot so that you can
identify a copy from the original. You can change this description as needed.
5. For Destination Region, select the Region in which to create the snapshot copy.
6. Specify the encryption status for the snapshot copy.
If the source snapshot is unencrypted and your account is not enabled for encryption by
default, encryption is optional. To encrypt the snapshot copy, for Encryption, select Encrypt
this snapshot. Then, for KMS key, select the KMS key to use to encrypt the snapshot in the
destination Region.
7. Choose Copy snapshot.
Old console
• Destination region: Select the Region where you want to write the copy of the snapshot.
• Description: By default, the description includes information about the source snapshot so
that you can identify a copy from the original. You can change this description as necessary.
• Encryption: If the source snapshot is not encrypted, you can choose to encrypt the copy. If
you have enabled encryption by default (p. 1539), the Encryption option is set and cannot be
unset from the snapshot console. If the Encryption option is set, you can choose to encrypt the
copy with a customer managed key by selecting one in the field described below.
To view the progress of the copy process, switch to the destination Region, and then refresh the
Snapshots page. Copies in progress are listed at the top of the page.
AWS CLI
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
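The commands themselves are not shown above. A copy-snapshot invocation consistent with this section might look like the following sketch; the Region names, snapshot ID, and key alias are placeholders, and the destination Region is the Region in which the command runs:

```shell
# Copy a snapshot from us-east-1 into the current (destination) Region,
# encrypting the copy with a customer managed key (all IDs are placeholders)
aws ec2 copy-snapshot \
    --region us-west-2 \
    --source-region us-east-1 \
    --source-snapshot-id snap-1234567890abcdef0 \
    --description "Copy of snap-1234567890abcdef0" \
    --encrypted \
    --kms-key-id alias/my-copy-key
```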
If you attempt to copy an encrypted snapshot without having permissions to use the encryption key, the
operation fails silently. The error state is not displayed in the console until you refresh the page. You can
also check the state of the snapshot from the command line, as in the following example.
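A sketch of such a check; the snapshot ID is a placeholder, and the --query filter is an optional convenience for narrowing the output:

```shell
# Check the state and state message of a snapshot copy (placeholder ID)
aws ec2 describe-snapshots \
    --snapshot-ids snap-1234567890abcdef0 \
    --query "Snapshots[*].[SnapshotId,State,StateMessage]" \
    --output text
```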
If the copy failed because of insufficient key permissions, you see the following message:
"StateMessage": "Given key ID is not accessible".
When copying an encrypted snapshot, you must have DescribeKey permissions on the default CMK.
Explicitly denying these permissions results in copy failure. For information about managing CMKs,
see Controlling Access to Customer Master Keys.
By default, when you create a snapshot, it is stored in the Amazon EBS Snapshot Standard tier (standard
tier). Snapshots stored in the standard tier are incremental. This means that only the blocks on the
volume that have changed after your most recent snapshot are saved.
When you archive a snapshot, the incremental snapshot is converted to a full snapshot, and it is moved
from the standard tier to the Amazon EBS Snapshots Archive tier (archive tier). Full snapshots include all
of the blocks that were written to the volume at the time when the snapshot was created.
When you need to access an archived snapshot, you can restore it from the archive tier to the standard
tier, and then use it in the same way that you use any other snapshot in your account.
Amazon EBS Snapshots Archive offers up to 75 percent lower snapshot storage costs for snapshots that
you plan to store for 90 days or longer and that you rarely need to access.
Topics
• Considerations and limitations (p. 1397)
• Pricing and billing (p. 1398)
• Quotas (p. 1399)
• Guidelines and best practices for archiving snapshots (p. 1400)
• Work with snapshot archiving (p. 1408)
• Monitor snapshot archiving (p. 1414)
Considerations
• The minimum archive period is 90 days. If you delete or permanently restore an archived snapshot
before the minimum archive period of 90 days, you are billed for the remaining days in the archive tier,
rounded to the nearest hour. For more information, see Pricing and billing (p. 1398).
• It can take up to 72 hours to restore an archived snapshot from the archive tier to the standard tier,
depending on the size of the snapshot.
• Archived snapshots are always full snapshots. A full snapshot contains all the blocks written to
the volume at the time the snapshot was created. The full snapshot will likely be larger than the
incremental snapshot from which it was created. However, if you have only one incremental snapshot
of a volume on the standard tier, the full snapshot in the archive tier will be the same size as the
snapshot in the standard tier. This is because the first snapshot taken of a volume is always a full
snapshot.
• When a snapshot is archived, the data of the snapshot that is referenced by other snapshots in
the snapshot lineage is retained in the standard tier. Data and storage costs associated with the
referenced data that is retained in the standard tier are allocated to the next snapshot in the lineage.
This ensures that subsequent snapshots in the lineage are not affected by the archival.
• If you delete an archived snapshot that matches a Recycle Bin retention rule, the archived snapshot is
retained in the Recycle Bin for the retention period defined in the retention rule. To use the snapshot,
you must first recover it from the Recycle Bin and then restore it from the archive tier. For more
information, see Recycle Bin for Amazon EBS snapshots (p. 1460) and Pricing and billing (p. 1398).
Limitations
• You can archive snapshots that are in the completed state only.
• You can archive only snapshots that you own in your account. To archive a snapshot that is shared with
you, first copy the snapshot to your account and then archive the snapshot copy.
• You can’t archive a snapshot of the root device volume of a registered AMI.
• You can't archive snapshots that are associated with an Amazon EBS-backed AMI.
• You can't cancel the snapshot archive or snapshot restore process after it has been started.
• You can't share archived snapshots. If you archive a snapshot that you have shared with other
accounts, the accounts with which the snapshot is shared lose access after the snapshot is archived.
• You can't copy an archived snapshot. If you need to copy an archived snapshot, you must first restore
it.
• You can't enable fast snapshot restore for an archived snapshot. Fast snapshot restore is automatically
disabled when a snapshot is archived. If you need to use fast snapshot restore, you must manually
enable it after restoring the snapshot.
Snapshot restores are billed at a rate of $0.03 per GB of data restored. For example, if you restore a 100
GiB snapshot from the archive tier, you are billed one time for $3 (100 GiB * $0.03).
After the snapshot is restored to the standard tier, the snapshot is billed at the standard rate for
snapshots of $0.05 per GB-month.
The minimum archive period is 90 days. If you delete or permanently restore an archived snapshot
before the minimum archive period of 90 days, you are billed a pro-rated charge equal to the archive
tier storage charge for the remaining days, rounded to the nearest hour. For example, if you delete or
permanently restore an archived snapshot after 40 days, you are billed for the remaining 50 days of the
minimum archive period.
Note
Temporarily restoring an archived snapshot before the minimum archive period of 90 days does
not incur this charge.
Temporary restores
When you temporarily restore a snapshot, the snapshot is restored from the archive tier to the standard
tier, and a copy of the snapshot remains in the archive tier. You are billed for both the snapshot in the
standard tier and the snapshot copy in the archive tier for the duration of the temporary restore period.
When the temporarily restored snapshot is removed from the standard tier, you are no longer billed for
it, and you are billed for the snapshot in the archive tier only.
Permanent restores
When you permanently restore a snapshot, the snapshot is restored from the archive tier to the standard
tier, and the snapshot is deleted from the archive tier. You are billed for the snapshot in the standard tier
only.
Deleting snapshots
If you delete a snapshot while it is being archived, you are billed for the snapshot data that has already
been moved to the archive tier. This data is subject to the minimum archive period of 90 days and billed
accordingly upon deletion. For example, if you archive a 100 GiB snapshot, and you delete the snapshot
after only 40 GiB has been archived, you are billed $1.50 for the minimum archive period of 90 days for
the 40 GiB that has already been archived ($0.0125 per GB-month * 40 GB * (90 days * 24 hours) / (24
hours/day * 30-day month)).
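The arithmetic in that example can be checked directly; this one-liner just evaluates the formula from the text:

```shell
# $0.0125 per GB-month * 40 GB * 90 days (in hours) / hours in a 30-day month
awk 'BEGIN { printf "%.2f\n", 0.0125 * 40 * (90 * 24) / (24 * 30) }'
```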
If you delete a snapshot while it is being restored from the archive tier, you are billed for the snapshot
restore for the full size of the snapshot (snapshot size * $0.03). For example, if you restore a 100 GiB
snapshot from the archive tier, and you delete the snapshot at any point before the snapshot restore
completes, you are billed $3 (100 GiB snapshot size * $0.03).
Recycle Bin
Archived snapshots are billed at the rate for archived snapshots while they are in the Recycle Bin.
Archived snapshots that are in the Recycle Bin are subject to the minimum archive period of 90 days and
they are billed accordingly if they are deleted by Recycle Bin before the minimum archive period. In other
words, if a retention rule deletes an archived snapshot from the Recycle Bin before the minimum period
of 90 days, you are billed for the remaining days.
If you delete a snapshot that matches a retention rule while the snapshot is being archived, the archived
snapshot is retained in the Recycle Bin for the retention period defined in the retention rule. It is billed at
the rate for archived snapshots.
If you delete a snapshot that matches a retention rule while the snapshot is being restored, the restored
snapshot is retained in the Recycle Bin for the remainder of the retention period, and billed at the
standard snapshot rate. To use the restored snapshot, you must first recover it from the Recycle Bin.
For more information, see Recycle Bin for Amazon EBS snapshots (p. 1460).
Cost tracking
Archived snapshots appear in the AWS Cost and Usage Report with the same resource ID and Amazon
Resource Name (ARN). For more information, see the AWS Cost and Usage Report User Guide.
You can use the following usage types to identify the associated costs:
Quotas
This section describes the default quotas for archived and in-progress snapshots.
• Archived snapshots per volume: 25
• Concurrent in-progress snapshot archives per account: 5
• Concurrent in-progress snapshot restores per account: 5
If you need more than the default limits, complete the AWS Support Center Create case form to request
a limit increase.
Topics
• Archiving the only snapshot of a volume (p. 1400)
• Archiving incremental snapshots of a single volume (p. 1400)
• Archiving full snapshots for compliance reasons (p. 1401)
• Determining the reduction in standard tier storage costs (p. 1402)
Archiving the only snapshot of a volume
When you have only one snapshot of a volume, the snapshot is always the same size as the blocks
written to the volume at the time the snapshot was created. When you archive such a snapshot, the
snapshot in the standard tier is converted to an equivalent-sized full snapshot and it is moved from the
standard tier to the archive tier.
Archiving these snapshots can help you reduce storage costs. If you no longer need the source
volume, you can delete the volume for further storage cost savings.
Archiving incremental snapshots of a single volume
When you archive an incremental snapshot, the snapshot is converted to a full snapshot and it is moved
to the archive tier. For example, in the following image, if you archive Snap B, the snapshot is converted
to a full snapshot that is 10 GiB in size and moved to the archive tier. Similarly, if you archive Snap C, the
size of the full snapshot in the archive tier is 14 GiB.
If you are archiving snapshots to reduce your storage costs in the standard tier, you should not archive
the first snapshot in a set of incremental snapshots. These snapshots are referenced by subsequent
snapshots in the snapshot lineage. In most cases, archiving these snapshots will not reduce storage costs.
Note
You should not archive the last snapshot in a set of incremental snapshots. The last snapshot is
the most recent snapshot taken of a volume. You will need this snapshot in the standard tier if
you want to create volumes from it in the case of a volume corruption or loss.
If you archive a snapshot that contains data that is referenced by a later snapshot in the lineage, the data
storage and storage costs associated with the referenced data are allocated to the later snapshot in the
lineage. In this case, archiving the snapshot will not reduce data storage or storage costs. For example,
in the preceding image, if you archive Snap B, its 4 GiB of data is attributed to Snap C. In this case, your
overall storage costs will increase because you incur storage costs for the full version of Snap B in the
archive tier, and your storage costs for the standard tier remain unchanged.
If you archive Snap C, your standard tier storage will decrease by 4 GiB because the data is not
referenced by any other snapshots later in the lineage. And your archive tier storage will increase by 14
GiB because the snapshot is converted to a full snapshot.
Archiving full snapshots for compliance reasons
Incremental snapshots retain references to other snapshots in the snapshot lineage. Snapshots archived
with EBS Snapshots Archive are full snapshots, and they do not have any references to other snapshots
in the lineage. Additionally, you will likely need to retain these snapshots for compliance reasons for
several years. EBS Snapshots Archive makes it cost-effective to archive these full snapshots for
long-term retention.
Determining the reduction in standard tier storage costs
If you want to archive an incremental snapshot to reduce your storage costs, you should consider the
size of the full snapshot in the archive tier and the reduction in storage in the standard tier. This section
explains how to do this.
Important
The API responses are data accurate at the point-in-time when the APIs are called. API responses
can differ as the data associated with a snapshot changes as a result of changes in the snapshot
lineage.
To determine the reduction in storage and storage costs in the standard tier, use the following steps.
1. Check the size of the full snapshot. To determine the full size of the snapshot, use the list-snapshot-
blocks command. For --snapshot-id, specify the ID of the snapshot that you want to archive.
This returns information about all of the blocks in the specified snapshot. The BlockIndex of the
last block returned by the command indicates the number of blocks in the snapshot. The number of
blocks multiplied by 512 KiB, which is the snapshot block size, gives you a close approximation of
the size of the full snapshot in the archive tier (blocks * 512 KiB = full snapshot size).
For example, the following command lists the blocks for snapshot snap-01234567890abcdef.
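The invocation itself is missing from this copy; it would presumably take this form (the EBS direct APIs live under the aws ebs namespace):

```shell
# List all of the blocks in the snapshot to be archived
aws ebs list-snapshot-blocks \
    --snapshot-id snap-01234567890abcdef
```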
The following is the command output, with some blocks omitted. The following output indicates
that the snapshot includes about 16,383 blocks of data. This approximates to a full snapshot size of
about 8 GiB (16,383 * 512 KiB = 7.99 GiB).
{
"VolumeSize": 8,
"Blocks": [
{
"BlockToken": "ABgBAeShfa5RwG+RiWUg2pwmnCU/
YMnV7fGMxLbCWfEBEUmmuqac5RmoyVat",
"BlockIndex": 0
},
{
"BlockToken": "ABgBATdTONyThPUAbQhbUQXsn5TGoY/
J17GfE83j9WN7siupavOTw9E1KpFh",
"BlockIndex": 1
},
{
"BlockToken": "EBEUmmuqXsn5TGoY/QwmnCU/YMnV74eKE2TSsn5TGoY/
E83j9WQhbUQXsn5T",
"BlockIndex": 4
},
.....
{
"BlockToken": "yThPUAbQhb5V8xpwmnCU/
YMnV74eKE2TSFY1sKP/4r05y47WETdTONyThPUA",
"BlockIndex": 12890
},
{
"BlockToken":
"ABgBASHKD5V8xEbaRKdxdkZZS4eKE2TSFYlMG1sKP/4r05y47WEHqKaNPcLs",
"BlockIndex": 12906
},
{
"BlockToken": "ABgBARROGMUJo6P9X3CFHQGZNQ7av9B6vZtTTqV89QqC
+SkO0HWMlwkGXjnA",
"BlockIndex": 16383
}
],
"VolumeSize": 8,
"ExpiryTime": 1637677800.845,
"BlockSize": 524288
}
2. Find the source volume from which the snapshot that you want to archive was created. Use the
describe-snapshots command. For --snapshot-id, specify the ID of the snapshot that you want to
archive. The VolumeId response parameter indicates the ID of the source volume.
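The example command is not shown here; a likely form, given the description (note that describe-snapshots takes the plural --snapshot-ids parameter):

```shell
# Find the source volume of the snapshot that you want to archive
aws ec2 describe-snapshots \
    --snapshot-ids snap-09c9114207084f0d9
```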
The following is the command output, which indicates that snapshot snap-09c9114207084f0d9
was created from volume vol-0f3e2c292c52b85c3.
{
"Snapshots": [
{
"Description": "",
"Tags": [],
"Encrypted": false,
"VolumeId": "vol-0f3e2c292c52b85c3",
"State": "completed",
"VolumeSize": 8,
"StartTime": "2021-11-16T08:29:49.840Z",
"Progress": "100%",
"OwnerId": "123456789012",
"SnapshotId": "snap-09c9114207084f0d9"
}
]
}
3. Find all of the snapshots created from the source volume. Use the describe-snapshots command.
Specify the volume-id filter, and for the filter value, specify the volume ID from the previous step.
For example, the following command returns all snapshots created from volume
vol-0f3e2c292c52b85c3.
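That command would presumably be:

```shell
# List all snapshots created from the source volume
aws ec2 describe-snapshots \
    --filters Name=volume-id,Values=vol-0f3e2c292c52b85c3
```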
The following is the command output, which indicates that three snapshots were created from
volume vol-0f3e2c292c52b85c3.
{
"Snapshots": [
{
"Description": "",
"Tags": [],
"Encrypted": false,
"VolumeId": "vol-0f3e2c292c52b85c3",
"State": "completed",
"VolumeSize": 8,
"StartTime": "2021-11-14T08:57:39.300Z",
"Progress": "100%",
"OwnerId": "123456789012",
"SnapshotId": "snap-08ca60083f86816b0"
},
{
"Description": "",
"Tags": [],
"Encrypted": false,
"VolumeId": "vol-0f3e2c292c52b85c3",
"State": "completed",
"VolumeSize": 8,
"StartTime": "2021-11-15T08:29:49.840Z",
"Progress": "100%",
"OwnerId": "123456789012",
"SnapshotId": "snap-09c9114207084f0d9"
},
{
"Description": "01",
"Tags": [],
"Encrypted": false,
"VolumeId": "vol-0f3e2c292c52b85c3",
"State": "completed",
"VolumeSize": 8,
"StartTime": "2021-11-16T07:50:08.042Z",
"Progress": "100%",
"OwnerId": "123456789012",
"SnapshotId": "snap-024f49fe8dd853fa8"
}
]
}
4. Using the output from the previous command, sort the snapshots by their creation times, from
earliest to newest. The StartTime response parameter for each snapshot indicates its creation
time, in UTC time format.
For example, the snapshots returned in the previous step, arranged by creation time from earliest to
newest, are as follows:
1. snap-08ca60083f86816b0 (earliest – created before the snapshot that you want to archive)
2. snap-09c9114207084f0d9 (the snapshot to archive)
3. snap-024f49fe8dd853fa8 (newest – created after the snapshot that you want to archive)
5. Identify the snapshots that were created immediately before and after the snapshot that
you want to archive. In this case, you want to archive snapshot snap-09c9114207084f0d9,
which was the second incremental snapshot created in the set of three snapshots.
Snapshot snap-08ca60083f86816b0 was created immediately before, and snapshot
snap-024f49fe8dd853fa8 was created immediately after.
6. Find the unreferenced data in the snapshot that you want to archive. First, find the blocks that are
different between the snapshot that was created immediately before the snapshot that you want
to archive, and the snapshot that you want to archive. Use the list-changed-blocks command. For
--first-snapshot-id, specify the ID of the snapshot that was created immediately before the
snapshot that you want to archive. For --second-snapshot-id, specify the ID of the snapshot
that you want to archive.
For example, the following command shows the block indexes for the blocks that are different
between snapshot snap-08ca60083f86816b0 (the snapshot created before the snapshot you
want to archive), and snapshot snap-09c9114207084f0d9 (the snapshot you want to archive).
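The invocation is omitted from this copy; based on the description, it would presumably be:

```shell
# Blocks that differ between the prior snapshot and the snapshot to archive
aws ebs list-changed-blocks \
    --first-snapshot-id snap-08ca60083f86816b0 \
    --second-snapshot-id snap-09c9114207084f0d9
```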
The following shows the command output, with some blocks omitted.
{
"BlockSize": 524288,
"ChangedBlocks": [
{
"FirstBlockToken": "ABgBAX6y
+WH6Rm9y5zq1VyeTCmEzGmTT0jNZG1cDirFq1rOVeFbWXsH3W4z/",
"SecondBlockToken": "ABgBASyx0bHHBnTERu
+9USLxYK/81UT0dbHIUFqUjQUkwTwK5qkjP8NSGyNB",
"BlockIndex": 4
},
{
"FirstBlockToken": "ABgBAcfL
+EfmQmlNgstqrFnYgsAxR4SDSO4LkNLYOOChGBWcfJnpn90E9XX1",
"SecondBlockToken": "ABgBAdX0mtX6aBAt3EBy+8jFCESMpig7csKjbO2Ocd08m2iNJV2Ue
+cRwUqF",
"BlockIndex": 5
},
{
"FirstBlockToken": "ABgBAVBaFJmbP/eRHGh7vnJlAwyiyNUi3MKZmEMxs2wC3AmM/
fc6yCOAMb65",
"SecondBlockToken":
"ABgBAdewWkHKTcrhZmsfM7GbaHyXD1Ctcn2nppz4wYItZRmAo1M72fpXU0Yv",
"BlockIndex": 13
},
{
"FirstBlockToken": "ABgBAQGxwuf6z095L6DpRoVRVnOqPxmx9r7Wf6O+i
+ltZ0dwPpGN39ijztLn",
"SecondBlockToken": "ABgBAUdlitCVI7c6hGsT4ckkKCw6bMRclnV
+bKjViu/9UESTcW7CD9w4J2td",
"BlockIndex": 14
},
{
"FirstBlockToken":
"ABgBAZBfEv4EHS1aSXTXxSE3mBZG6CNeIkwxpljzmgSHICGlFmZCyJXzE4r3",
"SecondBlockToken":
"ABgBAVWR7QuQQB0AP2TtmNkgS4Aec5KAQVCldnpc91zBiNmSfW9ouIlbeXWy",
"BlockIndex": 15
},
.....
{
"SecondBlockToken": "ABgBAeHwXPL+z3DBLjDhwjdAM9+CPGV5VO5Q3rEEA
+ku50P498hjnTAgMhLG",
"BlockIndex": 13171
},
{
"SecondBlockToken":
"ABgBAbZcPiVtLx6U3Fb4lAjRdrkJMwW5M2tiCgIp6ZZpcZ8AwXxkjVUUHADq",
"BlockIndex": 13172
},
{
"SecondBlockToken": "ABgBAVmEd/pQ9VW9hWiOujOAKcauOnUFCO
+eZ5ASVdWLXWWC04ijfoDTpTVZ",
"BlockIndex": 13173
},
{
"SecondBlockToken": "ABgBAT/jeN7w
+8ALuNdaiwXmsSfM6tOvMoLBLJ14LKvavw4IiB1d0iykWe6b",
"BlockIndex": 13174
},
{
"SecondBlockToken": "ABgBAXtGvUhTjjUqkwKXfXzyR2GpQei/
+pJSG/19ESwvt7Hd8GHaUqVs6Zf3",
"BlockIndex": 13175
}
],
"ExpiryTime": 1637648751.813,
"VolumeSize": 8
}
Next, use the same command to find blocks that are different between the snapshot that you want
to archive and the snapshot that was created immediately after it. For --first-snapshot-id,
specify the ID of the snapshot that you want to archive. For --second-snapshot-id, specify the
ID of the snapshot that was created immediately after the snapshot that you want to archive.
For example, the following command shows the block indexes of the blocks that are different
between snapshot snap-09c9114207084f0d9 (the snapshot that you want to archive) and
snapshot snap-024f49fe8dd853fa8 (the snapshot created after the snapshot that you want to
archive).
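Again the invocation itself is not shown; a likely form:

```shell
# Blocks that differ between the snapshot to archive and the next snapshot
aws ebs list-changed-blocks \
    --first-snapshot-id snap-09c9114207084f0d9 \
    --second-snapshot-id snap-024f49fe8dd853fa8
```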
The following shows the command output, with some blocks omitted.
{
"BlockSize": 524288,
"ChangedBlocks": [
{
"FirstBlockToken": "ABgBAVax0bHHBnTERu
+9USLxYK/81UT0dbSnkDk0gqwRFSFGWA7HYbkkAy5Y",
"SecondBlockToken":
"ABgBASEvi9x8Om7Htp37cKG2NT9XUzEbLHpGcayelomSoHpGy8LGyvG0yYfK",
"BlockIndex": 4
},
{
"FirstBlockToken": "ABgBAeL0mtX6aBAt3EBy+8jFCESMpig7csfMrI4ufnQJT3XBm/
pwJZ1n2Uec",
"SecondBlockToken": "ABgBAXmUTg6rAI
+v0LvekshbxCVpJjWILvxgC0AG0GQBEUNRVHkNABBwXLkO",
"BlockIndex": 5
},
{
"FirstBlockToken":
"ABgBATKwWkHKTcrhZmsfM7GbaHyXD1CtcnjIZv9YzisYsQTMHfTfh4AhS0s2",
"SecondBlockToken": "ABgBAcmiPFovWgXQio
+VBrxOqGy4PKZ9SAAHaZ2HQBM9fQQU0+EXxQjVGv37",
"BlockIndex": 13
},
{
"FirstBlockToken":
"ABgBAbRlitCVI7c6hGsT4ckkKCw6bMRclnARrMt1hUbIhFnfz8kmUaZOP2ZE",
"SecondBlockToken": "ABgBAXe935n544+rxhJ0INB8q7pAeoPZkkD27vkspE/
qKyvOwpozYII6UNCT",
"BlockIndex": 14
},
{
"FirstBlockToken": "ABgBAd+yxCO26I
+1Nm2KmuKfrhjCkuaP6LXuol3opCNk6+XRGcct4suBHje1",
"SecondBlockToken": "ABgBAcPpnXz821NtTvWBPTz8uUFXnS8jXubvghEjZulIjHgc
+7saWys77shb",
"BlockIndex": 18
},
.....
{
"SecondBlockToken": "ABgBATni4sDE5rS8/a9pqV03lU/lKCW
+CTxFl3cQ5p2f2h1njpuUiGbqKGUa",
"BlockIndex": 13190
},
{
"SecondBlockToken": "ABgBARbXo7zFhu7IEQ/9VMYFCTCtCuQ+iSlWVpBIshmeyeS5FD/
M0i64U+a9",
"BlockIndex": 13191
},
{
"SecondBlockToken": "ABgBAZ8DhMk+rROXa4dZlNK45rMYnVIGGSyTeiMli/sp/
JXUVZKJ9sMKIsGF",
"BlockIndex": 13192
},
{
"SecondBlockToken":
"ABgBATh6MBVE904l6sqOC27s1nVntFUpDwiMcRWGyJHy8sIgGL5yuYXHAVty",
"BlockIndex": 13193
},
{
"SecondBlockToken":
"ABgBARuZykaFBWpCWrJPXaPCneQMbyVgnITJqj4c1kJWPIj5Gn61OQyy+giN",
"BlockIndex": 13194
}
],
"ExpiryTime": 1637692677.286,
"VolumeSize": 8
}
7. Compare the output returned by both commands in the previous step. If the same block index
appears in both command outputs, it indicates that the block contains unreferenced data.
For example, the command outputs in the previous step indicate that blocks 4, 5, 13, and 14 are
unique to snapshot snap-09c9114207084f0d9 and that they are not referenced by any other
snapshots in the snapshot lineage.
To determine the reduction in standard tier storage, multiply the number of blocks that appear in
both command outputs by 512 KiB, which is the snapshot block size.
For example, if 9,950 block indexes appear in both command outputs, it indicates that you will
decrease standard tier storage by around 4.85 GiB (9,950 blocks * 512 KiB = 4.85 GiB).
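The conversion can be sanity-checked with a one-liner; 9,950 * 512 KiB works out to roughly 4.86 GiB (the guide rounds down):

```shell
# blocks * 512 KiB, converted to GiB (1 GiB = 1,048,576 KiB)
awk 'BEGIN { printf "%.2f GiB\n", 9950 * 512 / 1048576 }'
```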
8. Determine the storage costs for storing the unreferenced blocks in the standard tier for 90 days.
Compare this value with the cost of storing the full snapshot, described in step 1, in the
archive tier. You can determine your cost savings by comparing the values, assuming that you do
not restore the full snapshot from the archive tier during the minimum 90-day period. For more
information, see Pricing and billing (p. 1398).
Archive a snapshot
You can archive any snapshot that is in the completed state and that you own in your account. You can't
archive snapshots that are in the pending or error states, or snapshots that are shared with you. For
more information, see Considerations and limitations (p. 1397).
Archived snapshots retain their snapshot ID, encryption status, AWS Identity and Access Management
(IAM) permissions, owner information, and resource tags. However, fast snapshot restore and snapshot
sharing are automatically disabled after the snapshot is archived.
You can continue to use the snapshot while the archive is in process. As soon as the snapshot tiering
status reaches the archival-complete state, you can no longer use the snapshot.
Console
To archive a snapshot
AWS CLI
To archive a snapshot
Use the modify-snapshot-tier AWS CLI command. For --snapshot-id, specify the ID of the
snapshot to archive. For --storage-tier, specify archive.
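Such a command would presumably look like:

```shell
# Move the snapshot from the standard tier to the archive tier
aws ec2 modify-snapshot-tier \
    --snapshot-id snap-01234567890abcedf \
    --storage-tier archive
```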
The following is the command output. The TieringStartTime response parameter indicates
the date and time at which the archive process was started, in UTC time format (YYYY-MM-
DDTHH:MM:SSZ).
{
"SnapshotId": "snap-01234567890abcedf",
"TieringStartTime": "2021-09-15T16:44:37.574Z"
}
When you restore a snapshot, you can choose to restore it permanently or temporarily.
If you restore a snapshot permanently, the snapshot is moved from the archive tier to the standard tier
permanently. The snapshot remains restored and ready for use until you manually re-archive it or you
manually delete it. When you permanently restore a snapshot, the snapshot is removed from the archive
tier.
If you restore a snapshot temporarily, the snapshot is copied from the archive tier to the standard tier
for a restore period that you specify. The snapshot remains restored and ready for use for the restore
period only. During the restore period, a copy of the snapshot remains in the archive tier. After the period
expires, the snapshot is automatically removed from the standard tier. You can increase or decrease the
restore period or change the restore type to permanent at any time during the restore period. For more
information, see Modify the restore period or restore type for a temporarily restored snapshot (p. 1410).
You can restore an archived snapshot using one of the following methods.
Console
AWS CLI
Use the restore-snapshot-tier AWS CLI command. For --snapshot-id, specify the ID of the
snapshot to restore, and include the --permanent-restore option.
aws ec2 restore-snapshot-tier \
    --snapshot-id snap-01234567890abcedf \
    --permanent-restore
{
"SnapshotId": "snap-01234567890abcedf",
"IsPermanentRestore": true
}
Use the restore-snapshot-tier AWS CLI command. Omit the --permanent-restore option. For --
snapshot-id, specify the ID of the snapshot to restore, and for --temporary-restore-days,
specify the number of days for which to restore the snapshot.
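A likely form of that command, matching the RestoreDuration of 5 in the output that follows:

```shell
# Temporarily restore the snapshot for 5 days
aws ec2 restore-snapshot-tier \
    --snapshot-id snap-01234567890abcedf \
    --temporary-restore-days 5
```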
{
"SnapshotId": "snap-01234567890abcedf",
"RestoreDuration": 5,
"IsPermanentRestore": false
}
Modify the restore period or restore type for a temporarily restored snapshot
When you restore a snapshot temporarily, you must specify the number of days for which the snapshot
is to remain restored in your account. After the restore period expires, the snapshot is automatically
removed from the standard tier.
You can change the restore period for a temporarily restored snapshot at any time.
You can choose to either increase or decrease the restore period, or you can change the restore type from
temporary to permanent.
If you change the restore period, the new restore period is effective from the current date. For example,
if you specify a new restore period of 5 days, the snapshot will remain restored for five days from the
current date.
Note
You can end a temporary restore early by setting the restore period to 1 day.
If you change the restore type from temporary to permanent, the snapshot copy is deleted from the
archive tier, and the snapshot remains available in your account until you manually re-archive it or delete
it.
You can modify the restore period for a snapshot using one of the following methods.
Console
AWS CLI
Use the restore-snapshot-tier AWS CLI command. For --snapshot-id, specify the ID of the
snapshot that you previously temporarily restored. To change the restore type from temporary to
permanent, specify --permanent-restore and omit --temporary-restore-days. To increase
or decrease the restore period, omit --permanent-restore and for --temporary-restore-
days, specify the new restore period in days.
The following command changes the restore period for snapshot snap-01234567890abcedf to 10
days.
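The command itself is omitted here; it would presumably be:

```shell
# Extend the temporary restore period to 10 days from the current date
aws ec2 restore-snapshot-tier \
    --snapshot-id snap-01234567890abcedf \
    --temporary-restore-days 10
```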
{
"SnapshotId": "snap-01234567890abcedf",
"RestoreDuration": 10,
"IsPermanentRestore": false
}
The following command changes the restore type for snapshot snap-01234567890abcedf from
temporary to permanent.
aws ec2 restore-snapshot-tier \
    --snapshot-id snap-01234567890abcedf \
    --permanent-restore
{
"SnapshotId": "snap-01234567890abcedf",
"IsPermanentRestore": true
}
Console
• Last tier change started on — The date and time when the last archive or restore was started.
• Tier change progress — The progress of the last archive or restore action, as a percentage.
• Storage tier — The storage tier for the snapshot. Always archive for archived snapshots,
and standard for snapshots stored on the standard tier, including temporarily restored
snapshots.
• Tiering status — The status of the last archive or restore action.
• Archive completed on — The date and time when the archive completed.
• Temporary restore expires on — The date and time when a temporarily restored snapshot is
set to expire.
AWS CLI
Use the describe-snapshot-tier-status AWS CLI command. Specify the snapshot-id filter, and for
the filter value, specify the snapshot ID. Alternatively, to view all archived snapshots, omit the filter.
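A sketch of that invocation:

```shell
# View archive status for a specific snapshot (omit --filters to list all)
aws ec2 describe-snapshot-tier-status \
    --filters "Name=snapshot-id,Values=snap-01234567890abcedf"
```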
• Status — The status of the snapshot. Always completed for archived snapshots. Only snapshots
that are in the completed state can be archived.
• LastTieringStartTime — The date and time that the archival process started, in UTC time
format (YYYY-MM-DDTHH:MM:SSZ).
• LastTieringOperationState — The current state of the archival process. Possible states
include: archival-in-progress | archival-completed | archival-failed | permanent-
restore-in-progress | permanent-restore-completed | permanent-restore-failed
| temporary-restore-in-progress | temporary-restore-completed | temporary-
restore-failed
Example
{
"SnapshotTierStatuses": [
{
"Status": "completed",
"ArchivalCompleteTime": "2021-09-15T17:33:16.147Z",
"LastTieringProgress": 100,
"Tags": [],
"VolumeId": "vol-01234567890abcedf",
"LastTieringOperationState": "archival-completed",
"StorageTier": "archive",
"OwnerId": "123456789012",
"SnapshotId": "snap-01234567890abcedf",
"LastTieringStartTime": "2021-09-15T16:44:37.574Z"
}
]
}
Use the describe-snapshots AWS CLI command. For --snapshot-ids, specify the ID of the snapshot
to view.
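For example, a sketch using the example snapshot IDs from the output shown later in this section:

```shell
# Describe several snapshots by ID to check their storage tier.
aws ec2 describe-snapshots \
    --snapshot-ids snap-01234567890aaaaaa snap-09876543210bbbbbb snap-054321543210cccccc
```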
The following is the command output. The StorageTier response parameter indicates whether the
snapshot is currently archived. archive indicates that the snapshot is currently archived and stored
in the archive tier, and standard indicates that the snapshot is currently not archived and that it is
stored in the standard tier.
In the following example output, only Snap A is archived. Snap B and Snap C are not archived.
Additionally, the RestoreExpiryTime response parameter is returned only for snapshots that are
temporarily restored from the archive. It indicates when temporarily restored snapshots are to be
automatically removed from the standard tier. It is not returned for snapshots that are permanently
restored.
In the following example output, Snap C is temporarily restored, and it will be automatically
removed from the standard tier at 2021-09-19T21:00:00.000Z (September 19, 2021 at 21:00 UTC).
{
"Snapshots": [
{
"Description": "Snap A",
"Encrypted": false,
"VolumeId": "vol-01234567890aaaaaa",
"State": "completed",
"VolumeSize": 8,
"StartTime": "2021-09-07T21:00:00.000Z",
"Progress": "100%",
"OwnerId": "123456789012",
"SnapshotId": "snap-01234567890aaaaaa",
"StorageTier": "archive",
"Tags": []
},
{
"Description": "Snap B",
"Encrypted": false,
"VolumeId": "vol-09876543210bbbbbb",
"State": "completed",
"VolumeSize": 10,
"StartTime": "2021-09-14T21:00:00.000Z",
"Progress": "100%",
"OwnerId": "123456789012",
"SnapshotId": "snap-09876543210bbbbbb",
"StorageTier": "standard",
"Tags": []
},
{
"Description": "Snap C",
"Encrypted": false,
"VolumeId": "vol-054321543210cccccc",
"State": "completed",
"VolumeSize": 12,
"StartTime": "2021-08-01T21:00:00.000Z",
"Progress": "100%",
"OwnerId": "123456789012",
"SnapshotId": "snap-054321543210cccccc",
"StorageTier": "standard",
"RestoreExpiryTime": "2021-09-19T21:00:00.000Z",
"Tags": []
}
]
}
To view only snapshots that are stored in the archive tier or the standard tier
Use the describe-snapshots AWS CLI command. Include the --filters option. For the filter name,
specify storage-tier, and for the filter value, specify either archive or standard.
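For example, the following sketch lists only archived snapshots; replace archive with standard to list standard-tier snapshots:

```shell
# List only snapshots stored in the archive tier.
aws ec2 describe-snapshots \
    --filters "Name=storage-tier,Values=archive"
```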
The following is an example of an event that is emitted when a snapshot archive action succeeds.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2021-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef"
],
"detail": {
"event": "archiveSnapshot",
"result": "succeeded",
"cause": "",
"request-id": "123456789",
"snapshot_id": "arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef",
"startTime": "2021-05-25T13:12:22Z",
"endTime": "2021-05-25T15:30:00Z",
"recycleBinExitTime": "2021-10-25T15:30:00Z"
}
}
The following is an example of an event that is emitted when a snapshot archive action fails.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2021-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef"
],
"detail": {
"event": "archiveSnapshot",
"result": "failed",
"cause": "Source snapshot ID is not valid",
"request-id": "1234567890",
"snapshot_id": "arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef",
"startTime": "2021-05-25T13:12:22Z",
"endTime": "2021-05-25T15:30:00Z",
"recycleBinExitTime": "2021-10-25T15:30:00Z"
}
}
The following is an example of an event that is emitted when a permanent restore action succeeds.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2021-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef"
],
"detail": {
"event": "restoreSnapshot",
"result": "succeeded",
"cause": "",
"request-id": "1234567890",
"snapshot_id": "arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef",
"startTime": "2021-05-25T13:12:22Z",
"endTime": "2021-10-25T15:30:00Z"
}
}
The following is an example of an event that is emitted when a permanent restore action fails.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2021-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef"
],
"detail": {
"event": "restoreSnapshot",
"result": "failed",
"cause": "Source snapshot ID is not valid",
"request-id": "1234567890",
"snapshot_id": "arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef",
"startTime": "2021-05-25T13:12:22Z",
"endTime": "2021-05-25T15:30:00Z",
"recycleBinExitTime": "2021-10-25T15:30:00Z"
}
}
The following is an example of an event that is emitted when a temporary restore action succeeds.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2021-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef"
],
"detail": {
"event": "restoreSnapshot",
"result": "succeeded",
"cause": "",
"request-id": "1234567890",
"snapshot_id": "arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef",
"startTime": "2021-05-25T13:12:22Z",
"endTime": "2021-05-25T15:30:00Z",
"restoreExpiryTime": "2021-06-25T15:30:00Z",
"recycleBinExitTime": "2021-10-25T15:30:00Z"
}
}
The following is an example of an event that is emitted when a temporary restore action fails.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2021-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef"
],
"detail": {
"event": "restoreSnapshot",
"result": "failed",
"cause": "Source snapshot ID is not valid",
"request-id": "1234567890",
"snapshot_id": "arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef",
"startTime": "2021-05-25T13:12:22Z",
"endTime": "2021-05-25T15:30:00Z",
"recycleBinExitTime": "2021-10-25T15:30:00Z"
}
}
• restoreExpiry — Emitted when the restore period for a temporarily restored snapshot expires.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2021-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef"
],
"detail": {
"event": "restoreExpiry",
"result": "succeeded",
"cause": "",
"request-id": "1234567890",
"snapshot_id": "arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef",
"startTime": "2021-05-25T13:12:22Z",
"endTime": "2021-05-25T15:30:00Z",
"recycleBinExitTime": "2021-10-25T15:30:00Z"
}
}
New console
Old console
AWS CLI
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
The following command describes the snapshots with the tag Stack=production.
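A sketch of that command:

```shell
# Describe snapshots that carry the tag Stack=production.
aws ec2 describe-snapshots \
    --filters "Name=tag:Stack,Values=production"
```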
The following command describes the snapshots created from the specified volume.
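A sketch of that command (the volume ID is a placeholder):

```shell
# Describe snapshots created from a specific volume.
aws ec2 describe-snapshots \
    --filters "Name=volume-id,Values=vol-1234567890abcdef0"
```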
With the AWS CLI, you can use JMESPath to filter results using expressions. For example, the
following command displays the IDs of all snapshots created by your AWS account (represented by
123456789012) before the specified date (represented by 2020-03-31). If you do not specify the
owner, the results include all public snapshots.
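A sketch of such a query, using the owner ID and date described above:

```shell
# List IDs of snapshots owned by account 123456789012 created before 2020-03-31.
aws ec2 describe-snapshots \
    --owner-ids 123456789012 \
    --query "Snapshots[?(StartTime<='2020-03-31')].[SnapshotId]" \
    --output text
```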
The following command displays the IDs of all snapshots created in the specified date range.
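A sketch of a date-range query (the range is a placeholder):

```shell
# List IDs of snapshots created during March 2020.
aws ec2 describe-snapshots \
    --owner-ids 123456789012 \
    --query "Snapshots[?(StartTime>='2020-03-01') && (StartTime<='2020-03-31')].[SnapshotId]" \
    --output text
```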
Topics
• Before you share a snapshot (p. 1419)
• Share a snapshot (p. 1419)
• Share a KMS key (p. 1421)
• View snapshots that are shared with you (p. 1422)
• Use snapshots that are shared with you (p. 1423)
• Determine the use of snapshots that you share (p. 1423)
• Snapshots are constrained to the Region in which they were created. To share a snapshot with another
Region, copy the snapshot to that Region and then share the copy. For more information, see Copy an
Amazon EBS snapshot (p. 1391).
• You can't share snapshots that are encrypted with the default AWS managed key. You can only share
snapshots that are encrypted with a customer managed key. For more information, see Creating Keys
in the AWS Key Management Service Developer Guide.
• You can share only unencrypted snapshots publicly.
• When you share an encrypted snapshot, you must also share the customer managed key used to
encrypt the snapshot. For more information, see Share a KMS key (p. 1421).
Share a snapshot
You can share a snapshot using one of the methods described in this section.
New console
To share a snapshot
Old console
To share a snapshot
AWS CLI
The permissions for a snapshot are specified using the createVolumePermission attribute of the
snapshot. To make a snapshot public, set the group to all. To share a snapshot with a specific AWS
account, set the user to the ID of the AWS account.
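As a sketch, both operations can be performed with the modify-snapshot-attribute command (the snapshot and account IDs are placeholders):

```shell
# Share the snapshot with a specific AWS account.
aws ec2 modify-snapshot-attribute \
    --snapshot-id snap-1234567890abcdef0 \
    --attribute createVolumePermission \
    --operation-type add \
    --user-ids 123456789012

# Make the snapshot public (unencrypted snapshots only).
aws ec2 modify-snapshot-attribute \
    --snapshot-id snap-1234567890abcdef0 \
    --attribute createVolumePermission \
    --operation-type add \
    --group-names all
```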
Users of your shared customer managed key who are accessing encrypted snapshots must be granted
permissions to perform the following actions on the key:
• kms:DescribeKey
• kms:CreateGrant
• kms:GenerateDataKey
• kms:ReEncrypt*
• kms:Decrypt
For more information about controlling access to a customer managed key, see Using key policies in
AWS KMS in the AWS Key Management Service Developer Guide.
Use either the policy view or the default view, depending on which view you can access, to add one
or more AWS account IDs to the policy, as follows:
• (Policy view) Choose Edit. Add one or more AWS account IDs to the following statements:
"Allow use of the key" and "Allow attachment of persistent resources".
Choose Save changes. In the following example, the AWS account ID 444455556666 is added
to the policy.
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {"AWS": [
"arn:aws:iam::111122223333:user/KeyUser",
"arn:aws:iam::444455556666:root"
]},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
},
{
"Sid": "Allow attachment of persistent resources",
"Effect": "Allow",
"Principal": {"AWS": [
"arn:aws:iam::111122223333:user/KeyUser",
"arn:aws:iam::444455556666:root"
]},
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": "*",
"Condition": {"Bool": {"kms:GrantIsForAWSResource": true}}
}
• (Default view) Scroll down to Other AWS accounts. Choose Add other AWS accounts and enter
the AWS account ID as prompted. To add another account, choose Add another AWS account
and enter the AWS account ID. When you have added all AWS accounts, choose Save changes.
• Private snapshots — To view only snapshots that are shared with you privately.
• Public snapshots — To view only snapshots that are shared with you publicly.
AWS CLI
Locate the shared snapshot by ID or description. For more information, see View snapshots that are
shared with you (p. 1422). You can use this snapshot as you would any other snapshot that you own in
your account. For example, you can create a volume from the snapshot or copy it to a different Region.
Locate the shared snapshot by ID or description. For more information, see View snapshots that are
shared with you (p. 1422). Create a copy of the shared snapshot in your account, and encrypt the copy
with a KMS key that you own. You can then use the copy to create volumes or you can copy it to different
Regions.
For more information about using CloudTrail, see Log Amazon EC2 and Amazon EBS API calls with AWS
CloudTrail (p. 1001).
You can restore a snapshot from the Recycle Bin at any time before its retention period expires. After
you restore a snapshot from the Recycle Bin, the snapshot is removed from the Recycle Bin and you can
use it in the same way you use any other snapshot in your account. If the retention period expires and
the snapshot is not restored, the snapshot is permanently deleted from the Recycle Bin and is no longer
available for recovery.
Snapshots in the Recycle Bin are billed at the same rate as regular snapshots in your account. There are
no additional charges for using Recycle Bin and retention rules. For more information, see Amazon EBS
pricing.
For more information, see Recycle Bin for Amazon EBS snapshots (p. 1460).
Topics
• View snapshots in the Recycle Bin (p. 1423)
• Restore snapshots from the Recycle Bin (p. 1424)
You can view the snapshots in the Recycle Bin using one of the following methods.
AWS CLI
Use the list-snapshots-in-recycle-bin AWS CLI command. Include the --snapshot-ids option to
view specific snapshots, or omit it to view all snapshots in the Recycle Bin.
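For example, a sketch that lists a single snapshot by ID:

```shell
# Check whether a deleted snapshot is still recoverable from the Recycle Bin.
aws ec2 list-snapshots-in-recycle-bin \
    --snapshot-ids snap-01234567890abcdef
```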
Example output:
{
"SnapshotRecycleBinInfo": [
{
"Description": "Monthly data backup snapshot",
"RecycleBinEnterTime": "2021-12-01T13:00:00.000Z",
"RecycleBinExitTime": "2021-12-15T13:00:00.000Z",
"VolumeId": "vol-abcdef09876543210",
"SnapshotId": "snap-01234567890abcdef"
}
]
}
You can restore a snapshot from the Recycle Bin using one of the following methods.
AWS CLI
To restore a deleted snapshot from the Recycle Bin using the AWS CLI
Use the restore-snapshot-from-recycle-bin AWS CLI command. For --snapshot-id, specify the ID
of the snapshot to restore.
For example, the following command restores snapshot snap-01234567890abcdef from the
Recycle Bin.
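A sketch of that command:

```shell
# Restore the snapshot so it can be used like any other snapshot in the account.
aws ec2 restore-snapshot-from-recycle-bin \
    --snapshot-id snap-01234567890abcdef
```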
By default, snapshots of EBS volumes on an Outpost are stored in Amazon S3 in the Region of the
Outpost. You can also use Amazon EBS local snapshots on Outposts to store snapshots of volumes on
an Outpost locally in Amazon S3 on the Outpost itself. This ensures that the snapshot data resides on
the Outpost, and on your premises. In addition, you can use AWS Identity and Access Management (IAM)
policies and permissions to set up data residency enforcement policies to ensure that snapshot data does
not leave the Outpost. This is especially useful if you reside in a country or region that is not yet served
by an AWS Region and that has data residency requirements.
This topic provides information about working with Amazon EBS local snapshots on Outposts. For more
information about Amazon EBS snapshots and about working with snapshots in an AWS Region, see
Amazon EBS snapshots (p. 1381).
For more information about AWS Outposts, see AWS Outposts Features and the AWS Outposts User
Guide. For pricing information, see AWS Outposts pricing.
Topics
• Frequently asked questions (p. 1426)
• Prerequisites (p. 1427)
• Considerations (p. 1427)
• Controlling access with IAM (p. 1428)
• Working with local snapshots (p. 1429)
1. What are local snapshots?
By default, Amazon EBS snapshots of volumes on an Outpost are stored in Amazon S3 in the Region
of the Outpost. If the Outpost is provisioned with Amazon S3 on Outposts, you can choose to store
the snapshots locally on the Outpost itself. Local snapshots are incremental, which means that only
the blocks of the volume that have changed after your most recent snapshot are saved. You can use
these snapshots to restore a volume on the same Outpost as the snapshot at any time. For more
information about Amazon EBS snapshots, see Amazon EBS snapshots (p. 1381).
2. Why should I use local snapshots?
Snapshots are a convenient way of backing up your data. With local snapshots, all of your snapshot
data is stored locally on the Outpost. This means that it does not leave your premises. This is
especially useful if you reside in a country or region that is not yet served by an AWS Region and that
has residency requirements.
Additionally, using local snapshots can help to reduce the bandwidth used for communication
between the Region and the Outpost in bandwidth constrained environments.
3. How do I enforce snapshot data residency on Outposts?
You can use AWS Identity and Access Management (IAM) policies to control the permissions that
principals (AWS accounts, IAM users, and IAM roles) have when working with local snapshots and
to enforce data residency. You can create a policy that prevents principals from creating snapshots
from Outpost volumes and instances and storing the snapshots in an AWS Region. Currently, copying
snapshots and images from an Outpost to a Region is not supported. For more information, see
Controlling access with IAM (p. 1428).
4. Are multi-volume, crash-consistent local snapshots supported?
Yes, you can create multi-volume, crash-consistent local snapshots from instances on an Outpost.
5. How do I create local snapshots?
You can create snapshots manually using the AWS Command Line Interface (AWS CLI) or the
Amazon EC2 console. For more information, see Working with local snapshots (p. 1429). You can
also automate the lifecycle of local snapshots using Amazon Data Lifecycle Manager. For more
information, see Automate snapshots on an Outpost (p. 1434).
6. Can I create, use, or delete local snapshots if my Outpost loses connectivity to its Region?
No. The Outpost must have connectivity with its Region as the Region provides the access,
authorization, logging, and monitoring services that are critical for your snapshots' health. If there
is no connectivity, you can't create new local snapshots, create volumes or launch instances from
existing local snapshots, or delete local snapshots.
7. How quickly is Amazon S3 storage capacity made available after deleting local snapshots?
Amazon S3 storage capacity becomes available within 72 hours after deleting local snapshots and
the volumes that reference them.
8. How can I ensure that I do not run out of Amazon S3 capacity on my Outpost?
We recommend that you use Amazon CloudWatch alarms to monitor your Amazon S3 storage
capacity, and delete snapshots and volumes that you no longer need to avoid running out of
storage capacity. If you are using Amazon Data Lifecycle Manager to automate the lifecycle of local
snapshots, ensure that your snapshot retention policies do not retain snapshots for longer than is
needed.
9. What happens if I run out of local Amazon S3 capacity on my Outposts?
If you run out of local Amazon S3 capacity on your Outposts, Amazon Data Lifecycle Manager
will not be able to successfully create local snapshots on the Outposts. Amazon Data Lifecycle
Manager will attempt to create the local snapshots on the Outposts, but the snapshots immediately
transition to the error state and they are eventually deleted by Amazon Data Lifecycle Manager.
We recommend that you use the SnapshotsCreateFailed Amazon CloudWatch metric to monitor
your snapshot lifecycle policies for snapshot creation failures. For more information, see Monitor
your policies using Amazon CloudWatch (p. 1516).
10. Can I use local snapshots and AMIs backed by local snapshots with Spot Instances and Spot Fleet?
No, you can't use local snapshots or AMIs backed by local snapshots to launch Spot Instances or a
Spot Fleet.
11. Can I use local snapshots and AMIs backed by local snapshots with Amazon EC2 Auto Scaling?
Yes, you can use local snapshots and AMIs backed by local snapshots to launch Auto Scaling groups
in a subnet that is on the same Outpost as the snapshots. The Amazon EC2 Auto Scaling group
service-linked role must have permission to use the KMS key used to encrypt the snapshots.
You can't use local snapshots or AMIs backed by local snapshots to launch Auto Scaling groups in an
AWS Region.
Prerequisites
To store snapshots on an Outpost, you must have an Outpost that is provisioned with Amazon S3 on
Outposts. For more information about Amazon S3 on Outposts, see Using Amazon S3 on Outposts in the
Amazon Simple Storage Service User Guide.
Considerations
Keep the following in mind when working with local snapshots.
• Outposts must have connectivity to their AWS Region to use local snapshots.
• Snapshot metadata is stored in the AWS Region associated with the Outpost. This does not include any
snapshot data.
• Snapshots stored on Outposts are encrypted by default. Unencrypted snapshots are not supported.
Snapshots that are created on an Outpost and snapshots that are copied to an Outpost are encrypted
using the default KMS key for the Region or a different KMS key that you specify at the time of the
request.
• When you create a volume on an Outpost from a local snapshot, you cannot re-encrypt the volume
using a different KMS key. Volumes created from local snapshots must be encrypted using the same
KMS key as the source snapshot.
• After you delete local snapshots from an Outpost, the Amazon S3 storage capacity used by the
deleted snapshots becomes available within 72 hours. For more information, see Delete local
snapshots (p. 1434).
• You can't export local snapshots from an Outpost.
• You can't enable fast snapshot restore for local snapshots.
• EBS direct APIs are not supported with local snapshots.
• You can't copy local snapshots or AMIs from an Outpost to an AWS Region, from one Outpost to
another, or within an Outpost. However, you can copy snapshots from an AWS Region to an Outpost.
For more information, see Copy snapshots from an AWS Region to an Outpost (p. 1432).
• When copying a snapshot from an AWS Region to an Outpost, the data is transferred over the service
link. Copying multiple snapshots simultaneously could impact other services running on the Outpost.
• You can't share local snapshots.
• You must use IAM policies to ensure that your data residency requirements are met. For more
information, see Controlling access with IAM (p. 1428).
• Local snapshots are incremental backups. Only the blocks in the volume that have changed after your
most recent snapshot are saved. Each local snapshot contains all of the information that is needed to
restore your data (from the moment when the snapshot was taken) to a new EBS volume. For more
information, see How incremental snapshots work (p. 1382).
• You can’t use IAM policies to enforce data residency for CopySnapshot and CopyImage actions.
Topics
• Enforce data residency for snapshots (p. 1428)
• Prevent principals from deleting local snapshots (p. 1429)
The following example policy prevents all principals from creating snapshots from volumes
and instances on Outpost arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0
and storing the snapshot data in an AWS Region. Principals can still create local
snapshots. This policy ensures that all snapshots remain on the Outpost.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": [
"ec2:CreateSnapshot",
"ec2:CreateSnapshots"
],
"Resource": "arn:aws:ec2:us-east-1::snapshot/*",
"Condition": {
"StringEquals": {
"ec2:SourceOutpostArn": "arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0"
},
"Null": {
"ec2:OutpostArn": "true"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot",
"ec2:CreateSnapshots"
],
"Resource": "*"
}
]
}
The following example policy prevents all principals from deleting local snapshots that are stored on
Outpost arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": [
"ec2:DeleteSnapshot"
],
"Resource": "arn:aws:ec2:us-east-1::snapshot/*",
"Condition": {
"StringEquals": {
"ec2:OutpostArn": "arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:DeleteSnapshot"
],
"Resource": "*"
}
]
}
Topics
• Rules for storing snapshots (p. 1429)
• Create local snapshots from volumes on an Outpost (p. 1430)
• Create multi-volume local snapshots from instances on an Outpost (p. 1431)
• Create AMIs from local snapshots (p. 1431)
• Copy snapshots from an AWS Region to an Outpost (p. 1432)
• Copy AMIs from an AWS Region to an Outpost (p. 1433)
• Create volumes from local snapshots (p. 1434)
• Launch instances from AMIs backed by local snapshots (p. 1434)
• Delete local snapshots (p. 1434)
• Automate snapshots on an Outpost (p. 1434)
• If the most recent snapshot of a volume is stored on an Outpost, then all successive snapshots must be
stored on the same Outpost.
• If the most recent snapshot of a volume is stored in an AWS Region, then all successive snapshots must
be stored in the same Region. To start creating local snapshots from that volume, do the following:
For the new volume on the Outpost, the next snapshot can be stored on the Outpost or in the AWS
Region. All successive snapshots must then be stored in that same location.
• Local snapshots, including snapshots created on an Outpost and snapshots copied to an Outpost from
an AWS Region, can be used only to create volumes on the same Outpost.
• If you create a volume on an Outpost from a snapshot in a Region, then all successive snapshots of
that new volume must be in the same Region.
• If you create a volume on an Outpost from a local snapshot, then all successive snapshots of that new
volume must be on the same Outpost.
You can create local snapshots from volumes on your Outpost. You can choose to store the snapshots on
the same Outpost as the source volume, or in the Region for the Outpost.
Local snapshots can be used to create volumes on the same Outpost only.
You can create local snapshots from volumes on an Outpost using one of the following methods.
Console
Command line
Use the create-snapshot command. Specify the ID of the volume from which to create the snapshot,
and the ARN of the destination Outpost on which to store the snapshot. If you omit the Outpost
ARN, the snapshot is stored in the AWS Region for the Outpost.
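For example, a sketch that stores the snapshot on the Outpost (the volume ID and Outpost ARN are placeholders):

```shell
# Create a local snapshot of a volume and keep it on the Outpost.
aws ec2 create-snapshot \
    --volume-id vol-1234567890abcdef0 \
    --outpost-arn arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0 \
    --description "Local snapshot"
```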
You can create crash-consistent multi-volume local snapshots from instances on your Outpost. You
can choose to store the snapshots on the same Outpost as the source instance, or in the Region for the
Outpost.
Multi-volume local snapshots can be used to create volumes on the same Outpost only.
You can create multi-volume local snapshots from instances on an Outpost using one of the following
methods.
Console
During snapshot creation, the snapshots are managed together. If one of the snapshots in the
volume set fails, the other snapshots in the volume set are moved to error status.
Command line
Use the create-snapshots command. Specify the ID of the instance from which to create the
snapshots, and the ARN of the destination Outpost on which to store the snapshots. If you omit the
Outpost ARN, the snapshots are stored in the AWS Region for the Outpost.
For example, the following command creates snapshots of the volumes attached to instance
i-1234567890abcdef0 and stores the snapshots on Outpost
arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0.
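A sketch of that command:

```shell
# Create crash-consistent snapshots of all volumes attached to the instance
# and store them locally on the Outpost.
aws ec2 create-snapshots \
    --instance-specification InstanceId=i-1234567890abcdef0 \
    --outpost-arn arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0
```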
You can create Amazon Machine Images (AMIs) using a combination of local snapshots and snapshots
that are stored in the Region of the Outpost. For example, if you have an Outpost in us-east-1, you can
create an AMI with data volumes that are backed by local snapshots on that Outpost, and a root volume
that is backed by a snapshot in the us-east-1 Region.
Note
• You can't create AMIs that include backing snapshots stored across multiple Outposts.
• You can't currently create AMIs directly from instances on an Outpost using the CreateImage API
or the Amazon EC2 console for Outposts that are enabled with Amazon S3 on Outposts.
• AMIs that are backed by local snapshots can be used to launch instances on the same Outpost
only.
To create an AMI on an Outpost from snapshots in a Region
1. Copy the snapshots from the Region to the Outpost. For more information, see Copy snapshots from
an AWS Region to an Outpost (p. 1432).
2. Use the Amazon EC2 console or the register-image command to create the AMI using the snapshot
copies on the Outpost. For more information, see Creating an AMI from a snapshot.
To create an AMI on an Outpost from an instance on the Outpost
1. Create snapshots from the instance on the Outpost and store the snapshots on the Outpost. For more
information, see Create multi-volume local snapshots from instances on an Outpost (p. 1431).
2. Use the Amazon EC2 console or the register-image command to create the AMI using the local
snapshots. For more information, see Creating an AMI from a snapshot.
To create an AMI in a Region from an instance on an Outpost
1. Create snapshots from the instance on the Outpost and store the snapshots in the Region. For more
information, see Create local snapshots from volumes on an Outpost (p. 1430) or Create multi-
volume local snapshots from instances on an Outpost (p. 1431).
2. Use the Amazon EC2 console or the register-image command to create the AMI using the snapshot
copies in the Region. For more information, see Creating an AMI from a snapshot.
You can copy snapshots from an AWS Region to an Outpost. You can do this only if the snapshots are in
the Region for the Outpost. If the snapshots are in a different Region, you must first copy the snapshot
to the Region for the Outpost, and then copy it from that Region to the Outpost.
Note
You can't copy local snapshots from an Outpost to a Region, from one Outpost to another, or
within the same Outpost.
You can copy snapshots from a Region to an Outpost using one of the following methods.
Console
The Snapshot Destination field only appears if you have Outposts in the selected destination
Region. If the field does not appear, you do not have any Outposts in the selected destination
Region.
5. For Destination Outpost ARN, enter the ARN of the Outpost to which to copy the snapshot.
6. (Optional) For Description, enter a brief description of the copied snapshot.
7. Encryption is enabled by default for the snapshot copy. Encryption cannot be disabled. For KMS
key, choose the KMS key to use.
8. Choose Copy.
Command line
Use the copy-snapshot command. Specify the ID of the snapshot to copy, the Region from which to
copy the snapshot, and the ARN of the destination Outpost.
For example, the following command copies snapshot snap-1234567890abcdef0 from the
us-east-1 Region to Outpost arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0.
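A sketch of that command:

```shell
# Copy a snapshot from the us-east-1 Region to an Outpost in that Region.
aws ec2 copy-snapshot \
    --source-snapshot-id snap-1234567890abcdef0 \
    --source-region us-east-1 \
    --destination-outpost-arn arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0
```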
You can copy AMIs from an AWS Region to an Outpost. When you copy an AMI from a Region to an
Outpost, all of the snapshots associated with the AMI are copied from the Region to the Outpost.
You can copy an AMI from a Region to an Outpost only if the snapshots associated with the AMI are in
the Region for the Outpost. If the snapshots are in a different Region, you must first copy the AMI to the
Region for the Outpost, and then copy it from that Region to the Outpost.
Note
You can't copy an AMI from an Outpost to a Region, from one Outpost to another, or within an
Outpost.
You can copy AMIs from a Region to an Outpost using the AWS CLI only.
Command line
Use the copy-image command. Specify the ID of the AMI to copy, the source Region, and the ARN of
the destination Outpost.
For example, the following command copies AMI ami-1234567890abcdef0 from the
us-east-1 Region to Outpost arn:aws:outposts:us-east-1:123456789012:outpost/
op-1234567890abcdef0.

aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-1234567890abcdef0 \
    --name "Copied AMI" \
    --destination-outpost-arn arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0
You can create volumes on Outposts from local snapshots. Volumes must be created on the same
Outpost as the source snapshots. You cannot use local snapshots to create volumes in the Region for the
Outpost.
When you create a volume from a local snapshot, you cannot re-encrypt the volume using a different KMS
key. Volumes created from local snapshots must be encrypted using the same KMS key as the source
snapshot.
For more information, see Create a volume from a snapshot (p. 1351).
You can launch instances from AMIs that are backed by local snapshots. You must launch instances on
the same Outpost as the source AMI. For more information, see Launch an instance on your Outpost in
the AWS Outposts User Guide.
You can delete local snapshots from an Outpost. After you delete a local snapshot from an Outpost, the
Amazon S3 storage capacity that it used becomes available within 72 hours after you delete the
snapshot and the volumes that reference it.
Because Amazon S3 storage capacity does not become available immediately, we recommend that you
use Amazon CloudWatch alarms to monitor your Amazon S3 storage capacity. Delete snapshots and
volumes that you no longer need to avoid running out of storage capacity.
For more information about deleting snapshots, see Delete a snapshot (p. 1391).
You can create Amazon Data Lifecycle Manager snapshot lifecycle policies that automatically create,
copy, retain, and delete snapshots of your volumes and instances on an Outpost. You can choose
whether to store the snapshots in a Region or whether to store them locally on an Outpost. Additionally,
you can automatically copy snapshots that are created and stored in an AWS Region to an Outpost.
The following table provides an overview of the supported features.

Resource location    Snapshot destination
Region               Region                  ✓ ✓ ✓ ✓
Outpost              Region                  ✓ ✓ ✓ ✓
Outpost              Outpost                 ✗ ✗ ✗ ✗
Considerations
• Only Amazon EBS snapshot lifecycle policies are currently supported. EBS-backed AMI policies and
Cross-account sharing event policies are not supported.
• If a policy manages snapshots for volumes or instances in a Region, then snapshots are created in the
same Region as the source resource.
• If a policy manages snapshots for volumes or instances on an Outpost, then snapshots can be created
on the source Outpost, or in the Region for that Outpost.
• A single policy can't manage both snapshots in a Region and snapshots on an Outpost. If you need to
automate snapshots in a Region and on an Outpost, you must create separate policies.
• Fast snapshot restore is not supported for snapshots created on an Outpost, or for snapshots copied to
an Outpost.
• Cross-account sharing is not supported for snapshots created on an Outpost.
For more information about creating a snapshot lifecycle that manages local snapshots, see Automating
snapshot lifecycles (p. 1484).
You can create incremental snapshots directly from data on-premises into EBS volumes and the cloud
to use for quick disaster recovery. With the ability to write and read snapshots, you can write your on-
premises data to an EBS snapshot during a disaster. Then after recovery, you can restore it back to AWS
or on-premises from the snapshot. You no longer need to build and maintain complex mechanisms to
copy data to and from Amazon EBS.
This user guide provides an overview of the elements that make up the EBS direct APIs, and examples of
how to use them effectively. For more information about the actions, data types, parameters, and errors
of the APIs, see the EBS direct APIs reference. For more information about the supported AWS Regions,
endpoints, and service quotas for the EBS direct APIs, see Amazon EBS Endpoints and Quotas in the AWS
General Reference.
Contents
• Understand the EBS direct APIs (p. 1435)
• IAM permissions for EBS direct APIs (p. 1437)
• Use EBS direct APIs (p. 1441)
• Pricing for EBS direct APIs (p. 1452)
• Using interface VPC endpoints with EBS direct APIs (p. 1453)
• Log API Calls for EBS direct APIs with AWS CloudTrail (p. 1454)
• Frequently asked questions (p. 1459)
Snapshots
Snapshots are the primary means to back up data from your EBS volumes. With the EBS direct APIs,
you can also back up data from your on-premises disks to snapshots. To save storage costs, successive
snapshots are incremental, containing only the volume data that changed since the previous snapshot.
For more information, see Amazon EBS snapshots (p. 1381).
Note
Public snapshots are not supported by the EBS direct APIs.
Blocks
A block is a fragment of data within a snapshot. Each snapshot can contain thousands of blocks. All
blocks in a snapshot are of a fixed size.
Block indexes
A block index is the offset position of a block within a snapshot, and it is used to identify the block.
Multiply the BlockIndex value by the BlockSize value (BlockIndex * BlockSize) to identify the logical
offset of the data in the logical volume.
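For example, given the 524288-byte (512 KiB) block size shown in the example responses later in this section, the offset can be computed as follows (a minimal Python sketch; the index value is illustrative):

```python
# Compute the logical volume offset of a snapshot block from its index.
# Assumes the 524288-byte (512 KiB) block size shown in the EBS direct
# APIs example responses in this guide; the index value is illustrative.

def block_offset(block_index, block_size=524288):
    """Return the byte offset of a block within the logical volume."""
    return block_index * block_size

print(block_offset(1001))  # 524812288
```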
Block tokens
A block token is the identifying hash of a block within a snapshot, and it is used to locate the block data.
Block tokens returned by EBS direct APIs are temporary. They change on the expiry timestamp specified
for them, or if you run another ListSnapshotBlocks or ListChangedBlocks request for the same snapshot.
Checksum
A checksum is a small-sized datum derived from a block of data for the purpose of detecting errors that
were introduced during its transmission or storage. The EBS direct APIs use checksums to validate data
integrity. When you read data from an EBS snapshot, the service provides Base64-encoded SHA256
checksums for each block of data transmitted, which you can use for validation. When you write data
to an EBS snapshot, you must provide a Base64 encoded SHA256 checksum for each block of data
transmitted. The service validates the data received using the checksum provided. For more information,
see Use checksums (p. 1451) later in this guide.
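As an illustration, the following Python sketch produces a checksum in this form using only the standard library (the block contents are illustrative):

```python
import base64
import hashlib

def block_checksum(block_data):
    """Return the Base64-encoded SHA256 checksum of a block of data, in
    the form the EBS direct APIs expect in the x-amz-Checksum header."""
    digest = hashlib.sha256(block_data).digest()
    return base64.b64encode(digest).decode("ascii")

# Illustrative example: a 512 KiB block of zeroes.
block = bytes(524288)
print(block_checksum(block))
```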
Encryption
Encryption protects your data by converting it into unreadable code that can be deciphered only
by people who have access to the KMS key used to encrypt it. You can use the EBS direct APIs to
read and write encrypted snapshots, but there are some limitations. For more information, see Use
encryption (p. 1450) later in this guide.
API actions
The EBS direct APIs consist of six actions: three read actions and three write actions. The read
actions are:
• ListSnapshotBlocks — returns the block indexes and block tokens of blocks in the specified snapshot.
• ListChangedBlocks — returns the block indexes and block tokens of blocks that are different between
two specified snapshots of the same volume and snapshot lineage.
• GetSnapshotBlock — returns the data in a block for the specified snapshot ID, block index, and block
token.
The write actions are:
• StartSnapshot — starts a snapshot, either as an incremental snapshot of an existing snapshot or as a
new snapshot.
• PutSnapshotBlock — adds data to a started snapshot in the form of individual blocks.
• CompleteSnapshot — completes a started snapshot after all data has been added to it.
For more information about the EBS direct APIs resources, actions, and condition context keys for use in
IAM permission policies, see Actions, resources, and condition keys for Amazon Elastic Block Store in the
Service Authorization Reference.
Important
Be cautious when assigning the following policies to IAM users. By assigning these policies, you
might give access to a user who is denied access to the same resource through the Amazon EC2
APIs, such as the CopySnapshot or CreateVolume actions.
The following policy allows the read EBS direct APIs to be used on all snapshots in a specific AWS Region.
In the policy, replace <Region> with the Region of the snapshot.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ebs:ListSnapshotBlocks",
"ebs:ListChangedBlocks",
"ebs:GetSnapshotBlock"
],
"Resource": "arn:aws:ec2:<Region>::snapshot/*"
}
]
}
The following policy allows the read EBS direct APIs to be used on snapshots with a specific key-value
tag. In the policy, replace <Key> with the key value of the tag, and <Value> with the value of the tag.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ebs:ListSnapshotBlocks",
"ebs:ListChangedBlocks",
"ebs:GetSnapshotBlock"
],
"Resource": "arn:aws:ec2:*::snapshot/*",
"Condition": {
"StringEqualsIgnoreCase": {
"aws:ResourceTag/<Key>": "<Value>"
}
}
}
]
}
The following policy allows all of the read EBS direct APIs to be used on all snapshots in the account
only within a specific time range. This policy authorizes use of the EBS direct APIs based on the
aws:CurrentTime global condition key. In the policy, be sure to replace the date and time range shown
with the date and time range for your policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ebs:ListSnapshotBlocks",
"ebs:ListChangedBlocks",
"ebs:GetSnapshotBlock"
],
"Resource": "arn:aws:ec2:*::snapshot/*",
"Condition": {
"DateGreaterThan": {
"aws:CurrentTime": "2018-05-29T00:00:00Z"
},
"DateLessThan": {
"aws:CurrentTime": "2020-05-29T23:59:59Z"
}
}
}
]
}
The following policy grants access to decrypt an encrypted snapshot using a specific KMS key. It grants
access to encrypt new snapshots using the default KMS key ID for EBS snapshots. It also provides the
ability to determine if encrypt by default is enabled on the account. In the policy, replace <Region> with
the Region of the KMS key, <AccountId> with the ID of the AWS account of the KMS key, and <KeyId>
with the ID of the KMS key used to encrypt the snapshot that you want to read with the EBS direct APIs.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:GenerateDataKey",
"kms:GenerateDataKeyWithoutPlaintext",
"kms:ReEncrypt*",
"kms:CreateGrant",
"ec2:CreateTags",
"kms:DescribeKey",
"ec2:GetEbsDefaultKmsKeyId",
"ec2:GetEbsEncryptionByDefault"
],
"Resource": "arn:aws:kms:<Region>:<AccountId>:key/<KeyId>"
}
]
}
For more information, see Changing Permissions for an IAM User in the IAM User Guide.
The following policy allows the write EBS direct APIs to be used on all snapshots in a specific AWS
Region. In the policy, replace <Region> with the Region of the snapshot.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ebs:StartSnapshot",
"ebs:PutSnapshotBlock",
"ebs:CompleteSnapshot"
],
"Resource": "arn:aws:ec2:<Region>::snapshot/*"
}
]
}
The following policy allows the write EBS direct APIs to be used on snapshots with a specific key-value
tag. In the policy, replace <Key> with the key value of the tag, and <Value> with the value of the tag.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ebs:StartSnapshot",
"ebs:PutSnapshotBlock",
"ebs:CompleteSnapshot"
],
"Resource": "arn:aws:ec2:*::snapshot/*",
"Condition": {
"StringEqualsIgnoreCase": {
"aws:ResourceTag/<Key>": "<Value>"
}
}
}
]
}
The following policy allows all of the EBS direct APIs to be used. It also allows the StartSnapshot
action only if a parent snapshot ID is specified. Therefore, this policy blocks the ability to start new
snapshots without using a parent snapshot.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ebs:*",
"Resource": "*",
"Condition": {
"StringEquals": {
"ebs:ParentSnapshot": "arn:aws:ec2:*::snapshot/*"
}
}
}
]
}
The following policy allows all of the EBS direct APIs to be used. It also allows only the user tag key
to be created for a new snapshot. This policy also ensures that the user has access to create tags. The
StartSnapshot action is the only action that can specify tags.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ebs:*",
"Resource": "*",
"Condition": {
"ForAllValues:StringEquals": {
"aws:TagKeys": "user"
}
}
},
{
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": "*"
}
]
}
The following policy allows all of the write EBS direct APIs to be used on all snapshots in the account
only within a specific time range. This policy authorizes use of the EBS direct APIs based on the
aws:CurrentTime global condition key. In the policy, be sure to replace the date and time range shown
with the date and time range for your policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ebs:StartSnapshot",
"ebs:PutSnapshotBlock",
"ebs:CompleteSnapshot"
],
"Resource": "arn:aws:ec2:*::snapshot/*",
"Condition": {
"DateGreaterThan": {
"aws:CurrentTime": "2018-05-29T00:00:00Z"
},
"DateLessThan": {
"aws:CurrentTime": "2020-05-29T23:59:59Z"
}
}
}
]
}
The following policy grants access to decrypt an encrypted snapshot using a specific KMS key. It grants
access to encrypt new snapshots using the default KMS key ID for EBS snapshots. It also provides the
ability to determine if encrypt by default is enabled on the account. In the policy, replace <Region> with
the Region of the KMS key, <AccountId> with the ID of the AWS account of the KMS key, and <KeyId>
with the ID of the KMS key used to encrypt the snapshot that you want to read with the EBS direct APIs.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:GenerateDataKey",
"kms:GenerateDataKeyWithoutPlaintext",
"kms:ReEncrypt*",
"kms:CreateGrant",
"ec2:CreateTags",
"kms:DescribeKey",
"ec2:GetEbsDefaultKmsKeyId",
"ec2:GetEbsEncryptionByDefault"
],
"Resource": "arn:aws:kms:<Region>:<AccountId>:key/<KeyId>"
}
]
}
For more information, see Changing Permissions for an IAM User in the IAM User Guide.
Important
The EBS direct APIs require an AWS Signature Version 4 signature. For more information, see
Use Signature Version 4 signing (p. 1451).
Topics
• Read snapshots with EBS direct APIs (p. 1441)
• Write snapshots with EBS direct APIs (p. 1446)
• Use encryption (p. 1450)
• Use Signature Version 4 signing (p. 1451)
• Use checksums (p. 1451)
• Idempotency for StartSnapshot API (p. 1451)
• Optimize performance (p. 1452)
Read snapshots with EBS direct APIs
The following steps describe how to use the EBS direct APIs to read snapshots:
1. Use the ListSnapshotBlocks action to view all block indexes and block tokens of blocks in a snapshot.
Or use the ListChangedBlocks action to view only the block indexes and block tokens of blocks that
are different between two snapshots of the same volume and snapshot lineage. These actions help
you identify the block tokens and block indexes of blocks for which you might want to get data.
2. Use the GetSnapshotBlock action, and specify the block index and block token of the block for which
you want to get data.
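As an illustration, the following Python sketch wires these two steps together with checksum validation. It assumes an `ebs` client object whose `list_snapshot_blocks` and `get_snapshot_block` methods mirror the EBS direct APIs (for example, a boto3 EBS client); pagination is handled through NextToken, and error handling and retries are omitted.

```python
import base64
import hashlib

def read_snapshot_blocks(ebs, snapshot_id):
    """Read every block of a snapshot and validate each block's checksum.

    `ebs` is assumed to be a client whose list_snapshot_blocks and
    get_snapshot_block methods mirror the EBS direct APIs (for example,
    a boto3 EBS client). Returns a dict mapping block index to block data.
    """
    blocks = {}
    kwargs = {"SnapshotId": snapshot_id}
    while True:
        # Step 1: list the block indexes and block tokens in the snapshot.
        page = ebs.list_snapshot_blocks(**kwargs)
        for block in page["Blocks"]:
            # Step 2: get the data for each block by index and token.
            resp = ebs.get_snapshot_block(
                SnapshotId=snapshot_id,
                BlockIndex=block["BlockIndex"],
                BlockToken=block["BlockToken"],
            )
            data = resp["BlockData"].read()
            # Validate the Base64-encoded SHA256 checksum before using the data.
            digest = base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")
            if digest != resp["Checksum"]:
                raise ValueError("checksum mismatch for block %d" % block["BlockIndex"])
            blocks[block["BlockIndex"]] = data
        if not page.get("NextToken"):
            return blocks
        kwargs["NextToken"] = page["NextToken"]
```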
The following examples show how to read snapshots using the EBS direct APIs.
Topics
• List blocks in a snapshot (p. 1442)
• List blocks that are different between two snapshots (p. 1443)
• Get block data from a snapshot (p. 1445)
AWS CLI
The following list-snapshot-blocks example command returns the block indexes and block tokens
of blocks that are in snapshot snap-0987654321. The --starting-block-index parameter
limits the results to block indexes greater than 1000, and the --max-results parameter limits the
results to the first 100 blocks.
aws ebs list-snapshot-blocks \
    --snapshot-id snap-0987654321 \
    --starting-block-index 1000 \
    --max-results 100
The following example response for the previous command lists the block indexes and block tokens
in the snapshot. Use the get-snapshot-block command and specify the block index and block
token of the block for which you want to get data. The block tokens are valid until the expiry time
listed.
{
"Blocks": [
{
"BlockIndex": 1001,
"BlockToken": "AAABAV3/
PNhXOynVdMYHUpPsetaSvjLB1dtIGfbJv5OJ0sX855EzGTWos4a4"
},
{
"BlockIndex": 1002,
"BlockToken": "AAABATGQIgwr0WwIuqIMjCA/Sy7e/
YoQFZsHejzGNvjKauzNgzeI13YHBfQB"
},
{
"BlockIndex": 1007,
"BlockToken": "AAABAZ9CTuQtUvp/
dXqRWw4d07eOgTZ3jvn6hiW30W9duM8MiMw6yQayzF2c"
},
{
"BlockIndex": 1012,
"BlockToken": "AAABAQdzxhw0rVV6PNmsfo/
YRIxo9JPR85XxPf1BLjg0Hec6pygYr6laE1p0"
},
{
"BlockIndex": 1030,
"BlockToken": "AAABAaYvPax6mv+iGWLdTUjQtFWouQ7Dqz6nSD9L
+CbXnvpkswA6iDID523d"
},
{
"BlockIndex": 1031,
"BlockToken": "AAABATgWZC0XcFwUKvTJbUXMiSPg59KVxJGL
+BWBClkw6spzCxJVqDVaTskJ"
},
...
],
"ExpiryTime": 1576287332.806,
"VolumeSize": 32212254720,
"BlockSize": 524288
}
AWS API
The following ListSnapshotBlocks example request returns the block indexes and block tokens of
blocks that are in snapshot snap-0acEXAMPLEcf41648. The startingBlockIndex parameter
limits the results to block indexes greater than 1000, and the maxResults parameter limits the
results to the first 100 blocks.
GET /snapshots/snap-0acEXAMPLEcf41648/blocks?maxResults=100&startingBlockIndex=1000
HTTP/1.1
Host: ebs.us-east-2.amazonaws.com
Accept-Encoding: identity
User-Agent: <User agent parameter>
X-Amz-Date: 20200617T231953Z
Authorization: <Authentication parameter>
The following example response for the previous request lists the block indexes and block tokens in
the snapshot. Use the GetSnapshotBlock action and specify the block index and block token of the
block for which you want to get data. The block tokens are valid until the expiry time listed.
HTTP/1.1 200 OK
x-amzn-RequestId: d6e5017c-70a8-4539-8830-57f5557f3f27
Content-Type: application/json
Content-Length: 2472
Date: Wed, 17 Jun 2020 23:19:56 GMT
Connection: keep-alive
{
"BlockSize": 524288,
"Blocks": [
{
"BlockIndex": 0,
"BlockToken": "AAUBAcuWqOCnDNuKle11s7IIX6jp6FYcC/q8oT93913HhvLvA
+3JRrSybp/0"
},
{
"BlockIndex": 1536,
"BlockToken":
"AAUBAWudwfmofcrQhGVlLwuRKm2b8ZXPiyrgoykTRC6IU1NbxKWDY1pPjvnV"
},
{
"BlockIndex": 3072,
"BlockToken":
"AAUBAV7p6pC5fKAC7TokoNCtAnZhqq27u6YEXZ3MwRevBkDjmMx6iuA6tsBt"
},
{
"BlockIndex": 3073,
"BlockToken":
"AAUBAbqt9zpqBUEvtO2HINAfFaWToOwlPjbIsQOlx6JUN/0+iMQl0NtNbnX4"
},
...
],
"ExpiryTime": 1.59298379649E9,
"VolumeSize": 3
}
AWS CLI
The following list-changed-blocks example command returns the block indexes and block tokens of
blocks that are different between snapshots snap-1234567890 and snap-0987654321. The --
starting-block-index parameter limits the results to block indexes greater than 0, and the --
max-results parameter limits the results to the first 500 blocks.

aws ebs list-changed-blocks \
    --first-snapshot-id snap-1234567890 \
    --second-snapshot-id snap-0987654321 \
    --starting-block-index 0 \
    --max-results 500
The following example response for the previous command shows that block indexes 0, 6000, 6001,
6002, and 6003 are different between the two snapshots. Additionally, block indexes 6001, 6002,
and 6003 exist only in the first snapshot ID specified, and not in the second snapshot ID because
there is no second block token listed in the response.
Use the get-snapshot-block command and specify the block index and block token of the block
for which you want to get data. The block tokens are valid until the expiry time listed.
{
"ChangedBlocks": [
{
"BlockIndex": 0,
"FirstBlockToken": "AAABAVahm9SO60Dyi0ORySzn2ZjGjW/
KN3uygGlS0QOYWesbzBbDnX2dGpmC",
"SecondBlockToken":
"AAABAf8o0o6UFi1rDbSZGIRaCEdDyBu9TlvtCQxxoKV8qrUPQP7vcM6iWGSr"
},
{
"BlockIndex": 6000,
"FirstBlockToken": "AAABAbYSiZvJ0/
R9tz8suI8dSzecLjN4kkazK8inFXVintPkdaVFLfCMQsKe",
"SecondBlockToken":
"AAABAZnqTdzFmKRpsaMAsDxviVqEI/3jJzI2crq2eFDCgHmyNf777elD9oVR"
},
{
"BlockIndex": 6001,
"FirstBlockToken": "AAABASBpSJ2UAD3PLxJnCt6zun4/
T4sU25Bnb8jB5Q6FRXHFqAIAqE04hJoR"
},
{
"BlockIndex": 6002,
"FirstBlockToken": "AAABASqX4/
NWjvNceoyMUljcRd0DnwbSwNnes1UkoP62CrQXvn47BY5435aw"
},
{
"BlockIndex": 6003,
"FirstBlockToken":
"AAABASmJ0O5JxAOce25rF4P1sdRtyIDsX12tFEDunnePYUKOf4PBROuICb2A"
},
...
],
"ExpiryTime": 1576308931.973,
"VolumeSize": 32212254720,
"BlockSize": 524288,
"NextToken": "AAADARqElNng/sV98CYk/bJDCXeLJmLJHnNSkHvLzVaO0zsPH/QM3Bi3zF//O6Mdi/
BbJarBnp8h"
}
AWS API
The following ListChangedBlocks example request returns the block indexes and block
tokens of blocks that are different between snapshots snap-0acEXAMPLEcf41648 and
snap-0c9EXAMPLE1b30e2f. The startingBlockIndex parameter limits the results to block
indexes greater than 0, and the maxResults parameter limits the results to the first 500 blocks.
GET /snapshots/snap-0c9EXAMPLE1b30e2f/changedblocks?
firstSnapshotId=snap-0acEXAMPLEcf41648&maxResults=500&startingBlockIndex=0 HTTP/1.1
Host: ebs.us-east-2.amazonaws.com
Accept-Encoding: identity
User-Agent: <User agent parameter>
X-Amz-Date: 20200617T232546Z
Authorization: <Authentication parameter>
The following example response for the previous request shows that block indexes 0, 3072, 6002,
and 6003 are different between the two snapshots. Additionally, block indexes 6002, and 6003 exist
only in the first snapshot ID specified, and not in the second snapshot ID because there is no second
block token listed in the response.
Use the GetSnapshotBlock action and specify the block index and block token of the block for
which you want to get data. The block tokens are valid until the expiry time listed.
HTTP/1.1 200 OK
x-amzn-RequestId: fb0f6743-6d81-4be8-afbe-db11a5bb8a1f
Content-Type: application/json
Content-Length: 1456
Date: Wed, 17 Jun 2020 23:25:47 GMT
Connection: keep-alive
{
"BlockSize": 524288,
"ChangedBlocks": [
{
"BlockIndex": 0,
"FirstBlockToken": "AAUBAVaWqOCnDNuKle11s7IIX6jp6FYcC/
tJuVT1GgP23AuLntwiMdJ+OJkL",
"SecondBlockToken": "AAUBASxzy0Y0b33JVRLoYm3NOresCxn5RO+HVFzXW3Y/
RwfFaPX2Edx8QHCh"
},
{
"BlockIndex": 3072,
"FirstBlockToken": "AAUBAcHp6pC5fKAC7TokoNCtAnZhqq27u6fxRfZOLEmeXLmHBf2R/
Yb24MaS",
"SecondBlockToken":
"AAUBARGCaufCqBRZC8tEkPYGGkSv3vqvOjJ2xKDi3ljDFiytUxBLXYgTmkid"
},
{
"BlockIndex": 6002,
"FirstBlockToken": "AAABASqX4/
NWjvNceoyMUljcRd0DnwbSwNnes1UkoP62CrQXvn47BY5435aw"
},
{
"BlockIndex": 6003,
"FirstBlockToken":
"AAABASmJ0O5JxAOce25rF4P1sdRtyIDsX12tFEDunnePYUKOf4PBROuICb2A"
},
...
],
"ExpiryTime": 1.592976647009E9,
"VolumeSize": 3
}
AWS CLI
The following get-snapshot-block example command returns the data in the block index 6001 with
block token AAABASBpSJ2UAD3PLxJnCt6zun4/T4sU25Bnb8jB5Q6FRXHFqAIAqE04hJoR, in
snapshot snap-1234567890. The binary data is output to the data file in the C:\Temp directory
on a Windows computer. If you run the command on a Linux or Unix computer, replace the output
path with /tmp/data to output the data to the data file in the /tmp directory.
aws ebs get-snapshot-block \
    --snapshot-id snap-1234567890 \
    --block-index 6001 \
    --block-token AAABASBpSJ2UAD3PLxJnCt6zun4/T4sU25Bnb8jB5Q6FRXHFqAIAqE04hJoR \
    C:\Temp\data
The following example response for the previous command shows the size of the data returned, the
checksum to validate the data, and the algorithm of the checksum. The binary data is automatically
saved to the directory and file you specified in the request command.
{
"DataLength": "524288",
"Checksum": "cf0Y6/Fn0oFa4VyjQPOa/iD0zhTflPTKzxGv2OKowXc=",
"ChecksumAlgorithm": "SHA256"
}
AWS API
The following GetSnapshotBlock example request returns the data in the block index 3072 with
block token AAUBARGCaufCqBRZC8tEkPYGGkSv3vqvOjJ2xKDi3ljDFiytUxBLXYgTmkid, in
snapshot snap-0c9EXAMPLE1b30e2f.
GET /snapshots/snap-0c9EXAMPLE1b30e2f/blocks/3072?
blockToken=AAUBARGCaufCqBRZC8tEkPYGGkSv3vqvOjJ2xKDi3ljDFiytUxBLXYgTmkid HTTP/1.1
Host: ebs.us-east-2.amazonaws.com
Accept-Encoding: identity
User-Agent: <User agent parameter>
X-Amz-Date: 20200617T232838Z
Authorization: <Authentication parameter>
The following example response for the previous request shows the size of the data returned, the
checksum to validate the data, and the algorithm used to generate the checksum. The binary data is
transmitted in the body of the response and is represented as BlockData in the following example.
HTTP/1.1 200 OK
x-amzn-RequestId: 2d0db2fb-bd88-474d-a137-81c4e57d7b9f
x-amz-Data-Length: 524288
x-amz-Checksum: Vc0yY2j3qg8bUL9I6GQuI2orTudrQRBDMIhcy7bdEsw=
x-amz-Checksum-Algorithm: SHA256
Content-Type: application/octet-stream
Content-Length: 524288
Date: Wed, 17 Jun 2020 23:28:38 GMT
Connection: keep-alive
BlockData
Write snapshots with EBS direct APIs
The following steps describe how to use the EBS direct APIs to write incremental snapshots:
1. Use the StartSnapshot action and specify a parent snapshot ID to start a snapshot as an incremental
snapshot of an existing one, or omit the parent snapshot ID to start a new snapshot. This action
returns the new snapshot ID, which is in a pending state.
2. Use the PutSnapshotBlock action and specify the ID of the pending snapshot to add data to it in the
form of individual blocks. You must specify a Base64-encoded SHA256 checksum for the block of
data transmitted. The service computes the checksum of the data received and validates it with the
checksum that you specified. The action fails if the checksums don't match.
3. When you're done adding data to the pending snapshot, use the CompleteSnapshot action to start an
asynchronous workflow that seals the snapshot and moves it to a completed state.
Repeat these steps to create a new, incremental snapshot using the previously created snapshot as the
parent.
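As an illustration, the following Python sketch wires these three steps together. It assumes an `ebs` client object whose `start_snapshot`, `put_snapshot_block`, and `complete_snapshot` methods mirror the EBS direct APIs (for example, a boto3 EBS client); encryption options, timeouts, and error handling are omitted.

```python
import base64
import hashlib

def b64_sha256(data):
    """Base64-encoded SHA256 checksum, as required by PutSnapshotBlock."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")

def write_snapshot(ebs, volume_size_gib, blocks, parent_snapshot_id=None):
    """Write `blocks` (a dict of block index -> bytes) to a new snapshot.

    `ebs` is assumed to be a client whose start_snapshot, put_snapshot_block,
    and complete_snapshot methods mirror the EBS direct APIs (for example,
    a boto3 EBS client). Returns the ID of the new snapshot.
    """
    # Step 1: start a pending snapshot, optionally as an incremental
    # snapshot of an existing parent snapshot.
    params = {"VolumeSize": volume_size_gib}
    if parent_snapshot_id is not None:
        params["ParentSnapshotId"] = parent_snapshot_id
    snapshot_id = ebs.start_snapshot(**params)["SnapshotId"]

    # Step 2: put each block of data, with its required checksum.
    for index, data in sorted(blocks.items()):
        ebs.put_snapshot_block(
            SnapshotId=snapshot_id,
            BlockIndex=index,
            BlockData=data,
            DataLength=len(data),
            Checksum=b64_sha256(data),
            ChecksumAlgorithm="SHA256",
        )

    # Step 3: seal the snapshot; it moves asynchronously to the
    # completed state.
    ebs.complete_snapshot(SnapshotId=snapshot_id, ChangedBlocksCount=len(blocks))
    return snapshot_id
```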
For example, consider the following scenario: snapshot A is the first new snapshot started. Snapshot A is used
as the parent snapshot to start snapshot B. Snapshot B is used as the parent snapshot to start and create
snapshot C. Snapshots A, B, and C are incremental snapshots. Snapshot A is used to create EBS volume
1. Snapshot D is created from EBS volume 1. Snapshot D is an incremental snapshot of A; it is not an
incremental snapshot of B or C.
The following examples show how to write snapshots using the EBS direct APIs.
Topics
• Start a snapshot (p. 1447)
• Put data into a snapshot (p. 1448)
• Complete a snapshot (p. 1449)
Start a snapshot
AWS CLI
The following start-snapshot example command starts an 8 GiB snapshot, using snapshot
snap-123EXAMPLE1234567 as the parent snapshot. The new snapshot will be an incremental
snapshot of the parent snapshot. The snapshot moves to an error state if there are no put or
complete requests made for the snapshot within the specified 60 minute timeout period. The
550e8400-e29b-41d4-a716-446655440000 client token ensures idempotency for the request. If
the client token is omitted, the AWS SDK automatically generates one for you. For more information
about idempotency, see Idempotency for StartSnapshot API (p. 1451).
aws ebs start-snapshot \
    --volume-size 8 \
    --parent-snapshot-id snap-123EXAMPLE1234567 \
    --timeout 60 \
    --client-token 550e8400-e29b-41d4-a716-446655440000
The following example response for the previous command shows the snapshot ID, AWS account
ID, status, volume size in GiB, and size of the blocks in the snapshot. The snapshot is started in a
pending state. Specify the snapshot ID in subsequent put-snapshot-block commands to write
data to the snapshot, then use the complete-snapshot command to complete the snapshot and
change its status to completed.
{
"SnapshotId": "snap-0aaEXAMPLEe306d62",
"OwnerId": "111122223333",
"Status": "pending",
"VolumeSize": 8,
"BlockSize": 524288
}
AWS API
The following StartSnapshot example request starts an 8 GiB snapshot, using snapshot
snap-123EXAMPLE1234567 as the parent snapshot. The new snapshot will be an incremental
snapshot of the parent snapshot. The snapshot moves to an error state if there are no put or
complete requests made for the snapshot within the specified 60 minute timeout period. The
550e8400-e29b-41d4-a716-446655440000 client token ensures idempotency for the request. If
the client token is omitted, the AWS SDK automatically generates one for you. For more information
about idempotency, see Idempotency for StartSnapshot API (p. 1451).
{
    "VolumeSize": 8,
    "ParentSnapshotId": "snap-123EXAMPLE1234567",
    "ClientToken": "550e8400-e29b-41d4-a716-446655440000",
    "Timeout": 60
}
The following example response for the previous request shows the snapshot ID, AWS account
ID, status, volume size in GiB, and size of the blocks in the snapshot. The snapshot is started in a
pending state. Specify the snapshot ID in a subsequent PutSnapshotBlock request to write data
to the snapshot.
{
"BlockSize": 524288,
"Description": null,
"OwnerId": "138695307491",
"Progress": null,
"SnapshotId": "snap-052EXAMPLEc85d8dd",
"StartTime": null,
"Status": "pending",
"Tags": null,
"VolumeSize": 8
}
Put data into a snapshot
AWS CLI
The following put-snapshot-block example command writes 524288 bytes of data to block index 1000
of snapshot snap-0aaEXAMPLEe306d62. The request must include the Base64-encoded SHA256
checksum of the block data.

aws ebs put-snapshot-block \
    --snapshot-id snap-0aaEXAMPLEe306d62 \
    --block-index 1000 \
    --block-data /data/block.bin \
    --data-length 524288 \
    --checksum QOD3gmEQOXATfJx2Aa34W4FU2nZGyXfqtsUuktOw8DM= \
    --checksum-algorithm SHA256
The following example response for the previous command confirms the data length, checksum, and
checksum algorithm for the data received by the service.
{
"DataLength": "524288",
"Checksum": "QOD3gmEQOXATfJx2Aa34W4FU2nZGyXfqtsUuktOw8DM=",
"ChecksumAlgorithm": "SHA256"
}
AWS API
The following example response for the previous request confirms the data length, checksum, and
checksum algorithm for the data received by the service.
{}
Complete a snapshot
AWS CLI
The following complete-snapshot example command completes the specified snapshot. You can
optionally specify an aggregate Base64-encoded SHA256 checksum for the complete set of data
written to a snapshot. For more information about checksums, see Use checksums (p. 1451) earlier in
this guide. The following example response shows the status of the snapshot.
{
"Status": "pending"
}
AWS API
{"Status":"pending"}
Use encryption
If Amazon EBS encryption by default is enabled on your AWS account, you cannot start a new snapshot
using an unencrypted parent snapshot. You must first encrypt the parent snapshot by copying it. For
more information, see Copy an Amazon EBS snapshot (p. 1391) and Encryption by default (p. 1539).
To start an encrypted snapshot, specify the Amazon Resource Name (ARN) of a KMS key, or specify
an encrypted parent snapshot in your StartSnapshot request. If neither is specified, and Amazon EBS
encryption by default is enabled on the account, then the default KMS key for the account is used. If no
default KMS key has been specified for the account, then the AWS managed key is used.
Important
By default, all principals in the account have access to the default AWS managed key, and they
can use it for EBS encryption and decryption operations. For more information, see Default KMS
key for EBS encryption (p. 1538).
You might need additional IAM permissions to use the EBS direct APIs with encryption. For more
information, see the IAM permissions for EBS direct APIs (p. 1437) section earlier in this guide.
Use Signature Version 4 signing
Signature Version 4 is the process to add authentication information to AWS requests sent by HTTP. For
security, most requests to AWS must be signed with an access key, which consists of an access key ID and
secret access key. These two keys are commonly referred to as your security credentials. For information
about how to obtain credentials for your account, see Understanding and getting your credentials.
If you intend to manually create HTTP requests, you must learn how to sign them. When you use the
AWS Command Line Interface (AWS CLI) or one of the AWS SDKs to make requests to AWS, these tools
automatically sign the requests for you with the access key that you specify when you configure the
tools. When you use these tools, you don't need to learn how to sign requests yourself.
For more information, see Signing AWS requests with Signature Version 4 in the AWS General Reference.
Use checksums
The GetSnapshotBlock action returns data that is in a block of a snapshot, and the PutSnapshotBlock
action adds data to a block in a snapshot. The block data that is transmitted is not signed as part of the
Signature Version 4 signing process. As a result, checksums are used to validate the integrity of the data
as follows:
• When you use the GetSnapshotBlock action, the response provides a Base64-encoded SHA256
checksum for the block data using the x-amz-Checksum header, and the checksum algorithm using
the x-amz-Checksum-Algorithm header. Use the returned checksum to validate the integrity of
the data. If the checksum that you generate doesn't match what Amazon EBS provided, you should
consider the data not valid and retry your request.
• When you use the PutSnapshotBlock action, your request must provide a Base64-encoded SHA256
checksum for the block data using the x-amz-Checksum header, and the checksum algorithm using
the x-amz-Checksum-Algorithm header. The checksum that you provide is validated against a
checksum generated by Amazon EBS to validate the integrity of the data. If the checksums do not
correspond, the request fails.
• When you use the CompleteSnapshot action, your request can optionally provide an aggregate
Base64-encoded SHA256 checksum for the complete set of data added to the snapshot. Provide the
checksum using the x-amz-Checksum header, the checksum algorithm using the x-amz-Checksum-
Algorithm header, and the checksum aggregation method using the x-amz-Checksum-Aggregation-
Method header. To generate the aggregated checksum using the linear aggregation method, arrange
the checksums for each written block in ascending order of their block index, concatenate them to
form a single string, and then generate the checksum on the entire string using the SHA256 algorithm.
The checksums in these actions are part of the Signature Version 4 signing process.
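The per-block validation and the linear aggregation method described above can be sketched as follows. This is an illustrative reconstruction with hypothetical helper names; it follows the description literally, treating the per-block checksums as Base64-encoded strings that are concatenated in ascending block-index order before the aggregate SHA256 is computed:

```python
import base64
import hashlib

def block_checksum(data: bytes) -> str:
    """Base64-encoded SHA256 checksum of one block's data (the x-amz-Checksum value)."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")

def validate_block(data: bytes, expected: str) -> bool:
    """Compare a locally computed checksum with the one returned by GetSnapshotBlock."""
    return block_checksum(data) == expected

def linear_aggregate_checksum(checksums_by_index: dict) -> str:
    """Aggregate checksum for CompleteSnapshot using the linear aggregation method:
    concatenate per-block checksums in ascending block-index order, then SHA256."""
    ordered = "".join(checksums_by_index[i] for i in sorted(checksums_by_index))
    return base64.b64encode(hashlib.sha256(ordered.encode("ascii")).digest()).decode("ascii")
```

If validate_block returns False for a GetSnapshotBlock response, treat the data as not valid and retry the request, as described above.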
Idempotency ensures that an API request completes no more than once. With an idempotent request, if the
original request completes successfully, subsequent retries return the result of the original successful
request and have no additional effect.
The StartSnapshot API supports idempotency using a client token. A client token is a unique string
that you specify when you make an API request. If you retry an API request with the same client token
and the same request parameters after it has completed successfully, the result of the original request
is returned. If you retry a request with the same client token, but change one or more of the request
parameters, the ConflictException error is returned.
If you do not specify your own client token, the AWS SDKs automatically generate a client token for the
request to ensure that it is idempotent.
A client token can be any string of up to 64 ASCII characters. Do not reuse the same client token for
different requests.
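A UUID is a convenient way to generate a client token that is unique per request and fits within the 64-character ASCII limit. This is a sketch; any sufficiently unique string works:

```python
import uuid

def new_client_token() -> str:
    """Generate a unique client token for an idempotent StartSnapshot request."""
    token = str(uuid.uuid4())  # 36 ASCII characters, e.g. 550e8400-e29b-41d4-a716-446655440000
    assert len(token) <= 64 and token.isascii()
    return token
```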
To make an idempotent StartSnapshot request with your own client token using the API
{
    "VolumeSize": 8,
    "ParentSnapshotId": "snap-123EXAMPLE1234567",
    "ClientToken": "550e8400-e29b-41d4-a716-446655440000",
    "Timeout": 60
}
To make an idempotent StartSnapshot request with your own client token using the AWS CLI
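A sketch of such a command using the aws ebs start-snapshot CLI, mirroring the parameters in the API example above (illustrative, not a verbatim example):

```shell
aws ebs start-snapshot \
    --volume-size 8 \
    --parent-snapshot-id snap-123EXAMPLE1234567 \
    --client-token 550e8400-e29b-41d4-a716-446655440000 \
    --timeout 60
```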
Optimize performance
You can run API requests concurrently. Assuming a PutSnapshotBlock latency of 100 ms, a thread can
process 10 requests per second. Furthermore, if your client application creates multiple threads and
connections (for example, 100 connections), it can make 1,000 (10 * 100) requests per second in total.
With 512 KiB blocks, this corresponds to a throughput of around 500 MB per second.
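The arithmetic above can be checked directly, using the 524,288-byte (512 KiB) block size shown in the PutSnapshotBlock examples later in this section:

```python
latency_s = 0.100                       # assumed PutSnapshotBlock latency
requests_per_thread = 1 / latency_s     # 10 requests per second per thread
connections = 100                       # one connection per thread
total_rps = requests_per_thread * connections          # 1,000 requests per second
block_size = 512 * 1024                                # 524,288 bytes per block
throughput_mb_s = total_rps * block_size / 1_000_000   # ~524 MB/s, i.e. around 500 MB/s
```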
The following list contains a few things to check in your application:
• Is each thread using a separate connection? If connections are limited in the application, multiple
threads will wait for a connection to become available, and you will see lower throughput.
• Is there any wait time in the application between two put requests? This reduces the effective
throughput of a thread.
• The bandwidth limit on the instance – If bandwidth on the instance is shared by other applications, it
could limit the available throughput for PutSnapshotBlock requests.
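One way to structure such a client is sketched below, with a hypothetical put_block(index, data) function standing in for a real PutSnapshotBlock call; each worker thread should hold its own connection:

```python
from concurrent.futures import ThreadPoolExecutor

def put_blocks_concurrently(put_block, blocks, max_workers=100):
    """Issue PutSnapshotBlock requests in parallel.
    put_block(index, data) is a placeholder for one real request;
    blocks is an iterable of (block_index, block_data) pairs."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(put_block, idx, data) for idx, data in blocks]
        return [f.result() for f in futures]  # propagate any request errors
```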
Be sure to take note of other workloads that might be running in the account to avoid bottlenecks. You
should also build retry mechanisms into your EBS direct APIs workflows to handle throttling, timeouts,
and service unavailability.
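A minimal retry wrapper with exponential backoff and jitter is sketched below. A production client would typically retry only throttling and transient errors rather than every exception:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=0.1):
    """Invoke call(), retrying with exponential backoff and jitter on failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```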
Review the EBS direct APIs service quotas to determine the maximum number of API requests that you can
run per second. For more information, see Amazon Elastic Block Store Endpoints and Quotas in the AWS
General Reference.
The price that you pay to use the EBS direct APIs depends on the requests you make. For more
information, see Amazon EBS pricing.
• ListChangedBlocks and ListSnapshotBlocks APIs are charged per request. For example, if you make
100,000 ListSnapshotBlocks API requests in a Region that charges $0.0006 per 1,000 requests, you will
be charged $0.06 ($0.0006 per 1,000 requests x 100).
• GetSnapshotBlock is charged per block returned. For example, if you make 100,000 GetSnapshotBlock
API requests in a Region that charges $0.003 per 1,000 blocks returned, you will be charged $0.30
($0.003 per 1,000 blocks returned x 100).
• PutSnapshotBlock is charged per block written. For example, if you make 100,000 PutSnapshotBlock
API requests in a Region that charges $0.006 per 1,000 blocks written, you will be charged $0.60
($0.006 per 1,000 blocks written x 100).
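The example charges above all reduce to the same simple calculation. The rates here are the illustrative per-Region prices from the examples, not a price list:

```python
def request_charge(count, rate_per_thousand):
    """Charge for `count` billable units at a per-1,000-unit rate."""
    return (count / 1000) * rate_per_thousand

list_charge = request_charge(100_000, 0.0006)  # ListSnapshotBlocks / ListChangedBlocks requests
get_charge = request_charge(100_000, 0.003)    # GetSnapshotBlock, per block returned
put_charge = request_charge(100_000, 0.006)    # PutSnapshotBlock, per block written
```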
Networking costs
Data transferred directly between EBS direct APIs and Amazon EC2 instances in the same AWS Region
is free when using non-FIPS endpoints. For more information, see AWS service endpoints. If other AWS
services are in the path of your data transfer, you will be charged their associated data processing costs.
These services include, but are not limited to, PrivateLink endpoints, NAT Gateway and Transit Gateway.
If you are using EBS direct APIs from Amazon EC2 instances or AWS Lambda functions in private subnets,
you can use VPC interface endpoints, instead of using NAT gateways, to reduce network data transfer
costs. For more information, see Using interface VPC endpoints with EBS direct APIs (p. 1453).
Each interface endpoint is represented by one or more Elastic Network Interfaces in your subnets.
For more information, see Interface VPC endpoints (AWS PrivateLink) in the Amazon VPC User Guide.
Before you set up an interface VPC endpoint for EBS direct APIs, ensure that you review Interface
endpoint properties and limitations in the Amazon VPC User Guide.
VPC endpoint policies are not supported for EBS direct APIs. By default, full access to EBS direct APIs is
allowed through the endpoint. However, you can control access to the interface endpoint using security
groups. For more information, see Controlling access to services with VPC endpoints in the Amazon VPC
User Guide.
You can create a VPC endpoint for EBS direct APIs using either the Amazon VPC console or the AWS
Command Line Interface (AWS CLI). For more information, see Creating an interface endpoint in the
Amazon VPC User Guide.
Create a VPC endpoint for EBS direct APIs using the following service name:
• com.amazonaws.region.ebs
If you enable private DNS for the endpoint, you can make API requests to EBS direct APIs using
its default DNS name for the Region, for example, ebs.us-east-1.amazonaws.com. For more
information, see Accessing a service through an interface endpoint in the Amazon VPC User Guide.
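For example, creating the interface endpoint with the AWS CLI might look like the following sketch; the VPC, subnet, and security group IDs are hypothetical placeholders:

```shell
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0abc1234example \
    --service-name com.amazonaws.us-east-1.ebs \
    --subnet-ids subnet-0abc1234example \
    --security-group-ids sg-0abc1234example \
    --private-dns-enabled
```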
Log API Calls for EBS direct APIs with AWS CloudTrail
The EBS direct APIs service is integrated with AWS CloudTrail. CloudTrail is a service that provides a
record of actions taken by a user, role, or an AWS service. CloudTrail captures all API calls performed in
EBS direct APIs as events. If you create a trail, you can enable continuous delivery of CloudTrail events
to an Amazon Simple Storage Service (Amazon S3) bucket. If you don't configure a trail, you can still
view the most recent management events in the CloudTrail console in Event history. Data events are
not captured in Event history. You can use the information collected by CloudTrail to determine the
request that was made to EBS direct APIs, the IP address from which the request was made, who made
the request, when it was made, and additional details.
For more information about CloudTrail, see the AWS CloudTrail User Guide.
CloudTrail is enabled on your AWS account when you create the account. When supported event activity
occurs in EBS direct APIs, that activity is recorded in a CloudTrail event along with other AWS service
events in Event history. You can view, search, and download recent events in your AWS account. For
more information, see Viewing Events with CloudTrail Event History.
For an ongoing record of events in your AWS account, including events for EBS direct APIs, create a
trail. A trail enables CloudTrail to deliver log files to an S3 bucket. By default, when you create a trail
in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the AWS
partition and delivers the log files to the S3 bucket that you specify. Additionally, you can configure
other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
For EBS direct APIs, you can use CloudTrail to log two types of events:
• Management events — Management events provide visibility into management operations that are
performed on snapshots in your AWS account. The following API actions are logged by default as
management events in trails:
• StartSnapshot
• CompleteSnapshot
For more information about logging management events, see Logging management events for trails in
the CloudTrail User Guide.
• Data events — These events provide visibility into the snapshot operations performed on or within a
snapshot. The following API actions can optionally be logged as data events in trails:
• ListSnapshotBlocks
• ListChangedBlocks
• GetSnapshotBlock
• PutSnapshotBlock
Data events are not logged by default when you create a trail. You can use only advanced event
selectors to record data events on EBS direct API calls. For more information, see Logging data events
for trails in the CloudTrail User Guide.
Note
If you perform an action on a snapshot that is shared with you, data events are not sent to the
AWS account that owns the snapshot.
Identity information
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or AWS Identity and Access Management (IAM) user
credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
A trail is a configuration that enables delivery of events as log files to an S3 bucket that you specify.
CloudTrail log files contain one or more log entries. An event represents a single request from any
source and includes information about the requested action, the date and time of the action, request
parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they
don't appear in any specific order.
StartSnapshot
{
"eventVersion": "1.05",
"userIdentity": {
"type": "IAMUser",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"userName": "user"
},
"eventTime": "2020-07-03T23:27:26Z",
"eventSource": "ebs.amazonaws.com",
"eventName": "StartSnapshot",
"awsRegion": "eu-west-1",
"sourceIPAddress": "192.0.2.0",
"userAgent": "PostmanRuntime/7.25.0",
"requestParameters": {
"volumeSize": 8,
"clientToken": "token",
"encrypted": true
},
"responseElements": {
"snapshotId": "snap-123456789012",
"ownerId": "123456789012",
"status": "pending",
"startTime": "Jul 3, 2020 11:27:26 PM",
"volumeSize": 8,
"blockSize": 524288,
"kmsKeyArn": "HIDDEN_DUE_TO_SECURITY_REASONS"
},
"requestID": "be112233-1ba5-4ae0-8e2b-1c302EXAMPLE",
"eventID": "6e12345-2a4e-417c-aa78-7594fEXAMPLE",
"eventType": "AwsApiCall",
"recipientAccountId": "123456789012"
}
CompleteSnapshot
{
"eventVersion": "1.05",
"userIdentity": {
"type": "IAMUser",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"userName": "user"
},
"eventTime": "2020-07-03T23:28:24Z",
"eventSource": "ebs.amazonaws.com",
"eventName": "CompleteSnapshot",
"awsRegion": "eu-west-1",
"sourceIPAddress": "192.0.2.0",
"userAgent": "PostmanRuntime/7.25.0",
"requestParameters": {
"snapshotId": "snap-123456789012",
"changedBlocksCount": 5
},
"responseElements": {
"status": "completed"
},
"requestID": "be112233-1ba5-4ae0-8e2b-1c302EXAMPLE",
"eventID": "6e12345-2a4e-417c-aa78-7594fEXAMPLE",
"eventType": "AwsApiCall",
"recipientAccountId": "123456789012"
}
ListSnapshotBlocks
{
"eventVersion": "1.08",
"userIdentity": {
"type": "IAMUser",
"principalId": "AIDAT4HPB2AO3JEXAMPLE",
"arn": "arn:aws:iam::123456789012:user/user",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"userName": "user"
},
"eventTime": "2021-06-03T00:32:46Z",
"eventSource": "ebs.amazonaws.com",
"eventName": "ListSnapshotBlocks",
"awsRegion": "us-east-1",
"sourceIPAddress": "111.111.111.111",
"userAgent": "PostmanRuntime/7.28.0",
"requestParameters": {
"snapshotId": "snap-abcdef01234567890",
"maxResults": 100,
"startingBlockIndex": 0
},
"responseElements": null,
"requestID": "example6-0e12-4aa9-b923-1555eexample",
"eventID": "example4-218b-4f69-a9e0-2357dexample",
"readOnly": true,
"resources": [
{
"accountId": "123456789012",
"type": "AWS::EC2::Snapshot",
"ARN": "arn:aws:ec2:us-west-2::snapshot/snap-abcdef01234567890"
}
],
"eventType": "AwsApiCall",
"managementEvent": false,
"recipientAccountId": "123456789012",
"eventCategory": "Data",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-SHA",
"clientProvidedHostHeader": "ebs.us-west-2.amazonaws.com"
}
}
ListChangedBlocks
{
"eventVersion": "1.08",
"userIdentity": {
"type": "IAMUser",
"principalId": "AIDAT4HPB2AO3JEXAMPLE",
"arn": "arn:aws:iam::123456789012:user/user",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"userName": "user"
},
"eventTime": "2021-06-02T21:11:46Z",
"eventSource": "ebs.amazonaws.com",
"eventName": "ListChangedBlocks",
"awsRegion": "us-east-1",
"sourceIPAddress": "111.111.111.111",
"userAgent": "PostmanRuntime/7.28.0",
"requestParameters": {
"firstSnapshotId": "snap-abcdef01234567890",
"secondSnapshotId": "snap-9876543210abcdef0",
"maxResults": 100,
"startingBlockIndex": 0
},
"responseElements": null,
"requestID": "example0-f4cb-4d64-8d84-72e1bexample",
"eventID": "example3-fac4-4a78-8ebb-3e9d3example",
"readOnly": true,
"resources": [
{
"accountId": "123456789012",
"type": "AWS::EC2::Snapshot",
"ARN": "arn:aws:ec2:us-west-2::snapshot/snap-abcdef01234567890"
},
{
"accountId": "123456789012",
"type": "AWS::EC2::Snapshot",
"ARN": "arn:aws:ec2:us-west-2::snapshot/snap-9876543210abcdef0"
}
],
"eventType": "AwsApiCall",
"managementEvent": false,
"recipientAccountId": "123456789012",
"eventCategory": "Data",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-SHA",
"clientProvidedHostHeader": "ebs.us-west-2.amazonaws.com"
}
}
GetSnapshotBlock
{
"eventVersion": "1.08",
"userIdentity": {
"type": "IAMUser",
"principalId": "AIDAT4HPB2AO3JEXAMPLE",
"arn": "arn:aws:iam::123456789012:user/user",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"userName": "user"
},
"eventTime": "2021-06-02T20:43:05Z",
"eventSource": "ebs.amazonaws.com",
"eventName": "GetSnapshotBlock",
"awsRegion": "us-east-1",
"sourceIPAddress": "111.111.111.111",
"userAgent": "PostmanRuntime/7.28.0",
"requestParameters": {
"snapshotId": "snap-abcdef01234567890",
"blockIndex": 1,
"blockToken": "EXAMPLEiL5E3pMPFpaDWjExM2/mnSKh1mQfcbjwe2mM7EwhrgCdPAEXAMPLE"
},
"responseElements": null,
"requestID": "examplea-6eca-4964-abfd-fd9f0example",
"eventID": "example6-4048-4365-a275-42e94example",
"readOnly": true,
"resources": [
{
"accountId": "123456789012",
"type": "AWS::EC2::Snapshot",
"ARN": "arn:aws:ec2:us-west-2::snapshot/snap-abcdef01234567890"
}
],
"eventType": "AwsApiCall",
"managementEvent": false,
"recipientAccountId": "123456789012",
"eventCategory": "Data",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-SHA",
"clientProvidedHostHeader": "ebs.us-west-2.amazonaws.com"
}
}
PutSnapshotBlock
{
"eventVersion": "1.08",
"userIdentity": {
"type": "IAMUser",
"principalId": "AIDAT4HPB2AO3JEXAMPLE",
"arn": "arn:aws:iam::123456789012:user/user",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"userName": "user"
},
"eventTime": "2021-06-02T21:09:17Z",
"eventSource": "ebs.amazonaws.com",
"eventName": "PutSnapshotBlock",
"awsRegion": "us-east-1",
"sourceIPAddress": "111.111.111.111",
"userAgent": "PostmanRuntime/7.28.0",
"requestParameters": {
"snapshotId": "snap-abcdef01234567890",
"blockIndex": 1,
"dataLength": 524288,
"checksum": "exampleodSGvFSb1e3kxWUgbOQ4TbzPurnsfVexample",
"checksumAlgorithm": "SHA256"
},
"responseElements": {
"checksum": "exampleodSGvFSb1e3kxWUgbOQ4TbzPurnsfVexample",
"checksumAlgorithm": "SHA256"
},
"requestID": "example3-d5e0-4167-8ee8-50845example",
"eventID": "example8-4d9a-4aad-b71d-bb31fexample",
"readOnly": false,
"resources": [
{
"accountId": "123456789012",
"type": "AWS::EC2::Snapshot",
"ARN": "arn:aws:ec2:us-west-2::snapshot/snap-abcdef01234567890"
}
],
"eventType": "AwsApiCall",
"managementEvent": false,
"recipientAccountId": "123456789012",
"eventCategory": "Data",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-SHA",
"clientProvidedHostHeader": "ebs.us-west-2.amazonaws.com"
}
}
Are the block indexes returned by the API unique and in numerical order?
Yes. The block indexes returned are unique, and in numerical order.
Can I submit a request with a MaxResults parameter value of under 100?
No. The minimum MaxResults parameter value you can use is 100. If you submit a request with a
MaxResults parameter value of under 100, and there are more than 100 blocks in the snapshot, then
the API returns at least 100 results.
Can I run API requests concurrently?
You can run API requests concurrently. Be sure to take note of other workloads that might be
running in the account to avoid bottlenecks. You should also build retry mechanisms into your
EBS direct APIs workflows to handle throttling, timeouts, and service unavailability. For more
information, see Optimize performance (p. 1452).
Review the EBS direct APIs service quotas to determine the maximum number of API requests that you
can run per second. For more information, see Amazon Elastic Block Store Endpoints and Quotas in the
AWS General Reference.
When running the ListChangedBlocks action, is it possible to get an empty response even though
there are blocks in the snapshot?
Yes. If the changed blocks are scarce in the snapshot, the response may be empty but the API will
return a next page token value. Use the next page token value to continue to the next page of
results. You can confirm that you have reached the last page of results when the API returns a next
page token value of null.
If the NextToken parameter is specified together with a StartingBlockIndex parameter, which of the
two is used?
The NextToken parameter is used, and the StartingBlockIndex parameter is ignored.
For how long are block tokens and next tokens valid?
Block tokens are valid for seven days, and next tokens are valid for 60 minutes.
Are encrypted snapshots supported?
Yes. Encrypted snapshots can be accessed using the EBS direct APIs.
To access an encrypted snapshot, the user must have access to the KMS key used to encrypt the
snapshot, and the AWS KMS decrypt action. See the IAM permissions for EBS direct APIs (p. 1437)
section earlier in this guide for the AWS KMS policy to assign to a user.
Are public snapshots supported?
Public snapshots are not supported.
Does the ListSnapshotBlocks action return block indexes and block tokens for all blocks in a snapshot?
It returns only block indexes and tokens that have data written to them.
Can I get a history of the API calls made by the EBS direct APIs on my account for security analysis
and operational troubleshooting purposes?
Yes. To receive a history of EBS direct APIs API calls made on your account, turn on AWS CloudTrail in
the AWS Management Console. For more information, see Log API Calls for EBS direct APIs with AWS
CloudTrail (p. 1454).
For more information, see Amazon Data Lifecycle Manager (p. 1478).
You can restore a snapshot from the Recycle Bin at any time before its retention period expires. After
you restore a snapshot from the Recycle Bin, the snapshot is removed from the Recycle Bin and you can
use it in the same way you use any other snapshot in your account. If the retention period expires and
the snapshot is not restored, the snapshot is permanently deleted from the Recycle Bin and is no longer
available for recovery.
Using Recycle Bin ensures business continuity by protecting your business-critical data backups against
accidental deletion.
Topics
• How does it work? (p. 1461)
• Considerations (p. 1461)
• Quotas (p. 1462)
• Related services (p. 1462)
• Pricing (p. 1462)
• Required permissions (p. 1463)
• Work with retention rules (p. 1464)
• Work with snapshots in the Recycle Bin (p. 1471)
• Monitoring Recycle Bin using AWS CloudTrail (p. 1471)
When you create a retention rule, you specify the following:
• The snapshots that you want to retain in the Recycle Bin when they are deleted.
• The retention period for which to retain snapshots in the Recycle Bin after deletion.
With Recycle Bin, you can create two types of retention rules:
• Tag-level retention rules — These retention rules use resource tags to identify the snapshots that are
to be retained in the Recycle Bin. For each retention rule, you specify one or more tag key and value
pairs. Snapshots that are tagged with at least one of the tag key and value pairs that are specified
in the retention rule are automatically retained in the Recycle Bin upon deletion. Use this type of
retention rule if you want to protect specific snapshots in your account based on their tags.
• Region-level retention rules — These retention rules do not have any resource tags specified. They
apply to all of the snapshots in the Region in which they are created, even if the snapshots are not
tagged. Use this type of retention rule if you want to protect all of your snapshots in a specific Region.
While a snapshot is in the Recycle Bin, you can restore it for use at any time.
The snapshot remains in the Recycle Bin until one of the following happens:
• You manually restore it for use. When you restore a snapshot from the Recycle Bin, the snapshot is
removed from the Recycle Bin and it immediately becomes available for use as a regular snapshot. You
can use restored snapshots in the same way as any other snapshot in your account.
• The retention period expires. If the retention period expires, and the snapshot has not been restored
from the Recycle Bin, the snapshot is permanently deleted from the Recycle Bin and it can no longer
be viewed or restored.
Considerations
Keep the following in mind when working with the Recycle Bin:
• If a snapshot is enabled for fast snapshot restore when it is deleted, fast snapshot restore is
automatically disabled shortly after the snapshot is sent to the Recycle Bin.
• If you restore the snapshot before fast snapshot restore is disabled for the snapshot, it remains
enabled.
• If you restore the snapshot after fast snapshot restore has been disabled, it remains disabled. If
needed, you must manually re-enable fast snapshot restore.
• If a snapshot is shared when it is deleted, it is automatically unshared when it is sent to the Recycle Bin.
If you restore the snapshot, all of the previous sharing permissions are automatically restored.
• If a snapshot matches more than one retention rule upon deletion, then the retention rule with the
longest retention period takes precedence. If a snapshot matches a Region-level rule and a tag-level
rule, then the tag-level rule takes precedence.
• You can't manually delete a snapshot from the Recycle Bin. The snapshot will be automatically deleted
when its retention period expires.
• While a snapshot is in the Recycle Bin, you can only view it, restore it, or modify its tags. To use the
snapshot in any other way, you must first restore it.
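The precedence rules described above can be sketched as follows. The helper and the plain-dictionary rule shape are hypothetical illustrations, not a real API structure:

```python
def resolve_retention_rule(matching_rules):
    """Pick the retention rule that applies to a deleted snapshot.
    Tag-level rules take precedence over Region-level rules; among the
    remaining candidates, the longest retention period wins."""
    tag_rules = [r for r in matching_rules if r["level"] == "tag"]
    candidates = tag_rules or matching_rules
    return max(candidates, key=lambda r: r["retention_days"])
```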
Quotas
The following quotas apply to Recycle Bin.
Related services
Recycle Bin works with the following services:
• AWS CloudTrail — Enables you to record events that occur in Recycle Bin. For more information, see
Monitoring Recycle Bin using AWS CloudTrail (p. 1471).
Pricing
Snapshots in the Recycle Bin are billed at the same rate as regular snapshots in your account. There are
no additional charges for using Recycle Bin and retention rules. For more information, see Amazon EBS
pricing.
Note
Some snapshots might still appear in the Recycle Bin console or in the AWS CLI and API output
for a short period after their retention periods have expired and they have been permanently
deleted. You are not billed for these snapshots. Billing stops as soon as the retention period
expires.
You can use the following AWS generated cost allocation tags for cost tracking and allocation purposes
when using AWS Billing and Cost Management.
• Key: aws:recycle-bin:resource-in-bin
• Value: true
For more information, see AWS-Generated Cost Allocation Tags in the AWS Billing and Cost Management
User Guide.
Required permissions
By default, IAM users don't have permission to work with Recycle Bin, retention rules, or with snapshots
that are in the Recycle Bin. To allow IAM users to work with these resources, you must create IAM policies
that grant permission to use specific resources and API actions. You then attach those policies to the IAM
users or the groups that require those permissions.
Topics
• Permissions for working with Recycle Bin (p. 1463)
• Permissions for working with snapshots in the Recycle Bin (p. 1464)
• rbin:CreateRule
• rbin:UpdateRule
• rbin:GetRule
• rbin:ListRules
• rbin:DeleteRule
• rbin:TagResource
• rbin:UntagResource
• rbin:ListTagsForResource
Note
Console users additionally require the tag:GetResources permission. The example policy
below includes this permission. If it is not needed, you can remove the permission from the
policy.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"rbin:CreateRule",
"rbin:UpdateRule",
"rbin:GetRule",
"rbin:ListRules",
"rbin:DeleteRule",
"rbin:TagResource",
"rbin:UntagResource",
"rbin:ListTagsForResource",
"tag:GetResources"
],
"Resource": "*"
}]
}
• ec2:ListSnapshotsInRecycleBin
• ec2:RestoreSnapshotFromRecycleBin
• ec2:CreateTags
• ec2:DeleteTags
Note
Console users additionally require the ec2:DescribeTags permission. The example policy
below includes this permission. If it is not needed, you can remove the permission from the
policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:ListSnapshotsInRecycleBin",
"ec2:RestoreSnapshotFromRecycleBin"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags",
"ec2:DeleteTags",
"ec2:DescribeTags"
],
"Resource": "arn:aws:ec2:Region:account-id:snapshot/*"
}
]
}
When you create a retention rule, you specify the following:
• The snapshots that you want to retain in the Recycle Bin when they are deleted.
• The retention period for which to retain snapshots in the Recycle Bin after deletion.
With Recycle Bin, you can create two types of retention rules:
• Tag-level retention rules — These retention rules use resource tags to identify the snapshots that are
to be retained in the Recycle Bin. For each retention rule, you specify one or more tag key and value
pairs. Snapshots that are tagged with at least one of the tag key and value pairs that are specified
in the retention rule are automatically retained in the Recycle Bin upon deletion. Use this type of
retention rule if you want to protect specific snapshots in your account based on their tags.
• Region-level retention rules — These retention rules do not have any resource tags specified. They
apply to all of the snapshots in the Region in which they are created, even if the snapshots are not
tagged. Use this type of retention rule if you want to protect all of your snapshots in a specific Region.
After you create a retention rule, snapshots that match its criteria are automatically retained in the
Recycle Bin for the specified period when they are deleted.
Topics
• Create a retention rule (p. 1465)
• View Recycle Bin retention rules (p. 1466)
• Update retention rules (p. 1467)
• Tag retention rules (p. 1468)
• View retention rules tags (p. 1469)
• Remove tags from a retention rule (p. 1469)
• Delete Recycle Bin retention rules (p. 1470)
• An optional name for the retention rule. The name can be up to 255 characters long.
• An optional description for the rule. The description can be up to 255 characters long.
• Resource tags that identify the snapshots that are to be retained in the Recycle Bin. You can specify
up to 50 tags for each rule. However, you can add the same tag key and value pair to up to 5 retention
rules only.
To create a tag-level retention rule, specify at least one tag key and value pair. To create a Region-
level retention rule, do not specify any tag key and value pairs.
• The period for which the snapshots are to be retained in the Recycle Bin. The period can be up to 1
year (365 days).
• Optional retention rule tags to help identify and organize your retention rules. You can assign up to 50
tags to each rule.
Retention rules function only in the Regions in which they are created. If you intend to use Recycle Bin in
other Regions, you must create additional retention rules in those Regions.
You can create a Recycle Bin retention rule using one of the following methods.
a. (Optional) For Retention rule name, enter a descriptive name for the retention rule.
b. (Optional) For Retention rule description, enter a brief description for the retention rule.
4. In the Rule settings section, do the following:
• To create a Region-level retention rule that matches all deleted snapshots in the Region,
select Apply to all resources. The retention rule will retain all deleted snapshots in the
Recycle Bin upon deletion, even if the snapshots do not have any tags.
• To create a tag-level retention rule, for Resource tags to match, enter the tag key and
value pairs to use to identify snapshots that are to be retained in the Recycle Bin. Only
snapshots that have at least one of the specified tag key and value pairs will be retained
by the retention rule.
c. For Retention period, enter the number of days for which the retention rule is to retain
snapshots in the Recycle Bin.
5. (Optional) In the Tags section, do the following:
• To tag the rule with custom tags, choose Add tag and then enter the tag key and value pair.
6. Choose Create retention rule.
AWS CLI
Use the create-rule AWS CLI command. For --retention-period, specify the number of days
to retain deleted snapshots in the Recycle Bin. For --resource-type, specify EBS_SNAPSHOT.
To create a tag-level retention rule, for --resource-tags, specify the tags to use to identify the
snapshots that are to be retained. To create a Region-level retention rule, omit --resource-tags.
Example 1
The following example command creates a Region-level retention rule that retains all deleted
snapshots for a period of 8 days.
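The command itself did not survive extraction; a sketch of what it would look like follows (the description string is illustrative):

```shell
# Region-level rule: omitting --resource-tags matches every deleted
# EBS snapshot in the current Region.
aws rbin create-rule \
    --retention-period RetentionPeriodValue=8,RetentionPeriodUnit=DAYS \
    --resource-type EBS_SNAPSHOT \
    --description "Match all snapshots"
```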
Example 2
The following example command creates a tag-level rule that retains deleted snapshots that are
tagged with purpose=production for a period of 14 days.
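Sketched the same way, with the tag key and value taken from the example text:

```shell
# Tag-level rule: only snapshots tagged purpose=production are retained.
aws rbin create-rule \
    --retention-period RetentionPeriodValue=14,RetentionPeriodUnit=DAYS \
    --resource-type EBS_SNAPSHOT \
    --resource-tags ResourceTagKey=purpose,ResourceTagValue=production
```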
AWS CLI
Use the list-rules AWS CLI command, and for --resource-type, specify EBS_SNAPSHOT.
Example
The following example command provides information about retention rule pwxIkFcvge4.
After you update a retention rule, the changes only apply to new snapshots that it retains. The changes
do not affect snapshots that it previously sent to the Recycle Bin. For example, if you update a retention
rule's retention period, only new snapshots that it retains from that point are retained for the new
retention period. Snapshots that it sent to the Recycle Bin before the update are still retained for the
previous (old) retention period.
You can update a retention rule using one of the following methods.
AWS CLI
Use the update-rule AWS CLI command. For --identifier, specify the ID of the retention rule to
update.
Example
The following example command updates retention rule 6lsJ2Fa9nh9 to retain all snapshots for
21 days and updates its description.
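A sketch of that update (the description string is illustrative):

```shell
aws rbin update-rule \
    --identifier 6lsJ2Fa9nh9 \
    --retention-period RetentionPeriodValue=21,RetentionPeriodUnit=DAYS \
    --description "Retain all snapshots for 21 days"
```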
You can assign a tag to a retention rule using one of the following methods.
AWS CLI
Use the tag-resource AWS CLI command. For --resource-arn, specify the Amazon Resource Name
(ARN) of the retention rule to tag, and for --tags, specify the tag key and value pair.
Example
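The example command is missing here; a representative invocation, reusing the rule ARN and tag pair that appear in the neighboring examples, might look like:

```shell
aws rbin tag-resource \
    --resource-arn arn:aws:rbin:us-east-1:123456789012:rule/nOoSBBtItF3 \
    --tags Key=purpose,Value=production
```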
AWS CLI
Use the list-tags-for-resource AWS CLI command. For --resource-arn, specify the ARN of the
retention rule.
Example
The following example command lists the tags for retention rule arn:aws:rbin:us-east-1:123456789012:rule/nOoSBBtItF3.
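A sketch of that command:

```shell
aws rbin list-tags-for-resource \
    --resource-arn arn:aws:rbin:us-east-1:123456789012:rule/nOoSBBtItF3
```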
AWS CLI
Use the untag-resource AWS CLI command. For --resource-arn, specify the ARN of the retention
rule. For --tag-keys, specify the tag keys of the tags to remove.
Example
The following example command removes tags that have a tag key of purpose from retention rule
arn:aws:rbin:us-east-1:123456789012:rule/nOoSBBtItF3.
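A sketch of that command:

```shell
aws rbin untag-resource \
    --resource-arn arn:aws:rbin:us-east-1:123456789012:rule/nOoSBBtItF3 \
    --tag-keys purpose
```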
You can delete a retention rule using one of the following methods.
AWS CLI
Use the delete-rule AWS CLI command. For --identifier, specify the ID of the retention rule to
delete.
Example
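The example itself is missing; deleting a rule by ID would look like the following (the rule ID is illustrative):

```shell
aws rbin delete-rule --identifier pwxIkFcvge4
```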
For more information about CloudTrail, see the AWS CloudTrail User Guide.
For an ongoing record of events in your AWS account, including events for Recycle Bin, create a trail.
A trail enables CloudTrail to deliver log files to an S3 bucket. By default, when you create a trail in
the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the AWS
partition and delivers the log files to the S3 bucket that you specify. Additionally, you can configure
other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more
information, see Overview for creating a trail in the AWS CloudTrail User Guide.
For Recycle Bin, you can use CloudTrail to log the following API actions as management events.
• CreateRule
• UpdateRule
• GetRule
• ListRules
• DeleteRule
• TagResource
• UntagResource
• ListTagsForResource
For more information about logging management events, see Logging management events for trails in
the CloudTrail User Guide.
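Even without a trail, the last 90 days of management events can be queried from the CloudTrail event history. A sketch of such a query for Recycle Bin rule creation events:

```shell
# Find recent CreateRule calls recorded by CloudTrail.
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateRule \
    --max-results 10
```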
Identity information
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or AWS Identity and Access Management (IAM) user
credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
CreateRule
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:role/Admin",
"accountId": "123456789012",
"userName": "Admin"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2021-08-02T21:43:38Z"
}
}
},
"eventTime": "2021-08-02T21:45:22Z",
"eventSource": "rbin.amazonaws.com",
"eventName": "CreateRule",
"awsRegion": "us-west-2",
"sourceIPAddress": "123.123.123.123",
"userAgent": "aws-cli/1.20.9 Python/3.6.14
Linux/4.9.230-0.1.ac.224.84.332.metal1.x86_64 botocore/1.21.9",
"requestParameters": {
"retentionPeriod": {
"retentionPeriodValue": 8,
"retentionPeriodUnit": "DAYS"
},
"description": "Match all snapshots",
"resourceType": "EBS_SNAPSHOT"
},
"responseElements": {
"identifier": "jkrnexample"
},
"requestID": "ex0577a5-amc4-pl4f-ef51-50fdexample",
"eventID": "714fafex-2eam-42pl-913e-926d4example",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "123456789012",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "rbin.us-west-2.amazonaws.com"
}
}
GetRule
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:role/Admin",
"accountId": "123456789012",
"userName": "Admin"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2021-08-02T21:43:38Z"
}
}
},
"eventTime": "2021-08-02T21:45:33Z",
"eventSource": "rbin.amazonaws.com",
"eventName": "GetRule",
"awsRegion": "us-west-2",
"sourceIPAddress": "123.123.123.123",
"userAgent": "aws-cli/1.20.9 Python/3.6.14
Linux/4.9.230-0.1.ac.224.84.332.metal1.x86_64 botocore/1.21.9",
"requestParameters": {
"identifier": "jkrnexample"
},
"responseElements": null,
"requestID": "ex0577a5-amc4-pl4f-ef51-50fdexample",
"eventID": "714fafex-2eam-42pl-913e-926d4example",
"readOnly": true,
"eventType": "AwsApiCall",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "123456789012",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "rbin.us-west-2.amazonaws.com"
}
}
ListRules
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:role/Admin",
"accountId": "123456789012",
"userName": "Admin"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2021-08-02T21:43:38Z"
}
}
},
"eventTime": "2021-08-02T21:44:37Z",
"eventSource": "rbin.amazonaws.com",
"eventName": "ListRules",
"awsRegion": "us-west-2",
"sourceIPAddress": "123.123.123.123",
"userAgent": "aws-cli/1.20.9 Python/3.6.14
Linux/4.9.230-0.1.ac.224.84.332.metal1.x86_64 botocore/1.21.9",
"requestParameters": {
"resourceTags": [
{
"resourceTagKey": "test",
"resourceTagValue": "test"
}
]
},
"responseElements": null,
"requestID": "ex0577a5-amc4-pl4f-ef51-50fdexample",
"eventID": "714fafex-2eam-42pl-913e-926d4example",
"readOnly": true,
"eventType": "AwsApiCall",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "123456789012",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "rbin.us-west-2.amazonaws.com"
}
}
UpdateRule
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:role/Admin",
"accountId": "123456789012",
"userName": "Admin"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2021-08-02T21:43:38Z"
}
}
},
"eventTime": "2021-08-02T21:46:03Z",
"eventSource": "rbin.amazonaws.com",
"eventName": "UpdateRule",
"awsRegion": "us-west-2",
"sourceIPAddress": "123.123.123.123",
"userAgent": "aws-cli/1.20.9 Python/3.6.14
Linux/4.9.230-0.1.ac.224.84.332.metal1.x86_64 botocore/1.21.9",
"requestParameters": {
"identifier": "jkrnexample",
"retentionPeriod": {
"retentionPeriodValue": 365,
"retentionPeriodUnit": "DAYS"
},
"description": "Match all snapshots",
"resourceType": "EBS_SNAPSHOT"
},
"responseElements": null,
"requestID": "ex0577a5-amc4-pl4f-ef51-50fdexample",
"eventID": "714fafex-2eam-42pl-913e-926d4example",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "123456789012",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "rbin.us-west-2.amazonaws.com"
}
}
DeleteRule
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:role/Admin",
"accountId": "123456789012",
"userName": "Admin"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2021-08-02T21:43:38Z"
}
}
},
"eventTime": "2021-08-02T21:46:25Z",
"eventSource": "rbin.amazonaws.com",
"eventName": "DeleteRule",
"awsRegion": "us-west-2",
"sourceIPAddress": "123.123.123.123",
"userAgent": "aws-cli/1.20.9 Python/3.6.14
Linux/4.9.230-0.1.ac.224.84.332.metal1.x86_64 botocore/1.21.9",
"requestParameters": {
"identifier": "jkrnexample"
},
"responseElements": null,
"requestID": "ex0577a5-amc4-pl4f-ef51-50fdexample",
"eventID": "714fafex-2eam-42pl-913e-926d4example",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "123456789012",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "rbin.us-west-2.amazonaws.com"
}
}
TagResource
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "123456789012:cheluyao-Isengard",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:role/Admin",
"accountId": "123456789012",
"userName": "Admin"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2021-10-22T21:38:34Z"
}
}
},
"eventTime": "2021-10-22T21:43:15Z",
"eventSource": "rbin.amazonaws.com",
"eventName": "TagResource",
"awsRegion": "us-west-2",
"sourceIPAddress": "123.123.123.123",
"userAgent": "aws-cli/1.20.26 Python/3.6.14
Linux/4.9.273-0.1.ac.226.84.332.metal1.x86_64 botocore/1.21.26",
"requestParameters": {
"resourceArn": "arn:aws:rbin:us-west-2:123456789012:rule/ABCDEF01234",
"tags": [
{
"key": "purpose",
"value": "production"
}
]
},
"responseElements": null,
"requestID": "examplee-7962-49ec-8633-795efexample",
"eventID": "example4-6826-4c0a-bdec-0bab1example",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "123456789012",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "beta.us-west-2.api.rbs.aws.dev"
}
}
UntagResource
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "123456789012:cheluyao-Isengard",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:role/Admin",
"accountId": "123456789012",
"userName": "Admin"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2021-10-22T21:38:34Z"
}
}
},
"eventTime": "2021-10-22T21:44:16Z",
"eventSource": "rbin.amazonaws.com",
"eventName": "UntagResource",
"awsRegion": "us-west-2",
"sourceIPAddress": "123.123.123.123",
"userAgent": "aws-cli/1.20.26 Python/3.6.14
Linux/4.9.273-0.1.ac.226.84.332.metal1.x86_64 botocore/1.21.26",
"requestParameters": {
"resourceArn": "arn:aws:rbin:us-west-2:123456789012:rule/ABCDEF01234",
"tagKeys": [
"purpose"
]
},
"responseElements": null,
"requestID": "example7-6c1e-4f09-9e46-bb957example",
"eventID": "example6-75ff-4c94-a1cd-4d5f5example",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "123456789012",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "beta.us-west-2.api.rbs.aws.dev"
}
}
ListTagsForResource
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "123456789012:cheluyao-Isengard",
"arn": "arn:aws:iam::123456789012:root",
"accountId": "123456789012",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "123456789012",
"arn": "arn:aws:iam::123456789012:role/Admin",
"accountId": "123456789012",
"userName": "Admin"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2021-10-22T21:38:34Z"
}
}
},
"eventTime": "2021-10-22T21:42:31Z",
"eventSource": "rbin.amazonaws.com",
"eventName": "ListTagsForResource",
"awsRegion": "us-west-2",
"sourceIPAddress": "123.123.123.123",
"userAgent": "aws-cli/1.20.26 Python/3.6.14
Linux/4.9.273-0.1.ac.226.84.332.metal1.x86_64 botocore/1.21.26",
"requestParameters": {
"resourceArn": "arn:aws:rbin:us-west-2:123456789012:rule/ABCDEF01234"
},
"responseElements": null,
"requestID": "example8-10c7-43d4-b147-3d9d9example",
"eventID": "example2-24fc-4da7-a479-c9748example",
"readOnly": true,
"eventType": "AwsApiCall",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "123456789012",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "beta.us-west-2.api.rbs.aws.dev"
}
}
Amazon Data Lifecycle Manager
When combined with the monitoring features of Amazon CloudWatch Events and AWS CloudTrail,
Amazon Data Lifecycle Manager provides a complete backup solution for Amazon EC2 instances and
individual EBS volumes at no additional cost.
Important
Amazon Data Lifecycle Manager cannot be used to manage snapshots or AMIs that are created
by any other means.
Amazon Data Lifecycle Manager cannot be used to automate the creation, retention, and
deletion of instance store-backed AMIs.
Contents
• How Amazon Data Lifecycle Manager works (p. 1479)
• Considerations for Amazon Data Lifecycle Manager (p. 1481)
• Automate snapshot lifecycles (p. 1484)
• Automate AMI lifecycles (p. 1491)
• Automate cross-account snapshot copies (p. 1497)
• View, modify, and delete lifecycle policies (p. 1505)
• AWS Identity and Access Management (p. 1508)
• Monitor the lifecycle of snapshots and AMIs (p. 1515)
Elements
• Snapshots (p. 1479)
• EBS-backed AMIs (p. 1479)
• Target resource tags (p. 1480)
• Amazon Data Lifecycle Manager tags (p. 1480)
• Lifecycle policies (p. 1480)
• Policy schedules (p. 1481)
Snapshots
Snapshots are the primary means to back up data from your EBS volumes. To save storage costs,
successive snapshots are incremental, containing only the volume data that changed since the previous
snapshot. When you delete one snapshot in a series of snapshots for a volume, only the data that's
unique to that snapshot is removed. The rest of the captured history of the volume is preserved.
EBS-backed AMIs
An Amazon Machine Image (AMI) provides the information that's required to launch an instance. You
can launch multiple instances from a single AMI when you need multiple instances with the same
configuration. Amazon Data Lifecycle Manager supports EBS-backed AMIs only. EBS-backed AMIs include
a snapshot for each EBS volume that's attached to the source instance.
For more information, see Amazon Machine Images (AMI) (p. 93).
For more information, see Tag your Amazon EC2 resources (p. 1666).
Amazon Data Lifecycle Manager applies the following system tags to the snapshots and AMIs that it
creates, to distinguish them from those created by any other means:
• aws:dlm:lifecycle-policy-id
• aws:dlm:lifecycle-schedule-name
• aws:dlm:expirationTime — For policies with age-based retention schedules only.
• dlm:managed
You can also specify custom tags to be applied to snapshots and AMIs on creation. You can't use a '\' or
'=' character in a tag key.
The target tags that Amazon Data Lifecycle Manager uses to associate volumes with a snapshot policy
can optionally be applied to snapshots created by the policy. Similarly, the target tags that are used to
associate instances with an AMI policy can optionally be applied to AMIs created by the policy.
Lifecycle policies
A lifecycle policy consists of these core settings:
• Policy type—Defines the type of resources that the policy can manage. Amazon Data Lifecycle
Manager supports the following types of lifecycle policies:
• Snapshot lifecycle policy—Used to automate the lifecycle of EBS snapshots. These policies can
target individual EBS volumes or all EBS volumes attached to an instance.
• EBS-backed AMI lifecycle policy—Used to automate the lifecycle of EBS-backed AMIs and their
backing snapshots. These policies can target instances only.
• Cross-account copy event policy—Used to automate snapshot copies across accounts. Use this policy
type in conjunction with an EBS snapshot policy that shares snapshots across accounts.
• Resource type—Defines the type of resources that are targeted by the policy. Snapshot lifecycle
policies can target instances or volumes. Use VOLUME to create snapshots of individual volumes, or use
INSTANCE to create multi-volume snapshots of all of the volumes that are attached to an instance. For
more information, see Multi-volume snapshots (p. 1386). AMI lifecycle policies can target instances
only. One AMI is created that includes snapshots of all of the volumes that are attached to the target
instance.
• Target tags—Specifies the tags that must be assigned to an EBS volume or an Amazon EC2 instance
for it to be targeted by the policy.
• Schedules—The start times and intervals for creating snapshots or AMIs. The first snapshot or AMI
creation operation starts within one hour after the specified start time. Subsequent snapshot or
AMI creation operations start within one hour of their scheduled time. A policy can have up to four
schedules: one mandatory schedule, and up to three optional schedules. For more information, see
Policy schedules (p. 1481).
• Retention—Specifies how snapshots or AMIs are to be retained. You can retain snapshots or AMIs
based either on their total count (count-based), or their age (age-based). For snapshot policies, when
the retention threshold is reached, the oldest snapshot is deleted. For AMI policies, when the retention
threshold is reached, the oldest AMI is deregistered and its backing snapshots are deleted.
For example, you could create a policy with settings similar to the following:
• Manages all EBS volumes that have a tag with a key of account and a value of finance.
• Creates snapshots every 24 hours at 0900 UTC.
• Retains only the five most recent snapshots.
• Starts snapshot creation no later than 0959 UTC each day.
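That example policy could be sketched with the create-lifecycle-policy AWS CLI command as follows; the description and the execution role ARN are placeholders (the role must have the permissions that Amazon Data Lifecycle Manager requires):

```shell
aws dlm create-lifecycle-policy \
    --description "Daily snapshots of finance volumes" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details '{
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "account", "Value": "finance"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["09:00"]},
            "RetainRule": {"Count": 5},
            "CopyTags": false
        }]
    }'
```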
Policy schedules
Policy schedules define when snapshots or AMIs are created by the policy. Policies can have up to four
schedules—one mandatory schedule, and up to three optional schedules.
Adding multiple schedules to a single policy lets you create snapshots or AMIs at different frequencies
using the same policy. For example, you can create a single policy that creates daily, weekly, monthly,
and yearly snapshots. This eliminates the need to manage multiple policies.
For each schedule, you can define the frequency, fast snapshot restore settings (snapshot lifecycle
policies only), cross-Region copy rules, and tags. The tags that are assigned to a schedule are
automatically assigned to the snapshots or AMIs that are created when the schedule is initiated. In
addition, Amazon Data Lifecycle Manager automatically assigns a system-generated tag based on the
schedule's frequency to each snapshot or AMI.
Each schedule is initiated individually based on its frequency. If multiple schedules are initiated at the
same time, Amazon Data Lifecycle Manager creates only one snapshot or AMI and applies the retention
settings of the schedule that has the highest retention period. The tags of all of the initiated schedules
are applied to the snapshot or AMI.
• (Snapshot lifecycle policies only) If more than one of the initiated schedules is enabled for fast
snapshot restore, then the snapshot is enabled for fast snapshot restore in all of the Availability Zones
specified across all of the initiated schedules. The highest retention setting of the initiated schedules
is used for each Availability Zone.
• If more than one of the initiated schedules is enabled for cross-Region copy, the snapshot or AMI is
copied to all Regions specified across all of the initiated schedules. The highest retention period of the
initiated schedules is applied.
• A policy does not begin creating snapshots or AMIs until you set its activation status to enabled. You
can configure a policy to be enabled upon creation.
• The first snapshot or AMI creation operation starts within one hour after the specified start time.
Subsequent snapshot or AMI creation operations start within one hour of their scheduled time.
• If you modify a policy by removing or changing its target tags, the EBS volumes or instances with
those tags are no longer managed by the policy.
• If you modify a schedule name for a policy, the snapshots or AMIs created under the old schedule
name are no longer affected by the policy.
• If you modify a time-based retention schedule to use a new time interval, the new interval is used only
for new snapshots or AMIs created after the change. The new schedule does not affect the retention
schedule of snapshots or AMIs created before the change.
• You cannot change the retention schedule of a policy from count-based to time-based after creation.
To make this change, you must create a new policy.
• If you disable a policy with an age-based retention schedule, the snapshots or AMIs that are set
to expire while the policy is disabled are retained indefinitely. You must delete the snapshots or
deregister the AMIs manually. When you enable the policy again, Amazon Data Lifecycle Manager
resumes deleting snapshots or deregistering AMIs as their retention periods expire.
• If you delete the resource to which a policy with count-based retention applies, the policy no longer
manages the previously created snapshots or AMIs. You must manually delete the snapshots or
deregister the AMIs if they are no longer needed.
• If you delete the resource to which a policy with age-based retention applies, the policy continues
to delete snapshots or deregister AMIs on the defined schedule, up to, but not including, the last
snapshot or AMI. You must manually delete the last snapshot or deregister the last AMI if it is no
longer needed.
• You can create multiple policies to back up an EBS volume or an Amazon EC2 instance. For example,
if an EBS volume has two tags, where tag A is the target for policy A to create a snapshot every 12
hours, and tag B is the target for policy B to create a snapshot every 24 hours, Amazon Data Lifecycle
Manager creates snapshots according to the schedules for both policies. Alternatively, you can achieve
the same result by creating a single policy that has multiple schedules. For example, you can create a
single policy that targets only tag A, and specify two schedules—one for every 12 hours and one for
every 24 hours.
• If you create a policy that targets instances, and new volumes are attached to the instance after the
policy has been created, the newly-added volumes are included in the backup at the next policy run.
All volumes attached to the instance at the time of the policy run are included.
• For AMI lifecycle policies, when the AMI retention threshold is reached, the oldest AMI is deregistered
and its backing snapshots are deleted.
• If a policy with a custom cron-based schedule and age-based or count-based retention rule is
configured to create only one snapshot or AMI, the policy will not automatically delete that snapshot
or AMI when the retention threshold is reached. You must manually delete the snapshot or deregister
the AMI if it is no longer needed.
The following considerations apply to snapshot lifecycle policies and fast snapshot restore (p. 1547):
• A snapshot that is enabled for fast snapshot restore remains enabled even if you delete or disable the
lifecycle policy, disable fast snapshot restore for the lifecycle policy, or disable fast snapshot restore
for the Availability Zone. You can disable fast snapshot restore for these snapshots manually.
• If you enable fast snapshot restore and you exceed the maximum number of snapshots that can be
enabled for fast snapshot restore, Amazon Data Lifecycle Manager creates snapshots as scheduled
but does not enable them for fast snapshot restore. After a snapshot that is enabled for fast snapshot
restore is deleted, the next snapshot that Amazon Data Lifecycle Manager creates is enabled for fast
snapshot restore.
• When you enable fast snapshot restore for a snapshot, it takes 60 minutes per TiB to optimize
the snapshot. We recommend that you create a schedule that ensures that each snapshot is fully
optimized before Amazon Data Lifecycle Manager creates the next snapshot.
• You are billed for each minute that fast snapshot restore is enabled for a snapshot in a particular
Availability Zone. Charges are pro-rated with a minimum of one hour. For more information, see
Pricing and Billing (p. 1552).
Note
Depending on the configuration of your lifecycle policies, you could have multiple snapshots
enabled for fast snapshot restore simultaneously.
The following considerations apply to snapshot lifecycle policies and Multi-Attach (p. 1355) enabled
volumes:
• When creating a lifecycle policy based on instance tags for Multi-Volume snapshots, Amazon Data
Lifecycle Manager initiates a snapshot of the volume for each attached instance. Use the timestamp
tag to identify the set of time-consistent snapshots that are created from the attached instances.
• You can only share snapshots that are unencrypted or that are encrypted using a customer managed
key.
• You can't share snapshots that are encrypted with the default EBS encryption KMS key.
• If you share encrypted snapshots, then you must also share the KMS key that was used to encrypt the
source volume with the target accounts. For more information, see Allowing users in other accounts to
use a KMS key in the AWS Key Management Service Developer Guide.
• You can only copy snapshots that are unencrypted or that are encrypted using a customer managed
key.
• You can create a cross-account copy event policy that copies snapshots that are shared outside of
Amazon Data Lifecycle Manager.
• If you want to encrypt snapshots in the target account, then the IAM role selected for the cross-
account copy event policy must have permission to use the required KMS key.
The following considerations apply to EBS-backed AMI policies and AMI deprecation:
• If you increase the AMI deprecation count for a schedule with count-based retention, the change is
applied to all AMIs (existing and new) created by the schedule.
• If you increase the AMI deprecation period for a schedule with age-based retention, the change is
applied to new AMIs only. Existing AMIs are not affected.
• If you remove the AMI deprecation rule from a schedule, Amazon Data Lifecycle Manager will not
cancel deprecation for AMIs that were previously deprecated by that schedule.
• If you decrease the AMI deprecation count or period for a schedule, Amazon Data Lifecycle Manager
will not cancel deprecation for AMIs that were previously deprecated by that schedule.
• If you manually deprecate an AMI that was created by an AMI policy, Amazon Data Lifecycle Manager
will not override the deprecation.
• If you manually cancel deprecation for an AMI that was previously deprecated by an AMI policy,
Amazon Data Lifecycle Manager will not override the cancellation.
• If an AMI is created by multiple conflicting schedules, and one or more of those schedules do not have
an AMI deprecation rule, Amazon Data Lifecycle Manager will not deprecate that AMI.
• If an AMI is created by multiple conflicting schedules, and all of those schedules have an AMI
deprecation rule, Amazon Data Lifecycle Manager will use the deprecation rule with the latest
deprecation date.
• If you manually archive a snapshot that was created by a policy, and that snapshot is in the archive
tier when the policy’s retention threshold is reached, Amazon Data Lifecycle Manager will not delete
the snapshot. Amazon Data Lifecycle Manager does not manage snapshots while they are stored in
the archive tier. If you no longer need snapshots that are stored in the archive tier, you must manually
delete them.
• If Amazon Data Lifecycle Manager deletes a snapshot and sends it to the Recycle Bin when the policy's
retention threshold is reached, and you manually restore the snapshot from the Recycle Bin, you must
manually delete that snapshot when it is no longer needed. Amazon Data Lifecycle Manager will not
automatically delete the snapshot.
• If you manually delete a snapshot that was created by a policy, and that snapshot is in the Recycle Bin
when the policy’s retention threshold is reached, Amazon Data Lifecycle Manager will not delete the
snapshot. Amazon Data Lifecycle Manager does not manage the snapshots while they are stored in the
Recycle Bin.
If the snapshot is restored from the Recycle Bin before the policy's retention threshold is reached,
Amazon Data Lifecycle Manager will delete the snapshot when the policy's retention threshold is
reached.
If the snapshot is restored from the Recycle Bin after the policy's retention threshold is reached,
Amazon Data Lifecycle Manager will no longer delete the snapshot. You must manually delete the
snapshot when it is no longer needed.
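The Recycle Bin timing rules above reduce to a single comparison. The following sketch models that behavior; it is an illustration, not an actual DLM API.

```python
def dlm_deletes_restored_snapshot(restored_day, retention_threshold_day):
    """A policy-created snapshot restored from the Recycle Bin is deleted by
    Amazon Data Lifecycle Manager only if it was restored before the policy's
    retention threshold was reached. Restored after the threshold, it must be
    deleted manually."""
    return restored_day < retention_threshold_day

# Restored on day 3 of a 5-day retention policy: DLM still deletes it on day 5.
print(dlm_deletes_restored_snapshot(3, 5))
# Restored on day 7, after the 5-day threshold: DLM no longer deletes it.
print(dlm_deletes_restored_snapshot(7, 5))
```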
New console
a. For Target resource types, choose the type of resource to back up. Choose Volume to
create snapshots of individual volumes, or choose Instance to create multi-volume
snapshots from the volumes attached to an instance.
b. (For AWS Outposts customers only) For Target resource location, specify where the source
resources are located.
• If the source resources are located in an AWS Region, choose AWS Region. Amazon Data
Lifecycle Manager backs up all resources of the specified type that have matching target
tags in the current Region only. If the resource is located in a Region, snapshots created
by the policy will be stored in the same Region.
• If the source resources are located on an Outpost in your account, choose AWS Outpost.
Amazon Data Lifecycle Manager backs up all resources of the specified type that have
matching target tags across all of the Outposts in your account. If the resource is located
on an Outpost, snapshots created by the policy can be stored in the same Region or on
the same Outpost as the resource.
• If you do not have any Outposts in your account, this option is hidden and AWS Region is
selected for you.
c. For Target resource tags, choose the resource tags that identify the volumes or instances to
back up. Only resources that have the specified tag key and value pairs are backed up by the
policy.
5. For Description, enter a brief description for the policy.
6. For IAM role, choose the IAM role that has permissions to manage snapshots and to describe
volumes and instances. To use the default role provided by Amazon Data Lifecycle Manager,
choose Default role. Alternatively, to use a custom IAM role that you previously created, choose
Choose another role and then select the role to use.
7. For Policy tags, add the tags to apply to the lifecycle policy. You can use these tags to identify
and categorize your policies.
8. For Policy status after creation, choose Enable policy to start the policy runs at the next
scheduled time, or Disable policy to prevent the policy from running. If you do not enable the
policy now, it will not start creating snapshots until you manually enable it after creation.
9. Choose Next.
10. On the Configure schedule screen, configure the policy schedules. A policy can have up to 4
schedules. Schedule 1 is mandatory. Schedules 2, 3, and 4 are optional. For each policy schedule
that you add, do the following:
For count-based retention, the range is 1 to 1000. After the maximum count is reached,
the oldest snapshot is deleted when a new one is created.
For age-based retention, the range is 1 day to 100 years. After the retention period of
each snapshot expires, it is deleted.
Note
All schedules must have the same retention type. You can specify the retention
type for Schedule 1 only. Schedules 2, 3, and 4 inherit the retention type from
Schedule 1. Each schedule can have its own retention count or period.
v. (For AWS Outposts customers only) For Snapshot destination, specify the destination
for snapshots created by the policy.
• If the policy targets resources in a Region, snapshots must be created in the same
Region. AWS Region is selected for you.
• If the policy targets resources on an Outpost, you can choose to create snapshots on
the same Outpost as the source resource, or in the Region that is associated with the
Outpost.
• If you do not have any Outposts in your account, this option is hidden and AWS
Region is selected for you.
i. To copy all of the user-defined tags from the source volume to the snapshots created
by the schedule, select Copy tags from source.
ii. To specify additional tags to assign to snapshots created by this schedule, choose Add
tags.
c. To enable fast snapshot restore for snapshots created by the schedule, in the Fast snapshot
restore section, select Enable fast snapshot restore. If you enable fast snapshot restore,
you must choose the Availability Zones in which to enable it. If the schedule uses an age-
based retention schedule, you must specify the period for which to enable fast snapshot
restore for each snapshot. If the schedule uses count-based retention, you must specify the
maximum number of snapshots to enable for fast snapshot restore.
If the schedule creates snapshots on an Outpost, you can't enable fast snapshot restore.
Fast snapshot restore is not supported with local snapshots that are stored on an Outpost.
Note
You are billed for each minute that fast snapshot restore is enabled for a snapshot
in a particular Availability Zone. Charges are pro-rated with a minimum of one
hour.
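As an illustration of the per-minute billing with a one-hour minimum described in the note, a sketch (not an official pricing formula; actual charges depend on the fast snapshot restore rate for the Availability Zone):

```python
def fsr_billable_minutes(enabled_minutes):
    """Fast snapshot restore is billed for each minute it is enabled for a
    snapshot in an Availability Zone, pro-rated with a one-hour minimum."""
    return max(enabled_minutes, 60)

print(fsr_billable_minutes(25))  # under an hour: billed the 60-minute minimum
print(fsr_billable_minutes(90))  # over an hour: billed the actual minutes
```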
d. To copy snapshots created by the schedule to an Outpost or to a different Region, in the
Cross-Region copy section, select Enable cross-Region copy.
If the schedule creates snapshots in a Region, you can copy the snapshots to up to three
additional Regions or Outposts in your account. You must specify a separate cross-Region
copy rule for each destination Region or Outpost.
For each Region or Outpost, you can choose different retention policies and you can choose
whether to copy all tags or no tags. If the source snapshot is encrypted, or if encryption
by default is enabled, the copied snapshots are encrypted. If the source snapshot is
unencrypted, you can enable encryption. If you do not specify a KMS key, the snapshots are
encrypted using the default KMS key for EBS encryption in each destination Region. If you
specify a KMS key for the destination Region, then the selected IAM role must have access
to the KMS key.
Note
You must ensure that you do not exceed the number of concurrent snapshot copies
per Region.
If the policy creates snapshots on an Outpost, then you can't copy the snapshots to a
Region or to another Outpost and the cross-Region copy settings are not available.
e. In the Cross-account sharing section, configure the policy to automatically share the snapshots
created by the schedule with other AWS accounts. Do the following:
i. To enable sharing with other AWS accounts, select Enable cross-account sharing.
ii. To add the accounts with which to share the snapshots, choose Add account, enter the
12-digit AWS account ID, and choose Add.
iii. To automatically unshare shared snapshots after a specific period, select Unshare
automatically. If you choose to automatically unshare shared snapshots, the period
after which to automatically unshare the snapshots cannot be longer than the period
for which the policy retains its snapshots. For example, if the policy's retention
configuration retains snapshots for a period of 5 days, you can configure the policy
to automatically unshare shared snapshots after periods up to 4 days. This applies to
policies with age-based and count-based snapshot retention configurations.
If you do not enable automatic unsharing, the snapshot is shared until it is deleted.
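The relationship between the retention period and the automatic-unshare period can be expressed as a small check. This helper is hypothetical, shown only to make the constraint concrete:

```python
def valid_unshare_period(retention_days, unshare_after_days):
    """The period after which shared snapshots are automatically unshared
    must be shorter than the period for which the policy retains them."""
    return unshare_after_days < retention_days

print(valid_unshare_period(5, 4))  # retain 5 days, unshare after 4: allowed
print(valid_unshare_period(5, 5))  # equal to the retention period: not allowed
```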
Note
You can only share snapshots that are unencrypted or that are encrypted using
a customer managed key. You can't share snapshots that are encrypted with
the default EBS encryption KMS key. If you share encrypted snapshots, then
you must also share the KMS key that was used to encrypt the source volume
with the target accounts. For more information, see Allowing users in other
accounts to use a KMS key in the AWS Key Management Service Developer
Guide.
f. To add additional schedules, choose Add another schedule, which is located at the top of
the screen. For each additional schedule, complete the fields as described previously in this
topic.
g. After you have added the required schedules, choose Review policy.
11. Review the policy summary, and then choose Create policy.
Old console
If you do not have any Outposts in your account, then AWS Region is selected by default.
Note
If the resource is located in a Region, snapshots created by the policy will be stored in
the same Region. If the resource is located on an Outpost, snapshots created by the
policy can be stored in the same Region or on the same Outpost as the resource.
• Target with these tags—The resource tags that identify the volumes or instances to back up.
Only resources that have the specified tag key and value pairs are backed up by the policy.
• Policy tags—The tags to apply to the lifecycle policy.
4. For IAM role, choose the IAM role that has permissions to create, delete, and describe snapshots
and to describe volumes and instances. AWS provides a default role, or you can create a custom
IAM role.
5. Add the policy schedules. Schedule 1 is mandatory. Schedules 2, 3, and 4 are optional. For each
policy schedule that you add, specify the following information:
• Starting at hh:mm UTC—The time at which the policy runs are scheduled to start. The first
policy run starts within an hour after the scheduled time.
• Retention type—You can retain snapshots based on either their total count or their age. For
count-based retention, the range is 1 to 1000. After the maximum count is reached, the oldest
snapshot is deleted when a new one is created. For age-based retention, the range is 1 day
to 100 years. After the retention period of each snapshot expires, it is deleted. The retention
period should be greater than or equal to the interval.
Note
All schedules must have the same retention type. You can specify the retention type
for Schedule 1 only. Schedules 2, 3, and 4 inherit the retention type from Schedule 1.
Each schedule can have its own retention count or period.
• Snapshot destination—Specifies the destination for snapshots created by the policy. To
create snapshots in the same AWS Region as the source resource, choose AWS Region. To
create snapshots on an Outpost, choose AWS Outpost.
If the policy targets resources in a Region, snapshots are created in the same Region, and
cannot be created on an Outpost.
If the policy targets resources on an Outpost, snapshots can be created on the same Outpost
as the source resource, or in the Region that is associated with the Outpost.
• Copy tags from source—Choose whether to copy all of the user-defined tags from the source
volume to the snapshots created by the schedule.
• Variable tags—If the source resource is an instance, you can choose to automatically tag your
snapshots with the following variable tags:
• instance-id—The ID of the source instance.
• timestamp—The date and time of the policy run.
• Additional tags—Specify any additional tags to assign to the snapshots created by this
schedule.
• Fast snapshot restore—Choose whether to enable fast snapshot restore for all snapshots
that are created by the schedule. If you enable fast snapshot restore, you must choose the
Availability Zones in which to enable it. You are billed for each minute that fast snapshot
restore is enabled for a snapshot in a particular Availability Zone. Charges are pro-rated with
a minimum of one hour. You can also specify the maximum number of snapshots that can be
enabled for fast snapshot restore.
If the policy creates snapshots on an Outpost, you can't enable fast snapshot restore. Fast
snapshot restore is not supported with local snapshots that are stored on an Outpost.
• Cross region copy—If the policy creates snapshots in a Region, then you can copy the
snapshots to up to three additional Regions or Outposts in your account. You must specify a
separate cross-Region copy rule for each destination Region or Outpost.
For each Region or Outpost, you can choose different retention policies and you can choose
whether to copy all tags or no tags. If the source snapshot is encrypted, or if encryption by
default is enabled, the copied snapshots are encrypted. If the source snapshot is unencrypted,
you can enable encryption. If you do not specify a KMS key, the snapshots are encrypted using
the default KMS key for EBS encryption in each destination Region. If you specify a KMS key
for the destination Region, then the selected IAM role must have access to the KMS key.
You must ensure that you do not exceed the number of concurrent snapshot copies per
Region.
If the policy creates snapshots on an Outpost, then you can't copy the snapshots to a Region
or to another Outpost and the cross-Region copy settings are not available.
6. For Policy status after creation, choose Enable policy to start the policy runs at the next
scheduled time, or Disable policy to prevent the policy from running.
Command line
Use the create-lifecycle-policy command to create a snapshot lifecycle policy. For PolicyType,
specify EBS_SNAPSHOT_MANAGEMENT.
Note
To simplify the syntax, the following examples use a JSON file, policyDetails.json, that
includes the policy details.
This example creates a snapshot lifecycle policy that creates snapshots of all volumes that have a
tag key of costcenter with a value of 115. The policy includes two schedules. The first schedule
creates a snapshot every day at 03:00 UTC. The second schedule creates a weekly snapshot every
Friday at 17:00 UTC.
{
"PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
"ResourceTypes": [
"VOLUME"
],
"TargetTags": [{
"Key": "costcenter",
"Value": "115"
}],
"Schedules": [{
"Name": "DailySnapshots",
"TagsToAdd": [{
"Key": "type",
"Value": "myDailySnapshot"
}],
"CreateRule": {
"Interval": 24,
"IntervalUnit": "HOURS",
"Times": [
"03:00"
]
},
"RetainRule": {
"Count": 5
},
"CopyTags": false
},
{
"Name": "WeeklySnapshots",
"TagsToAdd": [{
"Key": "type",
"Value": "myWeeklySnapshot"
}],
"CreateRule": {
"CronExpression": "cron(0 17 ? * FRI *)"
},
"RetainRule": {
"Count": 5
},
"CopyTags": false
}
]}
Upon success, the command returns the ID of the newly created policy. The following is example
output.
{
"PolicyId": "policy-0123456789abcdef0"
}
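Because all schedules in a policy must use the same retention type, a quick local check of a policy document before passing it to create-lifecycle-policy can catch mixed schedules early. The following is a sketch that makes no AWS calls; the schedules shown abbreviate the example policy above.

```python
def retention_type(schedule):
    """Count-based retention uses RetainRule.Count; age-based retention
    uses RetainRule.Interval and RetainRule.IntervalUnit."""
    return "count" if "Count" in schedule["RetainRule"] else "age"

# The two schedules from the example policy above, abbreviated:
schedules = [
    {"Name": "DailySnapshots", "RetainRule": {"Count": 5}},
    {"Name": "WeeklySnapshots", "RetainRule": {"Count": 5}},
]

types = {retention_type(s) for s in schedules}
assert len(types) == 1, "All schedules must use the same retention type"
print(types)
```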
Example 2—Snapshot lifecycle policy that automates local snapshots of Outpost resources
This example creates a snapshot lifecycle policy that creates snapshots of volumes tagged with
team=dev across all of your Outposts. The policy creates the snapshots on the same Outposts as the
source volumes. The policy creates snapshots every 12 hours starting at 00:00 UTC.
{
"PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
"ResourceTypes": ["VOLUME"],
"ResourceLocations": ["OUTPOST"],
"TargetTags": [{
"Key": "team",
"Value": "dev"
}],
"Schedules": [{
"Name": "on-site backup",
"CreateRule": {
"Interval": 12,
"IntervalUnit": "HOURS",
"Times": [
"00:00"
],
"Location": "OUTPOST_LOCAL"
},
"RetainRule": {
"Count": 1
},
"CopyTags": false
}
]}
Example 3—Snapshot lifecycle policy that creates snapshots in a Region and copies them to an
Outpost
The following example policy creates snapshots of volumes that are tagged with team=dev.
Snapshots are created in the same Region as the source volume. Snapshots are created every
12 hours starting at 00:00 UTC, and the policy retains a maximum of 1 snapshot. The policy also copies
the snapshots to Outpost arn:aws:outposts:us-east-1:123456789012:outpost/
op-1234567890abcdef0, encrypts the copied snapshots using the default encryption KMS key, and
retains the copies for 1 month.
{
"PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
"ResourceTypes": ["VOLUME"],
"ResourceLocations": ["CLOUD"],
"TargetTags": [{
"Key": "team",
"Value": "dev"
}],
"Schedules": [{
"Name": "on-site backup",
"CopyTags": false,
"CreateRule": {
"Interval": 12,
"IntervalUnit": "HOURS",
"Times": [
"00:00"
],
"Location": "CLOUD"
},
"RetainRule": {
"Count": 1
},
"CrossRegionCopyRules" : [
{
"Target": "arn:aws:outposts:us-east-1:123456789012:outpost/
op-1234567890abcdef0",
"Encrypted": true,
"CopyTags": true,
"RetainRule": {
"Interval": 1,
"IntervalUnit": "MONTHS"
}
}]
}
]}
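The example policy above uses a single CrossRegionCopyRules entry; a policy can have at most three, one per destination Region or Outpost, each carrying its own retention rule. A minimal local check before submitting a policy document might look like this (a sketch, no AWS calls):

```python
# Copy rules abbreviated from the example policy above.
copy_rules = [
    {
        "Target": "arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0",
        "Encrypted": True,
        "RetainRule": {"Interval": 1, "IntervalUnit": "MONTHS"},
    },
]

assert len(copy_rules) <= 3, "At most 3 cross-Region copy rules per policy"
assert all("Target" in rule for rule in copy_rules), "Each rule needs a destination"
print(f"{len(copy_rules)} copy rule(s) configured")
```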
New console
3. On the Select policy type screen, choose EBS-backed AMI policy, and then choose Next.
4. In the Target resources section, for Target resource tags, choose the resource tags that identify
the volumes or instances to back up. The policy backs up only the resources that have the
specified tag key and value pairs.
5. For Description, enter a brief description for the policy.
6. For IAM role, choose the IAM role that has permissions to manage AMIs and snapshots and to
describe instances. To use the default role provided by Amazon Data Lifecycle Manager, choose
Default role. Alternatively, to use a custom IAM role that you previously created, choose Choose
another role, and then select the role to use.
7. For Policy tags, add the tags to apply to the lifecycle policy. You can use these tags to identify
and categorize your policies.
8. For Policy status after creation, choose Enable policy to start running the policy at the next
scheduled time, or Disable policy to prevent the policy from running. If you do not enable the
policy now, it will not start creating AMIs until you manually enable it after creation.
9. In the Instance reboot section, indicate whether instances should be rebooted before AMI
creation. To prevent the targeted instances from being rebooted, choose No. Choosing No
could cause data consistency issues. To reboot instances before AMI creation, choose Yes.
Choosing this ensures data consistency, but could result in multiple targeted instances rebooting
simultaneously.
10. Choose Next.
11. On the Configure schedule screen, configure the policy schedules. A policy can have up to four
schedules. Schedule 1 is mandatory. Schedules 2, 3, and 4 are optional. For each policy schedule
that you add, do the following:
For count-based retention, the range is 1 to 1000. After the maximum count is reached,
the oldest AMI is deregistered when a new one is created.
For age-based retention, the range is 1 day to 100 years. After the retention period of
each AMI expires, it is deregistered.
Note
All schedules must have the same retention type. You can specify the retention
type for Schedule 1 only. Schedules 2, 3, and 4 inherit the retention type from
Schedule 1. Each schedule can have its own retention count or period.
b. In the Tagging section, do the following:
i. To copy all of the user-defined tags from the source instance to the AMIs created by the
schedule, select Copy tags from source.
ii. By default, AMIs created by the schedule are automatically tagged with the ID of the
source instance. To prevent this automatic tagging from happening, for Variable tags,
remove the instance-id:$(instance-id) tile.
iii. To specify additional tags to assign to AMIs created by this schedule, choose Add tags.
c. To deprecate AMIs when they should no longer be used, in the AMI deprecation section,
select Enable AMI deprecation for this schedule and then specify the AMI deprecation rule.
The AMI deprecation rule specifies when AMIs are to be deprecated.
If the schedule uses count-based AMI retention, you must specify the number of oldest
AMIs to deprecate. The deprecation count must be less than or equal to the schedule's
AMI retention count, and it can't be greater than 1000. For example, if the schedule is
configured to retain a maximum of 5 AMIs, then you can configure the schedule to
deprecate up to the 5 oldest AMIs.
If the schedule uses age-based AMI retention, you must specify the period after which AMIs
are to be deprecated. The deprecation period must be less than or equal to the schedule's
AMI retention period, and it can't be greater than 10 years (120 months, 520 weeks, or
3650 days). For example, if the schedule is configured to retain AMIs for 10 days, then you
can configure the schedule to deprecate AMIs after periods up to 10 days after creation.
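The deprecation constraints above can be expressed as small validation helpers. These functions are a sketch for local sanity checks, not part of any AWS SDK:

```python
def valid_deprecate_count(retain_count, deprecate_count):
    """Count-based retention: the deprecation count must not exceed the
    schedule's retention count, and can't be greater than 1000."""
    return 1 <= deprecate_count <= min(retain_count, 1000)

def valid_deprecate_period(retain_days, deprecate_days):
    """Age-based retention: the deprecation period must not exceed the
    retention period, and can't be greater than 10 years (3650 days)."""
    return 1 <= deprecate_days <= min(retain_days, 3650)

print(valid_deprecate_count(5, 5))     # deprecate up to the 5 oldest of 5 retained
print(valid_deprecate_count(5, 6))     # more than are retained: invalid
print(valid_deprecate_period(10, 10))  # deprecate at the end of a 10-day retention
```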
d. To copy AMIs created by the schedule to different Regions, in the Cross-Region copy
section, select Enable cross-Region copy. You can copy AMIs to up to three additional
Regions in your account. You must specify a separate cross-Region copy rule for each
destination Region.
• A retention policy for the AMI copy. When the retention period expires, the copy in the
destination Region is automatically deregistered.
• Encryption status for the AMI copy. If the source AMI is encrypted, or if encryption
by default is enabled, the copied AMIs are always encrypted. If the source AMI is
unencrypted and encryption by default is disabled, you can optionally enable encryption.
If you do not specify a KMS key, the AMIs are encrypted using the default KMS key for
EBS encryption in each destination Region. If you specify a KMS key for the destination
Region, then the selected IAM role must have access to the KMS key.
• A deprecation rule for the AMI copy. When the deprecation period expires, the AMI copy is
automatically deprecated. The deprecation period must be less than or equal to the copy
retention period, and it can't be greater than 10 years.
• Whether to copy all tags or no tags from the source AMI.
Note
Do not exceed the number of concurrent AMI copies per Region.
e. To add additional schedules, choose Add another schedule, which is located at the top of
the screen. For each additional schedule, complete the fields as described previously in this
topic.
f. After you have added the required schedules, choose Review policy.
12. Review the policy summary, and then choose Create policy.
Console
• Target with these tags—The resource tags that identify the instances to back up. Only
instances that have the specified tag key and value pairs are backed up by the policy.
• Policy tags—The tags to apply to the lifecycle policy.
4. For IAM role, choose the IAM role that has permissions to manage images. AWS provides a
default role, or you can create a custom IAM role.
5. Add the policy schedules. Schedule 1 is mandatory. Schedules 2, 3, and 4 are optional. For each
policy schedule that you add, specify the following information:
For each Region, you can choose different retention policies and you can choose whether to
copy all tags or no tags. If the source AMI is encrypted, or if encryption by default is enabled,
the copied AMIs are encrypted. If the AMI is unencrypted, you can enable encryption. If you do
not specify a KMS key, the AMIs are encrypted using the default KMS key for EBS encryption
in each destination Region. If you specify a KMS key for the destination Region, then the
selected IAM role must have access to the KMS key.
Command line
Use the create-lifecycle-policy command to create an AMI lifecycle policy. For PolicyType, specify
IMAGE_MANAGEMENT.
Note
To simplify the syntax, the following examples use a JSON file, policyDetails.json, that
includes the policy details.
This example creates an AMI lifecycle policy that creates AMIs of all instances that have a tag key of
purpose with a value of production without rebooting the targeted instances. The policy includes
one schedule that creates an AMI every day at 01:00 UTC. The policy retains AMIs for 2 days and
deprecates them after 1 day. It also copies the tags from the source instance to the AMIs that it
creates.
{
"PolicyType": "IMAGE_MANAGEMENT",
"ResourceTypes": [
"INSTANCE"
],
"TargetTags": [{
"Key": "purpose",
"Value": "production"
}],
"Schedules": [{
"Name": "DailyAMIs",
"TagsToAdd": [{
"Key": "type",
"Value": "myDailyAMI"
}],
"CreateRule": {
"Interval": 24,
"IntervalUnit": "HOURS",
"Times": [
"01:00"
]
},
"RetainRule": {
"Interval" : 2,
"IntervalUnit" : "DAYS"
},
"DeprecateRule": {
"Interval" : 1,
"IntervalUnit" : "DAYS"
},
"CopyTags": true
}
],
"Parameters" : {
"NoReboot":true
}
}
Upon success, the command returns the ID of the newly created policy. The following is example
output.
{
"PolicyId": "policy-9876543210abcdef0"
}
This example creates an AMI lifecycle policy that creates AMIs of all instances that have a tag key
of purpose with a value of production and reboots the target instances. The policy includes one
schedule that creates an AMI every 6 hours starting at 17:30 UTC. The policy retains 3 AMIs and
automatically deprecates the 2 oldest AMIs. It also has a cross-Region copy rule that copies AMIs to
us-east-1, retains 2 AMI copies, and automatically deprecates the oldest AMI.
{
"PolicyType": "IMAGE_MANAGEMENT",
"ResourceTypes" : [
"INSTANCE"
],
"TargetTags": [{
"Key":"purpose",
"Value":"production"
}],
"Parameters" : {
"NoReboot": true
},
"Schedules" : [{
"Name" : "Schedule1",
"CopyTags": true,
"CreateRule" : {
"Interval": 6,
"IntervalUnit": "HOURS",
"Times" : ["17:30"]
},
"RetainRule":{
"Count" : 3
},
"DeprecateRule":{
"Count" : 2
},
"CrossRegionCopyRules": [{
"TargetRegion": "us-east-1",
"Encrypted": true,
"RetainRule":{
"IntervalUnit": "DAYS",
"Interval": 2
},
"DeprecateRule":{
"IntervalUnit": "DAYS",
"Interval": 1
},
"CopyTags": true
}]
}]
}
• Source account—The source account is the account that creates and shares the snapshots with the
target account. In this account, you must create an EBS snapshot policy that creates snapshots at set
intervals and then shares them with other AWS accounts.
• Target account—The target account is the destination account with which the snapshots
are shared, and it is the account that creates copies of the shared snapshots. In this account, you must
create a cross-account copy event policy that automatically copies snapshots that are shared with it by
one or more specified source accounts.
Topics
• Create cross-account snapshot copy policies (p. 1497)
• Specify snapshot description filters (p. 1504)
Topics
• Step 1: Create the EBS snapshot policy (Source account) (p. 1497)
• Step 2: Share the customer managed key (Source account) (p. 1498)
• Step 3: Create cross-account copy event policy (Target account) (p. 1499)
• Step 4: Allow IAM role to use the required KMS keys (Target account) (p. 1502)
In the source account, create an EBS snapshot policy that will create the snapshots and share them with
the required target accounts.
When you create the policy, ensure that you enable cross-account sharing and that you specify the target
AWS accounts with which to share the snapshots. These are the accounts with which the snapshots are
to be shared. If you are sharing encrypted snapshots, then you must give the selected target accounts
permission to use the KMS key used to encrypt the source volume. For more information, see Step 2:
Share the customer managed key (Source account) (p. 1498).
Note
You can only share snapshots that are unencrypted or that are encrypted using a customer
managed key. You can't share snapshots that are encrypted with the default EBS encryption
KMS key. If you share encrypted snapshots, then you must also share the KMS key that was used
to encrypt the source volume with the target accounts. For more information, see Allowing
users in other accounts to use a KMS key in the AWS Key Management Service Developer Guide.
For more information about creating an EBS snapshot policy, see Automate snapshot lifecycles (p. 1484).
Use one of the following methods to create the EBS snapshot policy.
If you are sharing encrypted snapshots, you must grant the IAM role and the target AWS accounts
(that you selected in the previous step) permissions to use the customer managed key that was used to
encrypt the source volume.
Note
Perform this step only if you are sharing encrypted snapshots. If you are sharing unencrypted
snapshots, skip this step.
Console
Make note of the KMS key ARN; you'll need it later.
4. On the Key policy tab, scroll down to the Key users section. Choose Add, enter the name of the
IAM role that you selected in the previous step, and then choose Add.
5. On the Key policy tab, scroll down to the Other AWS accounts section. Choose Add other AWS
accounts, and then add all of the target AWS accounts that you chose to share the snapshots
with in the previous step.
6. Choose Save changes.
Command line
Use the get-key-policy command to retrieve the key policy that is currently attached to the KMS key.
For example, the following command retrieves the key policy for a KMS key with an ID of
9d5e2b3d-e410-4a27-a958-19e220d83a1e and writes it to a file named snapshotKey.json.
aws kms get-key-policy \
    --policy-name default \
    --key-id 9d5e2b3d-e410-4a27-a958-19e220d83a1e \
    --query Policy \
    --output text > snapshotKey.json
Open the key policy using your preferred text editor. Add the ARN of the IAM role that you specified
when you created the snapshot policy and the ARNs of the target accounts with which to share the
KMS key.
For example, in the following policy, we added the ARN of the default IAM role, and the ARN of the
root account for target account 222222222222.
{
"Sid" : "Allow use of the key",
"Effect" : "Allow",
"Principal" : {
"AWS" : [
"arn:aws:iam::111111111111:role/service-role/
AWSDataLifecycleManagerDefaultRole",
"arn:aws:iam::222222222222:root"
]
},
"Action" : [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource" : "*"
},
{
"Sid" : "Allow attachment of persistent resources",
"Effect" : "Allow",
"Principal" : {
"AWS" : [
"arn:aws:iam::111111111111:role/service-role/
AWSDataLifecycleManagerDefaultRole",
"arn:aws:iam::222222222222:root"
]
},
"Action" : [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource" : "*",
"Condition" : {
"Bool" : {
"kms:GrantIsForAWSResource" : "true"
}
}
}
Save and close the file. Then use the put-key-policy command to attach the updated key policy to
the KMS key.
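Instead of hand-editing the file, the same change can be scripted. The sketch below adds the example principals from this section to the "Allow use of the key" statement of a key policy loaded as a dict; the policy document is abbreviated here for brevity.

```python
import json

# Abbreviated key policy document, as retrieved by get-key-policy.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Allow use of the key",
        "Effect": "Allow",
        "Principal": {"AWS": []},
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
                   "kms:GenerateDataKey*", "kms:DescribeKey"],
        "Resource": "*",
    }],
}

# The default IAM role and the target account's root user from this example.
new_principals = [
    "arn:aws:iam::111111111111:role/service-role/AWSDataLifecycleManagerDefaultRole",
    "arn:aws:iam::222222222222:root",
]

# Grant the new principals use of the key.
for stmt in key_policy["Statement"]:
    if stmt["Sid"] == "Allow use of the key":
        stmt["Principal"]["AWS"].extend(new_principals)

print(json.dumps(key_policy, indent=2))
```

The resulting JSON can then be saved and attached with put-key-policy as described above.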
In the target account, you must create a cross-account copy event policy that will automatically copy
snapshots that are shared by the required source accounts.
This policy runs in the target account only when one of the specified source accounts shares a
snapshot with the account.
Use one of the following methods to create the cross-account copy event policy.
New console
a. For Sharing accounts, specify the source AWS accounts from which you want to copy
the shared snapshots. Choose Add account, enter the 12-digit AWS account ID, and then
choose Add.
b. For Filter by description, enter the required snapshot description using a regular
expression. Only snapshots that are shared by the specified source accounts and that have
descriptions that match the specified filter are copied by the policy. For more information,
see Specify snapshot description filters (p. 1504).
7. For IAM role, choose the IAM role that has permissions to perform snapshot copy actions. To use
the default role provided by Amazon Data Lifecycle Manager, choose Default role. Alternatively,
to use a custom IAM role that you previously created, choose Choose another role and then
select the role to use.
If you are copying encrypted snapshots, you must grant the selected IAM role permissions to
use the encryption KMS key used to encrypt the source volume. Similarly, if you are encrypting
the snapshot in the destination Region using a different KMS key, you must grant the IAM role
permission to use the destination KMS key. For more information, see Step 4: Allow IAM role to
use the required KMS keys (Target account) (p. 1502).
8. In the Copy action section, define the snapshot copy actions that the policy should perform
when it is activated. The policy can copy snapshots to up to three Regions. You must specify a
separate copy rule for each destination Region. For each rule that you add, do the following:
Old console
If you are copying encrypted snapshots, you must grant the selected IAM role permissions to
use the encryption KMS key used to encrypt the source volume. Similarly, if you are encrypting
the snapshot in the destination Region using a different KMS key, you must grant the IAM role
permission to use the destination KMS key. For more information, see Step 4: Allow IAM role to
use the required KMS keys (Target account) (p. 1502).
7. In the Copy settings section, you can configure the policy to copy snapshots to up to three
Regions in the target account. Do the following:
Command line
Use the create-lifecycle-policy command to create a policy. To create a cross-account copy event
policy, for PolicyType, specify EVENT_BASED_POLICY.
For example, the following command creates a cross-account copy event policy in target account
222222222222. The policy copies snapshots that are shared by source account 111111111111.
The policy copies snapshots to sa-east-1 and eu-west-2. Snapshots copied to sa-east-1 are
unencrypted and they are retained for 3 days. Snapshots copied to eu-west-2 are encrypted using
KMS key 8af79514-350d-4c52-bac8-8985e84171c7 and they are retained for 1 month. The
policy uses the default IAM role.
{
"PolicyType" : "EVENT_BASED_POLICY",
"EventSource" : {
"Type" : "MANAGED_CWE",
"Parameters": {
"EventType" : "shareSnapshot",
"SnapshotOwner": ["111111111111"]
}
},
"Actions" : [{
"Name" :"Copy Snapshot to Sao Paulo and London",
"CrossRegionCopy" : [{
"Target" : "sa-east-1",
"EncryptionConfiguration" : {
"Encrypted" : false
},
"RetainRule" : {
"Interval" : 3,
"IntervalUnit" : "DAYS"
}
},
{
"Target" : "eu-west-2",
"EncryptionConfiguration" : {
"Encrypted" : true,
"CmkArn" : "arn:aws:kms:eu-west-2:222222222222:key/8af79514-350d-4c52-bac8-8985e84171c7"
},
"RetainRule" : {
"Interval" : 1,
"IntervalUnit" : "MONTHS"
}
}]
}]
}
Upon success, the command returns the ID of the newly created policy. The following is example
output.
{
"PolicyId": "policy-9876543210abcdef0"
}
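If you generate policy documents programmatically rather than writing the JSON by hand, the same EVENT_BASED_POLICY can be built and serialized with a short Python sketch. Everything below mirrors the example values above; the printed JSON is what you would save to a file and pass to create-lifecycle-policy:

```python
import json

# Cross-account copy event policy matching the example above.
policy_details = {
    "PolicyType": "EVENT_BASED_POLICY",
    "EventSource": {
        "Type": "MANAGED_CWE",
        "Parameters": {
            "EventType": "shareSnapshot",
            "SnapshotOwner": ["111111111111"],  # source account
        },
    },
    "Actions": [{
        "Name": "Copy Snapshot to Sao Paulo and London",
        "CrossRegionCopy": [
            {
                # Unencrypted copies, retained for 3 days.
                "Target": "sa-east-1",
                "EncryptionConfiguration": {"Encrypted": False},
                "RetainRule": {"Interval": 3, "IntervalUnit": "DAYS"},
            },
            {
                # Encrypted copies, retained for 1 month.
                "Target": "eu-west-2",
                "EncryptionConfiguration": {
                    "Encrypted": True,
                    "CmkArn": ("arn:aws:kms:eu-west-2:222222222222:key/"
                               "8af79514-350d-4c52-bac8-8985e84171c7"),
                },
                "RetainRule": {"Interval": 1, "IntervalUnit": "MONTHS"},
            },
        ],
    }],
}

print(json.dumps(policy_details, indent=2))
```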
Step 4: Allow IAM role to use the required KMS keys (Target account)
If you are copying encrypted snapshots, you must grant the IAM role (that you selected in the previous
step) permissions to use the customer managed key that was used to encrypt the source volume.
Note
Only perform this step if you are copying encrypted snapshots. If you are copying unencrypted
snapshots, skip this step.
Use one of the following methods to add the required policies to the IAM role.
Console
In the following example, the policy grants the IAM role permission to use KMS key
1234abcd-12ab-34cd-56ef-1234567890ab, which was shared by source account
111111111111, and KMS key 4567dcba-23ab-34cd-56ef-0987654321yz, which exists in
target account 222222222222.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:RevokeGrant",
"kms:CreateGrant",
"kms:ListGrants"
],
"Resource": [
"arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
"arn:aws:kms:us-east-1:222222222222:key/4567dcba-23ab-34cd-56ef-0987654321yz"
],
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": "true"
}
}
},
{
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": [
"arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
"arn:aws:kms:us-east-1:222222222222:key/4567dcba-23ab-34cd-56ef-0987654321yz"
]
}
]
}
Command line
Using your preferred text editor, create a new JSON file named policyDetails.json. Add the
following policy and specify the ARN of the KMS key that was used to encrypt the source volumes
and that was shared with you by the source account in Step 2.
Note
If you are copying from multiple source accounts, then you must specify the corresponding
KMS key ARN from each source account.
In the following example, the policy grants the IAM role permission to use KMS key
1234abcd-12ab-34cd-56ef-1234567890ab, which was shared by source account 111111111111, and
KMS key 4567dcba-23ab-34cd-56ef-0987654321yz, which exists in target account 222222222222.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:RevokeGrant",
"kms:CreateGrant",
"kms:ListGrants"
],
"Resource": [
"arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
"arn:aws:kms:us-east-1:222222222222:key/4567dcba-23ab-34cd-56ef-0987654321yz"
],
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": "true"
}
}
},
{
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": [
"arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
"arn:aws:kms:us-east-1:222222222222:key/4567dcba-23ab-34cd-56ef-0987654321yz"
]
}
]
}
Save and close the file. Then use the put-role-policy command to add the policy to the IAM role.
The snapshot description filter must be specified using a regular expression. It is a mandatory field
when you create cross-account copy event policies using the console and the command line. The
following are example regular expressions that can be used:
• .*—This filter matches all snapshot descriptions. If you use this expression, the policy copies all
snapshots that are shared by one of the specified source accounts.
• Created for policy: policy-0123456789abcdef0.*—This filter matches only snapshots that
are created by a policy with the ID policy-0123456789abcdef0. If you use an expression like this,
the policy copies only snapshots that are shared with your account by one of the specified source
accounts and that were created by a policy with the specified ID.
• .*production.*—This filter matches any snapshot that has the word production anywhere in its
description. If you use this expression, the policy copies all snapshots that are shared by one of the
specified source accounts and that have the specified text in their description.
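The matching behavior of these filters can be sketched locally with Python's re module. This is only an illustration; Amazon Data Lifecycle Manager evaluates the expression on the service side, so the exact regex dialect is an assumption, and the sample descriptions below are made up for the demonstration:

```python
import re

def matches_filter(description_filter, snapshot_description):
    # fullmatch approximates a filter that must account for the whole
    # description, which is why the examples above wrap text in .*
    return re.fullmatch(description_filter, snapshot_description) is not None

descriptions = [
    "Created for policy: policy-0123456789abcdef0 schedule: Default",
    "Nightly production backup",
    "Ad hoc snapshot",
]

# .* matches every description.
assert all(matches_filter(r".*", d) for d in descriptions)

# The policy-ID filter matches only the snapshot created by that policy.
policy_filter = r"Created for policy: policy-0123456789abcdef0.*"
assert [matches_filter(policy_filter, d) for d in descriptions] == [True, False, False]

# .*production.* matches only descriptions containing "production".
assert [matches_filter(r".*production.*", d) for d in descriptions] == [False, True, False]
```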
Topics
• View lifecycle policies (p. 1505)
• Modify lifecycle policies (p. 1506)
• Delete lifecycle policies (p. 1359)
Console
Command line
The following is example output. It includes the information that you specified, plus metadata
inserted by AWS.
{
"Policy":{
"Description": "My first policy",
"DateCreated": "2018-05-15T00:16:21+0000",
"State": "ENABLED",
"ExecutionRoleArn":
"arn:aws:iam::210774411744:role/AWSDataLifecycleManagerDefaultRole",
"PolicyId": "policy-0123456789abcdef0",
"DateModified": "2018-05-15T00:16:22+0000",
"PolicyDetails": {
"PolicyType":"EBS_SNAPSHOT_MANAGEMENT",
"ResourceTypes": [
"VOLUME"
],
"TargetTags": [
{
"Value": "115",
"Key": "costcenter"
}
],
"Schedules": [
{
"TagsToAdd": [
{
"Value": "myDailySnapshot",
"Key": "type"
}
],
"RetainRule": {
"Count": 5
},
"CopyTags": false,
"CreateRule": {
"Interval": 24,
"IntervalUnit": "HOURS",
"Times": [
"03:00"
]
},
"Name": "DailySnapshots"
}
]
}
}
}
Console
Command line
Use the update-lifecycle-policy command to modify the information in a lifecycle policy. To simplify
the syntax, this example references a JSON file, policyDetailsUpdated.json, that includes the
policy details.
{
"ResourceTypes":[
"VOLUME"
],
"TargetTags":[
{
"Key": "costcenter",
"Value": "120"
}
],
"Schedules":[
{
"Name": "DailySnapshots",
"TagsToAdd": [
{
"Key": "type",
"Value": "myDailySnapshot"
}
],
"CreateRule": {
"Interval": 12,
"IntervalUnit": "HOURS",
"Times": [
"15:00"
]
},
"RetainRule": {
"Count" :5
},
"CopyTags": false
}
]
}
To view the updated policy, use the get-lifecycle-policy command. You can see that the state,
the value of the tag, the snapshot interval, and the snapshot start time were changed.
Old console
Command line
Use the delete-lifecycle-policy command to delete a lifecycle policy and free up the target tags
specified in the policy for reuse.
Note
You can delete only snapshots created by Amazon Data Lifecycle Manager.
The Amazon Data Lifecycle Manager API Reference provides descriptions and syntax for each of the
actions and data types for the Amazon Data Lifecycle Manager Query API.
Alternatively, you can use one of the AWS SDKs to access the API in a way that's tailored to the
programming language or platform that you're using. For more information, see AWS SDKs.
Topics
• AWS managed policies (p. 1508)
• IAM service roles (p. 1511)
• Permissions for IAM users (p. 1514)
• Permissions for encryption (p. 1515)
However, you can't change the permissions defined in AWS managed policies. AWS occasionally updates
the permissions defined in an AWS managed policy. When this occurs, the update affects all principal
entities (users, groups, and roles) that the policy is attached to.
Amazon Data Lifecycle Manager provides two AWS managed policies for common use cases. These
policies make it more efficient to define the appropriate permissions and control access to your
resources. The AWS managed policies provided by Amazon Data Lifecycle Manager are designed to be
attached to roles that you pass to Amazon Data Lifecycle Manager.
The following are the AWS managed policies that Amazon Data Lifecycle Manager provides. You can also
find these AWS managed policies in the Policies section of the IAM console.
AWSDataLifecycleManagerServiceRole
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot",
"ec2:CreateSnapshots",
"ec2:DeleteSnapshot",
"ec2:DescribeInstances",
"ec2:DescribeVolumes",
"ec2:DescribeSnapshots",
"ec2:EnableFastSnapshotRestores",
"ec2:DescribeFastSnapshotRestores",
"ec2:DisableFastSnapshotRestores",
"ec2:CopySnapshot",
"ec2:ModifySnapshotAttribute",
"ec2:DescribeSnapshotAttribute"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*::snapshot/*"
},
{
"Effect": "Allow",
"Action": [
"events:PutRule",
"events:DeleteRule",
"events:DescribeRule",
"events:EnableRule",
"events:DisableRule",
"events:ListTargetsByRule",
"events:PutTargets",
"events:RemoveTargets"
],
"Resource": "arn:aws:events:*:*:rule/AwsDataLifecycleRule.managed-cwe.*"
}
]
}
AWSDataLifecycleManagerServiceRoleForAMIManagement
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": [
"arn:aws:ec2:*::snapshot/*",
"arn:aws:ec2:*::image/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeImages",
"ec2:DescribeInstances",
"ec2:DescribeImageAttribute",
"ec2:DescribeVolumes",
"ec2:DescribeSnapshots",
"ec2:EnableImageDeprecation",
"ec2:DisableImageDeprecation"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:DeleteSnapshot",
"Resource": "arn:aws:ec2:*::snapshot/*"
},
{
"Effect": "Allow",
"Action": [
"ec2:ResetImageAttribute",
"ec2:DeregisterImage",
"ec2:CreateImage",
"ec2:CopyImage",
"ec2:ModifyImageAttribute"
],
"Resource": "*"
}
]
}
AWS services maintain and update AWS managed policies. You can't change the permissions in AWS
managed policies. Services occasionally add additional permissions to an AWS managed policy to
support new features. This type of update affects all identities (users, groups, and roles) where the policy
is attached. Services are most likely to update an AWS managed policy when a new feature is launched
or when new operations become available. Services do not remove permissions from an AWS managed
policy, so policy updates won't break your existing permissions.
The following table provides details about updates to AWS managed policies for Amazon Data Lifecycle
Manager since this service began tracking these changes. For automatic alerts about changes to this
page, subscribe to the RSS feed on the Document history (p. 1756).
AWSDataLifecycleManagerServiceRoleForAMIManagement: Added permissions to support AMI
deprecation. Amazon Data Lifecycle Manager added the ec2:EnableImageDeprecation and
ec2:DisableImageDeprecation actions to grant EBS-backed AMI policies permission to enable and
disable AMI deprecation. (August 23, 2021)
The role that you pass to Amazon Data Lifecycle Manager must have an IAM policy with the permissions
that enable Amazon Data Lifecycle Manager to perform actions associated with policy operations, such
as creating snapshots and AMIs, copying snapshots and AMIs, deleting snapshots, and deregistering
AMIs. Different permissions are required for each of the Amazon Data Lifecycle Manager policy types.
The role must also have Amazon Data Lifecycle Manager listed as a trusted entity, which enables Amazon
Data Lifecycle Manager to assume the role.
Topics
• Default service roles for Amazon Data Lifecycle Manager (p. 1511)
• Custom service roles for Amazon Data Lifecycle Manager (p. 1512)
Amazon Data Lifecycle Manager uses the following default service roles:
If you are using the Amazon Data Lifecycle Manager console, Amazon Data Lifecycle Manager
automatically creates the AWSDataLifecycleManagerDefaultRole service role the first time
you create a snapshot or cross-account snapshot copy policy, and it automatically creates the
AWSDataLifecycleManagerDefaultRoleForAMIManagement service role the first time you create an
EBS-backed AMI policy.
If you are not using the console, you can manually create the service roles using the create-default-role
command. For --resource-type, specify snapshot to create AWSDataLifecycleManagerDefaultRole,
or image to create AWSDataLifecycleManagerDefaultRoleForAMIManagement.
If you delete the default service roles, and then need to create them again, you can use the same process
to recreate them in your account.
As an alternative to using the default service roles, you can create custom IAM roles with the required
permissions and then select them when you create a lifecycle policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot",
"ec2:CreateSnapshots",
"ec2:DeleteSnapshot",
"ec2:DescribeInstances",
"ec2:DescribeVolumes",
"ec2:DescribeSnapshots",
"ec2:EnableFastSnapshotRestores",
"ec2:DescribeFastSnapshotRestores",
"ec2:DisableFastSnapshotRestores",
"ec2:CopySnapshot",
"ec2:ModifySnapshotAttribute",
"ec2:DescribeSnapshotAttribute"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*::snapshot/*"
},
{
"Effect": "Allow",
"Action": [
"events:PutRule",
"events:DeleteRule",
"events:DescribeRule",
"events:EnableRule",
"events:DisableRule",
"events:ListTargetsByRule",
"events:PutTargets",
"events:RemoveTargets"
],
"Resource": "arn:aws:events:*:*:rule/AwsDataLifecycleRule.managed-cwe.*"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": [
"arn:aws:ec2:*::snapshot/*",
"arn:aws:ec2:*::image/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeImages",
"ec2:DescribeInstances",
"ec2:DescribeImageAttribute",
"ec2:DescribeVolumes",
"ec2:DescribeSnapshots"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:DeleteSnapshot",
"Resource": "arn:aws:ec2:*::snapshot/*"
},
{
"Effect": "Allow",
"Action": [
"ec2:ResetImageAttribute",
"ec2:DeregisterImage",
"ec2:CreateImage",
"ec2:CopyImage",
"ec2:ModifyImageAttribute"
],
"Resource": "*"
}]
}
For more information, see Creating a Role in the IAM User Guide.
2. Add a trust relationship to the roles.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "dlm.amazonaws.com"
},
"Action": "sts:AssumeRole"
}]
}
We recommend that you use the aws:SourceAccount and aws:SourceArn condition keys
to protect yourself against the confused deputy problem. For example, you could add the
following condition block to the previous trust policy. The aws:SourceAccount is the owner
of the lifecycle policy and the aws:SourceArn is the ARN of the lifecycle policy. If you don't
know the lifecycle policy ID, you can replace that portion of the ARN with a wildcard (*) and
then update the trust policy after you create the lifecycle policy.
"Condition": {
"StringEquals": {
"aws:SourceAccount": "account_id"
},
"ArnLike": {
"aws:SourceArn": "arn:partition:dlm:region:account_id:policy/policy_id"
}
}
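As a sketch of how that condition block slots into the trust policy, the following Python builds the combined document. The account ID and Region are placeholders, and the wildcard policy ID can be tightened once the lifecycle policy exists:

```python
import json

account_id = "123456789012"  # placeholder owner of the lifecycle policy

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "dlm.amazonaws.com"},
        "Action": "sts:AssumeRole",
        # Confused-deputy protection: the role can be assumed only for
        # lifecycle policies owned by this account.
        "Condition": {
            "StringEquals": {"aws:SourceAccount": account_id},
            "ArnLike": {
                "aws:SourceArn": f"arn:aws:dlm:us-east-1:{account_id}:policy/*"
            },
        },
    }],
}

print(json.dumps(trust_policy, indent=2))
```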
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot",
"ec2:CreateSnapshots",
"ec2:DeleteSnapshot",
"ec2:DescribeInstances",
"ec2:DescribeVolumes",
"ec2:DescribeSnapshots",
"ec2:EnableFastSnapshotRestores",
"ec2:DescribeFastSnapshotRestores",
"ec2:DisableFastSnapshotRestores",
"ec2:CopySnapshot",
"ec2:ModifySnapshotAttribute",
"ec2:DescribeSnapshotAttribute"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*::snapshot/*"
},
{
"Effect": "Allow",
"Action": [
"events:PutRule",
"events:DeleteRule",
"events:DescribeRule",
"events:EnableRule",
"events:DisableRule",
"events:ListTargetsByRule",
"events:PutTargets",
"events:RemoveTargets"
],
"Resource": "arn:aws:events:*:*:rule/AwsDataLifecycleRule.managed-cwe.*"
}
]
}
For more information, see Changing Permissions for an IAM User in the IAM User Guide.
If you enable cross-Region copy for unencrypted snapshots or AMIs backed by unencrypted snapshots,
and choose to enable encryption in the destination Region, ensure that the default roles have permission
to use the KMS key needed to perform the encryption in the destination Region.
If you enable cross-Region copy for encrypted snapshots or AMIs backed by encrypted snapshots, ensure
that the default roles have permission to use both the source and destination KMS keys.
For more information, see Allowing users in other accounts to use a KMS key in the AWS Key
Management Service Developer Guide.
Features
• Console and AWS CLI (p. 1515)
• AWS CloudTrail (p. 1515)
• Monitor your policies using CloudWatch Events (p. 1515)
• Monitor your policies using Amazon CloudWatch (p. 1516)
AWS CloudTrail
With AWS CloudTrail, you can track user activity and API usage to demonstrate compliance with internal
policies and regulatory standards. For more information, see the AWS CloudTrail User Guide.
• createSnapshot—An Amazon EBS event emitted when a CreateSnapshot action succeeds or fails.
For more information, see Amazon CloudWatch Events for Amazon EBS (p. 1602).
• DLM Policy State Change—An Amazon Data Lifecycle Manager event emitted when a lifecycle
policy enters an error state. The event contains a description of what caused the error. The following is
an example of an event when the permissions granted by the IAM role are insufficient.
{
"version": "0",
"id": "01234567-0123-0123-0123-0123456789ab",
"detail-type": "DLM Policy State Change",
"source": "aws.dlm",
"account": "123456789012",
"time": "2018-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef"
],
"detail": {
"state": "ERROR",
"cause": "Role provided does not have sufficient permissions",
"policy_id": "arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef"
}
}
{
"version": "0",
"id": "01234567-0123-0123-0123-0123456789ab",
"detail-type": "DLM Policy State Change",
"source": "aws.dlm",
"account": "123456789012",
"time": "2018-05-25T13:12:22Z",
"region": "us-east-1",
"resources": [
"arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef"
],
"detail":{
"state": "ERROR",
"cause": "Maximum allowed active snapshot limit exceeded",
"policy_id": "arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef"
}
}
Metrics are kept for a period of 15 months, so that you can access historical information and gain a
better understanding of how your lifecycle policies perform over an extended period.
For more information about Amazon CloudWatch, see the Amazon CloudWatch User Guide.
Topics
• Supported metrics (p. 1517)
• View CloudWatch metrics for your policies (p. 1519)
• Graph metrics for your policies (p. 1520)
• Create a CloudWatch alarm for a policy (p. 1521)
• Example use cases (p. 177)
Supported metrics
The Data Lifecycle Manager namespace includes the following metrics for Amazon Data Lifecycle
Manager lifecycle policies. The supported metrics differ by policy type.
All metrics can be measured on the DLMPolicyId dimension. The most useful statistics are sum and
average, and the unit of measure is count.
Metric Description
Several of the snapshot metrics note that they include snapshots that are deleted when an EBS-backed
AMI policy deregisters AMIs.
ImagesCreateFailed The number of AMIs that could not be created by an EBS-backed AMI
policy.
EnableImageDeprecationFailed The number of AMIs that could not be marked for deprecation by an
EBS-backed AMI policy.
EnableCopiedImageDeprecationFailed The number of cross-Region AMI copies that could not be
marked for deprecation by an EBS-backed AMI policy.
The following metrics can be used with cross-account copy event policies:
Metric Description
SnapshotsCopiedAccountFailed The number of snapshots that could not be copied from another
account by a cross-account copy event policy. This includes unsuccessful retries within 24 hours of the
scheduled time.
You can use the AWS Management Console or the command line tools to list the metrics that Amazon
Data Lifecycle Manager sends to Amazon CloudWatch.
CloudWatch console
AWS CLI
To list all the available metrics for Amazon Data Lifecycle Manager
After you create a policy, you can open the Amazon EC2 console and view the monitoring graphs for the
policy on the Monitoring tab. Each graph is based on one of the available Amazon EC2 metrics.
You can create a CloudWatch alarm that monitors CloudWatch metrics for your policies. CloudWatch
will automatically send you a notification when the metric reaches a threshold that you specify. You can
create a CloudWatch alarm using the CloudWatch console.
For more information about creating alarms using the CloudWatch console, see the following topic in the
Amazon CloudWatch User Guide.
Topics
• Example 1: ResourcesTargeted metric (p. 1522)
• Example 2: SnapshotsDeleteFailed metric (p. 1522)
• Example 3: SnapshotsCopiedRegionFailed metric (p. 1522)
For example, if you expect your daily policy to create backups of no more than 50 volumes, you can
create an alarm that sends an email notification when the sum for ResourcesTargeted is greater than
50 over a 1 hour period. In this way, you can ensure that no snapshots have been unexpectedly created
from volumes that have been incorrectly tagged.
For example, if you've created a policy that should automatically delete snapshots every
twelve hours, you can create an alarm that notifies your engineering team when the sum of
SnapshotsDeleteFailed is greater than 0 over a 1 hour period. This can help you investigate improper
snapshot retention and ensure that unnecessary snapshots do not increase your storage costs.
For example, if your policy copies snapshots across Regions daily, you can create an alarm that sends an
SMS to your engineering team when the sum of SnapshotsCopiedRegionFailed is greater than
0 over a 1 hour period. This can be useful for verifying whether subsequent snapshots in the lineage were
successfully copied by the policy.
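The first example above can be expressed as the parameter set for CloudWatch's PutMetricAlarm API (for instance through boto3's put_metric_alarm). The namespace string and the SNS topic ARN below are assumptions for illustration; verify both in your account:

```python
# Alarm: notify when the hourly sum of ResourcesTargeted exceeds 50.
alarm_params = {
    "AlarmName": "dlm-policy-resources-targeted-high",
    "Namespace": "AWS/DataLifecycleManager",  # assumed namespace string
    "MetricName": "ResourcesTargeted",
    "Dimensions": [{"Name": "DLMPolicyId", "Value": "policy-0123456789abcdef0"}],
    "Statistic": "Sum",
    "Period": 3600,          # evaluate the sum over 1 hour
    "EvaluationPeriods": 1,
    "Threshold": 50,
    "ComparisonOperator": "GreaterThanThreshold",
    # Example SNS topic that delivers the email notification.
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:dlm-alerts"],
}

# To create the alarm, pass these parameters to CloudWatch, for example:
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```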
EBS data services
For more information about what to do when one of your policies reports an unexpected non-zero value
for a failed action metric, see the What should I do if Amazon Data Lifecycle Manager reports failed
actions in CloudWatch metrics? AWS Knowledge Center article.
Data services
• Amazon EBS Elastic Volumes (p. 1523)
• Amazon EBS encryption (p. 1536)
• Amazon EBS fast snapshot restore (p. 1547)
There is no charge to modify the configuration of a volume. You are charged for the new volume
configuration after volume modification starts. For more information, see the Amazon EBS Pricing page.
Contents
• Requirements when modifying volumes (p. 1523)
• Request modifications to your EBS volumes (p. 1525)
• Monitor the progress of volume modifications (p. 1529)
• Extend a Linux file system after resizing a volume (p. 1532)
Topics
• Supported instance types (p. 1524)
If your instance type does not support Elastic Volumes, see Modify an EBS volume if Elastic Volumes is
not supported (p. 1528).
Before attempting to resize a boot volume beyond 2 TiB, you can determine whether the volume is using
MBR or GPT partitioning by listing its partition table on the instance, for example with sudo gdisk -l
/dev/xvda (the device name is an example). On an Amazon Linux instance with GPT partitioning, gdisk
reports that it found a valid GPT with a protective MBR.
Limitations
• There are limits to the maximum aggregated storage that can be requested across volume
modifications. For more information, see Amazon EBS service quotas in the Amazon Web Services
General Reference.
• After modifying a volume, you must wait at least six hours and ensure that the volume is in the in-
use or available state before you can modify the same volume. This is sometimes referred to as a
cooldown period.
• If the volume was attached before November 3, 2016 23:40 UTC, you must initialize Elastic Volumes
support. For more information, see Initializing Elastic Volumes Support (p. 1527).
• If you encounter an error message while attempting to modify an EBS volume, or if you are modifying
an EBS volume attached to a previous-generation instance type, take one of the following steps:
• For a non-root volume, detach the volume from the instance, apply the modifications, and then re-
attach the volume.
• For a root volume, stop the instance, apply the modifications, and then restart the instance.
• Modification time is increased for volumes that are not fully initialized. For more information, see
Initialize Amazon EBS volumes (p. 1586).
• The new volume size can't exceed the supported capacity of its file system and partitioning scheme.
For more information, see Constraints on the size and configuration of an EBS volume (p. 1346).
• If you modify the volume type of a volume, the size and performance must be within the limits of the
target volume type. For more information, see Amazon EBS volume types (p. 1329).
• You can't decrease the size of an EBS volume. However, you can create a smaller volume and then
migrate your data to it using an application-level tool such as rsync.
• After provisioning over 32,000 IOPS on an existing io1 or io2 volume, you might need to detach and
re-attach the volume, or restart the instance to see the full performance improvements.
• For io2 volumes, you can't increase the size beyond 16 TiB or the IOPS beyond 64,000 while the
volume is attached to an instance type that does not support io2 Block Express volumes. Currently,
only R5b instances support io2 Block Express volumes. For more information, see io2 Block Express
volumes (p. 1337).
• You can't modify the size or provisioned IOPS of an io2 volume that is attached to an R5b instance.
• You can't modify the volume type of Multi-Attach enabled io2 volumes.
• You can't modify the volume type, size, or Provisioned IOPS of Multi-Attach enabled io1 volumes.
• A gp2 volume that is attached to an instance as a root volume can't be modified to an st1 or sc1
volume. If detached and modified to st1 or sc1, it can't be re-attached to an instance as the root
volume.
• While m3.medium instances fully support volume modification, m3.large, m3.xlarge, and
m3.2xlarge instances might not support all volume modification features.
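The six-hour cooldown described in the limitations above can be modeled as a simple local pre-check before you submit another modification request. This is illustrative only; Amazon EBS enforces the real rule, and the in-use or available state check is separate:

```python
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(hours=6)

def can_modify_again(last_modification_end, now=None):
    # True once at least six hours have passed since the previous
    # modification of the same volume completed. The volume must also
    # be in the in-use or available state, which you would check
    # separately (for example, with describe-volumes).
    if now is None:
        now = datetime.now(timezone.utc)
    return now - last_modification_end >= COOLDOWN

finished = datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
assert not can_modify_again(finished, finished + timedelta(hours=5))
assert can_modify_again(finished, finished + timedelta(hours=6))
```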
1. (Optional) Before modifying a volume that contains valuable data, it is a best practice to create a
snapshot of the volume in case you need to roll back your changes. For more information, see Create
Amazon EBS snapshots (p. 1385).
2. Request the volume modification.
3. Monitor the progress of the volume modification. For more information, see Monitor the progress of
volume modifications (p. 1529).
4. If the size of the volume was modified, extend the volume's file system to take advantage of the
increased storage capacity. For more information, see Extend a Linux file system after resizing a
volume (p. 1532).
Contents
• Modify an EBS volume using Elastic Volumes (p. 1526)
• Initialize Elastic Volumes support (if needed) (p. 1527)
• Modify an EBS volume if Elastic Volumes is not supported (p. 1528)
You can only increase volume size. You can increase or decrease volume performance. If you are not
changing the volume type, then volume size and performance modifications must be within the limits
of the current volume type. If you are changing the volume type, then volume size and performance
modifications must be within the limits of the target volume type.
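A local pre-check for these rules might look like the following sketch. The numeric limits shown for gp2 and io1 are illustrative and may lag behind current service quotas, so verify them against the Amazon EBS volume types documentation:

```python
# Illustrative per-type limits; check the Amazon EBS volume types
# documentation for current values.
VOLUME_LIMITS = {
    "gp2": {"min_size": 1, "max_size": 16384},
    "io1": {"min_size": 4, "max_size": 16384,
            "min_iops": 100, "max_iops": 64000, "iops_per_gib": 50},
}

def validate_modification(target_type, size, iops=None):
    # Return a list of reasons the requested configuration is invalid
    # for the target volume type (an empty list means it looks valid).
    limits = VOLUME_LIMITS[target_type]
    problems = []
    if not limits["min_size"] <= size <= limits["max_size"]:
        problems.append(f"size {size} GiB outside {target_type} range")
    if "max_iops" in limits:
        if iops is None:
            problems.append(f"{target_type} requires provisioned IOPS")
        elif not limits["min_iops"] <= iops <= limits["max_iops"]:
            problems.append(f"iops {iops} outside {target_type} range")
        elif iops > size * limits["iops_per_gib"]:
            problems.append(f"iops exceeds {limits['iops_per_gib']} IOPS/GiB")
    return problems

# The example used later in this section: 200 GiB io1 at 10,000 IOPS.
assert validate_modification("io1", 200, 10000) == []
# 100 GiB at 10,000 IOPS violates the 50 IOPS/GiB ratio.
assert validate_modification("io1", 100, 10000)
```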
Note
You can't cancel or undo a volume modification request after it has been submitted.
New console
Old console
AWS CLI
Use the modify-volume command to modify one or more configuration settings for a volume. For
example, if you have a volume of type gp2 with a size of 100 GiB, the following command changes
its configuration to a volume of type io1 with 10,000 IOPS and a size of 200 GiB.
aws ec2 modify-volume --volume-type io1 --iops 10000 --size 200 --volume-id vol-11111111111111111
{
"VolumeModification": {
"TargetSize": 200,
"TargetVolumeType": "io1",
"ModificationState": "modifying",
"VolumeId": "vol-11111111111111111",
"TargetIops": 10000,
"StartTime": "2017-01-19T22:21:02.959Z",
"Progress": 0,
"OriginalVolumeType": "gp2",
"OriginalIops": 300,
"OriginalSize": 100
}
}
Modifying volume size has no practical effect until you also extend the volume's file system to make
use of the new storage capacity. For more information, see Extend a Linux file system after resizing a
volume (p. 1532).
Before you can modify a volume that was attached to an instance before November 3, 2016 23:40 UTC,
you must initialize volume modification support. Use one of the following procedures to determine
whether your instances are ready for volume modification.
New console
Old console
AWS CLI
Use the describe-instances command to determine whether the volume was attached before
November 3, 2016 23:40 UTC.
The first line of the output for each instance shows its ID and whether it was started before the
cutoff date (True or False). The first line is followed by one or more lines that show whether each
EBS volume was attached before the cutoff date (True or False). In the following example output,
you must initialize volume modification for the first instance because it was started before the cutoff
date and its root volume was attached before the cutoff date. The other instances are ready because
they were started after the cutoff date.
i-e905622e True
True
i-719f99a8 False
True
i-006b02c1b78381e57 False
False
False
i-e3d172ed False
True
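As a rough sketch (not part of the official procedure), a readiness check on output in this format can be scripted. The file path is an assumption, and the rule applied — only the instance line's own flag matters — follows the explanation above.

```shell
# Sketch: list instances that need volume-modification initialization.
# The sample data is copied from the example output above; instance
# lines have two fields (ID and flag), volume lines have one.
cat > /tmp/readiness.txt <<'EOF'
i-e905622e True
True
i-719f99a8 False
True
i-006b02c1b78381e57 False
False
False
i-e3d172ed False
True
EOF
# Print the ID of each instance that was started before the cutoff
# date (the flag on its instance line is True).
awk 'NF == 2 && $2 == "True" { print $1 }' /tmp/readiness.txt
```

Run against the sample data, this prints only i-e905622e, matching the explanation above.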
If you are using a supported instance type, you can use Elastic Volumes to dynamically modify the size,
performance, and volume type of your Amazon EBS volumes without detaching them.
If you cannot use Elastic Volumes but you need to modify the root (boot) volume, you must stop the
instance, modify the volume, and then restart the instance.
After the instance has started, you can check the file system size to see if your instance recognizes the
larger volume space. On Linux, use the df -h command to check the file system size.
[ec2-user ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 943M 6.9G 12% /
If the size does not reflect your newly expanded volume, you must extend the file system of your device
so that your instance can use the new space. For more information, see Extend a Linux file system after
resizing a volume (p. 1532).
While the volume is in the optimizing state, your volume performance is in between the source and
target configuration specifications. Transitional volume performance will be no less than the source
volume performance. If you are downgrading IOPS, transitional volume performance is no less than the
target volume performance.
• Size changes usually take a few seconds to complete and take effect after the volume has transitioned
to the Optimizing state.
• Performance (IOPS) changes can take from a few minutes to a few hours to complete and are
dependent on the configuration change being made.
• It might take up to 24 hours for a new configuration to take effect, and in some cases more, such as
when the volume has not been fully initialized. Typically, a fully used 1-TiB volume takes about 6 hours
to migrate to a new performance configuration.
To monitor the progress of a volume modification, use one of the following methods.
New console
The possible volume states are creating, available, in-use, deleting, deleted, and
error.
Old console
4. The State column and the State field in the details pane contain information in the following
format: volume-state - modification-state (progress%). The possible volume states are creating,
available, in-use, deleting, deleted, and error. The possible modification states are modifying,
optimizing, and completed. Shortly after the volume modification is completed, we remove the
modification state and progress, leaving only the volume state.
In this example, the modification state of the selected volume is optimizing. The modification
state of the next volume is modifying.
5. Choose the text in the State field in the details pane to display information about the most
recent modification action, as shown in the previous step.
AWS CLI
Use the describe-volumes-modifications command to view the progress of one or more volume
modifications. The following example describes the volume modifications for two volumes.
In the following example output, the volume modifications are still in the modifying state.
Progress is reported as a percentage.
{
"VolumesModifications": [
{
"TargetSize": 200,
"TargetVolumeType": "io1",
"ModificationState": "modifying",
"VolumeId": "vol-11111111111111111",
"TargetIops": 10000,
"StartTime": "2017-01-19T22:21:02.959Z",
"Progress": 0,
"OriginalVolumeType": "gp2",
"OriginalIops": 300,
"OriginalSize": 100
},
{
"TargetSize": 2000,
"TargetVolumeType": "sc1",
"ModificationState": "modifying",
"VolumeId": "vol-22222222222222222",
"StartTime": "2017-01-19T22:23:22.158Z",
"Progress": 0,
"OriginalVolumeType": "gp2",
"OriginalIops": 300,
"OriginalSize": 1000
}
]
}
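As an illustration, saved describe-volumes-modifications output can be condensed into a one-line-per-volume progress report. This is a sketch only: the heredoc stands in for the AWS CLI call, and the /tmp path and field-extraction approach are assumptions, not part of the AWS CLI.

```shell
# Sketch: summarize each modification as "volume-id state progress%".
# The JSON is a trimmed copy of the sample output shown above.
cat > /tmp/mods.json <<'EOF'
{
    "VolumesModifications": [
        {
            "TargetSize": 200,
            "ModificationState": "modifying",
            "VolumeId": "vol-11111111111111111",
            "Progress": 0
        },
        {
            "TargetSize": 2000,
            "ModificationState": "modifying",
            "VolumeId": "vol-22222222222222222",
            "Progress": 0
        }
    ]
}
EOF
# Split on double quotes; collect the state and ID, then print when the
# Progress field (the last key in each record here) is reached.
awk -F'"' '
  /"ModificationState"/ { state = $4 }
  /"VolumeId"/          { id = $4 }
  /"Progress"/          { n = $3; sub(/^: /, "", n); print id, state, n "%" }
' /tmp/mods.json
```

For the sample data this prints one line per volume, for example `vol-11111111111111111 modifying 0%`.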
The next example describes all volumes with a modification state of either optimizing or
completed, and then filters and formats the results to show only modifications that were initiated
on or after February 1, 2017:
[
{
"STATE": "optimizing",
"ID": "vol-06397e7a0eEXAMPLE"
},
{
"STATE": "completed",
"ID": "vol-ba74e18c2aEXAMPLE"
}
]
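The date filter described above works because ISO-8601 timestamps sort lexically, so "initiated on or after February 1, 2017" reduces to a plain string comparison. In the following sketch, the second timestamp is a made-up value for contrast:

```shell
# ISO-8601 timestamps (like the StartTime values above) sort lexically,
# so the date filter is a string comparison. Requires bash for [[ ]].
cutoff="2017-02-01"
for ts in "2017-01-19T22:21:02.959Z" "2017-02-11T09:00:00.000Z"; do
  if [[ "$ts" < "$cutoff" ]]; then
    echo "$ts filtered out"
  else
    echo "$ts kept"
  fi
done
```

The first timestamp (from the earlier sample output) is filtered out; the hypothetical February 11 timestamp is kept.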
With CloudWatch Events, you can create a notification rule for volume modification events. You can
use your rule to generate a notification message using Amazon SNS or to invoke a Lambda function
in response to matching events. Events are emitted on a best effort basis.
{
"source": [
"aws.ec2"
],
"detail-type": [
"EBS Volume Notification"
],
"detail": {
"event": [
"modifyVolume"
]
}
}
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Volume Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "2017-01-12T21:09:07Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:012345678901:volume/vol-03a55cf56513fa1b6"
],
"detail": {
"result": "optimizing",
"cause": "",
"event": "modifyVolume",
"request-id": "01234567-0123-0123-0123-0123456789ab"
}
}
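As a rough check (not how CloudWatch Events evaluates patterns internally), you can confirm that the sample event carries the three values the rule above matches on. The event below is a trimmed copy of the sample, and the /tmp path is an assumption:

```shell
# Sketch: verify the sample event contains the values matched by the
# rule pattern (source, detail-type, and detail.event).
cat > /tmp/event.json <<'EOF'
{
    "detail-type": "EBS Volume Notification",
    "source": "aws.ec2",
    "detail": {
        "event": "modifyVolume"
    }
}
EOF
if grep -q '"source": "aws.ec2"' /tmp/event.json \
   && grep -q '"detail-type": "EBS Volume Notification"' /tmp/event.json \
   && grep -q '"event": "modifyVolume"' /tmp/event.json; then
  echo "event matches rule"
fi
```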
1. Your EBS volume might have a partition that contains the file system and data. Increasing the size of
a volume does not increase the size of the partition. Before you extend the file system on a resized
volume, check whether the volume has a partition that must be extended to the new size of the
volume.
2. Use a file system-specific command to resize each file system to the new volume capacity.
For information about extending a Windows file system, see Extend a Windows file system after resizing
a volume in the Amazon EC2 User Guide for Windows Instances.
The following examples walk you through the process of extending a Linux file system. For file systems
and partitioning schemes other than the ones shown here, refer to the documentation for those file
systems and partitioning schemes for instructions.
Note
If you are using logical volumes on the Amazon EBS volume, you must use Logical Volume
Manager (LVM) to extend the logical volume. For instructions on how to do this, see the Extend
the logical volume section in the How do I create an LVM logical volume on an entire EBS
volume? AWS Knowledge Center article.
Examples
• Example: Extend the file system of NVMe EBS volumes (p. 1533)
For this example, suppose that you have an instance built on the Nitro System (p. 232), such as an M5
instance. You resized the boot volume from 8 GB to 16 GB and an additional volume from 8 GB to 30 GB.
Use the following procedure to extend the file system of the resized volumes.
3. To check whether the volume has a partition that must be extended, use the lsblk command to
display information about the NVMe block devices attached to your instance. The following is
example output for an instance that has a boot volume with an XFS file system and an additional
volume with an XFS file system. The naming convention /dev/nvme[0-26]n1 indicates that the
volumes are exposed as NVMe block devices.
• The root volume, /dev/nvme0n1, has a partition, /dev/nvme0n1p1. While the size of the root
volume reflects the new size, 16 GB, the size of the partition reflects the original size, 8 GB, and
must be extended before you can extend the file system.
• The volume /dev/nvme1n1 has no partitions. The size of the volume reflects the new size, 30 GB.
4. For volumes that have a partition, such as the root volume shown in the previous step, use the
growpart command to extend the partition. Notice that there is a space between the device name
and the partition number.
[ec2-user ~]$ sudo growpart /dev/nvme0n1 1
5. (Optional) To verify that the partition reflects the increased volume size, use the lsblk command
again.
6. To verify the size of the file system for each volume, use the df -h command. In this example output,
both file systems reflect the original volume size, 8 GB.
[ec2-user ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p1 8.0G 1.6G 6.5G 20% /
/dev/nvme1n1 8.0G 33M 8.0G 1% /data
...
7. To extend the file system on each volume, use the correct command for your file system, as follows:
• [XFS file system] To extend the file system on each volume, use the xfs_growfs command. In this
example, / and /data are the volume mount points shown in the output for df -h.
[ec2-user ~]$ sudo xfs_growfs -d /
[ec2-user ~]$ sudo xfs_growfs -d /data
If the XFS tools are not already installed, you can install them as follows.
[ec2-user ~]$ sudo yum install xfsprogs
• [ext4 file system] To extend the file system on each volume, use the resize2fs command.
[ec2-user ~]$ sudo resize2fs /dev/nvme0n1p1
• [Other file system] To extend the file system on each volume, refer to the documentation for your
file system for instructions.
8. (Optional) To verify that each file system reflects the increased volume size, use the df -h command
again.
[ec2-user ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p1 16G 1.6G 15G 10% /
/dev/nvme1n1 30G 33M 30G 1% /data
...
For this example, suppose that you have resized the boot volume of an instance, such as a T2 instance,
from 8 GB to 16 GB and an additional volume from 8 GB to 30 GB. Use the following procedure to
extend the file system of the resized volumes.
3. To check whether the volume has a partition that must be extended, use the lsblk command to
display information about the block devices attached to your instance. The following is example
output for an instance that has a boot volume with an ext4 file system and an additional volume
with an XFS file system.
• The root volume, /dev/xvda, has a partition, /dev/xvda1. While the size of the volume is 16 GB,
the size of the partition is still 8 GB and must be extended.
• The volume /dev/xvdf has a partition, /dev/xvdf1. While the size of the volume is 30 GB, the
size of the partition is still 8 GB and must be extended.
4. For volumes that have a partition, such as the volumes shown in the previous step, use the growpart
command to extend the partition. Notice that there is a space between the device name and the
partition number.
[ec2-user ~]$ sudo growpart /dev/xvda 1
[ec2-user ~]$ sudo growpart /dev/xvdf 1
5. (Optional) To verify that the partitions reflect the increased volume size, use the lsblk command
again.
6. To verify the size of the file system for each volume, use the df -h command. In this example output,
both file systems reflect the original volume size, 8 GB.
[ec2-user ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 1.9G 6.2G 24% /
/dev/xvdf1 8.0G 45M 8.0G 1% /data
...
7. To extend the file system on each volume, use the correct command for your file system, as follows:
• [XFS volumes] To extend the file system on each volume, use the xfs_growfs command. In this
example, / and /data are the volume mount points shown in the output for df -h.
[ec2-user ~]$ sudo xfs_growfs -d /data
If the XFS tools are not already installed, you can install them as follows.
[ec2-user ~]$ sudo yum install xfsprogs
• [ext4 volumes] To extend the file system on each volume, use the resize2fs command.
[ec2-user ~]$ sudo resize2fs /dev/xvda1
• [Other file system] To extend the file system on each volume, refer to the documentation for your
file system for instructions.
8. (Optional) To verify that each file system reflects the increased volume size, use the df -h command
again.
[ec2-user ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 16G 1.9G 14G 12% /
/dev/xvdf1 30G 45M 30G 1% /data
...
Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-
at-rest and data-in-transit between an instance and its attached EBS storage.
You can attach both encrypted and unencrypted volumes to an instance simultaneously.
Contents
• How EBS encryption works (p. 1536)
• Requirements (p. 1537)
• Default KMS key for EBS encryption (p. 1538)
• Encryption by default (p. 1539)
• Encrypt EBS resources (p. 1540)
• Encryption scenarios (p. 1541)
• Set encryption defaults using the API and CLI (p. 1546)
When you create an encrypted EBS volume and attach it to a supported instance type, the following
types of data are encrypted:
• Data at rest inside the volume
• All data moving between the volume and the instance
• All snapshots created from the volume
• All volumes created from those snapshots
EBS encrypts your volume with a data key using the industry-standard AES-256 algorithm. EBS encrypts
the data key with your KMS key before storing it on disk with your encrypted data; your data key
never appears on disk in plaintext. The same data key is shared by snapshots of the volume and any
subsequent volumes created from those snapshots. For more information, see Data keys in the AWS Key
Management Service Developer Guide.
Amazon EC2 works with AWS KMS to encrypt and decrypt your EBS volumes in slightly different ways
depending on whether the snapshot from which you create an encrypted volume is encrypted or
unencrypted.
When you create an encrypted volume from an encrypted snapshot that you own, Amazon EC2 works
with AWS KMS to encrypt and decrypt your EBS volumes as follows:
1. Amazon EC2 sends a GenerateDataKeyWithoutPlaintext request to AWS KMS, specifying the KMS key
that you chose for volume encryption.
2. AWS KMS generates a new data key, encrypts it under the KMS key that you chose for volume
encryption, and sends the encrypted data key to Amazon EBS to be stored with the volume metadata.
3. When you attach the encrypted volume to an instance, Amazon EC2 sends a CreateGrant request to
AWS KMS so that it can decrypt the data key.
4. AWS KMS decrypts the encrypted data key and sends the decrypted data key to Amazon EC2.
5. Amazon EC2 uses the plaintext data key in hypervisor memory to encrypt disk I/O to the volume. The
plaintext data key persists in memory as long as the volume is attached to the instance.
When you create an encrypted volume from an unencrypted snapshot, Amazon EC2 works with AWS KMS to
encrypt and decrypt your EBS volumes as follows:
1. Amazon EC2 sends a CreateGrant request to AWS KMS, so that it can encrypt the volume that is
created from the snapshot.
2. Amazon EC2 sends a GenerateDataKeyWithoutPlaintext request to AWS KMS, specifying the KMS key
that you chose for volume encryption.
3. AWS KMS generates a new data key, encrypts it under the KMS key that you chose for volume
encryption, and sends the encrypted data key to Amazon EBS to be stored with the volume metadata.
4. Amazon EC2 sends a Decrypt request to AWS KMS to get the encryption key to encrypt the volume
data.
5. When you attach the encrypted volume to an instance, Amazon EC2 sends a CreateGrant request to
AWS KMS, so that it can decrypt the data key.
6. When you attach the encrypted volume to an instance, Amazon EC2 sends a Decrypt request to AWS
KMS, specifying the encrypted data key.
7. AWS KMS decrypts the encrypted data key and sends the decrypted data key to Amazon EC2.
8. Amazon EC2 uses the plaintext data key in hypervisor memory to encrypt disk I/O to the volume. The
plaintext data key persists in memory as long as the volume is attached to the instance.
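The envelope-encryption idea behind these steps can be illustrated locally with openssl. This is a sketch only: the "master" passphrase stands in for the KMS key, the /tmp paths are assumptions, and it is not the actual KMS protocol or on-disk format.

```shell
# Illustration of envelope encryption: a data key encrypts the data,
# a master key encrypts the data key, and only the encrypted data key
# is stored alongside the data (mirroring steps 2-4 above).
master=$(openssl rand -hex 32)       # stands in for the KMS key
datakey=$(openssl rand -hex 32)      # plaintext data key (kept in memory)
printf 'volume-block-data\n' > /tmp/block

# Encrypt the data with the data key, then the data key with the master key.
openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$datakey" \
    -in /tmp/block -out /tmp/block.enc
printf '%s' "$datakey" | openssl enc -aes-256-cbc -pbkdf2 \
    -pass "pass:$master" -out /tmp/datakey.enc

# Decrypt path: recover the data key with the master key, then the data.
recovered=$(openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$master" \
    -in /tmp/datakey.enc)
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$recovered" \
    -in /tmp/block.enc | tee /tmp/block.dec
```

The final command prints the original plaintext, showing that the data is only reachable through the (master-key-protected) data key.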
For more information, see How Amazon Elastic Block Store (Amazon EBS) uses AWS KMS and Amazon
EC2 example two in the AWS Key Management Service Developer Guide.
Requirements
Before you begin, verify that the following requirements are met.
Encryption is supported by all EBS volume types. You can expect the same IOPS performance on
encrypted volumes as on unencrypted volumes, with a minimal effect on latency. You can access
encrypted volumes the same way that you access unencrypted volumes. Encryption and decryption are
handled transparently, and they require no additional action from you or your applications.
Amazon EBS encryption is available on all current generation (p. 227) instance types and the following
previous generation (p. 231) instance types: A1, C3, cr1.8xlarge, G2, I2, M3, and R3.
When you configure a KMS key as the default key for EBS encryption, the default KMS key policy
allows any IAM user with access to the required KMS actions to use this KMS key to encrypt or decrypt
EBS resources. You must grant IAM users permission to call the following actions in order to use EBS
encryption:
• kms:CreateGrant
• kms:Decrypt
• kms:DescribeKey
• kms:GenerateDataKeyWithoutPlaintext
• kms:ReEncrypt*
To follow the principle of least privilege, do not allow full access to kms:CreateGrant. Instead, allow
the user to create grants on the KMS key only when the grant is created on the user's behalf by an AWS
service, as shown in the following example.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "kms:CreateGrant",
"Resource": [
"arn:aws:kms:us-east-2:123456789012:key/abcd1234-a123-456d-a12b-
a123b4cd56ef"
],
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": true
}
}
}
]
}
For more information, see Allows access to the AWS account and enables IAM policies in the Default key
policy section in the AWS Key Management Service Developer Guide.
New console
To configure the default KMS key for EBS encryption for a Region
Old console
To configure the default KMS key for EBS encryption for a Region
Encryption by default
You can configure your AWS account to enforce the encryption of the new EBS volumes and snapshot
copies that you create. For example, Amazon EBS encrypts the EBS volumes created when you launch an
instance and the snapshots that you copy from an unencrypted snapshot. For examples of transitioning
from unencrypted to encrypted EBS resources, see Encrypt unencrypted resources (p. 1541).
Considerations
• Encryption by default is a Region-specific setting. If you enable it for a Region, you cannot disable it
for individual volumes or snapshots in that Region.
• When you enable encryption by default, you can launch an instance only if the instance type supports
EBS encryption. For more information, see Supported instance types (p. 1538).
• If you copy a snapshot and encrypt it to a new KMS key, a complete (non-incremental) copy is created.
This results in additional storage costs.
• When migrating servers using AWS Server Migration Service (SMS), do not turn on encryption by
default. If encryption by default is already on and you are experiencing delta replication failures, turn
off encryption by default. Instead, enable AMI encryption when you create the replication job.
New console
4. In the upper-right corner of the page, choose Account Attributes, EBS encryption.
5. Choose Manage.
6. Select Enable. You can keep the AWS managed key with the alias alias/aws/ebs created on your
behalf as the default encryption key, or choose a symmetric customer managed key.
7. Choose Update EBS encryption.
Old console
You cannot change the KMS key that is associated with an existing snapshot or encrypted volume.
However, you can associate a different KMS key during a snapshot copy operation so that the resulting
copied snapshot is encrypted by the new KMS key.
When you encrypt a volume, you can specify the symmetric KMS key to use to encrypt the volume. If you
do not specify a KMS key, the KMS key that is used for encryption depends on the encryption state of the
source snapshot and its ownership. For more information, see the encryption outcomes table (p. 1545).
Note
If you are using the API or AWS CLI to specify a KMS key, be aware that AWS authenticates the
KMS key asynchronously. If you specify a KMS key ID, an alias, or an ARN that is not valid, the
action can appear to complete, but it eventually fails.
When you create a new, empty EBS volume, you can encrypt it by enabling encryption for the specific
volume creation operation. If you enabled EBS encryption by default, the volume is automatically
encrypted using your default KMS key for EBS encryption. Alternatively, you can specify a different
symmetric KMS key for the specific volume creation operation. The volume is encrypted by the time
it is first available, so your data is always secured. For detailed procedures, see Create an Amazon EBS
volume (p. 1349).
By default, the KMS key that you selected when creating a volume encrypts the snapshots that you
make from the volume and the volumes that you restore from those encrypted snapshots. You cannot
remove encryption from an encrypted volume or snapshot, which means that a volume restored from an
encrypted snapshot, or a copy of an encrypted snapshot, is always encrypted.
Public snapshots of encrypted volumes are not supported, but you can share an encrypted snapshot with
specific accounts. For detailed directions, see Share an Amazon EBS snapshot (p. 1419).
You cannot directly encrypt existing unencrypted volumes or snapshots. However, you can create
encrypted volumes or snapshots from unencrypted volumes or snapshots. If you enable encryption
by default, Amazon EBS automatically encrypts new volumes and snapshots using your default KMS
key for EBS encryption. Otherwise, you can enable encryption when you create an individual volume
or snapshot, using either the default KMS key for EBS encryption or a symmetric customer managed
key. For more information, see Create an Amazon EBS volume (p. 1349) and Copy an Amazon EBS
snapshot (p. 1391).
To encrypt the snapshot copy to a customer managed key, you must both enable encryption and specify
the KMS key, as shown in Copy an unencrypted snapshot (encryption by default not enabled) (p. 1542).
Important
Amazon EBS does not support asymmetric KMS keys. For more information, see Using
Symmetric and Asymmetric KMS keys in the AWS Key Management Service Developer Guide.
You can also apply new encryption states when launching an instance from an EBS-backed AMI. This is
because EBS-backed AMIs include snapshots of EBS volumes that can be encrypted as described. For
more information, see Use encryption with EBS-backed AMIs (p. 189).
Encryption scenarios
When you create an encrypted EBS resource, it is encrypted by your account's default KMS key for EBS
encryption unless you specify a different customer managed key in the volume creation parameters or
the block device mapping for the AMI or instance. For more information, see Default KMS key for EBS
encryption (p. 1538).
The following examples illustrate how you can manage the encryption state of your volumes and
snapshots. For a full list of encryption cases, see the encryption outcomes table (p. 1545).
Examples
• Restore an unencrypted volume (encryption by default not enabled) (p. 1541)
• Restore an unencrypted volume (encryption by default enabled) (p. 1542)
• Copy an unencrypted snapshot (encryption by default not enabled) (p. 1542)
• Copy an unencrypted snapshot (encryption by default enabled) (p. 1543)
• Re-encrypt an encrypted volume (p. 1543)
• Re-encrypt an encrypted snapshot (p. 1544)
• Migrate data between encrypted and unencrypted volumes (p. 1544)
• Encryption outcomes (p. 1545)
Without encryption by default enabled, a volume restored from an unencrypted snapshot is unencrypted
by default. However, you can encrypt the resulting volume by setting the Encrypted parameter and,
optionally, the KmsKeyId parameter. The following diagram illustrates the process.
If you leave out the KmsKeyId parameter, the resulting volume is encrypted using your default KMS key
for EBS encryption. You must specify a KMS key ID to encrypt the volume to a different KMS key.
For more information, see Create a volume from a snapshot (p. 1351).
When you have enabled encryption by default, encryption is mandatory for volumes restored from
unencrypted snapshots, and no encryption parameters are required for your default KMS key to be used.
The following diagram shows this simple default case:
If you want to encrypt the restored volume to a symmetric customer managed key, you must supply both
the Encrypted and KmsKeyId parameters as shown in Restore an unencrypted volume (encryption by
default not enabled) (p. 1541).
You can encrypt an EBS volume by copying an unencrypted snapshot to an encrypted snapshot and
then creating a volume from the encrypted snapshot. For more information, see Copy an Amazon EBS
snapshot (p. 1391).
When you have enabled encryption by default, encryption is mandatory for copies of unencrypted
snapshots, and no encryption parameters are required if your default KMS key is used. The following
diagram illustrates this default case:
When the CreateVolume action operates on an encrypted snapshot, you have the option of re-
encrypting it with a different KMS key. The following diagram illustrates the process. In this example,
you own two KMS keys, KMS key A and KMS key B. The source snapshot is encrypted by KMS key A.
During volume creation, with the KMS key ID of KMS key B specified as a parameter, the source data is
automatically decrypted, then re-encrypted by KMS key B.
For more information, see Create a volume from a snapshot (p. 1351).
The ability to encrypt a snapshot during copying allows you to apply a new symmetric KMS key to an
already-encrypted snapshot that you own. Volumes restored from the resulting copy are only accessible
using the new KMS key. The following diagram illustrates the process. In this example, you own two KMS
keys, KMS key A and KMS key B. The source snapshot is encrypted by KMS key A. During copy, with the
KMS key ID of KMS key B specified as a parameter, the source data is automatically re-encrypted by KMS
key B.
In a related scenario, you can choose to apply new encryption parameters to a copy of a snapshot that
has been shared with you. By default, the copy is encrypted with a KMS key shared by the snapshot's
owner. However, we recommend that you create a copy of the shared snapshot using a different KMS
key that you control. This protects your access to the volume if the original KMS key is compromised, or
if the owner revokes the KMS key for any reason. For more information, see Encryption and snapshot
copying (p. 1393).
When you have access to both an encrypted and unencrypted volume, you can freely transfer data
between them. EC2 carries out the encryption and decryption operations transparently.
For example, use the rsync command to copy the data. In the following command, the source data is
located in /mnt/source and the destination volume is mounted at /mnt/destination.
[ec2-user ~]$ sudo rsync -avh --progress /mnt/source/ /mnt/destination/
Encryption outcomes
The following table describes the encryption outcome for each possible combination of settings.
* This is the default customer managed key used for EBS encryption for the AWS account and Region. By
default this is a unique AWS managed key for EBS, or you can specify a customer managed key. For more
information, see Default KMS key for EBS encryption (p. 1538).
** This is a customer managed key specified for the volume at launch time. This customer managed key is
used instead of the default customer managed key for the AWS account and Region.
To get started, enable fast snapshot restore for specific snapshots in specific Availability Zones. Each
snapshot and Availability Zone pair refers to one fast snapshot restore. When you create a volume
from one of these snapshots in one of its enabled Availability Zones, the volume is restored using fast
snapshot restore.
Fast snapshot restore must be explicitly enabled on a per-snapshot basis. If you create a new snapshot
from a volume that was restored from a fast snapshot restore-enabled snapshot, the new snapshot is not
automatically enabled for fast snapshot restore. You must explicitly enable it for the new snapshot.
The number of volumes that you can restore with the full performance benefit of fast snapshot
restore is determined by the volume creation credits for the snapshot. For more information, see
Volume creation credits (p. 1547).
You can enable fast snapshot restore for snapshots that you own and for public and private snapshots
that are shared with you.
Contents
• Volume creation credits (p. 1547)
• Manage fast snapshot restore (p. 1548)
• Monitor fast snapshot restore (p. 1552)
• Fast snapshot restore quotas (p. 1552)
• Pricing and Billing (p. 1552)
When you enable fast snapshot restore for a snapshot that is shared with you, you get a separate credit
bucket for the shared snapshot in your account. If you create volumes from the shared snapshot, the
credits are consumed from your credit bucket; they are not consumed from the snapshot owner's credit
bucket.
The size of a credit bucket and the rate at which it refills depends on the size of the snapshot, not the
size of the volumes created from the snapshot.
When you enable fast snapshot restore for a snapshot, the credit bucket starts with zero credits, and it
gets filled at a set rate until it reaches its maximum credit capacity. Also, as you consume credits, the
credit bucket is refilled over time until it reaches its maximum credit capacity.
For example, if you enable fast snapshot restore for a snapshot with a size of 128 GiB, the fill rate is
0.1333 credits per minute.
In this example, when you enable fast snapshot restore, the credit bucket starts with zero credits. After
8 minutes, the credit bucket has enough credits to create one initialized volume (0.1333 credits ×
8 minutes = 1.066 credits). When the credit bucket is full, you can create 8 initialized volumes
simultaneously (8 credits). When the bucket is below its maximum capacity, it refills with 0.1333 credits
per minute.
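The arithmetic in this example can be checked directly. The 8-credit maximum is inferred from the statement above that a full bucket supports 8 simultaneous initialized volumes:

```shell
# Worked numbers for the 128 GiB snapshot example:
# fill rate 0.1333 credits per minute, assumed bucket maximum 8 credits.
awk 'BEGIN {
  rate = 0.1333
  printf "credits after 8 minutes: %.3f\n", rate * 8   # matches 1.066 above
  printf "minutes to fill the bucket: %.0f\n", 8 / rate
}'
```

At this fill rate, an empty bucket reaches its 8-credit maximum in roughly an hour.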
You can use CloudWatch metrics to monitor the size of your credit buckets and the number of credits
available in each bucket. For more information, see Fast snapshot restore metrics (p. 1601).
After you create a volume from a snapshot with fast snapshot restore enabled, you can describe the
volume using describe-volumes and check the FastRestored field in the output to determine whether
the volume was created as an initialized volume using fast snapshot restore.
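For example, the check can be run against saved describe-volumes output. The sketch below uses a saved sample (with the volume ID from the example output later in this section) rather than a live API call; in practice you would produce volumes.json with a command such as `aws ec2 describe-volumes --volume-ids vol-0d371921d4ca797b0 > volumes.json`.

```shell
# Hedged sketch: inspect the FastRestored field in saved describe-volumes
# output. The JSON below is a trimmed stand-in for a real API response.
cat > volumes.json <<'EOF'
{
    "Volumes": [
        {
            "VolumeId": "vol-0d371921d4ca797b0",
            "SnapshotId": "snap-0e946653493cb0447",
            "FastRestored": true
        }
    ]
}
EOF
# A true value means the volume was created as an initialized volume.
grep -o '"FastRestored": true' volumes.json
```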
When you delete a snapshot that you own, fast snapshot restore is automatically disabled for that
snapshot in your account. If you enabled fast snapshot restore for a snapshot that is shared with you,
and the snapshot owner deletes or unshares it, fast snapshot restore is automatically disabled for the
shared snapshot in your account.
If you enabled fast snapshot restore for a snapshot that is shared with you, and it has been encrypted
using a custom CMK, fast snapshot restore is not automatically disabled for the snapshot when the
snapshot owner revokes your access to the custom CMK. You must manually disable fast snapshot
restore for that snapshot.
Use one of the following methods to enable or disable fast snapshot restore for a snapshot that you own
or for a snapshot that is shared with you.
New console
To enable fast snapshot restore in a zone where it is currently disabled, select the zone, choose
Enable, and then to confirm, choose Enable.
To disable fast snapshot restore in a zone where it is currently enabled, select the zone, and
then choose Disable.
5. After you have made the required changes, choose Close.
Old console
AWS CLI
• enable-fast-snapshot-restores
• disable-fast-snapshot-restores
• describe-fast-snapshot-restores
Note
After you enable fast snapshot restore for a snapshot, it enters the optimizing state.
Snapshots that are in the optimizing state provide some performance benefits when using
them to restore volumes. They start to provide the full performance benefits of fast snapshot
restore only after they enter the enabled state.
Fast snapshot restore for a snapshot can be in one of the following states.
Use one of the following methods to view the state of fast snapshot restore for a snapshot that you own
or for a snapshot that is shared with you.
New console
Old console
AWS CLI
To view snapshots with fast snapshot restore enabled using the AWS CLI
Use the describe-fast-snapshot-restores command to describe the snapshots that are enabled for
fast snapshot restore.
{
    "FastSnapshotRestores": [
        {
            "SnapshotId": "snap-0e946653493cb0447",
            "AvailabilityZone": "us-east-2a",
            "State": "enabled",
            "StateTransitionReason": "Client.UserInitiated - Lifecycle state transition",
            "OwnerId": "123456789012",
            "EnablingTime": "2020-01-25T23:57:49.596Z",
            "OptimizingTime": "2020-01-25T23:58:25.573Z",
            "EnabledTime": "2020-01-25T23:59:29.852Z"
        },
        {
            "SnapshotId": "snap-0e946653493cb0447",
            "AvailabilityZone": "us-east-2b",
            "State": "enabled",
            "StateTransitionReason": "Client.UserInitiated - Lifecycle state transition",
            "OwnerId": "123456789012",
            "EnablingTime": "2020-01-25T23:57:49.596Z",
            "OptimizingTime": "2020-01-25T23:58:25.573Z",
            "EnabledTime": "2020-01-25T23:59:29.852Z"
        }
    ]
}
When you create a volume from a snapshot that is enabled for fast snapshot restore in the Availability
Zone for the volume, it is restored using fast snapshot restore.
Use the describe-volumes command to view volumes that were created from a snapshot that is enabled
for fast snapshot restore.
{
    "Volumes": [
        {
            "Attachments": [],
            "AvailabilityZone": "us-east-2a",
            "CreateTime": "2020-01-26T00:34:11.093Z",
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/8c5b2c63-b9bc-45a3-a87a-5513e232e843",
            "Size": 20,
            "SnapshotId": "snap-0e946653493cb0447",
            "State": "available",
            "VolumeId": "vol-0d371921d4ca797b0",
            "Iops": 100,
            "VolumeType": "gp2",
            "FastRestored": true
        }
    ]
}
For example, if you enable fast snapshot restore for one snapshot in us-east-1a for one month (30
days), you are billed $540 (1 snapshot x 1 AZ x 720 hours x $0.75 per hour). If you enable fast snapshot
restore for two snapshots in us-east-1a, us-east-1b, and us-east-1c for the same period, you are
billed $3,240 (2 snapshots x 3 AZs x 720 hours x $0.75 per hour).
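The billing arithmetic in these examples works out as follows (the $0.75 per snapshot per AZ per hour rate is taken from the examples above; check current pricing before relying on it):

```shell
# Fast snapshot restore billing: snapshots x AZs x hours x hourly rate.
rate=0.75
one_az=$(awk -v r="$rate" 'BEGIN { print 1 * 1 * 720 * r }')    # one snapshot, one AZ
three_az=$(awk -v r="$rate" 'BEGIN { print 2 * 3 * 720 * r }')  # two snapshots, three AZs
echo "one snapshot, one AZ, 30 days: \$$one_az"
echo "two snapshots, three AZs, 30 days: \$$three_az"
```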
If you enable fast snapshot restore for a public or private snapshot that is shared with you, your account
is billed; the snapshot owner is not billed. When a snapshot that is shared with you is deleted or
unshared by the snapshot owner, fast snapshot restore is disabled for the snapshot in your account and
billing is stopped.
The EBS performance guarantees stated in Amazon EBS Product Details are valid regardless of the block-device interface.
Contents
• Install or upgrade the NVMe driver (p. 1552)
• Identify the EBS device (p. 1553)
• Work with NVMe EBS volumes (p. 1555)
• I/O operation timeout (p. 1556)
The following AMIs include the required NVMe drivers:
• Amazon Linux 2
• Amazon Linux AMI 2018.03
For more information about NVMe drivers on Windows instances, see Amazon EBS and NVMe on
Windows Instances in the Amazon EC2 User Guide for Windows Instances.
You can confirm that your instance has the NVMe driver and check the driver version using the following
command. If the instance has the NVMe driver, the command returns information about the driver.
$ modinfo nvme
• For Amazon Linux 2, Amazon Linux, CentOS, and Red Hat Enterprise Linux:
3. Ubuntu 16.04 and later include the linux-aws package, which contains the NVMe and ENA drivers
required by Nitro-based instances. Upgrade the linux-aws package to receive the latest version as
follows:
For Ubuntu 14.04, you can install the latest linux-aws package as follows:
sudo reboot
The NVMe device driver discovers devices and creates device nodes based on the order in which the devices
respond, not on how the devices are specified in the block device mapping. In Linux, NVMe device names
follow the pattern /dev/nvme<x>n<y>, where <x> is the enumeration order, and, for EBS, <y> is 1.
Occasionally, devices can respond to discovery in a different order in subsequent instance starts, which
causes the device name to change. Additionally, the device name assigned by the block device driver can
be different from the name specified in the block device mapping.
We recommend that you use stable identifiers for your EBS volumes within your instance, such as one of
the following:
• For Nitro-based instances, the block device mappings that are specified in the Amazon EC2 console
when you are attaching an EBS volume or during AttachVolume or RunInstances API calls are
captured in the vendor-specific data field of the NVMe controller identification. With Amazon Linux
AMIs later than version 2017.09.01, we provide a udev rule that reads this data and creates a symbolic
link to the block-device mapping.
• The EBS volume ID and the mount point are stable between instance state changes. The NVMe device
name can change depending on the order in which the devices respond during instance boot. We
recommend using the EBS volume ID and the mount point for consistent device identification.
• NVMe EBS volumes have the EBS volume ID set as the serial number in the device identification. Use
the lsblk -o +SERIAL command to list the serial number.
• The NVMe device name format can vary depending on whether the EBS volume was attached during
or after the instance launch. NVMe device names for volumes attached after instance launch include
the /dev/ prefix, while NVMe device names for volumes attached during instance launch do not
include the /dev/ prefix. If you are using an Amazon Linux or FreeBSD AMI, use the sudo ebsnvme-id
/dev/nvme0n1 -u command for a consistent NVMe device name. For other distributions, use the
nvme id-ctrl command, described later in this section, to determine the NVMe device name.
• When a device is formatted, a UUID is generated that persists for the life of the filesystem. A device
label can be specified at the same time. For more information, see Make an Amazon EBS volume
available for use on Linux (p. 1360) and Boot from the wrong volume (p. 1724).
With Amazon Linux AMI 2017.09.01 or later (including Amazon Linux 2), you can run the ebsnvme-id
command as follows to map the NVMe device name to a volume ID and device name:
The following example shows the command and output for a volume attached during instance launch.
Note that the NVMe device name does not include the /dev/ prefix.
The following example shows the command and output for a volume attached after instance launch.
Note that the NVMe device name includes the /dev/ prefix.
Amazon Linux also creates a symbolic link from the device name in the block device mapping (for
example, /dev/sdf), to the NVMe device name.
FreeBSD AMIs
Starting with FreeBSD 12.2-RELEASE, you can run the ebsnvme-id command as shown above. Pass
either the name of the NVMe device (for example, nvme0) or the disk device (for example, nvd0
or nda0). FreeBSD also creates symbolic links to the disk devices (for example, /dev/aws/disk/ebs/volume_id).
With a kernel version of 4.2 or later, you can run the nvme id-ctrl command as follows to map an NVMe
device to a volume ID. First, install the NVMe command line package, nvme-cli, using the package
management tools for your Linux distribution. For download and installation instructions for other
distributions, refer to the documentation specific to your distribution.
The following example gets the volume ID and NVMe device name for a volume that was attached
during instance launch. Note that the NVMe device name does not include the /dev/ prefix. The
device name is available through the NVMe controller vendor-specific extension (bytes 384:4095 of the
controller identification):
The following example gets the volume ID and NVMe device name for a volume that was attached after
instance launch. Note that the NVMe device name includes the /dev/ prefix.
The lsblk command lists available devices and their mount points (if applicable). This helps you
determine the correct device name to use. In this example, /dev/nvme0n1p1 is mounted as the root
device and /dev/nvme1n1 is attached but not mounted.
If you are using Linux kernel 4.2 or later, any change you make to the volume size of an NVMe EBS
volume is automatically reflected in the instance. For older Linux kernels, you might need to detach and
attach the EBS volume or reboot the instance for the size change to be reflected. With Linux kernel 3.19
or later, you can use the hdparm command as follows to force a rescan of the NVMe device:
When you detach an NVMe EBS volume, the instance does not have an opportunity to flush the file
system caches or metadata before detaching the volume. Therefore, before you detach an NVMe EBS
volume, you should first sync and unmount it. If the volume fails to detach, you can attempt a force-
detach command as described in Detach an Amazon EBS volume from a Linux instance (p. 1378).
If I/O latency exceeds the value of the nvme_core.io_timeout parameter, the Linux NVMe driver fails the
I/O and returns an error to the filesystem or application. Depending on the I/O operation, your filesystem
or application can retry the error. In some cases, your filesystem might be remounted as read-only.
For an experience similar to EBS volumes attached to Xen instances, we recommend setting
nvme_core.io_timeout to the highest value possible. For current kernels, the maximum is
4294967295, while for earlier kernels the maximum is 255. Depending on the version of Linux, the
timeout might already be set to the supported maximum value. For example, the timeout is set to
4294967295 by default for Amazon Linux AMI 2017.09.01 and later.
You can verify the maximum value for your Linux distribution by writing a value higher than the
suggested maximum to /sys/module/nvme_core/parameters/io_timeout and checking for the
Numerical result out of range error when attempting to save the file.
EBS–optimized instances deliver dedicated bandwidth to Amazon EBS. When attached to an EBS–
optimized instance, General Purpose SSD (gp2 and gp3) volumes are designed to deliver their baseline
and burst performance 99% of the time, and Provisioned IOPS SSD (io1 and io2) volumes are designed
to deliver their provisioned performance 99.9% of the time. Both Throughput Optimized HDD (st1) and
Cold HDD (sc1) guarantee performance consistency of 90% of burst throughput 99% of the time. Non-
compliant periods are approximately uniformly distributed, targeting 99% of expected total throughput
each hour. For more information, see Amazon EBS volume types (p. 1329).
Contents
• Supported instance types (p. 1556)
• Get maximum performance (p. 1578)
• View instances types that support EBS optimization (p. 1578)
• Enable EBS optimization at launch (p. 1579)
• Enable EBS optimization for an existing instance (p. 1580)
Instance size    Maximum bandwidth (Mbps)    Maximum throughput (MB/s, 128 KiB I/O)    Maximum IOPS (16 KiB I/O)
* These instance types can support maximum performance for 30 minutes at least once every 24 hours.
If you have a workload that requires sustained maximum performance for longer than 30 minutes, select
an instance type according to baseline performance as shown in the following table.
Instance size    Baseline bandwidth (Mbps)    Baseline throughput (MB/s, 128 KiB I/O)    Baseline IOPS (16 KiB I/O)
t4g.nano         32                           4                                          250
t4g.micro        64                           8                                          500
Instance size    Maximum bandwidth (Mbps)    Maximum throughput (MB/s, 128 KiB I/O)    Maximum IOPS (16 KiB I/O)
The i2.8xlarge, c3.8xlarge, and r3.8xlarge instances do not have dedicated EBS bandwidth and
therefore do not offer EBS optimization. On these instances, network traffic and Amazon EBS traffic
share the same 10-gigabit network interface.
The high memory instances are designed to run large in-memory databases, including production
deployments of the SAP HANA in-memory database, in the cloud. To maximize EBS performance,
use high memory instances with an even number of io1 or io2 volumes with identical provisioned
performance. For example, for IOPS heavy workloads, use four io1 or io2 volumes with 40,000
provisioned IOPS to get the maximum 160,000 instance IOPS. Similarly, for throughput heavy workloads,
use six io1 or io2 volumes with 48,000 provisioned IOPS to get the maximum 4,750 MB/s throughput.
For additional recommendations, see Storage Configuration for SAP HANA.
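The striping arithmetic behind the recommendation above is straightforward: identically provisioned volumes add up to the instance maximum.

```shell
# Four io1/io2 volumes at 40,000 provisioned IOPS each reach the
# 160,000 IOPS instance maximum cited above.
volumes=4
iops_per_volume=40000
echo $(( volumes * iops_per_volume ))   # 160000
```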
Considerations
• G4dn, I3en, Inf1, M5a, M5ad, R5a, R5ad, T3, T3a, and Z1d instances launched after February 26, 2020
provide the maximum performance listed in the table above. To get the maximum performance from
an instance launched before February 26, 2020, stop and start it.
• C5, C5d, C5n, M5, M5d, M5n, M5dn, R5, R5d, R5n, R5dn, and P3dn instances launched after
December 3, 2019 provide the maximum performance listed in the table above. To get the maximum
performance from an instance launched before December 3, 2019, stop and start it.
• u-6tb1.metal, u-9tb1.metal, and u-12tb1.metal instances launched after March 12, 2020
provide the performance in the table above. Instances of these types launched before March 12, 2020
might provide lower performance. To get the maximum performance from an instance launched
before March 12, 2020, contact your account team to upgrade the instance at no additional cost.
To view the instance types that support EBS optimization and that have it enabled by default
------------------------------------------------------------------------------------------
|                                DescribeInstanceTypes                                   |
+--------------+--------------------+---------------------+----------+--------------------+
| EBSOptimized | InstanceType       | MaxBandwidth(Mb/s)  | MaxIOPS  | MaxThroughput(MB/s)|
+--------------+--------------------+---------------------+----------+--------------------+
| default      | m5dn.8xlarge       | 6800                | 30000    | 850.0              |
| default      | m6gd.xlarge        | 4750                | 20000    | 593.75             |
| default      | c4.4xlarge         | 2000                | 16000    | 250.0              |
| default      | r4.16xlarge        | 14000               | 75000    | 1750.0             |
| default      | m5ad.large         | 2880                | 16000    | 360.0              |
...
To view the instance types that support EBS optimization but do not have it enabled by default
------------------------------------------------------------------------------------------
| DescribeInstanceTypes |
+--------------+---------------+----------------------+----------+-----------------------+
| EBSOptimized | InstanceType | MaxBandwidth(Mb/s) | MaxIOPS | MaxThroughput(MB/s) |
+--------------+---------------+----------------------+----------+-----------------------+
| supported | m2.4xlarge | 1000 | 8000 | 125.0 |
| supported | i2.2xlarge | 1000 | 8000 | 125.0 |
| supported | r3.4xlarge | 2000 | 16000 | 250.0 |
| supported | m3.xlarge | 500 | 4000 | 62.5 |
| supported | r3.2xlarge | 1000 | 8000 | 125.0 |
...
To enable Amazon EBS optimization when launching an instance using the console
To enable EBS optimization when launching an instance using the command line
You can use one of the following commands with the corresponding option. For more information about
these command line interfaces, see Access Amazon EC2 (p. 3).
To enable EBS optimization for an existing instance using the command line
1. If the instance is running, use one of the following commands to stop it:
AWS updates to the performance of EBS volume types might not immediately take effect on your
existing volumes. To see full performance on an older volume, you might first need to perform a
ModifyVolume action on it. For more information, see Modifying the Size, IOPS, or Type of an EBS
Volume on Linux.
Contents
• Amazon EBS performance tips (p. 1581)
• I/O characteristics and monitoring (p. 1583)
• Initialize Amazon EBS volumes (p. 1586)
• RAID configuration on Linux (p. 1588)
• Benchmark EBS volumes (p. 1592)
Each of these performance factors (throughput, I/O, and latency) affects the others, and different
applications are more sensitive to one factor or another. For more information, see Benchmark EBS
volumes (p. 1592).
• Access each block prior to putting the volume into production. This process is called initialization
(formerly known as pre-warming). For more information, see Initialize Amazon EBS volumes (p. 1586).
• Enable fast snapshot restore on a snapshot to ensure that the EBS volumes created from it are fully
initialized at creation and instantly deliver all of their provisioned performance. For more information,
see Amazon EBS fast snapshot restore (p. 1547).
Your performance can also be impacted if your application isn’t sending enough I/O requests. This can
be monitored by looking at your volume’s queue length and I/O size. The queue length is the number
of pending I/O requests from your application to your volume. For maximum consistency, HDD-backed
volumes must maintain a queue length (rounded to the nearest whole number) of 4 or more when
performing 1 MiB sequential I/O. For more information about ensuring consistent performance of your
volumes, see I/O characteristics and monitoring (p. 1583).
To examine the current value of read-ahead for your block devices, use the following command:
The device shown reports a read-ahead value of 256 (the default). Multiply this number by the sector
size (512 bytes) to obtain the size of the read-ahead buffer, which in this case is 128 KiB. To set the
buffer value to 1 MiB, use the following command:
Verify that the read-ahead setting now displays 2,048 by running the first command again.
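The sector arithmetic used above can be checked directly. Read-ahead is reported in 512-byte sectors, so 256 sectors is 128 KiB and a 1 MiB buffer is 2,048 sectors:

```shell
# Read-ahead buffer sizes, expressed in 512-byte sectors.
sector=512
default_kib=$(( 256 * sector / 1024 ))       # 256 sectors -> 128 KiB
target_sectors=$(( 1024 * 1024 / sector ))   # 1 MiB -> 2048 sectors
echo "default: ${default_kib} KiB; 1 MiB = ${target_sectors} sectors"
```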
Only use this setting when your workload consists of large, sequential I/Os. If it consists mostly of small,
random I/Os, this setting will actually degrade your performance. In general, if your workload consists
mostly of small or random I/Os, you should consider using a General Purpose SSD (gp2 and gp3) volume
rather than an st1 or sc1 volume.
For example, in an Amazon Linux AMI with an earlier kernel, you can add it to the end of the kernel line
in the GRUB configuration found in /boot/grub/menu.lst:
For more information, see Configuring GRUB (p. 217). Other Linux distributions, especially those that do
not use the GRUB boot loader, may require a different approach to adjusting the kernel parameters.
For more information about EBS I/O characteristics, see the Amazon EBS: Designing for Performance
re:Invent presentation on this topic.
Topics
IOPS
IOPS are a unit of measure representing input/output operations per second. The operations are
measured in KiB, and the underlying drive technology determines the maximum amount of data that a
volume type counts as a single I/O. I/O size is capped at 256 KiB for SSD volumes and 1,024 KiB for HDD
volumes because SSD volumes handle small or random I/O much more efficiently than HDD volumes.
When small I/O operations are physically sequential, Amazon EBS attempts to merge them into a single
I/O operation up to the maximum I/O size. Similarly, when I/O operations are larger than the maximum
I/O size, Amazon EBS attempts to split them into smaller I/O operations. The following table shows
some examples.
SSD (256 KiB maximum I/O size): 1 x 1,024 KiB I/O operation counts as 4 IOPS (1,024÷256=4), because
Amazon EBS splits the 1,024 KiB I/O operation into four smaller 256 KiB operations.
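The split count in this example is just the I/O size divided by the volume type's maximum I/O size:

```shell
# A 1,024 KiB I/O against an SSD volume (256 KiB maximum I/O size)
# is counted as four operations.
io_kib=1024
max_kib=256
ops=$(( io_kib / max_kib ))
echo "$ops"   # 4
```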
Consequently, when you create an SSD-backed volume supporting 3,000 IOPS (either by provisioning a
Provisioned IOPS SSD volume at 3,000 IOPS or by sizing a General Purpose SSD volume at 1,000 GiB),
and you attach it to an EBS-optimized instance that can provide sufficient bandwidth, you can transfer
up to 3,000 I/Os of data per second, with throughput determined by I/O size.
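As a sketch of "throughput determined by I/O size": multiplying the IOPS rate by the per-operation size gives the implied data rate, ignoring the volume's own throughput cap and assuming instance bandwidth is not the bottleneck.

```shell
# Implied throughput of 3,000 IOPS at the 256 KiB maximum SSD I/O size.
# This is an upper-bound sketch; real volumes also enforce a throughput cap.
iops=3000
io_kib=256
mib_per_sec=$(( iops * io_kib / 1024 ))
echo "$mib_per_sec"   # 750 (MiB/s)
```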
Optimal queue length varies for each workload, depending on your particular application's sensitivity to
IOPS and latency. If your workload is not delivering enough I/O requests to fully use the performance
available to your EBS volume, then your volume might not deliver the IOPS or throughput that you have
provisioned.
Transaction-intensive applications are sensitive to increased I/O latency and are well-suited for SSD-
backed volumes. You can maintain high IOPS while keeping latency down by maintaining a low queue
length and a high number of IOPS available to the volume. Consistently driving more IOPS to a volume
than it has available can cause increased I/O latency.
Throughput-intensive applications are less sensitive to increased I/O latency, and are well-suited for
HDD-backed volumes. You can maintain high throughput to HDD-backed volumes by maintaining a high
queue length when performing large, sequential I/O.
For smaller I/O operations, you may see a higher-than-provisioned IOPS value as measured from inside
your instance. This happens when the instance operating system merges small I/O operations into a
larger operation before passing them to Amazon EBS.
If your workload uses sequential I/Os on HDD-backed st1 and sc1 volumes, you may experience a
higher than expected number of IOPS as measured from inside your instance. This happens when the
instance operating system merges sequential I/Os and counts them in 1,024 KiB-sized units. If your
workload uses small or random I/Os, you may experience a lower throughput than you expect. This is
because we count each random, non-sequential I/O toward the total IOPS count, which can cause you to
hit the volume's IOPS limit sooner than expected.
Whatever your EBS volume type, if you are not experiencing the IOPS or throughput you expect in your
configuration, ensure that your EC2 instance bandwidth is not the limiting factor. You should always use
a current-generation, EBS-optimized instance (or one that includes 10 Gb/s network connectivity) for
optimal performance. For more information, see Amazon EBS–optimized instances (p. 1556). Another
possible cause for not experiencing the expected IOPS is that you are not driving enough I/O to the EBS
volumes.
• BurstBalance
• VolumeReadBytes
• VolumeWriteBytes
• VolumeReadOps
• VolumeWriteOps
• VolumeQueueLength
BurstBalance displays the burst bucket balance for gp2, st1, and sc1 volumes as a percentage of
the remaining balance. When your burst bucket is depleted, volume I/O (for gp2 volumes) or volume
throughput (for st1 and sc1 volumes) is throttled to the baseline. Check the BurstBalance value to
determine whether your volume is being throttled for this reason. For a complete list of the available
Amazon EBS metrics, see Amazon EBS metrics (p. 1596) and Amazon EBS metrics for Nitro-based
instances (p. 965).
HDD-backed st1 and sc1 volumes are designed to perform best with workloads that take
advantage of the 1,024 KiB maximum I/O size. To determine your volume's average I/O size, divide
VolumeWriteBytes by VolumeWriteOps. The same calculation applies to read operations. If average
I/O size is below 64 KiB, increasing the size of the I/O operations sent to an st1 or sc1 volume should
improve performance.
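The average-I/O-size calculation described above can be sketched as follows; the byte and operation counts here are hypothetical sample values, not metrics from a real volume.

```shell
# Average write I/O size = VolumeWriteBytes / VolumeWriteOps over one period.
volume_write_bytes=524288000   # hypothetical sample: 500 MiB written
volume_write_ops=4000          # hypothetical sample: 4,000 write operations
avg_kib=$(( volume_write_bytes / volume_write_ops / 1024 ))
echo "${avg_kib} KiB average write size"
```

An average well above 64 KiB, as in this sample, indicates the workload is already a good fit for st1 and sc1 volumes.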
Note
If average I/O size is at or near 44 KiB, you might be using an instance or kernel without support
for indirect descriptors. Linux kernel 3.8 and later includes this support, as do all current-
generation instances.
If your I/O latency is higher than you require, check VolumeQueueLength to make sure your application
is not trying to drive more IOPS than you have provisioned. If your application requires more IOPS than
your volume can provide, consider using a larger gp2 volume with a higher base performance level, or an
io1 or io2 volume with more provisioned IOPS, to achieve lower latency.
Related resources
For more information about Amazon EBS I/O characteristics, see the following re:Invent presentation:
Amazon EBS: Designing for Performance.
For volumes that were created from snapshots, the storage blocks must be pulled down from Amazon
S3 and written to the volume before you can access them. This preliminary action takes time and can
cause a significant increase in the latency of I/O operations the first time each block is accessed. Volume
performance is achieved after all blocks have been downloaded and written to the volume.
Important
While initializing Provisioned IOPS SSD volumes that were created from snapshots, the
performance of the volume may drop below 50 percent of its expected level, which causes the
volume to display a warning state in the I/O Performance status check. This is expected, and
you can ignore the warning state on Provisioned IOPS SSD volumes while you are initializing
them. For more information, see EBS volume status checks (p. 1370).
For most applications, amortizing the initialization cost over the lifetime of the volume is acceptable. To
avoid this initial performance hit in a production environment, you can use one of the following options:
• Force the immediate initialization of the entire volume. For more information, see Initialize Amazon
EBS volumes on Linux (p. 1587).
• Enable fast snapshot restore on a snapshot to ensure that the EBS volumes created from it are fully
initialized at creation and instantly deliver all of their provisioned performance. For more information,
see Amazon EBS fast snapshot restore (p. 1547).
For information about initializing Amazon EBS volumes on Windows, see Initializing Amazon EBS
volumes on Windows.
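You can verify how a newly attached volume appears to the operating system with lsblk. The listing below is an illustrative sketch (device names and sizes will differ on your instance):

```shell
# Show attached block devices and their mount points.
# Illustrative output:
#   NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
#   xvda1   202:1    0   8G  0 disk /
#   xvdf    202:80   0  30G  0 disk
lsblk
```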
Here you can see that the new volume, /dev/xvdf, is attached, but not mounted (because there is
no path listed under the MOUNTPOINT column).
3. Use the dd or fio utilities to read all of the blocks on the device. The dd command is installed by
default on Linux systems, but fio is considerably faster because it allows multi-threaded reads.
Note
This step may take several minutes up to several hours, depending on your EC2 instance
bandwidth, the IOPS provisioned for the volume, and the size of the volume.
[dd] The if (input file) parameter should be set to the drive you wish to initialize. The of (output
file) parameter should be set to the Linux null virtual device, /dev/null. The bs parameter sets the
block size of the read operation; for optimal performance, this should be set to 1 MB.
Important
Incorrect use of dd can easily destroy a volume's data. Be sure to follow precisely the
example command below. Only the if=/dev/xvdf parameter will vary depending on the
name of the device you are reading.
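For example, based on the parameter descriptions above, the read takes the following form (assuming the volume is attached as /dev/xvdf; substitute your own device name, and note the command is a no-op below if that device does not exist):

```shell
# Read every block on the volume once; the data read is discarded.
#   if=  the device to initialize (example name; substitute your own)
#   of=  /dev/null, which discards the data
#   bs=  1 MB read size for optimal performance
DEVICE=/dev/xvdf
if [ -b "$DEVICE" ]; then
  sudo dd if="$DEVICE" of=/dev/null bs=1M
fi
```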
[fio] If you have fio installed on your system, use the following command to initialize your volume.
The --filename (input file) parameter should be set to the drive you wish to initialize.
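A fio invocation along these lines performs the same full read with concurrent I/O (a sketch; the device name and the queue-depth and job-name values are illustrative):

```shell
# Sequentially read every block on the volume with direct, queued I/O.
DEVICE=/dev/xvdf        # example device name; substitute your own
if [ -b "$DEVICE" ]; then
  sudo fio --filename="$DEVICE" --rw=read --bs=1M --iodepth=32 \
      --ioengine=libaio --direct=1 --name=volume-initialize
fi
```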
When the operation is finished, you will see a report of the read operation. Your volume is now ready
for use. For more information, see Make an Amazon EBS volume available for use on Linux (p. 1360).
Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss
of data from the failure of any single component. This replication makes Amazon EBS volumes ten times
more reliable than typical commodity disk drives. For more information, see Amazon EBS Availability and
Durability in the Amazon EBS product detail pages.
Note
You should avoid booting from a RAID volume. Grub is typically installed on only one device in
a RAID array, and if one of the mirrored devices fails, you may be unable to boot the operating
system.
If you need to create a RAID array on a Windows instance, see RAID configuration on Windows in the
Amazon EC2 User Guide for Windows Instances.
Contents
• RAID configuration options (p. 1588)
• Create a RAID 0 array on Linux (p. 1589)
• Create snapshots of volumes in a RAID array (p. 1591)
The resulting size of a RAID 0 array is the sum of the sizes of the volumes within it, and the bandwidth
is the sum of the available bandwidth of the volumes within it. For example, two 500 GiB io1 volumes
with 4,000 provisioned IOPS each create a 1000 GiB RAID 0 array with an available bandwidth of 8,000
IOPS and 1,000 MiB/s of throughput.
Important
RAID 5 and RAID 6 are not recommended for Amazon EBS because the parity write operations
of these RAID modes consume some of the IOPS available to your volumes. Depending on the
configuration of your RAID array, these RAID modes provide 20-30% fewer usable IOPS than
a RAID 0 configuration. Increased cost is a factor with these RAID modes as well; when using
identical volume sizes and speeds, a 2-volume RAID 0 array can outperform a 4-volume RAID 6
array that costs twice as much.
RAID 1 is also not recommended for use with Amazon EBS. RAID 1 requires more Amazon
EC2 to Amazon EBS bandwidth than non-RAID configurations because the data is written to
multiple volumes simultaneously. In addition, RAID 1 does not provide any write performance
improvement.
Before you perform this procedure, you need to decide how large your RAID 0 array should be and how
many IOPS you want to provision.
Use the following procedure to create a RAID 0 array. Note that you can get directions for Windows
instances from Create a RAID 0 array on Windows in the Amazon EC2 User Guide for Windows Instances.
1. Create the Amazon EBS volumes for your array. For more information, see Create an Amazon EBS
volume (p. 1349).
Important
Create volumes with identical size and IOPS performance values for your array. Make sure
you do not create an array that exceeds the available bandwidth of your EC2 instance. For
more information, see Amazon EBS–optimized instances (p. 1556).
2. Attach the Amazon EBS volumes to the instance that you want to host the array. For more
information, see Attach an Amazon EBS volume to an instance (p. 1353).
3. Use the mdadm command to create a logical RAID device from the newly attached Amazon EBS
volumes. Substitute the number of volumes in your array for number_of_volumes and the device
names for each volume in the array (such as /dev/xvdf) for device_name. You can also substitute
MY_RAID with your own unique name for the array.
Note
You can list the devices on your instance with the lsblk command to find the device names.
To create a RAID 0 array, run the following command (note the --level=0 option to stripe the
array):
[ec2-user ~]$ sudo mdadm --create --verbose /dev/md0 --level=0 --name=MY_RAID --raid-devices=number_of_volumes device_name1 device_name2
4. Allow time for the RAID array to initialize and synchronize. You can track the progress of these
operations with the following command:
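On Linux, build and resynchronization progress for all md devices is reported in /proc/mdstat, which produces output like the listing that follows:

```shell
# Report build/resync progress for all md arrays
# (no-op on systems where the md driver is not loaded)
if [ -r /proc/mdstat ]; then
  cat /proc/mdstat
fi
```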
Personalities : [raid0]
md0 : active raid0 xvdc[1] xvdb[0]
41910272 blocks super 1.2 512k chunks
In general, you can display detailed information about your RAID array with the following command:
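For the array created above, a sketch of that command is (assuming the device /dev/md0 from the earlier step):

```shell
# Print detailed metadata for the array: name, UUID, member devices, state
if [ -b /dev/md0 ]; then
  sudo mdadm --detail /dev/md0
fi
```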
/dev/md0:
Version : 1.2
Creation Time : Wed May 19 11:12:56 2021
Name : MY_RAID
UUID : 646aa723:db31bbc7:13c43daf:d5c51e0c
Events : 0
5. Create a file system on your RAID array, and give that file system a label to use when you mount
it later. For example, to create an ext4 file system with the label MY_RAID, run the following
command:
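A sketch of that command, using the /dev/md0 device and MY_RAID label from the preceding steps:

```shell
# Create an ext4 file system labeled MY_RAID on the array device
if [ -b /dev/md0 ]; then
  sudo mkfs.ext4 -L MY_RAID /dev/md0
fi
```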
Depending on the requirements of your application or the limitations of your operating system, you
can use a different file system type, such as ext3 or XFS (consult your file system documentation for
the corresponding file system creation command).
6. To ensure that the RAID array is reassembled automatically on boot, create a configuration file to
contain the RAID information:
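A sketch of that command for Amazon Linux (the configuration file location is an assumption; see the note that follows for other distributions):

```shell
# Record the array definition so it is reassembled automatically at boot
if [ -b /dev/md0 ]; then
  sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
fi
```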
Note
If you are using a Linux distribution other than Amazon Linux, you might need to modify
this command. For example, you might need to place the file in a different location, or you
might need to add the --examine parameter. For more information, run man mdadm.conf
on your Linux instance.
7. Create a new ramdisk image to properly preload the block device modules for your new RAID
configuration:
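On Amazon Linux this is typically done with dracut (a sketch; Debian-based distributions use update-initramfs instead):

```shell
# Rebuild the initial ramdisk so the RAID modules are preloaded at boot
if [ -b /dev/md0 ]; then
  sudo dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
fi
```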
9. Finally, mount the RAID device on the mount point that you created:
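A sketch of the mount commands, assuming the mount point /mnt/raid and the MY_RAID file system label created in step 5:

```shell
# Create the mount point and mount the array by its file system label
if [ -b /dev/md0 ]; then
  sudo mkdir -p /mnt/raid
  sudo mount LABEL=MY_RAID /mnt/raid
fi
```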
10. (Optional) To mount this Amazon EBS volume on every system reboot, add an entry for the device
to the /etc/fstab file.
a. Create a backup of your /etc/fstab file that you can use if you accidentally destroy or delete
this file while you are editing it.
b. Open the /etc/fstab file using your favorite text editor, such as nano or vim.
c. Comment out any lines starting with "UUID=" and, at the end of the file, add a new line for your
RAID volume using the following format:
The last three fields on this line are the file system mount options, the dump frequency of the
file system, and the order of file system checks done at boot time. If you don't know what these
values should be, then use the values in the example below for them (defaults,nofail 0
2). For more information about /etc/fstab entries, see the fstab manual page (by entering
man fstab on the command line). For example, to mount the ext4 file system on the device with
the label MY_RAID at the mount point /mnt/raid, add the following entry to /etc/fstab.
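Such an entry, built from the values described above (a sketch; adjust the label, mount point, and file system type to match your configuration):

```
LABEL=MY_RAID   /mnt/raid   ext4   defaults,nofail   0   2
```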
Note
If you ever intend to boot your instance without this volume attached (for example,
so this volume could move back and forth between different instances), you should
add the nofail mount option that allows the instance to boot even if there are
errors in mounting the volume. On Debian derivatives, such as Ubuntu, you must also add the
nobootwait mount option.
d. After you've added the new entry to /etc/fstab, you need to check that your entry works.
Run the sudo mount -a command to mount all file systems in /etc/fstab.
If the previous command does not produce an error, then your /etc/fstab file is OK and your
file system will mount automatically at the next boot. If the command does produce any errors,
examine the errors and try to correct your /etc/fstab.
Warning
Errors in the /etc/fstab file can render a system unbootable. Do not shut down a
system that has errors in the /etc/fstab file.
e. (Optional) If you are unsure how to correct /etc/fstab errors, you can always restore your
backup /etc/fstab file with the following command.
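A sketch of that restore, assuming the backup was saved as /etc/fstab.orig (the .orig suffix is an assumed naming convention; use whatever name you gave your backup in step 10a):

```shell
# Replace the edited /etc/fstab with the pristine backup copy
if [ -e /etc/fstab.orig ]; then
  sudo mv /etc/fstab.orig /etc/fstab
fi
```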
To create a consistent set of snapshots for your RAID array, use EBS multi-volume snapshots. Multi-
volume snapshots allow you to take point-in-time, data coordinated, and crash-consistent snapshots
across multiple EBS volumes attached to an EC2 instance. You do not have to stop your instance to
coordinate between volumes to ensure consistency because snapshots are automatically taken across
multiple EBS volumes. For more information, see the steps for creating multi-volume snapshots under
Creating Amazon EBS snapshots.
Important
Some of the procedures result in the destruction of existing data on the EBS volumes you
benchmark. The benchmarking procedures are intended for use on volumes specially created for
testing purposes, not production volumes.
To create an EBS-optimized instance, choose Launch as an EBS-Optimized instance when launching the
instance using the Amazon EC2 console, or specify --ebs-optimized when using the command line. Be
sure that you launch a current-generation instance that supports this option. For more information, see
Amazon EBS–optimized instances (p. 1556).
To create Provisioned IOPS SSD (io1 and io2) or General Purpose SSD (gp2 and gp3) volumes using the
Amazon EC2 console, for Volume type, choose Provisioned IOPS SSD (io1), Provisioned IOPS SSD (io2),
General Purpose SSD (gp2), or General Purpose SSD (gp3). At the command line, specify io1, io2,
gp2, or gp3 for the --volume-type parameter. For io1, io2, and gp3 volumes, specify the number of I/
O operations per second (IOPS) for the --iops parameter. For more information, see Amazon EBS volume
types (p. 1329) and Create an Amazon EBS volume (p. 1349).
For the example tests, we recommend that you create a RAID 0 array with 6 volumes, which offers
a high level of performance. Because you are charged by gigabytes provisioned (and the number of
provisioned IOPS for io1, io2, and gp3 volumes), not the number of volumes, there is no additional
cost for creating multiple, smaller volumes and using them to create a stripe set. If you're using Oracle
Orion to benchmark your volumes, it can simulate striping the same way that Oracle ASM does, so we
recommend that you let Orion do the striping. If you are using a different benchmarking tool, you need
to stripe the volumes yourself.
For instructions on how to create a RAID 0 array with 6 volumes, see Create a RAID 0 array on
Linux (p. 1589).
To create an st1 volume, choose Throughput Optimized HDD when creating the volume using the
Amazon EC2 console, or specify --type st1 when using the command line. To create an sc1 volume,
choose Cold HDD when creating the volume using the Amazon EC2 console, or specify --type sc1
when using the command line. For information about creating EBS volumes, see Create an Amazon
EBS volume (p. 1349). For information about attaching these volumes to your instance, see Attach an
Amazon EBS volume to an instance (p. 1353).
AWS provides a JSON template for use with AWS CloudFormation that simplifies this setup procedure.
Access the template and save it as a JSON file. AWS CloudFormation allows you to configure your own
SSH keys and offers an easier way to set up a performance test environment to evaluate st1 volumes.
The template creates a current-generation instance and a 2 TiB st1 volume, and attaches the volume to
the instance at /dev/xvdf.
Tool Description
Oracle Orion Calibration Tool For calibrating the I/O performance of storage systems to be used
with Oracle databases.
These benchmarking tools support a wide variety of test parameters. You should use commands that
approximate the workloads your volumes will support. The commands provided below are intended as
examples to help you get started.
Increasing the queue length is beneficial until you achieve the provisioned IOPS, throughput, or optimal
system queue length value, which is currently set to 32. For example, a volume with 3,000 provisioned
IOPS should target a queue length of 3. You should experiment with tuning these values up or down to
see what performs best for your application.
Disable C-states
Before you run benchmarking, you should disable processor C-states. Temporarily idle cores in a
supported CPU can enter a C-state to save power. When the core is called on to resume processing, a
certain amount of time passes until the core is again fully operational. This latency can interfere with
processor benchmarking routines. For more information about C-states and which EC2 instance types
support them, see Processor state control for your EC2 instance.
2. Disable the C-states from c1 to cN. Ideally, the cores should be in state c0.
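One common approach writes to the kernel's cpuidle sysfs controls (a sketch; the interface varies by kernel and distribution, and the commands require root privileges):

```shell
# Disable every C-state deeper than C0 by writing 1 to its "disable" knob.
# Safely skips systems that do not expose cpuidle sysfs entries.
for f in /sys/devices/system/cpu/cpu*/cpuidle/state[1-9]*/disable; do
  if [ -w "$f" ]; then
    echo 1 > "$f"    # must run as root
  fi
done
```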
Perform benchmarking
The following procedures describe benchmarking commands for various EBS volume types.
Run the following commands on an EBS-optimized instance with attached EBS volumes. If the EBS
volumes were created from snapshots, be sure to initialize them before benchmarking. For more
information, see Initialize Amazon EBS volumes (p. 1586).
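As a concrete starting point, a random-write IOPS test against an SSD-backed volume could take roughly this form (a sketch; the device name and all job parameters are illustrative, not prescriptive, and this writes over the device's data):

```shell
DEVICE=/dev/xvdf        # example device name; substitute your own
if [ -b "$DEVICE" ]; then
  # 16 KiB random writes, 16 parallel jobs, direct I/O, 3-minute run
  sudo fio --filename="$DEVICE" --name=fio_randwrite_test --direct=1 \
      --rw=randwrite --bs=16k --size=1G --numjobs=16 \
      --time_based --runtime=180 --group_reporting
fi
```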
When you are finished testing your volumes, see the following topics for help cleaning up: Delete an
Amazon EBS volume (p. 1380) and Terminate your instance (p. 646).
For more information about interpreting the results, see this tutorial: Inspecting disk IO performance
with fio.
The following command performs 1 MiB sequential read operations against an attached st1 block
device (e.g., /dev/xvdf):
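A sketch of that read test (the device name and queue-depth and runtime values are illustrative):

```shell
DEVICE=/dev/xvdf        # example st1 device name; substitute your own
if [ -b "$DEVICE" ]; then
  # 1 MiB sequential reads, direct I/O, for a fixed duration
  sudo fio --filename="$DEVICE" --name=fio_direct_read_test --direct=1 \
      --rw=read --bs=1M --iodepth=8 --ioengine=libaio \
      --randrepeat=0 --time_based --runtime=180
fi
```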
The following command performs 1 MiB sequential write operations against an attached st1 block
device:
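A sketch of the corresponding write test (same caveats as the read test above, and note that it destroys the data on the device):

```shell
DEVICE=/dev/xvdf        # example st1 device name; substitute your own
if [ -b "$DEVICE" ]; then
  # 1 MiB sequential writes, direct I/O, for a fixed duration
  sudo fio --filename="$DEVICE" --name=fio_direct_write_test --direct=1 \
      --rw=write --bs=1M --iodepth=8 --ioengine=libaio \
      --randrepeat=0 --time_based --runtime=180
fi
```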
Some workloads perform a mix of sequential reads and sequential writes to different parts of the block
device. To benchmark such a workload, we recommend that you use separate, simultaneous fio jobs for
reads and writes, and use the fio offset_increment option to target different block device locations
for each job.
Running this workload is a bit more complicated than a sequential-write or sequential-read workload.
Use a text editor to create a fio job file, called fio_rw_mix.cfg in this example, that contains the
following:
[global]
clocksource=clock_gettime
randrepeat=0
runtime=180
[sequential-write]
bs=1M
ioengine=libaio
direct=1
iodepth=8
filename=/dev/<device>
1595
Amazon Elastic Compute Cloud
User Guide for Linux Instances
EBS CloudWatch metrics
do_verify=0
rw=write
rwmixread=0
rwmixwrite=100
[sequential-read]
bs=1M
ioengine=libaio
direct=1
iodepth=8
filename=/dev/<device>
do_verify=0
rw=read
rwmixread=100
rwmixwrite=0
offset=100g
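After replacing <device> with your st1 device name, run the job file; fio executes the [sequential-write] and [sequential-read] sections simultaneously by default:

```shell
# Run both jobs in fio_rw_mix.cfg concurrently
if [ -f fio_rw_mix.cfg ]; then
  sudo fio fio_rw_mix.cfg
fi
```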
For more information about interpreting the results, see this tutorial: Inspecting disk I/O performance
with fio.
Multiple fio jobs for direct I/O, even when using sequential read or write operations, can result in lower
than expected throughput for st1 and sc1 volumes. We recommend that you use one direct I/O job and
use the iodepth parameter to control the number of concurrent I/O operations.
When you get data from CloudWatch, you can include a Period request parameter to specify the
granularity of the returned data. This is different from the period that we use when we collect the data
(1-minute periods). We recommend that you specify a period in your request that is equal to or greater
than the collection period to ensure that the returned data is valid.
You can get the data using either the CloudWatch API or the Amazon EC2 console. The console takes the
raw data from the CloudWatch API and displays a series of graphs based on the data. Depending on your
needs, you might prefer to use either the data from the API or the graphs in the console.
Topics
• Amazon EBS metrics (p. 1596)
• Dimensions for Amazon EBS metrics (p. 1601)
• Graphs in the Amazon EC2 console (p. 1601)
Metrics
• Volume metrics for volumes attached to all instance types (p. 1597)
• Volume metrics for volumes attached to Nitro-based instance types (p. 1600)
• Some metrics have differences on instances that are built on the Nitro System. For a list of
these instance types, see Instances built on the Nitro System (p. 232).
• The AWS/EC2 namespace includes additional Amazon EBS metrics for volumes that are
attached to Nitro-based instances that are not bare metal instances. For more information
about these metrics, see Amazon EBS metrics for Nitro-based instances (p. 965).
Metric Description
Units: Bytes
Units: Bytes
To calculate the average read operations per second (read IOPS)
for the period, divide the total read operations in the period by
the number of seconds in that period.
Units: Count
Units: Count
VolumeTotalReadTime Note
This metric is not supported with Multi-Attach enabled
volumes.
Units: Seconds
VolumeTotalWriteTime Note
This metric is not supported with Multi-Attach enabled
volumes.
Units: Seconds
VolumeIdleTime Note
This metric is not supported with Multi-Attach enabled
volumes.
Units: Seconds
Units: Count
VolumeThroughputPercentage Note
This metric is not supported with Multi-Attach enabled
volumes.
Units: Percent
VolumeConsumedReadWriteOps Used with Provisioned IOPS SSD volumes only. The total amount
of read and write operations (normalized to 256K capacity units)
consumed in a specified period of time.
Units: Count
Units: Percent
For the volume metrics (p. 1597), the supported dimension is the volume ID (VolumeId). All available
statistics are filtered by volume ID.
For the fast snapshot restore metrics (p. 1601), the supported dimensions are the snapshot ID
(SnapshotId) and the Availability Zone (AvailabilityZone).
EBS CloudWatch events
• Average write size (KiB/op): (Sum(VolumeWriteBytes) / Sum(VolumeWriteOps)) / 1024
• Average read latency (ms/op): (Sum(VolumeTotalReadTime) / Sum(VolumeReadOps)) * 1000
• Average write latency (ms/op): (Sum(VolumeTotalWriteTime) / Sum(VolumeWriteOps)) * 1000
For the average latency graphs and average size graphs, the average is calculated over the total number
of operations (read or write, whichever is applicable to the graph) that completed during the period.
Events in CloudWatch are represented as JSON objects. The fields that are unique to the event are
contained in the "detail" section of the JSON object. The "event" field contains the event name. The
"result" field contains the completion status of the action that triggered the event. For more information,
see Event Patterns in CloudWatch Events in the Amazon CloudWatch Events User Guide.
For more information, see Using Events in the Amazon CloudWatch User Guide.
Contents
• EBS volume events (p. 1603)
• EBS snapshot events (p. 1606)
• EBS volume modification events (p. 1609)
• EBS fast snapshot restore events (p. 1610)
• Using AWS Lambda to handle CloudWatch events (p. 1611)
Events
• Create volume (createVolume) (p. 1603)
• Delete volume (deleteVolume) (p. 1604)
• Volume attach or reattach (attachVolume, reattachVolume) (p. 1605)
Event data
The listing below is an example of a JSON object emitted by EBS for a successful createVolume event.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Volume Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:012345678901:volume/vol-01234567"
],
"detail": {
"result": "available",
"cause": "",
"event": "createVolume",
"request-id": "01234567-0123-0123-0123-0123456789ab"
}
}
The listing below is an example of a JSON object emitted by EBS after a failed createVolume event.
The cause for the failure was a disabled KMS key.
{
"version": "0",
"id": "01234567-0123-0123-0123-0123456789ab",
"detail-type": "EBS Volume Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "sa-east-1",
"resources": [
"arn:aws:ec2:sa-east-1:0123456789ab:volume/vol-01234567"
],
"detail": {
"event": "createVolume",
"result": "failed",
"cause": "arn:aws:kms:sa-east-1:0123456789ab:key/01234567-0123-0123-0123-0123456789ab
is disabled.",
"request-id": "01234567-0123-0123-0123-0123456789ab"
}
}
The following is an example of a JSON object that is emitted by EBS after a failed createVolume event.
The cause for the failure was a KMS key pending import.
{
"version": "0",
"id": "01234567-0123-0123-0123-0123456789ab",
"detail-type": "EBS Volume Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "sa-east-1",
"resources": [
"arn:aws:ec2:sa-east-1:0123456789ab:volume/vol-01234567"
],
"detail": {
"event": "createVolume",
"result": "failed",
"cause": "arn:aws:kms:sa-east-1:0123456789ab:key/01234567-0123-0123-0123-0123456789ab
is pending import.",
"request-id": "01234567-0123-0123-0123-0123456789ab"
}
}
Event data
The listing below is an example of a JSON object emitted by EBS for a successful deleteVolume event.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Volume Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:012345678901:volume/vol-01234567"
],
"detail": {
"result": "deleted",
"cause": "",
"event": "deleteVolume",
"request-id": "01234567-0123-0123-0123-0123456789ab"
}
}
Event data
The listing below is an example of a JSON object emitted by EBS after a failed attachVolume event.
The cause for the failure was a KMS key pending deletion.
Note
AWS may attempt to reattach a volume following routine server maintenance.
{
"version": "0",
"id": "01234567-0123-0123-0123-0123456789ab",
"detail-type": "EBS Volume Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:0123456789ab:volume/vol-01234567",
"arn:aws:kms:us-east-1:0123456789ab:key/01234567-0123-0123-0123-0123456789ab"
],
"detail": {
"event": "attachVolume",
"result": "failed",
"cause": "arn:aws:kms:us-east-1:0123456789ab:key/01234567-0123-0123-0123-0123456789ab
is pending deletion.",
"request-id": ""
}
}
The listing below is an example of a JSON object emitted by EBS after a failed reattachVolume event.
The cause for the failure was a KMS key pending deletion.
{
"version": "0",
"id": "01234567-0123-0123-0123-0123456789ab",
"detail-type": "EBS Volume Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:0123456789ab:volume/vol-01234567",
"arn:aws:kms:us-east-1:0123456789ab:key/01234567-0123-0123-0123-0123456789ab"
],
"detail": {
"event": "reattachVolume",
"result": "failed",
"cause": "arn:aws:kms:us-east-1:0123456789ab:key/01234567-0123-0123-0123-0123456789ab
is pending deletion.",
"request-id": ""
}
}
Events
• Create snapshot (createSnapshot) (p. 1606)
• Create snapshots (createSnapshots) (p. 1606)
• Copy snapshot (copySnapshot) (p. 1608)
• Share snapshot (shareSnapshot) (p. 1609)
Event data
The listing below is an example of a JSON object emitted by EBS for a successful createSnapshot
event. In the detail section, the source field contains the ARN of the source volume. The startTime
and endTime fields indicate when creation of the snapshot started and completed.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-west-2::snapshot/snap-01234567"
],
"detail": {
"event": "createSnapshot",
"result": "succeeded",
"cause": "",
"request-id": "",
"snapshot_id": "arn:aws:ec2:us-west-2::snapshot/snap-01234567",
"source": "arn:aws:ec2:us-west-2::volume/vol-01234567",
"startTime": "yyyy-mm-ddThh:mm:ssZ",
"endTime": "yyyy-mm-ddThh:mm:ssZ"
}
}
Event data
The listing below is an example of a JSON object emitted by EBS for a successful createSnapshots
event. In the detail section, the source field contains the ARNs of the source volumes of the multi-
volume snapshot set. The startTime and endTime fields indicate when creation of the snapshot
started and completed.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Multi-Volume Snapshots Completion Status",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2::us-east-1:snapshot/snap-01234567",
"arn:aws:ec2::us-east-1:snapshot/snap-012345678"
],
"detail": {
"event": "createSnapshots",
"result": "succeeded",
"cause": "",
"request-id": "",
"startTime": "yyyy-mm-ddThh:mm:ssZ",
"endTime": "yyyy-mm-ddThh:mm:ssZ",
"snapshots": [
{
"snapshot_id": "arn:aws:ec2::us-east-1:snapshot/snap-01234567",
"source": "arn:aws:ec2::us-east-1:volume/vol-01234567",
"status": "completed"
},
{
"snapshot_id": "arn:aws:ec2::us-east-1:snapshot/snap-012345678",
"source": "arn:aws:ec2::us-east-1:volume/vol-012345678",
"status": "completed"
}
]
}
}
The listing below is an example of a JSON object emitted by EBS after a failed createSnapshots
event. The cause for the failure was that one or more snapshots in the multi-volume snapshot set failed
to complete. The values of snapshot_id are the ARNs of the failed snapshots. startTime and endTime
represent when the create-snapshots action started and ended.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Multi-Volume Snapshots Completion Status",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2::us-east-1:snapshot/snap-01234567",
"arn:aws:ec2::us-east-1:snapshot/snap-012345678"
],
"detail": {
"event": "createSnapshots",
"result": "failed",
"cause": "Snapshot snap-01234567 is in status error",
"request-id": "",
"startTime": "yyyy-mm-ddThh:mm:ssZ",
"endTime": "yyyy-mm-ddThh:mm:ssZ",
"snapshots": [
{
"snapshot_id": "arn:aws:ec2::us-east-1:snapshot/snap-01234567",
"source": "arn:aws:ec2::us-east-1:volume/vol-01234567",
"status": "error"
},
{
"snapshot_id": "arn:aws:ec2::us-east-1:snapshot/snap-012345678",
"source": "arn:aws:ec2::us-east-1:volume/vol-012345678",
"status": "error"
}
]
}
}
Event data
The listing below is an example of a JSON object emitted by EBS after a successful copySnapshot
event. The value of snapshot_id is the ARN of the newly created snapshot. In the detail section, the
value of source is the ARN of the source snapshot. startTime and endTime represent when the copy-
snapshot action started and ended.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-west-2::snapshot/snap-01234567"
],
"detail": {
"event": "copySnapshot",
"result": "succeeded",
"cause": "",
"request-id": "",
"snapshot_id": "arn:aws:ec2:us-west-2::snapshot/snap-01234567",
"source": "arn:aws:ec2:eu-west-1::snapshot/snap-76543210",
"startTime": "yyyy-mm-ddThh:mm:ssZ",
"endTime": "yyyy-mm-ddThh:mm:ssZ",
"Incremental": "true"
}
}
The listing below is an example of a JSON object emitted by EBS after a failed copySnapshot event.
The cause for the failure was an invalid source snapshot ID. The value of snapshot_id is the ARN of
the failed snapshot. In the detail section, the value of source is the ARN of the source snapshot.
startTime and endTime represent when the copy-snapshot action started and ended.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-west-2::snapshot/snap-01234567"
],
"detail": {
"event": "copySnapshot",
"result": "failed",
"cause": "Source snapshot ID is not valid",
"request-id": "",
"snapshot_id": "arn:aws:ec2:us-west-2::snapshot/snap-01234567",
"source": "arn:aws:ec2:eu-west-1::snapshot/snap-76543210",
"startTime": "yyyy-mm-ddThh:mm:ssZ",
"endTime": "yyyy-mm-ddThh:mm:ssZ"
}
}
Event data
The following is an example of a JSON object emitted by EBS after a completed shareSnapshot event.
In the detail section, the value of source is the AWS account number of the user that shared the
snapshot with you. startTime and endTime represent when the share-snapshot action started and
ended. The shareSnapshot event is emitted only when a private snapshot is shared with another user.
Sharing a public snapshot does not trigger the event.
{
"version": "0",
"id": "01234567-01234-0123-0123-012345678901",
"detail-type": "EBS Snapshot Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-west-2::snapshot/snap-01234567"
],
"detail": {
"event": "shareSnapshot",
"result": "succeeded",
"cause": "",
"request-id": "",
"snapshot_id": "arn:aws:ec2:us-west-2::snapshot/snap-01234567",
"source": "012345678901",
"startTime": "yyyy-mm-ddThh:mm:ssZ",
"endTime": "yyyy-mm-ddThh:mm:ssZ"
}
}
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Volume Notification",
"source": "aws.ec2",
"account": "012345678901",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:012345678901:volume/vol-03a55cf56513fa1b6"
],
"detail": {
"result": "optimizing",
"cause": "",
"event": "modifyVolume",
"request-id": "01234567-0123-0123-0123-0123456789ab"
}
}
The following is an example of a JSON object emitted by EBS after a fast snapshot restore state-change event.
{
"version": "0",
"id": "01234567-0123-0123-0123-012345678901",
"detail-type": "EBS Fast Snapshot Restore State-change Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1::snapshot/snap-03a55cf56513fa1b6"
],
"detail": {
"snapshot-id": "snap-1234567890abcdef0",
"state": "optimizing",
"zone": "us-east-1a",
"message": "Client.UserInitiated - Lifecycle state transition"
}
}
The possible values for state are enabling, optimizing, enabled, disabling, and disabled.
The message field indicates the cause of the state transition. The possible causes include the following:
A request to enable fast snapshot restore failed and the state transitioned to disabling or
disabled. Fast snapshot restore cannot be enabled for this snapshot.
Client.UserInitiated
A request to enable fast snapshot restore failed due to insufficient capacity, and the state
transitioned to disabling or disabled. Wait and then try again.
A request to enable fast snapshot restore failed due to an internal error, and the state transitioned
to disabling or disabled. Wait and then try again.
Client.InvalidSnapshot.InvalidState - The requested snapshot was deleted or
access permissions were revoked
The fast snapshot restore state for the snapshot has transitioned to disabling or disabled
because the snapshot was deleted or unshared by the snapshot owner. Fast snapshot restore cannot
be enabled for a snapshot that has been deleted or is no longer shared with you.
The following procedure uses the createSnapshot event to automatically copy a completed snapshot
to another Region for disaster recovery.
1. Create an IAM policy, such as the one shown in the following example, to provide permissions to use
the CopySnapshot action and write to the CloudWatch Events log. Assign the policy to the IAM user
that will handle the CloudWatch event.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CopySnapshot"
],
"Resource": "*"
}
]
}
2. Define a function in Lambda that will be available from the CloudWatch console. The sample
Lambda function below, written in Node.js, is invoked by CloudWatch when a matching
createSnapshot event is emitted by Amazon EBS (signifying that a snapshot was completed).
When invoked, the function copies the snapshot from us-east-2 to us-east-1.
// define variables
var sourceRegion = 'us-east-2';
var destinationRegion = 'us-east-1';
var AWS = require('aws-sdk');

//main function
exports.handler = (event, context, callback) => {
    // Load EC2 class and update the configuration to use destination Region to initiate the snapshot.
    AWS.config.update({region: destinationRegion});
    var ec2 = new AWS.EC2();

    // Copy the snapshot identified in the event to the destination Region.
    ec2.copySnapshot({
        SourceRegion: sourceRegion,
        SourceSnapshotId: event.detail.snapshot_id.split('/')[1]
    }, callback);
};
To ensure that your Lambda function is available from the CloudWatch console, create it in the
Region where the CloudWatch event will occur. For more information, see the AWS Lambda
Developer Guide.
3. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
4. In the navigation panel, expand Events and choose Rules, and then choose Create rule.
5. Select Event Pattern. For Service Name, choose EC2, and for Event Type, choose EBS Snapshot
Notification.
6. Select Specific event(s) and then choose createSnapshot.
7. Select Specific result(s) and then choose succeeded.
8. In the Targets section, choose Add target, and then for Function, choose the Lambda function that
you created previously.
9. Choose Configure details.
10. On the Configure rule details page, enter values for Name and Description. Select the State check
box to activate the function.
11. Choose Create rule.
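Steps 5 through 7 of this procedure define an event pattern on the rule. An equivalent event pattern, shown here as a sketch, matches successful createSnapshot events:

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EBS Snapshot Notification"],
  "detail": {
    "event": ["createSnapshot"],
    "result": ["succeeded"]
  }
}
```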
Your rule should now appear on the Rules tab. The event that you configured should be emitted by EBS
the next time you create a snapshot.
EBS quotas
For a list of Amazon EBS service quotas, see Amazon Elastic Block Store endpoints and quotas in the AWS
General Reference.
Amazon EC2 instance store
An instance store consists of one or more instance store volumes exposed as block devices. The size of an
instance store as well as the number of devices available varies by instance type.
The virtual devices for instance store volumes are ephemeral[0-23]. Instance types that support one
instance store volume have ephemeral0. Instance types that support two instance store volumes have
ephemeral0 and ephemeral1, and so on.
Contents
• Instance store lifetime (p. 1614)
• Instance store volumes (p. 1614)
• Add instance store volumes to your EC2 instance (p. 1623)
• SSD instance store volumes (p. 1627)
Instance store lifetime
The data in an instance store persists only during the lifetime of its associated instance. If an instance
reboots (intentionally or unintentionally), data in the instance store persists. However, data in the
instance store is lost under any of the following circumstances:
• The underlying disk drive fails
• The instance stops
• The instance hibernates
• The instance terminates
Therefore, do not rely on instance store for valuable, long-term data. Instead, use more durable data
storage, such as Amazon S3, Amazon EBS, or Amazon EFS.
When you stop, hibernate, or terminate an instance, every block of storage in the instance store is reset.
Therefore, your data cannot be accessed through the instance store of another instance.
If you create an AMI from an instance, the data on its instance store volumes isn't preserved and isn't
present on the instance store volumes of the instances that you launch from the AMI.
If you change the instance type, an instance store will not be attached to the new instance type. For
more information, see Change the instance type (p. 367).
Some instance types use NVMe or SATA-based solid state drives (SSD) to deliver high random I/O
performance. This is a good option when you need storage with very low latency, but you don't need the
data to persist when the instance terminates or you can take advantage of fault-tolerant architectures.
For more information, see SSD instance store volumes (p. 1627).
The data on NVMe instance store volumes and some HDD instance store volumes is encrypted at rest. For
more information, see Data protection in Amazon EC2 (p. 1215).
Instance store volumes
The following table provides the quantity, size, type, and performance optimizations of instance store
volumes available on each supported instance type. For a complete list of instance types, including
EBS-only types, see Amazon EC2 Instance Types.
g2.2xlarge 1 x 60 GB SSD ✔
m3.medium 1 x 4 GB SSD ✔
m3.large 1 x 32 GB SSD ✔
r3.large 1 x 32 GB SSD ✔
r3.xlarge 1 x 80 GB SSD ✔
* Volumes attached to certain instances suffer a first-write penalty unless initialized. For more
information, see Optimize disk performance for instance store volumes (p. 1630).
** For more information, see Instance store volume TRIM support (p. 1628).
† The c1.medium and m1.small instance types also include a 900 MB instance store swap volume,
which may not be automatically enabled at boot time. For more information, see Instance store swap
volumes (p. 1628).
You can use the describe-instance-types AWS CLI command to display information about an instance
type, such as its instance store volumes. The following example displays the total size of instance storage
for all R5 instances with instance store volumes.
aws ec2 describe-instance-types --filters "Name=instance-type,Values=r5*" "Name=instance-storage-supported,Values=true" --query "InstanceTypes[].[InstanceType, InstanceStorageInfo.TotalSizeInGB]" --output table
Example output
---------------------------
| DescribeInstanceTypes |
+----------------+--------+
| r5ad.24xlarge | 3600 |
| r5ad.12xlarge | 1800 |
| r5dn.8xlarge | 1200 |
| r5ad.8xlarge | 1200 |
| r5ad.large | 75 |
| r5d.4xlarge | 600 |
. . .
| r5dn.2xlarge | 300 |
| r5d.12xlarge | 1800 |
+----------------+--------+
The following example displays the complete instance storage details for the specified instance type.
The example output shows that this instance type has two 300 GB NVMe SSD volumes, for a total of 600
GB of instance storage.
aws ec2 describe-instance-types --instance-types r5d.4xlarge --query "InstanceTypes[].InstanceStorageInfo"
[
{
"TotalSizeInGB": 600,
"Disks": [
{
"SizeInGB": 300,
"Count": 2,
"Type": "ssd"
}
],
"NvmeSupport": "required"
}
]
All the NVMe instance store volumes supported by an instance type are automatically enumerated and
assigned a device name on instance launch; including them in the block device mapping for the AMI or
the instance has no effect. For more information, see Block device mappings (p. 1647).
A block device mapping always specifies the root volume for the instance. The root volume is either
an Amazon EBS volume or an instance store volume. For more information, see Storage for the root
device (p. 96). The root volume is mounted automatically. For instances with an instance store volume for
the root volume, the size of this volume varies by AMI, but the maximum size is 10 GB.
You can use a block device mapping to specify additional EBS volumes when you launch your instance, or
you can attach additional EBS volumes after your instance is running. For more information, see Amazon
EBS volumes (p. 1327).
You can specify the instance store volumes for your instance only when you launch it. You can't attach
instance store volumes to an instance after you've launched it.
If you change the instance type, an instance store will not be attached to the new instance type. For
more information, see Change the instance type (p. 367).
The number and size of available instance store volumes for your instance varies by instance type. Some
instance types do not support instance store volumes. If the number of instance store volumes in a block
device mapping exceeds the number of instance store volumes available to an instance, the additional
volumes are ignored. For more information about the instance store volumes supported by each instance
type, see Instance store volumes (p. 1614).
If the instance type you choose for your instance supports non-NVMe instance store volumes, you must
add them to the block device mapping for the instance when you launch it. NVMe instance store volumes
are available by default. After you launch an instance, you must ensure that the instance store volumes
for your instance are formatted and mounted before you can use them. The root volume of an instance
store-backed instance is mounted automatically.
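In a launch request, each non-NVMe instance store volume is mapped by pairing a device name with a virtual name (ephemeral0, ephemeral1, and so on). The following block device mapping is a sketch; the device names are assumptions and vary by operating system and instance type:

```json
[
  { "DeviceName": "/dev/sdb", "VirtualName": "ephemeral0" },
  { "DeviceName": "/dev/sdc", "VirtualName": "ephemeral1" }
]
```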
Contents
• Add instance store volumes to an AMI (p. 1624)
• Add instance store volumes to an instance (p. 1625)
• Make instance store volumes available on your instance (p. 1625)
Considerations
• For M3 instances, specify instance store volumes in the block device mapping of the instance, not
the AMI. Amazon EC2 might ignore instance store volumes that are specified only in the block device
mapping of the AMI.
• When you launch an instance, you can omit non-NVMe instance store volumes specified in the AMI
block device mapping or add instance store volumes.
New console
To add instance store volumes to an Amazon EBS-backed AMI using the console
Old console
To add instance store volumes to an Amazon EBS-backed AMI using the console
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
Considerations
• For M3 instances, you might receive instance store volumes even if you do not specify them in the
block device mapping for the instance.
• For HS1 instances, no matter how many instance store volumes you specify in the block device
mapping of an AMI, the block device mapping for an instance launched from the AMI automatically
includes the maximum number of supported instance store volumes. You must explicitly remove the
instance store volumes that you don't want from the block device mapping for the instance before you
launch it.
To update the block device mapping for an instance using the console
To update the block device mapping for an instance using the command line
You can use one of the following commands. For more information about these command line interfaces,
see Access Amazon EC2 (p. 3).
The instance type determines which instance store volumes are mounted for you and which are available
for you to mount yourself. For Windows instances, the EC2Config service mounts the instance store
volumes for an instance. The block device driver for the instance assigns the actual volume name when
mounting the volume, and the name assigned can be different than the name that Amazon EC2 recommends.
Many instance store volumes are pre-formatted with the ext3 file system. SSD-based instance store
volumes that support TRIM instruction are not pre-formatted with any file system. However, you can
format volumes with the file system of your choice after you launch your instance. For more information,
see Instance store volume TRIM support (p. 1628). For Windows instances, the EC2Config service
reformats the instance store volumes with the NTFS file system.
You can confirm that the instance store devices are available from within the instance itself using
instance metadata. For more information, see View the instance block device mapping for instance store
volumes (p. 1655).
For Windows instances, you can also view the instance store volumes using Windows Disk Management.
For more information, see List disks using Windows Disk Management.
For Linux instances, you can view and mount the instance store volumes as described in the following
procedure.
1. Connect to the instance using an SSH client. For more information, see Connect to your Linux
instance (p. 596).
2. Use the df -h command to view the volumes that are formatted and mounted.
[ec2-user ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 72K 3.8G 1% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
/dev/nvme0n1p1 7.9G 1.2G 6.6G 15% /
3. Use the lsblk command to view any volumes that were mapped at launch but not formatted and mounted.
4. To format and mount an instance store volume that was mapped only, do the following:
a. Create a file system on the device using the mkfs command.
b. Create a directory on which to mount the device using the mkdir command.
c. Mount the device on the newly created directory using the mount command.
For instructions on how to mount an attached volume automatically after reboot, see Automatically
mount an attached volume after reboot (p. 1362).
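Put together, formatting and mounting an unformatted instance store volume looks like the following sketch. The device name /dev/nvme1n1 and the mount point /data are assumptions for illustration; take the device name from your own lsblk output. These commands destroy any data already on the device.

```shell
# Create a file system on the device, make a mount point, and mount the device.
# Device name and mount point are illustrative; check lsblk output first.
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data
df -h /data   # confirm the volume is mounted
```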
SSD instance store volumes
Like other instance store volumes, you must map the SSD instance store volumes for your instance when
you launch it. The data on an SSD instance volume persists only for the life of its associated instance. For
more information, see Add instance store volumes to your EC2 instance (p. 1623).
To access NVMe volumes, the NVMe drivers (p. 1552) must be installed. The following AMIs meet this
requirement:
• Amazon Linux 2
• Amazon Linux AMI 2018.03
• Ubuntu 14.04 (with linux-aws kernel) or later
• Red Hat Enterprise Linux 7.4 or later
• SUSE Linux Enterprise Server 12 SP2 or later
• CentOS 7.4.1708 or later
• FreeBSD 11.1 or later
• Debian GNU/Linux 9 or later
After you connect to your instance, you can list the NVMe devices using the lspci command. The
following is example output for an i3.8xlarge instance, which supports four NVMe devices.
If you are using a supported operating system but you do not see the NVMe devices, verify that the
NVMe module is loaded using the following command.
• Amazon Linux, Amazon Linux 2, Ubuntu 14/16, Red Hat Enterprise Linux, SUSE Linux Enterprise
Server, CentOS 7
$ lsmod | grep nvme
• Ubuntu 18
$ cat /lib/modules/$(uname -r)/modules.builtin | grep nvme
The NVMe volumes are compliant with the NVMe 1.0e specification. You can use the NVMe commands
with your NVMe volumes. With Amazon Linux, you can install the nvme-cli package from the repo
using the yum install command. With other supported versions of Linux, you can download the nvme-
cli package if it's not available in the image.
The data on NVMe instance storage is encrypted using an XTS-AES-256 block cipher implemented in a
hardware module on the instance. The encryption keys are generated using the hardware module and
are unique to each NVMe instance storage device. All encryption keys are destroyed when the instance
is stopped or terminated and cannot be recovered. You cannot disable this encryption and you cannot
provide your own encryption key.
Instance store volume TRIM support
Instance store volumes that support TRIM are fully trimmed before they are allocated to your instance.
These volumes are not formatted with a file system when an instance launches, so you must format
them before they can be mounted and used. For faster access to these volumes, you should skip the
TRIM operation when you format them.
With instance store volumes that support TRIM, you can use the TRIM command to notify the SSD
controller when you no longer need data that you've written. This provides the controller with more
free space, which can reduce write amplification and increase performance. On Linux, use the fstrim
command to enable periodic TRIM.
Instance store swap volumes
The c1.medium and m1.small instance types have a limited amount of physical memory to work with,
and they are given a 900 MiB swap volume at launch time to act as virtual memory for Linux AMIs.
Although the Linux kernel sees this swap space as a partition on the root device, it is actually a separate
instance store volume, regardless of your root device type.
Amazon Linux automatically enables and uses this swap space, but your AMI may require some
additional steps to recognize and use this swap space. To see if your instance is using swap space, you
can use the swapon -s command.
The above instance has a 900 MiB swap volume attached and enabled. If you don't see a swap volume
listed with this command, you may need to enable swap space for the device. Check your available disks
using the lsblk command.
Here, the swap volume xvda3 is available to the instance, but it is not enabled (notice that the
MOUNTPOINT field is empty). You can enable the swap volume with the swapon command.
Note
You must prepend /dev/ to the device name listed by lsblk. Your device may be named
differently, such as sda3, sde3, or xvde3. Use the device name for your system in the command
below.
Now the swap space should show up in lsblk and swapon -s output.
You also need to edit your /etc/fstab file so that this swap space is automatically enabled at every
system boot.
Append the following line to your /etc/fstab file (using the swap device name for your system):
/dev/xvda3 none swap sw 0 0
Any instance store volume can be used as swap space. For example, the m3.medium instance type
includes a 4 GB SSD instance store volume that is appropriate for swap space. If your instance store
volume is much larger (for example, 350 GB), you may consider partitioning the volume with a smaller
swap partition of 4-8 GB and the rest for a data volume.
Note
This procedure applies only to instance types that support instance storage. For a list of
supported instance types, see Instance store volumes (p. 1614).
1. List the block devices attached to your instance to get the device name for your instance store
volume.
In this example, the instance store volume is /dev/xvdb. Because this is an Amazon Linux instance,
the instance store volume is formatted and mounted at /media/ephemeral0; not all Linux
operating systems do this automatically.
2. (Optional) If your instance store volume is mounted (it lists a MOUNTPOINT in the lsblk command
output), unmount it with the following command.
3. Set up a Linux swap area on the device with the mkswap command.
4. Enable the new swap space with the swapon command.
5. Verify that the new swap space is being used with the swapon -s command.
6. Edit your /etc/fstab file so that this swap space is automatically enabled at every system boot.
If your /etc/fstab file has an entry for /dev/xvdb (or /dev/sdb), change it to match the line
below; if it does not have an entry for this device, append the following line to your /etc/fstab
file (using the swap device name for your system):
/dev/xvdb none swap sw 0 0
Important
Instance store volume data is lost when an instance is stopped or hibernated; this includes
the instance store swap space formatting created in Step 3 (p. 1630). If you stop and restart
an instance that has been configured to use instance store swap space, you must repeat
Step 1 (p. 1629) through Step 5 (p. 1630) on the new instance store volume.
Optimize disk performance for instance store volumes
Note
Some instance types with direct-attached solid state drives (SSD) and TRIM support provide
maximum performance at launch time, without initialization. For information about the instance
store for each instance type, see Instance store volumes (p. 1614).
If you require greater flexibility in latency or throughput, we recommend using Amazon EBS.
To initialize the instance store volumes, use the following dd commands, depending on the store to
initialize (for example, /dev/sdb or /dev/nvme1n1).
Note
Make sure to unmount the drive before performing this command.
Initialization can take a long time (about 8 hours for an extra large instance).
To initialize the instance store volumes, use the following commands on the m1.large, m1.xlarge,
c1.xlarge, m2.xlarge, m2.2xlarge, and m2.4xlarge instance types:
dd if=/dev/zero of=/dev/sdb bs=1M
dd if=/dev/zero of=/dev/sdc bs=1M
dd if=/dev/zero of=/dev/sdd bs=1M
dd if=/dev/zero of=/dev/sde bs=1M
To perform initialization on all instance store volumes at the same time, use the following command:
dd if=/dev/zero bs=1M|tee /dev/sdb|tee /dev/sdc|tee /dev/sde > /dev/sdd
Configuring drives for RAID initializes them by writing to every drive location. When configuring
software-based RAID, make sure to change the minimum reconstruction speed:
echo $((30*1024)) > /proc/sys/dev/raid/speed_limit_min
File storage
Cloud file storage is a method for storing data in the cloud that provides servers and applications access
to data through shared file systems. This compatibility makes cloud file storage ideal for workloads that
rely on shared file systems and provides simple integration without code changes.
Many file storage solutions exist, ranging from a single-node file server on a compute
instance that uses block storage as the underpinnings, with no scalability and few redundancies to protect
the data, to a do-it-yourself clustered solution, to a fully managed solution. The following content
introduces some of the storage services provided by AWS for use with Linux.
Contents
• Use Amazon S3 with Amazon EC2 (p. 1631)
• Use Amazon EFS with Amazon EC2 (p. 1633)
Amazon S3
Amazon S3 stores data objects redundantly on multiple devices across multiple facilities and allows concurrent
read or write access to these data objects by many separate clients or application threads. You can use
the redundant data stored in Amazon S3 to recover quickly and reliably from instance or application
failures.
Amazon EC2 uses Amazon S3 for storing Amazon Machine Images (AMIs). You use AMIs for launching
EC2 instances. In case of instance failure, you can use the stored AMI to immediately launch another
instance, thereby allowing for fast recovery and business continuity.
Amazon EC2 also uses Amazon S3 to store snapshots (backup copies) of the data volumes. You can use
snapshots for recovering data quickly and reliably in case of application or system failures. You can
also use snapshots as a baseline to create multiple new data volumes, expand the size of an existing
data volume, or move data volumes across multiple Availability Zones, thereby making your data usage
highly scalable. For more information about using data volumes and snapshots, see Amazon Elastic Block
Store (p. 1325).
Objects are the fundamental entities stored in Amazon S3. Every object stored in Amazon S3 is
contained in a bucket. Buckets organize the Amazon S3 namespace at the highest level and identify
the account responsible for that storage. Amazon S3 buckets are similar to internet domain names.
Objects stored in the buckets have a unique key value and are retrieved using a URL. For example, if an
object with a key value /photos/mygarden.jpg is stored in the DOC-EXAMPLE-BUCKET1 bucket, then
it is addressable using the URL https://DOC-EXAMPLE-BUCKET1.s3.amazonaws.com/photos/
mygarden.jpg.
For more information about the features of Amazon S3, see the Amazon S3 product page.
Usage examples
Given the benefits of Amazon S3 for storage, you might decide to use this service to store files and data
sets for use with EC2 instances. There are several ways to move data to and from Amazon S3 to your
instances. In addition to the examples discussed below, there are a variety of tools that people have
written that you can use to access your data in Amazon S3 from your computer or your instance. Some of
the common ones are discussed in the AWS forums.
If you have permission, you can copy a file to or from Amazon S3 and your instance using one of the
following methods.
GET or wget
Note
This method works for public objects only. If the object is not public, you receive an ERROR
403: Forbidden message. If you receive this error, you must use either the Amazon S3
console, AWS CLI, AWS API, AWS SDK, or AWS Tools for Windows PowerShell, and you must
have the required permissions. For more information, see Identity and access management in
Amazon S3 and Downloading an object in the Amazon S3 User Guide.
The wget utility is an HTTP and FTP client that allows you to download public objects from Amazon S3.
It is installed by default in Amazon Linux and most other distributions, and available for download on
Windows. To download an Amazon S3 object, use the following command, substituting the URL of the
object to download.
[ec2-user ~]$ wget https://DOC-EXAMPLE-BUCKET1.s3.amazonaws.com/photos/mygarden.jpg
AWS Command Line Interface
The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. The AWS
CLI enables users to authenticate themselves and download restricted items from Amazon S3 and also
to upload items. For more information, such as how to install and configure the tools, see the AWS
Command Line Interface detail page.
The aws s3 cp command is similar to the Unix cp command. You can copy files from Amazon S3 to your
instance, copy files from your instance to Amazon S3, and copy files from one Amazon S3 location to
another.
Use the following command to copy an object from Amazon S3 to your instance.
[ec2-user ~]$ aws s3 cp s3://DOC-EXAMPLE-BUCKET1/my_folder/my_file.ext my_copied_file.ext
Use the following command to copy an object from your instance back into Amazon S3.
[ec2-user ~]$ aws s3 cp my_copied_file.ext s3://DOC-EXAMPLE-BUCKET1/my_folder/my_file.ext
The aws s3 sync command can synchronize an entire Amazon S3 bucket to a local directory location. This
can be helpful for downloading a data set and keeping the local copy up-to-date with the remote set. If
you have the proper permissions on the Amazon S3 bucket, you can push your local directory back up to
the cloud when you are finished by reversing the source and destination locations in the command.
Use the following command to download an entire Amazon S3 bucket to a local directory on your
instance.
[ec2-user ~]$ aws s3 sync s3://remote_S3_bucket local_directory
Amazon S3 API
If you are a developer, you can use an API to access data in Amazon S3. For more information, see the
Amazon Simple Storage Service User Guide. You can use this API and its examples to help develop your
application and integrate it with other APIs and SDKs, such as the boto Python interface.
Use Amazon EFS with Amazon EC2
You can mount an EFS file system to your instance in the following ways:
Topics
• Create an EFS file system using Amazon EFS Quick Create (p. 1633)
• Create an EFS file system and mount it to your instance (p. 1634)
When you create an EFS file system using EFS Quick Create, the file system is created with the following
service recommended settings:
• Automatic backups turned on. For more information, see Using AWS Backup with Amazon EFS in the
Amazon Elastic File System User Guide.
• Mount targets in each default subnet in the selected VPC, using the VPC's default security group. For
more information, see Managing file system network accessibility in the Amazon Elastic File System
User Guide.
• General Purpose performance mode. For more information, see Performance Modes in the Amazon
Elastic File System User Guide.
• Bursting throughput mode. For more information, see Throughput Modes in the Amazon Elastic File
System User Guide.
• Encryption of data at rest enabled using your default key for Amazon EFS (aws/
elasticfilesystem). For more information, see Encrypting Data at Rest in the Amazon Elastic File
System User Guide.
• Amazon EFS lifecycle management enabled with a 30-day policy. For more information, see EFS
lifecycle management in the Amazon Elastic File System User Guide.
To enable access to the file system, the following security groups are automatically created and
attached to the instance and the mount targets of the file system.
• Instance security group—Includes no inbound rules and an outbound rule that allows traffic over
the NFS 2049 port.
• File system mount targets security group—Includes an inbound rule that allows traffic over the
NFS 2049 port from the instance security group (described above), and an outbound rule that
allows traffic over the NFS 2049 port.
You can also choose to manually create and attach the security groups. To do this, clear
Automatically create and attach the required security groups.
Configure the remaining settings as needed and choose Next: Add Storage.
6. On the Add Storage page, specify the volumes to attach to the instances, in addition to the volumes
specified by the AMI (such as the root device volume). Ensure that you provision enough storage for
the Nvidia CUDA Toolkit. Then choose Next: Add Tags.
7. On the Add Tags page, specify a tag that you can use to identify the temporary instance, and then
choose Next: Configure Security Group.
8. On the Configure Security Group page, review the security groups and then choose Review and
Launch.
9. On the Review Instance Launch page, review the settings, and then choose Launch to choose a key
pair and to launch your instance.
Tasks
• Prerequisites (p. 1635)
• Step 1: Create an EFS file system (p. 1635)
• Step 2: Mount the file system (p. 1635)
Prerequisites
• Create a security group (for example, efs-sg) to associate with the EC2 instances and EFS mount target,
and add the following rules:
• Allow inbound SSH connections to the EC2 instances from your computer (the source is the CIDR
block for your network).
• Allow inbound NFS connections to the file system via the EFS mount target from the EC2 instances
that are associated with this security group (the source is the security group itself). For more
information, see Amazon EFS rules (p. 1321), and Creating security Groups in the Amazon Elastic File
System User Guide.
• Create a key pair. You must specify a key pair when you configure your instances or you can't connect
to them. For more information, see Create a key pair (p. 5).
3. For Step 1: Choose an Amazon Machine Image (AMI), select an Amazon Linux AMI.
4. For Step 2: Choose an Instance Type, keep the default instance type, t2.micro, and choose Next:
Configure Instance Details.
5. For Step 3: Configure Instance Details, do the following:
b. [Nondefault VPC] Select your VPC for Network, and a public subnet from Subnet.
c. [Nondefault VPC] For Auto-assign Public IP, choose Enable. Otherwise, your instances do not
get public IP addresses or public DNS names.
d. For File systems, choose Add file system. Ensure that the value matches the file system ID
that you created in Step 1: Create an EFS file system (p. 1635). The path shown next to the file
system ID is the mount point that the instance will use, which you can change. Under Advanced
Details, the User data is automatically generated, and includes the commands needed to mount
the file system.
e. Advance to Step 6 of the wizard.
6. On the Configure Security Group page, choose Select an existing security group and select the
security group that you created in Prerequisites (p. 1635). Then choose Review and Launch.
7. On the Review Instance Launch page, choose Launch.
8. In the Select an existing key pair or create a new key pair dialog box, select Choose an existing
key pair and choose your key pair. Select the acknowledgment check box, and choose Launch
Instances.
9. In the navigation pane, choose Instances to see the status of your instances. Initially, their status is
pending. After the status changes to running, your instances are ready for use.
Your instance is now configured to mount the Amazon EFS file system at launch and whenever it's
rebooted.
1. Connect to your instances. For more information, see Connect to your Linux instance (p. 596).
2. From the terminal window for each instance, run the df -T command to verify that the EFS file
system is mounted.
$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/xvda1 ext4 8123812 1949800 6073764 25% /
devtmpfs devtmpfs 4078468 56 4078412 1% /dev
tmpfs tmpfs 4089312 0 4089312 0% /dev/shm
efs-dns nfs4 9007199254740992 0 9007199254740992 0% /mnt/efs
Note that the name of the file system, shown in the example output as efs-dns, has the following
form.
file-system-id.efs.aws-region.amazonaws.com:/
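As a sketch, the mount target DNS name can be assembled from the file system ID and Region and passed to mount the same way the generated user data does. The ID and Region below are placeholders; the mount command is printed rather than executed, because running it requires root and a reachable EFS mount target.

```shell
# Placeholder file system ID and Region; substitute your own values.
EFS_ID="fs-12345678"
AWS_REGION="us-east-1"
EFS_DNS="${EFS_ID}.efs.${AWS_REGION}.amazonaws.com"

# Print the mount command rather than running it.
echo "sudo mount -t nfs4 -o nfsvers=4.1 ${EFS_DNS}:/ /mnt/efs"
```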
3. (Optional) Create a file in the file system from one instance, and then verify that you can view the
file from the other instance.
a. From the first instance, run the following command to create the file.
$ sudo touch /mnt/efs/test-file.txt
b. From the second instance, run the following command to view the file.
$ ls /mnt/efs
test-file.txt
Step 4: Clean up
When you are finished with this tutorial, you can terminate the instances and delete the file system.
Contents
• Nitro System volume limits (p. 1637)
• Linux-specific volume limits (p. 1638)
• Bandwidth versus capacity (p. 1638)
Most of these instances support a maximum of 28 attachments. For example, if you have no additional
network interface attachments on an EBS-only instance, you can attach up to 27 EBS volumes to it. If
you have one additional network interface on an instance with 2 NVMe instance store volumes, you can
attach 24 EBS volumes to it.
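The shared-attachment arithmetic above can be sketched as plain shell arithmetic; the counts below are the assumed values from the second example, not fixed properties of any particular instance type.

```shell
# Nitro instances described above share 28 attachment slots among
# network interfaces, NVMe instance store volumes, and EBS volumes.
TOTAL_SLOTS=28
NETWORK_INTERFACES=2    # primary interface plus one additional
NVME_INSTANCE_STORE=2   # assumed for this example
EBS_LIMIT=$((TOTAL_SLOTS - NETWORK_INTERFACES - NVME_INSTANCE_STORE))
echo "$EBS_LIMIT"       # prints 24, matching the example above
```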
You can choose between AMIs backed by Amazon EC2 instance store and AMIs backed by Amazon EBS.
We recommend that you use AMIs backed by Amazon EBS, because they launch faster and use persistent
storage.
Important
Only the following instance types support an instance store volume as the root device: C3, D2,
G2, I2, M3, and R3.
For more information about the device names Amazon EC2 uses for your root volumes, see Device names
on Linux instances (p. 1645).
Contents
• Root device storage concepts (p. 1639)
• Choose an AMI by root device type (p. 1640)
• Determine the root device type of your instance (p. 1641)
• Change the root volume to persist (p. 1642)
• Change the initial size of the root volume (p. 1645)
Instances that use instance stores for the root device automatically have one or more instance store
volumes available, with one volume serving as the root device volume. When an instance is launched, the
image that is used to boot the instance is copied to the root volume. Note that you can optionally use
additional instance store volumes, depending on the instance type.
Any data on the instance store volumes persists as long as the instance is running, but this data is
deleted when the instance is terminated (instance store-backed instances do not support the Stop
action) or if it fails (such as if an underlying drive has issues).
After an instance store-backed instance fails or terminates, it cannot be restored. If you plan to use
Amazon EC2 instance store-backed instances, we highly recommend that you distribute the data on
your instance stores across multiple Availability Zones. You should also back up critical data from your
instance store volumes to persistent storage on a regular basis.
For more information, see Amazon EC2 instance store (p. 1613).
Instances that use Amazon EBS for the root device automatically have an Amazon EBS volume attached.
When you launch an Amazon EBS-backed instance, we create an Amazon EBS volume for each Amazon
EBS snapshot referenced by the AMI you use. You can optionally use other Amazon EBS volumes or
instance store volumes, depending on the instance type.
An Amazon EBS-backed instance can be stopped and later restarted without affecting data stored in the
attached volumes. There are various instance- and volume-related tasks you can do when an Amazon
EBS-backed instance is in a stopped state. For example, you can modify the properties of the instance,
change its size, or update the kernel it is using; you can also attach your root volume to a different
running instance for debugging or any other purpose.
If an Amazon EBS-backed instance fails, you can restore your session by following one of these methods:
Console
3. From the filter lists, select the image type (such as Public images). In the search bar, choose
Platform to select the operating system (such as Amazon Linux), and Root Device Type to
select EBS images.
4. (Optional) To get additional information to help you make your choice, choose the Show/Hide
Columns icon, update the columns to display, and choose Close.
5. Choose an AMI and write down its AMI ID.
AWS CLI
To verify the type of the root device volume of an AMI using the command line
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
Old console
AWS CLI
To determine the root device type of an instance using the command line
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
Tasks
• Configure the root volume to persist during instance launch (p. 1642)
• Configure the root volume to persist for an existing instance (p. 1643)
• Confirm that a root volume is configured to persist (p. 1644)
Console
To configure the root volume to persist when you launch an instance using the console
AWS CLI
To configure the root volume to persist when you launch an instance using the AWS CLI
Use the run-instances command and include a block device mapping that sets the
DeleteOnTermination attribute to false.
[
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": false
}
}
]
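For example, the mapping above could be saved to a file and referenced from run-instances as sketched below. The AMI ID, instance type, and key name are placeholders, and the launch call is echoed rather than executed because it requires AWS credentials.

```shell
# Save the block device mapping shown above to a file.
cat > mapping.json <<'EOF'
[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]
EOF

# The launch command itself (shown, not run; IDs are placeholders):
echo 'aws ec2 run-instances --image-id ami-0abcdef1234567890 \
  --instance-type t2.micro --key-name MyKeyPair \
  --block-device-mappings file://mapping.json'
```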
To configure the root volume to persist when you launch an instance using the Tools for
Windows PowerShell
Use the New-EC2Instance command and include a block device mapping that sets the
DeleteOnTermination attribute to false.
AWS CLI
To configure the root volume to persist for an existing instance using the AWS CLI
Use the modify-instance-attribute command with a block device mapping that sets the
DeleteOnTermination attribute to false.
[
{
"DeviceName": "/dev/xvda",
"Ebs": {
"DeleteOnTermination": false
}
}
]
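A minimal sketch of that call, using the mapping above and a placeholder instance ID (the command is echoed rather than executed, since it requires AWS credentials):

```shell
# Mapping matches the example above; the instance ID is a placeholder.
cat > mapping.json <<'EOF'
[
  {
    "DeviceName": "/dev/xvda",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]
EOF
echo 'aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 \
  --block-device-mappings file://mapping.json'
```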
To configure the root volume to persist for an existing instance using the AWS Tools for Windows
PowerShell
Use the Edit-EC2InstanceAttribute command with a block device mapping that sets the
DeleteOnTermination attribute to false.
New console
To confirm that a root volume is configured to persist using the Amazon EC2 console
Old console
To confirm that a root volume is configured to persist using the Amazon EC2 console
AWS CLI
To confirm that a root volume is configured to persist using the AWS CLI
Use the describe-instances command and verify that the DeleteOnTermination attribute in the
BlockDeviceMappings response element is set to false.
...
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"Status": "attached",
"DeleteOnTermination": false,
"VolumeId": "vol-1234567890abcdef0",
"AttachTime": "2013-07-19T02:42:39.000Z"
}
}
...
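One way to check the flag without reading the whole response by eye is to filter the output. The sketch below runs the filter over a saved, abridged sample response (hypothetical values) so that it can run anywhere; on a real instance you would pipe the describe-instances output instead.

```shell
# Abridged sample of a describe-instances response (hypothetical values).
cat > response.json <<'EOF'
{"Reservations": [{"Instances": [{"BlockDeviceMappings": [
  {"DeviceName": "/dev/sda1",
   "Ebs": {"Status": "attached", "DeleteOnTermination": false}}
]}]}]}
EOF

# A persistent root volume shows up as DeleteOnTermination: false.
grep -c '"DeleteOnTermination": false' response.json   # prints 1
```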
To confirm that a root volume is configured to persist using the AWS Tools for Windows
PowerShell
Use the Get-EC2Instance and verify that the DeleteOnTermination attribute in the
BlockDeviceMappings response element is set to false.
1. Determine the device name of the root volume specified in the AMI, as described in View the EBS
volumes in an AMI block device mapping (p. 1652).
2. Confirm the size of the snapshot specified in the AMI block device mapping, as described in View
Amazon EBS snapshot information (p. 1417).
3. Override the size of the root volume using the instance block device mapping, as described in Update
the block device mapping when launching an instance (p. 1652), specifying a volume size that is
larger than the snapshot size.
For example, the following entry for the instance block device mapping increases the size of the root
volume, /dev/xvda, to 100 GiB. You can omit the snapshot ID in the instance block device mapping
because the snapshot ID is already specified in the AMI block device mapping.
{
"DeviceName": "/dev/xvda",
"Ebs": {
"VolumeSize": 100
}
}
The number of volumes that your instance can support is determined by the operating system. For more
information, see Instance volume limits (p. 1637).
Contents
• Available device names (p. 1646)
• Device name considerations (p. 1646)
For information about device names on Windows instances, see Device naming on Windows instances in
the Amazon EC2 User Guide for Windows Instances.
The following table lists the available device names that you can specify in a block device mapping or
when attaching an EBS volume.
Paravirtual (PV)
• Available: /dev/sd[a-z], /dev/sd[a-z][1-15], /dev/hd[a-z], /dev/hd[a-z][1-15]
• Reserved for root: /dev/sda1
• Recommended for EBS volumes: /dev/sd[f-p], /dev/sd[f-p][1-6]
• Instance store volumes: /dev/sd[b-e]
HVM
• Available: /dev/sd[a-z], /dev/xvd[b-c][a-z]
• Reserved for root: /dev/sda1 or /dev/xvda (depends on the AMI)
• Recommended for EBS volumes: /dev/sd[f-p]*
• Instance store volumes: /dev/sd[b-e] (/dev/sd[b-y] for d2.8xlarge, /dev/sd[b-i] for i2.8xlarge)**
* The device names that you specify for NVMe EBS volumes in a block device mapping are renamed using
NVMe device names (/dev/nvme[0-26]n1). The block device driver can assign NVMe device names in a
different order than you specified for the volumes in the block device mapping.
** NVMe instance store volumes are automatically enumerated and assigned an NVMe device name.
For more information about instance store volumes, see Amazon EC2 instance store (p. 1613). For more
information about NVMe EBS volumes (Nitro-based instances), including how to identify the EBS device,
see Amazon EBS and NVMe on Linux instances (p. 1552).
• Although you can attach your EBS volumes using the device names used to attach instance store
volumes, we strongly recommend that you don't because the behavior can be unpredictable.
• The number of NVMe instance store volumes for an instance depends on the size of the instance.
NVMe instance store volumes are automatically enumerated and assigned an NVMe device name (/
dev/nvme[0-26]n1).
• Depending on the block device driver of the kernel, the device could be attached with a different
name than you specified. For example, if you specify a device name of /dev/sdh, your device could
be renamed /dev/xvdh or /dev/hdh. In most cases, the trailing letter remains the same. In some
versions of Red Hat Enterprise Linux (and its variants, such as CentOS), the trailing letter could change
(/dev/sda could become /dev/xvde). In these cases, the trailing letter of each device name is
incremented the same number of times. For example, if /dev/sdb is renamed /dev/xvdf, then /
dev/sdc is renamed /dev/xvdg. Amazon Linux creates a symbolic link for the name you specified to
the renamed device. Other operating systems could behave differently.
• HVM AMIs do not support the use of trailing numbers on device names, except for /dev/sda1,
which is reserved for the root device, and /dev/sda2. While using /dev/sda2 is possible, we do not
recommend using this device mapping with HVM instances.
• When using PV AMIs, you cannot attach volumes that share the same device letters both with and
without trailing digits. For example, if you attach a volume as /dev/sdc and another volume as /
dev/sdc1, only /dev/sdc is visible to the instance. To use trailing digits in device names, you must
use trailing digits on all device names that share the same base letters (such as /dev/sdc1, /dev/
sdc2, /dev/sdc3).
• Some custom kernels might have restrictions that limit use to /dev/sd[f-p] or /dev/sd[f-p]
[1-6]. If you're having trouble using /dev/sd[q-z] or /dev/sd[q-z][1-6], try switching to /
dev/sd[f-p] or /dev/sd[f-p][1-6].
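The consistent-shift behavior can be sketched as string arithmetic. The shift of four letters below assumes the /dev/sdb to /dev/xvdf example above; it is illustrative only and not a rule you can rely on for a given kernel.

```shell
# Given that /dev/sdb was observed renamed to /dev/xvdf (a shift of
# four letters), compute where a sibling device letter ends up.
shift_device() {
  letter=${1#/dev/sd}
  idx=$(( $(printf '%d' "'$letter") + 4 ))   # assumed shift of 4
  printf '/dev/xvd%b\n' "$(printf '\\%03o' "$idx")"
}

shift_device /dev/sdb   # /dev/xvdf
shift_device /dev/sdc   # /dev/xvdg
```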
For more information about root device volumes, see Change the root volume to persist (p. 1642).
Contents
• Block device mapping concepts (p. 1647)
• AMI block device mapping (p. 1650)
• Instance block device mapping (p. 1652)
• Instance store volumes (virtual devices whose underlying hardware is physically attached to the host
computer for the instance)
• EBS volumes (remote storage devices)
A block device mapping defines the block devices (instance store volumes and EBS volumes) to attach
to an instance. You can specify a block device mapping as part of creating an AMI so that the mapping
is used by all instances launched from the AMI. Alternatively, you can specify a block device mapping
when you launch an instance; this mapping then overrides the one specified in the AMI from which you
launched the instance. Note that all NVMe instance store volumes supported by an instance type are
automatically enumerated and assigned a device name on instance launch; including them in your block
device mapping has no effect.
Contents
• Block device mapping entries (p. 1648)
• Block device mapping instance store caveats (p. 1648)
• Example block device mapping (p. 1649)
• How devices are made available in the operating system (p. 1650)
• The device name used within Amazon EC2. The block device driver for the instance assigns the actual
volume name when mounting the volume. The name assigned can be different from the name that
Amazon EC2 recommends. For more information, see Device names on Linux instances (p. 1645).
For instance store volumes, you also specify the following information:
• The virtual device: ephemeral[0-23]. Note that the number and size of available instance store
volumes for your instance varies by instance type.
For NVMe instance store volumes, the following information also applies:
• These volumes are automatically enumerated and assigned a device name; including them in your
block device mapping has no effect.
• The ID of the snapshot to use to create the block device (snap-xxxxxxxx). This value is optional as long
as you specify a volume size.
• The size of the volume, in GiB. The specified size must be greater than or equal to the size of the
specified snapshot.
• Whether to delete the volume on instance termination (true or false). The default value is true
for the root device volume and false for attached volumes. When you create an AMI, its block device
mapping inherits this setting from the instance. When you launch an instance, it inherits this setting
from the AMI.
• The volume type, which can be gp2 or gp3 for General Purpose SSD, io1 or io2 for Provisioned
IOPS SSD, st1 for Throughput Optimized HDD, sc1 for Cold HDD, or standard for Magnetic. The
default value is gp2.
• The number of input/output operations per second (IOPS) that the volume supports. (Used only with
io1 and io2 volumes.)
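Putting those fields together, a single EBS entry might look like the following sketch. The snapshot ID is the placeholder format used throughout this topic, and the size and IOPS values are arbitrary example numbers.

```shell
# One complete EBS block device mapping entry (example values).
cat > ebs-entry.json <<'EOF'
{
  "DeviceName": "/dev/sdf",
  "Ebs": {
    "SnapshotId": "snap-xxxxxxxx",
    "VolumeSize": 100,
    "DeleteOnTermination": true,
    "VolumeType": "io1",
    "Iops": 1000
  }
}
EOF
```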
• Some instance types include more instance store volumes than others, and some instance types
contain no instance store volumes at all. If your instance type supports one instance store volume, and
your AMI has mappings for two instance store volumes, then the instance launches with one instance
store volume.
• Instance store volumes can only be mapped at launch time. You cannot stop an instance without
instance store volumes (such as the t2.micro), change the instance to a type that supports instance
store volumes, and then restart the instance with instance store volumes. However, you can create an
AMI from the instance and launch it on an instance type that supports instance store volumes, and
map those instance store volumes to the instance.
• If you launch an instance with instance store volumes mapped, and then stop the instance and change
it to an instance type with fewer instance store volumes and restart it, the instance store volume
mappings from the initial launch still show up in the instance metadata. However, only the maximum
number of supported instance store volumes for that instance type are available to the instance.
Note
When an instance is stopped, all data on the instance store volumes is lost.
• Depending on instance store capacity at launch time, M3 instances might ignore AMI instance store
block device mappings unless you specify them at launch. To ensure that the instance store volumes are
available when the instance launches, specify instance store block device mappings at launch time, even
if the AMI you are launching already has the instance store volumes mapped.
Note that this example block device mapping is used in the example commands and APIs in this
topic. You can find example commands and APIs that create block device mappings in Specify a
block device mapping for an AMI (p. 1650) and Update the block device mapping when launching an
instance (p. 1652).
With a Linux instance, the device names specified in the block device mapping are mapped to their
corresponding block devices when the instance first boots. The instance type determines which instance
store volumes are formatted and mounted by default. You can mount additional instance store volumes
at launch, as long as you don't exceed the number of instance store volumes available for your instance
type. For more information, see Amazon EC2 instance store (p. 1613). The block device driver for the
instance determines which devices are used when the volumes are formatted and mounted. For more
information, see Attach an Amazon EBS volume to an instance (p. 1353).
Contents
• Specify a block device mapping for an AMI (p. 1650)
• View the EBS volumes in an AMI block device mapping (p. 1652)
For an EBS-backed AMI, you can add EBS volumes and instance store volumes using a block device
mapping. For an instance store-backed AMI, you can add instance store volumes only by modifying the
block device mapping entries in the image manifest file when registering the image.
Note
For M3 instances, you must specify instance store volumes in the block device mapping for the
instance when you launch it. When you launch an M3 instance, instance store volumes specified
in the block device mapping for the AMI may be ignored if they are not specified as part of the
instance block device mapping.
6. For Volume type, choose the volume type. For Device, choose the device name. For an EBS volume,
you can specify additional details, such as a snapshot, volume size, volume type, IOPS, and
encryption state.
7. Choose Create image.
Use the create-image AWS CLI command to specify a block device mapping for an EBS-backed AMI. Use
the register-image AWS CLI command to specify a block device mapping for an instance store-backed
AMI.
Specify the block device mapping using the --block-device-mappings parameter. Arguments
encoded in JSON can be supplied either directly on the command line or by reference to a file:
{
"DeviceName": "/dev/sdf",
"VirtualName": "ephemeral0"
}
To add an empty 100 GiB gp2 volume, use the following mapping.
{
"DeviceName": "/dev/sdg",
"Ebs": {
"VolumeSize": 100
}
}
{
"DeviceName": "/dev/sdh",
"Ebs": {
"SnapshotId": "snap-xxxxxxxx"
}
}
{
"DeviceName": "/dev/sdj",
"NoDevice": ""
}
Alternatively, you can use the -BlockDeviceMapping parameter with the following commands (AWS
Tools for Windows PowerShell):
• New-EC2Image
• Register-EC2Image
If the AMI was created with additional EBS volumes using a block device mapping, the Block Devices
field displays the mapping for those additional volumes as well. (This screen doesn't display instance
store volumes.)
To view the EBS volumes for an AMI using the command line
Use the describe-images (AWS CLI) command or Get-EC2Image (AWS Tools for Windows PowerShell)
command to enumerate the EBS volumes in the block device mapping for an AMI.
Limitations
• For the root volume, you can only modify the following: volume size, volume type, and the Delete on
Termination flag.
• When you modify an EBS volume, you can't decrease its size. Therefore, you must specify a snapshot
whose size is equal to or greater than the size of the snapshot specified in the block device mapping of
the AMI.
Contents
• Update the block device mapping when launching an instance (p. 1652)
• Update the block device mapping of a running instance (p. 1654)
• View the EBS volumes in an instance block device mapping (p. 1654)
• View the instance block device mapping for instance store volumes (p. 1655)
• To change the size of the root volume, locate the Root volume under the Type column, and
change its Size field.
• To suppress an EBS volume specified by the block device mapping of the AMI used to launch the
instance, locate the volume and click its Delete icon.
• To add an EBS volume, choose Add New Volume, choose EBS from the Type list, and fill in the
fields (Device, Snapshot, and so on).
• To suppress an instance store volume specified by the block device mapping of the AMI used to
launch the instance, locate the volume, and choose its Delete icon.
• To add an instance store volume, choose Add New Volume, select Instance Store from the Type
list, and select a device name from Device.
6. Complete the remaining wizard pages, and choose Launch.
Use the run-instances AWS CLI command with the --block-device-mappings option to specify a
block device mapping for an instance at launch.
For example, suppose that an EBS-backed AMI specifies the following block device mapping:
• /dev/sdb=ephemeral0
• /dev/sdh=snap-1234567890abcdef0
• /dev/sdj=:100
To prevent /dev/sdj from attaching to an instance launched from this AMI, use the following mapping.
{
"DeviceName": "/dev/sdj",
"NoDevice": ""
}
To increase the size of /dev/sdh to 300 GiB, specify the following mapping. Notice that you don't need
to specify the snapshot ID for /dev/sdh, because specifying the device name is enough to identify the
volume.
{
"DeviceName": "/dev/sdh",
"Ebs": {
"VolumeSize": 300
}
}
To increase the size of the root volume at instance launch, first call describe-images with the ID of the
AMI to verify the device name of the root volume. For example, "RootDeviceName": "/dev/xvda".
To override the size of the root volume, specify the device name of the root device used by the AMI and
the new volume size.
{
"DeviceName": "/dev/xvda",
"Ebs": {
"VolumeSize": 100
}
}
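Those two steps can be sketched together as follows. The AMI ID is a placeholder, and both commands are echoed rather than executed since they require AWS credentials.

```shell
# Step 1: find the root device name of the AMI (shown, not run).
echo 'aws ec2 describe-images --image-ids ami-0abcdef1234567890 \
  --query "Images[0].RootDeviceName"'

# Step 2: launch with the root volume size overridden to 100 GiB.
echo 'aws ec2 run-instances --image-id ami-0abcdef1234567890 \
  --block-device-mappings "[{\"DeviceName\": \"/dev/xvda\", \"Ebs\": {\"VolumeSize\": 100}}]"'
```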
To attach an additional instance store volume, /dev/sdc, specify the following mapping. If the instance
type doesn't support multiple instance store volumes, this mapping has no effect. If the instance
supports NVMe instance store volumes, they are automatically enumerated and assigned an NVMe
device name.
{
"DeviceName": "/dev/sdc",
"VirtualName": "ephemeral1"
}
To add volumes to an instance using the AWS Tools for Windows PowerShell
Use the -BlockDeviceMapping parameter with the New-EC2Instance command (AWS Tools for
Windows PowerShell).
For example, to preserve the root volume at instance termination, specify the following in
mapping.json.
[
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": false
}
}
]
Alternatively, you can use the -BlockDeviceMapping parameter with the Edit-EC2InstanceAttribute
command (AWS Tools for Windows PowerShell).
If the instance was launched with additional EBS volumes using a block device mapping, they appear
under Block devices. Any instance store volumes do not appear on this tab.
5. To display additional information about an EBS volume, choose its volume ID to go to the volume
page. For more information, see View information about an Amazon EBS volume (p. 1364).
To view the EBS volumes for an instance using the command line
Use the describe-instances (AWS CLI) command or Get-EC2Instance (AWS Tools for Windows PowerShell)
command to enumerate the EBS volumes in the block device mapping for an instance.
You can use the NVMe command line package, nvme-cli, to query the NVMe instance store volumes
in the block device mapping. Download and install the package on your instance, and then run the
following command.
$ sudo nvme list
The following is example output for an instance. The text in the Model column indicates whether the
volume is an EBS volume or an instance store volume. In this example, both /dev/nvme1n1 and /dev/
nvme2n1 are instance store volumes.
You can use instance metadata to query the HDD or SSD instance store volumes in the block device
mapping. NVMe instance store volumes are not included.
The base URI for all requests for instance metadata is http://169.254.169.254/latest/. For more
information, see Instance metadata and user data (p. 710).
First, connect to your running instance. From the instance, use this query to get its block device mapping.
IMDSv2
IMDSv1
The response includes the names of the block devices for the instance. For example, the output for an
instance store-backed m1.small instance looks like this.
ami
ephemeral0
root
swap
The ami device is the root device as seen by the instance. The instance store volumes are named
ephemeral[0-23]. The swap device is for the page file. If you've also mapped EBS volumes, they
appear as ebs1, ebs2, and so on.
To get details about an individual block device in the block device mapping, append its name to the
previous query, as shown here.
IMDSv2
IMDSv1
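As a sketch, the IMDSv2 form of these queries uses a session token. It only works from inside an instance, so it is echoed here rather than executed; the device name ephemeral0 is an example from the output above.

```shell
# The IMDSv2 query, printed rather than run (only resolvable on an
# EC2 instance): fetch a session token, then query the mapping.
echo 'TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0'
```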
The instance type determines the number of instance store volumes that are available to the instance. If
the number of instance store volumes in a block device mapping exceeds the number of instance store
volumes available to an instance, the additional volumes are ignored. To view the instance store volumes
for your instance, run the lsblk command. To learn how many instance store volumes are supported by
each instance type, see Instance store volumes (p. 1614).
Some resources can be tagged with values that you define, to help you organize and identify them.
The following topics describe resources and tags, and how you can work with them.
Contents
• Resource locations (p. 1657)
• Resource IDs (p. 1658)
• List and filter your resources (p. 1659)
• Tag your Amazon EC2 resources (p. 1666)
• Amazon EC2 service quotas (p. 1680)
• Amazon EC2 usage reports (p. 1682)
Resource locations
Amazon EC2 resources are specific to the AWS Region or Availability Zone in which they reside.
Amazon EC2 resource identifiers (Regional): Each resource identifier, such as an AMI ID, instance ID,
EBS volume ID, or EBS snapshot ID, is tied to its Region and can be used only in the Region where you
created the resource.
User-supplied resource names (Regional): Each resource name, such as a security group name or key
pair name, is tied to its Region and can be used only in the Region where you created the resource.
Although you can create resources with the same name in multiple Regions, they aren't related to each
other.
AMIs (Regional): An AMI is tied to the Region where its files are located within Amazon S3. You can
copy an AMI from one Region to another. For more information, see Copy an AMI (p. 170).
EBS snapshots (Regional): An EBS snapshot is tied to its Region and can only be used to create
volumes in the same Region. You can copy a snapshot from one Region to another. For more
information, see Copy an Amazon EBS snapshot (p. 1391).
EBS volumes (Availability Zone): An Amazon EBS volume is tied to its Availability Zone and can be
attached only to instances in the same Availability Zone.
Key pairs (Global or Regional): The key pairs that you create using Amazon EC2 are tied to the Region
where you created them. You can create your own RSA key pair and upload it to the Region in which you
want to use it; therefore, you can make your key pair globally available by uploading it to each Region.
Resource IDs
When resources are created, we assign each resource a unique resource ID. A resource ID takes the form
of a resource identifier (such as snap for a snapshot) followed by a hyphen and a unique combination of
letters and numbers.
Each resource identifier, such as an AMI ID, instance ID, EBS volume ID, or EBS snapshot ID, is tied to its
Region and can be used only in the Region where you created the resource.
You can use resource IDs to find your resources in the Amazon EC2 console. If you are using a command
line tool or the Amazon EC2 API to work with Amazon EC2, resource IDs are required for certain
commands. For example, if you are using the stop-instances AWS CLI command to stop an instance, you
must specify the instance ID in the command.
Resource ID length
Prior to January 2016, the IDs assigned to newly created resources of certain resource types used
8 characters after the hyphen (for example, i-1a2b3c4d). From January 2016 to June 2018, we
changed the IDs of these resource types to use 17 characters after the hyphen (for example,
i-1234567890abcdef0). Depending on when your account was created, you might have resources of
the following resource types with short IDs, though any new resources of these types receive the longer
IDs:
• bundle
• conversion-task
• customer-gateway
• dhcp-options
• elastic-ip-allocation
• elastic-ip-association
• export-task
• flow-log
• image
• import-task
• instance
• internet-gateway
• network-acl
• network-acl-association
• network-interface
• network-interface-attachment
• prefix-list
• route-table
• route-table-association
• security-group
• snapshot
• subnet
• subnet-cidr-block-association
• reservation
• volume
• vpc
• vpc-cidr-block-association
• vpc-endpoint
• vpc-peering-connection
• vpn-connection
• vpn-gateway
List and filter your resources
Contents
• List and filter resources using the console (p. 1659)
• List and filter using the CLI and API (p. 1663)
• List and filter resources across Regions using Amazon EC2 Global View (p. 1665)
List and filter resources using the console
The search and filter functionality differs slightly between the old and new Amazon EC2 console.
New console
• API filtering happens on the server side. The filtering is applied on the API call, which reduces the
number of resources returned by the server. It allows for quick filtering across large sets of resources,
and it can reduce data transfer time and cost between the server and the browser.
• Client filtering happens on the client side. It enables you to filter data that is already available
in the browser (in other words, data that has already been returned by the API). Client filtering works
well in conjunction with an API filter to narrow the results to smaller data sets in the browser.
The new Amazon EC2 console supports the following types of searches:
Search by keyword
Searching by keyword is a free text search that lets you search for a value across all of your
resources' attributes, without specifying an attribute to search.
Note
All keyword searches use client filtering.
To search by keyword, enter or paste what you’re looking for in the search field, and then choose
Enter. For example, searching for 123 matches all instances that have 123 in any of their attributes,
such as an IP address, instance ID, VPC ID, or AMI ID. If your free text search returns unexpected
matches, apply additional filters.
Search by attributes
Searching by an attribute lets you search a specific attribute across all of your resources.
Note
Attribute searches use either API filtering or client filtering, depending on the selected
attribute. When performing an attribute search, the attributes are grouped accordingly.
For example, you can search the Instance state attribute for all of your instances to return only
instances that are in the stopped state. To do this:
1. In the search field on the Instances screen, start entering Instance state. As you enter the
characters, the two types of filters appear for Instance state: API filters and Client filters.
2. To search on the server side, choose Instance state under API filters. To search on the client side,
choose Instance state (client) under Client filters.
You can use the following techniques to enhance or refine your searches:
Inverse search
Inverse searches let you search for resources that do not match a specified value. Inverse searches
are performed by prefixing the search keyword with the exclamation mark (!) character.
Note
Inverse search is supported with keyword searches and attribute searches on client filters
only. It is not supported with attribute searches on API filters.
For example, you can search the Instance state attribute for all of your instances to exclude all
instances that are in the terminated state. To do this:
1. In the search field on the Instances screen, start entering Instance state. As you enter the
characters, the two types of filters appear for Instance state: API filters and Client filters.
2. Choose Instance state (client). Inverse search is only supported on client filters.
To filter instances based on an instance state attribute, you can also use the search icons (
) in the Instance state column. The search icon with a plus sign ( + ) displays all the instances that
match that attribute. The search icon with a minus sign ( - ) excludes all instances that match that
attribute.
Here is another example of using the inverse search: To list all instances that are not assigned the
security group named launch-wizard-1, search by the Security group name attribute, and for the
keyword, enter !launch-wizard-1.
Partial search
With partial searches, you can search for partial string values. To perform a partial search, enter
only a part of the keyword that you want to search for. For example, to search for all t2.micro,
t2.small, and t2.medium instances, search by the Instance Type attribute, and for the keyword,
enter t2.
Note
Partial search is supported with keyword searches and attribute searches on client filters
only. It is not supported with attribute searches on API filters.
Regular expression search
To use regular expression searches, you must enable Use regular expression matching in the
Preferences.
Regular expressions are useful when you need to match the values in a field with a specific pattern.
For example, to search for a value that starts with s, search for ^s. To search for a value that ends
with xyz, search for xyz$. Or to search for a value that starts with a number that is followed by one
or more characters, search for [0-9]+.*. Regular expression searches are not case-sensitive.
Note
Regular expression search is supported with keyword searches and attribute searches on
client filters only. It is not supported with attribute searches on API filters.
Wildcard search
Use the * wildcard to match zero or more characters. Use the ? wildcard to match zero or one
character. For example, if you have a data set with the values prod, prods, and production,
"prod*" matches all three values, whereas "prod?" matches only prod and prods. To use a wildcard
character as a literal value, escape it with a backslash (\). For example, "prod\*" matches prod*.
Note
Wildcard search is supported with attribute searches on API filters only. It is not supported
with keyword searches or attribute searches on client filters.
Combining searches
In general, multiple filters with the same attribute are automatically joined with OR. For example,
searching for Instance State : Running and Instance State : Stopped returns all
instances that are either running OR stopped. To join search with AND, search across different
attributes. For example, searching for Instance State : Running and Instance Type :
c4.large returns only instances that are of type c4.large AND that are in the running state.
Old console
The old Amazon EC2 console supports the following types of searches:
Search by keyword
Searching by keyword is a free text search that lets you search for a value across all of your
resources' attributes. To search by keyword, enter or paste what you’re looking for in the search field,
and then choose Enter. For example, searching for 123 matches all instances that have 123 in any of
their attributes, such as an IP address, instance ID, VPC ID, or AMI ID. If your free text search returns
unexpected matches, apply additional filters.
Search by attributes
Searching by an attribute lets you search a specific attribute across all of your resources. For
example, you can search the State attribute for all of your instances to return only instances that are
in the stopped state. To do this:
1. In the search field on the Instances screen, start entering Instance State. As you enter
characters, a list of matching attributes appears.
2. Select Instance State from the list. A list of possible values for the selected attribute appears.
3. Select Stopped from the list.
You can use the following techniques to enhance or refine your searches:
Inverse search
Inverse searches let you search for resources that do not match a specified value. Inverse searches
are performed by prefixing the search keyword with the exclamation mark (!) character. For example,
to list all instances that are not terminated, search by the Instance State attribute, and for the
keyword, enter !Terminated.
Partial search
With partial searches, you can search for partial string values. To perform a partial search, enter only
a part of the keyword you want to search for. For example, to search for all t2.micro, t2.small,
and t2.medium instances, search by the Instance Type attribute, and for the keyword, enter t2.
Regular expression search
Regular expressions are useful when you need to match the values in a field with a specific pattern.
For example, to search for all instances that have an attribute value that starts with s, search for
^s. Or to search for all instances that have an attribute value that ends with xyz, search for xyz$.
Regular expression searches are not case-sensitive.
Combining searches
In general, multiple filters with the same attribute are automatically joined with OR. For example,
searching for Instance State : Running and Instance State : Stopped returns all
instances that are either running OR stopped. To join search with AND, search across different
attributes. For example, searching for Instance State : Running and Instance Type :
c4.large returns only instances that are of type c4.large AND that are in the running state.
List and filter using the CLI and API
Filtering considerations
• You can specify multiple filters and multiple filter values in a single request.
• You can use wildcards with the filter values. An asterisk (*) matches zero or more characters, and a
question mark (?) matches zero or one character.
• Filter values are case sensitive.
• Your search can include the literal values of the wildcard characters; you just need to escape them with
a backslash before the character. For example, a value of \*amazon\?\\ searches for the literal string
*amazon?\.
Supported filters
To see the supported filters for each Amazon EC2 resource, see the following documentation:
• AWS CLI: The describe commands in the AWS CLI Command Reference-Amazon EC2.
• Tools for Windows PowerShell: The Get commands in the AWS Tools for PowerShell Cmdlet
Reference-Amazon EC2.
• Query API: The Describe API actions in the Amazon EC2 API Reference.
You can list your Amazon EC2 instances using describe-instances. Without filters, the response contains
information for all of your resources. You can use the following command to include only the running
instances in your output.
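A sketch of such a command, using the documented instance-state-name filter:

```shell
aws ec2 describe-instances --filters Name=instance-state-name,Values=running
```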
To list only the instance IDs for your running instances, add the --query parameter as follows.
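A possible form of the command, using a JMESPath --query expression to print only the instance IDs:

```shell
aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query "Reservations[*].Instances[*].InstanceId" \
    --output text
```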
i-0ef1f57f78d4775a4
i-0626d4edd54f1286d
i-04a636d18e83cfacb
If you specify multiple filters or multiple filter values, the resource must match all filters to be included
in the results.
You can use the following command to list all instances whose type is either m5.large or m5d.large.
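For example, passing both values to the documented instance-type filter (multiple values for one filter are ORed):

```shell
aws ec2 describe-instances --filters Name=instance-type,Values=m5.large,m5d.large
```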
You can use the following command to list all stopped instances whose type is t2.micro.
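A sketch, combining the instance-type and instance-state-name filters (separate filters are ANDed):

```shell
aws ec2 describe-instances \
    --filters Name=instance-type,Values=t2.micro Name=instance-state-name,Values=stopped
```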
If you specify database as the filter value for the description filter when describing EBS snapshots
using describe-snapshots, the command returns only the snapshots whose description is "database".
The * wildcard matches zero or more characters. If you specify *database* as the filter value, the
command returns only snapshots whose description includes the word database.
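These two cases might be expressed as follows (quote the wildcard value so that the shell does not expand it):

```shell
aws ec2 describe-snapshots --filters Name=description,Values=database
aws ec2 describe-snapshots --filters Name=description,Values='*database*'
```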
The ? wildcard matches zero or one character. If you specify database? as the filter value, the command
returns only snapshots whose description is "database" or "database" followed by one character.
If you specify database????, the command returns only snapshots whose description is "database"
followed by up to four characters. It excludes descriptions with "database" followed by five or more
characters.
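These wildcard semantics differ from ordinary shell globbing. As an illustration only (this is not an AWS command), the following local sketch emulates them with grep -E, translating * to the regex .* and ? to .? (zero or one character); escaped wildcards are not handled:

```shell
#!/bin/sh
# Local illustration of the filter wildcard semantics (not an AWS command).
# '*' matches zero or more characters; '?' matches zero or one character.
# Escaped wildcards (\* and \?) are not handled by this sketch.
ec2_match() {
  pattern=$1; value=$2
  # Escape regex metacharacters, then translate the two wildcards.
  regex=$(printf '%s' "$pattern" | sed -e 's/[][().+^$\\]/\\&/g' \
                                       -e 's/\*/.*/g' -e 's/?/.?/g')
  printf '%s\n' "$value" | grep -qE "^${regex}$"
}

ec2_match 'prod*' 'production' && echo "prod* matches production"
ec2_match 'prod?' 'prods' && echo "prod? matches prods"
```

Here 'database?' would match "database" and "databases" but not "database12", matching the behavior described above.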
With the AWS CLI, you can use JMESPath to filter results using expressions. For example, the
following describe-snapshots command displays the IDs of all snapshots created by your AWS account
(represented by 123456789012) before the specified date (represented by 2020-03-31). If you do not
specify the owner, the results include all public snapshots.
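Based on the description above, the command might look like this (using the account ID and date given in the text):

```shell
aws ec2 describe-snapshots \
    --owner-ids 123456789012 \
    --query "Snapshots[?(StartTime<='2020-03-31')].[SnapshotId]" \
    --output text
```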
The following command displays the IDs of all snapshots created in the specified date range.
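A sketch of such a command; the owner ID and the date range shown here are placeholders:

```shell
aws ec2 describe-snapshots \
    --owner-ids 123456789012 \
    --query "Snapshots[?(StartTime>='2019-01-01') && (StartTime<='2019-12-31')].[SnapshotId]" \
    --output text
```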
For examples of how to filter a list of resources according to their tags, see Work with tags using the
command line (p. 1675).
List and filter resources across Regions using Amazon EC2 Global View
Amazon EC2 Global View does not let you modify resources in any way.
Required permissions
An IAM user must have the following permissions to use Amazon EC2 Global View.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeVpcs",
"ec2:DescribeRegions",
"ec2:DescribeVolumes",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups"
],
"Resource": "*"
}]
}
Enabled Regions indicates the number of Regions for which your AWS account is enabled. The
remaining fields indicate the number of resources that you currently have in those Regions. Choose
any of the links to view the resources of that type across all Regions. For example, if the link below
the Instances label is 29 in 10 Regions, it indicates that you currently have 29 instances across 10
Regions. Choose the link to view a list of all 29 instances.
• Resource counts per Region—Lists all of the AWS Regions (including those for which your account is
not enabled) and provides totals for each resource type for each Region.
Choose a Region name to view all resources of all types for that specific Region. For example, choose
Africa (Cape Town) af-south-1 to view all VPCs, subnets, instances, security groups, and volumes in
that Region. Alternatively, select a Region and choose View resources for selected Region.
Choose the value for a specific resource type in a specific Region to view only resources of that type
in that Region. For example, choose the value for Instances for Africa (Cape Town) af-south-1 to
view only the instances in that Region.
• Global search—This tab enables you to search for specific resources or specific resource types across a
single Region or across multiple Regions. It also enables you to view details for a specific resource.
To search for resources, enter the search criteria in the field preceding the grid. You can search by
Region, by resource type, and by the tags assigned to resources.
To view the details for a specific resource, select it in the grid. You can also choose the resource ID of a
resource to open it in its respective console. For example, choose an instance ID to open the instance in
the Amazon EC2 console, or choose a subnet ID to open the subnet in the Amazon VPC console.
Tag your Amazon EC2 resources
Contents
• Tag basics (p. 1666)
• Tag your resources (p. 1667)
• Tag restrictions (p. 1670)
• Tags and access management (p. 1671)
• Tag your resources for billing (p. 1671)
• Work with tags using the console (p. 1671)
• Work with tags using the command line (p. 1675)
• Work with instance tags in instance metadata (p. 1678)
• Add tags to a resource using CloudFormation (p. 1679)
Tag basics
A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both
of which you define.
Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or
environment. For example, you could define a set of tags for your account's Amazon EC2 instances that
helps you track each instance's owner and stack level.
The following diagram illustrates how tagging works. In this example, you've assigned two tags to each
of your instances—one tag with the key Owner and another with the key Stack. Each tag also has an
associated value.
We recommend that you devise a set of tag keys that meets your needs for each resource type. Using
a consistent set of tag keys makes it easier for you to manage your resources. You can search and filter
the resources based on the tags you add. For more information about how to implement an effective
resource tagging strategy, see the AWS whitepaper Tagging Best Practices.
Tags don't have any semantic meaning to Amazon EC2 and are interpreted strictly as a string of
characters. Also, tags are not automatically assigned to your resources. You can edit tag keys and values,
and you can remove tags from a resource at any time. You can set the value of a tag to an empty string,
but you can't set the value of a tag to null. If you add a tag that has the same key as an existing tag on
that resource, the new value overwrites the old value. If you delete a resource, any tags for the resource
are also deleted.
Note
After you delete a resource, its tags might remain visible in the console, API, and CLI output for
a short period. These tags will be gradually disassociated from the resource and be permanently
deleted.
If you're using the Amazon EC2 console, you can apply tags to resources by using the Tags tab on the
relevant resource screen, or you can use the Tags screen. Some resource screens enable you to specify
tags for a resource when you create the resource; for example, a tag with a key of Name and a value that
you specify. In most cases, the console applies the tags immediately after the resource is created (rather
than during resource creation). The console may organize resources according to the Name tag, but this
tag doesn't have any semantic meaning to the Amazon EC2 service.
If you're using the Amazon EC2 API, the AWS CLI, or an AWS SDK, you can use the CreateTags EC2
API action to apply tags to existing resources. Additionally, some resource-creating actions enable you
to specify tags for a resource when the resource is created. If tags cannot be applied during resource
creation, we roll back the resource creation process. This ensures that resources are either created with
tags or not created at all, and that no resources are left untagged at any time. By tagging resources at
the time of creation, you can eliminate the need to run custom tagging scripts after resource creation.
For more information about enabling users to tag resources on creation, see Grant permission to tag
resources during creation (p. 1225).
The following table describes the Amazon EC2 resources that can be tagged, and the resources that can
be tagged on creation using the Amazon EC2 API, the AWS CLI, or an AWS SDK.
Resource | Supports tags | Supports tagging on creation
Bundle task | No | No
(remaining rows of this table not shown)
You can tag instances, volumes, and network interfaces on creation using the Amazon EC2 Launch
Instances wizard in the Amazon EC2 console. You can tag your EBS volumes on creation using the
Volumes screen, or EBS snapshots using the Snapshots screen. Alternatively, use the resource-creating
Amazon EC2 APIs (for example, RunInstances) to apply tags when creating your resource.
You can apply tag-based resource-level permissions in your IAM policies to the Amazon EC2 API actions
that support tagging on creation to implement granular control over the users and groups that can tag
resources on creation. Your resources are properly secured from creation—tags are applied immediately
to your resources, therefore any tag-based resource-level permissions controlling the use of resources are
immediately effective. Your resources can be tracked and reported on more accurately. You can enforce
the use of tagging on new resources, and control which tag keys and values are set on your resources.
You can also apply resource-level permissions to the CreateTags and DeleteTags Amazon EC2 API
actions in your IAM policies to control which tag keys and values are set on your existing resources. For
more information, see Example: Tag resources (p. 1258).
For more information about tagging your resources for billing, see Using cost allocation tags in the AWS
Billing and Cost Management User Guide.
Tag restrictions
The following basic restrictions apply to tags:
• Each resource can have a maximum of 50 user created tags.
• For each resource, each tag key must be unique, and each tag key can have only one value.
• The maximum tag key length is 128 Unicode characters in UTF-8.
• The maximum tag value length is 256 Unicode characters in UTF-8.
• Tag keys and values are case-sensitive.
• The aws: prefix is reserved for AWS use. If a tag has a tag key with this prefix, then you can't edit or
delete the tag's key or value. Tags with the aws: prefix do not count against your tags per resource
limit.
You can't terminate, stop, or delete a resource based solely on its tags; you must specify the resource
identifier. For example, to delete snapshots that you tagged with a tag key called DeleteMe, you
must use the DeleteSnapshots action with the resource identifiers of the snapshots, such as
snap-1234567890abcdef0.
When you tag public or shared resources, the tags you assign are available only to your AWS account; no
other AWS account will have access to those tags. For tag-based access control to shared resources, each
AWS account must assign its own set of tags to control access to the resource.
You can't tag all resources. For more information, see Tagging support for Amazon EC2
resources (p. 1668).
Tags and access management
You can also use resource tags to implement attribute-based control (ABAC). You can create IAM policies
that allow operations based on the tags for the resource. For more information, see Control access to
EC2 resources using resource tags (p. 1227).
Tag your resources for billing
Cost allocation tags can indicate which resources are contributing to costs, but deleting or deactivating
resources doesn't always reduce costs. For example, snapshot data that is referenced by another
snapshot is preserved, even if the snapshot that contains the original data is deleted. For more
information, see Amazon Elastic Block Store volumes and snapshots in the AWS Billing and Cost
Management User Guide.
Note
Elastic IP addresses that are tagged do not appear on your cost allocation report.
Work with tags using the console
For more information about using filters when listing your resources, see List and filter your
resources (p. 1659).
For ease of use and best results, use Tag Editor in the AWS Management Console, which provides a
central, unified way to create and manage your tags. For more information, see Tag Editor in Getting
Started with the AWS Management Console.
Tasks
• Display tags (p. 1672)
• Add and delete tags on an individual resource (p. 1673)
• Add and delete tags to a group of resources (p. 1673)
• Add a tag when you launch an instance (p. 1674)
• Filter a list of resources by tag (p. 1674)
Display tags
You can display tags in two different ways in the Amazon EC2 console. You can display the tags for an
individual resource or for all resources.
When you select a resource-specific page in the Amazon EC2 console, it displays a list of those resources.
For example, if you select Instances from the navigation pane, the console displays your Amazon EC2
instances. When you select a resource from one of these lists (for example, an instance), if the resource
supports tags, you can view and manage its tags. On most resource pages, you can view the tags by
choosing the Tags tab.
You can add a column to the resource list that displays all values for tags with the same key. You can use
this column to sort and filter the resource list by the tag.
New console
• Choose the Preferences gear-shaped icon in the top right corner of the screen. In the Preferences
dialog box, under Tag columns, select one or more tag keys, and then choose Confirm.
Old console
There are two ways to add a new column to the resource list to display your tags:
• On the Tags tab, select Show Column. A new column is added to the console.
• Choose the Show/Hide Columns gear-shaped icon, and in the Show/Hide Columns dialog box,
select the tag key under Your Tag Keys.
You can display tags across all resources by selecting Tags from the navigation pane in the Amazon EC2
console. The following image shows the Tags pane, which lists all tags in use by resource type.
Add a tag when you launch an instance
1. From the navigation bar, select the Region for the instance. This choice is important because some
Amazon EC2 resources can be shared between Regions, while others can't. Select the Region that
meets your needs. For more information, see Resource locations (p. 1657).
2. Choose Launch Instance.
3. The Choose an Amazon Machine Image (AMI) page displays a list of basic configurations called
Amazon Machine Images (AMIs). Select the AMI to use and choose Select. For more information, see
Find a Linux AMI (p. 107).
4. On the Configure Instance Details page, configure the instance settings as necessary, and then
choose Next: Add Storage.
5. On the Add Storage page, you can specify additional storage volumes for your instance. Choose
Next: Add Tags when done.
6. On the Add Tags page, specify tags for the instance, the volumes, or both. Choose Add another tag
to add more than one tag to your instance. Choose Next: Configure Security Group when you are
done.
7. On the Configure Security Group page, you can choose from an existing security group that you
own, or let the wizard create a new security group for you. Choose Review and Launch when you are
done.
8. Review your settings. When you're satisfied with your selections, choose Launch. Select an existing
key pair or create a new one, select the acknowledgment check box, and then choose Launch
Instances.
Work with tags using the command line
For more information about filters, see List and filter your resources (p. 1659).
Tasks
• Add tags on resource creation (p. 1675)
• Add tags to an existing resource (p. 1676)
• Describe tagged resources (p. 1677)
The way you enter JSON-formatted parameters on the command line differs depending on your
operating system. Linux, macOS, Unix, and Windows PowerShell use single quotes (') to enclose the
JSON data structure. Omit the single quotes when using the commands with the Windows command
line. For more information, see Specifying parameter values for the AWS CLI.
Example: Launch an instance and apply tags to the instance and volume
The following run-instances command launches an instance and applies a tag with the key webserver
and the value production to the instance. The command also applies a tag with the key cost-center
and the value cc123 to any EBS volume that's created (in this case, the root volume).
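A sketch of the command; the AMI ID, subnet ID, and key pair name are placeholders:

```shell
aws ec2 run-instances \
    --image-id ami-abc12345 --count 1 --instance-type t2.micro \
    --key-name MyKeyPair --subnet-id subnet-6e7f829e \
    --tag-specifications \
        'ResourceType=instance,Tags=[{Key=webserver,Value=production}]' \
        'ResourceType=volume,Tags=[{Key=cost-center,Value=cc123}]'
```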
You can apply the same tag keys and values to both instances and volumes during launch. The following
command launches an instance and applies a tag with a key of cost-center and a value of cc123 to
both the instance and any EBS volume that's created.
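For example, repeating the same tag for both resource types (the AMI ID, subnet ID, and key pair name are placeholders):

```shell
aws ec2 run-instances \
    --image-id ami-abc12345 --count 1 --instance-type t2.micro \
    --key-name MyKeyPair --subnet-id subnet-6e7f829e \
    --tag-specifications \
        'ResourceType=instance,Tags=[{Key=cost-center,Value=cc123}]' \
        'ResourceType=volume,Tags=[{Key=cost-center,Value=cc123}]'
```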
The following create-volume command creates a volume and applies two tags: purpose=production
and cost-center=cc123.
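A sketch of the command, with an illustrative Availability Zone, volume type, and size:

```shell
aws ec2 create-volume \
    --availability-zone us-east-1a --volume-type gp2 --size 80 \
    --tag-specifications \
        'ResourceType=volume,Tags=[{Key=purpose,Value=production},{Key=cost-center,Value=cc123}]'
```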
The following command adds the tag Stack=production to the specified image, or overwrites an
existing tag for the AMI where the tag key is Stack. If the command succeeds, no output is returned.
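For example (the AMI ID is a placeholder):

```shell
aws ec2 create-tags --resources ami-78a54011 --tags Key=Stack,Value=production
```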
This example adds (or overwrites) two tags for an AMI and an instance. One of the tags contains just a
key (webserver), with no value (we set the value to an empty string). The other tag consists of a key
(stack) and value (Production). If the command succeeds, no output is returned.
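A sketch with placeholder resource IDs; note the empty value for the webserver key:

```shell
aws ec2 create-tags \
    --resources ami-1a2b3c4d i-1234567890abcdef0 \
    --tags Key=webserver,Value= Key=stack,Value=Production
```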
This example adds the tag [Group]=test to an instance. The square brackets ([ and ]) are special
characters, which must be escaped.
If you are using Linux or OS X, to escape the special characters, enclose the element with the special
character with double quotes ("), and then enclose the entire key and value structure with single quotes
(').
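Following that escaping rule, the Linux or macOS form might look like this (the instance ID is a placeholder):

```shell
aws ec2 create-tags --resources i-1234567890abcdef0 --tags 'Key="[Group]",Value=test'
```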
If you are using Windows, to escape the special characters, enclose the element that has special
characters with double quotes ("), and then precede each double quote character with a backslash (\) as
follows:
If you are using Windows PowerShell, to escape the special characters, enclose the value that has special
characters with double quotes ("), precede each double quote character with a backslash (\), and then
enclose the entire key and value structure with single quotes (') as follows:
The following command describes the instances with a Stack tag, regardless of the value of the tag.
The following command describes the instances with the tag Stack=production.
The following command describes the instances with a tag with the value production, regardless of the
tag key.
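The three commands above can be sketched with the documented tag-key, tag:&lt;key&gt;, and tag-value filters:

```shell
aws ec2 describe-instances --filters Name=tag-key,Values=Stack
aws ec2 describe-instances --filters Name=tag:Stack,Values=production
aws ec2 describe-instances --filters Name=tag-value,Values=production
```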
Example: Describe all EC2 resources with the specified tag
The following command describes all EC2 resources with the tag Stack=Test.
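For example, using describe-tags with its key and value filters:

```shell
aws ec2 describe-tags --filters Name=key,Values=Stack Name=value,Values=Test
```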
Work with instance tags in instance metadata
By default, tags are not available from the instance metadata; you must explicitly allow access. You can
allow access at instance launch, or after launch on a running or stopped instance. You can also allow
access to tags by specifying this in a launch template. Instances that are launched by using the template
allow access to tags in the instance metadata.
If you add or remove an instance tag, the instance metadata is updated while the instance is running
for instances built on the Nitro System (p. 232), without needing to reboot, or stop and then start the
instance. For all other instances, to update the tags in the instance metadata, you must either reboot, or
stop and then start the instance.
Topics
• Allow access to tags in instance metadata (p. 1678)
• Turn off access to tags in instance metadata (p. 1679)
• Retrieve tags from instance metadata (p. 1679)
To allow access to tags in instance metadata at launch using the AWS CLI
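A sketch of the launch command; the AMI ID and instance type are placeholders, and InstanceMetadataTags=enabled is the documented metadata option:

```shell
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 --instance-type c3.large \
    --metadata-options "InstanceMetadataTags=enabled"
```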
To allow access to tags in instance metadata on a running or stopped instance using the AWS CLI
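For example (the instance ID is a placeholder):

```shell
aws ec2 modify-instance-metadata-options \
    --instance-id i-1234567890abcdef0 \
    --instance-metadata-tags enabled
```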
Add tags to a resource using CloudFormation
The following examples add the tag Stack=Production to AWS::EC2::Instance using its Tags property.
Tags:
- Key: "Stack"
Value: "Production"
"Tags": [
{
"Key": "Stack",
"Value": "Production"
}
]
The following examples add the same tag by using a TagSpecifications property, which is used by resource types such as AWS::EC2::LaunchTemplate.
TagSpecifications:
- ResourceType: "instance"
Tags:
- Key: "Stack"
Value: "Production"
"TagSpecifications": [
{
"ResourceType": "instance",
"Tags": [
{
"Key": "Stack",
"Value": "Production"
}
]
}
]
Amazon EC2 service quotas
The Amazon EC2 console provides limit information for the resources managed by the Amazon EC2 and
Amazon VPC consoles. You can request an increase for many of these limits. Use the limit information
that we provide to manage your AWS infrastructure. Plan to request any limit increases in advance of the
time that you'll need them.
For more information, see Amazon EC2 endpoints and quotas in the Amazon Web Services General
Reference. For information about Amazon EBS quotas, see Amazon EBS quotas (p. 1613).
Request an increase
Use the Limits page in the Amazon EC2 console to request an increase in your Amazon EC2 or Amazon
VPC resources, on a per-Region basis.
Alternatively, request an increase using Service Quotas. For more information, see Requesting a quota
increase in the Service Quotas User Guide.
Usage reports
You can use Cost Explorer to answer questions about your Amazon EC2 usage and costs.
For more information about working with reports in Cost Explorer, including saving reports, see
Analyzing your costs with Cost Explorer.
Troubleshoot EC2 instances
Contents
• Troubleshoot instance launch issues (p. 1683)
• Troubleshoot connecting to your instance (p. 1686)
• Troubleshoot stopping your instance (p. 1697)
• Troubleshoot instance termination (shutting down) (p. 1699)
• Troubleshoot instances with failed status checks (p. 1700)
• Troubleshoot an unreachable instance (p. 1721)
• Boot from the wrong volume (p. 1724)
• Use EC2Rescue for Linux (p. 1725)
• EC2 Serial Console for Linux instances (p. 1735)
• Send a diagnostic interrupt (for advanced users) (p. 1752)
For additional help with Windows instances, see Troubleshoot Windows instances in the Amazon EC2
User Guide for Windows Instances.
Launch Issues
• Instance limit exceeded (p. 1683)
• Insufficient instance capacity (p. 1684)
• The requested configuration is currently not supported. Please check the documentation for
supported configurations. (p. 1684)
• Instance terminates immediately (p. 1685)
Cause
If you get an InstanceLimitExceeded error when you try to launch a new instance or restart a
stopped instance, you have reached the limit on the number of instances that you can launch in a Region.
When you create your AWS account, we set default limits on the number of instances you can run on a
per-Region basis.
Solution
You can request an instance limit increase on a per-Region basis. For more information, see Amazon EC2 service quotas (p. 1680).
Insufficient instance capacity
Cause
If you get this error when you try to launch an instance or restart a stopped instance, AWS does not
currently have enough available On-Demand capacity to fulfill your request.
Solution
To resolve the issue, try the following:
• Wait a few minutes and then submit your request again; capacity can shift frequently.
• Submit a new request with a reduced number of instances. For example, if you're making a single
request to launch 15 instances, try making 3 requests for 5 instances, or 15 requests for 1 instance
instead.
• If you're launching an instance, submit a new request without specifying an Availability Zone.
• If you're launching an instance, submit a new request using a different instance type (which you can
resize at a later stage). For more information, see Change the instance type (p. 367).
• If you are launching instances into a cluster placement group, you can get an insufficient capacity
error. For more information, see Placement group rules and limitations (p. 1170).
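The retry strategies above can be sketched as a small shell helper. This is a hypothetical illustration, not AWS-provided tooling: the function name and the placeholder IDs are assumptions, and it assumes the AWS CLI v2 is installed and configured.

```shell
# Hypothetical retry helper: if a full-size request fails (for example with an
# InsufficientInstanceCapacity error), resubmit it as single-instance requests.
launch_with_fallback() {
  local ami="$1" itype="$2" count="$3"
  if ! aws ec2 run-instances --image-id "$ami" --instance-type "$itype" --count "$count"; then
    # Smaller requests are easier to fulfill when capacity is fragmented.
    for _ in $(seq 1 "$count"); do
      aws ec2 run-instances --image-id "$ami" --instance-type "$itype" --count 1
    done
  fi
}
```

For example, `launch_with_fallback ami-0abcdef1234567890 t3.micro 15` falls back to fifteen single-instance requests if the 15-instance request fails.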
Cause
The error message provides additional details. For example, an instance type or instance purchasing
option might not be supported in the specified Region or Availability Zone.
Solution
Try a different instance configuration. To search for an instance type that meets your requirements, see
Find an Amazon EC2 instance type (p. 363).
Instance terminates immediately
Cause
The following are a few reasons why an instance might immediately terminate:
• You've exceeded your EBS volume limits. For more information, see Instance volume limits (p. 1637).
• An EBS snapshot is corrupted.
• The root EBS volume is encrypted and you do not have permissions to access the KMS key for
decryption.
• A snapshot specified in the block device mapping for the AMI is encrypted and you do not have
permissions to access the KMS key for decryption or you do not have access to the KMS key to encrypt
the restored volumes.
• The instance store-backed AMI that you used to launch the instance is missing a required part (an
image.part.xx file).
For more information, get the termination reason using one of the following methods.
To get the termination reason using the AWS Command Line Interface, describe the instance, then review the JSON response returned by the command and note the values in the StateReason response element.
"StateReason": {
"Message": "Client.VolumeLimitExceeded: Volume limit exceeded",
"Code": "Server.InternalError"
},
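The describe command itself did not survive extraction; the following is a sketch of the usual approach, with `get_termination_reason` as a hypothetical wrapper and a placeholder instance ID, assuming the AWS CLI v2.

```shell
# Hypothetical wrapper: query the StateReason element for an instance.
get_termination_reason() {
  aws ec2 describe-instances --instance-ids "$1" \
    --query "Reservations[].Instances[].StateReason" --output json
}
```

For example: `get_termination_reason i-1234567890abcdef0`.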
For more information, see Viewing events with CloudTrail event history in the AWS CloudTrail User Guide.
Solution
Depending on the termination reason, take one of the following actions:
Connect to your instance
You can connect to your instance using the user name for your user account or the default user
name for the AMI that you used to launch your instance.
• Get the user name for your user account.
For more information about how to create a user account, see Manage user accounts on your
Amazon Linux instance (p. 660).
• Get the default user name for the AMI that you used to launch your instance:
• For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
• For a CentOS AMI, the user name is centos or ec2-user.
• For a Debian AMI, the user name is admin.
• For a Fedora AMI, the user name is fedora or ec2-user.
• For a RHEL AMI, the user name is ec2-user or root.
• For a SUSE AMI, the user name is ec2-user or root.
• For an Ubuntu AMI, the user name is ubuntu.
• For an Oracle AMI, the user name is ec2-user.
Error connecting to your instance: Connection timed out
Make sure your security group rules allow inbound traffic from your public IPv4 address on
the proper port. For steps to verify, see Error connecting to your instance: Connection timed
out (p. 1687)
Verify that your instance is ready
After you launch an instance, it can take a few minutes for the instance to be ready so that you can
connect to it. Check your instance to make sure it is running and has passed its status checks.
a. In the Instance state column, verify that your instance is in the running state.
b. In the Status check column, verify that your instance has passed the two status checks.
For more information, see General prerequisites for connecting to your instance (p. 596).
You need a security group rule that allows inbound traffic from your public IPv4 address on the proper
port.
New console
• For Linux instances: Verify that there is a rule that allows traffic from your computer to port
22 (SSH).
• For Windows instances: Verify that there is a rule that allows traffic from your computer to
port 3389 (RDP).
4. Each time you restart your instance, a new IP address (and host name) will be assigned. If your
security group has a rule that allows inbound traffic from a single IP address, this address
might not be static if your computer is on a corporate network or if you are connecting through
an internet service provider (ISP). Instead, specify the range of IP addresses used by client
computers. If your security group does not have a rule that allows inbound traffic as described
in the previous step, add a rule to your security group. For more information, see Authorize
inbound traffic for your Linux instances (p. 1285).
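Adding such a rule from the command line can be sketched as follows. The helper name, security group ID, and CIDR range are placeholders, and the sketch assumes the AWS CLI v2.

```shell
# Hypothetical helper: allow inbound SSH (port 22) from a given CIDR range.
allow_inbound_ssh() {
  aws ec2 authorize-security-group-ingress \
    --group-id "$1" --protocol tcp --port 22 --cidr "$2"
}
```

For example: `allow_inbound_ssh sg-0123456789abcdef0 203.0.113.0/24`. For Windows instances, the same call with `--port 3389` opens RDP instead.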
For more information about security group rules, see Security group rules in the Amazon VPC
User Guide.
Old console
For Windows instances: When you select view inbound rules, a window will appear that displays
the port(s) to which traffic is allowed. Verify that there is a rule that allows traffic from your
computer to port 3389 (RDP).
Each time you restart your instance, a new IP address (and host name) will be assigned. If your
security group has a rule that allows inbound traffic from a single IP address, this address
may not be static if your computer is on a corporate network or if you are connecting through
an internet service provider (ISP). Instead, specify the range of IP addresses used by client
computers. If your security group does not have a rule that allows inbound traffic as described
in the previous step, add a rule to your security group. For more information, see Authorize
inbound traffic for your Linux instances (p. 1285).
For more information about security group rules, see Security group rules in the Amazon VPC
User Guide.
You need a route that sends all traffic destined outside the VPC to the internet gateway for the VPC.
New console
a. Choose the ID of the route table (rtb-xxxxxxxx) to navigate to the route table.
b. On the Routes tab, choose Edit routes. Choose Add route, use 0.0.0.0/0 as the
destination and the internet gateway as the target. For IPv6, choose Add route, use ::/0 as
the destination and the internet gateway as the target.
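The console steps above have a CLI equivalent, sketched here as a hypothetical helper; the route table and internet gateway IDs are placeholders, and the AWS CLI v2 is assumed.

```shell
# Hypothetical helper: add default IPv4 and IPv6 routes to the internet gateway.
add_default_routes() {
  aws ec2 create-route --route-table-id "$1" \
    --destination-cidr-block 0.0.0.0/0 --gateway-id "$2"
  aws ec2 create-route --route-table-id "$1" \
    --destination-ipv6-cidr-block ::/0 --gateway-id "$2"
}
```

For example: `add_default_routes rtb-0123456789abcdef0 igw-0123456789abcdef0`.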
Old console
a. Choose the ID of the route table (rtb-xxxxxxxx) to navigate to the route table.
b. On the Routes tab, choose Edit routes. Choose Add route, use 0.0.0.0/0 as the
destination and the internet gateway as the target. For IPv6, choose Add route, use ::/0 as
the destination and the internet gateway as the target.
c. Choose Save routes.
Check the network access control list (ACL) for the subnet.
The network ACL must allow inbound traffic from your local IP address on port 22 (for Linux instances) or port 3389 (for Windows instances). It must also allow outbound traffic to the ephemeral ports (1024-65535).
Ask your network administrator whether the internal firewall allows inbound and outbound traffic from
your computer on port 22 (for Linux instances) or port 3389 (for Windows instances).
If you have a firewall on your computer, verify that it allows inbound and outbound traffic from your
computer on port 22 (for Linux instances) or port 3389 (for Windows instances).
If not, you can associate an Elastic IP address with your instance. For more information, see Elastic IP
addresses (p. 1059).
Check the CPU load on your instance; the server may be overloaded.
AWS automatically provides data such as Amazon CloudWatch metrics and instance status, which you
can use to see how much CPU load is on your instance and, if necessary, adjust how your loads are
handled. For more information, see Monitor your instances using CloudWatch (p. 958).
• If your load is variable, you can automatically scale your instances up or down using Auto Scaling and
Elastic Load Balancing.
• If your load is steadily growing, you can move to a larger instance type. For more information, see
Change the instance type (p. 367).
• Your subnet must be associated with a route table that has a route for IPv6 traffic (::/0) to an
internet gateway.
• Your security group rules must allow inbound traffic from your local IPv6 address on the proper port
(22 for Linux and 3389 for Windows).
• Your network ACL rules must allow inbound and outbound IPv6 traffic.
• If you launched your instance from an older AMI, it might not be configured for DHCPv6 (IPv6
addresses are not automatically recognized on the network interface). For more information, see
Configure IPv6 on Your Instances in the Amazon VPC User Guide.
• Your local computer must have an IPv6 address, and must be configured to use IPv6.
Error: unable to load key ... Expecting: ANY PRIVATE KEY
If the private key file is incorrectly configured, follow these steps to resolve the error:
1. Create a new key pair. For more information, see Create a key pair using Amazon EC2 (p. 1289).
2. Add the new key pair to your instance. For more information, see Connect to your Linux instance if
you lose your private key (p. 1299).
3. Connect to your instance using the new key pair.
Error: User key not recognized by server
• Use ssh -vvv to get triple verbose debugging information while connecting:
The following sample output demonstrates what you might see if you were trying to connect to your
instance with a key that was not recognized by the server:
open/ANT/myusername/.ssh/known_hosts).
debug2: bits set: 504/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: boguspem.pem ((nil))
debug1: Authentications that can continue: publickey
debug3: start over, passed a different list publickey
debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: boguspem.pem
debug1: read PEM private key done: type RSA
debug3: sign_and_send_pubkey: RSA 9c:4c:bc:0c:d0:5c:c7:92:6c:8e:9b:16:e4:43:d8:b2
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
Permission denied (publickey).
• Verify that your private key (.pem) file has been converted to the format recognized by PuTTY (.ppk).
For more information about converting your private key, see Connect to your Linux instance from
Windows using PuTTY (p. 612).
Note
In PuTTYgen, load your private key file and select Save Private Key rather than Generate.
• Verify that you are connecting with the appropriate user name for your AMI. Enter the user name in
the Host name box in the PuTTY Configuration window.
• For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
• For a CentOS AMI, the user name is centos or ec2-user.
• For a Debian AMI, the user name is admin.
• For a Fedora AMI, the user name is fedora or ec2-user.
• For a RHEL AMI, the user name is ec2-user or root.
• For a SUSE AMI, the user name is ec2-user or root.
• For an Ubuntu AMI, the user name is ubuntu.
• For an Oracle AMI, the user name is ec2-user.
• For a Bitnami AMI, the user name is bitnami.
• Otherwise, check with the AMI provider.
• Verify that you have an inbound security group rule to allow inbound traffic to the appropriate port.
For more information, see Authorizing Network Access to Your Instances (p. 1285).
Error: Permission denied or connection closed by [instance] port 22
• For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
• For a CentOS AMI, the user name is centos or ec2-user.
• For a Debian AMI, the user name is admin.
• For a Fedora AMI, the user name is fedora or ec2-user.
• For a RHEL AMI, the user name is ec2-user or root.
• For a SUSE AMI, the user name is ec2-user or root.
• For an Ubuntu AMI, the user name is ubuntu.
• For an Oracle AMI, the user name is ec2-user.
• For a Bitnami AMI, the user name is bitnami.
• Otherwise, check with the AMI provider.
For example, to use an SSH client to connect to an Amazon Linux instance, use the following command:
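The example command referenced above was lost in extraction. Its usual form is shown here wrapped in a hypothetical helper; the key path and public DNS name are placeholders.

```shell
# Hypothetical wrapper: connect as the AMI's default user with a specific key.
connect_ssh() {
  ssh -i "$1" "ec2-user@$2"
}
```

For example: `connect_ssh /path/my-key-pair.pem ec2-a-b-c-d.us-west-2.compute.amazonaws.com`.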
Confirm that you are using the private key file that corresponds to the key pair that you selected when
you launched the instance.
New console
Old console
pair with a new one. For more information, see Connect to your Linux instance if you lose your
private key (p. 1299).
If you generated your own key pair, ensure that your key generator is set up to create RSA keys. DSA keys
are not accepted.
If you get a Permission denied (publickey) error and none of the above applies (for example,
you were able to connect previously), the permissions on the home directory of your instance may have
been changed. Permissions for /home/my-instance-user-name/.ssh/authorized_keys must be
limited to the owner only.
1. Stop your instance and detach the root volume. For more information, see Stop and start your
instance (p. 622) and Detach an Amazon EBS volume from a Linux instance (p. 1378).
2. Launch a temporary instance in the same Availability Zone as your current instance (use a similar or
the same AMI as you used for your current instance), and attach the root volume to the temporary
instance. For more information, see Attach an Amazon EBS volume to an instance (p. 1353).
3. Connect to the temporary instance, create a mount point, and mount the volume that you attached.
For more information, see Make an Amazon EBS volume available for use on Linux (p. 1360).
4. From the temporary instance, check the permissions of the /home/my-instance-user-name/
directory of the attached volume. If necessary, adjust the permissions as follows:
5. Unmount the volume, detach it from the temporary instance, and re-attach it to the original
instance. Ensure that you specify the correct device name for the root volume; for example, /dev/
xvda.
6. Start your instance. If you no longer require the temporary instance, you can terminate it.
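The permission adjustment referenced in step 4 can be sketched as follows. It is demonstrated on a throwaway directory; on the attached volume, substitute the real mount point and user name (for example, /mnt/tempvol/home/my-instance-user-name).

```shell
# Restrict the home directory, .ssh directory, and authorized_keys to the owner.
home_dir="$(mktemp -d)/my-instance-user-name"   # stand-in for the real home directory
mkdir -p "$home_dir/.ssh"
touch "$home_dir/.ssh/authorized_keys"
chmod 700 "$home_dir" "$home_dir/.ssh"          # directories: owner-only access
chmod 600 "$home_dir/.ssh/authorized_keys"      # key file: owner read/write only
```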
Error: Unprotected private key file
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0777 for '.ssh/my_private_key.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: .ssh/my_private_key.pem
Permission denied (publickey).
If you see a similar message when you try to log in to your instance, examine the first line of the error
message to verify that you are using the correct public key for your instance. The above example uses the
private key .ssh/my_private_key.pem with file permissions of 0777, which allow anyone to read or
write to this file. This permission level is very insecure, and so SSH ignores this key.
If you are connecting from MacOS or Linux, run the following command to fix this error, substituting the
path for your private key file.
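The command itself was dropped in extraction; the standard fix is to make the key readable by its owner only. It is demonstrated here on a temporary file — in practice the target is your .pem key file.

```shell
# Restrict the key so only the file owner can read it; SSH then accepts the key.
key_file=$(mktemp)   # stand-in for /path/my-key-pair.pem
chmod 400 "$key_file"
```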
If you are connecting from Windows, perform the following steps on your local computer.
1. From the command prompt, navigate to the file path location of your .pem file.
2. Run the following command to reset and remove explicit permissions:
3. Run the following command to grant Read permissions to the current user:
4. Run the following command to disable inheritance and remove inherited permissions.
5. You should be able to connect to your Linux instance from Windows via SSH.
Error: Private key must begin with "-----BEGIN RSA PRIVATE KEY-----" and end with "-----END RSA PRIVATE KEY-----"
To resolve the error, the private key must be in the PEM format. Use the following command to create the private key in the PEM format:
ssh-keygen -m PEM
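A fuller invocation can be sketched as follows (the temporary path and the lack of a passphrase are for demonstration only).

```shell
# Generate a fresh RSA key pair directly in PEM (PKCS#1) format.
key_file=$(mktemp -u)
ssh-keygen -q -t rsa -b 2048 -m PEM -N "" -f "$key_file"
head -n 1 "$key_file"   # -----BEGIN RSA PRIVATE KEY-----
```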
Error: Server refused our key or No supported authentication methods available
• For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
• For a CentOS AMI, the user name is centos or ec2-user.
• For a Debian AMI, the user name is admin.
• For a Fedora AMI, the user name is fedora or ec2-user.
• For a RHEL AMI, the user name is ec2-user or root.
• For a SUSE AMI, the user name is ec2-user or root.
• For an Ubuntu AMI, the user name is ubuntu.
• For an Oracle AMI, the user name is ec2-user.
• For a Bitnami AMI, the user name is bitnami.
• Otherwise, check with the AMI provider.
You should also verify that your private key (.pem) file has been correctly converted to the format
recognized by PuTTY (.ppk). For more information about converting your private key, see Connect to
your Linux instance from Windows using PuTTY (p. 612).
If you are unable to issue a ping command from your instance, ensure that your outbound security
group rules allow ICMP traffic for the Echo Request message to all destinations, or to the host that you
are attempting to ping.
Ping commands can also be blocked by a firewall or time out due to network latency or hardware issues.
You should consult your local network or system administrator for help with further troubleshooting.
If you still experience issues after enabling keepalives, try to disable Nagle's algorithm on the Connection
page of the PuTTY Configuration.
Error: Host key validation failed for EC2 Instance Connect
To resolve the error, you must run the eic_harvest_hostkeys script on your instance, which uploads
your new host key to EC2 Instance Connect. The script is located at /opt/aws/bin/ on Amazon Linux 2
instances, and at /usr/share/ec2-instance-connect/ on Ubuntu instances.
Amazon Linux 2
To resolve the host key validation failed error on an Amazon Linux 2 instance
You can connect by using the EC2 Instance Connect CLI or by using the SSH key pair that was
assigned to your instance when you launched it and the default user name of the AMI that you
used to launch your instance. For Amazon Linux 2, the default user name is ec2-user.
For example, if your instance was launched using Amazon Linux 2, your instance's public
DNS name is ec2-a-b-c-d.us-west-2.compute.amazonaws.com, and the key pair is
my_ec2_private_key.pem, use the following command to SSH into your instance:
For more information about connecting to your instance, see Connect to your Linux instance
using SSH (p. 599).
2. Navigate to the following folder.
You can now use the EC2 Instance Connect browser-based client to connect to your instance.
Ubuntu
You can connect by using the EC2 Instance Connect CLI or by using the SSH key pair that was
assigned to your instance when you launched it and the default user name of the AMI that you
used to launch your instance. For Ubuntu, the default user name is ubuntu.
For example, if your instance was launched using Ubuntu, your instance's public DNS
name is ec2-a-b-c-d.us-west-2.compute.amazonaws.com, and the key pair is
my_ec2_private_key.pem, use the following command to SSH into your instance:
For more information about connecting to your instance, see Connect to your Linux instance
using SSH (p. 599).
2. Navigate to the following folder.
You can now use the EC2 Instance Connect browser-based client to connect to your instance.
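Step 2's folder path appears only in the prose above; the two steps can be sketched as a hypothetical helper that changes into the script's install directory and runs it (/opt/aws/bin on Amazon Linux 2, /usr/share/ec2-instance-connect on Ubuntu, per the text above).

```shell
# Hypothetical wrapper: run eic_harvest_hostkeys from its install directory.
run_eic_harvest() {
  ( cd "$1" && ./eic_harvest_hostkeys )   # subshell keeps the caller's cwd intact
}
```

For example, on Amazon Linux 2: `run_eic_harvest /opt/aws/bin`.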
Stop your instance
There is no cost for instance usage while an instance is in the stopping state or in any other state except running. You are charged for instance usage only when an instance is in the running state.
New console
Old console
AWS CLI
If, after 10 minutes, the instance has not stopped, post a request for help in the Amazon EC2 forum.
To help expedite a resolution, include the instance ID, and describe the steps that you've already taken.
Alternatively, if you have a support plan, create a technical support case in the Support Center.
Create a replacement instance
New console
For more information, see Create a Linux AMI from an instance (p. 136).
5. Launch a new instance from the AMI and verify that the new instance is working.
6. Select the stuck instance, and choose Actions, Instance state, Terminate instance. If the
instance also gets stuck terminating, Amazon EC2 automatically forces it to terminate within a
few hours.
Old console
For more information, see Create a Linux AMI from an instance (p. 136).
5. Launch a new instance from the AMI and verify that the new instance is working.
6. Select the stuck instance, and choose Actions, Instance State, Terminate. If the instance also
gets stuck terminating, Amazon EC2 automatically forces it to terminate within a few hours.
AWS CLI
1. Create an AMI from the stuck instance using the create-image (AWS CLI) command with the --no-reboot option, as follows:
2. Launch a new instance from the AMI using the run-instances (AWS CLI) command as follows:
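The two commands referenced in steps 1 and 2 did not survive extraction; a sketch follows, with the AMI name, instance ID, and instance type as placeholders and the AWS CLI v2 assumed.

```shell
# Hypothetical helper: image the stuck instance without rebooting it, then
# launch a replacement from the resulting AMI.
create_replacement() {
  local ami_id
  ami_id=$(aws ec2 create-image --instance-id "$1" --name "replacement-ami" \
    --no-reboot --query ImageId --output text)
  aws ec2 run-instances --image-id "$ami_id" --count 1 --instance-type "$2"
}
```

For example: `create_replacement i-1234567890abcdef0 t3.micro`.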
If you are unable to create an AMI from the instance as described in the previous procedure, you can set
up a replacement instance as follows:
1. Select the instance and choose Description, Block devices. Select each volume and make note of its
volume ID. Be sure to note which volume is the root volume.
2. In the navigation pane, choose Volumes. Select each volume for the instance, and choose Actions,
Create Snapshot.
3. In the navigation pane, choose Snapshots. Select the snapshot that you just created, and choose
Actions, Create Volume.
4. Launch an instance with the same operating system as the stuck instance. Note the volume ID and
device name of its root volume.
5. In the navigation pane, choose Instances, select the instance that you just launched, and choose
Instance state, Stop instance.
6. In the navigation pane, choose Volumes, select the root volume of the stopped instance, and choose
Actions, Detach Volume.
7. Select the root volume that you created from the stuck instance, choose Actions, Attach Volume,
and attach it to the new instance as its root volume (using the device name that you made note of).
Attach any additional non-root volumes to the instance.
8. In the navigation pane, choose Instances and select the replacement instance. Choose Instance
state, Start instance. Verify that the instance is working.
9. Select the stuck instance, choose Instance state, Terminate instance. If the instance also gets stuck
terminating, Amazon EC2 automatically forces it to terminate within a few hours.
Terminate your instance
Another possible cause is a problem with the underlying host computer. If your instance remains in the
shutting-down state for several hours, Amazon EC2 treats it as a stuck instance and forcibly terminates
it.
If it appears that your instance is stuck terminating and it has been longer than several hours, post a
request for help to the Amazon EC2 forum. To help expedite a resolution, include the instance ID and
describe the steps that you've already taken. Alternatively, if you have a support plan, create a technical
support case in the Support Center.
To stop automatic scaling, see the Amazon EC2 Auto Scaling User Guide, EC2 Fleet (p. 762), or Create a
Spot Fleet request (p. 854).
For examples of problems that can cause status checks to fail, see Status checks for your
instances (p. 928).
Contents
• Review status check information (p. 1701)
• Retrieve the system logs (p. 1702)
• Troubleshoot system log errors for Linux-based instances (p. 1702)
• Out of memory: kill process (p. 1703)
Review status check information
If a system status check has failed, you can try one of the following options:
• Create an instance recovery alarm. For more information, see Create alarms that stop, terminate,
reboot, or recover an instance (p. 982).
• If you changed the instance type to an instance built on the Nitro System (p. 232), status checks fail
if you migrated from an instance that does not have the required ENA and NVMe drivers. For more
information, see Compatibility for changing the instance type (p. 372).
• For an instance using an Amazon EBS-backed AMI, stop and restart the instance.
• For an instance using an instance-store backed AMI, terminate the instance and launch a replacement.
• Wait for Amazon EC2 to resolve the issue.
• Post your issue to the Amazon EC2 forum.
• If your instance is in an Auto Scaling group, the Amazon EC2 Auto Scaling service automatically
launches a replacement instance. For more information, see Health Checks for Auto Scaling Instances
in the Amazon EC2 Auto Scaling User Guide.
• Retrieve the system log and look for errors.
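Checking the system status from the command line can be sketched as follows; the helper name and instance ID are placeholders, and the AWS CLI v2 is assumed.

```shell
# Hypothetical helper: report the system status check result for an instance.
system_status() {
  aws ec2 describe-instance-status --instance-ids "$1" \
    --query "InstanceStatuses[].SystemStatus.Status" --output text
}
```

For example: `system_status i-1234567890abcdef0`.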
Retrieve the system logs
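Retrieving the log from the command line can be sketched as follows; the helper name and instance ID are placeholders, and the AWS CLI v2 is assumed.

```shell
# Hypothetical helper: fetch the console (system) log for an instance.
get_system_log() {
  aws ec2 get-console-output --instance-id "$1" --output text
}
```

For example: `get_system_log i-1234567890abcdef0`.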
Memory Errors
Device Errors
Kernel Errors
• request_module: runaway loop modprobe (Looping legacy kernel modprobe on older Linux
versions) (p. 1706)
• "FATAL: kernel too old" and "fsck: No such file or directory while trying to open /dev" (Kernel and AMI
mismatch) (p. 1707)
• "FATAL: Could not load /lib/modules" or "BusyBox" (Missing kernel modules) (p. 1708)
• ERROR Invalid kernel (EC2 incompatible kernel) (p. 1709)
• fsck: No such file or directory while trying to open... (File system not found) (p. 1710)
• General error mounting filesystems (failed mount) (p. 1711)
• VFS: Unable to mount root fs on unknown-block (Root filesystem mismatch) (p. 1713)
• Error: Unable to determine major/minor number of root device... (Root file system/device
mismatch) (p. 1714)
• XENBUS: Device with no driver... (p. 1715)
• ... days without being checked, check forced (File system check required) (p. 1716)
• fsck died with exit status... (Missing device) (p. 1716)
Out of memory: kill process
Potential cause
Exhausted memory
Suggested actions
For this instance type Do this
• Reboot the instance to return it to an
unimpaired status. The problem will probably
occur again unless you change the instance
type.
ERROR: mmu_update failed (Memory management update failed)
...
Press `ESC' to enter the menu... Booting 'Amazon Linux 2011.09 (2.6.35.14-95.38.amzn1.i686)'
root (hd0)
en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-2.6.35.14-95.38.amzn1.i686.img
Potential cause
Issue with Amazon Linux
Suggested action
Post your issue to the Developer Forums or contact AWS Support.
[9943664.192949] end_request: I/O error, dev sde, sector 52428288
[9943664.193112] end_request: I/O error, dev sde, sector 52428288
[9943664.193266] end_request: I/O error, dev sde, sector 52428288
...
Potential causes
Suggested actions
I/O ERROR: neither local nor remote disk (Broken distributed block device)
...
block drbd1: Local IO failed in request_timer_fn. Detaching...
block drbd1: IO ERROR: neither local nor remote disk
JBD2: I/O error detected when updating journal superblock for drbd1-8.
Potential causes
Suggested action
Terminate the instance and launch a new instance.
For an Amazon EBS-backed instance you can recover data from a recent snapshot by creating an image
from it. Any data added after the snapshot cannot be recovered.
request_module: runaway loop modprobe (Looping legacy kernel modprobe on older Linux versions)
Suggested actions
For this instance type Do this
Option 1: Terminate the instance and launch a new instance, specifying the --kernel and --ramdisk parameters.
Option 2:
"FATAL: kernel too old" and "fsck: No such file or directory while trying to open /dev" (Kernel and AMI mismatch)
Potential causes
Incompatible kernel and userland
Suggested actions
"FATAL: Could not load /lib/modules" or "BusyBox" (Missing kernel modules)
(initramfs)
Potential causes
One or more of the following conditions can cause this problem:
• Missing ramdisk
• Missing correct modules from ramdisk
• Amazon EBS root volume not correctly attached as /dev/sda1
Suggested actions
ERROR Invalid kernel (EC2 incompatible kernel)
...
root (hd0)
initrd /initrd.img
Booting 'Fallback'
root (hd0)
Potential causes
One or both of the following conditions can cause this problem:
Suggested actions
For this instance type Do this
2. Replace with working kernel.
3. Install a fallback kernel.
4. Modify the AMI by correcting the kernel.
fsck: No such file or directory while trying to open... (File system not found)
Welcome to Fedora
Press 'I' to enter interactive startup.
Setting clock : Wed Oct 26 05:52:05 EDT 2011 [ OK ]
Starting udev: [ OK ]
No devices found
Setting up Logical Volume Management: File descriptor 7 left open
No volume groups found
[ OK ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/sda1
/dev/sda1: clean, 82081/1310720 files, 2141116/2621440 blocks
[/sbin/fsck.ext3 (1) -- /mnt/dbbackups] fsck.ext3 -a /dev/sdh
fsck.ext3: No such file or directory while trying to open /dev/sdh
/dev/sdh:
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
[FAILED]
Potential causes
• A bug exists in the ramdisk filesystem definitions in /etc/fstab
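The e2fsck -b recovery suggested in the message above can be sketched safely against a loopback image file rather than a real device (all file names here are illustrative, not taken from the guide):

```shell
# Create a small ext2 image with 1 KiB blocks; the first backup
# superblock then lives at block 8193, matching the message above.
dd if=/dev/zero of=/tmp/demo-fs.img bs=1024 count=16384 status=none
mke2fs -F -q -b 1024 /tmp/demo-fs.img
# Simulate a corrupted primary superblock (first 2 KiB zeroed).
dd if=/dev/zero of=/tmp/demo-fs.img bs=1024 count=2 conv=notrunc status=none
# Repair from the alternate superblock; exit status 1 only means
# "errors were corrected", so it is tolerated here.
e2fsck -y -b 8193 -B 1024 /tmp/demo-fs.img || true
```

On a real instance you would detach the affected volume, attach it to a working instance, and run e2fsck against the device node there.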
General error mounting filesystems (failed mount)
Suggested actions
Potential causes
Amazon EBS-backed
• Detached or failed Amazon EBS volume.
• Corrupted filesystem.
• Mismatched ramdisk and AMI combination
(such as Debian ramdisk with a SUSE AMI).
Instance store-backed
• A failed drive.
• A corrupted file system.
• A mismatched ramdisk and AMI combination (for
example, a Debian ramdisk with a SUSE AMI).
Suggested actions
VFS: Unable to mount root fs on unknown-
block (Root filesystem mismatch)
For this instance type Do this
Potential causes
Suggested actions
Error: Unable to determine major/minor number
of root device... (Root file system/device mismatch)
For this instance type Do this
...
XENBUS: Device with no driver: device/vif/0
XENBUS: Device with no driver: device/vbd/2048
drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
Initializing network drop monitor service
Freeing unused kernel memory: 508k freed
:: Starting udevd...
done.
:: Running Hook [udev]
:: Triggering uevents...<30>udevd[65]: starting version 173
done.
Waiting 10 seconds for device /dev/xvda1 ...
Root device '/dev/xvda1' doesn't exist. Attempting to create it.
ERROR: Unable to determine major/minor number of root device '/dev/xvda1'.
You are being dropped to a recovery shell
Type 'exit' to try and continue booting
sh: can't access tty; job control turned off
[ramfs /]#
Potential causes
• Missing or incorrectly configured virtual block device driver
• Device enumeration clash (sda versus xvda or sda instead of sda1)
• Incorrect choice of instance kernel
Suggested actions
XENBUS: Device with no driver...
Potential causes
• Missing or incorrectly configured virtual block device driver
• Device enumeration clash (sda versus xvda)
• Incorrect choice of instance kernel
Suggested actions
... days without being checked, check
forced (File system check required)
...
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/sda1
/dev/sda1 has gone 361 days without being checked, check forced
Potential causes
Filesystem check time passed; a filesystem check is being forced.
Suggested actions
• Wait until the filesystem check completes. A filesystem check can take a long time depending on the
size of the root filesystem.
• Modify your filesystems to remove the filesystem check (fsck) enforcement using tune2fs or tools
appropriate for your filesystem.
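The tune2fs route can be sketched as follows. It is shown here against a scratch image file so it can run anywhere; on a real instance you would target the root device (for example, /dev/xvda1):

```shell
# Build a scratch ext2 filesystem image to tune (illustrative only).
dd if=/dev/zero of=/tmp/root-demo.img bs=1M count=8 status=none
mke2fs -F -q /tmp/root-demo.img
# -c 0 disables mount-count-based checks; -i 0 disables time-based checks.
tune2fs -c 0 -i 0 /tmp/root-demo.img
# Confirm: maximum mount count is -1, check interval is 0.
tune2fs -l /tmp/root-demo.img | grep -E 'Maximum mount count|Check interval'
```

Disabling the periodic check trades boot-time delays for the risk of undetected filesystem damage, so consider scheduling manual checks instead.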
Cleaning up ifupdown....
Loading kernel modules...done.
...
Activating lvm and md swap...done.
Checking file systems...fsck from util-linux-ng 2.16.2
/sbin/fsck.xfs: /dev/sdh does not exist
fsck died with exit status 8
failed (code 8).
Potential causes
• Ramdisk looking for missing drive
• Filesystem consistency check forced
• Drive failed or detached
Suggested actions
GRUB prompt (grubdom>)
completions of a device/filename. ]
grubdom>
Potential causes
Suggested actions
Bringing up interface eth0: Device eth0 has different
MAC address than expected, ignoring. (Hard-coded MAC address)
For this instance type Do this
Note
To recover data from the existing
instance, contact AWS Support.
...
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: Device eth0 has different MAC address than expected, ignoring.
[FAILED]
Starting auditd: [ OK ]
Potential causes
There is a hardcoded interface MAC in the AMI configuration
Suggested actions
OR
Unable to load SELinux Policy. Machine is in enforcing
mode. Halting now. (SELinux misconfiguration)
For this instance type Do this
• Terminate the instance and launch a new
instance.
Potential causes
SELinux has been enabled in error:
Suggested actions
XENBUS: Timeout connecting to devices (Xenbus timeout)
Potential causes
• The block device is not connected to the instance
• This instance is using an old instance kernel
Suggested actions
Instance reboot
The ability to reboot instances that are otherwise unreachable is valuable for both troubleshooting and
general instance management.
Just as you can reset a computer by pressing the reset button, you can reset EC2 instances using the
Amazon EC2 console, CLI, or API. For more information, see Reboot your instance (p. 642)
Warning
For Windows instances, this operation performs a hard reboot that might result in data
corruption.
For Linux/Unix instances, the instance console output displays the exact console output that would
normally be displayed on a physical monitor attached to a computer. The console output returns
buffered information that was posted shortly after an instance state transition (start, stop, reboot, or
terminate). The posted output is not continuously updated; it is updated only when it is likely to be of
the most value.
For Windows instances, the instance console output includes the last three system event log errors.
You can optionally retrieve the latest serial console output at any time during the instance lifecycle. This
option is only supported on Instances built on the Nitro System (p. 232). It is not supported through the
Amazon EC2 console.
Note
Only the most recent 64 KB of posted output is stored, which is available for at least 1 hour
after the last posting.
Only the instance owner can access the console output. You can retrieve the console output for your
instances using the console or the command line.
Capture a screenshot of an unreachable instance
Command line
You can use one of the following commands. For more information about these command line
interfaces, see Access Amazon EC2 (p. 3).
For more information about common system log errors, see Troubleshoot system log errors for Linux-
based instances (p. 1702).
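From the AWS CLI, the system log can be fetched with the get-console-output command. The following is a sketch: the instance ID is a placeholder, the live call requires configured credentials (so it is commented out), and the decode step is shown on a literal base64 string so it runs anywhere:

```shell
# Fetch the console output (requires configured AWS CLI credentials):
# aws ec2 get-console-output --instance-id i-1234567890abcdef0 \
#     --output text --query Output > console.b64
# The Output field in the raw API response is base64-encoded;
# decoding works like this on any sample string:
printf 'S2VybmVsIHBhbmlj' | base64 --decode
echo
```

Searching the decoded log for strings such as "Kernel panic" or "I/O error" quickly narrows the failure to one of the sections above.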
Instance recovery when a host computer fails
You can use one of the following commands. The returned content is base64-encoded. For more
information about these command line interfaces, see Access Amazon EC2 (p. 3).
1. Back up any important data on your instance store volumes to Amazon EBS or Amazon S3.
2. Stop the instance.
3. Start the instance.
4. Restore any important data.
For more information, see Stop and start your instance (p. 622).
For more information, see Create an instance store-backed Linux AMI (p. 139).
This is due to how the initial ramdisk in Linux works. It chooses the volume defined as / in
/etc/fstab, and in some distributions, this is determined by the label attached to the volume partition.
Specifically, you find that your /etc/fstab looks something like the following:
If you check the label of both volumes, you see that they both contain the / label:
In this example, you could end up having /dev/xvdf1 become the root device that your instance boots
to after the initial ramdisk runs, instead of the /dev/xvda1 volume from which you had intended to
boot. To solve this, use the same e2label command to change the label of the attached volume that you
do not want to boot from.
In some cases, specifying a UUID in /etc/fstab can resolve this. However, if both volumes come from
the same snapshot, or the secondary is created from a snapshot of the primary volume, they share a
UUID.
• For an ext2, ext3, or ext4 volume, use the e2label command to change the label of the volume to something other than /.
• For an XFS volume, use the xfs_admin command to change the label of the volume to something other than /.
After changing the volume label as shown, you should be able to reboot the instance and have the
proper volume selected by the initial ramdisk when the instance boots.
Important
If you intend to detach the volume with the new label and return it to another instance to use
as the root volume, you must perform the above procedure again and change the volume label
back to its original value. Otherwise, the other instance does not boot because the ramdisk is
unable to find the volume with the label /.
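The relabeling fix can be sketched with two image files standing in for the two EBS volumes (file names and the alt-root label are illustrative); both start out labeled /, and the secondary volume is then renamed:

```shell
# Two ext2 images, both labeled "/" as in the duplicate-label scenario.
dd if=/dev/zero of=/tmp/rootvol.img bs=1M count=4 status=none
dd if=/dev/zero of=/tmp/datavol.img bs=1M count=4 status=none
mke2fs -F -q -L / /tmp/rootvol.img
mke2fs -F -q -L / /tmp/datavol.img
# Rename the label on the volume you do not want to boot from.
e2label /tmp/datavol.img alt-root
e2label /tmp/rootvol.img    # prints the label, still "/"
e2label /tmp/datavol.img    # now "alt-root"
```

On a real instance, run e2label against the attached device (for example, /dev/xvdf1) and update any /etc/fstab entries that reference the old label.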
Install EC2Rescue for Linux
collecting resource utilization data, and diagnosing/remediating known problematic kernel parameters
and common OpenSSH issues.
The AWSSupport-TroubleshootSSH runbook installs EC2Rescue for Linux and then uses the tool to
check or attempt to fix common issues that prevent a remote connection to a Linux machine via SSH. For
more information, and to run this automation, see AWSSupport-TroubleshootSSH.
If you are using a Windows instance, see EC2Rescue for Windows Server.
Prerequisites
If your system has the required Python version, you can install the standard build. Otherwise, you can
install the bundled build, which includes a minimal copy of Python.
1. From a working Linux instance, download the EC2Rescue for Linux tool:
curl -O https://s3.amazonaws.com/ec2rescuelinux/ec2rl.tgz
2. (Optional) Before proceeding, verify the signature of the EC2Rescue for Linux
installation file. For more information, see (Optional) Verify the signature of EC2Rescue for
Linux (p. 1727).
3. Download the SHA-256 hash file and verify the archive against it:
curl -O https://s3.amazonaws.com/ec2rescuelinux/ec2rl.tgz.sha256
sha256sum -c ec2rl.tgz.sha256
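The sha256sum -c mechanism used in step 3 works as follows, demonstrated end to end on a local stand-in file (file names here are illustrative):

```shell
# Create a stand-in download and its published checksum file.
printf 'example payload\n' > /tmp/pkg.tgz
sha256sum /tmp/pkg.tgz > /tmp/pkg.tgz.sha256
# -c recomputes the hash of each listed file and compares it with the
# published value; a match prints "<file>: OK" and exits 0.
sha256sum -c /tmp/pkg.tgz.sha256
```

If the archive had been altered in transit, the recomputed hash would differ and sha256sum -c would report FAILED with a nonzero exit status.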
cd ec2rl-<version_number>
./ec2rl help
For a link to the download and a list of limitations, see EC2Rescue for Linux on GitHub.
(Optional) Verify the signature of EC2Rescue for Linux
When you download an application from the internet, we recommend that you authenticate the identity
of the software publisher and check that the application has not been altered or corrupted after it was
published. This protects you from installing a version of the application that contains a virus or other
malicious code.
If, after running the steps in this topic, you determine that the EC2Rescue for Linux software has been
altered or corrupted, do not run the installation file. Instead, contact Amazon Web Services.
EC2Rescue for Linux files for Linux-based operating systems are signed using GnuPG, an open-source
implementation of the Pretty Good Privacy (OpenPGP) standard for secure digital signatures. GnuPG
(also known as GPG) provides authentication and integrity checking through a digital signature. AWS
publishes a public key and signatures that you can use to verify the downloaded EC2Rescue for Linux
package. For more information about PGP and GnuPG (GPG), see http://www.gnupg.org.
The first step is to establish trust with the software publisher. Download the public key of the software
publisher, check that the owner of the public key is who they claim to be, and then add the public key to
your keyring. Your keyring is a collection of known public keys. After you establish the authenticity of the
public key, you can use it to verify the signature of the application.
Tasks
• Install the GPG tools (p. 1727)
• Authenticate and import the public key (p. 1728)
• Verify the signature of the package (p. 1728)
1. At a command prompt, use the following command to obtain a copy of our public GPG build key:
curl -O https://s3.amazonaws.com/ec2rescuelinux/ec2rl.key
2. At a command prompt in the directory where you saved ec2rl.key, use the following command to
import the EC2Rescue for Linux public key into your keyring:
1. At a command prompt, run the following command to download the signature file for the
installation script:
curl -O https://s3.amazonaws.com/ec2rescuelinux/ec2rl.tgz.sig
2. Verify the signature by running the following command at a command prompt in the directory
where you saved ec2rl.tgz.sig and the EC2Rescue for Linux installation file. Both files must be
present.
Work with EC2Rescue for Linux
gpg: Signature made Thu 12 Jul 2018 01:57:51 AM UTC using RSA key ID 6991ED45
gpg: Good signature from "[email protected] <EC2 Rescue for Linux>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: E528 BCC9 0DBF 5AFA 0F6C C36A F780 4843 2FAE 2A1C
Subkey fingerprint: 966B 0D27 85E9 AEEC 1146 7A9D 8851 1153 6991 ED45
If the output contains the phrase Good signature from "[email protected] <EC2
Rescue for Linux>", it means that the signature has successfully been verified, and you can
proceed to run the EC2Rescue for Linux installation script.
If the output includes the phrase BAD signature, check whether you performed the procedure
correctly. If you continue to get this response, contact Amazon Web Services and do not run the
installation file that you downloaded previously.
The following are details about the warnings that you might see:
• WARNING: This key is not certified with a trusted signature! There is no indication that the
signature belongs to the owner. This refers to your personal level of trust in your belief that you
possess an authentic public key for EC2Rescue for Linux. In an ideal world, you would visit an Amazon
Web Services office and receive the key in person. However, more often you download it from a
website. In this case, the website is an Amazon Web Services website.
• gpg2: no ultimately trusted keys found. This means that the specific key is not "ultimately trusted" by
you (or by other people whom you trust).
Tasks
• Run EC2Rescue for Linux (p. 1729)
• Upload the results (p. 1730)
• Create backups (p. 1730)
• Get help (p. 1731)
./ec2rl run
Some modules require root access. If you are not a root user, use sudo to run these modules as follows:
For example, this command runs the dig module to query the amazon.com domain:
cat /var/tmp/ec2rl/logfile_location
For example, view the log file for the dig module:
cat /var/tmp/ec2rl/2017-05-11T15_39_21.893145/mod_out/run/dig.log
For more information about generating pre-signed URLs for Amazon S3, see Uploading Objects Using
Pre-Signed URLs.
Create backups
Create a backup for your instance, one or more volumes, or a specific device ID using the following
commands.
Develop EC2Rescue modules
Get help
EC2Rescue for Linux includes a help file that gives you information and syntax for each available
command.
./ec2rl help
./ec2rl list
For example, use the following command to show the help file for the dig module:
Attribute Description
For example:
helptext: !!str |
Collect output from ps for system analysis
Consumes --times= for number of times to repeat
Consumes --period= for time period between repetition
• prediagnostic
• run
• postdiagnostic
• bash
• python
Note
Python code must be compatible with
both Python 2.7.9+ and Python 3.2+.
• application
• net
• os
• performance
Default value: /var/tmp/ec2rl/<datetimestamp>/mod_out/gathered/.
Examples:
• xen_netfront
• ixgbevf
• ena
Examples:
• default-hvm
• default-paravirtual
Example modules
Example one (mod.d/ps.yaml):
--- !ec2rlcore.module.Module
# Module document. Translates directly into an almost-complete Module object
name: !!str ps
path: !!str
version: !!str 1.0
title: !!str Collect output from ps for system analysis
helptext: !!str |
Collect output from ps for system analysis
Requires --times= for number of times to repeat
Requires --period= for time period between repetition
placement: !!str run
package:
- !!str
language: !!str bash
content: !!str |
#!/bin/bash
error_trap()
{
printf "%0.s=" {1..80}
echo -e "\nERROR: "$BASH_COMMAND" exited with an error on line ${BASH_LINENO[0]}"
exit 0
}
trap error_trap ERR
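The error_trap scaffolding in the example module can be exercised on its own. The following sketch substitutes an arbitrary failing command for the module body; it writes the snippet to a file and runs it with bash explicitly, since the trap uses bash-only features (ERR trap, BASH_COMMAND, BASH_LINENO):

```shell
# Standalone sketch of the module's error trap; "false" stands in for
# a failing module command.
cat > /tmp/trap-demo.sh <<'EOF'
#!/bin/bash
error_trap()
{
    printf "%0.s=" {1..80}
    echo -e "\nERROR: \"$BASH_COMMAND\" exited with an error on line ${BASH_LINENO[0]}"
    exit 0
}
trap error_trap ERR
false          # fails, firing the trap
echo "not reached"
EOF
bash /tmp/trap-demo.sh
```

When any command in the module body fails, the trap prints a divider, names the failing command and line, and exits 0 so that one failing module does not abort the whole ec2rl run.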
Access to the serial console is not available by default. Your organization must grant account access
to the serial console and configure IAM policies to grant your users access to the serial console. Serial
console access can be controlled at a granular level by using instance IDs, resource tags, and other IAM
levers. For more information, see Configure access to the EC2 Serial Console (p. 1736).
The serial console can be accessed by using the EC2 console or the AWS CLI.
If you are using a Windows instance, see EC2 Serial Console for Windows instances in the Amazon EC2
User Guide for Windows Instances.
Topics
• Configure access to the EC2 Serial Console (p. 1736)
• Connect to the EC2 Serial Console (p. 1741)
• Terminate an EC2 Serial Console session (p. 1746)
• Troubleshoot your Linux instance using the EC2 Serial Console (p. 1746)
Topics
• Levels of access to the EC2 Serial Console (p. 1736)
• Manage account access to the EC2 Serial Console (p. 1737)
• Configure IAM policies for EC2 Serial Console access (p. 1739)
• Set an OS user password (p. 1740)
You can use a service control policy (SCP) to allow access to the serial console within your organization.
You can then have granular access control at the IAM user level by using an IAM policy to control access.
By using a combination of SCP and IAM policies, you have different levels of access control to the serial
console.
Organization level
You can use a service control policy (SCP) to allow access to the serial console for member accounts
within your organization. For more information about SCPs, see Service control policies in the AWS
Organizations User Guide.
Instance level
You can configure the serial console access policies by using IAM PrincipalTag and ResourceTag
constructions and by specifying instances by their ID. For more information, see Configure IAM
policies for EC2 Serial Console access (p. 1739).
IAM user level
You can configure access at the user level by configuring an IAM policy to allow or deny a specified
user the permission to push the SSH public key to the serial console service of a particular instance.
For more information, see Configure IAM policies for EC2 Serial Console access (p. 1739).
OS level
You can set a user password at the guest OS level. This provides access to the serial console for
some use cases. However, to monitor the logs, you don't need a password-based user. For more
information, see Set an OS user password (p. 1740).
Topics
• Grant permission to IAM users to manage account access (p. 1737)
• View account access status to the serial console (p. 1737)
• Grant account access to the serial console (p. 1738)
• Deny account access to the serial console (p. 1738)
The following policy grants permissions to view the account status, and to allow and prevent account
access to the EC2 serial console.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:GetSerialConsoleAccessStatus",
"ec2:EnableSerialConsoleAccess",
"ec2:DisableSerialConsoleAccess"
],
"Resource": "*"
}
]
}
For more information, see Creating IAM policies in the IAM User Guide.
The EC2 Serial Console access field indicates whether account access is Allowed or Prevented.
The following screenshot shows that the account is prevented from using the EC2 serial console.
Use the get-serial-console-access-status command to view account access status to the serial console.
In the following output, true indicates that the account is allowed access to the serial console.
{
"SerialConsoleAccessEnabled": true
}
Use the enable-serial-console-access command to allow account access to the serial console.
In the following output, true indicates that the account is allowed access to the serial console.
{
"SerialConsoleAccessEnabled": true
}
Use the disable-serial-console-access command to prevent account access to the serial console.
In the following output, false indicates that the account is denied access to the serial console.
{
"SerialConsoleAccessEnabled": false
}
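Stated as AWS CLI calls, the three account-level controls look like the following. The calls themselves need configured credentials, so they are commented out here; only the JSON handling runs locally, using a response shaped like the examples above:

```shell
# Account-level serial console switches (require AWS CLI credentials):
# aws ec2 get-serial-console-access-status
# aws ec2 enable-serial-console-access
# aws ec2 disable-serial-console-access
# Each returns JSON like the examples above; extracting the flag locally:
response='{ "SerialConsoleAccessEnabled": true }'
printf '%s' "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["SerialConsoleAccessEnabled"])'
```

Because these commands toggle access for the entire account, they pair naturally with the IAM policy shown above that restricts who may call them.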
For serial console access, create a JSON policy document that includes the ec2-instance-
connect:SendSerialConsoleSSHPublicKey action. This action grants an IAM user permission to
push the public key to the serial console service, which starts a serial console session. We recommend
restricting access to specific EC2 instances. Otherwise, all IAM users with this permission can connect to
the serial console of all EC2 instances.
The following policy allows access to the serial console of a specific instance, identified by its instance ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSerialConsoleAccess",
"Effect": "Allow",
"Action": [
"ec2-instance-connect:SendSerialConsoleSSHPublicKey"
],
"Resource": "arn:aws:ec2:region:account-id:instance/i-0598c7d356eba48d7"
}
]
}
The following policy allows serial console access to all instances except the specified instance, which is denied.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSerialConsoleAccess",
"Effect": "Allow",
"Action": [
"ec2-instance-connect:SendSerialConsoleSSHPublicKey"
],
"Resource": "*"
},
{
"Sid": "DenySerialConsoleAccess",
"Effect": "Deny",
"Action": [
"ec2-instance-connect:SendSerialConsoleSSHPublicKey"
],
"Resource": "arn:aws:ec2:region:account-id:instance/i-0598c7d356eba48d7"
}
]
}
Attribute-based access control is an authorization strategy that defines permissions based on tags that
can be attached to users and AWS resources. For example, the following policy allows an IAM user to
initiate a serial console connection for an instance only if that instance's resource tag and the principal's
tag have the same SerialConsole value for the tag key.
For more information about using tags to control access to your AWS resources, see Controlling access to
AWS resources in the IAM User Guide.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowTagBasedSerialConsoleAccess",
"Effect": "Allow",
"Action": [
"ec2-instance-connect:SendSerialConsoleSSHPublicKey"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/SerialConsole": "${aws:PrincipalTag/SerialConsole}"
}
}
}
]
}
You can set the password for any OS user, including the root user. Note that the root user can modify all
files, while each OS user might have limited permissions.
You must set a user password for every instance for which you will use the serial console. This is a one-
time requirement for each instance.
Note
The following instructions are applicable only if you launched your instance using an AWS-
provided AMI because, by default, AWS-provided AMIs are not configured with a password-
based user. If you launched your instance using an AMI that already has the root user password
configured, you can skip these instructions.
1. Connect (p. 596) to your instance. You can use any method for connecting to your instance, except
the EC2 Serial Console connection method.
2. To set the password for a user, use the passwd command. In the following example, the user is
root.
Topics
• Considerations (p. 1741)
• Prerequisites (p. 1742)
• Connect to the EC2 Serial Console (p. 1742)
• EC2 Serial Console fingerprints (p. 1745)
Considerations
• Only one active serial console connection is supported per instance.
• The serial console connection typically lasts for one hour unless you terminate it. However, during
system maintenance, Amazon EC2 will terminate the serial console session.
• It takes 30 seconds to tear down a session after you've disconnected from the serial console in order to
allow a new session.
• Supported serial console port for Linux: ttyS0
• When you connect to the serial console, you might observe a slight drop in your instance’s throughput.
Prerequisites
• Supported in all AWS Regions except Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Osaka),
China (Beijing), China (Ningxia), Europe (Milan), and Middle East (Bahrain).
• Not supported in Local Zones, Wavelength Zones, or AWS Outposts.
• Supported for all virtualized instances built on the Nitro System (p. 232): A1, C5, C5a, C5ad, C5d, C5n,
C6g, C6gd, C6gn, C6i, D3, D3en, DL1, G4, G4ad, G5, G5g, Hpc6a, I3en, Im4gn, Inf1, Is4gen, M5, M5a,
M5ad, M5d, M5dn, M5n, M5zn, M6a, M6g, M6gd, M6i, p3dn.24xlarge, P4, R5, R5a, R5ad, R5b, R5d,
R5dn, R5n, R6g, R6gd, R6i, T3, T3a, T4g, high memory (u-*), VT1, X2gd, and z1d
• Not supported on bare metal instances.
• Configure access to the EC2 Serial Console, as follows:
• Manage account access to the EC2 Serial Console (p. 1737).
• Configure IAM policies for EC2 Serial Console access (p. 1739). All IAM users who will use the serial
console must have the required permissions.
• Set an OS user password (p. 1740).
• To connect to the serial console using the browser-based client (p. 1742), your browser must support
WebSocket. If your browser does not support WebSocket, connect to the serial console using your own
key and an SSH client. (p. 1743)
• The instance must be in the pending, running, stopping, or shutting-down state. If the instance
is terminated or stopped, you can't connect to the serial console. For more information about the
instance states, see Instance lifecycle (p. 559).
• If the instance uses Amazon EC2 Systems Manager, then SSM Agent version 3.0.854.0 or later must be
installed on the instance. For information about SSM Agent, see Working with SSM Agent in the AWS
Systems Manager User Guide.
The EC2 serial console browser-based client works from most browsers and supports keyboard and mouse input.
To connect to your instance's serial port using the browser-based client (Amazon EC2
console)
Alternatively, you can select the instance and choose Actions, Monitor and troubleshoot, EC2 Serial
Console, Connect.
4. Press Enter. If a login prompt returns, you are connected to the serial console.
If the screen remains black, you can use the following information to help resolve issues with
connecting to the serial console:
• Check that you have configured access to the serial console. For more information, see
Configure access to the EC2 Serial Console (p. 1736).
• Use SysRq to connect to the serial console. SysRq does not require that you connect via
the browser-based client. For more information, see Troubleshoot your Linux instance using
SysRq (p. 1750).
• Restart getty. If you have SSH access to your instance, then connect to your instance using SSH,
and restart getty using the following command.
• Reboot your instance. You can reboot your instance by using SysRq, the EC2 console, or the AWS
CLI. For more information, see Troubleshoot your Linux instance using SysRq (p. 1750) or Reboot
your instance (p. 642).
5. At the login prompt, enter the user name of the password-based user that you set up
previously (p. 1740), and then press Enter.
6. At the Password prompt, enter the password, and then press Enter.
You are now logged onto the instance and can use the serial console for troubleshooting.
1. Push your SSH public key to the instance to start a serial console session
Use the send-serial-console-ssh-public-key command to push your SSH public key to the instance.
This starts a serial console session.
If a serial console session has already been started for this instance, the command fails because you
can only have one session open at a time. It takes 30 seconds to tear down a session after you've
disconnected from the serial console in order to allow a new session.
Use the ssh command to connect to the serial console before the public key is removed from the
serial console service. You have 60 seconds before it is removed.
The user name format is instance-id.port0, which comprises the instance ID and port 0. In the
following example, the user name is i-001234a4bf70dec41EXAMPLE.port0.
For all supported AWS Regions, except AWS GovCloud (US) Regions:
The format of the public DNS name of the serial console service is serial-console.ec2-
instance-connect.region.aws. In the following example, the serial console service is in the us-
east-1 Region.
The format of the public DNS name of the serial console service in the AWS GovCloud (US) Regions
is serial-console.ec2-instance-connect.GovCloud-region.amazonaws.com. In the
following example, the serial console service is in the us-gov-east-1 Region.
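Putting the user name and DNS name formats together, the SSH target can be assembled as follows (the instance ID is the example value from the text above, not a real resource, and the GovCloud suffix would differ as described):

```shell
# Assemble the serial console SSH target from its two parts.
instance_id="i-001234a4bf70dec41EXAMPLE"   # example ID from the text above
region="us-east-1"
user="${instance_id}.port0"
host="serial-console.ec2-instance-connect.${region}.aws"
echo "ssh ${user}@${host}"
```

The send-serial-console-ssh-public-key call must have completed within the previous 60 seconds for this connection to be accepted.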
When you connect for the first time to the serial console, you are prompted to verify the fingerprint.
You can compare the serial console fingerprint with the fingerprint that's displayed for verification. If
these fingerprints don't match, someone might be attempting a "man-in-the-middle" attack. If they
match, you can confidently connect to the serial console.
The following fingerprint is for the serial console service in the us-east-1 Region. For the fingerprints
for each Region, see EC2 Serial Console fingerprints (p. 1745).
SHA256:dXwn5ma/xadVMeBZGEru5l2gx+yI5LDiJaLUcz0FMmw
Note
The fingerprint only appears the first time you connect to the serial console.
4. Press Enter. If a prompt returns, you are connected to the serial console.
If the screen remains black, you can use the following information to help resolve issues with
connecting to the serial console:
• Check that you have configured access to the serial console. For more information, see
Configure access to the EC2 Serial Console (p. 1736).
• Use SysRq to connect to the serial console. SysRq does not require that you connect via SSH. For
more information, see Troubleshoot your Linux instance using SysRq (p. 1750).
• Restart getty. If you have SSH access to your instance, connect to your instance using SSH,
and restart getty (on systemd-based distributions, this is typically sudo systemctl restart
serial-getty@ttyS0.service).
• Reboot your instance. You can reboot your instance by using SysRq, the EC2 console, or the AWS
CLI. For more information, see Troubleshoot your Linux instance using SysRq (p. 1750) or Reboot
your instance (p. 642).
5. At the login prompt, enter the user name of the password-based user that you set up
previously (p. 1740), and then press Enter.
6. At the Password prompt, enter the password, and then press Enter.
You are now logged in to the instance and can use the serial console for troubleshooting.
EC2 Serial Console fingerprints
The following are the fingerprints for the serial console service in each supported AWS Region.
SHA256:dXwn5ma/xadVMeBZGEru5l2gx+yI5LDiJaLUcz0FMmw
SHA256:EhwPkTzRtTY7TRSzz26XbB0/HvV9jRM7mCZN0xw/d/0
SHA256:OHldlcMET8u7QLSX3jmRTRAPFHVtqbyoLZBMUCqiH3Y
SHA256:EMCIe23TqKaBI6yGHainqZcMwqNkDhhAVHa1O2JxVUc
SHA256:oBLXcYmklqHHEbliARxEgH8IsO51rezTPiSM35BsU40
SHA256:FoqWXNX+DZ++GuNTztg9PK49WYMqBX+FrcZM2dSrqrI
SHA256:PLFNn7WnCQDHx3qmwLu1Gy/O8TUX7LQgZuaC6L45CoY
SHA256:yFvMwUK9lEUQjQTRoXXzuN+cW9/VSe9W984Cf5Tgzo4
SHA256:RQfsDCZTOfQawewTRDV1t9Em/HMrFQe+CRlIOT5um4k
SHA256:P2O2jOZwmpMwkpO6YW738FIOTHdUTyEv2gczYMMO7s4
SHA256:aCMFS/yIcOdOlkXvOl8AmZ1Toe+bBnrJJ3Fy0k0De2c
SHA256:h2AaGAWO4Hathhtm6ezs3Bj7udgUxi2qTrHjZAwCW6E
SHA256:a69rd5CE/AEG4Amm53I6lkD1ZPvS/BCV3tTPW2RnJg8
SHA256:q8ldnAf9pymeNe8BnFVngY3RPAr/kxswJUzfrlxeEWs
SHA256:tkGFFUVUDvocDiGSS3Cu8Gdl6w2uI32EPNpKFKLwX84
SHA256:rd2+/32Ognjew1yVIemENaQzC+Botbih62OqAPDq1dI
SHA256:tIwe19GWsoyLClrtvu38YEEh+DHIkqnDcZnmtebvF28
SHA256:kfOFRWLaOZfB+utbd3bRf8OlPf8nGO2YZLqXZiIw5DQ
Terminate an EC2 Serial Console session
Browser-based client
To terminate the serial console session, close the serial console in-browser terminal window.
SSH client
To terminate the serial console session, use the following command to close the SSH connection. This
command must be entered immediately after a new line.
$ ~.
Note
The command that you use for closing an SSH connection might be different depending on the
SSH client that you're using.
Troubleshoot your instance using the EC2 Serial Console
Topics
• Troubleshoot your Linux instance using GRUB (p. 1747)
• Troubleshoot your Linux instance using SysRq (p. 1750)
For information about troubleshooting your Windows instance, see Troubleshoot your Windows instance
using the EC2 Serial Console in the Amazon EC2 User Guide for Windows Instances.
Troubleshoot your Linux instance using GRUB
The GRUB menu is displayed during the boot process. The menu is not accessible via normal SSH, but
you can access it via the EC2 Serial Console.
Topics
• Prerequisites (p. 1747)
• Configure GRUB (p. 1747)
• Use GRUB (p. 1749)
Prerequisites
Before you can configure and use GRUB, you must grant access to the serial console. For more
information, see Configure access to the EC2 Serial Console (p. 1736).
Configure GRUB
Before you can use GRUB via the serial console, you must configure your instance to use GRUB via the
serial console.
To configure GRUB, choose one of the following procedures based on the AMI that was used to launch
the instance.
Amazon Linux 2
• Set GRUB_TIMEOUT=1.
• Add GRUB_TERMINAL="console serial".
• Add GRUB_SERIAL_COMMAND="serial --speed=115200".
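Assuming the file being edited is /etc/default/grub (the standard location, though not stated in this excerpt), the relevant lines after the Amazon Linux 2 edits would read:

```
GRUB_TIMEOUT=1
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200"
```

For the changes to take effect, the GRUB configuration typically must be regenerated (for example, with sudo grub2-mkconfig -o /boot/grub2/grub.cfg on Amazon Linux 2) and the instance rebooted.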
Ubuntu
• Set GRUB_TIMEOUT=1.
• Add GRUB_TIMEOUT_STYLE=menu.
• Add GRUB_TERMINAL="console serial".
• Remove GRUB_HIDDEN_TIMEOUT.
• Add GRUB_SERIAL_COMMAND="serial --speed=115200".
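Similarly for Ubuntu, assuming /etc/default/grub is the file being edited, the result of the edits above would read:

```
GRUB_TIMEOUT=1
GRUB_TIMEOUT_STYLE=menu
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200"
```

On Ubuntu the configuration is typically regenerated with sudo update-grub before rebooting.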
RHEL
• Remove GRUB_TERMINAL_OUTPUT.
• Add GRUB_TERMINAL="console serial".
• Add GRUB_SERIAL_COMMAND="serial --speed=115200".
The following shows an example /etc/default/grub file after these changes:
GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200"
CentOS
For instances that are launched using a CentOS AMI, GRUB is configured for the serial console by
default.
The default configuration is similar to the following:
GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200"
GRUB_CMDLINE_LINUX="console=tty0 crashkernel=auto console=ttyS0,115200"
GRUB_DISABLE_RECOVERY="true"
Use GRUB
After GRUB is configured, connect to the serial console and reboot the instance with the reboot
command. During reboot, you see the GRUB menu. Press any key when the GRUB menu appears to stop
the boot process, allowing you to interact with the GRUB menu.
Topics
• Single user mode (p. 1749)
• Emergency mode (p. 1750)
Single user mode
Single user mode boots the kernel at a lower runlevel. For example, it might mount the file system but
not activate the network, giving you the opportunity to perform the maintenance necessary to fix the
instance.
3. During reboot, when the GRUB menu appears, press any key to stop the boot process.
4. In the GRUB menu, use the arrow keys to select the kernel to boot into, and press e on your
keyboard.
5. Use the arrow keys to locate your cursor on the line containing the kernel. The line begins with
either linux or linux16 depending on the AMI that was used to launch the instance. For Ubuntu,
two lines begin with linux, which must both be modified in the next step.
6. At the end of the line, add the word single. Then boot with the modified parameters (in GRUB,
Ctrl-X or F10 typically boots the edited entry).
Emergency mode
Emergency mode is similar to single user mode except that the kernel runs at the lowest runlevel
possible.
To boot into emergency mode, follow the steps in Single user mode (p. 1749) in the preceding section,
but at step 6 add the word emergency instead of single.
Troubleshoot your Linux instance using SysRq
Topics
• Prerequisites (p. 1750)
• Configure SysRq (p. 1750)
• Use SysRq (p. 1751)
Prerequisites
Before you can configure and use SysRq, you must grant access to the serial console. For more
information, see Configure access to the EC2 Serial Console (p. 1736).
Configure SysRq
To configure SysRq, you enable the SysRq commands for the current boot cycle. To make the
configuration persistent, you can also enable the SysRq commands for subsequent boots.
Note
This setting will clear on the next reboot.
kernel.sysrq=1
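The two steps described above correspond to commands along these lines (a sketch; the drop-in file name is illustrative):

```shell
# Enable SysRq commands for the current boot cycle only
# (this setting clears on the next reboot):
sudo sysctl -w kernel.sysrq=1

# Enable SysRq commands persistently for subsequent boots:
echo "kernel.sysrq=1" | sudo tee /etc/sysctl.d/99-sysrq.conf
```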
4. At the login prompt, enter the user name of the password-based user that you set up
previously (p. 1740), and then press Enter.
5. At the Password prompt, enter the password, and then press Enter.
Use SysRq
You can use SysRq commands in the EC2 Serial Console browser-based client or in an SSH client. The
command to send a break request is different for each client.
To use SysRq, choose one of the following procedures based on the client that you are using.
Browser-based client
3. To issue a SysRq command, press the key on your keyboard that corresponds to the required
command. For example, to display a list of SysRq commands, press h.
[ec2-user ~]$ h
SSH client
[ec2-user ~]$ ~B
3. To issue a SysRq command, press the key on your keyboard that corresponds to the required
command. For example, to display a list of SysRq commands, press h.
[ec2-user ~]$ h
Note
The command that you use for sending a break request might be different depending
on the SSH client that you're using.
Send a diagnostic interrupt
You can send a diagnostic interrupt to an unreachable or unresponsive Linux instance to manually trigger
a kernel panic.
Linux operating systems typically crash and reboot when a kernel panic occurs. The specific behavior
of the operating system depends on its configuration. A kernel panic can also be used to cause the
instance's operating system kernel to perform tasks, such as generating a crash dump file. You can then
use the information in the crash dump file to conduct root cause analysis and debug the instance.
The crash dump data is generated locally by the operating system on the instance itself.
Before sending a diagnostic interrupt to your instance, we recommend that you consult the
documentation for your operating system and then make the necessary configuration changes.
Contents
• Supported instance types (p. 1753)
• Prerequisites (p. 1753)
• Send a diagnostic interrupt (p. 1755)
Supported instance types
Prerequisites
Before using a diagnostic interrupt, you must configure your instance's operating system. This ensures
that it performs the actions that you need when a kernel panic occurs.
To configure Amazon Linux 2 to generate a crash dump when a kernel panic occurs
3. Configure the kernel to reserve an appropriate amount of memory for the secondary kernel. The
amount of memory to reserve depends on the total available memory of your instance. Open
the /etc/default/grub file using your preferred text editor, locate the line that starts with
GRUB_CMDLINE_LINUX_DEFAULT, and then add the crashkernel parameter in the following
format: crashkernel=memory_to_reserve. For example, to reserve 160MB, modify the grub file
as follows:
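A sketch of the resulting edit, assuming a typical Amazon Linux 2 /etc/default/grub (keep your existing parameters; crashkernel=160M is the addition):

```
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 crashkernel=160M"
```

After editing, regenerate the GRUB configuration (for example, sudo grub2-mkconfig -o /boot/grub2/grub.cfg) and reboot for the reservation to take effect.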
After the instance reboots, the reserved memory appears on the kernel command line, as in the following example:
BOOT_IMAGE=/boot/vmlinuz-4.14.128-112.105.amzn2.x86_64 root=UUID=a1e1011e-e38f-408e-878b-fed395b47ad6 ro crashkernel=160M console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 rd.emergency=poweroff rd.shell=0
The following example output shows the result if the kdump service is running.
Note
By default, the crash dump file is saved to /var/crash/. To change the location, modify the
/etc/kdump.conf file using your preferred text editor.
To configure Amazon Linux to generate a crash dump when a kernel panic occurs
3. Configure the kernel to reserve an appropriate amount of memory for the secondary kernel. The
amount of memory to reserve depends on the total available memory of your instance.
For example, you might reserve 160MB for the crash kernel.
Send a diagnostic interrupt
If the service is running, the command returns the Kdump is operational response.
Note
By default, the crash dump file is saved to /var/crash/. To change the location, modify the
/etc/kdump.conf file using your preferred text editor.
Note
On instances based on Intel and AMD processors, the send-diagnostic-interrupt
command sends an unknown non-maskable interrupt (NMI) to the instance. You must configure
the kernel to crash when it receives the unknown NMI. Add the following to your configuration
file.
kernel.unknown_nmi_panic=1
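The interrupt itself is sent with the AWS CLI send-diagnostic-interrupt command named in the Note; a minimal sketch (the instance ID is illustrative):

```shell
aws ec2 send-diagnostic-interrupt --instance-id i-1234567890abcdef0
```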
Document history
The following table describes important additions to the Amazon EC2 documentation starting in 2019.
We also update the documentation frequently to address the feedback that you send us.
• X2iezn instances (p. 1756): New memory optimized instances featuring Intel Xeon Platinum processors (Cascade Lake). (January 26, 2022)
• New Local Zones added: Add Local Zones in Atlanta, Phoenix, and Seattle. (January 11, 2022)
• Additional RHEL platforms for Capacity Reservations: Additional Red Hat Enterprise Linux platforms for On-Demand Capacity Reservations. (January 11, 2022)
• Hpc6a instances (p. 1756): New compute optimized instances featuring AMD EPYC processors. (January 10, 2022)
• Instance tags in instance metadata: You can access an instance's tags from the instance metadata. (January 6, 2022)
• Recycle Bin for Amazon EBS snapshots: Recycle Bin for Amazon EBS snapshots is a snapshot recovery feature that enables you to restore accidentally deleted snapshots. (November 29, 2021)
• M6a instances (p. 1756): New general purpose instances powered by AMD 3rd Generation EPYC processors. (November 29, 2021)
• G5g instances (p. 1756): New accelerated computing instances featuring AWS Graviton2 processors based on 64-bit Arm architecture. (November 29, 2021)
• Amazon EBS Snapshots Archive: Amazon EBS Snapshots Archive is a new storage tier that you can use for low-cost, long-term storage of your rarely-accessed snapshots. (November 29, 2021)
• R6i instances (p. 1756): New memory optimized instances. (November 22, 2021)
• Spot Fleet launch-before-terminate: Spot Fleet can terminate the Spot Instances that receive a rebalance notification after new replacement Spot Instances are launched. (November 4, 2021)
• EC2 Fleet launch-before-terminate: EC2 Fleet can terminate the Spot Instances that receive a rebalance notification after new replacement Spot Instances are launched. (November 4, 2021)
• Compare timestamps: You can determine the true time of an event by comparing the timestamp of your Amazon EC2 Linux instance with ClockBound. (November 2, 2021)
• Share AMIs with organizations and OUs: You can now share AMIs with the following AWS resources: organizations and organizational units (OUs). (October 29, 2021)
• C6i instances (p. 1756): New compute optimized instances featuring Intel Xeon Scalable processors (Ice Lake). (October 28, 2021)
• Attribute-based instance type selection for Spot Fleet: Specify the attributes that an instance must have, and Amazon EC2 will identify all the instance types with those attributes. (October 27, 2021)
• Attribute-based instance type selection for EC2 Fleet: Specify the attributes that an instance must have, and Amazon EC2 will identify all the instance types with those attributes. (October 27, 2021)
• New Local Zones added: Add Local Zones in Las Vegas, New York City, and Portland. (October 26, 2021)
• DL1 instances (p. 1756): New accelerated computing instances featuring Habana Gaudi accelerators and Intel Xeon Platinum processors (Cascade Lake). (October 26, 2021)
• EC2 Fleet and targeted On-Demand Capacity Reservations: EC2 Fleet can launch On-Demand Instances into targeted Capacity Reservations. (September 22, 2021)
• VT1 instances (p. 1756): New accelerated computing instances that use Xilinx Alveo U30 media accelerators and are designed for live video transcoding workloads. (September 13, 2021)
• New Local Zones added: Add Local Zones in Chicago, Minneapolis, and Kansas City. (September 8, 2021)
• Amazon EC2 Global View: Amazon EC2 Global View enables you to view VPCs, subnets, instances, security groups, and volumes across multiple AWS Regions in a single console. (September 1, 2021)
• AMI deprecation support for Amazon Data Lifecycle Manager: Amazon Data Lifecycle Manager EBS-backed AMI policies can deprecate AMIs. The AWSDataLifecycleManagerServiceRoleForAMIManagement AWS managed policy has been updated to support this feature. (August 23, 2021)
• Hibernation support for C5d, M5d, and R5d: Hibernate your newly-launched instances running on C5d, M5d, and R5d instance types. (August 19, 2021)
• Amazon EC2 key pairs: Amazon EC2 now supports ED25519 keys on Linux and Mac instances. (August 17, 2021)
• M6i instances (p. 1756): New general purpose instances featuring third generation Intel Xeon Scalable processors (Ice Lake). (August 16, 2021)
• CloudWatch metrics for Amazon Data Lifecycle Manager: You can monitor your Amazon Data Lifecycle Manager policies using Amazon CloudWatch. (July 28, 2021)
• New Local Zone added: Add Local Zone in Denver. (July 27, 2021)
• CloudTrail data events for EBS direct APIs: The ListSnapshotBlocks, ListChangedBlocks, GetSnapshotBlock, and PutSnapshotBlock APIs can be logged as data events in CloudTrail. (July 27, 2021)
• Prefixes for network interfaces: You can assign a private IPv4 or IPv6 CIDR range, either automatically or manually, to your network interfaces. (July 22, 2021)
• io2 Block Express volumes: io2 Block Express volumes are now generally available in all Regions and Availability Zones that support R5b instances. (July 19, 2021)
• Event windows: You can define custom, weekly-recurring event windows for scheduled events that reboot, stop, or terminate your Amazon EC2 instances. (July 15, 2021)
• Resource IDs and tagging support for security group rules (p. 1756): You can refer to security group rules by resource ID. You can also add tags to your security group rules. (July 7, 2021)
• New Local Zones added: Add Local Zones in Dallas and Philadelphia. (July 7, 2021)
• Deprecate an AMI: You can now specify when an AMI is deprecated. (June 11, 2021)
• Capacity Reservations on AWS Outposts: You can now use Capacity Reservations on AWS Outposts. (May 24, 2021)
• Capacity Reservation sharing: You can now share Capacity Reservations created in Local Zones and Wavelength Zones. (May 24, 2021)
• Root volume replacement: You can now use root volume replacement tasks to replace the root EBS volume for running instances. (April 22, 2021)
• Store and restore an AMI using S3: Store EBS-backed AMIs in S3 and restore them from S3 to enable cross-partition copying of AMIs. (April 6, 2021)
• Boot modes: Amazon EC2 now supports UEFI boot on selected AMD- and Intel-based EC2 instances. (March 22, 2021)
• X2gd instances (p. 1756): New memory optimized instances featuring an AWS Graviton2 processor based on 64-bit Arm architecture. (March 16, 2021)
• Amazon EBS local snapshots on Outposts: You can now use Amazon EBS local snapshots on Outposts to store snapshots of volumes on an Outpost locally in Amazon S3 on the Outpost itself. (February 4, 2021)
• Create a reverse DNS record: You can now set up reverse DNS lookup for your Elastic IP addresses. (February 3, 2021)
• Multi-Attach support for io2 volumes: You can now enable Provisioned IOPS SSD (io2) volumes for Amazon EBS Multi-Attach. (December 18, 2020)
• C6gn instances (p. 1756): New compute optimized instances featuring an AWS Graviton2 processor based on 64-bit Arm architecture. These instances can utilize up to 100 Gbps of network bandwidth. (December 18, 2020)
• Amazon Data Lifecycle Manager: Use Amazon Data Lifecycle Manager to automate the process of sharing snapshots and copying them across AWS accounts. (December 17, 2020)
• G4ad instances (p. 1756): New instances powered by AMD Radeon Pro V520 GPUs and AMD 2nd Generation EPYC processors. (December 9, 2020)
• Tag AMIs and snapshots on AMI creation: When you create an AMI, you can tag the AMI and the snapshots with the same tags, or you can tag them with different tags. (December 4, 2020)
• io2 Block Express preview: You can opt in to the io2 Block Express volumes preview. io2 Block Express volumes provide sub-millisecond latency, and support higher IOPS, higher throughput, and larger capacity than io2 volumes. (December 1, 2020)
• gp3 volumes (p. 1756): A new Amazon EBS General Purpose SSD volume type. You can specify provisioned IOPS and throughput when you create or modify the volume. (December 1, 2020)
• D3, D3en, M5zn, and R5b instances (p. 1756): New instance types built on the Nitro System. (December 1, 2020)
• Throughput Optimized HDD and Cold HDD volume sizes: Throughput Optimized HDD (st1) and Cold HDD (sc1) volumes can range in size from 125 GiB to 16 TiB. (November 30, 2020)
• Use Amazon EventBridge to monitor Spot Fleet events: Create EventBridge rules that trigger programmatic actions in response to Spot Fleet state changes and errors. (November 20, 2020)
• Use Amazon EventBridge to monitor EC2 Fleet events: Create EventBridge rules that trigger programmatic actions in response to EC2 Fleet state changes and errors. (November 20, 2020)
• Delete instant fleets: Delete an EC2 Fleet of type instant and terminate all the instances in the fleet in a single API call. (November 18, 2020)
• Hibernation support for T3 and T3a: Hibernate your newly-launched instances running on T3 and T3a instance types. (November 17, 2020)
• Amazon EFS Quick Create: You can create and mount an Amazon Elastic File System (Amazon EFS) file system to an instance at launch using Amazon EFS Quick Create. (November 9, 2020)
• Amazon Data Lifecycle Manager: You can use Amazon Data Lifecycle Manager to automate the creation, retention, and deletion of EBS-backed AMIs. (November 9, 2020)
• EC2 instance rebalance recommendation: A signal that notifies you when a Spot Instance is at elevated risk of interruption. (November 4, 2020)
• Hibernation support for I3, M5ad, and R5ad: Hibernate your newly-launched instances running on I3, M5ad, and R5ad instance types. (October 21, 2020)
• Spot Instance vCPU limits: Spot Instance limits are now managed in terms of the number of vCPUs that your running Spot Instances are either using or will use pending the fulfillment of open requests. (October 1, 2020)
• Capacity Reservations in Local Zones: Capacity Reservations can now be created and used in Local Zones. (September 30, 2020)
• Amazon Data Lifecycle Manager: Amazon Data Lifecycle Manager policies can be configured with up to four schedules. (September 17, 2020)
• T4g instances (p. 1756): New general purpose instances powered by AWS Graviton2 processors, which are based on 64-bit Arm Neoverse cores and custom silicon designed by AWS for optimized performance and cost. (September 14, 2020)
• Hibernation support for M5a and R5a: Hibernate your newly-launched instances running on M5a and R5a instance types. (August 28, 2020)
• Provisioned IOPS SSD (io2) volumes for Amazon EBS: Provisioned IOPS SSD (io2) volumes are designed to provide 99.999 percent volume durability with an AFR no higher than 0.001 percent. (August 24, 2020)
• Instance metadata provides instance location and placement information: New instance metadata fields under the placement category: Region, placement group name, partition number, host ID, and Availability Zone ID. (August 24, 2020)
• C5ad instances (p. 1756): New compute optimized instances featuring second-generation AMD EPYC processors. (August 13, 2020)
• Capacity Reservation groups: You can use AWS Resource Groups to create logical collections of Capacity Reservations, and then target instance launches into those groups. (July 29, 2020)
• C6gd, M6gd, and R6gd instances (p. 1756): New general purpose instances powered by AWS Graviton2 processors, which are based on 64-bit Arm Neoverse cores and custom silicon designed by AWS for optimized performance and cost. (July 27, 2020)
• Fast snapshot restore: You can enable fast snapshot restore for snapshots that are shared with you. (July 21, 2020)
• C6g and R6g instances (p. 1756): New general purpose instances powered by AWS Graviton2 processors, which are based on 64-bit Arm Neoverse cores and custom silicon designed by AWS for optimized performance and cost. (June 10, 2020)
• Bare metal instances for G4dn (p. 1756): New instances that provide your applications with direct access to the physical resources of the host server. (June 5, 2020)
• Bring your own IPv6 addresses: You can bring part or all of your IPv6 address range from your on-premises network to your AWS account. (May 21, 2020)
• M6g instances (p. 1756): New general purpose instances powered by AWS Graviton2 processors, which are based on 64-bit Arm Neoverse cores and custom silicon designed by AWS for optimized performance and cost. (May 11, 2020)
• Launch instances using a Systems Manager parameter: You can specify an AWS Systems Manager parameter instead of an AMI when you launch an instance. (May 5, 2020)
• Amazon Linux 2 Kernel Live Patching: Kernel Live Patching for Amazon Linux 2 enables you to apply security vulnerability and critical bug patches to a running Linux kernel, without reboots or disruptions to running applications. (April 28, 2020)
• Amazon EBS Multi-Attach: You can now attach a single Provisioned IOPS SSD (io1) volume to up to 16 Nitro-based instances that are in the same Availability Zone. (February 14, 2020)
• Stop and start a Spot Instance: Stop your Spot Instances backed by Amazon EBS and start them at will, instead of relying on the stop interruption behavior. (January 13, 2020)
• Resource tagging (p. 1756): You can tag egress-only internet gateways, local gateways, local gateway route tables, local gateway virtual interfaces, local gateway virtual interface groups, local gateway route table VPC associations, and local gateway route table virtual interface group associations. (January 10, 2020)
• Connect to your instance using Session Manager: You can start a Session Manager session with an instance from the Amazon EC2 console. (December 18, 2019)
• Inf1 instances (p. 1756): New instances featuring AWS Inferentia, a machine learning inference chip designed to deliver high performance at a low cost. (December 3, 2019)
• Dedicated Hosts and host resource groups: Dedicated Hosts can now be used with host resource groups. (December 2, 2019)
• Dedicated Host sharing: You can now share your Dedicated Hosts across AWS accounts. (December 2, 2019)
• Default credit specification at the account level: You can set the default credit specification per burstable performance instance family at the account level per AWS Region. (November 25, 2019)
• Instance type discovery: You can find an instance type that meets your needs. (November 22, 2019)
• Dedicated Hosts (p. 1756): You can now configure a Dedicated Host to support multiple instance types in an instance family. (November 21, 2019)
• Amazon EBS fast snapshot restores: You can enable fast snapshot restores on an EBS snapshot to ensure that EBS volumes created from the snapshot are fully-initialized at creation and instantly deliver all of their provisioned performance. (November 20, 2019)
• Instance Metadata Service Version 2: You can use Instance Metadata Service Version 2, which is a session-oriented method for requesting instance metadata. (November 19, 2019)
• Elastic Fabric Adapter (p. 1756): Elastic Fabric Adapters can now be used with Intel MPI 2019 Update 6. (November 15, 2019)
• Queued purchases of Reserved Instances: You can queue the purchase of a Reserved Instance up to three years in advance. (October 4, 2019)
• G4dn instances (p. 1756): New instances featuring NVIDIA Tesla GPUs. (September 19, 2019)
• Capacity optimized allocation strategy: Using EC2 Fleet or Spot Fleet, you can launch Spot Instances from Spot pools with optimal capacity for the number of instances that are launching. (August 12, 2019)
• On-Demand Capacity Reservation sharing: You can now share your Capacity Reservations across AWS accounts. (July 29, 2019)
• Elastic Fabric Adapter (p. 1756): EFA now supports Open MPI 3.1.4 and Intel MPI 2019 Update 4. (July 26, 2019)
• Resource tagging (p. 1756): You can tag launch templates on creation. (July 24, 2019)
• EC2 Instance Connect: EC2 Instance Connect is a simple and secure way to connect to your instances using Secure Shell (SSH). (June 27, 2019)
• Amazon EBS multi-volume snapshots: You can take exact point-in-time, data coordinated, and crash-consistent snapshots across multiple EBS volumes attached to an EC2 instance. (May 29, 2019)
• Resource tagging (p. 1756): You can tag Dedicated Host Reservations. (May 27, 2019)
• Amazon EBS encryption by default: After you enable encryption by default in a Region, all new EBS volumes you create in the Region are encrypted using the default KMS key for EBS encryption. (May 23, 2019)
• Resource tagging (p. 1756): You can tag VPC endpoints, endpoint services, and endpoint service configurations. (May 13, 2019)
• I3en instances (p. 1756): New I3en instances can utilize up to 100 Gbps of network bandwidth. (May 8, 2019)
History for previous years
• Elastic Fabric Adapter: You can attach an Elastic Fabric Adapter to your instances to accelerate High Performance Computing (HPC) applications. (April 29, 2019)
• T3a instances (p. 1756): New instances featuring AMD EPYC processors. (April 24, 2019)
• M5ad and R5ad instances (p. 1756): New instances featuring AMD EPYC processors. (March 27, 2019)
• Resource tagging (p. 1756): You can assign custom tags to your Dedicated Host Reservations to categorize them in different ways. (March 14, 2019)
• Bare metal instances for M5, M5d, R5, R5d, and z1d (p. 1756): New instances that provide your applications with direct access to the physical resources of the host server. (February 13, 2019)
• Hibernate EC2 Linux instances (2016-11-15): You can hibernate a Linux instance if it's enabled for hibernation and it meets the hibernation prerequisites. For more information, see Hibernate your On-Demand or Reserved Linux instance (p. 626). (28 November 2018)
• Instances featuring 100 Gbps of network bandwidth (2016-11-15): New C5n instances can utilize up to 100 Gbps of network bandwidth. (26 November 2018)
• New EC2 Fleet request type: instant (2016-11-15): EC2 Fleet now supports a new request type, instant, that you can use to synchronously provision capacity across instance types and purchase models. The instant request returns the launched instances in the API response, and takes no further action, enabling you to control if and when instances are launched. For more information, see EC2 Fleet request types (p. 764). (14 November 2018)
• Spot savings information (2016-11-15): You can view the savings made from using Spot Instances for a single Spot Fleet or for all Spot Instances. For more information, see Savings from purchasing Spot Instances (p. 432). (5 November 2018)
• Console support for optimizing CPU options (2016-11-15): When you launch an instance, you can optimize the CPU options to suit specific workloads or business needs using the Amazon EC2 console. For more information, see Optimize CPU options (p. 676). (31 October 2018)
• Console support for creating a launch template from an instance (2016-11-15): You can create a launch template using an instance as the basis for a new launch template using the Amazon EC2 console. For more information, see Create a launch template (p. 581). (30 October 2018)
• On-Demand Capacity Reservations (2016-11-15): You can reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This allows you to create and manage capacity reservations independently from the billing discounts offered by Reserved Instances (RI). For more information, see On-Demand Capacity Reservations (p. 522). (25 October 2018)
• Bring Your Own IP Addresses (BYOIP) (2016-11-15): You can bring part or all of your public IPv4 address range from your on-premises network to your AWS account. After you bring the address range to AWS, it appears in your account as an address pool. You can create an Elastic IP address from your address pool and use it with your AWS resources. For more information, see Bring your own IP addresses (BYOIP) in Amazon EC2 (p. 1039). (23 October 2018)
• Dedicated Host tag on create and console support (2016-11-15): You can tag your Dedicated Hosts on creation, and you can manage your Dedicated Host tags using the Amazon EC2 console. For more information, see Allocate Dedicated Hosts (p. 489). (08 October 2018)
• High memory instances (2016-11-15): These instances are purpose-built to run large in-memory databases. They offer bare metal performance with direct access to host hardware. For more information, see Memory optimized instances (p. 304). (27 September 2018)
• Allocation strategies for EC2 Fleets (2016-11-15): You can specify whether On-Demand capacity is fulfilled by price (lowest price first) or priority (highest priority first). You can specify the number of Spot pools across which to allocate your target Spot capacity. For more information, see Allocation strategies for Spot Instances (p. 783). (26 July 2018)
• Allocation strategies for Spot Fleets (2016-11-15): You can specify whether On-Demand capacity is fulfilled by price (lowest price first) or priority (highest priority first). You can specify the number of Spot pools across which to allocate your target Spot capacity. For more information, see Allocation strategy for Spot Instances (p. 823). (26 July 2018)
R5 and R5d instances (API version 2016-11-15; 25 July 2018): R5 and R5d instances are ideally suited for high-performance databases, distributed in-memory caches, and in-memory analytics. R5d instances come with NVMe instance store volumes. For more information, see Memory optimized instances (p. 304).

z1d instances (API version 2016-11-15; 25 July 2018): These instances are designed for applications that require high per-core performance with a large amount of memory, such as electronic design automation (EDA) and relational databases. These instances come with NVMe instance store volumes. For more information, see Memory optimized instances (p. 304).
Automate snapshot lifecycle (API version 2016-11-15; 12 July 2018): You can use Amazon Data Lifecycle Manager to automate creation and deletion of snapshots for your EBS volumes. For more information, see Amazon Data Lifecycle Manager (p. 1478).

Launch template CPU options (API version 2016-11-15; 11 July 2018): When you create a launch template using the command line tools, you can optimize the CPU options to suit specific workloads or business needs. For more information, see Create a launch template (p. 581).

Tag Dedicated Hosts (API version 2016-11-15; 3 July 2018): You can tag your Dedicated Hosts. For more information, see Tag Dedicated Hosts (p. 500).

Get latest console output (API version 2016-11-15; 9 May 2018): You can retrieve the latest console output for some instance types when you use the get-console-output AWS CLI command.

Optimize CPU options (API version 2016-11-15; 8 May 2018): When you launch an instance, you can optimize the CPU options to suit specific workloads or business needs. For more information, see Optimize CPU options (p. 676).

EC2 Fleet (API version 2016-11-15; 2 May 2018): You can use EC2 Fleet to launch a group of instances across different EC2 instance types and Availability Zones, and across On-Demand Instance, Reserved Instance, and Spot Instance purchasing models. For more information, see EC2 Fleet (p. 762).

On-Demand Instances in Spot Fleets (API version 2016-11-15; 2 May 2018): You can include a request for On-Demand capacity in your Spot Fleet request to ensure that you always have instance capacity. For more information, see Spot Fleet (p. 822).
Tag EBS snapshots on creation (API version 2016-11-15; 2 April 2018): You can apply tags to snapshots during creation. For more information, see Create Amazon EBS snapshots (p. 1385).

Change placement groups (API version 2016-11-15; 1 March 2018): You can move an instance in or out of a placement group, or change its placement group. For more information, see Change the placement group for an instance (p. 1178).

Longer resource IDs (API version 2016-11-15; 9 February 2018): You can enable the longer ID format for more resource types. For more information, see Resource IDs (p. 1658).

Tag Elastic IP addresses (API version 2016-11-15; 21 December 2017): You can tag your Elastic IP addresses. For more information, see Tag an Elastic IP address (p. 1062).

Amazon Time Sync Service (API version 2016-11-15; 29 November 2017): You can use the Amazon Time Sync Service to keep accurate time on your instance. For more information, see Set the time for your Linux instance (p. 670).

Launch templates (API version 2016-11-15; 29 November 2017): A launch template can contain all or some of the parameters to launch an instance, so that you don't have to specify them every time you launch an instance. For more information, see Launch an instance from a launch template (p. 579).
Spot Instance hibernation (API version 2016-11-15; 28 November 2017): The Spot service can hibernate Spot Instances in the event of an interruption. For more information, see Hibernate interrupted Spot Instances (p. 462).

Spot Fleet target tracking (API version 2016-11-15; 17 November 2017): You can set up target tracking scaling policies for your Spot Fleet. For more information, see Scale Spot Fleet using a target tracking policy (p. 871).

Spot Fleet integrates with Elastic Load Balancing (API version 2016-11-15; 10 November 2017): You can attach one or more load balancers to a Spot Fleet.

X1e instances (API version 2016-11-15; 28 November 2017): X1e instances are ideally suited for high-performance databases, in-memory databases, and other memory-intensive enterprise applications. For more information, see Memory optimized instances (p. 304).

Merge and split Convertible Reserved Instances (API version 2016-11-15; 6 November 2017): You can exchange (merge) two or more Convertible Reserved Instances for a new Convertible Reserved Instance. You can also use the modification process to split a Convertible Reserved Instance into smaller reservations. For more information, see Exchange Convertible Reserved Instances (p. 418).

Modify VPC tenancy (API version 2016-11-15; 16 October 2017): You can change the instance tenancy attribute of a VPC from dedicated to default. For more information, see Change the tenancy of a VPC (p. 522).

Per second billing (API version 2016-11-15; 2 October 2017): Amazon EC2 charges for Linux-based usage by the second, with a one-minute minimum charge.

Stop on interruption (API version 2016-11-15; 18 September 2017): You can specify whether Amazon EC2 should stop or terminate Spot Instances when they are interrupted. For more information, see Interruption behavior (p. 460).

Tag NAT gateways (API version 2016-11-15; 7 September 2017): You can tag your NAT gateway. For more information, see Tag your resources (p. 1667).
Security group rule descriptions (API version 2016-11-15; 31 August 2017): You can add descriptions to your security group rules. For more information, see Security group rules (p. 1304).

Recover Elastic IP addresses (API version 2016-11-15; 11 August 2017): If you release an Elastic IP address for use in a VPC, you might be able to recover it. For more information, see Recover an Elastic IP address (p. 1065).

Tag Spot Fleet instances (API version 2016-11-15; 24 July 2017): You can configure your Spot Fleet to automatically tag the instances that it launches.

Tag resources during creation (API version 2016-11-15; 28 March 2017): You can apply tags to instances and volumes during creation. For more information, see Tag your resources (p. 1667). In addition, you can use tag-based resource-level permissions to control the tags that are applied. For more information, see Grant permission to tag resources during creation (p. 1225).

Perform modifications on attached EBS volumes (API version 2016-11-15; 13 February 2017): With most EBS volumes attached to most EC2 instances, you can modify volume size, type, and IOPS without detaching the volume or stopping the instance. For more information, see Amazon EBS Elastic Volumes (p. 1523).

Attach an IAM role (API version 2016-11-15; 9 February 2017): You can attach, detach, or replace an IAM role for an existing instance. For more information, see IAM roles for Amazon EC2 (p. 1275).

Dedicated Spot Instances (API version 2016-11-15; 19 January 2017): You can run Spot Instances on single-tenant hardware in a virtual private cloud (VPC). For more information, see Specify a tenancy for your Spot Instances (p. 435).

IPv6 support (API version 2016-11-15; 1 December 2016): You can associate an IPv6 CIDR with your VPC and subnets, and assign IPv6 addresses to instances in your VPC. For more information, see Amazon EC2 instance IP addressing (p. 1018).
P2 instances (API version 2016-09-15; 29 September 2016): P2 instances use NVIDIA Tesla K80 GPUs and are designed for general purpose GPU computing using the CUDA or OpenCL programming models. For more information, see Linux accelerated computing instances (p. 330).

Automatic scaling for Spot Fleet (1 September 2016): You can now set up scaling policies for your Spot Fleet. For more information, see Automatic scaling for Spot Fleet (p. 869).

Elastic Network Adapter (ENA) (API version 2016-04-01; 28 June 2016): You can now use ENA for enhanced networking. For more information, see Enhanced networking support (p. 1100).

Enhanced support for viewing and modifying longer IDs (API version 2016-04-01; 23 June 2016): You can now view and modify longer ID settings for other IAM users, IAM roles, or the root user. For more information, see Resource IDs (p. 1658).

Copy encrypted Amazon EBS snapshots between AWS accounts (API version 2016-04-01; 21 June 2016): You can now copy encrypted EBS snapshots between AWS accounts. For more information, see Copy an Amazon EBS snapshot (p. 1391).

Capture a screenshot of an instance console (API version 2015-10-01; 24 May 2016): You can now obtain additional information when debugging instances that are unreachable. For more information, see Capture a screenshot of an unreachable instance (p. 1723).
Two new EBS volume types (API version 2015-10-01; 19 April 2016): You can now create Throughput Optimized HDD (st1) and Cold HDD (sc1) volumes. For more information, see Amazon EBS volume types (p. 1329).

CloudWatch metrics for Spot Fleet (21 March 2016): You can now get CloudWatch metrics for your Spot Fleet. For more information, see CloudWatch metrics for Spot Fleet (p. 867).

Longer resource IDs (API version 2015-10-01; 13 January 2016): We're gradually introducing longer length IDs for some Amazon EC2 and Amazon EBS resource types. During the opt-in period, you can enable the longer ID format for supported resource types. For more information, see Resource IDs (p. 1658).

ClassicLink DNS support (API version 2015-10-01; 11 January 2016): You can enable ClassicLink DNS support for your VPC so that DNS hostnames that are addressed between linked EC2-Classic instances and instances in the VPC resolve to private IP addresses and not public IP addresses. For more information, see Enable ClassicLink DNS support (p. 1196).
Spot Instance duration (API version 2015-10-01; 6 October 2015): You can now specify a duration for your Spot Instances. For more information, see Define a duration for your Spot Instances (p. 435).

Spot Fleet modify request (API version 2015-10-01; 29 September 2015): You can now modify the target capacity of your Spot Fleet request. For more information, see Modify a Spot Fleet request (p. 864).

Spot Fleet diversified allocation strategy (API version 2015-04-15; 15 September 2015): You can now allocate Spot Instances in multiple Spot pools using a single Spot Fleet request. For more information, see Allocation strategy for Spot Instances (p. 823).

Spot Fleet instance weighting (API version 2015-04-15; 31 August 2015): You can now define the capacity units that each instance type contributes to your application's performance, and adjust the amount you are willing to pay for Spot Instances for each Spot pool accordingly. For more information, see Spot Fleet instance weighting (p. 846).

New reboot alarm action and new IAM role for use with alarm actions (23 July 2015): Added the reboot alarm action and new IAM role for use with alarm actions. For more information, see Create alarms that stop, terminate, reboot, or recover an instance (p. 982).

Spot Fleets (API version 2015-04-15; 18 May 2015): You can manage a collection, or fleet, of Spot Instances instead of managing separate Spot Instance requests. For more information, see Spot Fleet (p. 822).

Migrate Elastic IP addresses to EC2-Classic (API version 2015-04-15; 15 May 2015): You can migrate an Elastic IP address that you've allocated for use in EC2-Classic to be used in a VPC. For more information, see Migrate an Elastic IP Address from EC2-Classic (p. 1188).
Importing VMs with multiple disks as AMIs (API version 2015-03-01; 23 April 2015): The VM Import process now supports importing VMs with multiple disks as AMIs. For more information, see Importing a VM as an Image Using VM Import/Export in the VM Import/Export User Guide.

Automatic recovery for EC2 instances (12 January 2015): You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair. A recovered instance is identical to the original instance, including the instance ID, IP addresses, and all instance metadata.
Spot Instance termination notices (5 January 2015): The best way to protect against Spot Instance interruption is to architect your application to be fault tolerant. In addition, you can take advantage of Spot Instance termination notices, which provide a two-minute warning before Amazon EC2 must terminate your Spot Instance.
New EC2 Service Limits page (19 June 2014): Use the EC2 Service Limits page in the Amazon EC2 console to view the current limits for resources provided by Amazon EC2 and Amazon VPC, on a per-region basis.

Amazon EBS General Purpose SSD Volumes (API version 2014-05-01; 16 June 2014): General Purpose SSD volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies, the ability to burst to 3,000 IOPS for extended periods of time, and a base performance of 3 IOPS/GiB. General Purpose SSD volumes can range in size from 1 GiB to 1 TiB. For more information, see General Purpose SSD volumes (gp2) (p. 1332).

Amazon EBS encryption (API version 2014-05-01; 21 May 2014): Amazon EBS encryption offers seamless encryption of EBS data volumes and snapshots, eliminating the need to build and maintain a secure key management infrastructure. EBS encryption enables data at rest security by encrypting your data using AWS managed keys. The encryption occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage. For more information, see Amazon EBS encryption (p. 1536).

New Amazon Linux AMI release (27 March 2014): Amazon Linux AMI 2014.03 is released.

Amazon EC2 Usage Reports (28 January 2014): Amazon EC2 Usage Reports is a set of reports that shows cost and usage data of your usage of EC2. For more information, see Amazon EC2 usage reports (p. 1682).

Additional M3 instances (API version 2013-10-15; 20 January 2014): The M3 instance sizes m3.medium and m3.large are now supported. For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon EC2 Instance Types.
Importing Linux virtual machines (API version 2013-10-15; 16 December 2013): The VM Import process now supports the importation of Linux instances. For more information, see the VM Import/Export User Guide.

Resource-level permissions for RunInstances (API version 2013-10-15; 20 November 2013): You can now create policies in AWS Identity and Access Management to control resource-level permissions for the Amazon EC2 RunInstances API action. For more information and example policies, see Identity and access management for Amazon EC2 (p. 1217).

Launching an instance from the AWS Marketplace (11 November 2013): You can now launch an instance from the AWS Marketplace using the Amazon EC2 launch wizard. For more information, see Launch an AWS Marketplace instance (p. 595).
New launch wizard (10 October 2013): There is a new and redesigned EC2 launch wizard. For more information, see Launch an instance using the Launch Instance Wizard (p. 565).

Modifying Instance Types of Amazon EC2 Reserved Instances (API version 2013-10-01; 9 October 2013): You can now modify the instance type of Linux Reserved Instances within the same family (for example, M1, M2, M3, C1). For more information, see Modify Reserved Instances (p. 411).

Modifying Amazon EC2 Reserved Instances (API version 2013-08-15; 11 September 2013): You can now modify Reserved Instances in a Region. For more information, see Modify Reserved Instances (p. 411).

Assigning a public IP address (API version 2013-07-15; 20 August 2013): You can now assign a public IP address when you launch an instance in a VPC. For more information, see Assign a public IPv4 address during instance launch (p. 1023).

Granting resource-level permissions (API version 2013-06-15; 8 July 2013): Amazon EC2 supports new Amazon Resource Names (ARNs) and condition keys. For more information, see IAM policies for Amazon EC2 (p. 1219).

Incremental Snapshot Copies (API version 2013-02-01; 11 June 2013): You can now perform incremental snapshot copies. For more information, see Copy an Amazon EBS snapshot (p. 1391).

New Tags page (4 April 2013): There is a new Tags page in the Amazon EC2 console. For more information, see Tag your Amazon EC2 resources (p. 1666).

New Amazon Linux AMI release (27 March 2013): Amazon Linux AMI 2013.03 is released.

Additional EBS-optimized instance types (API version 2013-02-01; 19 March 2013): The following instance types can now be launched as EBS-optimized instances: c1.xlarge, m2.2xlarge, m3.xlarge, and m3.2xlarge.

Copy an AMI from one Region to another (API version 2013-02-01; 11 March 2013): You can copy an AMI from one Region to another, enabling you to launch consistent instances in more than one AWS Region quickly and easily.
Launch instances into a default VPC (API version 2013-02-01; 11 March 2013): Your AWS account is capable of launching instances into either EC2-Classic or a VPC, or only into a VPC, on a region-by-region basis. If you can launch instances only into a VPC, we create a default VPC for you. When you launch an instance, we launch it into your default VPC, unless you create a nondefault VPC and specify it when you launch the instance.

High-memory cluster (cr1.8xlarge) instance type (API version 2012-12-01; 21 January 2013): These instances have large amounts of memory coupled with high CPU and network performance. They are well suited for in-memory analytics, graph analysis, and scientific computing applications.

High storage (hs1.8xlarge) instance type (API version 2012-12-01; 20 December 2012): High storage instances provide very high storage density and high sequential read and write performance per instance. They are well suited for data warehousing, Hadoop/MapReduce, and parallel file systems.

EBS snapshot copy (API version 2012-12-01; 17 December 2012): You can use snapshot copies to create backups of data, to create new Amazon EBS volumes, or to create Amazon Machine Images (AMIs). For more information, see Copy an Amazon EBS snapshot (p. 1391).

Updated EBS metrics and status checks for Provisioned IOPS SSD volumes (API version 2012-10-01; 20 November 2012): Updated the EBS metrics to include two new metrics for Provisioned IOPS SSD volumes. For more information, see Amazon CloudWatch metrics for Amazon EBS (p. 1596). Also added new status checks for Provisioned IOPS SSD volumes. For more information, see EBS volume status checks (p. 1370).

Spot Instance request status (API version 2012-10-01; 14 October 2012): Spot Instance request status makes it easy to determine the state of your Spot requests.
Provisioned IOPS SSD for Amazon EBS (API version 2012-07-20; 31 July 2012): Provisioned IOPS SSD volumes deliver predictable, high performance for I/O intensive workloads, such as database applications, that rely on consistent and fast response times. For more information, see Amazon EBS volume types (p. 1329).

High I/O instances for Amazon EC2 (API version 2012-06-15; 18 July 2012): High I/O instances provide very high, low latency, disk I/O performance using SSD-based local instance storage.

IAM roles on Amazon EC2 instances (API version 2012-06-01; 11 June 2012): IAM roles for Amazon EC2 provide:
• AWS access keys for applications running on Amazon EC2 instances.
• Automatic rotation of the AWS access keys on the Amazon EC2 instance.
• Granular permissions for applications running on Amazon EC2 instances that make requests to your AWS services.

Spot Instance features that make it easier to get started and handle the potential of interruption (7 June 2012): You can now manage your Spot Instances as follows:
• Specify the amount you are willing to pay for Spot Instances using Auto Scaling launch configurations, and set up a schedule for specifying the amount you are willing to pay for Spot Instances. For more information, see Launching Spot Instances in Your Auto Scaling Group in the Amazon EC2 Auto Scaling User Guide.
• Get notifications when instances are launched or terminated.
• Use AWS CloudFormation templates to launch Spot Instances in a stack with AWS resources.

EC2 instance export and timestamps for status checks for Amazon EC2 (API version 2012-05-01; 25 May 2012): Added support for timestamps on instance status and system status to indicate the date and time that a status check failed.
EC2 instance export, and timestamps in instance and system status checks for Amazon VPC (API version 2012-05-01; 25 May 2012): Added support for EC2 instance export to Citrix Xen, Microsoft Hyper-V, and VMware vSphere. Added support for timestamps in instance and system status checks.

Cluster Compute Eight Extra Large instances (API version 2012-04-01; 26 April 2012): Added support for cc2.8xlarge instances in a VPC.

AWS Marketplace AMIs (API version 2012-04-01; 19 April 2012): Added support for AWS Marketplace AMIs.

New Linux AMI release (28 March 2012): Amazon Linux AMI 2012.03 is released.

New AKI version (28 March 2012): We've released AKI version 1.03 and AKIs for the AWS GovCloud (US) region.

Medium instances, support for 64-bit on all AMIs, and a Java-based SSH client (API version 2011-12-15; 7 March 2012): Added support for a new instance type and 64-bit information. Added procedures for using the Java-based SSH client to connect to Linux instances.

Reserved Instance pricing tiers (API version 2011-12-15; 5 March 2012): Added a new section discussing how to take advantage of the discount pricing that is built into the Reserved Instance pricing tiers.

New GRU Region and AKIs (14 December 2011): Added information about the release of new AKIs for the SA-East-1 Region. This release deprecates the AKI version 1.01. AKI version 1.02 will continue to be backward compatible.

New offering types for Amazon EC2 Reserved Instances (API version 2011-11-01; 1 December 2011): You can choose from a variety of Reserved Instance offerings that address your projected use of the instance.

Amazon EC2 instance status (API version 2011-11-01; 16 November 2011): You can view additional details about the status of your instances, including scheduled events planned by AWS that might have an impact on your instances. These operational activities include instance reboots required to apply software updates or security patches, or instance retirements required where there are hardware issues. For more information, see Monitor the status of your instances (p. 928).

Amazon EC2 Cluster Compute Instance Type (14 November 2011): Added support for Cluster Compute Eight Extra Large (cc2.8xlarge) to Amazon EC2.
New PDX Region and AKIs (8 November 2011): Added information about the release of new AKIs for the new US-West 2 Region.

Spot Instances in Amazon VPC (API version 2011-07-15; 11 October 2011): Added information about the support for Spot Instances in Amazon VPC. With this update, users can launch Spot Instances in a virtual private cloud (VPC). By launching Spot Instances in a VPC, users of Spot Instances can enjoy the benefits of Amazon VPC.

New Linux AMI release (26 September 2011): Added information about the release of Amazon Linux AMI 2011.09. This update removes the beta tag from the Amazon Linux AMI, supports the ability to lock the repositories to a specific version, and provides for notification when updates are available to installed packages, including security updates.

Support for importing in VHD file format (24 August 2011): VM Import can now import virtual machine image files in VHD format. The VHD file format is compatible with the Citrix Xen and Microsoft Hyper-V virtualization platforms. With this release, VM Import now supports RAW, VHD, and VMDK (VMware ESX-compatible) image formats. For more information, see the VM Import/Export User Guide.

Update to the Amazon EC2 VM Import Connector for VMware vCenter (27 June 2011): Added information about the 1.1 version of the Amazon EC2 VM Import Connector for VMware vCenter virtual appliance (Connector). This update includes proxy support for Internet access, better error handling, improved task progress bar accuracy, and several bug fixes.

Enabling Linux AMI to run user-provided kernels (20 June 2011): Added information about the AKI version change from 1.01 to 1.02. This version updates PVGRUB to address launch failures associated with t1.micro Linux instances. For more information, see User provided kernels (p. 215).
Spot Instances Availability Zone pricing changes (API version 2011-05-15; 26 May 2011): Added information about the Spot Instances Availability Zone pricing feature. In this release, we've added new Availability Zone pricing options as part of the information returned when you query for Spot Instance requests and Spot price history. These additions make it easier to determine the price required to launch a Spot Instance into a particular Availability Zone.

AWS Identity and Access Management (26 April 2011): Added information about AWS Identity and Access Management (IAM), which enables users to specify which Amazon EC2 actions a user can use with Amazon EC2 resources in general. For more information, see Identity and access management for Amazon EC2 (p. 1217).

Enabling Linux AMI to run user-provided kernels (26 April 2011): Added information about enabling a Linux AMI to use PVGRUB Amazon Kernel Image (AKI) to run a user-provided kernel. For more information, see User provided kernels (p. 215).

New Amazon Linux reference AMI (15 March 2011): The new Amazon Linux reference AMI replaces the CentOS reference AMI. Removed information about the CentOS reference AMI, including the section named Correcting Clock Drift for Cluster Instances on CentOS 5.4 AMI.
Amazon EC2 VM Import Connector for VMware vCenter (3 March 2011): Added information about the Amazon EC2 VM Import Connector for VMware vCenter virtual appliance (Connector). The Connector is a plug-in for VMware vCenter that integrates with VMware vSphere Client and provides a graphical user interface that you can use to import your VMware virtual machines to Amazon EC2.

Force volume detachment (23 February 2011): You can now use the AWS Management Console to force the detachment of an Amazon EBS volume from an instance. For more information, see Detach an Amazon EBS volume from a Linux instance (p. 1378).

Instance termination protection (23 February 2011): You can now use the AWS Management Console to prevent an instance from being terminated. For more information, see Enable termination protection (p. 649).

Correcting Clock Drift for Cluster Instances on CentOS 5.4 AMI (25 January 2011): Added information about how to correct clock drift for cluster instances running on Amazon's CentOS 5.4 AMI.

Basic monitoring for instances (API version 2010-08-31; 12 December 2010): Added information about basic monitoring for EC2 instances.

Filters and Tags (API version 2010-08-31; 19 September 2010): Added information about listing, filtering, and tagging resources. For more information, see List and filter your resources (p. 1659) and Tag your Amazon EC2 resources (p. 1666).

AWS Identity and Access Management for Amazon EC2 (2 September 2010): Amazon EC2 now integrates with AWS Identity and Access Management (IAM). For more information, see Identity and access management for Amazon EC2 (p. 1217).
Cluster instances (API version 2010-06-15; 12 July 2010): Amazon EC2 offers cluster compute instances for high-performance computing (HPC) applications. For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon EC2 Instance Types.

Amazon VPC IP Address Designation (API version 2010-06-15; 12 July 2010): Amazon VPC users can now specify the IP address to assign to an instance launched in a VPC.