Amazon S3 Glacier
Developer Guide
API Version 2012-06-01
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
What Is Amazon S3 Glacier? (p. 1)
    Are You a First-Time S3 Glacier User? (p. 1)
    Data Model (p. 2)
        Vault (p. 2)
        Archive (p. 3)
        Job (p. 3)
        Notification Configuration (p. 4)
    Supported Operations (p. 4)
        Vault Operations (p. 5)
        Archive Operations (p. 5)
        Jobs (p. 5)
    Accessing Amazon S3 Glacier (p. 5)
        Regions and Endpoints (p. 6)
Getting Started (p. 7)
    Step 1: Before You Begin (p. 7)
        Set Up an AWS Account (p. 8)
        Download the Appropriate AWS SDK (p. 10)
    Step 2: Create a Vault (p. 11)
    Step 3: Upload an Archive to a Vault (p. 12)
        Upload an Archive Using Java (p. 13)
        Upload an Archive Using .NET (p. 14)
    Step 4: Download an Archive from a Vault (p. 15)
        Download an Archive Using Java (p. 16)
        Download an Archive Using .NET (p. 17)
    Step 5: Delete an Archive from a Vault (p. 18)
        Related Sections (p. 18)
        Delete an Archive Using Java (p. 19)
        Delete an Archive Using .NET (p. 20)
        Deleting an Archive Using the AWS CLI (p. 20)
    Step 6: Delete a Vault (p. 22)
    Where Do I Go From Here? (p. 23)
Working with Vaults (p. 24)
    Vault Operations in S3 Glacier (p. 24)
        Creating and Deleting Vaults (p. 24)
        Retrieving Vault Metadata (p. 25)
        Downloading a Vault Inventory (p. 25)
        Configuring Vault Notifications (p. 25)
    Creating a Vault (p. 25)
        Creating a Vault Using Java (p. 26)
        Creating a Vault Using .NET (p. 28)
        Creating a Vault Using REST (p. 31)
        Creating a Vault Using the Console (p. 32)
        Creating a Vault Using the AWS CLI (p. 32)
    Retrieving Vault Metadata (p. 33)
        Retrieving Vault Metadata Using Java (p. 33)
        Retrieving Vault Metadata Using .NET (p. 35)
        Retrieving Vault Metadata Using REST (p. 36)
        Retrieving Vault Metadata Using the AWS CLI (p. 37)
    Downloading a Vault Inventory (p. 37)
        About the Inventory (p. 39)
        Downloading a Vault Inventory Using Java (p. 39)
        Downloading a Vault Inventory Using .NET (p. 44)
        Downloading a Vault Inventory Using REST (p. 49)
        Downloading a Vault Inventory Using the AWS CLI (p. 49)
What Is Amazon S3 Glacier?
With S3 Glacier, customers can store their data cost-effectively for months, years, or even decades. S3
Glacier enables customers to offload the administrative burdens of operating and scaling storage to
AWS, so they don't have to worry about capacity planning, hardware provisioning, data replication,
hardware failure detection and recovery, or time-consuming hardware migrations. For more service
highlights and pricing information, go to the S3 Glacier detail page.
S3 Glacier is one of the many different storage classes for Amazon S3. For a general overview of Amazon
S3 core concepts, such as buckets, access points, storage classes and objects, see What is Amazon S3 in
the Amazon Simple Storage Service Developer Guide.
Topics
• Are You a First-Time S3 Glacier User? (p. 1)
• Amazon S3 Glacier Data Model (p. 2)
• Supported Operations in S3 Glacier (p. 4)
• Accessing Amazon S3 Glacier (p. 5)
Are You a First-Time S3 Glacier User?
If you are a first-time user of S3 Glacier, we recommend that you begin by reading the following sections:
• What is Amazon S3 Glacier—The rest of this section describes the underlying data model, the
operations it supports, and the AWS SDKs that you can use to interact with the service.
• Getting Started—The Getting Started with Amazon S3 Glacier (p. 7) section walks you through
the process of creating a vault, uploading archives, creating jobs to download archives, retrieving the
job output, and deleting archives.
Important
S3 Glacier provides a console, which you can use to create and delete vaults. However, all
other interactions with S3 Glacier require that you use the AWS Command Line Interface (AWS
CLI) or write code. For example, to upload data, such as photos, videos, and other documents,
you must either use the AWS CLI or write code to make requests, using either the REST API
directly or the AWS SDKs. For more information about using S3 Glacier with the AWS
CLI, go to AWS CLI Reference for S3 Glacier. To install the AWS CLI, go to AWS Command Line
Interface.
Beyond the getting started section, you'll probably want to learn more about S3 Glacier operations. The
following sections provide detailed information about working with S3 Glacier using the REST API and
the AWS Software Development Kits (SDKs) for Java and Microsoft .NET:
• Using the AWS SDKs with Amazon S3 Glacier (p. 116)
This section provides an overview of the AWS SDKs used in various code examples in this guide. A
review of this section will help when reading the following sections. It includes an overview of the
high-level and the low-level APIs that these SDKs offer, when to use them, and common steps for
running the code examples provided in this guide.
• Working with Vaults in Amazon S3 Glacier (p. 24)
This section provides details of various vault operations such as creating a vault, retrieving vault
metadata, using jobs to retrieve vault inventory, and configuring vault notifications. In addition to
using the S3 Glacier console, you can use the AWS SDKs for various vault operations. This section
describes the API and provides working samples using the AWS SDK for Java and .NET.
• Working with Archives in Amazon S3 Glacier (p. 67)
This section provides details of archive operations such as uploading an archive in a single request or
using a multipart upload operation to upload large archives in parts. The section also explains creating
jobs to download archives asynchronously. The section provides examples using the AWS SDK for Java
and .NET.
• API Reference for Amazon S3 Glacier (p. 160)
S3 Glacier is a RESTful service. This section describes the REST operations, including the syntax, and
example requests and responses for all the operations. Note that the AWS SDK libraries wrap this API,
simplifying your programming tasks.
Amazon Simple Storage Service (Amazon S3) supports lifecycle configuration on an S3 bucket, which
enables you to transition objects to the S3 Glacier storage class for archival. When you transition
Amazon S3 objects to the S3 Glacier storage class, Amazon S3 internally uses S3 Glacier for durable
storage at lower cost. Although the objects are stored in S3 Glacier, they remain Amazon S3 objects that
you manage in Amazon S3, and you cannot access them directly through S3 Glacier.
For more information about Amazon S3 lifecycle configuration and transitioning objects to the S3
Glacier storage class, see Object Lifecycle Management and Transitioning Objects in the Amazon Simple
Storage Service Developer Guide.
Amazon S3 Glacier Data Model
The S3 Glacier data model core concepts include vaults and archives. S3 Glacier is a REST-based web service. In terms of REST, vaults and archives are the resources.
Topics
• Vault (p. 2)
• Archive (p. 3)
• Job (p. 3)
• Notification Configuration (p. 4)
Vault
In S3 Glacier, a vault is a container for storing archives. When you create a vault, you specify a name and
choose an AWS Region where you want to create the vault.
Each vault resource has a unique address. The general form is:
https://<region-specific endpoint>/<account-id>/vaults/<vaultname>
For example, suppose that you create a vault (examplevault) in the US West (Oregon) Region. This
vault can then be addressed by the following URI:
https://glacier.us-west-2.amazonaws.com/111122223333/vaults/examplevault
In the URI, glacier.us-west-2.amazonaws.com is the US West (Oregon) Region endpoint, 111122223333 is
the AWS account ID, and examplevault is the vault name.
An AWS account can create vaults in any supported AWS Region. For a list of supported AWS Regions, see
Accessing Amazon S3 Glacier (p. 5). Within a Region, an account must use unique vault names. An
AWS account can create same-named vaults in different Regions.
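As a sketch of how these pieces fit together, the following standalone snippet assembles a vault URI from its Region, account ID, and vault name. The buildVaultUri helper is illustrative only, not part of the AWS SDK; it simply mirrors the documented URI pattern.

```java
// Illustrative helper showing how a vault's resource URI is composed
// from the Region endpoint, account ID, and vault name. This is not
// an AWS SDK method; it only mirrors the documented URI pattern.
public class VaultUriExample {

    static String buildVaultUri(String region, String accountId, String vaultName) {
        // https://<region-specific endpoint>/<account-id>/vaults/<vault-name>
        return "https://glacier." + region + ".amazonaws.com/"
                + accountId + "/vaults/" + vaultName;
    }

    public static void main(String[] args) {
        // Reproduces the example URI from the text.
        System.out.println(buildVaultUri("us-west-2", "111122223333", "examplevault"));
    }
}
```

Because vault names are unique only within a Region and account, all three components are needed to address a vault unambiguously.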
You can store an unlimited number of archives in a vault. Depending on your business or application
needs, you can store these archives in one vault or multiple vaults.
S3 Glacier supports various vault operations. Note that vault operations are Region specific. For example,
when you create a vault, you create it in a specific Region. When you request a vault list, you request it
from a specific AWS Region, and the resulting list only includes vaults created in that specific Region.
Archive
An archive can be any data such as a photo, video, or document and is a base unit of storage in S3
Glacier. Each archive has a unique ID and an optional description. Note that you can only specify the
optional description during the upload of an archive. S3 Glacier assigns the archive an ID, which is unique
in the AWS Region in which it is stored.
Like vaults, each archive resource has a unique address. The general form is:
https://<region-specific endpoint>/<account-id>/vaults/<vault-name>/archives/<archive-id>
The following is an example URI of an archive stored in the examplevault vault in the US West
(Oregon) Region:
https://glacier.us-west-2.amazonaws.com/111122223333/vaults/examplevault/archives/NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId
In addition, the S3 Glacier data model includes job and notification-configuration resources. These
resources complement the core vault and archive resources.
Job
S3 Glacier jobs can perform a select query on an archive, retrieve an archive, or get an inventory of a
vault. When performing a query on an archive, you initiate a job, providing a SQL query and a list of S3
Glacier archive objects. S3 Glacier Select runs the query in place and writes the output results to Amazon
S3.
Retrieving an archive and vault inventory (list of archives) are asynchronous operations in S3 Glacier in
which you first initiate a job, and then download the job output after S3 Glacier completes the job.
Note
S3 Glacier offers a cold storage data archival solution. If your application needs a storage
solution that requires real-time data retrieval, you might consider using Amazon S3. For more
information, see Amazon Simple Storage Service (Amazon S3).
To initiate a vault inventory job, you provide a vault name. Select and archive retrieval jobs require
both the vault name and the archive ID. You can also provide an optional job description to help identify
the jobs.
Select, archive retrieval, and vault inventory jobs are associated with a vault. A vault can have multiple
jobs in progress at any point in time. When you send a job request (initiate a job), S3 Glacier returns a
job ID that you can use to track the job. Each job is uniquely identified by a URI of the form:
https://<region-specific endpoint>/<account-id>/vaults/<vault-name>/jobs/<job-id>
The following is an example URI of a job associated with the examplevault vault:
https://glacier.us-west-2.amazonaws.com/111122223333/vaults/examplevault/jobs/HkF9p6o7yjhFx-K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID
For each job, S3 Glacier maintains information such as job type, description, creation date, completion
date, and job status. You can obtain information about a specific job or obtain a list of all your jobs
associated with a vault. The list of jobs that S3 Glacier returns includes all the in-progress and recently
finished jobs.
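The asynchronous flow described above (initiate a job, poll until it completes, then download the output) can be sketched as follows. SimulatedGlacier is a hypothetical stand-in for the service, used only to show the control flow; a real client would call the InitiateJob, DescribeJob, and GetJobOutput REST operations, or wait on an Amazon SNS notification instead of polling.

```java
// Sketch of the asynchronous job pattern: initiate, poll, download.
// SimulatedGlacier is a hypothetical stand-in for the service; it is
// not part of any AWS SDK.
public class JobFlowExample {

    static class SimulatedGlacier {
        private int polls = 0;

        String initiateJob(String vaultName, String jobType) {
            return "EXAMPLEjobID"; // the service returns a job ID to track
        }

        boolean isJobComplete(String jobId) {
            return ++polls >= 3; // pretend the job finishes on the third poll
        }

        String getJobOutput(String jobId) {
            return "archive bytes or inventory JSON";
        }
    }

    static String retrieve(SimulatedGlacier glacier, String vaultName) throws InterruptedException {
        String jobId = glacier.initiateJob(vaultName, "archive-retrieval");
        while (!glacier.isJobComplete(jobId)) {
            Thread.sleep(10); // a real job can take minutes to hours
        }
        return glacier.getJobOutput(jobId);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(retrieve(new SimulatedGlacier(), "examplevault"));
    }
}
```

In production code, configuring a vault notification (described next) avoids this polling loop entirely.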
Notification Configuration
Because jobs take time to complete, S3 Glacier supports a notification mechanism to notify you when a
job is complete. You can configure a vault to send a notification to an Amazon Simple Notification Service
(Amazon SNS) topic when jobs complete. You can specify one SNS topic per vault in the notification
configuration.
S3 Glacier stores the notification configuration as a JSON document. The following is an example vault
notification configuration:
{
"Topic": "arn:aws:sns:us-west-2:111122223333:mytopic",
"Events": ["ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"]
}
Note that notification configurations are associated with vaults; you can have one for each vault. Each
notification configuration resource is uniquely identified by a URI of the form:
https://<region-specific endpoint>/<account-id>/vaults/<vault-name>/notification-
configuration
S3 Glacier supports operations to set, get, and delete a notification configuration. When you delete a
notification configuration, no notifications are sent when any data retrieval operation on the vault is
complete.
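As a small illustration, the JSON document shown earlier can be produced with plain string handling. This helper is purely for demonstration; in practice the AWS SDKs provide typed request objects for setting the notification configuration.

```java
// Builds a vault notification-configuration JSON document like the
// example in the text. Purely illustrative; the AWS SDKs provide
// typed request objects for this.
public class NotificationConfigExample {

    static String buildConfig(String topicArn, String... events) {
        StringBuilder json = new StringBuilder();
        json.append("{\n  \"Topic\": \"").append(topicArn).append("\",\n  \"Events\": [");
        for (int i = 0; i < events.length; i++) {
            if (i > 0) json.append(", ");
            json.append('"').append(events[i]).append('"');
        }
        return json.append("]\n}").toString();
    }

    public static void main(String[] args) {
        System.out.println(buildConfig("arn:aws:sns:us-west-2:111122223333:mytopic",
                "ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"));
    }
}
```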
Supported Operations in S3 Glacier
To work with vaults and archives, S3 Glacier supports a set of operations. Among these, the following operations are asynchronous:
• Retrieving an archive
• Retrieving a vault inventory (list of archives)
These operations require you to first initiate a job and then download the job output. The following
sections summarize the S3 Glacier operations:
Vault Operations
S3 Glacier provides operations to create and delete vaults. You can obtain a vault description for a
specific vault or for all vaults in an AWS Region. The vault description provides information such as
creation date, number of archives in the vault, total size in bytes used by all the archives in the vault, and
the date S3 Glacier generated the vault inventory. S3 Glacier also provides operations to set, retrieve,
and delete a notification configuration on the vault. For more information, see Working with Vaults in
Amazon S3 Glacier (p. 24).
Archive Operations
S3 Glacier provides operations for you to upload and delete archives. You cannot update an existing
archive; you must delete the existing archive and upload a new archive. Note that each time you upload
an archive, S3 Glacier generates a new archive ID. For more information, see Working with Archives in
Amazon S3 Glacier (p. 67).
Jobs
You can initiate an S3 Glacier job to perform a select query on an archive, retrieve an archive, or get an
inventory of a vault.
• select— Perform a select query on an archive.
For more information, see Querying Archives with S3 Glacier Select (p. 148).
• archive-retrieval— Retrieve an archive.
For more information, see Downloading an Archive in Amazon S3 Glacier (p. 83).
• inventory-retrieval— Inventory a vault.
For more information, see Downloading a Vault Inventory in Amazon S3 Glacier (p. 37).
Accessing Amazon S3 Glacier
S3 Glacier is a RESTful web service that uses HTTP and HTTPS as a transport and JavaScript Object Notation (JSON) as a message serialization format. Your application code can make requests directly to the S3 Glacier web service API.
Alternatively, you can simplify application development by using the AWS SDKs that wrap the S3 Glacier
REST API calls. You provide your credentials, and these libraries take care of authentication and request
signing. For more information about using the AWS SDKs, see Using the AWS SDKs with Amazon S3
Glacier (p. 116).
S3 Glacier also provides a console. You can use the console to create and delete vaults. However, all
the archive and job operations require you to write code and make requests using either the REST API
directly or the AWS SDK wrapper libraries. To access the S3 Glacier console, go to S3 Glacier Console.
Getting Started with Amazon S3 Glacier
In the getting started exercise, you will create a vault, upload and download an archive, and finally
delete the archive and the vault. You can do all these operations programmatically. However, the getting
started exercise uses the S3 Glacier management console to create and delete a vault. For uploading and
downloading an archive, this getting started section uses the AWS Software Development Kits (SDKs)
for Java and .NET high-level API. The high-level API provides a simplified programming experience when
working with S3 Glacier. For more information about these APIs, see Using the AWS SDKs with Amazon
S3 Glacier (p. 116).
Important
S3 Glacier provides a management console. You can use the console to create and delete vaults
as shown in this getting started exercise. However, all other interactions with S3 Glacier require
that you use the AWS Command Line Interface (CLI) or write code. For example, to upload
data, such as photos, videos, and other documents, you must either use the AWS CLI or write
code to make requests, using either the REST API directly or the AWS SDKs. For more
information about using S3 Glacier with the AWS CLI, go to AWS CLI Reference for S3 Glacier. To
install the AWS CLI, go to AWS Command Line Interface.
This getting started exercise provides code examples in Java and C# for you to upload and download
an archive. The last section of the getting started provides steps where you can learn more about the
developer experience with S3 Glacier.
Topics
• Step 1: Before You Begin with Amazon S3 Glacier (p. 7)
• Step 2: Create a Vault in Amazon S3 Glacier (p. 11)
• Step 3: Upload an Archive to a Vault in Amazon S3 Glacier (p. 12)
• Step 4: Download an Archive from a Vault in Amazon S3 Glacier (p. 15)
• Step 5: Delete an Archive from a Vault in Amazon S3 Glacier (p. 18)
• Step 6: Delete a Vault in Amazon S3 Glacier (p. 22)
• Where Do I Go From Here? (p. 23)
Step 1: Before You Begin with Amazon S3 Glacier
Topics
• Set Up an AWS Account and an Administrator User (p. 8)
• Download the Appropriate AWS SDK (p. 10)
Important
Amazon S3 Glacier (S3 Glacier) provides a management console, which you can use to create
and delete vaults. However, all other interactions with S3 Glacier require that you use the AWS
Command Line Interface (CLI) or write code. For example, to upload data, such as photos,
videos, and other documents, you must either use the AWS CLI or write code to make requests,
using either the REST API directly or the AWS SDKs. For more information about using
S3 Glacier with the AWS CLI, go to AWS CLI Reference for S3 Glacier. To install the AWS CLI, go
to AWS Command Line Interface.
Set Up an AWS Account and an Administrator User
If you already have an AWS account and you have created an IAM user for the account, skip to the next
task. If you don't have an AWS account, use the following procedure to create one.
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.
Note your AWS account ID, because you'll need it for the next step.
If you signed up for AWS, but you haven't created an IAM user for yourself, you can create one using the
IAM console.
The Getting Started examples in this guide assume you have a user with administrator privileges.
To create an administrator user for yourself and add the user to an administrators group
(console)
1. Sign in to the IAM console as the account owner by choosing Root user and entering your AWS
account email address. On the next page, enter your password.
Note
We strongly recommend that you adhere to the best practice of using the Administrator
IAM user that follows and securely lock away the root user credentials. Sign in as the root user
only to perform a few account and service management tasks.
2. In the navigation pane, choose Users and then choose Add user.
3. For User name, enter Administrator.
4. Select the check box next to AWS Management Console access. Then select Custom password, and
then enter your new password in the text box.
5. (Optional) By default, AWS requires the new user to create a new password when first signing in. You
can clear the check box next to User must create a new password at next sign-in to allow the new
user to reset their password after they sign in.
6. Choose Next: Permissions.
7. Under Set permissions, choose Add user to group.
8. Choose Create group.
9. In the Create group dialog box, for Group name enter Administrators.
10. Choose Filter policies, and then select AWS managed - job function to filter the table contents.
11. In the policy list, select the check box for AdministratorAccess. Then choose Create group.
Note
You must activate IAM user and role access to Billing before you can use the
AdministratorAccess permissions to access the AWS Billing and Cost Management
console. To do this, follow the instructions in step 1 of the tutorial about delegating access
to the billing console.
12. Back in the list of groups, select the check box for your new group. Choose Refresh if necessary to
see the group in the list.
13. Choose Next: Tags.
14. (Optional) Add metadata to the user by attaching tags as key-value pairs. For more information
about using tags in IAM, see Tagging IAM Entities in the IAM User Guide.
15. Choose Next: Review to see the list of group memberships to be added to the new user. When you
are ready to proceed, choose Create user.
You can use this same process to create more groups and users and to give your users access to your AWS
account resources. To learn about using policies that restrict user permissions to specific AWS resources,
see Access Management and Example Policies.
To sign in as the new IAM user:
1. Sign out of the AWS Management Console.
2. Use the following URL:
https://aws_account_number.signin.aws.amazon.com/console/
The aws_account_number is your AWS account ID without hyphens. For example, if your AWS
account ID is 1234-5678-9012, your AWS account number is 123456789012. For information
about how to find your account number, see Your AWS Account ID and Its Alias in the IAM User
Guide.
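The conversion described above is a simple string transformation, sketched here for completeness:

```java
// Strips the hyphens from a formatted AWS account ID, as described above.
public class AccountIdExample {

    static String toAccountNumber(String formattedAccountId) {
        return formattedAccountId.replace("-", "");
    }

    public static void main(String[] args) {
        System.out.println(toAccountNumber("1234-5678-9012")); // prints 123456789012
    }
}
```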
3. Enter the IAM user name and password that you just created. When you're signed in, the navigation
bar displays your_user_name @ your_aws_account_id.
If you don't want the URL for your sign-in page to contain your AWS account ID, you can create an
account alias.
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. On the navigation pane, choose Dashboard.
3. Find the IAM users sign-in link.
4. To create the alias, click Customize, enter the name you want to use for your alias, and then choose
Yes, Create.
5. To remove the alias, choose Customize, and then choose Yes, Delete. The sign-in URL reverts to
using your AWS account ID.
To sign in after you create an account alias, use the following URL:
https://your_account_alias.signin.aws.amazon.com/console/
To verify the sign-in link for IAM users for your account, open the IAM console and check under IAM
users sign-in link: on the dashboard.
For information about using IAM with S3 Glacier, see Identity and Access Management in Amazon S3
Glacier (p. 125).
Download the Appropriate AWS SDK
• If you are using Eclipse, you can download and install the AWS Toolkit for Eclipse using the update site
http://aws.amazon.com/eclipse/. For more information, go to AWS Toolkit for Eclipse.
• If you are using any other IDE to create your application, download the AWS SDK for Java.
• If you are using Visual Studio, you can install both the AWS SDK for .NET and the AWS Toolkit for
Visual Studio. The toolkit provides AWS Explorer for Visual Studio and project templates that you can
use for development. To download the AWS SDK for .NET, go to http://aws.amazon.com/sdkfornet.
By default, the installation script installs both the AWS SDK and the AWS Toolkit for Visual Studio. To
learn more about the toolkit, go to AWS Toolkit for Visual Studio User Guide.
• If you are using any other IDE to create your application, you can use the same link provided in the
preceding step and install only the AWS SDK for .NET.
Step 2: Create a Vault in Amazon S3 Glacier
You can create vaults programmatically or by using the S3 Glacier console. This section uses the console
to create a vault. In a later step, you will upload an archive to the vault.
To create a vault
1. Sign in to the AWS Management Console and open the S3 Glacier console at
https://console.aws.amazon.com/glacier/.
2. Select an AWS Region from the Region selector.
3. Click Create Vault.
4. Enter examplevault as the vault name in the Vault Name field and then click Next Step.
There are guidelines for naming a vault. For more information, see Creating a Vault in Amazon S3
Glacier (p. 25).
5. Select Do not enable notifications. For this getting started exercise, you will not configure
notifications for the vault.
If you wanted to have notifications sent to you or your application whenever certain S3 Glacier
jobs complete, you would select Enable notifications and create a new SNS topic or Enable
notifications and use an existing SNS topic to set up Amazon Simple Notification Service (Amazon
SNS) notifications. In subsequent steps, you upload an archive and then download it using the
high-level API of the AWS SDK. Using the high-level API does not require that you configure vault
notification to retrieve your data.
6. If the AWS Region and vault name are correct, then click Submit.
Step 3: Upload an Archive to a Vault in Amazon S3 Glacier
Important
S3 Glacier provides a management console, which you can use to create and delete vaults.
However, all other interactions with S3 Glacier require that you use the AWS Command Line
Interface (AWS CLI) or write code. For example, to upload data, such as photos, videos, and
other documents, you must either use the AWS CLI or write code to make requests, using
either the REST API directly or the AWS SDKs. To install the AWS CLI, see AWS Command
Line Interface. For more information
about using Amazon S3 Glacier (S3 Glacier) with the AWS CLI, see AWS CLI Reference for S3
Glacier. For examples of using the AWS CLI to upload archives to S3 Glacier, see Using S3 Glacier
with the AWS Command Line Interface.
An archive is any object, such as a photo, video, or document that you store in a vault. It is a base unit
of storage in S3 Glacier. You can upload an archive in a single request. For large archives, S3 Glacier
provides a multipart upload API that enables you to upload an archive in parts. In this getting started
section, you upload a sample archive in a single request. For this exercise, choose a small file; for
larger files, multipart upload is more suitable. For more information, see Uploading Large Archives
in Parts (Multipart Upload) (p. 75).
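The choice between a single-request upload and a multipart upload comes down to archive size, which the sketch below makes explicit. The 100 MB cutover is an assumption for illustration only; the SDK high-level API decides this internally, and its actual threshold may differ.

```java
// Sketch: selecting an upload strategy by archive size. The 100 MB
// threshold is an assumption for illustration; the SDK high-level
// API chooses its own cutover internally.
public class UploadStrategyExample {

    static final long ASSUMED_SINGLE_UPLOAD_MAX = 100L * 1024 * 1024; // 100 MB (assumed)

    static String chooseStrategy(long archiveSizeBytes) {
        return archiveSizeBytes <= ASSUMED_SINGLE_UPLOAD_MAX
                ? "single-request upload"
                : "multipart upload";
    }

    public static void main(String[] args) {
        System.out.println(chooseStrategy(5L * 1024 * 1024));        // small file
        System.out.println(chooseStrategy(4L * 1024 * 1024 * 1024)); // 4 GB archive
    }
}
```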
Topics
• Upload an Archive to a Vault in Amazon S3 Glacier Using the AWS SDK for Java (p. 13)
• Upload an Archive to a Vault in Amazon S3 Glacier Using the AWS SDK for .NET (p. 14)
Upload an Archive to a Vault in Amazon S3 Glacier Using the AWS SDK for Java
For step-by-step instructions on how to run this example, see Running Java Examples for Amazon S3
Glacier Using Eclipse (p. 119). You need to update the code as shown with the name of the archive file
you want to upload.
Note
Amazon S3 Glacier (S3 Glacier) keeps an inventory of all the archives in your vaults. When you
upload the archive in the following example, it will not appear in a vault in the management
console until the vault inventory has been updated. This update usually happens once a day.
Example — Uploading an Archive Using the High-Level API of the AWS SDK for Java
import java.io.File;
import java.io.IOException;
import java.util.Date;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.transfer.ArchiveTransferManager;
import com.amazonaws.services.glacier.transfer.UploadResult;

public class ArchiveUploadHighLevel_GettingStarted {

    public static String vaultName = "examplevault";
    public static String archiveToUpload = "*** provide file name (with full path) to upload ***";

    public static void main(String[] args) throws IOException {
        // Load credentials from the default profile and point the client at us-west-2.
        ProfileCredentialsProvider credentials = new ProfileCredentialsProvider();
        AmazonGlacierClient client = new AmazonGlacierClient(credentials);
        client.setEndpoint("https://glacier.us-west-2.amazonaws.com/");

        try {
            ArchiveTransferManager atm = new ArchiveTransferManager(client, credentials);
            // Upload the file; the high-level API chooses single or multipart upload.
            UploadResult result = atm.upload(vaultName, "my archive " + (new Date()), new File(archiveToUpload));
            System.out.println("Archive ID: " + result.getArchiveId());
        } catch (Exception e) {
            System.err.println(e);
        }
    }
}
• The example creates an instance of the ArchiveTransferManager class for the specified Amazon S3
Glacier (S3 Glacier) Region endpoint.
• The code example uses the US West (Oregon) Region (us-west-2) to match the location where you
created the vault previously in Step 2: Create a Vault in Amazon S3 Glacier (p. 11).
• The example uses the Upload method of the ArchiveTransferManager class to upload your
archive. For small archives, this method uploads the archive directly to S3 Glacier. For larger archives,
this method uses the multipart upload API in S3 Glacier to split the upload into multiple parts for
better error recovery, if any errors are encountered while streaming the data to S3 Glacier.
Upload an Archive to a Vault in Amazon S3 Glacier Using the AWS SDK for .NET
For step-by-step instructions on how to run the following example, see Running Code
Examples (p. 121). You need to update the code as shown with the name of your vault and the name of
the archive file to upload.
Note
S3 Glacier keeps an inventory of all the archives in your vaults. When you upload the archive in
the following example, it will not appear in a vault in the management console until the vault
inventory has been updated. This update usually happens once a day.
Example — Uploading an Archive Using the High-Level API of the AWS SDK for .NET
using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class ArchiveUploadHighLevel_GettingStarted
{
static string vaultName = "examplevault";
static string archiveToUpload = "*** Provide file name (with full path) to upload ***";
To retrieve an archive from S3 Glacier, you first initiate a job. After the job completes, you download the
data. For more information about archive retrievals, see Retrieving S3 Glacier Archives (p. 83).
The access time of your request depends on the retrieval option you choose: Expedited, Standard, or
Bulk. For all but the largest archives (250 MB+), data accessed using Expedited retrievals
is typically made available within 1–5 minutes. Standard retrievals typically
complete within 3–5 hours. Bulk retrievals typically complete within 5–12 hours. For more information
about the retrieval options, see the S3 Glacier FAQ. For information about data retrieval charges, see the
S3 Glacier detail page.
The code examples shown in the following topics initiate the job, wait for it to complete, and then
download the archive's data.
Topics
• Download an Archive from a Vault in Amazon S3 Glacier Using the AWS SDK for Java (p. 16)
• Download an Archive from a Vault in Amazon S3 Glacier Using the AWS SDK for .NET (p. 17)
For step-by-step instructions on how to run this example, see Running Java Examples for Amazon S3
Glacier Using Eclipse (p. 119). You need to update the code as shown with the archive ID of the file you
uploaded in Step 3: Upload an Archive to a Vault in Amazon S3 Glacier (p. 12).
import java.io.File;
import java.io.IOException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.transfer.ArchiveTransferManager;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sqs.AmazonSQSClient;
glacierClient.setEndpoint("glacier.us-west-2.amazonaws.com");
sqsClient.setEndpoint("sqs.us-west-2.amazonaws.com");
snsClient.setEndpoint("sns.us-west-2.amazonaws.com");
try {
    ArchiveTransferManager atm = new ArchiveTransferManager(glacierClient, sqsClient, snsClient);
    // Download the archive to the specified file.
    atm.download(vaultName, archiveId, new File(downloadFilePath));
} catch (Exception e) {
    System.err.println(e);
}
}
}
• The example creates an instance of the ArchiveTransferManager class for the specified Amazon S3
Glacier (S3 Glacier) Region endpoint.
• The code example uses the US West (Oregon) Region (us-west-2) to match the location where you
created the vault previously in Step 2: Create a Vault in Amazon S3 Glacier (p. 11).
• The example uses the Download method of the ArchiveTransferManager class to download your
archive. The example creates an Amazon SNS topic and an Amazon Simple Queue Service (Amazon SQS) queue that
is subscribed to that topic. If you created an IAM administrative user as instructed in Step 1: Before You
Begin with Amazon S3 Glacier (p. 7), your user has the necessary IAM permissions for the creation
and use of the Amazon SNS topic and Amazon SQS queue.
• The example then initiates the archive retrieval job and polls the queue for the archive to be available.
When the archive is available, download begins. For information about retrieval times, see Archive
Retrieval Options (p. 84).
For step-by-step instructions on how to run this example, see Running Code Examples (p. 121). You
need to update the code as shown with the archive ID of the file you uploaded in Step 3: Upload an
Archive to a Vault in Amazon S3 Glacier (p. 12).
Example — Download an Archive Using the High-Level API of the AWS SDK for .NET
using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class ArchiveDownloadHighLevel_GettingStarted
{
static string vaultName = "examplevault";
static string archiveId = "*** Provide archive ID ***";
static string downloadFilePath = "*** Provide the file name and path to where to store the download ***";
Delete the sample archive by using one of the following SDKs or the AWS CLI:
• Delete an Archive from a Vault in Amazon S3 Glacier Using the AWS SDK for Java (p. 19)
• Delete an Archive from a Vault in Amazon S3 Glacier Using the AWS SDK for .NET (p. 20)
• Deleting an Archive in Amazon S3 Glacier Using the AWS Command Line Interface (p. 20)
Related Sections
• Step 3: Upload an Archive to a Vault in Amazon S3 Glacier (p. 12)
• Deleting an Archive in Amazon S3 Glacier (p. 109)
• The DeleteArchiveRequest object describes the delete request, including the vault name where the
archive is located and the archive ID.
• The deleteArchive method sends the request to Amazon S3 Glacier (S3 Glacier) to delete the
archive.
• The example uses the US West (Oregon) Region (us-west-2) to match the location where you created
the vault in Step 2: Create a Vault in Amazon S3 Glacier (p. 11).
For step-by-step instructions on how to run this example, see Running Java Examples for Amazon S3
Glacier Using Eclipse (p. 119). You need to update the code as shown with the archive ID of the file you
uploaded in Step 3: Upload an Archive to a Vault in Amazon S3 Glacier (p. 12).
import java.io.IOException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.DeleteArchiveRequest;
try {
    // Describe the delete request: the vault name and the archive ID.
    DeleteArchiveRequest request = new DeleteArchiveRequest()
        .withVaultName(vaultName)
        .withArchiveId(archiveId);
    client.deleteArchive(request);
    System.out.println("Deleted archive successfully.");
} catch (Exception e) {
    System.err.println("Archive not deleted.");
    System.err.println(e);
}
}
}
• The example creates an instance of the ArchiveTransferManager class for the specified Amazon S3
Glacier (S3 Glacier) Region endpoint.
• The code example uses the US West (Oregon) Region (us-west-2) to match the location where you
created the vault previously in Step 2: Create a Vault in Amazon S3 Glacier (p. 11).
• The example uses the Delete method of the ArchiveTransferManager class provided as part of
the high-level API of the AWS SDK for .NET.
For step-by-step instructions on how to run this example, see Running Code Examples (p. 121). You
need to update the code as shown with the archive ID of the file you uploaded in Step 3: Upload an
Archive to a Vault in Amazon S3 Glacier (p. 12).
Example — Deleting an Archive Using the High-Level API of the AWS SDK for .NET
using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class ArchiveDeleteHighLevel_GettingStarted
{
static string vaultName = "examplevault";
static string archiveId = "*** Provide archive ID ***";
Topics
• (Prerequisite) Setting Up the AWS CLI (p. 21)
To verify that the AWS CLI is installed and configured, you can run commands such as the following:
aws help
aws s3 ls
Expected output:
{
"location": "/111122223333/vaults/awsexamplevault/jobs/*** jobid ***",
"jobId": "*** jobid ***"
}
2. Use the describe-job command to check the status of the previous retrieval job.
Expected output:
{
"InventoryRetrievalParameters": {
"Format": "JSON"
},
"VaultARN": "*** vault arn ***",
"Completed": false,
"JobId": "*** jobid ***",
"Action": "InventoryRetrieval",
"CreationDate": "*** job creation date ***",
"StatusCode": "InProgress"
}
You must wait until the job output is ready for you to download. If you set a notification
configuration on the vault or specified an Amazon Simple Notification Service (Amazon SNS) topic
when you initiated the job, S3 Glacier sends a message to the topic after it completes the job.
You can set a notification configuration for specific events on the vault. For more information, see
Configuring Vault Notifications in Amazon S3 Glacier (p. 51). S3 Glacier sends a message to the
specified SNS topic whenever the specified event occurs.
4. When the job is complete, use the get-job-output command to download the retrieval job output to
the file output.json.
{
"VaultARN":"arn:aws:glacier:region:111122223333:vaults/awsexamplevault",
"InventoryDate":"*** job completion date ***",
"ArchiveList":[
{"ArchiveId":"*** archiveid ***",
"ArchiveDescription":*** archive description (if set) ***,
"CreationDate":"*** archive creation date ***",
"Size":"*** archive size (in bytes) ***",
"SHA256TreeHash":"*** archive hash ***"
}
{"ArchiveId":
...
]}
5. Use the delete-archive command to delete each archive from a vault until none remain.
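The SHA256TreeHash value shown for each archive in the inventory is a tree hash: the archive is read in 1 MiB chunks, each chunk is hashed with SHA-256, and adjacent hashes are repeatedly concatenated and rehashed until a single root hash remains. A minimal sketch of the scheme (for data smaller than 1 MiB it reduces to a plain SHA-256):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the SHA-256 tree hash reported in the vault inventory.
public class TreeHash {
    static final int CHUNK = 1024 * 1024; // 1 MiB chunks

    public static String treeHash(byte[] data) {
        MessageDigest md;
        try {
            md = MessageDigest.getInstance("SHA-256");
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
        // Leaf level: SHA-256 of each 1 MiB chunk (empty input = one empty chunk).
        List<byte[]> level = new ArrayList<>();
        for (int off = 0; off == 0 || off < data.length; off += CHUNK) {
            int len = Math.min(CHUNK, data.length - off);
            md.reset();
            md.update(data, off, len);
            level.add(md.digest());
        }
        // Combine pairwise; an odd hash at the end is promoted unchanged.
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                if (i + 1 < level.size()) {
                    md.reset();
                    md.update(level.get(i));
                    md.update(level.get(i + 1));
                    next.add(md.digest());
                } else {
                    next.add(level.get(i));
                }
            }
            level = next;
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : level.get(0)) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) {
        System.out.println(treeHash("hello".getBytes()));
    }
}
```

The SDKs compute this hash for you during upload; the sketch is only meant to show why the inventory value is reproducible from the archive bytes.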
You can delete a vault programmatically or by using the S3 Glacier console. For information about
deleting a vault programmatically, see Deleting a Vault in Amazon S3 Glacier (p. 59).
1. Sign in to the AWS Management Console and open the S3 Glacier console at https://
console.aws.amazon.com/glacier/.
2. From the AWS Region selector, select the AWS Region where the vault exists that you want to delete.
In this getting started exercise, we've been using a vault named examplevault.
• If you are deleting a nonempty vault, you must first delete all of its existing archives.
You can do this by writing code to make delete archive requests using the REST API, the
AWS SDK for Java, or the AWS SDK for .NET, or by using the AWS CLI. For information about deleting
archives, see Step 5: Delete an Archive from a Vault in Amazon S3 Glacier (p. 18).
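Emptying a vault amounts to looping over every ArchiveId in the downloaded inventory and issuing a delete-archive request for each. As a rough sketch (a real JSON parser is preferable to the regex used here, and the printed CLI command is only an example of what you would run per ID):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract every ArchiveId from a downloaded inventory (output.json)
// so each one can be fed to a delete-archive request.
public class InventoryArchiveIds {
    static final Pattern ID = Pattern.compile("\"ArchiveId\"\\s*:\\s*\"([^\"]+)\"");

    public static List<String> archiveIds(String inventoryJson) {
        List<String> ids = new ArrayList<>();
        Matcher m = ID.matcher(inventoryJson);
        while (m.find()) ids.add(m.group(1));
        return ids;
    }

    public static void main(String[] args) {
        String inventory = "{\"ArchiveList\":[{\"ArchiveId\":\"id-1\"},{\"ArchiveId\":\"id-2\"}]}";
        for (String id : archiveIds(inventory)) {
            // Each ID would be passed to a delete-archive call, for example:
            System.out.println("aws glacier delete-archive --account-id - "
                + "--vault-name examplevault --archive-id " + id);
        }
    }
}
```

Remember that the inventory is a point-in-time snapshot, so archives uploaded after the last inventory was generated are not in the list.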
Topics
• Vault Operations in S3 Glacier (p. 24)
• Creating a Vault in Amazon S3 Glacier (p. 25)
• Retrieving Vault Metadata in Amazon S3 Glacier (p. 33)
• Downloading a Vault Inventory in Amazon S3 Glacier (p. 37)
• Configuring Vault Notifications in Amazon S3 Glacier (p. 51)
• Deleting a Vault in Amazon S3 Glacier (p. 59)
• Tagging Your Amazon S3 Glacier Vaults (p. 64)
• Amazon S3 Glacier Vault Lock (p. 65)
You can delete a vault only if there are no archives in the vault as of the last inventory that S3 Glacier
computed and there have been no writes to the vault since the last inventory.
Note
S3 Glacier prepares an inventory for each vault periodically, every 24 hours. Because the
inventory might not reflect the latest information, S3 Glacier ensures the vault is indeed empty
by checking if there were any write operations since the last vault inventory.
For more information, see Creating a Vault in Amazon S3 Glacier (p. 25) and Deleting a Vault in
Amazon S3 Glacier (p. 59).
Downloading a vault inventory is an asynchronous operation. You must first initiate a job to download
the inventory. After receiving the job request, S3 Glacier prepares your inventory for download. After the
job completes, you can download the inventory data.
Given the asynchronous nature of the job, you can use Amazon Simple Notification Service (Amazon
SNS) notifications to notify you when the job completes. You can specify an Amazon SNS topic for each
individual job request or configure your vault to send a notification when specific vault events occur.
S3 Glacier prepares an inventory for each vault periodically, every 24 hours. If there have been no archive
additions or deletions to the vault since the last inventory, the inventory date is not updated. When you
initiate a job for a vault inventory, S3 Glacier returns the last inventory it generated, which is a point-
in-time snapshot and not real-time data. You might not find it useful to retrieve vault inventory for
each archive upload. However, suppose you maintain a client-side database that associates metadata with
the archives you upload to S3 Glacier. Then, you might find the vault inventory useful to reconcile
information in your database with the actual vault inventory.
For more information about retrieving a vault inventory, see Downloading a Vault Inventory in Amazon
S3 Glacier (p. 37).
You can configure notifications on a vault and identify vault events and the Amazon SNS topic to be
notified when the event occurs. Anytime the vault event occurs, S3 Glacier sends a notification to the
specified Amazon SNS topic. For more information, see Configuring Vault Notifications in Amazon S3
Glacier (p. 51).
Regions and Endpoints in the AWS General Reference. For information on creating more vaults, go to the
S3 Glacier product detail page.
When you create a vault, you must provide a vault name. The following are the vault naming
requirements:
Vault names must be unique within an account and the AWS Region in which the vault is being created.
That is, an account can create vaults with the same name in different AWS Regions but not in the same
AWS Region.
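Beyond uniqueness, vault names are limited to 1–255 characters drawn from a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), and '.' (period). A client-side check can reject invalid names before a request is sent; the following sketch assumes those character rules (uniqueness per account and Region can only be verified by the service):

```java
import java.util.regex.Pattern;

// Sketch of a client-side vault name check, assuming the documented rules:
// 1-255 characters from a-z, A-Z, 0-9, '_', '-', and '.'.
public class VaultNameCheck {
    static final Pattern VALID = Pattern.compile("[A-Za-z0-9_.\\-]{1,255}");

    public static boolean isValidVaultName(String name) {
        return name != null && VALID.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidVaultName("examplevault")); // valid
        System.out.println(isValidVaultName("bad name!"));    // space and '!' not allowed
    }
}
```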
Topics
• Creating a Vault in Amazon S3 Glacier Using the AWS SDK for Java (p. 26)
• Creating a Vault in Amazon S3 Glacier Using the AWS SDK for .NET (p. 28)
• Creating a Vault in Amazon S3 Glacier Using the REST API (p. 31)
• Creating a Vault Using the Amazon S3 Glacier Console (p. 32)
• Creating a Vault in Amazon S3 Glacier Using the AWS Command Line Interface (p. 32)
You need to specify an AWS Region in which you want to create a vault. All operations you perform
using this client apply to that AWS Region.
2. Provide request information by creating an instance of the CreateVaultRequest class.
Amazon S3 Glacier (S3 Glacier) requires you to provide a vault name and your account ID. If you
don't provide an account ID, then the account ID associated with the credentials you provide to
sign the request is used. For more information, see Using the AWS SDK for Java with Amazon S3
Glacier (p. 117).
3. Run the createVault method by providing the request object as a parameter.
The following Java code snippet illustrates the preceding steps. The snippet creates a vault in the us-
west-2 Region. The Location it prints is the relative URI of the vault that includes your account ID, the
AWS Region, and the vault name.
Note
For information about the underlying REST API, see Create Vault (PUT vault) (p. 185).
For step-by-step instructions on how to run the following example, see Running Java Examples for
Amazon S3 Glacier Using Eclipse (p. 119).
Example
import java.io.IOException;
import java.util.List;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.CreateVaultRequest;
import com.amazonaws.services.glacier.model.CreateVaultResult;
import com.amazonaws.services.glacier.model.DeleteVaultRequest;
import com.amazonaws.services.glacier.model.DescribeVaultOutput;
import com.amazonaws.services.glacier.model.DescribeVaultRequest;
import com.amazonaws.services.glacier.model.DescribeVaultResult;
import com.amazonaws.services.glacier.model.ListVaultsRequest;
import com.amazonaws.services.glacier.model.ListVaultsResult;
try {
createVault(client, vaultName);
describeVault(client, vaultName);
listVaults(client);
deleteVault(client, vaultName);
} catch (Exception e) {
System.err.println("Vault operation failed. " + e.getMessage());
}
}
Topics
• Creating a Vault Using the High-Level API of the AWS SDK for .NET (p. 28)
• Creating a Vault Using the Low-Level API of the AWS SDK for .NET (p. 29)
Example: Vault Operations Using the High-Level API of the AWS SDK for .NET
The following C# code example creates and deletes a vault in the US West (Oregon) Region. For a list of
AWS Regions in which you can create vaults, see Accessing Amazon S3 Glacier (p. 5).
For step-by-step instructions on how to run the following example, see Running Code
Examples (p. 121). You need to update the code as shown with a vault name.
Example
using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class VaultCreateDescribeListVaultsDeleteHighLevel
{
static string vaultName = "*** Provide vault name ***";
You need to specify an AWS Region in which you want to create a vault. All operations you perform
using this client apply to that AWS Region.
2. Provide request information by creating an instance of the CreateVaultRequest class.
Amazon S3 Glacier (S3 Glacier) requires you to provide a vault name and your account ID. If you don't
provide an account ID, then the account ID associated with the credentials you provide to sign the
request is used. For more information, see Using the AWS SDK for .NET with Amazon S3 Glacier (p. 120).
3. Run the CreateVault method by providing the request object as a parameter.
Example: Vault Operations Using the Low-Level API of the AWS SDK for .NET
The following C# example illustrates the preceding steps. The example creates a vault in the US West
(Oregon) Region. In addition, the code example retrieves the vault information, lists all vaults in the same
AWS Region, and then deletes the vault created. The Location printed is the relative URI of the vault
that includes your account ID, the AWS Region, and the vault name.
Note
For information about the underlying REST API, see Create Vault (PUT vault) (p. 185).
For step-by-step instructions on how to run the following example, see Running Code
Examples (p. 121). You need to update the code as shown with a vault name.
Example
using System;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class VaultCreateDescribeListVaultsDelete
{
static string vaultName = "*** Provide vault name ***";
static AmazonGlacierClient client;
{
DescribeVaultRequest describeVaultRequest = new DescribeVaultRequest()
{
VaultName = vaultName
};
DescribeVaultResponse describeVaultResponse =
client.DescribeVault(describeVaultRequest);
Console.WriteLine("\nVault description...");
Console.WriteLine(
"\nVaultName: " + describeVaultResponse.VaultName +
"\nVaultARN: " + describeVaultResponse.VaultARN +
"\nVaultCreationDate: " + describeVaultResponse.CreationDate +
"\nNumberOfArchives: " + describeVaultResponse.NumberOfArchives +
"\nSizeInBytes: " + describeVaultResponse.SizeInBytes +
"\nLastInventoryDate: " + describeVaultResponse.LastInventoryDate
);
}
Topics
• (Prerequisite) Setting Up the AWS CLI (p. 21)
• Example: Creating a Vault Using the AWS CLI (p. 32)
aws help
aws s3 ls
Expected output:
{
"location": "/111122223333/vaults/awsexamplevault"
}
If you retrieve a vault list, S3 Glacier returns the list sorted by the ASCII values of the vault names. The
list contains up to 1,000 vaults. You should always check the response for a marker at which to continue
the list; if there are no more items, the marker field is null. You can optionally limit the number of
vaults returned in the response. If there are more vaults than are returned in the response, the result is
paginated, and you need to send additional requests to fetch the next set of vaults.
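The marker loop described above can be sketched independently of the SDK. In this sketch, fetching a page is simulated with an in-memory list and an integer marker; the real ListVaults call returns a vault ARN as the marker instead:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the marker loop used to page through a vault listing.
// The in-memory list stands in for the service; an integer index stands
// in for the marker. A null marker means the listing is complete.
public class MarkerLoop {
    // Collects the whole listing by repeatedly requesting pages of 'limit'
    // items (limit must be positive).
    public static List<String> listAll(List<String> service, int limit) {
        List<String> all = new ArrayList<>();
        Integer marker = 0;
        while (marker != null) {
            int end = Math.min(marker + limit, service.size());
            all.addAll(service.subList(marker, end));      // one "page"
            marker = (end < service.size()) ? end : null;  // next marker, or done
        }
        return all;
    }

    public static void main(String[] args) {
        List<String> vaults = List.of("a", "b", "c", "d", "e", "f", "g");
        System.out.println(MarkerLoop.listAll(vaults, 3)); // pages of 3, 3, and 1
    }
}
```

The loop shape is the same whether you call listVaults (Java) or ListVaults (.NET): pass the marker from the previous response until it comes back null.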
Topics
• Retrieving Vault Metadata in Amazon S3 Glacier Using the AWS SDK for Java (p. 33)
• Retrieving Vault Metadata in Amazon S3 Glacier Using the AWS SDK for .NET (p. 35)
• Retrieving Vault Metadata Using the REST API (p. 36)
• Retrieving Vault Metadata in Amazon S3 Glacier Using the AWS Command Line Interface (p. 37)
You need to specify an AWS Region where the vault resides. All operations you perform using this
client apply to that AWS Region.
2. Provide request information by creating an instance of the DescribeVaultRequest class.
Amazon S3 Glacier (S3 Glacier) requires you to provide a vault name and your account ID. If you
don't provide an account ID, then the account ID associated with the credentials you provide to sign
the request is assumed. For more information, see Using the AWS SDK for Java with Amazon S3
Glacier (p. 117).
The vault metadata information that S3 Glacier returns is available in the DescribeVaultResult
object.
System.out.print(
"\nCreationDate: " + result.getCreationDate() +
"\nLastInventoryDate: " + result.getLastInventoryDate() +
"\nNumberOfArchives: " + result.getNumberOfArchives() +
"\nSizeInBytes: " + result.getSizeInBytes() +
"\nVaultARN: " + result.getVaultARN() +
"\nVaultName: " + result.getVaultName());
Note
For information about the underlying REST API, see Describe Vault (GET vault) (p. 194).
The following Java code snippet retrieves a list of vaults in the us-west-2 Region. The request limits the
number of vaults returned in the response to 5. The code snippet then makes a series of listVaults
calls to retrieve the entire vault list from the AWS Region.
AmazonGlacierClient client = new AmazonGlacierClient(new ProfileCredentialsProvider());
client.setEndpoint("https://glacier.us-west-2.amazonaws.com/");
In the preceding code segment, if you don't specify the Limit value in the request, S3 Glacier returns
up to 10 vaults, as set by the S3 Glacier API. If there are more vaults to list, the response marker field
contains the vault Amazon Resource Name (ARN) at which to continue the list with a new request;
otherwise, the marker field is null.
Note that the information returned for each vault in the list is the same as the information you get by
calling the describeVault method for a specific vault.
Note
The listVaults method calls the underlying REST API (see List Vaults (GET vaults) (p. 210)).
Example: Retrieving Vault Metadata Using the AWS SDK for Java
For a working code example, see Example: Creating a Vault Using the AWS SDK for Java (p. 27). The
Java code example creates a vault and retrieves the vault metadata.
You need to specify an AWS Region where the vault resides. All operations you perform using this
client apply to that AWS Region.
2. Provide request information by creating an instance of the DescribeVaultRequest class.
Amazon S3 Glacier (S3 Glacier) requires you to provide a vault name and your account ID. If you
don't provide an account ID, then the account ID associated with the credentials you provide to sign
the request is assumed. For more information, see Using the AWS SDK for .NET with Amazon S3
Glacier (p. 120).
3. Run the DescribeVault method by providing the request object as a parameter.
The vault metadata information that S3 Glacier returns is available in the DescribeVaultResult
object.
The following C# code snippet illustrates the preceding steps. The snippet retrieves metadata
information of an existing vault in the US West (Oregon) Region.
AmazonGlacierClient client;
client = new AmazonGlacierClient(Amazon.RegionEndpoint.USWest2);
Note
For information about the underlying REST API, see Describe Vault (GET vault) (p. 194).
The following C# code snippet retrieves a list of vaults in the US West (Oregon) Region. The request
limits the number of vaults returned in the response to 5. The code snippet then makes a series of
ListVaults calls to retrieve the entire vault list from the AWS Region.
AmazonGlacierClient client;
client = new AmazonGlacierClient(Amazon.RegionEndpoint.USWest2);
string lastMarker = null;
Console.WriteLine("\n List of vaults in your account in the specific AWS Region ...");
do
{
ListVaultsRequest request = new ListVaultsRequest()
{
Limit = 5,
Marker = lastMarker
};
ListVaultsResponse response = client.ListVaults(request);
In the preceding code segment, if you don't specify the Limit value in the request, S3 Glacier returns up
to 10 vaults, as set by the S3 Glacier API.
Note that the information returned for each vault in the list is the same as the information you get by
calling the DescribeVault method for a specific vault.
Note
The ListVaults method calls the underlying REST API (see List Vaults (GET vaults) (p. 210)).
Topics
• (Prerequisite) Setting Up the AWS CLI (p. 21)
• Example: Retrieving Vault Metadata Using the AWS CLI (p. 37)
aws help
aws s3 ls
1. Initiate an inventory retrieval job by using the Initiate Job (POST jobs) (p. 263) operation.
Important
A data retrieval policy can cause your initiate retrieval job request to fail with a
PolicyEnforcedException exception. For more information about data retrieval policies,
see Amazon S3 Glacier Data Retrieval Policies (p. 151). For more information about the
PolicyEnforcedException exception, see Error Responses (p. 176).
2. After the job completes, download the bytes using the Get Job Output (GET output) (p. 257)
operation.
For example, retrieving an archive or a vault inventory requires you to first initiate a retrieval job. The job
request runs asynchronously. When you initiate a retrieval job, S3 Glacier creates a job and returns a job
ID in the response. When S3 Glacier completes the job, you can get the job output, which is either the
archive bytes or the vault inventory data.
The job must complete before you can get its output. To determine the status of the job, you have the
following options:
• Wait for job completion notification—You can specify an Amazon Simple Notification Service
(Amazon SNS) topic to which S3 Glacier can post a notification after the job is completed. You can
specify an Amazon SNS topic using the following methods:
• Specify an Amazon SNS topic on a per-job basis.
When you initiate a job, you can optionally specify an Amazon SNS topic.
• Set a notification configuration on the vault.
You can set a notification configuration for specific events on the vault (see Configuring Vault
Notifications in Amazon S3 Glacier (p. 51)). S3 Glacier sends a message to the specified SNS topic
any time the specified event occurs.
If you have a notification configuration set on the vault and you also specify an Amazon SNS topic when
you initiate a job, S3 Glacier sends the job completion message to both topics.
You can configure the SNS topic to notify you via email or to store the message in an Amazon Simple
Queue Service (Amazon SQS) queue that your application can poll. When a message appears in the queue,
you can check whether the job completed successfully and then download the job output.
• Request job information explicitly—S3 Glacier also provides a describe job operation (Describe Job
(GET JobID) (p. 250)) that enables you to poll for job information. You can periodically send this
request to obtain job information. However, using Amazon SNS notifications is the recommended
option.
Note
The information you get via SNS notification is the same as what you get by calling Describe
Job.
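The polling alternative can be sketched as a simple loop around the describe call. In this sketch, DescribeJobFn is a hypothetical stand-in for the Describe Job operation, and the sleep interval is shortened for illustration (real jobs take hours, so polls minutes apart are more realistic):

```java
// Sketch of polling for job completion, as an alternative to SNS
// notifications. DescribeJobFn stands in for the Describe Job call.
public class JobPoller {
    interface DescribeJobFn { boolean isCompleted(String jobId); }

    // Polls until the job completes or the attempt budget is exhausted.
    // Returns true if the job completed within maxAttempts polls.
    public static boolean waitForJob(DescribeJobFn describe, String jobId,
                                     int maxAttempts, long sleepMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (describe.isCompleted(jobId)) return true;
            try {
                Thread.sleep(sleepMillis); // real code would wait minutes between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Stub service: the job "completes" on the third describe call.
        final int[] calls = {0};
        DescribeJobFn stub = id -> ++calls[0] >= 3;
        System.out.println(waitForJob(stub, "job-1", 10, 1));
    }
}
```

In production code the describe function would call the SDK's describeJob and check the Completed and StatusCode fields of the response.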
Topics
• About the Inventory (p. 39)
• Downloading a Vault Inventory in Amazon S3 Glacier Using the AWS SDK for Java (p. 39)
• Downloading a Vault Inventory in Amazon S3 Glacier Using the AWS SDK for .NET (p. 44)
• Downloading a Vault Inventory Using the REST API (p. 49)
• Downloading a Vault Inventory in Amazon S3 Glacier Using the AWS Command Line
Interface (p. 49)
You might not find it useful to retrieve a vault inventory for each archive upload. However, suppose you
maintain a client-side database that associates metadata with the archives you upload to S3 Glacier.
Then, you might find the vault inventory useful to reconcile information, as needed, in your database
with the actual vault inventory. You can limit the number of inventory items retrieved by filtering on the
archive creation date or by setting a quota. For more information about limiting inventory retrieval, see
Range Inventory Retrieval (p. 266).
The inventory can be returned in two formats: comma-separated values (CSV) or JSON. You can
optionally specify the format when you initiate the inventory job. The default format is JSON. For more
information about the data fields returned in an inventory job output, see Response Body (p. 260) of
the Get Job Output API.
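For the CSV format, each inventory record can be split into fields keyed by the header row. The following sketch assumes fields contain no embedded commas or quotes; archive descriptions may require a full CSV parser:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: read a CSV-format inventory into one map per archive, keyed by
// the header row. Assumes no embedded commas or quotes in the fields.
public class InventoryCsv {
    public static List<Map<String, String>> parse(String csv) {
        String[] lines = csv.trim().split("\\R");
        String[] header = lines[0].split(",");
        List<Map<String, String>> rows = new ArrayList<>();
        for (int i = 1; i < lines.length; i++) {
            String[] fields = lines[i].split(",");
            Map<String, String> row = new LinkedHashMap<>();
            for (int j = 0; j < header.length && j < fields.length; j++) {
                row.put(header[j], fields[j]);
            }
            rows.add(row);
        }
        return rows;
    }

    public static void main(String[] args) {
        String csv = "ArchiveId,Size,SHA256TreeHash\n"
                   + "id-123,1024,abc\n"
                   + "id-456,2048,def\n";
        for (Map<String, String> row : parse(csv)) {
            System.out.println(row.get("ArchiveId") + " -> " + row.get("Size"));
        }
    }
}
```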
You need to specify an AWS Region where the vault resides. All operations you perform using this
client apply to that AWS Region.
2. Initiate an inventory retrieval job by executing the initiateJob method.
You must wait until the job output is ready for you to download. If you have either set a notification
configuration on the vault, or specified an Amazon Simple Notification Service (Amazon SNS) topic
when you initiated the job, S3 Glacier sends a message to the topic after it completes the job.
You can also poll S3 Glacier by calling the describeJob method to determine job completion
status. However, using an Amazon SNS topic for notification is the recommended approach. The code
example given in the following section uses Amazon SNS for S3 Glacier to publish a message.
4. Download the job output (vault inventory data) by executing the getJobOutput method.
You provide your account ID, job ID, and vault name by creating an instance of the
GetJobOutputRequest class. If you don't provide an account ID, then the account ID associated with
the credentials you provide to sign the request is used. For more information, see Using the AWS SDK
for Java with Amazon S3 Glacier (p. 117).
Note
For information about the underlying job-related REST API, see Job Operations (p. 249).
The example attaches a policy to the queue to enable the Amazon SNS topic to post messages to the
queue.
• Initiates a job to download the specified archive.
In the job request, the Amazon SNS topic that was created is specified so that S3 Glacier can publish a
notification to the topic after it completes the job.
• Checks the Amazon SQS queue for a message that contains the job ID.
If there is a message, parse the JSON and check if the job completed successfully. If it did, download
the archive.
• Cleans up by deleting the Amazon SNS topic and the Amazon SQS queue that it created.
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.policy.Policy;
import com.amazonaws.auth.policy.Principal;
import com.amazonaws.auth.policy.Resource;
import com.amazonaws.auth.policy.Statement;
import com.amazonaws.auth.policy.Statement.Effect;
import com.amazonaws.auth.policy.actions.SQSActions;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.GetJobOutputRequest;
import com.amazonaws.services.glacier.model.GetJobOutputResult;
import com.amazonaws.services.glacier.model.InitiateJobRequest;
import com.amazonaws.services.glacier.model.InitiateJobResult;
import com.amazonaws.services.glacier.model.JobParameters;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.CreateTopicRequest;
import com.amazonaws.services.sns.model.CreateTopicResult;
import com.amazonaws.services.sns.model.DeleteTopicRequest;
import com.amazonaws.services.sns.model.SubscribeRequest;
import com.amazonaws.services.sns.model.SubscribeResult;
import com.amazonaws.services.sns.model.UnsubscribeRequest;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.CreateQueueResult;
import com.amazonaws.services.sqs.model.DeleteQueueRequest;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.GetQueueAttributesResult;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;
try {
    setupSQS();
    setupSNS();
    String jobId = initiateJobRequest();
    // Wait for the job completion message on the SQS queue, then download.
    downloadJobOutput(jobId);
    cleanUp();
} catch (Exception e) {
System.err.println("Inventory retrieval failed.");
System.err.println(e);
}
}
Policy sqsPolicy =
new Policy().withStatements(
new Statement(Effect.Allow)
.withPrincipals(Principal.AllUsers)
.withActions(SQSActions.SendMessage)
.withResources(new Resource(sqsQueueARN)));
Map<String, String> queueAttributes = new HashMap<String, String>();
queueAttributes.put("Policy", sqsPolicy.toJson());
sqsClient.setQueueAttributes(new SetQueueAttributesRequest(sqsQueueURL,
queueAttributes));
}
private static void setupSNS() {
    CreateTopicRequest request = new CreateTopicRequest()
        .withName(snsTopicName);
    CreateTopicResult result = snsClient.createTopic(request);
    snsTopicARN = result.getTopicArn();

    // Subscribe the SQS queue to the topic so that job completion
    // messages published to the topic land in the queue.
    SubscribeRequest request2 = new SubscribeRequest()
        .withTopicArn(snsTopicARN)
        .withEndpoint(sqsQueueARN)
        .withProtocol("sqs");
    SubscribeResult result2 = snsClient.subscribe(request2);
    snsSubscriptionARN = result2.getSubscriptionArn();
}
private static String initiateJobRequest() {
    JobParameters jobParameters = new JobParameters()
        .withType("inventory-retrieval")
        .withSNSTopic(snsTopicARN);
    InitiateJobRequest request = new InitiateJobRequest()
        .withVaultName(vaultName)
        .withJobParameters(jobParameters);
    InitiateJobResult response = client.initiateJob(request);
    return response.getJobId();
}
while (!messageFound) {
    List<Message> msgs = sqsClient.receiveMessage(
        new ReceiveMessageRequest(sqsQueueUrl).withMaxNumberOfMessages(10)).getMessages();
    if (msgs.size() > 0) {
        for (Message m : msgs) {
            JsonParser jpMessage = factory.createJsonParser(m.getBody());
            JsonNode jobMessageNode = mapper.readTree(jpMessage);
            String jobMessage = jobMessageNode.get("Message").textValue();
            // The SNS "Message" field is itself JSON describing the job.
            JsonNode jobDescNode = mapper.readTree(factory.createJsonParser(jobMessage));
            String retrievedJobId = jobDescNode.get("JobId").textValue();
            String statusCode = jobDescNode.get("StatusCode").textValue();
            if (retrievedJobId.equals(jobId)) {
                messageFound = true;
                if (statusCode.equals("Succeeded")) {
                    jobSuccessful = true;
                }
            }
        }
    } else {
        Thread.sleep(sleepTime * 1000);
    }
}
return (messageFound && jobSuccessful);
}
You need to specify an AWS Region where the vault resides. All operations you perform using this
client apply to that AWS Region.
2. Initiate an inventory retrieval job by executing the InitiateJob method.
You provide job information in an InitiateJobRequest object. Amazon S3 Glacier (S3 Glacier)
returns a job ID in response. The response is available in an instance of the InitiateJobResponse
class.
AmazonGlacierClient client;
client = new AmazonGlacierClient(Amazon.RegionEndpoint.USWest2);
You must wait until the job output is ready for you to download. If you have either set a notification
configuration on the vault identifying an Amazon Simple Notification Service (Amazon SNS) topic, or
specified an Amazon SNS topic when you initiated a job, S3 Glacier sends a message to that topic after
it completes the job. The code example in the following section uses an Amazon SNS topic to which
S3 Glacier publishes a message.
You can also poll S3 Glacier by calling the DescribeJob method to determine job completion status,
although using an Amazon SNS topic for notification is the recommended approach.
4. Download the job output (vault inventory data) by executing the GetJobOutput method.
You provide your account ID, vault name, and the job ID information by creating an instance of the
GetJobOutputRequest class. If you don't provide an account ID, then the account ID associated with
the credentials you provide to sign the request is assumed. For more information, see Using the AWS
SDK for .NET with Amazon S3 Glacier (p. 120).
Note
For information about the underlying REST API for job operations, see Job Operations (p. 249).
The example attaches a policy to the queue to enable the Amazon SNS topic to post messages.
• Initiate a job to download the specified archive.
In the job request, the example specifies the Amazon SNS topic so that S3 Glacier can send a message
after it completes the job.
• Periodically check the Amazon SQS queue for a message.
If there is a message, parse the JSON and check if the job completed successfully. If it did, download
the archive. The code example uses the JSON.NET library (see JSON.NET) to parse the JSON.
• Clean up by deleting the Amazon SNS topic and the Amazon SQS queue it created.
Example
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Amazon.SQS;
using Amazon.SQS.Model;
using Newtonsoft.Json;
namespace glacier.amazon.com.docsamples
{
class VaultInventoryJobLowLevelUsingSNSSQS
{
static string topicArn;
static string queueUrl;
static string queueArn;
static string vaultName = "*** Provide vault name ***";
static string fileName = "*** Provide file name and path where to store inventory ***";
static AmazonSimpleNotificationServiceClient snsClient;
static AmazonSQSClient sqsClient;
const string SQS_POLICY =
"{" +
" \"Version\" : \"2012-10-17\"," +
" \"Statement\" : [" +
" {" +
" \"Sid\" : \"sns-rule\"," +
" \"Effect\" : \"Allow\"," +
" \"Principal\" : {\"AWS\" : \"arn:aws:iam::123456789012:root\" }," +
" \"Action\" : \"sqs:SendMessage\"," +
" \"Resource\" : \"{QuernArn}\"," +
" \"Condition\" : {" +
" \"ArnLike\" : {" +
" \"aws:SourceArn\" : \"{TopicArn}\"" +
" }" +
" }" +
" }" +
" ]" +
"}";
// Add the policy to the queue so SNS can send messages to the queue.
var policy = SQS_POLICY.Replace("{TopicArn}", topicArn).Replace("{QuernArn}",
queueArn);
sqsClient.SetQueueAttributes(new SetQueueAttributesRequest()
{
QueueUrl = queueUrl,
Attributes = new Dictionary<string, string>
{
{ QueueAttributeName.Policy, policy }
}
});
// Check queue for a message and if job completed successfully, download inventory.
ProcessQueue(jobId, client);
}
if (string.Equals(statusCode, GlacierUtils.JOB_STATUS_SUCCEEDED,
StringComparison.InvariantCultureIgnoreCase))
{
Console.WriteLine("Downloading job output");
DownloadOutput(jobId, client); // Save job output to the specified file location.
}
else if (string.Equals(statusCode, GlacierUtils.JOB_STATUS_FAILED,
                       StringComparison.InvariantCultureIgnoreCase))
{
  Console.WriteLine("Job failed... cannot download the inventory.");
}
jobDone = true;
sqsClient.DeleteMessage(new DeleteMessageRequest() { QueueUrl = queueUrl,
  ReceiptHandle = message.ReceiptHandle });
}
}
GetJobOutputResponse getJobOutputResponse =
client.GetJobOutput(getJobOutputRequest);
using (Stream webStream = getJobOutputResponse.Body)
{
using (Stream fileToSave = File.OpenWrite(fileName))
{
CopyStream(webStream, fileToSave);
}
}
}
}
}
}
}
1. Initiate a job of the inventory-retrieval type. For more information, see Initiate Job (POST
jobs) (p. 263).
2. After the job completes, download the inventory data. For more information, see Get Job Output
(GET output) (p. 257).
Topics
• (Prerequisite) Setting Up the AWS CLI (p. 21)
• Example: Downloading a Vault Inventory Using the AWS CLI (p. 50)
aws help
aws s3 ls
Expected output:
{
"location": "/111122223333/vaults/awsexamplevault/jobs/*** jobid ***",
"jobId": "*** jobid ***"
}
2. Use the describe-job command to check the status of the previous retrieval job.
Expected output:
{
"InventoryRetrievalParameters": {
"Format": "JSON"
},
"VaultARN": "*** vault arn ***",
"Completed": false,
"JobId": "*** jobid ***",
"Action": "InventoryRetrieval",
"CreationDate": "*** job creation date ***",
"StatusCode": "InProgress"
}
You must wait until the job output is ready for you to download. The job ID does not expire for at
least 24 hours after S3 Glacier completes the job. If you have either set a notification configuration
on the vault, or specified an Amazon Simple Notification Service (Amazon SNS) topic when you
initiated the job, S3 Glacier sends a message to the topic after it completes the job.
You can set the notification configuration for specific events on the vault. For more information, see
Configuring Vault Notifications in Amazon S3 Glacier (p. 51). S3 Glacier sends a message to the
specified SNS topic anytime the specific events occur.
4. When it's complete, use the get-job-output command to download the retrieval job to the file
output.json.
{
"VaultARN":"arn:aws:glacier:region:111122223333:vaults/awsexamplevault",
"InventoryDate":"*** job completion date ***",
"ArchiveList":[
{"ArchiveId":"*** archiveid ***",
"ArchiveDescription":"*** archive description (if set) ***",
"CreationDate":"*** archive creation date ***",
"Size":"*** archive size (in bytes) ***",
"SHA256TreeHash":"*** archive hash ***"
},
{"ArchiveId":
...
]}
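The inventory JSON above can be checked quickly on the client side. The following sketch (Java, with a hypothetical helper class; a real implementation should use a proper JSON parser) tallies the archive entries by counting occurrences of the ArchiveId key:

```java
// Rough tally of archives in a downloaded inventory. Assumption: we count
// occurrences of the "ArchiveId" key rather than doing full JSON parsing,
// which is fine for a quick check but not for production code.
public class InventoryStats {
    static int archiveCount(String inventoryJson) {
        int count = 0;
        int i = inventoryJson.indexOf("\"ArchiveId\"");
        while (i >= 0) {
            count++;
            i = inventoryJson.indexOf("\"ArchiveId\"", i + 1);
        }
        return count;
    }
}
```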
You can set a notification configuration on a vault so that when a job completes a message is sent to an
Amazon Simple Notification Service (Amazon SNS) topic.
Topics
• Configuring Vault Notifications in S3 Glacier: General Concepts (p. 51)
• Configuring Vault Notifications in Amazon S3 Glacier Using the AWS SDK for Java (p. 52)
• Configuring Vault Notifications in Amazon S3 Glacier Using the AWS SDK for .NET (p. 54)
• Configuring Vault Notifications in S3 Glacier Using the REST API (p. 56)
• Configuring Vault Notifications Using the Amazon S3 Glacier Console (p. 56)
"Topic": "arn:aws:sns:us-west-2:012345678901:mytopic",
"Events": ["ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"]
}
Note that you can configure only one Amazon SNS topic for a vault.
Note
Adding a notification configuration to a vault causes S3 Glacier to send a notification each time
the event specified in the notification configuration occurs. You can also optionally specify an
Amazon SNS topic in each job initiation request. If you add both the notification configuration
on the vault and also specify an Amazon SNS topic in your initiate job request, S3 Glacier sends
both notifications.
The job completion message that S3 Glacier sends includes information such as the type of job
(InventoryRetrieval, ArchiveRetrieval), the job completion status, the SNS topic name, the job status
code, and the vault ARN. The following is an example notification that S3 Glacier sent to an SNS topic
after an InventoryRetrieval job completed.
{
"Action": "InventoryRetrieval",
"ArchiveId": null,
"ArchiveSizeInBytes": null,
"Completed": true,
"CompletionDate": "2012-06-12T22:20:40.790Z",
"CreationDate": "2012-06-12T22:20:36.814Z",
"InventorySizeInBytes":11693,
"JobDescription": "my retrieval job",
"JobId":"HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID",
"SHA256TreeHash":null,
"SNSTopic": "arn:aws:sns:us-west-2:012345678901:mytopic",
"StatusCode":"Succeeded",
"StatusMessage": "Succeeded",
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
}
If the Completed field is true, you must also check the StatusCode to determine whether the job
completed successfully or failed.
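That two-field check can be sketched as follows, assuming the flat JSON layout shown in the example above (a minimal illustration in Java; production code should use a real JSON library rather than string scanning):

```java
// Minimal sketch of checking a job-completion notification without a JSON
// library. Assumptions: top-level keys only, string values without escaped
// quotes. The class name and helpers are illustrative.
public class JobNotificationCheck {
    // Extract a top-level string value, e.g. "StatusCode": "Succeeded".
    static String field(String json, String name) {
        int key = json.indexOf("\"" + name + "\"");
        if (key < 0) return null;
        int colon = json.indexOf(':', key);
        int start = json.indexOf('"', colon + 1);
        if (start < 0) return null;
        int end = json.indexOf('"', start + 1);
        return json.substring(start + 1, end);
    }

    static boolean jobSucceeded(String json) {
        // Completed must be true AND StatusCode must be "Succeeded".
        String compact = json.replaceAll("\\s", "");
        return compact.contains("\"Completed\":true")
            && "Succeeded".equals(field(json, "StatusCode"));
    }
}
```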
Note that the Amazon SNS topic must allow the vault to publish a notification. By default, only the SNS
topic owner can publish a message to the topic. However, if the SNS topic and the vault are owned by
different AWS accounts, then you must configure the SNS topic to accept publications from the vault.
You can configure the SNS topic policy in the Amazon SNS console.
For more information about Amazon SNS, see Getting Started with Amazon SNS.
You need to specify an AWS Region where the vault resides. All operations you perform using this
client apply to that AWS Region.
2. Provide notification configuration information by creating an instance of the
SetVaultNotificationsRequest class.
You need to provide the vault name, notification configuration information, and account ID. In
specifying a notification configuration, you provide the Amazon Resource Name (ARN) of an existing
Amazon SNS topic and one or more events for which you want to be notified. For a list of supported
events, see Set Vault Notification Configuration (PUT notification-configuration) (p. 219).
3. Run the setVaultNotifications method by providing the request object as a parameter.
The following Java code snippet illustrates the preceding steps. The snippet sets a notification
configuration on a vault. The configuration requests Amazon S3 Glacier (S3 Glacier) to send a notification
to the specified Amazon SNS topic when either the ArchiveRetrievalCompleted event or the
InventoryRetrievalCompleted event occurs.
Note
For information about the underlying REST API, see Vault Operations (p. 180).
Example
import java.io.IOException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.DeleteVaultNotificationsRequest;
import com.amazonaws.services.glacier.model.GetVaultNotificationsRequest;
import com.amazonaws.services.glacier.model.GetVaultNotificationsResult;
import com.amazonaws.services.glacier.model.SetVaultNotificationsRequest;
import com.amazonaws.services.glacier.model.VaultNotificationConfig;
try {
    VaultNotificationConfig config = new VaultNotificationConfig()
        .withSNSTopic(snsTopicARN)
        .withEvents("ArchiveRetrievalCompleted", "InventoryRetrievalCompleted");
    SetVaultNotificationsRequest request = new SetVaultNotificationsRequest()
        .withVaultName(vaultName)
        .withVaultNotificationConfig(config);
    client.setVaultNotifications(request);
    System.out.println("Notification configured for vault: " + vaultName);
} catch (Exception e) {
    System.err.println("Vault operations failed: " + e.getMessage());
}
You need to specify an AWS Region where the vault resides. All operations you perform using this
client apply to that AWS Region.
2. Provide notification configuration information by creating an instance of the
SetVaultNotificationsRequest class.
You need to provide the vault name, notification configuration information, and account ID. If you
don't provide an account ID, then the account ID associated with the credentials you provide to sign
the request is assumed. For more information, see Using the AWS SDK for .NET with Amazon S3
Glacier (p. 120).
In specifying a notification configuration, you provide the Amazon Resource Name (ARN) of an existing
Amazon SNS topic and one or more events for which you want to be notified. For a list of supported
events, see Set Vault Notification Configuration (PUT notification-configuration) (p. 219).
3. Run the SetVaultNotifications method by providing the request object as a parameter.
4. After setting notification configuration on a vault, you can retrieve configuration
information by calling the GetVaultNotifications method, and remove it by calling the
DeleteVaultNotifications method provided by the client.
For step-by-step instructions to run the following example, see Running Code Examples (p. 121). You
need to update the code as shown and provide an existing vault name and an Amazon SNS topic.
Example
using System;
using System.Collections.Generic;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class VaultNotificationSetGetDelete
{
static string vaultName = "examplevault";
static string snsTopicARN = "*** Provide Amazon SNS topic ARN ***";
If a vault is configured to notify for a specific event and you also specify notification in the job
initiation request, then two notifications are sent.
1. Sign in to the AWS Management Console and open the S3 Glacier console at https://
console.aws.amazon.com/glacier.
2. Select a vault in the vault list.
To specify an existing Amazon SNS topic, enter the topic's Amazon Resource Name (ARN) in the
Amazon SNS Topic ARN text box. The ARN has the following form:

arn:aws:sns:region:accountId:topicname

You can find an Amazon SNS topic ARN in the Amazon Simple Notification Service (Amazon SNS)
console.

To create a new Amazon SNS topic, click Create a new SNS topic.
For example, to trigger a notification only when archive retrieval jobs are complete, check only Get
Archive Job Complete.
7. Click Save.
Important
By default, a new topic does not have any subscriptions associated with it. To receive
notifications published to this topic, you must subscribe to the topic. Follow the steps in
Subscribe to a Topic in the Amazon Simple Notification Service Getting Started Guide to
subscribe to a new topic.
Topics
• Deleting a Vault in Amazon S3 Glacier Using the AWS SDK for Java (p. 59)
• Deleting a Vault in Amazon S3 Glacier Using the AWS SDK for .NET (p. 60)
• Deleting a Vault in S3 Glacier Using the REST API (p. 61)
• Deleting an Empty Vault Using the Amazon S3 Glacier Console (p. 61)
• Deleting a Vault in Amazon S3 Glacier Using the AWS Command Line Interface (p. 61)
You need to specify an AWS Region from where you want to delete a vault. All operations you perform
using this client apply to that AWS Region.
2. Provide request information by creating an instance of the DeleteVaultRequest class.
You need to provide the vault name and account ID. If you don't provide an account ID, then the
account ID associated with the credentials you provide to sign the request is assumed. For more information,
see Using the AWS SDK for Java with Amazon S3 Glacier (p. 117).
3. Run the deleteVault method by providing the request object as a parameter.
Amazon S3 Glacier (S3 Glacier) deletes the vault only if it is empty. For more information, see Delete
Vault (DELETE vault) (p. 189).
try {
DeleteVaultRequest request = new DeleteVaultRequest()
.withVaultName("*** provide vault name ***");
client.deleteVault(request);
System.out.println("Deleted vault: " + vaultName);
} catch (Exception e) {
System.err.println(e.getMessage());
}
Note
For information about the underlying REST API, see Delete Vault (DELETE vault) (p. 189).
Topics
• Deleting a Vault Using the High-Level API of the AWS SDK for .NET (p. 60)
• Deleting a Vault Using the Low-Level API of the AWS SDK for .NET (p. 60)
Example: Deleting a Vault Using the High-Level API of the AWS SDK for .NET
For a working code example, see Example: Vault Operations Using the High-Level API of the AWS SDK
for .NET (p. 29). The C# code example shows basic vault operations including create and delete vault.
You need to specify an AWS Region from where you want to delete a vault. All operations you perform
using this client apply to that AWS Region.
2. Provide request information by creating an instance of the DeleteVaultRequest class.
You need to provide the vault name and account ID. If you don't provide an account ID, then the
account ID associated with the credentials you provide to sign the request is assumed. For more information,
see Using the AWS SDK for .NET with Amazon S3 Glacier (p. 120).
3. Run the DeleteVault method by providing the request object as a parameter.
Amazon S3 Glacier (S3 Glacier) deletes the vault only if it is empty. For more information, see Delete
Vault (DELETE vault) (p. 189).
The following C# code snippet illustrates the preceding steps. The snippet deletes the specified vault
in the US East (N. Virginia) Region.
AmazonGlacier client;
client = new AmazonGlacierClient(Amazon.RegionEndpoint.USEast1);
Note
For information about the underlying REST API, see Delete Vault (DELETE vault) (p. 189).
Example: Deleting a Vault Using the Low-Level API of the AWS SDK for .NET
For a working code example, see Example: Vault Operations Using the Low-Level API of the AWS SDK
for .NET (p. 30). The C# code example shows basic vault operations including create and delete vault.
The following are the steps to delete an empty vault using the S3 Glacier console.
1. Sign in to the AWS Management Console and open the S3 Glacier console at https://
console.aws.amazon.com/glacier.
2. From the AWS Region selector, select the AWS Region where the vault exists.
3. Select the vault.
4. Click Delete Vault.
Topics
• (Prerequisite) Setting Up the AWS CLI (p. 21)
• Example: Deleting an Empty Vault Using the AWS CLI (p. 62)
• Example: Deleting a Nonempty Vault Using the AWS CLI (p. 62)
aws help
aws s3 ls
•
aws glacier delete-vault --vault-name awsexamplevault --account-id 111122223333
Expected output:
{
"location": "/111122223333/vaults/awsexamplevault/jobs/*** jobid ***",
"jobId": "*** jobid ***"
}
2. Use the describe-job command to check the status of the previous retrieval job.
Expected output:
{
"InventoryRetrievalParameters": {
"Format": "JSON"
},
"VaultARN": "*** vault arn ***",
"Completed": false,
"JobId": "*** jobid ***",
"Action": "InventoryRetrieval",
"CreationDate": "*** job creation date ***",
"StatusCode": "InProgress"
}
You must wait until the job output is ready for you to download. If you set a notification
configuration on the vault or specified an Amazon Simple Notification Service (Amazon SNS) topic
when you initiated the job, S3 Glacier sends a message to the topic after it completes the job.
You can set notification configuration for specific events on the vault. For more information, see
Configuring Vault Notifications in Amazon S3 Glacier (p. 51). S3 Glacier sends a message to the
specified SNS topic anytime the specific event occurs.
4. When it's complete, use the get-job-output command to download the retrieval job to the file
output.json.
{
"VaultARN":"arn:aws:glacier:region:111122223333:vaults/awsexamplevault",
"InventoryDate":"*** job completion date ***",
"ArchiveList":[
{"ArchiveId":"*** archiveid ***",
"ArchiveDescription":*** archive description (if set) ***,
"CreationDate":"*** archive creation date ***",
"Size":"*** archive size (in bytes) ***",
"SHA256TreeHash":"*** archive hash ***"
}
{"ArchiveId":
...
]}
5. Use the delete-archive command to delete each archive from a vault until none remain.
7. When it's complete, use the delete-vault command to delete a vault with no archives.
The following topics describe how you can add, list, and remove tags for vaults.
Topics
• Tagging Vaults Using the Amazon S3 Glacier Console (p. 64)
• Tagging Vaults Using the Amazon S3 Glacier API (p. 65)
• Related Sections (p. 65)
1. Sign in to the AWS Management Console and open the S3 Glacier console at https://
console.aws.amazon.com/glacier.
2. From the AWS Region selector, choose an AWS Region.
3. On the Amazon S3 Glacier Vaults page, choose a vault.
4. Choose the Tags tab. The tags for that vault will appear.
1. Open the S3 Glacier console, and then choose a Region from the AWS Region selector.
2. On the Amazon S3 Glacier Vaults page, choose a vault.
3. Choose the Tags tab.
4. Specify the tag key in the Key field, optionally specify a tag value in the Value field, and then choose
Save.
If the Save button is not enabled, either the tag key or the tag value that you specified does not
meet the tag restrictions. For more about tag restrictions, see Tag Restrictions (p. 155).
1. Open the S3 Glacier console, and then choose a Region from the AWS Region selector.
2. On the Amazon S3 Glacier Vaults page, choose a vault.
3. Choose the Tags tab, and then choose the x at the end of the row that describes the tag you want to
delete.
4. Choose Delete.
Related Sections
• Tagging Amazon S3 Glacier Resources (p. 155)
Topics
• Vault Locking Overview (p. 65)
• Locking a Vault by Using the Amazon S3 Glacier API (p. 66)
S3 Glacier enforces the controls set in the vault lock policy to help achieve your compliance objectives,
for example, for data retention. You can deploy a variety of compliance controls in a vault lock policy
using the AWS Identity and Access Management (IAM) policy language. For more information about vault
lock policies, see Amazon S3 Glacier Access Control with Vault Lock Policies (p. 136).
A vault lock policy is different from a vault access policy. Both policies govern access controls to
your vault. However, a vault lock policy can be locked to prevent future changes, providing strong
enforcement for your compliance controls. You can use the vault lock policy to deploy regulatory and
compliance controls, which typically require tight controls on data access. In contrast, you use a vault
access policy to implement access controls that are not compliance related, temporary, and subject to
frequent modification. Vault lock and vault access policies can be used together. For example, you can
implement time-based data retention rules in the vault lock policy (deny deletes), and grant read access
to designated third parties or your business partners (allow reads).
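As an illustration of a deny-deletes retention rule, a vault lock policy might look like the following sketch. The account ID, vault name, and 365-day retention window are placeholders; glacier:ArchiveAgeInDays is the condition key used for archive-age rules.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "deny-based-on-archive-age",
      "Principal": "*",
      "Effect": "Deny",
      "Action": "glacier:DeleteArchive",
      "Resource": "arn:aws:glacier:us-west-2:123456789012:vaults/examplevault",
      "Condition": {
        "NumericLessThan": {
          "glacier:ArchiveAgeInDays": "365"
        }
      }
    }
  ]
}
```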
1. Initiate the lock by attaching a vault lock policy to your vault, which sets the lock to an in-progress
state and returns a lock ID. While in the in-progress state, you have 24 hours to validate your vault
lock policy before the lock ID expires.
2. Use the lock ID to complete the lock process. If the vault lock policy doesn't work as expected, you can
stop the lock and restart from the beginning. For information on how to use the S3 Glacier API to lock
a vault, see Locking a Vault by Using the Amazon S3 Glacier API (p. 66).
If you don't complete the vault lock process within 24 hours after entering the in-progress state, your
vault automatically exits the in-progress state, and the vault lock policy is removed. You can call Initiate
Vault Lock (POST lock-policy) (p. 205) again to install a new vault lock policy and transition into the in-
progress state.
The in-progress state provides the opportunity to test your vault lock policy before you lock it. Your vault
lock policy takes full effect during the in-progress state just as if the vault has been locked, except that
you can remove the policy by calling Abort Vault Lock (DELETE lock-policy) (p. 180). To fine-tune your
policy, you can repeat the Abort Vault Lock (DELETE lock-policy) (p. 180)/Initiate Vault Lock (POST
lock-policy) (p. 205) combination as many times as necessary to validate your vault lock policy changes.
After you validate the vault lock policy, you can call Complete Vault Lock (POST lockId) (p. 187) with
the most recent lock ID to complete the vault locking process. Your vault transitions to a locked state
where the vault lock policy is unchangeable and can no longer be removed by calling Abort Vault Lock
(DELETE lock-policy) (p. 180).
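The state transitions described above can be modeled as a small local sketch. This is a toy simulation, not an AWS API call; the class name and lock-ID format are hypothetical.

```java
// Toy model of the vault lock workflow: initiate -> in-progress (with a
// lock ID), abort while in progress, or complete with the most recent
// lock ID to reach the permanent locked state.
public class VaultLockModel {
    enum State { UNLOCKED, IN_PROGRESS, LOCKED }

    State state = State.UNLOCKED;
    String lockId;
    private int nextId = 0;

    // Initiate Vault Lock: attach the policy, enter in-progress, get a lock ID.
    String initiate() {
        state = State.IN_PROGRESS;
        lockId = "lock-" + (++nextId);
        return lockId;
    }

    // Abort Vault Lock: only valid while in progress; removes the policy.
    void abort() {
        if (state == State.IN_PROGRESS) {
            state = State.UNLOCKED;
            lockId = null;
        }
    }

    // Complete Vault Lock: requires the most recent lock ID; locks permanently.
    boolean complete(String id) {
        if (state == State.IN_PROGRESS && id.equals(lockId)) {
            state = State.LOCKED;
            return true;
        }
        return false;
    }
}
```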
Related Sections
• Amazon S3 Glacier Access Control with Vault Lock Policies (p. 136)
• Abort Vault Lock (DELETE lock-policy) (p. 180)
• Complete Vault Lock (POST lockId) (p. 187)
• Get Vault Lock (GET lock-policy) (p. 200)
• Initiate Vault Lock (POST lock-policy) (p. 205)
TJgHcrOSfAkV6hdPqOATYfp_0ZaxL1pIBOc02iZ0gDPMr2ig-nhwd_PafstsdIf6HSrjHnP-3p6LCJClYytFT_CBhT9CwNxbRaM5MetS3I-GqwxI3Y8QtgbJbhEQPs0mJ3KExample
Archive IDs are 138 bytes long. When you upload an archive, you can provide an optional description.
You can retrieve an archive using its ID but not its description.
Important
S3 Glacier provides a management console. You can use the console to create and delete vaults.
However, all other interactions with S3 Glacier require that you use the AWS Command Line
Interface (CLI) or write code. For example, to upload data, such as photos, videos, and other
documents, you must either use the AWS CLI or write code to make requests, using either the
REST API directly or by using the AWS SDKs. For more information about using S3 Glacier
with the AWS CLI, go to AWS CLI Reference for S3 Glacier. To install the AWS CLI, go to AWS
Command Line Interface.
Topics
• Archive Operations in Amazon S3 Glacier (p. 67)
• Maintaining Client-Side Archive Metadata (p. 68)
• Uploading an Archive in Amazon S3 Glacier (p. 68)
• Downloading an Archive in Amazon S3 Glacier (p. 83)
• Deleting an Archive in Amazon S3 Glacier (p. 109)
• Querying Archives in Amazon S3 Glacier (p. 115)
If you maintain client-side archive metadata, note that S3 Glacier maintains a vault inventory that
includes archive IDs and any descriptions you provided during the archive upload. You might occasionally
download the vault inventory to reconcile issues in the client-side database you maintain for the
archive metadata. However, S3 Glacier takes a vault inventory approximately once a day. When you
request a vault inventory, S3 Glacier returns the last inventory it prepared, which is a point-in-time
snapshot.
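The reconciliation described above amounts to set differences over archive IDs. The following sketch assumes the inventory and the local database have already been reduced to plain ID collections (the class and method names are hypothetical; extracting IDs from the inventory JSON is a separate step):

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Sketch: reconcile client-side archive IDs against a downloaded vault
// inventory, both represented as plain ID collections.
public class InventoryReconcile {
    // IDs present in the vault inventory but missing from the local database.
    static Set<String> missingLocally(Collection<String> inventoryIds,
                                      Collection<String> localIds) {
        Set<String> diff = new TreeSet<>(inventoryIds);
        diff.removeAll(new HashSet<>(localIds));
        return diff;
    }

    // IDs tracked locally that the inventory no longer lists (e.g. deleted).
    static Set<String> staleLocally(Collection<String> inventoryIds,
                                    Collection<String> localIds) {
        Set<String> diff = new TreeSet<>(localIds);
        diff.removeAll(new HashSet<>(inventoryIds));
        return diff;
    }
}
```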
For information about using S3 Glacier with the AWS CLI, go to AWS CLI Reference for S3 Glacier. To
install the AWS CLI, go to AWS Command Line Interface. The following Uploading topics describe how to
upload archives to S3 Glacier by using the AWS SDK for Java, the AWS SDK for .NET, and the REST API.
Topics
• Options for Uploading an Archive to Amazon S3 Glacier (p. 69)
• Uploading an Archive in a Single Operation (p. 69)
• Uploading Large Archives in Parts (Multipart Upload) (p. 75)
• Upload archives in a single operation – In a single operation, you can upload archives from 1
byte up to 4 GB in size. However, we encourage S3 Glacier customers to use multipart upload to
upload archives greater than 100 MB. For more information, see Uploading an Archive in a Single
Operation (p. 69).
• Upload archives in parts – Using the multipart upload API, you can upload large archives, up to about
40,000 GB (10,000 * 4 GB).
The multipart upload API call is designed to improve the upload experience for larger archives. You
can upload archives in parts. These parts can be uploaded independently, in any order, and in parallel.
If a part upload fails, you only need to upload that part again and not the entire archive. You can
use multipart upload for archives from 1 byte to about 40,000 GB in size. For more information, see
Uploading Large Archives in Parts (Multipart Upload) (p. 75).
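Planning part boundaries for a multipart upload is straightforward arithmetic. The following sketch (a hypothetical helper, not SDK code) computes the start offset of each part for a fixed part size:

```java
// Sketch of planning part boundaries for a multipart upload. Assumption:
// a fixed part size. Note that S3 Glacier additionally requires the part
// size to be a power-of-two number of megabytes between 1 MB and 4 GB.
public class PartPlanner {
    static long[] partStarts(long archiveSize, long partSize) {
        int parts = (int) ((archiveSize + partSize - 1) / partSize); // ceiling division
        long[] starts = new long[parts];
        for (int i = 0; i < parts; i++) {
            starts[i] = i * partSize; // each part covers [start, start + partSize)
        }
        return starts;
    }
}
```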
Important
The S3 Glacier vault inventory is updated only once a day. When you upload an archive, you will
not see the new archive in your vault (in the console or in your downloaded vault inventory list)
until the vault inventory has been updated.
To upload existing data to Amazon S3 Glacier (S3 Glacier), you might consider using one of the AWS
Snowball device types to import data into Amazon S3, and then move it to the S3 Glacier storage class
for archival using lifecycle rules. When you transition Amazon S3 objects to the S3 Glacier storage class,
Amazon S3 internally uses S3 Glacier for durable storage at lower cost. Although the objects are stored
in S3 Glacier, they remain Amazon S3 objects that you manage in Amazon S3, and you cannot access
them directly through S3 Glacier.
For more information about Amazon S3 lifecycle configuration and transitioning objects to the S3
Glacier storage class, see Object Lifecycle Management and Transitioning Objects in the Amazon Simple
Storage Service Developer Guide.
Topics
• Uploading an Archive in a Single Operation Using the AWS SDK for Java (p. 70)
• Uploading an Archive in a Single Operation Using the AWS SDK for .NET in Amazon S3
Glacier (p. 72)
• Uploading an Archive in a Single Operation Using the REST API (p. 75)
Topics
• Uploading an Archive Using the High-Level API of the AWS SDK for Java (p. 70)
• Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK for Java
(p. 71)
Uploading an Archive Using the High-Level API of the AWS SDK for Java
The ArchiveTransferManager class of the high-level API provides the upload method, which you can
use to upload an archive to a vault.
Note
You can use the upload method to upload small or large archives. Depending on the archive
size you are uploading, this method determines whether to upload it in a single operation or use
the multipart upload API to upload the archive in parts.
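That size-based dispatch can be sketched as follows. The 100 MB cutoff mirrors the guide's recommendation above; the SDK's internal threshold may differ, and the class is illustrative.

```java
// Sketch of the size-based decision a high-level upload method makes:
// small archives go up in a single operation, large ones via multipart.
public class UploadDispatch {
    // 100 MB, per the guide's advice; not necessarily the SDK's actual cutoff.
    static final long MULTIPART_THRESHOLD = 100L * 1024 * 1024;

    static String chooseMethod(long archiveSizeBytes) {
        return archiveSizeBytes > MULTIPART_THRESHOLD ? "multipart" : "single";
    }
}
```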
Example: Uploading an Archive Using the High-Level API of the AWS SDK for Java
The following Java code example uploads an archive to a vault (examplevault) in the US West (Oregon)
Region (us-west-2). For a list of supported AWS Regions and endpoints, see Accessing Amazon S3
Glacier (p. 5).
For step-by-step instructions on how to run this example, see Running Java Examples for Amazon S3
Glacier Using Eclipse (p. 119). You need to update the code as shown with the name of the vault you
want to upload to and the name of the file you want to upload.
Example
import java.io.File;
import java.io.IOException;
import java.util.Date;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.transfer.ArchiveTransferManager;
import com.amazonaws.services.glacier.transfer.UploadResult;
try {
    ArchiveTransferManager atm = new ArchiveTransferManager(client, credentials);
    String archiveId = atm.upload(vaultName, "Archive " + (new Date()),
            new File(archiveToUpload)).getArchiveId();
    System.out.println("Archive ID: " + archiveId);
} catch (Exception e) {
    System.err.println(e);
}
}
}
Uploading an Archive in a Single Operation Using the Low-Level API of the AWS
SDK for Java
The low-level API provides methods for all the archive operations. The following are the steps to upload
an archive using the AWS SDK for Java.
You need to specify an AWS Region where you want to upload the archive. All operations you perform
using this client apply to that AWS Region.
2. Provide request information by creating an instance of the UploadArchiveRequest class.
In addition to the data you want to upload, you need to provide a checksum (SHA-256 tree hash) of
the payload, the vault name, the content length of the data, and your account ID.
If you don't provide an account ID, then the account ID associated with the credentials you provide to
sign the request is assumed. For more information, see Using the AWS SDK for Java with Amazon S3
Glacier (p. 117).
3. Run the uploadArchive method by providing the request object as a parameter.
In response, Amazon S3 Glacier (S3 Glacier) returns an archive ID of the newly uploaded archive.
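The checksum mentioned in step 2 is a SHA-256 tree hash: hash the payload in 1 MB chunks, then combine adjacent hashes pairwise until one root hash remains. A simplified sketch follows (assumptions: a non-empty payload held fully in memory; the SDK's TreeHashGenerator streams files instead):

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Simplified SHA-256 tree hash over 1 MB chunks. For payloads smaller than
// 1 MB the tree hash equals the plain SHA-256 of the data.
public class TreeHash {
    static final int CHUNK = 1024 * 1024; // 1 MB leaves

    static byte[] treeHash(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        List<byte[]> level = new ArrayList<>();
        // Leaf level: hash each 1 MB chunk (the final chunk may be shorter).
        for (int off = 0; off < data.length; off += CHUNK) {
            int len = Math.min(CHUNK, data.length - off);
            md.reset();
            md.update(data, off, len);
            level.add(md.digest());
        }
        // Combine pairs of hashes until a single root remains; an odd hash
        // at the end of a level is carried up unchanged.
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                if (i + 1 < level.size()) {
                    md.reset();
                    md.update(level.get(i));
                    md.update(level.get(i + 1));
                    next.add(md.digest());
                } else {
                    next.add(level.get(i));
                }
            }
            level = next;
        }
        return level.get(0);
    }
}
```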
AmazonGlacierClient client;
Example: Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK
for Java
The following Java code example uses the AWS SDK for Java to upload an archive to a vault
(examplevault). For step-by-step instructions on how to run this example, see Running Java Examples
for Amazon S3 Glacier Using Eclipse (p. 119). You need to update the code as shown with the name of
the vault you want to upload to and the name of the file you want to upload.
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.TreeHashGenerator;
import com.amazonaws.services.glacier.model.UploadArchiveRequest;
import com.amazonaws.services.glacier.model.UploadArchiveResult;
public class ArchiveUploadLowLevel {

    public static String vaultName = "examplevault";
    public static String archiveFilePath = "*** provide file path to upload ***";
    public static AmazonGlacierClient client; // initialized with credentials in the full example

    public static void main(String[] args) {
        try {
            // First open file and read.
            File file = new File(archiveFilePath);
            InputStream is = new FileInputStream(file);
            byte[] body = new byte[(int) file.length()];
            is.read(body);

            // Send request.
            UploadArchiveRequest request = new UploadArchiveRequest()
                .withVaultName(vaultName)
                .withChecksum(TreeHashGenerator.calculateTreeHash(new File(archiveFilePath)))
                .withBody(new ByteArrayInputStream(body))
                .withContentLength((long) body.length);

            UploadArchiveResult uploadedArchive = client.uploadArchive(request);
            System.out.println("ArchiveID: " + uploadedArchive.getArchiveId());

        } catch (Exception e) {
            System.err.println("Archive not uploaded.");
            System.err.println(e);
        }
    }
}
Topics
• Uploading an Archive Using the High-Level API of the AWS SDK for .NET (p. 73)
• Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK for .NET
(p. 73)
Uploading an Archive Using the High-Level API of the AWS SDK for .NET
The ArchiveTransferManager class of the high-level API provides the Upload method that you can
use to upload an archive to a vault.
Note
You can use the Upload method to upload small or large files. Depending on the file size you
are uploading, this method determines whether to upload it in a single operation or use the
multipart upload API to upload the file in parts.
Example: Uploading an Archive Using the High-Level API of the AWS SDK for .NET
The following C# code example uploads an archive to a vault (examplevault) in the US West (Oregon)
Region.
For step-by-step instructions on how to run this example, see Running Code Examples (p. 121). You
need to update the code as shown with the name of the file you want to upload.
Example
using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
  class ArchiveUploadHighLevel
  {
    static string vaultName = "examplevault";
    static string archiveToUpload = "*** Provide file name (with full path) to upload ***";

    public static void Main(string[] args)
    {
      try
      {
        var manager = new ArchiveTransferManager(Amazon.RegionEndpoint.USWest2);
        // Upload the archive; the manager picks single or multipart upload by size.
        string archiveId = manager.Upload(vaultName, "upload archive test", archiveToUpload).ArchiveId;
        Console.WriteLine("Archive ID: " + archiveId);
      }
      catch (Exception e)
      {
        Console.WriteLine(e.Message);
      }
    }
  }
}
Uploading an Archive in a Single Operation Using the Low-Level API of the AWS
SDK for .NET
The low-level API provides methods for all the archive operations. The following are the steps to upload
an archive using the AWS SDK for .NET.
1. Create an instance of the AmazonGlacierClient class (the client).
You need to specify an AWS Region where you want to upload the archive. All operations you perform
using this client apply to that AWS Region.
2. Provide request information by creating an instance of the UploadArchiveRequest class.
In addition to the data you want to upload, you need to provide a checksum (SHA-256 tree hash) of
the payload, the vault name, and your account ID.
If you don't provide an account ID, then the account ID associated with the credentials you provide to
sign the request is assumed. For more information, see Using the AWS SDK for .NET with Amazon S3
Glacier (p. 120).
3. Run the UploadArchive method by providing the request object as a parameter.
Example: Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK
for .NET
The following C# code example illustrates the preceding steps. The example uses the AWS SDK for .NET
to upload an archive to a vault (examplevault).
Note
For information about the underlying REST API to upload an archive in a single request, see
Upload Archive (POST archive) (p. 224).
For step-by-step instructions on how to run this example, see Running Code Examples (p. 121). You
need to update the code as shown with the name of the file you want to upload.
Example
using System;
using System.IO;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class ArchiveUploadSingleOpLowLevel
{
static string vaultName = "examplevault";
static string archiveToUpload = "*** Provide file name (with full path) to upload ***";
1. Initiate Multipart Upload
When you send a request to initiate a multipart upload, S3 Glacier returns a multipart upload ID,
which is a unique identifier for your multipart upload. Any subsequent multipart upload operations
require this ID. This ID doesn't expire for at least 24 hours after S3 Glacier completes the job.
In your request to start a multipart upload, specify the part size in number of bytes. Each part you
upload, except the last part, must be this size.
Note
You don't need to know the overall archive size when using multipart uploads. This means
that you can use multipart uploads in cases where you don't know the archive size when
you start uploading the archive. You only need to decide the part size at the time you start
the multipart upload.
In the initiate multipart upload request, you can also provide an optional archive description.
2. Upload Parts
For each part upload request, you must include the multipart upload ID you obtained in step
1. In the request, you must also specify the content range, in bytes, identifying the position of
the part in the final archive. S3 Glacier later uses the content range information to assemble the
archive in proper sequence. Because you provide the content range for each part that you upload,
it determines the part's position in the final assembly of the archive, and therefore you can upload
parts in any order. You can also upload parts in parallel. If you upload a new part using the same
content range as a previously uploaded part, the previously uploaded part is overwritten.
3. Complete (or stop) Multipart Upload
After uploading all the archive parts, you use the complete operation. Again, you must specify the
upload ID in your request. S3 Glacier creates an archive by concatenating parts in ascending order
based on the content range you provided. S3 Glacier's response to a Complete Multipart Upload
request includes an archive ID for the newly created archive. If you provided an optional archive
description in the Initiate Multipart Upload request, S3 Glacier associates it with the assembled
archive. After you successfully complete a multipart upload, you cannot refer to the multipart
upload ID, which means you also cannot access the parts associated with that ID.
If you stop a multipart upload, you cannot upload any more parts using that multipart upload ID.
All storage consumed by any parts associated with the stopped multipart upload is freed. If any part
uploads were in progress, they can still succeed or fail even after you stop the upload.
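The content-range bookkeeping described in step 2 is simple arithmetic over the part size; the following sketch derives the byte range for each part of an archive (the class and method names are illustrative, and the actual Content-Range header uses the form `bytes start-end/*`):

```java
import java.util.ArrayList;
import java.util.List;

public class PartRanges {
    // Returns "startByte-endByte" strings (inclusive ends) for each part of an
    // archive of the given total size, uploaded with a fixed part size.
    // Every part except possibly the last has exactly partSize bytes.
    public static List<String> contentRanges(long archiveSize, long partSize) {
        List<String> ranges = new ArrayList<>();
        for (long start = 0; start < archiveSize; start += partSize) {
            long end = Math.min(start + partSize, archiveSize) - 1;
            ranges.add(start + "-" + end);
        }
        return ranges;
    }
}
```

Because each range fixes the part's position in the final archive, the parts can be sent in any order, or in parallel.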
• List Parts—Using this operation, you can list the parts of a specific multipart upload. It returns
information about the parts that you have uploaded for a multipart upload. For each list parts request,
S3 Glacier returns information for up to 1,000 parts. If there are more parts to list, the result is
paginated and the response includes a marker from which to continue the list. You need to send
additional requests to retrieve subsequent parts. Note that the returned list of parts doesn't include
parts that haven't finished uploading.
• List Multipart Uploads—Using this operation, you can obtain a list of multipart uploads in progress.
An in-progress multipart upload is an upload that you have initiated but have not yet completed or
stopped. For each list multipart uploads request, S3 Glacier returns up to 1,000 multipart uploads. If
there are more multipart uploads to list, the result is paginated and the response includes a marker
from which to continue the list. You need to send additional requests to retrieve the remaining
multipart uploads.
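Both list operations follow the same marker-based pagination loop: request a page, collect the results, and repeat with the returned marker until none is returned. A generic sketch of that loop, with a pluggable page source standing in for the List Parts or List Multipart Uploads call (all names here are illustrative, not SDK API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class MarkerPagination {
    // One page of results plus the marker to continue from (null when done).
    public static class Page<T> {
        public final List<T> items;
        public final String nextMarker;
        public Page(List<T> items, String nextMarker) {
            this.items = items;
            this.nextMarker = nextMarker;
        }
    }

    // Repeatedly fetches pages, starting with a null marker, until the
    // service stops returning a continuation marker.
    public static <T> List<T> listAll(Function<String, Page<T>> fetchPage) {
        List<T> all = new ArrayList<>();
        String marker = null;
        do {
            Page<T> page = fetchPage.apply(marker);
            all.addAll(page.items);
            marker = page.nextMarker;
        } while (marker != null);
        return all;
    }
}
```

With the real API, `fetchPage` would issue a list request with the marker set and translate the response into a page of parts or uploads.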
Quick Facts
The following table provides multipart upload core specifications.
Item        Specification
Part size   1 MB to 4 GB; the last part can be smaller than 1 MB. You specify the size value in bytes.
Uploading Large Archives in Parts Using the AWS SDK for Java
Both the high-level and low-level APIs (p. 116) provided by the AWS SDK for Java provide a method to
upload a large archive (see Uploading an Archive in Amazon S3 Glacier (p. 68)).
• The high-level API provides a method that you can use to upload archives of any size. Depending on
the file you are uploading, the method either uploads an archive in a single operation or uses the
multipart upload support in Amazon S3 Glacier (S3 Glacier) to upload the archive in parts.
• The low-level API maps close to the underlying REST implementation. Accordingly, it provides a
method to upload smaller archives in one operation and a group of methods that support multipart
upload for larger archives. This section explains uploading large archives in parts using the low-level
API.
For more information about the high-level and low-level APIs, see Using the AWS SDK for Java with
Amazon S3 Glacier (p. 117).
Topics
• Uploading Large Archives in Parts Using the High-Level API of the AWS SDK for Java (p. 77)
• Upload Large Archives in Parts Using the Low-Level API of the AWS SDK for Java (p. 77)
Uploading Large Archives in Parts Using the High-Level API of the AWS SDK for
Java
You use the same methods of the high-level API to upload small or large archives. Based on the archive
size, the high-level API methods decide whether to upload the archive in a single operation or use the
multipart upload API provided by S3 Glacier. For more information, see Uploading an Archive Using the
High-Level API of the AWS SDK for Java (p. 70).
Upload Large Archives in Parts Using the Low-Level API of the AWS SDK for Java
For granular control of the upload, you can use the low-level API, where you can configure the request
and process the response. The following are the steps to upload large archives in parts using the AWS
SDK for Java.
1. Create an instance of the AmazonGlacierClient class (the client).
You need to specify an AWS Region where you want to save the archive. All operations you perform
using this client apply to that AWS Region.
2. Initiate multipart upload by calling the initiateMultipartUpload method.
You need to provide the vault name to which you want to upload the archive, the part size you want
to use to upload archive parts, and an optional description. You provide this information by creating
an instance of the InitiateMultipartUploadRequest class. In response, S3 Glacier returns an upload ID.
3. Upload parts by calling the uploadMultipartPart method.
For each part you upload, you need to provide the vault name, the byte range in the final assembled
archive that will be uploaded in this part, the checksum of the part data, and the upload ID.
4. Complete multipart upload by calling the completeMultipartUpload method.
You need to provide the upload ID, the checksum of the entire archive, the archive size (combined size
of all parts you uploaded), and the vault name. S3 Glacier constructs the archive from the uploaded
parts and returns an archive ID.
Example: Uploading a Large Archive in Parts Using the AWS SDK for Java
The following Java code example uses the AWS SDK for Java to upload an archive to a vault
(examplevault). For step-by-step instructions on how to run this example, see Running Java Examples
for Amazon S3 Glacier Using Eclipse (p. 119). You need to update the code as shown with the name of
the file you want to upload.
Note
This example is valid for part sizes from 1 MB to 1 GB. However, S3 Glacier supports part sizes
up to 4 GB.
Example
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.Date;
import java.util.LinkedList;
import java.util.List;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.TreeHashGenerator;
import com.amazonaws.services.glacier.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.glacier.model.CompleteMultipartUploadResult;
import com.amazonaws.services.glacier.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.glacier.model.InitiateMultipartUploadResult;
import com.amazonaws.services.glacier.model.UploadMultipartPartRequest;
import com.amazonaws.services.glacier.model.UploadMultipartPartResult;
import com.amazonaws.util.BinaryUtils;
try {
System.out.println("Uploading an archive.");
String uploadId = initiateMultipartUpload();
} catch (Exception e) {
System.err.println(e);
}
int filePosition = 0;
long currentPosition = 0;
byte[] buffer = new byte[Integer.valueOf(partSize)];
List<byte[]> binaryChecksums = new LinkedList<byte[]>();
//Upload part.
UploadMultipartPartRequest partRequest = new UploadMultipartPartRequest()
.withVaultName(vaultName)
.withBody(new ByteArrayInputStream(bytesRead))
.withChecksum(checksum)
.withRange(contentRange)
.withUploadId(uploadId);
CompleteMultipartUploadResult compResult =
client.completeMultipartUpload(compRequest);
return compResult.getLocation();
}
}
Uploading Large Archives in Parts Using the AWS SDK for .NET
• The high-level API provides a method that you can use to upload archives of any size. Depending
on the file you are uploading, the method either uploads the archive in a single operation or uses the
multipart upload support in Amazon S3 Glacier (S3 Glacier) to upload the archive in parts.
• The low-level API maps close to the underlying REST implementation. Accordingly, it provides a
method to upload smaller archives in one operation and a group of methods that support multipart
upload for larger archives. This section explains uploading large archives in parts using the low-level
API.
For more information about the high-level and low-level APIs, see Using the AWS SDK for .NET with
Amazon S3 Glacier (p. 120).
Topics
• Uploading Large Archives in Parts Using the High-Level API of the AWS SDK for .NET (p. 80)
• Uploading Large Archives in Parts Using the Low-Level API of the AWS SDK for .NET (p. 80)
Uploading Large Archives in Parts Using the High-Level API of the AWS SDK
for .NET
You use the same methods of the high-level API to upload small or large archives. Based on the archive
size, the high-level API methods decide whether to upload the archive in a single operation or use the
multipart upload API provided by S3 Glacier. For more information, see Uploading an Archive Using the
High-Level API of the AWS SDK for .NET (p. 73).
Uploading Large Archives in Parts Using the Low-Level API of the AWS SDK
for .NET
For granular control of the upload, you can use the low-level API, where you can configure the request
and process the response. The following are the steps to upload large archives in parts using the AWS
SDK for .NET.
1. Create an instance of the AmazonGlacierClient class (the client).
You need to specify an AWS Region where you want to save the archive. All operations you perform
using this client apply to that AWS Region.
2. Initiate a multipart upload by calling the InitiateMultipartUpload method.
You need to provide the vault name to which you want to upload the archive, the part size you want
to use to upload archive parts, and an optional description. You provide this information by creating
an instance of the InitiateMultipartUploadRequest class. In response, S3 Glacier returns an
upload ID.
3. Upload parts by calling the UploadMultipartPart method.
For each part you upload, you need to provide the vault name, the byte range in the final assembled
archive that will be uploaded in this part, the checksum of the part data, and the upload ID.
4. Complete the multipart upload by calling the CompleteMultipartUpload method.
You need to provide the upload ID, the checksum of the entire archive, the archive size (combined size
of all parts you uploaded), and the vault name. S3 Glacier constructs the archive from the uploaded
parts and returns an archive ID.
Example: Uploading a Large Archive in Parts Using the AWS SDK for .NET
The following C# code example uses the AWS SDK for .NET to upload an archive to a vault
(examplevault). For step-by-step instructions on how to run this example, see Running Code
Examples (p. 121). You need to update the code as shown with the name of a file you want to upload.
Example
using System;
using System.Collections.Generic;
using System.IO;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class ArchiveUploadMPU
{
static string vaultName = "examplevault";
static string archiveToUpload = "*** Provide file name (with full path) to upload ***";
static long partSize = 4194304; // 4 MB.
InitiateMultipartUploadRequest initiateMPUrequest = new InitiateMultipartUploadRequest()
{
  VaultName = vaultName,
  PartSize = partSize,
  ArchiveDescription = "Test doc uploaded using MPU."
};
InitiateMultipartUploadResponse initiateMPUresponse =
client.InitiateMultipartUpload(initiateMPUrequest);
return initiateMPUresponse.UploadId;
}
UploadMultipartPartRequest uploadMPUrequest = new UploadMultipartPartRequest()
{
  VaultName = vaultName,
  Body = uploadPartStream,
  Checksum = checksum,
  UploadId = uploadID
};
uploadMPUrequest.SetRange(currentPosition, currentPosition +
uploadPartStream.Length - 1);
client.UploadMultipartPart(uploadMPUrequest);
CompleteMultipartUploadResponse completeMPUresponse =
client.CompleteMultipartUpload(completeMPUrequest);
return completeMPUresponse.ArchiveId;
}
}
}
For information about using S3 Glacier with the AWS CLI, see AWS CLI Reference for S3 Glacier. To install
the AWS CLI, see AWS Command Line Interface. The following Downloading an Archive topics describe
how to download archives from S3 Glacier by using the AWS SDK for Java, the AWS SDK for .NET, and the
REST API.
Topics
• Archive Retrieval Options (p. 84)
• Ranged Archive Retrievals (p. 86)
To retrieve an archive
a. Get the ID of the archive that you want to retrieve. You can get the archive ID from an
inventory of the vault. For more information, see Downloading a Vault Inventory in Amazon S3
Glacier (p. 37).
b. Initiate a job requesting S3 Glacier to prepare an entire archive or a portion of the archive for
subsequent download by using the Initiate Job (POST jobs) (p. 263) operation.
When you initiate a job, S3 Glacier returns a job ID in the response and runs the job asynchronously.
(You cannot download the job output until after the job completes as described in Step 2.)
Important
For Standard retrievals only, a data retrieval policy can cause your initiate retrieval job
request to fail with a PolicyEnforcedException exception. For more information
about data retrieval policies, see Amazon S3 Glacier Data Retrieval Policies (p. 151).
For more information about the PolicyEnforcedException exception, see Error
Responses (p. 176).
When required, you can restore large segments of the data stored in S3 Glacier. For example, you
might want to restore data for a secondary copy. However, if you need to restore a large amount
of data, keep in mind that S3 Glacier is designed for 35 random restore requests per pebibyte (PiB)
stored per day.
For more information about restoring data from these storage classes, see Amazon S3 Storage
Classes for Archiving Objects in the Amazon Simple Storage Service Developer Guide.
2. After the job completes, download the bytes using the Get Job Output (GET output) (p. 257)
operation.
You can download all bytes or specify a byte range to download only a portion of the job output. For
larger output, downloading the output in chunks helps in the event of a download failure, such as a
network failure. If you get job output in a single request and there is a network failure, you have to
restart downloading the output from the beginning. However, if you download the output in chunks,
in the event of any failure, you need only restart the download of the smaller portion and not the
entire output.
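The chunked-download pattern described above amounts to fetching fixed-size byte ranges and, on failure, retrying only the chunk that failed. A sketch with a pluggable fetch function standing in for a ranged Get Job Output call (the interface, retry bound, and names are illustrative, not SDK API):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ChunkedDownload {
    // Stands in for a Get Job Output call with an inclusive byte range.
    public interface RangeFetcher {
        byte[] fetch(long start, long end) throws IOException;
    }

    // Downloads totalSize bytes in fixed-size chunks, retrying only the
    // failed chunk instead of restarting the whole download.
    public static byte[] download(long totalSize, long chunkSize, int maxRetries,
                                  RangeFetcher fetcher) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (long start = 0; start < totalSize; start += chunkSize) {
            long end = Math.min(start + chunkSize, totalSize) - 1;
            byte[] chunk = null;
            IOException last = null;
            for (int attempt = 0; attempt <= maxRetries && chunk == null; attempt++) {
                try {
                    chunk = fetcher.fetch(start, end);
                } catch (IOException e) {
                    last = e; // transient failure: retry only this chunk
                }
            }
            if (chunk == null) {
                throw new RuntimeException("chunk " + start + "-" + end + " failed", last);
            }
            out.write(chunk, 0, chunk.length);
        }
        return out.toByteArray();
    }
}
```

In a real client, `fetch` would issue Get Job Output with the range header set, and you could also verify each chunk's checksum before appending it.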
S3 Glacier must complete a job before you can get its output. A completed job doesn't expire for at
least 24 hours, which means you can download the output within the 24-hour period after the job is
completed. To determine whether your job is complete, check its status by using one of the following
options:
• Wait for a job completion notification — You can specify an Amazon Simple Notification Service
(Amazon SNS) topic to which S3 Glacier can post a notification after the job is completed. S3 Glacier
sends a notification only after it completes the job.
You can specify an Amazon SNS topic for a job when you initiate the job. In addition to specifying
an Amazon SNS topic in your job request, if your vault has notifications configuration set for archive
retrieval events, then S3 Glacier also publishes a notification to that SNS topic. For more information,
see Configuring Vault Notifications in Amazon S3 Glacier (p. 51).
• Request job information explicitly — You can also use the S3 Glacier describe job operation (Describe
Job (GET JobID) (p. 250)) to periodically poll for job information. However, we recommend using
Amazon SNS notifications.
Note
The information you get by using SNS notification is the same as what you get by calling
Describe Job.
• Expedited — Expedited retrievals allow you to quickly access your data when occasional urgent
requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed
using Expedited retrievals is typically made available within 1–5 minutes. Provisioned Capacity
ensures that retrieval capacity for Expedited retrievals is available when you need it. For more
information, see Provisioned Capacity (p. 85).
• Standard — Standard retrievals allow you to access any of your archives within several hours.
Standard retrievals typically complete within 3–5 hours. This is the default option for retrieval requests
that do not specify the retrieval option.
• Bulk — Bulk retrievals are S3 Glacier’s lowest-cost retrieval option, which you can use to retrieve large
amounts, even petabytes, of data inexpensively in a day. Bulk retrievals typically complete within 5–12
hours.
To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST
jobs) (p. 263) REST API request to the option you want, or the equivalent in the AWS CLI or AWS
SDKs. If you have purchased provisioned capacity, then all expedited retrievals are automatically served
through your provisioned capacity.
Provisioned Capacity
Provisioned capacity helps ensure that your retrieval capacity for expedited retrievals is available when
you need it. Each unit of capacity ensures that at least three expedited retrievals can be performed
every 5 minutes, and provides up to 150 MB/s of retrieval throughput.
You should purchase provisioned retrieval capacity if your workload requires highly reliable and
predictable access to a subset of your data in minutes. Without provisioned capacity, Expedited
retrievals are accepted, except for rare situations of unusually high demand. However, if you require
access to Expedited retrievals under all circumstances, you must purchase provisioned retrieval capacity.
A provisioned capacity unit lasts for one month starting at the date and time of purchase, which is the
start date. The unit expires on the expiration date, which is exactly one month after the start date to the
nearest second.
If the start date is on the 31st day of a month, the expiration date is the last day of the next month. For
example, if the start date is August 31, the expiration date is September 30. If the start date is January
31, the expiration date is February 28.
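This month-end clamping matches the behavior of java.time's plusMonths, so the expiration date (ignoring the time-of-day component, which the service tracks to the nearest second) can be sketched as:

```java
import java.time.LocalDate;

public class CapacityExpiry {
    // One month after the start date; when the start day doesn't exist in the
    // following month (for example, the 31st), plusMonths clamps to that
    // month's last day, matching the expiration rule described above.
    public static LocalDate expirationDate(LocalDate startDate) {
        return startDate.plusMonths(1);
    }
}
```

For example, a start date of August 31 yields an expiration date of September 30, and January 31 yields February 28 in a non-leap year.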
To use the S3 Glacier console to purchase provisioned capacity, choose Settings and then choose
Provisioned capacity.
If you don't have any provisioned capacity, but you want to buy it, choose Add 1 capacity unit, and then
choose Buy.
After your purchase has succeeded, you can choose Buy again to purchase additional capacity units.
When you are finished, choose Close.
• Manage your data downloads – S3 Glacier allows retrieved data to be downloaded for 24 hours after
the retrieval request completes. Therefore, you might want to retrieve only portions of the archive so
that you can manage the schedule of downloads within the given download window.
• Retrieve a targeted part of a large archive – For example, suppose you have previously aggregated
many files and uploaded them as a single archive, and now you want to retrieve a few of the files. In
this case, you can specify a range of the archive that contains the files you are interested in by using
one retrieval request. Or, you can initiate multiple retrieval requests, each with a range for one or more
files.
When initiating a retrieval job using range retrievals, you must provide a range that is megabyte aligned.
In other words, the byte range can start at zero (the beginning of your archive), or at any 1 MB interval
thereafter (1 MB, 2 MB, 3 MB, and so on).
The end of the range can either be the end of your archive or any 1 MB interval greater than the
beginning of your range. Furthermore, if you want to get checksum values when you download the data
(after the retrieval job completes), the range you request in the job initiation must also be tree-hash
aligned. Checksums are a way you can ensure that your data was not corrupted during transmission. For
more information about megabyte alignment and tree-hash alignment, see Receiving Checksums When
Downloading Data (p. 175).
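Megabyte alignment can be validated with simple modulo arithmetic. The following is a sketch of just that check (names are illustrative; tree-hash alignment imposes the additional structure described in the referenced section and is not checked here):

```java
public class RangeAlignment {
    static final long MB = 1024 * 1024;

    // A range is megabyte aligned if it starts on a 1 MB boundary and ends
    // either at the last byte of the archive or one byte before a 1 MB
    // boundary. Both start and end are inclusive byte offsets.
    public static boolean isMegabyteAligned(long start, long end, long archiveSize) {
        boolean startOk = start % MB == 0;
        boolean endOk = end == archiveSize - 1 || (end + 1) % MB == 0;
        return startOk && endOk && start <= end && end < archiveSize;
    }
}
```

For example, the range 0 to 2097151 (the first 2 MB) is aligned, while a range starting at byte 100 is not.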
Topics
• Downloading an Archive Using the High-Level API of the AWS SDK for Java (p. 87)
• Downloading an Archive Using the Low-Level API of the AWS SDK for Java (p. 88)
Example: Downloading an Archive Using the High-Level API of the AWS SDK for
Java
The following Java code example downloads an archive from a vault (examplevault) in the US West
(Oregon) Region (us-west-2).
For step-by-step instructions to run this sample, see Running Java Examples for Amazon S3 Glacier Using
Eclipse (p. 119). You need to update the code as shown with an existing archive ID and the local file
path where you want to save the downloaded archive.
Example
import java.io.File;
import java.io.IOException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.transfer.ArchiveTransferManager;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sqs.AmazonSQSClient;
try {
    ArchiveTransferManager atm = new ArchiveTransferManager(glacierClient,
        sqsClient, snsClient);
    // vaultName, archiveId, and downloadFilePath are defined earlier in the
    // full example; the manager waits for job completion via the SQS queue.
    atm.download(vaultName, archiveId, new File(downloadFilePath));
} catch (Exception e)
{
    System.err.println(e);
}
}
}
1. Create an instance of the AmazonGlacierClient class (the client).
You need to specify an AWS Region from which you want to download the archive. All operations you
perform using this client apply to that AWS Region.
2. Initiate an archive-retrieval job by executing the initiateJob method.
You provide job information, such as the archive ID of the archive you want to download and
the optional Amazon SNS topic to which you want Amazon S3 Glacier (S3 Glacier) to post a job
completion message, by creating an instance of the InitiateJobRequest class. S3 Glacier returns a
job ID in response. The response is available in an instance of the InitiateJobResult class.
You can optionally specify a byte range to request S3 Glacier to prepare only a portion of the archive.
For example, you can update the preceding request by adding the following statement to request S3
Glacier to prepare only the 1 MB to 2 MB portion of the archive.
You must wait until the job output is ready for you to download. If you have either set a notification
configuration on the vault identifying an Amazon Simple Notification Service (Amazon SNS) topic or
specified an Amazon SNS topic when you initiated a job, S3 Glacier sends a message to that topic after
it completes the job.
You can also poll S3 Glacier by calling the describeJob method to determine the job completion
status. However, using an Amazon SNS topic for notification is the recommended approach.
4. Download the job output (archive data) by executing the getJobOutput method.
You provide the request information such as the job ID and vault name by creating an instance
of the GetJobOutputRequest class. The output that S3 Glacier returns is available in the
GetJobOutputResult object.
The preceding code snippet downloads the entire job output. You can optionally retrieve only a
portion of the output, or download the entire output in smaller chunks by specifying the byte range in
your GetJobOutputRequest.
In response to your GetJobOutput call, S3 Glacier returns the checksum of the portion of the data
you downloaded, if certain conditions are met. For more information, see Receiving Checksums When
Downloading Data (p. 175).
To verify there are no errors in the download, you can then compute the checksum on the client-side
and compare it with the checksum S3 Glacier sent in the response.
For an archive retrieval job with the optional range specified, the job description includes the
checksum of the range you are retrieving (SHA256TreeHash). You can use this value to further verify
the accuracy of the entire byte range that you later download. For example, if you initiate a job to
retrieve a tree-hash aligned archive range and then download the output in chunks such that each of
your GetJobOutput requests returns a checksum, you can compute the checksum of each portion you
download on the client side and then compute the tree hash. You can compare it with the checksum S3
Glacier returns in response to your describe job request to verify that the entire byte range you
downloaded is the same as the byte range stored in S3 Glacier.
For a working example, see Example 2: Retrieving an Archive Using the Low-Level API of the AWS SDK
for Java—Download Output in Chunks (p. 93).
Example 1: Retrieving an Archive Using the Low-Level API of the AWS SDK for
Java
The following Java code example downloads an archive from the specified vault. After the job
completes, the example downloads the entire output in a single getJobOutput call. For an example of
downloading output in chunks, see Example 2: Retrieving an Archive Using the Low-Level API of the AWS
SDK for Java—Download Output in Chunks (p. 93).
The example attaches a policy to the queue to enable the Amazon SNS topic to post messages to the
queue.
• Initiates a job to download the specified archive.
In the job request, the Amazon SNS topic that was created is specified so that S3 Glacier can publish a
notification to the topic after it completes the job.
• Periodically checks the Amazon SQS queue for a message that contains the job ID.
If there is a message, the example parses the JSON and checks whether the job completed successfully.
If it did, it downloads the archive.
• Cleans up by deleting the Amazon SNS topic and the Amazon SQS queue that it created.
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.JsonNode;
import org.codehaus.jackson.JsonParseException;
import org.codehaus.jackson.JsonParser;
import org.codehaus.jackson.map.ObjectMapper;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.policy.Policy;
import com.amazonaws.auth.policy.Principal;
import com.amazonaws.auth.policy.Resource;
import com.amazonaws.auth.policy.Statement;
import com.amazonaws.auth.policy.Statement.Effect;
import com.amazonaws.auth.policy.actions.SQSActions;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.GetJobOutputRequest;
import com.amazonaws.services.glacier.model.GetJobOutputResult;
import com.amazonaws.services.glacier.model.InitiateJobRequest;
import com.amazonaws.services.glacier.model.InitiateJobResult;
import com.amazonaws.services.glacier.model.JobParameters;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.CreateTopicRequest;
import com.amazonaws.services.sns.model.CreateTopicResult;
import com.amazonaws.services.sns.model.DeleteTopicRequest;
import com.amazonaws.services.sns.model.SubscribeRequest;
import com.amazonaws.services.sns.model.SubscribeResult;
import com.amazonaws.services.sns.model.UnsubscribeRequest;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.CreateQueueResult;
import com.amazonaws.services.sqs.model.DeleteQueueRequest;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.GetQueueAttributesResult;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;
try {
setupSQS();
setupSNS();
downloadJobOutput(jobId);
cleanUp();
} catch (Exception e) {
System.err.println("Archive retrieval failed.");
System.err.println(e);
}
}
private static void setupSQS() {
    CreateQueueRequest request = new CreateQueueRequest()
        .withQueueName(sqsQueueName);
    CreateQueueResult result = sqsClient.createQueue(request);
    sqsQueueURL = result.getQueueUrl();
    GetQueueAttributesRequest qRequest = new GetQueueAttributesRequest()
        .withQueueUrl(sqsQueueURL)
        .withAttributeNames("QueueArn");
    GetQueueAttributesResult qResult = sqsClient.getQueueAttributes(qRequest);
    sqsQueueARN = qResult.getAttributes().get("QueueArn");
Policy sqsPolicy =
new Policy().withStatements(
new Statement(Effect.Allow)
.withPrincipals(Principal.AllUsers)
.withActions(SQSActions.SendMessage)
.withResources(new Resource(sqsQueueARN)));
Map<String, String> queueAttributes = new HashMap<String, String>();
queueAttributes.put("Policy", sqsPolicy.toJson());
sqsClient.setQueueAttributes(new SetQueueAttributesRequest(sqsQueueURL,
queueAttributes));
}
private static void setupSNS() {
CreateTopicRequest request = new CreateTopicRequest()
.withName(snsTopicName);
CreateTopicResult result = snsClient.createTopic(request);
snsTopicARN = result.getTopicArn();
SubscribeRequest request2 = new SubscribeRequest()
    .withTopicArn(snsTopicARN)
    .withEndpoint(sqsQueueARN)
    .withProtocol("sqs");
SubscribeResult result2 = snsClient.subscribe(request2);
snsSubscriptionARN = result2.getSubscriptionArn();
}
private static String initiateJobRequest() {
    JobParameters jobParameters = new JobParameters()
        .withType("archive-retrieval")
        .withArchiveId(archiveId)
        .withSNSTopic(snsTopicARN);
    InitiateJobRequest request = new InitiateJobRequest()
        .withVaultName(vaultName)
        .withJobParameters(jobParameters);
    InitiateJobResult response = client.initiateJob(request);
    return response.getJobId();
}
while (!messageFound) {
    List<Message> msgs = sqsClient.receiveMessage(
        new ReceiveMessageRequest(sqsQueueUrl).withMaxNumberOfMessages(10)).getMessages();
    if (msgs.size() > 0) {
        for (Message m : msgs) {
            JsonParser jpMessage = factory.createJsonParser(m.getBody());
            JsonNode jobMessageNode = mapper.readTree(jpMessage);
            String jobMessage = jobMessageNode.get("Message").getTextValue();
            // The Message field is itself a JSON document that describes the job.
            JsonParser jpDesc = factory.createJsonParser(jobMessage);
            JsonNode jobDescNode = mapper.readTree(jpDesc);
            String retrievedJobId = jobDescNode.get("JobId").getTextValue();
            String statusCode = jobDescNode.get("StatusCode").getTextValue();
            if (retrievedJobId.equals(jobId)) {
                messageFound = true;
                if (statusCode.equals("Succeeded")) {
                    jobSuccessful = true;
                }
            }
        }
    } else {
        Thread.sleep(sleepTime * 1000);
    }
}
return (messageFound && jobSuccessful);
}
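Stripped of the AWS calls, the polling step the example performs reduces to a small loop that can be exercised on its own. The following sketch is illustrative only — the `JobPoller` class, the `waitForJob` name, and the string-matching shortcut are not part of the SDK; the real example receives messages from Amazon SQS and parses the JSON with Jackson.

```java
import java.util.function.Supplier;

class JobPoller {
    // Polls messageSource (a stand-in for sqsClient.receiveMessage) until a
    // message reports the expected job ID with StatusCode "Succeeded", or
    // maxPolls attempts are exhausted. Returns whether the job succeeded.
    static boolean waitForJob(Supplier<String> messageSource, String jobId, int maxPolls) {
        for (int i = 0; i < maxPolls; i++) {
            String msg = messageSource.get();
            if (msg == null) {
                continue; // the real code sleeps here before polling again
            }
            // Crude field check; the real example parses the JSON instead.
            if (msg.contains("\"JobId\":\"" + jobId + "\"")
                    && msg.contains("\"StatusCode\":\"Succeeded\"")) {
                return true;
            }
        }
        return false;
    }
}
```

Bounding the number of polls (or the total wait time) is worth keeping even in production code, so that a lost notification does not turn into an infinite loop.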
int bytesRead;
while ((bytesRead = input.read(buffer)) > 0) {
    output.write(buffer, 0, bytesRead);
}
} catch (IOException e) {
throw new AmazonClientException("Unable to save archive", e);
} finally {
try {input.close();} catch (Exception e) {}
try {output.close();} catch (Exception e) {}
}
System.out.println("Retrieved archive to " + fileName);
}
Example 2: Retrieving an Archive Using the Low-Level API of the AWS SDK for
Java—Download Output in Chunks
The following Java code example retrieves an archive from S3 Glacier. The code example downloads the
job output in chunks by specifying the byte range in a GetJobOutputRequest object.
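The chunking itself is simple arithmetic: walk the archive in fixed-size steps and request each inclusive byte range. A minimal sketch of that arithmetic follows — the class and method names are illustrative, not SDK APIs. To keep each range tree-hash aligned (so that S3 Glacier returns a checksum for it), the chunk size should be a power-of-two multiple of 1 MB.

```java
import java.util.ArrayList;
import java.util.List;

class RangeCalculator {
    // Returns {start, end} pairs (inclusive, as HTTP Range headers expect)
    // that cover an archive of archiveSize bytes in chunkSize steps.
    static List<long[]> ranges(long archiveSize, long chunkSize) {
        List<long[]> out = new ArrayList<>();
        for (long start = 0; start < archiveSize; start += chunkSize) {
            long end = Math.min(start + chunkSize, archiveSize) - 1;
            out.add(new long[] { start, end });
        }
        return out;
    }
}
```

Each pair maps directly to one GetJobOutput call; only the final range may be shorter than the chunk size.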
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.amazonaws.auth.policy.Policy;
import com.amazonaws.auth.policy.Principal;
import com.amazonaws.auth.policy.Resource;
import com.amazonaws.auth.policy.Statement;
import com.amazonaws.auth.policy.Statement.Effect;
import com.amazonaws.auth.policy.actions.SQSActions;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.TreeHashGenerator;
import com.amazonaws.services.glacier.model.GetJobOutputRequest;
import com.amazonaws.services.glacier.model.GetJobOutputResult;
import com.amazonaws.services.glacier.model.InitiateJobRequest;
import com.amazonaws.services.glacier.model.InitiateJobResult;
import com.amazonaws.services.glacier.model.JobParameters;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.CreateTopicRequest;
import com.amazonaws.services.sns.model.CreateTopicResult;
import com.amazonaws.services.sns.model.DeleteTopicRequest;
import com.amazonaws.services.sns.model.SubscribeRequest;
import com.amazonaws.services.sns.model.SubscribeResult;
import com.amazonaws.services.sns.model.UnsubscribeRequest;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.CreateQueueResult;
import com.amazonaws.services.sqs.model.DeleteQueueRequest;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.GetQueueAttributesResult;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;
try {
setupSQS();
setupSNS();
downloadJobOutput(jobId, archiveSizeInBytes);
cleanUp();
} catch (Exception e) {
System.err.println("Archive retrieval failed.");
System.err.println(e);
}
}
Policy sqsPolicy =
new Policy().withStatements(
new Statement(Effect.Allow)
.withPrincipals(Principal.AllUsers)
.withActions(SQSActions.SendMessage)
.withResources(new Resource(sqsQueueARN)));
Map<String, String> queueAttributes = new HashMap<String, String>();
queueAttributes.put("Policy", sqsPolicy.toJson());
sqsClient.setQueueAttributes(new SetQueueAttributesRequest(sqsQueueURL,
queueAttributes));
}
private static void setupSNS() {
CreateTopicRequest request = new CreateTopicRequest()
.withName(snsTopicName);
CreateTopicResult result = snsClient.createTopic(request);
snsTopicARN = result.getTopicArn();
SubscribeRequest request2 = new SubscribeRequest()
    .withTopicArn(snsTopicARN)
    .withEndpoint(sqsQueueARN)
    .withProtocol("sqs");
SubscribeResult result2 = snsClient.subscribe(request2);
snsSubscriptionARN = result2.getSubscriptionArn();
}
private static String initiateJobRequest() {
    JobParameters jobParameters = new JobParameters()
        .withType("archive-retrieval")
        .withArchiveId(archiveId)
        .withSNSTopic(snsTopicARN);
    InitiateJobRequest request = new InitiateJobRequest()
        .withVaultName(vaultName)
        .withJobParameters(jobParameters);
    InitiateJobResult response = client.initiateJob(request);
    return response.getJobId();
}
while (!messageFound) {
    List<Message> msgs = sqsClient.receiveMessage(
        new ReceiveMessageRequest(sqsQueueUrl).withMaxNumberOfMessages(10)).getMessages();
    if (msgs.size() > 0) {
        for (Message m : msgs) {
            JsonParser jpMessage = factory.createJsonParser(m.getBody());
            JsonNode jobMessageNode = mapper.readTree(jpMessage);
            String jobMessage = jobMessageNode.get("Message").textValue();
            // The Message field is itself a JSON document that describes the job.
            JsonParser jpDesc = factory.createJsonParser(jobMessage);
            JsonNode jobDescNode = mapper.readTree(jpDesc);
            String retrievedJobId = jobDescNode.get("JobId").textValue();
            String statusCode = jobDescNode.get("StatusCode").textValue();
            if (retrievedJobId.equals(jobId)) {
                messageFound = true;
                if (statusCode.equals("Succeeded")) {
                    archiveSizeInBytes = jobDescNode.get("ArchiveSizeInBytes").longValue();
                    jobSuccessful = true;
                }
            }
        }
    } else {
        Thread.sleep(sleepTime * 1000);
    }
}
return (messageFound && jobSuccessful) ? archiveSizeInBytes : -1;
}
if (archiveSizeInBytes < 0) {
System.err.println("Nothing to download.");
return;
}
do {
int totalRead = 0;
while (totalRead < buffer.length) {
int bytesRemaining = buffer.length - totalRead;
int read = is.read(buffer, totalRead, bytesRemaining);
if (read > 0) {
totalRead = totalRead + read;
} else {
break;
}
}
System.out.println("Calculated checksum: " +
TreeHashGenerator.calculateTreeHash(new ByteArrayInputStream(buffer)));
System.out.println("read = " + totalRead);
fstream.write(buffer, 0, totalRead);
currentPosition = currentPosition + totalRead;
} while (currentPosition < archiveSizeInBytes);
fstream.close();
System.out.println("Retrieved file to " + fileName);
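Note the inner while loop in the listing above: InputStream.read can return fewer bytes than requested, so the code keeps reading until the buffer is full or the stream ends. That pattern, as a standalone helper (the StreamUtil class and readFully name are ours, not the SDK's):

```java
import java.io.IOException;
import java.io.InputStream;

class StreamUtil {
    // Fills buffer from is, looping over short reads; returns the number of
    // bytes actually read, which is less than buffer.length only at end of stream.
    static int readFully(InputStream is, byte[] buffer) throws IOException {
        int totalRead = 0;
        while (totalRead < buffer.length) {
            int read = is.read(buffer, totalRead, buffer.length - totalRead);
            if (read <= 0) {
                break; // end of stream
            }
            totalRead += read;
        }
        return totalRead;
    }
}
```

Filling the buffer completely matters here because the chunk checksum is computed over the whole buffer; a short read that went unnoticed would produce a checksum mismatch.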
Topics
• Downloading an Archive Using the High-Level API of the AWS SDK for .NET (p. 98)
• Downloading an Archive Using the Low-Level API of the AWS SDK for .NET (p. 99)
Example: Downloading an Archive Using the High-Level API of the AWS SDK
for .NET
The following C# code example downloads an archive from a vault (examplevault) in the US West
(Oregon) Region.
For step-by-step instructions on how to run this example, see Running Code Examples (p. 121). You
need to update the code as shown with an existing archive ID and the local file path where you want to
save the downloaded archive.
using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class ArchiveDownloadHighLevel
{
static string vaultName = "examplevault";
static string archiveId = "*** Provide archive ID ***";
static string downloadFilePath = "*** Provide the file name and path to where to store the download ***";
Console.WriteLine("Initiating the archive retrieval job and then polling the SQS queue for the archive to be available.");
Console.WriteLine("Once the archive is available, downloading will begin.");
manager.Download(vaultName, archiveId, downloadFilePath, options);
Console.WriteLine("To continue, press Enter");
Console.ReadKey();
}
catch (AmazonGlacierException e) { Console.WriteLine(e.Message); }
catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
catch (Exception e) { Console.WriteLine(e.Message); }
Console.WriteLine("To continue, press Enter");
Console.ReadKey();
}
You need to specify an AWS Region from where you want to download the archive. All operations you
perform using this client apply to that AWS Region.
2. Initiate an archive-retrieval job by executing the InitiateJob method.
You provide job information, such as the archive ID of the archive you want to download and the
optional Amazon SNS topic to which you want S3 Glacier to post a job completion message, by
creating an instance of the InitiateJobRequest class. S3 Glacier returns a job ID in response. The
response is available in an instance of the InitiateJobResponse class.
AmazonGlacierClient client;
client = new AmazonGlacierClient(Amazon.RegionEndpoint.USWest2);
You can optionally specify a byte range to request that S3 Glacier prepare only a portion of the
archive, as shown in the following request. The request directs S3 Glacier to prepare only the 1 MB to
2 MB portion of the archive.
InitiateJobRequest initJobRequest = new InitiateJobRequest()
{
  VaultName = vaultName,
  JobParameters = new JobParameters()
  {
    Type = "archive-retrieval",
    ArchiveId = archiveId,
    RetrievalByteRange = "1048576-2097151"
  }
};
You must wait until the job output is ready for you to download. If you have either set a notification
configuration on the vault identifying an Amazon Simple Notification Service (Amazon SNS) topic or
specified an Amazon SNS topic when you initiated a job, S3 Glacier sends a message to that topic after
it completes the job. The code example given in the following section uses Amazon SNS for S3 Glacier
to publish a message.
You can also poll S3 Glacier by calling the DescribeJob method to determine the job completion
status. However, using an Amazon SNS topic for notification is the recommended approach.
4. Download the job output (archive data) by executing the GetJobOutput method.
You provide the request information such as the job ID and vault name by creating an instance
of the GetJobOutputRequest class. The output that S3 Glacier returns is available in the
GetJobOutputResponse object.
The preceding code snippet downloads the entire job output. You can optionally retrieve only a
portion of the output, or download the entire output in smaller chunks by specifying the byte range in
your GetJobOutputRequest.
In response to your GetJobOutput call, S3 Glacier returns the checksum of the portion of the data
you downloaded, if certain conditions are met. For more information, see Receiving Checksums When
Downloading Data (p. 175).
To verify there are no errors in the download, you can then compute the checksum on the client-side
and compare it with the checksum S3 Glacier sent in the response.
For an archive retrieval job with the optional range specified, the job description includes the
checksum of the range that you are retrieving (SHA256TreeHash). You can use this value to further
verify the accuracy of the entire byte range that you later download. For example, if you initiate a
job to retrieve a tree-hash aligned archive range and then download the output in chunks such that
each of your GetJobOutput requests returns a checksum, you can compute the checksum of each portion
on the client side and then compute the tree hash over them. You can compare the result with the
checksum that S3 Glacier returns in response to your describe job request to verify that the entire
byte range you downloaded matches the byte range stored in S3 Glacier.
For a working example, see Example 2: Retrieving an Archive Using the Low-Level API of the AWS SDK
for .NET—Download Output in Chunks (p. 105).
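To make the tree-hash verification concrete, the following sketch computes a SHA-256 tree hash the way the S3 Glacier checksum documentation defines it: hash each 1 MB chunk of the payload, then repeatedly hash adjacent pairs until a single root remains (an odd hash at the end of a level is promoted unchanged). In practice you would use the SDK's TreeHashGenerator; this version, a sketch for illustration only, uses nothing beyond the JDK.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

class TreeHashSketch {
    static final int ONE_MB = 1024 * 1024;

    static byte[] treeHash(byte[] data) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        List<byte[]> level = new ArrayList<>();
        // Leaf level: the SHA-256 of each 1 MB chunk of the payload.
        int chunks = Math.max(1, (data.length + ONE_MB - 1) / ONE_MB);
        for (int i = 0; i < chunks; i++) {
            int off = i * ONE_MB;
            sha256.reset();
            sha256.update(data, off, Math.min(ONE_MB, data.length - off));
            level.add(sha256.digest());
        }
        // Combine adjacent pairs until one root hash remains.
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                if (i + 1 < level.size()) {
                    sha256.reset();
                    sha256.update(level.get(i));
                    sha256.update(level.get(i + 1));
                    next.add(sha256.digest());
                } else {
                    next.add(level.get(i)); // odd node carries up unchanged
                }
            }
            level = next;
        }
        return level.get(0);
    }
}
```

For a payload of at most 1 MB there is a single leaf, so the tree hash is simply the SHA-256 of the payload — a handy property for spot-checking an implementation.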
Example 1: Retrieving an Archive Using the Low-Level API of the AWS SDK
for .NET
The following C# code example downloads an archive from the specified vault. After the job
completes, the example downloads the entire output in a single GetJobOutput call. For an example of
downloading output in chunks, see Example 2: Retrieving an Archive Using the Low-Level API of the AWS
SDK for .NET—Download Output in Chunks (p. 105).
The example attaches a policy to the queue to enable the Amazon SNS topic to post messages.
• Initiates a job to download the specified archive.
In the job request, the example specifies the Amazon SNS topic so that S3 Glacier can send a message
after it completes the job.
• Periodically checks the Amazon SQS queue for a message.
If a message is present, the example parses the JSON and checks whether the job completed
successfully. If it did, it downloads the archive. The code example uses the JSON.NET library (see
JSON.NET) to parse the JSON.
• Cleans up by deleting the Amazon SNS topic and the Amazon SQS queue it created.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Runtime;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Amazon.SQS;
using Amazon.SQS.Model;
using Newtonsoft.Json;
namespace glacier.amazon.com.docsamples
{
class ArchiveDownloadLowLevelUsingSNSSQS
{
static string topicArn;
static string queueUrl;
static string queueArn;
static string vaultName = "*** Provide vault name ***";
static string archiveID = "*** Provide archive ID ***";
static string fileName = "*** Provide the file name and path to where to store downloaded archive ***";
static AmazonSimpleNotificationServiceClient snsClient;
static AmazonSQSClient sqsClient;
const string SQS_POLICY =
"{" +
" \"Version\" : \"2012-10-17\"," +
" \"Statement\" : [" +
" {" +
" \"Sid\" : \"sns-rule\"," +
" \"Effect\" : \"Allow\"," +
" \"Principal\" : {\"Service\" : \"sns.amazonaws.com\" }," +
" \"Action\" : \"sqs:SendMessage\"," +
" \"Resource\" : \"{QueueArn}\"," +
" \"Condition\" : {" +
" \"ArnLike\" : {" +
" \"aws:SourceArn\" : \"{TopicArn}\"" +
" }" +
" }" +
" }" +
" ]" +
"}";
// Add policy to the queue so SNS can send messages to the queue.
var policy = SQS_POLICY.Replace("{TopicArn}", topicArn).Replace("{QueueArn}",
queueArn);
sqsClient.SetQueueAttributes(new SetQueueAttributesRequest()
{
QueueUrl = queueUrl,
Attributes = new Dictionary<string, string>
{
{ QueueAttributeName.Policy, policy }
}
});
}
// Check queue for a message and if job completed successfully, download archive.
ProcessQueue(jobId, client);
}
if (string.Equals(statusCode, GlacierUtils.JOB_STATUS_SUCCEEDED,
StringComparison.InvariantCultureIgnoreCase))
{
Console.WriteLine("Downloading job output");
DownloadOutput(jobId, client); // Save job output to the specified file location.
}
else if (string.Equals(statusCode, GlacierUtils.JOB_STATUS_FAILED,
StringComparison.InvariantCultureIgnoreCase))
Console.WriteLine("Job failed... cannot download the archive.");
jobDone = true;
sqsClient.DeleteMessage(new DeleteMessageRequest() { QueueUrl = queueUrl,
ReceiptHandle = message.ReceiptHandle });
}
}
Example 2: Retrieving an Archive Using the Low-Level API of the AWS SDK
for .NET—Download Output in Chunks
The following C# code example retrieves an archive from S3 Glacier. The code example downloads the
job output in chunks by specifying the byte range in a GetJobOutputRequest object.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Amazon.SQS;
using Amazon.SQS.Model;
using Newtonsoft.Json;
using System.Collections.Specialized;
namespace glacier.amazon.com.docsamples
{
class ArchiveDownloadLowLevelUsingSQLSNSOutputUsingRange
{
static string topicArn;
static string queueUrl;
static string queueArn;
static string vaultName = "*** Provide vault name ***";
static string archiveId = "*** Provide archive ID ***";
static string fileName = "*** Provide the file name and path to where to store downloaded archive ***";
static AmazonSimpleNotificationServiceClient snsClient;
static AmazonSQSClient sqsClient;
const string SQS_POLICY =
"{" +
" \"Version\" : \"2012-10-17\"," +
" \"Statement\" : [" +
" {" +
" \"Sid\" : \"sns-rule\"," +
" \"Effect\" : \"Allow\"," +
" \"Principal\" : {\"AWS\" : \"arn:aws:iam::123456789012:root\" }," +
" \"Action\" : \"sqs:SendMessage\"," +
" \"Resource\" : \"{QuernArn}\"," +
" \"Condition\" : {" +
" \"ArnLike\" : {" +
" \"aws:SourceArn\" : \"{TopicArn}\"" +
" }" +
" }" +
" }" +
" ]" +
"}";
{
AmazonGlacierClient client;
try
{
using (client = new AmazonGlacierClient(Amazon.RegionEndpoint.USWest2))
{
Console.WriteLine("Setup SNS topic and SQS queue.");
SetupTopicAndQueue();
Console.WriteLine("To continue, press Enter"); Console.ReadKey();
Console.WriteLine("Download archive");
DownloadAnArchive(archiveId, client);
}
Console.WriteLine("Operations successful. To continue, press Enter");
Console.ReadKey();
}
catch (AmazonGlacierException e) { Console.WriteLine(e.Message); }
catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
catch (Exception e) { Console.WriteLine(e.Message); }
finally
{
// Delete SNS topic and SQS queue.
snsClient.DeleteTopic(new DeleteTopicRequest() { TopicArn = topicArn });
sqsClient.DeleteQueue(new DeleteQueueRequest() { QueueUrl = queueUrl });
}
}
// Add the policy to the queue so SNS can send messages to the queue.
var policy = SQS_POLICY.Replace("{TopicArn}", topicArn).Replace("{QuernArn}",
queueArn);
sqsClient.SetQueueAttributes(new SetQueueAttributesRequest()
{
QueueUrl = queueUrl,
Attributes = new Dictionary<string, string>
{
{ QueueAttributeName.Policy, policy }
}
});
}
VaultName = vaultName,
JobParameters = new JobParameters()
{
Type = "archive-retrieval",
ArchiveId = archiveId,
Description = "This job is to download the archive.",
SNSTopic = topicArn,
}
};
InitiateJobResponse initJobResponse = client.InitiateJob(initJobRequest);
string jobId = initJobResponse.JobId;
// Check queue for a message and if job completed successfully, download archive.
ProcessQueue(jobId, client);
}
jobDone = true;
sqsClient.DeleteMessage(new DeleteMessageRequest() { QueueUrl = queueUrl,
ReceiptHandle = message.ReceiptHandle });
}
}
long currentPosition = 0;
do
{
GetJobOutputRequest getJobOutputRequest = new GetJobOutputRequest()
{
JobId = jobId,
VaultName = vaultName
};
getJobOutputRequest.SetRange(currentPosition, endPosition);
GetJobOutputResponse getJobOutputResponse =
client.GetJobOutput(getJobOutputRequest);
1. Initiate a job of the archive-retrieval type. For more information, see Initiate Job (POST
jobs) (p. 263).
2. After the job completes, download the archive data. For more information, see Get Job Output (GET
output) (p. 257).
Topics
• Deleting an Archive in Amazon S3 Glacier Using the AWS SDK for Java (p. 109)
• Deleting an Archive in Amazon S3 Glacier Using the AWS SDK for .NET (p. 111)
• Deleting an Amazon S3 Glacier Archive Using the REST API (p. 113)
• Deleting an Archive in Amazon S3 Glacier Using the AWS Command Line Interface (p. 113)
You can delete one archive at a time from a vault. To delete an archive, you must provide its archive ID
in your delete request. You can get the archive ID by downloading the vault inventory for the vault that
contains the archive. For more information about downloading the vault inventory, see Downloading a
Vault Inventory in Amazon S3 Glacier (p. 37).
After you delete an archive, you might still be able to make a successful request to initiate a job to
retrieve the deleted archive, but the archive retrieval job will fail.
Archive retrievals that are in progress for an archive ID when you delete the archive might or might not
succeed according to the following scenarios:
• If the archive retrieval job is actively preparing the data for download when S3 Glacier receives the
delete archive request, then the archive retrieval operation might fail.
• If the archive retrieval job has successfully prepared the archive for download when S3 Glacier receives
the delete archive request, then you will be able to download the output.
For more information about archive retrieval, see Downloading an Archive in Amazon S3
Glacier (p. 83).
This operation is idempotent. Deleting an already-deleted archive does not result in an error.
After you delete an archive, if you immediately download the vault inventory, it might include the
deleted archive in the list because S3 Glacier prepares vault inventory only about once a day.
You need to specify an AWS Region where the archive you want to delete is stored. All operations you
perform using this client apply to that AWS Region.
2. Provide request information by creating an instance of the DeleteArchiveRequest class.
You need to provide an archive ID, a vault name, and your account ID. If you don't provide an account
ID, then the account ID associated with the credentials that you use to sign the request is assumed. For
more information, see Using the AWS SDK for Java with Amazon S3 Glacier (p. 117).
3. Run the deleteArchive method by providing the request object as a parameter.
AmazonGlacierClient client;
client.deleteArchive(request);
Note
For information about the underlying REST API, see Delete Archive (DELETE archive) (p. 222).
Example
import java.io.IOException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.DeleteArchiveRequest;
try {
} catch (Exception e) {
System.err.println("Archive not deleted.");
System.err.println(e);
}
}
}
Topics
• Deleting an Archive Using the High-Level API of the AWS SDK for .NET (p. 111)
• Deleting an Archive Using the Low-Level API of the AWS SDK for .NET (p. 111)
Example: Deleting an Archive Using the High-Level API of the AWS SDK for .NET
The following C# code example uses the high-level API of the AWS SDK for .NET to delete an archive. For
step-by-step instructions on how to run this example, see Running Code Examples (p. 121). You need to
update the code as shown with the archive ID of the archive you want to delete.
Example
using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class ArchiveDeleteHighLevel
{
static string vaultName = "examplevault";
static string archiveId = "*** Provide archive ID ***";
Deleting an Archive Using the Low-Level API of the AWS SDK for .NET
The following are the steps to delete an archive using the AWS SDK for .NET.
You need to specify an AWS Region where the archive you want to delete is stored. All operations you
perform using this client apply to that AWS Region.
2. Provide request information by creating an instance of the DeleteArchiveRequest class.
You need to provide an archive ID, a vault name, and your account ID. If you don't provide an account
ID, then the account ID associated with the credentials that you use to sign the request is assumed. For
more information, see Using the AWS SDKs with Amazon S3 Glacier (p. 116).
3. Run the DeleteArchive method by providing the request object as a parameter.
Example: Deleting an Archive Using the Low-Level API of the AWS SDK for .NET
The following C# example illustrates the preceding steps. The example uses the low-level API of the AWS
SDK for .NET to delete an archive.
Note
For information about the underlying REST API, see Delete Archive (DELETE archive) (p. 222).
For step-by-step instructions on how to run this example, see Running Code Examples (p. 121). You
need to update the code as shown with the archive ID of the archive you want to delete.
Example
using System;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Runtime;
namespace glacier.amazon.com.docsamples
{
class ArchiveDeleteLowLevel
{
static string vaultName = "examplevault";
static string archiveId = "*** Provide archive ID ***";
};
DeleteArchiveResponse response = client.DeleteArchive(request);
}
}
}
• For information about the Delete Archive API, see Delete Archive (DELETE archive) (p. 222).
• For information about using the REST API, see API Reference for Amazon S3 Glacier (p. 160).
Topics
• (Prerequisite) Setting Up the AWS CLI (p. 21)
• Example: Deleting an Archive Using the AWS CLI (p. 113)
aws help
aws s3 ls
1. Use the initiate-job command to start an inventory retrieval job for the vault.
Expected output:
{
"location": "/111122223333/vaults/awsexamplevault/jobs/*** jobid ***",
"jobId": "*** jobid ***"
}
2. Use the describe-job command to check status of the previous retrieval job.
Expected output:
{
"InventoryRetrievalParameters": {
"Format": "JSON"
},
"VaultARN": "*** vault arn ***",
"Completed": false,
"JobId": "*** jobid ***",
"Action": "InventoryRetrieval",
"CreationDate": "*** job creation date ***",
"StatusCode": "InProgress"
}
You must wait until the job output is ready for you to download. If you set a notification
configuration on the vault or specified an Amazon Simple Notification Service (Amazon SNS) topic
when you initiated the job, S3 Glacier sends a message to the topic after it completes the job.
You can set notification configuration for specific events on the vault. For more information, see
Configuring Vault Notifications in Amazon S3 Glacier (p. 51). S3 Glacier sends a message to the
specified SNS topic anytime the specific event occurs.
4. When it's complete, use the get-job-output command to download the retrieval job to the file
output.json.
{
"VaultARN":"arn:aws:glacier:region:111122223333:vaults/awsexamplevault",
"InventoryDate":"*** job completion date ***",
"ArchiveList":[
{"ArchiveId":"*** archiveid ***",
"ArchiveDescription":*** archive description (if set) ***,
5. Use the delete-archive command to delete each archive from a vault until none remain.
Topics
• AWS SDKs that Support S3 Glacier (p. 116)
• AWS SDK Libraries for Java and .NET (p. 116)
• Using the AWS SDK for Java with Amazon S3 Glacier (p. 117)
• Using the AWS SDK for .NET with Amazon S3 Glacier (p. 120)
You can find examples of working with S3 Glacier using the Java and .NET SDKs throughout this
developer guide. For libraries and sample code in all languages, see Sample Code & Libraries.
The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services, including S3
Glacier. For information about downloading the AWS CLI, see AWS Command Line Interface. For a list of
the S3 Glacier CLI commands, see AWS CLI Command Reference.
provides a corresponding method, a request object for you to provide request information, and a
response object for you to process the S3 Glacier response. The low-level wrapper libraries are the most
complete implementation of the underlying S3 Glacier operations.
For information about these SDK libraries, see Using the AWS SDK for Java with Amazon S3
Glacier (p. 117) and Using the AWS SDK for .NET with Amazon S3 Glacier (p. 120).
• Uploading an archive—To upload an archive using the low-level API, in addition to the file name and
the vault name where you want to save the archive, you need to provide a checksum (SHA-256 tree
hash) of the payload. The high-level API, however, computes the checksum for you.
• Downloading an archive or vault inventory—To download an archive using the low-level API you first
initiate a job, wait for the job to complete, and then get the job output. You need to write additional
code to set up an Amazon Simple Notification Service (Amazon SNS) topic for S3 Glacier to notify
you when the job is complete. You also need some polling mechanism to check if a job completion
message was posted to the topic. The high-level API provides a method to download an archive that
takes care of all these steps. You only specify an archive ID and a folder path where you want to save
the downloaded data.
For information about these SDK libraries, see Using the AWS SDK for Java with Amazon S3
Glacier (p. 117) and Using the AWS SDK for .NET with Amazon S3 Glacier (p. 120).
Topics
• Using the Low-Level API (p. 118)
• Using the High-Level API (p. 118)
• Running Java Examples for Amazon S3 Glacier Using Eclipse (p. 119)
• Setting the Endpoint (p. 119)
For example, the AmazonGlacierClient class provides the createVault method to create a
vault. This method maps to the underlying Create Vault REST operation (see Create Vault (PUT
vault) (p. 185)). To use this method, you create instances of the CreateVaultRequest and
CreateVaultResult classes to provide request information and receive the S3 Glacier response, as
shown in the following Java code snippet:
For example, the following Java code snippet uses the upload high-level method to upload an archive.
Note that any operations you perform apply to the AWS Region you specified when creating the
ArchiveTransferManager object. If you don't specify any AWS Region, the AWS SDK for Java sets us-
east-1 as the default AWS Region.
Note
The high-level ArchiveTransferManager class can be constructed with an
AmazonGlacierClient instance or an AWSCredentials instance.
1 Create a default credentials profile for your AWS credentials as described in the AWS SDK
for Java topic Providing AWS Credentials in the AWS SDK for Java.
2 Create a new AWS Java project in Eclipse. The project is pre-configured with the AWS SDK
for Java.
3 Copy the code from the section you are reading to your project.
4 Update the code by providing any required data. For example, if you are uploading a file,
provide the file path and the vault name.
5 Run the code. Verify that the object is created by using the AWS Management Console. For
more information about the AWS Management Console, go to http://aws.amazon.com/console/.
The following snippet shows how to set the endpoint to the US West (Oregon) Region (us-west-2) in
the low-level API.
Example
The following snippet shows how to set the endpoint to the US West (Oregon) Region in the high-level
API.
glacierClient.setEndpoint("glacier.us-west-2.amazonaws.com");
sqsClient.setEndpoint("sqs.us-west-2.amazonaws.com");
snsClient.setEndpoint("sns.us-west-2.amazonaws.com");
For a list of supported AWS Regions and endpoints, see Accessing Amazon S3 Glacier (p. 5).
Topics
• Using the Low-Level API (p. 120)
• Using the High-Level API (p. 121)
• Running Code Examples (p. 121)
• Setting the Endpoint (p. 121)
For example, the AmazonGlacierClient class provides the CreateVault method to create a
vault. This method maps to the underlying Create Vault REST operation (see Create Vault (PUT
vault) (p. 185)). To use this method, you must create instances of the CreateVaultRequest and
CreateVaultResponse classes to provide request information and receive an S3 Glacier response, as
shown in the following C# code snippet:
AmazonGlacierClient client;
client = new AmazonGlacierClient(Amazon.RegionEndpoint.USEast1);
your account ID, do not include hyphens in it. When using the AWS SDK for .NET, if you don't
provide the account ID, the library sets the account ID to '-'.
For example, the following C# code snippet uses the Upload high-level method to upload an archive.
Note that any operations you perform apply to the AWS Region you specified when creating the
ArchiveTransferManager object. All the high-level examples in this guide use this pattern.
Note
The high-level ArchiveTransferManager class still needs the low-level
AmazonGlacierClient client, which you can either pass explicitly or let the
ArchiveTransferManager create for you.
The following procedure outlines steps for you to test the code examples provided in this guide.
1 Create a credentials profile for your AWS credentials as described in the AWS SDK for .NET
topic Configuring AWS Credentials.
2 Create a new Visual Studio project using the AWS Empty Project template.
3 Replace the code in the project file, Program.cs, with the code in the section you are
reading.
4 Run the code. Verify that the object is created by using the AWS Management Console. For
more information about the AWS Management Console, go to http://aws.amazon.com/
console/.
The following describes how to set the endpoint to the US West (Oregon) Region (us-west-2). In the
low-level API, you specify the Region endpoint when you construct the AmazonGlacierClient (for
example, Amazon.RegionEndpoint.USWest2). In the high-level API, you specify the Region endpoint in
the same way when you create the ArchiveTransferManager.
For a current list of supported AWS Regions and endpoints, see Accessing Amazon S3 Glacier (p. 5).
Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services
in the AWS Cloud. AWS also provides you with services that you can use securely. The effectiveness
of our security is regularly tested and verified by third-party auditors as part of the AWS compliance
programs. To learn about the compliance programs that apply to Amazon S3 Glacier (S3 Glacier), see
AWS Services in Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also
responsible for other factors including the sensitivity of your data, your organization’s requirements,
and applicable laws and regulations.
This documentation will help you understand how to apply the shared responsibility model when
using S3 Glacier. The following topics show you how to configure S3 Glacier to meet your security and
compliance objectives. You'll also learn how to use other AWS services that can help you to monitor and
secure your S3 Glacier resources.
Topics
• Data Protection in Amazon S3 Glacier (p. 123)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
• Logging and Monitoring in Amazon S3 Glacier (p. 144)
• Compliance Validation for Amazon S3 Glacier (p. 145)
• Resilience in Amazon S3 Glacier (p. 146)
• Infrastructure Security in Amazon S3 Glacier (p. 147)
For more information about the AWS global cloud infrastructure, see Global Infrastructure.
For data protection purposes, we recommend that you protect AWS account credentials and set up
individual user accounts with AWS Identity and Access Management (IAM), so that each user is given only
the permissions necessary to fulfill their job duties.
If you require FIPS 140-2 validated cryptographic modules when accessing AWS through a command line
interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see
Federal Information Processing Standard (FIPS) 140-2.
Data Encryption
Data protection refers to protecting data while in transit (as it travels to and from Amazon S3 Glacier)
and at rest (while it is stored in AWS data centers). You can protect data in transit that is uploaded
directly to S3 Glacier by using Secure Sockets Layer (SSL) or client-side encryption.
You can also access S3 Glacier through Amazon Simple Storage Service (Amazon S3). Amazon S3
supports lifecycle configuration on an Amazon S3 bucket, which enables you to transition objects to the
S3 Glacier storage class for archival. Data in transit between Amazon S3 and S3 Glacier through
lifecycle policies is encrypted by using SSL.
Data at rest stored in S3 Glacier is automatically server-side encrypted using 256-bit Advanced
Encryption Standard (AES-256) with keys maintained by AWS. If you prefer to manage your own keys,
you can use client-side encryption before storing data in S3 Glacier. For more information about
how to set up default encryption for Amazon S3, see Amazon S3 Default Encryption in the Amazon
Simple Storage Service Developer Guide.
Key Management
Server-side encryption addresses data encryption at rest—that is, Amazon S3 Glacier encrypts your data
as it writes it to its data centers and decrypts it for you when you access it. As long as you authenticate
your request and you have access permissions, there is no difference in the way you access encrypted or
unencrypted data.
Data at rest stored in S3 Glacier is automatically server-side encrypted using AES-256, using keys
maintained by AWS. As an additional safeguard, AWS encrypts the key itself with a master key that we
regularly rotate.
VPC Endpoints
A virtual private cloud (VPC) endpoint enables you to privately connect your VPC to supported AWS
services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway,
NAT device, VPN connection, or AWS Direct Connect connection. Although S3 Glacier does not support
VPC endpoints directly, you can take advantage of Amazon Simple Storage Service (Amazon S3) VPC
endpoints if you access S3 Glacier as a storage tier integrated with Amazon S3.
For more information about Amazon S3 lifecycle configuration and transitioning objects to the S3
Glacier storage class, see Object Lifecycle Management and Transitioning Objects in the Amazon Simple
Storage Service Developer Guide. For more information about VPC endpoints, see VPC Endpoints in the
Amazon VPC User Guide.
Authentication
You can access AWS as any of the following types of identities:
• AWS account root user – When you first create an AWS account, you begin with a single sign-in
identity that has complete access to all AWS services and resources in the account. This identity is
called the AWS account root user and is accessed by signing in with the email address and password
that you used to create the account. We strongly recommend that you do not use the root user for
your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the
root user only to create your first IAM user. Then securely lock away the root user credentials and use
them to perform only a few account and service management tasks.
• IAM user – An IAM user is an identity within your AWS account that has specific custom permissions
(for example, permissions to create a vault in S3 Glacier). You can use an IAM user name and password
to sign in to secure AWS webpages like the AWS Management Console, AWS Discussion Forums, or the
AWS Support Center.
In addition to a user name and password, you can also generate access keys for each user. You can
use these keys when you access AWS services programmatically, either through one of the several
SDKs or by using the AWS Command Line Interface (CLI). The SDK and CLI tools use the access keys
to cryptographically sign your request. If you don’t use AWS tools, you must sign the request yourself.
S3 Glacier supports Signature Version 4, a protocol for authenticating inbound API requests. For more
information about authenticating requests, see Signature Version 4 Signing Process in the AWS General
Reference.
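Signature Version 4 derives a signing key from your secret access key through a chain of HMAC-SHA256 operations over the date, Region, service name, and a fixed terminator string. The following sketches just that key-derivation step, using a made-up secret key; a full signer must also build the canonical request and string to sign:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the Signature Version 4 signing key: an HMAC-SHA256 chain over
    the date (YYYYMMDD), Region, service name, and the fixed "aws4_request" string."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# Hypothetical key for illustration only -- never hard-code real credentials.
key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20120601", "us-west-2", "glacier")
print(key.hex())
```

Because the Region and service name are folded into the key, a signature computed for one Region or service is not valid for another.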
• IAM role – An IAM role is an IAM identity that you can create in your account that has specific
permissions. An IAM role is similar to an IAM user in that it is an AWS identity with permissions policies
that determine what the identity can and cannot do in AWS. However, instead of being uniquely
associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role
does not have standard long-term credentials such as a password or access keys associated with it.
Instead, when you assume a role, it provides you with temporary security credentials for your role
session. IAM roles with temporary credentials are useful in the following situations:
• Federated user access – Instead of creating an IAM user, you can use existing identities from AWS
Directory Service, your enterprise user directory, or a web identity provider. These are known as
federated users. AWS assigns a role to a federated user when access is requested through an identity
provider. For more information about federated users, see Federated Users and Roles in the IAM User
Guide.
• AWS service access – A service role is an IAM role that a service assumes to perform actions in your
account on your behalf. When you set up some AWS service environments, you must define a role
for the service to assume. This service role must include all the permissions that are required for
the service to access the AWS resources that it needs. Service roles vary from service to service, but
many allow you to choose your permissions as long as you meet the documented requirements
for that service. Service roles provide access only within your account and cannot be used to grant
access to services in other accounts. You can create, modify, and delete a service role from within
IAM. For example, you can create a role that allows Amazon Redshift to access an Amazon S3 bucket
on your behalf and then load data from that bucket into an Amazon Redshift cluster. For more
information, see Creating a Role to Delegate Permissions to an AWS Service in the IAM User Guide.
• Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials
for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This
is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance
and make it available to all of its applications, you create an instance profile that is attached to
the instance. An instance profile contains the role and enables programs that are running on the
EC2 instance to get temporary credentials. For more information, see Using an IAM Role to Grant
Permissions to Applications Running on Amazon EC2 Instances in the IAM User Guide.
Access Control
You can have valid credentials to authenticate your requests, but unless you have the necessary
permissions, you cannot create or access S3 Glacier resources. For example, you must have permissions
to create an S3 Glacier vault.
The following sections describe how to manage permissions. We recommend that you read the overview
first.
• Overview of Managing Access Permissions to Your Amazon S3 Glacier Resources (p. 126)
• Using Identity-Based Policies for Amazon S3 Glacier (IAM Policies) (p. 130)
• Using Resource-Based Policies for Amazon S3 Glacier (Vault Policies) (p. 134)
When granting permissions, you decide who is getting the permissions, the resources they get
permissions for, and the specific actions that you want to allow on those resources.
Topics
• Amazon S3 Glacier Resources and Operations (p. 127)
• Understanding Resource Ownership (p. 127)
For most S3 Glacier actions, Resource specifies the vault on which you want to grant the permissions.
These resources have unique Amazon Resource Names (ARNs) associated with them as shown in the
following table, and you can use a wildcard character (*) in the ARN to match any vault name.
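The way a wildcard in a vault ARN matches vault names can be illustrated with a small matcher. This is an approximation for intuition only; it is not IAM's actual policy-evaluation engine:

```python
import re

def arn_matches(pattern: str, arn: str) -> bool:
    """Return True if the ARN matches the pattern, treating * as
    "match any sequence of characters" (a simplification of IAM matching)."""
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*")) + "$"
    return re.match(regex, arn) is not None

pattern = "arn:aws:glacier:us-west-2:123456789012:vaults/*"
print(arn_matches(pattern, "arn:aws:glacier:us-west-2:123456789012:vaults/examplevault"))  # True
```

A prefix pattern such as vaults/example* matches examplevault but not a vault in another Region or account, because the Region and account ID portions of the ARN must match literally.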
S3 Glacier provides a set of operations to work with the S3 Glacier resources. For information on the
available operations, see API Reference for Amazon S3 Glacier (p. 160).
• If you use the root account credentials of your AWS account to create an S3 Glacier vault, your AWS
account is the owner of the resource (in S3 Glacier, the resource is the vault).
• If you create an IAM user in your AWS account and grant permissions to create an S3 Glacier vault
to that user, the user can create a vault. However, your AWS account, to which the user
belongs, owns the vault resource.
• If you create an IAM role in your AWS account with permissions to create an S3 Glacier vault, anyone
who can assume the role can create a vault. Your AWS account, to which the role belongs,
owns the vault resource.
Policies attached to an IAM identity are referred to as identity-based policies (IAM policies), and policies
attached to a resource are referred to as resource-based policies. S3 Glacier supports both identity-based
policies (IAM policies) and resource-based policies.
Topics
• Identity-Based Policies (IAM policies) (p. 127)
• Resource-Based Policies (Amazon S3 Glacier Vault Policies) (p. 128)
• Attach a permissions policy to a user or a group in your account – An account administrator can
use a permissions policy that is associated with a particular user to grant permissions for that user to
create an S3 Glacier vault.
• Attach a permissions policy to a role (grant cross-account permissions) – You can attach an
identity-based permissions policy to an IAM role to grant cross-account permissions. For example,
the administrator in Account A can create a role to grant cross-account permissions to another AWS
account (for example, Account B) or an AWS service as follows:
1. Account A administrator creates an IAM role and attaches a permissions policy to the role that
grants permissions on resources in Account A.
2. Account A administrator attaches a trust policy to the role identifying Account B as the principal
who can assume the role.
3. Account B administrator can then delegate permissions to assume the role to any users in Account
B. Doing this allows users in Account B to create or access resources in Account A. The principal
in the trust policy can also be an AWS service principal if you want to grant an AWS service
permissions to assume the role.
For more information about using IAM to delegate permissions, see Access Management in the IAM
User Guide.
The following is an example policy that grants permissions for three S3 Glacier vault-related actions
(glacier:CreateVault, glacier:DescribeVault and glacier:ListVaults) on a resource, using
the Amazon Resource Name (ARN) that identifies all of the vaults in the us-west-2 AWS Region. ARNs
uniquely identify AWS resources. For more information about ARNs used with S3 Glacier, see Amazon S3
Glacier Resources and Operations (p. 127).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glacier:CreateVault",
"glacier:DescribeVault",
"glacier:ListVaults"
],
"Resource": "arn:aws:glacier:us-west-2:123456789012:vaults/*"
}
]
}
The policy grants permissions to create, list, and obtain descriptions of vaults in the us-west-2 Region.
The wildcard character (*) at the end of the ARN means that this statement can match any vault name.
Important
When you grant permissions to create a vault using the glacier:CreateVault operation,
you must specify a wildcard character (*) because you don't know the vault name until after you
create the vault.
For more information about using identity-based policies with S3 Glacier, see Using Identity-Based
Policies for Amazon S3 Glacier (IAM Policies) (p. 130). For more information about users, groups, roles,
and permissions, see Identities (Users, Groups, and Roles) in the IAM User Guide.
You use S3 Glacier vault policies to manage permissions in the following ways:
• Manage user permissions in your account in a single vault policy, instead of individual user policies.
• Manage cross-account permissions as an alternative to using IAM roles.
An S3 Glacier vault can have one vault access policy and one Vault Lock policy associated with it. An S3
Glacier vault access policy is a resource-based policy that you can use to manage permissions to your
vault. A Vault Lock policy is a vault access policy that can be locked. After you lock a Vault Lock policy, the
policy cannot be changed. You can use a Vault Lock policy to enforce compliance controls.
You can use vault policies to grant permissions to all users, or you can limit access to a vault to a few
AWS accounts by attaching a policy directly to the vault resource. For example, you can use an S3 Glacier
vault policy to grant read-only permissions to all AWS accounts or to grant a few AWS accounts
permissions to upload archives.
Vault policies make it easy to grant cross-account access when you need to share your vault with other
AWS accounts. You can specify controls such as “write once read many” (WORM) in a vault lock policy
and lock the policy from future edits. For example, you can grant read-only access on a vault to a
business partner with a different AWS account by simply including that account and allowed actions in
the vault policy. You can grant cross-account access to multiple users in this fashion and have a single
location to view all users with cross-account access in the vault access policy. For an example of a vault
policy for cross-account access, see Example 1: Grant Cross-Account Permissions for Specific Amazon S3
Glacier Actions (p. 135).
For more information about using vault policies with S3 Glacier, see Using Resource-Based Policies for
Amazon S3 Glacier (Vault Policies) (p. 134). For additional information about IAM roles (identity-based
policies) as opposed to resource-based policies, see How IAM Roles Differ from Resource-based Policies in
the IAM User Guide.
• Resource – In a policy, you use an Amazon Resource Name (ARN) to identify the resource to which the
policy applies. For more information, see Amazon S3 Glacier Resources and Operations (p. 127).
• Actions – You use action keywords to identify the resource operations that you want to allow or deny.
For example, the glacier:CreateVault permission allows the user to perform the S3
Glacier Create Vault operation.
• Effect – You specify the effect, either allow or deny, when the user requests the specific action. If you
don't explicitly grant access to (allow) a resource, access is implicitly denied. You can also
explicitly deny access to a resource, which you might do to make sure that a user cannot access it, even
if a different policy grants access.
• Principal – In identity-based policies (IAM policies), the user that the policy is attached to is the
implicit principal. For resource-based policies, you specify the user, account, service, or other entity
that you want to receive permissions.
To learn more about IAM policy syntax and descriptions, see AWS IAM Policy Reference in the IAM
User Guide.
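The Effect semantics described above, implicit deny by default with an explicit Deny overriding any Allow, can be modeled in a few lines. This is a simplified model for intuition, not IAM's actual evaluation logic (it ignores conditions, principals, and wildcard matching):

```python
def evaluate(statements, action, resource):
    """Evaluate simplified policy statements: an explicit Deny wins,
    otherwise a matching Allow is required, otherwise access is implicitly denied."""
    allowed = False
    for stmt in statements:
        if action in stmt["Action"] and resource == stmt["Resource"]:
            if stmt["Effect"] == "Deny":
                return False          # explicit deny always wins
            allowed = True            # remember a matching allow
    return allowed                    # no matching statement -> implicit deny

policy = [
    {"Effect": "Allow", "Action": ["glacier:CreateVault"], "Resource": "vaults/*"},
    {"Effect": "Deny", "Action": ["glacier:DeleteVault"], "Resource": "vaults/*"},
]
print(evaluate(policy, "glacier:CreateVault", "vaults/*"))  # True
```

Note that an action not mentioned in any statement, such as glacier:ListVaults here, is denied even though no statement denies it, which is the implicit-deny behavior described above.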
For a table showing all of the S3 Glacier API actions and the resources that they apply to, see Amazon S3
Glacier API Permissions: Actions, Resources, and Conditions Reference (p. 138).
AWS provides a set of predefined condition keys, called AWS-wide condition keys, for all AWS services
that support IAM for access control. AWS-wide condition keys use the prefix aws. S3 Glacier supports
all AWS-wide condition keys in vault access and Vault Lock policies. For example, you can use the
aws:MultiFactorAuthPresent condition key to require multi-factor authentication (MFA) when
requesting an action. For more information and a list of the AWS-wide condition keys, see Available Keys
for Conditions in the IAM User Guide.
S3 Glacier also provides its own condition keys that you can include in Condition elements
in an IAM permissions policy. S3 Glacier–specific condition keys are applicable only when granting S3
Glacier–specific permissions. S3 Glacier condition key names have the prefix glacier:. The following
table shows the S3 Glacier condition keys that apply to S3 Glacier resources.
For examples of using the S3 Glacier–specific condition keys, see Amazon S3 Glacier Access Control with
Vault Lock Policies (p. 136).
Related Topics
• Using Identity-Based Policies for Amazon S3 Glacier (IAM Policies) (p. 130)
• Using Resource-Based Policies for Amazon S3 Glacier (Vault Policies) (p. 134)
• Amazon S3 Glacier API Permissions: Actions, Resources, and Conditions Reference (p. 138)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glacier:CreateVault",
"glacier:DescribeVault",
"glacier:ListVaults"
],
"Resource": "arn:aws:glacier:us-west-2:123456789012:vaults/*"
}
]
}
The policy grants permissions for three S3 Glacier vault-related actions (glacier:CreateVault,
glacier:DescribeVault and glacier:ListVaults), on a resource using the Amazon Resource
Name (ARN) that identifies all of the vaults in the us-west-2 AWS Region.
The wildcard character (*) at the end of the ARN means that this statement can match any vault name.
The statement allows the glacier:DescribeVault action on any vault in the specified AWS Region,
us-west-2. If you want to limit permissions for this action to a specific vault only, you replace the
wildcard character (*) with a vault name.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"glacier:ListVaults"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
Both of the AWS managed policies for S3 Glacier discussed in the next section grant permissions for
glacier:ListVaults.
AWS managed policies are designed to provide permissions for many common use cases, so you can
avoid having to investigate what permissions are needed. For more information, see AWS Managed
Policies in the IAM User Guide.
The following AWS managed policies, which you can attach to users in your account, are specific to S3
Glacier:
Note
You can review these permissions policies by signing in to the IAM console and searching for
specific policies there.
You can also create your own custom IAM policies to allow permissions for S3 Glacier API actions
and resources. You can attach these custom policies to the IAM users or groups that require those
permissions or to custom execution roles (IAM roles) that you create for your S3 Glacier vaults.
Examples
• Example 1: Allow a User to Download Archives from a Vault (p. 132)
• Example 2: Allow a User to Create a Vault and Configure Notifications (p. 133)
• Example 3: Allow a User to Upload Archives to a Specific Vault (p. 133)
• Example 4: Allow a User Full Permissions on a Specific Vault (p. 133)
The policy grants these permissions on a vault named examplevault. You can get the vault ARN
from the Amazon S3 Glacier console, or programmatically by calling either the Describe Vault (GET
vault) (p. 194) or the List Vaults (GET vaults) (p. 210) API actions.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "arn:aws:glacier:us-west-2:123456789012:vaults/examplevault",
"Action":["glacier:InitiateJob",
"glacier:GetJobOutput",
"glacier:DescribeJob"]
}
]
}
{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "arn:aws:glacier:us-west-2:123456789012:vaults/*",
"Action":["glacier:CreateVault",
"glacier:SetVaultNotifications",
"glacier:GetVaultNotifications",
"glacier:DeleteVaultNotifications",
"glacier:DescribeVault",
"glacier:ListVaults"]
}
]
}
{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "arn:aws:glacier:us-west-2:123456789012:vaults/examplevault",
"Action":["glacier:UploadArchive",
"glacier:InitiateMultipartUpload",
"glacier:UploadMultipartPart",
"glacier:ListParts",
"glacier:ListMultipartUploads",
"glacier:CompleteMultipartUpload"]
}
]
}
{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "arn:aws:glacier:us-west-2:123456789012:vaults/examplevault",
"Action":["glacier:*"]
}
]
}
An S3 Glacier vault can have one vault access policy and one Vault Lock policy associated with it. An
S3 Glacier vault access policy is a resource-based policy that you can use to manage permissions
to your vault. A Vault Lock policy is a vault access policy that can be locked. After you lock a Vault Lock
policy, the policy can't be changed. You can use a Vault Lock policy to enforce compliance controls.
Topics
• Amazon S3 Glacier Access Control with Vault Access Policies (p. 134)
• Amazon S3 Glacier Access Control with Vault Lock Policies (p. 136)
You can create one vault access policy for each vault to manage permissions. You can modify permissions
in a vault access policy at any time. S3 Glacier also supports a Vault Lock policy on each vault that, after
you lock it, cannot be altered. For more information about working with Vault Lock policies, see Amazon
S3 Glacier Access Control with Vault Lock Policies (p. 136).
You can use the S3 Glacier API, AWS SDKs, AWS CLI, or the S3 Glacier console to create and manage vault
access policies. For a list of operations allowed for vault access resource-based policies, see Amazon S3
Glacier API Permissions: Actions, Resources, and Conditions Reference (p. 138).
Examples
• Example 1: Grant Cross-Account Permissions for Specific Amazon S3 Glacier Actions (p. 135)
• Example 2: Grant Cross-Account Permissions for MFA Delete Operations (p. 135)
{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"cross-account-upload",
"Principal": {
"AWS": [
"arn:aws:iam::123456789012:root",
"arn:aws:iam::444455556666:root"
]
},
"Effect":"Allow",
"Action": [
"glacier:UploadArchive",
"glacier:InitiateMultipartUpload",
"glacier:AbortMultipartUpload",
"glacier:CompleteMultipartUpload"
],
"Resource": [
"arn:aws:glacier:us-west-2:999999999999:vaults/examplevault"
]
}
]
}
The example policy grants an AWS account with temporary credentials permission to delete archives
from a vault named examplevault, provided the request is authenticated with an MFA device. The policy
uses the aws:MultiFactorAuthPresent condition key to specify this additional requirement. For
more information, see Available Keys for Conditions in the IAM User Guide.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "add-mfa-delete-requirement",
"Principal": {
"AWS": [
"arn:aws:iam::123456789012:root"
]
},
"Effect": "Allow",
"Action": [
"glacier:Delete*"
],
"Resource": [
"arn:aws:glacier:us-west-2:999999999999:vaults/examplevault"
],
"Condition": {
"Bool": {
"aws:MultiFactorAuthPresent": true
}
}
}
]
}
Related Sections
As an example of a Vault Lock policy, suppose that you are required to retain archives for one year before
you can delete them. To implement this requirement, you can create a Vault Lock policy that denies
users permissions to delete an archive until the archive has existed for one year. You can test this policy
before locking it down. After you lock the policy, the policy becomes immutable. For more information
about the locking process, see Amazon S3 Glacier Vault Lock (p. 65). If you want to manage other user
permissions that can be changed, you can use the vault access policy (see Amazon S3 Glacier Access
Control with Vault Access Policies (p. 134)).
You can use the S3 Glacier API, AWS SDKs, AWS CLI, or the S3 Glacier console to create and manage
Vault Lock policies. For a list of S3 Glacier actions allowed for vault resource-based policies, see Amazon
S3 Glacier API Permissions: Actions, Resources, and Conditions Reference (p. 138).
Examples
• Example 1: Deny Deletion Permissions for Archives Less Than 365 Days Old (p. 136)
• Example 2: Deny Deletion Permissions Based on a Tag (p. 137)
Example 1: Deny Deletion Permissions for Archives Less Than 365 Days Old
Suppose that you have a regulatory requirement to retain archives for one year before you
can delete them. You can enforce that requirement by implementing the following Vault Lock
policy. The policy denies the glacier:DeleteArchive action on the examplevault vault if the
archive being deleted is less than one year old. The policy uses the S3 Glacier-specific condition key
ArchiveAgeInDays to enforce the one-year retention requirement.
{
"Version":"2012-10-17",
"Statement":[
{
"Sid": "deny-based-on-archive-age",
"Principal": "*",
"Effect": "Deny",
"Action": "glacier:DeleteArchive",
"Resource": [
"arn:aws:glacier:us-west-2:123456789012:vaults/examplevault"
],
"Condition": {
"NumericLessThan" : {
"glacier:ArchiveAgeInDays" : "365"
}
}
}
]
}
To put these two rules in place, the following example policy has two statements:
• The first statement denies deletion permissions for everyone, locking the vault. This lock is performed
by using the LegalHold tag.
• The second statement grants deletion permissions when the archive is less than 365 days old. But
even when archives are less than 365 days old, no one can delete them when the condition in the first
statement is met.
{
"Version":"2012-10-17",
"Statement":[
{
"Sid": "lock-vault",
"Principal": "*",
"Effect": "Deny",
"Action": [
"glacier:DeleteArchive"
],
"Resource": [
"arn:aws:glacier:us-west-2:123456789012:vaults/examplevault"
],
"Condition": {
"StringLike": {
"glacier:ResourceTag/LegalHold": [
"true",
""
]
}
}
},
{
"Sid": "you-can-delete-archive-less-than-1-year-old",
"Principal": {
"AWS": "arn:aws:iam::123456789012:root"
},
"Effect": "Allow",
"Action": [
"glacier:DeleteArchive"
],
"Resource": [
"arn:aws:glacier:us-west-2:123456789012:vaults/examplevault"
],
"Condition": {
"NumericLessThan": {
"glacier:ArchiveAgeInDays": "365"
}
}
}
]
}
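The interplay of the two statements above, a Deny keyed on the LegalHold tag plus an age-conditioned Allow, with Deny taking precedence, can be sketched as a small predicate. This illustrates the policy's logic only; it is not how S3 Glacier actually evaluates requests:

```python
def may_delete_archive(age_in_days: int, legal_hold_tag: str) -> bool:
    """Mirror the two-statement Vault Lock example: the Deny statement matches
    when the LegalHold tag is "true" or empty, and the Allow statement matches
    only while the archive is less than 365 days old. Deny overrides Allow."""
    denied = legal_hold_tag in ("true", "")   # statement "lock-vault"
    allowed = age_in_days < 365               # statement "you-can-delete-archive-less-than-1-year-old"
    return allowed and not denied

print(may_delete_archive(100, "false"))  # True: young archive, no legal hold
print(may_delete_archive(100, "true"))   # False: the legal hold denies deletion
```

As the sketch shows, setting the LegalHold tag blocks deletion regardless of age, and once an archive reaches 365 days old no statement allows deletion, so it becomes permanently retained under this policy.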
Related Sections
You specify the actions in the policy's Action element, and you specify the resource value in the policy's
Resource element. Also, you can use the IAM policy language Condition element to specify when a
policy should take effect.
To specify an action, use the glacier: prefix followed by the API operation name (for example,
glacier:CreateVault). For most S3 Glacier actions, Resource is the vault on which you want to
grant the permissions. You specify a vault as the Resource value by using the vault ARN. To express
conditions, you use predefined condition keys. For more information, see Overview of Managing Access
Permissions to Your Amazon S3 Glacier Resources (p. 126).
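Because a policy document is just JSON built from the Action, Resource, and Condition elements described above, you can assemble one programmatically. The following is a minimal sketch; the account ID and vault ARN are placeholders:

```python
import json

def make_vault_policy(actions, vault_arn, condition=None):
    """Assemble a minimal IAM policy document from an action list, a vault ARN,
    and an optional Condition element."""
    statement = {"Effect": "Allow", "Action": list(actions), "Resource": vault_arn}
    if condition:
        statement["Condition"] = condition
    return json.dumps({"Version": "2012-10-17", "Statement": [statement]}, indent=2)

print(make_vault_policy(
    ["glacier:CreateVault", "glacier:DescribeVault"],
    "arn:aws:glacier:us-west-2:123456789012:vaults/*",
))
```

Generating policies this way helps avoid JSON syntax errors, but the resulting document must still use valid action names and ARN formats for the service.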
The following table lists actions that can be used with identity-based policies and resource-based
policies.
Note
Some actions can only be used with identity-based policies. These actions are marked by a red
asterisk (*) after the name of the API operation in the first column.
(The table's action-name column did not survive conversion; only the resource column remains.) In the
table, vault-scoped actions apply to resource ARNs of the forms arn:aws:glacier:region:account-
id:vaults/vault-name, arn:aws:glacier:region:account-id:vaults/example*, and
arn:aws:glacier:region:account-id:vaults/*. Data retrieval policy actions apply to
arn:aws:glacier:region:account-id:policies/retrieval-limit-policy, and account-level actions take no
resource.
When using S3 Glacier via Amazon S3, you can use Amazon CloudWatch alarms to watch a single
metric over a time period that you specify. If the metric exceeds a given threshold, a notification is
sent to an Amazon SNS topic or AWS Auto Scaling policy. CloudWatch alarms do not invoke actions
simply because they are in a particular state; rather, the state must have changed and been maintained
for a specified number of periods. For more information, see Monitoring Metrics with Amazon
CloudWatch.
AWS CloudTrail Logs
CloudTrail provides a record of actions taken by a user, role, or an AWS service in S3 Glacier.
CloudTrail captures all API calls for S3 Glacier as events, including calls from the S3 Glacier console
and from code calls to the S3 Glacier APIs. For more information, see Logging Amazon S3 Glacier API
Calls with AWS CloudTrail (p. 157).
AWS Trusted Advisor
Trusted Advisor draws upon best practices learned from serving hundreds of thousands of AWS
customers. Trusted Advisor inspects your AWS environment and then makes recommendations
when opportunities exist to save money, improve system availability and performance, or help
close security gaps. All AWS customers have access to five Trusted Advisor checks. Customers with a
Business or Enterprise support plan can view all Trusted Advisor checks.
For more information, see AWS Trusted Advisor in the AWS Support User Guide.
AWS provides a frequently updated list of AWS services in scope of specific compliance programs at AWS
Services in Scope by Compliance Program.
Third-party audit reports are available for you to download using AWS Artifact. For more information,
see Downloading Reports in AWS Artifact in the AWS Artifact User Guide.
For more information about AWS compliance programs, see AWS Compliance Programs.
Your compliance responsibility when using S3 Glacier is determined by the sensitivity of your data, your
organization’s compliance objectives, and applicable laws and regulations. If your use of S3 Glacier is
subject to compliance with standards like HIPAA, PCI, or FedRAMP, AWS provides resources to help:
• Amazon S3 Glacier Vault Lock (p. 65) allows you to easily deploy and enforce compliance controls for
individual S3 Glacier vaults with a vault lock policy. You can specify controls such as “write once read
many” (WORM) in a vault lock policy and lock the policy from future edits. After the policy is locked, it
can no longer be changed. Vault lock policies can help you comply with regulatory frameworks such as
SEC17a-4 and HIPAA.
• Security and Compliance Quick Start Guides discuss architectural considerations and steps for
deploying security- and compliance-focused baseline environments on AWS.
• Architecting for HIPAA Security and Compliance whitepaper outlines how companies use AWS to help
them meet HIPAA requirements.
• The AWS Well-Architected Tool (AWS WA Tool) is a service in the cloud that provides a consistent
process for you to review and measure your architecture using AWS best practices. The AWS WA Tool
provides recommendations for making your workloads more reliable, secure, efficient, and cost-
effective.
• AWS Compliance Resources provide several different workbooks and guides that might apply to your
industry and location.
• AWS Config can help you assess how well your resource configurations comply with internal practices,
industry guidelines, and regulations.
• AWS Security Hub provides you with a comprehensive view of your security state within AWS and helps
you check your compliance with security industry standards and best practices.
For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.
Access to S3 Glacier via the network is through AWS published APIs. Clients must support Transport
Layer Security (TLS) 1.0. We recommend TLS 1.2 or later. Clients must also support cipher suites with
Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Diffie-Hellman
Ephemeral (ECDHE). Most modern systems such as Java 7 and later support these modes. Additionally,
requests must be signed using an access key ID and a secret access key that is associated with an IAM
principal, or you can use the AWS Security Token Service (AWS STS) to generate temporary security
credentials to sign requests.
VPC Endpoints
A virtual private cloud (VPC) endpoint enables you to privately connect your VPC to supported AWS
services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway,
NAT device, VPN connection, or AWS Direct Connect connection. Although S3 Glacier does not support
VPC endpoints directly, you can take advantage of Amazon S3 VPC endpoints if you access S3 Glacier as
a storage tier integrated with Amazon S3.
For more information about Amazon S3 lifecycle configuration and transitioning objects to the S3
Glacier storage class, see Object Lifecycle Management and Transitioning Objects in the Amazon Simple
Storage Service Developer Guide. For more information about VPC endpoints, see VPC Endpoints in the
Amazon VPC User Guide.
When you perform select queries, S3 Glacier provides three data access tiers—expedited, standard, and
bulk. All of these tiers provide different data access times and costs, and you can choose any one of them
depending on how quickly you want your data to be available. For all but the largest archives (250 MB+),
data that is accessed using the expedited tier is typically made available within 1–5 minutes. The
standard tier finishes within 3–5 hours. The bulk retrievals finish within 5–12 hours. For information
about tier pricing, see S3 Glacier Pricing.
You can use S3 Glacier Select with the AWS SDKs, the S3 Glacier REST API, and the AWS Command Line
Interface (AWS CLI).
Topics
• S3 Glacier Select Requirements and Quotas (p. 148)
• How Do I Query Data Using S3 Glacier Select? (p. 148)
• Error Handling (p. 150)
• More Info (p. 150)
• Archive objects that are queried by S3 Glacier Select must be formatted as uncompressed comma-
separated values (CSV).
• You must have an S3 bucket to work with. In addition, the AWS account that you use to initiate an S3
Glacier Select job must have write permissions for the S3 bucket. The Amazon S3 bucket must be in
the same AWS Region as the vault that contains the archive object that is being queried.
• You must have permission to call Get Job Output (GET output) (p. 257).
• There are no quotas on the number of records that S3 Glacier Select can process. An input or output
record must not exceed 1 MB; otherwise, the query fails. There is a quota of 1,048,576 columns per
record.
• There is no quota on the size of your final result. However, your results are broken into multiple parts.
• An SQL expression is limited to 128 KB.
With S3 Glacier Select, you can run SQL queries directly on CSV-based data in S3 Glacier. For example,
you might look for a specific name or ID among a set of archive text files.
To query your S3 Glacier data, create a select job using the Initiate Job (POST jobs) (p. 263) operation.
When initiating a select job, you provide the SQL expression, the archive to query, and the location to
store the results in Amazon S3.
The following example expression returns all records from the archive specified by the archive ID in
Initiate Job (POST jobs) (p. 263).
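A minimal such expression (S3 Glacier Select references the queried archive as archive in the FROM
clause) is:

```sql
SELECT * FROM archive
```

You pass this string in the SelectParameters of the Initiate Job request; the archive itself is identified by the archive ID in the job, not in the SQL.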
S3 Glacier Select supports a subset of the ANSI SQL language. It supports common filtering SQL clauses
like SELECT, FROM, and WHERE. It does not support SUM, COUNT, GROUP BY, JOIN, DISTINCT, UNION,
ORDER BY, or LIMIT. For more information about support for SQL, see SQL Reference for Amazon S3
Select and S3 Glacier Select (p. 304).
You can specify the S3 storage class and encryption for the output objects stored in Amazon S3. S3
Glacier Select supports SSE-KMS and SSE-S3 encryption. S3 Glacier Select doesn't support SSE-C and
client-side encryption. For more information about Amazon S3 storage classes and encryption, see
Storage Classes and Protecting Data Using Server-Side Encryption in the Amazon Simple Storage Service
Developer Guide.
S3 Glacier Select results are stored in the S3 bucket using the prefix provided in the output location
specified in Initiate Job (POST jobs) (p. 263). From this information, S3 Glacier Select creates a unique
prefix referring to the job ID. This job ID prefix is returned in the x-amz-job-output-path header in an
Initiate Job (POST jobs) (p. 263) response. (Prefixes are used to group S3 objects together by beginning
object names with a common string.) Under this unique prefix, there are two new prefixes created,
results for results and errors for logs and errors. Upon completion of the job, a result manifest is
written which contains the location of all results.
There is also a placeholder file named job.txt that is written to the output location. After it is written,
it is never updated. The placeholder file is used for the following:
• Validating write permissions and the majority of SQL syntax errors synchronously.
• Providing a static output, similar to Describe Job (GET JobID) (p. 250), that you can easily reference
whenever you want.
For example, suppose that you initiate an S3 Glacier Select job with the output location for the
results specified as s3://example-bucket/my-prefix, and the job response returns the job ID as
examplekne1209ualkdjh812elkassdu9012e. After the select job finishes, you can see the following
Amazon S3 objects in your bucket:
s3://example-bucket/my-prefix/examplekne1209ualkdjh812elkassdu9012e/job.txt
s3://example-bucket/my-prefix/examplekne1209ualkdjh812elkassdu9012e/results/abc
s3://example-bucket/my-prefix/examplekne1209ualkdjh812elkassdu9012e/results/def
s3://example-bucket/my-prefix/examplekne1209ualkdjh812elkassdu9012e/results/ghi
s3://example-bucket/my-prefix/examplekne1209ualkdjh812elkassdu9012e/result_manifest.txt
The select query results are broken into multiple parts. In the example, S3 Glacier Select uses the prefix
that you specified when setting the output location and appends the job ID and the results prefix.
It then writes the results in three parts, with the object names ending in abc, def, and ghi. The result
manifest lists all three files to allow programmatic retrieval. If the job fails with any error, then a
file is visible under the errors prefix, and an error_manifest.txt file is produced.
Error Handling
S3 Glacier Select notifies you of two kinds of errors. The first set of errors is sent to you synchronously
when you submit the query in Initiate Job (POST jobs) (p. 263). These errors are sent to you as part of
the HTTP response. Another set of errors can occur after the query has been accepted successfully, but
they happen during query execution. In this case, the errors are written to the specified output location
under the errors prefix.
S3 Glacier Select will stop executing the query after encountering an error. To run the query successfully,
you must resolve all errors. You can check the logs to identify which records caused a failure.
Because queries run in parallel across multiple compute nodes, the errors that you get are not in
sequential order. For example, if your query fails with an error in row 6234, it does not mean that all
rows before row 6234 were successfully processed. The next run of the query might show an error in a
different row.
More Info
• Initiate Job (POST jobs) (p. 263)
• Describe Job (GET JobID) (p. 250)
• List Jobs (GET jobs) (p. 273)
• Working with Archives in Amazon S3 Glacier (p. 67)
Topics
• Choosing an Amazon S3 Glacier Data Retrieval Policy (p. 151)
• Using the Amazon S3 Glacier Console to Set Up a Data Retrieval Policy (p. 152)
• Using the Amazon S3 Glacier API to Set Up a Data Retrieval Policy (p. 153)
By using a Free Tier Only policy, you can keep your retrievals within your daily free tier allowance and
not incur any data retrieval cost. If you want to retrieve more data than the free tier, you can use a Max
Retrieval Rate policy to set a bytes-per-hour retrieval rate quota. The Max Retrieval Rate policy ensures
that the peak retrieval rate from all retrieval jobs across your account in an AWS Region does not exceed
the bytes-per-hour quota you set.
With both Free Tier Only and Max Retrieval Rate policies, data retrieval requests that would exceed the
retrieval quotas you specified will not be accepted. If you use a Free Tier Only policy, S3 Glacier will
synchronously reject retrieval requests that would exceed your free tier allowance. If you use a Max
Retrieval Rate policy, S3 Glacier will reject retrieval requests that would cause the peak retrieval rate
of the in-progress jobs to exceed the bytes-per-hour quota set by the policy. These policies help you
simplify data retrieval cost management.
The following are some useful facts about data retrieval policies:
• Data retrieval policy settings do not change the 3 to 5 hour period that it takes to retrieve data from
S3 Glacier using standard retrievals.
• Setting a new data retrieval policy does not affect previously accepted retrieval jobs that are already in
progress.
• If a retrieval job request is rejected because of a data retrieval policy, you will not be charged for the
job or the request.
• You can set one data retrieval policy for each AWS Region, which will govern all data retrieval activities
in the AWS Region under your account. A data retrieval policy is specific to a particular AWS Region
because data retrieval costs vary across AWS Regions. For more information, see S3 Glacier pricing.
You set the data retrieval policy to Free Tier Only for a particular AWS Region. Once the policy is set, you
cannot retrieve more data in a day than your prorated daily free retrieval allowance for that AWS Region
and you will not incur data retrieval fees.
You can switch to a Free Tier Only policy after you have incurred data retrieval charges within a month.
The Free Tier Only policy will take effect for new retrieval requests, but will not affect past requests. You
will be billed for the previously incurred charges.
Setting your data retrieval policy to the Max Retrieval Rate policy can affect how much free tier you can
use in a day. For example, suppose you set Max Retrieval Rate to 1 MB per hour. This is less than the free
tier policy rate of 14 MB per hour. To ensure you make good use of the daily free tier allowance, you can
first set your policy to Free Tier Only and then switch to the Max Retrieval Rate policy later if you need
to. For more information about how your retrieval allowance is calculated, see the S3 Glacier FAQs.
You can select one of the three data retrieval policies: Free Tier Only, Max Retrieval Rate, or No
Retrieval Limit. If you click Max Retrieval Rate, you'll need to specify a value in the GB/Hour box. When
you type a value in GB/Hour, the console will calculate an estimated cost for you. Click No Retrieval
Limit if you don't want any restrictions placed on the rate of your data retrievals.
You can configure a data retrieval policy for each AWS Region. Each policy will take effect within a few
minutes after you click Save.
When using the PUT policy operation, you select the data retrieval policy type by setting the JSON
Strategy field value to BytesPerHour, FreeTier, or None. BytesPerHour is equivalent to selecting
Max Retrieval Rate in the console, FreeTier to selecting Free Tier Only, and None to selecting No
Retrieval Limit.
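For example, a PUT policy request body that sets a Max Retrieval Rate of 10 GB per hour might look
like the following (a sketch based on the Set Data Retrieval Policy API; verify the field names against
the API reference):

```json
{
  "Policy": {
    "Rules": [
      {
        "Strategy": "BytesPerHour",
        "BytesPerHour": 10737418240
      }
    ]
  }
}
```

Here 10737418240 is 10 GB expressed in bytes, as the BytesPerHour field requires.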
When you use the Initiate Job (POST jobs) (p. 263) operation to initiate a data retrieval job that will
exceed the maximum retrieval rate set in your data retrieval policy, the Initiate Job operation will stop
and throw an exception.
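The exception thrown in this case is a PolicyEnforcedException. The error response body takes roughly
the following form (the message text here is illustrative, not the exact wording the service returns):

```json
{
  "code": "PolicyEnforcedException",
  "message": "InitiateJob request denied by current data retrieval policy.",
  "type": "Client"
}
```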
Topics
• Tagging Basics (p. 155)
• Tag Restrictions (p. 155)
• Tracking Costs Using Tagging (p. 156)
• Managing Access Control with Tagging (p. 156)
• Related Sections (p. 156)
Tagging Basics
You can use the S3 Glacier console, AWS Command Line Interface (AWS CLI), or S3 Glacier API to add,
list, and remove tags for your vaults.
For information about how to add, list, and remove tags, see Tagging Your Amazon S3 Glacier
Vaults (p. 64).
You can use tags to categorize your vaults. For example, you can categorize vaults by purpose, owner,
or environment. Because you define the key and value for each tag, you can create a custom set of
categories to meet your specific needs. For example, you might define a set of tags that helps you track
vaults by owner and purpose for the vault. Following are a few examples of tags:
• Owner: Name
• Purpose: Video archives
• Environment: Production
Tag Restrictions
Basic tag restrictions are as follows:
• Within a set of tags for a vault, each tag key must be unique. If you add a tag with a key that's already
in use, your new tag overwrites the existing key-value pair.
• Tag keys cannot start with aws: because this prefix is reserved for use by AWS. AWS can create tags
that begin with this prefix on your behalf, but you can't edit or delete them.
• Tag keys must be from 1 to 128 Unicode characters in length.
• Tag keys must consist of the following characters: Unicode letters, digits, white space, and the
following special characters: _ . / = + - @.
Related Sections
• Tagging Your Amazon S3 Glacier Vaults (p. 64)
To learn more about CloudTrail, see the AWS CloudTrail User Guide.
For an ongoing record of events in your AWS account, including events for S3 Glacier, create a trail. A
trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail
in the console, the trail applies to all AWS Regions. The trail logs events from all AWS Regions in the
AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can
configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
For more information, see:
All S3 Glacier actions are logged by CloudTrail and are documented in the API Reference for Amazon
S3 Glacier (p. 160). For example, calls to the Create Vault (PUT vault) (p. 185), Delete Vault (DELETE
vault) (p. 189), and List Vaults (GET vaults) (p. 210) actions generate entries in the CloudTrail log files.
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or IAM user credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
The following example shows a CloudTrail log entry that demonstrates the Create Vault (PUT
vault) (p. 185), Delete Vault (DELETE vault) (p. 189), List Vaults (GET vaults) (p. 210), and Describe
Vault (GET vault) (p. 194) actions.
{
  "Records": [
    {
      "awsRegion": "us-east-1",
      "eventID": "52f8c821-002e-4549-857f-8193a15246fa",
      "eventName": "CreateVault",
      "eventSource": "glacier.amazonaws.com",
      "eventTime": "2014-12-10T19:05:15Z",
      "eventType": "AwsApiCall",
      "eventVersion": "1.02",
      "recipientAccountId": "999999999999",
      "requestID": "HJiLgvfXCY88QJAC6rRoexS9ThvI21Q1Nqukfly02hcUPPo",
      "requestParameters": {
        "accountId": "-",
        "vaultName": "myVaultName"
      },
      "responseElements": {
        "location": "/999999999999/vaults/myVaultName"
      },
      "sourceIPAddress": "127.0.0.1",
      "userAgent": "aws-sdk-java/1.9.6 Mac_OS_X/10.9.5 Java_HotSpot(TM)_64-Bit_Server_VM/25.25-b02/1.8.0_25",
      "userIdentity": {
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "accountId": "999999999999",
        "arn": "arn:aws:iam::999999999999:user/myUserName",
        "principalId": "A1B2C3D4E5F6G7EXAMPLE",
        "type": "IAMUser",
        "userName": "myUserName"
      }
    },
    {
      "awsRegion": "us-east-1",
      "eventID": "cdd33060-4758-416a-b7b9-dafd3afcec90",
      "eventName": "DeleteVault",
      "eventSource": "glacier.amazonaws.com",
      "eventTime": "2014-12-10T19:05:15Z",
      "eventType": "AwsApiCall",
      "eventVersion": "1.02",
      "recipientAccountId": "999999999999",
      "requestID": "GGdw-VfhVfLCFwAM6iVUvMQ6-fMwSqSO9FmRd0eRSa_Fc7c",
      "requestParameters": {
        "accountId": "-",
        "vaultName": "myVaultName"
      },
      "responseElements": null,
      "sourceIPAddress": "127.0.0.1",
      "userAgent": "aws-sdk-java/1.9.6 Mac_OS_X/10.9.5 Java_HotSpot(TM)_64-Bit_Server_VM/25.25-b02/1.8.0_25",
      "userIdentity": {
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "accountId": "999999999999",
        "arn": "arn:aws:iam::999999999999:user/myUserName",
        "principalId": "A1B2C3D4E5F6G7EXAMPLE",
        "type": "IAMUser",
        "userName": "myUserName"
      }
    },
    {
      "awsRegion": "us-east-1",
      "eventID": "355750b4-e8b0-46be-9676-e786b1442470",
      "eventName": "ListVaults",
      "eventSource": "glacier.amazonaws.com",
      "eventTime": "2014-12-10T19:05:15Z",
      "eventType": "AwsApiCall",
      "eventVersion": "1.02",
      "recipientAccountId": "999999999999",
      "requestID": "yPTs22ghTsWprFivb-2u30FAaDALIZP17t4jM_xL9QJQyVA",
      "requestParameters": {
        "accountId": "-"
      },
      "responseElements": null,
      "sourceIPAddress": "127.0.0.1",
      "userAgent": "aws-sdk-java/1.9.6 Mac_OS_X/10.9.5 Java_HotSpot(TM)_64-Bit_Server_VM/25.25-b02/1.8.0_25",
      "userIdentity": {
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "accountId": "999999999999",
        "arn": "arn:aws:iam::999999999999:user/myUserName",
        "principalId": "A1B2C3D4E5F6G7EXAMPLE",
        "type": "IAMUser",
        "userName": "myUserName"
      }
    },
    {
      "awsRegion": "us-east-1",
      "eventID": "569e830e-b075-4444-a826-aa8b0acad6c7",
      "eventName": "DescribeVault",
      "eventSource": "glacier.amazonaws.com",
      "eventTime": "2014-12-10T19:05:15Z",
      "eventType": "AwsApiCall",
      "eventVersion": "1.02",
      "recipientAccountId": "999999999999",
      "requestID": "QRt1ZdFLGn0TCm784HmKafBmcB2lVaV81UU3fsOR3PtoIiM",
      "requestParameters": {
        "accountId": "-",
        "vaultName": "myVaultName"
      },
      "responseElements": null,
      "sourceIPAddress": "127.0.0.1",
      "userAgent": "aws-sdk-java/1.9.6 Mac_OS_X/10.9.5 Java_HotSpot(TM)_64-Bit_Server_VM/25.25-b02/1.8.0_25",
      "userIdentity": {
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "accountId": "999999999999",
        "arn": "arn:aws:iam::999999999999:user/myUserName",
        "principalId": "A1B2C3D4E5F6G7EXAMPLE",
        "type": "IAMUser",
        "userName": "myUserName"
      }
    }
  ]
}
You can use any programming library that can send HTTP requests to send your REST requests to S3
Glacier. When sending a REST request, S3 Glacier requires that you authenticate every request by signing
the request. Additionally, when uploading an archive, you must also compute the checksum of the
payload and include it in your request. For more information, see Signing Requests (p. 163).
If an error occurs, you need to know what S3 Glacier sends in an error response so that you can process
it. This section provides all this information, in addition to documenting the REST operations, so that you
can make REST API calls directly.
You can either use the REST API calls directly or use the AWS SDKs that provide wrapper libraries to
simplify your coding task. These libraries sign each request you send and compute the checksum of the
payload in your request. Therefore, using the AWS SDKs simplifies your coding task. This developer guide
provides working examples of basic S3 Glacier operations using the AWS SDK for Java and .NET. For more
information see, Using the AWS SDKs with Amazon S3 Glacier (p. 116).
Topics
• Common Request Headers (p. 160)
• Common Response Headers (p. 162)
• Signing Requests (p. 163)
• Computing Checksums (p. 166)
• Error Responses (p. 176)
• Vault Operations (p. 180)
• Archive Operations (p. 222)
• Multipart Upload Operations (p. 228)
• Job Operations (p. 249)
• Data Types Used in Job Operations (p. 279)
• Data Retrieval Operations (p. 292)
Content-Length
The length of the request body (without the headers).
Type: String
Required: Conditional

Date
The date that can be used to create the signature contained in the Authorization header. If the
Date header is to be used for signing, it must be specified in the ISO 8601 basic format. In this
case, the x-amz-date header is not needed. Note that when x-amz-date is present, it always
overrides the value of the Date header.
Type: String
Required: Conditional

x-amz-glacier-version
The S3 Glacier API version to use. The current version is 2012-06-01.
Type: String
Required: Yes
Date
The date and time Amazon S3 Glacier (S3 Glacier) responded, for example, Wed, 10 Feb 2017
12:00:00 GMT. The format of the date must be one of the full date formats specified by RFC 2616,
section 3.3. Note that the Date returned may drift slightly from other dates; for example, the date
returned from an Upload Archive (POST archive) (p. 224) request may not match the date shown for
the archive in an inventory list for the vault.
Type: String

x-amzn-RequestId
A value created by S3 Glacier that uniquely identifies your request. In the event that you have a
problem with S3 Glacier, AWS can use this value to troubleshoot the problem. It is recommended
that you log these values.
Type: String

x-amz-sha256-tree-hash
The SHA256 tree-hash checksum of the archive or inventory body. For more information about
calculating this checksum, see Computing Checksums (p. 166).
Type: String
Signing Requests
S3 Glacier requires that you authenticate every request you send by signing the request. To sign a
request, you calculate a digital signature using a cryptographic hash function. A cryptographic hash is a
function that returns a unique hash value based on the input. The input to the hash function includes the
text of your request and your secret access key. The hash function returns a hash value that you include
in the request as your signature. The signature is part of the Authorization header of your request.
After receiving your request, S3 Glacier recalculates the signature using the same hash function and
input that you used to sign the request. If the resulting signature matches the signature in the request,
S3 Glacier processes the request. Otherwise, the request is rejected.
S3 Glacier supports authentication using AWS Signature Version 4. The process for calculating a
signature can be broken into three tasks:
• Task 1: Create a Canonical Request
Rearrange your HTTP request into a canonical format. Using a canonical form is necessary because S3
Glacier uses the same canonical form when it recalculates a signature to compare with the one you
sent.
• Task 2: Create a String to Sign
Create a string that you will use as one of the input values to your cryptographic hash function. The
string, called the string to sign, is a concatenation of the name of the hash algorithm, the request date,
a credential scope string, and the canonicalized request from the previous task. The credential scope
string itself is a concatenation of date, AWS Region, and service information.
• Task 3: Create a Signature
Create a signature for your request by using a cryptographic hash function that accepts two input
strings: your string to sign and a derived key. The derived key is calculated by starting with your
secret access key and using the credential scope string to create a series of hash-based message
authentication codes (HMACs). Note that the hash function used in this signing step is not the tree-
hash algorithm used in S3 Glacier APIs that upload data.
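The key derivation in Task 3 can be sketched in Java as follows. The class and method names here are
illustrative (they are not part of any AWS SDK); the sketch assumes HMAC-SHA256 at each step of the
chain, keyed first with "AWS4" prepended to the secret access key.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class SigV4Key {
    // One HMAC-SHA256 step of the Signature Version 4 key derivation.
    static byte[] hmacSHA256(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // Derive the signing key from the secret access key and the
    // credential scope components (date stamp, Region, service name).
    public static byte[] deriveSigningKey(String secretKey, String dateStamp,
                                          String region, String service) throws Exception {
        byte[] kDate = hmacSHA256(("AWS4" + secretKey).getBytes(StandardCharsets.UTF_8), dateStamp);
        byte[] kRegion = hmacSHA256(kDate, region);
        byte[] kService = hmacSHA256(kRegion, service);
        return hmacSHA256(kService, "aws4_request");
    }

    // Hex-encode a byte array for display.
    public static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```

Applied to the example that follows (date stamp 20120525, Region us-east-1, service glacier), this chain produces the derived key shown in that example.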
Topics
• Example Signature Calculation (p. 164)
• Calculating Signatures for the Streaming Operations (p. 165)
• The time stamp of the request is Fri, 25 May 2012 00:24:53 GMT.
• The endpoint is US East (N. Virginia) Region us-east-1.
The canonical form of the request calculated for Task 1: Create a Canonical Request (p. 163) is:
PUT
/-/vaults/examplevault
host:glacier.us-east-1.amazonaws.com
x-amz-date:20120525T002453Z
x-amz-glacier-version:2012-06-01
host;x-amz-date;x-amz-glacier-version
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
The last line of the canonical request is the hash of the request body. Also, note the empty third line in
the canonical request. This is because there are no query parameters for this API.
The string to sign for Task 2: Create a String to Sign (p. 163) is:
AWS4-HMAC-SHA256
20120525T002453Z
20120525/us-east-1/glacier/aws4_request
5f1da1a2d0feb614dd03d71e87928b8e449ac87614479332aced3a701f916743
The first line of the string to sign is the algorithm, the second line is the time stamp, the third line is
the credential scope, and the last line is a hash of the canonical request from Task 1: Create a Canonical
Request (p. 163). The service name to use in the credential scope is glacier.
For Task 3: Create a Signature (p. 163), the derived key can be represented as:
3ce5b2f2fffac9262b4da9256f8d086b4aaf42eba5f111c21681a65a127b7c2a
The final step is to construct the Authorization header. For the demonstration access key
AKIAIOSFODNN7EXAMPLE, the header (with line breaks added for readability) is:
The calculation of the streaming header x-amz-content-sha256 is based on the SHA256 hash of the
entire content (payload) that is to be uploaded. Note that this calculation is different from the SHA256
tree hash (Computing Checksums (p. 166)). Except in trivial cases, the SHA256 hash value of the payload
data will be different from the SHA256 tree hash of the payload data.
If the payload data is specified as a byte array, you can use the following Java code snippet to calculate
the SHA256 hash.
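A minimal sketch of such a snippet, using java.security.MessageDigest (the class and method names
here are illustrative):

```java
import java.security.MessageDigest;

public class PayloadHash {
    // Compute the SHA-256 hash of the entire payload held as a byte array.
    public static byte[] computeSHA256Hash(byte[] payload) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        return digest.digest(payload);
    }
}
```

The resulting 32-byte value, hex-encoded, is what goes into the x-amz-content-sha256 header.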
Similarly, in C# you can calculate the SHA256 hash of the payload data as shown in the following code
snippet, a minimal sketch using the System.Security.Cryptography.SHA256 class.

public static byte[] ComputeSHA256Hash(byte[] payload)
{
    using (SHA256 sha256 = SHA256.Create())
    {
        byte[] hash = sha256.ComputeHash(payload);
        return hash;
    }
}
• The time stamp of the request is Mon, 07 May 2012 00:00:00 GMT.
• The endpoint is the US East (N. Virginia) Region, us-east-1.
• The content payload is a string "Welcome to S3 Glacier."
The general request syntax (including the JSON body) is shown in the example below. Note that the x-
amz-content-sha256 header is included. In this simplified example, the x-amz-sha256-tree-hash
and x-amz-content-sha256 are the same value. However, for archive uploads greater than 1 MB, this
is not the case.
The canonical form of the request calculated for Task 1: Create a Canonical Request (p. 163) is shown
below. Note that the streaming header x-amz-content-sha256 is included with its value. This means
you must read the payload and calculate the SHA256 hash first and then compute the signature.
POST
/-/vaults/examplevault
host:glacier.us-east-1.amazonaws.com
x-amz-content-sha256:726e392cb4d09924dbad1cc0ba3b00c3643d03d14cb4b823e2f041cff612a628
x-amz-date:20120507T000000Z
x-amz-glacier-version:2012-06-01
host;x-amz-content-sha256;x-amz-date;x-amz-glacier-version
726e392cb4d09924dbad1cc0ba3b00c3643d03d14cb4b823e2f041cff612a628
The remainder of the signature calculation follows the steps outlined in Example Signature
Calculation (p. 164). The Authorization header using the secret access key
wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY and the access key ID AKIAIOSFODNN7EXAMPLE is shown
below (with line breaks added for readability):
Authorization=AWS4-HMAC-SHA256
Credential=AKIAIOSFODNN7EXAMPLE/20120507/us-east-1/glacier/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-glacier-version,
Signature=b092397439375d59119072764a1e9a144677c43d9906fd98a5742c57a2855de6
Computing Checksums
When uploading an archive, you must include both the x-amz-sha256-tree-hash and
x-amz-content-sha256 headers. The x-amz-sha256-tree-hash header is a checksum of the payload in
your request body. This topic describes how to calculate the x-amz-sha256-tree-hash header. The
x-amz-content-sha256 header is a hash of the entire payload and is required for authorization. For more
information, see Example Signature Calculation for Streaming API (p. 165).
• Entire archive— When uploading an archive in a single request using the Upload Archive API, you
send the entire archive in the request body. In this case, you must include the checksum of the entire
archive.
• Archive part— When uploading an archive in parts using the multipart upload API, you send only a
part of the archive in the request body. In this case, you include the checksum of the archive part. After
you upload all the parts, you send a Complete Multipart Upload request, which must include the
checksum of the entire archive.
The checksum of the payload is a SHA-256 tree hash. It is called a tree hash because in the process of
computing the checksum you compute a tree of SHA-256 hash values. The hash value at the root is the
checksum for the entire archive.
Note
This section describes a way to compute the SHA-256 tree hash. However, you may use any
procedure as long as it produces the same result.
1. For each 1 MB chunk of payload data, compute the SHA-256 hash. The last chunk of data can be less
than 1 MB. For example, if you are uploading a 3.2 MB archive, you compute the SHA-256 hash values
for each of the first three 1 MB chunks of data, and then compute the SHA-256 hash of the remaining
0.2 MB data. These hash values form the leaf nodes of the tree.
2. Build the next level of the tree.
a. Concatenate two consecutive child node hash values and compute the SHA-256 hash of the
concatenated hash values. This concatenation and generation of the SHA-256 hash produces a
parent node for the two child nodes.
b. When only one child node remains, you promote that hash value to the next level in the tree.
3. Repeat step 2 until the resulting tree has a root. The root of the tree provides a hash of the entire
archive and a root of the appropriate subtree provides the hash for the part in a multipart upload.
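The steps above can be sketched for an in-memory payload as follows; the file-based examples later in this section apply the same procedure chunk by chunk while reading from disk. TreeHashSketch is a name chosen for this illustration.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

public class TreeHashSketch {

    static final int ONE_MB = 1024 * 1024;

    // Computes the SHA-256 tree hash of an in-memory payload by following
    // steps 1-3 above: hash each 1 MB chunk, then repeatedly pair and
    // re-hash until a single root value remains.
    public static byte[] treeHash(byte[] payload) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");

        // Step 1: leaf hashes, one per 1 MB chunk (the last may be shorter).
        List<byte[]> level = new ArrayList<>();
        int offset = 0;
        do {
            int len = Math.min(ONE_MB, payload.length - offset);
            md.reset();
            md.update(payload, offset, len);
            level.add(md.digest());
            offset += len;
        } while (offset < payload.length);

        // Steps 2-3: combine pairs level by level; an odd node is promoted.
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                if (i + 1 < level.size()) {
                    md.reset();
                    md.update(level.get(i));
                    md.update(level.get(i + 1));
                    next.add(md.digest());
                } else {
                    next.add(level.get(i)); // promote the remaining odd node
                }
            }
            level = next;
        }
        return level.get(0);
    }
}
```

For a payload of 1 MB or less there is a single leaf, so the tree hash equals the plain SHA-256 hash of the payload, which is why the two header values coincide in the small example earlier in this chapter.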
Topics
• Tree Hash Example 1: Uploading an archive in a single request (p. 167)
• Tree Hash Example 2: Uploading an archive using a multipart upload (p. 168)
• Computing the Tree Hash of a File (p. 169)
• Receiving Checksums When Downloading Data (p. 175)
Computing the Tree Hash of a File
The following example shows how to calculate the SHA256 tree hash of a file using Java. You
can run this example by either supplying a file location as an argument or you can use the
TreeHashExample.computeSHA256TreeHash method directly from your code.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class TreeHashExample {

    static final int ONE_MB = 1024 * 1024;

    /**
     * Compute the Hex representation of the SHA-256 tree hash for the specified
     * File
     *
     * @param args
     *            args[0]: a file to compute a SHA-256 tree hash for
     */
    public static void main(String[] args) {
        if (args.length < 1) {
            System.err.println("Missing required filename argument");
            System.exit(-1);
        }

        File inputFile = new File(args[0]);
        try {
            byte[] treeHash = computeSHA256TreeHash(inputFile);
            System.out.printf("SHA-256 Tree Hash = %s\n", toHex(treeHash));
        } catch (IOException ioe) {
            System.err.format("Exception when reading from file %s: %s",
                    inputFile, ioe.getMessage());
            System.exit(-1);
        } catch (NoSuchAlgorithmException nsae) {
            System.err.format("Cannot locate MessageDigest algorithm for SHA-256: %s",
                    nsae.getMessage());
            System.exit(-1);
        }
    }

    /**
     * Computes the SHA-256 tree hash for the given file
     *
     * @param inputFile
     *            a File to compute the SHA-256 tree hash for
     * @return a byte[] containing the SHA-256 tree hash
     * @throws IOException
     *             Thrown if there's an issue reading the input file
     * @throws NoSuchAlgorithmException
     *             Thrown if SHA-256 MessageDigest can't be found
     */
    public static byte[] computeSHA256TreeHash(File inputFile) throws IOException,
            NoSuchAlgorithmException {
        byte[][] chunkSHA256Hashes = getChunkSHA256Hashes(inputFile);
        return computeSHA256TreeHash(chunkSHA256Hashes);
    }

    /**
     * Computes a SHA256 checksum for each 1 MB chunk of the input file. This
     * includes the checksum for the last chunk even if it is smaller than 1 MB.
     *
     * @param file
     *            A file to compute checksums on
     * @return a byte[][] containing the checksums of each 1 MB chunk
     * @throws IOException
     *             Thrown if there's an IOException when reading the file
     * @throws NoSuchAlgorithmException
     *             Thrown if SHA-256 MessageDigest can't be found
     */
    public static byte[][] getChunkSHA256Hashes(File file) throws IOException,
            NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");

        long numChunks = file.length() / ONE_MB;
        if (file.length() % ONE_MB > 0) {
            numChunks++;
        }

        if (numChunks == 0) {
            // An empty file still has a single chunk hash: the hash of zero bytes.
            return new byte[][] { md.digest() };
        }

        byte[][] chunkSHA256Hashes = new byte[(int) numChunks][];
        FileInputStream fileStream = null;

        try {
            fileStream = new FileInputStream(file);
            byte[] buff = new byte[ONE_MB];

            int bytesRead;
            int idx = 0;
            while ((bytesRead = fileStream.read(buff, 0, ONE_MB)) > 0) {
                md.reset();
                md.update(buff, 0, bytesRead);
                chunkSHA256Hashes[idx++] = md.digest();
            }
            return chunkSHA256Hashes;
        } finally {
            if (fileStream != null) {
                try {
                    fileStream.close();
                } catch (IOException ioe) {
                    System.err.printf("Exception while closing %s.\n %s", file.getName(),
                            ioe.getMessage());
                }
            }
        }
    }

    /**
     * Computes the SHA-256 tree hash for the passed array of 1 MB chunk
     * checksums.
     *
     * This method uses a pair of arrays to iteratively compute the tree hash
     * level by level. Each iteration takes two adjacent elements from the
     * previous level source array, computes the SHA-256 hash on their
     * concatenated value and places the result in the next level's destination
     * array. At the end of an iteration, the destination array becomes the
     * source array for the next level.
     *
     * @param chunkSHA256Hashes
     *            An array of SHA-256 checksums
     * @return A byte[] containing the SHA-256 tree hash for the input chunks
     * @throws NoSuchAlgorithmException
     *             Thrown if SHA-256 MessageDigest can't be found
     */
    public static byte[] computeSHA256TreeHash(byte[][] chunkSHA256Hashes)
            throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");

        byte[][] prevLvlHashes = chunkSHA256Hashes;

        while (prevLvlHashes.length > 1) {
            int currLvlIdx = prevLvlHashes.length / 2;
            if (prevLvlHashes.length % 2 != 0) {
                currLvlIdx++;
            }

            byte[][] currLvlHashes = new byte[currLvlIdx][];

            int j = 0;
            for (int i = 0; i < prevLvlHashes.length; i = i + 2, j++) {
                // If there are at least two elements remaining
                if (prevLvlHashes.length - i > 1) {
                    // Calculate a digest of the concatenated nodes
                    md.reset();
                    md.update(prevLvlHashes[i]);
                    md.update(prevLvlHashes[i + 1]);
                    currLvlHashes[j] = md.digest();
                } else { // Take care of remaining odd chunk
                    currLvlHashes[j] = prevLvlHashes[i];
                }
            }

            prevLvlHashes = currLvlHashes;
        }

        return prevLvlHashes[0];
    }

    /**
     * Returns the hexadecimal representation of the input byte array
     *
     * @param data
     *            a byte[] to convert to Hex characters
     * @return A String containing Hex characters
     */
    public static String toHex(byte[] data) {
        StringBuilder sb = new StringBuilder(data.length * 2);

        for (int i = 0; i < data.length; i++) {
            String hex = Integer.toHexString(data[i] & 0xFF);

            if (hex.length() == 1) {
                // Append leading zero.
                sb.append("0");
            }
            sb.append(hex);
        }
        return sb.toString().toLowerCase();
    }
}
The following example shows how to calculate the SHA256 tree hash of a file using C#. You can run this
example by supplying a file location as an argument.
using System;
using System.IO;
using System.Security.Cryptography;

namespace ExampleTreeHash
{
    class Program
    {
        static int ONE_MB = 1024 * 1024;

        /**
         * Compute the Hex representation of the SHA-256 tree hash for the
         * specified file
         *
         * @param args
         *            args[0]: a file to compute a SHA-256 tree hash for
         */
        public static void Main(string[] args)
        {
            if (args.Length < 1)
            {
                Console.WriteLine("Missing required filename argument");
                Environment.Exit(-1);
            }

            FileStream inputFile = File.Open(args[0], FileMode.Open, FileAccess.Read);
            try
            {
                byte[] treeHash = ComputeSHA256TreeHash(inputFile);
                Console.WriteLine("SHA-256 Tree Hash = {0}",
                    BitConverter.ToString(treeHash).Replace("-", "").ToLower());
            }
            catch (IOException ioe)
            {
                Console.WriteLine("Exception when reading from file {0}: {1}",
                    inputFile.Name, ioe.Message);
                Environment.Exit(-1);
            }
            catch (Exception e)
            {
                Console.WriteLine("Cannot compute the SHA-256 tree hash: {0}",
                    e.Message);
                Console.WriteLine(e.GetType());
                Environment.Exit(-1);
            }
            Console.ReadLine();
        }

        /**
         * Computes the SHA-256 tree hash for the given file
         *
         * @param inputFile
         *            A file to compute the SHA-256 tree hash for
         * @return a byte[] containing the SHA-256 tree hash
         */
        public static byte[] ComputeSHA256TreeHash(FileStream inputFile)
        {
            byte[][] chunkSHA256Hashes = GetChunkSHA256Hashes(inputFile);
            return ComputeSHA256TreeHash(chunkSHA256Hashes);
        }

        /**
         * Computes a SHA256 checksum for each 1 MB chunk of the input file. This
         * includes the checksum for the last chunk even if it is smaller than 1 MB.
         *
         * @param file
         *            A file to compute checksums on
         * @return a byte[][] containing the checksums of each 1MB chunk
         */
        public static byte[][] GetChunkSHA256Hashes(FileStream file)
        {
            long numChunks = file.Length / ONE_MB;
            if (file.Length % ONE_MB > 0)
            {
                numChunks++;
            }

            if (numChunks == 0)
            {
                // An empty file still has a single chunk hash: the hash of zero bytes.
                return new byte[][] { CalculateSHA256Hash(new byte[0], 0) };
            }

            byte[][] chunkSHA256Hashes = new byte[(int)numChunks][];
            try
            {
                byte[] buff = new byte[ONE_MB];

                int bytesRead;
                int idx = 0;
                while ((bytesRead = file.Read(buff, 0, ONE_MB)) > 0)
                {
                    chunkSHA256Hashes[idx++] = CalculateSHA256Hash(buff, bytesRead);
                }
                return chunkSHA256Hashes;
            }
            finally
            {
                if (file != null)
                {
                    file.Close();
                }
            }
        }

        /**
         * Computes the SHA-256 tree hash for the passed array of 1MB chunk
         * checksums.
         *
         * This method uses a pair of arrays to iteratively compute the tree hash
         * level by level. Each iteration takes two adjacent elements from the
         * previous level source array, computes the SHA-256 hash on their
         * concatenated value and places the result in the next level's destination
         * array. At the end of an iteration, the destination array becomes the
         * source array for the next level.
         *
         * @param chunkSHA256Hashes
         *            An array of SHA-256 checksums
         * @return A byte[] containing the SHA-256 tree hash for the input chunks
         */
        public static byte[] ComputeSHA256TreeHash(byte[][] chunkSHA256Hashes)
        {
            byte[][] prevLvlHashes = chunkSHA256Hashes;

            while (prevLvlHashes.GetLength(0) > 1)
            {
                int currLvlIdx = prevLvlHashes.GetLength(0) / 2;
                if (prevLvlHashes.GetLength(0) % 2 != 0)
                {
                    currLvlIdx++;
                }

                byte[][] currLvlHashes = new byte[currLvlIdx][];

                int j = 0;
                for (int i = 0; i < prevLvlHashes.GetLength(0); i = i + 2, j++)
                {
                    // If there are at least two elements remaining
                    if (prevLvlHashes.GetLength(0) - i > 1)
                    {
                        // Calculate the SHA-256 hash of the concatenated nodes
                        byte[] firstPart = prevLvlHashes[i];
                        byte[] secondPart = prevLvlHashes[i + 1];
                        byte[] concatenation = new byte[firstPart.Length + secondPart.Length];
                        System.Buffer.BlockCopy(firstPart, 0, concatenation, 0,
                            firstPart.Length);
                        System.Buffer.BlockCopy(secondPart, 0, concatenation,
                            firstPart.Length, secondPart.Length);

                        currLvlHashes[j] = CalculateSHA256Hash(concatenation,
                            concatenation.Length);
                    }
                    else
                    { // Take care of remaining odd chunk
                        currLvlHashes[j] = prevLvlHashes[i];
                    }
                }

                prevLvlHashes = currLvlHashes;
            }

            return prevLvlHashes[0];
        }

        /**
         * Computes the SHA-256 hash of the first count bytes of the input.
         *
         * @param inputBytes
         *            The bytes to hash
         * @param count
         *            The number of bytes to hash
         * @return A byte[] containing the SHA-256 hash
         */
        public static byte[] CalculateSHA256Hash(byte[] inputBytes, int count)
        {
            using (SHA256 sha256 = SHA256.Create())
            {
                return sha256.ComputeHash(inputBytes, 0, count);
            }
        }
    }
}
Receiving Checksums When Downloading Data
• Megabyte aligned - A range [StartBytes, EndBytes] is megabyte (1024*1024) aligned when StartBytes is
divisible by 1 MB and EndBytes plus 1 is divisible by 1 MB or EndBytes is equal to the end of the archive
(archive byte size minus 1). A range used in the Initiate Job API, if specified, is required to be megabyte
aligned.
• Tree-hash aligned - A range [StartBytes, EndBytes] is tree hash aligned with respect to an archive
if and only if the root of the tree hash built over the range is equivalent to a node in the tree hash
of the whole archive. Both the range to retrieve and range to download must be tree hash aligned
in order to receive checksum values for the data you download. For an example of ranges and their
relationship to the archive tree hash, see Tree Hash Example: Retrieving an archive range that is tree-
hash aligned (p. 176).
Note that a range that is tree-hash aligned is also megabyte aligned. However, a megabyte aligned
range is not necessarily tree-hash aligned.
The following cases describe when you receive a checksum value when you download your archive data:
• If you do not specify a range to retrieve in the Initiate Job request and you download the whole
archive in the Get Job Request.
• If you do not specify a range to retrieve in the Initiate Job request and you do specify a tree-hash
aligned range to download in the Get Job Request.
• If you specify a tree-hash aligned range to retrieve in the Initiate Job request and you download the
whole range in the Get Job Request.
• If you specify a tree-hash aligned range to retrieve in the Initiate Job request and you specify a tree-
hash aligned range to download in the Get Job Request.
If you specify a range to retrieve in the Initiate Job request that is not tree hash aligned, then you can
still get your archive data but no checksum values are returned when you download data in the Get Job
Request.
A range [A, B] is tree-hash aligned with respect to an archive if and only if when a new tree hash is built
over [A, B], the root of the tree hash of that range is equivalent to a node in the tree hash of the whole
archive. You can see this shown in the diagram in Tree Hash Example: Retrieving an archive range that is
tree-hash aligned (p. 176). In this section, we provide the specification for tree-hash alignment.
Consider [P, Q) as the range query for an archive of N megabytes (MB), where P and Q are multiples of
one MB. Note that the actual inclusive range is [P MB, Q MB – 1 byte], but for simplicity, we show it as
[P, Q). With these considerations, then:
• If P is an odd number, there is only one possible tree-hash aligned range: [P, P + 1 MB).
• If P is an even number and k is the maximum number such that P can be written as 2^k * X, where X is
an odd integer greater than 0, then there are at most k + 1 tree-hash aligned ranges that start with P,
one for each value of i below:
• For each i, where 0 <= i <= k and where P + 2^i <= N, the range [P, P + 2^i MB) is tree-hash aligned.
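The alignment rule can be checked mechanically. The following sketch tests whether a range, expressed in whole megabytes, corresponds to a node of the archive's tree hash; isTreeHashAligned is a hypothetical helper written for this illustration and is not part of the Glacier API.

```java
public class TreeHashAlignmentCheck {

    // Tests whether the range [startMB, endMB), measured in whole megabytes,
    // is tree-hash aligned for an archive of archiveMB megabytes: the range
    // must start on a multiple of some power-of-two node size 2^i MB and
    // either span exactly 2^i MB or run to the end of the archive (the
    // rightmost node at each tree level may cover a shorter span).
    public static boolean isTreeHashAligned(long startMB, long endMB, long archiveMB) {
        if (startMB < 0 || endMB <= startMB || endMB > archiveMB) {
            return false;
        }
        for (long nodeLen = 1; nodeLen < 2 * archiveMB; nodeLen *= 2) {
            if (startMB % nodeLen == 0
                    && Math.min(startMB + nodeLen, archiveMB) == endMB) {
                return true;
            }
        }
        return false;
    }
}
```

For a 3 MB archive, for example, [0, 2 MB) and [2 MB, 3 MB) are tree-hash aligned while [1 MB, 3 MB) is not; and because the check works in whole megabytes, every range it accepts is also megabyte aligned, matching the note above.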
Error Responses
In the event of an error, the API returns an exception, for example ResourceNotFoundException or
InvalidParameterValueException.
Various S3 Glacier APIs return the same exception, but with different exception messages to help you
troubleshoot the specific error encountered.
S3 Glacier returns error information in the response body. The following examples show some of the
error responses.
GET /-/vaults/examplevault/jobs/HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVEXAMPLEbadJobID HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Where:
Code
The exception name returned by S3 Glacier, for example ResourceNotFoundException.
Type: String
Message
A generic description of the error condition specific to the API that returns the error.
Type: String
Type
The source of the error. The field can be one of the following values: Client, Server, or Unknown.
Type: String.
• For an error response, S3 Glacier returns status code values in the 4xx and 5xx ranges. In this example,
the status code is 404 Not Found.
• The Content-Type header value application/json indicates that the body is JSON.
• The JSON in the body provides the error information.
In the previous request, instead of a bad job ID, suppose you specify a vault that does not exist. The
response returns a different message.
Content-Length: 141
Date: Wed, 10 Feb 2017 12:00:00 GMT
{
"code": "InvalidParameterValueException",
"message": "The job status code is not valid: finished",
"type": "Client"
}
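A client can branch on these fields after parsing the JSON body, typically retrying only Server-type errors. The following sketch extracts a field with a regular expression purely for illustration; a real client would use a JSON library, and GlacierErrorParser is a hypothetical name.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GlacierErrorParser {

    // Naive extraction of a string-valued field from the JSON error body.
    // Returns null if the field is absent.
    public static String field(String json, String name) {
        Matcher m = Pattern
                .compile("\"" + name + "\"\\s*:\\s*\"([^\"]*)\"")
                .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String body = "{ \"code\": \"ResourceNotFoundException\", "
                + "\"message\": \"Vault not found\", \"type\": \"Client\" }";
        // Only Server-type (5xx) errors are usually worth retrying.
        boolean retryable = "Server".equals(field(body, "type"));
        System.out.println(field(body, "code") + " retryable=" + retryable);
    }
}
```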
Vault Operations
The following are the vault operations available in S3 Glacier.
Topics
• Abort Vault Lock (DELETE lock-policy) (p. 180)
• Add Tags To Vault (POST tags add) (p. 182)
• Create Vault (PUT vault) (p. 185)
• Complete Vault Lock (POST lockId) (p. 187)
• Delete Vault (DELETE vault) (p. 189)
• Delete Vault Access Policy (DELETE access-policy) (p. 191)
• Delete Vault Notifications (DELETE notification-configuration) (p. 193)
• Describe Vault (GET vault) (p. 194)
• Get Vault Access Policy (GET access-policy) (p. 197)
• Get Vault Lock (GET lock-policy) (p. 200)
• Get Vault Notifications (GET notification-configuration) (p. 203)
• Initiate Vault Lock (POST lock-policy) (p. 205)
• List Tags For Vault (GET tags) (p. 208)
• List Vaults (GET vaults) (p. 210)
• Remove Tags From Vault (POST tags remove) (p. 215)
• Set Vault Access Policy (PUT access-policy) (p. 217)
• Set Vault Notification Configuration (PUT notification-configuration) (p. 219)
Abort Vault Lock (DELETE lock-policy)
A vault lock is put into the InProgress state by calling Initiate Vault Lock (POST lock-policy) (p. 205).
A vault lock is put into the Locked state by calling Complete Vault Lock (POST lockId) (p. 187). You can
get the state of a vault lock by calling Get Vault Lock (GET lock-policy) (p. 200). For more information
about the vault locking process, see Amazon S3 Glacier Vault Lock (p. 65). For more information about
vault lock policies, see Amazon S3 Glacier Access Control with Vault Lock Policies (p. 136).
This operation is idempotent. You can successfully invoke this operation multiple times if the vault lock
is in the InProgress state or if there is no policy associated with the vault.
Requests
To delete the vault lock policy, send an HTTP DELETE request to the URI of the vault's lock-policy
subresource.
Syntax
DELETE /AccountId/vaults/VaultName/lock-policy HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
If the policy is successfully deleted, S3 Glacier returns an HTTP 204 No Content response.
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to stop the vault locking process.
Example Request
In this example, a DELETE request is sent to the lock-policy subresource of the vault named
examplevault.
Example Response
If the policy is successfully deleted S3 Glacier returns an HTTP 204 No Content response, as shown in
the following example.
Related Sections
• Complete Vault Lock (POST lockId) (p. 187)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Add Tags To Vault (POST tags add)
If a tag already exists on the vault under a specified key, the existing key value will be overwritten. For
more information about tags, see Tagging Amazon S3 Glacier Resources (p. 155).
Request Syntax
To add tags to a vault, send an HTTP POST request to the tags URI as shown in the following syntax
example.
POST /AccountId/vaults/VaultName/tags?operation=add HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
Content-Length: Length
x-amz-glacier-version: 2012-06-01
{
"Tags":
{
"string": "string",
"string": "string"
}
}
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
The request body contains the following JSON fields.
Tags
The tags to add to the vault. Each tag is composed of a key and a value. The value can be an empty
string.
Required: Yes
Responses
If the operation request is successful, the service returns an HTTP 204 No Content response.
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example sends an HTTP POST request with the tags to add to the vault.
{
"Tags":
{
"examplekey1": "examplevalue1",
"examplekey2": "examplevalue2"
}
}
Example Response
If the request was successful, S3 Glacier returns an HTTP 204 No Content response, as shown in the
following example.
Related Sections
• List Tags For Vault (GET tags) (p. 208)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Create Vault (PUT vault)
This operation is idempotent; you can send the same request multiple times and it has no further effect
after the first time Amazon S3 Glacier (S3 Glacier) creates the specified vault.
Requests
To create a vault, send an HTTP PUT request to the URI of the vault to be created.
Syntax
PUT /AccountId/vaults/VaultName HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
Content-Length: Length
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
The request body for this operation must be empty (0 bytes).
Responses
Syntax
HTTP/1.1 201 Created
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Location: Location
Response Headers
A successful response includes the following response headers, in addition to the response headers that
are common to all operations. For more information about common response headers, see Common
Response Headers (p. 162).
Name Description
Location The relative URI path of the vault that was created.
Type: String
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example sends an HTTP PUT request to create a vault named examplevault.
Example Response
S3 Glacier creates the vault and returns the relative URI path of the vault in the Location header. The
account ID is always displayed in the Location header regardless of whether the account ID or a hyphen
('-') was specified in the request.
Related Sections
• List Vaults (GET vaults) (p. 210)
• Delete Vault (DELETE vault) (p. 189)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Complete Vault Lock (POST lockId)
This operation is idempotent. This request is always successful if the vault lock is in the Locked state and
the provided lock ID matches the lock ID originally used to lock the vault.
If an invalid lock ID is passed in the request when the vault lock is in the Locked state, the operation
returns an AccessDeniedException error. If an invalid lock ID is passed in the request when the vault
lock is in the InProgress state, the operation throws an InvalidParameter error.
Requests
To complete the vault locking process, send an HTTP POST request to the URI of the vault's
lock-policy subresource with a valid lock ID.
Syntax
POST /AccountId/vaults/VaultName/lock-policy/lockId HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
The lockId value is the lock ID obtained from a Initiate Vault Lock (POST lock-policy) (p. 205) request.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
If the operation request is successful, the service returns an HTTP 204 No Content response.
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example sends an HTTP POST request with the lock ID to complete the vault locking
process.
Example Response
If the request was successful, Amazon S3 Glacier (S3 Glacier) returns an HTTP 204 No Content
response, as shown in the following example.
Related Sections
• Abort Vault Lock (DELETE lock-policy) (p. 180)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Delete Vault (DELETE vault)
You can use the Describe Vault (GET vault) (p. 194) operation that provides vault information, including
the number of archives in the vault; however, the information is based on the vault inventory S3 Glacier
last generated.
Requests
To delete a vault, send a DELETE request to the vault resource URI.
Syntax
DELETE /AccountId/vaults/VaultName HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example deletes a vault named examplevault. The example request is a DELETE request
to the URI of the resource (the vault) to delete.
Example Response
Related Sections
• Create Vault (PUT vault) (p. 185)
• List Vaults (GET vaults) (p. 210)
• Initiate Job (POST jobs) (p. 263)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Delete Vault Access Policy (DELETE access-policy)
This operation is idempotent. You can invoke this operation multiple times, even if there is no policy associated
with the vault. For more information about vault access policies, see Amazon S3 Glacier Access Control
with Vault Access Policies (p. 134).
Requests
To delete the current vault access policy, send an HTTP DELETE request to the URI of the vault's
access-policy subresource.
Syntax
DELETE /AccountId/vaults/VaultName/access-policy HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
In response, S3 Glacier returns 204 No Content if the policy is successfully deleted.
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to delete a vault access policy.
Example Request
In this example, a DELETE request is sent to the access-policy subresource of the vault named
examplevault.
Example Response
In response, if the policy is successfully deleted, S3 Glacier returns a 204 No Content response, as
shown in the following example.
Related Sections
• Get Vault Access Policy (GET access-policy) (p. 197)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Delete Vault Notifications (DELETE notification-configuration)
Requests
To delete a vault's notification configuration, send a DELETE request to the vault's
notification-configuration subresource.
Syntax
DELETE /AccountId/vaults/VaultName/notification-configuration HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to remove notification configuration for a vault.
Example Request
In this example, a DELETE request is sent to the notification-configuration subresource of the
vault called examplevault.
Example Response
Related Sections
• Get Vault Notifications (GET notification-configuration) (p. 203)
• Set Vault Notification Configuration (PUT notification-configuration) (p. 219)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Describe Vault (GET vault)
If you recently add or delete an archive from a vault, and then immediately send a Describe Vault
request, the response might not reflect the changes.
Requests
To get information about a vault, send a GET request to the URI of the specific vault resource.
Syntax
GET /AccountId/vaults/VaultName HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: Length
{
"CreationDate" : String,
"LastInventoryDate" : String,
"NumberOfArchives" : Number,
"SizeInBytes" : Number,
"VaultARN" : String,
"VaultName" : String
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
CreationDate
The UTC date when the vault was created.
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
LastInventoryDate
The UTC date when S3 Glacier completed the last vault inventory. For information about initiating
an inventory for a vault, see Initiate Job (POST jobs) (p. 263).
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
NumberOfArchives
The number of archives in the vault as of the last vault inventory. This field returns null if an
inventory has not yet run on the vault, for example, if you just created the vault.
Type: Number
SizeInBytes
The total size, in bytes, of the archives in the vault, including any per-archive overhead, as of the
last inventory date. This field returns null if an inventory has not yet run on the vault, for example,
if you just created the vault.
Type: Number
VaultARN
The Amazon Resource Name (ARN) of the vault.
Type: String
VaultName
The vault name that was specified at creation time. The vault name is also included in the vault's
ARN.
Type: String
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example demonstrates how to get information about the vault named examplevault.
Example Response
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:02:00 GMT
Content-Type: application/json
Content-Length: 260
{
"CreationDate" : "2012-02-20T17:01:45.198Z",
"LastInventoryDate" : "2012-03-20T17:03:43.221Z",
"NumberOfArchives" : 192,
"SizeInBytes" : 78088912,
"VaultARN" : "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault",
"VaultName" : "examplevault"
}
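Because NumberOfArchives and SizeInBytes are null until the first inventory completes, client code should treat them as optional. The following is a minimal Python sketch of parsing the example response above; the variable names are illustrative, not part of the API:

```python
import json

# Response body taken from the Describe Vault example above.
response_body = """{
  "CreationDate": "2012-02-20T17:01:45.198Z",
  "LastInventoryDate": "2012-03-20T17:03:43.221Z",
  "NumberOfArchives": 192,
  "SizeInBytes": 78088912,
  "VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault",
  "VaultName": "examplevault"
}"""

vault = json.loads(response_body)

# NumberOfArchives and SizeInBytes are null (None in Python) until the first
# vault inventory has run, so guard before doing arithmetic on them.
archive_count = vault["NumberOfArchives"] or 0
size_mib = (vault["SizeInBytes"] or 0) / (1024 * 1024)
```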
Related Sections
• Create Vault (PUT vault) (p. 185)
• List Vaults (GET vaults) (p. 210)
• Delete Vault (DELETE vault) (p. 189)
• Initiate Job (POST jobs) (p. 263)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Requests
To return the current vault access policy, send an HTTP GET request to the URI of the vault's access-
policy subresource.
Syntax
GET /AccountId/vaults/vaultName/access-policy HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
In response, Amazon S3 Glacier (S3 Glacier) returns the vault access policy in JSON format in the body of
the response.
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: length
{
"Policy": "string"
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
Policy
The vault access policy as a JSON string, which uses "\" as an escape character.
Type: String
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to get a vault access policy.
Example Request
In this example, a GET request is sent to the URI of a vault's access-policy subresource.
Example Response
If the request was successful, S3 Glacier returns the vault access policy as a JSON string in the body of
the response. The returned JSON string uses "\" as an escape character, as shown in the Set Vault Access
Policy (PUT access-policy) (p. 217) examples. However, the following example shows the returned JSON
string without escape characters for readability.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: length
{
"Policy": "
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "allow-time-based-deletes",
"Principal": {
"AWS": "999999999999"
},
"Effect": "Allow",
"Action": "glacier:Delete*",
"Resource": [
"arn:aws:glacier:us-west-2:999999999999:vaults/examplevault"
],
"Condition": {
"DateGreaterThan": {
"aws:CurrentTime": "2018-12-31T00:00:00Z"
}
}
}
]
}
"
}
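Because the access policy document travels as a JSON string inside the Policy field, clients decode twice: once for the response body and once for the policy string itself. A Python sketch of the round trip, using the policy from the example above (variable names are illustrative):

```python
import json

# The policy document shown (unescaped) in the example response above.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "allow-time-based-deletes",
        "Principal": {"AWS": "999999999999"},
        "Effect": "Allow",
        "Action": "glacier:Delete*",
        "Resource": ["arn:aws:glacier:us-west-2:999999999999:vaults/examplevault"],
        "Condition": {"DateGreaterThan": {"aws:CurrentTime": "2018-12-31T00:00:00Z"}},
    }],
}

# On the wire, the inner document is serialized into the "Policy" field, which
# is why the escaped \" sequences appear in the raw response body.
response_body = json.dumps({"Policy": json.dumps(policy_document)})

# Decoding reverses both layers: the response body, then the policy string.
policy = json.loads(json.loads(response_body)["Policy"])
```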
Related Sections
• Delete Vault Access Policy (DELETE access-policy) (p. 191)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
A vault lock is put into the InProgress state by calling Initiate Vault Lock (POST lock-policy) (p. 205).
A vault lock is put into the Locked state by calling Complete Vault Lock (POST lockId) (p. 187). You
can stop the vault locking process by calling Abort Vault Lock (DELETE lock-policy) (p. 180). For more
information about the vault locking process, see Amazon S3 Glacier Vault Lock (p. 65).
If there is no vault lock policy set on the vault, the operation returns a 404 Not found error. For
more information about vault lock policies, see Amazon S3 Glacier Access Control with Vault Lock
Policies (p. 136).
Requests
To return the current vault lock policy and other attributes, send an HTTP GET request to the URI of the
vault's lock-policy subresource as shown in the following syntax example.
Syntax
GET /AccountId/vaults/vaultName/lock-policy HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
In response, Amazon S3 Glacier (S3 Glacier) returns the vault lock policy in JSON format in the body of
the response.
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: length
{
"Policy": "string",
"State": "string",
"ExpirationDate": "string",
"CreationDate":"string"
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
Policy
The vault lock policy as a JSON string, which uses "\" as an escape character.
Type: String
State
The state of the vault lock. Values can be InProgress or Locked.
Type: String
ExpirationDate
The UTC date and time at which the lock ID expires. This value can be null if the vault lock is in a
Locked state.
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
CreationDate
The UTC date and time at which the vault lock was put into the InProgress state.
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to get a vault lock policy.
Example Request
In this example, a GET request is sent to the URI of a vault's lock-policy subresource.
Example Response
If the request was successful, S3 Glacier returns the vault lock policy as a JSON string in the body of
the response. The returned JSON string uses "\" as an escape character, as shown in the Initiate Vault
Lock (POST lock-policy) (p. 205) example request. However, the following example shows the returned
JSON string without escape characters for readability.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: length
{
"Policy": "
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Define-vault-lock",
"Principal": {
"AWS": "arn:aws:iam::999999999999:root"
},
"Effect": "Deny",
"Action": "glacier:DeleteArchive",
"Resource": [
"arn:aws:glacier:us-west-2:999999999999:vaults/examplevault"
],
"Condition": {
"NumericLessThanEquals": {
"glacier:ArchiveAgeInDays": "365"
}
}
}
]
}
",
"State": "InProgress",
"ExpirationDate": "exampledate",
"CreationDate": "exampledate"
}
Related Sections
• Abort Vault Lock (DELETE lock-policy) (p. 180)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Requests
To retrieve the notification configuration information, send a GET request to the URI of a vault's
notification-configuration subresource.
Syntax
GET /AccountId/vaults/vaultName/notification-configuration HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: length
{
"Events": [
String,
...
],
"SNSTopic": String
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
Events
A list of one or more events for which Amazon S3 Glacier (S3 Glacier) will send a notification to
the specified Amazon SNS topic. For information about vault events for which you can configure
a vault to publish notifications, see Set Vault Notification Configuration (PUT notification-
configuration) (p. 219).
Type: Array
SNSTopic
The Amazon Simple Notification Service (Amazon SNS) topic Amazon Resource Name (ARN). For
more information, see Getting Started with Amazon SNS in the Amazon Simple Notification Service
Getting Started Guide.
Type: String
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to retrieve the notification configuration for a vault.
Example Request
In this example, a GET request is sent to the notification-configuration subresource of a vault.
Example Response
A successful response shows the notification configuration document in the body of the response
in JSON format. In this example, the configuration shows that notifications for two events
(ArchiveRetrievalCompleted and InventoryRetrievalCompleted) are sent to the Amazon SNS
topic arn:aws:sns:us-west-2:012345678901:mytopic.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 150
{
"Events": [
"ArchiveRetrievalCompleted",
"InventoryRetrievalCompleted"
],
"SNSTopic": "arn:aws:sns:us-west-2:012345678901:mytopic"
}
Related Sections
• Delete Vault Notifications (DELETE notification-configuration) (p. 193)
• Set Vault Notification Configuration (PUT notification-configuration) (p. 219)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
You can set one vault lock policy for each vault and this policy can be up to 20 KB in size. For
more information about vault lock policies, see Amazon S3 Glacier Access Control with Vault Lock
Policies (p. 136).
You must complete the vault locking process within 24 hours after the vault lock enters the InProgress
state. After the 24-hour window ends, the lock ID expires, the vault automatically exits the InProgress
state, and the vault lock policy is removed from the vault. You call Complete Vault Lock (POST
lockId) (p. 187) to complete the vault locking process by setting the state of the vault lock to Locked.
Note
After a vault lock is in the Locked state, you cannot initiate a new vault lock for the vault.
You can stop the vault locking process by calling Abort Vault Lock (DELETE lock-policy) (p. 180).
You can get the state of the vault lock by calling Get Vault Lock (GET lock-policy) (p. 200). For more
information about the vault locking process, see Amazon S3 Glacier Vault Lock (p. 65).
If this operation is called when the vault lock is in the InProgress state, the operation returns an
AccessDeniedException error. When the vault lock is in the InProgress state you must call Abort
Vault Lock (DELETE lock-policy) (p. 180) before you can initiate a new vault lock policy.
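The lifecycle described above (initiate to InProgress, then either abort or complete to Locked) can be sketched as a small state machine. The class and method names below are local stand-ins for illustration, not AWS SDK calls:

```python
# Illustrative sketch of the vault lock state transitions described above.
# VaultLock, its methods, and the lock ID value are hypothetical stand-ins.
class VaultLock:
    def __init__(self):
        self.state = None            # no vault lock policy set
        self.lock_id = None

    def initiate(self, policy):      # Initiate Vault Lock (POST lock-policy)
        if self.state == "Locked":
            raise PermissionError("cannot initiate a new lock on a locked vault")
        if self.state == "InProgress":
            # Mirrors the AccessDeniedException behavior described above.
            raise PermissionError("abort the in-progress vault lock first")
        self.state, self.policy = "InProgress", policy
        self.lock_id = "EXAMPLE-LOCK-ID"   # returned in the x-amz-lock-id header
        return self.lock_id

    def abort(self):                 # Abort Vault Lock (DELETE lock-policy)
        if self.state == "InProgress":
            self.state = self.lock_id = None

    def complete(self, lock_id):     # Complete Vault Lock (POST lockId)
        if self.state != "InProgress" or lock_id != self.lock_id:
            raise PermissionError("no matching vault lock in progress")
        self.state = "Locked"

vault = VaultLock()
lock_id = vault.initiate('{"Version": "2012-10-17"}')
vault.complete(lock_id)
```

The sketch omits the 24-hour lock ID expiry, which on the service side also returns the vault to the unlocked state.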
Requests
To initiate the vault locking process, send an HTTP POST request to the URI of the lock-policy
subresource of the vault, as shown in the following syntax example.
Syntax
POST /AccountId/vaults/vaultName/lock-policy HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
Content-Length: Length
x-amz-glacier-version: 2012-06-01
{
"Policy": "string"
}
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
The request body contains the following JSON fields.
Policy
The vault lock policy as a JSON string, which uses "\" as an escape character.
Type: String
Required: Yes
Responses
Amazon S3 Glacier (S3 Glacier) returns an HTTP 201 Created response if the policy is accepted.
Syntax
HTTP/1.1 201 Created
x-amzn-RequestId: x-amzn-RequestId
Date: Date
x-amz-lock-id: lockId
Response Headers
A successful response includes the following response headers, in addition to the response headers that
are common to all operations. For more information about common response headers, see Common
Response Headers (p. 162).
Name Description
x-amz-lock-id The lock ID, which is used to complete the vault locking process.
Type: String
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example sends an HTTP POST request to the URI of the vault's lock-policy subresource.
The Policy JSON string uses "\" as an escape character.
{"Policy":"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"Define-vault-lock\",
\"Effect\":\"Deny\",\"Principal\":{\"AWS\":\"arn:aws:iam::999999999999:root\"},\"Action
\":\"glacier:DeleteArchive\",\"Resource\":\"arn:aws:glacier:us-west-2:999999999999:vaults/
examplevault\",\"Condition\":{\"NumericLessThanEquals\":{\"glacier:ArchiveAgeInDays\":
\"365\"}}}]}"}
Example Response
If the request was successful, S3 Glacier returns an HTTP 201 Created response, as shown in the
following example.
Related Sections
• Abort Vault Lock (DELETE lock-policy) (p. 180)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Request Syntax
To list the tags for a vault, send an HTTP GET request to the tags URI as shown in the following syntax
example.
GET /AccountId/vaults/vaultName/tags HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
If the operation is successful, the service sends back an HTTP 200 OK response.
Response Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: Length
{
"Tags":
{
"string" : "string",
"string" : "string"
}
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
Tags
The tags attached to the vault. Each tag is composed of a key and a value.
Type: String-to-string map
Required: Yes
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example: List Tags For a Vault
The following example lists the tags for a vault.
Example Request
In this example, a GET request is sent to retrieve a list of tags from the specified vault.
Example Response
If the request was successful, Amazon S3 Glacier (S3 Glacier) returns an HTTP 200 OK response with a
list of tags for the vault, as shown in the following example.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:02:00 GMT
Content-Type: application/json
Content-Length: length
{
"Tags":
{
"examplekey1": "examplevalue1",
"examplekey2": "examplevalue2"
}
}
Related Sections
• Add Tags To Vault (POST tags add) (p. 182)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
By default, this operation returns up to 10 items per request. If there are more vaults to list, the marker
field in the response body contains the vault Amazon Resource Name (ARN) at which to continue the
list with a new List Vaults request; otherwise, the marker field is null. In your next List Vaults request,
you set the marker parameter to the value that Amazon S3 Glacier (S3 Glacier) returned in the response
to your previous List Vaults request. You can also limit the number of vaults returned in the response by
specifying the limit parameter in the request.
Requests
To get a list of vaults, you send a GET request to the vaults resource.
Syntax
GET /AccountId/vaults HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation uses the following request parameters.
limit
The maximum number of vaults to be returned. The default limit is 10. The number of vaults
returned might be fewer than the specified limit, but the number of returned vaults never exceeds
the limit.
Type: String
Constraints: Minimum integer value of 1. Maximum integer value of 10.
Required: No
marker
A string used for pagination. marker specifies the vault ARN after which the listing of vaults should
begin. (The vault specified by marker is not included in the returned list.) Get the marker value from
a previous List Vaults response. You need to include the marker only if you are continuing the
pagination of results started in a previous List Vaults request. Specifying an empty value ("") for the
marker returns a list of vaults starting from the first vault.
Type: String
Constraints: None
Required: No
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: Length
{
"Marker": String,
"VaultList": [
{
"CreationDate": String,
"LastInventoryDate": String,
"NumberOfArchives": Number,
"SizeInBytes": Number,
"VaultARN": String,
"VaultName": String
},
...
]
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
CreationDate
The date the vault was created, in Coordinated Universal Time (UTC).
Type: String. A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
LastInventoryDate
The date of the last vault inventory, in Coordinated Universal Time (UTC). This field can be null if
an inventory has not yet run on the vault, for example, if you just created the vault. For information
about initiating an inventory for a vault, see Initiate Job (POST jobs) (p. 263).
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
Marker
The vaultARN that represents where to continue pagination of the results. You use the marker in
another List Vaults request to obtain more vaults in the list. If there are no more vaults, this value is
null.
Type: String
NumberOfArchives
The number of archives in the vault as of the last inventory date.
Type: Number
SizeInBytes
The total size, in bytes, of all the archives in the vault including any per-archive overhead, as of the
last inventory date.
Type: Number
VaultARN
The Amazon Resource Name (ARN) of the vault.
Type: String
VaultList
An array of objects, with each object providing a description of a vault.
Type: Array
VaultName
The vault name.
Type: String
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example: List All Vaults
The following example lists vaults. Because the marker and limit parameters are not specified in the
request, up to 10 vaults are returned.
Example Request
Example Response
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:02:00 GMT
Content-Type: application/json
Content-Length: 497
{
"Marker": null,
"VaultList": [
{
"CreationDate": "2012-03-16T22:22:47.214Z",
"LastInventoryDate": "2012-03-21T22:06:51.218Z",
"NumberOfArchives": 2,
"SizeInBytes": 12334,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault1",
"VaultName": "examplevault1"
},
{
"CreationDate": "2012-03-19T22:06:51.218Z",
"LastInventoryDate": "2012-03-21T22:06:51.218Z",
"NumberOfArchives": 0,
"SizeInBytes": 0,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault2",
"VaultName": "examplevault2"
},
{
"CreationDate": "2012-03-19T22:06:51.218Z",
"LastInventoryDate": "2012-03-25T12:14:31.121Z",
"NumberOfArchives": 0,
"SizeInBytes": 0,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault3",
"VaultName": "examplevault3"
}
]
}
Example: Partial List of Vaults
The following example requests up to two vaults, starting after the vault specified by the marker.
Example Request
GET /-/vaults?limit=2&marker=arn:aws:glacier:us-west-2:012345678901:vaults/examplevault1
HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
Two vaults are returned in the list. The Marker contains the vault ARN to continue pagination in another
List Vaults request.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:02:00 GMT
Content-Type: application/json
Content-Length: 497
{
"Marker": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault3",
"VaultList": [
{
"CreationDate": "2012-03-16T22:22:47.214Z",
"LastInventoryDate": "2012-03-21T22:06:51.218Z",
"NumberOfArchives": 2,
"SizeInBytes": 12334,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault1",
"VaultName": "examplevault1"
},
{
"CreationDate": "2012-03-19T22:06:51.218Z",
"LastInventoryDate": "2012-03-21T22:06:51.218Z",
"NumberOfArchives": 0,
"SizeInBytes": 0,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault2",
"VaultName": "examplevault2"
}
]
}
Related Sections
• Create Vault (PUT vault) (p. 185)
• Delete Vault (DELETE vault) (p. 189)
• Initiate Job (POST jobs) (p. 263)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
This operation is idempotent. The operation is successful even if there are no tags attached to the
vault.
Request Syntax
To remove tags from a vault, send an HTTP POST request to the tags URI as shown in the following
syntax example.
POST /AccountId/vaults/vaultName/tags?operation=remove HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
Content-Length: Length
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
The request body contains the following JSON fields.
TagKeys
A list of tag keys. Each corresponding tag is removed from the vault.
Length constraint: Minimum of 1 item in the list. Maximum of 10 items in the list.
Required: Yes
Responses
If the action is successful, the service sends back an HTTP 204 No Content response with an empty
HTTP body.
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example sends an HTTP POST request to remove the specified tags.
{
"TagKeys": [
"examplekey1",
"examplekey2"
]
}
Example Response
If the request was successful, Amazon S3 Glacier (S3 Glacier) returns an HTTP 204 No Content
response, as shown in the following example.
Related Sections
• Add Tags To Vault (POST tags add) (p. 182)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Requests
Syntax
To set a vault access policy, send an HTTP PUT request to the URI of the vault's access-policy
subresource as shown in the following syntax example.
PUT /AccountId/vaults/vaultName/access-policy HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
Content-Length: Length
x-amz-glacier-version: 2012-06-01
{
"Policy": "string"
}
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
The request body contains the following JSON fields.
Policy
The vault access policy as a JSON string, which uses "\" as an escape character.
Type: String
Required: Yes
Responses
In response, S3 Glacier returns 204 No Content if the policy is accepted.
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example sends an HTTP PUT request to the URI of the vault's access-policy
subresource. The Policy JSON string uses "\" as an escape character.
{"Policy":"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"Define-owner-access-rights
\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::999999999999:root\"},\"Action
\":\"glacier:DeleteArchive\",\"Resource\":\"arn:aws:glacier:us-west-2:999999999999:vaults/
examplevault\"}]}"}
Example Response
If the request was successful, Amazon S3 Glacier (S3 Glacier) returns an HTTP 204 No Content
response, as shown in the following example.
Related Sections
• Delete Vault Access Policy (DELETE access-policy) (p. 191)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
You can configure a vault to publish a notification for the following vault events:
• ArchiveRetrievalCompleted— This event occurs when a job that was initiated for an archive
retrieval is completed (Initiate Job (POST jobs) (p. 263)). The status of the completed job can be
Succeeded or Failed. The notification sent to the SNS topic is the same output as returned from
Describe Job (GET JobID) (p. 250).
• InventoryRetrievalCompleted— This event occurs when a job that was initiated for an inventory
retrieval is completed (Initiate Job (POST jobs) (p. 263)). The status of the completed job can be
Succeeded or Failed. The notification sent to the SNS topic is the same output as returned from
Describe Job (GET JobID) (p. 250).
The Amazon SNS topic must grant the vault permission to publish notifications to the
topic.
Requests
To set notification configuration on your vault, send a PUT request to the URI of the vault's
notification-configuration subresource. You specify the configuration in the request body. The
configuration includes the Amazon SNS topic name and an array of events that trigger notification to
each topic.
Syntax
PUT /AccountId/vaults/vaultName/notification-configuration HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
{
"SNSTopic": String,
"Events":[String, ...]
}
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
The JSON in the request body contains the following fields.
Events
An array of one or more events for which you want S3 Glacier to send notification.
Required: yes
Type: Array
SNSTopic
The Amazon SNS topic ARN. For more information, go to Getting Started with Amazon SNS in the
Amazon Simple Notification Service Getting Started Guide.
Required: yes
Type: String
Responses
In response, Amazon S3 Glacier (S3 Glacier) returns 204 No Content if the notification configuration is
accepted.
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to configure vault notification.
Example Request
The following request sets the examplevault notification configuration so that notifications for two
events (ArchiveRetrievalCompleted and InventoryRetrievalCompleted ) are sent to the
Amazon SNS topic arn:aws:sns:us-west-2:012345678901:mytopic.
{
"Events": ["ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"],
"SNSTopic": "arn:aws:sns:us-west-2:012345678901:mytopic"
}
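A client that builds this request body can validate the event names against the two supported events listed above before sending. A short Python sketch; the function name and SUPPORTED_EVENTS constant are illustrative, not part of the API:

```python
import json

# The two vault events supported for notifications, per the section above.
SUPPORTED_EVENTS = {"ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"}

def notification_config(sns_topic_arn, events):
    """Build the PUT notification-configuration request body as JSON."""
    unknown = set(events) - SUPPORTED_EVENTS
    if unknown:
        raise ValueError(f"unsupported vault events: {sorted(unknown)}")
    return json.dumps({"SNSTopic": sns_topic_arn, "Events": sorted(events)})

body = notification_config(
    "arn:aws:sns:us-west-2:012345678901:mytopic",
    ["ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"],
)
```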
Example Response
If the request was successful, S3 Glacier returns an HTTP 204 No Content response.
Related Sections
• Get Vault Notifications (GET notification-configuration) (p. 203)
• Delete Vault Notifications (DELETE notification-configuration) (p. 193)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Archive Operations
The following are the archive operations available for use in S3 Glacier.
Topics
• Delete Archive (DELETE archive) (p. 222)
• Upload Archive (POST archive) (p. 224)
After you delete an archive, you might still be able to make a successful request to initiate a job to
retrieve the deleted archive, but the archive retrieval job will fail.
Archive retrievals that are in progress for an archive ID when you delete the archive might or might not
succeed according to the following scenarios:
• If the archive retrieval job is actively preparing the data for download when Amazon S3 Glacier (S3
Glacier) receives the delete archive request, the archival retrieval operation might fail.
• If the archive retrieval job has successfully prepared the archive for download when S3 Glacier receives
the delete archive request, you will be able to download the output.
For more information about archive retrieval, see Downloading an Archive in Amazon S3 Glacier (p. 83).
This operation is idempotent. Attempting to delete an already-deleted archive does not result in an error.
Requests
To delete an archive you send a DELETE request to the archive resource URI.
Syntax
DELETE /AccountId/vaults/VaultName/archives/ArchiveID HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to delete an archive from the vault named examplevault.
Example Request
The ID of the archive to be deleted is specified as a subresource of archives.
DELETE /-/vaults/examplevault/archives/NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId
HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
If the request is successful, S3 Glacier responds with 204 No Content to indicate that the archive is
deleted.
Related Sections
• Initiate Multipart Upload (POST multipart-uploads) (p. 233)
• Upload Archive (POST archive) (p. 224)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
You must provide a SHA256 tree hash of the data you are uploading. For information about computing a
SHA256 tree hash, see Computing Checksums (p. 166).
When uploading an archive, you can optionally specify an archive description of up to 1,024 printable
ASCII characters. S3 Glacier returns the archive description when you either retrieve the archive or get
the vault inventory. S3 Glacier does not interpret the description in any way. An archive description does
not need to be unique. You cannot use the description to retrieve or sort the archive list.
Except for the optional archive description, S3 Glacier does not support any additional metadata for the
archives. The archive ID is an opaque sequence of characters from which you cannot infer any meaning
about the archive. So you might maintain metadata about the archives on the client-side. For more
information, see Working with Archives in Amazon S3 Glacier (p. 67).
Archives are immutable. After you upload an archive, you cannot edit the archive or its description.
Requests
To upload an archive, you use the HTTP POST method and scope the request to the archives
subresource of the vault in which you want to save the archive. The request must include the archive
payload size, checksum (SHA256 tree hash), and can optionally include a description of the archive.
Syntax
POST /AccountId/vaults/VaultName/archives
Host: glacier.Region.amazonaws.com
x-amz-glacier-version: 2012-06-01
Date: Date
Authorization: SignatureValue
x-amz-archive-description: Description
x-amz-sha256-tree-hash: SHA256 tree hash
x-amz-content-sha256: SHA256 linear hash
Content-Length: Length
<Request body.>
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
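The two checksum headers can be computed client-side before the POST. The following Python sketch follows the tree-hash approach described in Computing Checksums (p. 166): hash the payload in 1 MB chunks, then combine the digests pairwise until one remains. The function names are illustrative; only the x-amz-* header names come from this operation:

```python
import hashlib

MEGABYTE = 1024 * 1024  # tree-hash chunk size

def sha256_tree_hash(data):
    """SHA256 tree hash: SHA-256 each 1 MB chunk, then combine the
    digests pairwise, level by level, until one digest remains."""
    hashes = [hashlib.sha256(data[i:i + MEGABYTE]).digest()
              for i in range(0, len(data), MEGABYTE)]
    if not hashes:  # empty payload: hash of the empty string
        hashes = [hashlib.sha256(b"").digest()]
    while len(hashes) > 1:
        hashes = [hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
                  if i + 1 < len(hashes) else hashes[i]
                  for i in range(0, len(hashes), 2)]
    return hashes[0].hex()

def upload_archive_headers(payload, description=None):
    """Build the checksum-related headers for Upload Archive (POST archive)."""
    headers = {
        "x-amz-glacier-version": "2012-06-01",
        "Content-Length": str(len(payload)),
        "x-amz-content-sha256": hashlib.sha256(payload).hexdigest(),  # linear hash
        "x-amz-sha256-tree-hash": sha256_tree_hash(payload),
    }
    if description is not None:  # optional, up to 1,024 printable ASCII characters
        headers["x-amz-archive-description"] = description
    return headers
```

Note that for a payload of 1 MB or less there is only one chunk, so the tree hash equals the linear hash; for larger payloads the two values differ.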
Request Parameters
This implementation of the operation does not use request parameters.
Request Headers
This operation uses the following request headers, in addition to the request headers that are common
to all operations. For more information about the common request headers, see Common Request
Headers (p. 160).
Content-Length
The size of the object, in bytes. For more information, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.13.
Required: yes
Type: Number
Default: None
Constraints: None
x-amz-archive-description
The optional description of the archive you are uploading, up to 1,024 printable ASCII characters.
Required: no
Type: String
Default: None
Constraints: None
x-amz-content-sha256
The SHA256 checksum (a linear hash) of the payload. This is not the same value as you specify in the x-amz-sha256-tree-hash header.
Required: yes
Type: String
Default: None
Constraints: None
x-amz-sha256-tree-hash
The SHA256 tree hash of the payload. For information about computing a SHA256 tree hash, see Computing Checksums (p. 166).
Required: yes
Type: String
Default: None
Constraints: None
Request Body
The request body contains the data to upload.
Responses
In response, S3 Glacier durably stores the archive and returns a URI path to the archive ID.
Syntax
HTTP/1.1 201 Created
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Location: Location
x-amz-archive-id: ArchiveId
Response Headers
A successful response includes the following response headers, in addition to the response headers that
are common to all operations. For more information about common response headers, see Common
Response Headers (p. 162).
Name Description
Location
The relative URI path of the newly added archive resource.
Type: String
x-amz-archive-id
The ID of the archive. This value is also included as part of the Location header.
Type: String
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example shows a request to upload an archive.
Example Response
The successful response below has a Location header from which you can get the ID that S3 Glacier
assigned to the archive.
HTTP/1.1 201 Created
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Location: /012345678901/vaults/examplevault/archives/NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId
x-amz-archive-id: NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId
Related Sections
• Working with Archives in Amazon S3 Glacier (p. 67)
• Uploading Large Archives in Parts (Multipart Upload) (p. 75)
Topics
• Abort Multipart Upload (DELETE uploadID) (p. 228)
• Complete Multipart Upload (POST uploadID) (p. 230)
• Initiate Multipart Upload (POST multipart-uploads) (p. 233)
• List Parts (GET uploadID) (p. 236)
• List Multipart Uploads (GET multipart-uploads) (p. 241)
• Upload Part (PUT uploadID) (p. 246)
After the Abort Multipart Upload request succeeds, you cannot use the upload ID to upload any more
parts or perform any other operations. Stopping a completed multipart upload fails. However, stopping
an already-stopped upload will succeed, for a short time.
For information about multipart upload, see Uploading Large Archives in Parts (Multipart Upload) (p. 75).
Requests
To stop a multipart upload, send an HTTP DELETE request to the URI of the multipart-uploads
subresource of the vault and identify the specific multipart upload ID as part of the URI.
Syntax
DELETE /AccountId/vaults/VaultName/multipart-uploads/uploadID HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Example
Example Request
In the following example, a DELETE request is sent to the URI of a multipart upload ID resource.
DELETE /-/vaults/examplevault/multipart-uploads/
OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-
khxOjyEXAMPLE HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
If the request is successful, S3 Glacier responds with 204 No Content to indicate that the multipart
upload is stopped.
HTTP/1.1 204 No Content
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Related Sections
• Initiate Multipart Upload (POST multipart-uploads) (p. 233)
• Upload Part (PUT uploadID) (p. 246)
For information about multipart upload, see Uploading Large Archives in Parts (Multipart Upload) (p. 75).
After assembling and saving the archive to the vault, S3 Glacier returns the archive ID of the newly
created archive resource. After you upload an archive, you should save the archive ID returned to retrieve
the archive at a later point.
In the request, you must include the computed SHA256 tree hash of the entire archive you have
uploaded. For information about computing a SHA256 tree hash, see Computing Checksums (p. 166).
On the server side, S3 Glacier also constructs the SHA256 tree hash of the assembled archive. If the
values match, S3 Glacier saves the archive to the vault; otherwise, it returns an error, and the operation
fails. The List Parts (GET uploadID) (p. 236) operation returns a list of parts uploaded for a specific
multipart upload. It includes checksum information for each uploaded part that can be used to debug a
bad checksum issue.
Additionally, S3 Glacier checks for any missing content ranges. When uploading parts, you specify
range values identifying where each part fits in the final assembly of the archive. When assembling
the final archive, S3 Glacier checks for missing content ranges; if any content ranges are missing,
S3 Glacier returns an error and the Complete Multipart Upload operation fails.
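The missing-range check that S3 Glacier performs server-side can be mirrored client-side before you send Complete Multipart Upload. A sketch, assuming inclusive byte ranges as used in the Content-Range values sent with each part:

```python
def find_missing_ranges(part_ranges, archive_size):
    """Return the byte ranges of the archive not covered by any uploaded part.

    part_ranges: (first_byte, last_byte) tuples, inclusive on both ends,
    matching the Content-Range values sent with each Upload Part request.
    """
    gaps, next_byte = [], 0
    for first, last in sorted(part_ranges):
        if first > next_byte:                 # hole before this part
            gaps.append((next_byte, first - 1))
        next_byte = max(next_byte, last + 1)
    if next_byte < archive_size:              # hole at the end
        gaps.append((next_byte, archive_size - 1))
    return gaps
```

An empty result means the parts fully cover the archive and the Complete request should not fail the range check.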
Complete Multipart Upload is an idempotent operation. After your first successful complete multipart
upload, if you call the operation again within a short period, the operation will succeed and return
the same archive ID. This is useful in the event you experience a network issue or receive a 500 server
error, in which case you can repeat your Complete Multipart Upload request and get the same archive
ID without creating duplicate archives. Note, however, that after the multipart upload completes, you
cannot call the List Parts operation on it, and the multipart upload does not appear in the List Multipart
Uploads response, even though an idempotent complete is still possible.
Requests
To complete a multipart upload, you send an HTTP POST request to the URI of the upload ID that S3
Glacier created in response to your Initiate Multipart Upload request. This is the same URI you used when
uploading parts. In addition to the common required headers, you must include the result of the SHA256
tree hash of the entire archive and the total size of the archive in bytes.
Syntax
POST /AccountId/vaults/VaultName/multipart-uploads/uploadID
Host: glacier.Region.amazonaws.com
Date: date
Authorization: SignatureValue
x-amz-sha256-tree-hash: SHA256 tree hash of the archive
x-amz-archive-size: ArchiveSize in bytes
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses the following request headers, in addition to the request headers that are common
to all operations. For more information about the common request headers, see Common Request
Headers (p. 160).
x-amz-archive-size
The total size, in bytes, of the entire archive. This value should be the sum of all the sizes of the individual parts that you uploaded.
Required: yes
Type: String
Default: None
Constraints: None
x-amz-sha256-tree-hash
The SHA256 tree hash of the entire archive. It is the tree hash of the SHA256 tree hashes of the individual parts. If the value you specify in the request does not match the SHA256 tree hash of the final assembled archive as computed by S3 Glacier, S3 Glacier returns an error and the request fails.
Required: yes
Type: String
Default: None
Constraints: None
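Because every part except the last has the same power-of-two-megabyte size, the archive-level tree hash can be computed from the 1 MB chunk hashes of the individual parts, without re-reading the assembled archive. A Python sketch, assuming the chunk-and-combine algorithm described in Computing Checksums (p. 166); the helper names are illustrative:

```python
import hashlib

MB = 1024 * 1024

def chunk_hashes(data):
    """SHA-256 digest of each 1 MB chunk of the data."""
    return [hashlib.sha256(data[i:i + MB]).digest()
            for i in range(0, len(data), MB)]

def combine(hashes):
    """Pairwise-combine digests, level by level, into one tree-hash hex digest."""
    while len(hashes) > 1:
        hashes = [hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
                  if i + 1 < len(hashes) else hashes[i]
                  for i in range(0, len(hashes), 2)]
    return hashes[0].hex()

# Three parts: two full 1 MB parts and a smaller final part.
parts = [b"a" * MB, b"b" * MB, b"c" * (MB // 2)]

# Archive-level tree hash computed from the per-part chunk hashes...
from_parts = combine([h for p in parts for h in chunk_hashes(p)])
# ...matches the tree hash of the fully assembled archive.
from_whole = combine(chunk_hashes(b"".join(parts)))
```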
Request Elements
This operation does not use request elements.
Responses
Amazon S3 Glacier (S3 Glacier) creates a SHA256 tree hash of the entire archive. If the value matches the
SHA256 tree hash of the entire archive you specified in the request, S3 Glacier adds the archive to the
vault. In response it returns the HTTP Location header with the URL path of the newly added archive
resource. If the archive size or SHA256 that you sent in the request does not match, S3 Glacier will return
an error and the upload remains in the incomplete state. It is possible to retry the Complete Multipart
Upload operation later with correct values, at which point you can successfully create an archive. If a
multipart upload does not complete, then eventually S3 Glacier will reclaim the upload ID.
Syntax
HTTP/1.1 201 Created
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Location: Location
x-amz-archive-id: ArchiveId
Response Headers
A successful response includes the following response headers, in addition to the response headers that
are common to all operations. For more information about common response headers, see Common
Response Headers (p. 162).
Name Description
Location
The relative URI path of the newly created archive. This URL includes the archive ID that is generated by S3 Glacier.
Type: String
x-amz-archive-id
The ID of the archive. This value is also included as part of the Location header.
Type: String
Response Fields
This operation does not return a response body.
Example
Example Request
In this example, an HTTP POST request is sent to the URI that was returned by an Initiate Multipart
Upload request. The request specifies both the SHA256 tree hash of the entire archive and the total
archive size.
POST /-/vaults/examplevault/multipart-uploads/
OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-
khxOjyEXAMPLE HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-sha256-tree-hash:1ffc0f54dd5fdd66b62da70d25edacd0
x-amz-archive-size:8388608
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
The following example response shows that S3 Glacier successfully created an archive from the parts you
uploaded. The response includes the archive ID with the complete path.
HTTP/1.1 201 Created
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Location: /012345678901/vaults/examplevault/archives/NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId
x-amz-archive-id: NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId
You can now send HTTP requests to the URI of the newly added resource/archive. For example, you can
send a GET request to retrieve the archive.
Related Sections
• Initiate Multipart Upload (POST multipart-uploads) (p. 233)
• Upload Part (PUT uploadID) (p. 246)
• Abort Multipart Upload (DELETE uploadID) (p. 228)
• List Multipart Uploads (GET multipart-uploads) (p. 241)
• List Parts (GET uploadID) (p. 236)
• Uploading Large Archives in Parts (Multipart Upload) (p. 75)
• Delete Archive (DELETE archive) (p. 222)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
When you initiate a multipart upload, you specify the part size in number of bytes. The part size must
be a megabyte (1024 KB) multiplied by a power of 2—for example, 1048576 (1 MB), 2097152 (2
MB), 4194304 (4 MB), 8388608 (8 MB), and so on. The minimum allowable part size is 1 MB, and the
maximum is 4 GB.
Every part you upload using this upload ID, except the last one, must have the same size. The last one
can be the same size or smaller. For example, suppose you want to upload a 16.2 MB file. If you initiate
the multipart upload with a part size of 4 MB, you will upload four parts of 4 MB each and one part of
0.2 MB.
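The part-size rule and the 16.2 MB example above can be sketched as follows; sizes are in bytes, and valid_part_size and part_ranges are illustrative helpers, not API calls:

```python
MB = 1024 * 1024

def valid_part_size(size):
    """True if size is 1 MB times a power of 2, from 1 MB up to 4 GB."""
    n = size // MB
    return (MB <= size <= 4 * 1024 * MB
            and size % MB == 0
            and n & (n - 1) == 0)

def part_ranges(archive_size, part_size):
    """Inclusive (first, last) byte ranges for each uploaded part; only
    the final part may be smaller than part_size."""
    return [(start, min(start + part_size, archive_size) - 1)
            for start in range(0, archive_size, part_size)]
```

For a 16.2 MB archive with a 4 MB part size, this yields four full 4 MB parts and one final 0.2 MB part.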
Note
You don't need to know the size of the archive when you start a multipart upload because S3
Glacier does not require you to specify the overall archive size.
After you complete the multipart upload, S3 Glacier removes the multipart upload resource referenced
by the ID. S3 Glacier will also remove the multipart upload resource if you cancel the multipart upload
or it may be removed if there is no activity for a period of 24 hours. The ID may still be available after 24
hours, but applications should not expect this behavior.
Requests
To initiate a multipart upload, you send an HTTP POST request to the URI of the multipart-uploads
subresource of the vault in which you want to save the archive. The request must include the part size
and can optionally include a description of the archive.
Syntax
POST /AccountId/vaults/VaultName/multipart-uploads
Host: glacier.us-west-2.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
x-amz-archive-description: ArchiveDescription
x-amz-part-size: PartSize
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses the following request headers, in addition to the request headers that are common
to all operations. For more information about the common request headers, see Common Request
Headers (p. 160).
x-amz-part-size
The size of each part except the last, in bytes. The last part can be smaller than this part size.
Required: yes
Type: String
Default: None
x-amz-archive-description
The optional description of the archive you are uploading, up to 1,024 printable ASCII characters.
Required: no
Type: String
Default: None
Request Body
This operation does not have a request body.
Responses
In the response, S3 Glacier creates a multipart upload resource identified by an ID and returns the
relative URI path of the multipart upload ID.
Syntax
HTTP/1.1 201 Created
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Location: Location
x-amz-multipart-upload-id: multiPartUploadId
Response Headers
A successful response includes the following response headers, in addition to the response headers that
are common to all operations. For more information about common response headers, see Common
Response Headers (p. 162).
Name Description
Location
The relative URI path of the multipart upload ID S3 Glacier created. You use this URI path to scope your requests to upload parts, and to complete the multipart upload.
Type: String
x-amz-multipart-upload-id
The ID of the multipart upload. This value is also included as part of the Location header.
Type: String
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Example
Example Request
The following example initiates a multipart upload by sending an HTTP POST request to the URI of the
multipart-uploads subresource of a vault named examplevault. The request includes headers to
specify the part size of 4 MB (4194304 bytes) and the optional archive description.
POST /-/vaults/examplevault/multipart-uploads HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-archive-description: MyArchive-101
x-amz-part-size: 4194304
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
S3 Glacier creates a multipart upload resource and adds it to the multipart-uploads subresource of
the vault. The Location response header includes the relative URI path to the multipart upload ID.
HTTP/1.1 201 Created
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Location: /012345678901/vaults/examplevault/multipart-uploads/
OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-khxOjyEXAMPLE
x-amz-multipart-upload-id:
OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-khxOjyEXAMPLE
For information about uploading individual parts, see Upload Part (PUT uploadID) (p. 246).
Related Sections
• Upload Part (PUT uploadID) (p. 246)
• Complete Multipart Upload (POST uploadID) (p. 230)
• Abort Multipart Upload (DELETE uploadID) (p. 228)
• List Multipart Uploads (GET multipart-uploads) (p. 241)
• List Parts (GET uploadID) (p. 236)
• Delete Archive (DELETE archive) (p. 222)
• Uploading Large Archives in Parts (Multipart Upload) (p. 75)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
You can make this request at any time during an in-progress multipart upload before you complete the
multipart upload. S3 Glacier returns the part list sorted by the range you specified in each part upload. If
you send a List Parts request after completing the multipart upload, Amazon S3 Glacier (S3 Glacier)
returns an error.
The List Parts operation supports pagination. Always check the Marker field in the response body
for a marker at which to continue the list; if there are no more items, the Marker field is null. If the
marker is not null, fetch the next set of parts by sending another List Parts request with the marker
request parameter set to the marker value S3 Glacier returned in response to your previous List Parts
request.
You can also limit the number of parts returned in the response by specifying the limit parameter in
the request.
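The marker/limit pagination loop described above can be sketched as follows. Here list_parts_page is a stand-in for issuing one List Parts request and parsing its JSON response body; it is not a real SDK call:

```python
def list_all_parts(list_parts_page, limit=1000):
    """Drain a paginated List Parts listing.

    list_parts_page(marker, limit) stands in for one List Parts request;
    it must return a dict with "Parts" and "Marker" keys, shaped like the
    documented response body.
    """
    parts, marker = [], None
    while True:
        page = list_parts_page(marker, limit)
        parts.extend(page["Parts"])
        marker = page["Marker"]
        if marker is None:        # null Marker means there are no more parts
            return parts

# Example with a two-page stub standing in for the service:
pages = {
    None: {"Parts": [{"RangeInBytes": "0-4194303"}], "Marker": "m1"},
    "m1": {"Parts": [{"RangeInBytes": "4194304-8388607"}], "Marker": None},
}
all_parts = list_all_parts(lambda marker, limit: pages[marker])
```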
Requests
To list the parts of an in-progress multipart upload, you send a GET request to the URI of the multipart
upload ID resource. The multipart upload ID is returned when you initiate a multipart upload (Initiate
Multipart Upload (POST multipart-uploads) (p. 233)). You may optionally specify marker and limit
parameters.
Syntax
GET /AccountId/vaults/VaultName/multipart-uploads/uploadID HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
limit
The maximum number of parts to be returned. The default limit is 1,000. The number of parts returned might be fewer than the specified limit, but the number of returned parts never exceeds the limit.
Required: no
Type: String
Constraints: None
marker
An opaque string used for pagination. marker specifies the part at which the listing of parts should begin. Get the marker value from a previous List Parts response. You need only include the marker if you are continuing the pagination of results started in a previous List Parts request.
Required: no
Type: String
Constraints: None
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: Length
{
"ArchiveDescription" : String,
"CreationDate" : String,
"Marker": String,
"MultipartUploadId" : String,
"PartSizeInBytes" : Number,
"Parts" :
[ {
"RangeInBytes" : String,
"SHA256TreeHash" : String
},
...
],
"VaultARN" : String
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
ArchiveDescription
The description of the archive that was specified in the Initiate Multipart Upload request. This field is
null if no archive description was specified in the Initiate Multipart Upload operation.
Type: String
CreationDate
The UTC time at which the multipart upload was initiated.
Type: String. A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
Marker
An opaque string that represents where to continue pagination of the results. You use the marker in
a new List Parts request to obtain more jobs in the list. If there are no more parts, this value is null.
Type: String
MultipartUploadId
The ID of the multipart upload.
Type: String
PartSizeInBytes
The part size in bytes. This is the same value that you specified in the Initiate Multipart Upload
request.
Type: Number
Parts
A list of the parts of the multipart upload. Each object in the array contains a RangeInBytes and a
SHA256TreeHash name/value pair.
Type: Array
RangeInBytes
The byte range of a part, inclusive of the upper value of the range.
Type: String
SHA256TreeHash
The SHA256 tree hash value that S3 Glacier calculated for the part. This field is never null.
Type: String
VaultARN
The Amazon Resource Name (ARN) of the vault to which the multipart upload was initiated.
Type: String
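A short sketch of consuming these fields, using a response body shaped like the documented example (hash values truncated as in the documentation); the helper name is illustrative:

```python
import json

# Response body shaped like the documented List Parts example.
response_body = json.loads("""
{
  "ArchiveDescription": "archive description",
  "Marker": null,
  "PartSizeInBytes": 4194304,
  "Parts": [
    {"RangeInBytes": "0-4194303", "SHA256TreeHash": "01d34dab..."},
    {"RangeInBytes": "4194304-8388607", "SHA256TreeHash": "01958753..."}
  ]
}
""")

def uploaded_bytes(body):
    """Sum part sizes; RangeInBytes is inclusive of its upper bound."""
    total = 0
    for part in body["Parts"]:
        first, last = (int(n) for n in part["RangeInBytes"].split("-"))
        total += last - first + 1
    return total
```

Because RangeInBytes is inclusive, each part's size is last - first + 1, so the two 4 MB parts above total 8,388,608 bytes.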
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example: List Parts of a Multipart Upload
The following example lists all the parts of an upload. The example sends an HTTP GET request to the
URI of the specific multipart upload ID of an in-progress multipart upload and returns up to 1,000 parts.
Example Request
GET /-/vaults/examplevault/multipart-uploads/
OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-
khxOjyEXAMPLE HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
In the response, S3 Glacier returns a list of uploaded parts associated with the specified multipart upload
ID. In this example, there are only two parts. The returned Marker field is null indicating that there are
no more parts of the multipart upload.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 412
{
"ArchiveDescription" : "archive description",
"CreationDate" : "2012-03-20T17:03:43.221Z",
"Marker": null,
"MultipartUploadId" :
"OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-
khxOjyEXAMPLE",
"PartSizeInBytes" : 4194304,
"Parts" :
[ {
"RangeInBytes" : "0-4194303",
"SHA256TreeHash" : "01d34dabf7be316472c93b1ef80721f5d4"
},
{
"RangeInBytes" : "4194304-8388607",
"SHA256TreeHash" : "0195875365afda349fc21c84c099987164"
}],
"VaultARN" : "arn:aws:glacier:us-west-2:012345678901:vaults/demo1-vault"
}
Example: List Parts of a Multipart Upload (Specify the Marker and the Limit
Request Parameters)
The following example demonstrates how to use pagination to get a limited number of results. The
example sends an HTTP GET request to the URI of the specific multipart upload ID of an in-progress
multipart upload to return one part. A starting marker parameter specifies at which part to start
the part list. You can get the marker value from the response of a previous request for a part list.
Furthermore, in this example, the limit parameter is set to 1 and returns one part. Note that the
Marker field is not null, indicating that there is at least one more part to obtain.
Example Request
GET /-/vaults/examplevault/multipart-uploads/
OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-
khxOjyEXAMPLE?marker=1001&limit=1 HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
In the response, S3 Glacier returns a list of uploaded parts that are associated with the specified in-
progress multipart upload ID.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 412
{
"ArchiveDescription" : "archive description 1",
"CreationDate" : "2012-03-20T17:03:43.221Z",
"Marker": "MfgsKHVjbQ6EldVl72bn3_n5h2TaGZQUO-Qb3B9j3TITf7WajQ",
"MultipartUploadId" :
"OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-
khxOjyEXAMPLE",
"PartSizeInBytes" : 4194304,
"Parts" :
[ {
"RangeInBytes" : "4194304-8388607",
"SHA256TreeHash" : "01d34dabf7be316472c93b1ef80721f5d4"
}],
"VaultARN" : "arn:aws:glacier:us-west-2:012345678901:vaults/demo1-vault"
}
Related Sections
• Initiate Multipart Upload (POST multipart-uploads) (p. 233)
• Upload Part (PUT uploadID) (p. 246)
• Complete Multipart Upload (POST uploadID) (p. 230)
• Abort Multipart Upload (DELETE uploadID) (p. 228)
• List Multipart Uploads (GET multipart-uploads) (p. 241)
• Uploading Large Archives in Parts (Multipart Upload) (p. 75)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
The List Multipart Uploads operation supports pagination. By default, this operation returns up to 50
multipart uploads in the response. You should always check the marker field in the response body for a
marker at which to continue the list; if there are no more items the marker field is null.
If the marker is not null, fetch the next set of multipart uploads by sending another List Multipart
Uploads request with the marker request parameter set to the marker value Amazon S3 Glacier (S3
Glacier) returned in response to your previous List Multipart Uploads request.
Note the difference between this operation and the List Parts (GET uploadID) (p. 236) operation. The
List Multipart Uploads operation lists all multipart uploads for a vault. The List Parts operation returns
parts of a specific multipart upload identified by an Upload ID.
For information about multipart upload, see Uploading Large Archives in Parts (Multipart Upload) (p. 75).
Requests
To list multipart uploads, send a GET request to the URI of the multipart-uploads subresource of the
vault. You may optionally specify marker and limit parameters.
Syntax
GET /AccountId/vaults/VaultName/multipart-uploads HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
limit
The maximum number of uploads returned in the response body. If this value is not specified, the List Uploads operation returns up to 50 uploads.
Required: no
Type: String
Constraints: None
marker
An opaque string used for pagination. marker specifies the upload at which the listing of uploads should begin. Get the marker value from a previous List Uploads response. You need only include the marker if you are continuing the pagination of results started in a previous List Uploads request.
Required: no
Type: String
Constraints: None
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: Length
{
"Marker": String,
"UploadsList" : [
{
"ArchiveDescription": String,
"CreationDate": String,
"MultipartUploadId": String,
"PartSizeInBytes": Number,
"VaultARN": String
},
...
]
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
ArchiveDescription
The description of the archive that was specified in the Initiate Multipart Upload request. This field is
null if no archive description was specified in the Initiate Multipart Upload operation.
Type: String
CreationDate
The UTC time at which the multipart upload was initiated.
Type: String. A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
Marker
An opaque string that represents where to continue pagination of the results. You use the marker
in a new List Multipart Uploads request to obtain more uploads in the list. If there are no more
uploads, this value is null.
Type: String
PartSizeInBytes
The part size specified in the Initiate Multipart Upload (POST multipart-uploads) (p. 233) request.
This is the size of all the parts in the upload except the last part, which may be smaller than this size.
Type: Number
MultipartUploadId
The ID of the multipart upload.
Type: String
UploadsList
A list of metadata about multipart upload objects. Each item in the list contains a set of name-
value pairs for the corresponding upload, including ArchiveDescription, CreationDate,
MultipartUploadId, PartSizeInBytes, and VaultARN.
Type: Array
VaultARN
The Amazon Resource Name (ARN) of the vault that contains the archive.
Type: String
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example: List All Multipart Uploads
The following example lists all the multipart uploads in progress for the vault. The example shows an
HTTP GET request to the URI of the multipart-uploads subresource of a specified vault. Because
the marker and limit parameters are not specified in the request, up to 50 in-progress multipart
uploads are returned.
Example Request
GET /-/vaults/examplevault/multipart-uploads HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
In the response S3 Glacier returns a list of all in-progress multipart uploads for the specified vault. The
marker field is null, which indicates that there are no more uploads to list.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 1054
{
"Marker": null,
"UploadsList": [
{
"ArchiveDescription": "archive 1",
"CreationDate": "2012-03-19T23:20:59.130Z",
"MultipartUploadId":
"xsQdFIRsfJr20CW2AbZBKpRZAFTZSJIMtL2hYf8mvp8dM0m4RUzlaqoEye6g3h3ecqB_zqwB7zLDMeSWhwo65re4C4Ev",
"PartSizeInBytes": 4194304,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
},
{
"ArchiveDescription": "archive 2",
"CreationDate": "2012-04-01T15:00:00.000Z",
"MultipartUploadId": "nPyGOnyFcx67qqX7E-0tSGiRi88hHMOwOxR-
_jNyM6RjVMFfV29lFqZ3rNsSaWBugg6OP92pRtufeHdQH7ClIpSF6uJc",
"PartSizeInBytes": 4194304,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
},
{
"ArchiveDescription": "archive 3",
"CreationDate": "2012-03-20T17:03:43.221Z",
"MultipartUploadId": "qt-RBst_7yO8gVIonIBsAxr2t-db0pE4s8MNeGjKjGdNpuU-
cdSAcqG62guwV9r5jh5mLyFPzFEitTpNE7iQfHiu1XoV",
"PartSizeInBytes": 4194304,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
}
]
}
Example: Partial List of Multipart Uploads
The following example demonstrates how to use pagination to get a limited result. The request sets
the limit parameter to 1 and specifies a marker from which to continue the listing.
Example Request
GET /-/vaults/examplevault/multipart-uploads?
limit=1&marker=xsQdFIRsfJr20CW2AbZBKpRZAFTZSJIMtL2hYf8mvp8dM0m4RUzlaqoEye6g3h3ecqB_zqwB7zLDMeSWhwo65re4
HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
In the response, Amazon S3 Glacier (S3 Glacier) returns one in-progress multipart upload for the
specified vault, starting at the specified marker, because the limit parameter was set to 1 in the
request. The Marker field is not null, which indicates that there are more uploads to list.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 470
{
"Marker": "qt-RBst_7yO8gVIonIBsAxr2t-db0pE4s8MNeGjKjGdNpuU-
cdSAcqG62guwV9r5jh5mLyFPzFEitTpNE7iQfHiu1XoV",
"UploadsList" : [
{
"ArchiveDescription": "archive 2",
"CreationDate": "2012-04-01T15:00:00.000Z",
"MultipartUploadId": "nPyGOnyFcx67qqX7E-0tSGiRi88hHMOwOxR-
_jNyM6RjVMFfV29lFqZ3rNsSaWBugg6OP92pRtufeHdQH7ClIpSF6uJc",
"PartSizeInBytes": 4194304,
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
}
]
}
Related Sections
• Initiate Multipart Upload (POST multipart-uploads) (p. 233)
• Upload Part (PUT uploadID) (p. 246)
• Complete Multipart Upload (POST uploadID) (p. 230)
• Abort Multipart Upload (DELETE uploadID) (p. 228)
• List Parts (GET uploadID) (p. 236)
• Uploading Large Archives in Parts (Multipart Upload) (p. 75)
Upload Part (PUT uploadID)
This multipart upload operation uploads a part of an archive. For information about multipart upload,
see Uploading Large Archives in Parts (Multipart Upload) (p. 75).
Amazon S3 Glacier (S3 Glacier) rejects your upload part request if any of the following conditions is true:
• SHA256 tree hash does not match—To ensure that part data is not corrupted in transmission, you
compute a SHA256 tree hash of the part and include it in your request. Upon receiving the part data,
S3 Glacier also computes a SHA256 tree hash. If the two hash values don't match, the operation fails.
For information about computing a SHA256 tree hash, see Computing Checksums (p. 166).
• SHA256 linear hash does not match—Required for authorization, you compute a SHA256 linear hash
of the entire uploaded payload and include it in your request. For information about computing a
SHA256 linear hash, see Computing Checksums (p. 166).
• Part size does not match—The size of each part except the last must match the size that is specified
in the corresponding Initiate Multipart Upload (POST multipart-uploads) (p. 233) request. The size of
the last part must be the same size as, or smaller than, the specified size.
Note
If you upload a part whose size is smaller than the part size you specified in your initiate
multipart upload request and that part is not the last part, then the upload part request will
succeed. However, the subsequent Complete Multipart Upload request will fail.
• Range does not align—The byte range value in the request does not align with the part size specified
in the corresponding initiate request. For example, if you specify a part size of 4194304 bytes (4 MB),
then 0 to 4194303 bytes (4 MB —1) and 4194304 (4 MB) to 8388607 (8 MB —1) are valid part ranges.
However, if you set a range value of 2 MB to 6 MB, the range does not align with the part size and the
upload will fail.
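The two checksums the bullets above describe can be computed with a short Python sketch. The tree hash follows the algorithm in Computing Checksums (p. 166): hash each 1 MiB chunk, then repeatedly hash concatenated digest pairs until one root digest remains. The function names here are illustrative.

```python
import hashlib

MEGABYTE = 1024 * 1024

def tree_hash(data: bytes) -> str:
    """SHA256 tree hash: hash each 1 MiB chunk, then pairwise-combine
    digests level by level until a single root digest remains."""
    chunks = [data[i:i + MEGABYTE] for i in range(0, len(data), MEGABYTE)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(hashlib.sha256(level[i] + level[i + 1]).digest())
        if len(level) % 2:  # odd digest is carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0].hex()

def linear_hash(data: bytes) -> str:
    """SHA256 linear hash of the whole payload (x-amz-content-sha256)."""
    return hashlib.sha256(data).hexdigest()
```

For payloads of 1 MiB or less, the tree hash and the linear hash are the same value; they diverge once the payload spans multiple chunks.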
This operation is idempotent. If you upload the same part multiple times, the data included in the most
recent request overwrites the previously uploaded data.
Requests
You send this HTTP PUT request to the URI of the upload ID that was returned by your Initiate Multipart
Upload request. S3 Glacier uses the upload ID to associate part uploads with a specific multipart upload.
The request must include a SHA256 tree hash of the part data (x-amz-SHA256-tree-hash header), a
SHA256 linear hash of the entire payload (x-amz-content-sha256 header), the byte range (Content-
Range header), and the length of the part in bytes (Content-Length header).
Syntax
PUT /AccountId/vaults/VaultName/multipart-uploads/uploadID HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
Content-Range: ContentRange
Content-Length: PayloadSize
Content-Type: application/octet-stream
x-amz-sha256-tree-hash: Checksum of the part
x-amz-content-sha256: Checksum of the entire payload
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses the following request headers, in addition to the request headers that are common
to all operations. For more information about the common request headers, see Common Request
Headers (p. 160).
Name Description Required
Content-Length The size of the part, in bytes. Yes
Type: String
Default: None
Constraints: None
Content-Range The byte range of the part in the overall archive, in the form Yes
bytes StartByte-EndByte/*. The total archive size is unknown
while the upload is in progress, so it is given as an asterisk (*).
Type: String
Default: None
Constraints: None
x-amz-content-sha256 The SHA256 linear hash of the entire uploaded payload. This is Yes
not the same value as the x-amz-sha256-tree-hash header.
Type: String
Default: None
Constraints: None
x-amz-sha256-tree-hash The SHA256 tree hash of the part being uploaded. For Yes
information about computing a SHA256 tree hash, see Computing
Checksums (p. 166).
Type: String
Default: None
Constraints: None
Request Body
The request body contains the data to upload.
Responses
Upon a successful part upload, S3 Glacier returns a 204 No Content response.
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
x-amz-sha256-tree-hash: ChecksumComputedByAmazonGlacier
Response Headers
A successful response includes the following response headers, in addition to the response headers that
are common to all operations. For more information about common response headers, see Common
Response Headers (p. 162).
Name Description
x-amz-sha256- The SHA256 tree hash that S3 Glacier computed for the uploaded part.
tree-hash
Type: String
Response Body
This operation does not return a response body.
Example
The following request uploads a 4 MB part. The request sets the byte range to make this the first part in
the archive.
Example Request
The example sends an HTTP PUT request to upload a 4 MB part. The request is sent to the URI of the
Upload ID that was returned by the Initiate Multipart Upload request. The Content-Range header
identifies the part as the first 4 MB data part of the archive.
PUT /-/vaults/examplevault/multipart-uploads/
OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-
khxOjyEXAMPLE HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Range:bytes 0-4194303/*
x-amz-sha256-tree-hash:c06f7cd4baacb087002a99a5f48bf953
x-amz-content-sha256:726e392cb4d09924dbad1cc0ba3b00c3643d03d14cb4b823e2f041cff612a628
x-amz-glacier-version: 2012-06-01
Content-Length: 4194304
Authorization: Authorization=AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-
glacier-version,Signature=16b9a9e220a37e32f2e7be196b4ebb87120ca7974038210199ac5982e792cace
To upload the next part, the procedure is the same; however, you must calculate a new SHA256 tree hash
of the part you are uploading and also specify a new byte range to indicate where the part will go in
the final assembly. The following request uploads another part using the same upload ID. The request
specifies the next 4 MB of the archive after the previous request and a part size of 4 MB.
PUT /-/vaults/examplevault/multipart-uploads/
OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-
khxOjyEXAMPLE HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Range:bytes 4194304-8388607/*
Content-Length: 4194304
x-amz-sha256-tree-hash:f10e02544d651e2c3ce90a4307427493
x-amz-content-sha256:726e392cb4d09924dbad1cc0ba3b00c3643d03d14cb4b823e2f041cff612a628
x-amz-glacier-version: 2012-06-01
Authorization: Authorization=AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20120525/
us-west-2/glacier/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-
glacier-version, Signature=16b9a9e220a37e32f2e7be196b4ebb87120ca7974038210199ac5982e792cace
The parts can be uploaded in any order; S3 Glacier uses the range specification for each part to
determine the order in which to assemble them.
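The range rules above can be made concrete with a small Python sketch that computes part byte ranges and the corresponding Content-Range values; the helper names are illustrative.

```python
def part_ranges(archive_size: int, part_size: int):
    """Yield (start, end) byte ranges for each part. Every part except
    the last is exactly part_size bytes, and every range starts on a
    multiple of part_size, which is what range alignment requires."""
    for start in range(0, archive_size, part_size):
        end = min(start + part_size, archive_size) - 1
        yield start, end

def content_range_header(start: int, end: int) -> str:
    # Content-Range format used by Upload Part; the total size is
    # unknown during the upload, hence the '*'.
    return f"bytes {start}-{end}/*"

def is_aligned(start: int, end: int, part_size: int, archive_size=None) -> bool:
    """True when the range starts on a part_size multiple and is exactly
    part_size long (the final, shorter part is also valid)."""
    if start % part_size != 0:
        return False
    length = end - start + 1
    if archive_size is not None and end == archive_size - 1:
        return length <= part_size
    return length == part_size
```

With a 4 MB part size, this reproduces the ranges from the examples above: bytes 0-4194303 for the first part and bytes 4194304-8388607 for the second, while the 2 MB to 6 MB range is rejected.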
Example Response
HTTP/1.1 204 No Content
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
x-amz-sha256-tree-hash: c06f7cd4baacb087002a99a5f48bf953
Related Sections
• Initiate Multipart Upload (POST multipart-uploads) (p. 233)
• Upload Part (PUT uploadID) (p. 246)
• Complete Multipart Upload (POST uploadID) (p. 230)
• Abort Multipart Upload (DELETE uploadID) (p. 228)
• List Multipart Uploads (GET multipart-uploads) (p. 241)
• List Parts (GET uploadID) (p. 236)
• Uploading Large Archives in Parts (Multipart Upload) (p. 75)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
Job Operations
The following are the job operations available in S3 Glacier.
Topics
• Describe Job (GET JobID) (p. 250)
• Get Job Output (GET output) (p. 257)
• Initiate Job (POST jobs) (p. 263)
• List Jobs (GET jobs) (p. 273)
Describe Job (GET JobID)
This operation returns information about a job you previously initiated, including the job initiation
date, the user who initiated the job, the job status code and message, and the Amazon SNS topic to
notify after S3 Glacier completes the job.
A job ID will not expire for at least 24 hours after S3 Glacier completes the job.
Requests
Syntax
To obtain information about a job, you use the HTTP GET method and scope the request to the specific
job. Note that the relative URI path is the same one that S3 Glacier returned to you when you initiated
the job.
GET /AccountId/vaults/VaultName/jobs/JobID HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Note
In the request, if you omit the JobID, the response returns a list of all active jobs on the
specified vault. For more information about listing jobs, see List Jobs (GET jobs) (p. 273).
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: Length
{
"Action": "string",
"ArchiveId": "string",
"ArchiveSHA256TreeHash": "string",
"ArchiveSizeInBytes": number,
"Completed": boolean,
"CompletionDate": "string",
"CreationDate": "string",
"InventoryRetrievalParameters": {
"EndDate": "string",
"Format": "string",
"Limit": "string",
"Marker": "string",
"StartDate": "string"
},
"InventorySizeInBytes": number,
"JobDescription": "string",
"JobId": "string",
"JobOutputPath": "string",
"OutputLocation": {
"S3": {
"AccessControlList": [
{
"Grantee": {
"DisplayName": "string",
"EmailAddress": "string",
"ID": "string",
"Type": "string",
"URI": "string"
},
"Permission": "string"
}
],
"BucketName": "string",
"CannedACL": "string",
"Encryption": {
"EncryptionType": "string",
"KMSContext": "string",
"KMSKeyId": "string"
},
"Prefix": "string",
"StorageClass": "string",
"Tagging": {
"string": "string"
},
"UserMetadata": {
"string": "string"
}
}
},
"RetrievalByteRange": "string",
"SelectParameters": {
"Expression": "string",
"ExpressionType": "string",
"InputSerialization": {
"csv": {
"Comments": "string",
"FieldDelimiter": "string",
"FileHeaderInfo": "string",
"QuoteCharacter": "string",
"QuoteEscapeCharacter": "string",
"RecordDelimiter": "string"
}
},
"OutputSerialization": {
"csv": {
"FieldDelimiter": "string",
"QuoteCharacter": "string",
"QuoteEscapeCharacter": "string",
"QuoteFields": "string",
"RecordDelimiter": "string"
}
}
},
"SHA256TreeHash": "string",
"SNSTopic": "string",
"StatusCode": "string",
"StatusMessage": "string",
"Tier": "string",
"VaultARN": "string"
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
Action
The job type. This value is ArchiveRetrieval, InventoryRetrieval, or Select.
Type: String
ArchiveId
The archive ID requested for a select or archive retrieval job. Otherwise, this field is null.
Type: String
ArchiveSHA256TreeHash
The SHA256 tree hash of the entire archive for an archive retrieval job. For inventory retrieval jobs,
this field is null.
Type: String
ArchiveSizeInBytes
For an ArchiveRetrieval job, this is the size in bytes of the archive being requested for
download. For the InventoryRetrieval job, the value is null.
Type: Number
Completed
The job status. When an archive or inventory retrieval job is completed, you get the job's output
using the Get Job Output (GET output) (p. 257) operation.
Type: Boolean
CompletionDate
The Universal Coordinated Time (UTC) time that the job request completed. While the job is in
progress, the value is null.
Type: String
CreationDate
The UTC date when the job was created.
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
InventoryRetrievalParameters
Input parameters used for a range inventory retrieval.
Type: InventoryRetrievalJobInput object
InventorySizeInBytes
For an InventoryRetrieval job, this is the size in bytes of the inventory requested for download.
For the ArchiveRetrieval or Select job, the value is null.
Type: Number
JobDescription
The job description you provided when you initiated the job.
Type: String
JobId
An opaque string that identifies the job.
Type: String
JobOutputPath
Contains the job output location.
Type: String
OutputLocation
An object that contains information about the location where the select job results and errors are
stored.
Type: OutputLocation object
RetrievalByteRange
The retrieved byte range for archive retrieval jobs in the form "StartByteValue-EndByteValue."
If you don't specify a range in the archive retrieval, then the whole archive is retrieved; also
StartByteValue equals 0, and EndByteValue equals the size of the archive minus 1. For inventory
retrieval or select jobs, this field is null.
Type: String
SelectParameters
An object that contains information about the parameters used for a select.
Type: SelectParameters object
SHA256TreeHash
The SHA256 tree hash value for the requested range of an archive. If the Initiate Job (POST
jobs) (p. 263) request for an archive specified a tree-hash aligned range, then this field returns a
value. For more information about tree-hash alignment for archive range retrievals, see Receiving
Checksums When Downloading Data (p. 175).
For the specific case when the whole archive is retrieved, this value is the same as the
ArchiveSHA256TreeHash value.
Type: String
SNSTopic
The Amazon SNS topic that receives a notification after the job is completed.
Type: String
StatusCode
The status code for the job. Valid values are InProgress, Succeeded, and Failed.
Type: String
StatusMessage
A friendly message that describes the job status.
Type: String
Tier
The data access tier to use for the select or archive retrieval.
Type: String
VaultARN
The Amazon Resource Name (ARN) of the vault of which the job is a subresource.
Type: String
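A minimal client-side sketch of reading these fields: the helper decides, from Completed and StatusCode, whether the job output can be fetched with Get Job Output. The function name and the trimmed JSON sample are illustrative, not part of the API.

```python
import json

def job_ready(describe_job_body: str) -> bool:
    """True when the job output can be fetched with Get Job Output:
    Completed is true and StatusCode is Succeeded."""
    job = json.loads(describe_job_body)
    return job["Completed"] and job["StatusCode"] == "Succeeded"

# Trimmed version of the archive-retrieval example response below.
example = '''{
  "Action": "ArchiveRetrieval",
  "Completed": false,
  "StatusCode": "InProgress",
  "StatusMessage": "Operation in progress."
}'''
```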
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example shows the request for a job that retrieves an archive.
GET /-/vaults/examplevault/jobs/HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
The response body includes JSON that describes the specified job. The JSON fields are the same for
both inventory retrieval and archive retrieval jobs; however, when a field doesn't apply to the type of
job, its value is null. The following is an example response for an archive retrieval job.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 419
{
"Action": "ArchiveRetrieval",
"ArchiveId": "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId",
"ArchiveSizeInBytes": 16777216,
"ArchiveSHA256TreeHash":
"beb0fe31a1c7ca8c6c04d574ea906e3f97b31fdca7571defb5b44dca89b5af60",
"Completed": false,
"CompletionDate": null,
"CreationDate": "2012-05-15T17:21:39.339Z",
"InventorySizeInBytes": null,
"JobDescription": "My ArchiveRetrieval Job",
"JobId": "HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID",
"RetrievalByteRange": "0-16777215",
"SHA256TreeHash": "beb0fe31a1c7ca8c6c04d574ea906e3f97b31fdca7571defb5b44dca89b5af60",
"SNSTopic": "arn:aws:sns:us-west-2:012345678901:mytopic",
"StatusCode": "InProgress",
"StatusMessage": "Operation in progress.",
"Tier": "Bulk",
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
}
The following is an example response for an inventory retrieval job.
{
"Action": "InventoryRetrieval",
"ArchiveId": null,
"ArchiveSizeInBytes": null,
"ArchiveSHA256TreeHash": null,
"Completed": false,
"CompletionDate": null,
"CreationDate": "2012-05-15T23:18:13.224Z",
"InventorySizeInBytes": null,
"JobDescription": "Inventory Description",
"JobId": "HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID",
"RetrievalByteRange": null,
"SHA256TreeHash": null,
"SNSTopic": "arn:aws:sns:us-west-2:012345678901:mytopic",
"StatusCode": "InProgress",
"StatusMessage": "Operation in progress.",
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
}
The following is an example response for a completed inventory retrieval job that contains a marker used
to continue pagination of the vault inventory retrieval.
{
"Action": "InventoryRetrieval",
"ArchiveId": null,
"ArchiveSHA256TreeHash": null,
"ArchiveSizeInBytes": null,
"Completed": true,
"CompletionDate": "2013-12-05T21:51:13.591Z",
"CreationDate": "2013-12-05T21:51:12.281Z",
"InventorySizeInBytes": 777062,
"JobDescription": null,
"JobId": "sCC2RZNBF2nildYD_roe0J9bHRdPQUbDRkmTdg-mXi2u3lc49uW6TcEhDF2D9pB2phx-
BN30JaBru7PMyOlfXHdStzu8",
"NextInventoryRetrievalMarker": null,
"RetrievalByteRange": null,
"SHA256TreeHash": null,
"SNSTopic": null,
"StatusCode": "Succeeded",
"StatusMessage": "Succeeded",
"Tier": "Bulk",
"VaultARN": "arn:aws:glacier-devo:us-west-2:836579025725:vaults/inventory-icecube-2",
"InventoryRetrievalParameters": {
"StartDate": "2013-11-12T13:43:12Z",
"EndDate": "2013-11-20T08:12:45Z",
"Limit": "120000",
"Format": "JSON",
"Marker":
"vyS0t2jHQe5qbcDggIeD50chS1SXwYMrkVKo0KHiTUjEYxBGCqRLKaiySzdN7QXGVVV5XZpNVG67pCZ_uykQXFMLaxOSu2hO_-5C0
}
}
Related Sections
• Get Job Output (GET output) (p. 257)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
Get Job Output (GET output)
This operation downloads the output of the job you initiated using Initiate Job (POST jobs) (p. 263).
You can download all the job output or download a portion of the output by specifying a byte range.
For both archive and inventory retrieval jobs, you should verify the downloaded size against the size
returned in the headers from the Get Job Output response.
For archive retrieval jobs, you should also verify that the size is what you expected. If you download a
portion of the output, the expected size is based on the range of bytes you specified. For example, if
you specify a range of bytes=0-1048575, you should verify your download size is 1,048,576 bytes.
If you download an entire archive, the expected size is the size of the archive when you uploaded it
to Amazon S3 Glacier (S3 Glacier). The expected size is also returned in the headers from the Get Job
Output response.
In the case of an archive retrieval job, depending on the byte range you specify, S3 Glacier returns
the checksum for the portion of the data. To ensure the portion you downloaded is the correct data,
compute the checksum on the client, verify that the values match, and verify that the size is what you
expected.
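The size check described above can be sketched as follows; the helper names are illustrative.

```python
def expected_length(range_header: str) -> int:
    """Expected byte count for a Range header like 'bytes=0-1048575'."""
    spec = range_header.split("=", 1)[1]
    start, end = (int(n) for n in spec.split("-"))
    return end - start + 1

def verify_download(range_header: str, body: bytes) -> bool:
    # Check the downloaded size against the range you asked for
    # before trusting the data.
    return len(body) == expected_length(range_header)
```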
A job ID does not expire for at least 24 hours after S3 Glacier completes the job. That is, you can
download the job output within the 24-hour period after S3 Glacier completes the job.
Requests
Syntax
To retrieve a job output, you send the HTTP GET request to the URI of the output of the specific job.
GET /AccountId/vaults/VaultName/jobs/JobID/output HTTP/1.1
Host: glacier.Region.amazonaws.com
Date: Date
Authorization: SignatureValue
Range: ByteRangeToRetrieve
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses the following request headers, in addition to the request headers that are common
to all operations. For more information about the common request headers, see Common Request
Headers (p. 160).
Name Description Required
Range The range of bytes to retrieve from the output. For example, if No
you want to download the first 1,048,576 bytes, specify the range
as bytes=0-1048575. For more information, go to Range Header
Field Definition. The range is relative to any range specified in
the Initiate Job request. By default, this operation downloads the
entire output.
If the job output is large, you can use the Range request header
to retrieve a portion of the output, which lets you download the
entire output in smaller chunks of bytes. For example, suppose
you have 1 GB of job output that you want to download in 128 MB
chunks; this takes a total of eight Get Job Output requests, each
specifying the appropriate byte range with the Range header.
Type: String
Constraints: None
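The chunked-download scenario in the Range description can be sketched by precomputing the Range header values for each request; the function name is illustrative.

```python
def chunk_ranges(total_size: int, chunk_size: int):
    """Range header values for downloading job output in fixed-size
    chunks. The last chunk may be shorter than chunk_size."""
    return [f"bytes={s}-{min(s + chunk_size, total_size) - 1}"
            for s in range(0, total_size, chunk_size)]
```

For the 1 GB output downloaded in 128 MB chunks, this produces the eight ranges to send, one per Get Job Output request.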
Request Body
This operation does not have a request body.
Responses
Syntax
For a retrieval request that returns all of the job data, the job output response returns a 200 OK
response code. When partial content is requested, for example, if you specified the Range header in the
request, then the response code 206 Partial Content is returned.
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: ContentType
Content-Length: Length
x-amz-sha256-tree-hash: ChecksumComputedByAmazonGlacier
Response Headers
Header Description
Content-Range The range of bytes returned by S3 Glacier. This header is returned when
only partial output is returned (a 206 Partial Content response)
and specifies the range of bytes returned.
Type: String
Content-Type The type of the returned content. For an archive retrieval job, the
Content-Type is application/octet-stream. For an inventory
retrieval job, the Content-Type depends on the output format you
requested when you initiated the job.
Type: String
x-amz-sha256-tree- The checksum of the data in the response. This header is returned only
hash when retrieving the output for an archive retrieval job. Furthermore,
this header appears when the retrieved data range requested in the
Initiate Job request is tree hash aligned and the range to download in
the Get Job Output is also tree hash aligned. For more information about
tree hash aligned ranges, see Receiving Checksums When Downloading
Data (p. 175).
For example, if in your Initiate Job request you specified a tree hash
aligned range to retrieve (which includes the whole archive), then you
receive the checksum of the data you download.
Type: String
Response Body
S3 Glacier returns the job output in the response body. Depending on the job type, the output can be
the archive contents or the vault inventory. In case of a vault inventory, by default the inventory list is
returned as the following JSON body.
{
"VaultARN": String,
"InventoryDate": String,
"ArchiveList": [
{"ArchiveId": String,
"ArchiveDescription": String,
"CreationDate": String,
"Size": Number,
"SHA256TreeHash": String
},
...
]
}
If you requested the comma-separated values (CSV) output format when you initiated the vault
inventory job, then the vault inventory is returned in CSV format in the body. The CSV format has five
columns "ArchiveId", "ArchiveDescription", "CreationDate", "Size", and "SHA256TreeHash" with the same
definitions as the corresponding JSON fields.
Note
In the returned CSV format, fields may be returned with the whole field enclosed in double-
quotes. Fields that contain a comma or double-quotes are always returned enclosed in
double-quotes. For example, my archive description,1 is returned as "my archive
description,1". Double-quote characters that are within returned double-quote enclosed
fields are escaped by preceding them with a backslash character. For example, my archive
description,1"2 is returned as "my archive description,1\"2" and my archive
description,1\"2 is returned as "my archive description,1\\"2". The backslash
character is not escaped.
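A Python sketch of parsing this CSV format with the standard csv module, using escapechar for the backslash-escaped double-quotes the note describes. Because the backslash itself is not escaped, fields containing a literal backslash are ambiguous under this scheme and may need special handling; this sketch covers the common case. The sketch also assumes no header row, zipping each record against the documented column names.

```python
import csv
import io

# Column names taken from the CSV description above.
COLUMNS = ["ArchiveId", "ArchiveDescription", "CreationDate",
           "Size", "SHA256TreeHash"]

def parse_inventory_csv(text: str):
    """Parse the five-column vault inventory CSV into dicts,
    honoring backslash-escaped double-quotes inside quoted fields."""
    reader = csv.reader(io.StringIO(text), quotechar='"', escapechar="\\")
    return [dict(zip(COLUMNS, row)) for row in reader if row]
```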
ArchiveDescription
The description of the archive.
Type: String
ArchiveId
The ID of an archive.
Type: String
ArchiveList
An array of archive metadata. Each object in the array represents metadata for one archive contained
in the vault.
Type: Array
CreationDate
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
InventoryDate
The UTC date and time of the last inventory for the vault that was completed after changes to
the vault. Even though S3 Glacier prepares a vault inventory once a day, the inventory date is only
updated if there have been archive additions or deletions to the vault since the last inventory.
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
SHA256TreeHash
The tree hash of the archive.
Type: String
Size
The size in bytes of the archive.
Type: Number
VaultARN
The Amazon Resource Name (ARN) resource from which the archive retrieval was requested.
Type: String
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example shows the request for a job that retrieves an archive.
Example Request
GET /-/vaults/examplevault/jobs/HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID/output
HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
The following is an example response for an archive retrieval job. Note that the Content-Type header
is application/octet-stream and that the x-amz-sha256-tree-hash header is included in the
response, which means that all the job data is returned.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
x-amz-sha256-tree-hash: beb0fe31a1c7ca8c6c04d574ea906e3f97b31fdca7571defb5b44dca89b5af60
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/octet-stream
Content-Length: 1048576
[Archive data.]
The following is an example response for an inventory retrieval job. Note that the Content-Type header
is application/json. Also note that the response does not include the x-amz-sha256-tree-hash
header.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 906
{
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault",
"InventoryDate": "2011-12-12T14:19:01Z",
"ArchiveList": [
{
"ArchiveId": "DMTmICA2n5Tdqq5BV2z7og-
A20xnpAPKt3UXwWxdWsn_D6auTUrW6kwy5Qyj9xd1MCE1mBYvMQ63LWaT8yTMzMaCxB_9VBWrW4Jw4zsvg5kehAPDVKcppUD1X7b24J
oA",
"ArchiveDescription": "my archive1",
"CreationDate": "2012-05-15T17:19:46.700Z",
"Size": 2140123,
"SHA256TreeHash": "6b9d4cf8697bd3af6aa1b590a0b27b337da5b18988dbcc619a3e608a554a1e62"
},
{
"ArchiveId": "2lHzwhKhgF2JHyvCS-
ZRuF08IQLuyB4265Hs3AXj9MoAIhz7tbXAvcFeHusgU_hViO1WeCBe0N5lsYYHRyZ7rrmRkNRuYrXUs_sjl2K8ume_7mKO_0i7C-
uHE1oHqaW9d37pabXrSA",
"ArchiveDescription": "my archive2",
"CreationDate": "2012-05-15T17:21:39.339Z",
"Size": 2140123,
"SHA256TreeHash": "7f2fe580edb35154041fa3d4b41dd6d3adaef0c85d2ff6309f1d4b520eeecda3"
}
]
}
Example Request: Download Output Using a Byte Range
The following request is similar to the previous one, but it uses the Range header to download only
the first 1,024 bytes of the job output.
GET /-/vaults/examplevault/jobs/HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID/output
HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
Range: bytes=0-1023
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
The following successful response shows the 206 Partial Content response. In this case, the
response also includes a Content-Range header that specifies the range of bytes S3 Glacier returns.
HTTP/1.1 206 Partial Content
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Range: bytes 0-1023/TotalSize
Content-Type: application/octet-stream
Content-Length: 1024
[Archive data.]
Related Sections
• Describe Job (GET JobID) (p. 250)
• Initiate Job (POST jobs) (p. 263)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
Initiate Job (POST jobs)
This operation initiates a job of the specified type, which can be a select, an archive retrieval, or a
vault inventory retrieval.
Topics
• Working with Amazon S3 Glacier Select Jobs (p. 264)
• Define an output location for the output of your select query. This location must be an Amazon
S3 bucket in the same AWS Region as the vault containing the archive object being queried.
The AWS account that initiates the job must have permissions to write to the S3 bucket. You
can specify the storage class and encryption for the output objects stored in Amazon S3. When
setting S3Location (p. 290), it might be helpful to read the following topics in the Amazon S3
documentation:
• PUT Object in the Amazon Simple Storage Service API Reference
• Managing Access with ACLs in the Amazon Simple Storage Service Developer Guide
• Protecting Data Using Server-Side Encryption in the Amazon Simple Storage Service Developer Guide
• Define the SQL expression to use for the SELECT for your query in SelectParameters (p. 291). For
example, you can use expressions like the following examples:
• The following example expression returns all records from the specified object.
• Assuming you are not using any headers for data stored in the object, you can specify columns using
positional headers.
• If you have headers and you set the FileHeaderInfo field in CSVInput (p. 279) to Use, you can specify
headers in the query. (If you set the FileHeaderInfo field to Ignore, the first row is skipped for
the query.) You cannot mix ordinal positions with header column names.
For more information about using SQL with S3 Glacier Select, see SQL Reference for Amazon S3 Select
and S3 Glacier Select (p. 304).
• Specify the Expedited tier to expedite your queries. For more information, see Expedited, Standard,
and Bulk Tiers (p. 266).
• Specify details about the data serialization format of both the input object being queried and the
serialization of the CSV-encoded query results.
• Specify an Amazon Simple Notification Service (Amazon SNS) topic to which S3 Glacier can post
a notification after the job is completed. You can specify an SNS topic for each job request. The
notification is sent only after S3 Glacier completes the job.
• You can use Describe Job (GET JobID) (p. 250) to obtain job status information while a job is in
progress. However, it is more efficient to use an Amazon SNS notification to determine when a job is
complete.
• Call the GetJobOutput operation. Job output is written to the output location.
• Use ranged selection.
For an example of initiating a select job, see Example Request: Initiate a select job (p. 271).
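The three expression styles described above can be sketched in Python. The table name archive comes from the select example request later in this guide; the column name "Id" and the positions _1 and _2 are hypothetical:

```python
# Three illustrative select expressions. "archive" is the table name used in
# this guide's select examples; the column name "Id" and the positions
# _1/_2 are hypothetical stand-ins for your data.
all_records = "select * from archive"             # returns every record
by_position = "select s._1, s._2 from archive s"  # positional columns (no headers)
by_header = 'select s."Id" from archive s'        # header name (FileHeaderInfo = Use)

def select_parameters(expression):
    """Build the SelectParameters fragment of an initiate-job request body."""
    return {"Expression": expression, "ExpressionType": "SQL"}

print(select_parameters(all_records))
```

Remember that positional references and header column names cannot be mixed in a single expression.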
1. Initiate a retrieval job by using the Initiate Job (POST jobs) (p. 263) operation.
Important
A data retrieval policy can cause your initiate retrieval job request to fail with a
PolicyEnforcedException. For more information about data retrieval policies, see
Amazon S3 Glacier Data Retrieval Policies (p. 151). For more information about the
PolicyEnforcedException exception, see Error Responses (p. 176).
2. After the job completes, download the bytes using the Get Job Output (GET output) (p. 257)
operation.
The retrieval request runs asynchronously. When you initiate a retrieval job, S3 Glacier creates a job
and returns a job ID in the response. When S3 Glacier completes the job, you can get the job output
(archive or inventory data). For information about getting job output, see the Get Job Output (GET
output) (p. 257) operation.
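The two-step flow above can be sketched in Python. The client here is a stub that stands in for the REST operations (Initiate Job, Describe Job, Get Job Output); it is not the AWS SDK, and the method names are illustrative:

```python
import time

class StubGlacierClient:
    """Minimal stand-in for the Initiate Job / Describe Job / Get Job Output
    operations, for illustration only."""
    def __init__(self):
        self._polls = 0
    def initiate_job(self, vault, job_parameters):
        return "EXAMPLEjobID"                   # the job ID from the Location header
    def describe_job(self, vault, job_id):
        self._polls += 1
        return {"Completed": self._polls >= 3}  # pretend the job finishes on poll 3
    def get_job_output(self, vault, job_id):
        return b"archive bytes"

def retrieve_archive(client, vault, archive_id, poll_seconds=0):
    # Step 1: initiate the retrieval job.
    job_id = client.initiate_job(vault, {"Type": "archive-retrieval",
                                         "ArchiveId": archive_id})
    # Step 2: wait for completion, then download. (An SNS notification is
    # more efficient than polling, as the guide notes.)
    while not client.describe_job(vault, job_id)["Completed"]:
        time.sleep(poll_seconds)
    return client.get_job_output(vault, job_id)

print(retrieve_archive(StubGlacierClient(), "examplevault", "exampleArchiveId"))
```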
The job must complete before you can get its output. To determine when a job is complete, you have the
following options:
• Use an Amazon SNS notification— You can specify an Amazon SNS topic to which S3 Glacier can post
a notification after the job is completed. You can specify an SNS topic per job request. The notification
is sent only after S3 Glacier completes the job. In addition to specifying an SNS topic per job request,
you can configure vault notifications for a vault so that job notifications are sent for all retrievals. For
more information, see Set Vault Notification Configuration (PUT notification-configuration) (p. 219).
• Get job details— You can make a Describe Job (GET JobID) (p. 250) request to obtain job status
information while a job is in progress. However, it is more efficient to use an Amazon SNS notification
to determine when a job is complete.
Note
The information you get via notification is the same as what you get by calling Describe Job (GET
JobID) (p. 250).
If, for a specific event, you both add a notification configuration on the vault and specify an SNS
topic in your initiate job request, S3 Glacier sends both notifications. For more information, see Set Vault
Notification Configuration (PUT notification-configuration) (p. 219).
inventory, the inventory date is not updated. When you initiate a job for a vault inventory, S3 Glacier
returns the last inventory it generated, which is a point-in-time snapshot and not real-time data.
After S3 Glacier creates the first inventory for the vault, it typically takes half a day, and up to a day,
before that inventory is available for retrieval.
You might not find it useful to retrieve a vault inventory for each archive upload. However, suppose
that you maintain a database on the client-side associating metadata about the archives you upload
to S3 Glacier. Then, you might find the vault inventory useful to reconcile information, as needed, in
your database with the actual vault inventory. For more information about the data fields returned in an
inventory job output, see Response Body (p. 260).
You can retrieve inventory items for archives created between StartDate and EndDate by specifying
values for these parameters in the Initiate Job request. Archives created on or after the StartDate and
before the EndDate are returned. If you provide only the StartDate without the EndDate, you retrieve
the inventory for all archives created on or after the StartDate. If you provide only the EndDate
without the StartDate, you get back the inventory for all archives created before the EndDate.
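The date-range semantics above (StartDate inclusive, EndDate exclusive, and either bound optional) amount to the following filter. This Python sketch is illustrative; S3 Glacier applies the filter server-side:

```python
from datetime import datetime

ISO = "%Y-%m-%dT%H:%M:%SZ"

def in_inventory_range(creation_date, start_date=None, end_date=None):
    """Mirror Initiate Job date filtering: archives created on or after
    StartDate and strictly before EndDate are returned; omitting a bound
    leaves that side of the range open."""
    created = datetime.strptime(creation_date, ISO)
    if start_date and created < datetime.strptime(start_date, ISO):
        return False
    if end_date and created >= datetime.strptime(end_date, ISO):
        return False
    return True

print(in_inventory_range("2013-12-04T21:25:42Z",
                         start_date="2013-12-04T21:25:42Z",
                         end_date="2013-12-05T21:25:42Z"))  # True: StartDate is inclusive
```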
You can limit the number of inventory items returned by setting the Limit parameter in the Initiate Job
request. The inventory job output contains inventory items up to the specified Limit. If there are more
inventory items available, the result is paginated. After a job is complete, you can use the Describe Job
(GET JobID) (p. 250) operation to get a marker that you use in a subsequent Initiate Job request. The
marker indicates the starting point to retrieve the next set of inventory items. You can page through your
entire inventory by repeatedly making Initiate Job requests with the marker from the previous Describe
Job output. You do so until you get a marker from Describe Job that returns null, indicating that there
are no more inventory items available.
You can use the Limit parameter together with the date range parameters.
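The marker-driven pagination described above can be sketched as a loop. Here run_inventory_job is a caller-supplied stand-in (not a real API call) that initiates an inventory-retrieval job with the given parameters, waits for it to complete, and returns the page of items together with the marker from the Describe Job output:

```python
def page_through_inventory(run_inventory_job, limit="10000"):
    """Collect the full inventory by repeatedly initiating
    inventory-retrieval jobs, passing the Marker obtained from the
    previous job's Describe Job output, until the marker is null."""
    items, marker = [], None
    while True:
        params = {"Limit": limit}
        if marker is not None:
            params["Marker"] = marker
        page, marker = run_inventory_job(params)
        items.extend(page)
        if marker is None:   # null marker: no more inventory items
            return items
```

Each iteration corresponds to one Initiate Job request plus one Describe Job request, exactly as described in the paragraph above.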
• Expedited – Expedited allows you to quickly access your data when occasional urgent requests for
a subset of archives are required. For all but the largest archives (250 MB+), data accessed using the
Expedited tier is typically made available within 1–5 minutes.
• Standard – Standard allows you to access any of your archives within several hours. Data accessed
using the Standard tier is typically made available within 3–5 hours. This is the default tier for job
requests that don't specify the tier option.
• Bulk – Bulk is the lowest-cost tier for S3 Glacier, enabling you to retrieve large amounts, even
petabytes, of data inexpensively in a day. Data accessed using the Bulk tier is typically made available
within 5–12 hours.
For more information about expedited and bulk retrievals, see Retrieving S3 Glacier Archives (p. 83).
Requests
To initiate a job, you use the HTTP POST method and scope the request to the vault's jobs subresource.
You specify details of the job request in the JSON document of your request. The job type is specified
with the Type field. Optionally, you can specify an SNSTopic field to indicate an Amazon SNS topic to
which S3 Glacier can post notification after it completes the job.
Note
To post a notification to Amazon SNS, you must create the topic yourself if it doesn't already
exist. S3 Glacier doesn't create the topic for you. The topic must have permissions to receive
publications from an S3 Glacier vault. S3 Glacier doesn't verify whether the vault has permission to
publish to the topic. If the permissions are not configured appropriately, you might not receive a
notification even after the job completes.
Syntax
The following is the request syntax for initiating a job.
POST /AccountId/vaults/VaultName/jobs HTTP/1.1
Host: glacier.Region.amazonaws.com
x-amz-glacier-version: 2012-06-01
{
"jobParameters": {
"ArchiveId": "string",
"Description": "string",
"Format": "string",
"InventoryRetrievalParameters": {
"EndDate": "string",
"Limit": "string",
"Marker": "string",
"StartDate": "string"
},
"OutputLocation": {
"S3": {
"AccessControlList": [
{
"Grantee": {
"DisplayName": "string",
"EmailAddress": "string",
"ID": "string",
"Type": "string",
"URI": "string"
},
"Permission": "string"
}
],
"BucketName": "string",
"CannedACL": "string",
"Encryption": {
"EncryptionType": "string",
"KMSContext": "string",
"KMSKeyId": "string"
},
"Prefix": "string",
"StorageClass": "string",
"Tagging": {
"string" : "string"
},
"UserMetadata": {
"string" : "string"
}
}
},
"RetrievalByteRange": "string",
"SelectParameters": {
"Expression": "string",
"ExpressionType": "string",
"InputSerialization": {
"csv": {
"Comments": "string",
"FieldDelimiter": "string",
"FileHeaderInfo": "string",
"QuoteCharacter": "string",
"QuoteEscapeCharacter": "string",
"RecordDelimiter": "string"
}
},
"OutputSerialization": {
"csv": {
"FieldDelimiter": "string",
"QuoteCharacter": "string",
"QuoteEscapeCharacter": "string",
"QuoteFields": "string",
"RecordDelimiter": "string"
}
}
},
"SNSTopic": "string",
"Tier": "string",
"Type": "string"
}
}
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Body
The request accepts the following data in JSON format in the body of the request.
jobParameters
Provides options for defining a job.
Type: jobParameters (p. 287) object
Required: Yes
Responses
S3 Glacier creates the job. In the response, it returns the URI of the job.
Syntax
HTTP/1.1 202 Accepted
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Location: Location
x-amz-job-id: JobId
x-amz-job-output-path: OutputPath
Response Headers
Location
The relative URI path of the job. You can use this URI path to find the job status.
For more information, see Describe Job (GET JobID) (p. 250).
Type: String
Default: None
x-amz-job-id
The ID of the job. This value is also included as part of the Location header.
Type: String
Default: None
x-amz-job-output-path
This header is returned only for select job types. The path to the location where
the select results are stored.
Type: String
Default: None
Response Body
This operation does not return a response body.
Errors
This operation includes the following error or errors, in addition to the possible errors common to all
Amazon S3 Glacier operations. For information about Amazon S3 Glacier errors and a list of error codes,
see Error Responses (p. 176).
Examples
Example Request: Initiate an archive retrieval job
{
"Type": "archive-retrieval",
"ArchiveId": "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId",
"Description": "My archive description",
"SNSTopic": "arn:aws:sns:us-west-2:111111111111:Glacier-ArchiveRetrieval-topic-Example",
"Tier" : "Bulk"
}
The following is an example of the body of a request that specifies a range of the archive to retrieve
using the RetrievalByteRange field.
{
"Type": "archive-retrieval",
"ArchiveId": "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId",
"Description": "My archive description",
"RetrievalByteRange": "2097152-4194303",
"SNSTopic": "arn:aws:sns:us-west-2:111111111111:Glacier-ArchiveRetrieval-topic-Example",
"Tier" : "Bulk"
}
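The example range 2097152-4194303 above is megabyte aligned: the start value is divisible by 1 MB (1024*1024 bytes) and the end value plus 1 is also divisible by 1 MB. A helper to build such ranges might look like the following sketch (the function name is illustrative):

```python
MB = 1024 * 1024

def retrieval_byte_range(start_mb, end_mb):
    """Build a megabyte-aligned RetrievalByteRange string: the start is
    divisible by 1 MB and (end + 1) is divisible by 1 MB, matching the
    2097152-4194303 range in the example request body."""
    start = start_mb * MB
    end = end_mb * MB - 1
    return "{}-{}".format(start, end)

print(retrieval_byte_range(2, 4))  # 2097152-4194303
```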
Example Response
Example Request: Initiate an inventory retrieval job
The following is an example of the body of a request that initiates an inventory retrieval job.
{
"Type": "inventory-retrieval",
"Description": "My inventory job",
"Format": "CSV",
"SNSTopic": "arn:aws:sns:us-west-2:111111111111:Glacier-InventoryRetrieval-topic-Example"
}
Example Response
Example Requests: Initiate an inventory retrieval job by using date filtering with
a set limit, and a subsequent request to retrieve the next page of inventory
items.
The following request initiates a vault inventory retrieval job by using date filtering and setting a limit.
{
"ArchiveId": null,
"Description": null,
"Format": "CSV",
"RetrievalByteRange": null,
"SNSTopic": null,
"Type": "inventory-retrieval",
"InventoryRetrievalParameters": {
"StartDate": "2013-12-04T21:25:42Z",
"EndDate": "2013-12-05T21:25:42Z",
"Limit" : "10000"
}
}
The following request is an example of a subsequent request to retrieve the next page of inventory items
using a marker obtained from Describe Job (GET JobID) (p. 250).
{
"ArchiveId": null,
"Description": null,
"Format": "CSV",
"RetrievalByteRange": null,
"SNSTopic": null,
"Type": "inventory-retrieval",
"InventoryRetrievalParameters": {
"StartDate": "2013-12-04T21:25:42Z",
"EndDate": "2013-12-05T21:25:42Z",
"Limit": "10000",
"Marker":
"vyS0t2jHQe5qbcDggIeD50chS1SXwYMrkVKo0KHiTUjEYxBGCqRLKaiySzdN7QXGVVV5XZpNVG67pCZ_uykQXFMLaxOSu2hO_-5C0
}
}
"Type": "select",
"ArchiveId": "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-
TjhqG6eGoOY9Z8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEArchiveId",
"Description": null,
"SNSTopic": null,
"Tier": "Bulk",
"SelectParameters": {
"Expression": "select * from archive",
"ExpressionType": "SQL",
"InputSerialization": {
"csv": {
"Comments": null,
"FileHeaderInfo": "None",
"QuoteEscapeCharacter": "\"",
"RecordDelimiter": "\n",
"FieldDelimiter": ",",
"QuoteCharacter": "\""
}
},
"OutputSerialization": {
"csv": {
"QuoteFields": "AsNeeded",
"QuoteEscapeCharacter": null,
"RecordDelimiter": "\n",
"FieldDelimiter": ",",
"QuoteCharacter": "\""
}
}
},
"OutputLocation": {
"S3": {
"BucketName": "bucket-name",
"Prefix": "test",
"Encryption": {
"EncryptionType": "AES256"
},
"CannedACL": "private",
"StorageClass": "STANDARD"
}
}
}
Example Response
HTTP/1.1 202 Accepted
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Location: /111122223333/vaults/examplevault/jobs/HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID
x-amz-job-id: HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID
x-amz-job-output-path: test/HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID/
Related Sections
• Describe Job (GET JobID) (p. 250)
• Get Job Output (GET output) (p. 257)
• SQL Reference for Amazon S3 Select and S3 Glacier Select (p. 304)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
The List Jobs operation supports pagination. You should always check the response Marker field. If
there are no more jobs to list, the Marker field is set to null. If there are more jobs to list, the Marker
field is set to a non-null value, which you can use to continue the pagination of the list. To return a list
of jobs that begins at a specific job, set the marker request parameter to the Marker value for that job
that you obtained from a previous List Jobs request.
You can set a maximum limit for the number of jobs returned in the response by specifying the limit
parameter in the request. The default limit is 50. The number of jobs returned might be fewer than the
limit, but the number of returned jobs never exceeds the limit.
Additionally, you can filter the returned jobs list by specifying the optional statuscode parameter or
completed parameter, or both. Using the statuscode parameter, you can specify to return only jobs
that match the InProgress, Succeeded, or Failed status. Using the completed parameter,
you can specify to return only jobs that were completed (true) or jobs that were not completed (false).
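Putting the query parameters above together, a request URI can be built as in the following Python sketch; the helper name is illustrative, and '-' stands for the account that signs the request:

```python
from urllib.parse import urlencode

def list_jobs_uri(vault, limit=None, marker=None, statuscode=None, completed=None):
    """Build a List Jobs request URI for a vault, attaching only the
    optional filter parameters that were supplied."""
    params = {}
    if limit is not None:
        params["limit"] = str(limit)
    if marker is not None:
        params["marker"] = marker
    if statuscode is not None:
        params["statuscode"] = statuscode  # InProgress, Succeeded, or Failed
    if completed is not None:
        params["completed"] = "true" if completed else "false"
    uri = "/-/vaults/{}/jobs".format(vault)
    if params:
        uri += "?" + urlencode(params)
    return uri

print(list_jobs_uri("examplevault", limit=2, statuscode="Succeeded"))
# /-/vaults/examplevault/jobs?limit=2&statuscode=Succeeded
```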
Requests
Syntax
To return a list of jobs of all types, send a GET request to the URI of the vault's jobs subresource.
GET /AccountId/vaults/VaultName/jobs?completed=Boolean&limit=Number&marker=String&statuscode=String HTTP/1.1
Host: glacier.Region.amazonaws.com
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID of the account that owns the vault. You can either
specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier
uses the AWS account ID associated with the credentials used to sign the request. If you use an
account ID, do not include any hyphens ('-') in the ID.
Request Parameters
completed
The state of the jobs to return. You can specify true or false.
Type: Boolean
Constraints: None
Required: no
limit
The maximum number of jobs to be returned. The default limit is 50.
Type: String
Constraints: None
Required: no
marker
An opaque string used for pagination that specifies the job at which the listing of jobs should
begin. You get the marker value from a previous List Jobs response. You only need to include the
marker if you are continuing the pagination of the results started in a previous List Jobs request.
Type: String
Constraints: None
Required: no
statuscode
The type of job status to return. You can specify InProgress, Succeeded, or Failed.
Type: String
Constraints: None
Required: no
Request Headers
This operation uses only request headers that are common to most operations. For information about
common request headers, see Common Request Headers.
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Location: Location
Content-Type: application/json
Content-Length: Length
{
"JobList": [
{
"Action": "string",
"ArchiveId": "string",
"ArchiveSHA256TreeHash": "string",
"ArchiveSizeInBytes": number,
"Completed": boolean,
"CompletionDate": "string",
"CreationDate": "string",
"InventoryRetrievalParameters": {
"EndDate": "string",
"Format": "string",
"Limit": "string",
"Marker": "string",
"StartDate": "string"
},
"InventorySizeInBytes": number,
"JobDescription": "string",
"JobId": "string",
"JobOutputPath": "string",
"OutputLocation": {
"S3": {
"AccessControlList": [
{
"Grantee": {
"DisplayName": "string",
"EmailAddress": "string",
"ID": "string",
"Type": "string",
"URI": "string"
},
"Permission": "string"
}
],
"BucketName": "string",
"CannedACL": "string",
"Encryption": {
"EncryptionType": "string",
"KMSContext": "string",
"KMSKeyId": "string"
},
"Prefix": "string",
"StorageClass": "string",
"Tagging": {
"string": "string"
},
"UserMetadata": {
"string": "string"
}
}
},
"RetrievalByteRange": "string",
"SelectParameters": {
"Expression": "string",
"ExpressionType": "string",
"InputSerialization": {
"csv": {
"Comments": "string",
"FieldDelimiter": "string",
"FileHeaderInfo": "string",
"QuoteCharacter": "string",
"QuoteEscapeCharacter": "string",
"RecordDelimiter": "string"
}
},
"OutputSerialization": {
"csv": {
"FieldDelimiter": "string",
"QuoteCharacter": "string",
"QuoteEscapeCharacter": "string",
"QuoteFields": "string",
"RecordDelimiter": "string"
}
}
},
"SHA256TreeHash": "string",
"SNSTopic": "string",
"StatusCode": "string",
"StatusMessage": "string",
"Tier": "string",
"VaultARN": "string"
}
],
"Marker": "string"
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
JobList
A list of job objects. Each job object contains metadata describing the job.
Type: Array
Marker
An opaque string that represents where to continue pagination of the results. You use the marker
value in a new List Jobs request to obtain more jobs in the list. If there are no more jobs to list,
this value is null.
Type: String
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following examples demonstrate how to return information about vault jobs. The first example
returns a list of two jobs, and the second example returns a subset of jobs.
Example Request
GET /-/vaults/examplevault/jobs HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
The following response includes an archive retrieval job and an inventory retrieval job that contains a
marker used to continue pagination of the vault inventory retrieval. The response also shows the Marker
field set to null, which indicates there are no more jobs to list.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 1444
{
"JobList": [
{
"Action": "ArchiveRetrieval",
"ArchiveId": "BDfaUQul0dVzYwAMr8YSa_6_8abbhZq-
i1oT69g8ByClfJyBgAGBkWl2QbF5os851P7Y7KdZDOHWJIn4rh1ZHaOYD3MgFhK_g0oDPesW34uHQoVGwoIqubf6BgUEfQm_wrU4Jlm
"ArchiveSizeInBytes": 1048576,
"ArchiveSHA256TreeHash":
"25499381569ab2f85e1fd0eb93c5406a178ab77c5933056eb5d6e7d4adda609b",
"Completed": true,
"CompletionDate": "2012-05-01T00:00:09.304Z",
"CreationDate": "2012-05-01T00:00:06.663Z",
"InventorySizeInBytes": null,
"JobDescription": null,
"JobId": "hDe9t9DTHXqFw8sBGpLQQOmIM0-
JrGtu1O_YFKLnzQ64548qJc667BRWTwBLZC76Ygy1jHYruqXkdcAhRsh0hYv4eVRU",
"RetrievalByteRange": "0-1048575",
"SHA256TreeHash": "25499381569ab2f85e1fd0eb93c5406a178ab77c5933056eb5d6e7d4adda609b",
"SNSTopic": null,
"StatusCode": "Succeeded",
"StatusMessage": "Succeeded",
"Tier": "Bulk",
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
},
{
"Action": "InventoryRetrieval",
"ArchiveId": null,
"ArchiveSizeInBytes": null,
"ArchiveSHA256TreeHash": null,
"Completed": true,
"CompletionDate": "2013-05-11T00:25:18.831Z",
"CreationDate": "2013-05-11T00:25:14.981Z",
"InventorySizeInBytes": 1988,
"JobDescription": null,
"JobId":
"2cvVOnBL36btzyP3pobwIceiaJebM1bx9vZOOUtmNAr0KaVZ4WkWgVjiPldJ73VU7imlm0pnZriBVBebnqaAcirZq_C5",
"RetrievalByteRange": null,
"SHA256TreeHash": null,
"SNSTopic": null,
"StatusCode": "Succeeded",
"StatusMessage": "Succeeded",
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
"InventoryRetrievalParameters": {
"StartDate": "2013-11-12T13:43:12Z",
"EndDate": "2013-11-20T08:12:45Z",
"Limit": "120000",
"Format": "JSON",
"Marker":
"vyS0t2jHQe5qbcDggIeD50chS1SXwYMrkVKo0KHiTUjEYxBGCqRLKaiySzdN7QXGVVV5XZpNVG67pCZ_uykQXFMLaxOSu2hO_-5C0
}
],
"Marker": null
}
Example Request
GET /-/vaults/examplevault/jobs?marker=HkF9p6o7yjhFx-
K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP54ZShjoQzQVVh7vEXAMPLEjobID&limit=2
HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
The following response shows two jobs returned and the Marker field set to a non-null value that can be
used to continue pagination of the job list.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 1744
{
"JobList": [
{
"Action": "ArchiveRetrieval",
"ArchiveId": "58-3KpZfcMPUznvMZNPaKyJx9wODCsWTnqcjtx2CjKZ6b-
XgxEuA8yvZOYTPQfd7gWR4GRm2XR08gcnWbLV4VPV_kDWtZJKi0TFhKKVPzwrZnA4-
FXuIBfViYUIVveeiBE51FO4bvg",
"ArchiveSizeInBytes": 8388608,
"ArchiveSHA256TreeHash":
"106086b256ddf0fedf3d9e72f461d5983a2566247ebe7e1949246bc61359b4f4",
"Completed": true,
"CompletionDate": "2012-05-01T00:25:20.043Z",
"CreationDate": "2012-05-01T00:25:16.344Z",
"InventorySizeInBytes": null,
"JobDescription": "aaabbbccc",
"JobId": "s4MvaNHIh6mOa1f8iY4ioG2921SDPihXxh3Kv0FBX-
JbNPctpRvE4c2_BifuhdGLqEhGBNGeB6Ub-JMunR9JoVa8y1hQ",
"RetrievalByteRange": "0-8388607",
"SHA256TreeHash": "106086b256ddf0fedf3d9e72f461d5983a2566247ebe7e1949246bc61359b4f4",
"SNSTopic": null,
"StatusCode": "Succeeded",
"StatusMessage": "Succeeded",
"Tier": "Bulk",
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
},
{
"Action": "ArchiveRetrieval",
"ArchiveId": "2NVGpf83U6qB9M2u-
Ihh61yoFLRDEoh7YLZWKBn80A2i1xG8uieBwGjAr4RkzOHA0E07ZjtI267R03Z-6Hxd8pyGQkBdciCSH1-
Lw63Kx9qKpZbPCdU0uTW_WAdwF6lR6w8iSyKdvw",
"ArchiveSizeInBytes": 1048576,
"ArchiveSHA256TreeHash":
"3d2ae052b2978727e0c51c0a5e32961c6a56650d1f2e4ceccab6472a5ed4a0",
"Completed": true,
"CompletionDate": "2012-05-01T16:59:48.444Z",
"CreationDate": "2012-05-01T16:59:42.977Z",
"InventorySizeInBytes": null,
"JobDescription": "aaabbbccc",
"JobId":
"CQ_tf6fOR4jrJCL61Mfk6VM03oY8lmnWK93KK4gLig1UPAbZiN3UV4G_5nq4AfmJHQ_dOMLOX5k8ItFv0wCPN0oaz5dG",
"RetrievalByteRange": "0-1048575",
"SHA256TreeHash": "3d2ae052b2978727e0c51c0a5e32961c6a56650d1f2e4ceccab6472a5ed4a0",
"SNSTopic": null,
"StatusCode": "Succeeded",
"StatusMessage": "Succeeded",
"Tier": "Standard",
"VaultARN": "arn:aws:glacier:us-west-2:012345678901:vaults/examplevault"
}
],
"Marker":
"CQ_tf6fOR4jrJCL61Mfk6VM03oY8lmnWK93KK4gLig1UPAbZiN3UV4G_5nq4AfmJHQ_dOMLOX5k8ItFv0wCPN0oaz5dG"
}
Related Sections
• Describe Job (GET JobID) (p. 250)
• Identity and Access Management in Amazon S3 Glacier (p. 125)
Topics
• CSVInput (p. 279)
• CSVOutput (p. 280)
• Encryption (p. 281)
• GlacierJobDescription (p. 282)
• Grant (p. 285)
• Grantee (p. 285)
• InputSerialization (p. 286)
• InventoryRetrievalJobInput (p. 286)
• jobParameters (p. 287)
• OutputLocation (p. 289)
• OutputSerialization (p. 290)
• S3Location (p. 290)
• SelectParameters (p. 291)
CSVInput
Contains information about the comma-separated values (CSV) file.
Contents
Comments
A single character used to indicate that a row should be ignored when the character is present at the
start of that row.
Type: String
Required: no
FieldDelimiter
A single character used to separate individual fields from each other within a record. The character
must be a \n, \r, or an ASCII character in the range 32–126. The default is a comma (,).
Type: String
Default: ,
Required: no
FileHeaderInfo
A value that describes what to do with the first line of the input.
Type: String
Required: no
QuoteCharacter
A single character used as an escape character where the field delimiter is part of the value.
Type: String
Required: no
QuoteEscapeCharacter
A single character used for escaping the quotation-mark character inside an already escaped value.
Type: String
Required: no
RecordDelimiter
A single character used to separate individual records from each other.
Type: String
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
CSVOutput
Contains information about the comma-separated values (CSV) format that the job results are stored in.
Contents
FieldDelimiter
A single character used to separate individual fields from each other within a record.
Type: String
Required: no
QuoteCharacter
A single character used as an escape character where the field delimiter is part of the value.
Type: String
Required: no
QuoteEscapeCharacter
A single character used for escaping the quotation-mark character inside an already escaped value.
Type: String
Required: no
QuoteFields
A value that indicates whether all output fields should be contained within quotation marks.
Type: String
Required: no
RecordDelimiter
A single character used to separate individual records from each other.
Type: String
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
Encryption
Contains information about the encryption used to store the job results in Amazon S3.
Contents
EncryptionType
The server-side encryption algorithm used when storing job results in Amazon S3. The default is no
encryption.
Type: String
Required: no
KMSContext
Optional. If the encryption type is aws:kms, you can use this value to specify the encryption
context for the job results.
Type: String
Required: no
KMSKeyId
The AWS Key Management Service (AWS KMS) key ID to use for object encryption.
Type: String
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
GlacierJobDescription
Contains the description of an Amazon S3 Glacier (S3 Glacier) job.
Contents
Action
The job type. It is either ArchiveRetrieval, InventoryRetrieval, or Select.
Type: String
ArchiveId
The archive ID requested for a select or archive retrieval job. Otherwise, this field is null.
Type: String
ArchiveSHA256TreeHash
The SHA256 tree hash of the entire archive for an archive retrieval. For inventory retrieval jobs, this
field is null.
Type: String
ArchiveSizeInBytes
For an ArchiveRetrieval job, this is the size in bytes of the archive being requested for
download. For the InventoryRetrieval job, the value is null.
Type: Number
Completed
The job status. When the job is completed, the value is true; otherwise, it is false.
Type: Boolean
CompletionDate
The Universal Coordinated Time (UTC) time that the job request completed. While the job is in
progress, the value is null.
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
CreationDate
The UTC time that the job was created.
Type: A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
InventoryRetrievalParameters
Input parameters used for a range inventory retrieval.
Type: InventoryRetrievalJobInput (p. 286) object
InventorySizeInBytes
For an InventoryRetrieval job, this is the size in bytes of the inventory requested for download.
For an ArchiveRetrieval or Select job, the value is null.
Type: Number
JobDescription
The job description that you provided when you initiated the job.
Type: String
JobId
An opaque string that identifies the job.
Type: String
JobOutputPath
Contains the job output location for a select job.
Type: String
OutputLocation
An object that contains information about the location where the select job results and errors are
stored.
Type: OutputLocation (p. 289) object
RetrievalByteRange
The retrieved byte range for archive retrieval jobs in the form "StartByteValue-EndByteValue."
If no range was specified in the archive retrieval, then the whole archive is retrieved and
StartByteValue equals 0 and EndByteValue equals the size of the archive minus 1. For inventory
retrieval jobs, this field is null.
Type: String
SelectParameters
An object that contains information about the parameters used for a select.
Type: SelectParameters (p. 291) object
SHA256TreeHash
The SHA256 tree hash value for the requested range of an archive. If the Initiate Job (POST
jobs) (p. 263) request for an archive specified a tree-hash aligned range, then this field returns a
value. For more information about tree-hash alignment for archive range retrievals, see Receiving
Checksums When Downloading Data (p. 175).
For the specific case in which the whole archive is retrieved, this value is the same as the
ArchiveSHA256TreeHash value.
Type: String
SNSTopic
The Amazon Resource Name (ARN) that represents an Amazon SNS topic where notification of job
completion or failure is sent, if notification was configured in the job initiation (Initiate Job (POST
jobs) (p. 263)).
Type: String
StatusCode
The status code can be InProgress, Succeeded, or Failed, and indicates the status of the job.
Type: String
StatusMessage
A friendly message that describes the job status.
Type: String
Tier
The data access tier to use for the select or archive retrieval.
Type: String
VaultARN
The Amazon Resource Name (ARN) of the vault of which the job is a subresource.
Type: String
More Info
• Initiate Job (POST jobs) (p. 263)
Grant
Contains information about a grant.
Contents
Grantee
The grantee.
Type: Grantee (p. 285) object
Required: no
Permission
Specifies the permission given to the grantee.
Type: String
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
Grantee
Contains information about a grantee.
Contents
DisplayName
Type: String
Required: no
EmailAddress
Type: String
Required: no
ID
Type: String
Required: no
Type
Type: String
Required: no
URI
Type: String
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
InputSerialization
Describes how the archive is serialized.
Contents
csv
The serialization of a CSV-encoded archive.
Type: CSVInput (p. 279) object
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
InventoryRetrievalJobInput
Provides options for specifying a range inventory retrieval job.
Contents
EndDate
The end of the date range, in UTC time, for a vault inventory retrieval that includes archives created
before this date.
Valid Values: A string representation in the ISO 8601 date format (YYYY-MM-DDThh:mm:ssTZD) in
seconds, for example 2013-03-20T17:03:43Z.
Type: String. A string representation in the ISO 8601 date format (YYYY-MM-DDThh:mm:ssTZD) in
seconds, for example 2013-03-20T17:03:43Z.
Required: no
Format
The output format for the vault inventory list, which is set by the Initiate Job (POST jobs) (p. 263)
request when initiating a job to retrieve a vault inventory.
Required: no
Type: String
Limit
The maximum number of inventory items that can be returned for each vault inventory retrieval
request.
Type: String
Required: no
Marker
An opaque string that represents where to continue pagination of the vault inventory retrieval
results. You use this marker in a new Initiate Job request to obtain additional inventory items. If
there are no more inventory items, this value is null.
Type: String
Required: no
StartDate
The start of the date range, in UTC time, for a vault inventory retrieval that includes archives created
on or after this date.
Valid Values: A string representation in the ISO 8601 date format (YYYY-MM-DDThh:mm:ssTZD) in
seconds, for example 2013-03-20T17:03:43Z.
Type: String
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
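For illustration, the fields above might be combined in an Initiate Job request body as follows (the date values and limit are placeholders, not values from this guide):

```json
{
  "Type": "inventory-retrieval",
  "InventoryRetrievalParameters": {
    "StartDate": "2013-01-01T00:00:00Z",
    "EndDate": "2013-03-20T17:03:43Z",
    "Limit": "10000"
  }
}
```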
jobParameters
Provides options for defining a job.
Contents
ArchiveId
The ID of the archive that you want. This field is required if the Type field is set to select or
archive-retrieval. An error occurs if you specify this field for an inventory retrieval job request.
Valid Values: Must be a valid archive ID that you obtained from a previous request to Amazon S3
Glacier (S3 Glacier).
Type: String
Description
The optional description for the job.
Valid Values: The description must be less than or equal to 1,024 bytes. The allowable characters are 7-bit ASCII without control codes, specifically ASCII values 32-126 decimal or 0x20-0x7E hexadecimal.
Type: String
Required: no
Format
(Optional) The output format, when initiating a job to retrieve a vault inventory. If you are initiating
an inventory job and don't specify a Format field, JSON is the default format.
Type: String
Required: no
InventoryRetrievalParameters
Input parameters used for a range inventory retrieval.
Required: no
OutputLocation
An object that contains information about the location where the select job results are stored.
An error occurs if you specify this field for an inventory-retrieval job request.
Type: OutputLocation object
Required: no
SelectParameters
An object that contains information about the parameters used for a select.
Required: no
SNSTopic
The Amazon Resource Name (ARN) of the Amazon SNS topic where S3 Glacier sends a notification
when the job is completed and output is ready for you to download. The specified topic publishes
the notification to its subscribers.
The SNS topic must exist. If it doesn't, S3 Glacier doesn't create it for you. Additionally, the SNS topic
must have a policy that allows the account that created the job to publish messages to the topic. For
information about SNS topic names, see CreateTopic in the Amazon Simple Notification Service API
Reference.
Type: String
Required: no
Tier
The tier to use for a select or an archive retrieval job. Standard is the default value used.
Type: String
Required: no
Type
The job type. You can initiate a job to perform a select query on an archive, retrieve an archive, or
get an inventory of a vault.
Type: String
Required: yes
More Info
• Initiate Job (POST jobs) (p. 263)
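As an illustration of these fields, a jobParameters body for an archive retrieval might look like the following (the archive ID and topic ARN are placeholder values):

```json
{
  "Type": "archive-retrieval",
  "ArchiveId": "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-EXAMPLE-ArchiveId",
  "Description": "My archive description",
  "SNSTopic": "arn:aws:sns:us-west-2:111122223333:my-notification-topic",
  "Tier": "Standard"
}
```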
OutputLocation
Contains information about the location where the job results and errors are stored.
Contents
S3
An object that describes an Amazon S3 location to receive the results of the restore request.
Required: yes
More Info
• Initiate Job (POST jobs) (p. 263)
OutputSerialization
Describes how the output is serialized.
Contents
CSV
An object that describes the serialization of the comma-separated values (CSV)-encoded query
results.
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
S3Location
Contains information about the location in Amazon S3 where the job results are stored.
Contents
AccessControlList
A list of grants that control access to the staged results.
Required: no
BucketName
The name of the Amazon S3 bucket where the job results are stored. The bucket must be in the
same AWS Region as the vault that contains the input archive object.
Type: String
Required: yes
CannedACL
The canned access control list (ACL) to apply to the job results.
Type: String
Required: no
Encryption
An object that contains information about the encryption used to store the job results in Amazon S3.
Required: no
Prefix
The prefix that is prepended to the results for this request. The maximum length for the prefix is 512
bytes.
Type: String
Required: yes
StorageClass
The storage class used to store the job results.
Type: String
Required: no
Tagging
The tag-set that is applied to the job results.
Required: no
UserMetadata
A map of metadata to store with the job results in Amazon S3.
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
SelectParameters
Contains information about the parameters used for the select.
Contents
Expression
The expression that is used to select the object. The expression must not exceed the quota of
128,000 characters.
Type: String
Required: yes
ExpressionType
The type of the provided expression, for example SQL.
Type: String
Required: yes
InputSerialization
Describes the serialization format of the archive.
Required: no
OutputSerialization
Describes how the results of the select job are serialized.
Required: no
More Info
• Initiate Job (POST jobs) (p. 263)
Topics
• Get Data Retrieval Policy (GET policy) (p. 292)
• List Provisioned Capacity (GET provisioned-capacity) (p. 295)
• Purchase Provisioned Capacity (POST provisioned-capacity) (p. 297)
• Set Data Retrieval Policy (PUT policy) (p. 299)
Requests
To return the current data retrieval policy, send an HTTP GET request to the data retrieval policy URI as
shown in the following syntax example.
Syntax
GET /AccountId/policies/data-retrieval HTTP/1.1
Host: glacier.Region.amazonaws.com
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: length
{
"Policy":
{
"Rules":[
{
"BytesPerHour": Number,
"Strategy": String
}
]
}
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
BytesPerHour
The maximum number of bytes that can be retrieved in an hour. This field is present only if the value of the Strategy field is BytesPerHour.
Type: Number
Rules
The policy rule. Although this is a list type, currently there will be only one rule, which contains a
Strategy field and optionally a BytesPerHour field.
Type: Array
Strategy
The type of data retrieval policy. Valid values are BytesPerHour, FreeTier, or None.
Type: String
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example demonstrates how to get a data retrieval policy.
Example Request
In this example, a GET request is sent to the URI of a policy's location.
Example Response
A successful response shows the data retrieval policy in the body of the response in JSON format.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:00:00 GMT
Content-Type: application/json
Content-Length: 85
{
"Policy":
{
"Rules":[
{
"BytesPerHour":10737418240,
"Strategy":"BytesPerHour"
}
]
}
}
Related Sections
• Set Data Retrieval Policy (PUT policy) (p. 299)
A provisioned capacity unit lasts for one month starting at the date and time of purchase, which is the
start date. The unit expires on the expiration date, which is exactly one month after the start date to the
nearest second.
If the start date is on the 31st day of a month, the expiration date is the last day of the next month. For
example, if the start date is August 31, the expiration date is September 30. If the start date is January
31, the expiration date is February 28. You can see this functionality in the Example Response (p. 297).
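The month-end clamping described above can be sketched in Python (an illustration of the stated rule only; expiration_date is not part of the S3 Glacier API):

```python
from datetime import datetime
import calendar

def expiration_date(start: datetime) -> datetime:
    """Return the time exactly one month after start, clamping the day
    to the last day of the following month when necessary."""
    year = start.year + (start.month // 12)
    month = start.month % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    return start.replace(year=year, month=month, day=min(start.day, last_day))

# A start date of August 31 expires on September 30.
print(expiration_date(datetime(2017, 8, 31, 14, 26, 33)))  # 2017-09-30 14:26:33
# A start date of January 31 expires on February 28.
print(expiration_date(datetime(2017, 1, 31, 14, 26, 33)))  # 2017-02-28 14:26:33
```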
Request Syntax
To list the provisioned retrieval capacity for an account, send an HTTP GET request to the provisioned-capacity URI as shown in the following syntax example.
GET /AccountId/provisioned-capacity HTTP/1.1
Host: glacier.Region.amazonaws.com
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
If the operation is successful, the service sends back an HTTP 200 OK response.
Response Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Content-Type: application/json
Content-Length: Length
{
  "ProvisionedCapacityList":
  [
    {
      "CapacityId" : "string",
      "StartDate" : "string",
      "ExpirationDate" : "string"
    },
    ...
  ]
}
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
The response body contains the following JSON fields.
CapacityId
The ID that identifies the provisioned capacity unit.
Type: String
StartDate
The date that the provisioned capacity unit was purchased, in Coordinated Universal Time (UTC).
Type: String. A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
ExpirationDate
The date that the provisioned capacity unit expires, in Coordinated Universal Time (UTC).
Type: String. A string representation in the ISO 8601 date format, for example
2013-03-20T17:03:43.221Z.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
The following example lists the provisioned capacity units for an account.
Example Request
In this example, a GET request is sent to retrieve a list of the provisioned capacity units for the specified
account.
x-amz-Date: 20170210T120000Z
x-amz-glacier-version: 2012-06-01
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141123/
us-west-2/glacier/aws4_request,SignedHeaders=host;x-amz-date;x-amz-glacier-
version,Signature=9257c16da6b25a715ce900a5b45b03da0447acf430195dcb540091b12966f2a2
Example Response
If the request was successful, Amazon S3 Glacier (S3 Glacier) returns an HTTP 200 OK response with a list of provisioned capacity units for the account, as shown in the following example.
The provisioned capacity unit listed first is an example of a unit with a start date of January 31, 2017 and
expiration date of February 28, 2017. As stated earlier, if the start date is on the 31st day of a month, the
expiration date is the last day of the next month.
HTTP/1.1 200 OK
x-amzn-RequestId: AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q
Date: Wed, 10 Feb 2017 12:02:00 GMT
Content-Type: application/json
Content-Length: length
{
  "ProvisionedCapacityList": [
    {
      "CapacityId": "zSaq7NzHFQDANTfQkDen4V7z",
      "StartDate": "2017-01-31T14:26:33.031Z",
      "ExpirationDate": "2017-02-28T14:26:33.000Z"
    },
    {
      "CapacityId": "yXaq7NzHFQNADTfQkDen4V7z",
      "StartDate": "2016-12-13T20:11:51.095Z",
      "ExpirationDate": "2017-01-13T20:11:51.000Z"
    },
    ...
  ]
}
Related Sections
• Purchase Provisioned Capacity (POST provisioned-capacity) (p. 297)
A provisioned capacity unit lasts for one month starting at the date and time of purchase, which is the
start date. The unit expires on the expiration date, which is exactly one month after the start date to the
nearest second.
If the start date is on the 31st day of a month, the expiration date is the last day of the next month. For
example, if the start date is August 31, the expiration date is September 30. If the start date is January
31, the expiration date is February 28.
Provisioned capacity helps ensure that your retrieval capacity for expedited retrievals is available when
you need it. Each unit of capacity ensures that at least three expedited retrievals can be performed
every five minutes and provides up to 150 MB/s of retrieval throughput. For more information about
provisioned capacity, see Archive Retrieval Options (p. 84).
Requests
To purchase a provisioned capacity unit for an AWS account, send an HTTP POST request to the provisioned-capacity URI as shown in the following syntax example.
Syntax
POST /AccountId/provisioned-capacity HTTP/1.1
Host: glacier.Region.amazonaws.com
x-amz-glacier-version: 2012-06-01
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
This operation does not have a request body.
Responses
If the operation request is successful, the service returns an HTTP 201 Created response.
Syntax
HTTP/1.1 201 Created
x-amzn-RequestId: x-amzn-RequestId
Date: Date
x-amz-capacity-id: CapacityId
Response Headers
A successful response includes the following response headers, in addition to the response headers that
are common to all operations. For more information about common response headers, see Common
Response Headers (p. 162).
x-amz-capacity-id
The ID that identifies the provisioned capacity unit.
Type: String
Response Body
This operation does not return a response body.
Errors
This operation includes the following error or errors, in addition to the possible errors common to all
Amazon S3 Glacier operations. For information about Amazon S3 Glacier errors and a list of error codes,
see Error Responses (p. 176).
Examples
The following example purchases provisioned capacity for an account.
Example Request
The following example sends an HTTP POST request to purchase a provisioned capacity unit.
Example Response
If the request was successful, Amazon S3 Glacier (S3 Glacier) returns an HTTP 201 Created response,
as shown in the following example.
Related Sections
• List Provisioned Capacity (GET provisioned-capacity) (p. 295)
The set policy operation does not affect retrieval jobs that were in progress before the policy was
enacted. For more information about data retrieval policies, see Amazon S3 Glacier Data Retrieval
Policies (p. 151).
Requests
Syntax
To set a data retrieval policy, send an HTTP PUT request to the data retrieval policy URI as shown in the following syntax example.
PUT /AccountId/policies/data-retrieval HTTP/1.1
Host: glacier.Region.amazonaws.com
x-amz-glacier-version: 2012-06-01
{
"Policy":
{
"Rules":[
{
"Strategy": String,
"BytesPerHour": Number
}
]
}
}
Note
The AccountId value is the AWS account ID. This value must match the AWS account ID
associated with the credentials used to sign the request. You can either specify an AWS account
ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID
associated with the credentials used to sign the request. If you specify your account ID, do not
include any hyphens ('-') in the ID.
Request Parameters
This operation does not use request parameters.
Request Headers
This operation uses only request headers that are common to all operations. For information about
common request headers, see Common Request Headers (p. 160).
Request Body
The request body contains the following JSON fields.
BytesPerHour
The maximum number of bytes that can be retrieved in an hour. This field is required only if the value of the Strategy field is BytesPerHour. Your PUT operation is rejected if you set this field and the Strategy field is not set to BytesPerHour.
Type: Number
Valid Values: Minimum integer value of 1. Maximum integer value of 2^63 - 1 inclusive.
Rules
The policy rule. Although this is a list type, currently there must be only one rule, which contains a
Strategy field and optionally a BytesPerHour field.
Type: Array
Required: Yes
Strategy
The type of data retrieval policy to set. Valid values are BytesPerHour, FreeTier, or None.
Type: String
Required: Yes
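As an illustration of the constraints above, a client-side check of the request body might look like the following sketch (validate_policy and its error messages are hypothetical, not part of the API):

```python
def validate_policy(policy: dict) -> None:
    """Validate a data retrieval policy body against the rules described above."""
    rules = policy["Policy"]["Rules"]
    if len(rules) != 1:
        raise ValueError("Rules must contain exactly one rule")
    rule = rules[0]
    strategy = rule["Strategy"]
    if strategy not in ("BytesPerHour", "FreeTier", "None"):
        raise ValueError("unknown Strategy: %s" % strategy)
    if strategy == "BytesPerHour":
        bph = rule["BytesPerHour"]  # required for this strategy
        if not (1 <= bph <= 2**63 - 1):
            raise ValueError("BytesPerHour out of range")
    elif "BytesPerHour" in rule:
        # The PUT is rejected if BytesPerHour is set for any other strategy.
        raise ValueError("BytesPerHour is only valid with the BytesPerHour strategy")

validate_policy({"Policy": {"Rules": [{"Strategy": "BytesPerHour",
                                       "BytesPerHour": 10737418240}]}})
validate_policy({"Policy": {"Rules": [{"Strategy": "FreeTier"}]}})
```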
Responses
Syntax
HTTP/1.1 204 No Content
x-amzn-RequestId: x-amzn-RequestId
Date: Date
Response Headers
This operation uses only response headers that are common to most responses. For information about
common response headers, see Common Response Headers (p. 162).
Response Body
This operation does not return a response body.
Errors
For information about Amazon S3 Glacier exceptions and error messages, see Error Responses (p. 176).
Examples
Example Request
The following example sends an HTTP PUT request with the Strategy field set to BytesPerHour.
{
"Policy":
{
"Rules":[
{
"Strategy":"BytesPerHour",
"BytesPerHour":10737418240
}
]
}
}
The following example sends an HTTP PUT request with the Strategy field set to FreeTier.
{
"Policy":
{
"Rules":[
{
"Strategy":"FreeTier"
}
]
}
}
The following example sends an HTTP PUT request with the Strategy field set to None.
{
"Policy":
{
"Rules":[
{
"Strategy":"None"
}
]
}
}
Example Response
If the request was successful, Amazon S3 Glacier (S3 Glacier) sets the policy and returns an HTTP 204 No Content response.
Related Sections
• Get Data Retrieval Policy (GET policy) (p. 292)
Topics
• SELECT Command (p. 304)
• Data Types (p. 311)
• Operators (p. 311)
• Reserved Keywords (p. 313)
• SQL Functions (p. 317)
SELECT Command
Amazon S3 Select and S3 Glacier Select support only the SELECT SQL command. The following ANSI
standard clauses are supported for SELECT:
• SELECT list
• FROM clause
• WHERE clause
• LIMIT clause (Amazon S3 Select only)
Note
Amazon S3 Select and S3 Glacier Select queries currently do not support subqueries or joins.
SELECT List
The SELECT list names the columns, functions, and expressions that you want the query to return. The
list represents the output of the query.
SELECT *
SELECT projection [ AS column_alias | column_alias ] [, ...]
The first form with * (asterisk) returns every row that passed the WHERE clause, as-is. The second form creates a row from the user-defined output scalar expressions (the projection), one for each column.
FROM Clause
Amazon S3 Select and S3 Glacier Select support the following forms of the FROM clause:
FROM table_name
FROM table_name alias
FROM table_name AS alias
Where table_name is S3Object (for Amazon S3 Select), or ARCHIVE or OBJECT (for S3 Glacier Select), and refers to the archive being queried. Users coming from traditional relational databases can think of this as a database schema that contains multiple views over a table.
Following standard SQL, the FROM clause creates rows that are filtered in the WHERE clause and projected
in the SELECT list.
For JSON objects that are stored in Amazon S3 Select, you can also use the following forms of the FROM
clause:
FROM S3Object[*].path
FROM S3Object[*].path alias
FROM S3Object[*].path AS alias
Using this form of the FROM clause, you can select from arrays or objects within a JSON object. You can
specify path using one of the following forms:
Note
• This form of the FROM clause works only with JSON objects.
• Wildcards always emit at least one record. If no record matches, then Amazon S3 Select emits
the value MISSING. During output serialization (after the query is complete), Amazon S3
Select replaces MISSING values with empty records.
• Aggregate functions (AVG, COUNT, MAX, MIN, and SUM) skip MISSING values.
• If you don't provide an alias when using a wildcard, you can refer to the row using the last
element in the path. For example, you could select all prices from a list of books using
the query SELECT price FROM S3Object[*].books[*].price. If the path ends in a
wildcard rather than a name, then you can use the value _1 to refer to the row. For example,
instead of SELECT price FROM S3Object[*].books[*].price, you could use the query
SELECT _1.price FROM S3Object[*].books[*].
• Amazon S3 Select always treats a JSON document as an array of root-level values. Thus, even
if the JSON object that you are querying has only one root element, the FROM clause must
begin with S3Object[*]. However, for compatibility reasons, Amazon S3 Select allows you
to omit the wildcard if you don't include a path. Thus, the complete clause FROM S3Object
is equivalent to FROM S3Object[*] as S3Object. If you include a path, you must also use
the wildcard. So FROM S3Object and FROM S3Object[*].path are both valid clauses, but
FROM S3Object.path is not.
Example
Examples:
Example #1
This example shows results using the following dataset and query:
{
"Rules": [
{"id": "id-1", "condition": "x < 20"},
{"condition": "y > x"},
{"id": "id-2", "condition": "z = DEBUG"}
]
},
{
"created": "June 27",
"modified": "July 6"
}
{"id":"id-1"},
{},
{"id":"id-2"},
{}
If you don't want Amazon S3 Select to return empty records when it doesn't find a match, you can test
for the value MISSING. The following query returns the same results as the previous query, but with the
empty values omitted:
{"id":"id-1"},
{"id":"id-2"}
Example #2
This example shows results using the following dataset and queries:
{
"created": "936864000",
"dir_name": "important_docs",
"files": [
{
"name": "."
},
{
"name": ".."
},
{
"name": ".aws"
},
{
"name": "downloads"
}
],
"owner": "AWS S3"
},
{
"created": "936864000",
"dir_name": "other_docs",
"files": [
{
"name": "."
},
{
"name": ".."
},
{
"name": "my stuff"
},
{
"name": "backup"
}
],
"owner": "User"
}
{
"dir_name": "important_docs",
"files": [
{
"name": "."
},
{
"name": ".."
},
{
"name": ".aws"
},
{
"name": "downloads"
}
]
},
{
"dir_name": "other_docs",
"files": [
{
"name": "."
},
{
"name": ".."
},
{
"name": "my stuff"
},
{
"name": "backup"
}
]
}
{
"dir_name": "important_docs",
"owner": "AWS S3"
},
{
"dir_name": "other_docs",
"owner": "User"
}
WHERE Clause
The WHERE clause follows this syntax:
WHERE condition
The WHERE clause filters rows based on the condition. A condition is an expression that has a Boolean
result. Only rows for which the condition evaluates to TRUE are returned in the result.
LIMIT Clause (Amazon S3 Select only)
LIMIT number
The LIMIT clause limits the number of records that you want the query to return, based on number.
Note
S3 Glacier Select does not support the LIMIT clause.
Attribute Access
The SELECT and WHERE clauses can refer to record data using one of the methods in the following
sections, depending on whether the file that is being queried is in CSV or JSON format.
CSV
• Column Numbers – You can refer to the Nth column of a row with the column name _N, where N is
the column position. The position count starts at 1. For example, the first column is named _1 and the
second column is named _2.
You can refer to a column as _N or alias._N. For example, _2 and myAlias._2 are both valid ways
to refer to a column in the SELECT list and WHERE clause.
• Column Headers – For objects in CSV format that have a header row, the headers are available to the
SELECT list and WHERE clause. In particular, as in traditional SQL, within SELECT and WHERE clause
expressions, you can refer to the columns by alias.column_name or column_name.
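The positional _N naming can be emulated over a parsed CSV row (a sketch; column_by_position is a hypothetical helper, not an S3 Select API):

```python
import csv, io

def column_by_position(row, n):
    """Return the value that a query would refer to as _N (1-based)."""
    return row[n - 1]

reader = csv.reader(io.StringIO("Jane,Doe,engineering\n"))
row = next(reader)
print(column_by_position(row, 1))  # Jane  (what a query calls _1)
print(column_by_position(row, 2))  # Doe   (_2, or myAlias._2)
```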
[
{"project_name":"project1", "completed":false},
{"project_name":"project2", "completed":true}
]
}
Example #1
{"name":"Susan Smith"}
Example #2
{"project_name":"project1"}
The following examples are either 1) Amazon S3 or S3 Glacier objects in CSV format with the specified
column header(s), and with FileHeaderInfo set to "Use" for the query request; or 2) Amazon S3
objects in JSON format with the specified attributes.
• The following expression successfully returns values from the object (no quotation marks: case
insensitive):
• The following expression results in a 400 error MissingHeaderName (quotation marks: case sensitive):
Example #2: The Amazon S3 object being queried has one header/attribute with "NAME" and another
header/attribute with "name".
• The following expression results in a 400 error AmbiguousFieldName (no quotation marks: case
insensitive, but there are two matches):
• The following expression successfully returns values from the object (quotation marks: case sensitive,
so it resolves the ambiguity).
For the full list of reserved keywords see Reserved Keywords (p. 313).
The following example is either 1) an Amazon S3 or S3 Glacier object in CSV format with the specified
column headers, with FileHeaderInfo set to "Use" for the query request, or 2) an Amazon S3 object in
JSON format with the specified attributes.
Example: The object being queried has header/attribute named "CAST", which is a reserved keyword.
• The following expression successfully returns values from the object (quotation marks: use user-
defined header/attribute):
• The following expression results in a 400 parse error (no quotation marks: clash with reserved
keyword):
Scalar Expressions
Within the WHERE clause and the SELECT list, you can have SQL scalar expressions, which are expressions
that return scalar values. They have the following form:
• literal
An SQL literal.
• column_reference
Data Types
Amazon S3 Select and S3 Glacier Select support several primitive data types.
For more information about the CAST function, see CAST (p. 319).
decimal, numeric
Base-10 number, with a maximum precision of 38 (that is, the maximum number of significant digits), and with a scale within the range of -2^31 to 2^31 - 1 (that is, the base-10 exponent). Example: 123.456
Operators
Amazon S3 Select and S3 Glacier Select support the following operators.
Logical Operators
• AND
• NOT
• OR
Comparison Operators
• <
• >
• <=
• >=
• =
• <>
• !=
• BETWEEN
• IN – For example: IN ('a', 'b', 'c')
Math Operators
Addition, subtraction, multiplication, division, and modulo are supported.
• +
• -
• *
• /
• %
Operator Precedence
The following list shows the operators' precedence in decreasing order, along with each operator's associativity:
• *, /, % (left): multiplication, division, modulo
• +, - (left): addition, subtraction
• IN: set membership
• BETWEEN: range containment
• = (right): equality, assignment
• OR (left): logical disjunction
Reserved Keywords
The following is the list of reserved keywords for Amazon S3 Select and S3 Glacier Select. These include function names, data types, operators, and other terms that are needed to run the SQL expressions used to query object content.
absolute
action
add
all
allocate
alter
and
any
are
as
asc
assertion
at
authorization
avg
bag
begin
between
bit
bit_length
blob
bool
boolean
both
by
cascade
cascaded
case
cast
catalog
char
char_length
character
character_length
check
clob
close
coalesce
collate
collation
column
commit
connect
connection
constraint
constraints
continue
convert
corresponding
count
create
cross
current
current_date
current_time
current_timestamp
current_user
cursor
date
day
deallocate
dec
decimal
declare
default
deferrable
deferred
delete
desc
describe
descriptor
diagnostics
disconnect
distinct
domain
double
drop
else
end
end-exec
escape
except
exception
exec
execute
exists
external
extract
false
fetch
first
float
for
foreign
found
from
full
get
global
go
goto
grant
group
having
hour
identity
immediate
in
indicator
initially
inner
input
insensitive
insert
int
integer
intersect
interval
into
is
isolation
join
key
language
last
leading
left
level
like
limit
list
local
lower
match
max
min
minute
missing
module
month
names
national
natural
nchar
next
no
not
null
nullif
numeric
octet_length
of
on
only
open
option
or
order
outer
output
overlaps
pad
partial
pivot
position
precision
prepare
preserve
primary
prior
privileges
procedure
public
read
real
references
relative
restrict
revoke
right
rollback
rows
schema
scroll
second
section
select
session
session_user
set
sexp
size
smallint
some
space
sql
sqlcode
sqlerror
sqlstate
string
struct
substring
sum
symbol
system_user
table
temporary
then
time
timestamp
timezone_hour
timezone_minute
to
trailing
transaction
translate
translation
trim
true
tuple
union
unique
unknown
unpivot
update
upper
usage
user
using
value
values
varchar
varying
view
when
whenever
where
with
work
write
year
zone
SQL Functions
Amazon S3 Select and S3 Glacier Select support several SQL functions.
Topics
• Aggregate Functions (Amazon S3 Select only) (p. 317)
• Conditional Functions (p. 318)
• Conversion Functions (p. 319)
• Date Functions (p. 319)
• String Functions (p. 325)
Aggregate Functions (Amazon S3 Select only)
Amazon S3 Select supports the aggregate functions AVG, COUNT, MAX, MIN, and SUM. For example, COUNT returns a value of type INT.
Conditional Functions
Amazon S3 Select and S3 Glacier Select support the following conditional functions.
Topics
• COALESCE (p. 318)
• NULLIF (p. 318)
COALESCE
Evaluates the arguments in order and returns the first value that is not unknown, that is, the first value that is neither null nor missing. This function does not propagate null and missing.
Syntax
Parameters
expression
Examples
COALESCE(1) -- 1
COALESCE(null) -- null
COALESCE(null, null) -- null
COALESCE(missing) -- null
COALESCE(missing, missing) -- null
COALESCE(1, null) -- 1
COALESCE(null, null, 1) -- 1
COALESCE(null, 'string') -- 'string'
COALESCE(missing, 1) -- 1
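The semantics above can be modeled in Python with a sentinel for missing values (a sketch; MISSING and coalesce are illustrative names, not S3 Select APIs):

```python
MISSING = object()  # stands in for a value that is absent from the record

def coalesce(*args):
    """Return the first argument that is neither null nor missing."""
    for arg in args:
        if arg is not None and arg is not MISSING:
            return arg
    return None  # every argument was null or missing

print(coalesce(1))                 # 1
print(coalesce(None, None, 1))     # 1
print(coalesce(None, "string"))    # string
print(coalesce(MISSING, MISSING))  # None
```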
NULLIF
Given two expressions, returns NULL if the two expressions evaluate to the same value; otherwise,
returns the result of evaluating the first expression.
Syntax
Parameters
expression1, expression2
Examples
NULLIF(1, 1) -- null
NULLIF(1, 2) -- 1
NULLIF(1.0, 1) -- null
NULLIF(1, '1') -- 1
NULLIF([1], [1]) -- null
NULLIF(1, NULL) -- 1
NULLIF(NULL, 1) -- null
NULLIF(null, null) -- null
NULLIF(missing, null) -- null
NULLIF(missing, missing) -- null
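For the non-missing cases above, the behavior maps directly onto equality comparison in Python (a sketch; the missing cases are not modeled here):

```python
def nullif(a, b):
    """Return None (null) when the two values compare equal; otherwise return the first value."""
    return None if a == b else a

print(nullif(1, 1))    # None
print(nullif(1, 2))    # 1
print(nullif(1, "1"))  # 1  (values of different types do not compare equal)
```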
Conversion Functions
Amazon S3 Select and S3 Glacier Select support the following conversion functions.
Topics
• CAST (p. 319)
CAST
The CAST function converts an entity, such as an expression that evaluates to a single value, from one
type to another.
Syntax
Parameters
expression
A combination of one or more values, operators, and SQL functions that evaluate to a value.
data_type
The target data type, such as INT, to cast the expression to. For a list of supported data types, see
Data Types (p. 311).
Examples
CAST('2007-04-05T14:30Z' AS TIMESTAMP)
CAST(0.456 AS FLOAT)
Date Functions
Amazon S3 Select and S3 Glacier Select support the following date functions.
Topics
• DATE_ADD (p. 320)
• DATE_DIFF (p. 320)
• EXTRACT (p. 321)
• TO_STRING (p. 322)
• TO_TIMESTAMP (p. 324)
DATE_ADD
Given a date part, a quantity, and a time stamp, returns an updated time stamp by altering the date part
by the quantity.
Syntax
Parameters
date_part
Specifies which part of the date to modify. This can be one of the following:
• year
• month
• day
• hour
• minute
• second
quantity
The value to apply to the updated time stamp. Positive values for quantity add to the time stamp's
date_part, and negative values subtract.
timestamp
Examples
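The original examples are not reproduced here. As a sketch of the described behavior (date_add is an illustrative stand-in for the SQL function, and clamping out-of-range days to the end of the target month is an assumption, not stated by this guide):

```python
from datetime import datetime, timedelta
import calendar

def date_add(date_part, quantity, ts):
    """Alter one part of the time stamp by quantity (sketch of DATE_ADD)."""
    if date_part in ("hour", "minute", "second"):
        return ts + timedelta(**{date_part + "s": quantity})
    if date_part == "day":
        return ts + timedelta(days=quantity)
    # year/month: shift the month count, then clamp the day to the
    # target month's length (assumed behavior for days such as Jan 31).
    months = ts.year * 12 + (ts.month - 1) + quantity * (12 if date_part == "year" else 1)
    year, month = divmod(months, 12)
    month += 1
    day = min(ts.day, calendar.monthrange(year, month)[1])
    return ts.replace(year=year, month=month, day=day)

print(date_add("year", 5, datetime(2010, 1, 1)))   # 2015-01-01 00:00:00
print(date_add("day", -1, datetime(2017, 1, 10)))  # 2017-01-09 00:00:00
```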
DATE_DIFF
Given a date part and two valid time stamps, returns the difference in date parts. The return value is a
negative integer when the date_part value of timestamp1 is greater than the date_part value of
timestamp2. The return value is a positive integer when the date_part value of timestamp1 is less
than the date_part value of timestamp2.
Syntax
Parameters
date_part
Specifies which part of the time stamps to compare. For the definition of date_part, see
DATE_ADD (p. 320).
timestamp1
Examples
EXTRACT
Given a date part and a time stamp, returns the time stamp's date part value.
Syntax
Parameters
date_part
Specifies which part of the time stamps to extract. This can be one of the following:
• year
• month
• day
• hour
• minute
• second
• timezone_hour
• timezone_minute
timestamp
Examples
TO_STRING
Given a time stamp and a format pattern, returns a string representation of the time stamp in the given
format.
Syntax
Parameters
timestamp
Format | Example | Description
yy | 69 | 2-digit year
M | 1 | Month of year
MM | 01 | Zero-padded month of year
d | 2 | Day of month (1-31)
dd | 02 | Zero-padded day of month (01-31)
a | AM | AM or PM of day
h | 3 | Hour of day (1-12)
hh | 03 | Zero-padded hour of day (01-12)
H | 3 | Hour of day (0-23)
HH | 03 | Zero-padded hour of day (00-23)
m | 4 | Minute of hour (0-59)
mm | 04 | Zero-padded minute of hour (00-59)
s | 5 | Second of minute (0-59)
ss | 05 | Zero-padded second of minute (00-59)
S | 0 | Fraction of second (precision: 0.1, range: 0.0-0.9)
SS | 6 | Fraction of second (precision: 0.01, range: 0.0-0.99)
SSS | 60 | Fraction of second (precision: 0.001, range: 0.0-0.999)
... | ... | ...
x | 7 | Offset in hours
Examples
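The original examples are not shown here. As an illustration, a subset of the symbols in the table can be mapped onto Python's strftime directives (to_string is a stand-in name, and only the listed symbols are handled):

```python
from datetime import datetime

# Map a subset of the format symbols above onto strftime directives.
# Longer symbols are replaced first so "MM" is not read as two "M"s.
_SYMBOLS = [("MM", "%m"), ("dd", "%d"), ("yy", "%y"),
            ("HH", "%H"), ("mm", "%M"), ("ss", "%S")]

def to_string(ts, pattern):
    for symbol, directive in _SYMBOLS:
        pattern = pattern.replace(symbol, directive)
    return ts.strftime(pattern)

print(to_string(datetime(1969, 7, 20, 20, 17, 40), "MM/dd/yy"))  # 07/20/69
print(to_string(datetime(1969, 7, 20, 20, 17, 40), "HH:mm:ss"))  # 20:17:40
```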
TO_TIMESTAMP
Given a string, converts it to a time stamp. This is the inverse operation of TO_STRING.
Syntax
TO_TIMESTAMP ( string )
Parameters
string
Examples
TO_TIMESTAMP('2007T') -- `2007T`
TO_TIMESTAMP('2007-02-23T12:14:33.079-08:00') -- `2007-02-23T12:14:33.079-08:00`
UTCNOW
Returns the current time in UTC as a time stamp.
Syntax
UTCNOW()
Parameters
none
Examples
UTCNOW() -- 2017-10-13T16:02:11.123Z
String Functions
Amazon S3 Select and S3 Glacier Select support the following string functions.
Topics
• CHAR_LENGTH, CHARACTER_LENGTH (p. 325)
• LOWER (p. 326)
• SUBSTRING (p. 326)
• TRIM (p. 327)
• UPPER (p. 327)
CHAR_LENGTH, CHARACTER_LENGTH
Counts the number of characters in the specified string.
Note
CHAR_LENGTH and CHARACTER_LENGTH are synonyms.
Syntax
CHAR_LENGTH ( string )
Parameters
string
The string whose characters are to be counted.
Examples
CHAR_LENGTH('') -- 0
CHAR_LENGTH('abcdefg') -- 7
LOWER
Given a string, converts all uppercase characters to lowercase characters. Any non-uppercased characters
remain unchanged.
Syntax
LOWER ( string )
Parameters
string
The string to convert to lowercase.
Examples
LOWER('AbCdEfG!@#$') -- 'abcdefg!@#$'
SUBSTRING
Given a string, a start index, and optionally a length, returns the substring from the start index up to the
end of the string, or up to the length provided.
Note
The first character of the input string has index 1. If start is < 1, it is set to 1.
Syntax
SUBSTRING( string, start [ , length ] )
Parameters
string
The string to take a substring of.
start
The index of the first character of the substring. The first character of the string has index 1.
length
The length of the substring to return. If not present, proceed to the end of the string.
Examples
SUBSTRING("123456789", 0) -- "123456789"
SUBSTRING("123456789", 1) -- "123456789"
SUBSTRING("123456789", 2) -- "23456789"
SUBSTRING("123456789", -4) -- "123456789"
SUBSTRING("123456789", 0, 999) -- "123456789"
SUBSTRING("123456789", 1, 5) -- "12345"
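The examples above follow from a simple reading of the clamping rule: a start index below 1 is raised to 1 before the substring is taken. A minimal Python sketch of that reading (illustrative only, not the S3 Glacier Select implementation) reproduces each example:

```python
def substring(s, start, length=None):
    # 1-based start index, as documented; values below 1 are clamped to 1.
    if start < 1:
        start = 1
    if length is None:
        # No length given: proceed to the end of the string.
        return s[start - 1:]
    return s[start - 1:start - 1 + length]

print(substring("123456789", -4))    # 123456789
print(substring("123456789", 2))     # 23456789
print(substring("123456789", 1, 5))  # 12345
```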
TRIM
Trims leading or trailing characters from a string. The default character to remove is ' '.
Syntax
TRIM( [ [ LEADING | TRAILING | BOTH remove_chars ] FROM ] string )
Parameters
string
The string to trim.
LEADING | TRAILING | BOTH
Whether to trim leading or trailing characters, or both leading and trailing characters.
remove_chars
The set of characters to remove. Note that remove_chars can be a string with length > 1. This
function removes any character from remove_chars that is found at the beginning or end of the
string.
Examples
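Because remove_chars is treated as a set of characters rather than a literal substring, the behavior matches Python's str.strip family. The sketch below is an illustration of the documented semantics, not the S3 Glacier Select implementation; the keyword values are hypothetical stand-ins for LEADING, TRAILING, and BOTH.

```python
def trim(s, where="both", remove_chars=" "):
    # remove_chars is a set of characters to strip, mirroring the
    # documented TRIM semantics (default is the space character).
    if where == "leading":
        return s.lstrip(remove_chars)
    if where == "trailing":
        return s.rstrip(remove_chars)
    return s.strip(remove_chars)

print(trim("   foobar   "))             # foobar
print(trim("xxfoobarxx", "both", "x"))  # foobar
print(trim("xxfoobarxx", "leading", "x"))  # foobarxx
```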
UPPER
Given a string, converts all lowercase characters to uppercase characters. Any non-lowercased characters
remain unchanged.
Syntax
UPPER ( string )
Parameters
string
The string to convert to uppercase.
Examples
UPPER('AbCdEfG!@#$') -- 'ABCDEFG!@#$'
Document History
• Latest documentation update: November 20, 2018
• Current product version: 2012-06-01
The following table describes the important changes in each release of the Amazon S3 Glacier Developer
Guide from July 5, 2018, onward. For notification about updates to this documentation, you can
subscribe to an RSS feed.
Amazon Glacier name change (November 20, 2018)
Amazon Glacier is now Amazon S3 Glacier to better reflect Glacier's integration with Amazon S3.
Updates now available over RSS (July 5, 2018)
You can now subscribe to an RSS feed to receive notifications about updates to the Amazon S3
Glacier Developer Guide.
Earlier Updates
The following table describes the important changes in each release of the Amazon S3 Glacier Developer
Guide before July 5, 2018.
Querying archives with SQL (November 29, 2017)
S3 Glacier now supports querying data archives with SQL. For more information, see Querying
Archives with S3 Glacier Select (p. 148).
Expedited and Bulk Data Retrievals (November 21, 2016)
S3 Glacier now supports Expedited and Bulk data retrievals in addition to Standard retrievals.
For more information, see Archive Retrieval Options (p. 84).
Vault Lock (July 8, 2015)
S3 Glacier now supports Vault Lock, which allows you to easily deploy and enforce compliance
controls on individual S3 Glacier vaults with a Vault Lock policy. For more information, see
Amazon S3 Glacier Vault Lock (p. 65) and Amazon S3 Glacier Access Control with Vault Lock
Policies (p. 136).
Vault tagging (June 22, 2015)
S3 Glacier now allows you to tag your S3 Glacier vaults for easier resource and cost
management. Tags are labels that you can associate with your vaults.
Vault access policies (April 27, 2015)
S3 Glacier now supports managing access to your individual S3 Glacier vaults by using vault
access policies. You can now define an access policy directly on a vault, making it easier to
grant vault access to users and business groups internal to your organization, as well as to
your external business partners. For more information, see Amazon S3 Glacier Access Control
with Vault Access Policies (p. 134).
Data retrieval policies and audit logging (December 11, 2014)
S3 Glacier now supports data retrieval policies and audit logging. Data retrieval policies allow
you to easily set data retrieval limits and simplify data retrieval cost management. You can
define your own data retrieval limits with a few clicks in the AWS console or by using the S3
Glacier API. For more information, see Amazon S3 Glacier Data Retrieval Policies (p. 151).
Updates to Java samples (June 27, 2014)
Updated the Java code samples in this guide that use the AWS SDK for Java.
Limiting vault inventory retrieval (December 31, 2013)
You can now limit the number of vault inventory items retrieved by filtering on the archive
creation date or by setting a limit. For more information about limiting inventory retrieval, see
Range Inventory Retrieval (p. 266) in the Initiate Job (POST jobs) (p. 263) topic.
Removed outdated URLs (July 26, 2013)
Removed the URLs that pointed to the old security credentials page from code examples.
Support for range retrievals (November 13, 2012)
S3 Glacier now supports retrieval of specific ranges of your archives. You can initiate a job
requesting S3 Glacier to prepare an entire archive or a portion of the archive for subsequent
download. When an archive is very large, you may find it cost effective to initiate several
sequential jobs to prepare your archive.
New Guide (August 20, 2012)
This is the first release of the Amazon S3 Glacier Developer Guide.
AWS glossary
For the latest AWS terminology, see the AWS glossary in the AWS General Reference.