Amazon S3 stores data as objects within resources called "buckets". You can store as many objects as you want within a bucket, and write, read, and delete objects in your bucket. Objects can be up to 5 terabytes in size.

You can control access to the bucket (who can create, delete, and retrieve objects in the bucket for example), view access logs for the bucket and its objects, and choose the AWS region where a bucket is stored to optimize for latency, minimize costs, or address regulatory requirements.

Get Started with AWS Today

Try Amazon S3 for Free

AWS Free Tier includes 5GB storage, 20,000 Get Requests, and 2,000 Put Requests with Amazon S3.

View AWS Free Tier Details »


Amazon S3 is designed as a complete storage platform. Consider the value included with every GB you store.

Scale. Customers around the world depend on Amazon S3 to safeguard trillions of objects every day. Costs grow and shrink on demand, and global deployments can be done in minutes. Industries like financial services, healthcare, media, and entertainment use it to build big data, analytics, transcoding, and archive applications.

Durability. Amazon S3 is in 12 regions around the world (with more coming online frequently) and includes a geographic redundancy option to replicate across regions. In addition, multiple versions of an object may be preserved for point-in-time recovery.

Broad integration. Amazon S3 is designed to integrate directly with other AWS services for security (IAM and KMS), alerting (CloudWatch, CloudTrail, and Event Notifications), computing (Lambda), and analytics and databases (EMR and Redshift).

Cloud Data Migration services. AWS storage includes multiple specialized methods to help you get data into and out of the cloud.

Lifecycle Management Policies. Objects may be moved between storage classes via customizable, automated rules. This helps lower costs while reducing management overhead.


Amazon S3 supports multiple methods to help transfer large amounts of data into and out of the cloud. One of the simplest is Amazon S3 Transfer Acceleration, which combines innovative software, protocol optimizations, and AWS edge infrastructure to accelerate data transfers by up to 300% on existing infrastructure. Simply enable the feature in the S3 console, change the S3 bucket endpoint name in your application, and your Amazon S3 uploads and downloads will be accelerated without adjusting protocols, changing firewall settings, or buying acceleration hardware.
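
As a rough sketch of what "enable the feature and change the endpoint" looks like in practice, here is how you might turn on Transfer Acceleration with the AWS SDK for Python (boto3); the bucket name and file names are placeholders:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (bucket name is a placeholder).
s3.put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Create a client that routes requests through the accelerate endpoint;
# transfers through this client travel over AWS edge infrastructure.
s3_accelerated = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accelerated.upload_file("video.mp4", "my-example-bucket", "uploads/video.mp4")
```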

Learn more

Compare how much faster your transfers can be

Hudl

Amazon S3 Transfer Acceleration reduces the average time it takes for us to ingest videos from our global user base by almost half. This gives our customers the ability to edit and share videos sooner, where speed is a critical factor.

We loved how easy it was to get started with Transfer Acceleration – just a simple endpoint change in our application and done.  

All this for a fraction of the cost of the solution we evaluated before.

- Brian Kaiser, CTO

Cross-region replication (CRR) provides automated, fast, reliable data replication across AWS regions. Every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a different AWS region that you choose.

Learn more

Amazon S3 event notifications can be sent when objects are uploaded to or deleted from Amazon S3. Event notifications can be delivered using Amazon SQS or Amazon SNS, or sent directly to AWS Lambda, enabling you to trigger workflows, alerts, or other processing.

Learn more

Amazon S3 allows you to enable versioning so you can preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket.

Learn more

Amazon S3 provides a number of data stewardship capabilities, including automated artifact cleanup as well as data lifecycle migration from S3 Standard to S3 Standard - Infrequent Access and Amazon Glacier.

Learn more

Amazon S3 encrypts data in transit via SSL-encrypted endpoints and can also encrypt data at rest with three options for managing encryption keys: directly by S3, through AWS Key Management Service (AWS KMS), or you can provide your own keys.

Learn more

Amazon S3 provides several mechanisms to control and monitor who can access your data as well as how, when, and where they can access it. VPC endpoints allow you to create a secure connection without an Internet gateway or NAT instances.

Learn more

Amazon S3 is supported by the AWS SDKs for Java, PHP, .NET, Python, Node.js, Ruby, and the AWS Mobile SDK. The SDK libraries wrap the underlying REST API, simplifying your programming tasks.
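
As a minimal sketch with the AWS SDK for Python (boto3), here are the basic write, read, and delete operations described at the top of this page; the bucket name is a placeholder and the bucket is assumed to already exist:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder; assumed to exist

# Write an object.
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"Hello, Amazon S3!")

# Read it back.
response = s3.get_object(Bucket=bucket, Key="hello.txt")
print(response["Body"].read().decode("utf-8"))

# Delete it.
s3.delete_object(Bucket=bucket, Key="hello.txt")
```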

Learn more

Amazon S3 has several features for managing and controlling your costs, including bucket tagging to manage cost allocation and integration with Amazon CloudWatch to receive billing alerts.

Learn more

Amazon S3 is designed for 99.999999999% durability and up to 99.99% availability of objects over a given year. In addition to S3 Standard, there is a lower-cost Standard - Infrequent Access option for infrequently accessed data, and Amazon Glacier for archiving cold data at the lowest possible cost.

Learn more

Amazon S3 supports applications connecting directly to Amazon S3 via both IPv4 and IPv6 addressing. With this new protocol support, you can more easily meet IPv6 compliance requirements, integrate with existing IPv6-based on-premises applications, and remove the need for expensive networking equipment to handle the address translation. You can also utilize the existing source address filtering features in IAM policies and bucket policies with IPv6 addresses, expanding your options to secure applications interacting with Amazon S3.
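
As a sketch, one way to reach Amazon S3 over IPv6 with boto3 is to point the client at the dual-stack endpoint for your region; the region and bucket name below are placeholders:

```python
import boto3

# The dual-stack endpoint resolves to both IPv4 and IPv6 addresses.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    endpoint_url="https://s3.dualstack.us-east-1.amazonaws.com",
)
s3.list_objects_v2(Bucket="my-example-bucket")  # placeholder bucket
```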

Learn more


Data stored in Amazon S3 is secure by default; only bucket and object owners have access to the Amazon S3 resources they create. Amazon S3 supports multiple access control mechanisms, as well as encryption for both secure transit and secure storage at rest. With Amazon S3’s data protection features, you can protect your data from both logical and physical failures, guarding against data loss from unintended user actions, application errors, and infrastructure failures. For customers who must comply with regulatory standards such as PCI and HIPAA, Amazon S3’s data protection features can be used as part of an overall strategy to achieve compliance. The various data security and reliability features offered by Amazon S3 are described in detail below.

Amazon S3 supports several mechanisms that give you flexibility to control who can access your data as well as how, when, and where they can access it. Amazon S3 provides four different access control mechanisms: AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs), bucket policies, and query string authentication. IAM enables organizations to create and manage multiple users under a single AWS account. With IAM policies, you can grant IAM users fine-grained control over your Amazon S3 buckets or objects. You can use ACLs to selectively add (grant) certain permissions on individual objects. Amazon S3 bucket policies can be used to add or deny permissions across some or all of the objects within a single bucket. With query string authentication, you can share Amazon S3 objects through URLs that are valid for a specified period of time.
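
As an illustration of the bucket policy mechanism, the following boto3 sketch attaches a policy granting read access on every object in a bucket to a single IAM user; the account ID, user name, and bucket name are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Placeholder principal: an IAM user in your account.
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/alice"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))
```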

You can access Amazon S3 from your Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints. VPC endpoints are easy to configure and provide reliable connectivity to Amazon S3 without requiring an Internet gateway or a Network Address Translation (NAT) instance. With VPC endpoints, the data between an Amazon VPC and Amazon S3 is transferred within the Amazon network, helping protect your instances from Internet traffic. Amazon VPC endpoints for Amazon S3 provide multiple levels of security controls to help limit access to S3 buckets. First, you can require that requests to your Amazon S3 buckets originate from a VPC using a VPC endpoint. Additionally, you can control what buckets, requests, users, or groups are allowed through a specific VPC endpoint.
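
Requiring that requests originate from a VPC endpoint is typically expressed as a bucket policy condition on the aws:sourceVpce key. A hedged sketch, where the endpoint ID and bucket name are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny all access to the bucket unless the request arrives through
# the specified VPC endpoint (the vpce ID is a placeholder).
# Caution: this also blocks console access from outside the VPC.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-example-bucket",
                "arn:aws:s3:::my-example-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))
```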

You can securely upload or download your data to Amazon S3 via the SSL-encrypted endpoints using the HTTPS protocol. Amazon S3 can automatically encrypt your data at rest and gives you several choices for key management. Alternatively, you can use a client encryption library such as the Amazon S3 Encryption Client to encrypt your data before uploading to Amazon S3.

If you choose to have Amazon S3 encrypt your data at rest with server-side encryption (SSE), Amazon S3 will automatically encrypt your data on write and decrypt your data on retrieval. When Amazon S3 SSE encrypts data at rest, it uses Advanced Encryption Standard (AES) 256-bit symmetric keys. If you choose server-side encryption with Amazon S3, there are three ways to manage the encryption keys.

SSE with Amazon S3 Key Management (SSE-S3)

With SSE-S3, Amazon S3 will encrypt your data at rest and manage the encryption keys for you.

SSE with Customer-Provided Keys (SSE-C)

With SSE-C, Amazon S3 will encrypt your data at rest using the custom encryption keys that you provide. To use SSE-C, simply include your custom encryption key in your upload request, and Amazon S3 encrypts the object using that key and securely stores the encrypted data at rest. Similarly, to retrieve an encrypted object, provide your custom encryption key, and Amazon S3 decrypts the object as part of the retrieval. Amazon S3 doesn’t store your encryption key anywhere; the key is immediately discarded after Amazon S3 completes your requests.

SSE with AWS KMS (SSE-KMS)

With SSE-KMS, Amazon S3 will encrypt your data at rest using keys that you manage in the AWS Key Management Service (KMS). Using AWS KMS for key management provides several benefits. With AWS KMS, there are separate permissions for the use of the master key, providing an additional layer of control as well as protection against unauthorized access to your objects stored in Amazon S3. AWS KMS provides an audit trail so you can see who used your key to access which object and when, as well as view failed attempts to access data from users without permission to decrypt the data. AWS KMS also provides additional security controls to support customer efforts to comply with PCI-DSS, HIPAA/HITECH, and FedRAMP industry requirements.
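
The three key management options map to request parameters in the S3 API. A minimal boto3 sketch of each, where the bucket name and KMS key alias are placeholders:

```python
import os
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

# SSE-S3: Amazon S3 manages the AES-256 keys for you.
s3.put_object(Bucket=bucket, Key="sse-s3.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: keys are managed in AWS KMS (key alias is a placeholder).
s3.put_object(Bucket=bucket, Key="sse-kms.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-s3-key")

# SSE-C: you supply a 256-bit key with each request; S3 uses it, then
# discards it. You must supply the same key again to read the object.
key = os.urandom(32)
s3.put_object(Bucket=bucket, Key="sse-c.txt", Body=b"data",
              SSECustomerAlgorithm="AES256", SSECustomerKey=key)
obj = s3.get_object(Bucket=bucket, Key="sse-c.txt",
                    SSECustomerAlgorithm="AES256", SSECustomerKey=key)
```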

For more information, refer to the Using Data Encryption topics in the Amazon S3 Developer Guide.

Amazon S3 also supports logging of requests made against your Amazon S3 resources. You can configure your Amazon S3 bucket to create access log records for the requests made against it. These server access logs capture all requests made against a bucket or the objects in it and can be used for auditing purposes.
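
A sketch of enabling server access logging with boto3; both bucket names are placeholders, and the target bucket is assumed to already grant the S3 Log Delivery group write permission:

```python
import boto3

s3 = boto3.client("s3")

# The target bucket (placeholder) must allow the S3 Log Delivery group
# to write to it before logging can be enabled.
s3.put_bucket_logging(
    Bucket="my-example-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "access-logs/",
        }
    },
)
```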

For more information on the security features available in Amazon S3, please refer to the Access Control topic in the Amazon S3 Developer Guide. For an overview of security on AWS, including Amazon S3, please refer to Amazon Web Services: Overview of Security Processes document.

Amazon S3 provides further protection with versioning capability. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for every version stored. You can configure lifecycle rules to automatically control the lifetime and cost of storing multiple versions.
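
A sketch of enabling versioning and retrieving an older version with boto3; the bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

# Turn on versioning; subsequent writes create new versions.
s3.put_bucket_versioning(
    Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
)

# List the versions of a key and fetch one that is not the latest.
versions = s3.list_object_versions(Bucket=bucket, Prefix="report.csv")
for v in versions.get("Versions", []):
    if not v["IsLatest"]:
        old = s3.get_object(Bucket=bucket, Key="report.csv",
                            VersionId=v["VersionId"])
        break
```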

Amazon S3 provides additional security with Multi-Factor Authentication (MFA) Delete. When enabled, this feature requires the use of a multi-factor authentication device to delete objects stored in Amazon S3 to help protect previous versions of your objects.

When MFA Delete is enabled on your Amazon S3 bucket, you can change the versioning state of the bucket or permanently delete an object version only by providing two forms of authentication together (see the sketch after this list):

  • Your AWS account credentials
  • The concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device
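
A sketch of what the two factors look like in a boto3 request; note that enabling MFA Delete requires the bucket owner's root credentials via the API or CLI, and the bucket name, device serial, codes, and version ID below are all placeholders:

```python
import boto3

s3 = boto3.client("s3")  # must be called with the root account's credentials

# Enable MFA Delete; the MFA argument is "<device serial> <6-digit code>".
s3.put_bucket_versioning(
    Bucket="my-example-bucket",  # placeholder
    MFA="arn:aws:iam::111122223333:mfa/root-device 123456",  # placeholder
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)

# Permanently deleting a specific version now also requires the MFA token.
s3.delete_object(
    Bucket="my-example-bucket",
    Key="report.csv",
    VersionId="3HL4kqtJvjVBH40Nrjfkd",  # placeholder version ID
    MFA="arn:aws:iam::111122223333:mfa/root-device 654321",
)
```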

Learn more

Amazon S3 supports query string authentication, which allows you to provide a URL that is valid only for a length of time that you define. This time-limited URL can be useful for scenarios such as software downloads or other applications where you want to restrict the length of time users have access to an object. Learn more
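
In boto3, query string authentication is exposed as presigned URLs. A sketch, where the bucket and key are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Generate a URL that lets anyone holding it download the object
# for the next hour, after which the signature expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "downloads/installer.zip"},
    ExpiresIn=3600,  # seconds
)
print(url)
```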


Amazon S3 event notifications can be sent in response to actions taken on objects uploaded or stored in Amazon S3. Notification messages can be sent through either Amazon SNS or Amazon SQS, or delivered directly to AWS Lambda to invoke AWS Lambda functions.

Amazon S3 event notifications enable you to run workflows, send alerts, or perform other actions in response to changes in your objects stored in Amazon S3. You can use Amazon S3 event notifications to set up triggers to perform actions such as transcoding media files when they are uploaded, processing data files when they become available, and synchronizing Amazon S3 objects with other data stores. You can also set up event notifications based on object name prefixes and suffixes. For example, you can choose to receive notifications on object names that start with "images/". Event notifications can also be used to keep a secondary index of Amazon S3 objects in sync.

Amazon S3 event notifications are set up at the bucket level, and you can configure them through the Amazon S3 console, through the REST API, or by using an AWS SDK.
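
A sketch of configuring a bucket to invoke a Lambda function for uploads under the "images/" prefix, as in the example above; the function ARN and bucket name are placeholders, and the function must already permit S3 to invoke it:

```python
import boto3

s3 = boto3.client("s3")

# The Lambda function (ARN is a placeholder) must already grant
# s3.amazonaws.com permission to invoke it.
s3.put_bucket_notification_configuration(
    Bucket="my-example-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:transcode-image",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [{"Name": "prefix", "Value": "images/"}]
                    }
                },
            }
        ]
    },
)
```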

To learn more, visit the Configuring Notifications for Amazon S3 Events topic in the Amazon S3 Developer Guide.


Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Amazon S3 redundantly stores data in multiple facilities and on multiple devices within each facility. To increase durability, Amazon S3 synchronously stores your data across multiple facilities before confirming that the data has been successfully stored. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon S3 performs regular, systematic data integrity checks and is built to be automatically self-healing.

Standard is:

  • Backed with the Amazon S3 Service Level Agreement for availability.
  • Designed for 99.999999999% durability and 99.99% availability of objects over a given year.
  • Designed to sustain the concurrent loss of data in two facilities.

Standard - Infrequent Access is:

  • Backed with the Amazon S3 Service Level Agreement for availability.
  • Designed for 99.999999999% durability and 99.9% availability of objects over a given year.
  • Designed to sustain the concurrent loss of data in two facilities.

Amazon Glacier is:

  • Designed for 99.999999999% durability of objects over a given year.
  • Designed to sustain the concurrent loss of data in two facilities.

Amazon S3 makes it easy to manage your data. With Amazon S3’s data lifecycle management capabilities, you can automatically migrate older objects to Standard - Infrequent Access, archive objects to Amazon Glacier, and perform recurring deletions, enabling you to reduce your costs over an object’s lifetime. Amazon S3 also allows you to monitor and control your costs across your different business functions. All of these management capabilities can be easily administered using the Amazon S3 APIs or console. The various data management features offered by Amazon S3 are described in detail below.

Amazon S3 can automatically assign and change cost and performance characteristics as your data evolves. It can even automate common data lifecycle management tasks, including capacity provisioning, automatic migration to lower-cost tiers, regulatory compliance policies, and scheduled deletions.

When storing new data, Amazon S3 eliminates the need for capacity planning by enabling you to both scale on-demand and pay only for the capacity you use. With traditional storage systems, capacity planning can be an error-prone process, especially when storage growth is unpredictable. Over-provisioning capacity can result in under-utilization and higher costs, while under-provisioning can trigger expensive hardware upgrades far earlier than planned.

As your data ages, Amazon S3 takes care of automatically and transparently migrating your data to new hardware as hardware fails or reaches its end of life. This eliminates the need for you to perform expensive, time-consuming, and risky hardware migrations. Amazon S3 also enables you to automatically migrate your data to lower cost storage as your data ages. You can define rules to automatically migrate Amazon S3 objects to Standard - Infrequent Access (Standard - IA) or Amazon Glacier based on the age of the data. Migration rules are supported for Amazon S3 objects in the US-East (N. Virginia)*, US West (N. California), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions.

When your data reaches its end of life, Amazon S3 provides programmatic options for recurring and high volume deletions. For recurring deletions, rules can be defined to remove sets of objects after a predefined time period. These rules can be applied to objects stored in Standard or Standard - IA, and objects that have been archived to Amazon Glacier.
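
A sketch of a lifecycle configuration in boto3 that tiers objects under a prefix down to Standard - IA, then Amazon Glacier, then deletes them; the bucket name, prefix, and day counts are placeholders:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                # Recurring deletion: remove objects a year after creation.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```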

You can also define lifecycle rules on versions of your Amazon S3 objects to reduce storage costs. For example, you can create rules to automatically and cleanly delete older versions of your objects when those versions are no longer needed, saving money and improving performance. Alternatively, you can create rules to automatically migrate older versions to either Standard - IA or Amazon Glacier to further reduce your storage costs.

*US-East (N. Virginia) region was previously known as US-East (Standard)

Cross-region replication (CRR) replicates every object uploaded to your source bucket to a destination bucket in a different AWS region that you choose. The metadata and ACLs associated with the object are also part of the replication. Once you configure CRR on your source bucket, any changes to the data, metadata, or ACLs on the object trigger a new replication to the destination bucket.

CRR is a bucket-level configuration; you enable CRR on your bucket by specifying a destination bucket in a different region using the AWS Management Console, the REST API, the AWS CLI, or the AWS SDKs. Versioning must be turned on for both the source and destination buckets to enable CRR. To learn more about CRR, please visit Cross-Region Replication in the Amazon S3 Developer Guide.
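
A sketch of enabling CRR with boto3 using a prefix-based replication rule, assuming versioning is already on for both buckets; the IAM role and bucket names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Versioning must already be enabled on both buckets. The role
# (placeholder) must allow S3 to replicate objects on your behalf.
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [
            {
                "Prefix": "",  # replicate every object in the bucket
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
            }
        ],
    },
)
```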

Amazon S3 offers several features for managing and controlling your costs. You can use the AWS Management Console or the Amazon S3 APIs to apply tags to your Amazon S3 buckets, enabling you to allocate your costs across multiple business dimensions, including cost centers, application names, or owners. You can then view breakdowns of these costs using Amazon Web Services’ Cost Allocation Reports, which show your usage and costs aggregated by your tags. For more information on Cost Allocation and tagging, please visit About AWS Account Billing. For more information on tagging your Amazon S3 buckets, please see the Bucket Tagging topic in the Amazon S3 Developer Guide.
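
A sketch of applying a cost allocation tag to a bucket with boto3; the bucket name, tag key, and tag value are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Tags appear in Cost Allocation Reports once activated for billing.
s3.put_bucket_tagging(
    Bucket="my-example-bucket",  # placeholder
    Tagging={"TagSet": [{"Key": "CostCenter", "Value": "analytics-team"}]},
)
```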

You can use Amazon CloudWatch to receive billing alerts that help you monitor the Amazon S3 charges on your bill. You can set up an alert to be notified automatically via e-mail when estimated charges reach a threshold that you choose. For additional information on billing alerts, you can visit the billing alerts page or see the Monitor Your Estimated Charges topic in the Amazon CloudWatch Developer Guide.
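
A sketch of such a billing alert via boto3; billing metrics are published only in the us-east-1 region and require billing alerts to be enabled in your account preferences, and the SNS topic ARN and threshold below are placeholders:

```python
import boto3

# Billing metrics are only published to the us-east-1 region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-100-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,          # evaluate over six-hour windows
    EvaluationPeriods=1,
    Threshold=100.0,       # placeholder threshold in USD
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:billing-alerts"],  # placeholder
)
```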


Amazon has a suite of tools that make migrating data into the cloud faster, including ways to optimize or replace your network, and ways to integrate existing workflows with S3.

One of these, Amazon S3 Transfer Acceleration, is designed to maximize transfer speeds to Amazon S3 buckets over long distances. It works by carrying HTTP and HTTPS traffic over a highly optimized network bridge that runs between the AWS Edge Location nearest your clients and your Amazon S3 bucket. There are no gateway servers to manage, no firewalls to open, no special ports or clients to integrate or upfront fees to pay. You simply change the Amazon S3 endpoint that your application uses to transfer data and acceleration is automatically applied. Use Transfer Acceleration if you:

  • Need faster uploads from clients that are located far away from your bucket, for instance across countries or continents.
  • Have clients located outside of your own datacenters, who rely on the public internet to reach Amazon S3. For clients inside your own datacenters, consider AWS Direct Connect.
  • Need to transfer datasets ranging from hundreds of GBs to approximately 75 TB (for datasets larger than this, consider AWS Import/Export Snowball).

Learn more


Your use of this service is subject to the Amazon Web Services Customer Agreement.