Amazon S3 is a cornerstone of AWS cloud storage, offering unmatched scalability, reliability, and versatility for modern applications. For AWS Associate Solutions Architects, mastering S3 is essential to designing efficient, secure, and cost-effective solutions.
From understanding storage classes and lifecycle management to leveraging features like encryption, versioning, and event notifications, this guide covers the critical knowledge you need to optimize your S3 usage and succeed in your AWS journey.
Durability and Availability:
Amazon S3 offers 99.999999999% (11 nines) durability across all storage classes. Availability varies by class: S3 Standard is designed for 99.99% availability, while infrequent-access classes have slightly lower availability targets.
By default, S3 replicates your data across at least three Availability Zones within the same Region, providing built-in redundancy with no additional configuration. The exception is S3 One Zone-IA, which stores data in a single Availability Zone.
Bucket Policies and Access Control:
When an S3 bucket is created, it is private by default: all objects in the bucket are private, and only the bucket owner and principals explicitly granted access (for example, through IAM policies or a bucket policy) can access them.
Versioning and Lifecycle Management:
Amazon S3 Versioning is a valuable feature in many scenarios, particularly when data protection, audit trails, disaster recovery, or compliance with regulatory requirements are needed. It is especially useful for use cases involving frequently changing data, data that needs to be retained over time, or historical data that is important for recovery and analysis.
It is a best practice to use S3 Versioning alongside Lifecycle Management in use cases such as data backup and recovery, compliance, archiving, and storage management. This combination ensures optimal storage costs, data protection, and simplifies operations by automating versioning and data transitions.
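For example, versioning can be switched on for an existing bucket with a single CLI call (the bucket name is a placeholder):

```shell
# Enable versioning on an existing bucket (replace <BUCKET-NAME> with your own).
aws s3api put-bucket-versioning \
  --bucket <BUCKET-NAME> \
  --versioning-configuration Status=Enabled

# Check the result; the response should report "Status": "Enabled".
aws s3api get-bucket-versioning --bucket <BUCKET-NAME>
```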
Below are the lifecycle rules you can define for objects in a bucket. Note that rules can be scoped with filters (for example, by prefix or tag). You can define rules that transition objects to different storage classes or delete them after a specified period, for example: move objects to Glacier after 90 days and permanently delete them after 365 days.
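As a sketch, that Glacier-after-90-days example can be expressed as a lifecycle configuration and attached with the CLI (the logs/ prefix filter, file name, and bucket name are illustrative placeholders):

```shell
# lifecycle.json: transition to Glacier after 90 days, expire after 365 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

# Attach the lifecycle configuration to the bucket.
aws s3api put-bucket-lifecycle-configuration \
  --bucket <BUCKET-NAME> \
  --lifecycle-configuration file://lifecycle.json
```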
Data Protection and Encryption:
When a bucket is created, all objects stored in it are encrypted with SSE-S3 by default. SSE-S3 is a free server-side encryption option that encrypts data at rest with AES-256.
Granular permissions with SSE-KMS: By using SSE-KMS (AWS Key Management Service), you can control who has access to the encryption keys, enabling fine-grained control over who can decrypt your data. With SSE-KMS, you can create, rotate, and revoke encryption keys, giving you full control over the life cycle of the keys used to encrypt your data.
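A minimal sketch of switching a bucket's default encryption from SSE-S3 to SSE-KMS (the key ID is a placeholder; omitting KMSMasterKeyID falls back to the AWS managed key aws/s3):

```shell
# Set the bucket's default encryption to SSE-KMS with a customer managed key.
# Enabling the S3 Bucket Key reduces the number of KMS requests (and cost).
aws s3api put-bucket-encryption \
  --bucket <BUCKET-NAME> \
  --server-side-encryption-configuration '{
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms",
          "KMSMasterKeyID": "<KMS-KEY-ID>"
        },
        "BucketKeyEnabled": true
      }
    ]
  }'
```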
Try it: Check if a bucket is enabled with server-side encryption.
(Note: values such as the region, account ID, and resource names below are placeholders; replace them with your own values when running the commands with the AWS CLI.)
aws s3api get-bucket-encryption --bucket <BUCKET-NAME>
- A bucket with server-side encryption enabled returns a response similar to the one below.
{
  "ServerSideEncryptionConfiguration": {
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "AES256"
        },
        "BucketKeyEnabled": false
      }
    ]
  }
}
Note: to encrypt data in transit, make sure to use HTTPS for secure data transfers.
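One common way to enforce this is a bucket policy that denies any request arriving over plain HTTP, using the aws:SecureTransport condition key (the bucket name is a placeholder):

```shell
# deny-http.json: reject any request to the bucket that does not use TLS.
cat > deny-http.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<BUCKET-NAME>",
        "arn:aws:s3:::<BUCKET-NAME>/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}
EOF

aws s3api put-bucket-policy --bucket <BUCKET-NAME> --policy file://deny-http.json
```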
Event Notifications:
S3 can trigger notifications to AWS services like Lambda, SNS, or SQS when specific events occur (e.g., object creation or deletion).
It is used for workflows such as image processing, logging, or custom alerts.
Try it: create an S3 bucket and configure an SNS topic to send an email notification whenever a file is uploaded to the bucket.
- Create an S3 bucket for uploading files.
aws s3api create-bucket --bucket <BUCKET-NAME> --region eu-north-1 --create-bucket-configuration LocationConstraint=eu-north-1
- Create an SNS topic.
aws sns create-topic --name MySNSTopic
The output contains the topic ARN (e.g., arn:aws:sns:region:account-id:MySNSTopic). Note it down for the later steps.
- Subscribe to the SNS topic to receive emails from it.
aws sns subscribe --topic-arn arn:aws:sns:region:account-id:MySNSTopic --protocol email --notification-endpoint [email protected]
- Confirm the subscription by clicking the confirmation link in your email. Alternatively, copy the token from the confirmation link and confirm the subscription via the AWS CLI.
aws sns confirm-subscription --topic-arn arn:aws:sns:region:account-id:MySNSTopic --token <TOKEN-FROM-EMAIL>
Once the subscription is confirmed, you can test email delivery by publishing a message to the topic.
- Grant S3 permission to publish to the SNS topic. Create a file named sns-policy.json that allows the S3 service principal to publish to the topic (replace the region, account ID, and bucket name with your own values).
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__default_statement_ID",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:region:account-id:MySNSTopic",
      "Condition": {
        "StringEquals": { "AWS:SourceAccount": "account-id" },
        "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::<BUCKET-NAME>" }
      }
    }
  ]
}
- Attach the policy to the SNS topic.
aws sns set-topic-attributes --topic-arn arn:aws:sns:region:account-id:MySNSTopic --attribute-name Policy --attribute-value file://sns-policy.json
- Good to know, to save time with the policy definition: a typo in the policy (for example in the topic ARN or account ID) does not raise an error when the policy is attached, but the link between S3 and the SNS topic will silently fail to work, which can cost a lot of debugging time.
- Configure S3 to send notifications to SNS. Create a bucket notification configuration file notification.json with the rule below, which makes the bucket fire an event whenever an object is created and send the notification to the SNS topic.
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:region:account-id:MySNSTopic",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
- Attach the configuration to the S3 bucket.
aws s3api put-bucket-notification-configuration --bucket <BUCKET-NAME> --notification-configuration file://notification.json
The result can also be checked from the bucket's Event notifications settings in the console.
- Test the configuration: upload a file and check whether you receive an email notification.
touch my-local-file.txt
aws s3 cp my-local-file.txt s3://<BUCKET-NAME>/
Data Transfer Acceleration:
S3 Transfer Acceleration speeds up uploads by routing traffic through AWS edge locations using the CloudFront network. Useful for globally distributed users with high-latency connections. To use Transfer Acceleration, your S3 bucket must be created in a region that supports this feature.
- Uses a distinct Transfer Acceleration endpoint, in the format: bucket-name.s3-accelerate.amazonaws.com
- The bucket name must follow DNS-compliant naming conventions.
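Assuming an existing bucket, acceleration can be enabled and used like this (bucket and file names are placeholders):

```shell
# Enable Transfer Acceleration on the bucket.
aws s3api put-bucket-accelerate-configuration \
  --bucket <BUCKET-NAME> \
  --accelerate-configuration Status=Enabled

# Upload through the accelerate endpoint instead of the regular one.
aws s3 cp my-local-file.txt s3://<BUCKET-NAME>/ \
  --endpoint-url https://s3-accelerate.amazonaws.com
```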
Static Website Hosting:
S3 can host static websites by enabling Static Website Hosting on a bucket. You configure an index.html and an error.html document, and can combine the bucket with Amazon Route 53 for a custom domain. Common use cases are marketing websites, landing pages, and documentation portals: simple, affordable, and low-maintenance.
Try it: create a static website from an S3 bucket.
- Create an S3 bucket.
aws s3api create-bucket --bucket <bucket-name> --region eu-north-1 --create-bucket-configuration LocationConstraint=eu-north-1
- Enable static website hosting.
aws s3 website s3://<bucket-name>/ --index-document index.html --error-document error.html
- Set a bucket policy for public read access. Create a bucket-policy.json with the following content, replacing bucket-name with your bucket name. (Note: the bucket's Block Public Access settings must be turned off before a public policy can take effect.)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
- Apply the bucket policy to the bucket.
aws s3api put-bucket-policy --bucket <bucket-name> --policy file://bucket-policy.json
- Upload the content of the static website (for example, the files index.html and error.html).
- Verify the static website hosting configuration.
aws s3api get-bucket-website --bucket <bucket-name>
The response shows the configured index and error documents.
- Browse the website at the bucket's website endpoint (for example, http://<bucket-name>.s3-website.eu-north-1.amazonaws.com; some Regions use a hyphen instead of a dot before the Region name).
S3 Object Lock and Glacier Vault Lock:
S3 Object Lock: enables write-once-read-many (WORM) protection so objects cannot be deleted or overwritten for a fixed retention period (or while a legal hold is in place). Glacier Vault Lock: enforces compliance controls for Glacier vaults.
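A sketch of setting up Object Lock with the CLI (Object Lock must be requested when the bucket is created; the bucket name and the 30-day COMPLIANCE retention are illustrative):

```shell
# Object Lock can only be enabled at bucket creation time.
aws s3api create-bucket \
  --bucket <BUCKET-NAME> \
  --region eu-north-1 \
  --create-bucket-configuration LocationConstraint=eu-north-1 \
  --object-lock-enabled-for-bucket

# Apply a default retention rule: new objects cannot be deleted or
# overwritten for 30 days (COMPLIANCE mode cannot be shortened, even by root).
aws s3api put-object-lock-configuration \
  --bucket <BUCKET-NAME> \
  --object-lock-configuration '{
    "ObjectLockEnabled": "Enabled",
    "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 30 } }
  }'
```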
Bonus Knowledge:
Performance Optimization: S3 scales automatically to very high request rates, so there is no need to split data across multiple buckets for performance.
Replication: use Cross-Region Replication (CRR) or Same-Region Replication (SRR) to replicate objects automatically between buckets; versioning must be enabled on both the source and destination buckets.
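A minimal CRR/SRR rule can be sketched as follows (the IAM role ARN and bucket names are placeholders, and both buckets must already have versioning enabled):

```shell
# replication.json: replicate every new object to the destination bucket.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::<ACCOUNT-ID>:role/<REPLICATION-ROLE>",
  "Rules": [
    {
      "ID": "replicate-all",
      "Status": "Enabled",
      "Priority": 1,
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Filter": {},
      "Destination": { "Bucket": "arn:aws:s3:::<DESTINATION-BUCKET>" }
    }
  ]
}
EOF

aws s3api put-bucket-replication \
  --bucket <SOURCE-BUCKET> \
  --replication-configuration file://replication.json
```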
Multi-Part Upload: For large files (over 100 MB), break uploads into parts for faster and more resilient uploads.
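The high-level aws s3 commands (cp, sync, mv) perform multipart uploads automatically; the threshold and part size can be tuned through the CLI configuration (the values below are illustrative):

```shell
# Start multipart uploads for objects larger than 100 MB, in 16 MB parts.
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 16MB

# A subsequent large upload is split into parts, sent in parallel,
# and each part is retried independently on failure.
aws s3 cp big-file.iso s3://<BUCKET-NAME>/
```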
Only when you try will you know:
Try to create a bucket, and you will discover that not all names are valid.
aws s3api create-bucket --bucket <BUCKET-NAME> --region eu-north-1 --create-bucket-configuration LocationConstraint=eu-north-1
Rules for Naming S3 Buckets:
- Globally Unique: the bucket name must be globally unique across all AWS accounts (no two buckets anywhere can share a name).
- Length: the bucket name must be between 3 and 63 characters long.
- Allowed Characters: lowercase letters, numbers, hyphens (-), and periods (.). Uppercase letters, spaces, and special characters such as underscores (_) are not allowed.
- Bucket Name Format: the name must start and end with a lowercase letter or number, and should not contain consecutive hyphens (--). Avoid periods (.) in practice: they interfere with SSL certificate verification for virtual-hosted-style URLs and are not supported by Transfer Acceleration.
- No IP Address Format: the bucket name cannot be in the format of an IP address (e.g., 192.168.1.1).
- DNS-Compatibility: bucket names must be DNS-compliant (RFC 1123), because S3 uses the name as part of the URL for the bucket. For example, my-bucket-name is valid, but my..bucket is invalid due to consecutive periods.
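The core rules above can be pre-checked locally before calling create-bucket; the helper below is our own sketch (the S3 API remains the authoritative validator):

```shell
#!/bin/sh
# Sketch of a local pre-check for S3 bucket names: length 3-63, lowercase
# letters/digits/hyphens/periods, alphanumeric at both ends, no consecutive
# periods, and not shaped like an IPv4 address.
valid_bucket_name() {
  name="$1"
  len=${#name}
  # 3 to 63 characters
  [ "$len" -ge 3 ] && [ "$len" -le 63 ] || return 1
  # allowed characters, alphanumeric at both ends
  printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$' || return 1
  # no consecutive periods
  case "$name" in *..*) return 1 ;; esac
  # must not look like an IPv4 address
  printf '%s' "$name" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' && return 1
  return 0
}

valid_bucket_name "my-bucket-name" && echo "valid"
valid_bucket_name "My_Bucket" || echo "invalid"
```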
-
Private by default? You can check by trying to open an object URL from the bucket in a browser: the request is denied with an access error.
After a bucket is created, you can also verify in the bucket's Permissions settings that Block all public access is ON.
In conclusion, mastering Amazon S3’s essential features—from storage classes to encryption and event notifications—lays a strong foundation for AWS Solutions Architects at the associate level. Whether you’re managing data, optimizing costs, or securing your storage, understanding S3’s capabilities will help you design scalable, efficient, and compliant cloud solutions. As you continue to explore AWS, these fundamentals will guide your approach to building resilient, cost-effective cloud architectures.