Testing S3 notifications locally with LocalStack & Terraform
Walk through an example that shows how to configure an S3 bucket for event notifications using SQS with LocalStack & Terraform
AWS Simple Storage Service (S3) is a proprietary object storage solution that can store an unlimited number of objects. It is a highly scalable, durable, and reliable service that covers a wide range of use cases: hosting a static site, handling big data analytics, managing application logs, storing web assets, and much more!
With S3, you get unlimited storage, with your data organized into buckets. A bucket is analogous to a directory, while an object is just another term for a file. Every object (file) stores the name of the file (key), its contents (value), a version ID, and the associated metadata. An integral part of managing S3 infrastructure is being notified whenever an object is created, deleted, or modified. This is where S3 Event Notifications come into play!
With S3 Event Notifications, you can apply a notification configuration to your buckets so that S3 sends event notification messages to a specified destination. In this article, we will set up a notification configuration that targets an AWS Simple Queue Service (SQS) queue using Terraform and test it locally with LocalStack!
What is LocalStack?
LocalStack is a cloud service emulator that can run in a single container on your local machine or in your CI environment, which lets you run your cloud and serverless applications without connecting to an AWS account.
All cloud resources your application depends on are now available locally, allowing you to run automated tests of your application in an AWS environment without the need for costly AWS developer accounts, slow re-deployments, or transient errors from remote connections.
Let's get started with LocalStack. First, install the LocalStack CLI, which you can do easily through pip using the following command:
pip install localstack
This installs the localstack-cli, which is used to run the Docker image that hosts the LocalStack runtime. Start LocalStack in detached mode by running the following command:
localstack start -d
This starts LocalStack in the background, and you now have a full suite of local AWS services that you can use for testing and experimenting:
     __                     _______ __             __
    / /   ____  _________ _/ / ___// /_____ ______/ /__
   / /   / __ \/ ___/ __ `/ /\__ \/ __/ __ `/ ___/ //_/
  / /___/ /_/ / /__/ /_/ / /___/ / /_/ /_/ / /__/ ,<
 /_____/\____/\___/\__,_/_//____/\__/\__,_/\___/_/|_|
💻 LocalStack CLI 1.0.3
[12:03:04] starting LocalStack in Docker mode 🐳 localstack.py:140
preparing environment bootstrap.py:667
configuring container bootstrap.py:675
starting container bootstrap.py:681
[12:03:06] detaching bootstrap.py:685
LocalStack will now be running on localhost:4566, and you can run the following command in your terminal to see the available services:
curl http://localhost:4566/health | jq
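If you are scripting this setup (for example in CI), you may prefer not to poll the health endpoint by hand. Recent versions of the LocalStack CLI also provide wait and status commands; a minimal sketch (the 60-second timeout is just an illustrative value):
# Block until the LocalStack container reports that it is ready,
# giving up after 60 seconds.
localstack wait -t 60

# Print the status of the emulated services.
localstack status services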
Setting up Terraform with LocalStack
Terraform allows you to automate the management of AWS resources such as containers, Lambda functions, and more by declaring them in the HashiCorp Configuration Language (HCL). In this article, we will use Terraform to create an S3 bucket and then apply a notification configuration using SQS.
Before that, we need to manually configure the local service endpoints and credentials so that Terraform can integrate with LocalStack. We will use the AWS Provider for Terraform to interact with the many AWS resources supported by LocalStack. Create a new file named provider.tf and specify mock credentials for the AWS provider:
provider "aws" {
  region     = "us-east-1"
  access_key = "fake"
  secret_key = "fake"
}
We also need to avoid issues with request routing and credential validation (which we do not need locally). Therefore, we supply some additional parameters:
provider "aws" {
  region     = "us-east-1"
  access_key = "fake"
  secret_key = "fake"

  # only required for non virtual hosted-style endpoint use case.
  # https://registry.terraform.io/providers/hashicorp/aws/latest/docs#s3_force_path_style
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_force_path_style         = true
}
Additionally, we have to point the individual service endpoints to LocalStack. In this case, we use http://localhost:4566 like the following:
endpoints {
  s3  = "http://localhost:4566"
  sqs = "http://localhost:4566"
}
We can further add default tags, pin the required Terraform version (1.2.8 in our case), and set up the AWS provider. The final configuration in our provider.tf to deploy an S3 bucket and set up bucket notifications with SQS looks like this:
provider "aws" {
  region     = "us-east-1"
  access_key = "fake"
  secret_key = "fake"

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_force_path_style         = true

  endpoints {
    s3  = "http://localhost:4566"
    sqs = "http://localhost:4566"
  }

  default_tags {
    tags = {
      Environment = "Local"
      Service     = "LocalStack"
    }
  }
}

terraform {
  required_version = "= 1.2.8"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.60.0, <= 3.69.0"
    }
  }
}
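As a side note, if you would rather not maintain the endpoint overrides and mock credentials by hand, LocalStack also provides terraform-local, a small wrapper around the Terraform CLI that injects this configuration for you. A rough sketch of how it is typically used (in this article we stick to the explicit provider configuration so that every override is visible):
# Install the wrapper, which provides the tflocal command.
pip install terraform-local

# tflocal behaves like terraform, but points the AWS provider at LocalStack.
tflocal init
tflocal apply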
Configuring bucket notifications with SQS
Let us create a new file named main.tf and add the following line:
data "aws_region" "current" {}
It will use the region provided in our provider.tf file (us-east-1 in our case). Now that we have defined the region, let's go ahead and configure an SQS queue using Terraform.
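As a brief aside, the aws_region data source is not strictly required for the notification setup itself, but it is handy whenever you need the region name elsewhere in your configuration, for example to build ARNs. A small illustrative sketch (the output name here is arbitrary):
output "current_region" {
  value = data.aws_region.current.name
}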
We will use the aws_sqs_queue resource type to create an SQS queue named s3-event-notification-queue and attach an access policy that grants S3 permission to send messages to it.
resource "aws_sqs_queue" "queue" {
  name   = "s3-event-notification-queue"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:*:*:s3-event-notification-queue",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "${aws_s3_bucket.bucket.arn}" }
      }
    }
  ]
}
POLICY
}
Let us now use the aws_s3_bucket resource type to create the bucket itself. We will keep the bucket name as your-bucket-name, which you can change depending on your needs.
resource "aws_s3_bucket" "bucket" {
  bucket = "your-bucket-name"
}
Lastly, you can use the aws_s3_bucket_notification resource type to connect the S3 bucket to the SQS queue so that notifications are delivered:
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.bucket.id

  queue {
    queue_arn = aws_sqs_queue.queue.arn
    events    = ["s3:ObjectCreated:*"]
  }
}
The final configuration in our main.tf file looks like this:
# https://docs.aws.amazon.com/sns/latest/api/API_Publish.html#API_Publish_Examples
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_notification
data "aws_region" "current" {}
resource "aws_sqs_queue" "queue" {
  name   = "s3-event-notification-queue"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:*:*:s3-event-notification-queue",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "${aws_s3_bucket.bucket.arn}" }
      }
    }
  ]
}
POLICY
}

resource "aws_s3_bucket" "bucket" {
  bucket = "your-bucket-name"
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.bucket.id

  queue {
    queue_arn = aws_sqs_queue.queue.arn
    events    = ["s3:ObjectCreated:*"]
  }
}
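If you only want to be notified about a subset of objects, the queue block of the aws_s3_bucket_notification resource also supports filter_prefix and filter_suffix arguments. The snippet below is just an illustrative sketch (the logs/ prefix and .txt suffix are example values) showing how the notification could be narrowed down:
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.bucket.id

  queue {
    queue_arn     = aws_sqs_queue.queue.arn
    events        = ["s3:ObjectCreated:*"]
    # Only keys starting with "logs/" and ending in ".txt" trigger a message.
    filter_prefix = "logs/"
    filter_suffix = ".txt"
  }
}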
Sending notifications to SQS
Since our LocalStack container is already running, we can use the AWS CLI to interact with it by passing the --endpoint-url flag. Let us try it out:
aws --endpoint-url http://localhost:4566 \
sqs create-queue --queue-name sample-queue
The following output will be received:
{
    "QueueUrl": "http://localhost:4566/000000000000/sample-queue"
}
Let us go ahead and initialize our Terraform scripts and apply them:
terraform init
terraform plan
terraform apply --auto-approve
Since we are using LocalStack, no actual AWS resources will be created. Instead, LocalStack will create ephemeral development resources, which are automatically cleaned up once you stop LocalStack (using localstack stop).
You will receive the following output:
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creation complete after 0s [id=your-bucket-name]
aws_sqs_queue.queue: Creating...
aws_sqs_queue.queue: Creation complete after 0s [id=http://localhost:4566/000000000000/s3-event-notification-queue]
aws_s3_bucket_notification.bucket_notification: Creating...
aws_s3_bucket_notification.bucket_notification: Creation complete after 0s [id=your-bucket-name]
As you can see, the S3 bucket and the SQS queue have been created! You can now run the following command to list your SQS queues:
aws --endpoint-url http://localhost:4566 sqs list-queues
You will receive the following output:
{
    "QueueUrls": [
        "http://localhost:4566/000000000000/sample-queue",
        "http://localhost:4566/000000000000/s3-event-notification-queue"
    ]
}
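You can also verify that the access policy we declared in Terraform was attached to the queue, which is what allows S3 to send messages to it. A quick check (QueueArn and Policy are standard SQS queue attributes):
aws --endpoint-url http://localhost:4566 \
  sqs get-queue-attributes \
  --queue-url http://localhost:4566/000000000000/s3-event-notification-queue \
  --attribute-names QueueArn Policy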
To trigger a notification, let us create a new file named app.txt and add some content to it. We will copy this file into our newly created S3 bucket named your-bucket-name:
aws --endpoint-url http://localhost:4566 \
s3 cp app.txt s3://your-bucket-name/
The following output will be displayed:
upload: ./app.txt to s3://your-bucket-name/app.txt
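Optionally, you can confirm that the object landed in the bucket before looking at the queue:
aws --endpoint-url http://localhost:4566 \
  s3 ls s3://your-bucket-name/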
Now that we have uploaded a file, a notification should have been sent to our SQS queue. Let's check by running the following command:
aws --endpoint-url http://localhost:4566 \
sqs receive-message \
--queue-url http://localhost:4566/000000000000/s3-event-notification-queue \
| jq -r '.Messages[0].Body' | jq .
Here, the queue-url refers to the SQS queue URL that was created when we provisioned the queue and attached it to the S3 bucket. You will receive the following output:
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "2022-08-28T10:22:14.692Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "d09ba947",
        "x-amz-id-2": "eftixk72aD6Ap51TnqcoF8eFidJG9Z/2"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "tf-s3-queue-20220828102212692800000001",
        "bucket": {
          "name": "your-bucket-name",
          "ownerIdentity": {
            "principalId": "A3NL1KOZZKExample"
          },
          "arn": "arn:aws:s3:::your-bucket-name"
        },
        "object": {
          "key": "app.txt",
          "size": 6,
          "eTag": "\"09f7e02f1290be211da707a266f153b3\"",
          "versionId": null,
          "sequencer": "0055AED6DCD90281E5"
        }
      }
    }
  ]
}
The record shows that an object (file) named app.txt was created, along with metadata such as the eventVersion, the eventSource, the bucket details, and the object's key, size, and ETag.
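In a real consumer you would also delete the message once it has been processed; otherwise SQS makes it visible again after the visibility timeout expires and it would be received twice. A minimal sketch with the AWS CLI (note that the message we just received only becomes visible again after the default visibility timeout, so a second receive-message call may need to wait):
# Receive a message and capture its receipt handle.
RECEIPT_HANDLE=$(aws --endpoint-url http://localhost:4566 \
  sqs receive-message \
  --queue-url http://localhost:4566/000000000000/s3-event-notification-queue \
  | jq -r '.Messages[0].ReceiptHandle')

# Acknowledge the message by deleting it from the queue.
aws --endpoint-url http://localhost:4566 \
  sqs delete-message \
  --queue-url http://localhost:4566/000000000000/s3-event-notification-queue \
  --receipt-handle "$RECEIPT_HANDLE"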
Conclusion
S3 Event Notifications are integral to managing your object storage and keeping track of changes across your S3 infrastructure. In the above example, we used SQS to receive S3 notifications, but this is just one way! In future blogs, we will show how you can create an SQS event source mapping and process the notifications with a Lambda function!
Check out the code in the LocalStack Terraform samples, and explore our extended documentation on SQS and Terraform.