<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[LocalStack]]></title><description><![CDATA[LocalStack is an easy-to-use test/mocking framework for developing cloud and serverless applications on AWS and get the same functionality you would get from a ]]></description><link>https://hashnode.localstack.cloud</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1661274123110/-UJbfDvAS.png</url><title>LocalStack</title><link>https://hashnode.localstack.cloud</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 22:25:02 GMT</lastBuildDate><atom:link href="https://hashnode.localstack.cloud/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Testing Events Archive and Replay with LocalStack & EventBridge]]></title><description><![CDATA[EventBridge allows you to manage events across different AWS services and applications. 
Event sources for EventBridge can include a vast array of AWS services that are natively integrated, third-party applications through integrations, or custom appl...]]></description><link>https://hashnode.localstack.cloud/testing-events-archive-and-replay-with-localstack-eventbridge</link><guid isPermaLink="true">https://hashnode.localstack.cloud/testing-events-archive-and-replay-with-localstack-eventbridge</guid><category><![CDATA[AWS EventBridge]]></category><category><![CDATA[AWS]]></category><category><![CDATA[event-driven-architecture]]></category><category><![CDATA[localstack]]></category><category><![CDATA[events]]></category><dc:creator><![CDATA[Harsh Bardhan Mishra]]></dc:creator><pubDate>Mon, 19 Aug 2024 11:04:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723713507708/dd808cdc-ed85-4156-88e2-0d02e7f5aa83.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://aws.amazon.com/eventbridge/">EventBridge</a> allows you to manage events across different AWS services and applications. Event sources for EventBridge can include a vast array of AWS services that are natively integrated, third-party applications through integrations, or custom applications. Similarly, events can be directed to numerous AWS services or custom applications via API endpoints.</p>
<p>EventBridge buses are, by design, transient carriers for events, which reinforces the black-box nature of event-driven systems: it is difficult to track which events came through and where they went. This can be partially mitigated with archives, wherein you configure a rule on your event bus to automatically save all incoming events to an archive. The replay feature then lets you replay the stored events on the event bus.</p>
<p><a target="_blank" href="https://localstack.cloud/">LocalStack</a> allows you to emulate and test this archive and replay workflow on your local machine. In this tutorial, you'll create a <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/lambda/">Lambda function locally</a> using LocalStack &amp; <a target="_blank" href="https://aws.amazon.com/cli/">AWS CLI</a> to serve as the target for an EventBridge rule. You will then create an <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-archive-event.html">archive</a>, and once events are stored in the archive, you will <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-replay-archived-event.html">replay</a> them.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722924693907/9622cf55-e0b6-40fc-9d72-c67c6435b748.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p><a target="_blank" href="https://docs.localstack.cloud/getting-started/installation/#localstack-cli">LocalStack CLI</a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/get-docker/">Docker</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html">AWS CLI</a> with <a target="_blank" href="https://docs.localstack.cloud/user-guide/integrations/aws-cli/#localstack-aws-cli-awslocal"><code>awslocal</code> wrapper</a></p>
</li>
<li><p><a target="_blank" href="https://nodejs.org/en/download/package-manager">Node.js</a> &amp; <a target="_blank" href="https://infozip.sourceforge.net/"><code>zip</code></a></p>
</li>
<li><p><a target="_blank" href="https://app.localstack.cloud/dashboard">LocalStack Web Application</a> account (optional)</p>
</li>
</ul>
<h2 id="heading-start-your-localstack-container">Start your LocalStack container</h2>
<p>Launch the LocalStack container on your local machine using the specified command:</p>
<pre><code class="lang-bash">PROVIDER_OVERRIDE_EVENTS=v2 localstack start
</code></pre>
<p>The configuration variable <code>PROVIDER_OVERRIDE_EVENTS=v2</code> enables you to set the <a target="_blank" href="https://discuss.localstack.cloud/t/introducing-eventbridge-v2-in-localstack/946">EventBridge v2 provider</a>, which enhances emulation for EventBridge features like buses, rules, patterns, and targets, and supports archive &amp; replay.</p>
<p>Once initiated, you'll receive a confirmation output indicating that the LocalStack container is up and running.</p>
<pre><code class="lang-bash">
     __                     _______ __             __
    / /   ____  _________ _/ / ___// /_____ ______/ /__
   / /   / __ \/ ___/ __ `/ /\__ \/ __/ __ `/ ___/ //_/
  / /___/ /_/ / /__/ /_/ / /___/ / /_/ /_/ / /__/ ,&lt;
 /_____/\____/\___/\__,_/_//____/\__/\__,_/\___/_/|_|

 💻 LocalStack CLI 3.6.0
 👤 Profile: default

[20:19:34] starting LocalStack <span class="hljs-keyword">in</span>    localstack.py:503
           Docker mode 🐳
...
LocalStack version: 3.6.1.dev20240725091954
LocalStack build date: 2024-07-26
LocalStack build git <span class="hljs-built_in">hash</span>: d536652
</code></pre>
<h2 id="heading-create-a-lambda-function">Create a Lambda Function</h2>
<p>Since the event bus is transient, you need a target to verify what events pass through your system. You can create a Lambda function to log the events.</p>
<p>To begin, create a new file named <code>index.js</code>. Add the following code to log events:</p>
<pre><code class="lang-javascript"><span class="hljs-meta">'use strict'</span>;

<span class="hljs-built_in">exports</span>.handler = <span class="hljs-function">(<span class="hljs-params">event, context, callback</span>) =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'LogScheduledEvent'</span>);
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Received event:'</span>, <span class="hljs-built_in">JSON</span>.stringify(event, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>));
    callback(<span class="hljs-literal">null</span>, <span class="hljs-string">'Finished'</span>);
};
</code></pre>
<p>This code logs details of incoming events and concludes by sending a <code>Finished</code> message via the callback.</p>
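<p>Before packaging, you can smoke-test the handler logic directly in Node.js. The following is a local sketch that inlines the same handler shape; it is not part of the deployment:</p>

```javascript
// Local smoke test: invoke the same handler shape with a sample
// EventBridge-style event and capture the callback result.
const handler = (event, context, callback) => {
    console.log('LogScheduledEvent');
    console.log('Received event:', JSON.stringify(event, null, 2));
    callback(null, 'Finished');
};

let result;
handler(
    { 'detail-type': 'customerCreated', source: 'TestEvent', detail: {} },
    {},
    (err, res) => { result = res; }
);
console.log(result); // prints: Finished
```

<p>Running this with <code>node</code> prints the event and the <code>Finished</code> result, confirming the callback path works before you deploy.</p>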
<p>Next, package this file into a ZIP archive with the command:</p>
<pre><code class="lang-bash">zip function.zip index.js
</code></pre>
<p>Deploy the Lambda function using the following command:</p>
<pre><code class="lang-bash">awslocal lambda create-function \
    --function-name LogScheduledEvent \
    --runtime nodejs18.x \
    --role arn:aws:iam::000000000000:role/lambda-ex \
    --handler index.handler \
    --zip-file fileb://function.zip
</code></pre>
<p>The command outputs details about the function, including its name, pending state, and other configuration data:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"FunctionName"</span>: <span class="hljs-string">"LogScheduledEvent"</span>,
    ...
    <span class="hljs-string">"State"</span>: <span class="hljs-string">"Pending"</span>,
    <span class="hljs-string">"StateReason"</span>: <span class="hljs-string">"The function is being created."</span>,
    <span class="hljs-string">"StateReasonCode"</span>: <span class="hljs-string">"Creating"</span>,
    ...
    <span class="hljs-string">"EphemeralStorage"</span>: {
        <span class="hljs-string">"Size"</span>: 512
    },
    ...
    <span class="hljs-string">"RuntimeVersionConfig"</span>: {
        <span class="hljs-string">"RuntimeVersionArn"</span>: <span class="hljs-string">"arn:aws:lambda:us-east-1::runtime:8eeff65f6809a3ce81507fe733fe09b835899b99481ba22fd7
5b5a7338290ec1"</span>
    }
}
</code></pre>
<h2 id="heading-create-an-eventbridge-archive">Create an EventBridge Event Bus</h2>
<p>You can set up an EventBridge archive to store past events, either indefinitely or for a specified number of days. When configuring the archive, you can capture all events from an event bus, or apply an event pattern so that only events meeting specific criteria are stored.</p>
<p>To start, create a custom event bus named <code>test-event-bus</code> to receive events, using this command:</p>
<pre><code class="lang-bash">awslocal events create-event-bus --name test-event-bus
</code></pre>
<p>The output confirms the creation of the event bus:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"EventBusArn"</span>: <span class="hljs-string">"arn:aws:events:us-east-1:000000000000:event-bus/test-event-bus"</span>
}
</code></pre>
<h2 id="heading-create-an-eventbridge-rule">Create an EventBridge Rule</h2>
<p>Next, create a rule named <code>ARTestRule</code> on the event bus. EventBridge rules let you route specific events according to your needs: a rule filters events based on a defined pattern and sends matching events to its attached targets.</p>
<p>To create the rule, execute the following command:</p>
<pre><code class="lang-bash">awslocal events put-rule \
    --name ARTestRule \
    --event-bus-name test-event-bus \
    --event-pattern <span class="hljs-string">'{
      "detail-type": ["customerCreated"]
    }'</span>
</code></pre>
<p>You should see the following output:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"RuleArn"</span>: <span class="hljs-string">"arn:aws:events:us-east-1:000000000000:rule/test-event-bus/ARTestRule"</span>
}
</code></pre>
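<p>Conceptually, the rule matches any event whose fields contain one of the values listed in the pattern. Here is a minimal sketch of that matching logic for flat, exact-match patterns like the one used by <code>ARTestRule</code> (real EventBridge patterns also support nesting, prefix matching, numeric ranges, and more):</p>

```javascript
// Minimal sketch of EventBridge matching for flat, exact-match patterns.
// Each pattern field lists the allowed values; an event matches when every
// pattern field contains the event's value for that field.
function matchesPattern(event, pattern) {
    return Object.entries(pattern).every(
        ([field, allowed]) => allowed.includes(event[field])
    );
}

const pattern = { 'detail-type': ['customerCreated'] };

console.log(matchesPattern({ 'detail-type': 'customerCreated' }, pattern)); // true
console.log(matchesPattern({ 'detail-type': 'orderShipped' }, pattern));    // false
```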
<h2 id="heading-create-an-eventbridge-target">Create an EventBridge Target</h2>
<p>You can then choose a target. Targets are entities that consume your events. When an event triggers a rule, all targets linked to that rule are activated.</p>
<p>In this case, you can use the Lambda function as the target by retrieving the ARN of the Lambda function:</p>
<pre><code class="lang-bash">awslocal lambda get-function \
    --function-name LogScheduledEvent \
    --query <span class="hljs-string">'Configuration.FunctionArn'</span> \
    --output text
</code></pre>
<p>The Lambda ARN output will be:</p>
<pre><code class="lang-bash">arn:aws:lambda:us-east-1:000000000000:<span class="hljs-keyword">function</span>:LogScheduledEvent
</code></pre>
<p>Finally, you can associate the Lambda function as a target to the <code>ARTestRule</code> using this command:</p>
<pre><code class="lang-bash">awslocal events put-targets \
    --rule ARTestRule \
    --event-bus-name test-event-bus \
    --targets <span class="hljs-string">'[
      {
        "Id": "1",
        "Arn": "arn:aws:lambda:us-east-1:000000000000:function:LogScheduledEvent"
      }
    ]'</span>
</code></pre>
<p>You should see the following output:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"FailedEntryCount"</span>: 0,
    <span class="hljs-string">"FailedEntries"</span>: []
}
</code></pre>
<h2 id="heading-create-an-eventbridge-archive-1">Create an EventBridge Archive</h2>
<p>Next, establish an archive named <code>ArchiveTest</code> using the ARN from your event bus:</p>
<pre><code class="lang-bash">awslocal events create-archive \
    --archive-name ArchiveTest \
    --event-source-arn arn:aws:events:us-east-1:000000000000:event-bus/test-event-bus
</code></pre>
<p>The output shows the archive's ARN, its enabled state, and creation time:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"ArchiveArn"</span>: <span class="hljs-string">"arn:aws:events:us-east-1:000000000000:archive/ArchiveTest"</span>,
    <span class="hljs-string">"State"</span>: <span class="hljs-string">"ENABLED"</span>,
    <span class="hljs-string">"CreationTime"</span>: <span class="hljs-string">"2024-08-03T19:03:00.917596+05:30"</span>
}
</code></pre>
<h2 id="heading-send-a-test-event">Send a test event</h2>
<p>With the archive and rule configured, you can send a test event to verify that the system is functioning as expected. Execute the following command to send a test event:</p>
<pre><code class="lang-bash">awslocal events put-events --entries <span class="hljs-string">'[
  {
    "Source": "TestEvent",
    "DetailType": "customerCreated",
    "Detail": "{}",
    "EventBusName": "test-event-bus"
  }
]'</span>
</code></pre>
<p>In this command:</p>
<ul>
<li><p>The event source is identified as <code>TestEvent</code>.</p>
</li>
<li><p>The event is sent to the <code>test-event-bus</code>.</p>
</li>
<li><p>The event detail is empty (<code>{}</code>) and the type is <code>customerCreated</code>.</p>
</li>
</ul>
<p>The output should confirm successful event submission:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"FailedEntryCount"</span>: 0,
    <span class="hljs-string">"Entries"</span>: [
        {
            <span class="hljs-string">"EventId"</span>: <span class="hljs-string">"ee947775-0dcd-405c-8ba9-e100dcf244fa"</span>
        }
    ]
}
</code></pre>
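<p>The same entry can also be built programmatically. One detail worth highlighting: <code>Detail</code> must be a JSON <em>string</em>, not an object, which is a common source of <code>put-events</code> validation errors. A small sketch:</p>

```javascript
// Sketch: the test event's entry built in code. Note that Detail is
// serialized to a JSON string ("{}"), not passed as a raw object.
const entry = {
    Source: 'TestEvent',
    DetailType: 'customerCreated',
    Detail: JSON.stringify({}),
    EventBusName: 'test-event-bus',
};

console.log(JSON.stringify([entry], null, 2));
```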
<h2 id="heading-replay-the-event">Replay the event</h2>
<p>Once test events are stored in the archive, you can replay them. To replay events, you need to specify both the source and the event bus they should be replayed into. Events must be replayed into the same event bus from which they were collected. You can also define a specific time window during which you want the events to be replayed into the bus.</p>
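<p>Conceptually, the replay selects only those archived events whose timestamps fall inside the requested window. A small sketch of that selection (event shapes here are illustrative, not the archive's actual storage format):</p>

```javascript
// Sketch: a replay delivers only archived events whose timestamps fall
// inside the requested window, mirroring the --event-start-time and
// --event-end-time flags used below.
const archived = [
    { id: 'before', time: new Date('2024-07-30T10:00:00Z') },
    { id: 'inside', time: new Date('2024-08-03T19:05:00Z') },
    { id: 'after',  time: new Date('2024-08-07T00:00:00Z') },
];

const start = new Date('2024-08-01T00:00:00Z');
const end   = new Date('2024-08-06T00:00:00Z');

const replayed = archived.filter((e) => e.time >= start && e.time <= end);
console.log(replayed.map((e) => e.id)); // [ 'inside' ]
```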
<p>Run the following command to start the replay:</p>
<pre><code class="lang-bash">awslocal events start-replay \
    --replay-name ReplayTest \
    --event-source-arn arn:aws:events:us-east-1:000000000000:archive/ArchiveTest \
    --event-start-time 2024-08-01 --event-end-time 2024-08-06 \
    --destination <span class="hljs-string">'{"Arn":"arn:aws:events:us-east-1:000000000000:event-bus/test-event-bus"}'</span>
</code></pre>
<p>In the above command:</p>
<ul>
<li><p>The replay name is specified as <code>ReplayTest</code>.</p>
</li>
<li><p>The event source ARN is the archive's ARN.</p>
</li>
<li><p>The event start and end times are chosen based on the article's date but can be customized.</p>
</li>
<li><p>The destination is the ARN for the <code>test-event-bus</code>.</p>
</li>
</ul>
<p>The output should look like this:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"ReplayArn"</span>: <span class="hljs-string">"arn:aws:events:us-east-1:000000000000:replay/ReplayTest"</span>,
    <span class="hljs-string">"State"</span>: <span class="hljs-string">"COMPLETED"</span>,
    <span class="hljs-string">"ReplayStartTime"</span>: <span class="hljs-string">"2024-08-06T12:58:01.405559+05:30"</span>
}
</code></pre>
<p>To further inspect the replay details, use the command:</p>
<pre><code class="lang-bash">awslocal events describe-replay --replay-name ReplayTest
</code></pre>
<p>The description output will provide detailed information about the replay:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"ReplayName"</span>: <span class="hljs-string">"ReplayTest"</span>,
    <span class="hljs-string">"ReplayArn"</span>: <span class="hljs-string">"arn:aws:events:us-east-1:000000000000:replay/ReplayTest"</span>,
    <span class="hljs-string">"State"</span>: <span class="hljs-string">"COMPLETED"</span>,
    <span class="hljs-string">"EventSourceArn"</span>: <span class="hljs-string">"arn:aws:events:us-east-1:000000000000:archive/ArchiveTest"</span>,
    <span class="hljs-string">"Destination"</span>: {
        <span class="hljs-string">"Arn"</span>: <span class="hljs-string">"arn:aws:events:us-east-1:000000000000:event-bus/test-event-bus"</span>
    },
    ...
}
</code></pre>
<p>Replayed events will contain metadata to distinguish them from original events. You can check LocalStack logs for Lambda invocation details:</p>
<pre><code class="lang-bash">2024-08-06T07:29:24.039  INFO --- [et.reactor-1] localstack.request.aws     : AWS events.StartReplay =&gt; 200
2024-08-06T07:29:24.681  INFO --- [et.reactor-0] localstack.request.http    : POST /_localstack_lambda/58324d0bbf2060f95e80830df11f08a8/status/58324d0bbf2060f95e80830df11f08a8/ready =&gt; 202
2024-08-06T07:29:24.720  INFO --- [et.reactor-1] localstack.request.http    : POST /_localstack_lambda/58324d0bbf2060f95e80830df11f08a8/invocations/091a672b-a832-4824-a4cc-d56c75ed9893/logs =&gt; 202
2024-08-06T07:29:24.721  INFO --- [et.reactor-0] localstack.request.http    : POST /_localstack_lambda/58324d0bbf2060f95e80830df11f08a8/invocations/091a672b-a832-4824-a4cc-d56c75ed9893/response =&gt; 202
</code></pre>
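<p>Per the AWS documentation, the metadata that distinguishes a replayed event is a <code>replay-name</code> field added to the event envelope, which original events lack, so telling them apart is a simple field check. A sketch:</p>

```javascript
// Replayed events carry a "replay-name" field naming the replay that
// produced them; original events do not (field name per AWS docs).
const isReplayed = (event) => 'replay-name' in event;

const original = { 'detail-type': 'customerCreated', source: 'TestEvent' };
const replayedEvent = { ...original, 'replay-name': 'ReplayTest' };

console.log(isReplayed(original));      // false
console.log(isReplayed(replayedEvent)); // true
```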
<p>On the <a target="_blank" href="https://app.localstack.cloud/inst/default/resources/cloudwatch/groups">CloudWatch Logs Resource Browser</a>, you can inspect the <code>/aws/lambda/LogScheduledEvent</code> log to view the received event:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722929441259/612dadd0-9e8a-4567-a796-16e32575ac75.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>With EventBridge Archive &amp; Replay, you can capture and replay past events to troubleshoot issues or reprocess events through newly added functionalities without manually storing your events or setting up an additional infrastructure layer like <a target="_blank" href="https://aws.amazon.com/what-is/dead-letter-queue/">Dead Letter Queues (DLQ)</a>, which can be time-consuming depending on the event consumer. LocalStack lets you test these event-driven workflows locally, simplifying the development loop and reducing development costs.</p>
<p>LocalStack’s EventBridge emulation supports testing event transmission across different AWS regions and accounts, integration with various targets (such as <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/sns/">SNS</a>, <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/sqs/">SQS</a>, <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/stepfunctions/">Step Functions</a>), and <a target="_blank" href="https://docs.localstack.cloud/user-guide/state-management/cloud-pods/">Cloud Pods</a> for state sharing &amp; collaboration. In upcoming blog posts, we will delve into some of these use cases, showcasing how LocalStack can make the black box of event-driven systems transparent for developers.</p>
]]></content:encoded></item><item><title><![CDATA[How to debug AWS ECS Tasks locally using LocalStack and VS Code]]></title><description><![CDATA[LocalStack offers some important features that can improve the developer experience of building applications on ECS, including hot-reloading and debugging for compute infrastructure like Lambda and ECS.
LocalStack allows you to debug your application...]]></description><link>https://hashnode.localstack.cloud/how-to-debug-aws-ecs-tasks-locally-using-localstack-and-vs-code</link><guid isPermaLink="true">https://hashnode.localstack.cloud/how-to-debug-aws-ecs-tasks-locally-using-localstack-and-vs-code</guid><category><![CDATA[localstack]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ECS]]></category><category><![CDATA[vscode]]></category><category><![CDATA[debugging]]></category><dc:creator><![CDATA[Harsh Bardhan Mishra]]></dc:creator><pubDate>Fri, 09 Aug 2024 10:01:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723196342991/34e05f19-663d-45dc-bb22-865317f4c0a2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>LocalStack offers some important features that can improve the developer experience of building applications on ECS, including hot-reloading and debugging for compute infrastructure like <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/lambda/">Lambda</a> and <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/ecs/">ECS</a>.</p>
<p>LocalStack allows you to debug your application code in ECS tasks by setting breakpoints and improving your development and testing process without the requirement to deploy to the real cloud. This post will show you how to set up a local ECS cluster and task, open a remote debugging port on the LocalStack container, configure VS Code for debugging, and run the debugger.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p><a target="_blank" href="https://docs.localstack.cloud/getting-started/installation/#localstack-cli"><code>localstack</code> CLI</a> with the <a target="_blank" href="https://docs.localstack.cloud/getting-started/auth-token/"><code>LOCALSTACK_AUTH_TOKEN</code></a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/get-docker/">Docker</a></p>
</li>
<li><p><a target="_blank" href="https://code.visualstudio.com/download">Visual Studio Code</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cdk/latest/guide/work-with-cdk-typescript.html">AWS CDK</a> with <a target="_blank" href="https://docs.localstack.cloud/user-guide/integrations/aws-cdk"><code>cdklocal</code></a></p>
</li>
<li><p><a target="_blank" href="https://nodejs.org/en/download/prebuilt-binaries">Node.js</a> &amp; <code>npm</code></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html">AWS CLI</a> with <a target="_blank" href="https://docs.localstack.cloud/user-guide/integrations/aws-cli/#localstack-aws-cli-awslocal"><code>awslocal</code> wrapper</a> (optional)</p>
</li>
<li><p><a target="_blank" href="https://curl.se/"><code>curl</code></a></p>
</li>
</ul>
<h2 id="heading-nodejs-app-on-ecs-with-elastic-load-balancer">Node.js app on ECS with Elastic Load Balancer</h2>
<p>The sample application uses <a target="_blank" href="https://github.com/aws/aws-cdk">AWS CDK</a> to deploy a Node.js containerized app on <a target="_blank" href="https://aws.amazon.com/fargate/">AWS Fargate</a> within an <a target="_blank" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html">ECS cluster</a>. The CDK stack:</p>
<ul>
<li><p>Creates a local VPC and an ECS Cluster.</p>
</li>
<li><p>Builds and pushes the Docker image to a local ECR repository.</p>
</li>
<li><p>Adds task and container definitions for the local ECS tasks.</p>
</li>
<li><p>Runs containers on Fargate, distributing traffic using ELB.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723015645943/a97b2b4a-d539-4d68-b398-9effa224b954.png" alt class="image--center mx-auto" /></p>
<p>After deploying the CDK stack, you will create a VS Code task &amp; launch configuration and attach the debugger to the ECS task.</p>
<h3 id="heading-start-your-localstack-container">Start your LocalStack container</h3>
<p>Launch the LocalStack container on your local machine using the specified command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> LOCALSTACK_AUTH_TOKEN=&lt;your-auth-token&gt;
ECS_DOCKER_FLAGS=<span class="hljs-string">"-e NODE_OPTIONS=--inspect-brk=0.0.0.0:9229 -p 9229:9229"</span> \
localstack start
</code></pre>
<p>The configuration variable <code>ECS_DOCKER_FLAGS</code> passes additional flags to Docker when LocalStack creates ECS task containers. Here, <code>NODE_OPTIONS=--inspect-brk=0.0.0.0:9229</code> makes Node.js pause on startup and listen for a debugger, while <code>-p 9229:9229</code> publishes that port on the host machine, allowing you to remotely attach your debugger to the ECS task.</p>
<blockquote>
<p>Alternatively, you can utilize the <a target="_blank" href="https://github.com/localstack-samples/sample-cdk-ecs-elb/blob/main/devops-tooling/docker-compose.yml">Docker Compose configuration</a> provided in the repository to start the LocalStack container with the following command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> devops-tooling &amp;&amp; docker compose -p ecslb up
</code></pre>
</blockquote>
<h3 id="heading-install-the-dependencies">Install the dependencies</h3>
<p>To begin, fork the <a target="_blank" href="https://github.com/localstack-samples/sample-cdk-ecs-elb">LocalStack sample repository on GitHub</a> using this command:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> git@github.com:localstack-samples/sample-cdk-ecs-elb.git
</code></pre>
<p>After cloning the repository, navigate to the <code>iac/awscdk</code> directory and install all the required dependencies with the following command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> iac/awscdk
npm install
</code></pre>
<h3 id="heading-deploy-the-application-locally">Deploy the application locally</h3>
<p>To deploy the application, you should use <a target="_blank" href="https://github.com/localstack/aws-cdk-local"><code>cdklocal</code></a>, which is a wrapper script for utilizing the AWS Cloud Development Kit (CDK) with local APIs provided by LocalStack.</p>
<p>Start by ensuring that each AWS environment you plan to deploy resources to is bootstrapped. Run the following command in the <code>iac/awscdk</code> directory:</p>
<pre><code class="lang-bash">cdklocal bootstrap
</code></pre>
<p>Next, you can deploy the CDK stack using this command in the <code>iac/awscdk</code> directory:</p>
<pre><code class="lang-bash">cdklocal deploy
</code></pre>
<p>After a successful deployment, you should see output similar to the following:</p>
<pre><code class="lang-bash">✅  RepoStack

✨  Deployment time: 20.15s

Outputs:
RepoStack.MyFargateServiceLoadBalancerDNS704F6391 = lb-bf1b158e.elb.localhost.localstack.cloud
RepoStack.MyFargateServiceServiceURL4CF8398A = http://lb-bf1b158e.elb.localhost.localstack.cloud
RepoStack.localstackserviceslb = lb-bf1b158e.elb.localhost.localstack.cloud:4566
RepoStack.serviceslb = lb-bf1b158e.elb.localhost.localstack.cloud
Stack ARN:
arn:aws:cloudformation:us-east-1:000000000000:stack/RepoStack/77d40ed8

✨  Total time: 22.55s
</code></pre>
<p>You can now use the URLs provided to send requests to the Node.js application running in the ECS task locally. To inspect the ELB endpoint further, if you have <code>awslocal</code> installed, run the following command:</p>
<pre><code class="lang-bash">awslocal elbv2 describe-load-balancers --query <span class="hljs-string">'LoadBalancers[0].DNSName'</span> --output text
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">lb-bf1b158e.elb.localhost.localstack.cloud
</code></pre>
<h3 id="heading-configure-vs-code-for-remote-debugging">Configure VS Code for remote debugging</h3>
<p>While starting LocalStack, you configured <code>ECS_DOCKER_FLAGS</code> to enable the required configuration for remote debugging. You can now add a <a target="_blank" href="https://code.visualstudio.com/docs/editor/tasks">VS Code task</a> to wait for the remote debugging server. Create a <code>tasks.json</code> file in the <code>.vscode</code> directory of the project and add the following:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"version"</span>: <span class="hljs-string">"2.0.0"</span>,
    <span class="hljs-attr">"tasks"</span>: [
        {
            <span class="hljs-attr">"label"</span>: <span class="hljs-string">"Wait Remote Debugger Server"</span>,
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"shell"</span>,
            <span class="hljs-attr">"command"</span>: <span class="hljs-string">"while [[ -z $(docker ps | grep :9229) ]]; do sleep 1; done; sleep 1;"</span>
        }
    ]
}
</code></pre>
<p>Next, you can define how VS Code should connect to the remote Node.js application. Create a new <code>launch.json</code> file in the <code>.vscode</code> directory of the project and add the following:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"version"</span>: <span class="hljs-string">"0.2.0"</span>,
    <span class="hljs-attr">"configurations"</span>: [
        {
            <span class="hljs-attr">"address"</span>: <span class="hljs-string">"127.0.0.1"</span>,
            <span class="hljs-attr">"localRoot"</span>: <span class="hljs-string">"${workspaceFolder}"</span>,
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Attach to Remote Node.js"</span>,
            <span class="hljs-attr">"port"</span>: <span class="hljs-number">9229</span>,
            <span class="hljs-attr">"remoteRoot"</span>: <span class="hljs-string">"/app"</span>,
            <span class="hljs-attr">"request"</span>: <span class="hljs-string">"attach"</span>,
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"node"</span>,
            <span class="hljs-attr">"preLaunchTask"</span>: <span class="hljs-string">"Wait Remote Debugger Server"</span>
        }
    ]
}
</code></pre>
<h3 id="heading-run-the-debugging-process">Run the debugging process</h3>
<p>For the debugging process, you can set <a target="_blank" href="https://code.visualstudio.com/docs/editor/debugging#_breakpoints">breakpoints</a> in the Node.js application code. Breakpoints allow you to pause the code execution on a specific line to inspect it. To run the debugger:</p>
<ul>
<li><p>Click on <strong>Run and Debug</strong> in your VS Code session.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723008324621/e16740f9-38ae-4390-a88a-beef46d968c8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Select the <strong>Attach to Remote Node.js</strong> configuration and click <strong>Start Debugging</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723008472995/17e89a32-b723-4cea-acc6-67eb7908ec0d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Navigate to <code>src/app/server.js</code> in your VS Code and set breakpoints (for example, on line 5).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723008574888/5ec6bfdd-c32b-4dc6-99b0-39d75d13bf6a.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Open your terminal and execute the following command to trigger the code where you set the breakpoint:</p>
<pre><code class="lang-bash">  curl <span class="hljs-string">"lb-bf1b158e.elb.localhost.localstack.cloud:4566"</span>
</code></pre>
<p>  Replace the endpoint URL with the appropriate URL in your setup.</p>
</li>
</ul>
<p>You will now see the remote debugging in action on your VS Code:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723196567473/40c511ad-9a2f-4dae-a1c9-3e96f441a866.png" alt class="image--center mx-auto" /></p>
<p>You can now inspect the values of any identifiers within the current scope using the <strong>Variables</strong> and <strong>Watch</strong> pane in VS Code. Additionally, use the debug toolbar at the top of the editor to step through the code, continue execution, or manage breakpoints during debugging.</p>
<p>After the debugging process is completed, the command you ran previously will produce the following output:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723017502912/338145c7-3d86-48f2-9755-27c53817aca4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>That brings us to the end of our tour of how you can debug ECS tasks locally with LocalStack. You can mount code from the host filesystem into the ECS container for hot reloading, enabling quick debugging and testing without needing to rebuild and redeploy the Docker image each time. LocalStack not only lets you deploy and test your infrastructure but also offers various developer experience (<strong>DevEx</strong>) features, allowing you to develop your application without interacting with the actual cloud throughout your software development lifecycle.</p>
]]></content:encoded></item><item><title><![CDATA[Running an EC2 instance locally using LocalStack and AWS CLI]]></title><description><![CDATA[EC2, or Elastic Compute Cloud, enables users to create and manage a virtual machine in the cloud. It's a pivotal service that marked the beginning of cloud computing and is often the first service users learn & explore in the AWS ecosystem. However, ...]]></description><link>https://hashnode.localstack.cloud/running-an-ec2-instance-locally-using-localstack-and-aws-cli</link><guid isPermaLink="true">https://hashnode.localstack.cloud/running-an-ec2-instance-locally-using-localstack-and-aws-cli</guid><category><![CDATA[localstack]]></category><category><![CDATA[ec2]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Harsh Bardhan Mishra]]></dc:creator><pubDate>Wed, 24 Jul 2024 11:20:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721817967896/f3ceac54-107e-4b75-abc1-e175614e1183.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>EC2, or Elastic Compute Cloud, enables users to create and manage a virtual machine in the cloud. It's a pivotal service that marked the beginning of cloud computing and is often the first service users learn &amp; explore in the AWS ecosystem. However, there are many horror tales about people leaving an EC2 instance running and receiving a wallet-busting bill. Now, the burning question: How can someone safely run an EC2 instance for some casual learning or testing without overstepping on the free tier and playing hide-and-seek with AWS billing?</p>
<p>LocalStack is a core cloud emulator that allows you to emulate various AWS services on your local machine <strong>without needing a real AWS account</strong>. LocalStack essentially operates within a Docker container, emulating or mocking different cloud services. This setup lets you connect your integrations (like <a target="_blank" href="https://docs.localstack.cloud/user-guide/integrations/terraform/">Terraform</a>, <a target="_blank" href="https://docs.localstack.cloud/user-guide/integrations/aws-cdk/">CDK</a>, <a target="_blank" href="https://docs.localstack.cloud/user-guide/integrations/aws-cli/">AWS CLI</a>) to the running container for testing your application and infrastructure code without spending anything on infrastructure provisioning.</p>
<p>For learning, testing, and integration purposes, LocalStack supports running an emulated EC2 instance locally. This blog will walk you through launching a local EC2 instance, accessing the running instance, and deploying a basic Flask API using a user data shell script.</p>
<blockquote>
<p>Docker Desktop on macOS and Windows does not make the Docker Bridge network accessible, preventing users from completing this tutorial. To ensure a smooth end-to-end experience, it is recommended to use Linux or <a target="_blank" href="https://github.com/features/codespaces">GitHub Codespaces</a>/<a target="_blank" href="https://www.gitpod.io/">GitPod</a>.</p>
</blockquote>
<h2 id="heading-running-docker-containers-as-ec2-instances">Running Docker containers as EC2 instances</h2>
<p>In the <a target="_blank" href="https://github.com/localstack/localstack">LocalStack community edition</a>, you can mock EC2 APIs on your local machine. You send API requests to the mocked EC2 service running in LocalStack that mimics the real AWS service's interface and triggers a predefined behaviour. However, no actual instances (like virtual machines) are created. This setup often leads to testing errors because the mock implementation doesn't create real resources.</p>
<p>In LocalStack Pro, EC2 APIs utilize the Docker Engine backend to emulate EC2 instances. When you launch an EC2 instance locally, LocalStack sets up a Docker container recognized as an Amazon Machine Image (AMI). This enables users to log in to the instance, test their configurations, and conduct end-to-end integration tests on a local EC2 infrastructure.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p><a target="_blank" href="https://docs.localstack.cloud/getting-started/installation/#localstack-cli">LocalStack CLI</a> with a <a target="_blank" href="https://docs.localstack.cloud/getting-started/auth-token/"><code>LOCALSTACK_AUTH_TOKEN</code></a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/get-docker/">Docker</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html">AWS CLI</a> with <a target="_blank" href="https://docs.localstack.cloud/user-guide/integrations/aws-cli/#localstack-aws-cli-awslocal"><code>awslocal</code> wrapper</a></p>
</li>
<li><p><a target="_blank" href="https://jqlang.github.io/jq/">jq</a> &amp; <a target="_blank" href="https://curl.se/download.html">cURL</a></p>
</li>
</ul>
<blockquote>
<p>You can sign up for the <a target="_blank" href="https://app.localstack.cloud/pricing">LocalStack Hobby Plan</a> to grab a LocalStack Auth Token and use advanced AWS APIs, such as the EC2 Docker backend.</p>
</blockquote>
<h3 id="heading-create-a-flask-api">Create a Flask API</h3>
<p>As an example, set up a basic Flask API with three routes: <code>/</code>, <code>/get</code>, and <code>/post</code>. Create a new file called <code>app.py</code> and add the following code.</p>
<pre><code class="lang-python">from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, LocalStack!'

# GET route
@app.route('/get', methods=['GET'])
def get_example():
    return 'This is a GET request example.'

# POST route
@app.route('/post', methods=['POST'])
def post_example():
    data = request.json
    return f'This is a POST request example. Received data: {data}'

if __name__ == '__main__':
    app.run(port=8000, host="0.0.0.0")
</code></pre>
<p>Add this file to a fresh GitHub/GitLab repository, or use an existing template you can find at the following <a target="_blank" href="https://gitlab.com/HarshCasper/flask-api-example.git">GitLab repository</a>.</p>
<h3 id="heading-start-your-localstack-container">Start your LocalStack container</h3>
<p>Launch the LocalStack container on your local machine using the specified command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> LOCALSTACK_AUTH_TOKEN=...
localstack start
</code></pre>
<p>Once initiated, you'll receive a confirmation output indicating that the LocalStack container is up and running.</p>
<pre><code class="lang-bash">     __                     _______ __             __
    / /   ____  _________ _/ / ___// /_____ ______/ /__
   / /   / __ \/ ___/ __ `/ /\__ \/ __/ __ `/ ___/ //_/
  / /___/ /_/ / /__/ /_/ / /___/ / /_/ /_/ / /__/ ,&lt;
 /_____/\____/\___/\__,_/_//____/\__/\__,_/\___/_/|_|

 💻 LocalStack CLI 3.5.0
 👤 Profile: default

...
─────────────────── LocalStack Runtime Log (press CTRL-C to quit) ───────────────────

LocalStack version: 3.5.1.dev20240712171301
LocalStack build date: 2024-07-14
LocalStack build git <span class="hljs-built_in">hash</span>: 646be01
...
</code></pre>
<p>To confirm the startup of your LocalStack container with the Pro services, utilize the <code>cURL</code> command to query the <code>/info</code> endpoint.</p>
<pre><code class="lang-bash">curl http://localhost:4566/_localstack/info | jq
{
  <span class="hljs-string">"version"</span>: <span class="hljs-string">"3.5.1.dev20240712171301:646be01"</span>,
  <span class="hljs-string">"edition"</span>: <span class="hljs-string">"pro"</span>,
  <span class="hljs-string">"is_license_activated"</span>: <span class="hljs-literal">true</span>,
  ...
  <span class="hljs-string">"system"</span>: <span class="hljs-string">"linux"</span>,
  <span class="hljs-string">"is_docker"</span>: <span class="hljs-literal">true</span>,
  ...
}
</code></pre>
<h3 id="heading-create-an-ec2-key-pair">Create an EC2 key pair</h3>
<p>Before you launch an EC2 instance, create a <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html">key pair</a>. EC2 stores the public key and returns the private key for you to save. To create a key pair using the <code>awslocal</code> CLI, run this command:</p>
<pre><code class="lang-bash">awslocal ec2 create-key-pair \
    --key-name my-key \
    --query <span class="hljs-string">'KeyMaterial'</span> \
    --output text | tee key.pem
</code></pre>
<p>This saves the private key in a file named <code>key.pem</code> in the current directory. Apply the right permissions to the file with this command:</p>
<pre><code class="lang-bash">chmod 400 key.pem
</code></pre>
<p>Alternatively, you can import an existing key pair. If you have an SSH public key in your home directory at <code>~/.ssh/id_rsa.pub</code>, run this command to import it:</p>
<pre><code class="lang-bash">awslocal ec2 import-key-pair \
    --key-name my-key \
    --public-key-material file://~/.ssh/id_rsa.pub
</code></pre>
<h3 id="heading-add-inbound-roles">Add inbound rules</h3>
<p>In LocalStack, networking features like <a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html">subnets</a> and <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html">VPCs</a> are not emulated. LocalStack provides a <code>default</code> security group that manages the exposed ports for the EC2 instance. While users can create additional security groups, LocalStack focuses on the <code>default</code> security group.</p>
<p>By default, the SSH port <code>22</code> is open. To enable inbound traffic on port <code>8000</code> for our Flask API, use this command to authorize the <code>default</code> security group:</p>
<pre><code class="lang-bash">awslocal ec2 authorize-security-group-ingress \
    --group-id default \
    --protocol tcp \
    --port 8000 \
    --cidr 0.0.0.0/0
</code></pre>
<p>The output will be:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"Return"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-string">"SecurityGroupRules"</span>: [
        {
            <span class="hljs-string">"SecurityGroupRuleId"</span>: <span class="hljs-string">"sgr-84956433d958d012e"</span>,
            <span class="hljs-string">"GroupId"</span>: <span class="hljs-string">"sg-952c436416c2392b0"</span>,
            <span class="hljs-string">"GroupOwnerId"</span>: <span class="hljs-string">"000000000000"</span>,
            <span class="hljs-string">"IsEgress"</span>: <span class="hljs-literal">false</span>,
            <span class="hljs-string">"IpProtocol"</span>: <span class="hljs-string">"tcp"</span>,
            <span class="hljs-string">"FromPort"</span>: 8000,
            <span class="hljs-string">"ToPort"</span>: 8000,
            <span class="hljs-string">"CidrIpv4"</span>: <span class="hljs-string">"0.0.0.0/0"</span>,
            <span class="hljs-string">"Description"</span>: <span class="hljs-string">""</span>
        }
    ]
}
</code></pre>
<p>Retrieve the security group ID with this command:</p>
<pre><code class="lang-bash">sg_id=$(awslocal ec2 describe-security-groups | jq -r <span class="hljs-string">'.SecurityGroups[0].GroupId'</span>)
<span class="hljs-built_in">echo</span> <span class="hljs-variable">$sg_id</span>
</code></pre>
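<p>If <code>jq</code> isn't available, the same lookup can be done with a few lines of Python. The sketch below parses a <code>describe-security-groups</code> response; the sample JSON is illustrative (the IDs are made up), and it selects the group named <code>default</code> rather than blindly taking the first entry:</p>

```python
import json

# Illustrative response in the shape returned by
# `awslocal ec2 describe-security-groups` (IDs are made up).
response = json.loads("""
{
    "SecurityGroups": [
        {"GroupName": "default", "GroupId": "sg-952c436416c2392b0"}
    ]
}
""")

# Pick the group named "default", which is slightly more robust
# than taking index 0 as the jq one-liner does.
sg_id = next(
    g["GroupId"]
    for g in response["SecurityGroups"]
    if g["GroupName"] == "default"
)
print(sg_id)
```

In practice you would feed this script the live output of <code>awslocal ec2 describe-security-groups</code> instead of the hard-coded sample.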
<h3 id="heading-run-the-ec2-instance-locally">Run the EC2 Instance locally</h3>
<p>Now, you can start and operate the EC2 instance on your local machine. Before executing the command, create a new file named <code>user_script.sh</code> and include the following content:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash -xeu</span>

apt update
apt install python3 -y
apt-get -y install python3-pip curl git
git <span class="hljs-built_in">clone</span> https://gitlab.com/HarshCasper/flask-api-example.git
<span class="hljs-built_in">cd</span> flask-api-example
pip3 install flask
python3 app.py
</code></pre>
<p>This script will be passed to the instance as a user data shell script, which lets you send setup instructions to an instance during launch.</p>
<blockquote>
<p>Modify the repository URL if you want to specify your personal GitHub/GitLab repository for deployment. Include any extra commands necessary to set up the application correctly.</p>
</blockquote>
<p>Now, directly initiate the EC2 instance by executing the following command:</p>
<pre><code class="lang-bash">awslocal ec2 run-instances \
  --image-id ami-ff0fea8310f3 \
  --count 1 \
  --instance-type t3.nano --key-name my-key \
  --security-group-ids <span class="hljs-variable">$sg_id</span> \
  --user-data file://./user_script.sh
</code></pre>
<p>In the command above, the instance type is specified as <code>t3.nano</code>, but it has no practical effect, since LocalStack uses an <code>ubuntu-20.04-focal-fossa</code> Docker image emulated as an EC2 instance. This behaviour is determined by the image ID <code>ami-ff0fea8310f3</code>. To use an Amazon Linux AMI instead, specify the image ID <code>ami-024f768332f0</code> (which requires adjustments to the user script).</p>
<p>Upon successful execution, you'll see output similar to the following:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"Groups"</span>: [
        {
            <span class="hljs-string">"GroupName"</span>: <span class="hljs-string">"default"</span>,
            <span class="hljs-string">"GroupId"</span>: <span class="hljs-string">"sg-245f6a01"</span>
        }
    ],
    <span class="hljs-string">"Instances"</span>: [
        {
            <span class="hljs-string">"AmiLaunchIndex"</span>: 0,
            <span class="hljs-string">"ImageId"</span>: <span class="hljs-string">"ami-ff0fea8310f3"</span>,
            <span class="hljs-string">"InstanceId"</span>: <span class="hljs-string">"i-42e830289e675885f"</span>,
            <span class="hljs-string">"InstanceType"</span>: <span class="hljs-string">"t3.nano"</span>,
            ...
            <span class="hljs-string">"VirtualizationType"</span>: <span class="hljs-string">"paravirtual"</span>
        }
    ],
    <span class="hljs-string">"OwnerId"</span>: <span class="hljs-string">"000000000000"</span>,
    <span class="hljs-string">"ReservationId"</span>: <span class="hljs-string">"r-a8f48d6e"</span>
}
</code></pre>
<p>You can also confirm that LocalStack has spun up an additional Docker container to emulate the locally running EC2 instance:</p>
<pre><code class="lang-bash">docker ps
CONTAINER ID   IMAGE                                                      COMMAND                  CREATED          STATUS                          PORTS                                                                                                                                    NAMES
0c00863e8fdd   localstack-ec2/ubuntu-20.04-focal-fossa:ami-ff0fea8310f3   <span class="hljs-string">"sleep infinity"</span>         4 seconds ago    Up 3 seconds                    0.0.0.0:22-&gt;22/tcp, 0.0.0.0:8000-&gt;8000/tcp                                                                                               localstack-ec2.i-320e3293f4029b596
3fc719173eb2   localstack/localstack-pro                                  <span class="hljs-string">"docker-entrypoint.sh"</span>   18 minutes ago   Up 18 minutes (healthy)         127.0.0.1:443-&gt;443/tcp, 127.0.0.1:4510-4560-&gt;4510-4560/tcp, 0.0.0.0:53-&gt;53/tcp, 0.0.0.0:53-&gt;53/udp, 127.0.0.1:4566-&gt;4566/tcp, 5678/tcp   localstack-main
</code></pre>
<h3 id="heading-logging-into-the-ec2-instance">Logging into the EC2 instance</h3>
<p>After launching the EC2 instance, check the LocalStack logs. In these logs, you can confirm that the EC2 instance is running successfully on your local machine.</p>
<pre><code class="lang-bash">localstack logs
...
Determined main container network: bridge
Determined main container target IP: 172.17.0.2
Instance i-42e830289e675885f will be accessible 
via SSH at: 127.0.0.1:22, 172.17.0.3:22
Instance i-42e830289e675885f port mappings (container -&gt; host): 
{<span class="hljs-string">'8000/tcp'</span>: 8000, <span class="hljs-string">'22/tcp'</span>: 22}
AWS ec2.RunInstances =&gt; 200
...
</code></pre>
<p>In the logs above, verify that the instance is accessible via SSH at <code>127.0.0.1</code>. Depending on your setup, this configuration might change. In this example, you can use the following command to log in to the EC2 instance:</p>
<pre><code class="lang-bash">ssh -i key.pem root@127.0.0.1
</code></pre>
<p>You'll be prompted to establish authenticity. After verification, you can log in to the instance.</p>
<pre><code class="lang-bash">The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:......
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts.
root@6ef2a4c8d8ac:~#
</code></pre>
<p>In the local EC2 instance, you can execute various commands, similar to a real EC2 instance on the AWS cloud. You can also use <code>cURL</code> to confirm if your Flask API is operational:</p>
<pre><code class="lang-bash">root@6ef2a4c8d8ac:~<span class="hljs-comment"># curl localhost:8000 </span>
Hello, LocalStack!
</code></pre>
<p>Let's attempt to access the running Flask API outside of the EC2 instance.</p>
<h3 id="heading-test-the-running-flask-api">Test the running Flask API</h3>
<p>In the LocalStack logs, confirm that the port mappings from the container to the host are accessible on port <code>8000</code>. Run the following command in a separate terminal tab to check if the Flask API is reachable:</p>
<pre><code class="lang-bash">curl localhost:8000
Hello, LocalStack!
</code></pre>
<p>Additionally, send <code>GET</code> and <code>POST</code> requests to test the active Flask API:</p>
<pre><code class="lang-bash">curl -X POST \
    -H <span class="hljs-string">"Content-Type: application/json"</span> \
    -d <span class="hljs-string">'{"key": "value"}'</span> \
    localhost:8000/post
This is a POST request example. Received data: {<span class="hljs-string">'key'</span>: <span class="hljs-string">'value'</span>}

curl http://localhost:8000/get
This is a GET request example.
</code></pre>
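<p>The smoke tests above can also be scripted. The sketch below is a self-contained stand-in: it reimplements the three routes with Python's built-in <code>http.server</code>, so it runs without Flask, LocalStack, or a deployed instance, and then exercises them the same way the <code>cURL</code> commands do. The handler class and the ephemeral port are illustrative, not part of the tutorial's code:</p>

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

# Minimal stand-in for the Flask app's three routes, so the smoke
# test below runs without Flask or LocalStack.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = (b"Hello, LocalStack!" if self.path == "/"
                else b"This is a GET request example.")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length))
        msg = f"This is a POST request example. Received data: {data}"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(msg.encode())

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

root = request.urlopen(base + "/").read().decode()
get = request.urlopen(base + "/get").read().decode()
post_req = request.Request(
    base + "/post",
    data=json.dumps({"key": "value"}).encode(),
    headers={"Content-Type": "application/json"},
)
post = request.urlopen(post_req).read().decode()
server.shutdown()
print(root, get, post, sep="\n")
```

Against the real instance you would point <code>base</code> at <code>http://localhost:8000</code> instead of the in-process server.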
<p>To validate the execution of your user data shell script, use the following command in your active EC2 instance:</p>
<pre><code class="lang-bash">cat /var/<span class="hljs-built_in">log</span>/cloud-init-output.log
...
+ python3 app.py
 * Serving Flask app <span class="hljs-string">'app'</span>
 * Debug mode: off
WARNING: This is a development server. Do not use it <span class="hljs-keyword">in</span> a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8000
 * Running on http://172.17.0.3:8000
Press CTRL+C to quit
127.0.0.1 - - [15/Feb/2024 07:51:02] <span class="hljs-string">"GET / HTTP/1.1"</span> 200 -
127.0.0.1 - - [15/Feb/2024 07:51:06] <span class="hljs-string">"GET / HTTP/1.1"</span> 200 -
...
</code></pre>
<h3 id="heading-terminate-the-instance">Terminate the instance</h3>
<p>To terminate the instance, stop the LocalStack container with the command:</p>
<pre><code class="lang-bash">localstack stop
</code></pre>
<p>However, if you need to examine the container filesystem for debugging later on, you can set <code>EC2_REMOVE_CONTAINERS=0</code> when starting LocalStack. This configuration option controls whether the Docker containers created for instances are removed at instance termination or when LocalStack shuts down.</p>
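<p>For example, a minimal way to start LocalStack with this option (assuming the LocalStack CLI forwards the exported variable to the container; otherwise set it in your Docker Compose environment):</p>

```bash
# Keep the EC2-backing Docker containers around after termination,
# so their filesystems can be inspected with `docker exec` / `docker cp`
export EC2_REMOVE_CONTAINERS=0
localstack start
```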
<h2 id="heading-conclusion">Conclusion</h2>
<p>You can create additional EC2 instances using <a target="_blank" href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance">Terraform</a>, <a target="_blank" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.Instance.html">CDK</a>, or <a target="_blank" href="https://www.pulumi.com/registry/packages/aws/api-docs/ec2/instance/">Pulumi</a>. For a visual user interface for EC2 instance creation, utilize the <a target="_blank" href="https://app.localstack.cloud/inst/default/resources/ec2">LocalStack Web Application</a>. You can also set up <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/elastic-compute-cloud/#ebs-block-devices">EBS Block Devices</a>, <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/elastic-compute-cloud/#instance-metadata-service">Instance Metadata Service</a>, and <a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/elastic-load-balancing/">Elastic Load Balancers</a> for your locally running EC2 instances. We are actively enhancing our EC2 emulation features to facilitate faster local development and testing on your machine!</p>
]]></content:encoded></item><item><title><![CDATA[Developing cloud AI-powered apps with LocalStack and Ollama]]></title><description><![CDATA[Introduction
In today’s tech landscape, large language models (LLMs) and AI are transforming both tech-centric and traditional businesses. AI functionality is being integrated across various platforms to enhance user experience. However, developing a...]]></description><link>https://hashnode.localstack.cloud/developing-cloud-ai-powered-apps-with-localstack-and-ollama</link><guid isPermaLink="true">https://hashnode.localstack.cloud/developing-cloud-ai-powered-apps-with-localstack-and-ollama</guid><category><![CDATA[tinyllama]]></category><category><![CDATA[AI]]></category><category><![CDATA[ollama]]></category><category><![CDATA[localstack]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Mon, 08 Jul 2024 07:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1720516388738/b6b72c5b-9fcc-4a8a-ae89-b56ff77dae90.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p>In today’s tech landscape, large language models (LLMs) and AI are transforming both tech-centric and traditional businesses. AI functionality is being integrated across various platforms to enhance user experience. However, developing and testing these AI integrations often requires extensive infrastructure, leading to high costs. LocalStack enables developers to build and test AI integrations locally, which accelerates the development process and avoids extra expenses.</p>
<p>In this tutorial, we’ll explore building an AI-powered chatbot using Ollama, a tool that lets users interact with information through natural language. For instance, on a government website, rather than navigating complex menus, you could ask a chatbot to find a specific form. We’ll start by setting up the entire system locally with LocalStack, then move to deploying it in the cloud.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<ul>
<li><p><a target="_blank" href="https://aws.amazon.com/free/">AWS free tier account</a></p>
</li>
<li><p><a target="_blank" href="https://app.localstack.cloud/sign-up">LocalStack Pro</a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/get-docker/">Docker</a> - for running LocalStack</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">AWS CLI</a> and <a target="_blank" href="https://github.com/localstack/awscli-local">AWS CLI local</a></p>
</li>
<li><p><a target="_blank" href="https://docs.npmjs.com/downloading-and-installing-node-js-and-npm">npm</a> - for building the frontend app</p>
</li>
</ul>
<h2 id="heading-architecture-overview"><strong>Architecture Overview</strong></h2>
<p>To follow along with this post and get the React app, you can clone <a target="_blank" href="https://github.com/localstack-samples/sample-ollama-ecs-fargate-alb"><strong>the repository</strong></a> for this project.</p>
<p>We will explore a comprehensive example of running Ollama on ECS Fargate. This example includes a backend with a VPC, a load balancer, multiple security groups, and an ECR repository hosting our image. For simplicity, the frontend application will be a basic chat interface with a prompt field and an answer box.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720514707123/4352761f-e723-4daf-9cfa-a298ec6e92a6.png" alt class="image--center mx-auto" /></p>
<p>The React application could make a direct request to our Ollama container, but in the real world you would rarely run just one task, and you certainly wouldn't access a container by its IP address. Ideally, you should have multiple tasks running to ensure high availability: if one of them encounters an issue, the others can take over. In front of the tasks, you'll need an application load balancer to handle the HTTP requests; this is how traffic is distributed across the containers. The load balancer has a listener, which listens for client requests. Requests are routed to targets, which are the IPs of the tasks/containers. The targets live in a target group, which lets us configure all of them at once (for example, setting a routing algorithm or health-check options). Our load balancer needs a security group that allows inbound traffic, and a second security group that only allows incoming traffic to the ECS service, which will run two tasks.</p>
<h2 id="heading-why-ollama-amp-tinyllama"><strong>Why Ollama &amp; Tinyllama</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720514771852/70453aa4-07ea-4cff-b6c4-b294acbf36c6.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://ollama.com/"><strong>Ollama</strong></a> is an open-source platform that allows users to run large language models (LLMs) locally on their devices. In essence, Ollama streamlines the tasks of downloading, installing, and utilizing a broad spectrum of LLMs, enabling users to discover their potential without requiring deep technical skills or dependence on cloud-based platforms. Most importantly, Ollama allows users to run their own specialized LLMs with ease.</p>
<p>Both tools are designed for local development, so using Ollama with LocalStack for building cloud applications offers several advantages:</p>
<ul>
<li><p><strong>Complete local development environment</strong>: Combining Ollama and LocalStack allows developers to run complex cloud applications entirely on their local machines. Ollama can handle large language models locally, while LocalStack handles the AWS services, creating a comprehensive and integrated local development environment.</p>
</li>
<li><p><strong>Cost efficiency</strong>: Running applications locally avoids the costs associated with cloud resources during development and testing. This is particularly useful when working with large language models that can be resource-intensive and expensive to run in the cloud.</p>
</li>
<li><p><strong>Faster iteration cycles</strong>: Local development with Ollama and LocalStack allows for rapid prototyping and testing. Developers can quickly make changes and see results without the delay of deploying to the cloud. This speeds up the development cycle significantly.</p>
</li>
<li><p><strong>Consistent development and production environments</strong>: By using LocalStack to emulate AWS services, developers can ensure that their local development environment closely matches the production environment. This reduces the risk of environment-specific bugs and improves the application's reliability when deployed to the actual cloud.</p>
</li>
<li><p><strong>Improved Testing Capabilities</strong>: LocalStack provides a robust platform for testing AWS services, including ECS and Fargate. Running Ollama as a Fargate task on LocalStack allows for testing complex deployment scenarios and interactions with other AWS services, ensuring that the application behaves as expected before deploying to the cloud.</p>
</li>
</ul>
<p><a target="_blank" href="https://arxiv.org/pdf/2401.02385"><strong>Tinyllama</strong></a> is a compact AI language model that stands out due to its efficient size and robust training. It occupies just 637 MB and was trained on a trillion tokens, making it not only mobile-friendly but also powerful enough to surpass similar-sized models. Designed as a smaller version of Meta’s Llama 2, it shares the same architecture and tokenizer, making it an ideal choice for development and testing, particularly with applications demanding a restricted computation and memory footprint. Depending on your needs, you can also replace it with a different, or more specialized model.</p>
<h2 id="heading-running-the-application-on-localstack"><strong>Running the application on LocalStack</strong></h2>
<h3 id="heading-starting-localstack"><strong>Starting LocalStack</strong></h3>
<p>The first thing we need to do is start LocalStack using Docker Compose, which makes it easy to see all the necessary configuration in one place. Remember to set your <code>LOCALSTACK_AUTH_TOKEN</code> as an environment variable.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> localstack
<span class="hljs-built_in">export</span> LOCALSTACK_AUTH_TOKEN=&lt;your_auth_token&gt;
docker compose up
</code></pre>
<h4 id="heading-some-important-configs"><strong>Some important configs</strong></h4>
<p>All the following configuration flags can be found in the <code>docker-compose.yml</code> file, in the <code>localstack</code> folder:</p>
<ul>
<li><p><strong>DEBUG=1</strong> - This flag enables debug mode in LocalStack. When set to 1, LocalStack provides more detailed logs, making it easier to trace issues.</p>
</li>
<li><p><strong>ENFORCE_IAM=1</strong> - This configuration enables the enforcement of AWS IAM policies in LocalStack. Normally, LocalStack runs with simplified or no security checks to facilitate development.</p>
</li>
<li><p><strong>ECS_DOCKER_FLAGS=-e OLLAMA_ORIGINS="*"</strong> - This setting is used to pass environment variables to Docker containers spawned by the ECS service within LocalStack. Specifically, we set OLLAMA_ORIGINS="*" inside these containers to indicate that requests from any origin are allowed. This is relevant when integrating with web applications that may call APIs from various domains.</p>
</li>
<li><p><strong>DISABLE_CORS_CHECKS=1</strong> - This flag disables CORS checks in LocalStack, for ease of development.</p>
</li>
<li><p><strong>DISABLE_CUSTOM_CORS_S3=1</strong> - When set, this configuration disables the custom CORS handling for S3 services within LocalStack.</p>
</li>
</ul>
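<p>Put together, the <code>environment</code> section of the Compose file carries these flags. The snippet below is a minimal sketch based only on the flags listed above; the actual <code>docker-compose.yml</code> in the repository may define additional settings such as the image, ports, and volumes:</p>

```yaml
services:
  localstack:
    environment:
      - LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN:?}  # required for Pro features
      - DEBUG=1
      - ENFORCE_IAM=1
      - ECS_DOCKER_FLAGS=-e OLLAMA_ORIGINS="*"
      - DISABLE_CORS_CHECKS=1
      - DISABLE_CUSTOM_CORS_S3=1
```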
<h3 id="heading-building-the-react-app"><strong>Building the React app</strong></h3>
<p>In the <code>localstack</code> folder, there’s a directory called <code>frontend</code>. To build the React application run the following commands:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> frontend/chatbot/
npm install
npm run build
</code></pre>
<p>Notice the creation of the <code>build</code> folder. The <code>npm run build</code> command will create the static assets needed to run our app, and they will then be uploaded to the S3 bucket.</p>
<h3 id="heading-creating-the-stack"><strong>Creating the stack</strong></h3>
<p>Now we can run the bash script containing AWS CLI commands to create the necessary resources. Let’s first have a look at some of the commands in the script and identify the resources they create:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> VPC_ID=$(awslocal ec2 create-vpc --cidr-block 10.0.0.0/16 | jq -r <span class="hljs-string">'.Vpc.VpcId'</span>)
</code></pre>
<p>Creates a Virtual Private Cloud (VPC) with a specified CIDR block.</p>
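<p>Throughout the script, each <code>awslocal</code> call prints a JSON document, and <code>jq -r</code> extracts the identifier that later commands need. A self-contained illustration of the pattern, using a stand-in JSON instead of real <code>awslocal</code> output:</p>

```shell
# Illustration of the capture pattern used throughout the script: each
# awslocal call prints JSON, and jq -r pulls out the ID for later commands.
# The JSON below is a hypothetical stand-in for real awslocal output.
SAMPLE='{"Vpc":{"VpcId":"vpc-0abc123"}}'
export VPC_ID=$(echo "$SAMPLE" | jq -r '.Vpc.VpcId')
echo "$VPC_ID"   # → vpc-0abc123
```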
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> SUBNET_ID1=$(awslocal ec2 create-subnet \
  --vpc-id <span class="hljs-variable">$VPC_ID</span> \
  --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a \
  | jq -r <span class="hljs-string">'.Subnet.SubnetId'</span>)

<span class="hljs-built_in">export</span> SUBNET_ID2=$(awslocal ec2 create-subnet \
  --vpc-id <span class="hljs-variable">$VPC_ID</span> \
  --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1b \
  | jq -r <span class="hljs-string">'.Subnet.SubnetId'</span>)
</code></pre>
<p>Creates two subnets within the VPC, each in a different availability zone.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> RT_ID=$(awslocal ec2 create-route-table --vpc-id <span class="hljs-variable">$VPC_ID</span> | jq -r <span class="hljs-string">'.RouteTable.RouteTableId'</span>)

awslocal ec2 associate-route-table \
  --route-table-id <span class="hljs-variable">$RT_ID</span> \
  --subnet-id <span class="hljs-variable">$SUBNET_ID1</span>

awslocal ec2 associate-route-table \
  --route-table-id <span class="hljs-variable">$RT_ID</span> \
  --subnet-id <span class="hljs-variable">$SUBNET_ID2</span>

awslocal ec2 create-route \
  --route-table-id <span class="hljs-variable">$RT_ID</span> \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id <span class="hljs-variable">$INTERNET_GW_ID</span>
</code></pre>
<p>Creates a route table, associates it with the subnets, and adds a route to the internet gateway.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> SG_ID1=$(awslocal ec2 create-security-group \
  --group-name ApplicationLoadBalancerSG \
  --description <span class="hljs-string">"Security Group of the Load Balancer"</span> \
  --vpc-id <span class="hljs-variable">$VPC_ID</span> | jq -r <span class="hljs-string">'.GroupId'</span>)

awslocal ec2 authorize-security-group-ingress \
  --group-id <span class="hljs-variable">$SG_ID1</span> \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0

<span class="hljs-built_in">export</span> SG_ID2=$(awslocal ec2 create-security-group \
  --group-name ContainerFromLoadBalancerSG \
  --description <span class="hljs-string">"Inbound traffic from the First Load Balancer"</span> \
  --vpc-id <span class="hljs-variable">$VPC_ID</span> \
  | jq -r <span class="hljs-string">'.GroupId'</span>)

awslocal ec2 authorize-security-group-ingress \
  --group-id <span class="hljs-variable">$SG_ID2</span> \
  --protocol tcp \
  --port 0-65535 \
  --source-group <span class="hljs-variable">$SG_ID1</span>
</code></pre>
<p>Creates security groups for the load balancer and the ECS service, allowing necessary traffic.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> LB_ARN=$(awslocal elbv2 create-load-balancer \
  --name ecs-load-balancer \
  --subnets <span class="hljs-variable">$SUBNET_ID1</span> <span class="hljs-variable">$SUBNET_ID2</span> \
  --security-groups <span class="hljs-variable">$SG_ID1</span> \
  --scheme internet-facing \
  --<span class="hljs-built_in">type</span> application \
  | jq -r <span class="hljs-string">'.LoadBalancers[0].LoadBalancerArn'</span>)

<span class="hljs-built_in">export</span> TG_ARN=$(awslocal elbv2 create-target-group \
  --name ecs-targets \
  --protocol HTTP \
  --port 11434 \
  --vpc-id <span class="hljs-variable">$VPC_ID</span> \
  --target-type ip \
  --health-check-protocol HTTP \
  --region us-east-1 \
  --health-check-path / \
  | jq -r <span class="hljs-string">'.TargetGroups[0].TargetGroupArn'</span>)

awslocal elbv2 create-listener \
  --load-balancer-arn <span class="hljs-variable">$LB_ARN</span> \
  --protocol HTTP \
  --port 11434 \
  --default-actions Type=forward,TargetGroupArn=<span class="hljs-variable">$TG_ARN</span>
</code></pre>
<p>Creates an internet-facing application load balancer and a target group, and sets up a listener to forward traffic.</p>
<pre><code class="lang-bash">awslocal ecr create-repository --repository-name ollama-service
<span class="hljs-built_in">export</span> MODEL_NAME=tinyllama
docker build --build-arg MODEL_NAME=<span class="hljs-variable">$MODEL_NAME</span> -t ollama-service .
docker tag ollama-service:latest 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4510/ollama-service:latest
docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4510/ollama-service:latest
</code></pre>
<p>Creates an ECR repository, builds the Docker image, and pushes it to the repository.</p>
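<p>The registry hostname follows LocalStack's ECR convention: the default <code>000000000000</code> account ID, the region, and a <code>localhost.localstack.cloud</code> endpoint on the port LocalStack allocated for the registry (4510 here). As a sketch, the image URI used in the tag and push commands decomposes like this:</p>

```shell
# Decompose the LocalStack ECR repository URI used in the commands above
ACCOUNT_ID=000000000000   # LocalStack's default AWS account ID
REGION=us-east-1
ECR_PORT=4510             # port LocalStack allocated for this registry
REPO_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.localhost.localstack.cloud:${ECR_PORT}/ollama-service"
echo "$REPO_URI"
```

<p>With the variable in place, the tag and push steps become <code>docker tag ollama-service:latest "$REPO_URI:latest"</code> and <code>docker push "$REPO_URI:latest"</code>.</p>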
<pre><code class="lang-bash">awslocal ecs create-cluster --cluster-name OllamaCluster

awslocal iam create-role \
  --role-name ecsTaskRole \
  --assume-role-policy-document file://ecs-task-trust-policy.json

<span class="hljs-built_in">export</span> ECS_TASK_PARN=$(awslocal iam create-policy \
  --policy-name ecsTaskPolicy \
  --policy-document file://ecs-task-policy.json \
  | jq -r <span class="hljs-string">'.Policy.Arn'</span>)

awslocal iam attach-role-policy \
  --role-name ecsTaskRole \
  --policy-arn <span class="hljs-variable">$ECS_TASK_PARN</span>

awslocal iam update-assume-role-policy \
  --role-name ecsTaskRole \
  --policy-document file://ecs-cloudwatch-policy.json

awslocal iam create-role \
  --role-name ecsTaskExecutionRole \
  --assume-role-policy-document file://ecs-trust-policy.json

<span class="hljs-built_in">export</span> ECS_TASK_EXEC_PARN=$(awslocal iam create-policy \
  --policy-name ecsTaskExecutionPolicy \
  --policy-document file://ecs-task-exec-policy.json | jq -r <span class="hljs-string">'.Policy.Arn'</span>)

awslocal iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn <span class="hljs-variable">$ECS_TASK_EXEC_PARN</span>

awslocal iam update-assume-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-document file://ecs-cloudwatch-policy.json
</code></pre>
<p>Creates an ECS cluster and IAM roles with necessary policies for task execution.</p>
<pre><code class="lang-bash">awslocal logs create-log-group --log-group-name ollama-service-logs
awslocal ecs register-task-definition \
  --family ollama-task \
  --cli-input-json file://task_definition.json
</code></pre>
<p>Creates a CloudWatch log group and registers the ECS task definition.</p>
<pre><code class="lang-bash">awslocal ecs create-service \
  --cluster OllamaCluster \
  --service-name OllamaService \
  --task-definition ollama-task \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration <span class="hljs-string">"awsvpcConfiguration={subnets=[<span class="hljs-variable">$SUBNET_ID1</span>,<span class="hljs-variable">$SUBNET_ID2</span>],securityGroups=[<span class="hljs-variable">$SG_ID2</span>],assignPublicIp=ENABLED}"</span> \
  --load-balancers <span class="hljs-string">"targetGroupArn=<span class="hljs-variable">$TG_ARN</span>,containerName=ollama-container,containerPort=11434"</span>
</code></pre>
<p>Creates an ECS service with the specified configuration, linking it to the load balancer.</p>
<pre><code class="lang-bash">awslocal s3 mb s3://frontend-bucket
awslocal s3 website s3://frontend-bucket --index-document index.html
awslocal s3api put-bucket-policy --bucket frontend-bucket --policy file://bucket-policy.json
awslocal s3 sync ./frontend/chatbot/build s3://frontend-bucket
</code></pre>
<p>Creates an S3 bucket, configures it as a website, sets the bucket policy, and syncs the frontend build to the bucket.</p>
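<p>The <code>bucket-policy.json</code> file isn't reproduced in this post; for a static-website bucket it is typically a public-read policy along these lines (a sketch assuming the bucket name <code>frontend-bucket</code>):</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::frontend-bucket/*"
    }
  ]
}
```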
<p>If you decide to use the AWS console to create all your resources, some of the complexity of these commands will be abstracted, and some services will be created as dependencies of other resources.</p>
<p>You can run the full <code>commands.sh</code> script and watch the LocalStack logs for updates as the resources are created. Alternatively, you can run these commands manually, one by one, as you go through this article.</p>
<pre><code class="lang-bash">bash commands.sh
</code></pre>
<h3 id="heading-using-the-app-locally"><strong>Using the app locally</strong></h3>
<p>Now that everything is deployed, you can go to the frontend application and try it out. In your browser, navigate to <a target="_blank" href="http://frontend-bucket.s3-website.us-east-1.localhost.localstack.cloud:4566/"><code>http://frontend-bucket.s3-website.us-east-1.localhost.localstack.cloud:4566/</code></a> and start typing your question. It takes a few seconds, and then the full answer appears:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720515290807/efb27182-a776-45fd-9f41-70511620c89b.png" alt class="image--center mx-auto" /></p>
<p>If you look at <code>App.js</code>, located in <code>frontend/chatbot/src</code>, you’ll notice the POST call payload contains a field <code>stream: false</code>. For simplicity, we receive the answer from the LLM “in bulk” rather than streamed: it takes a few seconds to generate, and then it arrives in full.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720515830057/03d7916a-96f5-400a-8346-514c31bce962.png" alt class="image--center mx-auto" /></p>
<p>The backend call will be made to the <strong>load balancer</strong>, at <a target="_blank" href="http://ecs-load-balancer.elb.localhost.localstack.cloud:4566/api/generate/"><code>http://ecs-load-balancer.elb.localhost.localstack.cloud:4566/api/generate/</code></a>, so we don’t have to worry about how we access the task containers.</p>
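<p>The request body follows Ollama's <code>/api/generate</code> schema. As a sketch (the prompt is illustrative), the frontend's call could be reproduced from the command line like this:</p>

```shell
# Request body for Ollama's /api/generate; "stream": false returns the full
# answer in one response instead of token-by-token chunks.
PAYLOAD='{"model": "tinyllama", "prompt": "What is LocalStack?", "stream": false}'
# With the stack running, the equivalent of the frontend's call would be:
# curl -s http://ecs-load-balancer.elb.localhost.localstack.cloud:4566/api/generate -d "$PAYLOAD"
echo "$PAYLOAD"
```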
<h2 id="heading-running-on-aws"><strong>Running on AWS</strong></h2>
<p>To run this stack in the real AWS cloud, we need to make some small adjustments. The ready-to-deploy resources are in the <code>aws</code> folder, and they are the same as the ones for LocalStack, except:</p>
<ul>
<li><p>The AWS account number needs to be provided, so wherever you find <code>&lt;your_account_number&gt;</code>, replace it with your 12-digit account ID (<code>task_definition_aws.json</code>, <code>commands-aws.sh</code>).</p>
</li>
<li><p>A new, globally unique bucket name needs to be set, so the <code>&lt;your_bucket_name&gt;</code> placeholder has to be replaced with a name of your choice (<code>commands-s3-aws.sh</code>, <code>bucket-policy.json</code>).</p>
</li>
<li><p>On AWS you don’t know the load balancer’s DNS name in advance, and the frontend component needs it, so we build the app and upload the files to the S3 bucket only after creating the stack. The <code>commands-aws.sh</code> script exports the load balancer DNS name into the <code>.env</code> file, where the React app can pick it up. This is simpler on LocalStack, where the load balancer’s DNS name is always derived from the name the user defines.</p>
</li>
</ul>
<p>The steps to getting this project on AWS are:</p>
<ol>
<li><p>Make the aforementioned changes to your files.</p>
</li>
<li><p>In the <code>aws</code> root folder, run <code>bash commands-aws.sh</code>.</p>
</li>
<li><p>Build the React app:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">cd</span> frontend/chatbot
 npm install
 npm run build
</code></pre>
</li>
<li><p>Create the S3 bucket and prepare it to host the frontend application by running <code>bash commands-aws-s3.sh</code> in the <code>aws</code> folder.</p>
<p> After running <code>commands-aws.sh</code> in step 2, you can test the backend with the following command:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">export</span> LB_NAME=$(aws elbv2 describe-load-balancers --load-balancer-arns <span class="hljs-variable">$LB_ARN</span> | jq -r <span class="hljs-string">'.LoadBalancers[0].DNSName'</span>)
 curl <span class="hljs-variable">$LB_NAME</span>
</code></pre>
</li>
</ol>
<p>If you get a message like the following, give it a few more seconds for the Fargate tasks to come up; once they are running, you should see the <code>Ollama is running</code> response.</p>
<pre><code class="lang-bash">&lt;html&gt;
&lt;head&gt;&lt;title&gt;503 Service Temporarily Unavailable&lt;/title&gt;&lt;/head&gt;
&lt;body&gt;
&lt;center&gt;&lt;h1&gt;503 Service Temporarily Unavailable&lt;/h1&gt;&lt;/center&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>After building the GUI part and uploading it to the S3 bucket, you’ll be able to access your chatbot at this address: <code>http://&lt;bucket-name&gt;.s3-website.us-east-1.amazonaws.com/</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720516052358/ab84dbd9-88a1-4ec1-aff3-4ac11e06466a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Developing and testing cloud AI-powered applications can be complex, particularly when ensuring they perform reliably in a production-like environment without incurring high costs. This is where integrating Ollama and LocalStack provides a robust solution. Ollama simplifies downloading, installing, and interacting with various large language models (LLMs), while LocalStack emulates AWS cloud services locally, so developers can rigorously test AI functionality in a controlled and cost-effective manner and validate the integrations and behavior of AI models managed with Ollama. Together, Ollama’s straightforward LLM handling and LocalStack’s comprehensive AWS emulation offer a powerful toolkit for any developer looking to build reliable and scalable cloud AI applications.</p>
]]></content:encoded></item><item><title><![CDATA[More Remote Storage Options for Your Cloud Pods]]></title><description><![CDATA[After seeing how LocalStack Cloud Pods help teams work better together, let's look at the other ways companies can keep their snapshots in their own court. For environments with stricter security policies, there are a few storage options that help pr...]]></description><link>https://hashnode.localstack.cloud/secure-remote-storage-options-for-cloud-pods</link><guid isPermaLink="true">https://hashnode.localstack.cloud/secure-remote-storage-options-for-cloud-pods</guid><category><![CDATA[cloudpods]]></category><category><![CDATA[remotestorage]]></category><category><![CDATA[localstack]]></category><category><![CDATA[oras]]></category><category><![CDATA[S3]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Tue, 30 Apr 2024 06:52:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1714459715196/43847865-f431-443b-ad52-22887bce9348.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After seeing <a target="_blank" href="https://docs.localstack.cloud/tutorials/cloud-pods-collaborative-debugging/">how LocalStack Cloud Pods help teams work better together</a>, let's look at the other ways companies can keep their snapshots in their own court. For environments with stricter security policies, there are a few storage options that help protect data, making it easy to keep everything secure and accessible no matter what your needs are.</p>
<h2 id="heading-s3-bucket-remote-storage"><strong>S3 bucket remote storage</strong></h2>
<p>The S3 remote stores Cloud Pod assets in an existing S3 bucket in a real AWS account. The first step is to export the required AWS credentials in your terminal session. Side note: for now, the S3 remote feature for Cloud Pods is only available when the <code>localstack</code> CLI is installed through <code>pip</code>.</p>
<p>Let’s try it out:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">export</span> AWS_ACCESS_KEY_ID=&lt;YOUR_AWS_ACCESS_KEY_ID&gt;
$ <span class="hljs-built_in">export</span> AWS_SECRET_ACCESS_KEY=&lt;YOUR_AWS_SECRET_ACCESS_KEY&gt;
</code></pre>
<p>Next, we set up a new remote connection specifically for an S3 bucket. With the command below, we create a remote called <code>s3-storage-aws</code> for saving Cloud Pod assets in an S3 bucket named <code>localstack-pod-storage</code>.</p>
<pre><code class="lang-bash">$ localstack pod remote add s3-storage-aws <span class="hljs-string">'s3://localstack-pod-storage/?access_key_id={access_key_id}&amp;secret_access_key={secret_access_key}'</span>
</code></pre>
<p><strong>Note</strong>: When setting this up, we might encounter an error message like:</p>
<p><code>SSL validation failed for https://localstack-pod-storage.s3.amazonaws.com/ hostname.</code></p>
<p>To fix this, we can create a list of exceptions that point to AWS instead of LocalStack by using the following configuration flag in the docker-compose file:</p>
<p><code>DNS_NAME_PATTERNS_TO_RESOLVE_UPSTREAM=.*localstack-pod-storage.s3.amazonaws.com</code></p>
<p>This setting is generally used for hybrid setups, where certain API calls target AWS, whereas other services will target LocalStack.</p>
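<p>In the Compose file, this flag sits in the same <code>environment</code> block as the other LocalStack settings; a sketch with only the relevant lines:</p>

```yaml
services:
  localstack:
    environment:
      # Resolve this hostname against real AWS instead of LocalStack
      - DNS_NAME_PATTERNS_TO_RESOLVE_UPSTREAM=.*localstack-pod-storage.s3.amazonaws.com
```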
<p>Now we can save the pod:</p>
<pre><code class="lang-bash">$ localstack pod save cloud-pod-product-app s3-storage-aws

Cloud Pod cloud-pod-product-app successfully created ✅
Version: 1
Remote: s3
Services: sts,s3,iam,apigateway,dynamodb,lambda
</code></pre>
<p>The Cloud Pod is visible in the AWS S3 dashboard:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714175580015/89ec1cde-ccbe-4f75-8751-0e4e86deefd8.png" alt class="image--center mx-auto" /></p>
<p>To load the state into a new LocalStack instance, we use:</p>
<pre><code class="lang-bash">$ localstack pod load cloud-pod-product-app s3-storage-aws

Cloud Pod cloud-pod-product-app successfully loaded
</code></pre>
<h2 id="heading-oras-remote-storage"><strong>ORAS remote storage</strong></h2>
<p>ORAS, which stands for OCI Registry As Storage, is a tool designed to help you use OCI (Open Container Initiative) registries for storing and sharing a wide range of content. While OCI registries were originally created for container images, ORAS extends their use to other types of artifacts. Essentially, ORAS allows you to push and pull any content to and from OCI-compliant registries using the same workflows you'd use for container images.</p>
<p>Docker Hub comes into play as a popular, OCI-compliant container registry. It's primarily known for hosting Docker container images but, thanks to the OCI specification's flexibility, it can also serve as a storage and distribution point for other types of artifacts through tools like ORAS. This makes Docker Hub not just a hub for Docker images but a versatile cloud registry for various types of application artifacts, supporting the broader ecosystem of cloud-native development and deployment practices.</p>
<p>Let’s illustrate how you can utilize Docker Hub to store and retrieve Cloud Pods. This is very similar to the S3 bucket storage setup:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">export</span> ORAS_USERNAME=your_docker_hub_id
$ <span class="hljs-built_in">export</span> ORAS_PASSWORD=your_docker_hub_password
</code></pre>
<p>We can now use the CLI to create a new remote called <code>oras-remote</code>:</p>
<pre><code class="lang-bash">$ localstack pod remote add oras-remote <span class="hljs-string">'oras://{oras_username}:{oras_password}@registry.hub.docker.com/&lt;your_docker_hub_id&gt;'</span>
</code></pre>
<p>A Cloud Pod can be stored on the newly configured remote:</p>
<pre><code class="lang-bash">$ localstack pod save cloud-pod-product-app oras-remote
Cloud Pod cloud-pod-product-app successfully created ✅
Version: 1
Remote: oras
Services: sts,s3,iam,apigateway,dynamodb,lambda
</code></pre>
<p>After saving the Cloud Pod, it will appear in the Docker Hub repositories dashboard:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714175970211/7618cbfc-c101-4b08-971e-0ab298be13d5.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-viewing-all-the-remotes">Viewing all the remotes</h3>
<p>By using the command <code>localstack pod remote list</code>, you can view all the configured remote options for saving Cloud Pods, including the AWS S3 bucket and the Docker Hub repository configuration, with the default set to the LocalStack platform.</p>
<pre><code class="lang-bash">$ localstack pod remote list
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Remote Name    ┃ URL                                                                                              ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ s3-storage-aws │ s3://localstack-pod-storage/?access_key_id={access_key_id}&amp;secret_access_key={secret_access_key} │
│ oras-remote    │ oras://{oras_username}:{oras_password}@registry.hub.docker.com/msmuzitiger210                    │
│ default        │ platform://localstack                                                                            │
└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┘
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>We've seen how you can securely store Cloud Pod assets using S3 bucket remote storage and ORAS remote storage. S3 allows assets to be saved in an existing AWS account, while ORAS extends OCI registries for versatile artifact storage.</p>
]]></content:encoded></item><item><title><![CDATA[Deploy and invoke Lambda functions in LocalStack using VS Code Extension]]></title><description><![CDATA[LocalStack is a cloud service emulator designed for local development and testing of cloud applications. With LocalStack's Lambda emulation, you can define and deploy Lambda functions locally, alongside other serverless components like DynamoDB, SQS,...]]></description><link>https://hashnode.localstack.cloud/deploy-and-invoke-lambda-functions-in-localstack-using-vs-code-extension</link><guid isPermaLink="true">https://hashnode.localstack.cloud/deploy-and-invoke-lambda-functions-in-localstack-using-vs-code-extension</guid><category><![CDATA[lambda]]></category><category><![CDATA[localstack]]></category><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[vscode extensions]]></category><dc:creator><![CDATA[Harsh Bardhan Mishra]]></dc:creator><pubDate>Mon, 29 Apr 2024 08:09:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1714377338395/cc441d51-6cce-404c-a4af-ef3f2baeeb6b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>LocalStack is a cloud service emulator designed for local development and testing of cloud applications. With LocalStack's Lambda emulation, you can define and deploy Lambda functions locally, alongside other serverless components like DynamoDB, SQS, SNS, EventBridge, and more. The Lambda emulation in LocalStack offers enhanced developer features, including:</p>
<ul>
<li><p><a target="_blank" href="https://docs.localstack.cloud/user-guide/lambda-tools/hot-reloading/">Hot reloading of Lambda functions with each code change</a></p>
</li>
<li><p><a target="_blank" href="https://docs.localstack.cloud/user-guide/lambda-tools/debugging/">Ability to attach a remote debugger using an Integrated Development Environment (IDE)</a></p>
</li>
<li><p><a target="_blank" href="https://docs.localstack.cloud/user-guide/state-management/cloud-pods/">Persistence of function state using Cloud Pods</a></p>
</li>
<li><p>Additional configuration like <a target="_blank" href="https://docs.localstack.cloud/references/configuration/#lambda">cold starts, concurrency, and more</a>!</p>
</li>
</ul>
<p>In the past year, we introduced the <a target="_blank" href="https://github.com/localstack/localstack-vscode-extension">LocalStack VS Code Extension</a>, enabling you to deploy and invoke Lambda functions directly from your code editor. This blog post provides a step-by-step guide on setting up the LocalStack VS Code Extension, utilizing a sample application deployed through the <a target="_blank" href="https://aws.amazon.com/serverless/sam/">Serverless Application Model (SAM)</a>. Furthermore, we will delve into the process of viewing and managing locally created resources through the <a target="_blank" href="https://docs.localstack.cloud/user-guide/web-application/resource-browser/">LocalStack Resource Browser</a>.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p><a target="_blank" href="https://docs.localstack.cloud/getting-started/installation/#localstack-cli"><code>localstack</code> CLI</a> with the <a target="_blank" href="https://docs.localstack.cloud/getting-started/auth-token/"><code>LOCALSTACK_AUTH_TOKEN</code></a></p>
</li>
<li><p><a target="_blank" href="https://app.localstack.cloud/sign-up">LocalStack Web Application</a></p>
</li>
<li><p><a target="_blank" href="https://code.visualstudio.com/download">Visual Studio Code</a> &amp; <code>code</code> CLI</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html">Serverless Application Model (SAM) CLI</a> &amp; <a target="_blank" href="https://github.com/localstack/aws-sam-cli-local?tab=readme-ov-file#installation"><code>samlocal</code> wrapper</a></p>
</li>
</ul>
<h2 id="heading-recruiting-agency-application-with-sns-sqs-dynamodb-lambda-and-s3">Recruiting Agency application with SNS, SQS, DynamoDB, Lambda and S3</h2>
<p>This demo uses a <a target="_blank" href="https://github.com/localstack-samples/sample-sam-sns-fifo-dynamodb-lambda">public sample</a> to showcase an event-driven recruiting agency application. The system utilizes SNS topics, a DynamoDB table, SQS queues, Lambda functions, and S3 buckets. The application consists of three primary services:</p>
<ul>
<li><p>An <strong>anti-corruption service</strong> that processes a change data capture (CDC) event stream.</p>
<ul>
<li><p>This stream is converted into events and transmitted to the SNS <code>JobEvents.fifo</code> topic.</p>
</li>
<li><p>Subscribed services receive and process these events asynchronously.</p>
</li>
</ul>
</li>
<li><p>An <strong>analytics service</strong> with an SQS FIFO <code>AnalyticsJobEvents.fifo</code> queue linked to the <code>SNS FIFO JobEvents.fifo</code> topic.</p>
<ul>
<li>SQS FIFO serves as an event source for Lambda functions, storing events in an S3 bucket.</li>
</ul>
</li>
<li><p>An <strong>inventory service</strong> with an SQS FIFO <code>InventoryJobEvents.fifo</code> queue connected to the SNS FIFO <code>JobEvents.fifo</code> topic.</p>
<ul>
<li>It monitors <code>JobCreated</code> and <code>JobDeleted</code> events, storing data in a DynamoDB table with SNS filter policy assistance.</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709541731997/0bba1318-24cf-42c1-bf9b-e36a82326d6e.png" alt="AWS Architecture Diagram" class="image--center mx-auto" /></p>
<p>These resources are deployed using the <a target="_blank" href="https://aws.amazon.com/serverless/sam/">Serverless Application Model (SAM)</a>. We’ll use the <a target="_blank" href="https://github.com/localstack/localstack-vscode-extension">LocalStack VS Code Extension</a> to create the Lambda functions and invoke them directly from the code editor, and the <a target="_blank" href="https://docs.localstack.cloud/user-guide/web-application/resource-browser/">LocalStack Resource Browser</a> to visualize the resources created for the event-driven architecture.</p>
<h3 id="heading-start-your-localstack-container">Start your LocalStack container</h3>
<p>Launch the LocalStack container on your local machine using the specified command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> LOCALSTACK_AUTH_TOKEN=&lt;your-auth-token&gt;
DEBUG=1 localstack start
</code></pre>
<blockquote>
<p>Replace <code>&lt;your-auth-token&gt;</code> with your LocalStack Auth Token to start the LocalStack Pro container.</p>
</blockquote>
<p>Setting <code>DEBUG=1</code> activates more verbose logs, enabling you to observe the event-driven architecture in action during the invocation of Lambda functions. Once initiated, you'll receive a confirmation output indicating that the LocalStack container is up and running.</p>
<pre><code class="lang-bash">     __                     _______ __             __
    / /   ____  _________ _/ / ___// /_____ ______/ /__
   / /   / __ \/ ___/ __ `/ /\__ \/ __/ __ `/ ___/ //_/
  / /___/ /_/ / /__/ /_/ / /___/ / /_/ /_/ / /__/ ,&lt;
 /_____/\____/\___/\__,_/_//____/\__/\__,_/\___/_/|_|

 💻 LocalStack CLI 3.1.0
 👤 Profile: default

[12:12:44] starting LocalStack <span class="hljs-keyword">in</span>    localstack.py:494
           Docker mode 🐳
...
─── LocalStack Runtime Log (press CTRL-C to quit) ────
LocalStack supervisor: starting
LocalStack supervisor: localstack process (PID 16) starting

LocalStack version: 3.1.1.dev20240224232735
LocalStack Docker container id: 73597f01d11a
LocalStack build date: 2024-02-26
LocalStack build git <span class="hljs-built_in">hash</span>: 323c0f8
...
</code></pre>
<h3 id="heading-set-up-the-application">Set Up the application</h3>
<p>To set up the application on your local machine, follow these commands:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/localstack-samples/sample-sam-sns-fifo-dynamodb-lambda.git
<span class="hljs-built_in">cd</span> sample-sam-sns-fifo-dynamodb-lambda
</code></pre>
<p>Next, open Visual Studio Code in the current directory using:</p>
<pre><code class="lang-bash">code .
</code></pre>
<blockquote>
<p>If you encounter an issue where <code>code</code> is not recognized as a command, refer to the <a target="_blank" href="https://code.visualstudio.com/docs/editor/command-line#_code-is-not-recognized-as-an-internal-or-external-command">VS Code documentation</a> to add it to the <code>PATH</code>. Alternatively, manually open the directory from your VS Code.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709541838093/22db1e6d-b190-48d7-b41d-e45ce2b62bbc.png" alt class="image--center mx-auto" /></p>
<p>After setting up the project, you can now install the LocalStack extension to deploy and invoke Lambda functions directly from Visual Studio Code.</p>
<h3 id="heading-install-the-localstack-vs-code-extension">Install the LocalStack VS Code Extension</h3>
<p>To install the <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=LocalStack.localstack">LocalStack VS Code Extension</a>, follow these steps:</p>
<ol>
<li><p>Open Visual Studio Code and go to the <strong>Extensions</strong> icon in the Activity Bar on the side.</p>
</li>
<li><p>In the <strong>Extensions view</strong>, you'll find the most popular extensions on the VS Code Marketplace.</p>
</li>
<li><p>Enter <strong>LocalStack</strong> in the search bar to filter the Marketplace offerings.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542149131/843367ab-2524-4b36-be66-49621114881f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click on the <strong>Install</strong> button next to the LocalStack VS Code Extension to download and install it.</p>
</li>
<li><p>Once the installation is complete, you can proceed to deploy the Lambda function using the newly installed LocalStack VS Code Extension.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542176139/637f65c3-46d3-4341-936a-38c8427d70c7.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-deploy-amp-invoke-the-lambda-function">Deploy &amp; invoke the Lambda function</h3>
<p>To deploy and invoke the Lambda function, navigate to the <code>anti-corruption-service</code> directory and open the <code>app.py</code> file. Within the CodeLens, you'll find two options:</p>
<ul>
<li><p><strong>Deploy Lambda function</strong></p>
</li>
<li><p><strong>Invoke Lambda function</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542217434/9eb0d1a9-37a0-48dd-bb11-a74139651442.png" alt class="image--center mx-auto" /></p>
<p>Click on <strong>Deploy Lambda function</strong> to set up local Lambdas with the running LocalStack container. The VS Code Extension supports either a CloudFormation or Serverless Application Model (SAM) deployment configuration.</p>
<p>In this project, use the SAM configuration available in the <code>template.yaml</code> file in the root directory.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542228792/6f56e8a2-77f1-42a7-aa8f-cab860443a1d.png" alt class="image--center mx-auto" /></p>
<p>You'll be asked to provide a unique name for the CloudFormation stack. Use <code>recruiting-agency</code> or your custom name and press <strong>Enter</strong> to confirm.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542305045/cc2e81c7-bdc7-48aa-a182-0aefda444d72.png" alt class="image--center mx-auto" /></p>
<p>A notification will indicate the start of the Lambda function deployment, followed by another notification confirming its creation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542386129/8959baa7-643c-45ac-8339-e8ed0e5ad2da.png" alt class="image--center mx-auto" /></p>
<p>Return to your file and click on the <strong>Invoke Lambda function</strong> CodeLens. Choose the CloudFormation stack name, which can be either <code>recruiting-agency</code> or the custom name chosen during deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542341269/0e810029-2839-4fee-a3fa-a98c5f7f040d.png" alt class="image--center mx-auto" /></p>
<p>Next, select the specific Lambda function to invoke, in this case, <code>recruiting-agency-AntiCorruptionFunction-*</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542367931/65d4317e-bbbf-45fd-90d9-b8fe5b62ae81.png" alt class="image--center mx-auto" /></p>
<p>LocalStack will initiate the invocation of the Lambda function. The invoked function logs should be displayed in the <strong>Output</strong> panel of the LocalStack VS Code Extension, which is automatically shown after invoking the Lambda function.</p>
<p>To check the LocalStack logs, either use <code>localstack logs</code> or view container logs on Docker Desktop. The logs include detailed information about the Lambda function invocation, such as duration and AWS service interactions.</p>
<pre><code class="lang-bash">localstack logs
...
2024-03-04T07:21:15.300 DEBUG --- [   asgi_gw_3] l.s.lambda_.provider       : Lambda invocation duration: 257.48ms
2024-03-04T07:21:15.308  INFO --- [   asgi_gw_3] localstack.request.aws     : AWS lambda.Invoke =&gt; 200
...
2024-03-04T07:21:15.769 DEBUG --- [   asgi_gw_3] l.services.sqs.models      : deleting message 91c9263f-1976-4cc2-9d98-11b98531f39e from queue arn:aws:sqs:us-east-1:000000000000:AnalyticsJobEvents.fifo
2024-03-04T07:21:15.814  INFO --- [   asgi_gw_3] localstack.request.aws     : AWS dynamodb.UpdateItem =&gt; 200
...
</code></pre>
<p>The <code>DEBUG</code> configuration variable allows you to access verbose logs, providing a comprehensive view of the architecture during the invocation of the Anti-Corruption Service, which serves as an event producer for the Inventory and Analytics service (event consumer).</p>
<h3 id="heading-visualize-the-local-resources">Visualize the local resources</h3>
<p>To view the AWS resources created locally, you can use the <a target="_blank" href="https://app.localstack.cloud/inst/default/status">LocalStack Resource Browser</a>. This browser offers an integrated interface, akin to the AWS Management Console, facilitating CRUD operations (<em>Create-Read-Update-Delete</em>) for managing local resources.</p>
<p>After invoking Lambda, navigate to the <a target="_blank" href="https://app.localstack.cloud/inst/default/resources/dynamodb">DynamoDB Resource Browser</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542449087/f3451607-560a-48b1-82ea-553e2926f275.png" alt class="image--center mx-auto" /></p>
<p>Select the <strong>InventoryTable</strong> resource, and then click on <strong>Items</strong>. You can now visualize job listings from the Inventory service using the <code>InventoryJobEvents.fifo</code> SQS queue and the <code>recruiting-agency-InventoryFunction-*</code> Lambda function.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542472393/d4e5af6d-c27b-4e10-a9b3-dd27f7aefe69.png" alt class="image--center mx-auto" /></p>
<p>You can also access the <a target="_blank" href="https://app.localstack.cloud/inst/default/resources/s3">S3 Resource Browser</a> and click on the <strong>recruiting-agency-analyticsbucket-</strong>* resource.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542482421/267677e4-0731-4070-a3b5-cf89f8fdc554.png" alt class="image--center mx-auto" /></p>
<p>This bucket is utilized by the analytics service to store processed events from the <code>recruiting-agency-AnalyticsFunction-*</code> Lambda function, which uses the <code>AnalyticsJobEvents.fifo</code> SQS queue.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709542498999/1a17f99d-c0a4-4fe6-acb6-181ee389d4dd.png" alt class="image--center mx-auto" /></p>
<p>Similarly, you can leverage the Resource Browser for other functionalities such as querying logs in <a target="_blank" href="https://app.localstack.cloud/inst/default/resources/cloudwatch/groups">CloudWatch Logs Groups</a>, verifying <a target="_blank" href="https://app.localstack.cloud/inst/default/resources/sns">SNS subscriptions</a>, and more!</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The LocalStack VS Code Extension enhances the developer experience (DevX) by accelerating the development cycle for locally running Lambda functions. Using LocalStack, you can establish a developer environment that prioritizes repeatability, reproducibility, and local testing — an upfront investment that pays off in the long term. Frequently testing application code and infrastructure deployments is the key to improving the quality of your cloud applications, and LocalStack enables you to do exactly that — right on your local machine!</p>
<p>The LocalStack VS Code Extension has a few limitations:</p>
<ol>
<li><p>Presently, it supports only CloudFormation and Serverless Application Model.</p>
</li>
<li><p>Invocation of the Lambda function is restricted to the <code>us-east-1</code> region.</p>
</li>
<li><p>It supports only Python Lambdas, invoked with an empty payload.</p>
</li>
</ol>
<p>You can check out the <a target="_blank" href="https://github.com/localstack/localstack-vscode-extension">source code</a> for the LocalStack VS Code Extension, and share feedback/bug reports on our <a target="_blank" href="https://github.com/localstack/localstack-vscode-extension/issues">public issue tracker</a>, as we continue to improve the extension experience.</p>
]]></content:encoded></item><item><title><![CDATA[The API Gateway & Lambda Tricky Integration Configs]]></title><description><![CDATA[Creating demos with LocalStack often leads me into the depths of configurations and scenarios I wouldn't typically encounter. It really never gets boring, and it's how I learn. This hands-on exploration is crucial because it ensures that the solution...]]></description><link>https://hashnode.localstack.cloud/the-api-gateway-lambda-tricky-integration</link><guid isPermaLink="true">https://hashnode.localstack.cloud/the-api-gateway-lambda-tricky-integration</guid><category><![CDATA[localstack]]></category><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[aws-apigateway]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Thu, 25 Apr 2024 22:50:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713993430609/1adae130-9979-4cd1-afbe-afe5e3c3b30d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Creating demos with LocalStack often leads me into the depths of configurations and scenarios I wouldn't typically encounter. It really never gets boring, and it's how I learn. This hands-on exploration is crucial because it ensures that the solutions I demo are robust, match the AWS ecosystem, and go beyond your typical "Hello world!". I generally appreciate configurations that are straightforward, but this is not a story of an easy one. However, every wrong turn is always a learning opportunity.</p>
<h4 id="heading-the-challenge-of-accurate-implementation"><strong>The Challenge of Accurate Implementation</strong></h4>
<p>In a recent <a target="_blank" href="https://github.com/tinyg210/api-gw-tricky-lambda-integration"><strong>project</strong></a>, I aimed to illustrate a setup involving AWS API Gateway and Lambda functions, using Terraform for infrastructure provisioning. The goal was clear: an API Gateway integrating with two Lambda functions, one handling HTTP GET requests and the other managing POST requests. However, the simplicity of the concept did not carry over into its implementation.</p>
<p>Let's have a look at the snippet that started it all:</p>
<pre><code class="lang-apache"><span class="hljs-attribute">resource</span> <span class="hljs-string">"aws_api_gateway_integration"</span> <span class="hljs-string">"get_product_integration"</span> {
  <span class="hljs-attribute">rest_api_id</span>             = aws_api_gateway_rest_api.api.id
  <span class="hljs-attribute">resource_id</span>             = aws_api_gateway_resource.product_api.id
  <span class="hljs-attribute">http_method</span>             = aws_api_gateway_method.get_product.http_method
  <span class="hljs-attribute">type</span>                    = <span class="hljs-string">"AWS_PROXY"</span>
  <span class="hljs-attribute">integration_http_method</span> = <span class="hljs-string">"GET"</span>
  <span class="hljs-attribute">uri</span>                     = aws_lambda_function.get_product.invoke_arn
}
</code></pre>
<p>The real challenge arose while configuring the <code>integration_http_method</code> argument, under the <code>aws_api_gateway_integration</code> resource. AWS documentation, dense and meticulous like a legal contract, requires careful parsing to ensure you're extracting the correct information. If you're running low on patience, it's easy to misinterpret the purpose of <code>integration_http_method</code> as indicating the method the Lambda function will accept.</p>
<h4 id="heading-a-twist-in-the-configuration"><strong>A Twist in the Configuration</strong></h4>
<p>Using LocalStack to run the infrastructure makes it very obvious whether a Lambda function is invoked or not. Since every function starts its own container, it was clear that one was missing. I encountered the same behavior on AWS, so something strange was happening. This sent me on a quest through Google, StackOverflow, and AWS's documentation. Needless to say, I had to make many adjustments to the Terraform config file before finding the (not-so-obvious) answer.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713915419739/8e446774-b3d5-4288-85f3-af9fc642a64f.png" alt class="image--center mx-auto" /></p>
<p>Lesson learned: configure CloudWatch logs, but don't forget that the LocalStack logs will show you a one-liner hinting at a deeper misconfiguration.</p>
<pre><code class="lang-bash">localstack-main  | raise ApiGatewayIntegrationError(<span class="hljs-string">"Internal server error"</span>, status_code=500)
localstack-main  | localstack.services.apigateway.helpers.ApiGatewayIntegrationError: Internal server error
localstack-main  | 2024-04-25T20:28:58.226  INFO --- [   asgi_gw_2] localstack.request.http: GET /dev/productApi =&gt; 500
</code></pre>
<p>It was the <code>ApiGatewayIntegrationError</code> that started to make things clear, and the breakthrough came soon after. LocalStack was instrumental in providing rapid feedback on my configurations, allowing me to iterate quickly without waiting on cloud deployment cycles. This fast feedback loop was critical in ruling out my initial configurations and narrowing down the solution.</p>
<p>Evidently plenty of people had questions about this topic, because the documentation now spells it out:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713991109795/a5a1f238-11b6-4a3a-b414-ff40dd76fd1d.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-clarifying-the-misconceptions"><strong>Clarifying the Misconceptions</strong></h4>
<p>It turned out that the <code>integration_http_method</code> is not about the HTTP methods (GET, POST, DELETE, etc.) that the Lambda is supposed to handle. Rather, it is about how the API Gateway communicates with the Lambda function. It specifies the method used by the API Gateway to invoke the Lambda function, which should invariably be POST, regardless of the HTTP methods your API exposes. This is a subtle yet significant distinction that impacts how services are integrated and interact within AWS.</p>
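<p>Applying that to the earlier snippet is a one-word change: the client-facing method stays <code>GET</code>, while the integration method becomes <code>POST</code> (resource names below are the ones from the snippet above):</p>

```hcl
resource "aws_api_gateway_integration" "get_product_integration" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.product_api.id

  # The method clients use to call this route -- still GET.
  http_method = aws_api_gateway_method.get_product.http_method

  type = "AWS_PROXY"

  # The method API Gateway uses to invoke the backend. For Lambda
  # (proxy) integrations this is always POST, regardless of the
  # HTTP methods the API exposes.
  integration_http_method = "POST"

  uri = aws_lambda_function.get_product.invoke_arn
}
```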
<h4 id="heading-concluding-thoughts"><strong>Concluding Thoughts</strong></h4>
<p>This experience underscored the importance of understanding the underlying mechanisms of cloud services and infrastructure as code tools. The ability to simulate environments with LocalStack before moving to actual AWS deployments can save countless hours of debugging and reconfiguration. For fellow developers venturing into similar territories, remember to check the details, and tools like LocalStack are invaluable in getting your IaC right.</p>
]]></content:encoded></item><item><title><![CDATA[Simulating outages for local cloud apps with LocalStack]]></title><description><![CDATA[LocalStack's core cloud emulator provides the capability to emulate various AWS services, including Lambda, DynamoDB, ECS, and more, directly on your local machine. One notable feature of LocalStack is its support for advanced disaster recovery testi...]]></description><link>https://hashnode.localstack.cloud/simulating-outages-for-local-cloud-apps-with-localstack</link><guid isPermaLink="true">https://hashnode.localstack.cloud/simulating-outages-for-local-cloud-apps-with-localstack</guid><category><![CDATA[localstack]]></category><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[DynamoDB]]></category><category><![CDATA[Chaos Engineering]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Harsh Bardhan Mishra]]></dc:creator><pubDate>Thu, 11 Apr 2024 07:44:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1712820497328/0cb71f95-9e98-4312-a91c-37bca5e068d6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>LocalStack's core cloud emulator provides the capability to emulate various AWS services, including Lambda, DynamoDB, ECS, and more, directly on your local machine. One notable feature of LocalStack is its support for advanced disaster recovery testing, including:</p>
<ul>
<li><p>Region failover</p>
</li>
<li><p>DNS failover</p>
</li>
<li><p>Service failure simulations</p>
</li>
</ul>
<p>All these testing scenarios can be efficiently executed within LocalStack, providing thorough coverage for critical situations in a matter of minutes rather than hours or days. To simulate service failures in LocalStack, you can use the <a target="_blank" href="https://pypi.org/project/localstack-extension-outages/">Outages extension</a> that enables you to start a local outage, right on your developer machine.</p>
<p>This lets you quickly experiment with different failure scenarios and perform chaos testing at an early stage by introducing errors at the infrastructure level. This is valuable because it replicates conditions that would otherwise be infeasible to mimic without deploying to a production environment.</p>
<p>This blog will walk you through the process of setting up a cloud application on your local machine and leveraging the Outages extension to perform service failures in a local environment while using robust error handling to address and mitigate such issues. Furthermore, we will explore how to shift-left your chaos testing by integrating automated testing directly into your workflows.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p><a target="_blank" href="https://docs.localstack.cloud/references/docker-images/#localstack-pro-image">LocalStack Docker image</a> &amp; <a target="_blank" href="https://docs.localstack.cloud/getting-started/auth-token/"><code>LOCALSTACK_AUTH_TOKEN</code></a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/compose/install/">Docker Compose</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html">AWS CLI</a> &amp; <a target="_blank" href="https://docs.localstack.cloud/user-guide/integrations/aws-cli/#localstack-aws-cli-awslocal"><code>awslocal</code> wrapper</a></p>
</li>
<li><p><a target="_blank" href="https://maven.apache.org/install.html">Maven 3.8.5</a> &amp; <a target="_blank" href="https://www.java.com/en/download/help/download_options.html">Java 17</a></p>
</li>
<li><p><a target="_blank" href="https://www.python.org/downloads/">Python</a> &amp; <a target="_blank" href="https://docs.pytest.org/en/8.0.x/"><code>pytest</code> framework</a></p>
</li>
<li><p><a target="_blank" href="https://curl.se/docs/install.html"><code>cURL</code></a></p>
</li>
</ul>
<h2 id="heading-product-management-system-with-lambda-api-gateway-and-dynamodb">Product Management System with Lambda, API Gateway, and DynamoDB</h2>
<p>This demo sets up an HTTP CRUD API functioning as a Product Management System. The components deployed include:</p>
<ul>
<li><p>A DynamoDB table named <code>Products</code>.</p>
</li>
<li><p>Three Lambda functions:</p>
<ul>
<li><p><code>add-product</code> for product addition.</p>
</li>
<li><p><code>get-product</code> for retrieving a product.</p>
</li>
<li><p><code>process-product-events</code> for event processing and DynamoDB writes.</p>
</li>
</ul>
</li>
<li><p>A locally hosted REST API named <code>quote-api-gateway</code>.</p>
</li>
<li><p>SNS topic named <code>ProductEventsTopic</code> and SQS queue named <code>ProductEventsQueue</code>.</p>
</li>
<li><p>API Gateway resource named <code>productApi</code> with additional <code>GET</code> and <code>POST</code> methods.</p>
</li>
</ul>
<p>Additionally, the application sets up a subscription between the SQS queue and the SNS topic, along with an event source mapping between the SQS queue and the <code>process-product-events</code> Lambda function.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708931004666/61cefa35-b44a-4f29-b0e2-2e242897db6b.png" alt="AWS Architecture" class="image--center mx-auto" /></p>
<p>All resources can be deployed using a <a target="_blank" href="https://docs.localstack.cloud/references/init-hooks/">LocalStack Init Hook</a> via the <code>init-resources.sh</code> script in the repository. To begin, clone the repository on your local machine:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/localstack-samples/sample-outages-extension-serverless.git
<span class="hljs-built_in">cd</span> sample-outages-extension-serverless
</code></pre>
<p>Let's create a Docker Compose configuration for simulating a local outage in the running Product Management System.</p>
<h3 id="heading-set-up-the-docker-compose">Set Up the Docker Compose</h3>
<p>To start LocalStack and install the LocalStack Outages extension, create a new Docker Compose configuration. You can find the official Docker Compose file for starting the LocalStack container in <a target="_blank" href="https://docs.localstack.cloud/getting-started/installation/#docker-compose">our documentation</a>.</p>
<p>For an extended setup, include the following in your Docker Compose file:</p>
<ul>
<li><p>Add the <code>EXTENSION_AUTO_INSTALL=localstack-extension-outages</code> environment variable to install the Outages extension from PyPI whenever a new container is created.</p>
</li>
<li><p>Include the <code>LOCALSTACK_HOST=localstack</code> environment variable to ensure LocalStack services are accessible from other containers.</p>
</li>
<li><p>Create the <code>ls_network</code> network to use LocalStack as its DNS server and enable the resolution of the domain name to the LocalStack container (also specify it via <code>LAMBDA_DOCKER_NETWORK</code> environment variable).</p>
</li>
<li><p>Add a new volume attached to the LocalStack container. This volume holds the <code>init-resources.sh</code> file, which is copied to the LocalStack container and executed when the container is ready.</p>
</li>
<li><p>Add another volume to copy the built Lambda functions specified as ZIP files during Lambda function creation.</p>
</li>
<li><p>Optionally, set <code>LAMBDA_RUNTIME_ENVIRONMENT_TIMEOUT</code> to increase how long LocalStack waits for the Lambda runtime environment to start up, since startup speed varies across local machines.</p>
</li>
</ul>
<p>The final Docker Compose configuration is as follows (also <a target="_blank" href="https://github.com/localstack-samples/sample-outages-extension-serverless/blob/main/docker-compose.yml">provided in the cloned repository)</a>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">"3.9"</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">localstack:</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">ls_network</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">localstack</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">localstack/localstack-pro:latest</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"127.0.0.1:4566:4566"</span>            <span class="hljs-comment"># LocalStack Gateway</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"127.0.0.1:4510-4559:4510-4559"</span>  <span class="hljs-comment"># external services port range</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"127.0.0.1:443:443"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">DOCKER_HOST=unix:///var/run/docker.sock</span> <span class="hljs-comment">#unix socket to communicate with the docker daemon</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">LOCALSTACK_HOST=localstack</span> <span class="hljs-comment"># where services are available from other containers</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">LAMBDA_DOCKER_NETWORK=ls_network</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN:?}</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">EXTENSION_AUTO_INSTALL=localstack-extension-outages</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">LAMBDA_RUNTIME_ENVIRONMENT_TIMEOUT=600</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"./volume:/var/lib/localstack"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"/var/run/docker.sock:/var/run/docker.sock"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"./lambda-functions/target/product-lambda.jar:/etc/localstack/init/ready.d/target/product-lambda.jar"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"./init-resources.sh:/etc/localstack/init/ready.d/init-resources.sh"</span>

<span class="hljs-attr">networks:</span>
  <span class="hljs-attr">ls_network:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">ls_network</span>
</code></pre>
<h3 id="heading-deploy-the-local-aws-infrastructure">Deploy the local AWS infrastructure</h3>
<p>Before deploying the demo application locally, build the Lambda functions to ensure they can be copied over during Docker Compose startup. Execute the following command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> lambda-functions &amp;&amp; mvn clean package shade:shade
</code></pre>
<p>The built Lambda function is now available at <code>lambda-functions/target/product-lambda.jar</code>. Start the Docker Compose configuration, which automatically creates the local deployment using AWS CLI and the <code>awslocal</code> script inside the LocalStack container:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> LOCALSTACK_AUTH_TOKEN=&lt;your-auth-token&gt;
docker-compose up
</code></pre>
<p>Check the Docker Compose logs to verify that the Outages extension is being installed, along with other local AWS resources:</p>
<pre><code class="lang-bash">localstack  | Localstack extensions installer: 
localstack  | Localstack extensions installer: Extension installation completed
localstack  | 
localstack  | LocalStack version: 3.1.1.dev20240131022456
....

localstack  | Get Product Lambda...
localstack  | 2024-02-26T05:34:18.091  INFO --- [   asgi_gw_1] localstack.request.aws     : AWS lambda.CreateFunction =&gt; 201
...
localstack  | 2024-02-26T05:34:23.632  INFO --- [   asgi_gw_1] localstack.request.aws     : AWS sns.CreateTopic =&gt; 200
localstack  | {
localstack  |     <span class="hljs-string">"TopicArn"</span>: <span class="hljs-string">"arn:aws:sns:us-east-1:000000000000:ProductEventsTopic"</span>
localstack  | }
localstack  | 2024-02-26T05:34:24.229  INFO --- [   asgi_gw_2] localstack.request.aws     : AWS sqs.CreateQueue =&gt; 200
localstack  | {
localstack  |     <span class="hljs-string">"QueueUrl"</span>: <span class="hljs-string">"http://sqs.us-east-1.localstack:4566/000000000000/ProductEventsQueue"</span>
localstack  | }
...
</code></pre>
<p>After deployment, use <code>cURL</code> to create a product entity. Execute the following command:</p>
<pre><code class="lang-bash">curl --location <span class="hljs-string">'http://12345.execute-api.localhost.localstack.cloud:4566/dev/productApi'</span> \
--header <span class="hljs-string">'Content-Type: application/json'</span> \
--data <span class="hljs-string">'{
  "id": "prod-2004",
  "name": "Ultimate Gadget",
  "price": "49.99",
  "description": "The Ultimate Gadget is the perfect tool for tech enthusiasts looking for the next level in gadgetry. Compact, powerful, and loaded with features."
}'</span>
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">Product added/updated successfully.
</code></pre>
<p>You can verify the successful addition by scanning the DynamoDB table:</p>
<pre><code class="lang-bash">awslocal dynamodb scan \
    --table-name Products
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"Items"</span>: [
        {
            <span class="hljs-string">"name"</span>: {
                <span class="hljs-string">"S"</span>: <span class="hljs-string">"Super Widget"</span>
            },
            <span class="hljs-string">"description"</span>: {
                <span class="hljs-string">"S"</span>: <span class="hljs-string">"A versatile widget that can be used for a variety of purposes. Durable, reliable, and
 affordable."</span>
            },
            <span class="hljs-string">"id"</span>: {
                <span class="hljs-string">"S"</span>: <span class="hljs-string">"prod-1002"</span>
            },
            <span class="hljs-string">"price"</span>: {
                <span class="hljs-string">"N"</span>: <span class="hljs-string">"29.99"</span>
            }
        }
    ],
    <span class="hljs-string">"Count"</span>: 1,
    <span class="hljs-string">"ScannedCount"</span>: 1,
    <span class="hljs-string">"ConsumedCapacity"</span>: null
}
</code></pre>
<h3 id="heading-injecting-chaos-in-the-local-infrastructure">Injecting Chaos in the local infrastructure</h3>
<p>You can now use the Outages extension for chaos testing of your locally deployed infrastructure. You can access the Outages extension through the REST API at <a target="_blank" href="http://outages.localhost.localstack.cloud:4566/outages"><code>http://outages.localhost.localstack.cloud:4566/outages</code></a>, accepting standard HTTP requests.</p>
<p>To create an outage that takes down the DynamoDB service in the <code>us-east-1</code> region, execute the following command:</p>
<pre><code class="lang-bash">curl --location --request POST <span class="hljs-string">'http://outages.localhost.localstack.cloud:4566/outages'</span> \
  --header <span class="hljs-string">'Content-Type: application/json'</span> \
  --data <span class="hljs-string">'
  [
    {
      "service": "dynamodb",
      "region": "us-east-1"
    }
  ]'</span>
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">[{<span class="hljs-string">"service"</span>: <span class="hljs-string">"dynamodb"</span>, <span class="hljs-string">"region"</span>: <span class="hljs-string">"us-east-1"</span>}]
</code></pre>
<p>This command creates an outage for the locally mocked DynamoDB service in the <code>us-east-1</code> region. Verify by scanning the <code>Products</code> table:</p>
<pre><code class="lang-bash">awslocal dynamodb scan \
    --table-name Products
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">An error occurred (ServiceUnavailableException) when calling the Scan operation (reached max retries: 2): Service <span class="hljs-string">'dynamodb'</span> not accessible <span class="hljs-keyword">in</span> <span class="hljs-string">'us-east-1'</span> region due to an outage
</code></pre>
<p>You can verify it in the LocalStack logs:</p>
<pre><code class="lang-bash">localstack  | 2024-02-26T06:12:02.196  INFO --- [   asgi_gw_1] localstack.request.aws     : AWS dynamodb.DescribeEndpoints =&gt; 503 (ServiceUnavailableException)
localstack  | 2024-02-26T06:12:02.200  INFO --- [   asgi_gw_3] localstack.request.aws     : AWS dynamodb.PutItem =&gt; 503 (ServiceUnavailableException)
</code></pre>
<p>You can retrieve the current outage configuration using the following <code>GET</code> request:</p>
<pre><code class="lang-bash">curl --location \
    --request GET <span class="hljs-string">'http://outages.localhost.localstack.cloud:4566/outages'</span>
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">[{<span class="hljs-string">"service"</span>: <span class="hljs-string">"dynamodb"</span>, <span class="hljs-string">"region"</span>: <span class="hljs-string">"us-east-1"</span>}]
</code></pre>
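<p>When scripting these calls instead of using <code>curl</code>, the request bodies can be built with a small helper. The following is a minimal sketch; the function name is illustrative and not part of the extension's API:</p>

```python
def outage_payload(*outages):
    """Build the JSON body expected by the Outages extension endpoint.

    Each outage is a (service, region) pair; calling with no arguments
    yields [], which clears all active outages when POSTed.
    """
    return [{"service": service, "region": region} for service, region in outages]

# Bodies equivalent to the curl commands shown above:
start_body = outage_payload(("dynamodb", "us-east-1"))  # start the DynamoDB outage
stop_body = outage_payload()                            # clear all outages
```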
<h3 id="heading-error-handling-for-the-outage">Error handling for the outage</h3>
<p>Now that the experiment has started, the DynamoDB table is inaccessible, so users can neither retrieve nor add products. The API Gateway will return an <em>Internal Server Error</em>. To prevent this, include proper error handling and a mechanism to prevent data loss during a database outage.</p>
<p>The solution includes an SNS topic, an SQS queue, and a Lambda function that picks up queued elements and retries the <code>PutItem</code> operation on the DynamoDB table. If DynamoDB is still unavailable, the item will be re-queued.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708931161491/bdab8eed-f4fc-47dc-9a32-f6ed515a2b0e.png" alt="AWS Architecture" class="image--center mx-auto" /></p>
<p>Test this by executing the following command:</p>
<pre><code class="lang-bash">curl --location <span class="hljs-string">'http://12345.execute-api.localhost.localstack.cloud:4566/dev/productApi'</span> \
     --header <span class="hljs-string">'Content-Type: application/json'</span> \
     --data <span class="hljs-string">'{
       "id": "prod-1003",
       "name": "Super Widget",
       "price": "29.99",
       "description": "A versatile widget that can be used for a variety of purposes. Durable, reliable, and affordable."
     }'</span>
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">A DynamoDB error occurred. Message sent to queue.
</code></pre>
<p>To stop the outage, send a <code>POST</code> request with an empty list as the configuration. The following request clears the current configuration:</p>
<pre><code class="lang-bash">curl --location --request POST <span class="hljs-string">'http://outages.localhost.localstack.cloud:4566/outages'</span> \
--header <span class="hljs-string">'Content-Type: application/json'</span> \
--data <span class="hljs-string">'[]'</span>
</code></pre>
<p>Now, scan the DynamoDB table and verify that the <code>Super Widget</code> item has been inserted:</p>
<pre><code class="lang-bash">awslocal dynamodb scan \
    --table-name Products
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">awslocal dynamodb scan --table-name Products
{
    <span class="hljs-string">"Items"</span>: [
        {
            <span class="hljs-string">"name"</span>: {
                <span class="hljs-string">"S"</span>: <span class="hljs-string">"Super Widget"</span>
            },
            ...
            }
        },
        {
            <span class="hljs-string">"name"</span>: {
                <span class="hljs-string">"S"</span>: <span class="hljs-string">"Ultimate Gadget"</span>
            },
            ...
        }
    <span class="hljs-string">"Count"</span>: 2,
    <span class="hljs-string">"ScannedCount"</span>: 2,
    <span class="hljs-string">"ConsumedCapacity"</span>: null
}
</code></pre>
<h3 id="heading-perform-automated-chaos-testing">Perform automated chaos testing</h3>
<p>You can now implement a straightforward chaos test using <code>pytest</code> to start an outage. The test will:</p>
<ul>
<li><p>Validate the availability of Lambda functions and the DynamoDB table.</p>
</li>
<li><p>Start a local outage and verify if DynamoDB API calls throw an error.</p>
</li>
<li><p>Verify that the outage is active, then stop it and confirm it has ended.</p>
</li>
<li><p>Query the DynamoDB table for new items and assert their presence.</p>
</li>
</ul>
<p>For integration testing, you can use the AWS SDK for Python (<code>boto3</code>) and the <code>pytest</code> framework. In a new directory named <code>tests</code>, create a file named <code>test_chaos.py</code>. Add the necessary imports and <code>pytest</code> fixtures:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pytest
<span class="hljs-keyword">import</span> time
<span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">import</span> requests

LOCALSTACK_ENDPOINT = <span class="hljs-string">"http://localhost:4566"</span>
DYNAMODB_TABLE_NAME = <span class="hljs-string">"Products"</span>
LAMBDA_FUNCTIONS = [<span class="hljs-string">"add-product"</span>, <span class="hljs-string">"get-product"</span>, <span class="hljs-string">"process-product-events"</span>]

<span class="hljs-meta">@pytest.fixture(scope="module")</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">dynamodb_resource</span>():</span>
    <span class="hljs-keyword">return</span> boto3.resource(<span class="hljs-string">"dynamodb"</span>, endpoint_url=LOCALSTACK_ENDPOINT)


<span class="hljs-meta">@pytest.fixture(scope="module")</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_client</span>():</span>
    <span class="hljs-keyword">return</span> boto3.client(<span class="hljs-string">"lambda"</span>, endpoint_url=LOCALSTACK_ENDPOINT)
</code></pre>
<p>Add the following code to perform a simple smoke test ensuring the availability of Lambda functions and the DynamoDB table:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_dynamodb_table_exists</span>(<span class="hljs-params">dynamodb_resource</span>):</span>
    tables = dynamodb_resource.tables.all()
    table_names = [table.name <span class="hljs-keyword">for</span> table <span class="hljs-keyword">in</span> tables]
    <span class="hljs-keyword">assert</span> DYNAMODB_TABLE_NAME <span class="hljs-keyword">in</span> table_names


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_lambda_functions_exist</span>(<span class="hljs-params">lambda_client</span>):</span>
    functions = lambda_client.list_functions()[<span class="hljs-string">"Functions"</span>]
    function_names = [func[<span class="hljs-string">"FunctionName"</span>] <span class="hljs-keyword">for</span> func <span class="hljs-keyword">in</span> functions]
    <span class="hljs-keyword">assert</span> all(func_name <span class="hljs-keyword">in</span> function_names <span class="hljs-keyword">for</span> func_name <span class="hljs-keyword">in</span> LAMBDA_FUNCTIONS)
</code></pre>
<p>Now, add the following code to chaos test the locally deployed DynamoDB table:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_dynamodb_outage</span>():</span>
    outage_payload = [{<span class="hljs-string">"service"</span>: <span class="hljs-string">"dynamodb"</span>, <span class="hljs-string">"region"</span>: <span class="hljs-string">"us-east-1"</span>}]
    requests.post(
        <span class="hljs-string">"http://outages.localhost.localstack.cloud:4566/outages"</span>, json=outage_payload
    )

    <span class="hljs-comment"># Make a request to DynamoDB and assert an error</span>
    url = <span class="hljs-string">"http://12345.execute-api.localhost.localstack.cloud:4566/dev/productApi"</span>
    headers = {<span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span>}
    data = {
        <span class="hljs-string">"id"</span>: <span class="hljs-string">"prod-1002"</span>,
        <span class="hljs-string">"name"</span>: <span class="hljs-string">"Super Widget"</span>,
        <span class="hljs-string">"price"</span>: <span class="hljs-string">"29.99"</span>,
        <span class="hljs-string">"description"</span>: <span class="hljs-string">"A versatile widget that can be used for a variety of purposes. Durable, reliable, and affordable."</span>,
    }

    response = requests.post(url, headers=headers, json=data)

    <span class="hljs-keyword">assert</span> <span class="hljs-string">"error"</span> <span class="hljs-keyword">in</span> response.text

    <span class="hljs-comment"># Check if outage is running</span>
    outage_status = requests.get(
        <span class="hljs-string">"http://outages.localhost.localstack.cloud:4566/outages"</span>
    ).json()
    <span class="hljs-keyword">assert</span> outage_payload == outage_status

    <span class="hljs-comment"># Stop the outage</span>
    requests.post(<span class="hljs-string">"http://outages.localhost.localstack.cloud:4566/outages"</span>, json=[])

    <span class="hljs-comment"># Check if outage is stopped</span>
    outage_status = requests.get(
        <span class="hljs-string">"http://outages.localhost.localstack.cloud:4566/outages"</span>
    ).json()
    <span class="hljs-keyword">assert</span> <span class="hljs-keyword">not</span> outage_status

    <span class="hljs-comment"># Wait for a few seconds</span>
    time.sleep(<span class="hljs-number">60</span>)

    <span class="hljs-comment"># Query if there are items in DynamoDB table</span>
    dynamodb = boto3.resource(<span class="hljs-string">"dynamodb"</span>, endpoint_url=LOCALSTACK_ENDPOINT)
    table = dynamodb.Table(DYNAMODB_TABLE_NAME)
    response = table.scan()
    items = response[<span class="hljs-string">"Items"</span>]
    print(items)
    <span class="hljs-keyword">assert</span> <span class="hljs-string">"Super Widget"</span> <span class="hljs-keyword">in</span> [item[<span class="hljs-string">"name"</span>] <span class="hljs-keyword">for</span> item <span class="hljs-keyword">in</span> items]
</code></pre>
<p>Run the test locally using the following command:</p>
<pre><code class="lang-bash">pytest
</code></pre>
<p>The output should be:</p>
<pre><code class="lang-bash">=========================================== <span class="hljs-built_in">test</span> session starts ============================================
platform darwin -- Python 3.10.4, pytest-7.2.0, pluggy-1.4.0
rootdir: ...
plugins: html-3.2.0, pylint-0.19.0, json-report-1.5.0, Faker-18.4.0, cov-4.0.0, metadata-2.0.4, anyio-3.6.2, datadir-1.4.1
collected 3 items                                                                                          

tests/test_outage.py ...                                                 [100%]

======================================= 3 passed <span class="hljs-keyword">in</span> 75.86s (0:01:15) =======================================
</code></pre>
<p>You now have a successful outage test running on your local machine using LocalStack 🎊</p>
<p>You can further run the tests in a continuous integration (CI) environment, such as GitHub Actions, to ensure that you build &amp; test your infrastructure's resilience with every commit. You can find the <a target="_blank" href="https://github.com/localstack-samples/sample-outages-extension-serverless/blob/main/.github/workflows/ci.yml">sample workflow on the GitHub repository</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708932973283/06e822db-b662-4b17-a2da-7bb8c5b548f0.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The Outages extension also lets you chaos test other resources, such as Lambda functions and S3 buckets, to assess service continuity, user experience, and your system’s resilience to the failures introduced. An ideal strategy is to design experiments and group them into the categories of <strong>knowns</strong> and <strong>unknowns</strong>, while analyzing whatever chaos your system might end up encountering.</p>
<p>In the upcoming blog posts, we'll demonstrate how to perform more complex chaos testing scenarios, such as RDS &amp; Route53 failovers, inject latency to every API call, and use AWS Resilience Testing Tools such as <a target="_blank" href="https://aws.amazon.com/fis/">Fault Injection Simulator (FIS)</a> locally. Stay tuned for more blogs on how LocalStack is enhancing your cloud development and testing experience.</p>
<p>You can find the code in this <a target="_blank" href="https://github.com/localstack-samples/sample-outages-extension-serverless">GitHub repository</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Testing AWS CDK Stacks on GitHub Actions with LocalStack]]></title><description><![CDATA[AWS Cloud Development Kit (CDK) is an open-source framework that enables the creation of Infrastructure-as-Code configurations using programming languages like TypeScript, Python, and more. CDK comes with a handy command line interface (CLI) that fac...]]></description><link>https://hashnode.localstack.cloud/testing-aws-cdk-stacks-on-github-actions-with-localstack</link><guid isPermaLink="true">https://hashnode.localstack.cloud/testing-aws-cdk-stacks-on-github-actions-with-localstack</guid><category><![CDATA[CDK]]></category><category><![CDATA[AWS]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[localstack]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[pytest]]></category><dc:creator><![CDATA[Harsh Bardhan Mishra]]></dc:creator><pubDate>Fri, 01 Mar 2024 09:44:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1709285802694/c09d4155-512b-437e-8b99-cce217c21d48.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AWS Cloud Development Kit (CDK) is an open-source framework that enables the creation of Infrastructure-as-Code configurations using programming languages like TypeScript, Python, and more. CDK comes with a handy command line interface (CLI) that facilitates direct interaction with the system, allowing users to execute various commands such as <code>deploy</code>, <code>destroy</code>, and <code>synth</code>.</p>
<p>Continuous Integration (CI) environments are commonly used for testing CDK stacks before deploying them to the actual AWS cloud. However, configuring AWS credentials and tearing down the CDK stack after testing require manual setup, which is often tiresome. LocalStack streamlines integration testing by allowing CDK stack deployment and testing against a cloud emulator.</p>
<p>This blog will guide you in creating a GitHub Action to test CDK stack deployment within a CI workflow. Additionally, we will delve into implementing a basic integration test to verify the functional aspects of the infrastructure deployed on LocalStack.</p>
<h2 id="heading-how-does-localstack-work-with-cdk">How does LocalStack work with CDK?</h2>
<p>LocalStack runs as a Docker container either on your local machine or in an automated setting. Once started, you can utilize LocalStack alongside tools like AWS CLI or Terraform to create local AWS resources. For local deployment and testing of CDK stacks, LocalStack provides a wrapper CLI called <code>cdklocal</code> for utilizing the CDK library with local APIs.</p>
<p>To set up <code>cdklocal</code>, you can use the <a target="_blank" href="https://www.npmjs.com/package/aws-cdk-local">npm library</a> with the following commands:</p>
<pre><code class="lang-bash">npm install -g aws-cdk-local aws-cdk
...
cdklocal --version
2.121.1
</code></pre>
<p>Internally, CDK integrates with AWS CloudFormation for infrastructure deployment and provisioning. When using <code>cdklocal</code>, you leverage LocalStack's native CloudFormation engine, which creates the resources locally and removes the need to deploy and test on the real cloud.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p><a target="_blank" href="https://app.localstack.cloud/">LocalStack Web Application account</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/join">GitHub Account</a> &amp; <a target="_blank" href="https://github.com/cli/cli?tab=readme-ov-file#installation"><code>gh</code> CLI</a> (optional)</p>
</li>
</ul>
<h2 id="heading-inventory-management-system-with-sqs-lambda-s3-and-dynamodb">Inventory Management System with SQS, Lambda, S3 and DynamoDB</h2>
<p>This demo uses a <a target="_blank" href="https://github.com/aws-samples/amazon-sqs-best-practices-cdk">public AWS example</a> to showcase an event-driven inventory management system. The system deploys SQS, DynamoDB, Lambda, and S3, functioning as follows:</p>
<ul>
<li><p>CSV files are uploaded to an S3 bucket to centralize and secure the inventory data.</p>
</li>
<li><p>A Lambda function reads and parses the CSV file, extracting inventory update records.</p>
</li>
<li><p>Each record is converted into a message and sent to an SQS queue. Another Lambda function continuously checks the SQS queue for new messages.</p>
</li>
<li><p>Upon receiving the message, it retrieves the inventory update details and updates the inventory levels in DynamoDB.</p>
</li>
</ul>
<p><img src="https://github.com/aws-samples/amazon-sqs-best-practices-cdk/raw/main/static/architecture.png" alt="Architecture diagram" /></p>
<h3 id="heading-create-the-github-action-workflow">Create the GitHub Action workflow</h3>
<p>GitHub Actions is a tool that automates workflows: you can define custom workflows that automatically build, test, and deploy your code whenever you push changes to your repository.</p>
<p>For this demo, you will implement a workflow that does the following:</p>
<ul>
<li><p>Checkout the repository from GitHub.</p>
</li>
<li><p>Perform the steps to install dependencies.</p>
</li>
<li><p>Bootstrap and deploy the CDK stack on the GitHub Action Runner.</p>
</li>
<li><p>Run a basic integration test to verify the functionality.</p>
</li>
</ul>
<p>To start, fork the <a target="_blank" href="https://github.com/aws-samples/amazon-sqs-best-practices-cdk">AWS sample</a> on GitHub. If you use GitHub's <code>gh</code> CLI, fork and clone the repository with this command:</p>
<pre><code class="lang-bash">gh repo fork https://github.com/aws-samples/amazon-sqs-best-practices-cdk --<span class="hljs-built_in">clone</span>
</code></pre>
<p>After forking and cloning:</p>
<ul>
<li><p>Create a new directory called <code>.github</code> and a sub-directory called <code>workflows</code>.</p>
</li>
<li><p>Create a new file called <code>main.yml</code> in the <code>workflows</code> sub-directory.</p>
</li>
</ul>
<p>Now you're ready to create your GitHub Action workflow which will deploy your CDK stack using LocalStack's cloud emulator.</p>
<h3 id="heading-set-up-the-actions-amp-dependencies">Set Up the Actions &amp; dependencies</h3>
<p>To achieve the goal, you can use a few prebuilt Actions:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/actions/checkout"><code>actions/checkout</code></a>: Clone the repository for deploying the stacks.</p>
</li>
<li><p><a target="_blank" href="https://github.com/localstack/setup-localstack"><code>setup-localstack</code></a>: Set up the GitHub Actions workflow with LocalStack container &amp; <code>localstack</code> CLI</p>
</li>
<li><p><a target="_blank" href="https://github.com/actions/setup-node"><code>setup-node</code></a>: Set up the GitHub Actions workflow with NodeJS &amp; <code>npm</code>.</p>
</li>
<li><p><a target="_blank" href="https://github.com/actions/setup-python"><code>setup-python</code></a>: Set up the GitHub Actions workflow with Python &amp; <code>pip</code>.</p>
</li>
</ul>
<p>Add the following content to the <code>main.yml</code> file created earlier:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">on</span> <span class="hljs-string">LocalStack</span> 

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
</code></pre>
<p>This ensures that every time a pull request is raised or a new commit is pushed to the <code>main</code> branch, the action is triggered.</p>
<p>Create a new job named <code>cdk</code>, specify the GitHub-hosted runner that executes the workflow steps, and check out the code:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">cdk:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">Setup</span> <span class="hljs-string">infrastructure</span> <span class="hljs-string">using</span> <span class="hljs-string">CDK</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">the</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>
</code></pre>
<p>Now, set up the step to install Python &amp; NodeJS in the runner as part of the workflow step:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Setup</span> <span class="hljs-string">Node.js</span>
  <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-node@v3</span>
  <span class="hljs-attr">with:</span>
    <span class="hljs-attr">node-version:</span> <span class="hljs-number">18</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">Python</span>
  <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-python@v4</span>
  <span class="hljs-attr">with:</span>
    <span class="hljs-attr">python-version:</span> <span class="hljs-string">'3.10'</span>
</code></pre>
<p>Next, set up LocalStack in your runner using the <code>setup-localstack</code> action:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Start</span> <span class="hljs-string">LocalStack</span>
  <span class="hljs-attr">uses:</span> <span class="hljs-string">LocalStack/setup-localstack@main</span>
  <span class="hljs-attr">with:</span>
    <span class="hljs-attr">image-tag:</span> <span class="hljs-string">'latest'</span>
    <span class="hljs-attr">install-awslocal:</span> <span class="hljs-string">'true'</span>
    <span class="hljs-attr">use-pro:</span> <span class="hljs-string">'true'</span>
    <span class="hljs-attr">configuration:</span> <span class="hljs-string">LS_LOG=trace</span>
  <span class="hljs-attr">env:</span>
    <span class="hljs-attr">LOCALSTACK_API_KEY:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.LOCALSTACK_API_KEY</span> <span class="hljs-string">}}</span>
</code></pre>
<p>This action pulls the LocalStack Pro image (<code>localstack/localstack-pro:latest</code>), installs the <code>localstack</code> CLI, and sets up <code>awslocal</code> to redirect AWS API requests to the LocalStack container. The <code>LS_LOG</code> configuration variable is set to enable <code>trace</code>-level logging.</p>
<p>A repository secret <code>LOCALSTACK_API_KEY</code> is also specified to activate your Pro license on the GitHub Actions runner. Later in the article, you will learn the steps to configure the secret in your GitHub repository.</p>
<p>Finally, install other dependencies, such as CDK &amp; <code>cdklocal</code>, and various Python libraries specified in <code>requirements.txt</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">CDK</span>
  <span class="hljs-attr">run:</span> <span class="hljs-string">|
    npm install -g aws-cdk-local aws-cdk
    cdklocal --version
</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
  <span class="hljs-attr">run:</span> <span class="hljs-string">|</span>
    <span class="hljs-string">pip</span> <span class="hljs-string">install</span> <span class="hljs-string">-r</span> <span class="hljs-string">requirements.txt</span>
</code></pre>
<p>Now, you are ready to deploy the CDK stack on the GitHub Action runner by specifying the appropriate CDK commands in the workflow file.</p>
<h3 id="heading-deploy-the-cdk-stack-on-localstack">Deploy the CDK stack on LocalStack</h3>
<p>To deploy the CDK stack, employ the <code>cdklocal</code> wrapper. First, ensure that each AWS environment intended for resource deployment is bootstrapped. Execute the following <code>cdklocal bootstrap</code> command, adjusting the AWS account ID (<code>000000000000</code>) and region (<code>us-east-1</code>) as needed:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Bootstrap</span> <span class="hljs-string">using</span> <span class="hljs-string">CDK</span>
  <span class="hljs-attr">run:</span> <span class="hljs-string">|</span>
      <span class="hljs-string">cdklocal</span> <span class="hljs-string">bootstrap</span> <span class="hljs-string">aws://000000000000/us-east-1</span>
</code></pre>
<p>Note that the account ID and region values can be customized for multi-account and multi-region setups in LocalStack.</p>
<p>Next, confirm correct stack synthesis by running <code>cdklocal synth</code>. If your application includes multiple stacks, specify which ones to synthesize; in this case there is a single stack, so the CDK CLI detects it automatically:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Synthesize</span> <span class="hljs-string">using</span> <span class="hljs-string">CDK</span>
  <span class="hljs-attr">run:</span> <span class="hljs-string">|</span>
      <span class="hljs-string">cdklocal</span> <span class="hljs-string">synth</span>
</code></pre>
<p>Following successful synthesis, proceed to deploy the CDK stack with <code>cdklocal deploy</code>. To avoid manual confirmation in non-interactive environments like GitHub Actions, include <code>--require-approval never</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">using</span> <span class="hljs-string">CDK</span>
  <span class="hljs-attr">run:</span> <span class="hljs-string">|</span>
      <span class="hljs-string">cdklocal</span> <span class="hljs-string">deploy</span> <span class="hljs-string">--require-approval</span> <span class="hljs-string">never</span>
</code></pre>
<h3 id="heading-implement-integration-tests-against-localstack">Implement integration tests against LocalStack</h3>
<p>Now, you can implement a straightforward integration test with the following steps:</p>
<ul>
<li><p>Validate CDK outputs (<code>cdk.out</code> and <code>manifest.json</code>).</p>
</li>
<li><p>Query the deployed S3 bucket and DynamoDB table.</p>
</li>
<li><p>Trigger CSV processing by uploading a sample CSV file to the S3 bucket.</p>
</li>
<li><p>Scan the DynamoDB table to confirm inventory updates.</p>
</li>
</ul>
<p>For integration testing, you can use the AWS SDK for Python (<code>boto3</code>) and the <code>pytest</code> framework. Create a new directory called <code>tests</code> and create a file named <code>test_infra.py</code>. Add the necessary imports and <a target="_blank" href="https://docs.pytest.org/en/6.2.x/reference.html#fixtures-api"><code>pytest</code> fixtures</a>:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">import</span> pytest
<span class="hljs-keyword">import</span> time


<span class="hljs-meta">@pytest.fixture</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">s3_client</span>():</span>
    <span class="hljs-keyword">return</span> boto3.client(
        <span class="hljs-string">"s3"</span>,
        endpoint_url=<span class="hljs-string">"http://localhost:4566"</span>,
        region_name=<span class="hljs-string">"us-east-1"</span>,
        aws_access_key_id=<span class="hljs-string">"test"</span>,
        aws_secret_access_key=<span class="hljs-string">"test"</span>,
    )


<span class="hljs-meta">@pytest.fixture</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">dynamodb_client</span>():</span>
    <span class="hljs-keyword">return</span> boto3.client(
        <span class="hljs-string">"dynamodb"</span>,
        endpoint_url=<span class="hljs-string">"http://localhost:4566"</span>,
        region_name=<span class="hljs-string">"us-east-1"</span>,
        aws_access_key_id=<span class="hljs-string">"test"</span>,
        aws_secret_access_key=<span class="hljs-string">"test"</span>,
    )
</code></pre>
<p>In this code, <code>boto3</code> clients for interacting with the LocalStack instance are created. Two clients, <code>s3_client</code> and <code>dynamodb_client</code>, are generated, specifying the region and mock AWS Access Key ID and Secret Access Key.</p>
<p>Now, include the following code to execute an integration test against the deployed infrastructure:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_cdk</span>(<span class="hljs-params">s3_client, dynamodb_client</span>):</span>
    <span class="hljs-comment"># Assert CDK outputs</span>
    <span class="hljs-keyword">assert</span> os.path.exists(<span class="hljs-string">"cdk.out"</span>)
    <span class="hljs-keyword">assert</span> os.path.exists(<span class="hljs-string">"cdk.out/manifest.json"</span>)

    <span class="hljs-comment"># Check S3 bucket existence</span>
    target_bucket_prefix = <span class="hljs-string">"sqsblogstack-inventoryupdatesbucketfe-"</span>
    response = s3_client.list_buckets()
    target_bucket = next(
        (
            bucket[<span class="hljs-string">"Name"</span>]
            <span class="hljs-keyword">for</span> bucket <span class="hljs-keyword">in</span> response[<span class="hljs-string">"Buckets"</span>]
            <span class="hljs-keyword">if</span> bucket[<span class="hljs-string">"Name"</span>].startswith(target_bucket_prefix)
        ),
        <span class="hljs-literal">None</span>,
    )
    <span class="hljs-keyword">assert</span> target_bucket <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>

    local_file_path = <span class="hljs-string">"sqs_blog/sample_file.csv"</span>
    s3_object_key = <span class="hljs-string">"sample_file.csv"</span>
    s3_client.upload_file(local_file_path, target_bucket, s3_object_key)

    target_ddb_prefix = <span class="hljs-string">"SqsBlogStack-InventoryUpdates"</span>
    response = dynamodb_client.list_tables()
    target_ddb = next(
        (
            table
            <span class="hljs-keyword">for</span> table <span class="hljs-keyword">in</span> response[<span class="hljs-string">"TableNames"</span>]
            <span class="hljs-keyword">if</span> table.startswith(target_ddb_prefix)
        ),
        <span class="hljs-literal">None</span>,
    )
    <span class="hljs-keyword">assert</span> target_ddb <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>
    time.sleep(<span class="hljs-number">10</span>)

    <span class="hljs-comment"># Check if there is at least one item in the DynamoDB table</span>
    response = dynamodb_client.scan(TableName=target_ddb)
    <span class="hljs-keyword">assert</span> response.get(<span class="hljs-string">"Count"</span>, <span class="hljs-number">0</span>) &gt; <span class="hljs-number">0</span>
</code></pre>
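<p>The test above waits a fixed <code>time.sleep(10)</code> before scanning the table. A polling helper is usually more reliable in CI, since it returns as soon as the condition holds and fails with a clear error on timeout. This is a sketch; <code>wait_until</code> is not part of the original test code:</p>

```python
import time


def wait_until(condition, timeout=30.0, interval=1.0):
    """Call `condition` repeatedly until it returns a truthy value,
    or raise TimeoutError once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")
```

<p>With this helper, the sleep and the final assertion collapse into a single call like <code>wait_until(lambda: dynamodb_client.scan(TableName=target_ddb).get("Count", 0) &gt; 0)</code>.</p>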
<p>This code uploads a sample CSV file (<code>sqs_blog/sample_file.csv</code>) to the local S3 bucket and checks for inserted items in the DynamoDB table. To automate running this test in a GitHub Action workflow, add the following step:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">integration</span> <span class="hljs-string">tests</span> 
  <span class="hljs-attr">run:</span> <span class="hljs-string">|
      pip3 install boto3 pytest 
      pytest</span>
</code></pre>
<h3 id="heading-configure-a-ci-key-for-github-actions">Configure a CI key for GitHub Actions</h3>
<p>Before you trigger your workflow, set up a continuous integration (CI) key for LocalStack. LocalStack requires a CI Key for use in CI or similar automated environments.</p>
<p>Follow these steps to add your LocalStack CI key to your GitHub repository:</p>
<ol>
<li><p>Go to the <a target="_blank" href="https://app.localstack.cloud/">LocalStack Web Application</a> and access the <a target="_blank" href="https://app.localstack.cloud/workspace/ci-keys">CI Keys</a> page.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708501622712/2e787c5e-4d18-4cf5-90e1-5b55f5733dc4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Switch to the <strong>Generate CI Key</strong> tab, provide a name, and click <strong>Generate CI Key</strong>.</p>
</li>
<li><p>In your <a target="_blank" href="https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions">GitHub repository secrets</a>, set the <strong>Name</strong> as <code>LOCALSTACK_API_KEY</code> and the <strong>Secret</strong> as the CI Key.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708501683537/0d114173-6ec8-435e-9f55-1f3871f264f7.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p>Now, you can commit and push your workflow to your forked GitHub repository.</p>
<h3 id="heading-run-the-github-action-workflow">Run the GitHub Action workflow</h3>
<p>With the GitHub Action Workflow in place, your CDK stack will be tested and deployed on LocalStack whenever changes are made to the <code>main</code> branch of your GitHub repository. 🎊</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708501356372/4bacddab-acf3-4b1d-8667-b18e815e33ea.png" alt="A successful CI workflow run" class="image--center mx-auto" /></p>
<p>If your CDK deployment encounters issues and fails on LocalStack, you can troubleshoot by adding extra steps to generate a diagnostics report. After downloading, you can visualize logs and environment variables using a tool like <a target="_blank" href="https://github.com/silv-io/diapretty"><code>diapretty</code></a>:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Generate</span> <span class="hljs-string">a</span> <span class="hljs-string">Diagnostic</span> <span class="hljs-string">Report</span>
  <span class="hljs-attr">if:</span> <span class="hljs-string">failure()</span>
  <span class="hljs-attr">run:</span> <span class="hljs-string">|
      curl -s localhost:4566/_localstack/diagnose | gzip -cf &gt; diagnose.json.gz
</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Upload</span> <span class="hljs-string">the</span> <span class="hljs-string">Diagnostic</span> <span class="hljs-string">Report</span>
  <span class="hljs-attr">if:</span> <span class="hljs-string">failure()</span>
  <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/upload-artifact@v3</span>
  <span class="hljs-attr">with:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">diagnose.json.gz</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">./diagnose.json.gz</span>
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Testing your infrastructure code with LocalStack provides a developer-friendly experience, supporting a quick and agile test-driven development cycle. You incur no costs on the actual AWS cloud, and you don't have to wait around for prolonged CI runs.</p>
<p>In the upcoming blog posts, we'll demonstrate how to inject your infrastructure state and execute application integration tests without the need for manual deployments (using CDK or Terraform). Stay tuned for more blogs on how LocalStack is enhancing your cloud development and testing experience.</p>
<p>You can find the GitHub Action workflow and integration test in <a target="_blank" href="https://github.com/HarshCasper/cdk-localstack-github-actions">this GitHub repository</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Efficient LocalStack: S3 Endpoint Configuration]]></title><description><![CDATA[In my recent adventures with LocalStack, I've repeatedly been stumbling upon a question that sheds light on a critical yet frequently overlooked aspect of its configuration. This oversight can lead to puzzling issues that leave users perplexed, searc...]]></description><link>https://hashnode.localstack.cloud/efficient-localstack-s3-endpoint-configuration</link><guid isPermaLink="true">https://hashnode.localstack.cloud/efficient-localstack-s3-endpoint-configuration</guid><category><![CDATA[s3 sdk]]></category><category><![CDATA[localstack]]></category><category><![CDATA[S3]]></category><category><![CDATA[AWS SDK]]></category><category><![CDATA[AWS]]></category><category><![CDATA[S3-bucket]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Wed, 21 Feb 2024 15:04:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708515902109/159938ac-8308-4264-89b4-90477159fe06.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my recent adventures with LocalStack, I've repeatedly been stumbling upon a question that sheds light on a critical yet frequently overlooked aspect of its configuration. This oversight can lead to puzzling issues that leave users perplexed, searching for answers that, surprisingly, are often hidden in plain sight. It's a reminder of the importance of paying attention to the finer details, especially when they hold the key to unlocking the full potential of this tool.</p>
<p>That said, let's skip the unnecessary headaches and look at how to correctly make requests to our S3 buckets. As always, this post is backed by an example that lives in a <a target="_blank" href="https://github.com/tinyg210/s3-path-demo"><strong>public GitHub repository</strong></a><strong>.</strong></p>
<h2 id="heading-tldr">TL;DR</h2>
<p>In LocalStack, the S3 service stands out for its approach to endpoint configuration, which is distinct from all other services. Unlike the standard format used across LocalStack, the S3 service adopts a specialized format: <a target="_blank" href="http://s3.localhost.localstack.cloud"><code>s3.localhost.localstack.cloud</code></a>.</p>
<p>This convention mirrors AWS S3's virtual-hosted-style addressing behavior, facilitating a more accurate emulation of S3 interactions in a local development environment.</p>
<h2 id="heading-path-style-vs-virtual-hosted-style-s3-requests"><strong>Path-Style vs. Virtual-Hosted-Style S3 Requests</strong></h2>
<p>The main difference between path-style and virtual-hosted-style endpoints when accessing files in an S3 bucket lies in how the bucket name is included in the URL.</p>
<ul>
<li><p><strong>Path-style endpoints</strong> format the URL by placing the bucket name as part of the path. The structure looks like this: <a target="_blank" href="http://s3.amazonaws.com/bucket-name/key-name"><code>http://s3.&lt;region&gt;.amazonaws.com/bucket-name/key-name</code></a>.</p>
</li>
<li><p><strong>Virtual-hosted-style endpoints</strong>, on the other hand, include the bucket name as a subdomain in the URL. The format is: <a target="_blank" href="http://bucket-name.s3.amazonaws.com/key-name"><code>http://bucket-name.s3.&lt;region&gt;.amazonaws.com/key-name</code></a>. This method allows S3 to serve requests for different buckets from a single web server and makes it easier to use SSL/TLS certificates tied to the bucket name as a domain. It's the preferred method for most modern applications due to its cleaner URL structure and compatibility with DNS standards.</p>
</li>
</ul>
<p>The <code>s3.</code> prefix in Amazon S3 URIs serves as a component for service identification, enabling AWS to efficiently route, manage, and secure access to data stored in S3. According to the <a target="_blank" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#virtual-hosted-style-access">AWS documentation</a>, this convention also supports the virtual hosting of buckets for flexible access by acting as a delimiter and allowing the service to identify the target bucket correctly.</p>
<h3 id="heading-s3-requests-in-localstack">S3 Requests in LocalStack</h3>
<p>LocalStack, having a high parity level with AWS, also distinguishes between path-style and virtual-hosted-style requests based on the request's Host header. This means that the bucket name is part of the Host header, visible in the URL. To ensure LocalStack parses the bucket name correctly, the URL must be prefixed with <code>s3.</code>, such as <a target="_blank" href="http://s3.localhost.localstack.cloud"><code>s3.localhost.localstack.cloud</code></a>.</p>
<p>By default, most <strong>SDK</strong>s opt for virtual-hosted-style requests, automatically prefixing endpoints with the bucket name. If your endpoint doesn't start with <code>s3.</code>, LocalStack might not process your request correctly, leading to errors. You can address this by adjusting the endpoint to use the <code>s3.</code> prefix or by <strong>setting your SDK to use path-style</strong> requests.</p>
<p>The <a target="_blank" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#path-style-access">AWS documentation</a> also indicates that path-style requests will be discontinued in the near future. However, the SDKs still expose a "<strong>force path style</strong>" option that can be set to <code>true</code>. If your endpoint does not start with <code>s3.</code>, LocalStack treats all requests as <em>path style</em> by default. For consistent S3 operations, using the <a target="_blank" href="http://s3.localhost.localstack.cloud"><code>s3.localhost.localstack.cloud</code></a> endpoint is recommended.</p>
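<p>The two addressing styles can be summed up as URL templates. This sketch only builds URLs; it assumes LocalStack's default edge port 4566 and the <code>s3.localhost.localstack.cloud</code> hostname discussed above:</p>

```python
# Path-style: the bucket name is part of the URL path.
def path_style_url(bucket, key, host="s3.localhost.localstack.cloud", port=4566):
    return f"http://{host}:{port}/{bucket}/{key}"


# Virtual-hosted-style: the bucket name becomes a subdomain of the host,
# which is why the host must carry the "s3." prefix for LocalStack to
# parse the bucket name out of the Host header.
def virtual_hosted_url(bucket, key, host="s3.localhost.localstack.cloud", port=4566):
    return f"http://{bucket}.{host}:{port}/{key}"
```

<p>Comparing the two outputs for the same bucket and key makes the difference immediately visible: the bucket moves from the path into the hostname.</p>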
<h2 id="heading-example">Example</h2>
<h3 id="heading-runs-on-localstack">Runs on LocalStack</h3>
<p>Let's look at the simplest example of how to properly configure an S3 client in Java to fetch a text file from a bucket and read its content.</p>
<p>First, let's create an S3 bucket, give it public access, and add a text file to it.</p>
<ul>
<li><p>Create the bucket.</p>
<p>  <code>aws --endpoint="http://localhost.localstack.cloud:4566" s3api create-bucket --bucket testy-mctestface-bucket</code></p>
</li>
<li><p>Create the file.</p>
<p>  <code>echo "Hello from the test bucket." &gt; s3test.txt</code></p>
</li>
<li><p>Add the file to the bucket.</p>
<p>  <code>aws --endpoint="http://localhost.localstack.cloud:4566" s3 cp s3test.txt s3://testy-mctestface-bucket</code></p>
</li>
<li><p>Programmatically getting the file and reading it.</p>
</li>
</ul>
<blockquote>
<p>LocalStack does not enforce IAM policies by default, so this should be enough for now.</p>
</blockquote>
<pre><code class="lang-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">S3EndpointDemo</span> </span>{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{

        String bucketName = <span class="hljs-string">"testy-mctestface-bucket"</span>;
        String key = <span class="hljs-string">"s3test.txt"</span>;

        AwsBasicCredentials awsCreds = AwsBasicCredentials.create(<span class="hljs-string">"test"</span>, <span class="hljs-string">"test"</span>);

        S3Client s3 = S3Client.builder()
                .credentialsProvider(StaticCredentialsProvider.create(awsCreds))
                .endpointOverride(URI.create(<span class="hljs-string">"https://s3.localhost.localstack.cloud:4566"</span>))
                .region(Region.US_EAST_1)
                .build();

        <span class="hljs-keyword">try</span> {
            GetObjectRequest getObjectRequest = GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(key)
                    .build();

            ResponseBytes&lt;GetObjectResponse&gt; objectBytes = s3.getObjectAsBytes(getObjectRequest);

            String content = <span class="hljs-keyword">new</span> String(objectBytes.asByteArray());

            System.out.println(<span class="hljs-string">"File content: \n"</span> + content);
        } <span class="hljs-keyword">catch</span> (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(<span class="hljs-number">1</span>);
        } <span class="hljs-keyword">catch</span> (SdkClientException | AwsServiceException e) {
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RuntimeException(e);
        }

        s3.close();
    }
}
</code></pre>
<p>This code creates an S3 client with static credentials and a custom endpoint (<code>https://s3.localhost.localstack.cloud:4566</code>) to retrieve and print the content of a specific file (<code>s3test.txt</code>) from a bucket (<code>testy-mctestface-bucket</code>). In case the endpoint is misconfigured or the bucket does not exist, this will result in a <code>The specified bucket does not exist</code> message.</p>
<blockquote>
<p>While this code runs locally and requires minimal configuration, code running in other compute services, such as Lambda, requires the same endpoint configuration.</p>
</blockquote>
<p>Additionally, you can access your file content using a <code>curl</code> command:</p>
<ul>
<li><p>Virtual-hosted-style: <code>curl http://testy-mctestface-bucket.s3.us-east-1.localhost.localstack.cloud:4566/s3test.txt</code></p>
</li>
<li><p>Path-style: <code>curl http://s3.us-east-1.localhost.localstack.cloud:4566/testy-mctestface-bucket/s3test.txt</code></p>
</li>
</ul>
<h3 id="heading-runs-on-aws">Runs on AWS</h3>
<ul>
<li><p>The previous commands work on AWS by removing the <code>--endpoint</code> flag and making the bucket public.</p>
</li>
<li><p>Don't forget to configure your AWS CLI to use the right credentials or export the <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> environment variables.</p>
</li>
<li><p>The S3 client will have a simpler configuration:<br />  <code>S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build();</code></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Generate IAM policies locally using LocalStack]]></title><description><![CDATA[When you're developing cloud and serverless applications, you need to grant access to various AWS resources like S3 buckets and RDS databases. To handle this, you create IAM roles and assign permissions through policies. However, configuring these po...]]></description><link>https://hashnode.localstack.cloud/generate-iam-policies-locally-using-localstack</link><guid isPermaLink="true">https://hashnode.localstack.cloud/generate-iam-policies-locally-using-localstack</guid><category><![CDATA[IAM]]></category><category><![CDATA[AWS]]></category><category><![CDATA[localstack]]></category><category><![CDATA[Security]]></category><category><![CDATA[AWS IAM]]></category><category><![CDATA[aws iam policies]]></category><dc:creator><![CDATA[Harsh Bardhan Mishra]]></dc:creator><pubDate>Fri, 16 Feb 2024 09:59:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708064159126/f7cd3f2c-c9cd-425f-986d-9a088a446544.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When you're developing cloud and serverless applications, you need to grant access to various AWS resources like S3 buckets and RDS databases. To handle this, you create IAM roles and assign permissions through policies. However, configuring these policies can be challenging, especially if you want to ensure minimal access of all principals to your resources.</p>
<p><a target="_blank" href="https://app.localstack.cloud/policy-stream">LocalStack's IAM Policy Stream</a> automates the generation of IAM policies for your AWS API requests on your local machine. This stream helps you identify the necessary permissions for your cloud application and allows you to detect logical errors, such as unexpected actions in your policies.</p>
<p>In this blog, we'll guide you through setting up IAM Policy Stream for a locally running AWS application. We'll use a basic example involving an SNS topic, an SQS queue, and a subscription of the queue to the SNS topic. You'll be able to generate and insert the policy without manual effort, adhering to the principle of least privilege.</p>
<h2 id="heading-why-use-iam-policy-stream">Why use IAM Policy Stream?</h2>
<p>LocalStack is a tool that lets you simulate the AWS cloud on your local machine, allowing you to run your AWS cloud and serverless applications locally. It enables you to create and enforce local IAM roles and policies using the <a target="_blank" href="https://docs.localstack.cloud/user-guide/security-testing/iam-enforcement/"><code>ENFORCE_IAM</code> feature</a>. However, users often struggle to figure out the necessary permissions for different actions. It's important to find a balance, avoiding giving too many permissions while making sure the right ones are granted.</p>
<p>This challenge becomes more complex when dealing with AWS services that make requests not directly visible to users. For instance, if an SNS topic sends a message to an SQS queue and the underlying call fails, there might be no clear error message, causing confusion, especially for those less familiar with the services.</p>
<p>IAM Policy Stream simplifies this by automatically generating the needed policies and showing them to users. This makes it easier to integrate with resources, roles, and users, streamlining the development process. Additionally, it serves as a useful learning tool, helping users understand the permissions linked to various AWS calls and improving the onboarding experience for newcomers to AWS.</p>
<h2 id="heading-prerequisite">Prerequisite</h2>
<ul>
<li><p><a target="_blank" href="https://docs.localstack.cloud/getting-started/installation/#localstack-cli">LocalStack CLI</a> with <a target="_blank" href="https://docs.localstack.cloud/getting-started/auth-token/"><code>LOCALSTACK_AUTH_TOKEN</code></a></p>
</li>
<li><p><a target="_blank" href="https://docs.localstack.cloud/getting-started/auth-token/">Docker</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html">AWS</a> CLI with <a target="_blank" href="https://github.com/localstack/awscli-local"><code>awslocal</code> wrapper</a></p>
</li>
<li><p><a target="_blank" href="https://app.localstack.cloud/sign-up">LocalStack Web Application account</a></p>
</li>
<li><p><a target="_blank" href="https://jqlang.github.io/jq/download/">jq</a></p>
</li>
</ul>
<h2 id="heading-subscribing-a-sqs-queue-to-a-sns-topic">Subscribing an SQS queue to an SNS topic</h2>
<p>We've got a basic demo app featuring an SNS topic named <code>test-topic</code> and an SQS queue named <code>test-queue</code>. There's also a subscription in place. The procedure includes sending a message to SNS, which, thanks to the subscription, gets pushed into the SQS queue. With LocalStack's IAM enforcement enabled, you can thoroughly test your policy and address the IAM violations by auto-generating your policies through the IAM Policy Stream.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708064002508/b882173f-82b7-4363-a6b9-5d84d889ef24.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-start-your-localstack-container">Start your LocalStack container</h3>
<p>Launch the LocalStack container on your local machine using the specified command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> LOCALSTACK_AUTH_TOKEN=...
DEBUG=1 ENFORCE_IAM=1 localstack start
</code></pre>
<p>Once initiated, you'll receive a confirmation output indicating that the LocalStack container is up and running.</p>
<pre><code class="lang-bash">     __                     _______ __             __
    / /   ____  _________ _/ / ___// /_____ ______/ /__
   / /   / __ \/ ___/ __ `/ /\__ \/ __/ __ `/ ___/ //_/
  / /___/ /_/ / /__/ /_/ / /___/ / /_/ /_/ / /__/ ,&lt;
 /_____/\____/\___/\__,_/_//____/\__/\__,_/\___/_/|_|

 💻 LocalStack CLI 3.1.0
 👤 Profile: default

[11:44:39] starting LocalStack <span class="hljs-keyword">in</span>      localstack.py:494
           Docker mode 🐳

...

──── LocalStack Runtime Log (press CTRL-C to quit) ─────
LocalStack supervisor: starting
LocalStack supervisor: localstack process (PID 18) starting

LocalStack version: 3.1.1.dev20240131022456
LocalStack Docker container id: 931fae5c27d2
LocalStack build date: 2024-02-01
LocalStack build git <span class="hljs-built_in">hash</span>: 616ef31
</code></pre>
<h3 id="heading-navigate-to-iam-policy-stream">Navigate to IAM Policy Stream</h3>
<p>Access the LocalStack Web Application and go to the IAM Policy Stream dashboard. This feature enables you to directly examine the generated policies, displaying the precise permissions required for each API call.</p>
<p><a target="_blank" href="https://app.localstack.cloud/policy-stream"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707722571851/e3aaa880-5b8b-4554-8032-ebf53612ddb9.png" alt="IAM Policy Stream dashboard" class="image--center mx-auto" /></a></p>
<p>Upon the successful launch of your LocalStack container, you'll observe the <strong>Stream active</strong> status icon, indicating that making any local AWS API request will trigger the generation of an IAM Policy. Now, let's proceed to create the SNS topic and the SQS queue.</p>
<h3 id="heading-create-the-aws-resources">Create the AWS resources</h3>
<p>Create a local SNS topic with the command:</p>
<pre><code class="lang-bash">awslocal sns create-topic --name test-topic
</code></pre>
<p>The output will be:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"TopicArn"</span>: <span class="hljs-string">"arn:aws:sns:us-east-1:000000000000:test-topic"</span>
}
</code></pre>
<p>Go to the IAM Policy Stream dashboard, and you'll see the generated policy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707722713879/6131c094-d4a7-49ac-9b9a-d3d8f5d34b4a.png" alt="Policy Stream for SNS Topic" class="image--center mx-auto" /></p>
<p>The AWS API call created an identity-based policy for the <code>root</code> user. If you do not use credentials issued by IAM or STS, LocalStack identifies the request as being made by the <code>root</code> user. The policy covers the <code>CreateTopic</code> API action, allowing it on the specified resource.</p>
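<p>To make the structure concrete, here is a sketch of the shape of such an identity-based policy, built as a plain Python dict. The statement contents follow the <code>CreateTopic</code> call and topic ARN shown above; the exact statement the stream generates (for instance, its <code>Sid</code>) may differ:</p>

```python
import json

# Illustrative identity-based policy for the CreateTopic call above:
# one Allow statement for the sns:CreateTopic action on the topic ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sns:CreateTopic",
            "Resource": "arn:aws:sns:us-east-1:000000000000:test-topic",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

<p>Note that, unlike the resource-based policy we'll meet later, an identity-based policy has no <code>Principal</code> field: it is attached to the identity making the call.</p>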
<p>Now, create the SQS queue with:</p>
<pre><code class="lang-bash">awslocal sqs create-queue --queue-name test-queue
</code></pre>
<p>The output will be:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"QueueUrl"</span>: <span class="hljs-string">"http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/test-queue"</span>
}
</code></pre>
<p>On the IAM Policy Stream dashboard, you'll notice the generated policy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707722780628/5646e568-b960-40a5-ac92-4b4860176c7f.png" alt="Policy Stream for SQS queue" class="image--center mx-auto" /></p>
<p>Create a subscription with the topic ARN of the SNS topic, the protocol, and the notification endpoint:</p>
<pre><code class="lang-bash">awslocal sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:000000000000:test-topic \
    --protocol sqs \
    --notification-endpoint arn:aws:sqs:us-east-1:000000000000:test-queue
</code></pre>
<p>The output will be:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"SubscriptionArn"</span>: <span class="hljs-string">"arn:aws:sns:us-east-1:000000000000:test-topic:4283d647-18b6-4aeb-b283-19d9327a963a"</span>
}
</code></pre>
<h3 id="heading-testing-the-subscription">Testing the subscription</h3>
<p>To test the subscription, publish a message to the SNS topic:</p>
<pre><code class="lang-bash">awslocal sns publish \
    --topic-arn arn:aws:sns:us-east-1:000000000000:test-topic \
    --message <span class="hljs-string">'{"some": "event"}'</span>
</code></pre>
<p>The output will be:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"MessageId"</span>: <span class="hljs-string">"63317413-c40b-41ed-982b-db722337eb5b"</span>
}
</code></pre>
<p>Check if your SQS queue received the message:</p>
<pre><code class="lang-bash">awslocal sqs receive-message \
    --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/test-queue
</code></pre>
<p>Unfortunately, no message arrives: without a proper IAM policy, the SNS service is not allowed to publish to the SQS queue. Let's use IAM Policy Stream to resolve this issue.</p>
<h3 id="heading-analyzing-iam-policies">Analyzing IAM Policies</h3>
<p>Navigate to the IAM Policy Stream dashboard and observe various API calls like <code>Publish</code>, <code>SendMessage</code>, and <code>ReceiveMessage</code>. Note that the <code>SendMessage</code> call was rejected due to an IAM violation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707722856818/903f9994-d98b-4e64-a64c-a6646fb8d72c.png" alt="SendMessage call being rejected due to IAM violation" class="image--center mx-auto" /></p>
<p>Click on <strong>SQS.SendMessage</strong> to view the request parameters and the required resource-based policy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707722915707/6710765c-1d5b-45b6-bb0d-f77ac825c916.png" alt="Generated resource-based policy and request params for the SQS SendMessage API" class="image--center mx-auto" /></p>
<p>LocalStack automatically suggests a resource-based policy for the <code>arn:aws:sqs:us-east-1:000000000000:test-queue</code> SQS queue. Copy and paste the policy into a new JSON file named <code>policy.json</code>:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
  <span class="hljs-attr">"Statement"</span>: [
    {
      <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"Test432a8c7b"</span>,
      <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"sqs:SendMessage"</span>,
      <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:sqs:us-east-1:000000000000:test-queue"</span>,
      <span class="hljs-attr">"Principal"</span>: {
        <span class="hljs-attr">"Service"</span>: [
          <span class="hljs-string">"sns.amazonaws.com"</span>
        ]
      },
      <span class="hljs-attr">"Condition"</span>: {
        <span class="hljs-attr">"ArnEquals"</span>: {
          <span class="hljs-attr">"aws:SourceArn"</span>: <span class="hljs-string">"arn:aws:sns:us-east-1:000000000000:test-topic"</span>
        }
      }
    }
  ]
}
</code></pre>
<p>Encode this policy as a string using <code>jq</code>, since the AWS CLI expects the queue policy as a JSON-encoded string:</p>
<pre><code class="lang-bash">jq @json &lt; policy.json
<span class="hljs-string">"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"Test432a8c7b\",\"Effect\":\"Allow\",\"Action\":\"sqs:SendMessage\",\"Resource\":\"arn:aws:sqs:us-east-1:000000000000:test-queue\",\"Principal\":{\"Service\":[\"sns.amazonaws.com\"]},\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"arn:aws:sns:us-east-1:000000000000:test-topic\"}}}]}"</span>
</code></pre>
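<p>If you'd rather skip <code>jq</code>, the same double-encoding can be done in Python with <code>json.dumps</code>. This is a sketch: <code>policy</code> below reproduces the statement from <code>policy.json</code> above, and the printed result is the body of <code>sqs-queue-attributes.json</code>:</p>

```python
import json

# The resource-based policy from policy.json above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Test432a8c7b",
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:000000000000:test-queue",
            "Principal": {"Service": ["sns.amazonaws.com"]},
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:sns:us-east-1:000000000000:test-topic"
                }
            },
        }
    ],
}

# The queue attribute value must be the policy as a JSON *string*, so it is
# encoded twice: once for the policy itself, once for the attributes document.
attributes = {"Policy": json.dumps(policy, separators=(",", ":"))}
print(json.dumps(attributes, indent=4))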
<p>Create a new file named <code>sqs-queue-attributes.json</code> and paste the generated policy:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Policy"</span>: <span class="hljs-string">"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"Test432a8c7b\",\"Effect\":\"Allow\",\"Action\":\"sqs:SendMessage\",\"Resource\":\"arn:aws:sqs:us-east-1:000000000000:test-queue\",\"Principal\":{\"Service\":[\"sns.amazonaws.com\"]},\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"arn:aws:sns:us-east-1:000000000000:test-topic\"}}}]}"</span>
}
</code></pre>
<p>Set the queue attributes for the SQS queue:</p>
<pre><code class="lang-bash">awslocal sqs set-queue-attributes \
    --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/test-queue \
    --attributes file://sqs-queue-attributes.json
</code></pre>
<p>Send another message to the SNS topic to be received by the SQS queue:</p>
<pre><code class="lang-bash">awslocal sns publish \
    --topic-arn arn:aws:sns:us-east-1:000000000000:test-topic \
    --message <span class="hljs-string">'{"some": "event"}'</span>
awslocal sqs receive-message \
    --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/test-queue
</code></pre>
<p>Verify that you have received the message:</p>
<pre><code class="lang-bash">{
    <span class="hljs-string">"Messages"</span>: [
        {
            <span class="hljs-string">"MessageId"</span>: <span class="hljs-string">"84a24f67-995b-45db-87d7-f6fca8891f4e"</span>,
            <span class="hljs-string">"ReceiptHandle"</span>: <span class="hljs-string">"MDUwNTAwYzQtMTJiNS00NzdiLTg2OTYtODA5MjAzMWQ3YzY1IGFybjphd3M6c3FzOnVzLWVhc3QtMTowMDAwMDAwMDAwMDA6dGVzdC1xdWV1ZSA4NGEyNGY2Ny05OTViLTQ1ZGItODdkNy1mNmZjYTg4OTFmNGUgMTcwNzcyMDg2My45OTEwMDQ="</span>,
            <span class="hljs-string">"MD5OfBody"</span>: <span class="hljs-string">"a85e018e2a7d2d06866b7da00268fcc9"</span>,
            <span class="hljs-string">"Body"</span>: <span class="hljs-string">"{\"Type\": \"Notification\", \"MessageId\": \"4dbec68e-2489-4e73-bee4-90d02ffe691f\", \"TopicArn\": \"arn:aws:sns:us-east-1:000000000000:test-topic\", \"Message\": \"{\\\"some\\\": \\\"event\\\"}\", \"Timestamp\": \"2024-02-12T06:54:20.269Z\", \"UnsubscribeURL\": \"http://localhost.localstack.cloud:4566/?Action=Unsubscribe&amp;SubscriptionArn=arn:aws:sns:us-east-1:000000000000:test-topic:4283d647-18b6-4aeb-b283-19d9327a963a\", \"SignatureVersion\": \"1\", \"Signature\": \"ymKIdXa+KzAy5aZ6XAA7P1TM7azzCounaFv4etpm7GL7qHawdn86aeM6q7VhDgTzCBdI3iGEjOaoAzaWCnB1RdQd9rt8Gfwckk0QtlGefEJBVdiH1DCNyGD+A48hSGUPtAk22d0Ar1AzBWtQ49DFTbfgEqfGGNlPdrri+JJmfztgg7hb0tUZoeWM3p7wfNuj7+nXGtS4JuVf5yC0f/v6ryo0IbNiGEjfsXyAU5++Lx2V3o2aZK/WfUWa5EkIqjAc6RzmnIE60IUYvn/7mVEgdl5CZLvJ0hPGrLaxFwdMd04LiYJCE4bLMfWxDyVQFysKH8GwFqRfZjrYmpdiVtI9Rw==\", \"SigningCertURL\": \"http://localhost.localstack.cloud:4566/_aws/sns/SimpleNotificationService-6c6f63616c737461636b69736e696365.pem\"}"</span>
        }
    ]
}
</code></pre>
<p>You can now verify on the IAM Policy Stream dashboard that no violations have been detected and that your AWS API requests executed successfully with the right IAM policies.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707722980586/d7db0dcd-709f-4c1c-ae2e-c304a40073c0.png" alt="Successful verification on the IAM Policy Stream dashboard" class="image--center mx-auto" /></p>
<h2 id="heading-generating-a-comprehensive-policy">Generating a comprehensive policy</h2>
<p>In scenarios involving many AWS services, where every AWS API request generates a policy, analyzing each policy individually can become cumbersome. In such cases, you can generate one comprehensive policy for all your AWS resources together.</p>
<p>You can navigate to the <strong>Summary Policy</strong> tab on the IAM Policy Stream dashboard. This aggregates the policies per principal to which they should be attached. For the example above, you would see the <strong>Identity Policy</strong> for the root user, which collects all the actions and resources for the operations we performed inside one single policy file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707723029424/a3636fce-6e8c-429e-8c8d-6f7e1ecd6ea3.png" alt="Generated Identity-based policy" class="image--center mx-auto" /></p>
<p>On the other hand, you have the <strong>Resource Policy</strong> for the SQS queue, where you can see the permission necessary for the subscription. For larger AWS applications, you would be able to find multiple roles and multiple resource-based policies depending on your scenario.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707723069060/7722f268-3dbb-4370-8d45-2a5e85865690.png" alt="Generated resource-based policy" class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>IAM Policy Stream streamlines your development process by minimizing the manual creation of policies and confirming the necessity of granted permissions. However, it is advisable to manually confirm that your policy aligns with your intended actions. Your code may unintentionally make requests, and LocalStack considers all requests made during policy generation as valid.</p>
<p>A practical scenario is automating tests like integration or end-to-end testing against your application, allowing LocalStack to automatically generate policies with required permissions. You can then review and customize them to meet your needs, ensuring that overly permissive policies don't find their way into production environments.</p>
<p>We are actively working on expanding this feature and offering more advantages for developers, such as automatically analyzing deployed policies for unused permissions and more! Stay tuned for updates on our IAM feature set!</p>
]]></content:encoded></item><item><title><![CDATA[LocalStack in 2023: The Journey in a Community Transformed and Thriving]]></title><description><![CDATA[How it started
As I approach my one-year anniversary with LocalStack, it's remarkable to reflect on the transformative journey we've embarked upon. The community I joined is not the same one it is today; it's grown exponentially, becoming bigger, str...]]></description><link>https://hashnode.localstack.cloud/localstack-in-2023-the-journey-in-a-community-transformed-and-thriving</link><guid isPermaLink="true">https://hashnode.localstack.cloud/localstack-in-2023-the-journey-in-a-community-transformed-and-thriving</guid><category><![CDATA[localstack]]></category><category><![CDATA[community]]></category><category><![CDATA[DevRel]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Mon, 11 Dec 2023 13:23:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1702300725385/0e1caa77-eb77-4480-af64-10d5cce54c2b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-how-it-started">How it started</h1>
<p>As I approach my one-year anniversary with LocalStack, it's remarkable to reflect on the transformative journey we've embarked upon. The community I joined is not the same as it is today; it's grown exponentially, becoming bigger, stronger, and more vibrant. In this blog post, I aim to shed light on the incredible strides we've made in community growth, developer advocacy, and user support. We've also evolved significantly in how we present ourselves to our users, marking a year of meaningful progress.</p>
<p>While our journey this past year has been filled with amazing achievements, it's important to acknowledge that not every moment was rosy and celebratory. We encountered our share of hurdles, with some projects proving too ambitious to launch as initially envisioned. However, these experiences are not setbacks but stepping stones. What I really appreciate at LocalStack is the academic approach to starting projects: we pitch ideas, we present the plan, we discuss, we ask questions, we execute, or go back to the whiteboard to act on the feedback. Whatever didn't work in the past will come back stronger and better.</p>
<p>This has been one year in the rocket ship.</p>
<h1 id="heading-how-its-going">How it's going</h1>
<h2 id="heading-for-the-looks-of-localstack">For the looks of LocalStack</h2>
<p>Let's kick things off with a bit of a time travel exercise. Picture this: as I was digging up some gems for this article, I took a scroll through our old social media posts. Talk about a 'throwback' moment! It was almost like flipping through an old family album, seeing our early design days, seeing my colleagues before I actually knew them. It's pretty wild to see how far we've come.</p>
<p>So when we have new releases, we no longer look like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701989538148/6c73dbfa-bc66-4947-8921-faaf3da7af28.jpeg" alt class="image--center mx-auto" /></p>
<p>But rather like this...</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701989561041/d545d066-dfc8-4165-804f-c833a231eb23.jpeg" alt class="image--center mx-auto" /></p>
<p>Don't get me wrong, they're both amazing, but just like in the world of software (and let's not forget rockets), each upgrade is leveling up - it just keeps getting cooler and cooler. This is also our way of showing how much we value the community's support and opinion, which are like rocket fuel, pushing us to soar higher and do even better.</p>
<h2 id="heading-for-the-numbers">For the numbers</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702041236980/cf96f0b0-f7d8-470a-8b98-d816dc90e83f.png" alt class="image--center mx-auto" /></p>
<p>As we look back on 2023, it’s thrilling to see the leaps our social media presence has made. Our digital footprint tells the story of our community's steady engagement and growth over the year.</p>
<p>On Slack, we witnessed a remarkable 265% increase in active members alongside a 105% growth in our community size. This surge reflects the dynamic conversations and collaborations happening among our members. We can only go so far in helping each individual situation, so we strive to foster an environment of mutual help and knowledge sharing within the community.</p>
<p>YouTube has been another spectacular success story. Our channel's views skyrocketed by 766% to 28.4K, while the watch time astonishingly increased by 495%. These numbers don’t just speak volumes; they shout out about the engaging and valuable content we’ve been producing. Not to mention, our subscriber count grew by 386%, and we garnered 330K impressions, clearly expanding our reach.</p>
<p>In the realm of Discuss, our platform has been buzzing with activity. Over 550 new posts were created by more than 150 new contributors, generating a whopping 150K consolidated page views.</p>
<p>As Twitter/X provides a different set of tools for analytics, we can currently say that we've gained 356 new followers since August and garnered 273.5K impressions in 2023.</p>
<p>Since our <a target="_blank" href="https://blog.localstack.cloud/2022-04-22-localstack-40k-stars-90m-pulls-and-an-engaged-community/">first community blog post</a>, a little over a year ago, our project has seen tremendous growth, surpassing 50,000 GitHub stars and over 194M Docker pulls. These numbers are not just metrics; they are a reflection of the trust and confidence our users place in us. And if you really want to be up to date with it all, you should definitely read about <a target="_blank" href="https://blog.localstack.cloud/2023-11-16-announcing-localstack-30-general-availability/">everything that's new in LocalStack v3</a>.</p>
<h2 id="heading-for-the-bond-created-with-our-users">For the bond created with our users</h2>
<p>We've seen how the LocalStack community has grown with every milestone. Every step we've taken and every initiative we've launched has been welcomed with enthusiasm by our users. Open source contributors have become colleagues and acquaintances turned into supporting friends.</p>
<h3 id="heading-expanding-our-reach-through-webinars-and-meetups"><strong>Expanding Our Reach Through Webinars and Meetups</strong></h3>
<p>This year, we organized 8 engaging community webinars featuring esteemed guests from Wing, FABR, and XO. These events were a great success, contributing to an astounding 145% increase in the size of our <a target="_blank" href="https://www.meetup.com/localstack-community">Meetup group</a>. It's been a delight to see our user base grow and engage more deeply with LocalStack. Of course, we could go on forever about how much we like it.</p>
<h3 id="heading-making-a-mark-at-conferences-and-summits"><strong>Making a Mark at Conferences and Summits</strong></h3>
<p>LocalStack's presence was felt strongly across major tech events. We had the privilege of presenting at 10 key conferences, including DockerCon, EuroPython, PyCon APAC, SpringIO, and DevFest, as well as hosting 2 webinars with LambdaTest and Lumigo. Additionally, our participation in various AWS Summits across Europe and the Middle East helped us reach out to even more cloud enthusiasts and developers. I know we always say we're local, but in this case, we're going global.</p>
<h3 id="heading-hosting-in-person-events"><strong>Hosting In-Person Events</strong></h3>
<p>We went a step further in fostering community connections by organizing three in-person "Cloud DevXchange" events in Bangalore, Chennai, and Vienna. Collaborating with Docker, MongoDB, Labyrinth Labs, and AWS, these events were a melting pot of ideas, innovation, and networking. Additionally, we had the honor of being featured speakers at several gatherings, including the Enterprise Java User Group, the Cloud Native Computing Foundation (CNCF), the Elastic User Group, and various DevOps meetups.</p>
<h3 id="heading-driving-engagement-through-content"><strong>Driving Engagement Through Content</strong></h3>
<p>Our blog became a hub of knowledge sharing, with 22 new posts that attracted over 60,000 views. Highlighting significant updates like LocalStack 2.0 and 3.0 GA, and introducing new tools like our Docker Extension and Desktop App, we kept our community informed and engaged.</p>
<h3 id="heading-collaborating-with-industry-leaders"><strong>Collaborating with Industry Leaders</strong></h3>
<p>2023 was also a year of fruitful collaborations with Docker, AWS, Cloudflare, LambdaTest, AtomicJar, and Pulumi, among others. These partnerships have been instrumental in expanding our reach and enhancing the LocalStack experience through tools our users already love.</p>
<h3 id="heading-launching-the-developer-hub"><strong>Launching the Developer Hub</strong></h3>
<p>Perhaps one of our proudest achievements this year is the launch of the all-new <a target="_blank" href="https://docs.localstack.cloud/developer-hub/">Developer Hub</a>. Featuring 7 tutorials, 21 application samples, and the inception of <a target="_blank" href="https://docs.localstack.cloud/academy/">LocalStack Academy</a> with 7 comprehensive lessons, the Developer Hub has become a cornerstone for learning and development for the curious ones.</p>
<p>Finally, I'd like to share something special with you – a collage that captures the essence of the LocalStack team's dedication. This late-night piece of art is a snapshot of our unwavering commitment to engaging, informing, and supporting our community in every way possible. It's visual evidence of the hard work, creativity, and passion that fuels our team's efforts to connect with each and every one of you, whether in person or virtually.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702042001434/c55caa3f-5e30-46b8-a32a-e7923169b21b.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-how-will-it-go">How will it go</h1>
<p>As we set our sights on 2024, we're practically buzzing with gratitude and excitement! It's like our community's growth and our shared achievements have strapped a jetpack to our backs, propelling us toward continuous innovation. We're super committed to upping our game, deepening those awesome connections, and scaling even higher peaks together. And hey, we know we can't always be everywhere for everyone, but we're giving it our all and then some. In 2024, we're rolling up our sleeves to support you even better - let's make it a year to remember!</p>
]]></content:encoded></item><item><title><![CDATA[LocalStack to the Max - Invoking 50 Functions]]></title><description><![CDATA[Ok, we’ve been pretty serious so far, so let’s take a few moments to step outside of our day-to-day software-building responsibilities and try something new.
I received this question a few times and thought it was finally time to put it to the test: ...]]></description><link>https://hashnode.localstack.cloud/localstack-to-the-max</link><guid isPermaLink="true">https://hashnode.localstack.cloud/localstack-to-the-max</guid><category><![CDATA[lambda]]></category><category><![CDATA[Load Testing]]></category><category><![CDATA[localstack]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Thu, 14 Sep 2023 22:00:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178520304/b001fd98-5e19-468e-823e-32bd07d08245.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ok, we’ve been pretty serious so far, so let’s take a few moments to step outside of our day-to-day software-building responsibilities and try something new.</p>
<p>I received this question a few times and thought it was finally time to put it to the test: Can you create X Lambdas on LocalStack?</p>
<p>Let’s see what happens when we want to create 50 Lambdas and invoke them sequentially…twice. It’s for science, ok?</p>
<p>The code and the scripts for creating and invoking the functions can be found, as always, in the <a target="_blank" href="https://github.com/tinyg210/stack-bytes-apigw-lambda-s3/tree/main/localstack-to-the-max">Stack Bytes repository</a>.</p>
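<p>The creation script boils down to a loop over the AWS CLI (via <code>awslocal</code>). Here is a hedged sketch, assuming a packaged Java function; the names, handler, and jar path are illustrative and not the repo's exact script:</p>
<pre><code class="lang-bash">#!/bin/bash
# Illustrative only: create 50 Java Lambda functions against LocalStack.
for i in $(seq 1 50); do
  awslocal lambda create-function \
    --function-name "function-$i" \
    --runtime java11 \
    --handler lambda.Handler::handleRequest \
    --zip-file fileb://target/lambda.jar \
    --role arn:aws:iam::000000000000:role/lambda-role \
    --environment "Variables={FUNCTION_NUMBER=$i}"
done
</code></pre>
<p>Passing the loop counter through an environment variable is one way to give each function its own number.</p>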
<p>The setup is pretty basic: We have a Lambda function that takes an input and prints out a message using said input. To make it less boring and more unique, each function will receive its own number. We use the local AWS CLI to create all the functions using a loop, meaning this will take a while. So let’s take out our time measuring devices and see:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178549146/03789c01-fc48-487f-b783-685be441f2a1.png" alt class="image--center mx-auto" /></p>
<p>I was surprised it only took approximately two minutes to create 50 Lambdas. I don’t want to know how long this would take on AWS. But while we’re on the topic, a bit of a disclaimer: my setup is, by today’s standards (Aug. 2023), on the middle-upper side. There are people out there getting by with a lot less and others with a lot more, but as a developer, I’d say it’s a good place to be in:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178580966/5ed681b7-f674-4ace-ac6e-45c3ae077a24.png" alt class="image--center mx-auto" /></p>
<p>Now think of all the things one can do with all the fancy new chips.</p>
<p>Let’s move on and invoke all 50 of our Lambdas:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178599904/4043a822-bca5-419b-9019-55c7d5a2c444.png" alt class="image--center mx-auto" /></p>
<p>A little discouraging and slightly disappointing that it wasn’t exactly the 5-minute mark. And if we check the logs of a randomly picked container, the message is there as expected:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178618474/87fcefb4-9179-41eb-a813-0747484cbb6f.png" alt class="image--center mx-auto" /></p>
<p>The slow invocations happen because the Lambda environment provisions resources and initializes the runtime environment to execute the function code. For Java Lambda functions, this includes starting the JVM, loading classes, and performing any initialization tasks specified in your code or dependencies.</p>
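<p>The effect can be sketched in plain Java, with no AWS dependencies; the class and method names here are hypothetical stand-ins. The constructor plays the role of the cold-start initialization, and later invocations reuse the already-initialized instance:</p>
<pre><code class="lang-java">// Hypothetical sketch of Lambda container reuse: the constructor stands in
// for one-time cold-start work; later invocations reuse the instance.
class QuoteHandler {
    static int initializations = 0;  // counts "cold starts"

    private final String prefix;

    QuoteHandler() {
        // Expensive one-time setup: JVM warm-up, class loading, SDK clients...
        initializations++;
        prefix = "Hello from Lambda #";
    }

    // Stand-in for the real handleRequest(input, context) entry point
    String handleRequest(int functionNumber) {
        return prefix + functionNumber;
    }

    public static void main(String[] args) {
        QuoteHandler handler = new QuoteHandler();     // cold start: paid once
        System.out.println(handler.handleRequest(1));  // first invocation
        System.out.println(handler.handleRequest(2));  // warm: no re-initialization
        System.out.println("cold starts: " + QuoteHandler.initializations);
    }
}
</code></pre>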
<p>Once the Lambda function is "warmed up," meaning the runtime environment is initialized, subsequent invocations of the function are typically faster because the JVM is already running. Let’s check again with the warm starts, the second round of invocations:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178649154/d04ab69e-11c4-4dc8-8054-ecabf4bb5e2a.png" alt class="image--center mx-auto" /></p>
<p>Now that’s more like it. And with the dawn of the <a target="_blank" href="https://www.azul.com/products/components/crac/">CRaC project</a>, I’m really excited to leave the cold starts in the past.</p>
<p>PS: I’m looking forward to reading your stories of taking things even further ;).</p>
]]></content:encoded></item><item><title><![CDATA[Mounting the Docker Socket]]></title><description><![CDATA[Alright, let's get real for a moment. This one's a PSA, folks. We're diving into that elusive puzzle piece you might have missed in your docker-compose files. You know the one that's been quietly driving you nuts? Yeah, we've been there too. And hey,...]]></description><link>https://hashnode.localstack.cloud/mounting-the-docker-socket</link><guid isPermaLink="true">https://hashnode.localstack.cloud/mounting-the-docker-socket</guid><category><![CDATA[Docker]]></category><category><![CDATA[localstack]]></category><category><![CDATA[volume mount]]></category><category><![CDATA[docker socket]]></category><category><![CDATA[volumemount]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Wed, 13 Sep 2023 09:30:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178140368/e0c33d0f-75e7-4eb5-b874-ed36dcd3dadd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Alright, let's get real for a moment. This one's a PSA, folks. We're diving into that elusive puzzle piece you might have missed in your docker-compose files. You know the one that's been quietly driving you nuts? Yeah, we've been there too. And hey, we even tweaked the logs to drop you some hints, but let's dissect this riddle anyway. You guessed it it’s the "/var/run/docker.sock:/var/run/docker.sock" volume mount. If you try to create a Lambda function, for example, you won’t be able to. This has happened enough times now that the logs look like this:</p>
<pre><code class="lang-bash">localstack  | 2023-08-24T20:56:47.079 ERROR --- [Executor-1_0] l.services.lambda_.hints   : 
Failed to pull Docker image because Docker is not available <span class="hljs-keyword">in</span> the LocalStack container but
required to run Lambda <span class="hljs-built_in">functions</span>. Please add the Docker volume mount 
<span class="hljs-string">"/var/run/docker.sock:/var/run/docker.sock"</span> to your LocalStack startup.
 https://docs.localstack.cloud/user-guide/aws/lambda/<span class="hljs-comment">#docker-not-available</span>
</code></pre>
<p>Let’s break it down a bit and see how we got here. To optimize LocalStack as much as possible, every service that is available as an image will run in its own container. You’ll be able to see this with services like Lambda, ECS, and certain databases, while resources like S3, API Gateway, etc., will continue to be part of the LocalStack main container.</p>
<p>There are two distinct ways of achieving this behaviour, known around the Internet as Docker-in-Docker and Docker-out-of-Docker. Jérôme Petazzoni adeptly articulates what lies behind both methods in this great <a target="_blank" href="http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/">blog post</a> and why you’d want to use one over the other.</p>
<p>DinD involves creating a separate Docker runtime environment within a container. This means that inside the container, you're essentially running another Docker daemon, isolated from the host's Docker daemon. This approach offers isolation and encapsulation, but it comes with its own set of challenges: security vulnerabilities, risk of data corruption, and overall the complexity might outweigh the isolation benefits.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178208017/d7b097bb-f655-46c7-bfb9-d3a5bafd3a3d.png" alt class="image--center mx-auto" /></p>
<p>In the DooD approach, you use the Docker daemon from the host system to interact with containers. Containers themselves don't have their own Docker runtime; they communicate with the host's Docker. This offers some distinct advantages: simplicity in managing the containers and resource efficiency, as containers don't need to run their own Docker daemon.</p>
<p>This way the main container will have access to the Docker socket and will, therefore, be able to start containers. The only difference is that instead of starting “child” containers, it will start “sibling” containers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693178230450/3b5e8d84-a6e6-40b9-acce-8b2f623e67be.png" alt class="image--center mx-auto" /></p>
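<p>For reference, the relevant compose snippet looks roughly like this; the image tag and port mapping are the usual defaults, so adjust them to your setup:</p>
<pre><code class="lang-yaml">services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
    volumes:
      # The line people forget: exposes the host's Docker daemon so LocalStack
      # can spawn "sibling" containers for Lambda, ECS, and friends.
      - "/var/run/docker.sock:/var/run/docker.sock"
</code></pre>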
<p>So, if you’re reading this on the go, next time you set up your docker-compose file you’ll hopefully visualize these diagrams and remember to add that simple YAML line before you realize you’re missing services ;).</p>
]]></content:encoded></item><item><title><![CDATA[LocalStack Interacting with Client Code]]></title><description><![CDATA[Using Docker for local development offers several advantages: isolation, consistency, reproducibility, efficiency, portability, easy cleanup, and security. You get the gist.
These are also the great advantages of LocalStack shipping as a Docker image...]]></description><link>https://hashnode.localstack.cloud/localstack-interacting-with-client-code</link><guid isPermaLink="true">https://hashnode.localstack.cloud/localstack-interacting-with-client-code</guid><category><![CDATA[localstack]]></category><category><![CDATA[React]]></category><category><![CDATA[AWS]]></category><category><![CDATA[apigateway]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Mon, 11 Sep 2023 09:30:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693177513780/979626d3-fbec-422f-a9c7-9b064b1c2866.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Using Docker for local development offers several advantages: isolation, consistency, reproducibility, efficiency, portability, easy cleanup, and security. You get the gist.</p>
<p>These are also the great advantages of LocalStack shipping as a Docker image.</p>
<p>Let’s say your product is a multi-functional web application that does all sorts of operations for your clients. The backend is handled by AWS-managed services, so your main focus is providing the best user experience possible. For the development phase, your setup would look something like this diagram, the center of attention being the React app, while the rest are necessary dependencies:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693177498416/423f7f94-dad1-496f-9aa0-93de03b141e5.png" alt class="image--center mx-auto" /></p>
<p>A new frontend engineer is joining the team and they want to get their setup running as fast as possible so that they can start making much-needed improvements to the app. Of course, they would say that talk is cheap, they want to see the code:</p>
<ol>
<li><p>Clone the repository:</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/tinyg210/stack-bytes-apigw-lambda-s3.git
</code></pre>
</li>
<li><p>Switch to the module folder:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">cd</span> stack-bytes-apigw-lambda-s3/frontend-client-local-machine
</code></pre>
</li>
<li><p>Run the preconfigured docker-compose file in detached mode:</p>
<pre><code class="lang-bash"> docker compose up -d
</code></pre>
</li>
<li><p>Install the React app dependencies:</p>
<pre><code class="lang-bash"> npm install
</code></pre>
</li>
<li><p>Run the web application:</p>
<pre><code class="lang-bash"> npm start
</code></pre>
</li>
</ol>
<p>That’s it. In just five easy steps your new developer can start understanding and working on the app. No need for days of waiting to obtain all the necessary credentials and permissions for a new AWS account and no accidental cost spikes for learning purposes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693177778035/e438e14b-28b4-4f44-9aa2-8f154cb302d4.png" alt class="image--center mx-auto" /></p>
<p>We can now see that our sophisticated app is running on <a target="_blank" href="http://localhost:3000">localhost:3000</a>. It is communicating with the API Gateway, which uses two separate Lambdas for creating and fetching, and an S3 bucket for storing the text files that contain the quotes that we want to remember forever. The reason why everything was so fast is a combination of features that are there to enhance the developer experience. The docker-compose file simply requests the LocalStack image, makes sure the right ports are exposed, and adds some configs regarding the network and access to the Docker socket (we’ll talk about it soon). But the more important parts lie in these lines:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"../stack-bytes-lambda/target/apigw-lambda.jar:/etc/localstack/init/ready.d/target/apigw-lambda.jar"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"../init-resources.sh:/etc/localstack/init/ready.d/init-resources.sh"</span>
</code></pre>
<p>These are the <a target="_blank" href="https://hashnode.localstack.cloud/localstack-initialization-hooks">init hooks</a> that ensure the needed resources are created at startup.</p>
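<p>For illustration, a hypothetical <code>init-resources.sh</code>, executed once the container reports ready, could provision the stack roughly like this; the bucket name and handler class are made up, while the jar path matches the volume mount above:</p>
<pre><code class="lang-bash">#!/bin/bash
# Illustrative init hook, run from /etc/localstack/init/ready.d/ on startup.
awslocal s3 mb s3://quotes-bucket   # bucket name is hypothetical

awslocal lambda create-function \
  --function-name create-quote \
  --runtime java11 \
  --handler lambda.CreateQuoteHandler::handleRequest \
  --zip-file fileb:///etc/localstack/init/ready.d/target/apigw-lambda.jar \
  --role arn:aws:iam::000000000000:role/lambda-role
</code></pre>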
<p>From here on, the endpoints are quickly configured for local development, and everything is set:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693177873015/d91ac9f2-d3bd-4f20-874b-0caa5e50db08.png" alt class="image--center mx-auto" /></p>
<p>Notice how fast getting someone onboarded is. This combination facilitates swift resource replication across multiple environments, enabling rapid team integration and productive collaboration. 👍</p>
]]></content:encoded></item><item><title><![CDATA[Github Actions & End-to-End Testing with Testcontainers & LocalStack]]></title><description><![CDATA[If you’re a developer dealing with multiple systems, you already know there’s no way around end-to-end testing. You need to make sure that all your pieces fit together constantly. And if you’re a business releasing new software features often, you ne...]]></description><link>https://hashnode.localstack.cloud/github-actions-end-to-end-testing-with-testcontainers-localstack</link><guid isPermaLink="true">https://hashnode.localstack.cloud/github-actions-end-to-end-testing-with-testcontainers-localstack</guid><category><![CDATA[localstack]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[Testcontainers]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Testing]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Fri, 08 Sep 2023 09:30:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176786963/2164e1e2-beda-4b84-93b9-0bafe7a50931.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’re a developer dealing with multiple systems, you already know there’s no way around end-to-end testing. You need to make sure that all your pieces fit together constantly. And if you’re a business releasing new software features often, you need a <a target="_blank" href="https://www.linkedin.com/feed/hashtag/?keywords=cicd&amp;highlightedUpdateUrns=urn%3Ali%3Aactivity%3A7099854933037907968">C</a>I/CD pipeline that builds, tests, and releases software the moment you push your code. A comprehensive automated test suite is not as straightforward and easy as unit testing but it is essential. So, how can we bring the simplicity and speed of unit tests into these integration tests? On top of that, we’d prefer live services over mocked behavior for testing, aiming to replicate production behavior during tests.</p>
<p>This is where <strong>Testcontainers</strong> and <strong>LocalStack</strong> work beautifully together to bring you the best of integration tests and cloud services on your machine and in your CI/CD pipeline.</p>
<p>Today, we’re discussing the ease of setting up a workflow that will always make sure our system behaves as expected. Even visually, in a diagram, beyond the bigger picture, we can see that two key areas go hand in hand: <em>setting up the right infrastructure and holding that in check using tests</em>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176901411/d3671578-3f19-436d-a208-3d663a327b9b.png" alt class="image--center mx-auto" /></p>
<p>You can find the full working example in the <a target="_blank" href="https://github.com/tinyg210/stack-bytes-apigw-lambda-s3">Stack Bytes repository</a>, the workflow, as is standard, will be under <code>.github/workflow</code>, and the test examples sit in the <code>github-actions-testcontainers</code> folder.</p>
<p>When you make changes to your application's code and push it to GitHub, GitHub Actions automatically kicks in. It takes your code and starts testing it against different scenarios, just like trying different puzzle pieces. This is especially useful for end-to-end testing, where you want to see if the entire application works well together. In this case, we’re interested in testing the Lambda functions, but there are many other apps that we can plug in.</p>
<p>With Testcontainers, you will set up a "sandbox" environment to put the puzzle pieces together, in our case, the AWS services we need. GitHub Actions runs your tests, simulating real user interactions. It's like a rehearsal before the big show – making sure everything runs smoothly before it's in front of your users. Another great aspect of using Testcontainers with LocalStack is that all the steps leading up to the tests are already taken care of: Testcontainers manages the lifecycle of LocalStack, while provisioning the infrastructure is the same as in production, whether it's initialisation hooks, Terraform, CDK, or CLI.</p>
<p>Here are some of the things that will make your life easier when you’re using Testcontainers in CI or on your machine:</p>
<ul>
<li><p>Use a waiter to make sure your Lambdas are <code>ACTIVE</code> and not just created:</p>
<pre><code class="lang-java">  LambdaWaiter waiter = lambdaClient.waiter();
      GetFunctionRequest getFunctionRequest = GetFunctionRequest.builder()
          .functionName(<span class="hljs-string">"create-quote"</span>)
          .build();
      WaiterResponse&lt;GetFunctionResponse&gt; waiterResponse = waiter.waitUntilFunctionActiveV2(
          getFunctionRequest);
      waiterResponse.matched().response().ifPresent(response -&gt; LOGGER.info(response.toString()));
</code></pre>
</li>
<li><p>Use this nifty configuration to scan the LocalStack logs and make sure your instance is in the right state before the tests are allowed to start:</p>
<pre><code class="lang-java">  <span class="hljs-keyword">protected</span> <span class="hljs-keyword">static</span> LocalStackContainer localStack =
        <span class="hljs-keyword">new</span> LocalStackContainer(DockerImageName.parse(<span class="hljs-string">"localstack/localstack-pro:2.2.0"</span>))
  ........
  .waitingFor(Wait.forLogMessage(<span class="hljs-string">".*Finished creating resources.*\\n"</span>, <span class="hljs-number">1</span>));
</code></pre>
</li>
<li><p>It’s been recently discovered that some Lambda containers are not removed when the tests end. Don’t worry, a fix is on the way, but in the meantime, you can use a dedicated function to clean up at the end of the test suite (only use this on your machine):</p>
<pre><code class="lang-java">  <span class="hljs-function"><span class="hljs-keyword">protected</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">cleanLambdaContainers</span><span class="hljs-params">()</span> </span>{
      <span class="hljs-keyword">try</span> {
        String scriptPath = <span class="hljs-string">"src/test/resources/delete_lambda_containers.sh"</span>;
        ProcessBuilder processBuilder = <span class="hljs-keyword">new</span> ProcessBuilder(scriptPath);
        processBuilder.inheritIO();
        Process process = processBuilder.start();
        <span class="hljs-keyword">int</span> exitCode = process.waitFor();
        System.out.println(<span class="hljs-string">"Script exited with code: "</span> + exitCode);
      } <span class="hljs-keyword">catch</span> (IOException | InterruptedException e) {
        e.printStackTrace();
      }
    }
</code></pre>
<pre><code class="lang-bash">  #!/bin/bash

  # get a list of running container ids with the word "lambda" in their names
  container_ids=$(docker ps -q --filter name=lambda)

  # loop through the ids and stop and remove each container
  for id in $container_ids; do
      echo "Stopping and removing container: $id"
      docker stop "$id"
      docker rm "$id"
  done
</code></pre>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[LocalStack Stack Insights]]></title><description><![CDATA[Ah, the world of CLI commands, where everyone aspires to be the commander of codes. It's a journey through syntax and semantics, packed with digital dexterity. Faster than the weekends going by and easier than reciting the alphabet backward, CLI hold...]]></description><link>https://hashnode.localstack.cloud/localstack-stack-insights</link><guid isPermaLink="true">https://hashnode.localstack.cloud/localstack-stack-insights</guid><category><![CDATA[localstack]]></category><category><![CDATA[dashboard]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[management console]]></category><category><![CDATA[stack insights]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Wed, 06 Sep 2023 09:30:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176231070/a5decb27-0345-4f36-8df6-4bdb96a076ef.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ah, the world of CLI commands, where everyone aspires to be the commander of codes. It's a journey through syntax and semantics, packed with digital dexterity. Faster than the weekends going by and easier than reciting the alphabet backward, the CLI holds the secrets of your realm, just a few words away. Creating resources becomes so easy and straightforward. What is usually hard to do using a Terminal is monitoring your resources and having charts generated based on your stack’s telemetry.</p>
<p>Well, buckle up because what we have here is a full-blown visual Byte. We’re exploring a lesser-known instrument in the LocalStack toolbox: <a target="_blank" href="http://app.localstack.cloud">the web application</a>. This isn't just your run-of-the-mill admin dashboard – oh no. On top of all the administrative things like managing your account details, subscription, etc., the web app lets you visually interrogate your LocalStack instance's activity and see its every twist and turn.</p>
<p>As soon as you land on the dashboard, you can find Stack Insights on the lower side:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176307852/64f8a283-4644-4811-919d-9518607fb08d.png" alt class="image--center mx-auto" /></p>
<p>Right within your reach lies the complete history of all those LocalStack instances you've ever conjured:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176342779/f61081a2-4507-481b-a369-b79126a6d96a.png" alt class="image--center mx-auto" /></p>
<p>We’re currently running our Stack Bytes sample application consisting of an API Gateway, two Lambdas, and an S3 bucket. When we select the active stack, we see the comprehensive charts of API calls, service invocations, and clients used for them. You’ll also have access to the full operation-per-service list where you can see their corresponding details.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176392009/4a427141-f2e3-49a4-930a-1e1b22db223f.png" alt class="image--center mx-auto" /></p>
<p>Due to constant improvements, these dashboards have been optimized to use an aggregation script that aims to save ~80-90% of the traffic by bundling and sending the information. That means operations won’t show up instantly but rather after a few seconds.</p>
<p>Another thing to keep in mind is that invocations executed in the context of an internal API call (the case where one API uses a boto3 client to call another API internally) will not appear in these dashboards. So, in our examples, our Lambda invocations will not be present because they happen as part of the API Gateway integration.</p>
<p>Above and beyond, let me tell you, the web app doesn't stop at that. It's also your real-time radar for all things service-related. Whether it's the availability overview or the running service status, this System Status has got you covered.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176420547/58197e37-275d-4aa5-a1a5-7047c0e5a959.png" alt class="image--center mx-auto" /></p>
<p>And the Resource Browser? It's like your user-friendly compass, guiding you through neatly categorized vistas of your AWS services.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176452994/23ad4081-09f8-4c11-9bcb-2177ae97d7e2.png" alt class="image--center mx-auto" /></p>
<p>We’re not done yet. From here, you can dive into individual services and see all the nooks and crannies of your stack. In our example, we can see that all the quotes of our beloved characters are there, stored neatly, as individual text files:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176487458/6012a098-afa8-4c69-acd5-e81b49915ec2.png" alt class="image--center mx-auto" /></p>
<p>Let's talk about those Lambdas, shall we? They're not just functions anymore; they're practically an open book here, revealing all their configurations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693176517059/ddbf4fb5-9b4c-4866-960d-b5a05ed2415c.png" alt class="image--center mx-auto" /></p>
<p>Hopefully, this tour in pictures will encourage you to explore even more services and options in the LocalStack web application, and in the meantime, you can check out the docs.</p>
]]></content:encoded></item><item><title><![CDATA[GitHub Actions & Infrastructure Testing with LocalStack]]></title><description><![CDATA[Suppose you’re working on a client application that interacts with some AWS backend services. You want to ensure that your backend is in place while you focus on your main application. As you might have anticipated, we’ll look at how to set up a work...]]></description><link>https://hashnode.localstack.cloud/github-actions-infrastructure-testing-with-localstack</link><guid isPermaLink="true">https://hashnode.localstack.cloud/github-actions-infrastructure-testing-with-localstack</guid><category><![CDATA[github-actions]]></category><category><![CDATA[localstack]]></category><category><![CDATA[AWS]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Mon, 04 Sep 2023 09:30:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693175370508/e67efb62-c7af-4a35-b665-84587a67c064.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Suppose you’re working on a client application that interacts with some AWS backend services. You want to ensure that your backend is in place while you focus on your main application. As you might have anticipated, we’ll look at how to set up a workflow that ensures our infrastructure provisioner consistently delivers what is needed to run our apps on.</p>
<p>In the context of infrastructure testing and system interaction checks, GitHub Actions can be used to automate and streamline essential tasks, such as infrastructure testing, integration testing, and end-to-end testing.</p>
<p>The infrastructure we have:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693175445803/25eb6d66-1367-4ea1-9a7b-faf9572d6ee6.png" alt class="image--center mx-auto" /></p>
<p>Let’s say that all of these resources are configured using Terraform. It may not look like much, but the file creating these services can get lengthy, so monitoring it and ensuring that everything is there might be hard. Not to mention, there are roles and permission policies at play that we don’t often see. Luckily, there’s a way to set up a mechanism that has repetitive actions with predictable outcomes, which will be our safety check that the services we need are always there. We achieve this through GitHub Actions.</p>
<p><strong>TL;DR:</strong> GitHub Actions is a powerful CI/CD (continuous integration and continuous deployment) platform that allows you to automate various workflows, such as building, testing, and deploying your code directly from your GitHub repository.</p>
<p>A workflow configuration file specifies the steps and actions to be executed whenever a certain event occurs, such as a push to the repository. GitHub Actions will automatically detect the new workflow configuration and execute the defined steps whenever a push event occurs. The workflow status and logs will be visible in the "Actions" tab of your GitHub repository.</p>
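<p>For illustration, a stripped-down version of such a workflow file might look like the sketch below. This is not the repository's actual workflow: the deploy and verify steps are assumptions that depend on your stack, and <code>tflocal</code>/<code>awslocal</code> come from the <code>terraform-local</code> and <code>awscli-local</code> Python packages:</p>
<pre><code class="lang-yaml">name: Create and Verify Infrastructure on LocalStack

on:
  push:
    branches: [ main ]
    paths-ignore:
      - 'README.md'

jobs:
  infrastructure-check:
    runs-on: ubuntu-latest
    env:
      LOCALSTACK_API_KEY: ${{ secrets.LOCALSTACK_API_KEY }}
    steps:
      - uses: actions/checkout@v3

      - name: Start LocalStack
        run: |
          pip install localstack terraform-local awscli-local
          localstack start -d
          localstack wait -t 30

      - name: Deploy the infrastructure
        run: |
          tflocal init
          tflocal apply --auto-approve

      - name: Verify the resources exist
        run: awslocal s3 ls
</code></pre>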
<p>Our entire YAML file can be found in the <a target="_blank" href="https://github.com/tinyg210/stack-bytes-apigw-lambda-s3"><code>https://github.com/tinyg210/stack-bytes-apigw-lambda-s3</code></a> repository under the <code>.github/workflows</code> folder, where it will be automatically picked up from.</p>
<p>Whenever we build a workflow, there are key aspects that should not go unnoticed:</p>
<p><strong>Name and Purpose</strong>: A clear and self-explanatory title that can answer the following: What does it do? Why is it useful? <code>Create and Verify Infrastructure on LocalStack</code> - a concise yet descriptive sentence.</p>
<p><strong>Workflow Triggers</strong>: Indicate when and under what conditions the action is triggered. We’ll run our checks on each push against the <code>main</code> branch but ignore changes to the README.md file, as those are irrelevant here.</p>
<p><strong>Environment</strong>: Define the environment or context in which the action runs. We’re storing our <code>LOCALSTACK_API_KEY</code> as a secret that GitHub will encrypt and manage.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693175483120/1246ffb5-87f8-433f-8b3f-fc5993d0a735.png" alt class="image--center mx-auto" /></p>
<p><strong>Steps and Actions</strong>: The steps or actions composing the workflow. The steps we need to have in our infrastructure-check workflow are actually quite simple, and GitHub Actions uses YAML-based configuration files to define this process:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693175530889/d2b37d98-0c56-4ef3-920b-fbe17ee20e39.png" alt class="image--center mx-auto" /></p>
<p><strong>Dependencies</strong>: If the action depends on other services, tools, or resources, we need to bring them in for successful execution and prepare the workflow context. We start with the runner (the VM executing the workflow) - <code>ubuntu-latest</code>, and we also set up a JDK, install Python, the aws-local CLI, the terraform-local CLI, etc. In our diagram, we’re interested in what happens after the dotted line. Anything before that is just preparing the ground.</p>
<p><strong>Error Handling</strong>: How the action handles errors or failures. In this case, it’s enough to check the logs and the messages they present, and also to get a diagnostics report from a LocalStack endpoint, in case that’s where the process failed.</p>
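<p>As a small example of that last point, a script like the Python sketch below could run as a final workflow step. It queries LocalStack’s <code>/_localstack/health</code> endpoint and reports which required services aren’t up; the required-service list here is just an illustration for our stack:</p>
<pre><code class="lang-python">import json
import urllib.request


def missing_services(health, required):
    """Return the required services that are not up, given a LocalStack
    health payload of the form {"services": {"s3": "available", ...}}."""
    up = {"available", "running"}
    services = health.get("services", {})
    return [name for name in required if services.get(name) not in up]


def check_localstack(url="http://localhost:4566/_localstack/health"):
    # Fetch the health report from the running LocalStack instance.
    with urllib.request.urlopen(url) as resp:
        health = json.load(resp)
    # The services our example stack needs.
    return missing_services(health, ["apigateway", "lambda", "s3"])
</code></pre>
<p>A non-empty result tells you exactly which piece of the backend is missing, which is often quicker to act on than scanning raw logs.</p>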
<p>By covering these aspects in your GitHub Action description, you'll provide your team with a clear understanding of what the action does, how to use it, and what to expect when incorporating it into other workflows. This is how you ensure that your dependencies are in place every time.</p>
]]></content:encoded></item><item><title><![CDATA[Language SDKs to Use with LocalStack]]></title><description><![CDATA[Is your application interacting with Amazon Web Services? No worries! AWS provides client libraries and SDKs for a wide range of programming languages, here are just a few of them:

As promised, LocalStack can be a drop-in replacement for the most po...]]></description><link>https://hashnode.localstack.cloud/language-sdks-to-use-with-localstack</link><guid isPermaLink="true">https://hashnode.localstack.cloud/language-sdks-to-use-with-localstack</guid><category><![CDATA[AWS]]></category><category><![CDATA[localstack]]></category><category><![CDATA[AWS SDK]]></category><category><![CDATA[sdk]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Fri, 01 Sep 2023 09:30:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693154716903/104d18f7-93aa-4b3d-95f8-809a18da8a03.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Is your application interacting with Amazon Web Services? No worries! AWS provides client libraries and SDKs for a wide range of programming languages; here are just a few of them:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693153265275/5570f6a3-9b13-4678-be29-2e18d202b8ee.png" alt class="image--center mx-auto" /></p>
<p>As promised, LocalStack can be a drop-in replacement for the most popular AWS services, which means there are many ways of easily configuring your clients to point to a different endpoint.</p>
<p>These client libraries allow you to interact with various AWS/LocalStack services and APIs more easily and efficiently from within your preferred programming language. Each SDK typically provides a set of APIs, classes, and methods that abstract the low-level details of making HTTP requests and handling authentication, making it easier to integrate AWS services into your applications.</p>
<p>The <strong>TL;DR</strong> part: Here are some easy ways you can configure an S3 client in a few other languages. Notice how similar they are and how you can always transition between AWS and LocalStack with just a few variables:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693155761313/92e392ca-905c-4b79-8892-a59f580ed193.png" alt class="image--center mx-auto" /></p>
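<p>To make the pattern above concrete in text form, here’s a Python (boto3) version of the same idea. It’s a sketch using LocalStack’s default edge endpoint and the dummy <code>test</code> credentials LocalStack accepts; removing <code>endpoint_url</code> points the very same client at real AWS:</p>
<pre><code class="lang-python">import boto3

# The only things that differ from a real AWS client are the endpoint
# and the dummy credentials LocalStack accepts out of the box.
LOCALSTACK_ENDPOINT = "http://localhost:4566"

s3_client = boto3.client(
    "s3",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
    endpoint_url=LOCALSTACK_ENDPOINT,  # remove this line to talk to AWS instead
)
</code></pre>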
<p>This will be a follow-along type of article :). Let’s have a look at an S3 Java client.</p>
<ol>
<li><p>Clone the repository.</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> &lt;https://github.com/tinyg210/stack-bytes-apigw-lambda-s3.git&gt;
</code></pre>
</li>
<li><p>Export your <code>LOCALSTACK_API_KEY</code> as an environment variable.</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">export</span> LOCALSTACK_API_KEY=&lt;YOUR_API_KEY&gt;
</code></pre>
<p> *Sidenote: make sure there’s <code>apigw-lambda.jar</code> in the <code>/stack-bytes-lambda/target/</code> folder. If not, or if anything fails, please run <code>mvn clean package shade:shade</code> in the <code>stack-bytes-lambda</code> folder.</p>
</li>
<li><p>Start LocalStack (we stay in the root folder).</p>
<pre><code class="lang-bash"> docker compose up
</code></pre>
<p> Let’s have a look at our setup’s diagram:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693154088522/b86024f7-0150-46f7-b881-d318932105a6.png" alt class="image--center mx-auto" /></p>
<p> Notice how we have a small Java app, with an S3 client (in the <code>S3Configs</code> class) on the right-hand side.</p>
</li>
<li><p>Switch to the <code>stack-bytes-sdk</code> folder where the new app resides.</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">cd</span> stack-bytes-sdk
</code></pre>
<pre><code class="lang-bash"> mvn clean package
</code></pre>
<p> The S3 Java Client:</p>
<pre><code class="lang-java"> <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String ACCESS_KEY = <span class="hljs-string">"test"</span>;
 <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String SECRET_KEY = <span class="hljs-string">"test"</span>;

 <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String LOCALSTACK_ENDPOINT = <span class="hljs-string">"https://s3.localhost.localstack.cloud:4566"</span>;

 <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> Region region = Region.US_EAST_1;
   <span class="hljs-keyword">protected</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String BUCKET_NAME = <span class="hljs-string">"quotes"</span>;
   <span class="hljs-comment">// create an S3 client</span>
   <span class="hljs-keyword">protected</span> <span class="hljs-keyword">static</span> S3Client s3Client = S3Client.builder()
       .endpointOverride(URI.create(LOCALSTACK_ENDPOINT))
       .credentialsProvider(StaticCredentialsProvider.create(
           AwsBasicCredentials.create(ACCESS_KEY, SECRET_KEY)))
       .region(region)
       .build();
</code></pre>
<p> The Java SDK client has an easy and intuitive way of adding the necessary configurations so that everything can be replaced with real values and continue to work on the AWS platform.</p>
</li>
<li><p>Let’s post a file to the S3 bucket using our new client in the S3PostRequest class:</p>
<pre><code class="lang-bash"> mvn <span class="hljs-built_in">exec</span>:java -Dexec.mainClass=<span class="hljs-string">"s3service.S3PostRequest"</span>
</code></pre>
<p> This will take an existing file in the <code>src/main/resources</code> folder and add it to the quotes bucket.</p>
</li>
<li><p>Let’s read it now, using the same client:</p>
<pre><code class="lang-bash"> mvn <span class="hljs-built_in">exec</span>:java -Dexec.mainClass=<span class="hljs-string">"s3service.S3GetRequest"</span>
</code></pre>
<pre><code class="lang-bash"> Object text: Author: Fiona
 Quote: I want what any princess wants - to live happily ever after... with the ogre I married.
</code></pre>
</li>
<li><p>Now, let’s retrieve the same object using a different client and the dedicated Lambda function:</p>
<pre><code class="lang-bash"> curl --location <span class="hljs-string">'http://id12345.execute-api.localhost.localstack.cloud:4566/dev/quoteApi?author=Fiona'</span>
</code></pre>
<pre><code class="lang-bash"> {<span class="hljs-string">"text"</span>:<span class="hljs-string">"Quote: I want what any princess wants - to live happily ever after... with the ogre I married."</span>}
</code></pre>
</li>
</ol>
<p>The output formats differ, as this is the processed output of the GET Lambda, but essentially, they are the same.</p>
<p>For this case, we used the Java client, but don’t worry, the <a target="_blank" href="https://docs.localstack.cloud/applications/">LocalStack Developer Hub</a> is packed with examples in various other programming languages.</p>
<p>Nothing can stop you from using LocalStack as a drop-in replacement for AWS now. With minimal configurations, your applications won’t know the difference.</p>
]]></content:encoded></item><item><title><![CDATA[LocalStack Cloud Pods]]></title><description><![CDATA[Cloud Pods - sounds fancy, right? Well, they certainly are an elegant solution - they're all about a dynamic and efficient approach to managing state within your LocalStack ecosystem and sharing it like a pro with your squad.
Gone are the days of mun...]]></description><link>https://hashnode.localstack.cloud/localstack-cloud-pods</link><guid isPermaLink="true">https://hashnode.localstack.cloud/localstack-cloud-pods</guid><category><![CDATA[localstack]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[apigateway]]></category><category><![CDATA[cloud-pod]]></category><dc:creator><![CDATA[Anca G]]></dc:creator><pubDate>Wed, 30 Aug 2023 09:30:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693152565299/15b75504-7d96-4651-b6d0-0585a8313958.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Cloud Pods - sounds fancy, right? Well, they certainly are an elegant solution - they're all about a dynamic and efficient approach to managing state within your LocalStack ecosystem and sharing it like a pro with your squad.</p>
<p>Gone are the days of mundane infrastructure setup and data restore routines at every startup of LocalStack. Cloud Pods eliminate all those reboot worries, making it a breeze to set the stage for your top-priority work.</p>
<p>We’ll get to a follow-along example in a minute, but first, let’s have a look at how things work.</p>
<p>Instead of simply restoring a state when restarting LocalStack, Cloud Pods allow you to take snapshots of your local instance (with the <code>save</code> command) and inject such snapshots into a running instance (with the <code>load</code> command) without requiring a restart.</p>
<p>In addition, we provide a remote storage backend that can be used to store the state of your running application and share it with your team members.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693153073015/ff5f3b60-96b9-4db4-835b-20a58d80522d.png" alt class="image--center mx-auto" /></p>
<p>Now, let’s explore how this is done.</p>
<ol>
<li><p>Clone the repository.</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/tinyg210/stack-bytes-apigw-lambda-s3.git
</code></pre>
</li>
<li><p>Export your <code>LOCALSTACK_API_KEY</code> as an environment variable.</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">export</span> LOCALSTACK_API_KEY=&lt;YOUR_API_KEY&gt;
</code></pre>
<p> *Sidenote: make sure there’s <code>apigw-lambda.jar</code> in the <code>/stack-bytes-lambda/target/</code> folder. If not, or if anything fails, please run <code>mvn clean package shade:shade</code> in the <code>stack-bytes-lambda</code> folder.</p>
</li>
<li><p>Start LocalStack:</p>
<pre><code class="lang-bash"> docker compose up
</code></pre>
</li>
<li><p>Let’s add some Quotes to our S3 storage:</p>
<pre><code class="lang-bash"> curl --location <span class="hljs-string">'http://id12345.execute-api.localhost.localstack.cloud:4566/dev/quoteApi'</span> \
                                     --header <span class="hljs-string">'Content-Type: application/json'</span> \
                                     --data <span class="hljs-string">'{
                                     "author": "Shrek",
                                     "text": "NO! You dense, irritating, miniature beast of burden! Ogres are like onions!"
                                 }'</span>
 curl --location <span class="hljs-string">'http://id12345.execute-api.localhost.localstack.cloud:4566/dev/quoteApi'</span> \
                                     --header <span class="hljs-string">'Content-Type: application/json'</span> \
                                     --data <span class="hljs-string">'{
                                     "author": "Donkey",
                                     "text": "And in the morning...I'</span>\'<span class="hljs-string">'m making waffles!"
                                 }'</span>
</code></pre>
</li>
<li><p>Log in to the <code>localstack CLI</code> tool with your registered account and password to get the full Cloud Pod experience. If you are not logged in, you can only create a snapshot locally, on your machine.</p>
<pre><code class="lang-bash"> localstack login
</code></pre>
</li>
<li><p>Now that we have a reasonable amount of data, let’s save our state:</p>
<pre><code class="lang-bash"> localstack pod save cloud-pod-quotes

 Cloud Pod cloud-pod-quotes successfully exported.
</code></pre>
</li>
<li><p>We can confidently shut down our containers.</p>
<pre><code class="lang-bash"> docker compose down
</code></pre>
</li>
<li><p>Verify the pod is there for your team to see.</p>
<pre><code class="lang-bash"> localstack pod list
 ┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
 ┃ <span class="hljs-built_in">local</span>/remote ┃ Name                                 ┃
 ┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
 │    remote    │ cloud-pod-quotes                     │
</code></pre>
</li>
<li><p>Let’s start a fresh instance of LocalStack, this time using the CLI tool.</p>
<pre><code class="lang-bash"> localstack start
</code></pre>
</li>
<li><p>Load the pod into the LocalStack container.</p>
<pre><code class="lang-bash">localstack pod load cloud-pod-quotes
</code></pre>
</li>
<li><p>Verify that the state and data are there:</p>
<pre><code class="lang-bash">curl --location <span class="hljs-string">'http://id12345.execute-api.localhost.localstack.cloud:4566/dev/quoteApi?author=Donkey'</span>

{<span class="hljs-string">"text"</span>:<span class="hljs-string">"Quote: And in the morning...I'm making waffles!"</span>}
</code></pre>
<pre><code class="lang-bash">curl --location <span class="hljs-string">'http://id12345.execute-api.localhost.localstack.cloud:4566/dev/quoteApi?author=Shrek'</span>

{<span class="hljs-string">"text"</span>:<span class="hljs-string">"Quote: NO! You dense, irritating, miniature beast of burden! Ogres are like onions!"</span>}
</code></pre>
</li>
</ol>
<p>Smooth sailing all the way! We did a little state and data magic and tucked them neatly into a Cloud Pod. Now, it's our secret weapon – perfect for sprinkling LocalStack goodness into those repetitive acts like CI tests. And hey, sharing is caring, right? So, we pass this Cloud Pod gem to our colleagues for a front-row seat to our LocalStack brilliance.</p>
]]></content:encoded></item></channel></rss>