Testing new versions of SQS-based microservices efficiently and safely is crucial for modern development workflows. This guide explores how to leverage Signadot Sandboxes to rapidly test changes, whether during pull request reviews or local development, without disrupting stable environments. We’ll focus on integrating Amazon SQS within a Minikube cluster and briefly touch upon the SNS-to-SQS fanout pattern for broader message distribution. This setup empowers developers to iterate quickly on consumer message processing, isolate changes, observe message flow, and safely test SQS integration patterns.
What You Will Learn:
- Deploying SQS-based microservices and Signadot in Kubernetes.
- Running both baseline and sandboxed versions of producer and consumer services.
- Understanding AWS SQS message distribution and SNS + SQS fanout integration.
- Deploying a sandboxed consumer and routing messages to it.
- Gaining insight into message routing and selective processing.
Prerequisites
Before diving in, ensure you have the following components installed and configured:
1. Minikube with Docker
- Install Docker, Minikube, and Helm on your local machine.
- Start Minikube:
minikube start --driver=docker
- Use Minikube’s Docker daemon:
eval $(minikube docker-env)
- Verify cluster readiness:
kubectl cluster-info
2. Active AWS Account
- Create and activate an AWS account.
- Alternatively, use LocalStack for emulating AWS services (14-day trial available).
- Create an IAM user with necessary permissions to access AWS SQS and SNS.
3. Signadot Account and Operator
- Sign up for a Signadot account.
- Install the Signadot Operator in your Kubernetes cluster.
- Install the Signadot CLI tool locally.
Project Setup
To begin, clone the project repository:
$ git clone https://github.com/your-org/SQS-Based-Microservices-with-Signadot
$ cd SQS-Based-Microservices-with-Signadot
The project repository is structured as follows:
├── apps/ # Contains distinct microservices.
│ ├── consumer/ # SQS message consumer service.
│ │ └── app.py
│ ├── frontend/ # User-facing web server and UI assets.
│ │ ├── app.py
│ │ └── public/ # Static files (HTML, CSS, images).
│ └── producer/ # Service for publishing messages.
│ └── app.py
├── modules/ # Shared, reusable code modules.
│ ├── DataTransferObjects/ # Pydantic models for API data.
│ ├── events/ # Event logging and retrieval using Redis.
│ ├── logger/ # Standardized logging configuration.
│ ├── otel/ # OpenTelemetry instrumentation helpers.
│ ├── pull_router/ # Client for Signadot routing service.
│ ├── sns/ # AWS SNS client module.
│ └── sqs/ # AWS SQS client module.
├── sandbox/ # Signadot sandboxing configuration.
│ └── sns-sqs-router-grp.yaml
├── Dockerfile # Instructions for building the Docker image.
├── main.py # Main script to launch microservices locally.
├── README.md # Project documentation.
└── requirements.txt # Python package dependencies.
AWS Cloud Setup for SQS and SNS
- Navigate to the AWS signup page and create your account.
- Create a new IAM user and grant the AmazonSNSFullAccess and AmazonSQSFullAccess permissions.
- Generate an access key for this IAM user. Store the Access Key ID and Secret Access Key securely.
- Update the k8s/secrets.yaml file with your base64-encoded AWS credentials:
# k8s/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials
  namespace: aws-sqs-app
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <Base64 Encoded Access Key Id>
  AWS_SECRET_ACCESS_KEY: <Base64 Encoded Secret Access Key>
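The base64 values can be produced on your workstation; note the -n flag, which prevents a trailing newline from corrupting the encoded credential (the key below is a placeholder):
$ echo -n "AKIAXXXXXXXXXXXXXXXX" | base64
$ echo -n "<your-secret-access-key>" | base64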
An image in the original article showed the IAM user sqs-sns-user with the AmazonSNSFullAccess and AmazonSQSFullAccess policies attached.
Build the Demo Application
First, build the Docker image for the demo application, named sqs-signadot, which showcases shared SQS and SNS + SQS fan-out patterns.
$ docker build -t sqs-signadot:latest .
$ docker image ls
Verify the image is present in your Minikube Docker repository.
An image in the original article showed the sqs-signadot image successfully built and listed in the Docker images.
Deploy the Demo Application
Deploy the demo application, consisting of Frontend, Producer, and Consumer services, to establish the baseline AWS SQS & SNS flow with Signadot integration.
$ export NAME_SPACE=aws-sqs-app
$ kubectl create ns $NAME_SPACE
namespace/aws-sqs-app created
$ kubectl apply -f k8s/
configmap/app-config created
deployment.apps/consumer-deployment created
service/frontend-service created
deployment.apps/frontend-deployment created
service/producer-service created
deployment.apps/producer-deployment created
service/redis created
statefulset.apps/redis created
secret/aws-credentials created
The deployed services include:
- Frontend: Exposes a GUI and forwards messages to the producer via HTTP.
- Producer: Publishes messages to the AWS SQS queue.
- Consumer: Polls the shared SQS queue and selectively consumes messages destined for either the baseline or a sandbox.
- Redis Server: Stores and retrieves event logs for message distribution tracking.
Verify that all pods are running:
$ kubectl -n $NAME_SPACE get po
Example output:
NAME READY STATUS RESTARTS AGE
consumer-deployment-7444f9b7f8-96vzm 2/2 Running 2 (4m39s ago) 27m
frontend-deployment-6c5f85dc7-rjzx5 1/1 Running 1 (4m39s ago) 27m
producer-deployment-85c6f7d747-b9lpv 2/2 Running 2 (4m39s ago) 27m
redis-0 1/1 Running 1 (4m39s ago) 27m
Signadot will establish a tunnel, making in-cluster services available locally. To configure this, create ~/.signadot/config.yaml as per the Signadot CLI documentation. Use the output of kubectl config current-context for the kubeContext value.
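A minimal config might look like this (a sketch based on the Signadot CLI docs; the org name and API key come from your Signadot dashboard):
# ~/.signadot/config.yaml
org: <your-org-name>
api_key: <your-signadot-api-key>
local:
  connections:
    - cluster: crop-staging-1   # name the cluster was registered under in Signadot
      kubeContext: minikube     # from `kubectl config current-context`
      type: PortForward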
Connect to Signadot locally:
$ signadot local connect --config ~/.signadot/config.yaml
signadot local connect needs root privileges for:
- updating /etc/hosts with cluster service names
- configuring networking to direct local traffic to the cluster
signadot local connect has been started ✓
* runtime config: cluster crop-staging-1, running with root-daemon
✓ Local connection healthy!
* operator version 0.19.2
* port-forward listening at ":39911"
* localnet has been configured
* 19 hosts accessible via /etc/hosts
* sandboxes watcher is running
* Connected Sandboxes:
- No active sandbox
The frontend service will be exposed at `http://frontend-service.aws-sqs-app:8080`.
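With the tunnel up, a quick sanity check from your workstation confirms that the in-cluster hostname resolves (expect an HTTP 200 once the frontend is healthy):
$ curl -s -o /dev/null -w "%{http_code}\n" http://frontend-service.aws-sqs-app:8080/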
Initializing AWS SQS Queue and SNS Topic
In cloud-native environments, it is common to provision infrastructure programmatically at service startup. The following snippet creates the SQS queue and SNS topic when the producer or consumer service boots. The AWS modules are lazy-loaded so that services that don’t require them (such as the frontend) won’t crash if credentials aren’t configured.
if args.producer or args.consumer:
    # Lazy-load the AWS modules only when the producer or consumer is
    # actually run. This prevents services that do not need them (like
    # the frontend) from crashing if AWS credentials are not configured
    # in their environment.
    from modules.sqs.sqs_client import create_queue, get_queue_arn
    from modules.sns.sns_client import create_topic, subscribe_sqs_to_sns

    logger.info("Initializing SQS queue...")
    queue_url = create_queue()
    if not queue_url:
        logger.error("Failed to create or get SQS queue. Exiting.")
        return

    logger.info("Initializing SNS topic and subscription...")
    topic_arn = create_topic()
    if not topic_arn:
        logger.error("Failed to create or get SNS topic. Exiting.")
        return

    queue_arn = get_queue_arn(queue_url)
    if not queue_arn:
        logger.error("Failed to get SQS queue ARN. Exiting.")
        return

    subscription_arn = subscribe_sqs_to_sns(topic_arn, queue_arn, queue_url)
    if not subscription_arn:
        logger.error("Failed to subscribe SQS queue to SNS topic. Exiting.")
        return

    logger.info(f"Successfully subscribed queue to topic. Subscription ARN: {subscription_arn}")
An image in the original article displayed an SQS queue in the AWS console, highlighting its creation and an access policy configured for SNS to SQS Fanout.
Another image in the original article showed an AWS SNS topic with an SQS queue subscribed to it, illustrating the SNS + SQS fan-out pattern.
Testing Baseline Flow without Signadot Sandboxes
The diagram below illustrates the architectural flow for baseline message processing.
An image in the original article showed a diagram of the baseline message processing flow: Frontend -> Producer -> SQS Queue -> Consumer -> Redis.
The producer publishes messages to the AWS SQS queue using the following code:
# Send message to SQS queue
logger.info(f"Sending message to SQS queue: {SQS_QUEUE_URL}")
response = sqs_client.send_message(
QueueUrl=SQS_QUEUE_URL,
MessageBody=json.dumps(msg_dict),
MessageAttributes=message_attributes,
)
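The message_attributes above is a plain dict of SQS message attributes; thanks to the botocore auto-instrumentation described later, OTel propagation headers (including W3C baggage carrying the routing key) are injected into it automatically on send. Conceptually, what reaches SQS looks roughly like this (values are illustrative, and the sd-routing-key baggage entry is an assumption about the key name):
# Illustrative shape of an outgoing message once the OTel botocore
# instrumentation has injected propagation context into the attributes.
msg_dict = {"id": "42", "text": "hello from the demo UI"}
message_attributes = {
    "traceparent": {
        "DataType": "String",
        "StringValue": "00-<trace-id>-<span-id>-01",
    },
    "baggage": {
        "DataType": "String",
        # The Signadot routing key travels here; the consumer extracts it
        # with extract_routing_key_from_baggage(), shown later.
        "StringValue": "sd-routing-key=<routing-key>",
    },
}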
Access the AWS SQS demo frontend at `http://localhost:8080`. Send a message, and observe as the baseline consumer picks it up and displays it in the frontend interface.
An image in the original article showed the AWS SQS demo frontend with a message successfully processed by the baseline consumer.
Producer’s Header Context Propagation
Turning to consumer sandbox testing, a key challenge is rapidly testing new versions of producer and/or consumer code without disrupting shared testing environments. The goal is an isolated testing environment for validating changes before merging. This is achieved by:
- Using OpenTelemetry auto-instrumentation: Propagating request headers to ensure context flows seamlessly from producers through the messaging system to consumers.
- Implementing selective routing: Directing traffic to sandboxed versions of services based on specific header values.
- Deploying new service versions with Signadot sandboxes: Creating isolated environments for testing code changes from development branches or local workstations.
This approach allows simultaneous testing of new consumer logic, producer modifications, or both in a controlled sandbox.
OpenTelemetry auto-instrumentation propagates headers without modifying application code. The Dockerfile installs necessary packages for OTel auto-instrumentation:
# Install OpenTelemetry SDK + instrumentations
RUN pip install --no-cache-dir \
opentelemetry-distro \
opentelemetry-exporter-otlp \
opentelemetry-instrumentation-asgi \
opentelemetry-instrumentation-fastapi \
opentelemetry-instrumentation-requests \
opentelemetry-instrumentation-botocore
# Install OpenTelemetry bootstrap separately
RUN opentelemetry-bootstrap -a install
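Context propagation does not require a running trace backend; if no collector is deployed, the exporters can be disabled with standard OTel environment variables (an optional addition, not part of the original Dockerfile):
# Optional: keep header propagation but export no telemetry.
ENV OTEL_TRACES_EXPORTER=none \
    OTEL_METRICS_EXPORTER=none \
    OTEL_PROPAGATORS=tracecontext,baggage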
FastAPI services (Frontend & Producer) are then empowered with OTel automatic header context propagation:
def run_frontend():
    # Launches the FastAPI frontend app using Uvicorn as a subprocess with
    # OTel auto-instrumentation and handles graceful shutdown on
    # KeyboardInterrupt (Ctrl+C).
    command = ["opentelemetry-instrument", "uvicorn", "apps.frontend.app:app",
               "--host", "0.0.0.0", "--port", "8000"]
    logger.info(f"Starting frontend server with command: {' '.join(command)}")
    process = subprocess.Popen(command)


def run_producer(queue_url: str, topic_arn: str):
    # Launches the FastAPI producer app using Uvicorn as a subprocess with
    # OTel auto-instrumentation and handles graceful shutdown on
    # KeyboardInterrupt (Ctrl+C).
    # Set the queue URL and topic ARN as environment variables for the
    # producer subprocess.
    env = os.environ.copy()
    env["SQS_QUEUE_URL"] = queue_url
    env["SNS_TOPIC_ARN"] = topic_arn
    command = ["opentelemetry-instrument", "uvicorn", "apps.producer.app:app",
               "--host", "0.0.0.0", "--port", "8000"]
    logger.info(f"Starting producer server with command: {' '.join(command)}")
    # Pass the modified environment to the subprocess
    process = subprocess.Popen(command, env=env)
Testing Sandbox Flow with Signadot Sandboxes
A Signadot sandbox provides an isolated, short-lived environment for safely testing code changes without impacting other test environment traffic.
In a consumer sandbox, the following occurs:
- Dedicated Subscribers: Each sandbox instantiates dedicated subscribers to maintain isolated consumption offsets.
- Message Filtering: Irrelevant messages are filtered out using the Routes API, based on routing key evaluation.
- Context Preservation: The routing key is propagated downstream when the subscriber communicates with other services or message flows.
The routes_client periodically fetches routing keys in the consumer sandbox:
# --- Start asyncio background task in a separate thread ---
routes_client = RoutesAPIClient(sandbox_name=SANDBOX_NAME)
cache_updater_coro = routes_client._periodic_cache_updater()
asyncio_thread = threading.Thread(
target=start_async_loop,
args=(cache_updater_coro,),
daemon=True,
)
asyncio_thread.start()
logger.info("Started background cache updater.")
A background thread keeps the routing-key cache fresh: _periodic_cache_updater polls the Routes API every 0.5 seconds for the routing keys relevant to this consumer.
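The start_async_loop helper referenced above isn’t shown in the snippet; a minimal version (an assumption about its implementation) simply gives the background thread its own event loop and runs the coroutine on it:
import asyncio

def start_async_loop(coro):
    # Each thread needs its own asyncio event loop; create one and run the
    # cache-updater coroutine on it until the process exits.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(coro)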
The OpenTelemetry baggage header is extracted within the consumer code:
def extract_routing_key_from_baggage(
    message_attributes: dict, getter: Optional[Getter] = sqs_getter
) -> Optional[str]:
    ctx = W3CBaggagePropagator().extract(
        carrier=message_attributes,
        getter=getter,
    )
    return baggage.get_baggage(ROUTING_KEY, ctx)
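The sqs_getter passed as the default above adapts SQS message attributes to the propagator's carrier interface: SQS wraps each attribute as {"DataType": ..., "StringValue": ...}, so the getter unwraps StringValue. A minimal version (the repo's implementation may differ):
from typing import List, Optional

from opentelemetry.propagators.textmap import Getter

class SQSGetter(Getter):
    """Read W3C propagation headers out of SQS MessageAttributes."""

    def get(self, carrier: dict, key: str) -> Optional[List[str]]:
        attr = carrier.get(key)
        if attr and "StringValue" in attr:
            return [attr["StringValue"]]
        return None

    def keys(self, carrier: dict) -> List[str]:
        return list(carrier.keys())

sqs_getter = SQSGetter()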
Selective consumption occurs when the consumer checks each message’s routing key against its sandbox’s routing key using the Routes API. If there’s no match, the message is skipped and immediately returned to the queue by resetting its visibility timeout to zero, making it available to the correct sandbox’s consumer. This ensures isolation and efficient message delivery:
if not router_api.should_process(routing_key):
    # This message is not for this consumer instance. Make it immediately
    # visible again for other consumers.
    logger.info(f"Skipping message with routing_key: '{routing_key}'. Releasing back to queue.")
    sqs_client.change_message_visibility(
        QueueUrl=sqs_queue_url,
        ReceiptHandle=message["ReceiptHandle"],
        VisibilityTimeout=0,
    )
    continue
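The should_process() call above encapsulates the sandbox-versus-baseline decision. A minimal sketch of the logic, assuming the background updater maintains a set of routing keys (keys assigned to this sandbox when running sandboxed, or keys claimed by any sandbox when running as baseline):
from typing import Optional

def should_process(self, routing_key: Optional[str]) -> bool:
    # Sandboxed consumer: handle only messages whose routing key is
    # assigned to this sandbox.
    if self.sandbox_name:
        return routing_key is not None and routing_key in self._routing_keys
    # Baseline consumer: handle messages with no routing key, plus any
    # key not claimed by a running sandbox.
    return routing_key is None or routing_key not in self._routing_keys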
Creating the Sandbox – Signadot’s Key Feature
To create a consumer sandbox, define a sandbox configuration file:
# sqs_sandbox.yaml
apiVersion: signadot.com/v1
kind: Sandbox
name: sqs-consumer-sandbox
spec:
  labels:
    app: "sns-sqs-fanout-sandbox"
  description: Isolated sandbox environment to enable SQS message routing
  cluster: "@{cluster}"
  forks:
    - forkOf:
        kind: Deployment
        name: consumer-deployment
        namespace: aws-sqs-app
This YAML instructs Signadot to create a sandbox named sqs-consumer-sandbox in your cluster and fork the consumer-deployment from the aws-sqs-app namespace. The forked consumer picks up its sandbox name from its environment and passes it to the Routes API client shown earlier.
Apply the sandbox configuration using the Signadot CLI:
$ signadot sandbox apply -f ./sandbox/sqs_sandbox.yaml --set cluster="crop-staging-1"
# To list the pods being created
$ kubectl -n $NAME_SPACE get po
Example output, showing the newly created sandbox pod:
NAME READY STATUS RESTARTS AGE
consumer-deployment-7444f9b7f8-lrcld 2/2 Running 2 (5m58s ago) 23h
frontend-deployment-6c5f85dc7-8mzfk 1/1 Running 1 (5m58s ago) 23h
producer-deployment-85c6f7d747-vt56s 2/2 Running 2 (5m58s ago) 23h
redis-0 1/1 Running 1 (5m58s ago) 23h
sqs-consumer-sandbox-dep-consumer-deployment-7ca2ec39-cc4dfw2p 2/2 Running 0 61s
The sqs-consumer-sandbox-dep-consumer-deployment-... pod indicates the successful creation of the sandbox.
Testing Sandbox Behavior with Routing Key
The diagram below illustrates the routing key mechanism.
An image in the original article showed a diagram detailing how the routing key works to direct messages to the correct consumer, either baseline or sandbox.
After enabling Signadot’s browser extension, select sqs-consumer-sandbox to route traffic to it.
An image in the original article showed the AWS SQS demo frontend with the Signadot browser extension enabled, and sqs-consumer-sandbox selected.
Building on the shared SQS pattern, we now explore the SNS-to-SQS Fan-out pattern. For simplicity, this demonstration uses an existing queue with multiple consumers sharing it and coordinating message handling, similar to plain SQS. Another approach for SNS/SQS involves giving each consumer its own queue, dynamically created, where consumers apply selective logic to drop irrelevant messages. More details on the SNS + SQS Fanout pattern with ephemeral SQS queues can be found in the AWS documentation.
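For reference, that queue-per-consumer variant would look roughly like this at consumer startup (a sketch only; this demo does not use it, and the queue policy that authorizes SNS delivery is omitted):
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

def create_sandbox_queue(sandbox_name: str, topic_arn: str) -> str:
    """Create an ephemeral queue for this sandbox and subscribe it to the topic."""
    queue_url = sqs.create_queue(QueueName=f"demo-{sandbox_name}")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    # NOTE: a real setup must also attach a queue policy allowing the SNS
    # topic to send messages, and delete the queue when the sandbox ends.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
    return queue_url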
Creating Producer Sandbox for SNS to SQS Fanout Pattern
To create a producer sandbox for SNS integration, define the sns_sandbox.yaml configuration:
# sns_sandbox.yaml
apiVersion: signadot.com/v1
kind: Sandbox
name: sns-sqs-fanout-sandbox
spec:
  labels:
    app: "sns-sqs-fanout-sandbox"
  description: Isolated sandbox environment to enable SNS-to-SQS fanout routing
  cluster: "@{cluster}"
  forks:
    - forkOf:
        kind: Deployment
        name: producer-deployment
        namespace: aws-sqs-app
      customizations:
        env:
          - name: SNS_FANOUT_PUBLISH
            value: "true"
Key points for this producer sandbox:
- Forked workload — The producer deployment is sandboxed specifically for SNS integration.
- Environment variable — A new environment variable, SNS_FANOUT_PUBLISH, is introduced to switch the producer from publishing to SQS to publishing to SNS. The relevant code logic is:

event_description = (
    'Sending produce request to SNS topic'
    if SNS_FANOUT_PUBLISH
    else 'Sending produce request to SQS queue'
)
if SNS_FANOUT_PUBLISH:
    # Publish message to SNS topic
    logger.info(f"Publishing message to SNS topic: {SNS_TOPIC_ARN}")
    response = sns_client.publish(
        TopicArn=SNS_TOPIC_ARN,
        Message=json.dumps(msg_dict),
        MessageAttributes=message_attributes,
    )
else:
    # Send message to SQS queue
    logger.info(f"Sending message to SQS queue: {SQS_QUEUE_URL}")
    response = sqs_client.send_message(
        QueueUrl=SQS_QUEUE_URL,
        MessageBody=json.dumps(msg_dict),
        MessageAttributes=message_attributes,
    )

Provision the sandbox:
$ signadot sandbox apply -f ./sandbox/sns_sandbox.yaml --set cluster="crop-staging-1"
Next, create a Signadot Router Group to control traffic routing into sandboxes:
# sns-sqs-router-grp.yaml
name: sns-sqs-router-grp
spec:
  cluster: "@{cluster}"
  description: "route group for testing multiple sandboxes together"
  match:
    any:
      - label:
          key: app
          value: sns-sqs-fanout-sandbox
A router group routes network traffic from one or more sandboxes to one or more endpoints based on label selectors, acting as a traffic router or load balancer within your Signadot sandboxes and Kubernetes clusters.
Provision the router group:
$ signadot routegroup apply -f ./sandbox/sns-sqs-router-grp.yaml --set cluster="crop-staging-1"
Scenario 1 – SNS to SQS Baseline Consumer
The diagram below illustrates message flow in this scenario.
An image in the original article showed a diagram depicting message flow: Frontend (with Signadot extension enabled, selecting producer sandbox) -> Producer Sandbox (publishes to SNS) -> SNS Topic -> SQS Queue -> Baseline Consumer -> Redis.
In this scenario, a request is sent through the producer sandbox, which publishes the message to an AWS SNS topic. This message is then consumed by the baseline consumer.
An image in the original article showed the AWS SQS demo frontend, demonstrating a message sent via the producer sandbox and consumed by the baseline consumer.
Scenario 2 – SNS to SQS Sandbox Consumer
This diagram illustrates message flow behavior in the second scenario.
An image in the original article showed a diagram depicting message flow: Frontend (with Signadot extension enabled, selecting producer sandbox AND consumer sandbox) -> Producer Sandbox (publishes to SNS) -> SNS Topic -> SQS Queue -> Sandbox Consumer -> Redis.
Here, a request is sent through the producer sandbox, which publishes the message to an AWS SNS topic. This message is then consumed by the sandbox consumer.
An image in the original article showed the AWS SQS demo frontend, demonstrating a message sent via the producer sandbox and consumed by the sandbox consumer.
Summary
This tutorial demonstrated how to effectively use a shared Amazon SQS queue with Signadot Sandboxes for rapid and isolated testing of new message processing logic within a Minikube environment. We covered deploying baseline services, securely routing messages through the shared queue, and leveraging sandboxes to validate changes without impacting the main processing flow. Additionally, we explored the SNS-to-SQS fanout pattern for broadcasting messages to multiple queues in broader testing scenarios.
This approach offers significant advantages for event-driven microservices architectures, enabling faster iteration and more reliable integration testing through:
- Realistic message flow simulation with shared SQS queues.
- Safe isolation for experimental consumers in sandboxes.
- Compatibility with fanout-based testing via SNS-to-SQS.
- Reduced risk when validating new logic alongside live-like traffic.