Scaling microservices with message queues to handle data bursts

AWS SQS is a great middleware solution for decoupling message producers from consumers to manage thousands of transactions per second

With over two million users and tens of thousands playing our Panya Studios live trivia mobile app daily, we regularly experience huge bursts of activity within a short span of time. Since we decided to store chat data for analytical purposes, we needed a way to manage this process effectively.

In an earlier blog, we demonstrated how our engineering team stores large volumes of chat data in PostgreSQL through dynamic partitions. In this blog, we’ll cover what happens after the data has been sent from the client but before it reaches the server, and how we use a message queue to scale our Chat MicroService.

Panya’s Chat MicroService Architecture with Amazon’s Simple Queue Service

We could build a large web server and upgrade our database to handle hundreds of thousands of messages per second, but that would be expensive and difficult to maintain. We would have to pre-provision our environment for the maximum expected workload, and it still would not be resilient to failures such as a database outage or throttling.

If few people are playing, we’re overpaying for infrastructure; if there’s a traffic spike, the server could crash and data could be lost.

Instead, we want a highly scalable middleware solution, one that can just as easily handle 1 message per second or 1 million messages per second.

Rather than bombarding our database directly, we would send requests to this middle layer and then have our microservice pull messages from it as fast as possible to insert into our database.

In other words, what we want is a Message Queue.

Message Queue 101

Message queues enable different applications to have an indirect one-way communication with each other. The sender and receiver of the message do not interact with each other directly, nor do they need to interact with the message queue at the same time. To put it simply, the messenger and the recipients are decoupled.

Amazon offers a more formal definition: a message queue is a form of asynchronous service-to-service communication used in serverless and microservices architectures. Messages are stored on the queue until they are processed and deleted. Each message is processed only once, by a single consumer. Message queues can be used to decouple heavyweight processing, to buffer or batch work, and to smooth spiky workloads.

“Message queues enable different applications to have an indirect one-way communication with each other.”

What do you mean by one-way communication?

Good question! It means that information is transferred in one direction only, from producer → consumer. Let’s try to illustrate this by comparing message queues with another popular form of app communication: APIs.

Comparing Message Queues versus APIs

APIs (application programming interfaces) allow different applications to have a two-way communication with each other. In the API world, when you send a request, you expect a response.

APIs are 2-way communication.

On the other hand, in the message queue world, when you send a request, you don’t expect a response. Messages sent are stored in the queue until they are read.

Message Queues are 1-way communication.
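To make the contrast concrete, here’s a minimal sketch in plain Node.js (no AWS involved, and all the names are hypothetical) of the two interaction styles: an API call that hands back a response, and a queue send that returns nothing and simply stores the message.

```javascript
// API style: the caller sends a request and waits for a response.
function apiHandler(request) {
  return { status: 200, body: 'pong: ' + request.body };
}
const response = apiHandler({ body: 'ping' });
console.log(response.status); // 200 — the caller gets something back to act on

// Queue style: the producer drops a message and moves on.
const queue = [];
function sendMessage(message) {
  queue.push(message); // stored until a consumer reads it
  // no response is returned to the producer
}
sendMessage({ body: 'ping' });
console.log(queue.length); // 1 message waiting; the producer never hears back
```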

There are pros and cons to both methods — and we use both depending on the situation — but for our current requirement, using message queues made more sense because:

  1. Decoupled Architecture — The sender’s job is to send messages to the queue and the receiver’s job is to receive messages from the queue. This separation allows components to operate independently while still communicating with each other.
  2. Fault Tolerant — If the consumer server is down, messages remain stored in the queue until the server comes back up and retrieves them.
  3. Scalability — We can push all the messages into the queue and configure the rate at which the consumer server pulls from it. This way, the receiver won’t get overloaded with network requests and can instead maintain a steady stream of data to process.
  4. Serverless — AWS SQS is easy to set up, with no administrative overhead or infrastructure to support.
  5. Single Consumer — We only have one microservice pulling from the queue. Once a message is consumed, it is removed from the queue. This is why we opted for SQS instead of SNS (Simple Notification Service) with Pub/Sub messaging.
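The first three points can be sketched with a toy in-memory queue (an illustration only — SQS itself handles the storage and delivery): the producer keeps pushing even while the consumer is down, and the consumer later drains the backlog at its own pace.

```javascript
// Toy in-memory queue to illustrate decoupling and buffering (not SQS).
const queue = [];

// Producer: fires messages regardless of the consumer's state.
function produce(message) {
  queue.push(message);
}

// Consumer: pulls at its own configured rate.
function consume(batchSize) {
  return queue.splice(0, batchSize); // removed once read (single consumer)
}

// The consumer is "down" while these arrive; nothing is lost.
produce('chat message 1');
produce('chat message 2');
produce('chat message 3');

// The consumer comes back up and drains two messages per pull.
const firstBatch = consume(2);  // ['chat message 1', 'chat message 2']
const secondBatch = consume(2); // ['chat message 3']
console.log(firstBatch, secondBatch, queue.length); // backlog is now empty
```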

Comparing Message Queues versus Pub/Sub

The Pub/Sub (a.k.a. Publisher/Subscriber) pattern is a standard model used in many messaging systems. As you can guess by the name, there are Publishers and Subscribers.

  • Publishers are message senders
  • Subscribers are message receivers

Like message queues, they are loosely coupled through a communication layer. Publishers do not know which subscribers will see the data or when they will see it, and vice versa. The publisher’s job is to send messages to the middleware and the subscriber’s job is to retrieve messages from the middleware.

Many-to-Many vs Many-to-1

While similar to message queues (they are siblings), pub/sub messaging allows multiple consumers to receive each message in a topic.

Message queues, on the other hand, allow only one consumer to receive each message. Technically you can have more than one consumer, but once a message is read, the other consumer(s) will be unable to read it.
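A quick in-memory sketch of the difference in delivery semantics (an illustration, not the SQS/SNS APIs): a queued message is consumed exactly once, while a published message fans out to every subscriber.

```javascript
// Message queue: each message goes to exactly one consumer.
const queue = ['score update'];
function queueReceive() {
  return queue.shift(); // removes the message; later readers get nothing
}
const consumerA = queueReceive(); // 'score update'
const consumerB = queueReceive(); // undefined — already consumed

// Pub/sub: each message is delivered to every subscriber.
const subscribers = [];
function subscribe(handler) { subscribers.push(handler); }
function publish(message) { subscribers.forEach((h) => h(message)); }

const inboxA = [];
const inboxB = [];
subscribe((m) => inboxA.push(m));
subscribe((m) => inboxB.push(m));
publish('score update'); // both inboxes receive a copy
```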

What are the types of queues?

Amazon SQS offers two queue types for different application requirements:

Standard Queue

FIFO (First In First Out) Queue

As you can see, both queues are used to move data. The main differences between them, however, are throughput (messages per second, or MPS) and ordering.

Standard queues have unlimited throughput, while FIFO queues support up to 300 MPS. Moreover, standard queues may deliver messages in a different order from the one in which they were sent, while FIFO queues preserve ordering.

For our requirements, we use standard queues because we can and regularly do get more than 300 MPS.
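For reference, the choice between the two is made at creation time. Below is a sketch of the `createQueue` parameters for each type (the queue names are hypothetical placeholders); note that FIFO queue names must end in `.fifo`, and that SQS attribute values are strings. The actual `sqs.createQueue(...)` call is shown but commented out.

```javascript
// Parameters for a standard queue (the default type).
const standardParams = {
  QueueName: 'chat-messages' // hypothetical name
};

// Parameters for a FIFO queue: the name must end in '.fifo' and the
// FifoQueue attribute must be set at creation time.
const fifoParams = {
  QueueName: 'chat-messages.fifo', // hypothetical name
  Attributes: {
    FifoQueue: 'true',
    ContentBasedDeduplication: 'true' // dedupe using a hash of the body
  }
};

// With the AWS SDK, these would be passed to createQueue, e.g.:
// const AWS = require('aws-sdk');
// const sqs = new AWS.SQS({ apiVersion: '2012-11-05' });
// sqs.createQueue(fifoParams, (err, data) => { /* data.QueueUrl */ });
```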

TLDR; show me the code!

To follow these examples, you would need two Node.js apps and an AWS account.

Step 1: Create Queue
Use Amazon’s UI to create a new message queue.

Step 2: Create a Producer
The following code is adapted from Amazon’s documentation:

var AWS = require('aws-sdk');
AWS.config.update({region: 'REGION'});

// Create an SQS service object
var sqs = new AWS.SQS({apiVersion: '2012-11-05'});

var params = {
  DelaySeconds: 10,
  MessageAttributes: {
    "Medium": {
      DataType: "String",
      StringValue: "ST wuz Here"
    }
  },
  MessageBody: "Hello world!",
  QueueUrl: "SQS_QUEUE_URL"
};

// Send the message
sqs.sendMessage(params, function(err, data) {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data.MessageId);
  }
});

Step 3: Create a Consumer
We use the sqs-consumer library to build SQS-based applications in Node.js without the boilerplate code.

const Consumer = require('sqs-consumer');
const AWS = require('aws-sdk');

AWS.config.update({
  region: 'REGION',
  accessKeyId: '...',
  secretAccessKey: '...'
});

const app = Consumer.create({
  queueUrl: 'SQS_QUEUE_URL',
  handleMessage: (message, done) => {
    // post to db, then acknowledge the message
    done();
  },
  sqs: new AWS.SQS()
});

app.on('error', (err) => console.log(err.message));

app.start();

With a few lines of code, you can set up a running AWS SQS producer that will send messages to a queue, create a consumer that will listen to any messages from the queue, and then do something with the message such as post it to a database.

Message Queues — middleware for the win

Message Queues are a great middleware for handling huge bursts of data. By decoupling the message producers from our message consumer, we are able to scale our microservice, as SQS handles the majority of the workload. Instead of using the conventional API or Pub/Sub model, message queues worked better for us as a form of communication between microservices.

While there are other message queue providers out there like RabbitMQ and Apache Kafka, we chose SQS because we are already using AWS as our cloud provider for many services (EC2, RDS, S3, etc.). Moreover, the fact that it’s serverless made it even easier to quickly set up and start running without worrying about the supporting infrastructure.

Try giving message queues a shot! If you have another way of doing this or you have any problems with examples above, just drop a comment below to let me know.

Thanks for reading — and please follow me here on Medium for more interesting software engineering articles!

Scaling microservices with message queues to handle data bursts was originally published in A Cloud Guru on Medium.