Amazon SQS FIFO: Understanding Message Ordering with Multiple Consumers in .NET
Learn how message ordering works in Amazon SQS FIFO queues, how message groups affect processing order, and what happens when you scale with multiple consumers in .NET applications.
One of the main features of Amazon SQS FIFO queues is message ordering — ensuring messages are processed in the exact order they're sent.
When your business requirements demand strict ordering, FIFO queues provide the guarantee that messages won't be processed out of sequence.
In this post, let's explore how message ordering works in Amazon SQS FIFO queues, how message groups affect the order, and what happens when you scale out with multiple consumers.
Thanks to AWS for sponsoring this article in my .NET on AWS Series.

Standard Queue vs FIFO Queue
Let's start with the fundamental difference between Standard and FIFO queues when it comes to message ordering.
The sample ASP.NET API application used in this demo has two new endpoints:
- `publish-batch-standard` - Publishes to a Standard Queue
- `publish-batch-fifo` - Publishes to a FIFO Queue
Both endpoints loop through and create 10 messages, then use the SendMessageBatchAsync method to publish them to their respective queues.
```csharp
var messages = new List<SendMessageBatchRequestEntry>();
for (int i = 1; i <= 10; i++)
{
    var message = new SendMessageBatchRequestEntry
    {
        Id = i.ToString(),
        MessageBody = JsonSerializer.Serialize(new WeatherForecast
        {
            Date = DateOnly.FromDateTime(DateTime.Now.AddDays(i)),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = $"Weather forecast {i}"
        })
    };
    messages.Add(message);
}

var request = new SendMessageBatchRequest
{
    QueueUrl = queueUrl,
    Entries = messages
};
await sqsClient.SendMessageBatchAsync(request);
```
On the consumer side, we have a console application running with both StandardQueueProcessor and FifoQueueProcessor background services, listening to their respective queues.
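Both processors follow the same long-polling receive loop. Here's a minimal sketch of what such a background service can look like; the class name, queue URL placeholder, and logging are illustrative assumptions, not the exact code from the sample app:

```csharp
// Minimal sketch of a queue processor as a BackgroundService.
// The class name and queue URL are illustrative assumptions.
public class FifoQueueProcessor(IAmazonSQS sqsClient) : BackgroundService
{
    private const string QueueUrl = "<your-fifo-queue-url>";

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var response = await sqsClient.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = QueueUrl,
                MaxNumberOfMessages = 10,
                WaitTimeSeconds = 20 // long polling: wait up to 20s for messages
            }, stoppingToken);

            foreach (var message in response.Messages)
            {
                Console.WriteLine($"Processing message: {message.Body}");
                // Delete only after successful processing
                await sqsClient.DeleteMessageAsync(QueueUrl, message.ReceiptHandle, stoppingToken);
            }
        }
    }
}
```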
Standard Queue Behavior
Let's see what happens when we publish 10 messages to a Standard Queue.
After publishing 10 messages to the Standard Queue, you'll notice the messages are processed out of order. The console shows messages arriving in a sequence like 1, 2, 5, 6, 9, 10, and so on.
This demonstrates that Standard Queues don't guarantee ordering. While messages are published in sequence (1-10), they can be delivered and processed in any order.
FIFO Queue Guarantees Ordering
When your business logic requires strict ordering, FIFO queues come to the rescue.
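As a quick refresher, a FIFO queue is a distinct queue type: its name must end in `.fifo` and it must be created with the `FifoQueue` attribute. A hedged sketch (the queue name is just an example):

```csharp
// Creating a FIFO queue with the AWS SDK for .NET.
// FIFO queue names must end in ".fifo"; the name here is an example.
var createResponse = await sqsClient.CreateQueueAsync(new CreateQueueRequest
{
    QueueName = "weather-forecast.fifo",
    Attributes = new Dictionary<string, string>
    {
        ["FifoQueue"] = "true",
        // Optional: derive the deduplication ID from a hash of the message body
        // instead of supplying MessageDeduplicationId on every send.
        ["ContentBasedDeduplication"] = "true"
    }
});
var queueUrl = createResponse.QueueUrl;
```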
Let's publish 10 messages to the FIFO Queue using the publish-batch-fifo endpoint.
```csharp
for (int i = 1; i <= 10; i++)
{
    var message = new SendMessageBatchRequestEntry
    {
        Id = i.ToString(),
        MessageBody = JsonSerializer.Serialize(weatherForecast),
        MessageGroupId = "message-group-id",
        MessageDeduplicationId = Guid.NewGuid().ToString()
    };
    messages.Add(message);
}
```
After publishing, the console application processes the messages in the exact same order: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
FIFO queues guarantee that messages are delivered and processed in the order they arrive in the queue.
Message Groups and Ordering
Message ordering in FIFO queues is controlled by the MessageGroupId property. This is a crucial concept to understand.
Messages within the same group are always ordered relative to each other.
However, if you have multiple message groups, messages from different groups may be delivered out of order relative to one another. The only guarantee is that messages within each group maintain their order.
Let's see this in action by modifying how we assign the MessageGroupId.
Multiple Message Groups Example
Update the code to create two different message groups based on whether the index is even or odd:
```csharp
for (int i = 1; i <= 10; i++)
{
    var message = new SendMessageBatchRequestEntry
    {
        Id = i.ToString(),
        MessageBody = JsonSerializer.Serialize(weatherForecast),
        MessageGroupId = $"message-group-id-{i % 2}", // Creates groups 0 and 1
        MessageDeduplicationId = Guid.NewGuid().ToString()
    };
    messages.Add(message);
}
```
This creates two groups:
- Group 0: Messages 2, 4, 6, 8, 10 (even numbers)
- Group 1: Messages 1, 3, 5, 7, 9 (odd numbers)
After publishing, you'll see the messages arrive in a pattern like: 1, 3, 5, 7, 9, 2, 4, 6, 8, 10.
All odd-numbered messages (Group 1) were processed first, maintaining their order, followed by all even-numbered messages (Group 0) in order.
This happens because we're fetching 10 messages at once using MaxNumberOfMessages = 10 in the consumer.
Impact of MaxNumberOfMessages
The MaxNumberOfMessages property on the receive request determines how many messages are retrieved in a single call.
Let's see what happens when we reduce this to 2 messages per request.
```csharp
var request = new ReceiveMessageRequest
{
    QueueUrl = queueUrl,
    MaxNumberOfMessages = 2 // Get up to 2 messages per call
};
```
After restarting the consumer and publishing 10 messages with two message groups, you'll see a different pattern.
The consumer might receive:
- First batch: Messages 1, 3 (both from Group 1)
- Second batch: Messages 2, 4 (both from Group 0)
- Third batch: Messages 5, 7 (both from Group 1)
- Fourth batch: Messages 6, 8 (both from Group 0)
- And so on...
The pattern doesn't have to be strictly alternating, but you'll see messages from different groups being delivered at different times. The key point is that within each group, the order is always maintained.
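To make that key point concrete, here is a small stand-alone check (plain C#, no SQS involved): given an observed delivery order and the even/odd group assignment from above, each group's subsequence is still ascending. The `delivered` array is just an example pattern:

```csharp
// Stand-alone illustration: verify that an observed delivery order
// preserves per-group order for the even/odd grouping.
int[] delivered = { 1, 3, 2, 4, 5, 7, 6, 8, 9, 10 }; // example interleaving

foreach (var group in delivered.GroupBy(i => i % 2))
{
    // Within a group, messages must appear in the order they were sent (ascending).
    var inOrder = group.Zip(group.Skip(1), (a, b) => a < b).All(x => x);
    Console.WriteLine($"Group {group.Key}: {(inOrder ? "ordered" : "OUT OF ORDER")}");
}
// prints "ordered" for both groups: 1,3,5,7,9 and 2,4,6,8,10
```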
Scaling with Multiple Consumers
Let's explore what happens when you have multiple consumers processing messages from the same FIFO queue.
This is a common scenario in production systems where you need to scale message processing by running multiple instances of your consumer application.
Setting Up Multiple Consumers
To simulate multiple consumers, open two terminal windows and run the console application in both.
In Windows Terminal, you can press Alt+Shift+D to duplicate the current pane (or open the command palette with Ctrl+Shift+P and search for "Duplicate pane").
```shell
# Terminal 1
.\WeatherForecastConsumer.exe

# Terminal 2
.\WeatherForecastConsumer.exe
```
Both consumers are now listening to the same FIFO queue, ready to process messages.
Multiple Message Groups with Multiple Consumers
With MaxNumberOfMessages = 2 and two message groups (even/odd), let's publish 10 messages.
You'll observe something interesting:
- Consumer 1 picks up: 1, 3, 5, 7, 9 (all odd numbers from Group 1)
- Consumer 2 picks up: 2, 4, 6, 8, 10 (all even numbers from Group 0)
Each consumer processes messages from different groups, and within each consumer, the messages maintain their order.
This demonstrates an important pattern: Different message groups can be processed in parallel by different consumers, while still maintaining order within each group.
This is the key to scaling FIFO queue processing — partition your messages into appropriate groups based on your business logic.
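For instance, if ordering only matters per customer, the group ID can be derived from a business key. The `order` object, its `OrderId`, and `CustomerId` below are hypothetical, purely to illustrate the partitioning idea:

```csharp
// Hypothetical example: order events only need to be ordered per customer,
// so each customer becomes its own message group and can be processed
// in parallel with other customers' groups.
var entry = new SendMessageBatchRequestEntry
{
    Id = order.OrderId,
    MessageBody = JsonSerializer.Serialize(order),
    MessageGroupId = $"customer-{order.CustomerId}", // ordering scope
    MessageDeduplicationId = order.OrderId           // dedupe on the order itself
};
```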
Single Message Group with Multiple Consumers
What happens when all messages belong to the same group but you have multiple consumers?
Let's change the code back to use a single message group:
```csharp
MessageGroupId = "message-group-id", // Same group for all messages
```
To make this more interesting, let's introduce an artificial delay in message processing:
```csharp
private async Task ProcessMessageAsync(SQSMessage message)
{
    Console.WriteLine($"Processing message: {message.Body}");
    await Task.Delay(2000); // Simulate 2 seconds of processing

    // Delete message after processing
    await sqsClient.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
}
```
With both consumers running and an artificial 2-second delay, publish 10 messages.
You'll observe:
- Consumer 1 picks up messages 1, 2
- Consumer 2 sits idle
- After Consumer 1 processes them, it picks up 3, 4
- Consumer 2 continues to sit idle
- This pattern continues...
(One of the consumers will always be idle; it won't necessarily be the same one each time.)
Only one consumer processes messages while the other remains idle.
Why? Because all messages belong to the same group, and SQS must maintain strict ordering within that group. The queue won't release further messages from the group to any consumer until the in-flight messages are successfully deleted from the queue.
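"Successfully deleted" matters here: if a consumer crashes before deleting, the in-flight messages only become visible again after the visibility timeout expires, and the rest of the group stays blocked until then. The timeout can be tuned per receive call; the 30-second value below is just an example:

```csharp
var request = new ReceiveMessageRequest
{
    QueueUrl = queueUrl,
    MaxNumberOfMessages = 2,
    // If these messages aren't deleted within this window, they become
    // visible again, and until then the rest of their message group waits.
    VisibilityTimeout = 30
};
```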
This is a critical consideration for FIFO queue design: If you put all messages in a single group, you lose the ability to process messages in parallel, even with multiple consumers.
The key is to partition your messages based on your business requirements for ordering. Messages that must be processed in order should share the same group ID, while independent workflows can use different groups.