
Kafka vs RabbitMQ

A comparison of two widely-used messaging systems with fundamentally different architectures and strengths.

Architecture Overview

Apache Kafka

Kafka is a distributed event streaming platform built around an append-only log. Messages are written to partitioned topics and retained for a configurable duration, allowing consumers to replay events.

Producer → Topic (Partition 0) → Consumer Group A
                                → Consumer Group B
         → Topic (Partition 1) → Consumer Group A
                                → Consumer Group B
         → Topic (Partition 2) → Consumer Group A
                                → Consumer Group B
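Kafka's per-partition ordering guarantee comes from key-based partitioning: the producer's default partitioner hashes the record key (murmur2 in the real Java client), so every message with the same key lands in the same partition. A toy sketch of that invariant, with a stand-in hash function:

```javascript
// Simplified model of Kafka's key → partition mapping. The real client
// uses murmur2; this toy hash only illustrates the invariant:
// same key ⇒ same partition ⇒ messages for that key stay ordered.
function toyHash(key) {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) | 0;  // 32-bit rolling hash
  return Math.abs(h);
}

function choosePartition(key, numPartitions) {
  return toyHash(key) % numPartitions;
}

// All events for one order hash to the same partition:
const p1 = choosePartition('order-123', 3);
const p2 = choosePartition('order-123', 3);
// p1 === p2 always, so events for 'order-123' are totally ordered
```

Messages with different keys (or no key) spread across partitions, which is what lets throughput scale with partition count.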

RabbitMQ

RabbitMQ is a traditional message broker implementing the AMQP protocol. Messages are routed from exchanges to queues through bindings, and are removed from the queue once a consumer acknowledges them.

Producer → Exchange ─── Binding ──→ Queue 1 → Consumer A
                    └── Binding ──→ Queue 2 → Consumer B
                    └── Binding ──→ Queue 3 → Consumer C

Core Comparison

| Feature | Kafka | RabbitMQ |
|---|---|---|
| Model | Distributed log / event streaming | Message broker / task queue |
| Message retention | Configurable (hours to forever) | Until consumed and acknowledged |
| Ordering | Guaranteed per partition | Per queue (with a single consumer) |
| Replay | Consumers can re-read past messages | Not possible (message deleted after ack) |
| Protocol | Custom binary protocol | AMQP 0-9-1, STOMP, MQTT |
| Throughput | Millions of messages/sec | ~50K messages/sec per queue |
| Latency | Low (ms), optimized for throughput | Very low (sub-ms for small payloads) |
| Routing | Topic-based (limited) | Flexible (direct, topic, fanout, headers) |
| Consumer model | Pull-based (consumers poll) | Push-based (broker delivers) |
| Scaling | Horizontal (add partitions/brokers) | Vertical + clustering |
| Storage | Disk-based (sequential I/O) | Memory with disk overflow |
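The retention and replay rows are the crux of the comparison. A minimal in-memory model (an illustration, not either system's API) shows why replay is trivial for a log and impossible for a delete-on-ack queue:

```javascript
// Kafka-style log: every message is kept; each consumer group only tracks
// an offset into the shared log, so "replay" is just moving that offset.
class Log {
  constructor() { this.entries = []; this.offsets = new Map(); }
  append(msg) { this.entries.push(msg); }
  poll(group) {
    const off = this.offsets.get(group) ?? 0;
    this.offsets.set(group, this.entries.length);
    return this.entries.slice(off);
  }
  seek(group, offset) { this.offsets.set(group, offset); }  // rewind = replay
}

const log = new Log();
log.append('order-1'); log.append('order-2');
const first = log.poll('analytics');     // ['order-1', 'order-2']
const caughtUp = log.poll('analytics');  // [] — group is caught up
log.seek('analytics', 0);                // rewind the group's offset
const replayed = log.poll('analytics');  // ['order-1', 'order-2'] again
```

A RabbitMQ queue, by contrast, deletes each message on ack, so there is nothing left for a second reader; fan-out there is achieved by binding multiple queues to one exchange instead.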

Kafka Configuration

Producer

Properties props = new Properties();
props.put("bootstrap.servers", "kafka-1:9092,kafka-2:9092,kafka-3:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("acks", "all");                    // Wait for all replicas
props.put("retries", 3);                     // Retry on failure
props.put("enable.idempotence", true);       // Idempotent writes (no duplicates on retry)
props.put("compression.type", "lz4");        // Compress messages
props.put("batch.size", 16384);              // Batch size in bytes
props.put("linger.ms", 10);                  // Wait for batch to fill

KafkaProducer<String, String> producer = new KafkaProducer<>(props);

producer.send(new ProducerRecord<>("orders", orderId, orderJson), (metadata, exception) -> {
    if (exception != null) {
        log.error("Failed to send message", exception);
    } else {
        log.info("Sent to partition {} offset {}", metadata.partition(), metadata.offset());
    }
});

Consumer

Properties props = new Properties();
props.put("bootstrap.servers", "kafka-1:9092,kafka-2:9092");
props.put("group.id", "order-processor");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("auto.offset.reset", "earliest");  // Start from beginning if no committed offset
props.put("enable.auto.commit", false);       // Manual commit for reliability
props.put("max.poll.records", 500);

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(List.of("orders"));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        processOrder(record.value());
    }
    consumer.commitSync();
}
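Committing only after processOrder() succeeds gives at-least-once delivery: a crash between processing and commitSync() re-delivers the batch on restart. Handlers therefore need to tolerate duplicates. A minimal sketch of idempotent handling keyed on a unique order id (the in-memory Set is a stand-in; a real service would use a database or cache):

```javascript
// Deduplicate re-deliveries by remembering which ids were already handled.
const seen = new Set();

function processOnce(orderId, handler) {
  if (seen.has(orderId)) return false;  // duplicate delivery — skip
  seen.add(orderId);
  handler(orderId);
  return true;
}

const first = processOnce('order-1', () => {});  // true  — processed
const dup = processOnce('order-1', () => {});    // false — duplicate skipped
```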

Topic Configuration

# Create topic with 12 partitions, replication factor 3,
# and 7-day retention (retention.ms=604800000)
kafka-topics.sh --create \
  --bootstrap-server kafka-1:9092 \
  --topic orders \
  --partitions 12 \
  --replication-factor 3 \
  --config retention.ms=604800000 \
  --config cleanup.policy=delete \
  --config min.insync.replicas=2
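The retention.ms value is in milliseconds; the figure above is simply seven days:

```javascript
// 7 days expressed in milliseconds — the retention.ms value used above
const retentionMs = 7 * 24 * 60 * 60 * 1000;
// retentionMs === 604800000
```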

RabbitMQ Configuration

Producer (Node.js)

const amqp = require('amqplib');
const { randomUUID } = require('crypto');

async function publish() {
  const conn = await amqp.connect('amqp://user:pass@rabbitmq:5672');
  const channel = await conn.createChannel();

  // Declare exchange (idempotent: created only if it does not exist)
  await channel.assertExchange('orders', 'topic', { durable: true });

  // Publish with routing key
  channel.publish('orders', 'order.created', Buffer.from(JSON.stringify({
    orderId: '123',
    amount: 99.99,
  })), {
    persistent: true,           // Survive broker restart
    messageId: randomUUID(),
    contentType: 'application/json',
    headers: { 'x-retry-count': 0 },
  });

  await channel.close();
  await conn.close();
}

Consumer (Node.js)

async function consume() {
  const conn = await amqp.connect('amqp://user:pass@rabbitmq:5672');
  const channel = await conn.createChannel();

  // Declare queue
  await channel.assertQueue('order-processing', {
    durable: true,
    arguments: {
      'x-dead-letter-exchange': 'orders-dlx',
      'x-dead-letter-routing-key': 'order.failed',
      'x-message-ttl': 300000,  // 5 min TTL
    },
  });

  // Bind queue to exchange
  await channel.bindQueue('order-processing', 'orders', 'order.created');

  // Allow at most 10 unacknowledged messages per consumer
  await channel.prefetch(10);

  // Consume
  channel.consume('order-processing', async (msg) => {
    if (msg === null) return;  // consumer was cancelled by the broker
    try {
      const order = JSON.parse(msg.content.toString());
      await processOrder(order);
      channel.ack(msg);
    } catch (err) {
      // Reject without requeue → routed to the dead-letter exchange
      channel.nack(msg, false, false);
    }
  });
}
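The producer stamps an x-retry-count header that the consumer above never reads. A common refinement (a sketch of a pattern, not an amqplib feature) is to republish a failed message with an incremented count and only dead-letter it once a retry budget is exhausted:

```javascript
// Decide what to do with a failed message based on its retry header.
function nextAction(headers, maxRetries = 3) {
  const retries = (headers && headers['x-retry-count']) || 0;
  if (retries < maxRetries) {
    // republish to the original exchange with the counter bumped
    return { action: 'republish', headers: { ...headers, 'x-retry-count': retries + 1 } };
  }
  return { action: 'dead-letter' };  // out of retries
}

const retry = nextAction({ 'x-retry-count': 0 });   // republish, count → 1
const giveUp = nextAction({ 'x-retry-count': 3 });  // dead-letter
```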

Exchange Types

| Exchange Type | Routing Behavior | Use Case |
|---|---|---|
| Direct | Exact routing key match | Task queues, point-to-point |
| Topic | Pattern matching (`order.*`, `#.error`) | Selective event routing |
| Fanout | Broadcast to all bound queues | Notifications, cache invalidation |
| Headers | Match on message headers | Complex routing logic |
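Topic-exchange patterns treat the routing key as dot-separated words: `*` matches exactly one word, `#` matches zero or more. A self-contained matcher sketch (RabbitMQ evaluates this inside the broker; this is only a model of the rules):

```javascript
// Match a topic binding pattern against a routing key.
// '*' = exactly one word, '#' = zero or more words.
function topicMatches(pattern, routingKey) {
  const p = pattern.split('.');
  const k = routingKey.split('.');
  function match(i, j) {
    if (i === p.length) return j === k.length;
    if (p[i] === '#') {
      // '#' consumes zero words ... or one more word
      return match(i + 1, j) || (j < k.length && match(i, j + 1));
    }
    return j < k.length && (p[i] === '*' || p[i] === k[j]) && match(i + 1, j + 1);
  }
  return match(0, 0);
}

const a = topicMatches('order.*', 'order.created');     // true
const b = topicMatches('order.*', 'order.created.eu');  // false — '*' is one word
const c = topicMatches('order.#', 'order.created.eu');  // true
const d = topicMatches('#.error', 'payment.error');     // true
```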

Docker Compose Setup

version: '3.8'

services:
  # Kafka with KRaft (no ZooKeeper)
  kafka:
    image: confluentinc/cp-kafka:7.6.0
    ports:
      - "9092:9092"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:29093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:29093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      CLUSTER_ID: "MkU3OEVBNTcwNTJENDM2Qg"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - kafka-data:/var/lib/kafka/data

  # RabbitMQ
  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"  # Management UI
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: password
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq

volumes:
  kafka-data:
  rabbitmq-data:

Performance Comparison

| Metric | Kafka | RabbitMQ |
|---|---|---|
| Throughput (single node) | 100K-1M msg/s | 20K-50K msg/s |
| Latency (p50) | 2-5 ms | < 1 ms |
| Latency (p99) | 10-50 ms | 5-20 ms |
| Max message size | 1 MB default (configurable) | 128 MB default limit (configurable) |
| Storage efficiency | High (sequential disk I/O) | Moderate (memory + disk) |
| Consumer lag handling | Excellent (log-based replay) | Poor (messages deleted on ack) |

Use Cases

Choose Kafka When

  • Event sourcing and CQRS architectures
  • Log aggregation from many services into ELK / data lake
  • Stream processing with Kafka Streams or ksqlDB
  • High-throughput requirements (100K+ events/sec)
  • Consumers need to replay historical messages
  • Building a data pipeline to analytics / data warehouse
  • Event-driven microservices with multiple consumer groups

Choose RabbitMQ When

  • Task queues with complex routing requirements
  • Request-reply (RPC) patterns between services
  • Priority queues (RabbitMQ has native support)
  • Need flexible message routing (topic, headers, fanout)
  • Low-latency delivery is more important than throughput
  • Smaller teams wanting simpler operations
  • Legacy system integration via AMQP/STOMP/MQTT protocols

Summary

| Criteria | Winner |
|---|---|
| Raw throughput | Kafka |
| Latency | RabbitMQ |
| Message replay | Kafka |
| Routing flexibility | RabbitMQ |
| Operational simplicity | RabbitMQ |
| Horizontal scaling | Kafka |
| Stream processing | Kafka |
| Priority queues | RabbitMQ |
| Protocol support | RabbitMQ |
| Ecosystem (data pipelines) | Kafka |
