RabbitMQ Architecture in KubeBlocks

KubeBlocks supports one RabbitMQ deployment topology:

| Topology | Layout | HA Mechanism | Use Case |
| --- | --- | --- | --- |
| clustermode | N broker nodes in a single Erlang cluster | Quorum Queues (Raft consensus); Erlang Distribution Protocol for inter-node communication | General HA; message durability; odd node count (3 or 5) recommended for quorum |

Cluster Mode Architecture (clustermode)

In the clustermode topology, every pod is a full RabbitMQ broker node that participates in the same Erlang cluster. There is a single KubeBlocks Component (rabbitmq) for all nodes, and the entire pod set forms one logical RabbitMQ broker. Quorum Queues use a Raft-based majority protocol to replicate message data across nodes and elect a new queue leader when a node fails.

```text
AMQP clients (AMQP 0-9-1, STOMP / MQTT)
        |
        v
rabbitmq-cluster:5672  (ClusterIP Service)
        |
        v
Raft quorum — 3-node cluster
  rabbit-0 (leader)  :  :5672 :15692, PVC · 20 Gi
  rabbit-1 (follower):  :5672 :15692, PVC · 20 Gi
  rabbit-2 (follower):  :5672 :15692, PVC · 20 Gi

Ports: AMQP 5672 · Management UI 15672 · Prometheus 15692
```

Resource Hierarchy

Cluster  →  Component (rabbitmq)  →  InstanceSet  →  Pod × N
| Resource | Role |
| --- | --- |
| Cluster | User-facing declaration — specifies the broker node count, storage size, and resources |
| Component (rabbitmq) | Generated automatically; references the cmpd-rabbitmq ComponentDefinition; all pods are identical broker nodes |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities |
| Pod | Actual running RabbitMQ broker; each pod gets a unique ordinal and its own PVC |

An odd number of nodes (3 or 5) is recommended so that quorum queues can maintain a write majority after a single-node failure.
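The majority arithmetic behind this recommendation can be sketched in a few lines. This is an illustrative helper, not part of KubeBlocks; the function name is made up for the example.

```python
# Illustrative only: how many broker-node failures a quorum queue can
# survive for a given replica count. A Raft majority needs
# floor(n/2) + 1 voters, so up to (n - 1) // 2 nodes may fail.
def fault_tolerance(replicas: int) -> int:
    return (replicas - 1) // 2

for n in (3, 4, 5):
    print(n, "nodes ->", fault_tolerance(n), "tolerated failure(s)")
```

Note that a 4-node cluster tolerates no more failures than a 3-node one, which is why even node counts waste a replica.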

Containers Inside Each Pod

Each RabbitMQ pod runs one container (no init containers):

| Container | Ports | Purpose |
| --- | --- | --- |
| rabbitmq | 4369 (EPMD), 5672 (AMQP), 15672 (management), 15674 (web-stomp), 25672 (Erlang distribution), 15692 (Prometheus metrics) | RabbitMQ broker — handles message routing, exchanges, queues, Erlang clustering, and built-in Prometheus metrics |

Each pod mounts a single PVC at /var/lib/rabbitmq. RabbitMQ stores the Mnesia database (queue state, message data) under /var/lib/rabbitmq/mnesia and logs under /var/lib/rabbitmq/logs, both on the same PVC.

Erlang Distributed Clustering

RabbitMQ uses the Erlang Distribution Protocol for inter-node communication. Nodes discover each other using the headless service DNS and authenticate with a shared Erlang cookie:

| Mechanism | Description |
| --- | --- |
| Erlang cookie | A shared secret mounted from the rabbitmq-config volume; all broker nodes in the cluster must use the same cookie |
| Node naming | Each broker is identified as rabbit@{pod-name}.{cluster}-rabbitmq-headless.{namespace}.svc.cluster.local |
| EPMD | Erlang Port Mapper Daemon on port 4369 — maps node names to listener ports; required for initial node discovery |
| Cluster join | On startup, each node contacts an existing cluster member via the headless service on port 25672 (Erlang distribution) |
| memberLeave | Before scale-in, KubeBlocks runs the memberLeave lifecycle action (rabbitmq-upgrade drain) on the departing node to safely transfer queue leadership before the pod is terminated |
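The node-naming scheme above is mechanical, so it can be reproduced with a small string helper. This is a hypothetical function for illustration; the cluster, namespace, and pod names are placeholders.

```python
# Hypothetical helper reproducing the Erlang node-naming scheme:
# rabbit@{pod-name}.{cluster}-rabbitmq-headless.{namespace}.svc.cluster.local
def erlang_node_name(pod: str, cluster: str, namespace: str) -> str:
    headless = f"{cluster}-rabbitmq-headless"
    return f"rabbit@{pod}.{headless}.{namespace}.svc.cluster.local"

# Example: pod 0 of a cluster named "mycluster" in namespace "demo"
print(erlang_node_name("mycluster-rabbitmq-0", "mycluster", "demo"))
```

Because the headless service gives each pod a stable DNS name, these node identities survive pod restarts, which is what lets a recovered node rejoin the cluster under its old identity.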

High Availability via Quorum Queues

RabbitMQ provides HA through Quorum Queues, which use the Raft consensus protocol for leader election and log replication:

| Quorum Queue Concept | Description |
| --- | --- |
| Queue leader | The single broker responsible for accepting writes to a queue; elected via Raft |
| Queue follower | Maintains a replicated copy of the queue log on another node; eligible to become leader |
| Write quorum | A majority of queue replicas must acknowledge an enqueue before the broker confirms it to the publisher |
| Leader election | When the leader node fails, the surviving majority holds a Raft election and elects a new leader automatically |
| Quorum tolerance | 3-node cluster tolerates 1 node failure per queue; 5-node cluster tolerates 2 |
| Classic mirrored queues | Legacy HA mechanism; deprecated in RabbitMQ 3.9+ in favor of quorum queues |

Quorum queues protect publisher-confirmed messages from loss when a node fails, provided a majority of queue replicas stays available. Applications should use publisher confirms and appropriate consumer acknowledgements for end-to-end durability guarantees.
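The write-quorum rule above can be modeled as a one-line predicate. This is a toy model, not RabbitMQ code; replica and node names are invented for the example.

```python
# Toy model of the write-quorum rule: a publish is confirmed to the
# publisher only after a majority of the queue's replicas have
# acknowledged the enqueue.
def publish_confirmed(acks: set, replicas: set) -> bool:
    majority = len(replicas) // 2 + 1
    return len(acks & replicas) >= majority

replicas = {"rabbit-0", "rabbit-1", "rabbit-2"}
print(publish_confirmed({"rabbit-0", "rabbit-2"}, replicas))  # 2 of 3 acks
print(publish_confirmed({"rabbit-0"}, replicas))              # 1 of 3 acks
```

This is why a confirmed message survives the loss of the leader: at least one surviving replica already holds it, and that replica can win the next election.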

Traffic Routing

KubeBlocks creates two services for each RabbitMQ cluster:

| Service | Type | Ports | Notes |
| --- | --- | --- | --- |
| {cluster}-rabbitmq | ClusterIP | 5672 (AMQP), 15672 (management), 15674 (web-stomp) | All pods; no roleSelector — any broker node can accept client connections |
| {cluster}-rabbitmq-headless | Headless | — | All pods; used for Erlang inter-node communication (port 25672), EPMD (port 4369), and operator access |

Client applications connect to the AMQP port on the ClusterIP service. Any broker node accepts connections and routes messages to the appropriate queue leader internally. The management UI and REST API listen on port 15672 and Web STOMP on port 15674, both exposed on the same ClusterIP service.

Port 15692 (Prometheus metrics) is exposed on each pod container but is not included in the ClusterIP Service — scrape metrics via pod IP, headless endpoints, or a PodMonitor.

Erlang inter-node traffic flows over the headless service, where each pod is addressable by its stable DNS name:

{pod-name}.{cluster}-rabbitmq-headless.{namespace}.svc.cluster.local:25672
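Enumerating these per-pod endpoints is straightforward given the DNS pattern. The helper below is illustrative only; the cluster name and namespace are placeholders, and it also builds the per-pod Prometheus metrics URL mentioned above.

```python
# Illustrative: list the stable per-pod endpoints implied by the DNS
# pattern above (cluster name "mycluster" and namespace "demo" are
# example values, not real resources).
def pod_endpoints(cluster: str, namespace: str, replicas: int):
    headless = f"{cluster}-rabbitmq-headless.{namespace}.svc.cluster.local"
    for i in range(replicas):
        pod = f"{cluster}-rabbitmq-{i}"
        yield {
            "dist": f"{pod}.{headless}:25672",                    # Erlang distribution
            "metrics": f"http://{pod}.{headless}:15692/metrics",  # Prometheus scrape
        }

for ep in pod_endpoints("mycluster", "demo", 3):
    print(ep["dist"])
```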

Automatic Failover

When a RabbitMQ node fails, the Erlang cluster and Raft protocol respond as follows:

  1. Node becomes unreachable — remaining nodes detect the lost Erlang distribution connection via net_ticktime timeout
  2. Cluster marks node down — the Erlang cluster removes the failed node from its membership list
  3. Quorum queue leader election — for each quorum queue whose leader was on the failed node, Raft elects a new leader from the surviving majority
  4. Message continuity — producers and consumers reconnect to any remaining broker node; the broker routes traffic to the new queue leaders; brief connection-level retries are normal
  5. KubeBlocks pod recovery — KubeBlocks restarts the failed pod; on startup the node re-joins the Erlang cluster via the headless service and syncs queue state from the current leaders before accepting traffic
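Steps 2 and 3 above can be sketched as a tiny simulation. Real Raft elections use terms and votes; picking the lowest surviving ordinal here is purely illustrative, and the node names are examples.

```python
# Simplified sketch of steps 2-3: after a node failure, a new leader is
# chosen only if a majority of the original membership survives.
# (Real Raft uses terms and votes; lowest-ordinal selection is only a
# deterministic stand-in for this illustration.)
def next_leader(members: list, failed: str):
    survivors = [m for m in members if m != failed]
    majority = len(members) // 2 + 1
    if len(survivors) < majority:
        return None  # no quorum: the queue becomes unavailable
    return sorted(survivors)[0]

print(next_leader(["rabbit-0", "rabbit-1", "rabbit-2"], "rabbit-0"))  # rabbit-1
```

With three members, losing one still leaves a two-node majority, so a new leader is elected; losing two would return None, matching the quorum-tolerance table above.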

System Accounts

KubeBlocks automatically manages the following RabbitMQ system account. The password is stored in a Secret named {cluster}-{component}-account-{name}.

| Account | Purpose |
| --- | --- |
| root | Default administrative account — injected as RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS at broker startup; used for initial cluster setup and management operations |
NOTE

RabbitMQ's built-in guest account is disabled for remote connections by default (accessible on localhost only). Use the root account credentials from the {cluster}-rabbitmq-account-root Secret for application and management access.
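The Secret-naming convention above can be captured in a small helper. This is an illustrative sketch with example names; the kubectl command in the comment assumes the Secret stores the password under a `password` key, which may differ by KubeBlocks version.

```python
# Illustrative helper reproducing the Secret naming convention:
# {cluster}-{component}-account-{name}
def account_secret_name(cluster: str, component: str, account: str) -> str:
    return f"{cluster}-{component}-account-{account}"

name = account_secret_name("mycluster", "rabbitmq", "root")
print(name)  # mycluster-rabbitmq-account-root

# Reading the password (assumes a "password" key in the Secret data):
#   kubectl get secret mycluster-rabbitmq-account-root \
#     -o jsonpath='{.data.password}' | base64 -d
```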
