KubeBlocks supports one RabbitMQ deployment topology:
| Topology | Layout | HA Mechanism | Use Case |
|---|---|---|---|
| clustermode | N broker nodes in a single Erlang cluster | Quorum Queues (Raft consensus); Erlang Distribution Protocol for inter-node communication | General HA; message durability; odd node count (3 or 5) recommended for quorum |
In the clustermode topology, every pod is a full RabbitMQ broker node that participates in the same Erlang cluster. There is a single KubeBlocks Component (rabbitmq) for all nodes, and the entire pod set forms one logical RabbitMQ broker. Quorum Queues use a Raft-based majority protocol to replicate message data across nodes and elect a new queue leader when a node fails.
Cluster → Component (rabbitmq) → InstanceSet → Pod × N
| Resource | Role |
|---|---|
| Cluster | User-facing declaration — specifies the broker node count, storage size, and resources |
| Component (rabbitmq) | Generated automatically; references the cmpd-rabbitmq ComponentDefinition; all pods are identical broker nodes |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities |
| Pod | Actual running RabbitMQ broker; each pod gets a unique ordinal and its own PVC |
An odd number of nodes (3 or 5) is recommended so that quorum queues can maintain a write majority after a single-node failure.
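The quorum arithmetic behind this recommendation can be sketched in a few lines of Python (the function names are illustrative, not part of KubeBlocks or RabbitMQ):

```python
def write_majority(replicas: int) -> int:
    """Smallest number of replicas that constitutes a Raft majority."""
    return replicas // 2 + 1

def failures_tolerated(replicas: int) -> int:
    """Node failures a quorum queue can survive while still confirming writes."""
    return replicas - write_majority(replicas)

# Even node counts add cost without adding fault tolerance:
for n in (3, 4, 5):
    print(n, write_majority(n), failures_tolerated(n))
# 3 nodes tolerate 1 failure, 4 nodes still tolerate only 1, 5 tolerate 2
```

This is why a 4-node cluster is rarely worth it: it needs 3 acknowledgements per write yet survives only one failure, the same as a 3-node cluster.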
Each RabbitMQ pod runs one container (no init containers):
| Container | Port | Purpose |
|---|---|---|
| rabbitmq | 4369 (EPMD), 5672 (AMQP), 15672 (management), 15674 (web-stomp), 25672 (Erlang distribution), 15692 (Prometheus metrics) | RabbitMQ broker — handles message routing, exchanges, queues, Erlang clustering, and built-in Prometheus metrics |
Each pod mounts a single PVC at /var/lib/rabbitmq. RabbitMQ stores the Mnesia database (queue state, message data) under /var/lib/rabbitmq/mnesia and logs under /var/lib/rabbitmq/logs, both on the same PVC.
RabbitMQ uses the Erlang Distribution Protocol for inter-node communication. Nodes discover each other using the headless service DNS and authenticate with a shared Erlang cookie:
| Mechanism | Description |
|---|---|
| Erlang cookie | A shared secret mounted from the rabbitmq-config volume; all broker nodes in the cluster must use the same cookie |
| Node naming | Each broker is identified as rabbit@{pod-name}.{cluster}-rabbitmq-headless.{namespace}.svc.cluster.local |
| EPMD | Erlang Port Mapper Daemon on port 4369 — maps node names to listener ports; required for initial node discovery |
| Cluster join | On startup, each node contacts an existing cluster member via the headless service on port 25672 (Erlang distribution) |
| memberLeave | Before scale-in, KubeBlocks runs the memberLeave lifecycle action (rabbitmq-upgrade drain) on the departing node to safely transfer queue leadership before the pod is terminated |
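Under the naming scheme above, a broker's Erlang node name can be derived from its pod name, cluster name, and namespace. A small illustrative helper (not a KubeBlocks API; the pod name used in the example is hypothetical):

```python
def erlang_node_name(pod: str, cluster: str, namespace: str) -> str:
    """Fully qualified rabbit@... name used for Erlang distribution."""
    host = f"{pod}.{cluster}-rabbitmq-headless.{namespace}.svc.cluster.local"
    return f"rabbit@{host}"

print(erlang_node_name("myrmq-rabbitmq-0", "myrmq", "default"))
# rabbit@myrmq-rabbitmq-0.myrmq-rabbitmq-headless.default.svc.cluster.local
```

Because the headless service gives each pod a stable DNS name, this node name survives pod restarts, which is what lets a restarted broker rejoin the Erlang cluster under the same identity.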
RabbitMQ provides HA through Quorum Queues, which use the Raft consensus protocol for leader election and log replication:
| Quorum Queue Concept | Description |
|---|---|
| Queue leader | The single broker responsible for accepting writes to a queue; elected via Raft |
| Queue follower | Maintains a replicated copy of the queue log on another node; eligible to become leader |
| Write quorum | A majority of queue replicas must acknowledge an enqueue before the broker confirms it to the publisher |
| Leader election | When the leader node fails, the surviving majority holds a Raft election and elects a new leader automatically |
| Quorum tolerance | 3-node cluster tolerates 1 node failure per queue; 5-node cluster tolerates 2 |
| Classic mirrored queues | Legacy HA mechanism; deprecated in RabbitMQ 3.9+ in favor of quorum queues |
Quorum queues protect publisher-confirmed messages from loss when a node fails, provided a majority of queue replicas stays available. Applications should use publisher confirms and appropriate consumer acknowledgements for end-to-end durability guarantees.
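The write-quorum rule above can be illustrated with a toy confirmation check: a publish is confirmed only once a majority of the queue's replicas (the leader's own write included) have appended it to their logs. This is a simplified model of Raft log commit, not RabbitMQ's actual implementation:

```python
def publish_confirmed(replica_acks: int, total_replicas: int) -> bool:
    """True once a majority of replicas have appended the message to their log."""
    return replica_acks >= total_replicas // 2 + 1

# 3-replica quorum queue: the leader's write plus one follower ack suffices
assert publish_confirmed(2, 3)      # majority reached, broker confirms
assert not publish_confirmed(1, 3)  # leader alone cannot confirm
assert publish_confirmed(3, 5)      # 3 of 5 is a majority
```

The same majority rule governs both sides: writes need a majority to be confirmed, and elections need a majority to pick a new leader, which is why a confirmed message can never be lost by electing a leader that missed it.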
KubeBlocks creates two services for each RabbitMQ cluster:
| Service | Type | Ports | Notes |
|---|---|---|---|
| {cluster}-rabbitmq | ClusterIP | 5672 (AMQP), 15672 (management), 15674 (web-stomp) | All pods; no roleSelector — any broker node can accept client connections |
| {cluster}-rabbitmq-headless | Headless | — | All pods; used for Erlang inter-node communication (port 25672), EPMD (port 4369), and operator access |
Client applications connect to the AMQP port on the ClusterIP service. Any broker node accepts connections and routes messages to the appropriate queue leader internally. The management UI and REST API are served on port 15672 and Web STOMP on port 15674, both via the same ClusterIP service.
Port 15692 (Prometheus metrics) is exposed on each pod container but is not included in the ClusterIP Service — scrape metrics via pod IP, headless endpoints, or a PodMonitor.
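Without a PodMonitor, a scrape config can target each pod's stable headless DNS name on port 15692. The helper below is illustrative; it assumes pods are named {cluster}-rabbitmq-{ordinal}, following the ordinal naming described earlier:

```python
def metrics_targets(cluster: str, namespace: str, replicas: int) -> list[str]:
    """Prometheus scrape targets, one per broker pod, on the metrics port."""
    return [
        f"{cluster}-rabbitmq-{i}.{cluster}-rabbitmq-headless."
        f"{namespace}.svc.cluster.local:15692"
        for i in range(replicas)
    ]

print(metrics_targets("myrmq", "default", 3))
```

Scraping every pod individually (rather than through the ClusterIP) matters here, because each broker reports only its own node-local metrics.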
Erlang inter-node traffic flows over the headless service, where each pod is addressable by its stable DNS name:
{pod-name}.{cluster}-rabbitmq-headless.{namespace}.svc.cluster.local:25672
When a RabbitMQ node fails, the Erlang cluster detects the loss via the net_ticktime heartbeat timeout, and for each quorum queue whose leader ran on the failed node, the surviving replicas hold a Raft election to choose a new leader. Clients connected to the failed node must reconnect to a surviving broker.

KubeBlocks automatically manages the following RabbitMQ system account. The password is stored in a Secret named {cluster}-{component}-account-{name}.
| Account | Purpose |
|---|---|
| root | Default administrative account — injected as RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS at broker startup; used for initial cluster setup and management operations |
RabbitMQ's built-in guest account is disabled for remote connections by default (accessible on localhost only). Use the root account credentials from the {cluster}-rabbitmq-account-root Secret for application and management access.
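Following the {cluster}-{component}-account-{name} pattern, the root credentials can be derived and decoded from the Secret (Kubernetes Secret .data values are base64-encoded). A sketch with an illustrative sample payload, not real credentials:

```python
import base64

def account_secret_name(cluster: str, component: str, account: str) -> str:
    """Secret name per the {cluster}-{component}-account-{name} pattern."""
    return f"{cluster}-{component}-account-{account}"

# e.g. the Secret holding the root account for a cluster named "myrmq"
print(account_secret_name("myrmq", "rabbitmq", "root"))
# myrmq-rabbitmq-account-root

# Secret .data values arrive base64-encoded; decode before use:
secret_data = {"username": "cm9vdA==", "password": "czNjcjN0"}  # sample payload
creds = {k: base64.b64decode(v).decode() for k, v in secret_data.items()}
print(creds["username"])  # root
```

In practice the payload would come from the Kubernetes API (e.g. `kubectl get secret ... -o json`); the dictionary above merely stands in for that response.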