KubeBlocks supports two distinct MongoDB deployment architectures:
| Architecture | Topology | Use Case |
|---|---|---|
| Replica Set | Primary + secondaries, oplog replication | Single-dataset HA; datasets that fit on one node |
| Sharding | Mongos routers + config servers + data shards | Horizontal scaling; datasets too large for a single replica set; high write throughput |
A MongoDB replica set maintains multiple copies of the same dataset across pods. One pod acts as the primary (accepts all writes); the others are secondaries that replicate the primary's oplog and can serve reads.
The replica set exposes role-based endpoints and follows the standard KubeBlocks resource hierarchy:

- mongo-cluster-mongodb-mongodb:27017 routes to the pod labeled kubeblocks.io/role=primary
- mongo-cluster-mongodb-mongodb-ro:27017 routes to pods labeled kubeblocks.io/role=secondary

Resource hierarchy: Cluster → Component → InstanceSet → Pod × N
| Resource | Role |
|---|---|
| Cluster | User-facing declaration — specifies topology, replica count, storage size, and resources |
| Component | Generated automatically; references a ComponentDefinition describing container specs, lifecycle actions, and services |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities and role awareness |
| Pod | Actual running MongoDB instance; each pod gets a unique ordinal and its own PVC |
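The hierarchy above can be sketched as a small naming helper. This is an illustrative sketch, not KubeBlocks source: the {cluster}-{component}-{ordinal} pod-name pattern and the data-* PVC prefix are assumptions for demonstration.

```python
# Illustrative sketch (assumed naming conventions, not a KubeBlocks API):
# how a Cluster declaration maps down to stable pod identities and per-pod PVCs.

def pod_names(cluster: str, component: str, replicas: int) -> list[str]:
    """Each InstanceSet pod gets a unique ordinal, like a StatefulSet."""
    return [f"{cluster}-{component}-{i}" for i in range(replicas)]

def pvc_names(cluster: str, component: str, replicas: int) -> list[str]:
    """Each pod mounts its own PVC for the MongoDB data directory."""
    return [f"data-{cluster}-{component}-{i}" for i in range(replicas)]

print(pod_names("mongo-cluster", "mongodb", 3))
# ['mongo-cluster-mongodb-0', 'mongo-cluster-mongodb-1', 'mongo-cluster-mongodb-2']
```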
Each replica set pod runs three containers, plus three init containers on startup:

- init-syncer copies /bin/syncer and /bin/syncerctl to /tools
- init-kubectl copies the kubectl binary to dataMountPath/tmp/bin
- init-pbm-agent copies pbm, pbm-agent, and pbm-agent-entrypoint to /tools
| Container | Port | Purpose |
|---|---|---|
| mongodb | 27017, 3601 (HA replication) | MongoDB database engine; participates in replica set replication and election; roleProbe runs /tools/syncerctl getrole inside this container |
| mongodb-backup-agent | — | Percona Backup for MongoDB (PBM) agent; coordinates cluster-wide consistent backups |
| exporter | 9216 | Prometheus metrics exporter |
Each pod mounts its own PVC for the MongoDB data directory (default /data/mongodb, set by dataMountPath in chart values).
MongoDB replica sets use oplog-based replication and a majority-vote (Raft-like) election protocol:
| Concept | Description |
|---|---|
| Primary | Receives all write operations; records changes to the oplog |
| Secondary | Replicates the primary's oplog; can serve reads when readPreference is configured |
| Election | When the primary fails, secondaries vote; the candidate with the most up-to-date oplog and a majority of votes wins |
| Write concern | w:majority ensures a write is durable on a quorum before acknowledging |
A 3-member replica set tolerates 1 failure.
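The majority-quorum arithmetic behind that claim is easy to make explicit. A minimal sketch (nothing KubeBlocks-specific, just the vote math used by elections and w:majority):

```python
# Majority-quorum arithmetic behind "a 3-member replica set tolerates 1 failure".

def majority(members: int) -> int:
    """Votes needed to win an election or satisfy w:majority."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """Members that can fail while a majority can still be formed."""
    return members - majority(members)

assert tolerated_failures(3) == 1   # 3-member set survives 1 failure
assert tolerated_failures(5) == 2   # 5-member set survives 2
assert tolerated_failures(4) == 1   # even sizes add no extra tolerance
```

This is why replica sets are normally deployed with an odd member count: going from 3 to 4 members adds cost without adding failure tolerance.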
1. The primary becomes unreachable; after the election timeout (electionTimeoutMillis), one secondary calls for an election.
2. syncerctl getrole returns primary for the new pod → the kubeblocks.io/role=primary label is applied.
3. The {cluster}-mongodb-mongodb ClusterIP service automatically routes writes to the new primary.

Failover typically completes within 10–30 seconds.
| Service | Type | Port | Selector |
|---|---|---|---|
| {cluster}-mongodb-mongodb | ClusterIP | 27017 | kubeblocks.io/role=primary |
| {cluster}-mongodb-mongodb-ro | ClusterIP | 27017 | kubeblocks.io/role=secondary |
| {cluster}-mongodb | ClusterIP | 27017 | all pods (no roleSelector) |
| {cluster}-mongodb-headless | Headless | 27017 | all pods |
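The role-based services in the table can be turned into connection strings mechanically. A minimal sketch, assuming in-cluster DNS and the service-name suffixes shown above (the namespace and .svc.cluster.local suffix are illustrative assumptions):

```python
# Illustrative helper (assumed naming, not a KubeBlocks API): building
# MongoDB URIs against the role-based services.

SERVICE_SUFFIX = {
    "write": "-mongodb-mongodb",     # roleSelector: primary
    "read":  "-mongodb-mongodb-ro",  # roleSelector: secondary
    "any":   "-mongodb",             # all pods, no roleSelector
}

def service_uri(cluster: str, kind: str, namespace: str = "default") -> str:
    """Compose an in-cluster MongoDB URI for the write/read/any service."""
    host = f"{cluster}{SERVICE_SUFFIX[kind]}.{namespace}.svc.cluster.local"
    return f"mongodb://{host}:27017/"

print(service_uri("mongo-cluster", "write"))
# mongodb://mongo-cluster-mongodb-mongodb.default.svc.cluster.local:27017/
```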
- {cluster}-mongodb-mongodb:27017 (roleSelector: primary) always routes to the current primary
- {cluster}-mongodb-mongodb-ro:27017 (roleSelector: secondary) routes to secondaries
- {cluster}-mongodb (two-segment name) routes to all pods; it is not a write-only endpoint

MongoDB Sharding distributes data across multiple independent replica sets (shards) using a shard key. A layer of stateless mongos routers sits in front, and a config server replica set (CSRS) stores the chunk routing metadata.
- {cluster}-mongos-mongos-0:27017, {cluster}-mongos-mongos-1:27017, … (one ClusterIP per pod, podService: true)
- {cluster}-mongos-headless for DNS-based discovery
- Mongos routes each query to the correct shard

The sharding topology uses both Component (for mongos and config-server) and Sharding (for data shards):
Cluster → Component (mongos) → InstanceSet → Pod × N
→ Component (config-server) → InstanceSet → Pod × 3
→ Sharding (shard) → Shard × N → InstanceSet → Pod × replicas
| Resource | Role |
|---|---|
| Cluster | Specifies the sharding topology; declares mongos, config-server, and shard specs |
| Component (mongos) | Stateless query routers; requires config-server to be reachable before routing |
| Component (config-server) | 3-node replica set (CSRS) storing chunk map and shard membership |
| Sharding | KubeBlocks sharding spec; manages N identical shard Components |
| Shard | An independent replica set owning a range of chunks; each shard fails over independently |
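To make "owning a range of chunks" concrete, here is a deliberately simplified routing sketch. This is NOT MongoDB's actual algorithm: real clusters map shard-key values to chunk ranges stored in the config server, and the balancer migrates chunks between shards; the hash-modulo scheme below only illustrates that a shard key deterministically places each document on one shard.

```python
# Simplified illustration of shard-key routing (not MongoDB's real chunk
# mechanism): deterministically map a shard-key value to one of N shards.
import hashlib

def pick_shard(shard_key_value: str, num_shards: int) -> int:
    """Hash the shard-key value and assign it to a shard by modulo."""
    digest = hashlib.md5(shard_key_value.encode()).hexdigest()
    return int(digest, 16) % num_shards

# The same key always routes to the same shard:
assert pick_shard("user-42", 3) == pick_shard("user-42", 3)
```

A well-chosen shard key spreads such assignments evenly; a poor one concentrates writes on a single shard, defeating the point of horizontal scaling.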
Mongos pods are stateless (no PVC). Each pod also runs an init-kubectl init container that copies the kubectl binary into the container's tools path for use by lifecycle scripts:
| Container | Port | Purpose |
|---|---|---|
mongos | 27017 | MongoDB query router — reads chunk map from CSRS and forwards queries to the correct shard |
exporter | 9216 | Prometheus metrics exporter |
Config server pods (3-node CSRS replica set; same three init containers as replica set pods — init-syncer, init-kubectl, init-pbm-agent):
| Container | Port | Purpose |
|---|---|---|
| mongodb | 27017, 3601 (HA replication) | Config server mongod — stores chunk routing metadata; must use w:majority for all config writes; roleProbe runs /tools/syncerctl getrole |
| mongodb-backup-agent | — | Percona Backup for MongoDB (PBM) agent |
| exporter | 9216 | Prometheus metrics exporter |
Shard pods (each shard = independent replica set; same three init containers — init-syncer, init-kubectl, init-pbm-agent):
| Container | Port | Purpose |
|---|---|---|
| mongodb | 27017, 3601 (HA replication) | Data shard mongod — stores documents assigned to this shard's chunk range; roleProbe runs /tools/syncerctl getrole |
| mongodb-backup-agent | — | Percona Backup for MongoDB (PBM) agent |
| exporter | 9216 | Prometheus metrics exporter |
Each shard pod mounts its own PVC for its data directory. At least 3 shards are recommended for balanced distribution (not enforced by the addon).
| Service | Type | Port | Notes |
|---|---|---|---|
| {cluster}-mongos-mongos-<ordinal> | ClusterIP (per-pod) | 27017 | One service per mongos pod (podService: true); use all pod addresses as the URI seed list |
| {cluster}-mongos-headless | Headless | 27017 | DNS-based discovery of all mongos pods |
| {cluster}-mongos-internal | ClusterIP | 27018 | Intra-cluster use only; not for application traffic |
Clients connect through mongos using a MongoDB URI seed list, e.g.:
mongodb://{cluster}-mongos-mongos-0:27017,{cluster}-mongos-mongos-1:27017/
Or use {cluster}-mongos-headless for DNS-based discovery. Direct shard or config server access is not intended for application traffic.
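Building the seed list from the per-pod services is purely mechanical. A sketch, assuming the {cluster}-mongos-mongos-<ordinal> service names from the table above:

```python
# Illustrative helper (assumed per-pod service naming): compose a mongos
# seed-list URI for a cluster with N router pods.

def mongos_seed_uri(cluster: str, replicas: int, db: str = "") -> str:
    """Join every per-pod mongos service into one MongoDB URI seed list."""
    hosts = ",".join(f"{cluster}-mongos-mongos-{i}:27017" for i in range(replicas))
    return f"mongodb://{hosts}/{db}"

print(mongos_seed_uri("mycluster", 2))
# mongodb://mycluster-mongos-mongos-0:27017,mycluster-mongos-mongos-1:27017/
```

Listing every router in the seed list lets the driver fail over between mongos pods without any single load-balancing endpoint.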
Each component fails over independently: mongos pods are stateless and simply restart, while the config server and each shard run their own replica set elections.
KubeBlocks automatically manages the following MongoDB system accounts. Passwords are stored in Secrets named {cluster}-{component}-account-{name}.
| Account | Role | Purpose |
|---|---|---|
| root | Superuser | Default administrative account used for cluster initialization and management |
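The stated Secret naming pattern can be expressed as a one-line helper (illustrative only; the pattern itself is from the text above):

```python
# Compose the Secret name for a managed system account,
# following the {cluster}-{component}-account-{name} pattern.

def account_secret_name(cluster: str, component: str, account: str) -> str:
    return f"{cluster}-{component}-account-{account}"

print(account_secret_name("mongo-cluster", "mongodb", "root"))
# mongo-cluster-mongodb-account-root
```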