KubeBlocks supports four distinct Elasticsearch deployment topologies:
| Topology | spec.topology | Components | Use Case |
|---|---|---|---|
| Single Node | single-node | mdit (1 replica) | Development and local testing |
| MDIT | mdit | mdit (N replicas) | Medium workloads — scalable, no dedicated masters |
| Multi-Node | multi-node (default) | master (3) + dit (N) | Production HA — dedicated master + data/ingest/transform nodes |
| Full Separation | m-d-i-t | m + d + i + t | Large scale — each role component sized and scaled independently |
Each topology uses the same KubeBlocks resource hierarchy: Cluster → Component → InstanceSet → Pod × N.
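A topology is selected via spec.topology on the Cluster resource. The manifest below is a minimal single-node sketch, not a verified example — field names follow the KubeBlocks v1 Cluster API, and the cluster name es-cluster plus the resource values are illustrative; check them against the CRDs installed in your environment.

```yaml
# Hypothetical minimal Cluster manifest. Each entry under componentSpecs
# becomes a Component, which the operator reconciles into an InstanceSet
# that manages N pods (Cluster -> Component -> InstanceSet -> Pod x N).
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: es-cluster
spec:
  clusterDef: elasticsearch   # references the Elasticsearch ClusterDefinition
  topology: single-node       # one of: single-node, mdit, multi-node, m-d-i-t
  terminationPolicy: Delete
  componentSpecs:
    - name: mdit              # single component running all four roles
      replicas: 1
      resources:
        limits:
          cpu: "1"
          memory: 2Gi
```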
A single Elasticsearch pod running all roles simultaneously — master, data, ingest, and transform. The entire cluster state, data, and pipeline processing lives on one node. Designed purely for development and testing: there is no failover, no shard replication, and no HA.
Client endpoint: es-cluster-mdit-http:9200

| Component | Replicas | Roles |
|---|---|---|
| mdit | 1 | master · data · ingest · transform |
| Container | Port | Purpose |
|---|---|---|
| elasticsearch | 9200 (HTTP), 9300 (transport) | Elasticsearch engine — all roles |
| es-agent | 8080 | Lifecycle operations and configuration management |
| exporter | 9114 | Prometheus metrics (elasticsearch-exporter) |
| Service | Type | Port | Notes |
|---|---|---|---|
| {cluster}-mdit-http | ClusterIP | 9200 | Client REST API — single pod |
| {cluster}-mdit-agent | ClusterIP | 8080 | es-agent sidecar |
| {cluster}-mdit-headless | Headless | 9200, 9300 | Pod DNS — operator probes |
There is no high availability: if the pod fails, the cluster is unavailable until Kubernetes restarts it. Do not use in production.
Multiple Elasticsearch pods each running all four roles (master, data, ingest, transform). Unlike single-node, you can scale out replicas for higher throughput and capacity. All pods participate in master election — any pod can become master. There are no dedicated master nodes.
Client endpoint: es-cluster-mdit-http:9200 — any pod can serve requests.

| Component | Replicas | Roles |
|---|---|---|
| mdit | N (configurable) | master · data · ingest · transform |
| Container | Port | Purpose |
|---|---|---|
| elasticsearch | 9200, 9300 | Elasticsearch engine — all roles |
| es-agent | 8080 | Lifecycle operations and configuration management |
| exporter | 9114 | Prometheus metrics |
| Service | Type | Port | Notes |
|---|---|---|---|
| {cluster}-mdit-http | ClusterIP | 9200 | Client REST API — load-balanced across all pods |
| {cluster}-mdit-agent | ClusterIP | 8080 | es-agent sidecar |
| {cluster}-mdit-headless | Headless | 9200, 9300 | Per-pod DNS — inter-node transport and operator probes |
If the elected master fails, the surviving pods hold a new election. With N=3 pods, the cluster tolerates one pod failure. Data is stored on each pod's own PVC; shard replication provides data redundancy across pods. For stronger master stability, prefer multi-node.
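Scaling out the mdit topology is just a matter of raising the component's replica count. A hedged sketch, assuming the KubeBlocks v1 Cluster API field names (verify against your installed CRD version):

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: es-cluster
spec:
  clusterDef: elasticsearch
  topology: mdit
  terminationPolicy: Delete
  componentSpecs:
    - name: mdit
      replicas: 3      # all three pods are master-eligible; tolerates 1 failure
      resources:
        limits:
          cpu: "2"
          memory: 4Gi
```

Editing replicas on an existing Cluster scales the component; with all pods master-eligible, keep an odd replica count so elections retain a clear quorum.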
The recommended production topology. Separates cluster management from data operations: three dedicated master nodes handle cluster state and shard allocation, while the combined DIT (data + ingest + transform) nodes handle indexing, search, and pipeline processing. Client traffic goes only to DIT nodes.
Client endpoint: es-cluster-dit-http:9200 — the DIT component handles all client traffic.

| Component | Replicas | Roles | Purpose |
|---|---|---|---|
| master | 3 | master only | Cluster state management, shard allocation, index mappings |
| dit | N (configurable) | data · ingest · transform | Indexing, search, pipeline processing |
Both master and dit pods run the same container set:
| Container | Port | Purpose |
|---|---|---|
| elasticsearch | 9200 (dit only), 9300 | Elasticsearch engine; master pods do not expose :9200 to clients |
| es-agent | 8080 | Lifecycle operations and configuration management |
| exporter | 9114 | Prometheus metrics |
| Service | Type | Port | Notes |
|---|---|---|---|
| {cluster}-dit-http | ClusterIP | 9200 | Client REST API — routes to DIT pods |
| {cluster}-master-headless | Headless | 9300 | Inter-node transport for master component |
| {cluster}-dit-headless | Headless | 9200, 9300 | Per-pod DNS for DIT component |
Master pods are only reachable via the headless service on :9300 — they do not serve client REST traffic.
| Mechanism | Description |
|---|---|
| Master quorum | 3 master-eligible nodes — tolerates 1 failure without losing cluster state |
| Shard replication | Primary and replica shards distributed across DIT pods — promotes replica on data node failure |
| Split-brain prevention | Dedicated master nodes never also hold data — master election cannot be confused by data node failures |
| Rolling upgrades | KubeBlocks upgrades DIT nodes first, then master nodes last (quorum maintained throughout) |
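A multi-node cluster declares two component specs: a fixed three-replica master component and a DIT component sized to the workload. A sketch, assuming the KubeBlocks v1 Cluster API (field names and resource values are illustrative):

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: es-cluster
spec:
  clusterDef: elasticsearch
  topology: multi-node      # the default topology
  terminationPolicy: Delete
  componentSpecs:
    - name: master
      replicas: 3           # fixed quorum of dedicated masters
      resources:
        limits:
          cpu: "1"
          memory: 2Gi
    - name: dit
      replicas: 3           # scale this component for indexing/search capacity
      resources:
        limits:
          cpu: "4"
          memory: 8Gi
```

Only the dit component needs to grow with traffic; the master component stays at three replicas since it manages metadata, not data.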
Fully separated role components: each of the four Elasticsearch roles runs as its own independent KubeBlocks Component with its own replica count, resource limits, and PVC size. Best for large-scale deployments where fine-grained resource tuning and independent scaling per role is critical.
Client endpoints: search → es-cluster-d-http:9200 · ingest → es-cluster-i-http:9200

| Component | Replicas | Role | Primary Resource Driver |
|---|---|---|---|
| m | 3 | master | Low CPU/memory — manages metadata only |
| d | N | data | High storage (large PVCs) + memory for JVM heap |
| i | N | ingest | High CPU for pipeline transforms |
| t | N | transform | Medium CPU for continuous aggregation jobs |
All components include elasticsearch (role-specific config) and es-agent (:8080). Data and ingest pods also include exporter (:9114) for Prometheus metrics.
| Service | Type | Port | Notes |
|---|---|---|---|
| {cluster}-d-http | ClusterIP | 9200 | Search traffic → data nodes |
| {cluster}-i-http | ClusterIP | 9200 | Ingest traffic → ingest nodes (pipeline processing) |
| {cluster}-m-headless | Headless | 9300 | Inter-node transport for master component |
| {cluster}-d-headless | Headless | 9200, 9300 | Per-pod DNS for data component |
| {cluster}-i-headless | Headless | 9200, 9300 | Per-pod DNS for ingest component |
| {cluster}-t-headless | Headless | 9200, 9300 | Per-pod DNS for transform component |
Same mechanisms as multi-node (master quorum + shard replication), with the additional benefit that ingest pipeline failures and transform job failures are fully isolated from search and indexing workloads. Each component can be scaled independently without affecting other roles.
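Per-role sizing is what this topology is for, so each of the four components gets its own replica count and resource profile. A hedged sketch, assuming the KubeBlocks v1 Cluster API — all replica counts and resource numbers below are illustrative, not recommendations:

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: es-cluster
spec:
  clusterDef: elasticsearch
  topology: m-d-i-t
  terminationPolicy: Delete
  componentSpecs:
    - name: m
      replicas: 3                      # fixed quorum; metadata only
      resources:
        limits: {cpu: "1", memory: 2Gi}
    - name: d
      replicas: 4                      # scale for storage and search
      resources:
        limits: {cpu: "4", memory: 16Gi}   # memory sized for JVM heap
    - name: i
      replicas: 2                      # scale for pipeline throughput
      resources:
        limits: {cpu: "8", memory: 8Gi}    # CPU-heavy transforms
    - name: t
      replicas: 2                      # continuous aggregation jobs
      resources:
        limits: {cpu: "2", memory: 4Gi}
```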
All Elasticsearch pods include three init containers on startup:
| Init Container | Purpose |
|---|---|
| prepare-plugins | Stages plugin files from a plugin image into a shared volume |
| install-plugins | Installs plugins and prepares the filesystem layout |
| install-es-agent | Copies the es-agent binary into the container's local bin path |
Each pod mounts its own PVC for the Elasticsearch data directory (/usr/share/elasticsearch/data), providing independent persistent storage per node.
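Per-pod storage is declared through a volumeClaimTemplates entry on the component spec. A hedged fragment, assuming the KubeBlocks v1 schema — the storage class and size are illustrative:

```yaml
componentSpecs:
  - name: dit
    replicas: 3
    volumeClaimTemplates:
      - name: data    # mounted at /usr/share/elasticsearch/data in each pod
        spec:
          accessModes: [ReadWriteOnce]
          storageClassName: gp3        # illustrative storage class
          resources:
            requests:
              storage: 100Gi
```

Because each pod claims its own PVC, a pod can be rescheduled and reattach its volume without touching the data of its peers.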
KubeBlocks automatically provisions the following Elasticsearch accounts. Credentials are stored in Secrets named {cluster}-{component}-account-{name}.
| Account | Role | Purpose |
|---|---|---|
| elastic | Superuser | Built-in Elasticsearch superuser; used for cluster setup, index management, and security configuration |
| kibana_system | Monitor / manage index | Built-in account used by Kibana to communicate with Elasticsearch |
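Client workloads can pull these credentials from the generated Secret rather than hard-coding them. A sketch for a cluster named es-cluster using the mdit component — the Secret name follows the pattern above, but the key name password is an assumption; inspect the actual Secret to confirm its keys:

```yaml
# Hypothetical fragment of a client pod spec consuming the elastic account.
env:
  - name: ES_USERNAME
    value: elastic
  - name: ES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: es-cluster-mdit-account-elastic   # {cluster}-{component}-account-{name}
        key: password                           # assumed key name
```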