Deploy production-grade ClickHouse clusters in minutes. Manage sharding, replication, ClickHouse Keeper HA, and full backup/restore via a single open-source operator.
Deploy ClickHouse in 4 steps
Install KubeBlocks
```bash
# Add Helm repo
helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update

# Install KubeBlocks
helm install kubeblocks kubeblocks/kubeblocks \
  --namespace kb-system --create-namespace
```
Install ClickHouse Addon
```bash
helm upgrade -i kb-addon-clickhouse kubeblocks/clickhouse \
  -n kb-system
```
Create a ClickHouse Cluster
```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: ch-cluster
  namespace: demo
spec:
  clusterDef: clickhouse
  terminationPolicy: Delete
  topology: standalone  # or cluster
  shardings:
    - name: clickhouse
      shards: 1
      template:
        replicas: 1
```
Cluster is Ready
```bash
$ kubectl get cluster ch-cluster -n demo
NAME         CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    AGE
ch-cluster   clickhouse           Delete               Running   3m
```
Trusted by Engineering Teams at Scale
From lightweight standalone analytics to production-grade replicated clusters with built-in Keeper coordination.
Standalone
One or more independent ClickHouse shards without a coordinator. MergeTree tables are fully supported; ReplicatedMergeTree is not available in standalone topology — use the cluster topology with built-in ClickHouse Keeper.
Single or multi-shard deployment (up to 128 shards)
MergeTree, SummingMergeTree, AggregatingMergeTree engines
HTTP (8123) and native TCP (9000) client access
MySQL and PostgreSQL wire protocol compatibility
Prometheus metrics on port 8001
Full and incremental backup via clickhouse-backup
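The features above map directly onto the Cluster spec. A minimal multi-shard standalone sketch, reusing the `clusterDef` and sharding name from the quick-start example (the cluster name `ch-standalone` is illustrative):

```yaml
# Hypothetical example: three independent shards, no Keeper coordination.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: ch-standalone   # illustrative name
  namespace: demo
spec:
  clusterDef: clickhouse
  terminationPolicy: Delete
  topology: standalone
  shardings:
    - name: clickhouse
      shards: 3          # up to 128 independent shards
      template:
        replicas: 1      # ReplicatedMergeTree requires the cluster topology
```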
Client endpoints follow a fixed naming pattern:

- HTTP: `{cluster}-{shardComponentName}:8123`
- Native TCP: `{cluster}-{shardComponentName}:9000`

| Feature | KubeBlocks | Altinity Operator | Bitnami Helm |
|---|---|---|---|
| Kubernetes-native CRD API | ✓ | ✓ | ✗ |
| Standalone topology | ✓ | ✓ | ~ |
| Cluster + ClickHouse Keeper HA | ✓ | ✓ | ✓ |
| Multi-shard deployment (1–128) | ✓ | ✓ | ✓ |
| Horizontal shard scaling | ✓ | ✓ | ~ |
| Replica scaling within shard | ✓ | ✓ | ~ |
| Vertical scaling (CPU/memory) | ✓ | ✓ | ✓ |
| PVC volume expansion | ✓ | ~ | ✗ |
| Full backup & restore | ✓ | ~ | ✗ |
| Incremental backup | ✓ | ~ | ✗ |
| Dynamic parameter reconfiguration | ✓ | ~ | ✗ |
| Rolling version upgrade | ✓ | ✓ | ✓ |
| TLS encryption | ✓ | ✓ | ✓ |
| Prometheus metrics | ✓ | ✓ | ✓ |
| Stop / start cluster | ✓ | ✓ | ~ |
| Open Source | ✓ | ✓ | ✓ |
| Cluster management web UI | Enterprise | ✗ | ✗ |
✓ = Supported · ~ = Partial / Limited · ✗ = Not supported
Enterprise = Available in KubeBlocks Enterprise edition. Comparison based on documentation and GitHub issues; features may vary by version.
No SSH into pods, no shell scripts. Submit an OpsRequest and KubeBlocks handles the rest.
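As a sketch of the pattern, a full backup might be requested like this; the API group, `type` value, and `backupMethod` name are assumptions to verify against your KubeBlocks version and the ClickHouse addon's declared backup methods:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-cluster-backup   # illustrative name
  namespace: demo
spec:
  clusterName: ch-cluster
  type: Backup
  backup:
    backupMethod: full   # method name is defined by the addon; verify before use
```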
Sharding & Availability
Shard Scale-out
Add new shards to an existing ClickHouse cluster. After provisioning, run a post-scale-out OpsRequest to register the new shards in the cluster configuration.
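A shard scale-out request might look like the hedged sketch below; the field name `shards` under `horizontalScaling` is an assumption, so check the OpsRequest schema for your version:

```yaml
# Hypothetical: grow the "clickhouse" sharding from 1 to 3 shards.
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-shard-scale-out
  namespace: demo
spec:
  clusterName: ch-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: clickhouse  # the sharding name from the Cluster spec
      shards: 3                  # target shard count; field name may vary
```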
Replica Scaling
Add or remove replicas within a shard for read throughput or storage redundancy.
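Replica changes use the same OpsRequest type. A sketch assuming a `scaleOut.replicaChanges` field (the exact schema is an assumption; consult the API reference):

```yaml
# Hypothetical: add one replica to each shard of the "clickhouse" sharding.
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-replica-scale-out
  namespace: demo
spec:
  clusterName: ch-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: clickhouse
      scaleOut:
        replicaChanges: 1   # field names assumed; verify against your version
```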
Vertical Scaling
Resize CPU and memory for ClickHouse shards or Keeper nodes via rolling OpsRequest.
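A vertical-scaling sketch, with resource values chosen purely for illustration:

```yaml
# Hypothetical: resize ClickHouse pods to 2 CPU / 8Gi memory.
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-vertical-scale
  namespace: demo
spec:
  clusterName: ch-cluster
  type: VerticalScaling
  verticalScaling:
    - componentName: clickhouse   # use the Keeper component name to resize Keeper
      requests:
        cpu: "2"
        memory: 8Gi
      limits:
        cpu: "2"
        memory: 8Gi
```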
Volume Expansion
Expand PVC storage on any shard without pod restarts on supported storage classes.
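A volume-expansion sketch; the volume claim template name `data` is an assumption to check against the cluster's actual spec:

```yaml
# Hypothetical: expand the data volume to 100Gi.
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-volume-expand
  namespace: demo
spec:
  clusterName: ch-cluster
  type: VolumeExpansion
  volumeExpansion:
    - componentName: clickhouse
      volumeClaimTemplates:
        - name: data        # volume claim template name assumed
          storage: 100Gi
```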
Stop / Start
Suspend the entire cluster (shards + Keeper) to eliminate compute cost; resume with full state.
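Stopping needs no per-component block; a minimal sketch (API version assumed):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-stop
  namespace: demo
spec:
  clusterName: ch-cluster
  type: Stop   # a symmetric OpsRequest with type: Start resumes the cluster
```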
Rolling Restart
Restart pods shard by shard with configurable batch size and health-check gates.
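A rolling-restart sketch scoped to the ClickHouse shards (component name taken from the quick-start Cluster spec; API version assumed):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-restart
  namespace: demo
spec:
  clusterName: ch-cluster
  type: Restart
  restart:
    - componentName: clickhouse   # restarts the shards; omit Keeper components
```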
Configuration, Data & Observability
Full Backup & Restore
Consistent full snapshots via clickhouse-backup, uploaded to S3-compatible object storage.
Incremental Backup
Capture only changed parts since the last backup to minimize storage cost and backup duration.
Parameter Reconfiguration
Update server and user XML configuration via ConfigMap; mutable parameters apply at runtime without restart.
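A reconfiguration sketch; the `reconfigures` block layout and the chosen parameter are assumptions to verify against the OpsRequest schema and the addon's parameter constraints:

```yaml
# Hypothetical: raise max_concurrent_queries at runtime.
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-reconfigure
  namespace: demo
spec:
  clusterName: ch-cluster
  type: Reconfiguring
  reconfigures:
    - componentName: clickhouse
      parameters:
        - key: max_concurrent_queries   # parameter chosen for illustration
          value: "200"
```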
Version Upgrade
Rolling upgrade across supported ClickHouse versions (22.x → 25.x) with health checks.
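An upgrade sketch; the `upgrade.components` layout and the target `serviceVersion` are assumptions, so confirm the schema and the versions your addon actually ships:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed API group/version
kind: OpsRequest
metadata:
  name: ch-upgrade
  namespace: demo
spec:
  clusterName: ch-cluster
  type: Upgrade
  upgrade:
    components:
      - componentName: clickhouse
        serviceVersion: "25.3"   # illustrative target version
```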
TLS Encryption
Enable mutual TLS for client-server and inter-shard communication on any running cluster.
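TLS is typically declared on the Cluster spec rather than via OpsRequest. A minimal sketch of the relevant excerpt, assuming `tls`/`issuer` fields on the sharding template (verify against the Cluster API):

```yaml
# Hypothetical Cluster spec excerpt enabling TLS on the ClickHouse sharding.
spec:
  shardings:
    - name: clickhouse
      template:
        tls: true
        issuer:
          name: KubeBlocks   # assumed: operator-managed self-signed certificates
```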
Prometheus Metrics
Built-in metrics endpoint on port 8001 for ClickHouse shards and Keeper nodes; works with Grafana.
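With the Prometheus Operator installed, scraping could be wired up roughly as below; the pod label and the metrics port name are assumptions to verify on the running pods:

```yaml
# Hypothetical PodMonitor (requires the Prometheus Operator CRDs).
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: ch-cluster-metrics
  namespace: demo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: ch-cluster   # label assumed; check the pods
  podMetricsEndpoints:
    - port: metrics      # name of the 8001 container port; assumed
      path: /metrics
```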
Open source and production-ready. Enterprise customers get dedicated onboarding, migration support, and SLA guarantees.