The Redis ClusterDefinition exposes three patterns. They differ by topology name and whether Redis Sentinel is deployed:
| Pattern | Topology name | Components | HA / failover |
|---|---|---|---|
| Standalone | standalone | redis only | No Sentinel; no automatic primary failover |
| Sentinel (primary + replicas) | replication, replication-twemproxy | redis + redis-sentinel | Sentinel quorum monitors the primary and drives failover |
| Redis Cluster (sharded) | cluster | Sharding (shard) | Gossip on the cluster bus; no Sentinel processes |
Use topology: standalone when you want a single Redis Component and no Sentinel provisioned.

Service: redis-standalone-redis-redis:6379 (for a cluster named redis-standalone)

Cluster → Component (redis) → InstanceSet → Pod × N

There is no redis-sentinel Component and no Sentinel quorum — the monitoring and failover flow in the Sentinel architecture section does not apply. If you need automatic failover, use replication (or replication-twemproxy with Twemproxy), not standalone.

Everything in this section applies to replication and replication-twemproxy only. It does not apply to standalone (no Sentinel) or cluster (Redis Cluster / gossip, no Sentinel).
Redis Sentinel uses a dedicated set of Sentinel processes to monitor the Redis primary, detect failures, and coordinate automatic failover. All data lives on a single primary; replicas serve as hot standbys and optional read targets.
KubeBlocks models a Redis Sentinel deployment as a hierarchy of Kubernetes custom resources:
Cluster → Component (redis) → InstanceSet → Pod × N
→ Component (redis-sentinel) → InstanceSet → Pod × 3
| Resource | Role |
|---|---|
| Cluster | User-facing declaration — specifies topology, replica count, storage size, and resources |
| Component | Generated automatically; one Component for Redis data pods and one for Sentinel pods |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities and role awareness |
| Pod | Actual running Redis or Sentinel process; data pods each get their own PVC |
At least 3 Sentinel pods are required to form a voting quorum. The ClusterDefinition provisions redis-sentinel before redis (sentinel must be ready before data pods register with it).
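As a sketch, a Cluster manifest that produces this hierarchy might look like the following. The field shapes follow the KubeBlocks apps.kubeblocks.io/v1 Cluster API as we understand it; the cluster name, namespace, resource sizes, and storage size are illustrative, so verify field names against the API version installed in your environment:

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: redis-cluster          # illustrative name
  namespace: demo              # illustrative namespace
spec:
  clusterDef: redis
  topology: replication        # Sentinel pattern: redis + redis-sentinel
  terminationPolicy: Delete
  componentSpecs:
    - name: redis              # data Component: 1 primary + 1 replica
      replicas: 2
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 20Gi
    - name: redis-sentinel     # 3 Sentinels form the voting quorum
      replicas: 3
```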
Redis Data Pods
Each Redis data pod runs two main containers (plus an init-dbctl init container that copies dbctl into /tools/ for use by the roleProbe):
| Container | Port | Purpose |
|---|---|---|
| redis | 6379 | Redis data server; the primary accepts writes and replicates to replicas via asynchronous replication; roleProbe runs /tools/dbctl redis getrole inside this container |
| metrics | 9121 | Prometheus metrics exporter (redis_exporter) |
Each data pod mounts its own PVC for the Redis data directory, preserving RDB/AOF files across restarts.
Redis Sentinel Pods
| Container | Port | Purpose |
|---|---|---|
sentinel | 26379 | Redis Sentinel process — monitors the primary, detects failures, coordinates failover |
Each Sentinel pod mounts its own PVC (needSnapshot: true) for the Sentinel data directory, preserving Sentinel configuration across restarts.
| Sentinel Concept | Description |
|---|---|
| Monitoring | Each Sentinel continuously pings the Redis primary and all replicas |
| Subjective down (SDOWN) | A Sentinel marks a node as subjectively down when it fails to respond within down-after-milliseconds |
| Objective down (ODOWN) | A primary is declared objectively down when a quorum of Sentinels agree it is unreachable |
| Leader election | Sentinels elect one of themselves as the failover coordinator |
| Failover | The elected Sentinel promotes the most up-to-date replica and reconfigures other replicas |
| Quorum | (N/2 + 1) Sentinels must agree; 3 Sentinels require 2 to agree |
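The quorum arithmetic in the last row can be made concrete with a small sketch (the function name is ours, not a KubeBlocks or Redis API):

```python
def sentinel_majority(n_sentinels: int) -> int:
    """Minimum number of Sentinels that must agree: floor(N/2) + 1."""
    return n_sentinels // 2 + 1

# 3 Sentinels require 2 to agree; 5 require 3.
print(sentinel_majority(3))  # 2
print(sentinel_majority(5))  # 3
```

This is why an even Sentinel count buys nothing: 4 Sentinels still need 3 to agree, tolerating the same single failure as 3 Sentinels.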
| Service | Type | Port | Selector |
|---|---|---|---|
| {cluster}-redis-redis | ClusterIP | 6379 | kubeblocks.io/role=primary |
| {cluster}-redis-sentinel-redis-sentinel | ClusterIP | 26379 | all Sentinel pods |
Clients connect to {cluster}-redis-redis:6379 (always routes to the current primary); Sentinels are reachable at {cluster}-redis-sentinel-redis-sentinel:26379.

A failover proceeds as follows:

1. Sentinels detect that the primary has stopped responding within down-after-milliseconds (default 20 s) and mark it SDOWN, then ODOWN once the quorum agrees.
2. The Sentinels elect a leader, which sends REPLICAOF NO ONE to the chosen replica to promote it.
3. The remaining replicas are reconfigured to replicate from the new primary.
4. The label kubeblocks.io/role=primary is applied to the new primary pod, so the primary Service re-routes traffic.

Total failover time is typically 10–30 seconds.
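For reference, the Sentinel behaviors described here map to standard redis-sentinel configuration directives. The master name mymaster and the quorum value are illustrative; KubeBlocks renders the actual configuration from its own templates:

```
sentinel monitor mymaster <primary-host> 6379 2   # quorum of 2 (out of 3 Sentinels)
sentinel down-after-milliseconds mymaster 20000   # SDOWN after 20 s without a reply
sentinel failover-timeout mymaster 180000         # abort/retry window for a failover
sentinel parallel-syncs mymaster 1                # replicas resynced at a time after failover
```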
Redis Cluster (also called sharding mode) distributes data across multiple independent shards using hash-slot-based partitioning. Each shard is an independent KubeBlocks Sharding component with its own primary and replicas. There are no Sentinel processes — nodes coordinate through a gossip protocol on the cluster bus.
Redis Cluster uses the KubeBlocks Sharding abstraction, which is different from the standard Component-based hierarchy:
Cluster → Sharding → Shard × N → InstanceSet → Pod × replicas
| Resource | Role |
|---|---|
| Cluster | User-facing declaration — uses spec.shardings to specify the number of shards and per-shard configuration |
| Sharding | KubeBlocks sharding spec — creates and manages N identical shard Components |
| Shard | An individual Component; each shard owns a range of hash slots (contiguous at creation; may become non-contiguous after rebalancing) and runs its own primary + replica pods |
| InstanceSet | Manages pods within a shard with stable identities |
| Pod | Actual running redis-cluster process; each pod gets its own PVC |
A minimum of 3 shards is required for Redis Cluster to achieve quorum. Each shard tolerates 1 pod failure when at least 1 replica is configured.
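A sketch of a Cluster manifest for the sharded pattern using spec.shardings follows. Field shapes are our reading of the KubeBlocks apps.kubeblocks.io/v1 API, and the name, namespace, and sizes are illustrative; check them against your installed version:

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: redis-sharded          # illustrative name
  namespace: demo
spec:
  clusterDef: redis
  topology: cluster            # Redis Cluster (sharded) pattern
  terminationPolicy: Delete
  shardings:
    - name: shard
      shards: 3                # minimum shard count for quorum
      template:
        name: redis
        replicas: 2            # 1 primary + 1 replica per shard
        volumeClaimTemplates:
          - name: data
            spec:
              accessModes: [ReadWriteOnce]
              resources:
                requests:
                  storage: 20Gi
```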
Every Redis Cluster pod runs two main containers (plus an init-dbctl init container that copies dbctl into /tools/ for use by the roleProbe):
| Container | Port | Purpose |
|---|---|---|
redis-cluster | 6379 (client), 16379 (cluster bus) | Redis node in cluster mode; handles client requests and participates in gossip; roleProbe runs /tools/dbctl redis getrole inside this container |
metrics | 9121 | Prometheus metrics exporter (redis_exporter) |
Each pod mounts its own PVC for the Redis data directory.
Redis Cluster divides the key space into 16384 hash slots. Each key is assigned to a slot via CRC16(key) mod 16384. Slots are distributed evenly across shards at cluster creation:
| Shard | Initial slot range (at creation) |
|---|---|
| shard-0 | 0 – 5460 |
| shard-1 | 5461 – 10922 |
| shard-2 | 10923 – 16383 |
The shard names in this table (shard-0, shard-1, shard-2) are simplified for illustration. KubeBlocks assigns each shard component a random 3-character suffix at creation time (e.g., shard-x7k, shard-m2p). The actual component and service names therefore differ from these examples. To use predictable names, set explicit shard IDs in shardTemplates[].shardIDs when creating the cluster.
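The slot-assignment rule — CRC16(key) mod 16384, with Redis's hash-tag convention — can be sketched in Python. The CRC16 variant is the XModem/CCITT form (polynomial 0x1021, initial value 0) defined in the Redis Cluster specification, whose own test vector is crc16("123456789") = 0x31C3:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM as used by Redis Cluster (poly 0x1021, init 0)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots.

    Hash-tag rule: if the key contains a non-empty substring between the
    first '{' and the next '}', only that substring is hashed, so related
    keys can be forced onto the same slot (and the same shard).
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("123456789"))            # 12739 (0x31C3), the spec's test vector
print(key_slot("{user1000}.following")  # same slot as ...
      == key_slot("{user1000}.followers"))  # ... this key, thanks to the hash tag
```

With the illustrative slot ranges above, a key hashing to slot 12739 would land on shard-2 (slots 10923–16383) at creation time.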
When shards are added or removed, KubeBlocks triggers a slot rebalancing operation to redistribute slots across the new shard count. After rebalancing, each shard may hold multiple non-contiguous slot ranges rather than a single interval.
Redis Cluster nodes use a peer-to-peer gossip protocol over port 16379 (cluster bus) for failure detection and topology propagation — no external coordinator is needed:
| Concept | Description |
|---|---|
| PFAIL | A node is marked "possibly failed" when it stops responding to pings |
| FAIL | A node is declared failed when a majority of primaries agree it is unreachable |
| Replica promotion | The shard's replica requests votes from other primaries; the winner steps up |
| MOVED redirect | When a client sends a key to the wrong shard, that node responds with a MOVED reply pointing to the correct shard |
Redis Cluster does not use a standard ClusterIP service for write routing. Instead, a cluster-aware client fetches the slot map directly from the cluster:
1. On connect, the client issues CLUSTER SLOTS (or CLUSTER SHARDS) to retrieve the full slot-to-node mapping.
2. The client hashes each key to a slot and sends the command directly to the shard that owns it.
3. On a MOVED or ASK response, the client updates its routing table and retries.

For external access, per-pod NodePort or LoadBalancer services (redis-advertised) can be enabled to expose each pod's advertised address.
When a primary fails, peers first mark it PFAIL, the FAIL state is then propagated cluster-wide, the shard's replica is promoted, and the label kubeblocks.io/role=primary is applied by KubeBlocks.

KubeBlocks manages the following Redis account for all topology patterns on this page (standalone, Sentinel, and Redis Cluster). The password is auto-generated and stored in a Secret named {cluster}-{component}-account-default.
| Account | Role | Purpose |
|---|---|---|
| default | Admin | Default Redis authentication account; used for all client connections and inter-component communication |
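When consuming the generated credentials from code, remember that Kubernetes Secret data is base64-encoded. A minimal decoding sketch follows; the password bytes are a stand-in, not a real KubeBlocks value, and the kubectl command in the comment is how the stored value would typically be retrieved:

```python
import base64

# In practice the encoded value comes from the Secret, e.g.:
#   kubectl get secret <cluster>-<component>-account-default \
#       -o jsonpath='{.data.password}'
encoded = base64.b64encode(b"s3cr3t-example")  # stand-in for the stored value

# Secret data must be base64-decoded before use as a Redis password.
password = base64.b64decode(encoded).decode()
print(password)  # s3cr3t-example
```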