
Redis Architecture in KubeBlocks

The Redis ClusterDefinition exposes three patterns. They differ by topology name and whether Redis Sentinel is deployed:

| Pattern | Topology name | Components | HA / failover |
|---|---|---|---|
| Standalone | standalone | redis only | No Sentinel; no automatic primary failover |
| Sentinel (primary + replicas) | replication, replication-twemproxy | redis + redis-sentinel | Sentinel quorum monitors the primary and drives failover |
| Redis Cluster (sharded) | cluster | Sharding (shard) | Gossip on the cluster bus; no Sentinel processes |

Standalone architecture

Use topology: standalone when you want a single Redis Component with no Sentinel provisioned.

[Diagram] Standalone topology: the application reads and writes through a single ClusterIP Service, redis-standalone-redis-redis:6379 ("redis-standalone" is the example cluster name). The Service selects all redis pods with no role filter and connects directly to the single instance. The one data pod (redis-0) runs the redis server (:6379, standalone mode, accepts all reads and writes), an init-dbctl init container that copies the dbctl binary to /tools/dbctl for the roleProbe, and a redis-exporter sidecar (:9121, Prometheus metrics). It mounts its own PVC (data-0, 20Gi) for the RDB/AOF data directory. A headless service provides stable pod DNS for operator probes and internal use; it is not a client endpoint.
Cluster  →  Component (redis)  →  InstanceSet  →  Pod × N
  • There is no redis-sentinel Component and no Sentinel quorum — the monitoring and failover flow in the Sentinel architecture section does not apply.
  • Pod layout (Redis server + metrics sidecar, PVC per data pod) matches the Redis data pods description under Sentinel below.
  • For HA with automatic failover on a single shard, use replication (or replication-twemproxy with Twemproxy), not standalone.
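Inside the cluster, the Service above is reachable at the standard Kubernetes DNS name for a ClusterIP Service. The helper below is a hypothetical illustration; the {cluster}-redis-redis name pattern comes from this page, and the &lt;service&gt;.&lt;namespace&gt;.svc.cluster.local form is the standard Kubernetes Service DNS convention:

```python
def redis_service_dns(cluster: str, namespace: str = "default") -> str:
    """Build the in-cluster DNS name of the standalone Redis Service.

    KubeBlocks names the Service {cluster}-redis-redis (cluster name,
    then component name, then service name), and Kubernetes exposes it
    at <service>.<namespace>.svc.cluster.local.
    """
    service = f"{cluster}-redis-redis"
    return f"{service}.{namespace}.svc.cluster.local"

# For the example cluster "redis-standalone", clients connect on port 6379 to:
addr = redis_service_dns("redis-standalone")
# → "redis-standalone-redis-redis.default.svc.cluster.local"
```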

Sentinel architecture

NOTE

Everything in this section applies to replication and replication-twemproxy only. It does not apply to standalone (no Sentinel) or cluster (Redis Cluster / gossip, no Sentinel).

Redis Sentinel uses a dedicated set of Sentinel processes to monitor the Redis primary, detect failures, and coordinate automatic failover. All data lives on a single primary; replicas serve as hot standbys and optional read targets.

[Diagram] Sentinel topology: the application writes through redis-cluster-redis-redis:6379 ("redis-cluster" is the example cluster name), a ClusterIP Service whose selector kubeblocks.io/role=primary always targets the current primary; its Endpoints switch automatically on failover. Sentinel-aware clients discover the master through redis-cluster-redis-sentinel-redis-sentinel:26379, a ClusterIP Service selecting all Sentinel pods. Each data pod (redis-0 primary, redis-1 replica) runs the redis server (:6379), an init-dbctl init container (the roleProbe execs /tools/dbctl redis getrole), and a redis-exporter sidecar (:9121 metrics), and mounts its own 20Gi PVC (data-0, data-1). Replication flows primary → replica via PSYNC2 while Sentinel monitors the pair and triggers failover. Three Sentinel pods (sentinel-0/1/2, :26379) each monitor the master and mount a PVC for Sentinel state (volume needSnapshot: true). A headless service provides stable pod DNS for internal use (replication, HA heartbeat, operator probes); it is not a client endpoint.

Resource Hierarchy

KubeBlocks models a Redis Sentinel deployment as a hierarchy of Kubernetes custom resources:

Cluster  →  Component (redis)          →  InstanceSet  →  Pod × N
         →  Component (redis-sentinel)  →  InstanceSet  →  Pod × 3
| Resource | Role |
|---|---|
| Cluster | User-facing declaration; specifies topology, replica count, storage size, and resources |
| Component | Generated automatically; one Component for Redis data pods and one for Sentinel pods |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities and role awareness |
| Pod | Actual running Redis or Sentinel process; data pods each get their own PVC |

At least 3 Sentinel pods are required to form a voting quorum. The ClusterDefinition provisions redis-sentinel before redis (sentinel must be ready before data pods register with it).

Containers Inside Each Pod

Redis Data Pods

Each Redis data pod runs two main containers (plus an init-dbctl init container that copies dbctl into /tools/ for use by the roleProbe):

| Container | Port | Purpose |
|---|---|---|
| redis | 6379 | Redis data server; the primary accepts writes and replicates to replicas via async replication; the roleProbe runs /tools/dbctl redis getrole inside this container |
| metrics | 9121 | Prometheus metrics exporter (redis_exporter) |

Each data pod mounts its own PVC for the Redis data directory, preserving RDB/AOF files across restarts.

Redis Sentinel Pods

| Container | Port | Purpose |
|---|---|---|
| sentinel | 26379 | Redis Sentinel process; monitors the primary, detects failures, coordinates failover |

Each Sentinel pod mounts its own PVC (needSnapshot: true) for the Sentinel data directory, preserving Sentinel configuration across restarts.

High Availability via Redis Sentinel

| Sentinel concept | Description |
|---|---|
| Monitoring | Each Sentinel continuously pings the Redis primary and all replicas |
| Subjective down (SDOWN) | A Sentinel marks a node as subjectively down when it fails to respond within down-after-milliseconds |
| Objective down (ODOWN) | A primary is declared objectively down when a quorum of Sentinels agree it is unreachable |
| Leader election | Sentinels elect one of themselves as the failover coordinator |
| Failover | The elected Sentinel promotes the most up-to-date replica and reconfigures the other replicas |
| Quorum | (N/2 + 1) Sentinels must agree; with 3 Sentinels, 2 must agree |
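The quorum arithmetic in the last row can be sketched directly (a minimal illustration of the majority formula N/2 + 1 stated above):

```python
def sentinel_quorum(n_sentinels: int) -> int:
    """Majority quorum: more than half of the Sentinels must agree
    before a primary is declared objectively down (ODOWN)."""
    return n_sentinels // 2 + 1

# With the 3 Sentinels KubeBlocks deploys, 2 must agree, so the
# deployment tolerates the loss of exactly 1 Sentinel.
assert sentinel_quorum(3) == 2
assert sentinel_quorum(5) == 3
```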

Traffic Routing

| Service | Type | Port | Selector |
|---|---|---|---|
| {cluster}-redis-redis | ClusterIP | 6379 | kubeblocks.io/role=primary |
| {cluster}-redis-sentinel-redis-sentinel | ClusterIP | 26379 | all Sentinel pods |
  • Write traffic: connect to {cluster}-redis-redis:6379 (always routes to the current primary)
  • Sentinel queries (for client-side Sentinel discovery): connect to {cluster}-redis-sentinel-redis-sentinel:26379

Automatic Failover

  1. Primary stops responding — Sentinels detect missing pings within down-after-milliseconds (default 20 s)
  2. Quorum agreement (ODOWN) — the configured quorum of Sentinels agree the primary is objectively down
  3. Sentinel leader election — Sentinels elect one of themselves as the failover coordinator
  4. Replica selection — the coordinator picks the replica with the smallest replication lag
  5. Promotion — the coordinator sends REPLICAOF NO ONE to the chosen replica
  6. Reconfiguration — other replicas and Sentinels update their configuration to the new primary
  7. Pod label updated — kubeblocks.io/role=primary applied to the new primary pod
  8. Service Endpoints switch — the ClusterIP service automatically routes traffic to the new primary

Total failover time is typically 10–30 seconds.
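Steps 1 and 2 can be sketched as a toy simulation (a hypothetical helper, not KubeBlocks or Sentinel code; the SDOWN/ODOWN semantics and the down-after-milliseconds threshold come from the table and the numbered steps above):

```python
DOWN_AFTER_MS = 20_000  # down-after-milliseconds threshold from step 1

def is_odown(last_pong_ms: list[int], now_ms: int, quorum: int) -> bool:
    """Each entry in last_pong_ms is one Sentinel's view: the time it
    last heard from the primary. A Sentinel holds SDOWN once the
    primary has been silent longer than DOWN_AFTER_MS; ODOWN is
    declared when at least `quorum` Sentinels hold SDOWN."""
    sdown_votes = sum(1 for t in last_pong_ms if now_ms - t > DOWN_AFTER_MS)
    return sdown_votes >= quorum

# 3 Sentinels, quorum 2: two of them have heard nothing for 25 s, the
# third for only 5 s -> ODOWN is declared and failover begins.
assert is_odown([0, 0, 20_000], now_ms=25_000, quorum=2) is True
# Only one Sentinel past the threshold -> SDOWN only, no failover.
assert is_odown([0, 20_000, 20_000], now_ms=25_000, quorum=2) is False
```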


Redis Cluster Architecture

Redis Cluster (also called sharding mode) distributes data across multiple independent shards using hash-slot-based partitioning. Each shard is an independent KubeBlocks Sharding component with its own primary and replicas. There are no Sentinel processes — nodes coordinate through a gossip protocol on the cluster bus.

[Diagram] Redis Cluster topology: a cluster-aware client (e.g. redis-py in cluster mode, Lettuce, ioredis) fetches the slot map via CLUSTER SLOTS, follows MOVED redirects automatically, and routes hash(key) → slot → correct shard primary. The 16384 hash slots are distributed evenly across shards; each key is hashed via CRC16 to determine its slot, and the slot determines the shard. Each shard is one KubeBlocks Component with its own primary and replicas; at creation, shard-0 owns slots 0-5460, shard-1 owns 5461-10922, and shard-2 owns 10923-16383. Every pod runs the redis-cluster container (:6379 client port, :16379 cluster bus) alongside the init-dbctl init container (dbctl → /tools) and the metrics exporter sidecar. All nodes exchange heartbeats, detect failures, and propagate slot-ownership changes over the gossip protocol on cluster bus :16379; no Sentinel is required. A headless service provides stable pod DNS for internal use (replication, operator probes), not client access; per-pod NodePort/LoadBalancer services are used when external exposure is needed.

Resource Hierarchy

Redis Cluster uses the KubeBlocks Sharding abstraction, which is different from the standard Component-based hierarchy:

Cluster  →  Sharding  →  Shard × N  →  InstanceSet  →  Pod × replicas
| Resource | Role |
|---|---|
| Cluster | User-facing declaration; uses spec.shardings to specify the number of shards and per-shard configuration |
| Sharding | KubeBlocks sharding spec; creates and manages N identical shard Components |
| Shard | An individual Component; each shard owns a range of hash slots (contiguous at creation; may become non-contiguous after rebalancing) and runs its own primary + replica pods |
| InstanceSet | Manages pods within a shard with stable identities |
| Pod | Actual running redis-cluster process; each pod gets its own PVC |

A minimum of 3 shards is required for Redis Cluster to achieve quorum. Each shard tolerates 1 pod failure when at least 1 replica is configured.

Containers Inside Each Pod

Every Redis Cluster pod runs two main containers (plus an init-dbctl init container that copies dbctl into /tools/ for use by the roleProbe):

| Container | Port | Purpose |
|---|---|---|
| redis-cluster | 6379 (client), 16379 (cluster bus) | Redis node in cluster mode; handles client requests and participates in gossip; the roleProbe runs /tools/dbctl redis getrole inside this container |
| metrics | 9121 | Prometheus metrics exporter (redis_exporter) |

Each pod mounts its own PVC for the Redis data directory.

Hash Slot Distribution

Redis Cluster divides the key space into 16384 hash slots. Each key is assigned to a slot via CRC16(key) mod 16384. Slots are distributed evenly across shards at cluster creation:

| Shard | Initial slot range (at creation) |
|---|---|
| shard-0 | 0 – 5460 |
| shard-1 | 5461 – 10922 |
| shard-2 | 10923 – 16383 |
NOTE

The shard names in this table (shard-0, shard-1, shard-2) are simplified for illustration. KubeBlocks assigns each shard component a random 3-character suffix at creation time (e.g., shard-x7k, shard-m2p). The actual component and service names therefore differ from these examples. To use predictable names, set explicit shard IDs in shardTemplates[].shardIDs when creating the cluster.

When shards are added or removed, KubeBlocks triggers a slot rebalancing operation to redistribute slots across the new shard count. After rebalancing, each shard may hold multiple non-contiguous slot ranges rather than a single interval.
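The slot math above can be reproduced in a few lines. The sketch below assumes Redis's published CRC16 variant (CRC-16/XMODEM, polynomial 0x1021) and the hash-tag rule from the Redis Cluster specification; slot_ranges is just one way to compute an even creation-time split that matches the table, not KubeBlocks' actual rebalancing logic:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM, the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 slots: CRC16(key) mod 16384.
    If the key contains a non-empty {hash tag}, only the tag is hashed,
    so related keys can be forced onto the same shard."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

def slot_ranges(n_shards: int, slots: int = 16384):
    """Even split of the slot space across shards at creation time."""
    bounds = [(i * slots + n_shards // 2) // n_shards for i in range(n_shards + 1)]
    return [(bounds[i], bounds[i + 1] - 1) for i in range(n_shards)]

# The Redis Cluster spec's own example: CRC16("123456789") == 0x31C3.
assert key_slot("123456789") == 0x31C3 % 16384  # slot 12739
# Hash tags: both keys hash only "user1000" -> same slot, same shard.
assert key_slot("{user1000}.following") == key_slot("{user1000}.followers")
# Matches the 3-shard table above.
assert slot_ranges(3) == [(0, 5460), (5461, 10922), (10923, 16383)]
```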

High Availability via Gossip Protocol

Redis Cluster nodes use a peer-to-peer gossip protocol over port 16379 (cluster bus) for failure detection and topology propagation — no external coordinator is needed:

| Concept | Description |
|---|---|
| PFAIL | A node is marked "possibly failed" when it stops responding to pings |
| FAIL | A node is declared failed when a majority of primaries agree it is unreachable |
| Replica promotion | The shard's replica requests votes from other primaries; the winner steps up |
| MOVED redirect | When a client sends a key to the wrong shard, that node responds with a MOVED reply pointing to the correct shard |

Traffic Routing

Redis Cluster does not use a standard ClusterIP service for write routing. Instead, a cluster-aware client fetches the slot map directly from the cluster:

  1. The client connects to any cluster node on port 6379
  2. It issues CLUSTER SLOTS (or CLUSTER SHARDS) to retrieve the full slot-to-node mapping
  3. The client routes each command directly to the shard primary owning the key's slot
  4. On a MOVED or ASK response, the client updates its routing table and retries

For external access, per-pod NodePort or LoadBalancer services (redis-advertised) can be enabled to expose each pod's advertised address.
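Step 4's redirect handling can be illustrated with the error format from the Redis protocol, where a redirect reads `MOVED <slot> <host>:<port>` (or `ASK ...` for an in-progress migration). The parser below is a hypothetical sketch of what a cluster-aware client does internally, not any real client's implementation:

```python
def parse_redirect(error: str) -> tuple[int, str, int]:
    """Parse a MOVED/ASK redirect into (slot, host, port) so the client
    can update its slot->node routing table and retry the command
    against the shard that actually owns the slot."""
    kind, slot, addr = error.split()
    assert kind in ("MOVED", "ASK"), f"not a redirect: {error!r}"
    host, _, port = addr.rpartition(":")  # rpartition tolerates ':' in hostnames
    return int(slot), host, int(port)

# A node that does not own slot 3999 answers with something like:
slot, host, port = parse_redirect("MOVED 3999 10.42.0.17:6379")
assert (slot, host, port) == (3999, "10.42.0.17", 6379)
```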

Automatic Failover

  1. Shard primary stops responding — peers detect missing gossip heartbeats
  2. PFAIL declared — nodes that cannot reach the primary mark it as PFAIL
  3. FAIL declared — a majority of other primaries agree → FAIL state propagated cluster-wide
  4. Replica election — the shard replica requests votes from other primaries; majority vote promotes it
  5. Slot ownership transferred — cluster topology updated with the new primary for the affected slots
  6. Pod label updated — kubeblocks.io/role=primary applied by KubeBlocks
  7. Clients follow MOVED — cluster-aware clients refresh their routing table automatically

System Accounts

KubeBlocks manages the following Redis account for all topology patterns on this page (standalone, Sentinel, and Redis Cluster). The password is auto-generated and stored in a Secret named {cluster}-{component}-account-default.

| Account | Role | Purpose |
|---|---|---|
| default | Admin | Default Redis authentication account; used for all client connections and inter-component communication |
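The Secret naming convention above can be shown concretely. Retrieving the Secret itself would go through kubectl or the Kubernetes API; the sketch below only builds the name and demonstrates the base64 decoding step that Kubernetes Secret data requires, using a made-up password for illustration:

```python
import base64

def account_secret_name(cluster: str, component: str) -> str:
    """KubeBlocks stores the auto-generated password in a Secret
    named {cluster}-{component}-account-default."""
    return f"{cluster}-{component}-account-default"

# e.g. kubectl get secret redis-cluster-redis-account-default -o yaml
assert account_secret_name("redis-cluster", "redis") == "redis-cluster-redis-account-default"

# Secret .data values are base64-encoded; decode the password field before use.
encoded = base64.b64encode(b"s3cr3t-example")  # stand-in for .data.password
assert base64.b64decode(encoded) == b"s3cr3t-example"
```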

© 2026 KUBEBLOCKS INC