KubeBlocks is a cloud-native solution designed specifically for managing databases on Kubernetes, developed by domain experts with decades of experience. It supports a wide range of stateful workloads, including common components such as relational databases, NoSQL databases, and message queues. By simplifying operations, enhancing deployment flexibility, and providing powerful scalability, KubeBlocks significantly improves the efficiency and convenience of managing and running databases in a cloud-native environment.
Before delving into KubeBlocks' network modes, let's briefly review some Kubernetes network-related concepts:
- Pod IP: Kubernetes assigns a unique IP address to each Pod, which other Pods within the cluster can reach directly.
- Host network: containers can use the host's network namespace directly, sharing the host's IP address and ports.
- Service: a Service provides a stable access point for a group of Pods, typically used for load balancing, and comes in the following types:
  - ClusterIP: assigns an IP address from a reserved IP address pool within the cluster. Suitable for services that are only called by other Pods or services inside the Kubernetes cluster.
  - Headless Service: when .spec.clusterIP is set to None, Kubernetes does not assign a cluster IP. A Headless Service instead exposes the endpoint IP addresses of the individual Pods through internal DNS records (see the sketch after this list).
  - NodePort: opens a specified port on each node (default range 30000-32767, configurable via the --service-node-port-range flag), allowing external access to the service via any <NodeIP>:<NodePort>.
  - LoadBalancer: uses an externally provided load balancer (typically from a cloud provider) that assigns an external IP address to the service.
  - ExternalName: maps a service to an external DNS name (such as an API gateway, third-party database, or other external service) and does not point to any Pod.
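For reference, a minimal Headless Service manifest might look like the sketch below (the name, selector label, and port are illustrative); setting .spec.clusterIP to None is what makes a Service headless:

apiVersion: v1
kind: Service
metadata:
  name: redis-headless        # illustrative name
  namespace: demo
spec:
  clusterIP: None             # "None" makes this a Headless Service
  selector:
    app: redis                # illustrative label selector
  ports:
    - name: redis
      port: 6379
      targetPort: 6379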
Headless Service is the standard network mode for cloud-native stateful services and suits most databases. KubeBlocks creates an InstanceSet workload (similar to a StatefulSet but more capable) for each Component and, by default, creates a Headless Service pointing to this workload. Each Pod thereby gets its own DNS subdomain, in the format:
${podName}.${headlessSvcName}.${namespace}.${clusterDomain}
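For example, with the cluster below, the first Redis Pod's FQDN might look like this (the exact Pod and headless Service names are generated by KubeBlocks and depend on the Addon, so treat this as illustrative):

redis-replication-redis-0.redis-replication-redis-headless.demo.svc.cluster.local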
The following is a YAML configuration example for a Redis primary-replica cluster with Sentinel:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  # Specify the component names that need to use Headless Service mode
  annotations:
    kubeblocks.io/headless-service: "redis,redis-sentinel"
  name: redis-replication
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: redis
  topology: replication
  componentSpecs:
    - name: redis
      serviceVersion: "7.2.4"
      disableExporter: false
      replicas: 2
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: redis-sentinel
      serviceVersion: "7.2.4"
      replicas: 3
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
When a client outside the Kubernetes cluster accesses a Redis Sentinel cluster, the request first reaches a Sentinel, which returns the announced address of the registered Redis master node, i.e., the Pod FQDN under the Headless Service. If that address is not resolvable or routable from the client's network, the connection fails.
Fixed Pod IP mode is suitable for databases that rely on IP addresses as unique node identifiers for internal communication. Kubernetes assigns a unique IP address to each Pod, but a Pod's IP can change, for example when the Pod is recreated, rescheduled to another node, or replaced during a rolling update.
Such instability makes raw Pod IPs unsuitable as internal communication addresses for databases; fixing the Pod IP solves the problem. The open-source community offers several solutions, such as SpiderPool, and KubeBlocks' InstanceSet controller adds support for fixed Pod IPs on top of such solutions.
The following is a YAML configuration example for a Redis primary-replica cluster:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: redis-replication
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: redis
  topology: replication
  componentSpecs:
    - name: redis
      serviceVersion: "7.2.4"
      disableExporter: false
      replicas: 2
      # Enable fixed pod IP mode
      # Note: This option can only be enabled if the current Kubernetes cluster has deployed a fixed IP network solution (e.g., SpiderPool)
      env:
        - name: FIXED_POD_IP_ENABLED
          value: "true"
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: redis-sentinel
      replicas: 3
      # Enable fixed pod IP mode
      # Note: This option can only be enabled if the current Kubernetes cluster has deployed a fixed IP network solution (e.g., SpiderPool)
      env:
        - name: FIXED_POD_IP_ENABLED
          value: "true"
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
HostNetwork is a Kubernetes network mode in which a Pod with hostNetwork set to true uses the host's network namespace, sharing the host's IP address and ports. KubeBlocks maintains a global host-port allocation table with a default port range of 1025-65536, excluding some common ports (including the NodePort range 30000-32767, 6443, etc.). Users can customize the port range via the environment variables HOST_PORT_INCLUDE_RANGES and HOST_PORT_EXCLUDE_RANGES.
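As a sketch, the two variables could be set as environment variables on the KubeBlocks controller manager Deployment; the variable names come from the text above, but the value format (comma-separated port ranges) and the exact place to set them are assumptions, so check your installation (e.g. the Helm chart values) for the supported mechanism:

# Hypothetical override on the KubeBlocks manager Deployment (value format assumed)
env:
  - name: HOST_PORT_INCLUDE_RANGES
    value: "10000-20000"        # ports KubeBlocks may allocate as host ports
  - name: HOST_PORT_EXCLUDE_RANGES
    value: "6443,30000-32767"   # ports KubeBlocks must never allocate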
The following is a YAML configuration example for a Redis primary-replica cluster:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: redis-replication
  namespace: demo
  annotations:
    # Specify the component names that need to use hostNetwork mode via annotation, separated by commas for multiple component names
    kubeblocks.io/host-network: "redis,redis-sentinel"
spec:
  terminationPolicy: Delete
  clusterDef: redis
  topology: replication
  componentSpecs:
    - name: redis
      serviceVersion: "7.2.4"
      disableExporter: false
      replicas: 2
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: redis-sentinel
      serviceVersion: "7.2.4"
      replicas: 3
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
- Port conflict risk: KubeBlocks manages only the host ports it allocates. If another service on the physical host occupies a port KubeBlocks has assigned, a conflict results, so you must ensure the allocated host ports are not used by other services.
- Scheduling limitations: database engines communicate via Host IP + Host Port, so after a Pod is rebuilt it may need to land on the same node, otherwise it may fail to start. Using a Local PV is recommended (e.g., for a Redis Sentinel cluster) to keep Pods scheduled to the same node; see the sketch after this list. For Redis Shard clusters, each node is assigned a unique node ID once the cluster is built; as long as the nodes.conf file remains unchanged, a restarted node automatically broadcasts its new Cluster Announce information to the other nodes, achieving dynamic address updates and automatic cluster topology repair.
- Port range limitations: some enterprise security teams restrict the host port range, which indirectly limits the number of services each host can expose.
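As a minimal sketch of the Local PV recommendation above (assuming a local-storage StorageClass such as local-path is installed in the cluster), pinning the data volume to local storage keeps a rebuilt Pod on the node that holds its data:

volumeClaimTemplates:
  - name: data
    spec:
      storageClassName: local-path   # illustrative; use whatever local-storage class you run
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi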
NodePort is a Kubernetes Service type that opens a static port on each node in the cluster (usually in the range 30000-32767), allowing external access to the service via any <NodeIP>:<NodePort>. KubeBlocks supports creating a corresponding Service for each Pod and injecting the Service information into the Pod's containers via environment variables, which Addon developers can use to build clusters.
The advertise port is the service port that a Redis or Sentinel node announces to other nodes or clients for external access. It matters particularly in containerized, NAT, or port-mapped environments, where the port a container actually listens on may differ from the port exposed externally. The per-Pod Service can be of the NodePort type; when a Redis node starts, it reads the NodePort information to configure its cluster-announce parameters.
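To illustrate, a per-Pod advertised Service created in this mode could look roughly like the following; the Service name, selector label, and port numbers are hypothetical here, since KubeBlocks generates them:

apiVersion: v1
kind: Service
metadata:
  name: redis-replication-redis-advertised-0                 # hypothetical generated name
  namespace: demo
spec:
  type: NodePort
  selector:
    apps.kubeblocks.io/pod-name: redis-replication-redis-0   # hypothetical per-Pod selector
  ports:
    - name: redis-advertised
      port: 6379
      targetPort: 6379
      nodePort: 32001   # allocated from the NodePort range; this becomes the announce port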
The following is a YAML configuration example for a Redis primary-replica cluster:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: redis-replication
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: redis
  topology: replication
  componentSpecs:
    - name: redis
      serviceVersion: "7.2.4"
      disableExporter: false
      replicas: 2
      services:
        - name: redis-advertised
          # Use a NodePort type Service
          serviceType: NodePort
          # Create an independent Service for each Pod
          # The Redis node reads these environment variables at startup to configure the cluster-announce parameters
          podService: true
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: redis-sentinel
      serviceVersion: "7.2.4"
      replicas: 3
      services:
        - name: sentinel-advertised
          # Use a NodePort type Service
          serviceType: NodePort
          # Create an independent Service for each Pod
          # The Sentinel node reads these environment variables at startup to configure its announce parameters
          podService: true
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
LoadBalancer is a Kubernetes Service type that provisions an external load balancer outside the cluster to distribute traffic to backend Pods. KubeBlocks supports creating a corresponding LoadBalancer Service for each Pod and injecting the Service information into the Pod's containers via environment variables, which Addon developers can use to build clusters.
The usage is similar to NodePort: replace serviceType with LoadBalancer and add the annotations your load balancer requires. In this mode, a Redis Sentinel cluster no longer has to rely on Local PV, which improves deployment flexibility and cross-node scheduling.
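For example, the redis component's services stanza from the NodePort example could be adapted roughly as follows; the AWS annotation is only one illustration, and both it and the exact fields supported on the component service should be verified against your cloud provider and KubeBlocks version:

services:
  - name: redis-advertised
    serviceType: LoadBalancer
    podService: true
    annotations:
      # Example annotation for an AWS NLB; substitute your provider's load balancer annotations
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"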
To help you better choose the appropriate network mode, the following table summarizes the key features of the 5 network modes supported by KubeBlocks for Redis:
| Network Mode | Access Scope | Performance Characteristics | Key Advantages | Main Limitations | Applicable Scenarios |
|---|---|---|---|---|---|
| Headless Service | Cluster internal | Standard container network performance | Cloud-native standard; simple configuration; stable DNS resolution | Limited to in-cluster access; external client connections restricted | In-cluster communication; development and testing environments |
| Fixed Pod IP | Depends on network configuration | Standard container network performance | Fixed IP addresses; suits IP-dependent scenarios; stable cross-node communication | Depends on SpiderPool (or similar); complex IP address management; scaling limitations | Container and physical networks interconnected; applications requiring fixed IPs; containerizing traditional applications |
| HostNetwork | Direct host network connection | Highest network performance | Best network performance; uses the host network directly; avoids network virtualization overhead | Port conflict risk; scheduling limitations; reduced security | High-performance requirements |
| NodePort Service | External cluster access | Standard network performance | Simple external access; load balancing; flexible configuration | Port range limitations; scheduling dependency | Development and testing environments; small-scale external access; simple external exposure |
| LoadBalancer Service | External cluster access | Standard network performance | Production-grade external access; automatic load balancing; no Local PV dependency | Higher cost; IP address consumption; depends on cloud provider support | Production environments; large-scale external access; cloud-native deployment |
The 5 network modes supported by KubeBlocks cover scenarios ranging from development and testing to production deployment. Understanding the characteristics and applicable scope of each mode helps you make sound architectural choices in real projects. Whether you are building high-concurrency OLTP systems, internet-facing services, or private cloud deployments, KubeBlocks' flexible and powerful networking capabilities can provide solid support for your database services.