Reconfigure ClickHouse Parameters

In KubeBlocks 1.0, the ClickHouse add-on manages configuration as whole XML files (for example user.xml for profiles and users, and 00_default_overrides.xml for server/network settings). Those files are not reduced to flat key–value pairs inside the operator.

Supported approach: author or update a ConfigMap that holds the XML (or a Helm-style template), then reference it from the Cluster under spec.shardings[].template.configs using the slot name clickhouse-user-tpl (user/profile XML) or clickhouse-tpl (server overrides).

NOTE

ClickHouse does not support the Reconfiguring OpsRequest type in KubeBlocks: configuration is delivered as whole XML templates, not as a flat parameter map that OpsRequest reconfiguration can merge. Use ConfigMaps and configs only. Rationale and add-on details are discussed in kubeblocks-addons.

Also see the upstream example cluster-with-config-templates.yaml.
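
For orientation, the excerpt below sketches where those two slots sit in a Cluster manifest. Only the field path and the slot names come from the add-on; the shard name and ConfigMap names are illustrative placeholders.

    spec:
      shardings:
        - name: clickhouse
          template:
            configs:
              - name: clickhouse-user-tpl        # user/profile XML (user.xml)
                configMap:
                  name: custom-ch-user-tpl
              - name: clickhouse-tpl             # server overrides (00_default_overrides.xml)
                configMap:
                  name: your-server-overrides-tpl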

Prerequisites

    • A functional Kubernetes cluster (v1.21+ recommended)
    • kubectl v1.21+ installed and configured with cluster access
    • KubeBlocks installed (installation guide)
    • ClickHouse Add-on enabled
    • A demo namespace: kubectl create ns demo

    For the steps below, use the same namespace and admin Secret as in the Quickstart (demo, udf-account-info, password password123 unless you changed them).
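
    If you want to confirm the environment before starting, a quick check might look like the following. The Addon resource name and group shown here are assumptions that can vary with how KubeBlocks was installed:

      # Check that the ClickHouse add-on is enabled (assumes the KubeBlocks Addon CRD)
      kubectl get addons.extensions.kubeblocks.io | grep -i clickhouse

      # Create the working namespace if it does not already exist
      kubectl get ns demo >/dev/null 2>&1 || kubectl create ns demo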

Update max_bytes_to_read for the web profile (recommended)

    The historical OpsRequest key clickhouse.profiles.web.max_bytes_to_read corresponds to the XML element max_bytes_to_read under <profiles><web> in user.xml.
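
    In XML terms, the dotted key simply maps onto nested elements, so the target inside user.xml looks like this:

      <clickhouse>
        <profiles>
          <web>
            <max_bytes_to_read>200000000000</max_bytes_to_read>
          </web>
        </profiles>
      </clickhouse>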

Recommended: export the live user.xml, edit locally, then apply

    This mirrors the usual operator workflow: start from the effective file inside a running pod, change only what you need, then publish it as a ConfigMap.

    1. Export user.xml from any ready shard pod (label apps.kubeblocks.io/sharding-name=clickhouse). The Bitnami layout mounts it at /bitnami/clickhouse/etc/users.d/default/user.xml (see cmpd-ch.yaml):

      CH_POD="$(kubectl get pods -n demo \ -l app.kubernetes.io/instance=clickhouse-cluster,apps.kubeblocks.io/sharding-name=clickhouse \ -o jsonpath='{.items[0].metadata.name}')" kubectl exec -n demo "$CH_POD" -c clickhouse -- \ cat /bitnami/clickhouse/etc/users.d/default/user.xml > user-exported.xml
    2. Edit the copy. Under <profiles>, ensure there is a <web> section (add it if your template only defines <default>), and set:

      <max_bytes_to_read>200000000000</max_bytes_to_read>

      You can also patch a line in place, for example:

      sed -i.bak 's|<max_bytes_to_read>.*</max_bytes_to_read>|<max_bytes_to_read>200000000000</max_bytes_to_read>|' user-exported.xml

      The file on disk is rendered XML (not the Helm user.xml.tpl with {{ ... }}). That is expected: you are replacing the materialized user.xml that the Pods already consume.

    3. Create or refresh the ConfigMap from the edited file. Keep the data key name user.xml so it matches the template slot:

      kubectl create configmap custom-ch-user-tpl -n demo \
        --from-file=user.xml=./user-exported.xml \
        --dry-run=client -o yaml | kubectl apply -f -
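
    After step 3, you can sanity-check both the edited file and the published ConfigMap before wiring it into the Cluster; the grep pattern below is only an illustration:

      # Confirm the new limit is present in the edited file
      grep -n '<max_bytes_to_read>' user-exported.xml

      # Confirm the ConfigMap carries the user.xml key with the new limit
      kubectl get configmap custom-ch-user-tpl -n demo \
        -o jsonpath='{.data.user\.xml}' | grep '<max_bytes_to_read>'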

Alternative: apply a hand-written user.xml manifest

    If you prefer to author XML from scratch or from version control, apply a ConfigMap manifest. The following minimal example only highlights the web profile limit; expand profiles, quotas, and users to match your environment (for day‑to‑day work, exporting from a pod is usually safer).

    Example: inline ConfigMap (custom-ch-user-tpl)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ch-user-tpl
  namespace: demo
data:
  user.xml: |
    <clickhouse>
      <profiles>
        <default>
          <max_threads>8</max_threads>
          <log_queries>1</log_queries>
          <log_queries_min_query_duration_ms>2000</log_queries_min_query_duration_ms>
        </default>
        <web>
          <max_rows_to_read>1000000000</max_rows_to_read>
          <max_bytes_to_read>200000000000</max_bytes_to_read>
          <readonly>1</readonly>
        </web>
      </profiles>
      <quotas>
        <default>
          <interval>
            <duration>3600</duration>
            <queries>0</queries>
            <errors>0</errors>
            <result_rows>0</result_rows>
            <read_rows>0</read_rows>
            <execution_time>0</execution_time>
          </interval>
        </default>
      </quotas>
      <users>
        <admin replace="replace">
          <password from_env="CLICKHOUSE_ADMIN_PASSWORD"/>
          <access_management>1</access_management>
          <named_collection_control>1</named_collection_control>
          <show_named_collections>1</show_named_collections>
          <show_named_collections_secrets>1</show_named_collections_secrets>
          <networks replace="replace">
            <ip>::/0</ip>
          </networks>
          <profile>default</profile>
          <quota>default</quota>
        </admin>
      </users>
    </clickhouse>
EOF

    For production baselines, you can also start from the chart’s user.xml.tpl and render or materialize it before placing the result in a ConfigMap.

Attach the template to the Cluster

    Under spec.shardings[].template, add configs so the user template ConfigMap is mounted as clickhouse-user-tpl:

    configs:
      - name: clickhouse-user-tpl
        configMap:
          name: custom-ch-user-tpl

    If the cluster already exists (for example created from the Quickstart standalone manifest), merge this block into the existing shard template alongside name, replicas, systemAccounts, resources, and volumeClaimTemplates. The least error-prone method is to run kubectl edit cluster clickhouse-cluster -n demo and insert configs under spec.shardings[0].template.
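
    For reference, the merged shard template might end up looking roughly like the excerpt below. This is only a sketch: the shard count, replica count, and template name are illustrative, and your existing systemAccounts, resources, and volumeClaimTemplates should stay unchanged.

      spec:
        shardings:
          - name: clickhouse
            shards: 1
            template:
              name: clickhouse
              replicas: 2
              configs:                          # newly added block
                - name: clickhouse-user-tpl
                  configMap:
                    name: custom-ch-user-tpl
              # systemAccounts, resources, volumeClaimTemplates: keep your existing values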

    If configs is not yet set on that template, you can add it in one step (adjust shardings/0 if your shard order differs):

    kubectl patch cluster clickhouse-cluster -n demo --type=json -p='[
      {"op": "add", "path": "/spec/shardings/0/template/configs", "value": [
        {"name": "clickhouse-user-tpl", "configMap": {"name": "custom-ch-user-tpl"}}
      ]}
    ]'

    After you save, wait until the Cluster returns to Running.
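
    You can watch the rollout instead of polling manually:

      kubectl get cluster clickhouse-cluster -n demo -w
      # Wait for the STATUS column (cluster phase) to report Running, then Ctrl-C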

Verify

    Run a query using the web profile and check the setting. Select any ClickHouse data pod for the cluster (shard workloads carry apps.kubeblocks.io/sharding-name=clickhouse; the middle segment of the pod name may vary with topology):

    CH_POD="$(kubectl get pods -n demo \ -l app.kubernetes.io/instance=clickhouse-cluster,apps.kubeblocks.io/sharding-name=clickhouse \ -o jsonpath='{.items[0].metadata.name}')"

    If you created the cluster exactly as in the Quickstart, the admin password is the value you stored in udf-account-info (example: password123). Otherwise, read it from the admin account Secret KubeBlocks created for the ClickHouse component, or from the pod environment:

    CH_PASS="$(kubectl exec -n demo "$CH_POD" -c clickhouse -- printenv CLICKHOUSE_ADMIN_PASSWORD)" CH_USER="$(kubectl exec -n demo "$CH_POD" -c clickhouse -- printenv CLICKHOUSE_ADMIN_USER)" kubectl exec -n demo "$CH_POD" -c clickhouse -- \ clickhouse-client --user "$CH_USER" --password "$CH_PASS" \ --query "SET profile = 'web'; SELECT name, value FROM system.settings WHERE name = 'max_bytes_to_read'"
    Example Output
    max_bytes_to_read 200000000000
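
    You can also confirm that the rendered file inside the pod picked up the new template, using the same mount path as in the export step:

      kubectl exec -n demo "$CH_POD" -c clickhouse -- \
        grep '<max_bytes_to_read>' /bitnami/clickhouse/etc/users.d/default/user.xml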

Server-level settings (00_default_overrides.xml)

    Options such as http_port, tcp_port, listen_host, macros, and logger live in the server XML (clickhouse-tpl / 00_default_overrides.xml), not in user.xml. To change them, supply a ConfigMap whose key matches the server template and reference it with:

    configs:
      - name: clickhouse-tpl
        configMap:
          name: your-server-overrides-tpl

    Use the chart’s 00_default_overrides.xml.tpl as a baseline.
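
    As a sketch (not a production baseline), a server-overrides ConfigMap could look like the following. The data key is assumed to match the server template file name, and the logger element is only an illustrative override; check the chart's 00_default_overrides.xml.tpl for the exact structure your version expects.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-server-overrides-tpl
  namespace: demo
data:
  00_default_overrides.xml: |
    <clickhouse>
      <logger>
        <level>information</level>
      </logger>
    </clickhouse>
EOF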

Reconfiguring OpsRequest is not supported

    For ClickHouse, a Reconfiguring OpsRequest does not apply configuration to the user.xml or server XML templates. Do not use type: Reconfiguring for this engine; follow the ConfigMap workflow above instead.

Cleanup (ConfigMap workflow)

    kubectl delete configmap custom-ch-user-tpl -n demo --ignore-not-found

    Remove the configs entry from the Cluster (or restore the previous manifest) when you no longer need the custom template.
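
    If you added configs with the JSON patch earlier, the mirror-image patch removes it again (adjust shardings/0 if your shard order differs):

      kubectl patch cluster clickhouse-cluster -n demo --type=json \
        -p='[{"op": "remove", "path": "/spec/shardings/0/template/configs"}]'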
