Tanzu Application Catalog services

Bitnami package for Apache ZooKeeper

Last Updated March 07, 2025

Apache ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications.

Overview of Apache ZooKeeper

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/zookeeper

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository.

Introduction

This chart bootstraps a ZooKeeper deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/zookeeper

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

These commands deploy ZooKeeper on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are configured using the resources value (check the parameters table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart offers the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, in production workloads using resourcesPreset is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
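As a sketch, an explicit resources section in your values file might look like the following (the CPU and memory figures are illustrative, not sizing recommendations):

```yaml
# Illustrative values only: size requests and limits for your own workload.
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```

Pass the file at install time with helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/zookeeper.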

Update credentials

Bitnami charts configure credentials at first boot. Any further change to the secrets or credentials requires manual intervention. Follow these instructions:

  • Update the user password following the upstream documentation
  • Update the password secret with the new values (replace the SECRET_NAME, CLIENT_PASSWORD and SERVER_PASSWORD placeholders)
kubectl create secret generic SECRET_NAME \
  --from-literal=client-password=CLIENT_PASSWORD \
  --from-literal=server-password=SERVER_PASSWORD \
  --dry-run=client -o yaml | kubectl apply -f -

Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This exposes ZooKeeper's native Prometheus endpoint along with a metrics service configurable under the metrics.service section. The service will have the necessary annotations to be automatically scraped by Prometheus.

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster; otherwise, the installation will fail with the following error:

no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus helm chart to obtain the necessary CRDs and the Prometheus Operator.
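Putting the above together, a minimal sketch of the metrics-related values (parameter names taken from the Metrics parameters section) is:

```yaml
metrics:
  enabled: true        # expose ZooKeeper's native Prometheus endpoint
  service:
    port: 9141         # default exporter service port
  serviceMonitor:
    enabled: true      # requires the Prometheus Operator CRDs in the cluster
    interval: 30s      # example scrape interval; adjust as needed
```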

Rolling vs Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.
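For example, you can pin the image to an immutable reference in your values file. The digest below is a placeholder for you to substitute, following the same convention as REGISTRY_NAME and REPOSITORY_NAME:

```yaml
image:
  registry: REGISTRY_NAME
  repository: REPOSITORY_NAME/zookeeper
  # Replace the placeholder with the immutable digest of the image you tested.
  # If set, the digest overrides the tag.
  digest: "sha256:YOUR_IMAGE_DIGEST"
```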

Configure log level

You can configure the ZooKeeper log level using the ZOO_LOG_LEVEL environment variable or the parameter logLevel. By default, it is set to ERROR because each use of the liveness probe and the readiness probe produces an INFO message on connection and a WARN message on disconnection, generating a high volume of noise in your logs.

In order to remove that log noise so levels can be set to ‘INFO’, two changes must be made.

First, ensure that you are not getting metrics via the deprecated pattern of polling ‘mntr’ on the ZooKeeper client port. The preferred method of polling for Apache ZooKeeper metrics is the ZooKeeper metrics server. This is supported in this chart when setting metrics.enabled to true.

Second, to avoid the connection/disconnection messages from the probes, you can set custom values for these checks which direct them to the ZooKeeper Admin Server instead of the client port. By default, an Admin Server will be started that listens on localhost at port 8080. The following is an example of this use of the Admin Server for probes:

livenessProbe:
  enabled: false
readinessProbe:
  enabled: false
customLivenessProbe:
  exec:
    command: ['/bin/bash', '-c', 'curl -s -m 2 http://localhost:8080/commands/ruok | grep ruok']
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6
customReadinessProbe:
  exec:
    command: ['/bin/bash', '-c', 'curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null']
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6

You can also set the log4j logging level and which log appenders are turned on by using ZOO_LOG4J_PROP. This maps to the zookeeper.root.logger property inside conf/log4j.properties, which defaults to:

zookeeper.root.logger=INFO, CONSOLE

The available appenders are:

  • CONSOLE
  • ROLLINGFILE
  • RFAAUDIT
  • TRACEFILE
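Combining the settings above, a sketch of values that raise the level to INFO and select log4j appenders (assuming the probe and metrics changes described earlier are already in place) is:

```yaml
logLevel: INFO                       # sets ZOO_LOG_LEVEL
extraEnvVars:
  # Select the log4j level and appenders via zookeeper.root.logger
  - name: ZOO_LOG4J_PROP
    value: "INFO, CONSOLE, ROLLINGFILE"
```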

Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.

Persistence

The Bitnami ZooKeeper image stores the ZooKeeper data and configurations at the /bitnami/zookeeper path of the container.

Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the Parameters section to configure the PVC or to disable persistence.

If you encounter errors when working with persistent volumes, refer to our troubleshooting guide for persistent volumes.

Adjust permissions of persistent volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting volumePermissions.enabled to true.
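For example:

```yaml
volumePermissions:
  enabled: true   # run an init container (as root) to chown the data volume
```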

Configure the data log directory

You can use a dedicated device for logs (instead of using the data directory) to help avoid competition between logging and snapshots. To do so, set the dataLogDir parameter with the path to be used for writing transaction logs. Alternatively, set this parameter to an empty string and the log will be written to the data directory (ZooKeeper's default behavior).

When using a dedicated device for logs, you can use a PVC to persist the logs. To do so, set persistence.enabled to true. See the Persistence Parameters section for more information.
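As an illustrative sketch (the directory path below is hypothetical; choose your own):

```yaml
dataLogDir: /bitnami/zookeeper/dataLog   # hypothetical transaction-log path
persistence:
  enabled: true                          # persist the dedicated log device
  dataLogDir:
    size: 8Gi
```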

Set pod affinity

This chart allows you to set custom pod affinity using the affinity parameter. Find more information about pod affinity in the Kubernetes documentation.

As an alternative, you can use any of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
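For instance, to spread replicas across nodes using the presets (the node label below is only an example):

```yaml
podAntiAffinityPreset: hard        # one replica per node, when possible
nodeAffinityPreset:
  type: soft
  key: "kubernetes.io/arch"        # example node label key
  values:
    - amd64
```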

Parameters

Global parameters

Name | Description | Value
global.imageRegistry | Global Docker image registry | ""
global.imagePullSecrets | Global Docker registry secret names as an array | []
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | ""
global.storageClass | DEPRECATED: use global.defaultStorageClass instead | ""
global.security.allowInsecureImages | Allows skipping image verification | false
global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto

Common parameters

Name | Description | Value
kubeVersion | Override Kubernetes version | ""
nameOverride | String to partially override common.names.fullname template (will maintain the release name) | ""
fullnameOverride | String to fully override common.names.fullname template | ""
clusterDomain | Kubernetes Cluster Domain | cluster.local
extraDeploy | Extra objects to deploy (evaluated as a template) | []
commonLabels | Add labels to all the deployed resources | {}
commonAnnotations | Add annotations to all the deployed resources | {}
namespaceOverride | Override namespace for ZooKeeper resources | ""
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false
diagnosticMode.command | Command to override all containers in the statefulset | ["sleep"]
diagnosticMode.args | Args to override all containers in the statefulset | ["infinity"]

ZooKeeper chart parameters

Name | Description | Value
image.registry | ZooKeeper image registry | REGISTRY_NAME
image.repository | ZooKeeper image repository | REPOSITORY_NAME/zookeeper
image.digest | ZooKeeper image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | ""
image.pullPolicy | ZooKeeper image pull policy | IfNotPresent
image.pullSecrets | Specify docker-registry secret names as an array | []
image.debug | Specify if debug values should be set | false
auth.client.enabled | Enable ZooKeeper client-server authentication. It uses SASL/Digest-MD5 | false
auth.client.clientUser | User that ZooKeeper clients will use to authenticate | ""
auth.client.clientPassword | Password that ZooKeeper clients will use to authenticate | ""
auth.client.serverUsers | Comma, semicolon or whitespace separated list of users to be created | ""
auth.client.serverPasswords | Comma, semicolon or whitespace separated list of passwords to assign to users when created | ""
auth.client.existingSecret | Use existing secret (ignores previous passwords) | ""
auth.quorum.enabled | Enable ZooKeeper server-server authentication. It uses SASL/Digest-MD5 | false
auth.quorum.learnerUser | User that the ZooKeeper quorumLearner will use to authenticate to quorumServers. | ""
auth.quorum.learnerPassword | Password that the ZooKeeper quorumLearner will use to authenticate to quorumServers. | ""
auth.quorum.serverUsers | Comma, semicolon or whitespace separated list of users for the quorumServers. | ""
auth.quorum.serverPasswords | Comma, semicolon or whitespace separated list of passwords to assign to users when created | ""
auth.quorum.existingSecret | Use existing secret (ignores previous passwords) | ""
tickTime | Basic time unit (in milliseconds) used by ZooKeeper for heartbeats | 2000
initLimit | Limits the length of time the ZooKeeper servers in quorum have to connect to a leader | 10
syncLimit | How far out of date a server can be from a leader | 5
preAllocSize | Block size for transaction log file | 65536
snapCount | The number of transactions recorded in the transaction log before a snapshot can be taken (and the transaction log rolled) | 100000
maxClientCnxns | Limits the number of concurrent connections that a single client may make to a single member of the ZooKeeper ensemble | 60
maxSessionTimeout | Maximum session timeout (in milliseconds) that the server will allow the client to negotiate | 40000
heapSize | Size (in MB) for the Java Heap options (Xmx and Xms) | 1024
fourlwCommandsWhitelist | A list of comma separated Four Letter Words commands that can be executed | srvr, mntr, ruok
minServerId | Minimal SERVER_ID value, nodes increment their IDs respectively | 1
listenOnAllIPs | Allow ZooKeeper to listen for connections from its peers on all available IP addresses | false
zooServers | ZooKeeper space separated servers list. Leave empty to use the default ZooKeeper server names. | ""
autopurge.snapRetainCount | The most recent snapshots amount (and corresponding transaction logs) to retain | 10
autopurge.purgeInterval | The time interval (in hours) for which the purge task has to be triggered | 1
logLevel | Log level for the ZooKeeper server. ERROR by default | ERROR
jvmFlags | Default JVM flags for the ZooKeeper process | ""
dataLogDir | Dedicated data log directory | ""
configuration | Configure ZooKeeper with a custom zoo.cfg file | ""
existingConfigmap | The name of an existing ConfigMap with your custom configuration for ZooKeeper | ""
extraEnvVars | Array with extra environment variables to add to ZooKeeper nodes | []
extraEnvVarsCM | Name of existing ConfigMap containing extra env vars for ZooKeeper nodes | ""
extraEnvVarsSecret | Name of existing Secret containing extra env vars for ZooKeeper nodes | ""
command | Override default container command (useful when using custom images) | ["/scripts/setup.sh"]
args | Override default container args (useful when using custom images) | []

Statefulset parameters

Name | Description | Value
replicaCount | Number of ZooKeeper nodes | 1
revisionHistoryLimit | The number of old history to retain to allow rollback | 10
containerPorts.client | ZooKeeper client container port | 2181
containerPorts.tls | ZooKeeper TLS container port | 3181
containerPorts.follower | ZooKeeper follower container port | 2888
containerPorts.election | ZooKeeper election container port | 3888
containerPorts.adminServer | ZooKeeper admin server container port | 8080
containerPorts.metrics | ZooKeeper Prometheus Exporter container port | 9141
livenessProbe.enabled | Enable livenessProbe on ZooKeeper containers | true
livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 30
livenessProbe.periodSeconds | Period seconds for livenessProbe | 10
livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5
livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6
livenessProbe.successThreshold | Success threshold for livenessProbe | 1
livenessProbe.probeCommandTimeout | Probe command timeout for livenessProbe | 3
readinessProbe.enabled | Enable readinessProbe on ZooKeeper containers | true
readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 5
readinessProbe.periodSeconds | Period seconds for readinessProbe | 10
readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 5
readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6
readinessProbe.successThreshold | Success threshold for readinessProbe | 1
readinessProbe.probeCommandTimeout | Probe command timeout for readinessProbe | 2
startupProbe.enabled | Enable startupProbe on ZooKeeper containers | false
startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 30
startupProbe.periodSeconds | Period seconds for startupProbe | 10
startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 1
startupProbe.failureThreshold | Failure threshold for startupProbe | 15
startupProbe.successThreshold | Success threshold for startupProbe | 1
customLivenessProbe | Custom livenessProbe that overrides the default one | {}
customReadinessProbe | Custom readinessProbe that overrides the default one | {}
customStartupProbe | Custom startupProbe that overrides the default one | {}
lifecycleHooks | Lifecycle hooks for the ZooKeeper container(s) to automate configuration before or after startup | {}
resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | micro
resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
podSecurityContext.enabled | Enabled ZooKeeper pods’ Security Context | true
podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always
podSecurityContext.sysctls | Set kernel settings using the sysctl interface | []
podSecurityContext.supplementalGroups | Set filesystem extra groups | []
podSecurityContext.fsGroup | Set ZooKeeper pod’s Security Context fsGroup | 1001
containerSecurityContext.enabled | Enabled containers’ Security Context | true
containerSecurityContext.seLinuxOptions | Set SELinux options in container | {}
containerSecurityContext.runAsUser | Set containers’ Security Context runAsUser | 1001
containerSecurityContext.runAsGroup | Set containers’ Security Context runAsGroup | 1001
containerSecurityContext.runAsNonRoot | Set container’s Security Context runAsNonRoot | true
containerSecurityContext.privileged | Set container’s Security Context privileged | false
containerSecurityContext.readOnlyRootFilesystem | Set container’s Security Context readOnlyRootFilesystem | true
containerSecurityContext.allowPrivilegeEscalation | Set container’s Security Context allowPrivilegeEscalation | false
containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"]
containerSecurityContext.seccompProfile.type | Set container’s Security Context seccomp profile | RuntimeDefault
automountServiceAccountToken | Mount Service Account token in pod | false
hostAliases | ZooKeeper pods host aliases | []
podLabels | Extra labels for ZooKeeper pods | {}
podAnnotations | Annotations for ZooKeeper pods | {}
podAffinityPreset | Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | ""
podAntiAffinityPreset | Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | soft
nodeAffinityPreset.type | Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | ""
nodeAffinityPreset.key | Node label key to match. Ignored if affinity is set. | ""
nodeAffinityPreset.values | Node label values to match. Ignored if affinity is set. | []
affinity | Affinity for pod assignment | {}
nodeSelector | Node labels for pod assignment | {}
tolerations | Tolerations for pod assignment | []
topologySpreadConstraints | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | []
podManagementPolicy | The StatefulSet controller supports relaxing its ordering guarantees while preserving its uniqueness and identity guarantees. There are two valid pod management policies: OrderedReady and Parallel | Parallel
priorityClassName | Name of the existing priority class to be used by ZooKeeper pods; the priority class needs to be created beforehand | ""
schedulerName | Kubernetes pod scheduler registry | ""
updateStrategy.type | ZooKeeper statefulset strategy type | RollingUpdate
updateStrategy.rollingUpdate | ZooKeeper statefulset rolling update configuration parameters | {}
extraVolumes | Optionally specify extra list of additional volumes for the ZooKeeper pod(s) | []
extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the ZooKeeper container(s) | []
sidecars | Add additional sidecar containers to the ZooKeeper pod(s) | []
initContainers | Add additional init containers to the ZooKeeper pod(s) | []
pdb.create | Deploy a pdb object for the ZooKeeper pod | true
pdb.minAvailable | Minimum available ZooKeeper replicas | ""
pdb.maxUnavailable | Maximum unavailable ZooKeeper replicas. Defaults to 1 if both pdb.minAvailable and pdb.maxUnavailable are empty. | ""
enableServiceLinks | Whether information about services should be injected into the pod’s environment variables | true
dnsPolicy | Specifies the DNS policy for the ZooKeeper pods | ""
dnsConfig | Allows users more control on the DNS settings for a Pod. Required if dnsPolicy is set to None | {}

Traffic Exposure parameters

Name | Description | Value
service.type | Kubernetes Service type | ClusterIP
service.ports.client | ZooKeeper client service port | 2181
service.ports.tls | ZooKeeper TLS service port | 3181
service.ports.follower | ZooKeeper follower service port | 2888
service.ports.election | ZooKeeper election service port | 3888
service.nodePorts.client | Node port for clients | ""
service.nodePorts.tls | Node port for TLS | ""
service.disableBaseClientPort | Remove client port from service definitions. | false
service.sessionAffinity | Control where client requests go, to the same pod or round-robin | None
service.sessionAffinityConfig | Additional settings for the sessionAffinity | {}
service.clusterIP | ZooKeeper service Cluster IP | ""
service.loadBalancerIP | ZooKeeper service Load Balancer IP | ""
service.loadBalancerSourceRanges | ZooKeeper service Load Balancer sources | []
service.externalTrafficPolicy | ZooKeeper service external traffic policy | Cluster
service.annotations | Additional custom annotations for ZooKeeper service | {}
service.extraPorts | Extra ports to expose in the ZooKeeper service (normally used with the sidecar value) | []
service.headless.annotations | Annotations for the Headless Service | {}
service.headless.publishNotReadyAddresses | If the ZooKeeper headless service should publish DNS records for not ready pods | true
service.headless.servicenameOverride | String to partially override headless service name | ""
networkPolicy.enabled | Specifies whether a NetworkPolicy should be created | true
networkPolicy.allowExternal | Don’t require client label for connections | true
networkPolicy.allowExternalEgress | Allow the pod to access any range of port and all destinations. | true
networkPolicy.extraIngress | Add extra ingress rules to the NetworkPolicy | []
networkPolicy.extraEgress | Add extra egress rules to the NetworkPolicy | []
networkPolicy.ingressNSMatchLabels | Labels to match to allow traffic from other namespaces | {}
networkPolicy.ingressNSPodMatchLabels | Pod labels to match to allow traffic from other namespaces | {}

Other Parameters

Name | Description | Value
serviceAccount.create | Enable creation of ServiceAccount for ZooKeeper pod | true
serviceAccount.name | The name of the ServiceAccount to use. | ""
serviceAccount.automountServiceAccountToken | Allows auto mount of ServiceAccountToken on the serviceAccount created | false
serviceAccount.annotations | Additional custom annotations for the ServiceAccount | {}

Persistence parameters

Name | Description | Value
persistence.enabled | Enable ZooKeeper data persistence using PVC. If false, use emptyDir | true
persistence.existingClaim | Name of an existing PVC to use (only when deploying a single replica) | ""
persistence.storageClass | PVC Storage Class for ZooKeeper data volume | ""
persistence.accessModes | PVC Access modes | ["ReadWriteOnce"]
persistence.size | PVC Storage Request for ZooKeeper data volume | 8Gi
persistence.annotations | Annotations for the PVC | {}
persistence.labels | Labels for the PVC | {}
persistence.selector | Selector to match an existing Persistent Volume for ZooKeeper’s data PVC | {}
persistence.dataLogDir.size | PVC Storage Request for ZooKeeper’s dedicated data log directory | 8Gi
persistence.dataLogDir.existingClaim | Provide an existing PersistentVolumeClaim for ZooKeeper’s data log directory | ""
persistence.dataLogDir.selector | Selector to match an existing Persistent Volume for ZooKeeper’s data log PVC | {}

Volume Permissions parameters

Name | Description | Value
volumePermissions.enabled | Enable init container that changes the owner and group of the persistent volume | false
volumePermissions.image.registry | Init container volume-permissions image registry | REGISTRY_NAME
volumePermissions.image.repository | Init container volume-permissions image repository | REPOSITORY_NAME/os-shell
volumePermissions.image.digest | Init container volume-permissions image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | ""
volumePermissions.image.pullPolicy | Init container volume-permissions image pull policy | IfNotPresent
volumePermissions.image.pullSecrets | Init container volume-permissions image pull secrets | []
volumePermissions.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production). | nano
volumePermissions.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
volumePermissions.containerSecurityContext.enabled | Enabled init container Security Context | true
volumePermissions.containerSecurityContext.seLinuxOptions | Set SELinux options in container | {}
volumePermissions.containerSecurityContext.runAsUser | User ID for the init container | 0

Metrics parameters

Name | Description | Value
metrics.enabled | Enable Prometheus to access ZooKeeper metrics endpoint | false
metrics.service.type | ZooKeeper Prometheus Exporter service type | ClusterIP
metrics.service.port | ZooKeeper Prometheus Exporter service port | 9141
metrics.service.annotations | Annotations for Prometheus to auto-discover the metrics endpoint | {}
metrics.serviceMonitor.enabled | Create ServiceMonitor Resource for scraping metrics using Prometheus Operator | false
metrics.serviceMonitor.namespace | Namespace for the ServiceMonitor Resource (defaults to the Release Namespace) | ""
metrics.serviceMonitor.interval | Interval at which metrics should be scraped. | ""
metrics.serviceMonitor.scrapeTimeout | Timeout after which the scrape is ended | ""
metrics.serviceMonitor.additionalLabels | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | {}
metrics.serviceMonitor.selector | Prometheus instance selector labels | {}
metrics.serviceMonitor.relabelings | RelabelConfigs to apply to samples before scraping | []
metrics.serviceMonitor.metricRelabelings | MetricRelabelConfigs to apply to samples before ingestion | []
metrics.serviceMonitor.honorLabels | Specify honorLabels parameter to add the scrape endpoint | false
metrics.serviceMonitor.jobLabel | The name of the label on the target service to use as the job name in prometheus. | ""
metrics.serviceMonitor.scheme | The explicit scheme for metrics scraping. | ""
metrics.serviceMonitor.tlsConfig | TLS configuration used for scrape endpoints used by Prometheus | {}
metrics.prometheusRule.enabled | Create a PrometheusRule for Prometheus Operator | false
metrics.prometheusRule.namespace | Namespace for the PrometheusRule Resource (defaults to the Release Namespace) | ""
metrics.prometheusRule.additionalLabels | Additional labels that can be used so PrometheusRule will be discovered by Prometheus | {}
metrics.prometheusRule.rules | PrometheusRule definitions | []

TLS/SSL parameters

Name | Description | Value
tls.client.enabled | Enable TLS for client connections | false
tls.client.auth | SSL Client auth. Can be “none”, “want” or “need”. | none
tls.client.autoGenerated | Generate automatically self-signed TLS certificates for ZooKeeper client communications | false
tls.client.existingSecret | Name of the existing secret containing the TLS certificates for ZooKeeper client communications | ""
tls.client.existingSecretKeystoreKey | The secret key from the tls.client.existingSecret containing the Keystore. | ""
tls.client.existingSecretTruststoreKey | The secret key from the tls.client.existingSecret containing the Truststore. | ""
tls.client.keystorePath | Location of the KeyStore file used for Client connections | /opt/bitnami/zookeeper/config/certs/client/zookeeper.keystore.jks
tls.client.truststorePath | Location of the TrustStore file used for Client connections | /opt/bitnami/zookeeper/config/certs/client/zookeeper.truststore.jks
tls.client.passwordsSecretName | Existing secret containing Keystore and truststore passwords | ""
tls.client.passwordsSecretKeystoreKey | The secret key from the tls.client.passwordsSecretName containing the password for the Keystore. | ""
tls.client.passwordsSecretTruststoreKey | The secret key from the tls.client.passwordsSecretName containing the password for the Truststore. | ""
tls.client.keystorePassword | Password to access KeyStore if needed | ""
tls.client.truststorePassword | Password to access TrustStore if needed | ""
tls.quorum.enabled | Enable TLS for quorum protocol | false
tls.quorum.auth | SSL Quorum Client auth. Can be “none”, “want” or “need”. | none
tls.quorum.autoGenerated | Create self-signed TLS certificates. Currently only supports PEM certificates. | false
tls.quorum.existingSecret | Name of the existing secret containing the TLS certificates for ZooKeeper quorum protocol | ""
tls.quorum.existingSecretKeystoreKey | The secret key from the tls.quorum.existingSecret containing the Keystore. | ""
tls.quorum.existingSecretTruststoreKey | The secret key from the tls.quorum.existingSecret containing the Truststore. | ""
tls.quorum.keystorePath | Location of the KeyStore file used for Quorum protocol | /opt/bitnami/zookeeper/config/certs/quorum/zookeeper.keystore.jks
tls.quorum.truststorePath | Location of the TrustStore file used for Quorum protocol | /opt/bitnami/zookeeper/config/certs/quorum/zookeeper.truststore.jks
tls.quorum.passwordsSecretName | Existing secret containing Keystore and truststore passwords | ""
tls.quorum.passwordsSecretKeystoreKey | The secret key from the tls.quorum.passwordsSecretName containing the password for the Keystore. | ""
tls.quorum.passwordsSecretTruststoreKey | The secret key from the tls.quorum.passwordsSecretName containing the password for the Truststore. | ""
tls.quorum.keystorePassword | Password to access KeyStore if needed | ""
tls.quorum.truststorePassword | Password to access TrustStore if needed | ""
tls.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if tls.resources is set (tls.resources is recommended for production). | nano
tls.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
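As an illustrative sketch, enabling client TLS with an existing secret (the SECRET placeholders below are yours to substitute, following the same convention as REGISTRY_NAME) might look like:

```yaml
tls:
  client:
    enabled: true
    auth: need                        # require client certificates
    existingSecret: TLS_SECRET_NAME   # placeholder: secret with the JKS stores
    passwordsSecretName: TLS_PASSWORDS_SECRET_NAME  # placeholder
```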

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
  --set auth.client.clientUser=newUser \
  oci://REGISTRY_NAME/REPOSITORY_NAME/zookeeper

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command sets the ZooKeeper user to newUser.

NOTE: Once this chart is deployed, it is not possible to change the application’s access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application’s built-in administrative tools if available.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/zookeeper

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts. Tip: You can use the default values.yaml
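For reference, a small values.yaml combining parameters from the tables in the Parameters section might look like this sketch (the user and password shown are placeholders):

```yaml
replicaCount: 3
auth:
  client:
    enabled: true
    clientUser: newUser        # example user
    clientPassword: CHANGE_ME  # placeholder: prefer an existing secret in production
persistence:
  size: 8Gi
```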

Troubleshooting

Find more information about how to deal with common errors related to Bitnami’s Helm charts in this troubleshooting guide.

Upgrading

To 13.7.0

This version introduces image verification for security purposes. To disable it, set global.security.allowInsecureImages to true. More details at GitHub issue.

To 13.0.0

This major bump changes the following security defaults:

  • runAsGroup is changed from 0 to 1001
  • readOnlyRootFilesystem is set to true
  • resourcesPreset is changed from none to the minimum size working in our test suites (NOTE: resourcesPreset is not meant for production usage; use resources adapted to your use case instead).
  • global.compatibility.openshift.adaptSecurityContext is changed from disabled to auto.

This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.
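
As a sketch, a values.yaml reverting these defaults to their pre-13.0.0 values could look like the following (the value paths assume the standard Bitnami chart layout; verify them against the parameters table):

```yaml
# Revert the 13.0.0 security defaults to their previous values
containerSecurityContext:
  runAsGroup: 0
  readOnlyRootFilesystem: false
resourcesPreset: none
global:
  compatibility:
    openshift:
      adaptSecurityContext: disabled
```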

To 12.0.0

This new version of the chart includes the new ZooKeeper major version 3.9.x. For more information, please refer to the ZooKeeper 3.9.0 release notes.

To 11.0.0

This major version removes commonAnnotations and commonLabels from volumeClaimTemplates. Annotations and labels can now be set on volume claims using the persistence.annotations and persistence.labels values. If the previous deployment already set commonAnnotations and/or commonLabels, set persistence.annotations and/or persistence.labels to the same content to ensure a clean upgrade from the previous version without losing data.
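
For example, if the previous deployment applied annotations and labels via commonAnnotations and commonLabels, a values snippet carrying them over might look like this (the annotation and label keys below are purely illustrative):

```yaml
# Carry over values previously applied via commonAnnotations/commonLabels
persistence:
  annotations:
    example.com/managed-by: helm        # hypothetical annotation
  labels:
    app.kubernetes.io/part-of: my-apps  # hypothetical label
```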

To 10.0.0

This new version of the chart adds support for server-server authentication. The chart previously supported client-server authentication only; to avoid confusion, the related parameters have been renamed from auth.* to auth.client.*.
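
As an illustration, a values snippet that previously set auth.clientUser would now nest under auth.client (the subkey name is mirrored from the pre-10.0.0 parameter; verify it against the parameters table):

```yaml
# Before 10.0.0:
# auth:
#   clientUser: newUser
# From 10.0.0 onwards:
auth:
  client:
    clientUser: newUser
```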

To 9.0.0

This new version of the chart includes the new ZooKeeper major version 3.8.0. Upgrade compatibility is not guaranteed.

To 8.0.0

This major release renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository.

Affected values:

  • allowAnonymousLogin is deprecated.
  • containerPort, tlsContainerPort, followerContainerPort and electionContainerPort have been regrouped under the containerPorts map.
  • service.port, service.tlsClientPort, service.followerPort, and service.electionPort have been regrouped under the service.ports map.
  • updateStrategy (string) and rollingUpdatePartition are regrouped under the updateStrategy map.
  • podDisruptionBudget.* parameters are renamed to pdb.*.
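
As an illustration, the container port regrouping maps roughly as follows (the exact subkey names under containerPorts, e.g. client/tls/follower/election, and the port numbers are assumptions to verify against the parameters table):

```yaml
containerPorts:
  client: 2181    # was containerPort
  tls: 3181       # was tlsContainerPort
  follower: 2888  # was followerContainerPort
  election: 3888  # was electionContainerPort
```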

To 7.0.0

This new version renames the parameters used to configure TLS for both client and quorum.

  • service.tls.disable_base_client_port is renamed to service.disableBaseClientPort
  • service.tls.client_port is renamed to service.tlsClientPort
  • service.tls.client_enable is renamed to tls.client.enabled
  • service.tls.client_keystore_path is renamed to tls.client.keystorePath
  • service.tls.client_truststore_path is renamed to tls.client.truststorePath
  • service.tls.client_keystore_password is renamed to tls.client.keystorePassword
  • service.tls.client_truststore_password is renamed to tls.client.truststorePassword
  • service.tls.quorum_enable is renamed to tls.quorum.enabled
  • service.tls.quorum_keystore_path is renamed to tls.quorum.keystorePath
  • service.tls.quorum_truststore_path is renamed to tls.quorum.truststorePath
  • service.tls.quorum_keystore_password is renamed to tls.quorum.keystorePassword
  • service.tls.quorum_truststore_password is renamed to tls.quorum.truststorePassword
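
For example, a pre-7.0.0 client TLS configuration under service.tls would migrate like this (the renamed keys are taken from the list above; the keystore/truststore paths are hypothetical placeholders):

```yaml
# From 7.0.0 onwards, TLS is configured under tls.client and tls.quorum
tls:
  client:
    enabled: true                               # was service.tls.client_enable
    keystorePath: /tls/client.keystore.jks      # was service.tls.client_keystore_path
    truststorePath: /tls/client.truststore.jks  # was service.tls.client_truststore_path
  quorum:
    enabled: true                               # was service.tls.quorum_enable
```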

To 6.1.0

This version introduces bitnami/common, a library chart, as a dependency. More documentation about this new utility can be found here. Please make sure that you have updated the chart dependencies before executing any upgrade.

To 6.0.0

On November 13, 2020, Helm v2 support formally ended. This major version is the result of the changes required to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

To 5.21.0

A couple of parameters related to ZooKeeper metrics were renamed or removed in favor of new ones:

  • metrics.port is renamed to metrics.containerPort.
  • metrics.annotations is deprecated in favor of metrics.service.annotations.
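
As a small example, a metrics configuration migrates like so (the port number and annotation are illustrative values, not chart defaults):

```yaml
metrics:
  containerPort: 9141       # was metrics.port
  service:
    annotations:            # was metrics.annotations
      prometheus.io/scrape: "true"
```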

To 3.0.0

This new version of the chart includes the new ZooKeeper major version 3.5.5. Note that to perform an automatic upgrade of the application, each node will need to have at least one snapshot file created in the data directory. If not, the new version of the application won’t be able to start the service. Please refer to ZOOKEEPER-3056 to find ways to work around this issue if you are facing it.

To 2.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart’s statefulsets. Use the workaround below to upgrade from versions prior to 2.0.0. The following example assumes that the release name is zookeeper:

kubectl delete statefulset zookeeper-zookeeper --cascade=false

To 1.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart’s statefulsets. Use the workaround below to upgrade from versions prior to 1.0.0. The following example assumes that the release name is zookeeper:

kubectl delete statefulset zookeeper-zookeeper --cascade=false

License

Copyright © 2025 Broadcom. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.