Tanzu Application Catalog services

Bitnami package for Valkey Cluster

Last Updated March 07, 2025

Valkey is an open source (BSD) high-performance key/value datastore that supports a variety of workloads such as caching and message queues, and can act as a primary database.

Overview of Valkey Cluster

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository.

Introduction

This chart bootstraps a Valkey deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Choose between Valkey Helm Chart and Valkey Cluster Helm Chart

You can choose either of the two Valkey Helm charts for deploying a Valkey cluster. While the Valkey Helm chart deploys a primary-replica cluster using Valkey Sentinel, the Valkey Cluster Helm chart deploys a Valkey Cluster with sharding. The main features of each chart are the following:

Valkey | Valkey Cluster
Supports multiple databases | Supports only one database. Better if you have a big dataset
Single write point (single primary) | Multiple write points (multiple primary nodes)
Valkey Topology (diagram) | Valkey Cluster Topology (diagram)

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys Valkey on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

NOTE: If you get a timeout error waiting for the hook to complete, increase the default timeout (300s) to a higher value, for example:

helm install --timeout 600s my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset values, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, in production workloads using resourcesPreset is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.

Update credentials

The Bitnami Valkey Cluster chart, when upgrading, reuses the secret previously rendered by the chart or the one specified in existingSecret. To update credentials, use one of the following:

  • Run helm upgrade specifying a new password in password
  • Run helm upgrade specifying a new secret in existingSecret
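For instance, a password rotation through helm upgrade could look like the following sketch. The release name and the new password are placeholders; the script only composes and prints the command so it can be reviewed before running:

```shell
# Hypothetical credential rotation: pass the new password at upgrade time.
# REGISTRY_NAME and REPOSITORY_NAME are placeholders, as in the rest of this guide.
NEW_PASSWORD="newSecretPassword"
echo "helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster --set password=${NEW_PASSWORD}"
```

The equivalent with existingSecret would point the chart at a pre-created secret instead of passing the password inline.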

Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This will deploy a sidecar container with redis_exporter in all pods and a metrics service, which can be configured under the metrics.service section. This metrics service will have the necessary annotations to be automatically scraped by Prometheus.

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:

no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus helm chart for having the necessary CRDs and the Prometheus Operator.

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.

Use a different Valkey version

To modify the application version used in this chart, specify a different version of the image using the image.tag parameter and/or a different repository using the image.repository parameter.

Cluster topology

To successfully set up the cluster, it needs at least 3 primary nodes. The total number of nodes is calculated as nodes = numOfPrimaryNodes + numOfPrimaryNodes * replicas. Hence, the defaults cluster.nodes = 6 and cluster.replicas = 1 mean that 3 primary and 3 replica nodes will be deployed by the chart.
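The arithmetic above can be checked with a quick sketch using the chart defaults:

```shell
# Default chart values: 6 nodes total, 1 replica per primary.
nodes=6
replicas=1
# nodes = primaries + primaries * replicas  =>  primaries = nodes / (replicas + 1)
primaries=$(( nodes / (replicas + 1) ))
echo "primaries=$primaries replicas_total=$(( nodes - primaries ))"
# prints: primaries=3 replicas_total=3
```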

By default the Valkey Cluster is not accessible from outside the Kubernetes cluster. To access it from outside, set cluster.externalAccess.enabled=true at deployment time. On the first installation only, this creates 6 LoadBalancer services, one for each Valkey node. Once you have the external IPs of each service, perform an upgrade passing those IPs in the cluster.externalAccess.service.loadBalancerIP array.

The replicas will be read-only replicas of the primary nodes. By default only one service is exposed (when not using the external access mode). You will connect your client to the exposed service, regardless of whether you need to read or write. When a write operation arrives at a replica, it will redirect the client to the proper primary node. For example, when using valkey-cli you will need to provide the -c flag so that it follows the redirection automatically.

Using the external access mode, you can connect to any of the pods and the replicas will redirect the client in the same way as explained before, but all the IPs will be public.

In case a primary crashes, one of its replicas will be promoted to primary. The slots stored by the crashed primary will be unavailable until the replica finishes the promotion. If a primary and all of its replicas crash, the cluster will be down until one of them is up again. To avoid downtime, it is possible to configure the number of Valkey nodes with cluster.nodes and the number of replicas that will be assigned to each primary with cluster.replicas. For example:

  • cluster.nodes=9
  • cluster.replicas=2

With the values above, the cluster will have 3 primaries, and each primary will have 2 replicas.

NOTE: By default cluster.init is set to true in order to initialize the Valkey Cluster on the first installation. If for testing purposes you only want to deploy or upgrade the nodes while avoiding the creation of the cluster, set cluster.init to false.

Adding a new node to the cluster

A job executed via a post-upgrade hook allows you to add a new node. To use it, provide the following parameters to the upgrade:

  • Set password to the password used at installation time. If you did not provide a password, follow the instructions from the NOTES.txt to get the generated password.
  • Set the desired number of nodes at cluster.nodes.
  • Set the number of current nodes at cluster.update.currentNumberOfNodes.
  • Set to true cluster.update.addNodes.

The following is an example of adding one more node:

helm upgrade --timeout 600s <release> --set "password=${VALKEY_PASSWORD},cluster.nodes=7,cluster.update.addNodes=true,cluster.update.currentNumberOfNodes=6" oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Where VALKEY_PASSWORD is the password obtained with the command that appears after the first installation of the Helm chart. The cluster will remain up while pods are restarted one by one, as long as the quorum is not lost.

External Access

If you are using external access, adding a new node requires two upgrades. First, upgrade the release to add a new Valkey node and obtain a new LoadBalancer service for it. For example:

helm upgrade <release> --set "password=${VALKEY_PASSWORD},cluster.externalAccess.enabled=true,cluster.externalAccess.service.type=LoadBalancer,cluster.externalAccess.service.loadBalancerIP[0]=<loadbalancerip-0>,cluster.externalAccess.service.loadBalancerIP[1]=<loadbalancerip-1>,cluster.externalAccess.service.loadBalancerIP[2]=<loadbalancerip-2>,cluster.externalAccess.service.loadBalancerIP[3]=<loadbalancerip-3>,cluster.externalAccess.service.loadBalancerIP[4]=<loadbalancerip-4>,cluster.externalAccess.service.loadBalancerIP[5]=<loadbalancerip-5>,cluster.externalAccess.service.loadBalancerIP[6]=,cluster.nodes=7,cluster.init=false" oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Important: leave the loadBalancerIP parameter for the new node empty, as shown above, to avoid an index error.

As we want to add a new node, we set cluster.nodes=7 and leave the LoadBalancerIP for the new node empty, so the cluster will provision the correct one. VALKEY_PASSWORD is the password obtained with the command that appears after the first installation of the Helm chart. At this point, the new Valkey pod will remain in CrashLoopBackOff state until the LoadBalancerIP for the new service is provided. Wait until the cluster provides the new LoadBalancerIP for the new service, then perform the second upgrade:

helm upgrade <release> --set "password=${VALKEY_PASSWORD},cluster.externalAccess.enabled=true,cluster.externalAccess.service.type=LoadBalancer,cluster.externalAccess.service.loadBalancerIP[0]=<loadbalancerip-0>,cluster.externalAccess.service.loadBalancerIP[1]=<loadbalancerip-1>,cluster.externalAccess.service.loadBalancerIP[2]=<loadbalancerip-2>,cluster.externalAccess.service.loadBalancerIP[3]=<loadbalancerip-3>,cluster.externalAccess.service.loadBalancerIP[4]=<loadbalancerip-4>,cluster.externalAccess.service.loadBalancerIP[5]=<loadbalancerip-5>,cluster.externalAccess.service.loadBalancerIP[6]=<loadbalancerip-6>,cluster.nodes=7,cluster.init=false,cluster.update.addNodes=true,cluster.update.newExternalIPs[0]=<loadbalancerip-6>" oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Note that we provide the new IPs at cluster.update.newExternalIPs and the flag cluster.update.addNodes=true to enable the creation of the job that adds the new node, and this time we set the LoadBalancerIP of the new service instead of leaving it empty.

NOTE: To avoid the creation of the Job that initializes the Valkey Cluster again, you will need to provide cluster.init=false.

Scale down the cluster

To scale down the Valkey Cluster, follow these steps:

First, perform a normal upgrade setting the cluster.nodes value to the desired number of nodes. It should not be less than 6, and the difference between the current number of nodes and the desired number should be less than or equal to cluster.replicas, to avoid removing a primary node and its replicas at the same time. You also need to provide the password using the password parameter. For example, having more than 6 nodes, to scale down the cluster to 6 nodes:

helm upgrade --timeout 600s <release> --set "password=${VALKEY_PASSWORD},cluster.nodes=6" oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

The cluster will continue working during the update as long as the quorum is not lost.

NOTE: To avoid the creation of the Job that initializes the Valkey Cluster again, you will need to provide cluster.init=false.
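The constraints described above (a minimum of 6 nodes, and a reduction no larger than cluster.replicas) can be checked before upgrading; a sketch with example values:

```shell
# Sketch: validate a scale-down target against the constraints stated above.
# current/target/replicas are example values; adapt to your deployment.
current=7; target=6; replicas=1
if [ "$target" -ge 6 ] && [ $(( current - target )) -le "$replicas" ]; then
  echo "scale-down from $current to $target is safe"
else
  echo "scale-down from $current to $target is NOT safe"
fi
# prints: scale-down from 7 to 6 is safe
```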

Once all the nodes are ready, get the list of nodes in the cluster using the CLUSTER NODES command. You will still see references to the removed nodes. Write down the node IDs of the nodes that show fail. In the following example, the cluster was scaled down from 7 to 6 nodes.

valkey-cli -a $VALKEY_PASSWORD CLUSTER NODES

...
b23bcffa1fd64368d445c1d9bd9aeb92641105f7 10.0.0.70:6379@16379 slave,fail - 1645633139060 0 0 connected
...

In each cluster node, execute the following command. Replace the NODE_ID placeholder.

valkey-cli -a $VALKEY_PASSWORD CLUSTER FORGET NODE_ID

In the previous example the commands would look like this in each cluster node:

valkey-cli -a $VALKEY_PASSWORD CLUSTER FORGET b23bcffa1fd64368d445c1d9bd9aeb92641105f7
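The forget step can be scripted. A sketch that extracts the IDs of failed nodes from CLUSTER NODES output; here the output is a captured sample line, while against a live cluster you would pipe from valkey-cli instead:

```shell
# Sketch: collect the IDs of nodes marked "fail" from CLUSTER NODES output.
# In a real cluster, replace the printf with:
#   valkey-cli -a "$VALKEY_PASSWORD" CLUSTER NODES
nodes_output='b23bcffa1fd64368d445c1d9bd9aeb92641105f7 10.0.0.70:6379@16379 slave,fail - 1645633139060 0 0 connected'
printf '%s\n' "$nodes_output" | awk '$3 ~ /fail/ {print $1}'
# prints: b23bcffa1fd64368d445c1d9bd9aeb92641105f7
```

Each printed ID can then be passed to CLUSTER FORGET on every remaining cluster node.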

Using password file

To use a password file for Valkey you need to create a secret containing the password.

NOTE: It is important that the file containing the password be named valkey-password.

Then deploy the Helm chart using the secret name as a parameter:

usePassword=true
usePasswordFiles=true
existingSecret=valkey-password-secret
metrics.enabled=true

Securing traffic using TLS

TLS support can be enabled in the chart by specifying the tls.* parameters when creating a release. The following parameters should be configured to properly enable TLS support in the cluster:

  • tls.enabled: Enable TLS support. Defaults to false
  • tls.existingSecret: Name of the secret that contains the certificates. No defaults.
  • tls.certFilename: Certificate filename. No defaults.
  • tls.certKeyFilename: Certificate key filename. No defaults.
  • tls.certCAFilename: CA Certificate filename. No defaults.

For example:

First, create the secret with the certificates files:

kubectl create secret generic certificates-tls-secret --from-file=./cert.pem --from-file=./cert.key --from-file=./ca.pem

Then, use the following parameters:

tls.enabled="true"
tls.existingSecret="certificates-tls-secret"
tls.certFilename="cert.pem"
tls.certKeyFilename="cert.key"
tls.certCAFilename="ca.pem"
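If you do not already have certificates, a throwaway set for testing could be generated with openssl as in this sketch. The file names match the parameters above; the subjects are arbitrary and this self-signed, short-lived setup is not suitable for production:

```shell
# Sketch: generate a test CA and a server certificate signed by it.
# Produces ca.pem, cert.key and cert.pem, matching the tls.* values above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -days 1 -subj "/CN=valkey-test-ca"
openssl req -newkey rsa:2048 -nodes -keyout cert.key -out cert.csr \
  -subj "/CN=valkey"
openssl x509 -req -in cert.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out cert.pem -days 1
ls cert.pem cert.key ca.pem
```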

Sidecars and Init Containers

If you have a need for additional containers to run within the same pod as Valkey (e.g. an additional metrics or logging exporter), you can do so via the sidecars config parameter. Simply define your container according to the Kubernetes container spec.

sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Similarly, you can add extra init containers using the initContainers parameter.

initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Adding extra environment variables

In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property.

extraEnvVars:
  - name: VALKEY_WHATEVER
    value: value

Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret values.
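The ConfigMap or Secret referenced by those values can be created with kubectl; a sketch that only prints the commands (the object names and variable are hypothetical examples):

```shell
# Hypothetical: create the objects referenced by extraEnvVarsCM and
# extraEnvVarsSecret. Printed here for review rather than executed.
echo "kubectl create configmap valkey-extra-env --from-literal=VALKEY_WHATEVER=value"
echo "kubectl create secret generic valkey-extra-env-secret --from-literal=VALKEY_WHATEVER=value"
```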

Metrics

The chart can optionally start a metrics exporter for Prometheus. The metrics endpoint (port 9121) is exposed in the service. Metrics can be scraped from within the cluster using a configuration similar to the example Prometheus scrape configuration. If metrics are to be scraped from outside the cluster, the Kubernetes API proxy can be utilized to access the endpoint.

Host Kernel Settings

Valkey may require some changes in the kernel of the host machine to work as expected, in particular increasing the somaxconn value and disabling transparent huge pages. To do so, you can set up a privileged initContainer with the sysctlImage config values, for example:

sysctlImage:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -c
    - |-
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled

Alternatively, for Kubernetes 1.12+ you can set podSecurityContext.sysctls which will configure sysctls for primary and replica pods. Example:

podSecurityContext:
  sysctls:
  - name: net.core.somaxconn
    value: "10000"

Note that this will not disable transparent huge pages.

Helm Upgrade

By default cluster.init is set to true in order to initialize the Valkey Cluster on the first installation. If for testing purposes you only want to deploy or upgrade the nodes while avoiding the creation of the cluster, set cluster.init to false.

Backup and restore

To back up and restore Valkey Cluster Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool.

These are the steps you will usually follow to back up and restore your Valkey Cluster data:

  • Install Velero on the source and destination clusters.
  • Use Velero to back up the PersistentVolumes (PVs) used by the deployment on the source cluster.
  • Use Velero to restore the backed-up PVs on the destination cluster.
  • Create a new deployment on the destination cluster with the same chart, deployment name, credentials and other parameters as the original. This new deployment will use the restored PVs and hence the original data.

NetworkPolicy

To enable network policy for Valkey, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.

For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for all pods in the namespace:

kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"

With NetworkPolicy enabled, only pods with the generated client label will be able to connect to Valkey. This label will be displayed in the output after a successful install.

With networkPolicy.ingressNSMatchLabels, pods from other namespaces can connect to Valkey. Set networkPolicy.ingressNSPodMatchLabels to match pod labels in the matched namespace. For example, for a namespace labeled valkey=external and pods in that namespace labeled valkey-client=true, the fields should be set as follows:

networkPolicy:
  enabled: true
  ingressNSMatchLabels:
    valkey: external
  ingressNSPodMatchLabels:
    valkey-client: true

Setting Pod’s affinity

This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod affinity in the Kubernetes documentation.

As an alternative, you can use any of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the XXX.podAffinityPreset, XXX.podAntiAffinityPreset, or XXX.nodeAffinityPreset parameters.

Persistence

By default, the chart mounts a Persistent Volume at the /bitnami path. The volume is created using dynamic volume provisioning.

If persistence is disabled, an emptyDir volume is used. This is only recommended for testing environments because the required information included in the nodes.conf file is then lost. That file contains the relationship between the nodes and the cluster. For example, if any node goes down or becomes faulty, when it starts again it is a self-proclaimed primary and also acts as an independent node outside the main cluster, as it does not have the necessary information to connect to it.

To reconnect the failed node, run the following:

See nodes.sh

$ cat /bitnami/valkey/data/nodes.sh
declare -A host_2_ip_array=([valkey-node-0]="192.168.192.6" [valkey-node-1]="192.168.192.2" [valkey-node-2]="192.168.192.4" [valkey-node-3]="192.168.192.5" [valkey-node-4]="192.168.192.3" [valkey-node-5]="192.168.192.7" )

Run valkey-cli and execute CLUSTER MEET with the IP and port of any other node in the cluster. After this, the node is connected to the main cluster again.

$ REDISCLI_AUTH=bitnami valkey-cli
127.0.0.1:6379> cluster meet 192.168.192.7 6379
OK
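Rejoining can also be scripted against the host-to-IP map from nodes.sh; a sketch using the sample IPs from the output above (in a pod, the IPs would come from the host_2_ip_array map in /bitnami/valkey/data/nodes.sh):

```shell
# Sketch: print the CLUSTER MEET command for each known peer IP.
# The IPs are sample values; a recovering node only needs to MEET one
# healthy peer, but iterating over the map is harmless.
for ip in 192.168.192.6 192.168.192.2; do
  echo "cluster meet $ip 6379"
done
```

In practice you would pipe these commands to valkey-cli with REDISCLI_AUTH set, as in the example above.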

Parameters

Global parameters

Name | Description | Value
global.imageRegistry | Global Docker image registry | ""
global.imagePullSecrets | Global Docker registry secret names as an array | []
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | ""
global.storageClass | DEPRECATED: use global.defaultStorageClass instead | ""
global.valkey.password | Valkey password (overrides password) | ""
global.security.allowInsecureImages | Allows skipping image verification | false
global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto

Valkey Cluster Common parameters

Name | Description | Value
nameOverride | String to partially override common.names.fullname template (will maintain the release name) | ""
fullnameOverride | String to fully override common.names.fullname template | ""
clusterDomain | Kubernetes Cluster Domain | cluster.local
commonAnnotations | Annotations to add to all deployed objects | {}
commonLabels | Labels to add to all deployed objects | {}
extraDeploy | Array of extra objects to deploy with the release (evaluated as a template) | []
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false
diagnosticMode.command | Command to override all containers in the deployment | ["sleep"]
diagnosticMode.args | Args to override all containers in the deployment | ["infinity"]
image.registry | Valkey cluster image registry | REGISTRY_NAME
image.repository | Valkey cluster image repository | REPOSITORY_NAME/valkey-cluster
image.digest | Valkey cluster image digest in the way sha256:aa... Please note this parameter, if set, will override the tag | ""
image.pullPolicy | Valkey cluster image pull policy | IfNotPresent
image.pullSecrets | Specify docker-registry secret names as an array | []
image.debug | Enable image debug mode | false
networkPolicy.enabled | Enable creation of NetworkPolicy resources | true
networkPolicy.allowExternal | The Policy model to apply | true
networkPolicy.allowExternalEgress | Allow the pod to access any range of port and all destinations | true
networkPolicy.extraIngress | Add extra ingress rules to the NetworkPolicy | []
networkPolicy.extraEgress | Add extra egress rules to the NetworkPolicy | []
networkPolicy.ingressNSMatchLabels | Labels to match to allow traffic from other namespaces | {}
networkPolicy.ingressNSPodMatchLabels | Pod labels to match to allow traffic from other namespaces | {}
serviceAccount.create | Specifies whether a ServiceAccount should be created | true
serviceAccount.name | The name of the ServiceAccount to create | ""
serviceAccount.annotations | Annotations for Valkey Service Account | {}
serviceAccount.automountServiceAccountToken | Automount API credentials for a service account | false
rbac.create | Specifies whether RBAC resources should be created | false
rbac.role.rules | Rules to create. It follows the role specification | []
podSecurityContext.enabled | Enable Valkey pod Security Context | true
podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always
podSecurityContext.supplementalGroups | Set filesystem extra groups | []
podSecurityContext.fsGroup | Group ID for the pods | 1001
podSecurityContext.sysctls | Set namespaced sysctls for the pods | []
podDisruptionBudget | DEPRECATED: please use pdb instead | {}
pdb.create | Create a PodDisruptionBudget | true
pdb.minAvailable | Min number of pods that must still be available after the eviction | ""
pdb.maxUnavailable | Max number of pods that can be unavailable after the eviction | ""
containerSecurityContext.enabled | Enable containers' Security Context | true
containerSecurityContext.seLinuxOptions | Set SELinux options in container | {}
containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001
containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 1001
containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true
containerSecurityContext.privileged | Set container's Security Context privileged | false
containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | true
containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false
containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"]
containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault
usePassword | Use password authentication | true
password | Valkey password (ignored if existingSecret set) | ""
existingSecret | Name of existing secret object (for password authentication) | ""
existingSecretPasswordKey | Name of key containing password to be retrieved from the existing secret | ""
usePasswordFiles | Mount passwords as files instead of environment variables | true
tls.enabled | Enable TLS support for replication traffic | false
tls.authClients | Require clients to authenticate or not | true
tls.autoGenerated | Generate automatically self-signed TLS certificates | false
tls.existingSecret | The name of the existing secret that contains the TLS certificates | ""
tls.certificatesSecret | DEPRECATED: use tls.existingSecret instead | ""
tls.certFilename | Certificate filename | ""
tls.certKeyFilename | Certificate key filename | ""
tls.certCAFilename | CA Certificate filename | ""
tls.dhParamsFilename | File containing DH params (in order to support DH based ciphers) | ""
service.ports.valkey | Kubernetes Valkey service port | 6379
service.nodePorts.valkey | Node port for Valkey | ""
service.extraPorts | Extra ports to expose in the service (normally used with the sidecar value) | []
service.annotations | Provide any additional annotations which may be required | {}
service.labels | Additional labels for Valkey service | {}
service.type | Service type for default Valkey service | ClusterIP
service.clusterIP | Service Cluster IP | ""
service.loadBalancerIP | Load balancer IP if service.type is LoadBalancer | ""
service.loadBalancerSourceRanges | Service Load Balancer sources | []
service.externalTrafficPolicy | Service external traffic policy | Cluster
service.sessionAffinity | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | None
service.sessionAffinityConfig | Additional settings for the sessionAffinity | {}
service.headless.annotations | Annotations for the headless service | {}
persistence.enabled | Enable persistence on Valkey | true
persistence.path | Path to mount the volume at (useful when using different Valkey images) | /bitnami/valkey/data
persistence.subPath | The subdirectory of the volume to mount to, useful in dev environments and one PV for multiple services | ""
persistence.storageClass | Storage class of backing PVC | ""
persistence.annotations | Persistent Volume Claim annotations | {}
persistence.labels | Persistent Volume Claim labels | {}
persistence.accessModes | Persistent Volume Access Modes | ["ReadWriteOnce"]
persistence.size | Size of data volume | 8Gi
persistence.matchLabels | Persistent Volume selectors | {}
persistence.matchExpressions | matchExpressions Persistent Volume selectors | {}
persistentVolumeClaimRetentionPolicy.enabled | Controls if and how PVCs are deleted during the lifecycle of a StatefulSet | false
persistentVolumeClaimRetentionPolicy.whenScaled | Volume retention behavior when the replica count of the StatefulSet is reduced | Retain
persistentVolumeClaimRetentionPolicy.whenDeleted | Volume retention behavior that applies when the StatefulSet is deleted | Retain
volumePermissions.enabled | Enable init container that changes volume permissions (for cases where the default k8s runAsUser and fsUser values do not work) | false
volumePermissions.image.registry | Init container volume-permissions image registry | REGISTRY_NAME
volumePermissions.image.repository | Init container volume-permissions image repository | REPOSITORY_NAME/os-shell
volumePermissions.image.digest | Init container volume-permissions image digest in the way sha256:aa... Please note this parameter, if set, will override the tag | ""
volumePermissions.image.pullPolicy | Init container volume-permissions image pull policy | IfNotPresent
volumePermissions.image.pullSecrets | Specify docker-registry secret names as an array | []
volumePermissions.containerSecurityContext.enabled | Enable Containers' Security Context | true
volumePermissions.containerSecurityContext.seLinuxOptions | Set SELinux options in container | {}
volumePermissions.containerSecurityContext.runAsUser | User ID for the containers | 0
volumePermissions.containerSecurityContext.privileged | Run container as privileged | false
volumePermissions.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production) | nano
volumePermissions.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}

Valkey statefulset parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `valkey.command` | Valkey entrypoint string. The command `valkey-server` is executed if this is not provided | `[]` |
| `valkey.args` | Arguments for the provided command if needed | `[]` |
| `valkey.updateStrategy.type` | Valkey statefulset strategy type | `RollingUpdate` |
| `valkey.updateStrategy.rollingUpdate.partition` | Partition update strategy | `0` |
| `valkey.podManagementPolicy` | Statefulset Pod management policy; it needs to be `Parallel` to be able to complete the cluster join | `Parallel` |
| `valkey.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `valkey.hostAliases` | Deployment pod host aliases | `[]` |
| `valkey.hostNetwork` | Host networking requested for this pod. Use the host's network namespace. | `false` |
| `valkey.useAOFPersistence` | Whether to use AOF Persistence mode or not | `yes` |
| `valkey.containerPorts.valkey` | Valkey port | `6379` |
| `valkey.containerPorts.bus` | The busPort should be obtained by adding 10000 to the valkeyPort. By default: 10000 + 6379 = 16379 | `16379` |
| `valkey.lifecycleHooks` | LifecycleHook to set additional configuration before or after startup. Evaluated as a template | `{}` |
| `valkey.extraVolumes` | Extra volumes to add to the deployment | `[]` |
| `valkey.extraVolumeMounts` | Extra volume mounts to add to the container | `[]` |
| `valkey.customLivenessProbe` | Override default liveness probe | `{}` |
| `valkey.customReadinessProbe` | Override default readiness probe | `{}` |
| `valkey.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `valkey.initContainers` | Extra init containers to add to the deployment | `[]` |
| `valkey.sidecars` | Extra sidecar containers to add to the deployment | `[]` |
| `valkey.podLabels` | Additional labels for Valkey pod | `{}` |
| `valkey.priorityClassName` | Valkey Primary pod priorityClassName | `""` |
| `valkey.defaultConfigOverride` | Optional default Valkey configuration for the nodes | `""` |
| `valkey.configmap` | Additional Valkey configuration for the nodes | `""` |
| `valkey.extraEnvVars` | An array to add extra environment variables | `[]` |
| `valkey.extraEnvVarsCM` | ConfigMap with extra environment variables | `""` |
| `valkey.extraEnvVarsSecret` | Secret with extra environment variables | `""` |
| `valkey.podAnnotations` | Valkey additional annotations | `{}` |
| `valkey.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `valkey.resources` is set (`valkey.resources` is recommended for production). | `nano` |
| `valkey.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `valkey.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `valkey.shareProcessNamespace` | Enable shared process namespace in a pod. | `false` |
| `valkey.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `valkey.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `5` |
| `valkey.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `5` |
| `valkey.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `valkey.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `5` |
| `valkey.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `valkey.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `valkey.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `5` |
| `valkey.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` |
| `valkey.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` |
| `valkey.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `5` |
| `valkey.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `valkey.startupProbe.enabled` | Enable startupProbe | `false` |
| `valkey.startupProbe.path` | Path to check for startupProbe | `/` |
| `valkey.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `300` |
| `valkey.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `valkey.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `valkey.startupProbe.failureThreshold` | Failure threshold for startupProbe | `6` |
| `valkey.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `valkey.podAffinityPreset` | Valkey pod affinity preset. Ignored if `valkey.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `valkey.podAntiAffinityPreset` | Valkey pod anti-affinity preset. Ignored if `valkey.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `valkey.nodeAffinityPreset.type` | Valkey node affinity preset type. Ignored if `valkey.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `valkey.nodeAffinityPreset.key` | Valkey node label key to match. Ignored if `valkey.affinity` is set. | `""` |
| `valkey.nodeAffinityPreset.values` | Valkey node label values to match. Ignored if `valkey.affinity` is set. | `[]` |
| `valkey.affinity` | Affinity settings for Valkey pod assignment | `{}` |
| `valkey.nodeSelector` | Node labels for Valkey pods assignment | `{}` |
| `valkey.tolerations` | Tolerations for Valkey pods assignment | `[]` |
| `valkey.topologySpreadConstraints` | Pod topology spread constraints for Valkey pod | `[]` |
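As an illustration of how several of the parameters above combine, the following `values.yaml` fragment is a sketch only: the parameter names come from the table, but the resource figures and the environment variable are hypothetical and must be adapted to your workload.

```yaml
# Hypothetical values.yaml fragment -- adjust figures to your workload.
valkey:
  useAOFPersistence: "yes"       # keep AOF persistence enabled (the default)
  podAntiAffinityPreset: hard    # force pods onto different nodes
  resources:                     # explicit requests/limits instead of resourcesPreset
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
  extraEnvVars:
    - name: EXAMPLE_VAR          # hypothetical variable, for illustration only
      value: "example"
```

A file like this would be passed at install time with `helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster`.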

Cluster update job parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `updateJob.activeDeadlineSeconds` | Number of seconds the Job to create the cluster will be waiting for the Nodes to be ready. | `600` |
| `updateJob.command` | Container command (using container default if not set) | `[]` |
| `updateJob.args` | Container args (using container default if not set) | `[]` |
| `updateJob.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `updateJob.hostAliases` | Deployment pod host aliases | `[]` |
| `updateJob.helmHook` | Job Helm hook | `post-upgrade` |
| `updateJob.annotations` | Job annotations | `{}` |
| `updateJob.podAnnotations` | Job pod annotations | `{}` |
| `updateJob.podLabels` | Pod extra labels | `{}` |
| `updateJob.extraEnvVars` | An array to add extra environment variables | `[]` |
| `updateJob.extraEnvVarsCM` | ConfigMap containing extra environment variables | `""` |
| `updateJob.extraEnvVarsSecret` | Secret containing extra environment variables | `""` |
| `updateJob.extraVolumes` | Extra volumes to add to the deployment | `[]` |
| `updateJob.extraVolumeMounts` | Extra volume mounts to add to the container | `[]` |
| `updateJob.initContainers` | Extra init containers to add to the deployment | `[]` |
| `updateJob.podAffinityPreset` | Update job pod affinity preset. Ignored if `updateJob.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `updateJob.podAntiAffinityPreset` | Update job pod anti-affinity preset. Ignored if `updateJob.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `updateJob.nodeAffinityPreset.type` | Update job node affinity preset type. Ignored if `updateJob.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `updateJob.nodeAffinityPreset.key` | Update job node label key to match. Ignored if `updateJob.affinity` is set. | `""` |
| `updateJob.nodeAffinityPreset.values` | Update job node label values to match. Ignored if `updateJob.affinity` is set. | `[]` |
| `updateJob.affinity` | Affinity for update job pods assignment | `{}` |
| `updateJob.nodeSelector` | Node labels for update job pods assignment | `{}` |
| `updateJob.tolerations` | Tolerations for update job pods assignment | `[]` |
| `updateJob.priorityClassName` | Priority class name | `""` |
| `updateJob.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `updateJob.resources` is set (`updateJob.resources` is recommended for production). | `nano` |
| `updateJob.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |

Cluster management parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `cluster.init` | Enable the initialization of the Valkey Cluster | `true` |
| `cluster.nodes` | The number of primary nodes should always be >= 3, otherwise cluster creation will fail | `6` |
| `cluster.replicas` | Number of replicas for every primary in the cluster | `1` |
| `cluster.externalAccess.enabled` | Enable access to the Valkey cluster from outside | `false` |
| `cluster.externalAccess.hostMode` | Set cluster preferred endpoint type as hostname | `false` |
| `cluster.externalAccess.service.disableLoadBalancerIP` | Disable use of `Service.spec.loadBalancerIP` | `false` |
| `cluster.externalAccess.service.loadBalancerIPAnnotaion` | Name of the annotation used to specify a fixed IP for the service. Disables `Service.spec.loadBalancerIP` if not empty | `""` |
| `cluster.externalAccess.service.type` | Type for the services used to expose every Pod | `LoadBalancer` |
| `cluster.externalAccess.service.port` | Port for the services used to expose every Pod | `6379` |
| `cluster.externalAccess.service.loadBalancerIP` | Array of load balancer IPs for each Valkey node. Length must be the same as `cluster.nodes` | `[]` |
| `cluster.externalAccess.service.loadBalancerSourceRanges` | Service Load Balancer sources | `[]` |
| `cluster.externalAccess.service.annotations` | Annotations to add to the services used to expose every Pod of the Valkey Cluster | `{}` |
| `cluster.update.addNodes` | Boolean to specify if you want to add nodes after the upgrade | `false` |
| `cluster.update.currentNumberOfNodes` | Number of currently deployed Valkey nodes | `6` |
| `cluster.update.currentNumberOfReplicas` | Number of currently deployed Valkey replicas | `1` |
| `cluster.update.newExternalIPs` | External IPs obtained from the services for the new nodes to add to the cluster | `[]` |
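As a sketch of how the `cluster.update.*` parameters fit together, growing a default 6-node cluster by two nodes on `helm upgrade` might use values like these (the node counts are illustrative, not prescriptive):

```yaml
# Hypothetical upgrade values: scale from 6 to 8 nodes.
cluster:
  nodes: 8                      # new total number of nodes after the upgrade
  update:
    addNodes: true              # ask the update job to join the new nodes
    currentNumberOfNodes: 6     # nodes deployed before this upgrade
    currentNumberOfReplicas: 1  # replicas per primary before this upgrade
```

When external access is enabled, `cluster.update.newExternalIPs` would additionally need to list the LoadBalancer IPs assigned to the new nodes.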

Metrics sidecar parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `metrics.enabled` | Start a sidecar prometheus exporter | `false` |
| `metrics.image.registry` | Valkey exporter image registry | `REGISTRY_NAME` |
| `metrics.image.repository` | Valkey exporter image name | `REPOSITORY_NAME/redis-exporter` |
| `metrics.image.digest` | Valkey exporter image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | `""` |
| `metrics.image.pullPolicy` | Valkey exporter image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `metrics.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `metrics.resources` is set (`metrics.resources` is recommended for production). | `nano` |
| `metrics.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `metrics.extraArgs` | Extra arguments for the binary; possible values here | `{}` |
| `metrics.extraEnvVars` | Array with extra environment variables to add to Valkey exporter | `[]` |
| `metrics.containerPorts.http` | Metrics HTTP container port | `9121` |
| `metrics.podAnnotations` | Additional annotations for Metrics exporter pod | `{}` |
| `metrics.podLabels` | Additional labels for Metrics exporter pod | `{}` |
| `metrics.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `metrics.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `metrics.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `metrics.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `1001` |
| `metrics.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `metrics.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `metrics.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `true` |
| `metrics.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `metrics.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `metrics.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `metrics.serviceMonitor.enabled` | If `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) | `false` |
| `metrics.serviceMonitor.namespace` | Optional namespace which Prometheus is running in | `""` |
| `metrics.serviceMonitor.interval` | How frequently to scrape metrics (use by default, falling back to Prometheus' default) | `""` |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
| `metrics.serviceMonitor.labels` | ServiceMonitor extra labels | `{}` |
| `metrics.serviceMonitor.annotations` | ServiceMonitor annotations | `{}` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in prometheus. | `""` |
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
| `metrics.prometheusRule.enabled` | Set this to `true` to create prometheusRules for Prometheus operator | `false` |
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` |
| `metrics.prometheusRule.namespace` | Namespace where prometheusRules resource should be created | `""` |
| `metrics.prometheusRule.rules` | Create specified rules, check values for an example. | `[]` |
| `metrics.priorityClassName` | Metrics exporter pod priorityClassName | `""` |
| `metrics.service.type` | Kubernetes Service type (valkey metrics) | `ClusterIP` |
| `metrics.service.loadBalancerIP` | Use serviceLoadBalancerIP to request a specific static IP, otherwise leave blank | `""` |
| `metrics.service.annotations` | Annotations for the services to monitor. | `{}` |
| `metrics.service.labels` | Additional labels for the metrics service | `{}` |
| `metrics.service.ports.http` | Metrics HTTP service port | `9121` |
| `metrics.service.clusterIP` | Service Cluster IP | `""` |
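To connect the parameters above, enabling the exporter together with a ServiceMonitor might look like the following sketch. The `namespace` and the `release: prometheus` label are hypothetical; they must match where your Prometheus Operator runs and which labels its instance selects on.

```yaml
metrics:
  enabled: true                  # run the exporter sidecar (HTTP port 9121)
  serviceMonitor:
    enabled: true                # requires the Prometheus Operator CRDs in the cluster
    namespace: monitoring        # hypothetical namespace where Prometheus runs
    interval: 30s                # scrape every 30 seconds
    labels:
      release: prometheus        # hypothetical label your Prometheus selector expects
```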

Sysctl Image parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `sysctlImage.enabled` | Enable an init container to modify Kernel settings | `false` |
| `sysctlImage.command` | sysctlImage command to execute | `[]` |
| `sysctlImage.registry` | sysctlImage Init container registry | `REGISTRY_NAME` |
| `sysctlImage.repository` | sysctlImage Init container repository | `REPOSITORY_NAME/os-shell` |
| `sysctlImage.digest` | sysctlImage Init container digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | `""` |
| `sysctlImage.pullPolicy` | sysctlImage Init container pull policy | `IfNotPresent` |
| `sysctlImage.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `sysctlImage.mountHostSys` | Mount the host `/sys` folder to `/host-sys` | `false` |
| `sysctlImage.containerSecurityContext.enabled` | Enable Containers' Security Context | `true` |
| `sysctlImage.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `sysctlImage.containerSecurityContext.runAsUser` | User ID for the containers. | `0` |
| `sysctlImage.containerSecurityContext.privileged` | Run container as privileged | `true` |
| `sysctlImage.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `sysctlImage.resources` is set (`sysctlImage.resources` is recommended for production). | `nano` |
| `sysctlImage.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
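A typical use of this init container is the kernel tuning commonly recommended for Redis-compatible datastores. The sketch below assumes that tuning (the specific `somaxconn` value and the transparent huge pages setting are conventional recommendations, not values mandated by the chart) and relies on `mountHostSys` exposing the host's `/sys` at `/host-sys`:

```yaml
# Hypothetical sysctl tuning -- runs as a privileged init container.
sysctlImage:
  enabled: true
  mountHostSys: true             # needed to reach /host-sys below
  command:
    - /bin/sh
    - -c
    - |-
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
```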

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
  --set password=secretpassword \
    oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command sets the Valkey server password to secretpassword.

NOTE: Once this chart is deployed, it is not possible to change the application’s access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application’s built-in administrative tools if available.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/valkey-cluster

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Tip: You can use the default values.yaml.

Note for minikube users: Current versions of minikube (v0.24.1 at the time of writing) provision hostPath persistent volumes that are only writable by root. Using chart defaults causes pod failure for the Valkey pod as it attempts to write to the /bitnami directory. See minikube issue 1990 for more information.
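Tying the two approaches together, the `--set` flags from the earlier install example translate into a values file like this minimal sketch, built only from parameters shown in this document:

```yaml
# Equivalent of --set password=secretpassword, plus explicit cluster sizing.
password: secretpassword   # same value as in the --set example above
cluster:
  nodes: 6                 # default topology: 3 primaries + 3 replicas
  replicas: 1              # one replica per primary
```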

Troubleshooting

Find more information about how to deal with common errors related to Bitnami’s Helm charts in this troubleshooting guide.

Upgrading

To 2.1.0

This version introduces image verification for security purposes. To disable it, set global.security.allowInsecureImages to true. More details at GitHub issue.

To 2.0.0

This major version updates all references from master/slave to primary/replica, following the upstream project's strategy:

  • The term master has been replaced by the term primary. Therefore, parameters prefixed with master are now prefixed with primary.
  • Environment variables previously prefixed with VALKEY_MASTER or VALKEY_SENTINEL_MASTER now use VALKEY_PRIMARY and VALKEY_SENTINEL_PRIMARY.

Consequences:

Backwards compatibility is not guaranteed. To upgrade to 2.0.0, install a new release of the Valkey chart and migrate the data from your previous release: create a backup of the database and restore it on the new release, as explained in the Backup and restore section.

License

Copyright © 2025 Broadcom. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.