Tanzu Application Catalog services

Bitnami package for Kong

Last Updated March 07, 2025

Kong is an open source microservice API gateway and platform designed to manage requests in high-availability, fault-tolerant, distributed microservice systems.

Overview of Kong

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kong

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository.

Introduction

This chart bootstraps a Kong deployment on a Kubernetes cluster using the Helm package manager. It also includes the kong-ingress-controller container for managing Ingress resources using Kong.

Functionality beyond the Kong core is added through plugins. Kong is built on top of reliable technologies like NGINX and provides an easy-to-use RESTful API to operate and configure the system.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kong

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

This command deploys Kong on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are set inside the resources value (check the parameter table). Setting requests is essential for production workloads; adapt them to your specific use case.

To make this process easier, the chart includes the resourcesPreset values, which automatically set the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset is discouraged in production workloads, as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
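As a sketch of explicit resource management, the following values file fragment sets requests and limits for the Kong container instead of relying on a preset. The figures shown are illustrative only and should be adapted to your workload:

```yaml
# values.yaml fragment (illustrative figures, not a recommendation)
kong:
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
```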

Rolling vs immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.
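One way to guarantee immutability is to pin the image by digest, using the image.digest value from the parameter table below. The digest shown is a placeholder; look up the real digest of the image version you have verified:

```yaml
# values.yaml fragment: pin the Kong image by digest so the deployment
# cannot change if the same tag is re-pushed with a different image.
# The digest below is a placeholder, not a real image digest.
image:
  registry: REGISTRY_NAME
  repository: REPOSITORY_NAME/kong
  digest: "sha256:0000000000000000000000000000000000000000000000000000000000000000"
```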

Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This enables Kong's native Prometheus metrics port in all pods and deploys a metrics service, which can be configured under the metrics.service section. This metrics service has the necessary annotations to be automatically scraped by Prometheus.
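A minimal sketch of enabling metrics, using the metrics.enabled and metrics.service.ports.http values from the parameter table (the port shown is the chart default):

```yaml
# values.yaml fragment: expose Kong's Prometheus metrics through a
# dedicated, annotated metrics service.
metrics:
  enabled: true
  service:
    ports:
      http: 9119
```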

Prometheus requirements

A working installation of Prometheus or Prometheus Operator is necessary for the integration to work. Install the Bitnami Prometheus Helm chart or the Bitnami Kube Prometheus Helm chart to get a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:

no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus Helm chart to get the necessary CRDs and the Prometheus Operator.
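A sketch of the ServiceMonitor integration using the metrics.serviceMonitor values from the parameter table. The release label is illustrative; match it to whatever selector your Prometheus instance uses to discover ServiceMonitors:

```yaml
# values.yaml fragment: let the chart create a ServiceMonitor that a
# Prometheus Operator instance can pick up.
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
    labels:
      release: kube-prometheus  # illustrative; match your Prometheus selector
```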

Database backend

The Bitnami Kong chart allows setting two database backends: PostgreSQL or Cassandra. For each option, there are two extra possibilities: deploy a sub-chart with the database installation or use an existing one. The list below details the different options (replace the placeholders specified between UNDERSCORES):

  • Deploy the PostgreSQL sub-chart (default)
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kong

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

  • Use an external PostgreSQL database
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kong \
    --set postgresql.enabled=false \
    --set postgresql.external.host=_HOST_OF_YOUR_POSTGRESQL_INSTALLATION_ \
    --set postgresql.external.password=_PASSWORD_OF_YOUR_POSTGRESQL_INSTALLATION_ \
    --set postgresql.external.user=_USER_OF_YOUR_POSTGRESQL_INSTALLATION_

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

  • Deploy the Cassandra sub-chart
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kong \
    --set database=cassandra \
    --set postgresql.enabled=false \
    --set cassandra.enabled=true

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

  • Use an existing Cassandra installation
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kong \
    --set database=cassandra \
    --set postgresql.enabled=false \
    --set cassandra.enabled=false \
    --set cassandra.external.hosts[0]=_CONTACT_POINT_0_OF_YOUR_CASSANDRA_CLUSTER_ \
    --set cassandra.external.hosts[1]=_CONTACT_POINT_1_OF_YOUR_CASSANDRA_CLUSTER_ \
    ...
    --set cassandra.external.user=_USER_OF_YOUR_CASSANDRA_INSTALLATION_ \
    --set cassandra.external.password=_PASSWORD_OF_YOUR_CASSANDRA_INSTALLATION_

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
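The --set flags above can also be collected in a values file. A sketch for the external PostgreSQL case, using the same placeholders specified between UNDERSCORES:

```yaml
# values.yaml fragment: point Kong at an existing PostgreSQL instance
# instead of deploying the sub-chart.
postgresql:
  enabled: false
  external:
    host: _HOST_OF_YOUR_POSTGRESQL_INSTALLATION_
    user: _USER_OF_YOUR_POSTGRESQL_INSTALLATION_
    password: _PASSWORD_OF_YOUR_POSTGRESQL_INSTALLATION_
```

Pass it at install time with helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/kong.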

DB-less

Kong 1.1 added the capability to run Kong without a database, using only in-memory storage for entities: this is called DB-less mode. When running Kong DB-less, entities are configured in a second configuration file, in YAML or JSON, using declarative configuration. As described in the official Kong Docker installation instructions, this mode is enabled by setting the environment variable KONG_DATABASE=off.

Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.

How to enable DB-less mode

  1. Set the database value to anything other than "postgresql" or "cassandra". For example, database: "off".
  2. Use the kong.extraEnvVars value to set the KONG_DATABASE environment variable:
kong:
  extraEnvVars:
    - name: KONG_DATABASE
      value: "off"
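These settings can be combined with an inline declarative configuration through the kong.declarativeConfig value from the parameter table. A hedged sketch, where the service and route are illustrative and follow Kong's declarative configuration format:

```yaml
# values.yaml fragment: DB-less mode with an inline declarative config.
# The service and route below are illustrative examples only.
database: "off"
kong:
  extraEnvVars:
    - name: KONG_DATABASE
      value: "off"
  declarativeConfig: |
    _format_version: "3.0"
    services:
      - name: example-service
        url: http://example.default.svc.cluster.local:80
        routes:
          - name: example-route
            paths:
              - /example
```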

Sidecars and Init Containers

If you need additional containers to run within the same pod as Kong (e.g. an additional metrics or logging exporter), you can do so via the sidecars config parameter. Simply define your container according to the Kubernetes container spec.

sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Similarly, you can add extra init containers using the initContainers parameter.

initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Adding extra environment variables

In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the kong.extraEnvVars property.

kong:
  extraEnvVars:
    - name: KONG_LOG_LEVEL
      value: error

Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the kong.extraEnvVarsCM or the kong.extraEnvVarsSecret values.

The Kong Ingress Controller and the Kong Migration job also allow this kind of configuration via the ingressController.extraEnvVars, ingressController.extraEnvVarsCM, ingressController.extraEnvVarsSecret, migration.extraEnvVars, migration.extraEnvVarsCM and migration.extraEnvVarsSecret values.
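A sketch of the ConfigMap-based variant, using the kong.extraEnvVarsCM value. The ConfigMap name and its contents are illustrative; create the ConfigMap beforehand, e.g. with kubectl create configmap kong-env --from-literal=KONG_LOG_LEVEL=error:

```yaml
# values.yaml fragment: load extra Kong env vars from an existing
# ConfigMap (here named kong-env, an illustrative name).
kong:
  extraEnvVarsCM: kong-env
```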

Using custom init scripts

For advanced operations, the Bitnami Kong chart allows using custom init scripts that will be mounted in /docker-entrypoint.init-db. Use a ConfigMap or a Secret (in case of sensitive data) to mount these extra scripts, and set the kong.initScriptsCM and kong.initScriptsSecret values accordingly.

kong:
  initScriptsCM: special-scripts
  initScriptsSecret: special-scripts-sensitive

Deploying extra resources

There are cases where you may want to deploy extra objects, such as KongPlugins or KongConsumers, among others. To cover this case, the chart allows adding the full specification of other objects using the extraDeploy parameter. The following example would activate a plugin at deployment time.

## Extra objects to deploy (value evaluated as a template)
##
extraDeploy:
  - |
    apiVersion: configuration.konghq.com/v1
    kind: KongPlugin
    metadata:
      name: {{ include "common.names.fullname" . }}-plugin-correlation
      namespace: {{ .Release.Namespace }}
      labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 6 }}
    config:
      header_name: my-request-id
    plugin: correlation-id

Setting Pod’s affinity

This chart allows you to set your custom affinity using the affinity parameter. Find more information about Pod affinity in the Kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
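A sketch combining two presets from the parameter table: a hard pod anti-affinity to spread Kong replicas across nodes, and a soft node affinity toward nodes with an illustrative label:

```yaml
# values.yaml fragment: affinity presets instead of a full affinity spec.
# The node label key/values are illustrative examples.
podAntiAffinityPreset: hard
nodeAffinityPreset:
  type: soft
  key: kubernetes.io/arch
  values:
    - amd64
```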

Parameters

Global parameters

Name | Description | Value
---- | ----------- | -----
global.imageRegistry | Global Docker image registry | ""
global.imagePullSecrets | Global Docker registry secret names as an array | []
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | ""
global.security.allowInsecureImages | Allows skipping image verification | false
global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto

Common parameters

Name | Description | Value
---- | ----------- | -----
kubeVersion | Force target Kubernetes version (using Helm capabilities if not set) | ""
apiVersions | Override Kubernetes API versions reported by .Capabilities | []
nameOverride | String to partially override common.names.fullname template with a string (will prepend the release name) | ""
fullnameOverride | String to fully override common.names.fullname template with a string | ""
commonAnnotations | Common annotations to add to all Kong resources (sub-charts are not considered). Evaluated as a template | {}
commonLabels | Common labels to add to all Kong resources (sub-charts are not considered). Evaluated as a template | {}
clusterDomain | Kubernetes cluster domain | cluster.local
extraDeploy | Array of extra objects to deploy with the release (evaluated as a template). | []
usePasswordFiles | Mount credentials as files instead of using environment variables | true
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false
diagnosticMode.command | Command to override all containers in the daemonset/deployment | ["sleep"]
diagnosticMode.args | Args to override all containers in the daemonset/deployment | ["infinity"]

Kong common parameters

Name | Description | Value
---- | ----------- | -----
image.registry | Kong image registry | REGISTRY_NAME
image.repository | Kong image repository | REPOSITORY_NAME/kong
image.digest | Kong image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | ""
image.pullPolicy | Kong image pull policy | IfNotPresent
image.pullSecrets | Specify docker-registry secret names as an array | []
image.debug | Enable image debug mode | false
database | Select which database backend Kong will use. Can be 'postgresql', 'cassandra' or 'off' | postgresql

Kong deployment / daemonset parameters

Name | Description | Value
---- | ----------- | -----
useDaemonset | Use a daemonset instead of a deployment. replicaCount will not take effect. | false
replicaCount | Number of Kong replicas | 2
containerSecurityContext.enabled | Enabled containers' Security Context | true
containerSecurityContext.seLinuxOptions | Set SELinux options in container | {}
containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001
containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 1001
containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true
containerSecurityContext.privileged | Set container's Security Context privileged | false
containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | true
containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false
containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"]
containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault
podSecurityContext.enabled | Enabled Kong pods' Security Context | true
podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always
podSecurityContext.sysctls | Set kernel settings using the sysctl interface | []
podSecurityContext.supplementalGroups | Set filesystem extra groups | []
podSecurityContext.fsGroup | Set Kong pod's Security Context fsGroup | 1001
updateStrategy.type | Kong update strategy | RollingUpdate
updateStrategy.rollingUpdate | Kong deployment rolling update configuration parameters | {}
automountServiceAccountToken | Mount Service Account token in pod | true
hostAliases | Add deployment host aliases | []
topologySpreadConstraints | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | []
priorityClassName | Priority Class Name | ""
schedulerName | Use an alternate scheduler, e.g. "stork". | ""
terminationGracePeriodSeconds | Seconds Kong pod needs to terminate gracefully | ""
podAnnotations | Additional pod annotations | {}
podLabels | Additional pod labels | {}
podAffinityPreset | Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | ""
podAntiAffinityPreset | Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | soft
nodeAffinityPreset.type | Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | ""
nodeAffinityPreset.key | Node label key to match. Ignored if affinity is set. | ""
nodeAffinityPreset.values | Node label values to match. Ignored if affinity is set. | []
affinity | Affinity for pod assignment | {}
nodeSelector | Node labels for pod assignment | {}
tolerations | Tolerations for pod assignment | []
extraVolumes | Array of extra volumes to be added to the Kong deployment (evaluated as template). Requires setting extraVolumeMounts | []
initContainers | Add additional init containers to the Kong pods | []
sidecars | Add additional sidecar containers to the Kong pods | []
autoscaling.enabled | Deploy a HorizontalPodAutoscaler object for the Kong deployment | false
autoscaling.minReplicas | Minimum number of replicas to scale back | 2
autoscaling.maxReplicas | Maximum number of replicas to scale out | 5
autoscaling.metrics | Metrics to use when deciding to scale the deployment (evaluated as a template) | []
pdb.create | Deploy a PodDisruptionBudget object for Kong deployment | true
pdb.minAvailable | Minimum available Kong replicas (expressed in percentage) | ""
pdb.maxUnavailable | Maximum unavailable Kong replicas (expressed in percentage) | 50%

Kong Container Parameters

Name | Description | Value
---- | ----------- | -----
kong.command | Override default container command (useful when using custom images) | []
kong.args | Override default container args (useful when using custom images) | []
kong.initScriptsCM | ConfigMap with init scripts to execute | ""
kong.initScriptsSecret | Secret with init scripts to execute | ""
kong.declarativeConfig | Declarative configuration to be loaded by Kong (evaluated as a template) | ""
kong.declarativeConfigCM | ConfigMap with declarative configuration to be loaded by Kong (evaluated as a template) | ""
kong.extraEnvVars | Array containing extra env vars to configure Kong | []
kong.extraEnvVarsCM | ConfigMap containing extra env vars to configure Kong | ""
kong.extraEnvVarsSecret | Secret containing extra env vars to configure Kong (in case of sensitive data) | ""
kong.extraVolumeMounts | Array of extra volume mounts to be added to the Kong container (evaluated as template). Normally used with extraVolumes. | []
kong.containerPorts.proxyHttp | Kong proxy HTTP container port | 8000
kong.containerPorts.proxyHttps | Kong proxy HTTPS container port | 8443
kong.containerPorts.adminHttp | Kong admin HTTP container port | 8001
kong.containerPorts.adminHttps | Kong admin HTTPS container port | 8444
kong.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if kong.resources is set (kong.resources is recommended for production). | medium
kong.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
kong.livenessProbe.enabled | Enable livenessProbe on Kong containers | true
kong.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 120
kong.livenessProbe.periodSeconds | Period seconds for livenessProbe | 10
kong.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5
kong.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6
kong.livenessProbe.successThreshold | Success threshold for livenessProbe | 1
kong.readinessProbe.enabled | Enable readinessProbe on Kong containers | true
kong.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 30
kong.readinessProbe.periodSeconds | Period seconds for readinessProbe | 10
kong.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 5
kong.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6
kong.readinessProbe.successThreshold | Success threshold for readinessProbe | 1
kong.startupProbe.enabled | Enable startupProbe on Kong containers | false
kong.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 10
kong.startupProbe.periodSeconds | Period seconds for startupProbe | 15
kong.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 3
kong.startupProbe.failureThreshold | Failure threshold for startupProbe | 20
kong.startupProbe.successThreshold | Success threshold for startupProbe | 1
kong.customLivenessProbe | Override default liveness probe (kong container) | {}
kong.customReadinessProbe | Override default readiness probe (kong container) | {}
kong.customStartupProbe | Override default startup probe (kong container) | {}
kong.lifecycleHooks | Lifecycle hooks (kong container) | {}

Traffic Exposure Parameters

Name | Description | Value
---- | ----------- | -----
service.type | Kubernetes Service type | ClusterIP
service.exposeAdmin | Add the Kong Admin ports to the service | false
service.disableHttpPort | Disable Kong proxy HTTP and Kong admin HTTP ports | false
service.ports.proxyHttp | Kong proxy service HTTP port | 80
service.ports.proxyHttps | Kong proxy service HTTPS port | 443
service.ports.adminHttp | Kong admin service HTTP port (only if service.exposeAdmin=true) | 8001
service.ports.adminHttps | Kong admin service HTTPS port (only if service.exposeAdmin=true) | 8444
service.nodePorts.proxyHttp | NodePort for the Kong proxy HTTP endpoint | ""
service.nodePorts.proxyHttps | NodePort for the Kong proxy HTTPS endpoint | ""
service.nodePorts.adminHttp | NodePort for the Kong admin HTTP endpoint | ""
service.nodePorts.adminHttps | NodePort for the Kong admin HTTPS endpoint | ""
service.sessionAffinity | Control where client requests go, to the same pod or round-robin | None
service.sessionAffinityConfig | Additional settings for the sessionAffinity | {}
service.clusterIP | Cluster internal IP of the service | ""
service.externalTrafficPolicy | External traffic policy managing client source IP preservation | ""
service.loadBalancerIP | loadBalancerIP if kong service type is LoadBalancer | ""
service.loadBalancerSourceRanges | Kong service Load Balancer sources | []
service.annotations | Annotations for Kong service | {}
service.extraPorts | Extra ports to expose (normally used with the sidecar value) | []
networkPolicy.enabled | Specifies whether a NetworkPolicy should be created | true
networkPolicy.allowExternal | Don't require server label for connections | true
networkPolicy.allowExternalEgress | Allow the pod to access any range of port and all destinations. | true
networkPolicy.kubeAPIServerPorts | List of possible endpoints to kube-apiserver (limit to your cluster settings to increase security) | []
networkPolicy.extraIngress | Add extra ingress rules to the NetworkPolicy | []
networkPolicy.extraEgress | Add extra egress rules to the NetworkPolicy | []
networkPolicy.ingressNSMatchLabels | Labels to match to allow traffic from other namespaces | {}
networkPolicy.ingressNSPodMatchLabels | Pod labels to match to allow traffic from other namespaces | {}
ingress.enabled | Enable ingress controller resource | false
ingress.ingressClassName | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | ""
ingress.pathType | Ingress path type | ImplementationSpecific
ingress.apiVersion | Force Ingress API version (automatically detected if not set) | ""
ingress.hostname | Default host for the ingress resource | kong.local
ingress.path | Ingress path | /
ingress.annotations | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | {}
ingress.tls | Enable TLS configuration for the host defined at ingress.hostname parameter | false
ingress.selfSigned | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | false
ingress.extraHosts | The list of additional hostnames to be covered with this ingress record. | []
ingress.extraPaths | Additional arbitrary path/backend objects | []
ingress.extraTls | The tls configuration for additional hostnames to be covered with this ingress record. | []
ingress.secrets | If you're providing your own certificates, please use this to add the certificates as secrets | []
ingress.extraRules | Additional rules to be covered with this ingress record | []

Kong Ingress Controller Container Parameters

Name | Description | Value
---- | ----------- | -----
ingressController.enabled | Enable/disable the Kong Ingress Controller | true
ingressController.image.registry | Kong Ingress Controller image registry | REGISTRY_NAME
ingressController.image.repository | Kong Ingress Controller image name | REPOSITORY_NAME/kong-ingress-controller
ingressController.image.digest | Kong Ingress Controller image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | ""
ingressController.image.pullPolicy | Kong Ingress Controller image pull policy | IfNotPresent
ingressController.image.pullSecrets | Specify docker-registry secret names as an array | []
ingressController.proxyReadyTimeout | Maximum time (in seconds) to wait for the Kong container to be ready | 300
ingressController.ingressClass | Name of the class to register Kong Ingress Controller (useful when having other Ingress Controllers in the cluster) | kong
ingressController.command | Override default container command (useful when using custom images) | []
ingressController.args | Override default container args (useful when using custom images) | []
ingressController.extraEnvVars | Array containing extra env vars to configure Kong Ingress Controller | []
ingressController.extraEnvVarsCM | ConfigMap containing extra env vars to configure Kong Ingress Controller | ""
ingressController.extraEnvVarsSecret | Secret containing extra env vars to configure Kong Ingress Controller (in case of sensitive data) | ""
ingressController.extraVolumeMounts | Array of extra volume mounts to be added to the Kong Ingress Controller container (evaluated as template). Normally used with extraVolumes. | []
ingressController.containerPorts.health | Kong Ingress Controller health container port | 10254
ingressController.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if ingressController.resources is set (ingressController.resources is recommended for production). | nano
ingressController.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
ingressController.livenessProbe.enabled | Enable livenessProbe on Kong Ingress Controller containers | true
ingressController.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 120
ingressController.livenessProbe.periodSeconds | Period seconds for livenessProbe | 10
ingressController.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5
ingressController.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6
ingressController.livenessProbe.successThreshold | Success threshold for livenessProbe | 1
ingressController.readinessProbe.enabled | Enable readinessProbe on Kong Ingress Controller containers | true
ingressController.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 30
ingressController.readinessProbe.periodSeconds | Period seconds for readinessProbe | 10
ingressController.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 5
ingressController.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6
ingressController.readinessProbe.successThreshold | Success threshold for readinessProbe | 1
ingressController.startupProbe.enabled | Enable startupProbe on Kong Ingress Controller containers | false
ingressController.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 10
ingressController.startupProbe.periodSeconds | Period seconds for startupProbe | 15
ingressController.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 3
ingressController.startupProbe.failureThreshold | Failure threshold for startupProbe | 20
ingressController.startupProbe.successThreshold | Success threshold for startupProbe | 1
ingressController.customLivenessProbe | Override default liveness probe (Kong Ingress Controller container) | {}
ingressController.customReadinessProbe | Override default readiness probe (Kong Ingress Controller container) | {}
ingressController.customStartupProbe | Override default startup probe (Kong Ingress Controller container) | {}
ingressController.lifecycleHooks | Lifecycle hooks (Kong Ingress Controller container) | {}
ingressController.serviceAccount.create | Enable the creation of a ServiceAccount for Kong pods | true
ingressController.serviceAccount.name | Name of the created ServiceAccount (name generated using common.names.fullname template otherwise) | ""
ingressController.serviceAccount.automountServiceAccountToken | Auto-mount the service account token in the pod | false
ingressController.serviceAccount.annotations | Additional custom annotations for the ServiceAccount | {}
ingressController.rbac.create | Create the necessary RBAC resources for the Ingress Controller to work | true
ingressController.rbac.rules | Custom RBAC rules | []

Kong Migration job Parameters

Name | Description | Value
---- | ----------- | -----
migration.command | Override default container command (useful when using custom images) | []
migration.args | Override default container args (useful when using custom images) | []
migration.extraEnvVars | Array containing extra env vars to configure the Kong migration job | []
migration.extraEnvVarsCM | ConfigMap containing extra env vars to configure the Kong migration job | ""
migration.extraEnvVarsSecret | Secret containing extra env vars to configure the Kong migration job (in case of sensitive data) | ""
migration.extraVolumeMounts | Array of extra volume mounts to be added to the Kong Container (evaluated as template). Normally used with extraVolumes. | []
migration.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if migration.resources is set (migration.resources is recommended for production). | nano
migration.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
migration.automountServiceAccountToken | Mount Service Account token in pod | true
migration.hostAliases | Add deployment host aliases | []
migration.annotations | Add annotations to the job | {}
migration.podLabels | Additional pod labels | {}
migration.podAnnotations | Additional pod annotations | {}

PostgreSQL Parameters

Name | Description | Value
---- | ----------- | -----
postgresql.enabled | Switch to enable or disable the PostgreSQL helm chart | true
postgresql.auth.postgresPassword | Password for the "postgres" admin user | ""
postgresql.auth.username | Name for a custom user to create | kong
postgresql.auth.password | Password for the custom user to create | ""
postgresql.auth.database | Name for a custom database to create | kong
postgresql.auth.existingSecret | Name of existing secret to use for PostgreSQL credentials | ""
postgresql.architecture | PostgreSQL architecture (standalone or replication) | standalone
postgresql.primary.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if primary.resources is set (primary.resources is recommended for production). | nano
postgresql.primary.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
postgresql.external.host | Database host | ""
postgresql.external.port | Database port number | 5432
postgresql.external.user | Non-root username for Kong | kong
postgresql.external.password | Password for the non-root username for Kong | ""
postgresql.external.database | Kong database name | kong
postgresql.external.existingSecret | Name of an existing secret resource containing the database credentials | ""
postgresql.external.existingSecretPasswordKey | Name of an existing secret key containing the database credentials | ""

Cassandra Parameters

Name | Description | Value
---- | ----------- | -----
cassandra.enabled | Switch to enable or disable the Cassandra helm chart | false
cassandra.dbUser.user | Cassandra admin user | kong
cassandra.dbUser.password | Password for cassandra.dbUser.user. Randomly generated if empty | ""
cassandra.dbUser.existingSecret | Name of existing secret to use for Cassandra credentials | ""
cassandra.replicaCount | Number of Cassandra replicas | 1
cassandra.external.hosts | List of Cassandra hosts | []
cassandra.external.port | Cassandra port number | 9042
cassandra.external.user | Username of the external cassandra installation | ""
cassandra.external.password | Password of the external cassandra installation | ""
cassandra.external.existingSecret | Name of an existing secret resource containing the Cassandra credentials | ""
cassandra.external.existingSecretPasswordKey | Name of an existing secret key containing the Cassandra credentials | ""
cassandra.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | large
cassandra.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}

Metrics Parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `metrics.enabled` | Enable the export of Prometheus metrics | `false` |
| `metrics.containerPorts.http` | Prometheus metrics HTTP container port | `9119` |
| `metrics.service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `metrics.service.clusterIP` | Cluster internal IP of the service | `""` |
| `metrics.service.annotations` | Annotations for Prometheus metrics service | `{}` |
| `metrics.service.ports.http` | Prometheus metrics service HTTP port | `9119` |
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor resource for scraping metrics using Prometheus Operator | `false` |
| `metrics.serviceMonitor.namespace` | Namespace which Prometheus is running in | `""` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped | `30s` |
| `metrics.serviceMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.labels` | Additional labels so the ServiceMonitor will be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
| `metrics.serviceMonitor.honorLabels` | honorLabels chooses the metric's labels on collisions with target labels | `false` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus | `""` |
| `metrics.serviceMonitor.serviceAccount` | Service account used by Prometheus Operator | `""` |
| `metrics.serviceMonitor.rbac.create` | Create the necessary RBAC resources so Prometheus Operator can reach Kong's namespace | `true` |
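
For example, the metrics parameters above can be combined to enable the Prometheus exporter together with a ServiceMonitor. The `monitoring` namespace and the `release: prometheus` label below are placeholders that must match your own Prometheus Operator installation.

```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: monitoring      # placeholder: namespace where Prometheus runs
    interval: 30s
    labels:
      release: prometheus      # placeholder: must match your Prometheus serviceMonitorSelector
```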

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
  --set service.exposeAdmin=true oci://REGISTRY_NAME/REPOSITORY_NAME/kong

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command exposes the Kong admin ports inside the Kong service.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/kong

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts. Tip: You can use the default values.yaml
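
For instance, the same admin exposure shown earlier with `--set` can be expressed in a values file. This is a minimal sketch built around the `service.exposeAdmin` parameter used in the previous example:

```yaml
# values.yaml (minimal sketch)
service:
  exposeAdmin: true   # expose the Kong admin ports inside the Kong service
```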

Troubleshooting

Find more information about how to deal with common errors related to Bitnami’s Helm charts in this troubleshooting guide.

Upgrading

To 15.1.0

This version introduces image verification for security purposes. To disable it, set global.security.allowInsecureImages to true. More details in the associated GitHub issue.

It is necessary to specify the existing passwords while performing an upgrade to ensure the secrets are not updated with invalid, randomly generated passwords. Remember to specify the existing values of the postgresql.postgresqlPassword or cassandra.password parameters when upgrading the chart:

helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kong \
    --set database=postgresql \
    --set postgresql.enabled=true \
    --set postgresql.postgresqlPassword=[POSTGRESQL_PASSWORD]

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts. Note: you need to substitute the placeholders [POSTGRESQL_PASSWORD] with the values obtained from instructions in the installation notes.

To 15.0.0

This major version updates the PostgreSQL version from 14.x.x to 17.x.x. Instead of overwriting it in this chart's values, the chart now automatically uses the version defined in the postgresql subchart.

To 14.0.0

This major updates the PostgreSQL subchart to its newest major, 16.0.0, which uses PostgreSQL 17.x. Follow the official instructions to upgrade to 17.x.

To 13.0.0

This major updates the Cassandra subchart to its newest major, 12.0.0. Here you can find more information about the changes introduced in that version.

To 12.0.0

This major bump changes the following security defaults:

  • runAsGroup is changed from 0 to 1001
  • readOnlyRootFilesystem is set to true
  • resourcesPreset is changed from none to the minimum size working in our test suites (NOTE: resourcesPreset is not meant for production usage; use resources adapted to your use case instead).
  • global.compatibility.openshift.adaptSecurityContext is changed from disabled to auto.

This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.
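
If you need the previous behavior, a values snippet along these lines can revert the new defaults. The exact value paths below follow common Bitnami chart conventions and are an assumption; check this chart's values.yaml for the authoritative names before using them.

```yaml
# Sketch: restore the pre-12.0.0 security defaults (value paths are assumptions)
containerSecurityContext:
  runAsGroup: 0                  # previous default
  readOnlyRootFilesystem: false  # previous default
resourcesPreset: none            # previous default
global:
  compatibility:
    openshift:
      adaptSecurityContext: disabled  # previous default
```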

To 11.0.0

This major release bumps the PostgreSQL chart version to 14.x.x; no major issues are expected during the upgrade.

To 10.0.0

This major updates the PostgreSQL subchart to its newest major, 13.0.0. Here you can find more information about the changes introduced in that version.

To 9.0.0

This major updates the Cassandra subchart to its newest major, 10.0.0. Here you can find more information about the changes introduced in that version.

To 8.0.0

This major updates the PostgreSQL subchart to its newest major, 12.0.0. Here you can find more information about the changes introduced in that version.

To 6.0.0

The postgresql sub-chart was upgraded to 11.x.x. Several values of the sub-chart were changed, so please check the upgrade notes.

No issues are expected during the upgrade.

To 5.0.0

The cassandra sub-chart was upgraded to 9.x.x. Several values of the sub-chart were changed, so please check the upgrade notes.

No issues are expected during the upgrade.

To 3.1.0

Kong Ingress Controller version was bumped to new major version, 1.x.x. The associated CRDs were updated accordingly.

To 3.0.0

On November 13, 2020, Helm v2 support formally ended. This major version is the result of the changes required to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

What changes were introduced in this major version?

  • Previous versions of this Helm Chart use apiVersion: v1 (installable by both Helm 2 and 3), this Helm Chart was updated to apiVersion: v2 (installable by Helm 3 only). Here you can find more information about the apiVersion field.
  • Move dependency information from the requirements.yaml to the Chart.yaml
  • After running helm dependency update, a Chart.lock file is generated containing the same structure used in the previous requirements.lock
  • The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts
  • This chart now depends on PostgreSQL 10 instead of PostgreSQL 9. Apart from the changes described in this section, there are other major changes because the master/slave nomenclature was replaced by primary/readReplica. Here you can find more information about the changes introduced.

Considerations when upgrading to this version

  • Upgrading to this version using Helm v2 is not supported, as this version no longer supports Helm v2
  • If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the official Helm documentation about migrating from Helm v2 to v3
  • If you want to upgrade to this version from a previous one installed with Helm v3, it should be done by reusing the PVC that holds the PostgreSQL data of your previous release. To do so, follow the instructions below (the following example assumes that the release name is kong):

NOTE: Please, create a backup of your database before running any of those actions.

Export secrets and required values to update
export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default kong-postgresql -o jsonpath="{.data.password}" | base64 -d)
export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=kong,app.kubernetes.io/name=postgresql,role=master -o jsonpath="{.items[0].metadata.name}")
Delete statefulsets

Delete PostgreSQL statefulset. Notice the option --cascade=false:

kubectl delete statefulsets.apps kong-postgresql --cascade=false
Upgrade the chart release
helm upgrade kong oci://REGISTRY_NAME/REPOSITORY_NAME/kong \
    --set postgresql.postgresqlPassword=$POSTGRESQL_PASSWORD \
    --set postgresql.persistence.existingClaim=$POSTGRESQL_PVC

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Force new statefulset to create a new pod for postgresql
kubectl delete pod kong-postgresql-0

Finally, you should see the lines below in the PostgreSQL container logs:

$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=kong,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
...
postgresql 08:05:12.59 INFO  ==> Deploying PostgreSQL with persisted data...
...

To 4.0.0

This major updates the Cassandra subchart to its newest major, 4.0.0. Here you can find more information about the changes introduced in those versions.

To 2.0.0

PostgreSQL and Cassandra dependencies versions were bumped to new major versions, 9.x.x and 6.x.x respectively. Both of these include breaking changes and hence backwards compatibility is no longer guaranteed.

In order to properly migrate your data to this new version:
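
The migration steps themselves are not included in this excerpt. As a starting point, a logical backup of the bundled PostgreSQL database can be taken before upgrading. The commands below are a sketch that assumes the release name kong, the default pod name kong-postgresql-0, the secret key used elsewhere in this guide, and the postgres client tools being available inside the pod:

```shell
# Sketch: dump the Kong database before upgrading (names are assumptions)
export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default kong-postgresql \
  -o jsonpath="{.data.password}" | base64 -d)

# Run pg_dump inside the PostgreSQL pod and copy the dump locally
kubectl exec kong-postgresql-0 -- env PGPASSWORD="$POSTGRESQL_PASSWORD" \
  pg_dump -U kong -d kong -f /tmp/kong-backup.sql
kubectl cp kong-postgresql-0:/tmp/kong-backup.sql ./kong-backup.sql
```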

License

Copyright © 2025 Broadcom. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.