This topic tells you how to install Helm charts in a Kubernetes Space on Tanzu Platform.
Helm applications on Spaces give you flexibility in deploying, promoting, and managing application workloads packaged as Helm charts. Helm applications enable you to use your existing build-tooling investments while benefiting from the features of Kubernetes Spaces to streamline deployment and maximize efficiency.
Prepare your application environment for Helm
The following sections describe how to prepare your environment for the Helm application.
Before you begin
Before you can create a Helm-enabled Space, ensure that you have the following:
- Infrastructure in Tanzu Platform, which includes:
  - A Project in your organization
  - A `run` cluster group in the Project
  - A cluster in the cluster group to run your packaged application
  - Availability Targets configured in the Project
For information about how to create a cluster group, cluster, and Availability Targets, see Set up the infrastructure to create an application environment.
Install the Flux CD Source Capability
To install the Flux CD Source Capability:
- Go to Spaces > Capabilities.
- Click the FluxCD Source Controller Capability.
- Click Install Package.
- Select the cluster group that contains the cluster that is part of the Availability Target.
- Click Install Package to install the Capability.
Install the Flux CD Helm Capability
To install the Flux CD Helm Capability:
- Go to Spaces > Capabilities.
- Click the FluxCD Helm Controller Capability.
- Click Install Package.
- Select the cluster group that contains the cluster that is part of the Availability Target.
- Click Install Package to install the Capability.
(Optional) Install the registry pull only credentials installer Capability
Tanzu Platform requires a Secret and a SecretExport resource if the Helm charts and container apps are hosted in a private registry. The registry pull only credentials installer Capability provides this functionality.
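For reference, the resources that the Capability manages look roughly like the following sketch. The resource names are illustrative, and the secretgen.carvel.dev API group is an assumption based on the Carvel secretgen-controller; the Capability creates the actual resources for you, so you do not need to apply this YAML yourself.

```yaml
# Hypothetical sketch only; the Capability creates the real resources.
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-credentials
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {"auths": {"registry.example.com": {"username": "my-user", "password": "my-password"}}}
---
# A SecretExport makes the Secret importable by other namespaces.
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretExport
metadata:
  name: registry-pull-credentials
spec:
  toNamespaces:
  - "*"
```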
To install the registry pull only credentials installer Capability:
- Go to Spaces > Capabilities.
- Click the Registry Pull Only Credentials Installer Capability.
- Click Install Package.
- Select the cluster group that contains the cluster that is part of the Availability Target.
- Expand Advanced Configuration.
- Provide the username, password, and registry URL as input parameters to the installation. You can either enter your input using the form by clicking the vertical ellipsis next to the username, password, and registry entries in the table, or provide the input as YAML.
- Click Install Package to install the Capability.
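If you choose the YAML input method, the values resemble the following sketch. The exact parameter keys are defined by the package's value schema, so treat these key names as assumptions and confirm them against the form fields in the UI:

```yaml
# Illustrative only; confirm the key names against the package's value schema.
registry_url: registry.example.com
username: my-pull-user
password: my-pull-password
```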
Create a Space
To create a Space:
- Go to Spaces > Overview.
- Click Create Space and then click Step by step.
- Fill in the necessary details, including the Space name and the Availability Target.
- Select the fluxcd-helm.tanzu.vmware.com Profile. This Profile includes the two Helm-related Capabilities and a single Trait that introduces the `fluxcd-helmrelease-installer` ServiceAccount. The ServiceAccount is granted all verbs on the following resources:
  - ConfigMap
  - Deployment
  - Pod
  - PodDisruptionBudget
  - ReplicaSet
  - Secret
  - ServiceAccount
  - Service
- Finish the Space creation process.
Deploy the Helm chart
The following sections describe how to deploy the podinfo Helm chart to a newly created Space.
Create Helm configuration
To create Helm configuration:
- Define the location of the Helm chart through the GitRepository, HelmRepository, and OCIRepository APIs that the Flux CD Source Controller package provides. Use the following YAML to reference the standard Helm repository location:

```yaml
# helmrepository.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: podinfo
spec:
  interval: 5m
  url: https://stefanprodan.github.io/podinfo
```
For more information about these APIs, see the Source Controller documentation.
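If you host charts as OCI artifacts instead of in a classic Helm repository, an OCIRepository can take the place of the HelmRepository. The following is a sketch based on the upstream Flux CD API (the podinfo chart is also published to ghcr.io as an OCI artifact); adjust the apiVersion to match the Source Controller version that your Capability installs:

```yaml
# ocirepository.yaml (alternative to helmrepository.yaml; illustrative)
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: podinfo-oci
spec:
  interval: 5m
  url: oci://ghcr.io/stefanprodan/charts/podinfo
  ref:
    tag: 6.5.4
```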
- The HelmRelease API enables continuous server-side orchestration of Helm releases through Helm actions, such as install, upgrade, test, uninstall, and rollback. This API also enables correction of configuration drift from the wanted release state. Decide which Helm chart to install (fetched from the referenced repository) and provide configuration values. For example:

```yaml
# helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
spec:
  serviceAccountName: fluxcd-helmrelease-installer
  interval: 10m
  timeout: 5m
  chart:
    spec:
      chart: podinfo
      version: '6.5.4'
      sourceRef:
        kind: HelmRepository
        name: podinfo
      interval: 5m
  releaseName: podinfo
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
  valuesFrom:
  - kind: Secret
    name: podinfo-values
```
- Use the following YAML to define your secret:

```yaml
# podinfo-values.yaml
apiVersion: v1
kind: Secret
metadata:
  name: podinfo-values
type: Opaque
stringData:
  values.yaml: |
    replicaCount: 3
```
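For context, the HelmRelease and the values Secret are roughly equivalent to running the Helm CLI by hand, as in the following sketch; the difference is that Flux CD reapplies this state continuously rather than once:

```
helm repo add podinfo https://stefanprodan.github.io/podinfo
helm upgrade --install podinfo podinfo/podinfo --version 6.5.4 --set replicaCount=3
```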
- To expose the podinfo application, use the HTTPRoute API as in this example:

```yaml
# route.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: podinfo-main
  annotations:
    healthcheck.gslb.tanzu.vmware.com/service: podinfo
    healthcheck.gslb.tanzu.vmware.com/path: /
    healthcheck.gslb.tanzu.vmware.com/port: "9898"
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: default-gateway
    sectionName: http-podinfo
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: podinfo
      port: 9898
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
```
Your directory structure now looks like this:

```
> tree
.
├── helmrelease.yaml
├── helmrepository.yaml
├── podinfo-values.yaml
└── route.yaml
```
Deploy Helm resources
To deploy Helm resources:
- In the context of your Space, run:

```
tanzu space use
tanzu deploy --only .
```
- To verify that the Helm chart was downloaded successfully, view the status of HelmRepository by running:

```
kubectl get helmrepository podinfo -oyaml
```

Example output:

```yaml
...
status:
  conditions:
  - lastTransitionTime: "2024-05-14T01:04:11Z"
    message: 'stored artifact: revision ''sha256:604cc6699bc91ac6015f7324a41f43f079a5b96d559c51bcf812e9d9f1beda94'''
    observedGeneration: 1
    status: "True" # This should be "True"
    type: Ready
```
- To verify that the Helm chart was installed successfully, view the status of HelmRelease by running:

```
kubectl get helmrelease podinfo -oyaml
```

Example output:

```yaml
...
status:
  conditions:
  - lastTransitionTime: "2024-05-14T01:08:48Z"
    message: Release reconciliation succeeded
    status: "True" # This should be "True"
    type: Ready
```
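Instead of inspecting the full YAML, you can block until both resources report readiness. This sketch uses standard kubectl functionality; the timeout values are arbitrary:

```
kubectl wait helmrepository/podinfo --for=condition=Ready --timeout=2m
kubectl wait helmrelease/podinfo --for=condition=Ready --timeout=5m
```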
(Optional) Use ContainerApp resources to track and manage Helm applications
The ContainerApp resource has built-in flexibility. Because of this flexibility, ContainerApp can represent and configure arbitrary live applications, even ones that `tanzu build` did not build. For example, this can be the podinfo application within the installed Helm chart.

The relatedRefs specification field of the ContainerApp CR is used to configure your app to integrate with the ContainerApp-provided functions. Various related references exist that add support for different settings. The most important references are:
- `kubernetes.list-replicas`, which enables the discovery of app instances (pods). This reference is required.
- `kubernetes.set-secret-env`, which adds support for setting app environment variables with values stored in Secrets.
- `kubernetes.scale-replicas`, which adds support for scaling the number of app instances.
- `kubernetes.scale-resources`, which adds support for scaling CPU, memory, and other resource types.
- `kubernetes.delete`, which adds support for deleting related resources when an app is deleted.
To use ContainerApp
to track and manage application details:
- Ensure that the container-app.tanzu.vmware.com Capability is available in your Space to work with container applications.
- Use the following example YAML to define a ContainerApp resource:

```yaml
# containerapp.yaml
apiVersion: apps.tanzu.vmware.com/v1
kind: ContainerApp
metadata:
  name: podinfo
  annotations:
    containerapp.apps.tanzu.vmware.com/class: "kubernetes"
spec:
  description: Podinfo application from Helm
  contact:
    slack: "#my-helm-apps"
  image: ghcr.io/stefanprodan/podinfo
  replicas: 1
  relatedRefs:
  - for: kubernetes.list-replicas
    kind: Pod
    labelSelector: app.kubernetes.io/name=podinfo
  - for: kubernetes.scale-replicas
    kind: Secret
    name: podinfo-values
    keyPath: .data['values.yaml']->[yaml].replicaCount
  - for: kubernetes.delete
    kind: HelmRelease
    apiGroup: helm.toolkit.fluxcd.io
    name: podinfo
  - for: kubernetes.delete
    kind: Secret
    name: podinfo-values
```
This YAML can:

- Aggregate the status of pods created by the HelmRelease by using the `kubernetes.list-replicas` related reference and the pod labelSelector specified.
- Dynamically reconfigure the number of application pods by using the `kubernetes.scale-replicas` related reference pointing to the podinfo-values Secret and the keyPath within it, which manages the number of instances of the HelmRelease. Note the keyPath syntax, which is able to parse complex nested YAML values.
- Ensure that the HelmRelease object and the referenced values Secret are automatically deleted whenever the ContainerApp is deleted, by using the `kubernetes.delete` related references pointing to the objects to be deleted.
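To see why the keyPath uses the `->[yaml]` conversion step, note that Secret `.data` values are base64-encoded YAML. The following standalone shell sketch (illustration only, not Tanzu code) mimics what resolving `.data['values.yaml']->[yaml].replicaCount` involves:

```shell
# The Secret stores values.yaml base64-encoded under .data['values.yaml'].
encoded=$(printf 'replicaCount: 3' | base64)
echo "$encoded"                              # cmVwbGljYUNvdW50OiAz

# Resolving the keyPath means decoding the value, then reading the
# replicaCount key from the resulting YAML document.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"                              # replicaCount: 3
echo "$decoded" | sed 's/replicaCount: //'   # 3
```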
- Unset the values pointed to by related references that enforce runtime operations.

  In this example, when using the `kubernetes.scale-replicas` related reference mechanism, also setting the podinfo-values Secret's values.yaml key is considered invalid configuration because it creates a conflict. Configuring the `kubernetes.scale-replicas` related reference effectively transfers ownership of the replicas value to the ContainerApp, so the ContainerApp should be the one setting this key.

  In this setup, the ContainerApp's spec.replicas field is the source of truth for the number of app instances. Use the spec.replicas field to provide default values or to update values. spec.replicas is updated when you perform scaling operations by using the Tanzu Platform UI or the CLI.

  The same rules apply when using the rest of the related references that enforce dynamic configuration. The referenced keyPath within the configuration resource should be unset. If the keyPath points to a nested key within a Secret value, as in the earlier example, the root Secret key should be unset, because the Secret values are base64-encoded and finer-grained control is not possible. The ContainerApp runtime operations do not work as expected if these criteria are not fulfilled.

  In the context of the Space, edit podinfo-values to remove the values.yaml key by running:

```
tanzu space use
kubectl patch secret podinfo-values -p '{"data": {"values.yaml": null}}'
```
- Deploy the ContainerApp in the Space by running:

```
tanzu deploy --only containerapp.yaml
```
- To verify the deployment, see information about the application by running:

```
tanzu app list
```

Example output:

```
NAME     CONTENT  INSTANCES(RUNNING/REQUESTED)  CPU   MEM  BINDINGS(BOUND/REQUESTED)  STATUS
podinfo           1/1                           300m  1Gi  0/0                        Running

HINT: To set a requested instance count, run 'tanzu app scale APP-NAME --instances=<INSTANCE-COUNT>'
```
- Scale the app by running:

```
tanzu app scale podinfo --instances=3
```
- Verify that the new instances have been deployed by running:

```
tanzu app instance list podinfo
```

Example output:

```
Requested instances: 3
Space replicas (Availability Targets): 1
----------------------------------------
Total requested instances: 3

INSTANCE                  STATE    VERSION-STATUS  AVAILABILITY-TARGETS          CONTENT  AGE
podinfo-5cdff9fb64-h4wz7  Running  Up-to-date      all-regions.tanzu.vmware.com           1m
podinfo-5cdff9fb64-qc9vs  Running  Up-to-date      all-regions.tanzu.vmware.com           1m
podinfo-5cdff9fb64-rbcvz  Running  Up-to-date      all-regions.tanzu.vmware.com           1m
```
- To delete the application and related resources from the Space, run:

```
tanzu app delete podinfo
```