Tanzu Platform SaaS

Install Helm charts in a Space

Last Updated February 19, 2025

This topic tells you how to install Helm charts in a Kubernetes Space on Tanzu Platform.

Helm applications on Spaces give you flexibility in deploying, promoting, and managing application workloads packaged as Helm charts. Helm applications enable you to use your existing build-tooling investments while benefiting from the features of Kubernetes Spaces to streamline deployment and maximize efficiency.

Prepare your application environment for Helm

The following sections describe how to prepare your environment for Helm applications.

Before you begin

Before you can create a Helm-enabled Space, ensure that you have the following:

  • Infrastructure in Tanzu Platform, which includes:

    • A Project in your organization
    • A run cluster group in the Project
    • A cluster in the cluster group to run your packaged application
    • Availability Targets configured in the Project

    For information about how to create a cluster group, cluster, and Availability Targets, see Set up the infrastructure to create an application environment.

Install the Flux CD Source Capability

To install the Flux CD Source Capability:

  1. Go to Spaces > Capabilities.
  2. Click the FluxCD Source Controller Capability.
  3. Click Install Package.
  4. Select the cluster group that contains the cluster that is part of the Availability Target.
  5. Click Install Package to install the Capability.

Install the Flux CD Helm Capability

To install the Flux CD Helm Capability:

  1. Go to Spaces > Capabilities.
  2. Click the FluxCD Helm Controller Capability.
  3. Click Install Package.
  4. Select the cluster group that contains the cluster that is part of the Availability Target.
  5. Click Install Package to install the Capability.

(Optional) Install the registry pull only credentials installer Capability

Tanzu Platform requires a Secret and a SecretExport resource if the Helm charts and container images are hosted in a private registry. The registry pull only credentials installer Capability provides this functionality.
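
For reference, the resources the Capability manages are shaped roughly like the following sketch, which assumes the Carvel secretgen APIs. The names and credential values shown are illustrative only:

    # Illustrative sketch: the approximate shape of the managed resources.
    apiVersion: v1
    kind: Secret
    metadata:
      name: registry-pull-creds        # illustrative name
    type: kubernetes.io/dockerconfigjson
    stringData:
      .dockerconfigjson: |
        {"auths": {"REGISTRY-URL": {"username": "REGISTRY-USERNAME", "password": "REGISTRY-PASSWORD"}}}
    ---
    apiVersion: secretgen.carvel.dev/v1alpha1
    kind: SecretExport
    metadata:
      name: registry-pull-creds        # must match the Secret name
    spec:
      toNamespaces:
      - "*"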

To install the registry pull only credentials installer Capability:

  1. Go to Spaces > Capabilities.
  2. Click the Registry Pull Only Credentials Installer Capability.
  3. Click Install Package.
  4. Select the cluster group that contains the cluster that is part of the Availability Target.
  5. Expand Advanced Configuration.
  6. Provide the username, password, and registry URL as input parameters to the installation.

    You can either enter the values in the form by clicking the vertical ellipsis next to the username, password, and registry entries in the table, or provide them as YAML.

  7. Click Install Package to install the Capability.

Create a Space

To create a Space:

  1. Go to Spaces > Overview.
  2. Click Create Space and then click Step by step.
  3. Fill in the necessary details, including the Space name and the Availability Target.
  4. Select the fluxcd-helm.tanzu.vmware.com Profile. This Profile includes the two Helm-related Capabilities and a single Trait that introduces the fluxcd-helmrelease-installer ServiceAccount. The ServiceAccount is granted all verbs on the following resources:

    • ConfigMap
    • Deployment
    • Pod
    • PodDisruptionBudget
    • ReplicaSet
    • Secret
    • ServiceAccount
    • Service
  5. Finish the Space creation process.
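
The permissions listed above map to an RBAC Role of roughly the following shape. This is an illustrative sketch only; the Trait's actual definition may differ:

    # Illustrative sketch: the approximate RBAC shape the Trait introduces.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: fluxcd-helmrelease-installer   # illustrative; the Trait sets the actual name
    rules:
    - apiGroups: ["", "apps", "policy"]
      resources: ["configmaps", "deployments", "pods", "poddisruptionbudgets",
                  "replicasets", "secrets", "serviceaccounts", "services"]
      verbs: ["*"]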

Deploy a Helm chart

The following sections describe how to deploy the podinfo Helm chart to the newly created Space.

Create Helm configuration

To create Helm configuration:

  1. Define the location of the Helm chart through the GitRepository, HelmRepository, and OCIRepository APIs that the Flux CD Source Controller package provides. Use the following YAML to reference the standard Helm repository location:

    # helmrepository.yaml
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: HelmRepository
    metadata:
      name: podinfo
    spec:
      interval: 5m
      url: https://stefanprodan.github.io/podinfo
    

    For more information about these APIs, see the Source Controller documentation.

  2. The HelmRelease API enables continuous server-side orchestration of Helm releases through Helm actions, such as install, upgrade, test, uninstall, and rollback. This API also enables correction of configuration drift from the wanted release state.

    Decide which Helm chart to install (fetched from the referenced repository) and provide configuration values. For example:

    # helmrelease.yaml
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: podinfo
    spec:
      serviceAccountName: fluxcd-helmrelease-installer
      interval: 10m
      timeout: 5m
      chart:
        spec:
          chart: podinfo
          version: '6.5.4'
          sourceRef:
            kind: HelmRepository
            name: podinfo
          interval: 5m
      releaseName: podinfo
      install:
        remediation:
          retries: 3
      upgrade:
        remediation:
          retries: 3
      valuesFrom:
        - kind: Secret
          name: podinfo-values
    
  3. Use the following YAML to define your secret:

    # podinfo-values.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: podinfo-values
    type: Opaque
    stringData:
      values.yaml: |
        replicaCount: 3
    
  4. To expose the podinfo application, use the HTTPRoute API, as in this example:

    # route.yaml
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: HTTPRoute
    metadata:
      name: podinfo-main
      annotations:
        healthcheck.gslb.tanzu.vmware.com/service: podinfo
        healthcheck.gslb.tanzu.vmware.com/path: /
        healthcheck.gslb.tanzu.vmware.com/port: "9898"
    spec:
      parentRefs:
      - group: gateway.networking.k8s.io
        kind: Gateway
        name: default-gateway
        sectionName: http-podinfo
      rules:
      - backendRefs:
        - group: ""
          kind: Service
          name: podinfo
          port: 9898
          weight: 1
        matches:
        - path:
            type: PathPrefix
            value: /
    

Your directory structure now looks like this:

> tree
.
├── helmrelease.yaml
├── helmrepository.yaml
├── podinfo-values.yaml
└── route.yaml

Deploy Helm resources

To deploy Helm resources:

  1. In the context of your Space, run:

    tanzu space use
    
    tanzu deploy --only .
    
  2. To verify that the Helm chart was downloaded successfully, view the status of HelmRepository by running:

    kubectl get helmrepository podinfo -oyaml
    ...
    status:
      conditions:
      - lastTransitionTime: "2024-05-14T01:04:11Z"
        message: 'stored artifact: revision ''sha256:604cc6699bc91ac6015f7324a41f43f079a5b96d559c51bcf812e9d9f1beda94'''
        observedGeneration: 1
        status: "True" # This should be "True"
        type: Ready
    
  3. To verify that the Helm chart was installed successfully, view the status of HelmRelease by running:

    kubectl get helmrelease podinfo -oyaml
    ...
    status:
      conditions:
      - lastTransitionTime: "2024-05-14T01:08:48Z"
        message: Release reconciliation succeeded
        status: "True" # This should be "True"
        type: Ready
    

(Optional) Use ContainerApp resources to track and manage Helm applications

The ContainerApp resource is flexible enough to represent and configure arbitrary live applications, even ones that tanzu build did not build, such as the podinfo application within the installed Helm chart.

Use the relatedRefs specification field of the ContainerApp CR to integrate your app with the functions that ContainerApp provides. Various related references add support for different settings. The most important references are:

  • kubernetes.list-replicas, which enables the discovery of app instances (pods). This is required.
  • kubernetes.set-secret-env, which adds support for setting app environment variables with values stored in Secrets.
  • kubernetes.scale-replicas, which adds support for scaling a number of app instances.
  • kubernetes.scale-resources, which adds support for scaling CPU, memory, and other resource types.
  • kubernetes.delete, which adds support for deleting related resources when an app is deleted.

To use ContainerApp to track and manage application details:

  1. Ensure that the container-app.tanzu.vmware.com Capability is available in your Space to work with container applications.

  2. Use the following example YAML to define a ContainerApp resource:

    # containerapp.yaml
    apiVersion: apps.tanzu.vmware.com/v1
    kind: ContainerApp
    metadata:
      name: podinfo
      annotations:
        containerapp.apps.tanzu.vmware.com/class: "kubernetes"
    spec:
      description: Podinfo application from Helm
      contact:
        slack: "#my-helm-apps"
      image: ghcr.io/stefanprodan/podinfo
      replicas: 1
      relatedRefs:
        - for: kubernetes.list-replicas
          kind: Pod
          labelSelector: app.kubernetes.io/name=podinfo
        - for: kubernetes.scale-replicas
          kind: Secret
          name: podinfo-values
          keyPath: .data['values.yaml']->[yaml].replicaCount
        - for: kubernetes.delete
          kind: HelmRelease
          apiGroup: helm.toolkit.fluxcd.io
          name: podinfo
        - for: kubernetes.delete
          kind: Secret
          name: podinfo-values
    

    This YAML can:

    • Aggregate the status of pods created by the HelmRelease by using the kubernetes.list-replicas related reference and pod labelSelector specified.

    • Dynamically reconfigure the number of application pods by using the kubernetes.scale-replicas related reference pointing to the podinfo-values Secret and the keyPath within it, which manages the number of instances of the HelmRelease. Note the keyPath syntax, which is able to parse complex nested YAML values.

    • Ensure that the HelmRelease object and referenced values Secret are automatically deleted whenever the ContainerApp is deleted by using the kubernetes.delete related references pointing to the objects to be deleted.
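
The keyPath in the kubernetes.scale-replicas reference can be read as: decode the Secret's values.yaml entry from base64, parse the result as YAML, then address the replicaCount field. The following rough Python sketch illustrates that resolution; it is not the platform's actual parser, and it stands in for a full YAML parser with a simple flat key-value split:

```python
import base64

def resolve_keypath(secret_data: dict) -> int:
    """Roughly resolve .data['values.yaml']->[yaml].replicaCount."""
    # Kubernetes Secret .data values are base64-encoded.
    raw = base64.b64decode(secret_data["values.yaml"]).decode()
    # The ->[yaml] conversion parses the decoded text as YAML; a real
    # implementation would use a YAML parser, but a flat "key: value"
    # split is enough for this example document.
    values = dict(line.split(": ", 1) for line in raw.splitlines() if line)
    # .replicaCount addresses the field within the parsed document.
    return int(values["replicaCount"])

# The values.yaml content from the podinfo-values Secret, as stored in .data.
secret = {"values.yaml": base64.b64encode(b"replicaCount: 3").decode()}
print(resolve_keypath(secret))  # prints 3
```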

  3. Unset the values pointed to by related references that enforce runtime operations.

    In this example, because the kubernetes.scale-replicas related reference is configured, also setting the values.yaml key in the podinfo-values Secret is invalid configuration and creates a conflict.

    Configuring the kubernetes.scale-replicas related reference effectively transfers ownership of the replicas value to the ContainerApp, so the ContainerApp must be the only writer of this key.

    In this setup, the ContainerApp's spec.replicas field is the source of truth for the number of app instances. Use spec.replicas to provide a default value or to update the value; it is updated when you perform scaling operations through the Tanzu Platform UI or the CLI.

    The same rules apply to the other related references that enforce dynamic configuration: the referenced keyPath within the configuration resource must be unset. If the keyPath points to a nested key within a Secret value, as in the earlier example, unset the root Secret key, because Secret values are base64-encoded and finer-grained control is not possible. ContainerApp runtime operations do not work as expected if these criteria are not met.

    In the context of the Space, edit podinfo-values to remove the values.yaml key by running:

    tanzu space use
    kubectl patch secret podinfo-values -p '{"data": {"values.yaml": null}}'
    
  4. Deploy ContainerApp in the Space by running:

    tanzu deploy --only containerapp.yaml
    
  5. To verify the deployment, view information about the application by running:

    tanzu app list
    

    Example output:

     NAME     CONTENT  INSTANCES(RUNNING/REQUESTED)  CPU   MEM  BINDINGS(BOUND/REQUESTED)  STATUS
     podinfo           1/1                           300m  1Gi  0/0                        Running
    
    *HINT: To set a requested instance count, run 'tanzu app scale APP-NAME --instances=<INSTANCE-COUNT>'
    
  6. Scale the app by running:

    tanzu app scale podinfo --instances=3
    
  7. Verify that the new instances have been deployed by running:

    tanzu app instance list podinfo
    

    Example output:

    Requested instances:                      3
    Space replicas (Availability Targets):    1
    -------------------------------------:    -
    Total requested instances:                3
    
    INSTANCE                  STATE    VERSION-STATUS    AVAILABILITY-TARGETS         CONTENT AGE
    podinfo-5cdff9fb64-h4wz7  Running  Up-to-date        all-regions.tanzu.vmware.com         1m
    podinfo-5cdff9fb64-qc9vs  Running  Up-to-date        all-regions.tanzu.vmware.com         1m
    podinfo-5cdff9fb64-rbcvz  Running  Up-to-date        all-regions.tanzu.vmware.com         1m
    
  8. To delete the application and related resources from the Space, run:

    tanzu app delete podinfo