Installing AKO on TKGI

AKO supports LoadBalancer services and Ingress for Tanzu Kubernetes Grid Integrated (TKGI) clusters. This document describes the steps required to install AKO on a TKGI cluster.
A TKGI cluster can be created either on NSX-T managed overlay networking with NCP as the CNI, or on regular VMware vSphere Distributed Switch (VDS) networking with Flannel as the Container Network Interface (CNI).
Since the Controller integrates with the infrastructure orchestrator (vCenter or NSX-T), the way the Avi Load Balancer is configured differs between these networking options.

TKGI with NCP on NSX-T Overlays

The Avi Load Balancer deployment in an NSX-T environment is shown below:
The design of this deployment is as follows:

Controller

The Avi Load Balancer Controller cluster is deployed as a three-node cluster connected to the management port group.

NSX-T Networking

Create a Tier-1 gateway for the Avi Load Balancer using the NSX-T Manager UI.
Create two overlay segments connected to the Tier-1 gateway: one for SE management traffic and one for the VIP/data traffic.
The VIP network must have a routable subnet so that the VIPs are accessible from external clients. The management segment can also be routable, or outbound NAT can be configured on the Tier-1 gateway to allow the SEs to connect to the Controller. The required ports must be allowed on the DFW and Edge firewall, as explained in the Ports and Protocols chapter in the Avi Load Balancer Installation Guide.

Service Engines (SEs)

The Service Engine virtual machines load balance the workload traffic. vnic0 of the SE VM connects to the management segment, while vnic1 connects to the VIP segment. Leave the remaining interfaces on the VM disconnected.
In a vCenter cloud, this wiring is automatic. With a No Orchestrator cloud, it must be done manually.

AKO

Install AKO on the TKGI cluster in a namespace called avi-system.
Ensure that AKO can reach the Controller IP address to invoke the Avi Load Balancer APIs.
If the setup has multiple TKGI clusters, each cluster needs AKO installed on it, but all AKO instances can share the same Controller and SE group.
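As a sketch, the Controller endpoint and per-cluster identity are typically set through the AKO Helm chart's values.yaml. The field names below follow the AKO chart, but all values (IP address, Controller version, cluster and network names, CIDR) are placeholders to verify against your release:

```yaml
# Illustrative excerpt of an AKO values.yaml -- all values are
# placeholders; confirm the field names against the chart version
# you are installing.
AKOSettings:
  clusterName: tkgi-cluster-1     # must be unique per TKGI cluster
  cniPlugin: ncp                  # NCP as the CNI on NSX-T overlays
ControllerSettings:
  controllerHost: "10.0.0.10"     # Avi Load Balancer Controller IP/FQDN
  controllerVersion: "22.1.3"
NetworkSettings:
  vipNetworkList:
    - networkName: vip-segment    # the VIP/data overlay segment
      cidr: 192.168.100.0/24
```

When multiple clusters share one Controller and SE group, only clusterName needs to differ per cluster.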

Cloud Configuration

The cloud configuration on the Controller allows it to integrate with the IaaS platform and automate the provisioning and lifecycle of the SEs. The Avi Load Balancer supports an NSX-T cloud, but this is not compatible with the TKGI environment because the Avi Load Balancer uses the newer NSX-T Policy APIs while TKGI uses the Manager APIs.
You can configure a vCenter cloud, which can also automate the SE lifecycle, but this is supported only if the NSX-created switch is of the CVDS type. If the NSX switch is NVDS, use a No Orchestrator cloud and deploy the SEs manually.

Deploying the Avi Load Balancer

  1. Download the Controller OVA from Customer Connect software downloads.
  2. Deploy the virtual machines as discussed in the Installing Avi Load Balancer in VMware vSphere Environments topic in the Avi Load Balancer Installation Guide. If you are using a vCenter write access cloud, configure the Controller.
  3. If you are using a No Orchestrator cloud, deploy the Service Engines.
  4. After deploying the SEs, connect interface 1 to the management segment and interface 2 to the VIP segment as shown in the diagram.
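With a No Orchestrator cloud, the SE interface wiring in step 4 can be done in the vSphere UI or scripted. A minimal sketch using the open-source govc CLI, where the VM and segment names are placeholders:

```shell
# Sketch only -- the VM name (avi-se-1) and segment names are
# placeholders; verify the flags against your govc version.
govc vm.network.change -vm avi-se-1 -net se-mgmt-segment ethernet-0  # vnic0 -> management
govc vm.network.change -vm avi-se-1 -net vip-segment ethernet-1      # vnic1 -> VIP/data
govc vm.power -on avi-se-1
```

Power on the SE only after both interfaces are attached so it registers with the Controller over the management segment.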

Configuration

The following are the steps to configure AKO on TKGI:
  1. Configure TKGI
  2. Configure Avi Load Balancer IPAM and DNS
  3. Configure AKO

Configure TKGI

From the VMware Enterprise PKS Management Console, disable the NSX-T load balancer for the cluster by setting the options Use NSX-T L4 Virtual Server For K8s Load Balancer and Use NSX-T L7 Virtual Server As The Ingress Controller For K8s Cluster to No.
These services will instead be synced by AKO running on the cluster.
AKO supports only LoadBalancer services and Ingress for applications created on the TKGI cluster. The Kubernetes API endpoint remains managed by the NSX-T load balancer, as it is configured directly by the BOSH automation.

Configure Avi Load Balancer IPAM and DNS

Configure the IPAM and DNS profiles as described in Configure IPAM and DNS Profile, and add the profiles to the cloud configuration.
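The IPAM profile can also be created through the Avi Load Balancer REST API. A hedged sketch, where the Controller address, credentials, profile name, and network UUID are placeholders, and the internal_profile field layout should be verified against your Controller version's API reference:

```shell
# Placeholder values throughout; ipamdnsproviderprofile and
# IPAMDNS_TYPE_INTERNAL come from the Avi REST API, but field
# layouts vary across releases -- check your Controller's API docs.
curl -k -u admin:password -X POST https://<controller-ip>/api/ipamdnsproviderprofile \
  -H "Content-Type: application/json" \
  -d '{"name": "tkgi-ipam",
       "type": "IPAMDNS_TYPE_INTERNAL",
       "internal_profile": {
         "usable_networks": [{"nw_ref": "/api/network/<vip-network-uuid>"}]}}'
```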

Configure AKO

Use Helm to install AKO on the TKGI cluster as shown in Install Helm CLI.
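The Helm installation generally follows the shape below. The chart repository URL and chart version are assumptions to confirm against the AKO release documentation for your version:

```shell
# Placeholder repo URL and chart version -- confirm against the
# AKO documentation for the release you are deploying.
kubectl create namespace avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm repo update
helm install ako/ako --generate-name --version 1.9.2 \
  -f values.yaml --namespace avi-system
```

The values.yaml passed with -f carries the Controller and cluster settings; repeat the install on each TKGI cluster that needs AKO.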