Enable vSphere with Tanzu on a Cluster with NSX-T Data Center as the Networking Stack

Through the vSphere Automation APIs, you can enable a vSphere cluster for managing Kubernetes workloads. A cluster configured with NSX-T Data Center supports running vSphere Pods and Tanzu Kubernetes clusters.
  • Verify that your environment meets the system requirements for enabling vSphere with Tanzu on the cluster. For more information about the requirements, see the documentation.
  • Verify that NSX-T Data Center is installed and configured. See Configuring NSX-T Data Center for vSphere with Tanzu.
  • Create storage policies for the placement of pod ephemeral disks, container images, and the Supervisor Cluster control plane cache.
  • Verify that DRS is enabled in fully automated mode and that HA is enabled on the cluster.
  • Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
  • Verify that the user who you use to access the vSphere Automation services has the Modify cluster-wide configuration privilege on the cluster.
  • Create a subscribed content library on the vCenter Server system to accommodate the VM image that is used for creating the nodes of the Tanzu Kubernetes clusters. A minimal sketch of creating such a library through the content library API follows this list.
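The following lines are a minimal sketch of creating the subscribed content library with the content library service of the vSphere Automation SDK for Java. The subscribedLibraryService stub, the library name, the datastore ID, and the subscription URL are assumptions for this example and must be replaced with values from your environment.

import java.net.URI;
import java.util.Collections;
import java.util.UUID;

import com.vmware.content.LibraryModel;
import com.vmware.content.library.StorageBacking;
import com.vmware.content.library.SubscriptionInfo;

// Sketch: subscribe to a published library that hosts the Tanzu Kubernetes release images.
SubscriptionInfo subscriptionInfo = new SubscriptionInfo();
subscriptionInfo.setSubscriptionUrl(URI.create("https://example.com/lib.json")); // placeholder subscription URL
subscriptionInfo.setAuthenticationMethod(SubscriptionInfo.AuthenticationMethod.NONE);
subscriptionInfo.setAutomaticSyncEnabled(true);
subscriptionInfo.setOnDemand(false);

// Place the library items on a datastore; "datastore-11" is a placeholder ID.
StorageBacking storageBacking = new StorageBacking();
storageBacking.setType(StorageBacking.Type.DATASTORE);
storageBacking.setDatastoreId("datastore-11");

LibraryModel librarySpec = new LibraryModel();
librarySpec.setName("tkg-releases"); // placeholder library name
librarySpec.setType(LibraryModel.LibraryType.SUBSCRIBED);
librarySpec.setSubscriptionInfo(subscriptionInfo);
librarySpec.setStorageBackings(Collections.singletonList(storageBacking));

// subscribedLibraryService is a com.vmware.content.SubscribedLibrary stub created from the vAPI StubFactory.
String subscribedLibraryId = subscribedLibraryService.create(UUID.randomUUID().toString(), librarySpec);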
To enable a vSphere cluster for Kubernetes workload management, you use the services under the namespace_management package.
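The services in this package are Java interfaces for which you create client-side stubs after authenticating with vCenter Server. The following lines are a minimal sketch of that setup, assuming the vapiAuthHelper and sessionStubConfig objects from the SDK sample framework; any authenticated StubFactory works the same way.

import com.vmware.vcenter.namespace_management.Clusters;
import com.vmware.vcenter.namespace_management.DistributedSwitchCompatibility;
import com.vmware.vcenter.namespace_management.EdgeClusterCompatibility;

// Sketch: create the namespace_management service stubs from an authenticated session.
// vapiAuthHelper and sessionStubConfig are provided by the SDK sample framework.
Clusters ppClusterService =
        vapiAuthHelper.getStubFactory().createStub(Clusters.class, sessionStubConfig);
DistributedSwitchCompatibility switchCompatibilityService =
        vapiAuthHelper.getStubFactory().createStub(DistributedSwitchCompatibility.class, sessionStubConfig);
EdgeClusterCompatibility edgeClusterCompatibilityService =
        vapiAuthHelper.getStubFactory().createStub(EdgeClusterCompatibility.class, sessionStubConfig);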
  1. Retrieve the IDs of the tag-based storage policies that you configured for vSphere with Tanzu.
    Use the Policies service to retrieve a list of all storage policies and then filter the policies to get the IDs of the policies that you configured for the Supervisor Cluster. For a combined sketch of the lookups in Steps 1 to 3, see the example after this procedure.
  2. Retrieve the IDs of the vSphere Distributed Switch and the NSX Edge cluster that you created when configuring NSX-T Data Center for vSphere with Tanzu.
    Use the DistributedSwitchCompatibility service to list all vSphere Distributed Switches associated with the specific vSphere cluster and then retrieve the ID of the Distributed Switch that you configured to handle overlay networking for the Supervisor Cluster. Use the EdgeClusterCompatibility service to retrieve a list of the NSX Edge clusters created for the specific vSphere cluster and associated with the specific vSphere Distributed Switch. Retrieve the ID of the NSX Edge cluster that has the tier-0 gateway that you want to use for the namespaces networking.
  3. Retrieve the ID of the port group for the management network that you configured for the management traffic.
    Use the Networks service to list the visible networks available on the vCenter Server instance that match some criteria and then retrieve the ID of the management network that you previously configured.
  4. Create a ClustersTypes.EnableSpec instance and define the parameters of the Supervisor Cluster that you want to create.
    You must specify the following required parameters of the enable specification:
    • Storage policies settings and file volume support. The storage policy you set for each of the following parameters ensures that the respective object is placed on the datastore referenced in the storage policy. You can use the same or different storage policies for the different inventory objects.
      • setEphemeralStoragePolicy(java.lang.String ephemeralStoragePolicy) - Specify the ID of the storage policy that you created to control the storage placement of the vSphere Pods.
      • setImageStorage(ClustersTypes.ImageStorageSpec imageStorage) - Set the specification of the storage policy that you created to control the placement of the cache of container images.
      • setMasterStoragePolicy(java.lang.String masterStoragePolicy) - Specify the ID of the storage policy that you created to control the placement of the Supervisor Cluster control plane cache.
      Optionally, you can activate file volume support by using setCnsFileConfig(CNSFileConfig cnsFileConfig). See Enabling ReadWriteMany Support.
    • Management network settings. Configure the management traffic settings for the Supervisor Cluster control plane.
      • setNetworkProvider(ClustersTypes.NetworkProvider networkProvider) - Specify the networking stack that must be used when the Supervisor Cluster is created. To use NSX-T Data Center as the network solution for the cluster, select NSXT_CONTAINER_PLUGIN.
      • setMasterManagementNetwork(ClustersTypes.NetworkSpec masterManagementNetwork) - Enter the cluster network specification for the Supervisor Cluster control plane. You must enter values for the following required properties:
        • setNetwork(java.lang.String network) - Use the management network ID retrieved in Step 3.
        • setMode(ClustersTypes.NetworkSpec.Ipv4Mode mode) - Set STATICRANGE or DHCP for the IPv4 address assignment mode. The DHCP mode allows an IPv4 address to be automatically assigned to the Supervisor Cluster control plane by a DHCP server. You must also set the floating IP address used by the HA primary cluster by using setFloatingIP(java.lang.String floatingIP). Use the DHCP mode only for test purposes. The STATICRANGE mode allows the Supervisor Cluster control plane to have a stable IPv4 address. You can use it in a production environment.
        • setAddressRange(ClustersTypes.Ipv4Range addressRange) - Optionally, you can configure the range of IPv4 addresses for one or more interfaces of the management network. Specify the following settings:
          • The starting IP address that must be used for reserving consecutive IP addresses for the Supervisor Cluster control plane. Use up to 5 consecutive IP addresses.
          • The number of IP addresses in the range.
          • The IP address of the gateway associated with the specified range.
          • The subnet mask to be used for the management network.
      • setMasterDNS(java.util.List<java.lang.String> masterDNS) - Enter a list of the DNS server addresses that must be used from the Supervisor Cluster control plane. If your vCenter Server instance is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor Cluster. Specify the list of DNS addresses in the order of preference.
      • setMasterDNSSearchDomains(java.util.List<java.lang.String> masterDNSSearchDomains) - Set a list of domain names that DNS searches when looking up a host name in the Kubernetes API server. Order the domains in the list by preference.
      • setMasterNTPServers(java.util.List<java.lang.String> masterNTPServers) - Specify a list of IP addresses or DNS names of the NTP servers that you use in your environment, if any. Make sure that you configure the same NTP servers for the vCenter Server instance, all hosts in the cluster, NSX-T Data Center, and vSphere with Tanzu. If you do not set an NTP server, VMware Tools time synchronization is enabled.
    • Workload network settings. Configure the settings for the networks for the namespaces. The namespace network settings provide connectivity to vSphere Pods and namespaces created in the Supervisor Cluster.
      • setNcpClusterNetworkSpec(ClustersTypes.NCPClusterNetworkEnableSpec ncpClusterNetworkSpec) - Set the specification for the Supervisor Cluster configured with the NSX-T Data Center networking stack. Specify the following cluster networking configuration parameters for NCPClusterNetworkEnableSpec:
        • setClusterDistributedSwitch(java.lang.String clusterDistributedSwitch) - The vSphere Distributed Switch that handles overlay networking for the Supervisor Cluster.
        • setNsxEdgeCluster(java.lang.String nsxEdgeCluster) - The NSX Edge cluster that has the tier-0 gateway that you want to use for namespace networking.
        • setNsxTier0Gateway(java.lang.String nsxTier0Gateway) - The tier-0 gateway that is associated with the cluster tier-1 gateway. You can retrieve a list of NSXTier0Gateway objects associated with a particular vSphere Distributed Switch and determine the ID of the tier-0 gateway that you want to set.
        • setNamespaceSubnetPrefix(java.lang.Long namespaceSubnetPrefix) - The subnet prefix that defines the size of the subnet reserved for namespace segments. Default is 28.
        • setRoutedMode(java.lang.Boolean routedMode) - The NAT mode of the workload network. If set to false:
          • The IP addresses of the workloads are directly accessible from outside the tier-0 gateway and you do not need to configure the egress CIDRs.
          • File volume storage is not supported.
          Default is true.
        • setEgressCidrs(java.util.List<Ipv4Cidr> egressCidrs) - The external CIDR blocks from which the NSX Manager assigns IP addresses used for performing source NAT (SNAT) from internal vSphere Pod IP addresses to external IP addresses. Only one egress IP address is assigned for each namespace in the Supervisor Cluster. These IP ranges must not overlap with the IP ranges of the vSphere Pods, ingress, Kubernetes services, or other services running in the data center.
        • setIngressCidrs(java.util.List<Ipv4Cidr> ingressCidrs) - The external CIDR blocks from which the ingress IP range for the Kubernetes services is determined. These IP ranges are used for load balancer services and Kubernetes ingress. All Kubernetes ingress services in the same namespace share a common IP address. Each load balancer service is assigned a unique IP address. The ingress IP ranges must not overlap with the IP ranges of the vSphere Pods, egress, Kubernetes services, or other services running in the data center.
        • setPodCidrs(java.util.List<Ipv4Cidr> podCidrs) - The internal CIDR blocks from which the IP ranges for vSphere Pods are determined. The IP ranges must not overlap with the IP ranges of the ingress, egress, Kubernetes services, or other services running in the data center. All vSphere Pod CIDR blocks must be of at least /23 subnet size.
      • setWorkerDNS(java.util.List<java.lang.String> workerDNS) - Set a list of the IP addresses of the DNS servers that must be used on the worker nodes. Use different DNS servers than the ones you set for the Supervisor Cluster control plane.
      • setServiceCidr(Ipv4Cidr serviceCidr) - Specify the CIDR block from which the IP addresses for Kubernetes services are allocated. The IP range must not overlap with the ranges of the vSphere Pods, ingress, egress, or other services running in the data center.
      For the Kubernetes services and the vSphere Pods, you can use the default values, which are based on the cluster size that you specify.
    • Supervisor Cluster size. You must set a size for the Supervisor Cluster, which affects the resources allocated to the Kubernetes infrastructure. The cluster size also determines the default maximum values for the IP address ranges for the vSphere Pods and Kubernetes services running in the cluster. You can use the ClusterSizeInfo.get() call to retrieve information about the default values associated with each cluster size.
    • Optional. Associate the Supervisor Cluster with the subscribed content library that you created for provisioning Tanzu Kubernetes clusters. See Creating, Securing, and Synchronizing Content Libraries for Tanzu Kubernetes Releases.
      To set the library, use the setDefaultKubernetesServiceContentLibrary(java.lang.String defaultKubernetesServiceContentLibrary) method and pass the subscribed content library ID.
  5. Enable vSphere with Tanzu on a specific cluster by passing the cluster enable specification to the Clusters service.
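The following lines sketch the lookups from Steps 1 to 3, assuming the storagePolicyService (com.vmware.vcenter.storage.Policies) and networkService (com.vmware.vcenter.Network) stubs in addition to the namespace_management stubs created earlier. The policy name, the port group name, and the choice of the first compatible switch and Edge cluster are placeholder assumptions for this example.

import java.util.Collections;
import java.util.List;

import com.vmware.vcenter.NetworkTypes;
import com.vmware.vcenter.namespace_management.DistributedSwitchCompatibilityTypes;
import com.vmware.vcenter.namespace_management.EdgeClusterCompatibilityTypes;
import com.vmware.vcenter.storage.PoliciesTypes;

// Step 1 (sketch): find the ID of the storage policy configured for the Supervisor Cluster.
// "pacific-storage-policy" is a placeholder policy name.
String storagePolicyId = null;
for (PoliciesTypes.Summary policy : storagePolicyService.list(new PoliciesTypes.FilterSpec())) {
    if ("pacific-storage-policy".equals(policy.getName())) {
        storagePolicyId = policy.getPolicy();
    }
}

// Step 2 (sketch): pick a compatible vSphere Distributed Switch and NSX Edge cluster
// for the vSphere cluster identified by clusterId.
List<DistributedSwitchCompatibilityTypes.Summary> switches =
        switchCompatibilityService.list(clusterId, null);
String distributedSwitchId = switches.get(0).getDistributedSwitch();

List<EdgeClusterCompatibilityTypes.Summary> edgeClusters =
        edgeClusterCompatibilityService.list(clusterId, distributedSwitchId, null);
String edgeClusterId = edgeClusters.get(0).getEdgeCluster();

// Step 3 (sketch): retrieve the ID of the management network port group by name.
// "management-pg" is a placeholder port group name.
NetworkTypes.FilterSpec networkFilter = new NetworkTypes.FilterSpec();
networkFilter.setNames(Collections.singleton("management-pg"));
String managementNetworkId = networkService.list(networkFilter).get(0).getNetwork();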
A task runs on vCenter Server for turning the cluster into a Supervisor Cluster. Once the task completes, Kubernetes control plane nodes are created on the hosts that are part of the cluster enabled with vSphere with Tanzu. Now you can create vSphere Namespaces.
Create and configure namespaces on the Supervisor Cluster. See Create a vSphere Namespace.
Java
This example enables vSphere with Tanzu on a cluster that has NSX-T Data Center configured as the networking stack.
The following code snippet is part of the EnableSupervisorCluster.java sample. Some parts of the original code sample are omitted to save space. You can view the complete and up-to-date version of this sample in the vsphere-automation-sdk-java VMware repository on GitHub.
(...)
@Override
protected void run() throws Exception {
    System.out.println("We are building the Spec for enabling vSphere supervisor cluster");
    ClustersTypes.EnableSpec spec = new ClustersTypes.EnableSpec();
    (...)
    spec.setSizeHint(SizingHint.TINY);
    (...)
    spec.setServiceCidr(serCidr);
    spec.setNetworkProvider(ClustersTypes.NetworkProvider.NSXT_CONTAINER_PLUGIN);
    (...)
    spec.setNcpClusterNetworkSpec(NCPSpec);
    (...)
    spec.setMasterManagementNetwork(masterNet);
    (...)
    spec.setMasterDNS(masterDNS);
    (...)
    spec.setWorkerDNS(workerDNS);
    (...)
    spec.setMasterNTPServers(NTPserver);
    spec.setMasterStoragePolicy(this.storagePolicyId);    // Storage policy identifier
    spec.setEphemeralStoragePolicy(this.storagePolicyId); // Storage policy identifier
    spec.setLoginBanner("This is your first Project pacific cluster");
    (...)
    spec.setImageStorage(imageSpec);
    this.ppClusterService.enable(clusterId, spec);
    System.out.println("Invocation is successful for enabling vSphere supervisor cluster, check H5C");
}
(...)
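The snippet elides the construction of the nested specification objects (serCidr, NCPSpec, masterNet, imageSpec) and of the DNS and NTP lists. The following lines are a sketch of how they might be assembled from the setters described in Step 4; all addresses, ranges, and server names are placeholder values, and the distributedSwitchId, edgeClusterId, managementNetworkId, and storagePolicyId variables come from the earlier lookup sketch.

import java.util.Collections;
import java.util.List;

import com.vmware.vcenter.namespace_management.ClustersTypes;
import com.vmware.vcenter.namespace_management.Ipv4Cidr;

// Service CIDR for Kubernetes services (placeholder range).
Ipv4Cidr serCidr = new Ipv4Cidr();
serCidr.setAddress("10.96.0.0");
serCidr.setPrefix(23L);

// Workload networking spec: the switch and Edge cluster IDs from Step 2
// plus placeholder pod, ingress, and egress CIDR blocks.
Ipv4Cidr podCidr = new Ipv4Cidr();
podCidr.setAddress("10.244.0.0");
podCidr.setPrefix(21L);
Ipv4Cidr ingressCidr = new Ipv4Cidr();
ingressCidr.setAddress("192.168.100.0");
ingressCidr.setPrefix(24L);
Ipv4Cidr egressCidr = new Ipv4Cidr();
egressCidr.setAddress("192.168.200.0");
egressCidr.setPrefix(24L);

ClustersTypes.NCPClusterNetworkEnableSpec NCPSpec = new ClustersTypes.NCPClusterNetworkEnableSpec();
NCPSpec.setClusterDistributedSwitch(distributedSwitchId);
NCPSpec.setNsxEdgeCluster(edgeClusterId);
NCPSpec.setPodCidrs(Collections.singletonList(podCidr));
NCPSpec.setIngressCidrs(Collections.singletonList(ingressCidr));
NCPSpec.setEgressCidrs(Collections.singletonList(egressCidr));

// Management network spec for the Supervisor Cluster control plane,
// using a static range of five consecutive addresses (placeholders).
ClustersTypes.Ipv4Range addressRange = new ClustersTypes.Ipv4Range();
addressRange.setStartingAddress("10.10.10.20");
addressRange.setAddressCount(5L);
addressRange.setGateway("10.10.10.1");
addressRange.setSubnetMask("255.255.255.0");

ClustersTypes.NetworkSpec masterNet = new ClustersTypes.NetworkSpec();
masterNet.setNetwork(managementNetworkId);
masterNet.setMode(ClustersTypes.NetworkSpec.Ipv4Mode.STATICRANGE);
masterNet.setAddressRange(addressRange);

// DNS and NTP settings (placeholder servers) and the image storage policy.
List<String> masterDNS = Collections.singletonList("10.10.10.2");
List<String> workerDNS = Collections.singletonList("10.10.10.3");
List<String> NTPserver = Collections.singletonList("time.example.com");

ClustersTypes.ImageStorageSpec imageSpec = new ClustersTypes.ImageStorageSpec();
imageSpec.setStoragePolicy(storagePolicyId);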