Add a VxRail Cluster Using the Workflow Optimization Script

You can use the Workflow Optimization script to perform all of the steps to add a VxRail cluster in one place.

Before you begin, verify the following prerequisites:
  • Image the workload domain nodes. For information about imaging the nodes, refer to the Dell VxRail documentation.
  • The IP addresses and fully qualified domain names (FQDNs) for the ESXi hosts, VxRail Manager, and NSX Manager instances must be resolvable by DNS.
  • If you are using DHCP for the NSX Host Overlay Network, a DHCP server must be configured on the NSX Host Overlay VLAN of the management domain. When VMware NSX creates tunnel endpoints (TEPs) for the VI workload domain, they are assigned IP addresses from the DHCP server.
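Before running the script, you can confirm that the required FQDNs resolve in DNS. The sketch below uses placeholder hostnames and a hypothetical helper; it is not part of the Workflow Optimization script.

```python
# Pre-check sketch: confirm that each FQDN resolves in DNS before running
# the Workflow Optimization script. The hostnames passed in would be your
# ESXi host, VxRail Manager, and NSX Manager FQDNs (placeholders here).
import socket

def unresolvable(hosts):
    """Return the hostnames that DNS cannot resolve."""
    failed = []
    for host in hosts:
        try:
            socket.gethostbyname(host)
        except socket.gaierror:
            failed.append(host)
    return failed

# Example: unresolvable(["esxi-1.example.com", "vxrail-mgr.example.com"])
# returns whichever names your DNS server cannot answer for.
```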
The Workflow Optimization script uses the VMware Cloud Foundation on Dell VxRail API to perform all of the steps to add a VxRail cluster in one place. See Create a Cluster with Workflow Optimization for more information about the API.
  1. Download the .zip file for the Workflow Optimization script.
  2. Unzip the file and copy the directory (WorkflowOptimization-VCF-5000-master) to the /home/vcf directory on the SDDC Manager VM.
  3. Using SSH, log in to the SDDC Manager VM with the user name vcf and the password you specified in the deployment parameter sheet.
  4. In the /home/vcf/WorkflowOptimization-VCF-5000-master directory, run python vxrail_workflow_optimization_automator.py.
  5. Enter the management domain SSO user name and password.
  6. Enter 2 to select the Add Cluster option.
  7. Select the workload domain to which you want to add the cluster.
  8. Enter a name for the datastore.
  9. Enter 2 to select the Step by step input option.
  10. Enter the cluster name.
  11. Enter the VxRail Manager FQDN.
  12. To trust the SSL and SSH thumbprints, enter yes.
  13. Select the ESXi hosts for the cluster.
    You must select at least two hosts, using commas to separate the selections. For example: 1,2
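The host-selection prompt above accepts a comma-separated list of menu indices. The helper below is a hypothetical sketch of that kind of input validation, not code from the script itself.

```python
# Sketch of validating a host-selection prompt: a comma-separated list of
# 1-based menu indices, at least two entries, no duplicates, all in range.
# parse_host_selection is a hypothetical helper, not part of the script.
def parse_host_selection(raw, available):
    """Parse input like '1,2' into hosts from the available-host list."""
    picks = [int(tok) for tok in raw.split(",") if tok.strip()]
    if len(picks) < 2:
        raise ValueError("Select at least two hosts")
    if len(set(picks)) != len(picks):
        raise ValueError("Duplicate host selection")
    if any(p < 1 or p > len(available) for p in picks):
        raise ValueError("Selection out of range")
    return [available[p - 1] for p in picks]
```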
  14. Enter the FQDN for each host.
  15. Enter passwords for the selected hosts. You can enter a single password for all of the hosts, or enter a password for each host individually.
  16. Enter the vSAN Network details:
    • VLAN ID
    • CIDR
    • Subnet mask
    • Gateway IP
    • IP range (one IP address per selected host)
    • MTU
  17. Enter the vMotion Network details:
    • VLAN ID
    • CIDR
    • Subnet mask
    • Gateway IP
    • IP range (one IP address per selected host)
    • MTU
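The vSAN and vMotion network prompts above each take a CIDR, a gateway, and an IP range. One quick sanity check is that the gateway and the range endpoints fall inside the CIDR; the helper and addresses below are illustrative, not part of the script.

```python
# Sketch of checking vSAN/vMotion network inputs for consistency: the
# gateway and the start/end of the per-host IP range should sit inside
# the CIDR you provide. range_in_cidr is a hypothetical helper.
import ipaddress

def range_in_cidr(cidr, gateway, start, end):
    """True if the gateway and the IP range endpoints are inside the CIDR."""
    net = ipaddress.ip_network(cidr, strict=True)
    return all(ipaddress.ip_address(a) in net for a in (gateway, start, end))

# Example vSAN network inputs (placeholder values)
assert range_in_cidr("172.16.13.0/24", "172.16.13.1",
                     "172.16.13.101", "172.16.13.108")
```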
  18. When prompted, enter yes to provide the Management Network details.
  19. Enter the Management Network details:
    • VLAN ID
    • CIDR
    • Subnet mask
    • Gateway IP
    • MTU
  20. Select the NIC profile.
  21. Select the vSphere Distributed Switch (vDS) option Separate DVS for overlay traffic.
  22. Enter the vDS details:
    • System vDS name
    • MTU value
    • Port group names for the management, vSAN, and vMotion networks
  23. Enter the Overlay vDS name.
  24. Choose the NICs for overlay traffic (minimum of two, with comma separation).
  25. Enter the Geneve VLAN ID.
  26. Select the IP allocation method for the Host Overlay Network TEPs.
    DHCP
    With this option, NSX uses DHCP for the Host Overlay Network TEPs. A DHCP server must be configured on the NSX Host Overlay (Host TEP) VLAN. When NSX creates TEPs for the VI workload domain, they are assigned IP addresses from the DHCP server.
    Static IP Pool
    With this option, NSX uses a static IP pool for the Host Overlay Network TEPs. You can reuse an existing IP pool or create a new one.
    To create a new static IP pool, provide the following information:
    • Pool Name
    • Description
    • CIDR
    • IP Range
    • Gateway IP
    Make sure the IP range includes enough IP addresses for the number of hosts that will use the static IP Pool. The number of IP addresses required depends on the number of pNICs on the ESXi hosts that are used for the vSphere Distributed Switch that handles host overlay networking. For example, a host with four pNICs that uses two pNICs for host overlay traffic requires two IP addresses in the static IP pool.
    You cannot stretch a cluster that uses static IP addresses for the NSX Host Overlay Network TEPs.
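The sizing rule above (one TEP address per overlay pNIC per host) can be sketched as a quick arithmetic check. The helper and addresses below are hypothetical, not part of the script.

```python
# Sketch of the static IP pool sizing rule: each ESXi host needs one TEP
# address per pNIC used for host overlay traffic, so the pool's range must
# hold at least hosts * overlay_pnics addresses. Hypothetical helper.
import ipaddress

def pool_is_large_enough(start, end, hosts, overlay_pnics):
    """True if the inclusive range start..end covers the required TEPs."""
    size = int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1
    return size >= hosts * overlay_pnics

# Four hosts with two overlay pNICs each need 8 addresses; the range
# 192.168.50.10-192.168.50.17 (placeholder values) provides exactly 8.
print(pool_is_large_enough("192.168.50.10", "192.168.50.17", 4, 2))  # True
```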
  27. Enter and confirm the VxRail Manager root and admin passwords.
  28. Select the license keys for VMware vSAN and VMware NSX, and apply a vSphere license.
  29. Press Enter to begin the validation process.
  30. When validation succeeds, press Enter to add the VxRail cluster.
    You can monitor the status of the Add VxRail Cluster workflow in the Tasks widget of the SDDC Manager UI. Click REFRESH to update the status.
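Because the script drives the VMware Cloud Foundation on Dell VxRail public API, the workflow can also be tracked programmatically through the Tasks API (GET /v1/tasks/{id}). The helper below interprets a Task payload; the status values follow the documented Task model, but verify the exact field names against your release.

```python
# Sketch of interpreting a Task payload from the VCF Tasks API
# (GET /v1/tasks/{id}). The "status" values shown are assumptions based
# on the documented Task model; confirm them for your VCF release.
def task_state(task):
    """Summarize a Task payload as done, failed, or running."""
    status = (task.get("status") or "").upper()
    if status == "SUCCESSFUL":
        return "done"
    if status == "FAILED":
        return "failed"
    return "running"

print(task_state({"name": "Adding VxRail Cluster", "status": "SUCCESSFUL"}))  # done
```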