Add a VxRail Cluster Using the Workflow Optimization Script
You can use the Workflow Optimization script to perform all of the steps to add a VxRail cluster in one place.
- Image the workload domain nodes. For information on imaging the nodes, refer to Dell EMC VxRail documentation.
- The IP addresses and Fully Qualified Domain Names (FQDNs) for the ESXi hosts, VxRail Manager, and NSX Manager instances must be resolvable by DNS.
- If you are using DHCP for the NSX Host Overlay Network, a DHCP server must be configured on the NSX Host Overlay VLAN of the management domain. When NSX-T Data Center creates TEPs for the VI workload domain, they are assigned IP addresses from the DHCP server.
The Workflow Optimization script uses the VMware Cloud Foundation on Dell EMC VxRail API to perform all of the steps to add a VxRail cluster in one place. See Create a Cluster with Workflow Optimization for more information about the API.
- Download the .zip file for the Workflow Optimization script. For 4.5.1 and 4.5.2: https://developer.vmware.com/web/dp/samples?id=8115
- Unzip the file and copy the directory to the /home/vcf directory on the SDDC Manager VM. 4.5.1 and 4.5.2 directory: WorkflowOptimization-VCF-4510-master. 4.5 directory: WorkflowOptimization-VCF-4500-master.
- Using SSH, log in to the SDDC Manager VM with the user name vcf and the password you specified in the deployment parameter sheet.
- In the /home/vcf/WorkflowOptimization-VCF-4510-master directory, run python vxrail_workflow_optimization_automator.py.
- Enter the corresponding option for Add Cluster.
- When prompted, select a workload domain to which you want to import the cluster.
- Select Step by step input.
- Enter the cluster name.
- Enter the VxRail Manager FQDN.
- To trust the SSL and SSH thumbprints, enter Y.
- Select the desired nodes (a minimum of two, comma-separated).
- Enter the FQDN for each host.
- Enter passwords for the discovered hosts. You can either:
  - Enter a single password for all the discovered hosts.
  - Enter passwords individually for each discovered host.
- Enter the vSAN Network details:
- VLAN Id
- CIDR
- Subnet mask
- Gateway IP
- IP Range (assign one per host from step 12)
- Enter the vMotion Network details:
- VLAN Id
- CIDR
- Subnet mask
- Gateway IP
- IP Range (assign one per host from step 12)
- When prompted, enter Y to provide the Management Network details.
- Enter the Management Network details:
- VLAN Id
- CIDR
- Subnet mask
- Gateway IP
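Before entering the vSAN, vMotion, and Management Network details above, you can sanity-check the values locally. The following is a minimal sketch using Python's standard ipaddress module; the CIDRs, gateways, and IP ranges shown are placeholder assumptions, not values from a real environment:

```python
import ipaddress

# Placeholder network details (assumptions for illustration only).
networks = {
    "vSAN":    {"cidr": "172.16.13.0/24", "gateway": "172.16.13.1",
                "ip_range": ("172.16.13.101", "172.16.13.108")},
    "vMotion": {"cidr": "172.16.12.0/24", "gateway": "172.16.12.1",
                "ip_range": ("172.16.12.101", "172.16.12.108")},
}

def check_network(name, cidr, gateway, ip_range):
    """Verify the gateway and both IP range endpoints fall inside the CIDR."""
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    start, end = (ipaddress.ip_address(ip) for ip in ip_range)
    assert gw in net, f"{name}: gateway {gw} not in {net}"
    assert start in net and end in net, f"{name}: range outside {net}"
    assert start <= end, f"{name}: range start is after range end"
    # Number of addresses available in the range (one is needed per host).
    return int(end) - int(start) + 1

for name, details in networks.items():
    print(name, check_network(name, **details), "addresses available")
```

A range of 172.16.13.101–172.16.13.108 provides eight addresses, enough for one address per host in a cluster of up to eight nodes.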
- Select the NIC profile.
- Select the vSphere Distributed Switch (vDS) option to Separate DVS for overlay traffic.
- Enter the vDS details:
- System name
- Port group names for Management, vSAN, and vMotion
- Enter the Overlay name.
- Choose the NICs for overlay traffic (a minimum of two, comma-separated). The script retrieves the shared NSX-T cluster information.
- Enter the Geneve VLAN ID. The existing NSX-T instance information is displayed.
- Select the IP allocation method for the Host Overlay Network TEPs.
  - DHCP: With this option, VMware Cloud Foundation uses DHCP for the Host Overlay Network TEPs. A DHCP server must be configured on the NSX-T host overlay (Host TEP) VLAN. When NSX creates TEPs for the VI workload domain, they are assigned IP addresses from the DHCP server.
  - Static IP Pool: With this option, VMware Cloud Foundation uses a static IP pool for the Host Overlay Network TEPs. You can re-use an existing IP pool or create a new one. To create a new static IP pool, provide the following information:
    - Pool Name
    - Description
    - CIDR
    - IP Range
    - Gateway IP
  Note: You cannot stretch a cluster that uses static IP addresses for the NSX-T Host Overlay Network TEPs.
- Enter and confirm the VxRail Manager root and admin passwords.
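If you create a new static IP pool, it must be large enough to cover the TEPs that NSX creates for the hosts in the cluster. The sketch below checks the pool capacity; the pool range is a placeholder, and the assumption of two TEPs per host is illustrative only (the actual TEP count depends on the NIC and uplink configuration):

```python
import ipaddress

def pool_capacity(range_start, range_end):
    """Count the addresses in a static IP pool range (inclusive)."""
    start = ipaddress.ip_address(range_start)
    end = ipaddress.ip_address(range_end)
    return int(end) - int(start) + 1

# Placeholder pool range (assumption for illustration).
capacity = pool_capacity("172.16.14.101", "172.16.14.116")
hosts = 4
teps_per_host = 2  # assumption; depends on the NIC/uplink profile
print(capacity >= hosts * teps_per_host)  # True when the pool is large enough
```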
- Select the license keys for VMware vSAN and NSX-T, and apply a vSphere license.
- Press Enter to begin the validation process.
- When validation succeeds, press Enter to import the VxRail cluster. You can monitor the status of the Add VxRail Cluster workflow in the Tasks widget of the SDDC Manager UI by clicking REFRESH.
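Behind the scenes, the steps above are translated by the script into VMware Cloud Foundation on Dell EMC VxRail API calls, as described in Create a Cluster with Workflow Optimization. The following is a heavily simplified sketch of what a cluster-creation request body might look like; the field names and values are illustrative assumptions, and the real payload (see the API reference) carries far more detail, including the network and vDS specifications gathered by the script:

```python
import json

# Skeleton of a cluster-creation request body. Field names here are
# illustrative assumptions -- consult the VMware Cloud Foundation API
# reference for the authoritative schema.
payload = {
    "domainId": "example-domain-id",       # workload domain to add the cluster to
    "computeSpec": {
        "clusterSpecs": [{
            "name": "example-cluster-01",  # cluster name entered in the script
        }]
    },
}

def submit(payload, dry_run=True):
    """Serialize the request; a real call would POST it to SDDC Manager."""
    body = json.dumps(payload)
    if dry_run:
        return body  # no SDDC Manager available here; just return the JSON
    # A real submission would POST `body` to the SDDC Manager clusters
    # endpoint with an authenticated session (path is an assumption):
    # requests.post("https://SDDC_MANAGER_FQDN/v1/clusters", data=body, ...)

print(submit(payload))
```

The script performs the validation step for you (the validation pass above) before submitting the actual creation request.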