vSphere Networking Design Decisions for a Virtual Infrastructure Workload Domain

Use this design decision list for reference related to the configuration of the vSphere Distributed Switch instances and VMkernel adapters in a VMware Cloud Foundation environment.
The configuration tasks for most design decisions are automated in VMware Cloud Foundation. You must perform the configuration manually only for a limited number of decisions as noted in the design implication.
Design Decisions for vSphere Distributed Switch
Decision ID
Design Decision
Design Justification
Design Implication
VCF-WLD-VCS-VDS-001
Use a single vSphere Distributed Switch per vSphere cluster.
  • Reduces the complexity of the network design.
  • Reduces the size of the fault domain.
Increases the number of vSphere Distributed Switches that must be managed because you cannot share a distributed switch between clusters.
VCF-WLD-VCS-VDS-002
Configure the MTU size of the vSphere Distributed Switch to 9000 bytes for jumbo frames.
  • Supports the MTU size required by system traffic types.
  • Improves traffic throughput.
When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
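The implication above can be sketched as a quick validation step. The following is a minimal, illustrative Python check that flags any hop on the network path whose MTU is below the jumbo-frame size; the hop names and values are hypothetical examples, not output from a live environment.

```python
# Sketch: verify that every hop on the network path supports the jumbo-frame
# MTU required by the distributed switch. Hop names and MTU values below are
# illustrative assumptions.

JUMBO_MTU = 9000

def mtu_mismatches(path_mtus: dict[str, int], required: int = JUMBO_MTU) -> list[str]:
    """Return the hops whose configured MTU is below the required size."""
    return [hop for hop, mtu in path_mtus.items() if mtu < required]

path = {
    "vmk1 (VMkernel port)": 9000,
    "vSphere Distributed Switch": 9000,
    "physical switch uplink": 1500,   # misconfigured hop in this example
    "router interface": 9000,
}

bad = mtu_mismatches(path)
```

A non-empty result indicates that jumbo frames would be fragmented or dropped somewhere on the path, which is why the design implication requires an end-to-end MTU match.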
Design Decisions on Distributed Port Groups
Decision ID
Design Decision
Design Justification
Design Implication
VCF-WLD-VCS-VDS-003
Use static port binding for all port groups in a VI workload domain cluster.
  • Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch. This configuration supports historical data and port level monitoring.
  • Because the vCenter Server instance managing the VI workload domain resides in the management domain, using an ephemeral port group for vCenter Server recoverability is not required.
None
VCF-WLD-VCS-VDS-004
Use the Route based on physical NIC load teaming algorithm for the management port group.
Reduces the complexity of the network design and increases resiliency and performance.
None
VCF-WLD-VCS-VDS-005
Use the Route based on physical NIC load teaming algorithm for the vSphere vMotion port group.
Reduces the complexity of the network design and increases resiliency and performance.
None
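The Route based on physical NIC load policy periodically evaluates uplink utilization and moves virtual switch ports off an uplink that is saturated (vSphere checks roughly every 30 seconds against a 75% mean-utilization threshold). The following Python sketch models only that rebalancing decision; the data structures and uplink names are illustrative assumptions, not the ESXi implementation.

```python
# Sketch of the rebalancing rule behind "Route based on physical NIC load":
# keep a port on its current uplink unless that uplink is saturated, then
# move it to the least-loaded uplink. Threshold per vSphere documentation;
# uplink names and utilization figures are illustrative.

SATURATION_THRESHOLD = 0.75  # 75% mean utilization triggers a move

def pick_uplink(utilization: dict[str, float], current: str) -> str:
    """Return the uplink a port should use given current utilization ratios."""
    if utilization[current] <= SATURATION_THRESHOLD:
        return current  # no move: avoids needless MAC address flapping
    return min(utilization, key=utilization.get)

uplinks = {"vmnic0": 0.82, "vmnic1": 0.30}
chosen = pick_uplink(uplinks, current="vmnic0")
```

Because ports move only when an uplink is actually saturated, this policy balances load without requiring link aggregation on the physical switches, which is what keeps the network design simple.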
Design Decisions on the vMotion TCP/IP Stack
Decision ID
Design Decision
Design Justification
Design Implication
VCF-WLD-VCS-VDS-006
Use the vMotion TCP/IP stack for vSphere vMotion traffic.
By using the vMotion TCP/IP stack, vSphere vMotion traffic can be assigned a default gateway on its own subnet and can go over Layer 3 networks.
In the vSphere Client, the vMotion TCP/IP stack is not available in the wizard for creating a VMkernel network adapter at the distributed port group level. You must create the VMkernel adapter directly on the ESXi host.
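The manual step above can also be performed from the ESXi command line. The Python sketch below only assembles the corresponding esxcli invocation; the interface and port group names are hypothetical, and the exact flags should be verified against your ESXi version (distributed port groups may require the `--dvs-name`/`--dvport-id` variant instead of `--portgroup-name`).

```python
# Sketch: build the esxcli command for creating a VMkernel adapter on the
# vMotion TCP/IP stack directly on an ESXi host. Names are illustrative;
# verify flag names against your ESXi release before use.

def vmk_add_command(interface: str, portgroup: str, netstack: str = "vmotion") -> str:
    """Return the esxcli command string for adding a VMkernel adapter."""
    return (
        "esxcli network ip interface add "
        f"--interface-name={interface} "
        f"--portgroup-name={portgroup} "
        f"--netstack={netstack}"
    )

cmd = vmk_add_command("vmk1", "DPG-vMotion")
```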
Design Decisions on vSphere Network I/O Control
Decision ID
Design Decision
Design Justification
Design Implication
VCF-WLD-VCS-VDS-007
Enable Network I/O Control on the vSphere Distributed Switch for the VI workload domain cluster.
Increases resiliency and performance of the network.
If configured incorrectly, Network I/O Control might impact network performance for critical traffic types.
VCF-WLD-VCS-VDS-008
Set the share value for management traffic to Normal.
By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
None.
VCF-WLD-VCS-VDS-009
Set the share value for vSphere vMotion traffic to Low.
During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
During times of network contention, a vSphere vMotion migration takes longer than usual to complete.
VCF-WLD-VCS-VDS-010
Set the share value for virtual machines to High.
Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
None.
VCF-WLD-VCS-VDS-011
Set the share value for vSphere Fault Tolerance to Low.
This design does not use vSphere Fault Tolerance. Fault tolerance traffic can be set to the lowest priority.
None.
VCF-WLD-VCS-VDS-012
Set the share value for the principal storage traffic type, for example, vSAN, to High.
During times of network contention, principal storage traffic needs a guaranteed bandwidth to support virtual machine performance.
None.
VCF-WLD-VCS-VDS-013
Set the share value for backup traffic to Low.
During times of network contention, the primary functions of the SDDC must continue to have access to network resources with priority over backup traffic.
During times of network contention, backups are slower than usual.
VCF-WLD-VCS-VDS-014
Set the share value for other traffic types to Low.
By default, VMware Cloud Foundation does not use these other system traffic types. Hence, they can be assigned the lowest priority.
None.
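The share values in decisions VDS-008 through VDS-014 only matter during contention, when Network I/O Control divides a physical adapter's bandwidth in proportion to each traffic type's shares. The Python sketch below illustrates that proportional split using the vSphere default share numbers (Low=25, Normal=50, High=100); the 10 Gbit/s link speed is an illustrative assumption.

```python
# Sketch: proportional bandwidth split under Network I/O Control contention.
# Share numbers are the vSphere defaults (Low=25, Normal=50, High=100);
# the 10 Gbit/s adapter speed is an assumption for illustration.

SHARES = {"Low": 25, "Normal": 50, "High": 100}

def allocate_bandwidth(traffic_shares: dict[str, str], link_gbps: float) -> dict[str, float]:
    """Split link bandwidth proportionally to each traffic type's share value."""
    weights = {t: SHARES[level] for t, level in traffic_shares.items()}
    total = sum(weights.values())
    return {t: round(link_gbps * w / total, 3) for t, w in weights.items()}

design = {
    "management": "Normal",      # VCF-WLD-VCS-VDS-008
    "vmotion": "Low",            # VCF-WLD-VCS-VDS-009
    "virtual_machines": "High",  # VCF-WLD-VCS-VDS-010
    "fault_tolerance": "Low",    # VCF-WLD-VCS-VDS-011
    "vsan": "High",              # VCF-WLD-VCS-VDS-012
    "backup": "Low",             # VCF-WLD-VCS-VDS-013
}

alloc = allocate_bandwidth(design, link_gbps=10.0)
```

Under this design, virtual machine and vSAN traffic each receive the largest guaranteed slice, management sits in between, and vMotion, fault tolerance, and backup traffic share the smallest slices, matching the priorities stated in the justifications above.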