Configuring NSX Resources in Manager Mode
Last Updated October 11, 2024

There are two methods to configure certain networking resources for NCP. This section describes configuring resources in Manager mode. In the NCP configuration file ncp.ini, you can specify NSX resources using their UUIDs or names.
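For example, a resource such as the tier-0 router configured later in this section can be referenced in either form. The excerpt below is only a sketch; the UUID and the display name are placeholder values.

[nsx_v3]
# Reference the resource by UUID ...
tier0_router = <uuid-of-T0LR1>
# ... or, equivalently, by its display name in NSX Manager
# tier0_router = T0LR1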

Logical Routers and Logical Switch

  1. Create a logical switch for the Kubernetes nodes, for example, LS1.
  2. Create a tier-0 logical router, for example, T0LR1. Set the tier0_router option in the [nsx_v3] section of ncp.ini with the logical router's ID if you do not have a shared tier-1 topology. See below for information on configuring a shared tier-1 topology. Set the HA mode to active-standby if you plan to configure NAT rules on this logical router. Otherwise, set it to active-active. Enable route redistribution. Also configure this router for access to the external network.
  3. Create a tier-1 logical router, for example, T1LR1. Connect this logical router to the tier-0 logical router.
  4. Configure router advertisement for T1LR1. At the very least, NSX-connected and NAT routes should be enabled.
  5. Connect T1LR1 to LS1. Make sure that the logical router port's IP address does not conflict with the IP addresses of the Kubernetes nodes.
  6. For each node VM, make sure that the vNIC for container traffic is attached to the logical switch that is automatically created. You can find it in the Networking tab with the same name as the logical switch, that is, LS1.
NCP must know the VIF ID of the vNIC. The corresponding logical switch ports must have the following two tags. For one tag, specify the name of the node. For the other tag, specify the name of the cluster. For the scope, specify the appropriate value as indicated below.
Tag             Scope
Node name       ncp/node_name
Cluster name    ncp/cluster
If the node name changes, you must update the tag. To retrieve the node name, you can run the following command:
kubectl get nodes
If you want to extend the Kubernetes cluster while NCP is running, for example, by adding more nodes to the cluster, you must add the tags to the corresponding switch ports before running "kubeadm join". If you forget to add the tags before running "kubeadm join", the new nodes will not have connectivity. In this case, you must add the tags and restart NCP to resolve the issue.
To identify the switch port for a node VM, you can make the following API call:
/api/v1/fabric/virtual-machines
In the response, look for the node VM and retrieve the value of the external_id attribute. Alternatively, you can make the following API call:
/api/v1/search -G --data-urlencode "query=(resource_type:VirtualMachine AND display_name:<node_vm_name>)"
After you have the external ID, you can use it to retrieve the VIFs for the VM with the following API. Note that VIFs are not populated until the VM is started.
/api/v1/search -G --data-urlencode \
"query=(resource_type:VirtualNetworkInterface AND external_id:<node_vm_ext_id> AND \
_exists_:lport_attachment_id)"
The lport_attachment_id attribute is the VIF ID for the node VM. You can then find the logical port for this VIF and add the required tags.
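The following shell sketch strings these lookups together with curl and jq. It is illustrative only: the manager address, credentials, and node/cluster names are placeholders, and the logical-port lookup by attachment_id and the PUT used to write the tags back are assumptions based on the standard NSX Manager API rather than steps prescribed above.

# Placeholders -- adjust for your environment.
NSX_MGR="https://nsx-mgr.example.com"
AUTH="admin:<password>"
NODE_VM="k8s-node-1"        # display name of the node VM
NODE_NAME="k8s-node-1"      # name reported by "kubectl get nodes"
CLUSTER="k8s-cluster-1"     # cluster name used in ncp.ini

# 1. Resolve the node VM's external_id.
EXT_ID=$(curl -sk -u "$AUTH" -G "$NSX_MGR/api/v1/search" \
  --data-urlencode "query=(resource_type:VirtualMachine AND display_name:$NODE_VM)" \
  | jq -r '.results[0].external_id')

# 2. Resolve the VIF ID (lport_attachment_id) for that VM.
VIF_ID=$(curl -sk -u "$AUTH" -G "$NSX_MGR/api/v1/search" \
  --data-urlencode "query=(resource_type:VirtualNetworkInterface AND external_id:$EXT_ID AND _exists_:lport_attachment_id)" \
  | jq -r '.results[0].lport_attachment_id')

# 3. Find the logical port attached to that VIF (assumed attachment_id filter).
PORT=$(curl -sk -u "$AUTH" "$NSX_MGR/api/v1/logical-ports?attachment_id=$VIF_ID" | jq '.results[0]')
PORT_ID=$(echo "$PORT" | jq -r '.id')

# 4. Append the ncp/node_name and ncp/cluster tags and write the port back.
echo "$PORT" | jq --arg n "$NODE_NAME" --arg c "$CLUSTER" \
  '.tags = ((.tags // []) + [{"scope":"ncp/node_name","tag":$n},{"scope":"ncp/cluster","tag":$c}])' \
  | curl -sk -u "$AUTH" -X PUT -H "Content-Type: application/json" --data-binary @- \
      "$NSX_MGR/api/v1/logical-ports/$PORT_ID"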

IP Blocks for Kubernetes Pods

Navigate to Networking > IP Address Pools to create one or more IP blocks. Specify the IP block in CIDR format. Set the container_ip_blocks option in the [nsx_v3] section of ncp.ini to the UUIDs of the IP blocks.
By default, projects share IP blocks specified in container_ip_blocks. You can create IP blocks specifically for no-SNAT namespaces (for Kubernetes) or clusters (for TAS) by setting the no_snat_ip_blocks option in the [nsx_v3] section of ncp.ini.
If you create no-SNAT IP blocks while NCP is running, you must restart NCP. Otherwise, NCP will keep using the shared IP blocks until they are exhausted.
When you create an IP block, the prefix must not be larger than the value of the subnet_prefix option in NCP's configuration file ncp.ini. The default is 24.
NCP will allocate additional subnets for a namespace if the originally allocated subnet is exhausted.
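Putting these options together, a minimal sketch of the [nsx_v3] section might look like the following; the UUIDs are placeholders, with multiple values shown comma-separated.

[nsx_v3]
# Shared IP blocks that namespaces draw pod subnets from
container_ip_blocks = <uuid-of-ip-block-1>,<uuid-of-ip-block-2>
# Optional IP blocks reserved for no-SNAT namespaces (Kubernetes) or clusters (TAS)
no_snat_ip_blocks = <uuid-of-no-snat-ip-block>
# Prefix length of each subnet carved out of the IP blocks (default 24)
subnet_prefix = 24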

External IP Pools

An external IP pool is used for allocating IP addresses that will be used for translating pod IPs using SNAT rules, and for exposing Ingress controllers and LoadBalancer-type services using SNAT/DNAT rules, just like OpenStack floating IPs. These IP addresses are also referred to as external IPs.
Navigate to Networking > IP Address Pools > IP Pools to create an IP pool. Set the external_ip_pools option in the [nsx_v3] section of ncp.ini to the UUIDs of the IP pools.
Multiple Kubernetes clusters use the same external IP pool. Each NCP instance uses a subset of this pool for the Kubernetes cluster that it manages. By default, the same subnet prefix as for pod subnets is used. To use a different subnet size, update the external_subnet_prefix option in the [nsx_v3] section of ncp.ini.
You can change to a different IP pool by changing the configuration file and restarting NCP.
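As an illustrative excerpt (the UUID is a placeholder and the prefix value is only an example):

[nsx_v3]
# External IP pool(s) used for SNAT and for Ingress/LoadBalancer addresses
external_ip_pools = <uuid-of-external-ip-pool>
# Optional: subnet size for external IPs; defaults to the pod subnet prefix
external_subnet_prefix = 24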

Shared Tier-1 Topology

To enable a shared tier-1 topology, perform the following configurations:
  • Set the top_tier_router option to the ID of either a tier-0 logical router or a tier-1 logical router. If it is a tier-1 logical router, you need to connect it to a tier-0 logical router for external connections. This option replaces the tier0_router option.
  • If SNAT for Pod traffic is enabled, disconnect T1LR1 from LS1 (the logical switch for the Kubernetes nodes), and connect the tier-0 or tier-1 router set in top_tier_router to LS1.
  • Set the single_tier_topology option to True. The default value is False. (A combined configuration example follows this list.)
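A minimal sketch of these options, assuming a tier-1 logical router is used as the shared top-tier router (the value is a placeholder and could also be a name):

[nsx_v3]
# Shared top-tier router (tier-0 or tier-1); replaces the tier0_router option
top_tier_router = <uuid-of-T1LR1>
# Enable the shared tier-1 topology
single_tier_topology = True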

(Optional) (For Kubernetes only) Firewall Marker Sections

To allow the administrator to create firewall rules and not have them interfere with NCP-created firewall sections based on network policies, navigate to Security > Distributed Firewall > General and create two firewall sections.
Specify marker firewall sections by setting the bottom_firewall_section_marker and top_firewall_section_marker options in the [nsx_v3] section of ncp.ini.
The bottom firewall section must be below the top firewall section. With these firewall sections created, all firewall sections created by NCP for isolation will be created above the bottom firewall section, and all firewall sections created by NCP for policy will be created below the top firewall section. If these marker sections are not created, all isolation rules will be created at the bottom, and all policy sections will be created at the top. Multiple marker firewall sections with the same value per cluster are not supported and will cause an error.
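As an illustrative excerpt, with placeholder UUIDs for the two manually created sections:

[nsx_v3]
# Marker sections created under Security > Distributed Firewall > General
top_firewall_section_marker = <uuid-of-top-marker-section>
bottom_firewall_section_marker = <uuid-of-bottom-marker-section>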