Configuring NSX Resources in Manager Mode
Last Updated October 11, 2024
There are two methods to configure certain networking resources for NCP. This section describes configuring resources in Manager mode.
In the NCP configuration file ncp.ini, you can specify NSX resources using their UUIDs or names.
Logical Routers and Logical Switch
- Create a logical switch for the Kubernetes nodes, for example, LS1.
- Create a tier-0 logical router, for example, T0LR1. Set the tier0_router option in the [nsx_v3] section of ncp.ini with the logical router's ID if you do not have a shared tier-1 topology. See below for information on configuring a shared tier-1 topology. Set the HA mode to active-standby if you plan to configure NAT rules on this logical router. Otherwise, set it to active-active. Enable route redistribution. Also configure this router for access to the external network.
- Create a tier-1 logical router, for example, T1LR1. Connect this logical router to the tier-0 logical router.
- Configure router advertisement for T1LR1. At the very least, NSX-connected and NAT routes should be enabled.
- Connect T1LR1 to LS1. Make sure that the logical router port's IP address does not conflict with the IP addresses of the Kubernetes nodes.
- For each node VM, make sure that the vNIC for container traffic is attached to the logical switch that is automatically created. You can find it in the Networking tab, with the same name as the logical switch, that is, LS1.
NCP must know the VIF ID of the vNIC. The
corresponding logical switch ports must have the following two tags. For one tag,
specify the name of the node. For the other tag, specify the name of the cluster.
For the scope, specify the appropriate value as indicated below.
| Tag | Scope |
|---|---|
| Node name | ncp/node_name |
| Cluster name | ncp/cluster |
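For example, the tags on a node's logical switch port might look like the following JSON fragment, where node1 and k8scluster are illustrative values for the node name and cluster name:

```
"tags": [
  { "scope": "ncp/node_name", "tag": "node1" },
  { "scope": "ncp/cluster", "tag": "k8scluster" }
]
```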
If the node name changes, you must update
the tag. To retrieve the node name, you can run the following
command:
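For example, with kubectl:

```
kubectl get nodes
```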
If you want to extend the Kubernetes
cluster while NCP is running, for example, add more nodes to the cluster, you must
add the tags to the corresponding switch ports before running "kubeadm join". If you
forget to add the tags before running "kubeadm join", the new nodes will not have
connectivity. In this case, you must add the tags and restart NCP to resolve the
issue.
To identify the switch port for a node
VM, you can make the following API
call:
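A likely form of this request, assuming the NSX-T Manager fabric API (the manager address is a placeholder):

```
GET https://<nsx-manager>/api/v1/fabric/virtual-machines
```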
In the response, look for the Node VM and
retrieve the value for the ``external_id`` attribute. Or you can make the following
API
call:
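For example, assuming the same endpoint supports filtering by display name (the node VM name is a placeholder):

```
GET https://<nsx-manager>/api/v1/fabric/virtual-machines?display_name=<node_vm_name>
```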
After you have the external ID, you can
use it to retrieve the VIFs for the VM with the following API. Note that VIFs are
not populated until the VM is
started.
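A likely form of this request, assuming the fabric VIFs endpoint with an owner_vm_id filter (the external ID is a placeholder):

```
GET https://<nsx-manager>/api/v1/fabric/vifs?owner_vm_id=<vm_external_id>
```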
The lport_attachment_id attribute is the VIF ID for the node VM. You can then find the logical port for this VIF and add the required tags.
IP Blocks for Kubernetes Pods
In the NSX Manager UI, navigate to the IP block configuration page to create one or more IP blocks. Specify each IP block in CIDR format.
Set the container_ip_blocks option in the [nsx_v3] section of ncp.ini to the UUIDs of the IP blocks. By default, projects share the IP blocks specified in container_ip_blocks. You can create IP blocks specifically for no-SNAT namespaces (for Kubernetes) or clusters (for TAS) by setting the no_snat_ip_blocks option in the [nsx_v3] section of ncp.ini. If you create no-SNAT IP blocks while NCP is running, you must restart NCP. Otherwise, NCP will keep using the shared IP blocks until they are exhausted.
When you create an IP block, the prefix must not be larger than the value of the subnet_prefix option in NCP's configuration file ncp.ini. The default is 24. NCP will allocate additional subnets for a namespace if the originally allocated subnet is exhausted.
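A minimal ncp.ini sketch of the IP block options described above (the UUIDs are placeholders, and subnet_prefix is shown at its default value):

```
[nsx_v3]
# UUIDs of the IP blocks shared by all projects (placeholder values)
container_ip_blocks = <ip-block-uuid-1>,<ip-block-uuid-2>
# Optional: IP blocks used only for no-SNAT namespaces (Kubernetes) or clusters (TAS)
no_snat_ip_blocks = <no-snat-ip-block-uuid>
# Prefix length of the subnets allocated from the IP blocks; the default is 24
subnet_prefix = 24
```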
External IP Pools
An external IP pool is used to allocate IP addresses for translating pod IPs using SNAT rules, and for exposing Ingress controllers and services of type LoadBalancer using SNAT/DNAT rules, similar to OpenStack floating IPs. These IP addresses are also referred to as external IPs.
In the NSX Manager UI, navigate to the IP pool configuration page to create an IP pool. Set the external_ip_pools option in the [nsx_v3] section of ncp.ini to the UUIDs of the IP pools. Multiple Kubernetes clusters use the same external IP pool; each NCP instance uses a subset of this pool for the Kubernetes cluster that it manages. By default, the same subnet prefix as for pod subnets is used. To use a different subnet size, update the external_subnet_prefix option in the [nsx_v3] section of ncp.ini. You can change to a different IP pool by changing the configuration file and restarting NCP.
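A minimal ncp.ini sketch of the external IP pool options (the UUID and prefix length are placeholders):

```
[nsx_v3]
# UUIDs of the external IP pools shared by the Kubernetes clusters (placeholder value)
external_ip_pools = <external-ip-pool-uuid>
# Optional: prefix length for external subnets; defaults to the pod subnet prefix
external_subnet_prefix = 24
```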
Shared Tier-1 Topology
To enable a shared tier-1 topology,
perform the following configurations:
- Set the top_tier_router option to the ID of either a tier-0 logical router or a tier-1 logical router. If it is a tier-1 logical router, you need to connect it to a tier-0 logical router for external connections. This option replaces the tier0_router option.
- If SNAT for Pod traffic is enabled, disconnect T1LR1 from LS1 (the logical switch for the Kubernetes nodes), and connect the tier-0 or tier-1 router set in top_tier_router to LS1.
- Set the single_tier_topology option to True. The default value is False.
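A minimal ncp.ini sketch of the options set in the steps above (the router ID is a placeholder):

```
[nsx_v3]
# ID of the tier-0 or tier-1 logical router used as the single top-tier router (placeholder)
top_tier_router = <top-tier-router-uuid>
# Enable the shared tier-1 topology; the default is False
single_tier_topology = True
```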
Firewall Marker Sections (Optional, for Kubernetes Only)
To allow the administrator to create firewall rules without having them interfere with the NCP-created firewall sections based on network policies, navigate to the distributed firewall configuration in the NSX Manager UI and create two firewall sections. Specify the marker firewall sections by setting the bottom_firewall_section_marker and top_firewall_section_marker options in the [nsx_v3] section of ncp.ini. The bottom firewall section must be below the top firewall section.
With these firewall sections created, all firewall sections created by NCP for isolation will be created above the bottom firewall section, and all firewall sections created by NCP for policy will be created below the top firewall section. If these marker sections are not created, all isolation rules will be created at the bottom, and all policy sections will be created at the top. Multiple marker firewall sections with the same value per cluster are not supported and will cause an error.
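A minimal ncp.ini sketch of the marker section options (the section UUIDs are placeholders):

```
[nsx_v3]
# Marker section that NCP-created isolation sections are placed above (placeholder)
bottom_firewall_section_marker = <bottom-marker-section-uuid>
# Marker section that NCP-created policy sections are placed below (placeholder)
top_firewall_section_marker = <top-marker-section-uuid>
```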