Configuring NSX Resources in Policy Mode

There are two methods to configure certain networking resources for NCP. This section describes configuring resources in Policy mode.
In the NCP configuration file ncp.ini, you must specify NSX resources using their resource IDs. Usually a resource's name and ID are the same. To be completely sure, in the NSX Manager web UI, click the three-dot icon that displays options for a resource and select Copy path to clipboard. Paste the path into an application such as Notepad. The last part of the path is the resource ID.

Gateways and Segments
- Create a segment for the Kubernetes nodes, for example, Segment1.
- Create a tier-0 gateway, for example, T0GW1. Set the top_tier_router option in the [nsx_v3] section of ncp.ini to the gateway's ID if you do not have a shared tier-1 topology. See below for information on configuring a shared tier-1 topology. Set the HA mode to active-standby if you plan to configure NAT rules on this gateway. Otherwise, set it to active-active. Enable route redistribution. Also configure this gateway for access to the external network.
- Create a tier-1 gateway, for example, T1GW1. Connect this gateway to the tier-0 gateway.
- Configure route advertisement for T1GW1. At the very least, NSX-connected and NAT routes must be enabled.
- Connect T1GW1 to Segment1. Make sure that the gateway port's IP address does not conflict with the IP addresses of the Kubernetes nodes.
- For each node VM, make sure that the vNIC for container traffic is attached to the logical switch that is automatically created. You can find it in the Networking tab with the same name as the segment, that is, Segment1.
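Using the example names above, the relevant ncp.ini entries might look like the following sketch. The gateway ID is a placeholder; use the resource ID copied from NSX Manager in your environment.

```ini
; Hypothetical sketch of ncp.ini; substitute your own resource IDs.
[nsx_v3]
; ID of the tier-0 gateway created above (non-shared tier-1 topology).
top_tier_router = T0GW1
```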
NCP must know the VIF ID of the vNIC. You
can see Segment1's ports that are automatically created by navigating to Networking
> Segments. These ports are not editable except for their tag property. These
ports must have the following tags. For one tag, specify the name of the node. For
the other tag, specify the name of the cluster. For the scope, specify the
appropriate value as indicated below.
| Tag | Scope |
|---|---|
| Node name | ncp/node_name |
| Cluster name | ncp/cluster |
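As an illustration, the two required tags follow the NSX tag format, where each tag is an object with a scope and a tag value. The node and cluster names below are hypothetical:

```shell
# Hypothetical node and cluster names; the payload shape shows the two
# tags ({"scope": ..., "tag": ...}) that NCP expects on the segment port.
NODE_NAME="k8s-node1"
CLUSTER_NAME="k8s-cluster1"
printf '{"tags": [{"scope": "ncp/node_name", "tag": "%s"}, {"scope": "ncp/cluster", "tag": "%s"}]}\n' \
  "$NODE_NAME" "$CLUSTER_NAME"
```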
These tags are automatically propagated
to the corresponding logical switch ports. If the node name changes, you must update
the tag. To retrieve the node name, you can run the following
command:
kubectl get nodes
If you want to extend the Kubernetes
cluster while NCP is running, for example, add more nodes to the cluster, you must
add the tags to the corresponding switch ports before running "kubeadm join". If you
forget to add the tags before running "kubeadm join", the new nodes will not have
connectivity. In this case, you must add the tags and restart NCP to resolve the
issue.
To identify the switch port for a node
VM, you can make the following API
call:
/api/v1/fabric/virtual-machines
In the response, look for the node VM and retrieve the value of the external_id attribute. Alternatively, you can make the following API call:
/api/v1/search -G --data-urlencode "query=(resource_type:VirtualMachine AND display_name:<node_vm_name>)"
After you have the external ID, you can
use it to retrieve the VIFs for the VM with the following API. Note that VIFs are
not populated until the VM is
started.
/api/v1/search -G --data-urlencode "query=(resource_type:VirtualNetworkInterface AND external_id:<node_vm_ext_id> AND _exists_:lport_attachment_id)"
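As a sketch, the search query above can be assembled in a shell variable and passed to the API. The external ID below is a placeholder, and the curl invocation is commented out because it requires a live NSX Manager and valid credentials:

```shell
# Hypothetical external ID of the node VM (placeholder value).
NODE_VM_EXT_ID="502e71fa-1a00-4e9d-9d4a-example"
# Build the search query for the VM's VIF.
QUERY="(resource_type:VirtualNetworkInterface AND external_id:${NODE_VM_EXT_ID} AND _exists_:lport_attachment_id)"
echo "$QUERY"
# Against a live NSX Manager you would run, for example:
# curl -k -u admin -G "https://<nsx-mgr>/api/v1/search" --data-urlencode "query=$QUERY"
```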
The lport_attachment_id attribute is the VIF ID for the node VM. You can then find the logical port for this VIF and add the required tags.

IP Blocks for Kubernetes Pods
In the NSX Manager UI, create one or more IP blocks. Specify the IP block in CIDR format.
Set the container_ip_blocks option in the [nsx_v3] section of ncp.ini to the UUIDs of the IP blocks. If you want NCP to automatically create IP blocks, you can set the container_ip_blocks option to a comma-separated list of addresses in CIDR format.

By default, projects share the IP blocks specified in container_ip_blocks. You can create IP blocks specifically for no-SNAT namespaces (for Kubernetes) or clusters (for TAS) by setting the no_snat_ip_blocks option in the [nsx_v3] section of ncp.ini. If you create no-SNAT IP blocks while NCP is running, you must restart NCP. Otherwise, NCP will keep using the shared IP blocks until they are exhausted.
When you create an IP block, the prefix must not be larger than the value of the subnet_prefix option in NCP's configuration file ncp.ini. The default is 24.

You must not modify an IP block after NCP has started allocating IP addresses from it. If you want to use a different block, make sure that NCP has not allocated any address from the block.
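The IP block settings above might be sketched in ncp.ini as follows. The UUID and CIDR values are placeholders, not values from your environment:

```ini
; Hypothetical sketch; substitute the UUIDs of IP blocks created in NSX.
[nsx_v3]
container_ip_blocks = <ip-block-uuid>
; Or let NCP create the blocks automatically from CIDRs, for example:
; container_ip_blocks = 172.24.0.0/16
; Optional: dedicated blocks for no-SNAT namespaces or clusters.
; no_snat_ip_blocks = <no-snat-ip-block-uuid>
; Subnet size carved from the blocks (default 24).
subnet_prefix = 24
```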
External IP Pools
An external IP pool is used for allocating IP addresses that are used for translating pod IPs using SNAT rules, and for exposing Ingress controllers and LoadBalancer-type services using SNAT/DNAT rules, similar to OpenStack floating IPs. These IP addresses are also referred to as external IPs.
In the NSX Manager UI, create an IP pool. Set the external_ip_pools option in the [nsx_v3] section of ncp.ini to the UUIDs of the IP pools. If you want NCP to automatically create IP pools, you can set the external_ip_pools option to a comma-separated list of addresses in CIDR format or IP ranges.

Multiple Kubernetes clusters use the same external IP pool. Each NCP instance uses a subset of this pool for the Kubernetes cluster that it manages. By default, the same subnet prefix as for pod subnets is used. To use a different subnet size, update the external_subnet_prefix option in the [nsx_v3] section of ncp.ini. You can change to a different IP pool by changing the configuration file and restarting NCP.
You must not modify an IP pool after NCP has started allocating IP addresses from it. If you want to use a different pool, make sure that NCP has not allocated any address from the pool.
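The external IP pool settings might look like the following ncp.ini sketch. The UUID and CIDR values are placeholders:

```ini
; Hypothetical sketch; substitute the UUID of the IP pool created in NSX.
[nsx_v3]
external_ip_pools = <external-ip-pool-uuid>
; Or let NCP create the pool from a CIDR or IP range, for example:
; external_ip_pools = 10.114.208.0/24
; Optional: subnet size for external subnets (defaults to the pod subnet prefix).
; external_subnet_prefix = 24
```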
Shared Tier-1 Topology
To enable a shared tier-1 topology,
perform the following configurations:
- Set the top_tier_router option to the ID of a tier-1 gateway. Connect the tier-1 gateway to a tier-0 gateway for external connections.
- If SNAT for Pod traffic is enabled, modify the uplink of the segment for Kubernetes nodes to the same tier-0 or tier-1 gateway that is set in top_tier_router.
- Set the single_tier_topology option to True. The default value is False.
- If you want NCP to automatically configure the top-tier router as a tier-1 gateway, unset the top_tier_router option and set the tier0_gateway option. NCP will create a tier-1 gateway and uplink it to the tier-0 gateway specified in the tier0_gateway option.
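The steps above might be sketched in ncp.ini as follows. The gateway IDs are hypothetical; only one of the two gateway options is used at a time:

```ini
; Hypothetical sketch of a shared tier-1 topology in ncp.ini.
[nsx_v3]
single_tier_topology = True
; Either point at an existing tier-1 gateway...
top_tier_router = T1GW1
; ...or leave top_tier_router unset and let NCP create a tier-1 gateway
; uplinked to this tier-0 gateway:
; tier0_gateway = T0GW1
```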
Note: After you set the top_tier_router option and create some namespaces, you cannot update top_tier_router to a different value and restart NCP. This operation is not supported.