Configure NSX Networking for Kubernetes Nodes
This section describes how to configure NSX networking for Kubernetes master and worker nodes. Each node must have at least two network interfaces. The first is a management interface, which might or might not be on the NSX fabric. The other interfaces provide networking for the pods, are on the NSX fabric, and are connected to a logical switch referred to as the node logical switch. The management and pod IP addresses must be routable for the Kubernetes health checks to work. For communication between the management interface and the pods, NCP automatically creates a DFW rule to allow health check and other management traffic. You can see the details of this rule in the NSX Manager GUI. Do not change or delete this rule.
For each node VM, ensure that the vNIC that is designated for container networking is attached to the node logical switch.
The VIF ID of the vNIC used for container traffic on each node must be known to NSX Container Plugin (NCP). The corresponding logical switch port must have the following two tags. For one tag, specify the name of the node; for the other, specify the name of the cluster. For the scope, specify the appropriate value as indicated below.
| Tag | Scope |
|---|---|
| Node name | ncp/node_name |
| Cluster name | ncp/cluster |
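
For illustration, the following sketch shows how these tags might be applied through the NSX Manager API. The Manager address, credentials, port UUID, node name, and cluster name are all placeholders, and the exact payload should be verified against the API documentation for your NSX version.

```
# Sketch: tag the node logical switch port so NCP can identify it.
# <nsx-mgr>, <admin>, <password>, and <port-uuid> are placeholders.

# Read the current port definition; the PUT below must include its _revision.
curl -k -u '<admin>:<password>' \
  https://<nsx-mgr>/api/v1/logical-ports/<port-uuid> -o port.json

# Edit port.json so that its "tags" array contains the two entries NCP expects,
# for example:
#   "tags": [
#     {"scope": "ncp/node_name", "tag": "node1"},
#     {"scope": "ncp/cluster",   "tag": "k8s-cluster1"}
#   ]
# then write the updated definition back.
curl -k -u '<admin>:<password>' -X PUT \
  -H 'Content-Type: application/json' \
  -d @port.json \
  https://<nsx-mgr>/api/v1/logical-ports/<port-uuid>
```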
You can identify the logical switch port for a node VM from the NSX Manager GUI.
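
If you prefer the API, you can also locate the port by its VIF attachment. This is a sketch under the same placeholder assumptions; jq is assumed to be available on the workstation.

```
# Sketch: list logical ports and pick the one attached to a known VIF ID.
# <nsx-mgr>, <admin>, <password>, and <vif-id> are placeholders; jq is assumed.
curl -k -u '<admin>:<password>' https://<nsx-mgr>/api/v1/logical-ports | \
  jq '.results[] | select(.attachment.id == "<vif-id>") | {id, display_name, tags}'
```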
If the Kubernetes node name changes, you must update the ncp/node_name tag and restart NCP. You can use the following command to get the node names:
kubectl get nodes
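
NCP runs as a Kubernetes workload, so restarting it usually means restarting its pod. The namespace and deployment name below are assumptions and may differ in your deployment.

```
# Sketch: restart NCP after updating the ncp/node_name tag.
# The namespace (nsx-system) and deployment name (nsx-ncp) are assumptions.
kubectl -n nsx-system rollout restart deployment/nsx-ncp
```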
If you add a node to a cluster while NCP is running, you must add the tags to the logical switch port before you run the kubeadm join command. Otherwise, the new node will not have network connectivity. If the tags are incorrect or missing, you can take the following steps to resolve the issue (see the sketch after this list):
- Apply the correct tags to the logical switch port.
- Restart NCP.
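
Put together, the remediation might look like the following sketch, reusing the placeholder values from the earlier examples.

```
# Sketch of the remediation sequence; all identifiers are placeholders.
# 1. Apply the correct ncp/node_name and ncp/cluster tags to the port
#    (same read-modify-write PUT as shown earlier).
curl -k -u '<admin>:<password>' -X PUT \
  -H 'Content-Type: application/json' \
  -d @corrected-port.json \
  https://<nsx-mgr>/api/v1/logical-ports/<port-uuid>

# 2. Restart NCP so it picks up the corrected tags
#    (namespace and deployment name are assumptions).
kubectl -n nsx-system rollout restart deployment/nsx-ncp
```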