Prepare Kubernetes Nodes
Most of the steps to prepare the Kubernetes nodes are automated by two containers,
nsx-ovs and nsx-ncp-bootstrap, that run in the nsx-node-agent and nsx-ncp-bootstrap
DaemonSets, respectively.
Before installing NCP, make sure that the
Kubernetes nodes have Python installed and accessible through the command line
interface. You can use your Linux package manager to install it. For example, on Ubuntu,
you can run the following command:
apt install python
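To confirm that Python is accessible from the command line interface, you can run a quick check such as the following (the interpreter is named python here; adjust it if your distribution installs python3 instead):
python --version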
For Ubuntu, installing the NSX CNI plugin will copy the AppArmor profile file ncp-apparmor to /etc/apparmor.d and load it. Before the install, the AppArmor service must be running and the directory /etc/apparmor.d must exist. Otherwise, the install will fail. You can check whether the AppArmor module is enabled with the following command:
sudo cat /sys/module/apparmor/parameters/enabled
You can check whether the AppArmor service is
started with the following command:
sudo /etc/init.d/apparmor status
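After the NSX CNI plugin has been installed, you can also verify that the profile defined in ncp-apparmor was loaded, for example (assuming the standard AppArmor utilities are installed):
sudo aa-status | grep node-agent-apparmor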
The ncp-apparmor profile file provides an AppArmor profile for the NSX node agent called node-agent-apparmor, which differs from the docker-default profile in the following ways (an illustrative sketch follows this list):
- The deny mount rule is removed.
- The mount rule is added.
- Some network, capability, file, and umount options are added.
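The following is a minimal sketch of what a profile along these lines can look like. It is not the actual node-agent-apparmor profile shipped with the NSX CNI plugin, which contains more specific rules; it only illustrates the differences listed above.
#include <tunables/global>
profile node-agent-apparmor flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>
  # broad rules of the kind referenced in the list above
  network,
  capability,
  file,
  mount,
  umount,
  # note: no deny mount rule, unlike docker-default
}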
You can replace the node-agent-apparmor profile with a different profile. If you do, you must change the profile name node-agent-apparmor in the NCP YAML file.
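For reference, AppArmor profiles are applied to containers through the standard Kubernetes annotation container.apparmor.security.beta.kubernetes.io/<container-name>. A sketch of what this looks like in a pod template of the NCP YAML file (the container names that are actually annotated in your file may differ):
annotations:
  container.apparmor.security.beta.kubernetes.io/nsx-node-agent: localhost/node-agent-apparmor
If you use your own profile, replace node-agent-apparmor in this value with your profile name.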
The NSX NCP bootstrap container automates the installation and update of NSX CNI on the host. In previous releases, NSX CNI was installed through a deb/rpm package. In this release, the files are simply copied to the host. The bootstrap container will remove the previously installed NSX CNI components from the package manager's database (you can confirm the removal with the query shown after this list). The following directories and files will be deleted:
- /etc/cni/net.d
- /etc/apparmor.d/ncp-apparmor
- /opt/cni/bin/nsx
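To confirm that the old package-based installation is gone, you can query the package manager. The package name nsx-cni used here is an assumption; adjust it to the name that was used in your environment:
dpkg -l | grep nsx-cni     # Ubuntu
rpm -qa | grep nsx-cni     # RHEL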
The bootstrap container checks the file 10-nsx.conflist and looks for the CNI version number in the tag nsxBuildVersion. If this version is older than the one in the bootstrap container, the following files are copied to the host (you can inspect the installed version with the check shown after this list):
- /opt/cni/bin/nsx
- /etc/cni/net.d/99-loopback.conf
- /etc/cni/net.d/10-nsx.conflist
- /etc/apparmor.d/ncp-apparmor
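Assuming the default file location, a quick way to see the version currently installed on a node is:
grep nsxBuildVersion /etc/cni/net.d/10-nsx.conflist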
If the files /opt/cni/bin/loopback and /etc/cni/net.d/99-loopback.conf exist, they are not overwritten. If the OS type is Ubuntu, the file ncp-apparmor is also copied to the host.
The bootstrap container will move the IP
address and routes from br-int to node-if. It will also stop OVS if it is running on the host because OVS will run inside the nsx-ovs container. The nsx-ovs container will create the br-int instance if it does not exist, add the network interface (node-if) that is attached to the node logical switch to br-int, and make sure that the br-int and node-if link status is up. It will move the IP address and routes from node-if to br-int. There will be downtime of a few seconds when the nsx-node-agent pod or the nsx-ovs container is restarted.
If the nsx-node-agent DaemonSet is removed, OVS is no longer running on the host (in the container or in the host's PID).
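Once OVS is running inside the nsx-ovs container, you can inspect its state by running ovs-vsctl there. The namespace nsx-system and the pod name placeholder below are assumptions; adjust them to your deployment:
kubectl -n nsx-system exec <nsx-node-agent-pod> -c nsx-ovs -- ovs-vsctl show
The output should list br-int with the node interface attached as a port.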
Update the network configuration to make the IP address and routes persistent. For example, for Ubuntu, edit /etc/network/interfaces (use actual values from your environment where appropriate) to make the IP address and routes persistent:
auto eth1
iface eth1 inet static
address 172.16.1.4/24
#persistent static routes
up route add -net 172.16.1.3/24 gw 172.16.1.1 dev eth1
Then run the following command:
ifdown eth1; ifup eth1
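You can then check that the address and routes are present on the interface, for example:
ip addr show eth1
ip route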
For RHEL, create and edit /etc/sysconfig/network-scripts/ifcfg-<node-if> (use actual values from your environment where appropriate) to make the IP address persistent:
HWADDR=00:0C:29:B7:DC:6F
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=172.10.0.2
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV4_DNS_PRIORITY=100
IPV6INIT=no
NAME=eth1
UUID=39317e23-909b-45fc-9951-849ece53cb83
DEVICE=eth1
ONBOOT=yes
Then run the following command:
systemctl restart network.service
For information on configuring persistent
routes for RHEL, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-configuring_static_routes_in_ifcfg_files.
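As a sketch of that approach, static routes can be placed in a route-<node-if> file next to the ifcfg file, for example /etc/sysconfig/network-scripts/route-eth1 (the interface name and addresses below are examples only):
172.16.1.0/24 via 172.16.1.1 dev eth1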
IP and static routes must be persisted on the uplink interface (specified by ovs_uplink_port) to guarantee that connectivity to the Kubernetes API server is not lost after a VM restart.
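For reference, the following is a sketch of how the uplink port is typically specified in the nsx-node-agent configuration inside the NCP YAML file; the section name and interface shown here are assumptions for illustration:
[nsx_node_agent]
ovs_uplink_port = eth1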
By default, nsx-ovs has a volume mount with the name host-original-ovs-db and path /etc/openvswitch. This is the default path that OVS uses to store the file conf.db. If OVS was configured to use a different path, or if the path is a soft link, you must update the host-original-ovs-db mount with the correct path.
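A sketch of what such a mount can look like in the nsx-ovs container spec, assuming a hostPath volume that points at the host's OVS database directory (the surrounding YAML in your NCP file may differ):
volumes:
- name: host-original-ovs-db
  hostPath:
    path: /etc/openvswitch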
If necessary, you can undo the changes made by the bootstrap container. For more information, see Clean Up Kubernetes Nodes.