Preparing NCP for OpenShift 4

Before installing OpenShift 4, you must update some NCP configuration files.
The YAML files are included in the NCP download file from download.vmware.com. You can also go to https://github.com/vmware/nsx-container-plugin-operator/releases, find the corresponding operator release (for example, v3.1.1), and download openshift4.tar.gz.
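For example, you can unpack the archive as follows (the top-level directory is assumed to match the deploy folder referenced below; verify it after extraction):

# Unpack the operator release archive downloaded above
tar -xzf openshift4.tar.gz
# List the deployment manifests (path assumed from the folder layout described below)
ls nsx-container-plugin-operator/deploy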
The following files are in the nsx-container-plugin-operator/deploy folder:
  • configmap.yaml – Update this file with the NSX information.
  • operator.yaml – Specify the NCP image location in this file.
  • namespace.yaml – The namespace specification for the operator. Do not edit this file.
  • role_binding.yaml – The role binding specification for the operator. Do not edit this file.
  • role.yaml – The role specification for the operator. Do not edit this file.
  • service_account.yaml – The service account specification for the operator. Do not edit this file.
  • lb-secret.yaml – Secret for the default NSX load balancer certificate.
  • nsx-secret.yaml – Secret for certificate-based authentication to NSX. It is used instead of nsx_api_user and nsx_api_password in configmap.yaml.
  • operator.nsx.vmware.com_ncpinstalls_crd.yaml – Operator-owned Custom Resource Definition.
  • operator.nsx.vmware.com_v1_ncpinstall_cr.yaml – Operator-owned Custom Resource.
The following configmap.yaml example shows a basic configuration. See configmap.yaml in the deploy folder for more options. You must specify values for the following parameters according to your environment:
  • cluster
  • nsx_api_managers
  • nsx_api_user
  • nsx_api_password
  • external_ip_pools
  • tier0_gateway
  • overlay_tz
  • edge_cluster
  • apiserver_host_ip
  • apiserver_host_port
apiVersion: v1
kind: ConfigMap
metadata:
  name: nsx-ncp-operator-config
  namespace: nsx-system-operator
data:
  ncp.ini: |
    [vc]

    [coe]
    # Container orchestrator adaptor to plug in.
    adaptor = openshift4
    # Specify cluster name.
    cluster = ocp

    [DEFAULT]

    [nsx_v3]
    policy_nsxapi = True
    # Path to NSX client certificate file. If specified, the nsx_api_user and
    # nsx_api_password options will be ignored. Must be specified along with
    # nsx_api_private_key_file option
    #nsx_api_cert_file = <None>
    # Path to NSX client private key file. If specified, the nsx_api_user and
    # nsx_api_password options will be ignored. Must be specified along with
    # nsx_api_cert_file option
    #nsx_api_private_key_file = <None>
    nsx_api_managers = 10.114.209.10,10.114.209.11,10.114.209.12
    nsx_api_user = admin
    nsx_api_password = VMware1!
    # Do not use in production
    insecure = True
    # Choices: ALL DENY <None>
    log_firewall_traffic = DENY
    external_ip_pools = 10.114.17.0/25
    #top_tier_router = <None>
    tier0_gateway = t0a
    single_tier_topology = True
    overlay_tz = 3efa070d-3870-4eb1-91b9-a44416637922
    edge_cluster = 3088dc2b-d097-406e-b9de-7a161e8d0e47

    [ha]

    [k8s]
    # Kubernetes API server IP address.
    apiserver_host_ip = api-int.ocp.yasen.local
    # Kubernetes API server port.
    apiserver_host_port = 6443
    client_token_file = /var/run/secrets/kubernetes.io/serviceaccount/token
    # Choices: <None> allow_cluster allow_namespace
    baseline_policy_type = allow_cluster
    enable_multus = False
    process_oc_network = False

    [nsx_kube_proxy]

    [nsx_node_agent]
    ovs_bridge = br-int
    # The OVS uplink OpenFlow port
    ovs_uplink_port = ens192

    [operator]
    # The default certificate for HTTPS load balancing.
    # Must be specified along with lb_priv_key option.
    # Operator will create lb-secret for NCP based on these two options.
    #lb_default_cert = <None>
    # The private key for default certificate for HTTPS load balancing.
    # Must be specified along with lb_default_cert option.
    #lb_priv_key = <None>
In operator.yaml, you must specify the location of the NCP image in the env section.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nsx-ncp-operator
  namespace: nsx-system-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nsx-ncp-operator
  template:
    metadata:
      labels:
        name: nsx-ncp-operator
    spec:
      hostNetwork: true
      serviceAccountName: nsx-ncp-operator
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      containers:
      - name: nsx-ncp-operator
        # Replace this with the built image name
        image: vmware/nsx-container-plugin-operator:latest
        command: ["/bin/bash", "-c", "nsx-ncp-operator --zap-time-encoding=iso8601"]
        imagePullPolicy: Always
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: OPERATOR_NAME
          value: "nsx-ncp-operator"
        - name: NCP_IMAGE
          value: "{NCP Image}"
Note: You should specify the location of the NCP image only in operator.yaml. Do not specify the NCP image in the nsx-system namespace with the command kubectl edit deployment nsx-ncp -n nsx-system.
For the operator image, specify the NCP version that you want to install. For example, for NCP 3.1.1, the operator image is vmware/nsx-container-plugin-operator:v3.1.1.
Note that pulling directly from Docker Hub is not recommended in a production environment because of its rate-limiting policy. After you pull the image from Docker Hub, you can push it to a local registry, possibly the same one where the NCP images are available.
Alternatively, you can use the operator image file included in the NCP download file from download.vmware.com and import it into a local registry. This image is the same as the one published on VMware's Docker Hub.
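For example, assuming a local registry at registry.example.com (a hypothetical hostname), the pull, retag, and push workflow might look like this:

# Pull the operator image once from Docker Hub
docker pull vmware/nsx-container-plugin-operator:v3.1.1
# Retag it for the local registry (registry.example.com is a placeholder)
docker tag vmware/nsx-container-plugin-operator:v3.1.1 registry.example.com/vmware/nsx-container-plugin-operator:v3.1.1
# Push it to the local registry
docker push registry.example.com/vmware/nsx-container-plugin-operator:v3.1.1

You can then point the image and NCP_IMAGE fields in operator.yaml at the local registry.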
To set the MTU value for CNI, modify the mtu parameter in the [nsx_node_agent] section of the Operator ConfigMap. The operator will then trigger a recreation of the nsx-ncp-bootstrap pods, ensuring that the CNI config files are properly updated on all the nodes. You must also update the node MTU accordingly. A mismatch between the node and pod MTU can cause problems for node-pod communication, affecting, for example, TCP liveness and readiness probes.
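For example, a fragment of the Operator ConfigMap setting the pod interface MTU to 8900 (an illustrative value; choose one consistent with your node and overlay MTU) might look like this:

    [nsx_node_agent]
    ovs_bridge = br-int
    # The OVS uplink OpenFlow port
    ovs_uplink_port = ens192
    # Illustrative MTU value for container interfaces
    mtu = 8900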
To update the MTU on an interface on a node, you can run the command ovs-vsctl set Interface <interface-name> mtu_request=<mtu-value>, where <interface-name> can be either an OVS interface or a physical interface in the nsx-ovs container running in the nsx-node-agent pod. For example:
oc -n nsx-system exec -it nsx-node-agent-dqqm9 -c nsx-ovs -- ovs-vsctl set Interface ens192 mtu_request=9000
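To verify the change, you can read the interface's MTU back from OVS (the pod name here is illustrative):

oc -n nsx-system exec -it nsx-node-agent-dqqm9 -c nsx-ovs -- ovs-vsctl get Interface ens192 mtu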
Note: Enabling HA in the Operator ConfigMap will create a single NCP pod because the ncpReplicas parameter is set to 1 by default. To have three NCP pods created, you can change it to 3. After the cluster is installed, you can change the number of NCP replicas with the command oc edit ncpinstalls ncp-install -n nsx-system.
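For example, running that command opens the NcpInstall resource in an editor, where you can change the replica count in its spec (fragment shown with the value set to 3):

spec:
  ncpReplicas: 3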
The following is an example of operator.nsx.vmware.com_v1_ncpinstall_cr.yaml. The addNodeTag parameter must be set to false if a node has multiple network interfaces.
apiVersion: operator.nsx.vmware.com/v1
kind: NcpInstall
metadata:
  name: ncp-install
  namespace: nsx-system-operator
spec:
  ncpReplicas: 1
  # Note that if one node has multiple attached VirtualNetworkInterfaces,
  # this function is not supported and should be set to false
  addNodeTag: true
  nsx-ncp:
    # Uncomment below to add user-defined nodeSelector for NCP Deployment
    #nodeSelector:
    #  <node_label_key>: <node_label_value>
    tolerations:
    # Please don't modify below default tolerations for NCP Deployment
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    - key: node.kubernetes.io/network-unavailable
      effect: NoSchedule
    # Uncomment below to add user-defined tolerations for NCP Deployment
    #<toleration_specification>
  nsx-node-agent:
    tolerations:
    # Please don't modify below default tolerations
    # for nsx-ncp-bootstrap and nsx-node-agent DaemonSet
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    - key: node.kubernetes.io/not-ready
      effect: NoSchedule
    - key: node.kubernetes.io/unreachable
      effect: NoSchedule
    - operator: Exists
      effect: NoExecute
    # Uncomment below to add user-defined tolerations for nsx-ncp-bootstrap and nsx-node-agent DaemonSet
    #<toleration_specification>

Configuring certificate-based authentication to NSX using principal identity

In a production environment, it is recommended that you do not expose administrator credentials in configmap.yaml with the nsx_api_user and nsx_api_password parameters. The following steps describe how to create a principal identity and allow NCP to use a certificate for authentication.
  1. Generate a certificate and key (see the example commands after this procedure).
  2. In NSX Manager, navigate to System > Users and Roles and click Add > Principal Identity with Role. Add a principal identity and paste the certificate generated in step 1.
  3. Add the base64-encoded crt and key values in nsx-secret.yaml.
  4. Set the location of the certificate and key files in configmap.yaml under the [nsx_v3] section:
     nsx_api_cert_file = /etc/nsx-ujo/nsx-cert/tls.crt
     nsx_api_private_key_file = /etc/nsx-ujo/nsx-cert/tls.key
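The following is a minimal sketch of steps 1 and 3, assuming a self-signed certificate is acceptable for the principal identity; the file names and the subject CN are illustrative:

# Step 1: generate a private key and a self-signed certificate
openssl req -newkey rsa:2048 -nodes -x509 -days 365 -keyout ncp-pi.key -out ncp-pi.crt -subj "/CN=nsx-ncp-pi"
# Step 3: produce the base64-encoded values to paste into nsx-secret.yaml
base64 -w0 ncp-pi.crt
base64 -w0 ncp-pi.key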
Note: Changing the authentication method on a cluster that is already bootstrapped is not supported.

(Optional) Configuring the default NSX load balancer certificate

An NSX load balancer can implement OpenShift HTTPS Route objects and offload the OCP HAProxy. To do that, a default certificate is required. Perform the following steps to configure the default certificate:
  1. Add the base64-encoded crt and key values in lb-secret.yaml (a sketch of this file appears after this procedure).
  2. Set the location for the certificate and the key in configmap.yaml under the [nsx_v3] section:
     lb_default_cert_path = /etc/nsx-ujo/lb-cert/tls.crt
     lb_priv_key_path = /etc/nsx-ujo/lb-cert/tls.key
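The following is a minimal sketch of lb-secret.yaml, assuming the standard Kubernetes TLS Secret layout with tls.crt and tls.key data keys; the file shipped in the deploy folder is authoritative for the exact field names and namespace:

apiVersion: v1
kind: Secret
metadata:
  name: lb-secret
  namespace: nsx-system-operator
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>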

(Optional) Configuring certificate-based authentication to NSX Managers

If you set insecure = False in the ConfigMap, you must specify the CA certificate of an NSX Manager in the NSX Manager cluster. The following procedure is an example of how to do this for a self-signed certificate.
  1. Run the following command to get the certificate of NSX Manager:
     ssh -l admin <nsx_manager_ip_address> -f 'get certificate api' > nsx.ca
  2. Edit nsx-secret.yaml and specify the tls.ca field with the base64-encoded text of nsx.ca (see the encoding example after this procedure).
  3. Set the location of the CA bundle file in configmap.yaml under the [nsx_v3] section:
     ca_file = /etc/nsx-ujo/nsx-cert/tls.ca
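For step 2, you can produce the value as follows (-w0 assumes GNU base64; it disables line wrapping):

# Base64-encode the CA certificate retrieved in step 1 and paste the
# output into the tls.ca field of nsx-secret.yaml
base64 -w0 nsx.ca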