OVN-Kubernetes CNI plugin support on OpenShift
Prior to AKO version 1.10.1, only OpenShift SDN was supported as the Container Network Interface (CNI) plugin on OpenShift. Starting with AKO 1.10.1, the OVN-Kubernetes CNI plugin is also supported on OpenShift.

Configuring AKO to use OVN-Kubernetes CNI Plugin on OpenShift

To support OVN-Kubernetes as the CNI plugin with AKO, the AKOSettings.cniPlugin value in the AKO Helm chart values.yaml must be set to ovn-kubernetes. For more information, see Configuring AKO.
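This setting might look like the minimal values.yaml excerpt below; it is a sketch that assumes the standard AKO Helm chart layout, and the release name, chart reference, and namespace in the comment are placeholders.

# values.yaml (excerpt) -- enable OVN-Kubernetes support in AKO
AKOSettings:
  cniPlugin: "ovn-kubernetes"   # AKO reads pod CIDRs from the OVN-Kubernetes node annotations

# Example install/upgrade with the customized values file (placeholders for release and chart):
#   helm upgrade --install <release-name> <ako-chart> -f values.yaml -n <ako-namespace>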
AKO needs to read the pod CIDR subnets configured for Kubernetes or OpenShift nodes to create static routes on the NSX Advanced Load Balancer Controller so that the pool backend servers (pods) are reachable from the Service Engine. The OVN-Kubernetes CNI configures the pod CIDR subnet on each node as part of the k8s.ovn.org/node-subnets annotation. AKO reads the default pod CIDR subnet value from this annotation for each node and configures the required static routes on the NSX Advanced Load Balancer Controller. A sample annotation value is shown below:

k8s.ovn.org/node-subnets: '{"default":"10.128.0.0/23"}'
AKO only supports a single pod CIDR subnet per node configured as default in the k8s.ovn.org/node-subnets annotation.
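As a quick check of what AKO will consume, the annotation can be inspected directly on a node; <node-name> below is a placeholder for an actual node in the cluster.

# Print the OVN-Kubernetes subnet annotation for a node
oc get node <node-name> -o yaml | grep k8s.ovn.org/node-subnets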
Caveat
- Pools are down when running AKO with service type as ClusterIP.
For the OVN-Kubernetes CNI, in some OpenShift installations the routing gateway (OVS) performs Source NAT (SNAT) by default for traffic from pods leaving the nodes. This SNAT causes the health monitoring performed by the NSX Advanced Load Balancer Controller to fail, so the pool servers, that is, the pods, are marked down on the Controller. This issue is seen only when AKO runs with the service type configured as ClusterIP (ClusterIP mode). No such issue is seen in NodePort mode, and pools come up normally. To learn more about the serviceType configuration, see Configuring AKO.

To use ClusterIP mode, SNAT has to be disabled. However, disabling SNAT breaks the ability of pods to route externally with the node's IP address, thereby leading to failure in NodePort mode. NodePort mode must be used if disabling SNAT is not desired. The changes below disable the SNAT functionality for the namespaces that require Ingress/Route support.

- Create a ConfigMap to set disable-snat-multiple-gws for the cluster network operator. Create a file named cm_gateway-mode-config.yaml with the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-mode-config
  namespace: openshift-network-operator
data:
  disable-snat-multiple-gws: "true"
  mode: "shared"
immutable: true
- Create the ConfigMap with oc apply -f cm_gateway-mode-config.yaml. Add the k8s.ovn.org/routing-external-gws annotation to namespaces that require Ingress/Route support.
- Edit any namespace with oc edit namespace <name-of-namespace> and add the k8s.ovn.org/routing-external-gws annotation as shown below:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    k8s.ovn.org/routing-external-gws: <ip-of-node-gateway>
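As a non-interactive sketch of the same step, the annotation can also be applied with oc annotate, and both changes can be spot-checked afterwards; the namespace name and gateway IP remain placeholders.

# Apply the annotation without opening an editor
oc annotate namespace <name-of-namespace> k8s.ovn.org/routing-external-gws=<ip-of-node-gateway>

# Spot-check that the ConfigMap and the namespace annotation are in place
oc get configmap gateway-mode-config -n openshift-network-operator -o yaml
oc get namespace <name-of-namespace> -o yaml | grep routing-external-gws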