Configuring Load Balancing

The NSX load balancer is integrated with OpenShift and acts as the OpenShift Router. NCP watches OpenShift route and endpoint events and configures load balancing rules on the load balancer based on the route specification. As a result, the NSX load balancer forwards incoming layer 7 traffic to the appropriate backend pods based on those rules.
Configuring load balancing involves configuring a Kubernetes LoadBalancer service or an OpenShift route, as well as the NCP replication controller. The LoadBalancer service handles layer 4 traffic and the OpenShift route handles layer 7 traffic.
When you configure a Kubernetes LoadBalancer service, it is allocated an IP address from the external IP block that you configure. The load balancer is exposed on this IP address and the service port. You can specify the name or ID of an IP pool using the loadBalancerIP spec in the LoadBalancer definition, in which case the LoadBalancer service's IP is allocated from that IP pool. If the loadBalancerIP spec is empty, the IP is allocated from the external IP block that you configure.
The IP pool specified by loadBalancerIP must have the tag scope: ncp/owner, tag: cluster:<cluster_name>.
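For example, a minimal LoadBalancer service sketch; the service name, ports, and selector, as well as the IP pool name lb-pool-1, are hypothetical, and the pool must already exist in NSX with the tag described above:
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb             # hypothetical service name
spec:
  type: LoadBalancer
  loadBalancerIP: lb-pool-1   # name or ID of an existing NSX IP pool (hypothetical name)
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: my-app               # hypothetical label selector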
To use the NSX load balancer, you must configure load balancing in NCP. In the ncp_rc.yml file, do the following:
  1. Set use_native_loadbalancer = True.
  2. Set pool_algorithm to WEIGHTED_ROUND_ROBIN.
  3. Set lb_default_cert_path and lb_priv_key_path to the full path names of the CA-signed certificate file and the private key file, respectively. See below for a sample script to generate a CA-signed certificate. In addition, mount the default certificate and key into the NCP pod. See below for instructions.
  4. (Optional) Specify a persistence setting with the parameters l4_persistence and l7_persistence. The available option for layer 4 persistence is source IP. The available options for layer 7 persistence are cookie and source IP. The default is <None>. For example,
     # Choice of persistence type for ingress traffic through L7 Loadbalancer.
     # Accepted values:
     # 'cookie'
     # 'source_ip'
     l7_persistence = cookie

     # Choice of persistence type for ingress traffic through L4 Loadbalancer.
     # Accepted values:
     # 'source_ip'
     l4_persistence = source_ip
  5. (Optional) Set service_size = SMALL, MEDIUM, or LARGE. The default is SMALL.
  6. If you are running OpenShift 3.11, you must perform the following configuration so that OpenShift will not assign an IP to the LoadBalancer service.
    • Set ingressIPNetworkCIDR to 0.0.0.0/32 under networkConfig in the /etc/origin/master/master-config.yaml file.
    • Restart the API server and controllers with the following commands:
      master-restart api
      master-restart controllers
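Taken together, steps 1 through 5 correspond to settings like the following sketch in the NCP configuration carried by ncp_rc.yml; the [nsx_v3] section placement and the certificate paths are assumptions to verify against your NCP version:
[nsx_v3]
use_native_loadbalancer = True
pool_algorithm = WEIGHTED_ROUND_ROBIN
service_size = SMALL
lb_default_cert_path = /etc/nsx-ujo/lb-cert/tls.crt   # assumed mount path
lb_priv_key_path = /etc/nsx-ujo/lb-cert/tls.key       # assumed mount path
l7_persistence = cookie
l4_persistence = source_ip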
For a Kubernetes LoadBalancer service, you can also specify sessionAffinity on the service spec to configure persistence behavior for the service if the global layer 4 persistence is turned off, that is, l4_persistence is set to <None>. If l4_persistence is set to source_ip, the sessionAffinity on the service spec can be used to customize the persistence timeout for the service. The default layer 4 persistence timeout is 10800 seconds (the same as that specified in the Kubernetes documentation for services, https://kubernetes.io/docs/concepts/services-networking/service). All services with the default persistence timeout share the same NSX load balancer persistence profile. A dedicated profile is created for each service with a non-default persistence timeout.
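For example, a sketch of a LoadBalancer service that customizes the persistence timeout through sessionAffinity; the service name, ports, and selector are hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb            # hypothetical service name
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600   # non-default timeout; NCP creates a dedicated persistence profile
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: my-app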
If the backend service of an Ingress is a service of type LoadBalancer, then the layer 4 virtual server for the service and the layer 7 virtual server for the Ingress cannot have different persistence settings, for example, source_ip for layer 4 and cookie for layer 7. In such a scenario, the persistence settings for both virtual servers must be the same (source_ip, cookie, or None), or one of them must be None (in which case the other can be source_ip or cookie). An example of such a scenario:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
-----
apiVersion: v1
kind: Service
metadata:
  name: tea-svc    # <==== same as the Ingress backend above
  labels:
    app: tea
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: tcp
  selector:
    app: tea
  type: LoadBalancer

Router Sharding

In OpenShift 4, each route can have any number of labels in its metadata field. A router uses selectors to select a subset of routes from the entire pool of routes. A selection can also involve labels on the route's namespaces. The selected routes form a route shard.
You can create route shards for purposes such as the following:
  • Configure Ingress based on route labels or namespaces.
  • Configure different routes for applications.
  • Distribute workload across multiple Load Balancer services to improve performance.
This feature only supports the sharding of layer 7 load balancer services.
Steps to configure router sharding:
  1. Set the enable_lb_crd option to True in the [k8s] section in configmap.yaml and apply the YAML file. Then create and apply a YAML file that defines a LoadBalancer custom resource (an instance of the LoadBalancer CRD). For example,
     apiVersion: vmware.com/v1alpha1
     kind: LoadBalancer
     metadata:
       name: lbs-crd-1
     spec:
       httpConfig:
         virtualIP: 192.168.4.4          # VIP for HTTP/HTTPS server. Default to auto_allocate
         port: 81                        # HTTP port number. Default to 80
         tls:
           port: 9998                    # HTTPS port number. Default to 443
           secretName: default_secret    # Default certificate for HTTPS server. Default to nil
           secretNamespace: default      # Needs to be set together with secretName
         xForwardedFor: INSERT           # Available values are INSERT, REPLACE. Default to nil
         affinity:
           type: source_ip               # Available values are sourceIP, cookie
           timeout: 100                  # Default to 10800
       size: MEDIUM                      # Default to SMALL
  2. Configure a router with a namespace label selector by running the following command (assuming the router's dc/svc is router):
    oc set env dc/router NAMESPACE_LABELS="router=r1"
  3. The router configured in the previous step will handle routes from the selected namespaces. A route that should use the LoadBalancer custom resource is annotated with nsx/loadbalancer. For example,
     apiVersion: v1
     kind: Route
     metadata:
       name: cafe-route
       annotations:
         nsx/loadbalancer: lbs-crd-1
     spec:
       host: cafe.example.com
       to:
         kind: Service
         name: tea-svc
         weight: 1
     To make the router's selector match a namespace, label the namespace accordingly by running the following command:
     oc label namespace targetns "router=r1"
     Replace targetns with the exact namespace that contains the target routes. The nsx/loadbalancer annotation can also be set on the namespace itself. For example,
     apiVersion: v1
     kind: Namespace
     metadata:
       name: qe
       annotations:
         nsx/loadbalancer: lbs-crd-1
     Note: If a route inside an annotated namespace has its own nsx/loadbalancer annotation, the route's annotation takes precedence.
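Routes can also be selected by their own labels rather than by namespace. A sketch, assuming the same router deployment and a hypothetical label env=prod:
oc set env dc/router ROUTE_LABELS="env=prod"
oc label route cafe-route "env=prod"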

Layer 7 Load Balancer Example

The following YAML file configures two replication controllers (tea-rc and coffee-rc), two services (tea-svc and coffee-svc), and two routes (cafe-route-multi and cafe-route) to provide layer 7 load balancing.
# RC
apiVersion: v1
kind: ReplicationController
metadata:
  name: tea-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: nginxdemos/hello
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: coffee-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/hello
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
# Services
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
  labels:
    app: tea
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: tea
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: coffee
---
# Routes
apiVersion: v1
kind: Route
metadata:
  name: cafe-route-multi
spec:
  host: www.cafe.com
  path: /drinks
  to:
    kind: Service
    name: tea-svc
    weight: 1
  alternateBackends:
  - kind: Service
    name: coffee-svc
    weight: 2
---
apiVersion: v1
kind: Route
metadata:
  name: cafe-route
spec:
  host: www.cafe.com
  path: /tea-svc
  to:
    kind: Service
    name: tea-svc
    weight: 1
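To spot-check the routes, one approach is to send requests with the appropriate Host header to the layer 7 virtual server; <lb-vip> is a placeholder for the virtual IP that NCP allocates for the Router:
curl -H "Host: www.cafe.com" http://<lb-vip>/drinks
curl -H "Host: www.cafe.com" http://<lb-vip>/tea-svc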

Additional Notes

  • All the termination modes are supported: edge, passthrough, and reencrypt.
  • Wildcard subdomains are supported. For example, if wildcardPolicy is set to Subdomain and the host name is set to wildcard.example.com, any request to *.example.com will be serviced. See the sketch after this list.
  • If NCP throws an error during the processing of a Route event due to misconfiguration, correct the Route YAML file, then delete and recreate the Route resource.
  • NCP does not enforce hostname ownership by namespaces.
  • One LoadBalancer service is supported per Kubernetes cluster.
  • NSX will create a layer 4 load balancer virtual server and pool for each LoadBalancer service port. Both TCP and UDP are supported.
  • The NSX load balancer comes in different sizes. For information about configuring an NSX load balancer, see the NSX Administration Guide. After the load balancer is created, its size cannot be changed by updating the configuration file, but it can be changed through the UI or API.
  • Automatic scaling of the layer 4 load balancer is supported. If a Kubernetes LoadBalancer service is created or modified so that it requires additional virtual servers and the existing layer 4 load balancer does not have the capacity, a new layer 4 load balancer will be created. NCP will also delete a layer 4 load balancer that no longer has virtual servers attached. This feature is enabled by default. You can disable it by setting l4_lb_auto_scaling to false in the NCP ConfigMap.
  • In a Route specification, the parameter destinationCACertificate is not supported and will be ignored by NCP.
  • Each TLS route must have a different CA-signed certificate.
  • If you do not want the NSX load balancer to manage the routes, add the annotation use_nsx_controller:False to the Route specification.
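The following is the wildcard route sketch referenced in the list above; the route name is hypothetical and the backend reuses tea-svc from the earlier example:
apiVersion: v1
kind: Route
metadata:
  name: wildcard-route       # hypothetical route name
spec:
  host: wildcard.example.com
  wildcardPolicy: Subdomain
  to:
    kind: Service
    name: tea-svc
    weight: 1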

Sample Script to Generate a CA-Signed Certificate

The script below generates a CA-signed certificate and a private key stored in the files <filename>.crt and <filename>.key, respectively. The openssl genrsa command generates a CA key. The CA key should be encrypted; you can specify an encryption method with an option such as -aes256.
#!/bin/bash
host="www.example.com"
filename=server
openssl genrsa -out ca.key 4096
openssl req -key ca.key -new -x509 -days 365 -sha256 -extensions v3_ca -out ca.crt -subj "/C=US/ST=CA/L=Palo Alto/O=OS3/OU=Eng/CN=${host}"
openssl req -out ${filename}.csr -new -newkey rsa:2048 -nodes -keyout ${filename}.key -subj "/C=US/ST=CA/L=Palo Alto/O=OS3/OU=Eng/CN=${host}"
openssl x509 -req -days 360 -in ${filename}.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ${filename}.crt -sha256
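To generate an encrypted CA key as recommended above, add a cipher option to the genrsa command (openssl will prompt for a passphrase):
openssl genrsa -aes256 -out ca.key 4096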