Configuring SNAT
Restricting an SNAT IP Pool to Specific Kubernetes Namespaces or TAS Orgs
You can specify which Kubernetes namespaces or TAS orgs can be allocated IPs from the SNAT IP pool by adding the following tags to the IP pool:
- For a Kubernetes namespace: scope: ncp/owner, tag: ns:<namespace_UUID>
- For a TAS org: scope: ncp/owner, tag: org:<org_UUID>
You can get the namespace or org UUID with one of the following commands:
```
kubectl get ns -o yaml
cf org <org_name> --guid
```
Note the following:
- Each tag should specify one UUID. You can create multiple tags for the same pool.
- If you change the tags after some namespaces or orgs have been allocated IPs based on the old tags, those IPs will not be reclaimed until the SNAT configurations of the Kubernetes services or TAS apps change or NCP restarts.
- The namespace and TAS org owner tags are optional. Without these tags, any namespace or TAS org can have IPs allocated from the SNAT IP pool.
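The owner-tag restriction above can be sketched as a small check. This is a hypothetical helper for illustration, not actual NCP code; the function name and data shapes are assumptions:

```python
def may_allocate(pool_tags, requester_uuid, kind="ns"):
    """Decide whether a namespace or org may be allocated an IP from a pool.

    pool_tags: list of (scope, tag) pairs on the NSX IP pool.
    kind: "ns" for a Kubernetes namespace, "org" for a TAS org.
    """
    owner_tags = [tag for scope, tag in pool_tags
                  if scope == "ncp/owner" and tag.startswith(("ns:", "org:"))]
    # Owner tags are optional: with none present, any namespace or org
    # can have IPs allocated from the pool.
    if not owner_tags:
        return True
    return f"{kind}:{requester_uuid}" in owner_tags
```

Multiple owner tags on the same pool widen the allowed set, matching the note that each tag carries one UUID but a pool can carry several tags.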
Configuring an SNAT IP Pool for a Service
You can configure an SNAT IP pool for a service by adding an annotation to the service. For example,
```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-example
  annotations:
    ncp/snat_pool: <external IP pool ID or name>
spec:
  selector:
    app: example
...
```
The IP pool specified by ncp/snat_pool must have the tag scope: ncp/owner, tag: cluster:<cluster_name>. NCP will configure the SNAT rule for this service. The rule matches the backend pods' IPs as the source and translates them to an SNAT IP allocated from the specified external IP pool. If an error occurs when NCP configures the SNAT rule, the service will be annotated with ncp/error.snat: <error>. The possible errors are:
- IP_POOL_NOT_FOUND - The SNAT IP pool is not found in NSX Manager.
- IP_POOL_EXHAUSTED - The SNAT IP pool is exhausted.
- IP_POOL_NOT_UNIQUE - The pool specified by ncp/snat_pool refers to multiple pools in NSX Manager.
- SNAT_RULE_OVERLAPPED - A new SNAT rule is created, but the SNAT service's pod also belongs to another SNAT service, that is, there are multiple SNAT rules for the same pod.
- POOL_ACCESS_DENIED - The IP pool specified by ncp/snat_pool does not have the tag scope: ncp/owner, tag: cluster:<cluster_name>, or the pool's owner tag does not match the namespace of the service that is sending the allocation request. After you fix the error, you must restart NCP, or remove the ncp/snat_pool annotation and add it again.
Note the following:
- The pool specified by ncp/snat_pool should already exist in NSX before the service is configured.
- In NSX, the priority of the SNAT rule for the service is higher than that of the rule for the project.
- If a pod is configured with multiple SNAT rules, only one will work.
- You can change to a different IP pool by changing the annotation and restarting NCP.
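The precedence notes above can be illustrated with a short sketch. This is hypothetical, not NCP code; the numeric priorities are made-up stand-ins for NSX NAT rule priorities, where a lower value wins:

```python
# Lower value = higher priority, mirroring NSX NAT rule priorities
# (the specific numbers here are illustrative assumptions).
RULE_PRIORITY = {"service": 100, "project": 200}

def effective_snat_rule(rules):
    """Of all SNAT rules matching a pod, only the highest-priority one applies."""
    if not rules:
        return None
    return min(rules, key=lambda r: RULE_PRIORITY[r["level"]])
```

A pod covered by both a service-level rule and a project (namespace) level rule is therefore translated by the service rule only.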
Configuring an SNAT IP Pool for a Namespace
You can configure an SNAT IP pool for a namespace by adding an annotation to the namespace. For example,
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-sample
  annotations:
    ncp/snat_pool: <external IP pool ID or name>
...
```
NCP will configure the SNAT rule for this namespace. The rule matches the backend pods' IPs as the source and translates them to an SNAT IP allocated from the specified external IP pool. If an error occurs when NCP configures the SNAT rule, the namespace will be annotated with ncp/error.snat: <error>. The possible errors are:
- IP_POOL_NOT_FOUND - The SNAT IP pool is not found in NSX Manager.
- IP_POOL_EXHAUSTED - The SNAT IP pool is exhausted.
- IP_POOL_NOT_UNIQUE - The pool specified by ncp/snat_pool refers to multiple pools in NSX Manager.
- POOL_ACCESS_DENIED - The IP pool specified by ncp/snat_pool does not have the tag scope: ncp/owner, tag: cluster:<cluster_name>, or the pool's owner tag does not match the namespace that is sending the allocation request. After you fix the error, you must restart NCP, or remove the ncp/snat_pool annotation and add it again.
Note the following:
- You can specify only one SNAT IP pool in the annotation.
- The SNAT IP pool does not need to be configured in ncp.ini.
- The IP pool specified by ncp/snat_pool must have the tag scope: ncp/owner, tag: cluster:<cluster_name>.
- The IP pool specified by ncp/snat_pool can also have a namespace tag scope: ncp/owner, tag: ns:<namespace_UUID>.
- If the ncp/snat_pool annotation is missing, the namespace will use the SNAT IP pool for the cluster.
- You can change to a different IP pool by changing the annotation and restarting NCP.
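The fallback in the notes above can be written as a one-line sketch (a hypothetical helper; the function name is an assumption):

```python
def snat_pool_for_namespace(annotations, cluster_pool):
    """Return the SNAT IP pool a namespace will use.

    annotations: the namespace's annotations (a dict).
    cluster_pool: the cluster-wide SNAT IP pool configured for NCP.
    """
    # With no ncp/snat_pool annotation, the namespace falls back to
    # the SNAT IP pool for the cluster.
    return annotations.get("ncp/snat_pool", cluster_pool)
```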
Configuring an SNAT IP Address for a Service
You can configure an SNAT IP address for a service by adding an annotation to the service. For example,
```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-example
  annotations:
    ncp/static_snat_ip: "1.2.3.4"
...
```
If the annotation ncp/snat_pool is also specified, the SNAT IP address must be in the specified SNAT address pool. Otherwise, it must be in the external IP pool specified in ncp.ini. If there are no errors, NCP will create or update the SNAT rule by using the annotated SNAT IP address for this service. The status of configuring the SNAT rule will be annotated with ncp/snat_ip_status in the service. The possible values are:
- IP_ALLOCATED_SUCCESSFULLY
- IP_ALREADY_ALLOCATED - The IP address has already been allocated.
- IP_NOT_IN_POOL - The IP address is not in the SNAT IP pool.
- IP_POOL_EXHAUSTED - The SNAT IP pool is exhausted.
- SNAT_PROCESS_FAILED - An unknown error occurred.
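The membership rule above can be sketched with the standard ipaddress module. This is a hypothetical helper, not NCP code; real NSX IP pools are address ranges, modeled here as CIDR strings for brevity:

```python
import ipaddress

def check_static_snat_ip(static_ip, snat_pool_cidr, external_pool_cidr):
    """Validate an ncp/static_snat_ip annotation against the applicable pool.

    If ncp/snat_pool is also set, the address must be in that pool;
    otherwise it must be in the external IP pool from ncp.ini.
    """
    pool = snat_pool_cidr if snat_pool_cidr else external_pool_cidr
    if ipaddress.ip_address(static_ip) in ipaddress.ip_network(pool):
        return "IP_ALLOCATED_SUCCESSFULLY"
    return "IP_NOT_IN_POOL"
```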
Configuring an SNAT IP Address for a Namespace
You can configure an SNAT IP address for a namespace by adding an annotation to the namespace. For example,
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-sample
  annotations:
    ncp/static_snat_ip: "1.2.3.4"
...
```
If the annotation ncp/snat_pool is also specified, the SNAT IP address must be in the specified SNAT address pool. Otherwise, it must be in the external IP pool specified in ncp.ini. If there are no errors, NCP will create or update the SNAT rule by using the annotated SNAT IP address for this namespace. The status of configuring the SNAT rule will be annotated with ncp/snat_ip_status in the namespace. The possible values are:
- IP_ALLOCATED_SUCCESSFULLY
- IP_ALREADY_ALLOCATED - The IP address has already been allocated.
- IP_NOT_IN_POOL - The IP address is not in the SNAT IP pool.
- IP_NOT_REALIZED - An error occurred in NSX.
- IP_POOL_EXHAUSTED - The SNAT IP pool is exhausted.
- SNAT_PROCESS_FAILED - An unknown error occurred.
Configuring an SNAT Pool for a TAS App
By default, NCP configures an SNAT IP for a TAS (Tanzu Application Service) org. You can configure an SNAT IP for an app by creating the app with a manifest.yml that contains the SNAT IP pool information. For example,
```yaml
applications:
- name: frontend
  memory: 32M
  disk_quota: 32M
  buildpack: go_buildpack
  env:
    GOPACKAGENAME: example-apps/cats-and-dogs/frontend
    NCP_SNAT_POOL: <external IP pool ID or name>
...
```
NCP will configure the SNAT rule for this app. The rule matches the IPs of the app's instances as the source and translates them to an SNAT IP allocated from an external IP pool. Note the following:
- The pool specified by NCP_SNAT_POOL should already exist in NSX before the app is pushed.
- The priority of the SNAT rule for an app is higher than that of the rule for an org.
- An app can be configured with only one SNAT IP.
- You can change to a different IP pool by changing the configuration and pushing the app again.
Configuring SNAT for TAS Version 3
With TAS version 3, you can configure SNAT in one of two ways:
- Configure NCP_SNAT_POOL in manifest.yml when creating the app. For example, the app is called bread and the manifest.yml has the following information:
```yaml
applications:
- name: bread
  stack: cflinuxfs2
  random-route: true
  env:
    NCP_SNAT_POOL: AppSnatExternalIppool
  processes:
  - type: web
    disk_quota: 1024M
    instances: 2
    memory: 512M
    health-check-type: port
  - type: worker
    disk_quota: 1024M
    health-check-type: process
    instances: 2
    memory: 256M
    timeout: 15
```
Run the following commands:
```
cf v3-push bread
cf v3-apply-manifest -f manifest.yml
cf v3-apps
cf v3-restart bread
```
- Configure NCP_SNAT_POOL using the cf v3-set-env command. Run the following commands (assuming the app is called app3):
```
cf v3-set-env app3 NCP_SNAT_POOL AppSnatExternalIppool
cf v3-stage app3 --package-guid <package-guid>   # optional; get the package GUID with "cf v3-packages app3"
cf v3-restart app3
```
Configuring an SNAT IP Pool or IP Address for a TAS Org
You can configure an SNAT IP pool or IP address for a TAS org using the following annotations:
- ncp_snat_pool - The pool must exist and have the tag scope: ncp/owner, tag: cluster:<cluster_name>.
- ncp_snat_ip - A specific address in an IP pool.
Note the following:
- If both ncp_snat_pool and ncp_snat_ip are specified, the SNAT IP address must be in the specified SNAT IP pool.
- If only ncp_snat_ip is specified, the SNAT IP address must be in the external IP pool specified in ncp.ini.
- If only ncp_snat_pool is specified, the SNAT IP address will be allocated from the specified pool.
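The combination rules above can be summarized in a small sketch (a hypothetical helper, not NCP code):

```python
def resolve_org_snat(annotations, ini_external_pool):
    """Determine which pool and address an org's SNAT IP comes from."""
    pool = annotations.get("ncp_snat_pool")
    ip = annotations.get("ncp_snat_ip")
    if pool and ip:
        return {"pool": pool, "ip": ip}   # the address must belong to the named pool
    if ip:
        # Only ncp_snat_ip: the address must be in the ncp.ini external pool.
        return {"pool": ini_external_pool, "ip": ip}
    if pool:
        # Only ncp_snat_pool: an address is allocated from that pool.
        return {"pool": pool, "ip": None}
    return None
```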
You can configure an SNAT IP for an org with the cf curl command. For example:
```
cf curl v3/organizations/<org-guid> -X PATCH -d '{"metadata": {"annotations": {"ncp_snat_pool": "ann-ip-pool", "ncp_snat_ip": "1.2.3.4"}}}'
```
You can get org-guid with the following command:
```
cf org <org-name> --guid
```
You can remove the ncp_snat_ip annotation with the following command:
```
cf curl v3/organizations/<org-guid> -X PATCH -d '{"metadata": {"annotations": {"ncp_snat_ip": null}}}'
```
You can go to the NSX Manager UI to check if the SNAT rule is successfully created. To check for errors, look at the NCP logs.
If you see the POOL_ACCESS_DENIED error in the NCP log, it means that the IP pool specified by ncp_snat_pool does not have the tag scope: ncp/owner, tag: cluster:<cluster_name>, or the pool's owner tag does not match the org that is sending the allocation request. After you fix the error, you must restart NCP, or remove the ncp_snat_pool annotation and add it again.
SNAT, Container Networks and Gateway Firewall
The following information is applicable to NCP 4.1.2.2 and later.
If SNAT is enabled for container networks, NCP creates SNAT rules on the top-tier router (also called the foundation tier-0 in TAS) in both Manager and Policy modes. In TAS, this is controlled by the Enable SNAT for Container Networks configuration option in the VMware NSX-T tile. In Kubernetes, it is controlled by the configuration option ncp.coe.enable_snat. In TKGI, it is always enabled.
If Gateway Firewall rules are configured on the top tier-0 router, SNAT traffic on that router can be impacted depending on the value of the firewall_match property of the SNAT rule. A key difference between the SNAT rules in Manager and Policy mode is the default value of the firewall_match property in a NAT rule. In Policy mode, the default value is MATCH_INTERNAL_ADDRESS. In Manager mode, it is BYPASS. If NCP uses any value other than BYPASS for this option, the Gateway Firewall rules defined on the top tier-0 router will be enforced on NCP-created SNAT rules as well. This implies that the traffic might be dropped if there is no rule to explicitly allow egress traffic from the container range.
Due to the difference in the default behavior of NAT rules, you can configure the firewall_match property of the SNAT rules created by NCP in Policy mode. In TAS, this can be done via the configuration option NAT Firewall Match for SNAT Rules in the VMware NSX-T tile in OpsManager. In Kubernetes, it can be done via the configuration option cfg.CONF.nsx_v3.natfirewallmatch. The available options are:
- BYPASS - No change compared to Manager mode. Gateway Firewall is not enforced for traffic that goes through SNAT.
- MATCH_INTERNAL_ADDRESS - Default setting. Gateway Firewall is enforced and will match traffic on source addresses before SNAT. You must ensure that rules are in place to allow traffic coming from the container range.
- MATCH_EXTERNAL_ADDRESS - Gateway Firewall is enforced and will match traffic on source addresses after SNAT. You must ensure that rules are in place to allow traffic coming from the SNAT range, that is, the external IP pools configured for NCP.
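The three settings differ only in which source address the Gateway Firewall evaluates. A sketch (illustrative, not NCP code; the function name is an assumption):

```python
def firewall_evaluated_source(firewall_match, pod_ip, snat_ip):
    """Return the source address the Gateway Firewall matches on,
    or None if the firewall is bypassed for SNAT traffic."""
    if firewall_match == "BYPASS":
        return None                     # Gateway Firewall not enforced
    if firewall_match == "MATCH_INTERNAL_ADDRESS":
        return pod_ip                   # matched before SNAT: container range
    if firewall_match == "MATCH_EXTERNAL_ADDRESS":
        return snat_ip                  # matched after SNAT: external IP pool
    raise ValueError(f"unknown firewall_match: {firewall_match}")
```

This makes explicit why MATCH_INTERNAL_ADDRESS requires allow rules for the container range, while MATCH_EXTERNAL_ADDRESS requires them for the SNAT (external IP pool) range.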
Note that NCP never updates the firewall_match setting of an existing SNAT rule.
Notes specific to TKGI:
- In TKGI, this property can be configured via the cluster network profile.
- This property is more relevant to dual-tier topologies. For single-tier topologies, you should not create Gateway Firewall rules on cluster tier-1 gateways (even though TKGI does not explicitly state that this is not supported).
- In TKGI, the logic is the same with some small differences:
  - The "top tier" router for a cluster will be a tier-1 router for single-tier topologies, and a tier-0 router for dual-tier topologies.
  - No SNAT rule is created if the namespace is annotated with ncp/no_snat: true.