This section provides answers to frequently asked questions about deploying Tanzu Application Catalog Helm charts on Kubernetes.
What are the prerequisites for installing Tanzu Application Catalog Helm charts?
Most Tanzu Application Catalog Helm charts require the following:
- A Kubernetes v1.12+ cluster.
- The `kubectl` command line interface (`kubectl` CLI).
- The Helm v3.x CLI.
Some Helm charts also require a PersistentVolume storage provider and one or more ReadWriteMany volumes.
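As a quick sanity check, you can confirm the cluster version and the Helm CLI version from your terminal. This is a minimal sketch; the output format varies slightly across releases:

```
# Check the Kubernetes server version (should be v1.12 or later) and the Helm client version (should be v3.x)
kubectl version
helm version
```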
How do I install kubectl?
To start working with a Kubernetes cluster, you need to install the Kubernetes command line interface (`kubectl`). Follow these steps:
- Execute the following commands to install the `kubectl` CLI. OS_DISTRIBUTION is a placeholder for the binary distribution of `kubectl`; remember to replace it with the corresponding distribution for your operating system (OS).

  ```
  curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/OS_DISTRIBUTION/amd64/kubectl
  chmod +x ./kubectl
  sudo mv ./kubectl /usr/local/bin/kubectl
  ```
  Tip: Once the `kubectl` CLI is installed, you can obtain information about the current version with the `kubectl version` command.

  Note: You can also install `kubectl` by using the `sudo apt-get install kubectl` command.
- Check that `kubectl` is correctly installed and configured by executing the `kubectl cluster-info` command:

  ```
  kubectl cluster-info
  ```
  The `kubectl cluster-info` command shows the IP addresses of the Kubernetes control plane and its services.
- You can also verify the cluster by checking the nodes. Use the following command to list the connected nodes:

  ```
  kubectl get nodes
  ```

  To get complete information on each node, run the following:

  ```
  kubectl describe node
  ```
Learn more about the `kubectl` CLI.
How do I install and configure Helm?
The easiest way to run and manage applications in a Kubernetes cluster is with Helm. Helm allows you to perform key application management operations, such as installing, upgrading, or deleting an application.
To install Helm v3.x, run the following commands:
```
curl https://raw.githubusercontent.com/kubernetes/helm/HEAD/scripts/get-helm-3 > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
```
Tip: If you are using macOS, you can install Helm with the `brew install helm` command.
Once you have installed Helm, a set of useful commands to perform common actions is shown below:
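These are standard Helm v3 commands; the REPOSITORY, REPOSITORY_URL, CHART and MY-RELEASE values below are placeholders to replace with your own:

```
# Add a chart repository and refresh the local chart index
helm repo add REPOSITORY REPOSITORY_URL
helm repo update

# Search the configured repositories for a chart
helm search repo CHART

# Install a chart and inspect the resulting release
helm install MY-RELEASE REPOSITORY/CHART
helm list
helm status MY-RELEASE

# Upgrade, roll back or remove a release
helm upgrade MY-RELEASE REPOSITORY/CHART
helm rollback MY-RELEASE 1
helm uninstall MY-RELEASE
```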
What credentials do I need?
You need application credentials that allow you to log in to your new application. These credentials consist of a username and password.
How do I obtain the application credentials?
Most Tanzu Application Catalog Helm charts allow you to define these credentials at deploy-time via Helm chart parameters. If not explicitly defined in this manner, the credentials are automatically generated by the Helm chart. Refer to the notes shown after chart deployment for the commands you must execute to obtain the credentials.
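For example, for a chart that stores an auto-generated password in a Kubernetes secret, the post-deployment notes typically point to a command similar to the following. The secret name and key used here are hypothetical; your chart's notes give the exact values:

```
# "MY-RELEASE" and the "password" key are placeholders; check your chart's notes for the exact names
kubectl get secret --namespace default MY-RELEASE -o jsonpath="{.data.password}" | base64 -d
```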
How do I access the deployed application?
Refer to the notes shown after chart deployment for the commands you must execute to obtain the application’s IP address.
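For example, for a chart exposed through a LoadBalancer service, the notes usually include a command similar to the following; the service name shown here is hypothetical:

```
# "MY-RELEASE" is a placeholder for the service created by your chart deployment
kubectl get svc --namespace default MY-RELEASE -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
```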
How do I access the chart deployment notes?
To see the content of the “Notes” section at any time, execute the command below. Replace MY-RELEASE with the name assigned to your chart deployment.
```
helm status MY-RELEASE
```
Here is an example of the output returned by the above command, showing the commands to obtain the IP address and credentials:
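The exact notes depend on the chart. For a hypothetical web application chart, the output might look similar to this; all names and values below are illustrative:

```
NAME: MY-RELEASE
LAST DEPLOYED: Mon Jan  1 10:00:00 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running:

   export SERVICE_IP=$(kubectl get svc --namespace default MY-RELEASE -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
   echo "URL: http://$SERVICE_IP/"

2. Get the application credentials by running:

   export PASSWORD=$(kubectl get secret --namespace default MY-RELEASE -o jsonpath="{.data.password}" | base64 -d)
   echo "Username: user"
   echo "Password: $PASSWORD"
```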
How do I scale a Deployment or StatefulSet?
To scale a Deployment or StatefulSet, two options are available:
- Use the `kubectl scale` command to scale the Deployment or StatefulSet, if available.
- Upgrade the Deployment or StatefulSet, configuring a different number of nodes.
When scaling web applications, it is necessary to use ReadWriteMany volumes if persistence is enabled.
Scaling with kubectl
Use the `kubectl scale` command to scale the StatefulSet, if available. Here is an example of using the `kubectl scale` command to scale an etcd StatefulSet:

```
kubectl scale --replicas=4 statefulset/my-release-etcd
```
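You can then watch the scaling operation complete with the rollout status subcommand; the StatefulSet name matches the example above:

```
# Wait for the scaling operation to complete
kubectl rollout status statefulset/my-release-etcd
```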
Scaling via chart upgrade
Scale the Deployment or StatefulSet via a normal chart upgrade, following the steps below:
- Set the password used at installation time via the `*.password` chart parameter. If the password was generated automatically, obtain the auto-generated password from the post-deployment instructions.
- Set the desired number of nodes via the `*.replicaCount` or `*.replicas` chart parameter.
These parameters differ per chart, depending on the chart architecture. Refer to the chart documentation for the correct parameter names for each chart.
Here is an example of scaling out an Apache deployment. Substitute the PASSWORD placeholder with the original password and replace the NUMBER_OF_REPLICAS and REPOSITORY placeholders with the total number of nodes required and a reference to your Tanzu Application Catalog chart repository.
```
helm upgrade my-release REPOSITORY/apache \
  --set rootUser.password=PASSWORD \
  --set replicaCount=NUMBER_OF_REPLICAS
```
Here is another example of scaling out an Elasticsearch StatefulSet. Replace the NUMBER_OF_REPLICAS placeholder with the total number of nodes required.
```
helm upgrade my-release REPOSITORY/elasticsearch \
  --set master.replicas=NUMBER_OF_REPLICAS
```
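After the upgrade completes, you can confirm the new number of pods. The label used below is a common chart convention, not guaranteed for every chart:

```
# "app.kubernetes.io/instance" is a common chart label; your chart may label pods differently
kubectl get pods -l app.kubernetes.io/instance=my-release
```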
What are non-root containers and how do I use them?
Ensuring that a container is able to perform only a very limited set of operations is vital for production deployments. This is possible through the use of non-root containers, which are executed by a user other than `root`. Although creating a non-root container is a bit more complex than a root container (especially regarding filesystem permissions), it is absolutely worth it. Also, in environments like OpenShift, using non-root containers is mandatory.
The containers used by some Tanzu Application Catalog Helm charts are configured as non-root and are verified to run in OpenShift environments. To check if a Helm chart works with non-root containers, see How can I check if a Helm chart is appropriately configured to meet specific security requirements?
To make your Helm chart work with non-root containers, add the `securityContext` section to the corresponding `.yaml` files. This is what is done, for instance, in the Tanzu Application Catalog Elasticsearch Helm chart. This chart deploys several Elasticsearch StatefulSets and Deployments (data, ingestion, coordinating and controller nodes), all of them with non-root containers. The configuration for the controller node StatefulSet includes the following:
```
spec:
  {{- if .Values.securityContext.enabled }}
  securityContext:
    fsGroup: {{ .Values.securityContext.fsGroup }}
  {{- end }}
```
The snippet above changes the permissions of the mounted volumes, so the container user can access them for read/write operations. In addition to this, inside the container definition, there is another `securityContext` block:
```
{{- if .Values.securityContext.enabled }}
securityContext:
  runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
```
The `values.yaml` file sets the default values for these parameters:
```
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
```
With these changes, the chart works as non-root in platforms like GKE, Minikube or OpenShift.
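A quick way to confirm that a container is actually running as non-root is to check the user ID inside the pod; MY-POD is a placeholder for one of the chart's pods:

```
# MY-POD is a placeholder; the output should report uid=1001 (the runAsUser value above), not uid=0
kubectl exec MY-POD -- id
```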
What are Pod security policies?
In Kubernetes, a pod security policy is represented by a `PodSecurityPolicy` resource. This resource lists the conditions a pod must meet in order to run in the cluster. Here’s an example of a pod security policy, expressed in YAML:
```
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'nfs'
  hostPorts:
    - min: 100
      max: 100
```
Briefly, this pod security policy implements the following security rules:
- Disallow containers running in privileged mode
- Disallow containers that require root privileges
- Disallow containers that access volumes apart from NFS volumes
- Disallow containers that access host ports apart from port 100
Let’s look at the broad structure of a pod security policy.
- The `metadata` section of the policy specifies its name.
- The `spec` section of the policy outlines the key criteria a pod must fulfil in order to be allowed to run.
Here is a brief description of the main options available (you can find more details in the official Kubernetes documentation):
- The `privileged` field indicates whether to allow containers that use privileged mode. For more information, see Pods.
- The `runAsUser` field defines which users a container can run as. Most commonly, it is used to prevent pods from running as the `root` user.
- The `seLinux` field defines the Security-Enhanced Linux (SELinux) security context for containers and only allows containers that match that context. Learn more about SELinux.
- The `supplementalGroups` and `fsGroup` fields define the user groups or fsGroup-owned volumes that a container may access. Learn more about fsGroups and supplemental groups.
- The `volumes` field defines the type(s) of volumes a container may access. Learn more about volumes.
- The `hostPorts` field, together with related fields like `hostNetwork`, `hostPID` and `hostIPC`, restricts the ports (and other networking capabilities) that a container may access on the host system.
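Note that defining a pod security policy is not enough by itself: the user or service account creating the pods must also be authorized to use the policy. Below is a minimal RBAC sketch, assuming the `example` policy shown above and the `default` service account in the `default` namespace:

```
# Hypothetical ClusterRole allowing use of the "example" policy defined above
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-psp-user
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['example']
---
# Hypothetical binding granting that role to the default service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-psp-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-psp-user
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```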
What are Network policies?
A network policy is a set of network traffic rules applied to a given group of pods in a Kubernetes cluster. Just like every element in Kubernetes, it is modeled using an API resource: `NetworkPolicy`. The following describes the broad structure of a network policy:
- The `metadata` section of the policy specifies its name.
- The `spec` section of the policy defines the pods the policy applies to and the traffic rules that govern them.
Here is a brief description of the main options available (you can find more details in the official Kubernetes API Reference):
- The `podSelector` field: This establishes which pods the policy applies to (destination pods from now on); traffic to these pods is accepted only if it matches the rules defined in the next element. Pods can be specified using the following criteria:
  - The `namespaceSelector` field: This selects pods belonging to a given namespace.
  - The `labelSelector` field: This selects pods containing a given label.
- Network policy ingress rules (`ingress`): These establish a set of allowed traffic rules. You can specify:
  - The `from` (origin pods) field: This specifies which pods are allowed to access the previously specified destination pods. Just like with destination pods, these origin pods can be specified using namespace selectors and labels.
  - The `ports` (allowed ports) field: This specifies which destination pod’s ports can be accessed by the origin pods.
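As a minimal sketch, the following policy allows pods labeled `app: backend` to accept TCP traffic on port 8080 only from pods labeled `app: frontend` in the same namespace; the labels and port are hypothetical:

```
# Hypothetical policy: "app: backend" and "app: frontend" are illustrative labels
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```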
To view more examples, see the Kubernetes documentation.