How to Secure Cluster Nodes and the Network (Part 3)

kubernetes
Reading Time: 6 minutes

 

Restricting the use of security-related features in pods

The examples in the previous articles have shown how a person deploying pods can do whatever they want on any cluster node, by deploying a privileged pod to the node, for example. Obviously, a mechanism must prevent users from doing part or all of what's been explained. The cluster admin can restrict the use of the previously described security-related features by creating one or more PodSecurityPolicy resources.

Introducing the PodSecurityPolicy resource

PodSecurityPolicy is a cluster-level (non-namespaced) resource, which defines what security-related features users can or can't use in their pods. The job of upholding the policies configured in PodSecurityPolicy resources is performed by the PodSecurityPolicy admission control plugin running in the API server.

NOTE:

The PodSecurityPolicy admission control plugin may not be enabled in your cluster. Before running the following examples, ensure it's enabled. If you're using Minikube, you may need to enable the admission plugin explicitly when starting the cluster.

When someone posts a pod resource to the API server, the PodSecurityPolicy admission control plugin validates the pod definition against the configured PodSecurityPolicies. If the pod conforms to the cluster's policies, it's accepted and stored in etcd; otherwise it's rejected immediately. The plugin may also modify the pod resource according to defaults configured in the policy.

UNDERSTANDING WHAT A POD SECURITY POLICY CAN DO

A PodSecurityPolicy resource defines things like the following:

  • Whether a pod can use the host’s IPC, PID, or Network namespaces
  • Which host ports a pod can bind to
  • What user IDs a container can run as
  • Whether a pod with privileged containers can be created
  • Which kernel capabilities are allowed, which are added by default, and which are always dropped
  • What SELinux labels a container can use
  • Whether a container can use a writable root filesystem or not
  • Which filesystem groups the container can run as
  • Which volume types a pod can use

EXAMINING A SAMPLE POD SECURITY POLICY

The following listing shows a sample PodSecurityPolicy, which prevents pods from using the host's IPC, PID, and Network namespaces, and prevents running privileged containers and the use of most host ports (except ports from 10000-11000 and 13000-14000). The policy doesn't set any constraints on what users, groups, or SELinux groups the container can run as.

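The original listing didn't survive in this post, but based on the description above, such a policy might look like the following (a reconstruction, not the article's original listing; the field values mirror the constraints described in the text, using the policy/v1beta1 API):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: default
spec:
  hostIPC: false                  # pods can't use the host's IPC namespace
  hostPID: false                  # ... or the host's PID namespace
  hostNetwork: false              # ... or the host's network namespace
  hostPorts:                      # only these host port ranges are allowed
  - min: 10000
    max: 11000
  - min: 13000
    max: 14000
  privileged: false               # privileged containers are not allowed
  readOnlyRootFilesystem: true    # containers are forced to run with a read-only root fs
  runAsUser:
    rule: RunAsAny                # no constraints on users ...
  fsGroup:
    rule: RunAsAny                # ... filesystem groups ...
  supplementalGroups:
    rule: RunAsAny                # ... or supplemental groups
  seLinux:
    rule: RunAsAny                # no SELinux constraints either
  volumes:
  - '*'                           # all volume types allowed
```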

After this PodSecurityPolicy resource is posted to the cluster, the API server will no longer allow you to deploy the privileged pod used earlier. For example:

$ kubectl create -f pod-privileged.yaml

Error from server (Forbidden): error when creating "pod-privileged.yaml": pods "pod-privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

Likewise, you can no longer deploy pods that want to use the host’s PID, IPC, or Network namespace. Also, because you set readOnlyRootFilesystem to true in the policy, the container filesystems in all pods will be read-only (containers can only write to volumes).

Understanding runAsUser, fsGroup, and supplementalGroups Policies:

The policy in the previous example doesn't impose any limits on which users and groups containers can run as, because you've used the RunAsAny rule for the runAsUser, fsGroup, and supplementalGroups fields. If you want to constrain the list of allowed user or group IDs, you change the rule to MustRunAs and specify the range of allowed IDs.

USING THE MUST RUN AS RULE:

Let’s look at an example. To only allow containers to run as user ID 2 and constrain the default filesystem group and supplemental group IDs to be anything from 2–10 or 20–30 (all inclusive), you’d include the following snippet in the PodSecurityPolicy resource.

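The listing itself is missing from this post, but a snippet matching that description could look like this (a reconstruction; the ranges are taken directly from the text):

```yaml
runAsUser:
  rule: MustRunAs
  ranges:
  - min: 2          # a single allowed user ID: min and max are both 2
    max: 2
fsGroup:
  rule: MustRunAs
  ranges:
  - min: 2          # two allowed ranges: 2-10 and 20-30 (inclusive)
    max: 10
  - min: 20
    max: 30
supplementalGroups:
  rule: MustRunAs
  ranges:
  - min: 2
    max: 10
  - min: 20
    max: 30
```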

If the pod spec tries to set either of those fields to a value outside of these ranges, the pod will not be accepted by the API server. To try this, delete the previous PodSecurityPolicy and create the new one from the psp-must-run-as.yaml file.

NOTE:

Changing the policy has no effect on existing pods, because PodSecurityPolicies are enforced only when creating or updating pods.

DEPLOYING A POD WITH RUN AS USER OUTSIDE OF THE POLICY'S RANGE:

If you try deploying the pod-as-user-guest.yaml file from earlier, which says the container should run as user ID 405, the API server rejects the pod:

$ kubectl create -f pod-as-user-guest.yaml

Error from server (Forbidden): error when creating "pod-as-user-guest.yaml": pods "pod-as-user-guest" is forbidden: unable to validate against any pod security policy: [securityContext.runAsUser: Invalid value: 405: UID on container main does not match required range. Found 405, allowed: [{2 2}]]

Okay, that was obvious. But what happens if you deploy a pod without setting the runAsUser property, but the user ID is baked into the container image (using the USER directive in the Dockerfile)?

DEPLOYING A POD WITH A CONTAINER IMAGE WITH AN OUT-OF-RANGE USER ID:

I've created an alternative image for the Node.js app you've used throughout the book. The image is configured so that the container will run as user ID 5. The Dockerfile for the image is in the following listing.

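The Dockerfile itself is missing from this post; it might look something like this minimal sketch (the base image and app filename are assumptions; the USER 5 directive is the relevant part):

```dockerfile
FROM node:7
ADD app.js /app.js
# Run the container process as user ID 5, baked into the image itself
USER 5
ENTRYPOINT ["node", "app.js"]
```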

I pushed the image to Docker Hub as luksa/kubia-run-as-user-5. If I deploy a pod with that image, the API server doesn't reject it:

$ kubectl run run-as-5 --image luksa/kubia-run-as-user-5 --restart Never

pod "run-as-5" created

Unlike before, the API server accepted the pod and the Kubelet has run its container.

Let’s see what user ID the container is running as:

$ kubectl exec run-as-5 -- id

uid=2(bin) gid=2(bin) groups=2(bin)

As you can see, the container is running as user ID 2, which is the ID you specified in the PodSecurityPolicy. The PodSecurityPolicy can be used to override the user ID hardcoded into a container image.

USING THE MUST RUN AS NON ROOT RULE IN THE RUN AS USER FIELD:

For the runAsUser field, an additional rule can be used: MustRunAsNonRoot. As the name suggests, it prevents users from deploying containers that run as root. Either the container spec must specify a runAsUser field, which can't be zero (zero is the root user's ID), or the container image itself must run as a non-zero user ID. We explained why this is a good thing earlier.
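In a PodSecurityPolicy, this rule would be expressed like so (a short illustrative snippet, assuming the policy/v1beta1 schema):

```yaml
runAsUser:
  rule: MustRunAsNonRoot   # reject any pod whose containers would run as UID 0
```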

Configuring allowed, default, and disallowed capabilities:

As you learned, containers can run in privileged mode or not, and you can define a more fine-grained permission configuration by adding or dropping Linux kernel capabilities in each container. Three fields influence which capabilities containers can or cannot use:

  • allowedCapabilities
  • defaultAddCapabilities
  • requiredDropCapabilities

We’ll look at an example first, and then discuss what each of the three fields does. The following listing shows a snippet of a PodSecurityPolicy resource defining three fields related to capabilities.

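The snippet didn't survive in this post; one consistent with the capabilities discussed in the surrounding text (SYS_TIME, CAP_CHOWN, SYS_ADMIN, SYS_MODULE) could look like this reconstruction:

```yaml
allowedCapabilities:
- SYS_TIME                 # pod authors may explicitly add SYS_TIME
defaultAddCapabilities:
- CHOWN                    # added automatically to every container
requiredDropCapabilities:
- SYS_ADMIN                # always dropped from every container
- SYS_MODULE
```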

NOTE:

The SYS_ADMIN capability allows a range of administrative operations, and the SYS_MODULE capability allows loading and unloading of Linux kernel modules.

SPECIFYING WHICH CAPABILITIES CAN BE ADDED TO A CONTAINER:

The allowedCapabilities field is used to specify which capabilities pod authors can add in the securityContext.capabilities field in the container spec. In one of the previous examples, you added the SYS_TIME capability to your container. If the PodSecurityPolicy admission control plugin had been enabled, you wouldn't have been able to add that capability, unless it was specified in the PodSecurityPolicy.

ADDING CAPABILITIES TO ALL CONTAINERS:

All capabilities listed under the defaultAddCapabilities field will be added to every deployed pod's containers. If a user doesn't want certain containers to have those capabilities, they need to explicitly drop them in the specs of those containers.
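Explicitly dropping a capability the policy adds by default would look something like this in a pod spec (an illustrative fragment; the container name is an assumption):

```yaml
containers:
- name: main
  securityContext:
    capabilities:
      drop:
      - CHOWN            # opt out of a capability the policy adds by default
```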

The previous example enables the automatic addition of the CAP_CHOWN capability to every container, thus allowing processes running in the container to change the ownership of files in the container (with the chown command, for example).

DROPPING CAPABILITIES FROM A CONTAINER

The final field in this example is requiredDropCapabilities. I must admit, this was a somewhat strange name for me at first, but it's not that complicated. The capabilities listed in this field are dropped automatically from every container (the PodSecurityPolicy admission control plugin will add them to every container's securityContext.capabilities.drop field).

If a user tries to create a pod where they explicitly add one of the capabilities listed in the policy's requiredDropCapabilities field, the pod is rejected:

$ kubectl create -f pod-add-sysadmin-capability.yaml

Error from server: error when creating "pod-add-sysadmin-capability.yaml": pods "pod-add-sysadmin-capability" is forbidden: unable to validate against any pod security policy: [capabilities.add: Invalid value: "SYS_ADMIN": capability may not be added]

Conclusion:

Now you've learned about securing cluster nodes from pods and pods from other pods. You learned that:

  • Pods can use the node's Linux namespaces instead of using their own.
  • Containers can be configured to run as a different user and/or group than the one defined in the container image.
  • Containers can also run in privileged mode, allowing them to access the node's devices that are otherwise not exposed to pods.
  • Containers can be run with a read-only root filesystem, preventing processes from writing to the container's filesystem (and only allowing them to write to mounted volumes).

Written by

A curious DevOps intern who loves to learn and work on technical skills and tools, and knows the basics of Linux, Docker, Ansible, Kubernetes, and more.