How to Secure cluster nodes and the network (Part 1)

Reading Time: 5 minutes

In Kubernetes, we know the API server is responsible for validating and configuring the data for API objects such as pods, services, and replication controllers. So let’s understand why we need to secure Kubernetes cluster nodes.

For instance, if an attacker gets access to the API server, they can manipulate whatever they like by packaging their code into a container image and running it in a pod.

So, in order to prevent this and secure Kubernetes cluster nodes, we’ll see the following in this blog:

Using the host node’s namespaces in a pod:

Now, we can set spec.hostNetwork to true, so the pod gets to use the node’s network interfaces instead of having its own set.

And why are we using it?

For example, 

A pod may need to use the node’s network adapters instead of its own virtual network adapters. This can be achieved by setting the hostNetwork property in the pod spec to true.

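The original listing isn’t reproduced here, but a minimal sketch of a plain pod matching the pod-without-hn name used below could look like this (the alpine image and sleep command are assumptions; any image with ifconfig available works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-without-hn
spec:
  containers:
  - name: main
    image: alpine                      # assumed image
    command: ["/bin/sleep", "999999"]  # keep the container running
```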

After that, you can check with the following command:

$ kubectl exec pod-without-hn -- ifconfig

Here, eth0 and lo belong to the pod’s own network namespace.

By using hostNetwork:

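A sketch of the same pod with hostNetwork enabled, matching the pod-hn name used below (image and command again assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hn
spec:
  hostNetwork: true                    # use the node's network namespace
  containers:
  - name: main
    image: alpine                      # assumed image
    command: ["/bin/sleep", "999999"]
```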

$ kubectl exec pod-hn -- ifconfig

As you can see, this pod is using the node’s default network namespace.

When the Kubernetes Control Plane components are deployed as pods, such as when you deploy your cluster with kubeadm, you’ll find that those pods use the hostNetwork option, effectively making them behave as if they weren’t running inside a pod.

Binding to a host port without using the host’s network namespace:

A related feature allows pods to bind to a port in the node’s default namespace, but still have their own network namespace.

This is done by using the hostPort property in one of the container’s ports defined in the spec.containers.ports field.

Don’t confuse pods using hostPort with pods exposed through a NodePort service. With hostPort, a connection to the node’s port is forwarded directly to the pod running on that node, whereas with a NodePort service, a connection to the node’s port is forwarded to a randomly selected pod (possibly on another node).


If a host port is used, only a single pod instance can be scheduled to a node, since two pods can’t bind to the same host port.

Let’s see how to define the hostPort in a pod’s YAML definition:

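The port 9000 comes from the text below; the pod name, image, and containerPort in this sketch are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostport                   # assumed name
spec:
  containers:
  - name: main
    image: nginx                       # assumed image serving HTTP
    ports:
    - containerPort: 80                # port the container listens on (assumed)
      hostPort: 9000                   # reachable via port 9000 on the node
      protocol: TCP
```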

After you create this pod, you can access it through port 9000 of the node it’s scheduled to. If you have multiple nodes, you’ll see you can’t access the pod through that port on the other nodes.

To check on minikube:

$ minikube ip

$ curl http://<minikube-ip>:9000

NOTE: Initially, people also used it to ensure two replicas of the same pod were never scheduled to the same node.

Using the node’s PID and IPC namespaces:

Similar to the hostNetwork option are the hostPID and hostIPC pod spec properties, which you set when you want the pod to use the node’s PID and IPC namespaces. When you set them to true, the pod’s containers will use the node’s PID and IPC namespaces, allowing processes running in the containers to see all the other processes on the node or communicate with them through IPC, respectively. See the following example.

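A sketch of such a pod, matching the pod-hpid-ipc name used below (image and command assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hpid-ipc
spec:
  hostPID: true                        # share the node's PID namespace
  hostIPC: true                        # share the node's IPC namespace
  containers:
  - name: main
    image: alpine                      # assumed image
    command: ["/bin/sleep", "999999"]
```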

$ kubectl exec pod-hpid-ipc -- ps aux

By running the above command, you’ll see all the processes running on the host node, not only the ones running in the container, so you can communicate with all the other processes running on the node through Inter-Process Communication.

Configuring the container’s security context:

Besides allowing the pod to use the host’s Linux namespaces, other security-related features can also be configured on the pod and its containers through the securityContext properties, which can be specified under the pod spec directly and inside the spec of individual containers.

Configuring the security context allows you to do various things, such as running the container as a specific user, preventing it from running as root, and running it in privileged mode.

First, check which user a container runs as when no security context is set:

$ kubectl exec pod-without-su -- id

Now you’ll see uid=0(root) and gid=0(root), which means the container is running as root.

Next, you’ll run a pod where the container runs as a different user.

Running a container as a specific user:

To run a pod under a different user ID than the one that’s baked into the container image, you’ll need to set the pod’s securityContext.runAsUser property. You’ll make the container run as user guest, whose user ID in the alpine container image is 405, as shown in the following listing.
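A sketch of the listing, matching the pod-as-user-guest name used below (the sleep command is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-as-user-guest
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsUser: 405                   # user ID of guest in the alpine image
```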

$ kubectl exec pod-as-user-guest -- id

uid=405(guest) gid=100(users)

As requested, the container is running as the guest user.

Preventing a container from running as root:

What if an attacker gets access to your image registry and pushes a different image under the same tag? The attacker’s image is configured to run as the root user.

When Kubernetes schedules a new instance of your pod, the Kubelet will download the attacker’s image and run whatever code they put into it.

Although containers are mostly isolated from the host system, running their processes as root is still considered a bad practice. For example, when a host directory is mounted into the container, a process running as root has full access to the mounted directory, whereas a non-root process won’t. To prevent the attack scenario described previously, you can specify that the pod’s container needs to run as a non-root user, as shown in the following listing.
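A sketch of the listing, matching the pod-run-as-non-root name used below (image and command assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-run-as-non-root
spec:
  containers:
  - name: main
    image: alpine                      # assumed image
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsNonRoot: true               # refuse to run if the image runs as root
```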

If you deploy this pod, it gets scheduled, but is not allowed to run:

$ kubectl get po pod-run-as-non-root

Now, if anyone tampers with your container images, they won’t get far.

Running pods in privileged mode:

Sometimes pods need to do everything that the node they’re running on can do, such as use protected system devices or other kernel features, which aren’t accessible to regular containers.

An example of such a pod is the kube-proxy pod, which needs to modify the node’s iptables rules to make services work.

To get full access to the node’s kernel, the pod’s container runs in privileged mode. This is achieved by setting the privileged property in the container’s securityContext property to true.
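A sketch of such a pod, matching the pod-privileged name used below (image and command assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-privileged
spec:
  containers:
  - name: main
    image: alpine                      # assumed image
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true                 # full access to the node's kernel and devices
```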

Go ahead and deploy this pod, so you can compare it with the non-privileged pod you ran earlier.

List of available devices in a non-privileged pod

$ kubectl exec -it pod-without-hn -- ls /dev

List of available devices in a privileged pod

$ kubectl exec -it pod-privileged -- ls /dev

If you compare this with the device list of the non-privileged pod you ran earlier, you’ll see the difference: the privileged container sees all the host node’s devices. This means it can use any device freely.

For example, I had to use privileged mode like this when I wanted a pod running on a Raspberry Pi to control LEDs connected to it.

If you want to know more about it , refer to (How to secure kubernetes cluster nodes and Network Part 2).

Written by 

A curious DevOps Intern who loves to learn and work on technical skills/tools, and knows the basics of Linux, Docker, Ansible, Kubernetes, and more.
