Accessing Pod metadata and resources from applications


In this blog we will see how to access Pod metadata and resources from applications. Modern applications are often composed of many microservices, so understanding how to access Pod metadata and resources from applications becomes important. Applications often need information about the environment they’re running in, including details about themselves and about other components in the cluster. You’ve already seen how Kubernetes enables service discovery through environment variables or DNS, but what about other information? We will see how certain pod and container metadata can be passed to the container, and how easy it is for an app running inside a container to talk to the Kubernetes API server to get information about the resources deployed in the cluster, and even to create or modify those resources.

Exposing metadata through environment variables

First, let’s look at how you can pass the pod’s and container’s metadata to the container through environment variables. You’ll create a simple single-container pod from the following listing’s manifest.
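The original listing isn’t reproduced here, but a minimal manifest along the following lines would match the environment variables described below (the pod name downward comes from the kubectl command later in the post; the busybox image and sleep command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "9999999"]
    resources:
      requests:
        cpu: 15m
        memory: 100Ki
      limits:
        cpu: 100m
        memory: 4Mi
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:                      # pod-level metadata field
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
    - name: CONTAINER_CPU_REQUEST_MILLICORES
      valueFrom:
        resourceFieldRef:              # container resource field
          resource: requests.cpu
          divisor: 1m                  # expose the value in millicores
    - name: CONTAINER_MEMORY_LIMIT_KIBIBYTES
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
          divisor: 1Ki                 # expose the value in kibibytes
```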

When your process runs, it can look up all the environment variables you defined in the pod spec. The pod’s name, IP, and namespace will be exposed through the POD_NAME, POD_IP, and POD_NAMESPACE environment variables, respectively. The name of the node the container is running on will be exposed through the NODE_NAME variable, and the name of the service account is made available through the SERVICE_ACCOUNT environment variable.

You’re also creating two environment variables that will hold the container’s CPU request and the maximum amount of memory the container is allowed to consume. For environment variables exposing resource limits or requests, you specify a divisor.

The actual value of the limit or the request is divided by the divisor, and the result is exposed through the environment variable. In the example above, we set the divisor for the CPU request to 1m (one millicore, or one one-thousandth of a CPU core).

Because you’ve set the CPU request to 15m, the environment variable CONTAINER_CPU_REQUEST_MILLICORES will be set to 15. Likewise, you set the memory limit to 4Mi (4 mebibytes) and the divisor to 1Ki (1 kibibyte), so the CONTAINER_MEMORY_LIMIT_KIBIBYTES environment variable will be set to 4096.
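The divisor arithmetic can be sketched in a couple of shell expressions (a plain illustration of the math, not something Kubernetes itself runs):

```shell
# 15m CPU request divided by a 1m divisor -> value in millicores
cpu_request_millicores=$(( 15 / 1 ))
# 4Mi memory limit (4 * 1024 * 1024 bytes) divided by a 1Ki divisor (1024 bytes)
memory_limit_kibibytes=$(( 4 * 1024 * 1024 / 1024 ))
echo "CONTAINER_CPU_REQUEST_MILLICORES=$cpu_request_millicores"
echo "CONTAINER_MEMORY_LIMIT_KIBIBYTES=$memory_limit_kibibytes"
```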

The divisor for CPU limits and requests can be either 1 (one whole core) or 1m (one millicore). The divisor for memory limits/requests can be 1 (byte), 1k (kilobyte) or 1Ki (kibibyte), 1M (megabyte) or 1Mi (mebibyte), and so on. After creating the pod, you can use kubectl exec to see all these environment variables in your container.

$ kubectl exec downward -- env

All processes running inside the container can read those variables and use them however they need.


You may remember that labels and annotations can be modified while a pod is running. As you might expect, when they change, Kubernetes updates the files holding them, allowing the pod to always see up-to-date data. This also explains why labels and annotations can’t be exposed through environment variables: environment variable values can’t be updated afterward, so if a pod’s labels or annotations were exposed through environment variables, there would be no way to expose the new values after they’re modified.



As you’ve seen, using the Downward API isn’t complicated. It allows you to keep the application Kubernetes-agnostic. This is especially useful when you’re dealing with an existing application that expects certain data in environment variables.

The Downward API allows you to expose the data to the application without having to rewrite the application or wrap it in a shell script that collects the data and then exposes it through environment variables. But the metadata available through the Downward API is fairly limited; if you need more, you’ll need to obtain it from the Kubernetes API server directly.

Passing metadata through files in a downwardAPI volume

If you prefer to expose the metadata through files instead of environment variables, you can define a downwardAPI volume and mount it into your container. You must use a downwardAPI volume for exposing the pod’s labels or its annotations, because neither can be exposed through environment variables.

As with environment variables, you need to specify each metadata field explicitly if you want to have it exposed to the process. Let’s see how to modify the previous example to use a volume instead of environment variables, as shown in the following listing.

Instead of passing the metadata through environment variables, you’re defining a volume called downward and mounting it in your container under /etc/downward. The files this volume will contain are configured under the downwardAPI.items attribute in the volume specification. Each item specifies the path (the filename) where the metadata should be written, and references either a pod-level field or a container resource field whose value you want stored in the file.
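Again, the modified listing isn’t shown in the post; a manifest along these lines would match the description (the sample labels and annotations, the busybox image, and the file names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward
  labels:
    foo: bar
  annotations:
    key1: value1
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "9999999"]
    resources:
      requests:
        cpu: 15m
      limits:
        memory: 4Mi
    volumeMounts:
    - name: downward
      mountPath: /etc/downward    # metadata files appear here
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: "podName"           # file name inside the volume
        fieldRef:
          fieldPath: metadata.name
      - path: "labels"            # labels/annotations are only available via a volume
        fieldRef:
          fieldPath: metadata.labels
      - path: "annotations"
        fieldRef:
          fieldPath: metadata.annotations
      - path: "containerCpuRequestMilliCores"
        resourceFieldRef:
          containerName: main     # required for resource fields in a volume
          resource: requests.cpu
          divisor: 1m
```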

Understanding the available metadata

The Downward API allows you to expose the following pod and container metadata to your applications:

  • The pod’s name.
  • The pod’s IP address.
  • The namespace the pod belongs to.
  • The name of the node the pod is running on.
  • The name of the service account the pod is running under.
  • The CPU and memory requests for each container.
  • The CPU and memory limits for each container.
  • The pod’s labels.
  • The pod’s annotations.


In this blog we have seen how to access Pod metadata and resources from applications. Most of the items in the list shouldn’t require further explanation, except perhaps the service account and the CPU/memory requests and limits.

Most items in the list can be passed to containers either through environment variables or through a downwardAPI volume, but labels and annotations can only be exposed through the volume. Part of the data can be acquired by other means (for example, from the operating system directly), but the Downward API provides a simpler alternative.

Written by 

Adesh Shukla is a DevOps Intern at Knoldus Inc. His practice area is DevOps. He is always open to learning new things. His hobbies include playing cricket.