Introduction To Kubernetes Metrics Server And Its Installation

Reading Time: 2 minutes

What is Kubernetes Metrics Server?

Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data. Its job is to collect metrics from the Summary API exposed by the kubelet on each node. Resource usage metrics such as container CPU and memory usage can help you troubleshoot unexpected resource consumption. All of these metrics are available in Kubernetes via the Metrics API.

The Metrics API reports the amount of resources currently used by a particular node or pod. The Metrics Server is used for this purpose because metric values are not stored over time. In addition, the deployment YAML files for installation are provided in the source code of the Metrics Server project.
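Once the Metrics Server is running, the Metrics API can be queried directly through the apiserver, for example (assuming a working kubectl context):

```shell
# Query the Metrics API for node metrics; requires the Metrics Server to be installed
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```

The response is JSON containing the current CPU and memory usage of each node.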

Metrics Server Requirements:

Metrics Server has specific cluster and network configuration requirements. These requirements are not standard for all cluster distributions. Before using Metrics Server, make sure your cluster distribution supports these requirements.

  • Metrics Server must be reachable from the kube-apiserver.
  • The kube-apiserver must be correctly configured to enable the aggregation layer.
  • Nodes must have kubelet authorization configured to match the Metrics Server configuration.
  • The container runtime must implement the container metrics RPCs.
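For reference, enabling the aggregation layer typically involves kube-apiserver flags like the following. The certificate paths shown here are illustrative and vary by cluster distribution:

```shell
# Illustrative kube-apiserver flags for the aggregation layer; file paths vary per cluster
kube-apiserver \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
```

Managed distributions such as kubeadm-built clusters usually set these flags for you.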

Deploy Metrics Server to Kubernetes:

Download manifest file.

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml

Edit the file and change the settings to your liking.

vim metrics-server-components.yaml
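One common adjustment, for example on clusters whose kubelets use self-signed certificates, is adding the --kubelet-insecure-tls flag to the container args. The excerpt below is illustrative; use this flag with care, as it disables kubelet certificate verification and is best limited to test clusters:

```yaml
# Illustrative excerpt from the metrics-server Deployment in components.yaml
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls   # added: skip kubelet TLS verification (test clusters only)
```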

After making the necessary adjustments, deploy the Metrics Server to your Kubernetes cluster. If you have multiple Kubernetes clusters, switch to the appropriate one first; tools such as kubectl and kubectx make it easy to manage multiple clusters.
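Switching clusters with kubectx might look like this (the context name below is hypothetical):

```shell
# List the available contexts
kubectx
# Switch to the target cluster (name is hypothetical)
kubectx my-prod-cluster
# The same switch with plain kubectl:
kubectl config use-context my-prod-cluster
```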

Apply the Metrics Server manifest you downloaded from the release (it can also be applied directly from the release URL):

kubectl apply -f metrics-server-components.yaml

The output of the resource being created is as follows:

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Make sure the metrics-server deployment is running the required number of pods, and confirm that the Metrics Server is active.
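A quick check could look like this, assuming Metrics Server was deployed to the default kube-system namespace:

```shell
# Check that the metrics-server Deployment has its desired replicas ready
kubectl get deployment metrics-server -n kube-system
# Confirm the Metrics API service is registered and reports Available=True
kubectl get apiservice v1beta1.metrics.k8s.io
```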

You can access the Metrics API using the kubectl top command, which also makes it easier to debug the autoscaling pipeline.

To view the resource usage (CPU / memory) of the cluster nodes, run the following command:
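```shell
# Show current CPU and memory usage per node
kubectl top nodes
```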

A similar command can be used for pods.
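```shell
# Show current CPU and memory usage per pod; -A covers all namespaces
kubectl top pods -A
```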

Written by 

Ashi Dubey is a Software Intern at Knoldus Inc. She has a keen interest in learning new technologies. Her practice area is DevOps. When not working, you will find her with a book.
