Hello Readers! In this blog we’ll see how to do health checks in Kubernetes. Liveness probes tell Kubernetes when to restart a container to achieve self-healing; readiness probes tell Kubernetes when a container can be added to the Service’s load balancer to serve external traffic.
Like the liveness probe, the readiness probe checks on the container, but with a different goal: while the liveness probe checks whether the container is healthy at all, the readiness probe checks whether it is in a state where it can start accepting traffic.
Readiness applies at the Service level: all containers in a Pod must be ready before the Pod receives any traffic from the Service.
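The two probes can be declared side by side on the same container. Here is a minimal sketch (the Pod name, image, and endpoint are illustrative, not taken from the examples below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probes-demo            # illustrative name
spec:
  containers:
  - name: web
    image: nginx               # any image that serves HTTP on port 80
    livenessProbe:             # failing -> the container is restarted
      httpGet:
        path: /
        port: 80
    readinessProbe:            # failing -> the Pod stops receiving traffic
      httpGet:
        path: /
        port: 80
```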
The configuration syntax for readiness probes is exactly the same as for liveness probes. Here is an example:
# readiness.yml
apiVersion: v1
kind: Pod
metadata:
  name: readiness
spec:
  containers:
  - name: readiness
    image: busybox
    args:
    - /bin/sh
    - -c
    # create /tmp/healthy, keep it for 30 seconds, then remove it
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
Let’s deploy this Pod and observe its behaviour by running the following command:
$ kubectl create -f readiness.yml
Then run the following command a few times:
kubectl get pods readiness
You will see that the Pod’s READY state changes as follows:
- When the Pod is first created, READY shows 0/1 (not ready).
- After 15 seconds (initialDelaySeconds + periodSeconds), the readiness probe runs for the first time and succeeds, so READY is set to 1/1.
- After 30 seconds, /tmp/healthy is deleted; once 3 consecutive readiness probes have failed, READY is set back to 0/1.
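The timing described above can be sketched as a small simulation. This is a toy model, not Kubernetes internals: the probe succeeds while /tmp/healthy exists (the first 30 seconds), and the container is marked not ready after 3 consecutive failures.

```python
def simulate_readiness(initial_delay=10, period=5,
                       file_deleted_at=30, failure_threshold=3, until=60):
    """Toy model of the READY flag for the Pod above (not k8s internals)."""
    ready = False
    consecutive_failures = 0
    timeline = []
    t = initial_delay + period          # first probe at initialDelay + period
    while t <= until:
        probe_ok = t < file_deleted_at  # `cat /tmp/healthy` succeeds until deletion
        if probe_ok:
            ready = True
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                ready = False           # 3 failures in a row -> not ready
        timeline.append((t, ready))
        t += period
    return timeline

for t, ready in simulate_readiness():
    print(f"t={t:2d}s ready={ready}")
```

Running it shows the container becoming ready at the 15-second probe and dropping back to not ready at the 40-second probe, matching the state changes listed above.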
Run the following command to see the log of the readiness detection failure:
$ kubectl describe pod readiness
In the Events section of the output you will see the readiness probe failure messages.
Readiness in Application Scale Up
For a multi-replica application, when a scale-up is performed, the new replica is added to the Service’s load balancer as a backend and processes client requests alongside the existing replicas.
Since application startup often requires a preparation phase, such as loading cached data or connecting to a database, there is some delay between the container starting and its being able to serve requests.
This is a great time to use a readiness probe to determine whether a container is ready, so that requests are not sent to backends that aren’t ready yet.
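As an aside, for containers with a particularly long startup phase, Kubernetes (1.16 and later) also offers a startupProbe, which holds off liveness and readiness checks until it has succeeded once. A sketch, with illustrative values:

```yaml
startupProbe:
  httpGet:
    path: /
    port: 80
  failureThreshold: 30   # allow up to 30 * 10s = 300s for startup
  periodSeconds: 10
```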
Below is a sample application configuration file.
# app.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 1
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: nginx   # must match the Pod labels above
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Now let’s deploy this sample app by running the following command:
$ kubectl create -f app.yml
Let’s check the Pod and Service status:
$ kubectl get pods
$ kubectl get svc web-svc
Let’s focus on the readinessProbe section. Here we use a different probe method, httpGet. Kubernetes considers the probe successful if the HTTP return code is at least 200 and less than 400.
- scheme – specifies the protocol: HTTP (the default) or HTTPS.
- path – specifies the request path.
- port – specifies the port.
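The success rule for httpGet probes fits in one line; here is a small sketch in plain Python, just to make the rule concrete:

```python
def http_probe_succeeds(status_code: int) -> bool:
    """Kubernetes treats an httpGet probe as successful for any 2xx or 3xx code."""
    return 200 <= status_code < 400

print(http_probe_succeeds(200))  # healthy response -> True
print(http_probe_succeeds(302))  # redirects also count as success -> True
print(http_probe_succeeds(404))  # client error -> False
print(http_probe_succeeds(500))  # server error -> False
```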
The whole process looks like this:
- The probe starts 10 seconds after the container starts.
- If the return code of http://[container_ip]:80/ is not between 200 and 399, the container is not ready and will not receive requests forwarded from the web-svc Service.
- Re-probe every 5 seconds.
- Once the return code is between 200 and 399, indicating that the container is ready, it is added to web-svc’s load balancer and starts processing client requests.
- The probe continues to run every 5 seconds; if it fails failureThreshold times in a row (1 in this example), the container is removed from the load balancer until a subsequent probe succeeds again.
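httpGet is only one of the probe handlers; the same readinessProbe/livenessProbe fields also accept exec (run a command inside the container, success = exit code 0) and tcpSocket (success = the port accepts a TCP connection). A sketch of both, with illustrative commands and ports:

```yaml
# exec handler: succeeds when the command exits with status 0
readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  periodSeconds: 5

# tcpSocket handler: succeeds when a TCP connection to the port opens
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
```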
Liveness probes use the same configuration syntax. In the example below, the container image magalix/node500 always responds with HTTP 500, so the httpGet liveness probe will fail. Probing starts after an initial delay of five seconds and then repeats at the default interval of ten seconds; after three consecutive failures (the default failureThreshold), the container is restarted.
# liveness.yml
apiVersion: v1
kind: Pod
metadata:
  name: liveness
spec:
  containers:
  - name: liveness
    image: magalix/node500
    ports:
    - containerPort: 3000
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /
        port: 3000
      initialDelaySeconds: 5
Let’s deploy this liveness Pod and observe its behaviour by running the following command:
$ kubectl create -f liveness.yml
Run the command below, and you will see that the liveness probe has failed, and the container has been killed and restarted.
$ kubectl get events
You can also verify it by running the following command:
$ kubectl get pods
In the RESTARTS column you will see that the container has been restarted. This is how the liveness probe works.
In this blog we have seen how to do health checks in Kubernetes. Both liveness and readiness probes are used to monitor the health of an application. A failing liveness probe restarts the container, whereas a failing readiness probe stops the application from serving traffic. If you have any doubts, feel free to ask me. Thanks for sticking till the end.
Happy Learning !!!