Kubernetes Readiness Probe: A Simple Guide with Examples



Cheers! Your engineering team is celebrating the rollout of a few bug fixes to an application in your Kubernetes cluster. Dang! A few minutes after the rollout, you get a ping that your application is down. You tinker for a bit, wondering what could have possibly gone wrong.

Aha! You exclaim. You just remembered that the application needs some time before it’s ready to serve traffic. But how do you go about this? How do you give your users a better experience? How do you decrease downtime and maximize application availability? How can you reduce disruptions and ensure the reliability of your service?

The answer to these questions is a Kubernetes readiness probe. So, let’s dive in to understand what a Kubernetes readiness probe is and how you can use it to salvage your situation.

Key Takeaways

  • Readiness probes check if a container’s application is ready to start receiving requests.
  • There are 4 ways to perform checks on containers using Kubernetes readiness probes. They are exec, httpGet, tcpSocket, and grpc.
  • Set an appropriate initialDelaySeconds for your application to avoid false negatives or unnecessarily long response times.

Prerequisites

Before getting started, make sure you have access to a functioning Kubernetes cluster. If you don’t already have one, you can set it up using a tool like minikube. Additionally, ensure you have kubectl installed. You also need basic knowledge of Kubernetes concepts, such as Pods, Deployments, and Services. You can check out our getting started with Kubernetes guide.
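If you want to quickly verify your setup, the commands below are one way to do it. This is only a sanity check and assumes you are using minikube; adjust accordingly for other clusters:

# start a local cluster with minikube (skip if you already have a cluster)
minikube start

# confirm kubectl can reach the cluster and the node is Ready
kubectl version
kubectl get nodes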

What is a Kubernetes Probe?

A probe in Kubernetes is a diagnostic performed by the kubelet to check the health of your container. Probes target the application running in a container within a Pod. They are used to determine whether the application within the container has started (Startup Probe), whether it is ready to respond to requests (Readiness Probe), or whether it is still running (Liveness Probe).

As highlighted above, these are the 3 types of probes Kubernetes uses to monitor a container’s application. We will focus on the Readiness Probe and understand why it exists and how it works.

Want to get started with Kubernetes? Check out our Kubernetes for the Absolute Beginners course.

What is a Kubernetes Readiness Probe?

Services in Kubernetes rely on Pods, which are commonly part of a Deployment, as their backends. If a Pod’s containers fail their readiness probes, the Pod does not receive traffic from a Service in Kubernetes. The Pod’s IP will be removed from the list of endpoints for all Services that match the Pod until it is ready to start accepting traffic.

A Kubernetes readiness probe can be performed in any of these 4 ways: a readiness command, a readiness HTTP request, a TCP readiness probe, or a gRPC readiness probe. Let’s see a practical example for each of these types.

Configure a Readiness Command

You can define a readiness command to be executed by the kubelet in the target container. Once executed, the command’s result is interpreted as one of these three outcomes:

  • Success: the command exits with status code 0, which means your application is ready to start receiving requests.
  • Failure: the command exits with a non-zero status code, which means the application is not yet ready.
  • Unknown: the diagnostic itself failed, so the kubelet can’t determine whether the container’s application is ready; no action is taken, and the kubelet will perform further checks.

Let’s see an example below:

# readiness-exec.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: readiness
  name: readiness-exec
spec:
  replicas: 1
  selector:
    matchLabels:
      app: readiness
  strategy: {}
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - image: httpd:alpine
        name: httpd
        resources: {}
        readinessProbe:
          exec:
            command:
            - stat
            - /tmp/ready
          initialDelaySeconds: 10
          periodSeconds: 5

In the example above, to perform the readiness probe, the kubelet executes the command stat /tmp/ready in the target container once it starts. The stat command displays information about a file, including its size in bytes, file permissions, etc. You can define any command depending on your scenario.
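If you want to see how this maps to the probe outcomes described earlier, you can run the same command manually in the container once the Pod is up. The Pod name below is a placeholder:

# while /tmp/ready does not exist, stat exits with a non-zero code,
# which the kubelet interprets as a probe failure
kubectl exec <pod_name> -- stat /tmp/ready
echo $?   # prints a non-zero value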

There are other configuration fields defined above that control the behavior of a readiness probe:

  • initialDelaySeconds: 10 tells the kubelet to wait for 10 seconds before performing the 1st readiness probe. It defaults to 0 seconds if it’s not defined and the minimum value is 0.
  • periodSeconds: 5 tells the kubelet to perform the readiness probe every 5 seconds. It defaults to 10 seconds if it’s not defined and the minimum value is 1. Also, if the value of periodSeconds is greater than initialDelaySeconds then the initialDelaySeconds would be ignored.

A summary of all the probe configuration fields is available in the Kubernetes documentation. With a readiness probe:

  • The failureThreshold is the number of consecutive failures required to consider the probe failed. When a readiness probe fails, the kubelet continues running the container (the Pod phase remains Running) and continues to run more probes, but it sets the Pod’s Ready condition to false. The default value for failureThreshold is 3.
  • The successThreshold is the minimum number of consecutive successes required to consider the probe successful. It defaults to 1, and the minimum value is 1 (a minimal sketch showing these fields follows this list).
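Below is a minimal sketch of how these threshold fields fit alongside the timing fields in a readiness probe definition. The values are illustrative, not recommendations, and timeoutSeconds is an additional field not used in the example above:

readinessProbe:
  exec:
    command:
    - stat
    - /tmp/ready
  initialDelaySeconds: 10   # wait 10 seconds before the first probe
  periodSeconds: 5          # probe every 5 seconds
  timeoutSeconds: 1         # how long a single probe may take before it counts as failed
  failureThreshold: 3       # mark the container not ready after 3 consecutive failures
  successThreshold: 1       # one successful probe marks the container ready again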

You can apply the Kubernetes manifest file above using the command: 

kubectl apply -f readiness-exec.yaml 

To see details on Pods in your cluster, run the command: 

kubectl get pods

This command will display a list of pods running in your Kubernetes cluster, as seen in the output below:

[Screenshot: kubectl get pods output showing the Pod with READY 0/1 and STATUS Running]

The READY column shows the number of ready containers in the Pod out of the total number of containers in the Pod, which is 0/1 here. But why are there no ready containers in the Pod when the STATUS column of the Pod says Running?

To answer that question, you need to understand 2 different concepts:

  • Pod phase: The phase of a Pod tells what stage the Pod is in within its lifecycle. In the screenshot above, the Pod’s phase is summarized in the STATUS column, which is Running. A Pod in the Running phase has been assigned to a node, and all the containers within it have been created and are running on that node. A summary of all the phases of a Pod can be found in the Kubernetes documentation.
  • Pod conditions: The conditions of a Pod describe the health of the Pod. The ContainersReady Pod condition is summarized in the READY column, which is currently 0/1. This means that no containers in the Pod are ready to start accepting traffic. The ContainersReady condition tells whether all containers within a Pod have passed their readiness probes. A summary of all the conditions of a Pod can also be found in the Kubernetes documentation. By default, Kubernetes treats the readiness state of a container as Failure before the initial delay (initialDelaySeconds) defined in the probe elapses.

To see a Pod’s phase and conditions, you can also run the command: 

kubectl describe pod <pod_name> 

The command will display detailed information about a specified Pod, including the Pod conditions. See the screenshot below:

[Screenshot: kubectl describe pod output showing the Pod conditions]

Hence, even though the ContainersReady Pod condition is False, the Pod is in a Running state because that’s where the Pod is in its lifecycle. This just means that the Pod’s containers have been created and are running on the assigned node. However, the ContainersReady condition has to be True, meaning all containers in the Pod passed their readiness probes, for the Ready Pod condition to be True. The Ready condition on the Pod determines whether the containers are ready to start receiving traffic.

Please note that if a container doesn’t have a readiness probe defined, the default readiness state is Success. This means that without a readiness probe configured, Kubernetes assumes the container in a Pod is ready to serve traffic. This can negatively impact user experience if the container’s application is not ready to start receiving traffic. 

To further investigate why the ContainersReady Pod condition is false, you can also check the Events field. Part of the output should be similar to the screenshot below:

[Screenshot: Pod Events showing the failed readiness probe messages]

The Events details show that the readiness probe failed because the file path used with the stat command doesn’t exist in the container.

Also, as mentioned earlier, if the readiness probe fails, the Pod’s IP address is removed from the endpoints of all services that match the Pod. To demonstrate this, let’s create a service by running the command:

kubectl expose deployment readiness-exec --name readiness-exec-svc --type=NodePort --port=80 

This command exposes the existing deployment, readiness-exec, as a service called readiness-exec-svc. It is a service of type NodePort, listening on port 80.
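If you prefer a declarative approach, a roughly equivalent Service manifest could look like the sketch below (the nodePort is left for Kubernetes to assign):

# readiness-exec-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: readiness-exec-svc
spec:
  type: NodePort
  selector:
    app: readiness      # matches the Pod labels from the readiness-exec Deployment
  ports:
  - port: 80
    targetPort: 80

You can apply it with kubectl apply -f readiness-exec-svc.yaml, just like the Deployment manifest earlier.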

To see more details related to the service, you can run the command:

kubectl describe service <service_name>

The command above displays the output below:

[Screenshot: kubectl describe service output for readiness-exec-svc with an empty Endpoints field]

Currently, there are no Pod IPs in the Endpoints field. This is because the Pod’s container is not ready to start receiving requests since the readiness probe failed. 

Now, let’s exec into the container to create the file /tmp/ready. To do so, run the command: 

kubectl exec <pod_name> -- touch /tmp/ready

To display the pod details, you can run the command:

kubectl get pod <pod_name> 

This command shows the READY column of the Pod is now 1/1, indicating that the containers are ready to receive requests. See the screenshot below: 

[Screenshot: kubectl get pod output showing the Pod with READY 1/1]

You can also get more details about the Pod using kubectl describe, as highlighted earlier, to see the Pod conditions.

Also, to check if the Endpoints field now has a value, run the command:

kubectl describe service <service_name>

This command displays details of the specified service, including the endpoints, as seen below:

[Screenshot: kubectl describe service output showing the Pod’s IP in the Endpoints field]

The Pod’s IP is now added to the matching service because the Pod’s containers passed their readiness probe.

To verify the Pod’s IP address, you can run the command: 

kubectl get pod <pod_name> -o wide

Using the -o wide option gives more information when listing a Pod with the kubectl get pod command. It includes an IP column, which shows the IP address of the Pod.

So, with the example given, the kubelet starts performing readiness probes 10 seconds after the container starts. This assumes your container’s application takes about 10 seconds to run any initialization tasks. Examples of initialization tasks are loading large data or configuration files during startup, connecting to external services, etc. Then, after the initialDelaySeconds, it performs the readiness probe every 5 seconds (as specified in our example) throughout the lifecycle of the container.

The readiness probe executions can be seen as follows at a high level for our given example:

  • At time T=0 seconds, the container in the Pod starts.
  • At time T=10 seconds (the initialDelaySeconds given in the example above), the 1st readiness probe is performed.
  • If the readiness probe succeeds, it is counted as 1 successful probe.
  • At every periodSeconds (5 seconds in our example), subsequent readiness probes are executed. For example, the 2nd probe will be executed at T=15 seconds, the 3rd at T=20 seconds, and so on.
  • If the readiness probe keeps succeeding for the minimum number of consecutive times defined by the successThreshold, the container is ready to start receiving traffic.
  • If the readiness probe fails, it is counted as 1 failed probe. If the number of consecutive failures reaches the failureThreshold defined, the container is considered not ready. The Pod’s IP address will be removed from all matching Service endpoints until the container becomes ready.

Configure a Readiness HTTP Request

You can also define an HTTP GET request for a readiness probe. The kubelet sends an HTTP GET request to a specified path on a specified port that the container’s server listens on. Any response code greater than or equal to 200 and less than 400 (200 <= code < 400) indicates Success. Any other status code indicates Failure. If the kubelet can’t determine whether the container is ready or not, the result of the readiness probe is Unknown.

Let’s see an example below:

# readiness-http.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: readiness
  name: readiness-http
spec:
  replicas: 1
  selector:
    matchLabels:
      app: readiness
  strategy: {}
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - image: nginx:alpine
        name: readiness-http
        resources: {}
        readinessProbe:
          httpGet:
            path: /
            port: 80
            httpHeaders:
              - name: Custom-Header
                value: Awesome
          initialDelaySeconds: 8
          periodSeconds: 3

In the example above, the kubelet performs a readiness probe by sending an HTTP GET request to the nginx server at path / on port 80. You can also define custom HTTP headers to be sent with the HTTP GET request, as seen above.
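To get a feel for what the kubelet does, you can approximate the probe request with curl from inside the cluster, for example from another Pod or from the node. The Pod IP below is a placeholder you can get from kubectl get pod -o wide:

# send the same GET request the probe would send, including the custom header,
# and print only the returned HTTP status code
curl -s -o /dev/null -w "%{http_code}\n" -H "Custom-Header: Awesome" http://<pod_ip>:80/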

The other configuration fields that control the behavior of a readiness probe, like initialDelaySeconds, periodSeconds, etc, can also be defined when configuring a readiness HTTP request.

You can also use a named port. Below is a trimmed-down example specifying the named port configuration:

...
ports:
- name: readiness-port
  containerPort: 80
  hostPort: 80
readinessProbe:
  httpGet:
    path: /
    port: readiness-port
...

Configure a TCP Readiness Probe

You can also define a TCP readiness probe, where the kubelet attempts to open a TCP connection to the container on a specified port. If the connection is established successfully, the container’s application is considered healthy and ready to start accepting requests. If the connection can’t be established, the readiness probe is considered a failure. Below is a trimmed-down version specifying the TCP socket configuration:

...
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
...

You just need to specify the port for the TCP socket connection. The other configuration fields, like initialDelaySeconds, periodSeconds, etc, can also be defined when configuring a readiness TCP probe.

TCP probes also support named ports, as seen in the HTTP readiness probe example.

Configure a gRPC Readiness Probe

You can also define a remote procedure call using gRPC if your application implements the gRPC health-checking protocol. Below is a trimmed-down version specifying the gRPC probe configuration:

...
readinessProbe:
  grpc:
    port: 2379
  initialDelaySeconds: 10
...

As seen above, you need to specify the gRPC port for the remote procedure call. gRPC probes do not support named ports. You can read more about the nuances of configuring a gRPC probe in the Kubernetes documentation.
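If you want to check a gRPC health endpoint manually, one common option is the open-source grpc_health_probe tool. This is only a sketch and assumes the binary is available in the container image or on a machine that can reach the port:

# query the gRPC health-checking service on the port used in the example above
grpc_health_probe -addr=localhost:2379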

Best Practices When Using Kubernetes Readiness Probes

Below are some of the best practices you should adhere to when using readiness probes:

  • Ensure you choose the appropriate readiness probe mechanism for your application. A TCP readiness probe may be more appropriate for services or processes that are configured to run in the background (a daemon), like a database server, web server, etc.
  • Ensure you use appropriate timing values for the fields defined in a readiness probe configuration. If the probe runs too frequently, it can increase the load on the network. If the initial delay is too short, probes will fail before the application is ready; if it’s too long, response times will be negatively impacted.
  • Monitor the readiness probes themselves. You can check probe results by accessing Pod events with kubectl describe and Pod logs with kubectl logs, and you can set up logging and monitoring systems like Grafana Loki and Prometheus with Grafana.
  • It is a common pattern to use liveness probes along with readiness probes to ensure the kubelet restarts the container if deadlocks are detected (see the sketch after this list).
  • Ensure that the scripts or commands executed by readiness probes in containers run with the least privilege. 
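As mentioned in the list above, readiness and liveness probes are often configured together. Below is a minimal sketch of a container spec that combines both; the paths and timing values are illustrative, and a real application would typically expose dedicated health endpoints:

...
containers:
- image: nginx:alpine
  name: web
  readinessProbe:           # gates whether the Pod receives traffic from Services
    httpGet:
      path: /               # replace with your application's readiness endpoint
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 5
  livenessProbe:            # restarts the container if the check keeps failing
    httpGet:
      path: /               # replace with your application's health endpoint
      port: 80
    initialDelaySeconds: 15
    periodSeconds: 10
...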

Learn how to effectively log and monitor your readiness probes by taking our Grafana Loki and Prometheus Certified Associate (PCA) courses.

The Benefits of Kubernetes Readiness Probes

Below are some of the benefits of configuring readiness probes: 

  • Readiness probes ensure smooth and consistent application updates, which provide a better user experience by minimizing downtime. Incoming requests won’t be routed to unhealthy Pods because their IP addresses are removed from service endpoints by the endpoint controller.
  • It improves the overall reliability and availability of a system because incoming traffic is routed only to available pods.
  • It helps SRE or DevOps teams identify issues early. You can configure alerting so that the on-call team is notified when a readiness probe fails (a simple starting point is shown after this list).
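Before setting up full alerting, one simple way to spot failing probes is to look at cluster events; the kubelet records an Unhealthy event each time a probe fails:

# list recent events generated by failing readiness (and liveness) probes
kubectl get events --field-selector reason=Unhealthy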

FAQ

Below is a frequently asked question about Kubernetes readiness probes.

What are the Common Scenarios that Cause a Readiness Probe to Fail?

Below are some of the common mistakes that can cause a Kubernetes readiness probe to fail (a quick way to check the resulting Ready condition is shown after this list):

  • Defining an inappropriate value for initialDelaySeconds. If the kubelet starts performing readiness probes before the container’s application finishes initializing, the probes will fail. This is a false negative.
  • Defining incorrect values for readiness probe fields can cause an error. For example, configuring a wrong HTTP GET readiness probe path or port.
  • Configuring inadequate security policies can limit or block a readiness probe. Security configurations like network policies, security groups, and firewall rules can cause readiness probes to fail if not configured correctly.
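When troubleshooting cases like these, a quick way to check a Pod’s Ready condition, in addition to kubectl describe, is a jsonpath query. The Pod name below is a placeholder:

# print the status of the Pod's Ready condition (True or False)
kubectl get pod <pod_name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'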

Many more reasons can cause the failure of readiness probes. You need to ensure you define appropriate values and monitor readiness probes to identify and address such issues proactively. You can share some scenarios you have encountered in the comment section.

Conclusion

You have learned the different mechanisms used in configuring a Kubernetes readiness probe, as well as some best practices for using readiness probes. A Kubernetes readiness probe is one of the key components when orchestrating a highly available and reliable distributed system, and it enhances user experience by maximizing uptime.

Looking to certify your Kubernetes skills? Check out the certification exam preparation courses from KodeKloud; each course also has a mock exam series or challenges to prepare you for the exam.
