Happy New Year everyone! I hope that you had a fantastic start to the new year and are looking forward to all the possibilities and opportunities that the coming year has to offer. Let’s make this year a great one for our containers by following best practices for resource management in Kubernetes.

Kubernetes is a tool that helps manage applications in containers. If you use it to deploy your applications, there is one crucial concept you need to be in control of: setting resource requests and limits.

When you create a container, you can specify how much CPU and memory it’s allowed to use. Kubernetes uses requests and limits to control and manage resources.

It’s important to set limits for how much processing power (CPU) and memory your containers can use. This ensures your applications have enough resources to run well while avoiding waste.

Balancing different resources
Requests & Limits

Requests are the minimum amount of resources that a container is guaranteed to receive, and limits are the maximum amount of resources that a container is allowed to use. When you specify resource requests for a container, Kubernetes will only schedule the pod on a node that has enough free resources to satisfy those requests; the limits are then enforced while the container runs.

It’s critical to set the limit for a resource higher than or equal to the request. If you try to set a limit lower than the request, Kubernetes will reject the pod. A pod can contain multiple containers, and each container has its own requests and limits; to see the resources for the whole pod, you add up the requests and limits of all its containers.
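
For example, a pod with two containers is scheduled based on the sum of both containers’ requests, here 96Mi of memory and 350m of CPU (with total limits of 192Mi and 700m). This is a minimal sketch; the names and images are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: app                 # placeholder name
    image: app-image          # placeholder image
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: sidecar             # placeholder name
    image: sidecar-image      # placeholder image
    resources:
      requests:
        memory: "32Mi"
        cpu: "100m"
      limits:
        memory: "64Mi"
        cpu: "200m"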

Resources: CPU & Memory

Setting appropriate CPU and memory values is essential to guarantee your applications have the resources they need to run well. If the limits are set too low, your applications may not have enough resources to function properly and may become slow or unresponsive. If they are set too high, you may be wasting resources and paying for more capacity than you actually need.

If you set a CPU limit, your container might not be able to use as much CPU as it needs. CPU is considered a “compressible” resource: if a container tries to use more CPU than its limit allows, Kubernetes throttles it. This can make your container run slower, but it will not be killed. You can use a liveness probe to make sure throttling has not degraded your container to the point where it stops responding.
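
As a rough sketch of that idea, you can pair a CPU limit with a liveness probe so Kubernetes restarts the container if throttling makes it stop responding in time. The endpoint, port, and timings below are placeholder values:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-limited-pod
spec:
  containers:
  - name: my-container        # placeholder name
    image: my-image           # placeholder image
    resources:
      limits:
        cpu: "500m"           # container is throttled above this
    livenessProbe:
      httpGet:
        path: /healthz        # placeholder health endpoint
        port: 8080            # placeholder port
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 2       # if throttling delays responses beyond this, the probe fails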

Memory is different from CPU because it cannot be compressed or throttled. If a container uses more memory than its limit allows, it is terminated (you will see it reported as OOMKilled). There is no way to slow down memory usage the way CPU is throttled, which is why choosing the right memory limits is so important.

Here is an example YAML file showing how to set resource requests and limits for a container:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:             # guaranteed minimum, used by the scheduler
        memory: "64Mi"
        cpu: "250m"
      limits:               # hard ceiling, enforced at runtime
        memory: "128Mi"
        cpu: "500m"

In this example, the container is requesting 64 megabytes of memory and 250 millicores (¼ of a core) of CPU, and it is limited to using 128 megabytes of memory and 500 millicores of CPU.

The values for memory and CPU in the YAML file are expressed in standard units. Memory is given in mebibytes (Mi, roughly a megabyte) or gibibytes (Gi, roughly a gigabyte), and CPU is given in millicores (m, where 1000m = 1 core).
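
As a quick illustration, the snippet below shows a few equivalent ways of writing these units; the numbers themselves are just examples:

resources:
  requests:
    cpu: "0.5"        # same as "500m" (half a core)
    memory: "128Mi"   # 128 mebibytes
  limits:
    cpu: "1"          # one full core, same as "1000m"
    memory: "1Gi"     # one gibibyte, i.e. 1024Mi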

Here are some tips for using resource requests and limits effectively:

  • Set them appropriately: Base resource requests and limits on what your applications actually need, then monitor your applications and adjust the values over time.
  • Use “resource quotas”: A ResourceQuota limits the total amount of resources that can be requested or consumed in a namespace. This helps ensure your applications don’t claim more resources than they need and prevents one application from using everything available (see the sketch after this list).
  • Monitor resource usage: Keep an eye on how much CPU and memory your containers are actually using, so you can confirm they are healthy and tune their requests and limits. You can use the Kubernetes dashboard or tools like `kubectl top` to monitor resource usage.
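
Here is a minimal sketch of a ResourceQuota; the name, namespace, and amounts are placeholder values you would tune for your own cluster:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota            # placeholder name
  namespace: my-namespace     # placeholder namespace
spec:
  hard:
    requests.cpu: "4"         # total CPU that all pods in the namespace may request
    requests.memory: "8Gi"    # total memory that all pods in the namespace may request
    limits.cpu: "8"           # total CPU limits allowed across the namespace
    limits.memory: "16Gi"     # total memory limits allowed across the namespace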

In conclusion, resource requests and limits are a significant part of managing applications with Kubernetes. By setting the right values and monitoring actual usage, you can make sure your applications have the resources they need to run well while using your cluster efficiently.