Using Kubernetes to Manage Your Resources

Kublr Team · Mar 7, 2017

If HAL isn’t going to unlock the pod bay doors for you, you probably have an issue with users running wild in a large production Kubernetes cluster. These and a range of other issues can be solved or mitigated through finer-grained control of resource utilization, which is where Kubernetes (K8S) quotas come into play.

By applying quotas to each namespace, especially when developers and teams are allowed to schedule their own pods or create new services in an ad hoc manner, you can control and limit infrastructure costs in an autoscaled cluster. Defining quotas and limits separately for each namespace helps avoid rampant resource hogging and constrains each pod’s resource consumption.

In any fresh Kubernetes cluster, you will generally see two namespaces, named “kube-system” and “default”. The “kube-system” namespace contains the system pods and the cluster’s core components, such as kube-apiserver, etcd, kube-controller-manager, kube-scheduler, dashboard, kube-dns, and others, depending on your installation.

Over in the “default” namespace, you will find the pods that are scheduled by default (when no other namespace is specified in the pod manifest). To effectively partition your Kubernetes cluster and delegate control to other teams, you need to create a development namespace for each team; you can then control the resource consumption of their dev/test environments with custom quotas, all running in the same cluster.
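
For example, a per-team namespace can be created with kubectl (the team names here are hypothetical):

kubectl create namespace team-alpha-dev
kubectl create namespace team-beta-dev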

To apply quotas, you need to define a ResourceQuota object, as in the following example (your manifest may of course be shorter and contain only the few things you care about, such as CPU or load balancer count). It is also worth splitting such manifests into several objects, so they can easily be reused later by attaching them to other namespaces:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 32Gi
    limits.cpu: "16"
    limits.memory: 64Gi
    pods: "20"
    persistentvolumeclaims: "5"
    replicationcontrollers: "20"
    services: "20"
    services.loadbalancers: "5"

Values and resources will vary (explained later), but for now save the example to a file named “quota.yml” and attach the quota to your chosen namespace:

kubectl create -f quota.yml --namespace default

(The result “resourcequota dev-quota created” shows that the operation was completed successfully.)

If you already have some pods running, you can see exactly how much of each resource is being used, and what the limits are, with this command:

kubectl describe quota dev-quota --namespace default
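
The output looks roughly like this (the Used values are illustrative and depend on what is already running in the namespace):

Name:                   dev-quota
Namespace:              default
Resource                Used    Hard
--------                ----    ----
limits.cpu              500m    16
limits.memory           512Mi   64Gi
persistentvolumeclaims  0       5
pods                    2       20
replicationcontrollers  0       20
requests.cpu            250m    8
requests.memory         256Mi   32Gi
services                1       20
services.loadbalancers  0       5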

It shows the current consumption against each limit. If a pod exceeds its limits, it may be terminated by the system. The millicpu values (such as 500m) represent thousandths of a virtual CPU core (1000m equals one core), and what one core means depends on your cloud provider; see the official Kubernetes documentation on compute resources for the possible values.

It is important to note that after you set a custom quota for CPU or memory resources on a namespace, you must either ensure that every deployment or pod manifest you want to schedule specifies its “requests” and “limits” fields, or create a default LimitRange object that applies to every pod scheduled without limits and requests in its manifest.
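
For instance, a minimal pod manifest that satisfies such a quota might look like this (the name, image, and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: quota-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi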

Want a stress-free K8S cluster management experience? Download the Kublr-in-a-Box demo.

To verify which default limits you already have, run:

kubectl describe limits --namespace=<your namespace>

If the output shows only a default CPU request (100m) and nothing else, then you don’t have all the needed defaults set in that namespace, and any pod missing the other three values in its definition (memory request, memory limit, CPU limit) will fail to be scheduled with an error message like:

Error creating: pods "elasticsearch-master" is forbidden: failed quota: dev-quota: must specify limits.cpu,limits.memory,requests.memory

(Why are only three values needed in the pod manifest, and not all four? Because a default CPU request of 100 millicpu already exists in the namespace, and it is applied to any pod that doesn’t specify a CPU request in its definition.)

This failure to run a pod can happen if you activated some compute quotas but didn’t specify the corresponding values either in the pod definition or in the namespace’s LimitRange object.

Compute quotas are:
1. CPU request
2. Memory request
3. CPU limit
4. Memory limit

When one or more of these are set in a LimitRange object, you can skip adding them to the pod manifest, and the needed defaults will be applied during scheduling.
Here is an example LimitRange manifest that provides default values for pods that don’t specify them:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - default:
      cpu: 200m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 256Mi
    type: Container

Save it to a file named “default-limits.yml”, for example, and create the object with:

kubectl create -f default-limits.yml --namespace=default

And check if it was created with:

kubectl describe limits default-limits --namespace=default

You should see the Container type listed with its default request (100m CPU, 256Mi memory) and default limit (200m CPU, 512Mi memory) values.

Now pods scheduled without the “resources.limits” and “resources.requests” fields will no longer be rejected with that error.
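
To confirm that the defaults are applied, create a pod without any resources fields and inspect it (the pod name and image here are illustrative):

kubectl run test-defaults --image=nginx --restart=Never --namespace=default
kubectl describe pod test-defaults --namespace=default

The Requests and Limits sections of the describe output should show the values from the LimitRange.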

After a quota is set, any resource that would cause the namespace’s quota thresholds to be exceeded cannot be created.
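
For example, attempting to create a twenty-first pod against the quota above is rejected with an error along these lines (the exact wording varies by Kubernetes version):

Error from server (Forbidden): pods "quota-demo" is forbidden: exceeded quota: dev-quota, requested: pods=1, used: pods=20, limited: pods=20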

The full list of supported quota resources covers compute resources (cpu, memory, requests.cpu, requests.memory, limits.cpu, limits.memory) and object counts (pods, services, services.loadbalancers, services.nodeports, persistentvolumeclaims, replicationcontrollers, resourcequotas, secrets, configmaps).

Most of those names are self-explanatory, so there’s no need to describe each one in this article. To read more details about each, please check the official Kubernetes quota resource documentation page.

Need a user-friendly tool to set up and manage your K8S cluster? Check out our demo, Kublr-in-a-Box. To learn more, visit kublr.com.
