Posts

Kubernetes: 14. Static Pods

Kubelet Service

The kubelet service is responsible for creating pods on the nodes. The kubelet gets the request to create a pod from the kube-api server, which in turn gets the request from the scheduler. The kube-api server then gets the data from etcd and sends the request to the kubelet services running on the nodes.

Static Pods

The kube-api server is not the only source the kubelet listens to for creating pods; the kubelet also looks into a specific folder on each node. If it finds a pod-definition YAML file in this folder, it creates the pod from it. Note that the kubelet only creates pods this way; other objects like ReplicaSets, Deployments etc. won't be created, and anything other than pods still has to come through the kube-api server. Pods created by the kubelet from this configuration folder are called static pods. They are not requested through the kube-api server, but when you execute kubectl get pods, the output will show the static po...
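A minimal sketch of a static pod manifest. The folder path below is only the common kubeadm default and is an assumption here; on a real node it is whatever the kubelet's staticPodPath config setting (or the older --pod-manifest-path flag) points to:

/etc/kubernetes/manifests/static-web.yaml     (assumed path)
apiVersion: v1
kind: Pod
metadata:
     name: static-web
     labels:
          app: static-web
spec:
     containers:
     - name: nginx-container
       image: nginx

The kubelet watches this folder: dropping the file in creates the pod, and deleting the file removes it.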

Kubernetes: 13. DaemonSets

Daemon Sets

DaemonSets are like ReplicaSets and Deployments. While ReplicaSets and Deployments make sure that the number of running pods matches the defined replicas, a DaemonSet makes sure that at least one pod runs on each node of the Kubernetes cluster. If a new node is added to the cluster, the DaemonSet automatically runs a pod on that node. Based on node selector and node affinity rules, you can run DaemonSets only on targeted nodes. Some of the use cases for DaemonSets are monitoring, log collection etc. Say you want to monitor the pods running on the nodes; a DaemonSet is perfect for making sure a monitoring agent runs as a pod on every node. kube-proxy runs as a DaemonSet. A DaemonSet definition file looks very similar to a ReplicaSet definition (a sketch follows below). Before v1.12, Kubernetes didn't have a way to create a pod on each node; the only way was to create a pod definition that set the node name property. From v1.12, Kubernetes uses a combination of node affinity rules along with the daemon se...
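A minimal sketch of a DaemonSet definition, assuming a hypothetical monitoring-agent image; the structure mirrors a ReplicaSet, with kind set to DaemonSet:

daemonset-definition.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
     name: monitoring-daemon
spec:
     selector:
          matchLabels:
               app: monitoring-agent
     template:
          metadata:
               labels:
                    app: monitoring-agent
          spec:
               containers:
               - name: monitoring-agent
                 image: monitoring-agent    # hypothetical image name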

Kubernetes: 12. Resource Requirements & Limits

Scheduler

The Kubernetes scheduler looks at the resource requirements of a pod and schedules it on a node where those resources are available. If all the nodes are exhausted, the scheduler will not schedule the pod; in that case the pod remains in Pending status, which can be seen in the pod events. By default, Kubernetes assumes that a container within a pod requires a minimum of 0.5 CPU and 256Mi of memory. If a pod requires more than this, it can be set in the pod or deployment definition file (a sketch follows below). CPU can be set as 0.5, 0.4, 0.1 or a whole CPU count; 0.1 can also be written as 100m. 1 CPU count means 1 AWS vCPU, 1 GCP core, 1 Azure core or 1 hyperthread. 1 Mi (mebibyte) = 1024 * 1024 bytes = 1024 Ki (kibibytes). Resources are configured at the container level, not the pod level, but since a pod is the unit of deployment, the total resources required by the containers of a pod are expressed using requests and limits.

Specify        Description
Requests       The requests s...
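A minimal sketch of setting requests and limits on a container; the limit values here are assumed examples, not Kubernetes defaults:

pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
     name: myapp-pod
spec:
     containers:
     - name: nginx-container
       image: nginx
       resources:
            requests:
                 cpu: 0.5           # same as 500m
                 memory: 256Mi
            limits:
                 cpu: 1             # assumed example limit
                 memory: 512Mi      # assumed example limit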

Kubernetes: 11. Node Affinity

Scheduler

By default, pods get scheduled based on node availability as seen by the scheduler. There may be cases where one of the nodes has more resources and the pod needs to be scheduled on that node. There are two ways to achieve this: Node Selector and Node Affinity.

Node Affinity

The primary purpose of node affinity is to make sure that pods are hosted on the correct nodes. Assume that during pod creation the affinity rules match and the pod is created; what happens if the node labels are changed after the pod is created? What happens to the pod depends on the nodeAffinity value set. These are: requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution and requiredDuringSchedulingRequiredDuringExecution. The third option does not exist in Kubernetes yet; it is planned for (or may by now have landed in) a future release. Operators can be In, NotIn, Exists (a sketch follows below). For Exists, we don't need to specify any value in the pod definition. This is because affinity rules only check if the key exists, ...
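A minimal sketch of a required node affinity rule, assuming a hypothetical node label size=Large:

pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
     name: myapp-pod
spec:
     affinity:
          nodeAffinity:
               requiredDuringSchedulingIgnoredDuringExecution:
                    nodeSelectorTerms:
                    - matchExpressions:
                      - key: size             # hypothetical label key
                        operator: In          # could also be NotIn or Exists
                        values:
                        - Large
     containers:
     - name: nginx-container
       image: nginx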

Kubernetes: 10. Node Selector

Scheduler

By default, pods get scheduled based on node availability as seen by the scheduler. There may be cases where one of the nodes has more resources and the pod needs to be scheduled on that node. There are two ways to achieve this: Node Selector and Node Affinity.

Node Selector

Update the pod definition file with the node selector label, and the pod will be scheduled on the node matching that label. But first, the node has to be labelled. Within the pod definition, spec.nodeSelector is the property that maps a pod to a node. With a node selector you can only have a simple key-value selection; there are no options for advanced selectors like label-in-certain-values or label-not-in-a-value (a sketch of both steps follows the definition below).

pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
     name: myapp-pod
     labels:
          app: myapp
spec:
     containers:
     - name: nginx-container
       image: nginx
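A minimal sketch of the two steps, assuming a hypothetical node name node-1 and label size=Large. First label the node:

kubectl label nodes node-1 size=Large

Then map the pod to the labelled node in the pod definition:

pod-definition.yaml (fragment)
spec:
     nodeSelector:
          size: Large
     containers:
     - name: nginx-container
       image: nginx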

Kubernetes: 9. Taints & Tolerations

Taints and Tolerations

Taints and tolerations are used to describe which pods can be scheduled on which nodes. They don't force a pod onto a node; they only keep a pod off a node whose taint the pod cannot tolerate. If a pod has to be mandatorily scheduled on a particular node, node affinity has to be used.

Taints

Taints are set on the nodes. By default, none of the pods have any toleration set, so if a node is tainted, then with the default settings none of the pods can be scheduled on it. Note that by default the scheduler never schedules a pod on a master node; this is because a taint is set on the master node. This can be modified if required, but as a best practice only the management software is supposed to run on the master node.

kubectl describe node <node> | grep Taint

Tolerations

Tolerations are set on the pods. How do you set a taint on a node? Use kubectl as described below:

kubectl taint node <node-name> <taint-key-value-pairs>:<taint...
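A minimal sketch of tainting a node and tolerating that taint from a pod, assuming a hypothetical node node-1, key/value app=blue and the NoSchedule effect:

kubectl taint node node-1 app=blue:NoSchedule

pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
     name: myapp-pod
spec:
     containers:
     - name: nginx-container
       image: nginx
     tolerations:
     - key: "app"
       operator: "Equal"
       value: "blue"
       effect: "NoSchedule"

Note that the toleration values are quoted strings and sit under spec, alongside containers.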

Kubernetes: 8. Labels & Selectors

Labels

Labels are a way of grouping objects. While Kubernetes understands the objects it creates, it is easier to identify objects using custom labels. With labels you can group objects by type (Pods, Services, ReplicaSets etc.) or by application. For a pod, labels are defined under the metadata section.

Selectors

Selectors are used to filter objects using the labels defined on them. Using kubectl and a selector, pods can be listed by filtering on the labels attached to them. If a selector has multiple labels, they are combined as a logical AND, which means a pod must match all the labels.

pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
     name: myapp-pod
     labels:
          app: myapp
          location: IN
spec:
     containers:
     - name: nginx-container
       image: nginx

kubectl get pods ...
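A minimal sketch of filtering pods on the labels above, using kubectl's --selector (or the short -l) flag:

kubectl get pods --selector app=myapp
kubectl get pods -l app=myapp,location=IN    # multiple labels are ANDed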