
Posts

Showing posts from October, 2021

Kubernetes: 21. Secrets

Passwords: in web apps we keep a properties file for storing and retrieving the data the application needs, but we never store application passwords, truststore and keystore passwords, etc. there. We might store them in an encrypted format, but storing them as plain text is not the correct way. In Kubernetes we store this sensitive information in Secrets. https://medium.com/avmconsulting-blog/secrets-management-in-kubernetes-378cbf8171d0 Secrets: Secrets are used to store sensitive information. They are similar to ConfigMaps, except that the values are stored in an encoded format. Note that they are only encoded (using base64) but are not encrypted, so Secrets are a safer option for storing sensitive information, but in fact not the safest one. As such, Secret objects should not be checked into source control tools; it is best to store them encrypted at rest in etcd. As with ConfigMaps, we have to create the Secret object first and then inject it into the pods. There are two ways…
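A quick way to see that Secret values are encoded rather than encrypted is to round-trip a value through base64 yourself (the password below is just an illustrative placeholder):

```shell
# Secrets store values base64-encoded, not encrypted: anyone who can
# read the Secret object can decode the value.
printf '%s' 'S3cr3t!' | base64          # encode
printf '%s' 'UzNjcjN0IQ==' | base64 -d  # decode back to the original
```

This is why the post stresses keeping Secret manifests out of source control: the encoding is trivially reversible.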

Kubernetes: 20. ConfigMaps

A Java map is an object that maps keys to values, and each key has to be unique. Environment Variables: environment variables can be added directly into the Pod definition file under the spec.env array, but they are limited to the pod they are added for; for new pods, the environment variables have to be added again. ConfigMaps: ConfigMaps are a way of storing data in key: value pairs. This data is then injected into pods via the definition file. The injected data can appear as environment variables in the pod, or it can be injected as a file that the pod can then use. Create ConfigMaps: as with any other Kubernetes object, there are two ways to create a ConfigMap: imperative and declarative. Note that in the declarative way there is no spec section; we have a data section instead.
config-map
APP_COLOR: Blue
APP_ENV: Prod
config-map-creation-imperative: kubectl create configmap <config-name> --from-literal=<key>=<value>  -> imperative way of creating a ConfigMap…
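A minimal sketch of the declarative form described above, using the APP_COLOR/APP_ENV values from the excerpt (the ConfigMap and pod names are assumptions for illustration):

```yaml
# config-map.yaml: declarative ConfigMap -- note the "data" section, no "spec"
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: Blue
  APP_ENV: Prod
---
# Injecting all keys of the ConfigMap as environment variables into a pod
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config
```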

Kubernetes: 19. Configure Application

Configuring an application consists of: configuring commands and arguments on applications, configuring environment variables, and configuring secrets. Docker Commands: docker run ubuntu  -> runs the ubuntu container and exits; the container CMD is set to [bash], so the container quietly exits. docker run ubuntu echo "Hello World"  -> runs the ubuntu container, prints "Hello World" and quietly exits. To update the default settings, create your own image from the base image; let's call this image ubuntu-sleeper: FROM ubuntu, CMD sleep 5. CMD can also be written in JSON format, like CMD ["sleep", "5"]. Note that with the JSON format the first element should always be the command to execute; for example, it CANNOT be ["sleep 5"]. Build the new ubuntu-sleeper image and run it: docker build -t ubuntu-sleeper .  -> build the image; docker run ubuntu-sleeper  -> run the new image. The new image will launch an ubuntu container, sleep for 5 seconds and quietly ex…
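The ubuntu-sleeper Dockerfile described above can be sketched as below. The ENTRYPOINT/CMD pairing is a common continuation of this pattern (the excerpt is cut off before reaching it), shown here as an assumption about where the lesson is heading:

```dockerfile
FROM ubuntu

# ENTRYPOINT is the fixed command; CMD supplies a default argument that
# can be overridden at run time, e.g. "docker run ubuntu-sleeper 10"
ENTRYPOINT ["sleep"]
CMD ["5"]
```

With plain `CMD ["sleep", "5"]` instead, any argument passed to `docker run` replaces the whole command rather than just the `5`.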

Kubernetes: 18. Rollout and Rollback

Deployment: when a deployment is created, it triggers a rollout, and the rollout creates a new revision (version). Later, when the deployment is updated, a new rollout is created, and that rollout creates one more "new" revision. These revisions help keep track of the changes and roll back if necessary. Deployment Strategy: the first strategy is the recreate strategy: delete all the existing pods and deploy the new, updated pods. But this comes with application downtime. The second (and default) strategy is the rolling update strategy: Kubernetes deletes one pod at a time in the older version and, in its place, creates one pod at a time in the newer version. Update Strategy: updates can be many things, like updating the labels, docker image, replicas, etc. These are updated directly in the deployment file and the changes are applied. When the changes are applied using the kubectl apply command, a new rollout and a new revision are created. Another way to update the image name is to use the kube…
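The rollout lifecycle above maps onto a handful of kubectl commands; these assume a deployment named myapp-deployment with a container named nginx-container (both names are placeholders):

```shell
kubectl apply -f deployment.yaml                      # apply changes: triggers a new rollout/revision
kubectl rollout status deployment/myapp-deployment    # watch the rollout progress
kubectl rollout history deployment/myapp-deployment   # list the recorded revisions
kubectl rollout undo deployment/myapp-deployment      # roll back to the previous revision
kubectl set image deployment/myapp-deployment nginx-container=nginx:1.21  # imperative image update
```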

Kubernetes: 17. Application Logs

Docker Logs: docker run shows the logs on the terminal. docker run, when executed in detached mode ("-d"), will not show the logs on the terminal. To view the logs of a container running in detached mode we use the docker logs command; add the "-f" flag to follow the live logs. Kubernetes Logs: Kubernetes pod logs can be viewed using the kubectl logs command. If there are multiple containers running in the pod, provide the container name as input to the command. Use the "-f" flag to "follow" the logs, similar to docker. (Note that with kubectl create, the "-f" flag instead means create from the file specified in the command.) docker run nginx  -> run a container on the terminal; the logs will be displayed. docker run -d nginx  -> run the container in detached mode; the logs will no longer be displayed. docker logs -f nginx  -> display the logs of the container; -f (follow) shows the live logs. kubectl logs -f <pod-name>  -> shows the logs of the <…

Kubernetes: 16. Monitor Cluster Component

Cluster Components: Kubernetes does not have OOB (out-of-the-box) monitoring for its own cluster components. Node health and node resources (CPU, memory and disk space) are some of the things you want to monitor, as are pod health and pod resources (CPU, memory and disk space). This may change or might have already changed in the latest versions. There are some good open source solutions for monitoring these components and doing analytics on them. Metrics Server: Heapster was one of the original projects to monitor resource consumption; it was later replaced with Metrics Server. Metrics Server is an IN MEMORY solution: it aggregates and stores all the node and pod resource information, so there is no historical data with Metrics Server. The kubelet service is responsible for listening to kube-api server instructions to build the pods. Kubelet also has other responsibilities; one of them is cAdvisor (container advisor). cAdvisor collects the resource information from nodes and po…
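Once Metrics Server is running in the cluster, its aggregated in-memory data can be queried with kubectl top (these commands fail with an error if Metrics Server is not installed):

```shell
kubectl top node   # current CPU/memory usage per node, from Metrics Server
kubectl top pod    # current CPU/memory usage per pod
```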

Kubernetes: 15. Multiple Schedulers

Custom Scheduler: Kubernetes allows you to create custom schedulers. There can be multiple schedulers running at the same time apart from the default scheduler, or a custom scheduler can replace the default kube-scheduler to become the default one. So a few pods that require additional checks, beyond taints and tolerations or node affinity, can go through the custom scheduler before getting scheduled on a node, whereas the rest of the pods go through the default kube-scheduler. Create Custom Scheduler: we can either download the kube-scheduler binary and run it as a service, or alternatively create it as a static pod. Here we are downloading the binaries to run it. The property scheduler-name is used to define the name of the scheduler; if it is not set, it defaults to default-scheduler. For your custom schedulers, update this property to set a custom name for your scheduler. For static pods, the name can be updated directly in the pod-definition file. Use kubectl create -f <pod-de…
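On the workload side, a pod opts into a specific scheduler with spec.schedulerName; a minimal sketch, assuming a custom scheduler registered as my-custom-scheduler:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-custom-scheduler   # omit this to use "default-scheduler"
  containers:
  - name: nginx
    image: nginx
```

If the named scheduler is not running, the pod simply stays in Pending, which is a useful way to verify the wiring.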

Kubernetes: 14. Static Pods

Kubelet Service: the kubelet service is responsible for creating pods on the nodes. The kubelet service gets the request from the kube-api server to create the pod; kube-api in turn gets the request from the scheduler. The kube-api server then gets the data from etcd and sends the request to the kubelet services running on the nodes. Static Pods: the kube-api service is not the only source the kubelet listens to for creating pods. The kubelet service also looks into a specific folder on each node; if it finds a pod-definition YAML file in this folder, it creates the pods based on it. Note that the kubelet only creates pods; if you place other objects like ReplicaSets, Deployments, etc. there, it won't create those. Objects other than pods still have to come through the kube-api service. Pods created by the kubelet looking into this configuration folder are called static pods. They are not requested by the kube-api service, but when you execute kubectl get pods, the output will show the static po…
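The "specific folder" is set in the kubelet's configuration; a sketch of the relevant fragment, assuming the common kubeadm layout (the exact file location and path vary by installation):

```yaml
# Fragment of the KubeletConfiguration (often /var/lib/kubelet/config.yaml).
# Any pod-definition YAML dropped into this directory is created by the
# kubelet directly, without going through the kube-api server.
staticPodPath: /etc/kubernetes/manifests
```

This is also how kubeadm runs the control-plane components themselves: their manifests sit in that directory as static pods.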

Kubernetes: 13. DaemonSets

Daemon Sets: DaemonSets are like ReplicaSets and Deployments. While ReplicaSets and Deployments make sure that a sufficient number of pods are running as defined in the replicas, DaemonSets make sure that at least one pod runs on each node of the Kubernetes cluster. If a new node is added to the cluster, the DaemonSet automatically runs a pod on that node. Based on node selector and node affinity rules, you can run DaemonSets only on targeted nodes. Some of the use cases of DaemonSets are monitoring, log collection, etc. Say you want to monitor the pods running on the nodes; a DaemonSet is perfect to make sure that a monitoring agent runs via a pod on every node. kube-proxy runs as a DaemonSet. A DaemonSet definition file looks very similar to a ReplicaSet definition. Before v1.12, Kubernetes didn't have a dedicated way to place a pod on each node; the only way was to create a pod definition setting the node name property. From v1.12, Kubernetes uses the combination of node affinity rules along with the daemon se…
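A minimal sketch of the monitoring-agent use case mentioned above (the name and image are placeholders); note the definition looks like a ReplicaSet but has no replicas field, since "one per node" is implied:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent   # placeholder image
```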

Kubernetes: 12. Resource Requirements & Limits

Scheduler: the Kubernetes scheduler looks at the resource requirements of the pods and then schedules them on a node where the resources are available. If all the nodes are exhausted, the scheduler will not schedule the pod; in this case the pod remains in Pending status, which can be seen in the pod events. By default Kubernetes assumes that a container within a pod requires (at minimum) 0.5 CPU and 256Mi of memory. If the pod requires more than this, it can be set in the pod or deployment definition file. CPU can be set as 0.5, 0.4, 0.1 or 1 CPU count; 0.1 can also be written as 100m. 1 CPU count means 1 AWS vCPU, 1 GCP core, 1 Azure core, or 1 hyperthread. 1 Mi (pronounced mebibyte) = 1024 * 1024 bytes = 1024 Ki (kibibytes). Resources are configured at the container level, not the pod level; but since a pod is a deployment unit, the total resources required by the containers of a pod are expressed using requests and limits. Specify / Description: Requests: the requests specification i…
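A minimal sketch of requests and limits at the container level, using the units discussed above (pod name and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    resources:
      requests:          # what the scheduler reserves for placement
        cpu: 100m        # 100m = 0.1 CPU
        memory: 256Mi
      limits:            # hard cap the container may not exceed
        cpu: "1"
        memory: 512Mi
```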

Kubernetes: 11. Node Affinity

Scheduler: by default, pods get scheduled based on node availability as seen by the scheduler. There may be cases where one of the nodes has more resources and the pod is required to be scheduled on that node. There are two ways to achieve this: Node Selector and Node Affinity. Node Affinity: the primary purpose of node affinity is to make sure that pods are hosted on the correct nodes. Assume that during pod creation the affinity rules match and the pod is created; what if the node labels are changed after the pod creation? What happens to the pod depends on the nodeAffinity value set. These are: requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution, requiredDuringSchedulingRequiredDuringExecution. The third option does not exist yet in Kubernetes; it may be added in a future release. Operators can be In, NotIn, Exists. For Exists, we don't need to specify any value in the pod definition, because affinity rules only check whether the key exists,…
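A minimal sketch of the first (required-during-scheduling) variant, reusing the size: Large label from the Node Selector post below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In      # also: NotIn, Exists (no values needed for Exists)
            values:
            - Large
  containers:
  - name: nginx-container
    image: nginx
```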

Kubernetes: 10. Node Selector

Scheduler: by default, pods get scheduled based on node availability as seen by the scheduler. There may be cases where one of the nodes has more resources and the pod is required to be scheduled on that node. There are two ways to achieve this: Node Selector and Node Affinity. Node Selector: update the pod definition file with the node selector label, and the pod will be scheduled on the node matching the label. But first, the node has to be labelled. Within the pod definition, spec.nodeSelector is the property that maps a pod to a node. With node selector, you can only have a simple key-value selection; there are no options for advanced selectors like label-in-certain-values or label-not-in-a-value.
pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: nginx-container
    image: nginx
  nodeSelector:
    size: Large
kubectl label node <node-name> <lab…

Kubernetes: 9. Taints & Tolerations

Taints and Tolerations: taints and tolerations are used to describe which pods can be scheduled on which nodes. They don't force a pod onto a particular node; they only keep pods without a matching toleration off a tainted node. If a pod has to be mandatorily scheduled on a particular node, node affinity has to be used. Taints: taints are set on the nodes. By default none of the pods have any toleration set, so if a node is tainted, then with default settings on the pods, none of the pods can be scheduled on the tainted node. Note that by default the scheduler never schedules a pod on a master node; this is because there is a taint set on the master node. This can be modified if required, but as a best practice only management software is supposed to run on the master node. kubectl describe node <node> | grep Taint. Tolerations: tolerations are set on the pods. How to set a taint on a node? Use kubectl as described below: kubectl taint node <node-name> <taint-key-value-pairs>:<taint-eff…
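A minimal sketch of the pod side, assuming the node was tainted with kubectl taint node node01 app=blue:NoSchedule (the key/value pair is a placeholder); the toleration fields must match the taint and are conventionally quoted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"   # other effects: PreferNoSchedule, NoExecute
```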

Kubernetes: 8. Labels & Selectors

Labels: labels are a way of grouping objects. While Kubernetes understands the objects it creates, it is easier to identify objects using custom labels. With labels you can group objects by type (Pods, Services, ReplicaSets, etc.) or by application. For a pod, labels are defined under the metadata section. Selectors: selectors are used to filter objects using the labels defined on them. Using kubectl with a selector, pods can be listed by filtering on the labels attached to them. If a selector has multiple labels, they are understood as a logical AND, which means pods must match all the labels.
pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    location: IN
spec:
  containers:
  - name: nginx-container
    image: nginx
kubectl get pods --selector app=myapp  -> get pods filtered by labels
kubectl get pods --selector app=myapp,tier=frontend  -> get pods filtered by lab…

Kubernetes: 7. Manual Scheduling

As new pod definition files are created, Kubernetes goes through them and looks for the nodeName property. If this property is not set, Kubernetes has the job of scheduling the pod: it looks for the nodes that can host the pod, schedules it there, and updates the pod definition with the nodeName where it is running. The nodeName property in the definition file below is optional. If the property is specified, the scheduler is bypassed and the pod is placed directly on the selected node; this is called manual scheduling. Once Kubernetes identifies the node on which to run the pod, it creates a binding object that binds the pod with the node on which the pod will run.
pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    location: IN
spec:
  nodeName: node01
  containers:
  - name: nginx-container
    image: nginx
  - name: backed-db…
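The binding object mentioned above can also be created by hand to assign an already-running (Pending) pod to a node, since nodeName cannot be edited on an existing pod; a sketch, reusing the names from the definition above:

```yaml
# Mimics what the scheduler does: POSTed to the pod's binding API,
# this assigns the pending pod "myapp-pod" to node01.
apiVersion: v1
kind: Binding
metadata:
  name: myapp-pod
target:
  apiVersion: v1
  kind: Node
  name: node01
```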

Kubernetes: 6. Imperative Commands

Imperative commands are useful for quickly creating resources on Kubernetes. kubectl --dry-run  -> with this option, Kubernetes just validates the definition and does not actually create the resource (newer kubectl versions use --dry-run=client). -o=yaml  -> gets the output in YAML format. kubectl run nginx-pod --image=nginx:alpine  -> create a pod with name nginx-pod and image nginx:alpine. kubectl run httpd --image=httpd:alpine  -> by default, run implies run-a-pod. kubectl run redis --image=redis:alpine --labels=tier=db  -> create a pod with name redis, image redis:alpine and label tier=db. kubectl run custom-nginx --image=nginx --port=8080  -> create a pod with name custom-nginx and image nginx, exposing container port 8080. kubectl expose pod redis --port=6379 --name=redis-service  -> create a service with name redis-service to expose the pod named redis on service port 6379. kubectl create deployment webapp --image=kodekloud/webapp-color --replicas=3  -> create a deployment with name webapp and i…
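Combining --dry-run with -o yaml is the usual trick for generating a manifest skeleton from an imperative command instead of writing it from scratch:

```shell
# Validate and print the manifest without creating anything
kubectl run nginx --image=nginx --dry-run=client -o yaml

# Redirect it to a file, then edit and apply declaratively
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
kubectl create deployment webapp --image=kodekloud/webapp-color --replicas=3 \
  --dry-run=client -o yaml > deployment.yaml
```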

Kubernetes: 5. Services

A service is a stable endpoint to connect to "something": an abstract way to expose an application running on a set of pods as a network service. Services enable communication between various components within and outside of the application. With Kubernetes Services there is no need to configure the application for service discovery. A Kubernetes Service is an object just like a Pod, ReplicaSet, etc. There is always one service running when Kubernetes is installed: the Kubernetes API itself. When a service is created, Kubernetes creates the endpoints (kubectl get endpoints); the endpoints object lists all the pods associated with that service. Headless Service: a headless service is obtained by setting the clusterIP field to None. Since there is no virtual IP address, there is no load balancer either; the DNS service returns the pods' IP addresses as multiple A records. This gives us an easy way to discover all the replicas of a deployment. This is useful for creating stateful services like DB, Elas…
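A minimal sketch of the headless service described above (the service name, selector and port are placeholders for a database workload):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None     # headless: no virtual IP, no load balancing;
                      # DNS returns one A record per matching pod
  selector:
    app: db
  ports:
  - port: 5432
```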