
Kubernetes: 13. DaemonSets

Daemon Sets

  • DaemonSets are similar to ReplicaSets and Deployments
  • While ReplicaSets and Deployments make sure that the number of pods defined in the replicas field are running
  • DaemonSets make sure that exactly one copy of a pod runs on each node of the Kubernetes cluster
  • If a new node is added to the cluster, the DaemonSet automatically runs a pod on that node; when a node is removed, its pod is garbage collected
  • Using node selectors and node affinity rules, you can run DaemonSet pods only on targeted nodes
  • Typical use cases for DaemonSets are monitoring agents, log collectors, etc.
  • Say you want to monitor the pods running on the nodes: a DaemonSet is perfect for making sure that a monitoring agent runs as a pod on every node
  • kube-proxy runs as a DaemonSet
  • A DaemonSet definition file looks very similar to a ReplicaSet definition
  • Before v1.12, Kubernetes didn't have a dedicated way to schedule a pod on each node. The only way was to create a pod definition and set the nodeName property for each node
  • From v1.12, Kubernetes uses the default scheduler together with node affinity rules to schedule DaemonSet pods on the nodes

daemon-set-definition.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: monitoring-daemon-agent

spec:
    selector:
        matchLabels:
            app: monitoring-pod-agent
    template:
        metadata:
            labels:
                app: monitoring-pod-agent
        spec:
            containers:
            - image: omi-agent
              name: omi-agent-container

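As mentioned above, node selectors and node affinity rules can restrict a DaemonSet to targeted nodes. A minimal sketch of how the pod template could be extended, assuming the nodes have been given a hypothetical `monitoring: enabled` label (for example with `kubectl label node <node-name> monitoring=enabled`):

```yaml
# Sketch only: the monitoring=enabled label is an assumption,
# not part of the original definition above.
# Only nodes carrying this label will run the DaemonSet pod.
spec:
    template:
        spec:
            nodeSelector:
                monitoring: enabled
            containers:
            - image: omi-agent
              name: omi-agent-container
```

Without a nodeSelector (or node affinity rule), the DaemonSet targets every schedulable node in the cluster.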

kubectl create -f daemon-set-definition.yaml 
-> Create daemonset using the definition file

kubectl get daemonsets
-> View existing daemon sets

kubectl describe daemonset <daemon-set-name>
-> Get more information about the deployed daemon-set
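Since kube-proxy runs as a DaemonSet, you can see a real DaemonSet on most clusters by looking in the kube-system namespace. This is an illustrative sketch and needs access to a running cluster; exact names can vary by distribution:

```shell
# List DaemonSets in the kube-system namespace;
# on a typical kubeadm cluster, kube-proxy appears here
kubectl get daemonsets -n kube-system

# The DESIRED and CURRENT counts should match the number of nodes
kubectl describe daemonset kube-proxy -n kube-system
```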
