Kubernetes: 2. Replica Set

Replication Controller

  • Replication Controller is a Kubernetes Controller
  • It runs multiple instances of a pod in a cluster, thus providing high availability
  • Replication Controller and Replica Set have the same functionality, but they are different objects
  • Replication Controller is the older way to scale pods
  • The selector is optional; if you don't specify one, it defaults to ".spec.template.metadata.labels"
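As a sketch of that defaulting behaviour (the names below are illustrative, not from this post), a Replication Controller manifest can omit the selector entirely and Kubernetes will default it to the pod template's labels:

```yaml
# Hypothetical manifest: no selector is specified, so Kubernetes
# defaults it to .spec.template.metadata.labels (app: myapp).
apiVersion: v1
kind: ReplicationController
metadata:
    name: myapp-rc-defaulted
spec:
    replicas: 2
    template:
        metadata:
            labels:
                app: myapp
        spec:
            containers:
            - name: nginx-container
              image: nginx
```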

Replica Set
  • A Replica Set is also a Kubernetes Controller
  • A Replica Set keeps the desired number of Pods running, same as a Replication Controller
  • Kubernetes keeps a watch on the number of pods that should be running at any time and automatically creates or deletes pods to match that number
  • What differentiates a Replica Set from a Replication Controller is the selector with set-based label requirements.
    • Set-based requirements allow filtering keys according to a set of values.
    • Three kinds of operators are supported:
      • in
      • notin
      • exists
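As a sketch (the label keys and values here are made up for illustration), a set-based selector in a ReplicaSet spec can combine all three kinds of requirements. Note that in matchExpressions the operator names are capitalized (In, NotIn, Exists), while kubectl's `-l` query syntax uses the lowercase forms (in, notin, exists):

```yaml
# Illustrative selector only; the keys (tier, environment, track)
# and their values are hypothetical.
selector:
    matchExpressions:
    - {key: tier, operator: In, values: [frontend, backend]}   # tier is one of these values
    - {key: environment, operator: NotIn, values: [dev]}       # environment is anything but dev
    - {key: track, operator: Exists}                           # the track label is set, any value
```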
Updating Replica Set
  • Use kubectl edit rs <rs-name> and modify the replica set
  • After editing, delete the existing pods (for example with kubectl delete pods -l <label>), so that the replica set creates new pods from the updated template
  • Note that the replica set does not propagate template changes to pods that already exist
  • In contrast, with kubectl edit pod <pod-name> the changes are reflected in that Pod immediately (only a few pod fields, such as the container image, are mutable)
  • Similarly, changes to the replicaSet are stored in its definition but not pushed into the pods it already created; the replica count is the exception, since applying it only requires adding or removing pods, which happens instantly
  • Alternatively, delete the replicaSet and re-create it with the updated configuration
  • Deleting a replica-set automatically deletes the pods underlying it, so new pods are created when the replicaSet is re-created

Scale the Replica Set
  • Use kubectl edit rs <rs-name> and change the replicas field
  • The change takes effect immediately, because the replica set now reconciles towards the new desired number of replicas
  • It will add or delete pods automatically to reach that number
  • Alternatively use kubectl scale --replicas=<no-of-replicas> replicaset <replica-set-name>

replication-controller-definition.yaml
apiVersion: v1
kind: ReplicationController
metadata:
    name: myapp-rc
    labels:
        app: myapp
        type: front-end

spec:
    replicas: 3
    template:
        metadata:
            name: myapp-pod
            labels:
                app: myapp
                type: front-end
        spec:
            containers:
            - name: nginx-container
              image: nginx


kubectl create -f replication-controller-definition.yaml

kubectl get replicationcontroller


replica-set-definition.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
    name: myapp-rs
    labels:
        app: myapp
        type: front-end

spec:
    replicas: 3
    selector:
        matchLabels:
            type: front-end
    template:
        metadata:
            name: myapp-pod
            labels:
                app: myapp
                type: front-end
        spec:
            containers:
            - name: nginx-container
              image: nginx

kubectl create -f replica-set-definition.yaml

kubectl get replicaset

kubectl delete replicaset myapp-rs
-> Deletes the underlying pods as well

kubectl replace -f replica-set-definition.yaml
-> Manually update the yaml definition file and run the REPLACE command

kubectl scale --replicas=6 -f replica-set-definition.yaml
-> Scaling via the file name will not update the YAML definition file itself

kubectl scale --replicas=6 replicaset myapp-rs


selector:
    matchExpressions:
    - {key: tier, operator: In, values: [frontend1]}
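For context, a selector like the one above sits in a ReplicaSet spec in place of matchLabels. A hedged sketch (names are illustrative); note that the selector must still match the pod template's labels, otherwise the API server rejects the ReplicaSet:

```yaml
# Illustrative ReplicaSet using a set-based selector instead of matchLabels.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
    name: myapp-rs-set
spec:
    replicas: 3
    selector:
        matchExpressions:
        - {key: tier, operator: In, values: [frontend1]}
    template:
        metadata:
            labels:
                tier: frontend1
        spec:
            containers:
            - name: nginx-container
              image: nginx
```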
