Author

Rajesh Kumar

DevOps@RajeshKumar.xyz

Managing State With Deployments

Exercise 7.1: Working with ReplicaSets

Overview

Understanding and managing the state of containers is a core Kubernetes task. In this lab we will first explore the API objects used to manage groups of containers. The objects available have changed as Kubernetes has matured, so the Kubernetes version in use will determine which are available. Our first object will be a ReplicaSet, which does not include the newer management features found with Deployments. A Deployment will manage ReplicaSets for you. We will also work with another object called a DaemonSet, which ensures a container is running on each newly added node.

Then we will update the software in a container, view the revision history, and roll-back to a previous version.

Working with ReplicaSets

A ReplicaSet is the next-generation Replication Controller, differing only in the selectors it supports. The only reason to use a ReplicaSet directly anymore is if you have no need to update container software, or you require update orchestration which won't work with the typical Deployment process.
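The selector difference mentioned above can be sketched as follows: a ReplicaSet supports set-based matchExpressions in addition to the equality-based matchLabels a Replication Controller offers. A fragment for illustration (field names are from the ReplicaSet API; the label value matches this lab's example):

```yaml
selector:
  matchLabels:
    system: ReplicaOne        # equality-based, as with a Replication Controller
  matchExpressions:           # set-based, ReplicaSet only
  - key: system
    operator: In
    values:
    - ReplicaOne
```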

  1. View any current ReplicaSets. If you deleted resources at the end of a previous lab, you should have none reported in the default namespace.
    	student@lfs458-node-1a0a:~$ kubectl get rs
    	No resources found
    
    
  2. Create a YAML file for a simple ReplicaSet. The apiVersion setting depends on the version of Kubernetes you are using. Versions 1.8 and beyond will use apps/v1beta1, then perhaps someday apps/v1beta2 and then probably a stable apps/v1. We will use an older version of nginx, then update to a newer version later in the exercise.
    	student@lfs458-node-1a0a:~$ vim rs.yaml
    	apiVersion: extensions/v1beta1
    	kind: ReplicaSet
    	metadata:
    	  name: rs-one
    	spec:
    	  replicas: 2
    	  template:
    	    metadata:
    	      labels:
    	        system: ReplicaOne
    	    spec:
    	      containers:
    	      - name: nginx
    	        image: nginx:1.7.9
    	        ports:
    	        - containerPort: 80
    
    
    
  3. Create the ReplicaSet:
    	student@lfs458-node-1a0a:~$ kubectl create -f rs.yaml
    	replicaset.extensions/rs-one created
    
  4. View the newly created ReplicaSet:
    	student@lfs458-node-1a0a:~$ kubectl describe rs rs-one
    	Name: 			rs-one
    	Namespace:      default
    	Selector:       system=ReplicaOne
    	Labels:         system=ReplicaOne
    	Annotations:    <none>
    	Replicas:       2 current / 2 desired
    	Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
    	Pod Template:
    	Labels:         system=ReplicaOne
    	Containers:
    	nginx:
    	Image:          nginx:1.7.9
    	Port:           80/TCP
    	Environment:    <none>
    	Mounts:         <none>
    	Volumes:        <none>
    	Events:         <none>
    	
    
  5. View the Pods created with the ReplicaSet. From the YAML file created there should be two Pods. You may see a Completed busybox, which will be cleared out eventually.
    	student@lfs458-node-1a0a:~$ kubectl get pods
    	NAME 			READY 	STATUS RESTARTS AGE
    	rs-one-2p9x4 	1/1 	Running 0 	    5m4s
    	rs-one-3c6pb 	1/1 	Running 0 	    5m4s
    
    	
    
  6. Now we will delete the ReplicaSet, but not the Pods it controls.
    	student@lfs458-node-1a0a:~$ kubectl delete rs rs-one --cascade=false
    	replicaset.extensions "rs-one" deleted
    
    	
    
  7. View the ReplicaSet and Pods again:
    	student@lfs458-node-1a0a:~$ kubectl describe rs rs-one
    	Error from server (NotFound): replicasets.extensions "rs-one" not found
    	
    	student@lfs458-node-1a0a:~$ kubectl get pods
    	
    	NAME 			READY 	STATUS RESTARTS AGE
    	rs-one-2p9x4 	1/1 	Running 0 		7m
    	rs-one-3c6pb 	1/1 	Running 0 		7m
    	
    
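One way to see what --cascade=false did is to inspect the ownerReferences of a surviving Pod; a sketch using a Pod name from the output above (with the ReplicaSet deleted, the field is absent, so the output is empty; after step 8 it would show ReplicaSet again):

```shell
# Print the kind of each owner of the Pod; empty while the Pod is orphaned.
kubectl get pod rs-one-2p9x4 \
    -o jsonpath='{.metadata.ownerReferences[*].kind}'
```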
  8. Create the ReplicaSet again. As long as we do not change the selector field, the new ReplicaSet should take ownership. Pod software versions cannot be updated this way.
    	student@lfs458-node-1a0a:~$ kubectl create -f rs.yaml
    	replicaset.extensions/rs-one created
    	
    
  9. View the age of the ReplicaSet and then the Pods within:
    	student@lfs458-node-1a0a:~$ kubectl get rs
    	NAME 	DESIRED CURRENT READY AGE
    	rs-one 	2 		2 		2 	  46s
    	
    	student@lfs458-node-1a0a:~$ kubectl get pods
    	NAME 		READY 	STATUS RESTARTS AGE
    	rs-one-2p9x4 1/1 	Running 0 		8m
    	rs-one-3c6pb 1/1 	Running 0 		8m
    
    	
    
  10. We will now isolate a Pod from its ReplicaSet. Begin by editing the label of a Pod. We will change the system: parameter to be IsolatedPod.
    	student@lfs458-node-1a0a:~$ kubectl edit po rs-one-3c6pb
    	....
    	labels:
    	  system: IsolatedPod # <-- Change from ReplicaOne
    	name: rs-one-3c6pb
    	....
    
    	
    
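The same label change can be made non-interactively with kubectl label instead of an editor; --overwrite is required because the system key already exists on the Pod (a sketch using the Pod name from above):

```shell
# Relabel the Pod in one step; --overwrite replaces the existing value.
kubectl label pod rs-one-3c6pb system=IsolatedPod --overwrite
```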
  11. View the number of pods within the ReplicaSet. You should see two running.
    	student@lfs458-node-1a0a:~$ kubectl get rs
    	NAME 	DESIRED CURRENT READY AGE
    	rs-one 	2 		2 		2 	  4m
    	
    
  12. Now view the pods with the label key of system. You should note that there are three, with one being newer than the others. The ReplicaSet made sure to keep two replicas, replacing the Pod which was isolated.
    	student@lfs458-node-1a0a:~$ kubectl get po -L system
    	NAME 		READY 	STATUS RESTARTS AGE SYSTEM
    	rs-one-3c6pb 1/1 	Running 0 		10m IsolatedPod
    	rs-one-2p9x4 1/1 	Running 0 		10m ReplicaOne
    	rs-one-dq5xd 1/1 	Running 0 		30s ReplicaOne
    	
    
  13. Delete the ReplicaSet, then view any remaining Pods.
    	student@lfs458-node-1a0a:~$ kubectl delete rs rs-one
    	replicaset.extensions "rs-one" deleted
    	
    	student@lfs458-node-1a0a:~$ kubectl get po
    	NAME 		READY STATUS 	RESTARTS AGE
    	rs-one-3c6pb 1/1  Running 		0 	14m
    	rs-one-dq5xd 0/1  Terminating 	0 	4m
    	
    
  14. In the above example the Pods had not finished termination. Wait for a bit and check again. There should be no ReplicaSets, but one Pod.
    	student@lfs458-node-1a0a:~$ kubectl get rs
    	No resources found.
    	
    	student@lfs458-node-1a0a:~$ kubectl get po
    	NAME 		READY STATUS RESTARTS AGE
    	rs-one-3c6pb 1/1  Running 0 	  16m
    	
    
  15. Delete the remaining Pod using the label.
    	student@lfs458-node-1a0a:~$ kubectl delete po -l system=IsolatedPod
    	pod "rs-one-3c6pb" deleted
    	
    

Exercise 7.2: Working with DaemonSets

A DaemonSet is a watch loop object, like the Deployment we have been working with in the rest of the labs. The DaemonSet ensures that when a node is added to a cluster a Pod will be created on that node. A Deployment would only ensure a particular number of Pods is created in general; several could be on a single node. Using a DaemonSet can be helpful to ensure applications are on each node, useful for things like metrics and logging, especially in large clusters where hardware may be swapped out often. Should a node be removed from a cluster, the DaemonSet would ensure the Pods are garbage collected before removal. Starting with Kubernetes v1.12 the scheduler handles DaemonSet deployment, which means we can now configure certain nodes not to run a particular DaemonSet's Pods.

This extra step of automation can be useful with products like Ceph, where storage is often added or removed, but perhaps only among a subset of hardware. DaemonSets allow for complex deployments when used with declared resources like memory, CPU or volumes.
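Limiting a DaemonSet to a subset of nodes, as described above, is typically done with a nodeSelector in the Pod template. A hypothetical fragment (the ssd label and its value are assumptions for illustration; a matching label would have to be applied to the chosen nodes with kubectl label node):

```yaml
spec:
  template:
    spec:
      nodeSelector:     # only nodes carrying this label run the DaemonSet's Pod
        ssd: "true"     # hypothetical label, e.g. kubectl label node <node> ssd=true
```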

  1. We begin by creating a YAML file. In this case the kind would be set to DaemonSet. For ease of use we will copy the previously created rs.yaml file and make a couple of edits. Remove the replicas: 2 line.
    	student@lfs458-node-1a0a:~$ cp rs.yaml ds.yaml
    	student@lfs458-node-1a0a:~$ vim ds.yaml
    	....
    	kind: DaemonSet
    	....
    		name: ds-one
    	....
    	replicas: 2 #<<<----Remove this line
    	....
    		system: DaemonSetOne
    	....
    
    
    	
    
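After the edits above, the resulting ds.yaml should look roughly like this (reconstructed from rs.yaml plus the changes listed; your file may differ slightly):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ds-one
spec:
  template:
    metadata:
      labels:
        system: DaemonSetOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```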
  2. Create and verify the newly formed DaemonSet. There should be one Pod per node in the cluster.
    	student@lfs458-node-1a0a:~$ kubectl create -f ds.yaml
    	daemonset.extensions/ds-one created
    	
    	student@lfs458-node-1a0a:~$ kubectl get ds
    	NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE-SELECTOR AGE
    	ds-one 2 		2 		2 	2 			2 		<none> 		  1m
    	
    	student@lfs458-node-1a0a:~$ kubectl get po
    	NAME 		READY 	STATUS RESTARTS AGE
    	ds-one-b1dcv 1/1 	Running 0 		2m
    	ds-one-z31r4 1/1 	Running 0 		2m
    
    	
    
  3. Verify the image running inside the Pods. We will use this information in the next section.
    	student@lfs458-node-1a0a:~$ kubectl describe po ds-one-b1dcv | grep Image:
    	Image: 			nginx:1.7.9
    
    	
    

Exercise 7.3: Rolling Updates and Rollbacks

One of the advantages of micro-services is the ability to replace and upgrade a container while continuing to respond to client requests. We will use the default OnDelete setting, which upgrades a container when its predecessor is deleted, then use the RollingUpdate feature as well.
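Both strategies are configured under spec.updateStrategy in the DaemonSet. A fragment showing the RollingUpdate form (maxUnavailable is optional and defaults to 1):

```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # how many Pods may be down at once during the update
```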

  1. Begin by viewing the current updateStrategy setting for the DaemonSet created in the previous section.
    	student@lfs458-node-1a0a:~$ kubectl get ds ds-one -o yaml \
    			| grep -A 1 Strategy
    		updateStrategy:
    			type: OnDelete
    
    
    	
    
  2. Update the DaemonSet to use a newer version of the nginx server. This time use the set command instead of edit. Set the version to be 1.8.1-alpine.
    	student@lfs458-node-1a0a:~$ kubectl set image ds ds-one nginx=nginx:1.8.1-alpine
    	daemonset.extensions/ds-one image updated
    
    	
    
  3. Verify that the Image: parameter for the Pod checked in the previous section is unchanged.
    	student@lfs458-node-1a0a:~$ kubectl describe po ds-one-b1dcv |grep Image:
    	Image: nginx:1.7.9
    
    
    
    	
    
  4. Delete the Pod. Wait until the replacement Pod is running and check the version.
    	student@lfs458-node-1a0a:~$ kubectl delete po ds-one-b1dcv
    	pod "ds-one-b1dcv" deleted
    	
    	student@lfs458-node-1a0a:~$ kubectl get po
    	NAME 		READY 	STATUS RESTARTS AGE
    	ds-one-xc86w 1/1 	Running 0 		19s
    	ds-one-z31r4 1/1 Running 	0 		4m8s
    	
    	student@lfs458-node-1a0a:~$ kubectl describe po ds-one-xc86w |grep Image:
    	Image: 			nginx:1.8.1-alpine
    
    
    	
    
  5. View the image running on the older Pod. It should still show version 1.7.9.
    	student@lfs458-node-1a0a:~$ kubectl describe po ds-one-z31r4 |grep Image:
    	Image: nginx:1.7.9
    
    	
    
  6. View the history of changes for the DaemonSet. You should see two revisions listed. The number of revisions kept is set in the DaemonSet; with v1.12.1 the default number of revisions kept has increased from two to ten.
    	student@lfs458-node-1a0a:~$ kubectl rollout history ds ds-one
    	daemonsets "ds-one"
    	REVISION CHANGE-CAUSE
    	1 		 <none>
    	2 		 <none>
    	
    
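The CHANGE-CAUSE column reads <none> because nothing recorded a reason for the change. One way to populate it, offered as a sketch, is the kubernetes.io/change-cause annotation (passing --record to the modifying command has a similar effect):

```shell
# Record a human-readable cause for the current revision of the DaemonSet.
kubectl annotate ds ds-one kubernetes.io/change-cause="update nginx to 1.8.1-alpine"
```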
  7. View the settings for the various versions of the DaemonSet. The Image: line should be the only difference between the two outputs.
    	student@lfs458-node-1a0a:~$ kubectl rollout history ds ds-one --revision=1
    	daemonsets "ds-one" with revision #1
    	Pod Template:
    		Labels:      	system=DaemonSetOne
    		Containers:
    		nginx:
    			Image:   	nginx:1.7.9
    			Port:        80/TCP
    			Environment: <none>
    			Mounts: <none>
    			Volumes: <none>
    			
    	student@lfs458-node-1a0a:~$ kubectl rollout history ds ds-one --revision=2
    	....
    		Image: nginx:1.8.1-alpine
    	.....
    
    
    	
    
  8. Use kubectl rollout undo to change the DaemonSet back to an earlier version. As we are still using the OnDelete strategy there should be no change to the Pods.
    	student@lfs458-node-1a0a:~$ kubectl rollout undo ds ds-one --to-revision=1
    	daemonset.extensions/ds-one rolled back
    	
    	student@lfs458-node-1a0a:~$ kubectl describe po ds-one-xc86w |grep Image:
    	Image: 		nginx:1.8.1-alpine
    
    
    
    	
    
  9. Delete the Pod, wait for the replacement to spawn, then check the image version again.
    	student@lfs458-node-1a0a:~$ kubectl delete po ds-one-xc86w
    	pod "ds-one-xc86w" deleted
    	
    	student@lfs458-node-1a0a:~$ kubectl get po
    	NAME 		READY STATUS 		RESTARTS AGE
    	ds-one-qc72k 1/1  Running 		0 		 10s
    	ds-one-xc86w 0/1  Terminating 	0 		 12m
    	ds-one-z31r4 1/1  Running 		0 		 28m
    	
    	student@lfs458-node-1a0a:~$ kubectl describe po ds-one-qc72k |grep Image:
    	Image: nginx:1.7.9
    
    	
    
  10. View the details of the DaemonSet. The Image should be v1.7.9 in the output.
    	student@lfs458-node-1a0a:~$ kubectl describe ds |grep Image:
    	Image: nginx:1.7.9
    
    
    	
    
  11. View the current configuration for the DaemonSet in YAML output. Look for the update strategy near the end of the output.
    	student@lfs458-node-1a0a:~$ kubectl get ds ds-one -o yaml
    	apiVersion: extensions/v1beta1
    	kind: DaemonSet
    	.....
    			terminationGracePeriodSeconds: 30
    		templateGeneration: 3
    		updateStrategy:
    			type: OnDelete
    	status:
    		currentNumberScheduled: 2
    	.....
    
    
    	
    
  12. Create a new DaemonSet, this time setting the update policy to RollingUpdate. Begin by generating a new config file.
    	student@lfs458-node-1a0a:~$ kubectl get ds ds-one -o yaml --export > ds2.yaml
    
    	
    
  13. Edit the file. Change the name, around line eight, and the update strategy, around line 38.
    	student@lfs458-node-1a0a:~$ vim ds2.yaml
    	....
    		name: ds-two
    	....
    		type: RollingUpdate
    
    	
    
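For repeatability, the same two edits can be sketched as sed substitutions. The printf line below is a stand-in for the relevant lines of ds2.yaml; in the lab you would run the substitutions against the real file instead:

```shell
# Stand-in input for ds2.yaml, piped through the two edits:
# rename the DaemonSet and switch the update strategy.
printf 'metadata:\n  name: ds-one\nupdateStrategy:\n  type: OnDelete\n' |
sed -e 's/name: ds-one/name: ds-two/' \
    -e 's/type: OnDelete/type: RollingUpdate/'
```

The output contains `name: ds-two` and `type: RollingUpdate`, matching the manual edits above.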
  14. Create the new DaemonSet and verify the nginx version in the new pods.
    	student@lfs458-node-1a0a:~$ kubectl create -f ds2.yaml
    	daemonset.extensions/ds-two created
    	
    	student@lfs458-node-1a0a:~$ kubectl get po
    	NAME 		READY STATUS RESTARTS AGE
    	ds-one-qc72k 1/1  Running 0       28m
    	ds-one-z31r4 1/1  Running 0       57m
    	ds-two-10khc 1/1  Running 0       5m
    	ds-two-kzp9g 1/1  Running 0       5m
    	
    	student@lfs458-node-1a0a:~$ kubectl describe po ds-two-10khc |grep Image:
    	Image: nginx:1.7.9
    
    
    
    	
    
  15. Edit the DaemonSet and set the image to a newer version, such as 1.8.1-alpine.
    	student@lfs458-node-1a0a:~$ kubectl edit ds ds-two
    	....
    		- image: nginx:1.8.1-alpine
    	.....
    
    
    
    	
    
  16. View the age of the DaemonSet. It should be around ten minutes old, depending on how fast you type.
    	student@lfs458-node-1a0a:~$ kubectl get ds ds-two
    	NAME 	DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE-SELECTOR AGE
    	ds-two 	2 		2 		2 	  2 			2 		<none> 		 10m
    
    	
    
  17. Now view the age of the Pods. Two should be much younger than the DaemonSet. They are also a few seconds apart due to the nature of the rolling update, where one Pod, then the other, was terminated and recreated.
    	student@lfs458-node-1a0a:~$ kubectl get po
    	NAME 		READY STATUS RESTARTS AGE
    	ds-one-qc72k 1/1 Running 0 		  36m
    	ds-one-z31r4 1/1 Running 0 		  1h
    	ds-two-2p8vz 1/1 Running 0 		  34s
    	ds-two-8lx7k 1/1 Running 0 		  32s
    
    
    	
    
  18. Verify the Pods are using the new version of the software.
    	student@lfs458-node-1a0a:~$ kubectl describe po ds-two-8lx7k |grep Image:
    	Image:     nginx:1.8.1-alpine
    
    	
    
  19. View the rollout status and the history of the DaemonSet.
    	student@lfs458-node-1a0a:~$ kubectl rollout status ds ds-two
    	daemon set "ds-two" successfully rolled out
    	student@lfs458-node-1a0a:~$ kubectl rollout history ds ds-two
    	daemonsets "ds-two"
    	REVISION CHANGE-CAUSE
    	1 		 <none>
    	2 		 <none>
    
    
    	
    
  20. View the changes in the update. They should look the same as the previous history, but they did not require the Pods to be deleted for the update to take place.
    	student@lfs458-node-1a0a:~$ kubectl rollout history ds ds-two --revision=2
    	...
    		Image: nginx:1.8.1-alpine
    
    	
    
  21. Clean up the system by removing one of the DaemonSets. We will leave the other running.
    	student@lfs458-node-1a0a:~$ kubectl delete ds ds-two
    	daemonset.extensions "ds-two" deleted
    
    	
    
