Author

Rajesh Kumar

DevOps@RajeshKumar.xyz

Contents


API Objects

Exercise 6.1: RESTful API Access

Overview

We will continue to explore ways of accessing the control plane of our cluster. As we will discuss in the security chapter, there are several authentication methods, one of which is the use of a Bearer token. We will work with one, then deploy a local proxy server for application-level access to the Kubernetes API.

RESTful API Access

We will use the curl command to make API requests to the cluster, in an insecure manner. Once we know the IP address, the port, and the token, we can retrieve cluster data in a RESTful manner. By default most of the information is restricted, but changes to authorization policy could allow more access.
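
Each request we build in the following steps has roughly the same shape. As a sketch only (the <server>, <port>, and path values are placeholders you will discover in the steps below):

    	curl https://<server>:<port>/<api-path> \
    		--header "Authorization: Bearer $token" -k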

  1. First we need to know the IP and port of a node running a replica of the API server. The master system will typically have one running. Use kubectl config view to get overall cluster configuration, and find the server entry. This will give us both the IP and the port.
    	student@lfs458-node-1a0a:~$ kubectl config view
    	apiVersion: v1
    	clusters:
    	- cluster:
    			certificate-authority-data: REDACTED
    			server: https://10.128.0.3:6443
    		name: kubernetes
    	<output_omitted>
    
    
  2. Next we need to find the bearer token. This is part of the default ServiceAccount token secret. Look at a list of secrets, first across the whole cluster, then just those in the default namespace. There will be a secret for each of the controllers of the cluster.
    	student@lfs458-node-1a0a:~$ kubectl get secrets --all-namespaces
    	NAMESPACE    NAME                      TYPE ...
    	default     default-token-jdqp7 kubernetes.io/service-account-token...
    	kube-public default-token-b2prn kubernetes.io/service-account-token...
    	kube-system attachdetach-controller-token-ckwvh kubernetes.io/servic...
    	kube-system bootstrap-signer-token-wpx66 kubernetes.io/service-accou...
    	<output_omitted>
    	
    	student@lfs458-node-1a0a:~$ kubectl get secrets
    	NAME                TYPE                              DATA AGE
    	default-token-jdqp7 kubernetes.io/service-account-token 3  2d
    
    
  3. Look at the details of the secret. We will need the token: information from the output.
    	student@lfs458-node-1a0a:~$ kubectl describe secret default-token-jdqp7
    	Name:         default-token-jdqp7
    	Namespace:    default
    	Labels:       <none>
    	<output_omitted>
    	token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVz
    	L3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3Bh
    	Y2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubm
    	<output_omitted>
    
  4. Using your mouse to copy and paste, or using cut or awk to extract the data, save the token (from the first character, eyJh, to the last, EFmBWA) to a variable named token. Your token data will be different.
    	student@lfs458-node-1a0a:~$ export token=$(kubectl describe \
    			secret default-token-jdqp7 |grep ^token |cut -f7 -d ' ')
    	
    
  5. Test to see if you can get basic API information from your cluster. We will pass it the server name and port, and the token, and use the -k option to skip TLS certificate verification.
    	student@lfs458-node-1a0a:~$ curl https://10.128.0.3:6443/apis \
    			--header "Authorization: Bearer $token" -k
    	{
    		"kind": "APIVersions",
    		"versions": [
    		"v1"
    		],
    		"serverAddressByClientCIDRs": [
    			{
    				"clientCIDR": "0.0.0.0/0",
    				"serverAddress": "10.128.0.3:6443"
    			}
    		]
    	}
    	<output_omitted>
    	
    
  6. Try the same command, but look at API v1.
    	student@lfs458-node-1a0a:~$ curl https://10.128.0.3:6443/api/v1 \
    				--header "Authorization: Bearer $token" -k
    	<output_omitted>
    
    	
    
  7. Now try to get a list of namespaces. This should return an error. It shows that our request is being seen as coming from system:serviceaccount, which does not have the RBAC authorization to list all namespaces in the cluster.
    	student@lfs458-node-1a0a:~$ curl \
    			https://10.128.0.3:6443/api/v1/namespaces \
    			--header "Authorization: Bearer $token" -k
    	<output_omitted>
    			"message": "namespaces is forbidden: User \"system:serviceaccount:default...
    	<output_omitted>
    	
    
  8. Pods can also make use of included certificates to use the API. The certificates are automatically made available to a pod under the /var/run/secrets/kubernetes.io/serviceaccount/ directory. We will deploy a simple Pod and view the resources. If you view the token file you will find it contains the same value we put into the $token variable. The -i and -t options request an interactive terminal session in the busybox container. Once you exit, the container will not restart and the pod will show as Completed. A sketch of using these mounted credentials from inside a pod follows the command output.
    	student@lfs458-node-1a0a:~$ kubectl run -i -t busybox --image=busybox \
    			--restart=Never
    	# ls /var/run/secrets/kubernetes.io/serviceaccount/
    	ca.crt namespace token
    	/ # exit
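
    The same mounted credentials can be used to reach the API from inside a pod. A minimal sketch, assuming an image that includes curl (the busybox image above does not; curlimages/curl is one hypothetical choice):

    	# run inside a container that has curl; the API server is reachable
    	# through the kubernetes.default.svc service
    	TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    	curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    		--header "Authorization: Bearer $TOKEN" \
    		https://kubernetes.default.svc/api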
    
    	
    

Exercise 6.2: Using the Proxy

Another way to interact with the API is via a proxy. The proxy can be run from a node or from within a Pod through the use of a sidecar, as sketched below. In the following steps we will deploy a proxy listening on the loopback address, and use curl to access the API server. If the curl request works through the proxy but not from outside the cluster, we have narrowed down the issue to authentication and authorization rather than issues further along the API ingestion process.
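
To illustrate the sidecar idea mentioned above, here is a minimal sketch of a Pod that runs the proxy next to an application container. It is not used in this exercise, and it assumes an image that provides the kubectl binary (bitnami/kubectl is one hypothetical choice):

    	apiVersion: v1
    	kind: Pod
    	metadata:
    	  name: app-with-proxy
    	spec:
    	  containers:
    	  - name: app                  # the main application container
    	    image: busybox
    	    command: ["/bin/sleep", "3600"]
    	  - name: proxy                # sidecar exposing the API on localhost:8001
    	    image: bitnami/kubectl     # assumption: any image with kubectl would work
    	    command: ["kubectl", "proxy", "--port=8001"]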

  1. The proxy runs in the foreground by default, and there are several options you could pass. Begin by reviewing the help output.
    	student@lfs458-node-1a0a:~$ kubectl proxy -h
    	Creates a proxy server or application-level gateway between localhost
    	and the Kubernetes API Server. It also allows serving static content
    	over specified HTTP path. All incoming data enters through one port
    	and gets forwarded to the remote kubernetes API Server port, except
    	for the path matching the static content path.
    	Examples:
    	# To proxy all of the kubernetes api and nothing else, use:
    	$ kubectl proxy --api-prefix=/
    	<output_omitted>
    
    	
    
  2. Start the proxy while setting the API prefix, and put it in the background. You may need to press Enter to see the prompt again. We will stop the background proxy at the end of this exercise.
    	student@lfs458-node-1a0a:~$ kubectl proxy --api-prefix=/ &
    	[1] 22500
    	Starting to serve on 127.0.0.1:8001
    
    	
    
  3. Now use the same curl command, but point toward the IP and port shown by the proxy. The output should be the same as without the proxy, but may be formatted differently.
    	student@lfs458-node-1a0a:~$ curl http://127.0.0.1:8001/api/
    	<output_omitted>
    
    
    	
    
  4. Make an API call to retrieve the namespaces. The command did not work in the previous section due to permissions, but should work now as the proxy is making the request on your behalf.
    	student@lfs458-node-1a0a:~$ curl http://127.0.0.1:8001/api/v1/namespaces
    	{
    		"kind": "NamespaceList",
    		"apiVersion": "v1",
    		"metadata": {
    			"selfLink": "/api/v1/namespaces",
    			"resourceVersion": "86902"
    	<output_omitted>
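
    When you are done with this exercise you can stop the background proxy. A quick sketch (the job number may differ on your system):

    	student@lfs458-node-1a0a:~$ jobs       #<-- list background jobs; the proxy was [1]
    	student@lfs458-node-1a0a:~$ kill %1    #<-- stop the kubectl proxy started earlier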
    
    
    	
    

Exercise 6.3: Working with Jobs

While most API objects are deployed such that they continue to be available, there are some we may want to run a particular number of times, called a Job, and others on a regular basis, called a CronJob.
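
Depending on your kubectl version, a similar one-off Job can also be created imperatively rather than from a YAML file. A quick sketch, not used in the steps below:

    	student@lfs458-node-1a0a:~$ kubectl create job sleepy --image=busybox -- /bin/sleep 3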

Create A Job

  1. Create a job which will run a container that sleeps for three seconds then stops.
    	student@lfs458-node-1a0a:~$ vim job.yaml
    	apiVersion: batch/v1
    	kind: Job
    	metadata:
    	  name: sleepy
    	spec:
    	  template:
    	    spec:
    	      containers:
    	      - name: resting
    	        image: busybox
    	        command: ["/bin/sleep"]
    	        args: ["3"]
    	      restartPolicy: Never
    
    
    	
    
  2. Create the job, then verify and view the details. The example shows checking the job three seconds in and then again after it has completed. You may see different output depending on how fast you type.
    	student@lfs458-node-1a0a:~$ kubectl create -f job.yaml
    	job.batch/sleepy created
    	
    	student@lfs458-node-1a0a:~$ kubectl get job
    	NAME COMPLETIONS DURATION AGE
    	sleepy 0/1 3s 3s
    	
    	student@lfs458-node-1a0a:~$ kubectl describe jobs.batch sleepy
    	Name:          sleepy
    	Namespace:     default
    	Selector:      controller-uid=24c91245-d0fb-11e8-947a-42010a800002
    	Labels:        controller-uid=24c91245-d0fb-11e8-947a-42010a800002
    			       job-name=sleepy
    	Annotations:   <none>
    	Parallelism:   1
    	Completions:   1
    	Start Time:    Tue, 16 Oct 2018 04:22:50 +0000
    	Completed At:  Tue, 16 Oct 2018 04:22:55 +0000
    	Duration:      5s
    	Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
    	<output_omitted>
    	
    	student@lfs458-node-1a0a:~$ kubectl get job
    	NAME   COMPLETIONS DURATION AGE
    	sleepy 1/1 		   5s       17s
    
    
    	
    
  3. View the configuration information of the job. There are three parameters we can use to affect how the job runs: backoffLimit (how many times a failing pod is retried), completions (how many successful completions are required), and parallelism (how many pods may run at once). Use -o yaml to see these parameters. We will add these parameters next; they can also be looked up with kubectl explain, as shown in the note after the final step of this section.
    	student@lfs458-node-1a0a:~$ kubectl get jobs.batch sleepy -o yaml
    	<output_omitted>
    		uid: c2c3a80d-d0fc-11e8-947a-42010a800002
    	spec:
    		backoffLimit: 6
    		completions: 1
    		parallelism: 1
    		selector:
    			matchLabels:
    	<output_omitted>
    
    
    	
    
  4. The completed job will continue to show up and age in the output. Delete the job.
    	student@lfs458-node-1a0a:~$ kubectl delete jobs.batch sleepy
    	job.batch "sleepy" deleted
    
    
    	
    
  5. Edit the YAML and add the completions: parameter and set it to 5.
    	student@lfs458-node-1a0a:~$ vim job.yaml
    	<output_omitted>
    	metadata:
    	  name: sleepy
    	spec:
    	  completions: 5 #<-- Add this line
    	  template:
    	    spec:
    	      containers:
    	<output_omitted>
    
    
    	
    
  6. Create the job again. As you view the job, note that COMPLETIONS begins at zero of 5.
    	student@lfs458-node-1a0a:~$ kubectl create -f job.yaml
    	job.batch/sleepy created
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs.batch
    	
    	NAME    COMPLETIONS   DURATION   AGE
    	sleepy 	0/5 		  5s 		 5s
    
    
    	
    
  7. View the pods that are running. Again, the output may be different depending on the speed of typing.
    	student@lfs458-node-1a0a:~$ kubectl get pods
    	NAME 		READY 	STATUS 		RESTARTS AGE
    	sleepy-z5tnh 0/1 	Completed 	0 		 8s
    	sleepy-zd692 1/1 	Running 	0 		 3s
    	<output_omitted>
    
    
    	
    
  8. Eventually all five completions will have finished. Verify, then delete the job.
    	student@lfs458-node-1a0a:~$ kubectl get jobs
    	NAME 	COMPLETIONS 	DURATION 	AGE
    	sleepy 	5/5 			26s 	    10m
    	
    	student@lfs458-node-1a0a:~$ kubectl delete jobs.batch sleepy
    	job.batch "sleepy" deleted
    
    
    	
    
  9. Edit the YAML again. This time add in the parallelism: parameter. Set it to 2 such that two pods at a time will be deployed.
    	student@lfs458-node-1a0a:~$ vim job.yaml
    	<output_omitted>
    	  name: sleepy
    	spec:
    	  completions: 5
    	  parallelism: 2 #<-- Add this line
    	  template:
    	    spec:
    	<output_omitted>
    
    
    
    	
    
  10. Create the job again. You should see the pods deployed two at a time until all five have completed.
    	student@lfs458-node-1a0a:~$ kubectl create -f job.yaml
    	job.batch/sleepy created
    	student@lfs458-node-1a0a:~$ kubectl get pods
    	NAME        READY STATUS RESTARTS AGE
    	sleepy-8xwpc 1/1  Running 0 	  5s
    	sleepy-xjqnf 1/1  Running 0 	  5s
    	<output_omitted>
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs
    	NAME 	COMPLETIONS DURATION AGE
    	sleepy  3/5 		11s 	 11s
    
    	
    
  11. Add a parameter which will stop the job after a certain number of seconds. Set activeDeadlineSeconds: to 15. The job and all pods will end once it has run for 15 seconds. We will also increase the sleep argument to five, just to be sure the job does not finish by itself before the deadline.
    	student@lfs458-node-1a0a:~$ vim job.yaml
    	<output_omitted>
    	  completions: 5
    	  parallelism: 2
    	  activeDeadlineSeconds: 15 #<-- Add this line
    	  template:
    	    spec:
    	      containers:
    	      - name: resting
    	        image: busybox
    	        command: ["/bin/sleep"]
    	        args: ["5"] #<-- Edit this line
    	<output_omitted>
    
    
    
    	
    
  12. Delete and recreate the job. It should run for 15 seconds, typically reaching 3/5 completions, then continue to age without further completions.
    	student@lfs458-node-1a0a:~$ kubectl delete jobs.batch sleepy
    	job.batch "sleepy" deleted
    	
    	student@lfs458-node-1a0a:~$ kubectl create -f job.yaml
    	job.batch/sleepy created
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs
    	NAME 	COMPLETIONS 	DURATION 	AGE
    	sleepy 	1/5 			6s 			6s
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs
    	NAME 	COMPLETIONS 	DURATION 	AGE
    	sleepy 3/5 				16s 		16s
    
    
    
    	
    
  13. View the message: entry in the Status section of the object YAML output.
    	student@lfs458-node-1a0a:~$ kubectl get job sleepy -o yaml
    	<output_omitted>
    	status:
    	  conditions:
    	  - lastProbeTime: 2018-10-16T05:45:14Z
    	    lastTransitionTime: 2018-10-16T05:45:14Z
    	    message: Job was active longer than specified deadline
    	    reason: DeadlineExceeded
    	    status: "True"
    	    type: Failed
    	  failed: 2
    	  startTime: 2018-10-16T05:44:59Z
    	  succeeded: 3
    
    
    
    	
    
  14. Delete the job.
    	student@lfs458-node-1a0a:~$ kubectl delete jobs.batch sleepy
    	job.batch "sleepy" deleted
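
    The Job spec fields used in this section (backoffLimit, completions, parallelism, and activeDeadlineSeconds) can also be looked up using the API documentation built into kubectl. A quick sketch:

    	student@lfs458-node-1a0a:~$ kubectl explain job.spec
    	student@lfs458-node-1a0a:~$ kubectl explain job.spec.activeDeadlineSeconds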
    
    
    	
    

Create a CronJob

A CronJob creates a watch loop which will create a batch Job on your behalf each time the schedule matches the current time. We will use our existing Job file as a starting point.
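
The schedule: field we are about to add uses standard Linux cron syntax. As a quick reference:

    	# ┌───────── minute (0-59)
    	# │ ┌───────── hour (0-23)
    	# │ │ ┌───────── day of month (1-31)
    	# │ │ │ ┌───────── month (1-12)
    	# │ │ │ │ ┌───────── day of week (0-6, Sunday=0)
    	# │ │ │ │ │
    	# * * * * *
    	schedule: "*/2 * * * *"   #<-- runs every two minutes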

  1. Copy the Job file to a new file.
    	student@lfs458-node-1a0a:~$ cp job.yaml cronjob.yaml
    
    
    	
    
  2. Edit the file to look like the annotated example shown below. The three parameters we added to the Job will need to be removed, and several lines will need to be indented further.
    	student@lfs458-node-1a0a:~$ vim cronjob.yaml
    	apiVersion: batch/v1beta1 #<-- Add beta1 to be v1beta1
    	kind: CronJob #<-- Update this line to CronJob
    	metadata:
    	  name: sleepy
    	spec:
    	  schedule: "*/2 * * * *" #<-- Add Linux-style cron syntax
    	  jobTemplate: #<-- New jobTemplate line; the spec moves under it
    	    spec:
    	      template: #<-- This and the following lines move
    	        spec: #<-- four spaces to the right
    	          containers:
    	          - name: resting
    	            image: busybox
    	            command: ["/bin/sleep"]
    	            args: ["3"]
    	          restartPolicy: Never
    
    
    
    	
    
  3. Create the new CronJob and view the jobs. It will take two minutes for the CronJob to run and generate a new batch Job.
    	student@lfs458-node-1a0a:~$ kubectl create -f cronjob.yaml
    	cronjob.batch/sleepy created
    	
    	student@lfs458-node-1a0a:~$ kubectl get cronjobs.batch
    	NAME 	SCHEDULE 	SUSPEND ACTIVE  LAST SCHEDULE AGE
    	sleepy */2 * * * * 	False 	0 	     <none>       8s
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs.batch
    	No resources found.
    
    
    
    	
    
  4. After two minutes you should see jobs start to run. (See the note at the end of this exercise if you want to limit how many finished Jobs accumulate.)
    	student@lfs458-node-1a0a:~$ kubectl get cronjobs.batch
    	NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
    	sleepy */2 * * * * 	  False   0   21s      2m1s
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs.batch
    	NAME              COMPLETIONS DURATION AGE
    	sleepy-1539722040 1/1         5s       18s
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs.batch
    	NAME              COMPLETIONS DURATION AGE
    	sleepy-1539722040 1/1         5s       5m17s
    	sleepy-1539722160 1/1         6s       3m17s
    	sleepy-1539722280 1/1         6s       77s
    
    
    	
    
  5. Ensure that if the job runs for more than 10 seconds it is terminated. We will first edit the sleep command to run for 30 seconds, then add the activeDeadlineSeconds: entry to the pod spec, above the containers.
    	student@lfs458-node-1a0a:~$ vim cronjob.yaml
    	....
    	  jobTemplate:
    	    spec:
    	      template:
    	        spec:
    	          activeDeadlineSeconds: 10 #<-- Add this line
    	          containers:
    	          - name: resting
    	....
    
    
    
    	
    
  6. Delete and recreate the CronJob. It may take a couple of minutes for the batch Job to be created and terminate due to the timer.
    	student@lfs458-node-1a0a:~$ kubectl delete cronjobs.batch sleepy
    	cronjob.batch "sleepy" deleted
    	
    	student@lfs458-node-1a0a:~$ kubectl create -f cronjob.yaml
    	cronjob.batch/sleepy created
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs
    	NAME 				COMPLETIONS DURATION AGE
    	sleepy-1539723240 	0/1 		61s 	61s
    	
    	student@lfs458-node-1a0a:~$ kubectl get cronjobs.batch
    	NAME 	SCHEDULE 	SUSPEND ACTIVE LAST SCHEDULE AGE
    	sleepy */2 * * * * 	False 	1 		72s 		 94s
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs
    	NAME 				COMPLETIONS DURATION AGE
    	sleepy-1539723240 	0/1 		75s 	75s
    	
    	student@lfs458-node-1a0a:~$ kubectl get jobs
    	NAME 				COMPLETIONS DURATION AGE
    	sleepy-1539723240 	0/1 		2m19s 	2m19s
    	sleepy-1539723360 	0/1 		19s 	19s
    	
    	student@lfs458-node-1a0a:~$ kubectl get cronjobs.batch
    	NAME 	SCHEDULE 	SUSPEND ACTIVE LAST SCHEDULE AGE
    	sleepy */2 * * * *  False   2      31s           2m53s
    
    
    	
    
  7. Clean up by deleting the CronJob.
    	student@lfs458-node-1a0a:~$ kubectl delete cronjobs.batch sleepy
    	cronjob.batch "sleepy" deleted
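
    If you experiment further, note that a CronJob keeps a history of finished Jobs. A minimal sketch of where the history limit fields would go in our file (both are optional; the values shown are the defaults):

    	spec:
    	  schedule: "*/2 * * * *"
    	  successfulJobsHistoryLimit: 3   #<-- how many completed Jobs to keep
    	  failedJobsHistoryLimit: 1       #<-- how many failed Jobs to keep
    	  jobTemplate: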
    
    
    	
    
