A Basic k3s Tutorial for Kubernetes

What is k3s?

K3s is a lightweight and easy-to-install Kubernetes distribution designed for use in resource-constrained environments, edge computing, and development scenarios. It’s a simplified version of Kubernetes that retains most of its functionality. Here’s a step-by-step guide on how to work with K3s:

Step 1: Set Up a Linux Environment

K3s is primarily intended for use on Linux-based systems. Ensure you have a Linux environment available for installation.

Step 2: Install K3s

You can install K3s on your Linux system using a convenient installation script. Open a terminal and run the following command:

$ curl -sfL https://get.k3s.io | sh -

This script will download and install K3s, including the Kubernetes control plane components and the container runtime.
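
The install script also honors a few environment variables. For example, to pin a specific release instead of the latest stable one (the version string below is only an illustration; pick one from the K3s releases page):

$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.28.5+k3s1" sh -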

Step 3: Verify Installation

After installation, K3s should be up and running. You can check the status of K3s and ensure that it’s working correctly by running:

$ sudo k3s kubectl get nodes

This command should display the status of your K3s node(s).
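
The output looks roughly like the following sketch; the node name, age, and version will differ on your machine:

$ sudo k3s kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
myhost    Ready    control-plane,master   2m    v1.28.5+k3s1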

Step 4: Interact with K3s

You can interact with K3s using the kubectl command, just like you would with a standard Kubernetes cluster. By default, K3s creates a kubeconfig file at /etc/rancher/k3s/k3s.yaml, which allows you to use kubectl to communicate with the cluster. To use this configuration, copy it to your home directory:

$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
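
If the ~/.kube directory does not exist yet, or kubectl later complains that it cannot read the file, create the directory first and hand ownership of the copied file to your user (a short sketch; adjust the paths and user to your setup):

$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sudo chown $(id -u):$(id -g) ~/.kube/config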

Now, you can use kubectl to interact with your K3s cluster:

$ kubectl get nodes
$ kubectl get pods --all-namespaces

Step 5: Deploy Applications

You can deploy applications to your K3s cluster using Kubernetes manifests (YAML files) just like in a standard Kubernetes cluster. For example, to deploy an Nginx web server:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml
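
Once the manifest has been applied, you can watch the resulting resources come up just as on any other cluster (the exact resource names depend on the manifest you applied):

$ kubectl get deployments
$ kubectl get pods
$ kubectl get services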

Step 6: Manage K3s Services

K3s automatically manages essential services like the container runtime and networking. You can use systemctl to manage the K3s service:

To start K3s: $ sudo systemctl start k3s
To stop K3s: $ sudo systemctl stop k3s
To check the status of K3s: $ sudo systemctl status k3s
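
On worker (agent) nodes the service is named k3s-agent rather than k3s, so the same systemctl commands apply with that unit name, for example:

$ sudo systemctl restart k3s-agent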

Step 7: Explore K3s Features

K3s retains most of the features of Kubernetes, including deployments, services, pods, and more. Explore and practice these features to get familiar with working in a K3s cluster.
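
A few commands you might try as a starting point (the deployment name hello and the nginx image are arbitrary choices for illustration):

$ kubectl create deployment hello --image=nginx
$ kubectl scale deployment hello --replicas=3
$ kubectl expose deployment hello --port=80
$ kubectl get deployments,pods,services
$ kubectl delete deployment hello && kubectl delete service hello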

Step 8: Uninstall K3s

If you need to uninstall K3s from your system, you can run:

$ sudo /usr/local/bin/k3s-uninstall.sh

This will remove K3s and any associated components from your system.
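
On agent nodes, K3s installs a separate script for the same purpose:

$ sudo /usr/local/bin/k3s-agent-uninstall.sh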

K3s is an excellent choice for lightweight Kubernetes testing and development environments. It simplifies the installation process while still providing the core capabilities of Kubernetes.

How to join new Nodes in K3s?
Join additional nodes to your K3s cluster. To do this, you will need to give each new node the IP address of your server node and the cluster's node token. You can retrieve the node token by running the following command on your server node:

$ cat /var/lib/rancher/k3s/server/node-token
Once you have the node token, you can run the following command on the new nodes to join them to the cluster:

$ sudo k3s agent --server https://<server-ip>:6443 --token <node-token>
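
Alternatively, you can let the installation script set the agent up as a systemd service by passing the server URL and token as environment variables (replace the placeholders with your own values):

$ curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -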

Verify that your K3s cluster is running. To do this, you can run the following command on your server node:

$ kubectl get nodes

K3s configuration file

Use a K3s configuration file. A K3s configuration file allows you to customize the behavior of your K3s cluster without putting every option on the command line. By default, K3s reads its configuration from /etc/rancher/k3s/config.yaml, and the keys in that file correspond to the flags accepted by k3s server and k3s agent, as sketched below.
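
A minimal sketch of such a file, written with a shell heredoc. The options shown here, a more permissive kubeconfig file mode, an extra node label, and disabling the bundled Traefik ingress, are just examples of valid keys, not recommendations:

$ sudo mkdir -p /etc/rancher/k3s
$ sudo tee /etc/rancher/k3s/config.yaml > /dev/null <<'EOF'
write-kubeconfig-mode: "0644"
node-label:
  - "environment=dev"
disable:
  - traefik
EOF
$ sudo systemctl restart k3s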

Use a K3s dashboard

A dashboard provides a graphical user interface for managing your K3s cluster. K3s does not ship a dedicated dashboard command, but you can deploy the standard Kubernetes Dashboard onto the cluster with kubectl, as shown below.
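
One common approach is to apply the upstream Kubernetes Dashboard manifest and reach it through kubectl proxy. The version tag below is only an example; check the dashboard project for the current release, and note that you still need to create a user and token for login as described in the dashboard documentation:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
$ kubectl proxy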

Does K3s run the Kubernetes components in containers or on the node?

K3s does not run the Kubernetes control plane as a set of separate containers. The API server, scheduler, controller manager, kubelet, and kube-proxy are all compiled into the single k3s binary and run as one process directly on the node (the k3s service on a server, or the k3s-agent service on a worker). This single-process design is a large part of why K3s has such a small resource footprint compared to a standard cluster, where the control plane components typically run as static pods.

Your workloads, on the other hand, do run in containers: K3s bundles containerd as its container runtime, and every pod you deploy, including the add-ons K3s ships with (CoreDNS, the Traefik ingress controller, the local-path storage provisioner, and the metrics server), is started as containers managed by containerd.
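
A quick way to see this split on a running node, assuming K3s was installed as a systemd service:

$ sudo systemctl status k3s        # one k3s process hosts the control plane
$ sudo k3s crictl ps               # workload containers run by the embedded containerd
$ kubectl get pods -n kube-system  # bundled add-ons such as CoreDNS run as ordinary pods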