Kubernetes Installation - Ubuntu 20.04.3 LTS
A node in Kubernetes refers to a server. A master node is a server that manages the state of the cluster. Worker nodes are servers that run the workloads – these are typically containerized applications and services.
1) Disable swap on the master and worker nodes by commenting out the swap configuration in /etc/fstab, then reboot the servers.
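For example (run on every node; this is a typical sketch, and the sed pattern assumes the swap line in /etc/fstab contains the word "swap" surrounded by whitespace, so adjust it if yours differs):
root@masterk8s:~# swapoff -a
root@masterk8s:~# sed -i '/ swap / s/^/#/' /etc/fstab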
2) Set a static IP on each node.
We will use netplan to set up the static IPs across the nodes.
root@masterk8s:/# cat /etc/netplan/01-network-manager-all.yaml
# Let NetworkManager manage all devices on this system
network:
  version: 2
  ethernets:
    ens33:
      addresses:
      - 192.168.45.128/24
      gateway4: 192.168.45.2
root@masterk8s:/#
root@masterk8s:/# netplan apply
root@masterk8s:/# netplan get
network:
  ethernets:
    ens33:
      addresses:
      - 192.168.45.128/24
      gateway4: 192.168.45.2
  version: 2
root@masterk8s:/#
Disable NetworkManager.
root@masterk8s:/# systemctl disable NetworkManager
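It also helps if each node can resolve the other nodes by hostname. One way (an assumption on my part, not part of the original steps; 192.168.45.129 is a placeholder, so substitute your worker's actual IP) is to add entries like these to /etc/hosts on every node:
192.168.45.128 masterk8s
192.168.45.129 worker1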
Once the swap and IP configuration above is complete, we can install Kubernetes.
Install Kubernetes:
First, install apt-transport-https and curl, which enable apt to work with HTTPS repositories:
root@masterk8s:~# apt install apt-transport-https curl
Then, add the Kubernetes signing key to both nodes by executing the command:
root@masterk8s:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
root@masterk8s:~#
Next, we add the Kubernetes repository as a package source on both nodes using the following command:
root@masterk8s:~# echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
root@masterk8s:~# mv ~/kubernetes.list /etc/apt/sources.list.d
root@masterk8s:~# cat /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@masterk8s:~#
After that, update the nodes:
root@masterk8s:~# apt-get update
Install Kubernetes tools:
This involves installing the various tools that make up Kubernetes: kubeadm, kubelet, kubectl, and kubernetes-cni. These tools are installed on both nodes.
root@masterk8s:~# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
kubelet – an agent that runs on each node and handles communication with the master node to initiate workloads in the container runtime.
kubeadm – part of the Kubernetes project and helps initialize a Kubernetes cluster.
kubectl – the Kubernetes command-line tool that allows you to run commands against your clusters.
kubernetes-cni – enables networking within the containers ensuring containers can communicate and exchange data.
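Optionally, you can hold these packages at their installed version so a routine apt upgrade does not unexpectedly move the cluster to a newer Kubernetes release:
root@masterk8s:~# apt-mark hold kubelet kubeadm kubectl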
Changing Docker Cgroup Driver:
Install Docker on both nodes:
root@masterk8s:~# apt install docker.io
By default, Docker installs with "cgroupfs" as the cgroup driver, while Kubernetes recommends running Docker with the "systemd" driver.
If you get the warning below while initializing the Kubernetes cluster, change the Docker cgroup driver to systemd (the daemon.json example is shown in the troubleshooting section below).
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
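You can check which cgroup driver Docker is currently using before initializing the cluster:
root@masterk8s:~# docker info | grep -i "cgroup driver"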
Repeat the above steps on the worker nodes.
Initializing the Kubernetes Master Node:
The first step in deploying a Kubernetes cluster is to bring up the master node. While on the terminal of your master node, execute the following command to initialize it:
root@masterk8s:~# kubeadm init --pod-network-cidr=10.244.0.0/16
Certs generated during initialization:
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local masterk8s] and IPs [10.96.0.1 192.168.45.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost masterk8s] and IPs [192.168.45.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost masterk8s] and IPs [192.168.45.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
Config files:
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
Kubelet start:
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
Control plane initialization:
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
But the kubelet failed to start, and initialization aborted with a "Cannot start kubelet" error. I tried to start the service via systemctl but it failed as well. Running the kubelet binary directly, I found:
I0320 20:28:02.599381 2246 server.go:662] "Failed to get the kubelet's cgroup. Kubelet system container metrics may be missing." err="cpu and memory cgroup hierarchy not unified. cpu: /user.slice, memory: /user.slice/user-1000.slice/session-3.scope"
I found a fix on Stack Overflow: add a drop-in file at /etc/systemd/system/kubelet.service.d/11-cgroups.conf containing:
[Service]
CPUAccounting=true
MemoryAccounting=true
Then reload systemd and restart the kubelet:
systemctl daemon-reload && systemctl restart kubelet
This time ended up with:
Mar 20 20:35:45 masterk8s kubelet[5876]: E0320 20:35:45.563017 5876 server.go:302] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd>
The full error was visible via:
root@masterk8s:/# journalctl -u kubelet
The issue was fixed after setting "systemd" as the Docker cgroup driver:
root@masterk8s:/# cat /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts":
{ "max-size": "100m" },
"storage-driver": "overlay2"
}
root@masterk8s:/#
root@masterk8s:/# systemctl daemon-reload
root@masterk8s:/# systemctl restart docker
root@masterk8s:/# systemctl restart kubelet
root@masterk8s:/# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 11-cgroups.conf
Active: active (running) since Sun 2022-03-20 20:53:24 MST; 4s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 13099 (kubelet)
Finally,
root@masterk8s:~# kubeadm init --pod-network-cidr=10.244.0.0/16
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.45.128:6443 --token c08a9h.2qowjy8ocj3bnim9 \
--discovery-token-ca-cert-hash sha256:7807d0125a828f7419505675d31b84879f8bc0a5d3d96a3b705b31e76f8b43ac
root@masterk8s:~#
If you execute the above command and your system does not meet the expected requirements, such as the minimum RAM or CPU count explained in the Prerequisites section, you will get preflight errors and the cluster will not start.
However, if you are doing this tutorial for learning purposes, you can add the following flag to kubeadm init to ignore those preflight errors:
# kubeadm init --ignore-preflight-errors=NumCPU,Mem --pod-network-cidr=10.244.0.0/16
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
root@masterk8s:~# mkdir -p $HOME/.kube
root@masterk8s:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@masterk8s:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@masterk8s:~#
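As a quick sanity check that kubectl can reach the API server, you can run:
root@masterk8s:~# kubectl cluster-info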
Deploying a Pod Network:
A pod network facilitates communication between pods running on different nodes and is necessary for the proper functioning of the Kubernetes cluster.
Flannel is a simple overlay network that satisfies these requirements. Its default manifest uses the 10.244.0.0/16 pod CIDR, which is why we passed --pod-network-cidr=10.244.0.0/16 to kubeadm init.
You must first add a firewall rule to create an exception for port 6443 (the default port of the Kubernetes API server).
Run the following ufw commands on both master and worker nodes:
root@masterk8s:~# ufw allow 6443
Rules updated
Rules updated (v6)
root@masterk8s:~# ufw allow 6443/tcp
Rules updated
Rules updated (v6)
root@masterk8s:~#
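Depending on your environment, additional ports may be needed. For a typical kubeadm/flannel setup (these rules are an assumption, not part of the original steps), you can also open 10250/tcp for the kubelet API and 8472/udp for flannel's VXLAN traffic:
root@masterk8s:~# ufw allow 10250/tcp
root@masterk8s:~# ufw allow 8472/udp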
After that, you can run the following two commands to deploy the pod network on the master node:
root@masterk8s:/kube# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@masterk8s:/kube# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
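Note: the coreos/flannel repository has since been archived. If the URLs above stop working, the manifest is maintained under the flannel-io organization, for example:
root@masterk8s:/kube# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml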
root@masterk8s:/kube# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-64897985d-t7pv9             1/1     Running   0          6m26s
kube-system   coredns-64897985d-z84d2             1/1     Running   0          6m26s
kube-system   etcd-masterk8s                      1/1     Running   2          6m38s
kube-system   kube-apiserver-masterk8s            1/1     Running   3          6m38s
kube-system   kube-controller-manager-masterk8s   1/1     Running   3          6m41s
kube-system   kube-flannel-ds-pwvth               1/1     Running   0          43s
kube-system   kube-proxy-5c9z6                    1/1     Running   0          6m26s
kube-system   kube-scheduler-masterk8s            1/1     Running   3          6m38s
root@masterk8s:/kube#
root@masterk8s:/kube# kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
root@masterk8s:/kube#
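Since ComponentStatus is deprecated, a more current way to check control-plane health is to query the API server's health endpoint directly:
root@masterk8s:/kube# kubectl get --raw='/readyz?verbose'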
Joining Worker Nodes to the Kubernetes Cluster:
On each worker node, run the kubeadm join command printed at the end of kubeadm init:
root@worker1:/# kubeadm join 192.168.45.128:6443 --token c08a9h.2qowjy8ocj3bnim9 \
> --discovery-token-ca-cert-hash sha256:7807d0125a828f7419505675d31b84879f8bc0a5d3d96a3b705b31e76f8b43ac
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0320 21:17:00.854030 2776 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@worker1:/#
root@masterk8s:/kube# kubectl get nodes
NAME        STATUS     ROLES                  AGE    VERSION
masterk8s   Ready      control-plane,master   9m8s   v1.23.5
worker1     NotReady   <none>                 15s    v1.23.5
root@masterk8s:/kube#
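Note that worker1 shows NotReady at first; it should become Ready once the flannel pod is running on it. Also, the bootstrap token printed by kubeadm init expires after 24 hours by default. To join another worker later, generate a fresh join command on the master:
root@masterk8s:/kube# kubeadm token create --print-join-command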