Kubernetes Upgrade
Since this is a single-master cluster, the upgrade needs downtime: I cannot drain the pods off the master node and fail the control plane over to another node.
Let me check the current version.
root@masterk8s:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.14", GitCommit:"0fd2b5afdfe3134d6e1531365fdb37dd11f54d1c", GitTreeState:"clean", BuildDate:"2021-08-11T18:06:31Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
root@masterk8s:~#
I am running on v1.19.14
root@masterk8s:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
masterk8s Ready master 38d v1.19.14
worker01 NotReady <none> 32d v1.19.14
root@masterk8s:~#
root@masterk8s:~# dpkg -l | grep -i kube
ii kubeadm 1.19.14-00 amd64 Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.19.14-00 amd64 Kubernetes Command Line Tool
ii kubelet 1.19.14-00 amd64 Kubernetes Node Agent
ii kubernetes-cni 1.1.1-00 amd64 Kubernetes CNI
root@masterk8s:~#
I am going to upgrade to v1.20.10.
If I run kubeadm upgrade plan now, it shows the recommended upgrade as v1.19.16, the latest in the v1.19 series.
So, to upgrade to v1.20.x, I need to manually upgrade the packages first.
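Before pinning a version, it helps to confirm the apt repo actually ships that exact build. A minimal sketch; the madison-style listing below is a sample I typed in to illustrate the format, not live output:

```shell
# List every kubeadm build the repository offers (run on the node):
#   apt-cache madison kubeadm
# Sample of what that listing looks like (assumed, not live output):
madison='kubeadm | 1.20.15-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.10-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.16-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages'
# Extract the exact package version string for the release I want:
pkg=$(echo "$madison" | awk -F'|' '/1\.20\.10/ { gsub(/ /, "", $2); print $2 }')
echo "$pkg"
```

The extracted value goes straight into the install step, e.g. apt-get install kubeadm=1.20.10-00.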
What is apt hold?
If a package is marked "hold", it is held back: the package cannot be installed, upgraded, or removed until the hold mark is removed.
So, I am going to unhold the kubeadm package.
root@masterk8s:~# apt-mark unhold kubeadm
kubeadm was already not hold.
root@masterk8s:~#
Installing kubeadm version 1.20.10.
root@masterk8s:~# apt-get install kubeadm=1.20.10-00
Skipping outputs.
(Reading database ... 218948 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.20.10-00_amd64.deb ...
Unpacking kubeadm (1.20.10-00) over (1.19.14-00) ...
Setting up kubeadm (1.20.10-00) ...
root@masterk8s:~#
Putting the kubeadm package back on hold so a routine apt-get upgrade does not move it unexpectedly.
root@masterk8s:~# apt-mark hold kubeadm
kubeadm set on hold.
root@masterk8s:~#
root@masterk8s:~# apt-mark showhold
kubeadm
root@masterk8s:~#
Next, run kubeadm upgrade plan again to plan the upgrade and check for any errors.
root@masterk8s:~# kubeadm upgrade plan
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.16
[upgrade/versions] kubeadm version: v1.20.10
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
kubelet 2 x v1.19.14 v1.20.15
Upgrade to the latest stable version:
COMPONENT CURRENT AVAILABLE
kube-apiserver v1.19.16 v1.20.15
kube-controller-manager v1.19.16 v1.20.15
kube-scheduler v1.19.16 v1.20.15
kube-proxy v1.19.16 v1.20.15
CoreDNS 1.7.0 1.7.0
etcd 3.4.13-0 3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.20.15
Note: Before you can perform this upgrade, you have to update kubeadm to v1.20.15.
Let's apply the upgrade.
root@masterk8s:~# dpkg -l | grep -i kube
hi kubeadm 1.20.10-00 amd64 Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.19.14-00 amd64 Kubernetes Command Line Tool
ii kubelet 1.19.14-00 amd64 Kubernetes Node Agent
ii kubernetes-cni 1.1.1-00 amd64 Kubernetes CNI
root@masterk8s:~#
Even though the upgrade plan recommends v1.20.15, I am proceeding with v1.20.10.
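This works because kubeadm upgrade apply accepts any target up to the kubeadm binary's own version; it only refuses targets newer than itself (which is why the plan's v1.20.15 suggestion would need kubeadm v1.20.15 first). A quick hedged sanity check, with the version strings filled in by hand:

```shell
# Sanity check (sketch, versions assumed): 'kubeadm upgrade apply TARGET'
# only works when TARGET is not newer than the installed kubeadm binary.
target="v1.20.10"      # version I want to apply
installed="v1.20.10"   # from 'kubeadm version' above
# sort -V orders version strings numerically; the newer one sorts last.
highest=$(printf '%s\n%s\n' "$target" "$installed" | sort -V | tail -n1)
if [ "$highest" = "$installed" ]; then
  echo "OK: kubeadm $installed can apply $target"
else
  echo "Upgrade kubeadm first: $target is newer than $installed"
fi
```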
root@masterk8s:~# kubeadm upgrade apply v1.20.10
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.10"
[upgrade/versions] Cluster version: v1.19.16
[upgrade/versions] kubeadm version: v1.20.10
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]:
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.10". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@masterk8s:~#
root@masterk8s:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.10", GitCommit:"8152330a2b6ca3621196e62966ef761b8f5a61bb", GitTreeState:"clean", BuildDate:"2021-08-11T18:05:01Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
root@masterk8s:~#
Cluster version has been upgraded.
root@masterk8s:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-cxr7h 1/1 Running 0 32d
kube-flannel kube-flannel-ds-zl528 1/1 Running 0 32d
kube-system coredns-74ff55c5b-8jzs4 1/1 Running 0 2m2s
kube-system coredns-74ff55c5b-lsjz8 1/1 Running 0 2m2s
kube-system etcd-masterk8s 1/1 Running 0 3m15s
kube-system kube-apiserver-masterk8s 1/1 Running 0 2m51s
kube-system kube-controller-manager-masterk8s 1/1 Running 0 2m35s
kube-system kube-proxy-bpt9l 1/1 Running 0 32d
kube-system kube-proxy-t7dfv 1/1 Running 1 38d
kube-system kube-scheduler-masterk8s 1/1 Running 0 2m20s
root@masterk8s:~#
All the core pods are running.
root@masterk8s:~# kubectl get cm -o yaml -n kube-system kubeadm-config | grep -i "kubernetesVersion"
kubernetesVersion: v1.20.10
root@masterk8s:~#
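Beyond the ConfigMap, the API server itself should now report the new version. A couple of read-only checks (sketch; run on the master):

```shell
# Server Version should now show v1.20.10. Note the Client Version will
# still be v1.19.14 until the kubectl package is upgraded too:
kubectl version --short
# The static control-plane pods on the master should be running the new
# image tags:
kubectl get pods -n kube-system -o wide | grep masterk8s
```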
root@masterk8s:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
masterk8s Ready control-plane,master 38d v1.19.14
worker01 NotReady <none> 32d v1.19.14
root@masterk8s:~#
Above, the VERSION column still shows the old version: kubectl get nodes reports each node's kubelet version, not the control plane's. Let's upgrade kubelet next.
root@masterk8s:~# apt-get install kubelet=1.20.10-00
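After installing the new kubelet package, the unit usually needs a reload and restart before the node reports the new version; this is the standard step in the kubeadm upgrade flow (sketch):

```shell
# Pick up the updated systemd unit and restart the node agent:
systemctl daemon-reload
systemctl restart kubelet
# Optionally pin the package so routine upgrades don't move it:
apt-mark hold kubelet
```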
Make sure kubelet is healthy after the upgrade.
root@masterk8s:~# systemctl status kubelet
root@masterk8s:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
masterk8s Ready control-plane,master 38d v1.20.10
worker01 NotReady <none> 32d v1.19.14
root@masterk8s:~#
You can repeat the same on the worker node to upgrade kubelet.
root@worker01:~# apt-get update
root@worker01:~# apt-get install -y kubelet=1.20.10-00
root@worker01:~# systemctl status kubelet
Make sure the kubelet service is running.
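From the master, you can also confirm the worker picked up the new kubelet without logging back into it (sketch; the jsonpath field is the standard node-status location):

```shell
# Report just the kubelet version worker01 is running:
kubectl get node worker01 -o jsonpath='{.status.nodeInfo.kubeletVersion}'; echo
# Or watch the node list until the VERSION column flips to v1.20.10:
kubectl get nodes -w
```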
root@masterk8s:~# kubectl get no
NAME STATUS ROLES AGE VERSION
masterk8s Ready control-plane,master 38d v1.20.10
worker01 NotReady <none> 32d v1.19.14
root@masterk8s:~#
Let me create a join token and add the worker node back to the cluster.
root@masterk8s:~# kubeadm token create --print-join-command
If kubelet fails to start after the upgrade, no worries: the join command will start kubelet automatically.
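Bootstrap tokens created this way expire after 24 hours by default; you can list them or set an explicit lifetime (sketch):

```shell
# Show existing tokens and their expiry on the master:
kubeadm token list
# Create a token with an explicit TTL instead of the default 24h:
kubeadm token create --ttl 2h --print-join-command
```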
root@worker01:~# kubeadm join 192.168.163.128:6443 --token t4f2mt.5ge5ms9t5qzhf86y --discovery-token-ca-cert-hash sha256:a6e3bcbdd60b758efee6744efde0d04488cc2be13646eb2c282e7bbb6b6d0587
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@worker01:~#
root@masterk8s:~# kubectl get no
NAME STATUS ROLES AGE VERSION
masterk8s Ready control-plane,master 38d v1.20.10
worker01 Ready <none> 32d v1.20.10
root@masterk8s:~#
We are done with the upgrade.