Kubernetes Multi-Master Cluster

 



This post shows how to set up a multi-master Kubernetes cluster. I set this up using 4 VMs running Ubuntu 20.04. The components involved:


etcd — A highly available key-value store for shared configuration and service discovery.

kube-apiserver — Provides the API for Kubernetes orchestration.

kube-controller-manager — Runs the controllers that regulate cluster state (nodes, replication, endpoints, service accounts, and so on).

kube-scheduler — Schedules pods onto nodes.

kubelet — The node agent; it ensures the containers described in each pod spec are running and healthy.

kube-proxy — Provides network proxy services.


2 X Master nodes [192.168.163.128, 192.168.163.129]

1 X HAProxy node [192.168.163.130]

1 X Worker node [192.168.163.131]

Update /etc/hosts as below across all the nodes.

192.168.163.128 master1

192.168.163.129 master2

192.168.163.130 haproxy

192.168.163.131 worker1


Let's start with the HAProxy setup:

We need to deploy an HAProxy load balancer in front of the two master nodes to distribute the API traffic.

1) Update and Upgrade.

Run apt-get update and apt-get upgrade across all the nodes.

2) Install HAProxy.

root@haproxy:~# apt-get install haproxy

3) Configure HAProxy to load balance the traffic between the two Kubernetes master nodes. 

Under "defaults" in /etc/haproxy/haproxy.cfg.

frontend kubernetes
        bind 192.168.163.130:6443
        option tcplog
        mode tcp
        default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
        mode tcp
        balance roundrobin
        option tcp-check
        server master1 192.168.163.128:6443 check fall 3 rise 2
        server master2 192.168.163.129:6443 check fall 3 rise 2

4) Restart HAProxy.

root@haproxy:~# systemctl restart haproxy

5) Enable HAProxy.

root@haproxy:~# systemctl enable haproxy
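
Optionally, verify that the configuration parses cleanly and that HAProxy is listening on port 6443 (a quick sanity check; paths assume the default package install):

root@haproxy:~# haproxy -c -f /etc/haproxy/haproxy.cfg
root@haproxy:~# ss -tlnp | grep 6443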


Generating the TLS certificates. This can be done from any host; I am using the HAProxy node.

We will generate the certificates using cfssl.

Install cfssl:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

root@haproxy:/cfssl# ls -lrt

total 12364

-rw-r--r-- 1 root root  2277873 Dec  7  2021 cfssljson_linux-amd64

-rw-r--r-- 1 root root 10376657 Dec  7  2021 cfssl_linux-amd64

root@haproxy:/cfssl#

root@haproxy:/cfssl# chmod +x cfssl*

root@haproxy:/cfssl#

Move the binaries to /usr/local/bin:

root@haproxy:/cfssl# mv cfssl_linux-amd64 /usr/local/bin/cfssl

root@haproxy:/cfssl# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

Verify the installation.

root@haproxy:~# cfssl version

Version: 1.2.0

Revision: dev

Runtime: go1.6

root@haproxy:~#

Creating a certificate authority:

1) Create CA config file.

2) Create CA signing request config file.

3) Generate CA certificate and private key.

Create the certificate authority configuration file.

root@haproxy:/cfssl# cat ca-config.json

{

  "signing": {

    "default": {

      "expiry": "8760h"

    },

    "profiles": {

      "kubernetes": {

        "usages": ["signing", "key encipherment", "server auth", "client auth"],

        "expiry": "8760h"

      }

    }

  }

}

root@haproxy:/cfssl#

Create the certificate authority signing request configuration file.

root@haproxy:/cfssl# cat ca-csr.json

{

  "CN": "Kubernetes",

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

  {

    "C": "IE",

    "L": "Cork",

    "O": "Kubernetes",

    "OU": "CA",

    "ST": "Cork Co."

  }

 ]

}

root@haproxy:/cfssl#

Generate the certificate authority certificate and private key.

root@haproxy:/cfssl#  cfssl gencert -initca ca-csr.json | cfssljson -bare ca

2022/07/24 10:30:20 [INFO] generating a new CA key and certificate from CSR

2022/07/24 10:30:20 [INFO] generate received request

2022/07/24 10:30:20 [INFO] received CSR

2022/07/24 10:30:20 [INFO] generating key: rsa-2048

2022/07/24 10:30:20 [INFO] encoded CSR

2022/07/24 10:30:20 [INFO] signed certificate with serial number 308217731294871729528639454921123842156423092650

root@haproxy:/cfssl# ls -lrt

total 20

-rw-r--r-- 1 root root  232 Jul 24 10:27 ca-config.json

-rw-r--r-- 1 root root  194 Jul 24 10:28 ca-csr.json

-rw-r--r-- 1 root root 1363 Jul 24 10:30 ca.pem

-rw------- 1 root root 1675 Jul 24 10:30 ca-key.pem

-rw-r--r-- 1 root root 1001 Jul 24 10:30 ca.csr

root@haproxy:/cfssl#

This generates: ca.pem, ca-key.pem and ca.csr


Creating the certificate for the ETCD cluster:

1) Create the certificate signing request configuration file.

2) Generate the certificate and private key.

Create the certificate signing request configuration file:

root@haproxy:/cfssl# cat kubernetes-csr.json

{

  "CN": "kubernetes",

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

  {

    "C": "IE",

    "L": "Cork",

    "O": "Kubernetes",

    "OU": "Kubernetes",

    "ST": "Cork Co."

  }

 ]

}

root@haproxy:/cfssl#

Generate the certificate and private key:

root@haproxy:/cfssl# cfssl gencert \

> -ca=ca.pem \

> -ca-key=ca-key.pem \

> -config=ca-config.json \

> -hostname=192.168.163.128,192.168.163.129,192.168.163.130,127.0.0.1,kubernetes.default \

> -profile=kubernetes kubernetes-csr.json | \

> cfssljson -bare kubernetes

2022/07/24 10:34:57 [INFO] generate received request

2022/07/24 10:34:57 [INFO] received CSR

2022/07/24 10:34:57 [INFO] generating key: rsa-2048

2022/07/24 10:34:57 [INFO] encoded CSR

2022/07/24 10:34:57 [INFO] signed certificate with serial number 486899376583109496742108350898498775121671189271

root@haproxy:/cfssl#


This generates: kubernetes.pem, kubernetes-key.pem and kubernetes.csr

root@haproxy:/cfssl# ls -lrt kubernetes*

-rw-r--r-- 1 root root  202 Jul 24 10:33 kubernetes-csr.json

-rw-r--r-- 1 root root 1484 Jul 24 10:34 kubernetes.pem

-rw------- 1 root root 1679 Jul 24 10:34 kubernetes-key.pem

-rw-r--r-- 1 root root 1013 Jul 24 10:34 kubernetes.csr

root@haproxy:/cfssl#


Verify that the kubernetes-key.pem and the kubernetes.pem file were generated.
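
To double-check the certificate contents as well (for example, that all the master and HAProxy IPs are present as SANs), openssl can print them; a check of this kind might look like:

root@haproxy:/cfssl# openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"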

Copy the certificates [ca.pem, kubernetes.pem and kubernetes-key.pem] to each node:

# scp ca.pem kubernetes.pem kubernetes-key.pem rajasekar@192.168.163.128:~

# scp ca.pem kubernetes.pem kubernetes-key.pem rajasekar@192.168.163.129:~

# scp ca.pem kubernetes.pem kubernetes-key.pem rajasekar@192.168.163.130:~


Installing and configuring ETCD on the master nodes:

Create the etcd configuration and data directories.

root@master1:~# mkdir /etc/etcd /var/lib/etcd

root@master1:~#

Move the certificates to the configuration directory.

root@master1:~# mv ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd

root@master1:/etc/etcd# ls -lrt

total 12

-rw-r--r-- 1 root root 1363 Jul 24 10:39 ca.pem

-rw-r--r-- 1 root root 1484 Jul 24 10:39 kubernetes.pem

-rw------- 1 root root 1679 Jul 24 10:39 kubernetes-key.pem

root@master1:/etc/etcd#


Download the ETCD binaries.
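
For example, the v3.3.13 tarball can be fetched from the etcd GitHub releases page (the URL below assumes the standard etcd release layout):

root@master1:/etc/etcd# wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz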

root@master1:/etc/etcd# ls -lrt

total 10192

-rw-r--r-- 1 root root 10423953 Dec  6  2021 etcd-v3.3.13-linux-amd64.tar.gz

-rw-r--r-- 1 root root     1363 Jul 24 10:39 ca.pem

-rw-r--r-- 1 root root     1484 Jul 24 10:39 kubernetes.pem

-rw------- 1 root root     1679 Jul 24 10:39 kubernetes-key.pem

root@master1:/etc/etcd#


Extract the ETCD archive.

root@master1:/etc/etcd# tar xvzf etcd-v3.3.13-linux-amd64.tar.gz

Move the etcd binaries to /usr/local/bin.

root@master1:/etc/etcd# mv etcd-v3.3.13-linux-amd64/etc* /usr/local/bin/

root@master1:/etc/etcd#
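
A quick way to confirm the binaries are on the PATH:

root@master1:/etc/etcd# etcd --version
root@master1:/etc/etcd# ETCDCTL_API=3 etcdctl version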

Create an etcd systemd unit file.

root@master1:/etc/etcd# cat /etc/systemd/system/etcd.service

[Unit]

Description=etcd

Documentation=https://github.com/coreos

[Service]

ExecStart=/usr/local/bin/etcd \

  --name 192.168.163.128 \

  --cert-file=/etc/etcd/kubernetes.pem \

  --key-file=/etc/etcd/kubernetes-key.pem \

  --peer-cert-file=/etc/etcd/kubernetes.pem \

  --peer-key-file=/etc/etcd/kubernetes-key.pem \

  --trusted-ca-file=/etc/etcd/ca.pem \

  --peer-trusted-ca-file=/etc/etcd/ca.pem \

  --peer-client-cert-auth \

  --client-cert-auth \

  --initial-advertise-peer-urls https://192.168.163.128:2380 \

  --listen-peer-urls https://192.168.163.128:2380 \

  --listen-client-urls https://192.168.163.128:2379,http://127.0.0.1:2379 \

  --advertise-client-urls https://192.168.163.128:2379 \

  --initial-cluster-token etcd-cluster-0 \

  --initial-cluster 192.168.163.128=https://192.168.163.128:2380,192.168.163.129=https://192.168.163.129:2380 \

  --initial-cluster-state new \

  --data-dir=/var/lib/etcd

Restart=on-failure

RestartSec=5

[Install]

WantedBy=multi-user.target

Reload the daemon configuration.

root@master1:/# systemctl daemon-reload

Enable etcd to start at boot time, then start it and check its status.

root@master1:/# systemctl enable etcd

Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.

root@master1:/# systemctl start etcd

root@master1:/# systemctl status etcd

Repeat the same process on "master2", changing the --name flag and the listen/advertise URLs to its own IP (192.168.163.129); the --initial-cluster value stays the same. A sketch of the flags that differ is shown below.
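
The etcd.service unit on master2 is identical except for the following flags (a sketch derived from the master1 unit above):

  --name 192.168.163.129 \
  --initial-advertise-peer-urls https://192.168.163.129:2380 \
  --listen-peer-urls https://192.168.163.129:2380 \
  --listen-client-urls https://192.168.163.129:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.163.129:2379 \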

Verify that the cluster is up and running.

root@master1:/# ETCDCTL_API=3 etcdctl member list

Error: context deadline exceeded

root@master1:/#

This error is expected while only one of the two members is up: with --initial-cluster listing both masters, etcd waits for quorum. Once master2 has been set up, you should see the etcd cluster running.

root@master2:/# ETCDCTL_API=3 etcdctl member list

77e697d00d4e81f4, started, 192.168.163.128, https://192.168.163.128:2380, https://192.168.163.128:2379

d598e1cd0e712919, started, 192.168.163.129, https://192.168.163.129:2380, https://192.168.163.129:2379

root@master2:/#

root@master1:/# ETCDCTL_API=3 etcdctl member list

77e697d00d4e81f4, started, 192.168.163.128, https://192.168.163.128:2380, https://192.168.163.128:2379

d598e1cd0e712919, started, 192.168.163.129, https://192.168.163.129:2380, https://192.168.163.129:2379

root@master1:/#
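
For a health check against the secure client endpoints, etcdctl can reuse the same certificates (a sketch; flag names as in the etcdctl v3 API):

root@master1:/# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.163.128:2379,https://192.168.163.129:2379 \
  --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem \
  endpoint health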

root@master1:/# netstat -antp | grep -i 2380

tcp        0      0 192.168.163.128:2380    0.0.0.0:*               LISTEN      65946/etcd

tcp        0      0 192.168.163.128:2380    192.168.163.129:59570   ESTABLISHED 65946/etcd

tcp        0      0 192.168.163.128:2380    192.168.163.129:59572   ESTABLISHED 65946/etcd

tcp        0      0 192.168.163.128:41044   192.168.163.129:2380    ESTABLISHED 65946/etcd

tcp        0      0 192.168.163.128:41048   192.168.163.129:2380    ESTABLISHED 65946/etcd

tcp        0      0 192.168.163.128:41046   192.168.163.129:2380    ESTABLISHED 65946/etcd

tcp        0      0 192.168.163.128:2380    192.168.163.129:59576   ESTABLISHED 65946/etcd

tcp        0      0 192.168.163.128:41050   192.168.163.129:2380    ESTABLISHED 65946/etcd

root@master1:/# netstat -antp | grep -i 2379

tcp        0      0 192.168.163.128:2379    0.0.0.0:*               LISTEN      65946/etcd

tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      65946/etcd

tcp        0      0 127.0.0.1:2379          127.0.0.1:59458         ESTABLISHED 65946/etcd

tcp        0      0 192.168.163.128:2379    192.168.163.128:56900   ESTABLISHED 65946/etcd

tcp        0      0 127.0.0.1:59458         127.0.0.1:2379          ESTABLISHED 65946/etcd

tcp        0      0 192.168.163.128:56900   192.168.163.128:2379    ESTABLISHED 65946/etcd

root@master1:/#

Initializing the master nodes:

Install the latest version of Docker.

# curl -fsSL https://get.docker.com -o get-docker.sh
# sh get-docker.sh
# systemctl status docker
# systemctl enable docker
# usermod -aG docker root

Installing kubeadm, kubelet, and kubectl [master and worker nodes]. Disable swap on all the nodes first; a common way to do this is shown below.
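
A minimal sketch for disabling swap persistently (assuming the swap entry in /etc/fstab contains the word "swap"):

# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab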

Add the Google repository key.

root@master1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@master1:~#

Add the Google repository.

root@master1:~# echo "deb http://apt.kubernetes.io kubernetes-xenial main" >  /etc/apt/sources.list.d/kubernetes.list
root@master1:~# cat  /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io kubernetes-xenial main
root@master1:~#

Update the list of packages and install kubelet, kubeadm and kubectl.

root@master1:~# apt-get update

root@master1:~# apt-get install kubelet kubeadm kubectl

root@master1:~# dpkg -l | grep -i kube
hi  kubeadm                                    1.24.3-00                             amd64        Kubernetes Cluster Bootstrapping Tool
hi  kubectl                                    1.24.3-00                             amd64        Kubernetes Command Line Tool
hi  kubelet                                    1.24.3-00                             amd64        Kubernetes Node Agent
ii  kubernetes-cni                             0.8.7-00                              amd64        Kubernetes CNI
root@master1:~#
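
The "hi" status in the dpkg output means the packages are installed and on hold, which prevents them from being upgraded unintentionally. To put them on hold yourself:

root@master1:~# apt-mark hold kubelet kubeadm kubectl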

Initialize the cluster once packages are installed.

Create the configuration file for kubeadm [Both the master nodes].

root@master1:/# cat config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.24.3
controlPlaneEndpoint: "192.168.163.130:6443"   # HAProxy IP
etcd:
  external:
    endpoints:
    - https://192.168.163.128:2379   # master1
    - https://192.168.163.129:2379   # master2
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
networking:
  podSubnet: 10.30.0.0/24
apiServer:
  certSANs:
    - "192.168.163.130"
  extraArgs:
    apiserver-count: "3"
root@master1:/#

root@master1:/# kubeadm init --config=config.yaml
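
When the init finishes it prints a kubeadm join command; save it, since it is needed later for the worker node. To use kubectl from the master, also put the admin kubeconfig in place (the usual steps suggested in the kubeadm init output):

root@master1:/# mkdir -p $HOME/.kube
root@master1:/# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config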

Initialize the second master node:

Copy the certificates from master1 to the second master node (staged in /tmp here), then place them under /etc/kubernetes/pki.

root@master1:/etc/kubernetes/pki# scp * root@master2:/tmp
root@master2's password:
apiserver.crt                                                                                                                                              100% 1294   731.3KB/s   00:00
apiserver.key                                                                                                                                              100% 1675   808.9KB/s   00:00
apiserver-kubelet-client.crt                                                                                                                               100% 1164     1.1MB/s   00:00
apiserver-kubelet-client.key                                                                                                                               100% 1679     1.4MB/s   00:00
ca.crt                                                                                                                                                     100% 1099     1.1MB/s   00:00
ca.key                                                                                                                                                     100% 1679     1.6MB/s   00:00
front-proxy-ca.crt                                                                                                                                         100% 1115     1.1MB/s   00:00
front-proxy-ca.key                                                                                                                                         100% 1679     1.5MB/s   00:00
front-proxy-client.crt                                                                                                                                     100% 1119     1.0MB/s   00:00
front-proxy-client.key                                                                                                                                     100% 1675     1.4MB/s   00:00
sa.key                                                                                                                                                     100% 1675     1.4MB/s   00:00
sa.pub                                                                                                                                                     100%  451   388.2KB/s   00:00
root@master1:/etc/kubernetes/pki#
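
On master2, create the directory and move the copied files into place; a minimal sketch (file names as copied above):

root@master2:~# mkdir -p /etc/kubernetes/pki
root@master2:~# mv /tmp/*.crt /tmp/*.key /tmp/sa.pub /etc/kubernetes/pki/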

root@master2:/etc/kubernetes/pki# ls -lrt
total 44
-rw-r--r-- 1 root root 1294 Jul 27 22:40 apiserver.crt
-rw-r--r-- 1 root root 1164 Jul 27 22:40 apiserver-kubelet-client.crt
-rw------- 1 root root 1675 Jul 27 22:40 apiserver.key
-rw-r--r-- 1 root root 1099 Jul 27 22:40 ca.crt
-rw------- 1 root root 1679 Jul 27 22:40 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1115 Jul 27 22:40 front-proxy-ca.crt
-rw------- 1 root root 1679 Jul 27 22:40 ca.key
-rw------- 1 root root 1679 Jul 27 22:40 front-proxy-ca.key
-rw------- 1 root root 1675 Jul 27 22:40 front-proxy-client.key
-rw-r--r-- 1 root root 1119 Jul 27 22:40 front-proxy-client.crt
-rw------- 1 root root 1675 Jul 27 22:40 sa.key
root@master2:/etc/kubernetes/pki#

Run kubeadm init on master2 with the same config.yaml.

Apply the manifest to deploy the Calico overlay network:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

kubectl apply -f calico.yaml

root@master1:~# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS         AGE
calico-kube-controllers-555bc4b957-4kz79   1/1     Running   0                23h
calico-node-5rhl4                          0/1     Running   11 (116s ago)    23h
calico-node-jr854                          0/1     Running   0                3m8s
calico-node-lbdmc                          0/1     Running   11 (2m10s ago)   23h
coredns-6d4b75cb6d-9cd26                   1/1     Running   1 (11m ago)      23h
coredns-6d4b75cb6d-pnckp                   1/1     Running   1 (11m ago)      23h
kube-apiserver-master1                     1/1     Running   5 (8m37s ago)    23h
kube-apiserver-master2                     1/1     Running   8 (7m46s ago)    23h
kube-controller-manager-master1            1/1     Running   2 (11m ago)      23h
kube-controller-manager-master2            1/1     Running   1 (7m46s ago)    23h
kube-proxy-g6ttm                           1/1     Running   1 (11m ago)      23h
kube-proxy-h7mxs                           1/1     Running   0                3m8s
kube-proxy-knzj6                           1/1     Running   1 (7m46s ago)    23h
kube-scheduler-master1                     1/1     Running   2 (11m ago)      23h
kube-scheduler-master2                     1/1     Running   1 (7m45s ago)    23h
root@master1:~#

Finally, verify all the pods are running.

Add the worker node by running the kubeadm join command printed by kubeadm init; an illustrative form follows.
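
The exact token and CA cert hash come from your own kubeadm init output; placeholders are used here:

root@worker1:~# kubeadm join 192.168.163.130:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>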

root@master1:/# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   23h   v1.24.3
master2   Ready    control-plane   23h   v1.24.3
worker1   Ready    <none>          37m   v1.24.3
root@master1:/#

Cluster setup complete.
