Kubernetes Multi-Master Cluster
This post shows how to set up a multi-master Kubernetes cluster. I set this up using 4 VMs running Ubuntu 20.04.
etcd — A highly available key-value store for shared configuration and service discovery.
kube-apiserver — Provides the API for Kubernetes orchestration.
kube-controller-manager — Runs the controller loops that drive the cluster toward its desired state.
kube-scheduler — Schedules pods onto nodes.
kubelet — Runs on each node and makes sure the containers described in pod specs are running and healthy.
kube-proxy — Maintains network rules on each node to provide Service networking.
2 X Master nodes [192.168.163.128, 192.168.163.129]
1 X HAProxy node [192.168.163.130]
1 X Worker node [192.168.163.131]
Update /etc/hosts as below across all the nodes.
192.168.163.128 master1
192.168.163.129 master2
192.168.163.130 haproxy
192.168.163.131 worker1
Let's start with the HAProxy setup:
We deploy an HAProxy load balancer in front of the two master nodes to distribute API traffic between them.
1) Update and Upgrade.
Run apt-get update and apt-get upgrade on all the nodes.
2) Install HAProxy.
root@haproxy:~# apt-get install haproxy
3) Configure HAProxy to load balance the traffic between the two Kubernetes master nodes.
Under "defaults" in /etc/haproxy/haproxy.cfg.
frontend kubernetes
    bind 192.168.163.130:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.163.128:6443 check fall 3 rise 2
    server master2 192.168.163.129:6443 check fall 3 rise 2
4) Restart HAProxy.
root@haproxy:~# systemctl restart haproxy
5) Enable HAProxy.
root@haproxy:~# systemctl enable haproxy
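As a quick sanity check, you can validate the configuration file and confirm the frontend is listening on port 6443 (assuming the default config path):
root@haproxy:~# haproxy -c -f /etc/haproxy/haproxy.cfg
root@haproxy:~# ss -tlnp | grep 6443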
Generating the TLS certificates - this can be done from any box; I am using the HAProxy host.
We generate the certificates using cfssl.
Installing cfssl:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
root@haproxy:/cfssl# ls -lrt
total 12364
-rw-r--r-- 1 root root 2277873 Dec 7 2021 cfssljson_linux-amd64
-rw-r--r-- 1 root root 10376657 Dec 7 2021 cfssl_linux-amd64
root@haproxy:/cfssl#
root@haproxy:/cfssl# chmod +x cfssl*
root@haproxy:/cfssl#
Move the binaries to /usr/local/bin:
root@haproxy:/cfssl# mv cfssl_linux-amd64 /usr/local/bin/cfssl
root@haproxy:/cfssl# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
Verify the installation.
root@haproxy:~# cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
root@haproxy:~#
Creating a certificate authority:
1) Create CA config file.
2) Create CA signing request config file.
3) Generate CA certificate and private key.
Create the certificate authority configuration file.
root@haproxy:/cfssl# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
root@haproxy:/cfssl#
Create the certificate authority signing request configuration file.
root@haproxy:/cfssl# cat ca-csr.json
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IE",
      "L": "Cork",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Cork Co."
    }
  ]
}
root@haproxy:/cfssl#
Generate the certificate authority certificate and private key.
root@haproxy:/cfssl# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2022/07/24 10:30:20 [INFO] generating a new CA key and certificate from CSR
2022/07/24 10:30:20 [INFO] generate received request
2022/07/24 10:30:20 [INFO] received CSR
2022/07/24 10:30:20 [INFO] generating key: rsa-2048
2022/07/24 10:30:20 [INFO] encoded CSR
2022/07/24 10:30:20 [INFO] signed certificate with serial number 308217731294871729528639454921123842156423092650
root@haproxy:/cfssl# ls -lrt
total 20
-rw-r--r-- 1 root root 232 Jul 24 10:27 ca-config.json
-rw-r--r-- 1 root root 194 Jul 24 10:28 ca-csr.json
-rw-r--r-- 1 root root 1363 Jul 24 10:30 ca.pem
-rw------- 1 root root 1675 Jul 24 10:30 ca-key.pem
-rw-r--r-- 1 root root 1001 Jul 24 10:30 ca.csr
root@haproxy:/cfssl#
This generates: ca.pem, ca-key.pem and ca.csr
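Optionally, inspect the CA certificate's subject and validity dates with openssl (a quick sanity check, not required for the rest of the setup):
root@haproxy:/cfssl# openssl x509 -in ca.pem -noout -subject -dates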
Creating the certificate for the ETCD cluster:
1) Create the certificate signing request configuration file.
2) Generate the certificate and private key.
Create the certificate signing request configuration file:
root@haproxy:/cfssl# cat kubernetes-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IE",
      "L": "Cork",
      "O": "Kubernetes",
      "OU": "Kubernetes",
      "ST": "Cork Co."
    }
  ]
}
root@haproxy:/cfssl#
Generate the certificate and private key:
root@haproxy:/cfssl# cfssl gencert \
> -ca=ca.pem \
> -ca-key=ca-key.pem \
> -config=ca-config.json \
> -hostname=192.168.163.128,192.168.163.129,192.168.163.130,127.0.0.1,kubernetes.default \
> -profile=kubernetes kubernetes-csr.json | \
> cfssljson -bare kubernetes
2022/07/24 10:34:57 [INFO] generate received request
2022/07/24 10:34:57 [INFO] received CSR
2022/07/24 10:34:57 [INFO] generating key: rsa-2048
2022/07/24 10:34:57 [INFO] encoded CSR
2022/07/24 10:34:57 [INFO] signed certificate with serial number 486899376583109496742108350898498775121671189271
root@haproxy:/cfssl#
This generates: kubernetes.pem, kubernetes-key.pem and kubernetes.csr
root@haproxy:/cfssl# ls -lrt kubernetes*
-rw-r--r-- 1 root root 202 Jul 24 10:33 kubernetes-csr.json
-rw-r--r-- 1 root root 1484 Jul 24 10:34 kubernetes.pem
-rw------- 1 root root 1679 Jul 24 10:34 kubernetes-key.pem
-rw-r--r-- 1 root root 1013 Jul 24 10:34 kubernetes.csr
root@haproxy:/cfssl#
Verify that the kubernetes-key.pem and the kubernetes.pem files were generated.
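Optionally, confirm the certificate carries the expected IP SANs with openssl (a quick sanity check):
root@haproxy:/cfssl# openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"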
Copy the certificates [ca.pem, kubernetes.pem and kubernetes-key.pem] to all the master and worker nodes.
# scp ca.pem kubernetes.pem kubernetes-key.pem rajasekar@192.168.163.128:~
# scp ca.pem kubernetes.pem kubernetes-key.pem rajasekar@192.168.163.129:~
# scp ca.pem kubernetes.pem kubernetes-key.pem rajasekar@192.168.163.130:~
Installing and configuring ETCD on the master nodes:
Create the etcd configuration and data directories.
root@master1:~# mkdir /etc/etcd /var/lib/etcd
root@master1:~#
Move the certificates to the configuration directory.
root@master1:~# mv ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd
root@master1:/etc/etcd# ls -lrt
total 12
-rw-r--r-- 1 root root 1363 Jul 24 10:39 ca.pem
-rw-r--r-- 1 root root 1484 Jul 24 10:39 kubernetes.pem
-rw------- 1 root root 1679 Jul 24 10:39 kubernetes-key.pem
root@master1:/etc/etcd#
Download the ETCD binaries.
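For example, the release tarball can be fetched from the etcd GitHub releases page (URL assumed to follow the standard pattern for v3.3.13):
root@master1:/etc/etcd# wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz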
root@master1:/etc/etcd# ls -lrt
total 10192
-rw-r--r-- 1 root root 10423953 Dec 6 2021 etcd-v3.3.13-linux-amd64.tar.gz
-rw-r--r-- 1 root root 1363 Jul 24 10:39 ca.pem
-rw-r--r-- 1 root root 1484 Jul 24 10:39 kubernetes.pem
-rw------- 1 root root 1679 Jul 24 10:39 kubernetes-key.pem
root@master1:/etc/etcd#
Extract the ETCD archive.
root@master1:/etc/etcd# tar xvzf etcd-v3.3.13-linux-amd64.tar.gz
Move the etcd binaries to /usr/local/bin.
root@master1:/etc/etcd# mv etcd-v3.3.13-linux-amd64/etc* /usr/local/bin/
root@master1:/etc/etcd#
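You can confirm the binaries are on the PATH (exact output depends on the version you downloaded):
root@master1:/etc/etcd# etcd --version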
Create an etcd systemd unit file.
root@master1:/etc/etcd# cat /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
--name 192.168.163.128 \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://192.168.163.128:2380 \
--listen-peer-urls https://192.168.163.128:2380 \
--listen-client-urls https://192.168.163.128:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.163.128:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster 192.168.163.128=https://192.168.163.128:2380,192.168.163.129=https://192.168.163.129:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Reload the daemon configuration.
root@master1:/# systemctl daemon-reload
Enable etcd to start at boot time, then start it and check its status.
root@master1:/# systemctl enable etcd
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
root@master1:/# systemctl start etcd
root@master1:/# systemctl status etcd
Repeat the same process on "master2".
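The unit file on master2 is identical except for the node-specific addresses, which point at 192.168.163.129 (the --initial-cluster line stays the same):
--name 192.168.163.129 \
--initial-advertise-peer-urls https://192.168.163.129:2380 \
--listen-peer-urls https://192.168.163.129:2380 \
--listen-client-urls https://192.168.163.129:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.163.129:2379 \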
Verify that the cluster is up and running.
root@master1:/# ETCDCTL_API=3 etcdctl member list
Error: context deadline exceeded
root@master1:/#
The "context deadline exceeded" error is expected while only one member is up. Once master2 is set up, you should see the etcd cluster running.
root@master2:/# ETCDCTL_API=3 etcdctl member list
77e697d00d4e81f4, started, 192.168.163.128, https://192.168.163.128:2380, https://192.168.163.128:2379
d598e1cd0e712919, started, 192.168.163.129, https://192.168.163.129:2380, https://192.168.163.129:2379
root@master2:/#
root@master1:/# ETCDCTL_API=3 etcdctl member list
77e697d00d4e81f4, started, 192.168.163.128, https://192.168.163.128:2380, https://192.168.163.128:2379
d598e1cd0e712919, started, 192.168.163.129, https://192.168.163.129:2380, https://192.168.163.129:2379
root@master1:/#
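To check health over the TLS endpoints as well, pass the certificates to etcdctl's v3 API (paths assume the certificates are in /etc/etcd):
root@master1:/# ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://192.168.163.128:2379,https://192.168.163.129:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem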
root@master1:/# netstat -antp | grep -i 2380
tcp 0 0 192.168.163.128:2380 0.0.0.0:* LISTEN 65946/etcd
tcp 0 0 192.168.163.128:2380 192.168.163.129:59570 ESTABLISHED 65946/etcd
tcp 0 0 192.168.163.128:2380 192.168.163.129:59572 ESTABLISHED 65946/etcd
tcp 0 0 192.168.163.128:41044 192.168.163.129:2380 ESTABLISHED 65946/etcd
tcp 0 0 192.168.163.128:41048 192.168.163.129:2380 ESTABLISHED 65946/etcd
tcp 0 0 192.168.163.128:41046 192.168.163.129:2380 ESTABLISHED 65946/etcd
tcp 0 0 192.168.163.128:2380 192.168.163.129:59576 ESTABLISHED 65946/etcd
tcp 0 0 192.168.163.128:41050 192.168.163.129:2380 ESTABLISHED 65946/etcd
root@master1:/# netstat -antp | grep -i 2379
tcp 0 0 192.168.163.128:2379 0.0.0.0:* LISTEN 65946/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 65946/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:59458 ESTABLISHED 65946/etcd
tcp 0 0 192.168.163.128:2379 192.168.163.128:56900 ESTABLISHED 65946/etcd
tcp 0 0 127.0.0.1:59458 127.0.0.1:2379 ESTABLISHED 65946/etcd
tcp 0 0 192.168.163.128:56900 192.168.163.128:2379 ESTABLISHED 65946/etcd
root@master1:/#
Initializing the master nodes: