kubernetes05 - Deploying a highly available master with kubeadm

00: Introduction

This article shows how to deploy highly available Kubernetes master nodes with kubeadm, using keepalived + haproxy to front the control plane.

01: Installing the Kubernetes dashboard

Before building the master cluster, install the dashboard so we can take a look at the Kubernetes UI.

GitHub: https://github.com/kubernetes/dashboard . The dashboard is released independently of Kubernetes, so check the release notes for compatible Kubernetes versions before installing.

We install v2.3.1.

# download the dashboard manifest
cd /etc/kubernetes/
mkdir dashboard && cd dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# point the images at the local harbor mirror
sed -i 's#kubernetesui/dashboard:v2.3.1#harbor.linux98.com/google_containers/dashboard:v2.3.1#g' recommended.yaml
sed -i 's#kubernetesui/metrics-scraper:v1.0.6#harbor.linux98.com/google_containers/metrics-scraper:v1.0.6#g' recommended.yaml

Change the Service's exposed port to a NodePort:

(screenshot)
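The edit shown in the screenshot switches the dashboard Service from ClusterIP to NodePort 30443. As a sketch of that change on a simplified excerpt (the file below is a hypothetical stand-in, not the full recommended.yaml; line positions in the real file differ):

```shell
# hypothetical, simplified excerpt of the dashboard Service spec
cat > /tmp/dashboard-svc.yaml <<'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF

# switch the Service type to NodePort and pin the port to 30443
sed -i -e 's/^spec:/spec:\n  type: NodePort/' \
       -e 's/targetPort: 8443/targetPort: 8443\n      nodePort: 30443/' \
       /tmp/dashboard-svc.yaml

cat /tmp/dashboard-svc.yaml
```

The same two substitutions, applied to recommended.yaml, produce the Service shown in the screenshot.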

Install the dashboard:

kubectl apply -f recommended.yaml

Check the status:

(screenshot)

The NodePort service opens port 30443 on every node; browsing to https://<any-node-ip>:30443 brings up the dashboard login page.

(screenshot)

Generate a token on the master node:

# create a dedicated service account
kubectl create serviceaccount dashboard-admin -n kube-system
# bind it to the cluster-admin cluster role
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# list the generated token secret
kubectl -n kube-system get secret | grep dashboard-admin
# show the token's details (the secret name suffix comes from the previous command's output)
kubectl describe secrets -n kube-system dashboard-admin-token-jfmph
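To print just the bearer token instead of the full describe output, a jsonpath one-liner along these lines should work on this Kubernetes version, where token secrets are still auto-created for service accounts:

```
# look up the auto-created secret for dashboard-admin and decode its token
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d
```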

(screenshot)

02: Configuring haproxy + keepalived

Build the HA proxy for the cluster. kubeadm must be pointed at the VIP during initialization, so haproxy + keepalived come first. For now keep only one master behind the VIP; the others are enabled after the remaining masters are configured.

(screenshot)

Install the base packages on ha01 and ha02:

apt install keepalived haproxy -y

Configure keepalived on the master node (ha01):

# /etc/keepalived/keepalived.conf
global_defs {
    router_id ha01
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.20.200.200 dev eth0 label eth0:1
    }
}

Configure the keepalived health-check script:

#!/bin/bash
# /etc/keepalived/check_haproxy.sh -- make it executable with chmod +x
haproxy_status=$(ps -C haproxy --no-header | wc -l)
if [ "$haproxy_status" -eq 0 ];then
    # haproxy is down: try to restart it once
    systemctl start haproxy
    sleep 3
    # still down: stop keepalived so the VIP fails over to the backup
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ];then
        killall keepalived
    fi
fi

Configure keepalived on the backup node (ha02):

global_defs {
    router_id ha02
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.20.200.200 dev eth0 label eth0:1
    }
}

The check script is identical to the one on the master.

Configure haproxy on ha01 and ha02:

listen status
    bind 172.20.200.200:9999
    mode http
    log global
    stats enable
    stats uri /haproxy-stats
    stats auth haadmin:123456

listen k8s-api-6443
    bind 172.20.200.200:6443
    mode tcp
    server master1 172.20.200.201:6443 check inter 3s fall 3 rise 5
    # server master2 172.20.200.202:6443 check inter 3s fall 3 rise 5
    # server master3 172.20.200.203:6443 check inter 3s fall 3 rise 5

Restart the services.

haproxy will not start on ha02, because the VIP is currently held by ha01 and haproxy cannot bind an address that is not local.

systemctl restart keepalived
systemctl restart haproxy
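If haproxy on ha02 should start even while the VIP sits on ha01, allowing non-local binds is a common workaround (a sketch; verify the setting on your distribution):

```
# /etc/sysctl.conf -- allow binding to an address not yet assigned locally
net.ipv4.ip_nonlocal_bind = 1
```

Apply it with sysctl -p, then restart haproxy on ha02.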

03: Configuring kubeadm on master01

kubeadm-master01

  1. Reset the existing k8s cluster
  2. Re-run kubeadm init, pointing it at the HA VIP

Reset the cluster; master01 and node01-03 all need this:

kubeadm reset

# remove stale kubeconfig, manifests and CNI config
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/cni/
# inspect the remaining interfaces, then tear down the flannel/cni devices
ifconfig
ifconfig flannel.1 down
ifconfig cni0 down
ip link delete cni0
ip link delete flannel.1

Initialize the cluster with kubeadm init:

# --control-plane-endpoint is the keepalived VIP; --apiserver-advertise-address is this node's own IP
kubeadm init --kubernetes-version=1.22.1 \
--apiserver-advertise-address=172.20.200.201 \
--control-plane-endpoint=172.20.200.200 \
--apiserver-bind-port=6443 \
--image-repository=harbor.linux98.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=Swap

Initialization succeeds:

(screenshot)

Set up kubectl access for root (the same commands kubeadm prints at the end of init):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Configure flannel. A typical apply, assuming the standard upstream manifest (adjust the URL to your flannel version or mirror):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Adjust the scheduler port so kubectl get cs reports it healthy. A common fix on this version (verify against your own manifest):

# comment out "- --port=0" in the static pod manifest; kubelet restarts the pod automatically
sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml

Once this is done, use the join commands from the init output to add the remaining nodes.

04: Joining the other two masters

The control-plane join command printed at init time cannot be used as-is: it is missing one piece of information, the certificate key.

kubeadm join 172.20.200.200:6443 --token 737svp.28ynn6xwmx26a9rg \
--discovery-token-ca-cert-hash sha256:fa0635c154aa68a2d63705a0449da695f19ea1262b1707c670db03a17abe14e1 \
--control-plane

Generate the required certificate-key:

kubeadm init phase upload-certs --upload-certs
# output: the certificate key
bd8eb662009fd12c0b28b553a5ac36a8cb4bfb73d1e00f98dc119ccb04f19580
Append it to the join command with --certificate-key:

kubeadm join 172.20.200.200:6443 --token 737svp.28ynn6xwmx26a9rg \
--discovery-token-ca-cert-hash sha256:fa0635c154aa68a2d63705a0449da695f19ea1262b1707c670db03a17abe14e1 \
--control-plane --certificate-key bd8eb662009fd12c0b28b553a5ac36a8cb4bfb73d1e00f98dc119ccb04f19580

Prepare the environment on the other masters

Network settings, kernel parameters, package sources and installed software are the same as on master01.

Join them to the cluster with kubeadm:

kubeadm join 172.20.200.200:6443 --token 737svp.28ynn6xwmx26a9rg \
--discovery-token-ca-cert-hash sha256:fa0635c154aa68a2d63705a0449da695f19ea1262b1707c670db03a17abe14e1 \
--control-plane --certificate-key bd8eb662009fd12c0b28b553a5ac36a8cb4bfb73d1e00f98dc119ccb04f19580

Output after joining:

(screenshot)

Following the printed instructions, set up kubectl access:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status:

(screenshot)

The other masters do not need the scheduler tweak or flannel applied again; the configuration is pulled from master01.

(screenshot)

05: Uncommenting the other master backends in haproxy

(screenshot)
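With all three masters joined, the uncommented backend list in /etc/haproxy/haproxy.cfg reads:

```
listen k8s-api-6443
    bind 172.20.200.200:6443
    mode tcp
    server master1 172.20.200.201:6443 check inter 3s fall 3 rise 5
    server master2 172.20.200.202:6443 check inter 3s fall 3 rise 5
    server master3 172.20.200.203:6443 check inter 3s fall 3 rise 5
```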

systemctl restart haproxy.service

Check the backend status in the haproxy stats page:

(screenshot)

06: Joining the worker nodes

# kubeadm token create --print-join-command
kubeadm join 172.20.200.200:6443 --token sarnqy.q8qj98cem1tsmaxn --discovery-token-ca-cert-hash sha256:fa0635c154aa68a2d63705a0449da695f19ea1262b1707c670db03a17abe14e1

Then check the node status on master01:

(screenshot)

07: Reinstalling the dashboard

The yaml configuration steps are the same as before and are omitted here.

Add a proxy port in haproxy:

listen k8s-dashboard-api-30443
    bind 172.20.200.200:30443
    mode tcp
    server master1 172.20.200.201:30443 check inter 3s fall 3 rise 5
    server master2 172.20.200.202:30443 check inter 3s fall 3 rise 5
    server master3 172.20.200.203:30443 check inter 3s fall 3 rise 5

systemctl restart haproxy

Generate an access token:

# create a dedicated service account
kubectl create serviceaccount dashboard-admin -n kube-system
# bind it to the cluster-admin cluster role
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# list the generated token secret
kubectl -n kube-system get secret | grep dashboard-admin
# show the token's details (the secret name suffix comes from the previous command's output)
kubectl describe secrets -n kube-system dashboard-admin-token-jfmph

Test access from a browser.

08: Removing the other master nodes

This works much like removing a worker node; the same commands apply, just take care with the node name.
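A sketch of the usual sequence, with master3 as a hypothetical node name:

```
# on master01: evacuate the node, then deregister it
kubectl drain master3 --ignore-daemonsets --delete-emptydir-data --force
kubectl delete node master3
# on the node being removed: wipe the local kubeadm state
kubeadm reset
```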