kubernetes06 - Deploying a Highly Available Master with kubeadm (continued)

00: Introduction

This article continues the previous one and shows how to deploy highly available master nodes with kubeadm.

06 - Initializing the cluster with a YAML config file

In the previous article we initialized the cluster with command-line flags. When you need to customize the cluster and keep a record of that customization for later reference, it is better to initialize from a YAML configuration file.

01: Creating the YAML config file

cd /root/mykube/init
kubeadm config print init-defaults > kube-init-v1221.yaml

Then customize the generated YAML file:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 48h0m0s # token lifetime
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.200.201 # IP address of this node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: kubeadm-master01 # hostname of this node
  taints: # add a "taint" to the master
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: 172.20.200.200:6443 # address of the reverse proxy
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: harbor.linux98.com/google_containers # image registry
kind: ClusterConfiguration
kubernetesVersion: 1.22.1 # cluster version
networking: # network settings
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # flannel's default Pod network
  serviceSubnet: 10.96.0.0/12 # Service subnet
scheduler: {}
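
Before running `kubeadm init` with this file, it is worth sanity-checking the fields you customized. A minimal sketch: here an excerpt with the values from above is written to a temp file purely for illustration; in practice point `CFG` at `/root/mykube/init/kube-init-v1221.yaml` instead.

```shell
# Illustrative excerpt of the customized keys (values assumed from the config above);
# on a real master set CFG=/root/mykube/init/kube-init-v1221.yaml and skip the heredoc.
CFG=/tmp/kube-init-v1221.yaml
cat > "$CFG" <<'EOF'
advertiseAddress: 172.20.200.201
controlPlaneEndpoint: 172.20.200.200:6443
imageRepository: harbor.linux98.com/google_containers
kubernetesVersion: 1.22.1
podSubnet: 10.244.0.0/16
EOF
# Print the fields that must match your environment before initializing
grep -E 'advertiseAddress|controlPlaneEndpoint|imageRepository|kubernetesVersion|podSubnet' "$CFG"
```

You can also pre-pull the images resolved through imageRepository with `kubeadm config images pull --config kube-init-v1221.yaml`, so the init step does not have to wait on downloads.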

02: Resetting the nodes

All of the previously configured nodes must be reset first. Then, on ha01, comment out the master02 and master03 backends so that 172.20.200.200 forwards only to 172.20.200.201. Run the reset on every node:

kubeadm reset

Then clean up the remaining Kubernetes state:

rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/cni/
ifconfig
ifconfig flannel.1 down
ifconfig cni0 down
ip link delete cni0
ip link delete flannel.1
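
On a node where flannel was never started, the `ifconfig`/`ip link delete` lines above will simply error out. A more defensive sketch (same interface names as above) that only removes the interfaces when they actually exist, logging what it did to /tmp/cni-cleanup.log:

```shell
# Delete the CNI interfaces only if present, so the cleanup is safe to re-run
for ifc in cni0 flannel.1; do
  if command -v ip >/dev/null 2>&1 && ip link show "$ifc" >/dev/null 2>&1; then
    ip link set "$ifc" down
    ip link delete "$ifc"
    echo "$ifc: deleted"
  else
    echo "$ifc: not present, skipping"
  fi
done | tee /tmp/cni-cleanup.log
```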

On ha01 (/etc/haproxy/haproxy.cfg):

listen k8s-api-6443
    bind 172.20.200.200:6443
    mode tcp
    server master1 172.20.200.201:6443 check inter 3s fall 3 rise 5
    # server master2 172.20.200.202:6443 check inter 3s fall 3 rise 5
    # server master3 172.20.200.203:6443 check inter 3s fall 3 rise 5

Restart haproxy:

systemctl restart haproxy

03: Initializing the first master node

kubeadm init --config kube-init-v1221.yaml


Then, following the steps from the previous articles, reconfigure the scheduler and flannel.

04: Joining the other master nodes

First upload the control-plane certificates on master01 and note the certificate key it prints:

# kubeadm init phase upload-certs --upload-certs
d0c8f25780bf7fdfabd99cd3c3321480c01340397d76faec07e637c6ff400f81

Then join master02 and master03 as control-plane nodes:

kubeadm join 172.20.200.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:e25e7cc49692642c7f672785ebf9ac76ab8d3fd8ad3e3ac586e5a6d395b20af9 \
--control-plane --certificate-key d0c8f25780bf7fdfabd99cd3c3321480c01340397d76faec07e637c6ff400f81
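
The sha256 value passed to --discovery-token-ca-cert-hash is the SHA-256 digest of the cluster CA's DER-encoded public key. On a master you would compute it from /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway certificate purely to demonstrate the pipeline:

```shell
# Generate a throwaway CA cert for demonstration only; on a real master,
# replace /tmp/demo-ca.crt with /etc/kubernetes/pki/ca.crt and skip this step
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# Extract the public key, DER-encode it, and take its SHA-256 digest
openssl x509 -pubkey -in /tmp/demo-ca.crt -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The printed 64-character hex string is what goes after `sha256:` in the join command.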


On master02 and master03, set up kubectl access for the current user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, re-enable master02 and master03 in haproxy.

On ha01 (/etc/haproxy/haproxy.cfg):

listen k8s-api-6443
    bind 172.20.200.200:6443
    mode tcp
    server master1 172.20.200.201:6443 check inter 3s fall 3 rise 5
    server master2 172.20.200.202:6443 check inter 3s fall 3 rise 5
    server master3 172.20.200.203:6443 check inter 3s fall 3 rise 5

Restart haproxy:

systemctl restart haproxy

05: Joining the worker nodes

Generate a fresh join command on one of the masters, then run it on each worker node:

# kubeadm token create --print-join-command
kubeadm join 172.20.200.200:6443 --token 18xyt1.cy19ybc7tys6z2xu --discovery-token-ca-cert-hash sha256:e25e7cc49692642c7f672785ebf9ac76ab8d3fd8ad3e3ac586e5a6d395b20af9
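
Bootstrap tokens have a fixed format: six characters, a dot, then sixteen characters, all drawn from [a-z0-9]. A quick format check, using the throwaway token value printed above:

```shell
# kubeadm bootstrap tokens must match [a-z0-9]{6}.[a-z0-9]{16}
TOKEN=18xyt1.cy19ybc7tys6z2xu
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' \
  && echo "token format OK" || echo "token format INVALID"
```

Tokens created this way default to a 24-hour TTL; `kubeadm token list` on a master shows the remaining lifetime.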
