Installing a highly available multi-master Kubernetes 1.13.2 cluster with kubeadm
1. Introduction
With the release of Kubernetes v1.13, kubeadm officially reached GA and can be used in production, so deploying Kubernetes clusters with kubeadm is the way things are heading. The Kubernetes image repositories now have a mirror on Alibaba Cloud in China, which makes deploying a cluster with kubeadm much simpler and easier. This article walks through a quick deployment of Kubernetes v1.13.2 with kubeadm.
Note: do not focus only on the deployment itself. If you are new to Kubernetes, I recommend first getting familiar with a binary-based deployment before learning the kubeadm approach. See the other articles on my blog for binary deployments.
2. Architecture
- OS: CentOS 7.6, kernel 3.10.0-957.el7.x86_64
- Kubernetes: v1.13.2
- Docker-ce: 18.06
- Recommended hardware: 2 CPU cores, 2 GB RAM
- Keepalived provides a highly available VIP for the apiserver, and Haproxy load-balances the apiserver. To keep the server count down, haproxy and keepalived run on node-01 and node-02.
| Hostname | Role | IP | Components |
| --- | --- | --- | --- |
| VIP | load-balancer VIP | 10.31.90.200 | |
| node-01 | master | 10.31.90.201 | kubeadm, kubelet, kubectl, docker, haproxy, keepalived |
| node-02 | master | 10.31.90.202 | kubeadm, kubelet, kubectl, docker, haproxy, keepalived |
| node-03 | master | 10.31.90.203 | kubeadm, kubelet, kubectl, docker |
| node-04 | node | 10.31.90.204 | kubeadm, kubelet, kubectl, docker |
| node-05 | node | 10.31.90.205 | kubeadm, kubelet, kubectl, docker |
| node-06 | node | 10.31.90.206 | kubeadm, kubelet, kubectl, docker |
| service CIDR | | 10.245.0.0/16 | |
3. Pre-deployment preparation
1) 關閉selinux和防火墻
```bash
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
```
2) Disable swap
```bash
swapoff -a
```
3) Add host resolution entries on every server
```bash
cat >> /etc/hosts <<EOF
10.31.90.201 node-01
10.31.90.202 node-02
10.31.90.203 node-03
10.31.90.204 node-04
10.31.90.205 node-05
10.31.90.206 node-06
EOF
```
4) Create and distribute the SSH key
Create an SSH key pair on node-01.
```
[root@node-01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:26z6DcUarn7wP70dqOZA28td+K/erv7NlaJPLVE1BTA root@node-01
The key's randomart image is:
+---[RSA 2048]----+
| E..o+|
| . o|
| . |
| . . |
| S o . |
| .o X oo .|
| oB +.o+oo.|
| .o*o+++o+o|
| .++o+Bo+=B*B|
+----[SHA256]-----+
```
Distribute node-01's public key so it can log in to the other servers without a password.
```bash
for n in `seq -w 01 06`; do ssh-copy-id node-$n; done
```
5) Configure kernel parameters
```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
```
6) Load the ipvs modules
```bash
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
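The original post never installs the ipvsadm tool, which is used later in the "check the ipvs state" step; a small sketch (an assumption on my part, using the stock CentOS repos):

```bash
# ipvsadm and ipset are only needed for inspecting and managing IPVS rules by hand
yum install -y ipvsadm ipset
```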
7) Add the yum repositories
```bash
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
```
4. Deploy keepalived and haproxy
1) Install keepalived and haproxy
Install keepalived and haproxy on node-01 and node-02.
```bash
yum install -y keepalived haproxy
```
2) Edit the configuration
Keepalived configuration
On node-01 the priority is 100 and on node-02 it is 90; everything else is identical (see the sketch after the config below).
```
[root@node-01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     feng110498@163.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 88
    advert_int 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.31.90.200/24
    }
}
```
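The post does not show node-02's file; since only the priority differs, one minimal sketch of deriving it from node-01's config (assuming the SSH keys distributed earlier) is:

```bash
# Copy node-01's keepalived config to node-02 and lower the priority to 90
scp /etc/keepalived/keepalived.conf node-02:/etc/keepalived/keepalived.conf
ssh node-02 "sed -ri 's/priority 100/priority 90/' /etc/keepalived/keepalived.conf"
```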
Haproxy configuration
The haproxy configuration is identical on node-01 and node-02. Here we listen on port 8443 of 10.31.90.200, because haproxy runs on the same servers as the k8s apiserver; using 6443 for both would cause a port conflict.
```
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8

defaults
    log global
    mode tcp
    retries 3
    option redispatch

listen https-apiserver
    bind 10.31.90.200:8443
    mode tcp
    balance roundrobin
    timeout server 900s
    timeout connect 15s
    server apiserver01 10.31.90.201:6443 check port 6443 inter 5000 fall 5
    server apiserver02 10.31.90.202:6443 check port 6443 inter 5000 fall 5
    server apiserver03 10.31.90.203:6443 check port 6443 inter 5000 fall 5
```
3) Start the services
```bash
systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
```
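The original post does not verify the load balancer; as a quick sanity check (a sketch, assuming the eth0 interface and addresses configured above), confirm that the VIP is up and haproxy is listening:

```bash
# The VIP 10.31.90.200 should be bound on eth0 on whichever node currently holds MASTER
ip addr show eth0 | grep 10.31.90.200

# haproxy should be listening on the VIP's 8443 port on both node-01 and node-02
ss -lnt | grep 8443
```

Note that haproxy can bind 10.31.90.200:8443 even on the node that does not currently hold the VIP, because net.ipv4.ip_nonlocal_bind = 1 was set in the kernel-parameter step.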
5. Deploy Kubernetes
1) Install the software
由于kubeadm對Docker的版本是有要求的,需要安裝與kubeadm匹配的版本。
Because releases change frequently, pin the exact version number; this article uses 1.13.2, and other versions have not been tested.
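The post does not list the install commands themselves; a minimal sketch, run on every node, assuming the aliyun repositories added earlier and the versions from the architecture section (the exact docker-ce build string may differ in your mirror):

```bash
# Check which 18.06 builds the docker-ce repo offers, then install one of them
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker

# Pin kubeadm, kubelet and kubectl to 1.13.2
yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2
systemctl enable kubelet
```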
2) Adjust the initialization configuration
Use `kubeadm config print init-defaults > kubeadm-init.yaml` to print the default configuration, then adjust it for your own environment.
```
[root@node-01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.90.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.31.90.200:8443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: "10.245.0.0/16"
scheduler: {}
controllerManager: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```
3) Pre-pull the images
```
[root@node-01 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
```
4) Initialize
```
[root@node-01 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.12.0.1 10.31.90.201 10.31.90.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503955 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-01" as an annotation
[mark-control-plane] Marking the node node-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1
```
kubeadm init mainly performs the following steps:
- [init]: initialize the cluster with the specified version.
- [preflight]: run pre-flight checks and pull the Docker images that are needed.
- [kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; the kubelet cannot start without it, so the kubelet started before initialization actually fails.
- [certificates]: generate the certificates Kubernetes uses and store them under /etc/kubernetes/pki.
- [kubeconfig]: generate the kubeconfig files under /etc/kubernetes; the components need them to communicate with each other.
- [control-plane]: install the master components from the YAML files under /etc/kubernetes/manifests.
- [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
- [wait-control-plane]: wait for the master components deployed as static Pods to start.
- [apiclient]: check the health of the master components.
- [uploadconfig]: upload the configuration used for the cluster.
- [kubelet]: configure the kubelet via a ConfigMap.
- [patchnode]: record the CRI socket information on the Node object as an annotation.
- [mark-control-plane]: label the current node with the master role and taint it as unschedulable, so that by default Pods are not scheduled onto master nodes.
- [bootstrap-token]: generate the bootstrap token; note it down, as it is used later with kubeadm join to add nodes to the cluster.
- [addons]: install the CoreDNS and kube-proxy add-ons.
5) Prepare the kubeconfig file for kubectl
By default, kubectl looks for a config file under the .kube directory in the home directory of the user running it. Here we copy the admin.conf generated during the [kubeconfig] step of initialization to .kube/config.
```
[root@node-01 ~]# mkdir -p $HOME/.kube
[root@node-01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node-01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
```
This config file records the API Server address, so subsequent kubectl commands can connect to the API Server directly.
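As an optional check (not in the original post), you can confirm that kubectl talks to the API server through the VIP rather than a single master:

```bash
# Should print https://10.31.90.200:8443, i.e. the controlPlaneEndpoint set in kubeadm-init.yaml
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```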
6) Check component status
```
[root@node-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@node-01 ~]# kubectl get node
NAME      STATUS     ROLES    AGE   VERSION
node-01   NotReady   master   14m   v1.13.2
```
There is currently only one node; its role is master and its status is NotReady.
7) Deploy the other masters
On node-01, copy the certificate files to the other master nodes.
```bash
USER=root
CONTROL_PLANE_IPS="node-02 node-03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
```
Run the following on the other masters; note the --experimental-control-plane flag.
```bash
kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1 --experimental-control-plane
```
Note: tokens have a limited lifetime. If the old token has expired, use kubeadm token create --print-join-command to create a new one.
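For example (a sketch, not from the original post), regenerate the join command on node-01 after the token has expired, and append --experimental-control-plane when the joining node is meant to be a master:

```bash
# Prints a fresh "kubeadm join 10.31.90.200:8443 --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command
```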
8) Deploy the worker nodes
Run the following on node-04, node-05 and node-06; note the absence of the --experimental-control-plane flag.
```bash
kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1
```
9) Deploy the flannel network plugin
The master nodes are NotReady because no network plugin has been installed yet, so communication between the nodes and the masters is not fully working. The most popular Kubernetes network plugins are Flannel, Calico, Canal and Weave; here we use flannel.
```
[root@node-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
10) Check node status
All nodes are now in the Ready state.
```
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   35m   v1.13.2
node-02   Ready    master   36m   v1.13.2
node-03   Ready    master   36m   v1.13.2
node-04   Ready    <none>   40m   v1.13.2
node-05   Ready    <none>   40m   v1.13.2
node-06   Ready    <none>   40m   v1.13.2
```
Check the pods
```
[root@node-01 ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-89cc84847-j8mmg           1/1     Running   0          1d
coredns-89cc84847-rbjxs           1/1     Running   0          1d
etcd-node-01                      1/1     Running   1          1d
etcd-node-02                      1/1     Running   0          1d
etcd-node-03                      1/1     Running   0          1d
kube-apiserver-node-01            1/1     Running   0          1d
kube-apiserver-node-02            1/1     Running   0          1d
kube-apiserver-node-03            1/1     Running   0          1d
kube-controller-manager-node-01   1/1     Running   2          1d
kube-controller-manager-node-02   1/1     Running   0          1d
kube-controller-manager-node-03   1/1     Running   0          1d
kube-proxy-jfbmv                  1/1     Running   0          1d
kube-proxy-lvkms                  1/1     Running   0          1d
kube-proxy-qx7kh                  1/1     Running   0          1d
kube-proxy-xst5v                  1/1     Running   0          1d
kube-proxy-zfwrk                  1/1     Running   0          1d
kube-proxy-ztg6j                  1/1     Running   0          1d
kube-scheduler-node-01            1/1     Running   1          1d
kube-scheduler-node-02            1/1     Running   1          1d
kube-scheduler-node-03            1/1     Running   1          1d
kube-flannel-ds-amd64-87wzj       1/1     Running   0          1d
kube-flannel-ds-amd64-lczwm       1/1     Running   0          1d
kube-flannel-ds-amd64-lwc2j       1/1     Running   0          1d
kube-flannel-ds-amd64-mwlfq       1/1     Running   0          1d
kube-flannel-ds-amd64-nj2mk       1/1     Running   0          1d
kube-flannel-ds-amd64-wx7vd       1/1     Running   0          1d
```
Check the ipvs state
```
[root@node-01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.245.0.1:443 rr
  -> 10.31.90.201:6443            Masq    1      2          0
  -> 10.31.90.202:6443            Masq    1      0          0
  -> 10.31.90.203:6443            Masq    1      2          0
TCP  10.245.0.10:53 rr
  -> 10.32.0.3:53                 Masq    1      0          0
  -> 10.32.0.4:53                 Masq    1      0          0
TCP  10.245.90.161:80 rr
  -> 10.45.0.1:80                 Masq    1      0          0
TCP  10.245.90.161:443 rr
  -> 10.45.0.1:443                Masq    1      0          0
TCP  10.245.149.227:1 rr
  -> 10.31.90.204:1               Masq    1      0          0
  -> 10.31.90.205:1               Masq    1      0          0
  -> 10.31.90.206:1               Masq    1      0          0
TCP  10.245.181.126:80 rr
  -> 10.34.0.2:80                 Masq    1      0          0
  -> 10.45.0.0:80                 Masq    1      0          0
  -> 10.46.0.0:80                 Masq    1      0          0
UDP  10.245.0.10:53 rr
  -> 10.32.0.3:53                 Masq    1      0          0
  -> 10.32.0.4:53                 Masq    1      0          0
```
The Kubernetes cluster deployment is now complete. If you run into any problems, feel free to leave a comment below. Thanks for reading, and please follow and like!
Reposted from: https://blog.51cto.com/billy98/2350660