


K8s cluster deployment (installing k8s with kubeadm)


Note: for some of the steps below, read the error notes first, then carry out the operation!


Environment preparation (1. CentOS 7.7 OS configuration)

#---------------------------------------------------------------------------
# (1) Set the hostname
hostnamectl set-hostname <hostname>
# This change is permanent and appears to take effect immediately; reconnect the
# shell (a reboot should not be required). After the change, /etc/hostname is
# updated automatically, but /etc/hosts is not and must be edited by hand:
vim /etc/hosts
192.168.182.134 zkc-master
192.168.182.135 zkc-slave-1
192.168.182.130 zkc-slave-2
192.168.182.133 zkc-slave-3
# or append the same entries with a heredoc
cat <<EOF >>/etc/hosts
192.168.182.134 zkc-master
192.168.182.135 zkc-slave-1
192.168.182.130 zkc-slave-2
192.168.182.133 zkc-slave-3
EOF
# Changes to /etc/hosts take effect immediately.
#---------------------------------------------------------------------------
# The following steps must be run on every node, not just on a single machine.
# (2) Disable the firewall
systemctl stop firewalld        # stop now (temporary)
systemctl disable firewalld     # disable on boot (permanent)
#---------------------------------------------------------------------------
# (3) Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary
#---------------------------------------------------------------------------
# (4) Disable swap
# Option 1
swapoff -a          # temporary
vim /etc/fstab      # permanent (does not take effect until reboot)
# comment out the swap line, e.g.
# #UUID=a7a5896b-8706-4837-a767-aa0a79b81278 swap swap defaults 0 0
free -m             # swap should show 0 when the change is active
# Option 2
echo "vm.swappiness = 0" >> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p
# Option 3 (recommended)
swapoff -a                            # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
#---------------------------------------------------------------------------
# (5) Pass bridged IPv4 traffic to the iptables chains
# Load the required kernel modules automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the kernel modules now
modprobe overlay
modprobe br_netfilter
# Set the required sysctl parameters so they persist across reboots (option 1)
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
# do not use swap unless the system is out of memory
vm.swappiness = 0
EOF
# Set the required sysctl parameters (option 2)
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
# Apply the sysctl parameters without rebooting
sysctl --system
#---------------------------------------------------------------------------
# (6) Install dependencies
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc
#---------------------------------------------------------------------------
# (7) Enable IPVS
# Install IPVS (skip if already installed in step 6)
yum -y install ipset ipvsadm
# Create the ipvs.modules file
# Option 1
vi /etc/sysconfig/modules/ipvs.modules
# file contents:
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# Option 2
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make it executable and run it
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Reboot
reboot
# Check that the modules are loaded
lsmod | grep ip_vs_rr
# or
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
#---------------------------------------------------------------------------
# (8) Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
#---------------------------------------------------------------------------
# (9) Install Docker
# (10) Configure a Docker registry mirror
# (11) Configure the yum and EPEL sources
#---------------------------------------------------------------------------
# (12) Set up passwordless SSH between the hosts
# On all four servers, generate a key pair (press Enter repeatedly to accept the defaults)
ssh-keygen -t rsa
# On the three slave servers, copy the public key to the master (enter the system
# password when prompted). Note: if a host also runs ssh-copy-id against its own
# address, e.g. ssh-copy-id 8.140.25.1, it copies its own key to itself, so after
# the distribution step below every host can reach every other host without a
# password; otherwise the master can reach the others, but the others still need
# a password to reach it.
ssh-copy-id 192.168.182.134
# On the master, check that .ssh/authorized_keys contains the public keys
# (3 keys if the master did not run ssh-copy-id 192.168.182.134 against itself)
cat /root/.ssh/authorized_keys
# Append the master's own public key to authorized_keys (it should then contain
# 4 keys; skip this if the master already ran ssh-copy-id 192.168.182.134)
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# From the master, distribute authorized_keys to the other three servers
scp /root/.ssh/authorized_keys root@192.168.182.135:/root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys root@192.168.182.130:/root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys root@192.168.182.133:/root/.ssh/authorized_keys
# Verify
ssh 192.168.182.135
ssh 192.168.182.130
ssh 192.168.182.133
ssh 192.168.182.134
#---------------------------------------------------------------------------
# (13) Change the Docker cgroup driver to systemd
# If the master is not changed but the slaves are, kubeadm join cannot add the
# nodes (connection refused); if the slaves are not changed, kubeadm join prints
# a warning (whether it works without the change is untested; I fixed the warning
# as soon as it appeared).
# Check the current driver first
docker info | grep Cgroup
# Edit /etc/docker/daemon.json
vim /etc/docker/daemon.json
# add the following
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker
systemctl restart docker
# Check the driver again
docker info | grep Cgroup
#---------------------------------------------------------------------------
# (14) Kubernetes yum repository (an Aliyun mirror; the version copied from
# Aliyun's own page did not work for me, this one came from a web search)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all
yum -y makecache
#---------------------------------------------------------------------------
# (15) Install specific versions of kubeadm, kubelet and kubectl, in preparation
# for pulling the matching images below
# List the installable versions
yum list kubeadm --showduplicates | sort -r
# Or check the latest available version first
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# The upstream repo is not fully synced, so the GPG check on the index may fail;
# in that case install with: yum install -y --nogpgcheck kubelet kubeadm kubectl
yum install -y kubelet-1.20.1-0 kubeadm-1.20.1-0 kubectl-1.20.1-0 --disableexcludes=kubernetes
# or without the flag (--disableexcludes=kubernetes prevents package conflicts)
yum install -y kubelet-1.20.1-0 kubeadm-1.20.1-0 kubectl-1.20.1-0
# Configure the kubelet (if not already done). Without this the cluster may fail
# to start; it keeps the kubelet's cgroup driver consistent with Docker's.
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
# Enable on boot
systemctl enable kubelet
# or
systemctl enable kubelet && systemctl start kubelet
#---------------------------------------------------------------------------
# (16) kubectl command auto-completion
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile
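Before moving on, it can help to confirm on every node that the settings above actually took effect. The following is a small, optional sanity-check sketch (the expected values in the comments are just the ones this guide aims for):

#!/bin/bash
# prerequisite-check.sh - optional, run on every node after the steps above
echo "hostname:      $(hostname)"
echo "selinux:       $(getenforce)"                         # expect Disabled or Permissive
echo "firewalld:     $(systemctl is-active firewalld)"      # expect inactive
echo "swap (MB):     $(free -m | awk '/Swap/ {print $2}')"  # expect 0
lsmod | grep -q br_netfilter && echo "br_netfilter:  loaded" || echo "br_netfilter:  MISSING"
sysctl -n net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # expect 1 and 1
docker info 2>/dev/null | grep -i 'cgroup driver'            # expect systemd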

Environment preparation (2. downloading the k8s cluster images)

If kubeadm init is run with an image repository specified (as it is below), this step can be skipped.

If no repository is specified, the images have to be prepared manually, because the foreign registries cannot be downloaded from. (I have not actually tried running init without a repository, but in that case the images definitely have to be prepared in advance.)

# List the images the cluster installation needs
kubeadm config images list

# (This is the approach I used; the kubeadm init command below specifies the repository.)
# Images pulled from Aliyun when the repository is specified in
# kubeadm init --apiserver-advertise-address=192.168.182.134 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.81.0.0/16
# (without --image-repository the images have to be downloaded manually, see below):
registry.aliyuncs.com/google_containers/kube-proxy:v1.20.1
registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0
registry.aliyuncs.com/google_containers/pause:3.2

# If the same kubeadm init command is run without --image-repository, the images
# have to be downloaded manually.
# These cannot be pulled from inside China:
docker pull k8s.gcr.io/kube-apiserver:v1.20.1
docker pull k8s.gcr.io/kube-controller-manager:v1.20.1
docker pull k8s.gcr.io/kube-scheduler:v1.20.1
docker pull k8s.gcr.io/kube-proxy:v1.20.1
docker pull k8s.gcr.io/pause:3.2
docker pull k8s.gcr.io/etcd:3.4.13-0
docker pull k8s.gcr.io/coredns:1.7.0
# Domestic mirror images:
docker pull rekca/kube-apiserver:v1.20.1
docker pull rekca/kube-controller-manager:v1.20.1
docker pull rekca/kube-scheduler:v1.20.1
docker pull rekca/kube-proxy:v1.20.1
docker pull rekca/pause:3.2
docker pull rekca/etcd:3.4.13-0
docker pull rekca/coredns:1.7.0
# Re-tag them to the names kubeadm expects
docker tag rekca/kube-apiserver:v1.20.1 k8s.gcr.io/kube-apiserver:v1.20.1
docker tag rekca/kube-controller-manager:v1.20.1 k8s.gcr.io/kube-controller-manager:v1.20.1
docker tag rekca/kube-scheduler:v1.20.1 k8s.gcr.io/kube-scheduler:v1.20.1
docker tag rekca/kube-proxy:v1.20.1 k8s.gcr.io/kube-proxy:v1.20.1
docker tag rekca/pause:3.2 k8s.gcr.io/pause:3.2
docker tag rekca/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag rekca/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0

# Another source of mirrored images:
https://hub.docker.com/u/mirrorgooglecontainers
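Instead of pulling and re-tagging images one by one, kubeadm itself can pre-pull everything the control plane needs. A minimal sketch, assuming the same Aliyun mirror and version used by the init command later in this guide:

# pre-pull the control-plane images from the Aliyun mirror (run on the master)
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.1
# verify the images are present locally
docker images | grep registry.aliyuncs.com/google_containers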

Host preparation

IP               Hostname
192.168.182.134  master
192.168.182.135  slave-1
192.168.182.130  slave-2
192.168.182.133  slave-3

Deploying the Kubernetes master

# 1. Initialize (run only on the master)
kubeadm init \
  --apiserver-advertise-address=192.168.182.134 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.81.0.0/16

# If you run this variant: with the flannel plugin, --pod-network-cidr must equal
# the Network value in kube-flannel.yml, i.e. you will have to edit Network in
# kube-flannel.yml to match; with calico, change the 192.168.0.0/16 default in
# calico.yaml (the default in calico 3.14) to the same value as --pod-network-cidr.
kubeadm init --apiserver-advertise-address=192.168.182.134 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.81.0.0/16

# If you run this variant: with flannel, --pod-network-cidr already equals the
# default Network value in kube-flannel.yml, so nothing needs editing.
kubeadm init --apiserver-advertise-address=192.168.182.134 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

# Without specifying a repository
kubeadm init --apiserver-advertise-address=192.168.182.134 --kubernetes-version v1.20.1 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.81.0.0/16

# Parameter notes:
# --apiserver-advertise-address : the server IP the node binds to (use it to pick an IP on multi-NIC hosts)
# --image-repository            : image repository; pulling from Aliyun works around the unreachable k8s.gcr.io registry
# --kubernetes-version          : the version to install
# --service-cidr                : a separate IP range for service virtual IPs
# --pod-network-cidr            : the pod (container) subnet

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Check the nodes
kubectl get nodes

# Prompt
# input : cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# output: cp: overwrite '/root/.kube/config'?
# just answer y
cp: overwrite '/root/.kube/config'? y

# Error (when Docker on the master is still on the default cgroup driver)
# Docker's cgroup driver should be systemd. If the driver was not changed during
# environment preparation, the kubeadm join in "Deploying the Kubernetes nodes"
# below fails as well. The error looks like this:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
# (the two lines above repeat several times)
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
# In that case:
# check the current driver
docker info | grep Cgroup
# edit /etc/docker/daemon.json and add
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# restart Docker
systemctl restart docker
# check the driver again
docker info | grep Cgroup

# Re-initialize
kubeadm reset   # reset first

# then run step 1 again
kubeadm init \
  --apiserver-advertise-address=192.168.182.134 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.81.0.0/16
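If you are working as root, the kubeadm init output also offers an alternative to copying admin.conf: pointing KUBECONFIG straight at it. A small sketch:

# root-only alternative to copying admin.conf into ~/.kube
export KUBECONFIG=/etc/kubernetes/admin.conf
# make it persistent across logins
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
kubectl get nodes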

Deploying the Kubernetes nodes

# Add new nodes to the cluster
# Copy the join command printed by kubeadm init (do not use the literal one here)
# Run it on 192.168.182.135, 192.168.182.130 and 192.168.182.133 (slave-1, slave-2, slave-3)
kubeadm join 192.168.182.134:6443 --token p3e2o2.umqv1zsbxu7uyruc \
    --discovery-token-ca-cert-hash sha256:290da2c61082ab5566de2b598818a8b6f45939c6a82be95b0113162b6656cfb4

# Check the nodes (on the master)
kubectl get nodes

# The token is valid for 24 hours by default; once it expires it can no longer be
# used and a new one has to be created:
kubeadm token create --print-join-command

# Error
# error execution phase preflight: couldn't validate the identity of the API Server: Get "https://192.168.182.134:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp 192.168.182.134:6443: connect: no route to host
# To see the stack trace of this error execute with --v=5 or higher
# Solution 1: regenerate the token on the master
[root@zkc-master ~]# kubeadm token generate      # generate a token
ey7p23.852cnnyd47tx2pt3                          # used by the next command
[root@zkc-master ~]# kubeadm token create ey7p23.852cnnyd47tx2pt3 --print-join-command --ttl=0   # print the join command for that token
kubeadm join 192.168.182.134:6443 --token anelsh.v91a6sc5fshzok0e --discovery-token-ca-cert-hash sha256:3a90063656a1106d2c5e0f3cfa91eabeaa1d1465ab283d888aef4da1057757cc
# Solution 2: the k8s API server is unreachable; check and disable firewalld and
# SELinux on all servers
[root@master ~]# setenforce 0
[root@master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
[root@master ~]# systemctl disable firewalld --now
# Solution 3: the above did not help in my case; after disabling the firewall and
# rebooting the server it worked

# Error
# error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
# To see the stack trace of this error execute with --v=5 or higher
# Solution
swapoff -a
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# Error
# error execution phase preflight: [preflight] Some fatal errors occurred:
#   [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
#   [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
# [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
# Solution: reset and re-initialize
kubeadm reset   # reset first

# Warning
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
# Solution (same cgroup driver change as in the environment preparation; running it
# on the slaves is enough, and you can wait until the warning appears before doing it)
# Use the systemd driver (without the change the steps below print this warning:
# "cgroupfs" detected as the Docker cgroup driver, systemd is recommended)
# check the current driver
docker info | grep Cgroup
# edit /etc/docker/daemon.json and add
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# restart Docker
systemctl restart docker
# check the driver again
docker info | grep Cgroup
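After joining, kubectl get nodes shows the slaves with ROLES <none>, which is normal. If you want the column to read worker instead, a role label can be added on the master; an optional sketch using the node names from this guide:

# optional: label the worker nodes so the ROLES column is not <none>
kubectl label node zkc-slave-1 node-role.kubernetes.io/worker=worker
kubectl label node zkc-slave-2 node-role.kubernetes.io/worker=worker
kubectl label node zkc-slave-3 node-role.kubernetes.io/worker=worker
kubectl get nodes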

Deploying a CNI network plugin (choose one of calico or flannel)

CNI is the Container Network Interface; its job is to give containers network connectivity across hosts. The pod IP range is also referred to as the CIDR. The plugin is installed on the master node; the sections below use the two most common plugins, calico and flannel, as examples.
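Before installing either plugin, it is worth double-checking which pod CIDR the cluster was actually initialized with, so the plugin manifest can be matched to it. A quick sketch, run on the master:

# show the cluster-cidr the controller-manager was started with
grep cluster-cidr /etc/kubernetes/manifests/kube-controller-manager.yaml
# or read it from the running static pod
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep cluster-cidr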

Installing the flannel network plugin (on the master)

Note:

[root@zkc-master ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                       READY   STATUS              RESTARTS   AGE
kube-flannel   kube-flannel-ds-9696z      0/1     CrashLoopBackOff    1          13s
kube-flannel   kube-flannel-ds-d6tk9      0/1     CrashLoopBackOff    1          13s
kube-flannel   kube-flannel-ds-l9p4p      0/1     CrashLoopBackOff    1          13s
kube-flannel   kube-flannel-ds-trdd9      0/1     CrashLoopBackOff    1          13s
kube-system    coredns-7f89b7bc75-gp9ff   0/1     ContainerCreating   0          10h
kube-system    coredns-7f89b7bc75-lqpsm   0/1     ContainerCreating   0          10h

This error occurs when kubeadm init was run without --pod-network-cidr 10.244.0.0/16, or when the "Network": "10.244.0.0/16" value in kube-flannel.yml does not match --pod-network-cidr. Changing the Network value in the yml to the same subnet resolves it.
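A quick way to make the two match without editing the file by hand is a one-line sed; the CIDRs below are just the ones used in this guide, adjust them to your own kubeadm init value:

# replace the default flannel Network with the --pod-network-cidr used at init time
sed -i 's#"Network": "10.244.0.0/16"#"Network": "10.81.0.0/16"#' kube-flannel.yml
# confirm the change
grep '"Network"' kube-flannel.yml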

Alternatively, if you initialized with the command below, no change is needed:

kubeadm init --apiserver-advertise-address=192.168.182.134 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

# kube-flannel.yml is hosted on servers outside China; the copy referenced below
# can be used when building the cluster. The flannel pods run in the kube-flannel
# namespace.
# Download link (may not be reachable)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# mirror repository on gitee
https://gitee.com/mirrors/flannel

# Point the image references in kube-flannel.yml at a domestic mirror
sed -i 's/quay.io/quay-mirror.qiniu.com/g' kube-flannel.yml

# If "Network": "10.244.0.0/16" in kube-flannel.yml differs from --pod-network-cidr,
# edit kube-flannel.yml to match

# Install the network plugin
kubectl apply -f kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Check the installation status
kubectl get pods --all-namespaces

# View the logs of a flannel pod
kubectl -n kube-flannel logs kube-flannel-ds-9696z
# view a single pod
kubectl get pod -n kube-flannel kube-flannel-ds-9696z

# Pods in the kube-system namespace, wide output (flannel rollout progress)
kubectl get pod -n kube-system -owide
# Pods in the kube-system namespace (flannel rollout progress)
kubectl get pods -n kube-system
# Pods in the kube-flannel namespace (flannel rollout progress)
kubectl get pods -n kube-flannel
# Pods in all namespaces (flannel rollout progress; recommended)
kubectl get pods --all-namespaces

# Result
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-6d8wn                1/1     Running   0          24m
kube-flannel   kube-flannel-ds-bzgll                1/1     Running   0          24m
kube-flannel   kube-flannel-ds-fsb9m                1/1     Running   0          24m
kube-flannel   kube-flannel-ds-wdczg                1/1     Running   0          24m
kube-system    coredns-7f89b7bc75-gp9ff             1/1     Running   0          11h
kube-system    coredns-7f89b7bc75-lqpsm             1/1     Running   0          11h
kube-system    etcd-zkc-master                      1/1     Running   0          11h
kube-system    kube-apiserver-zkc-master            1/1     Running   0          11h
kube-system    kube-controller-manager-zkc-master   1/1     Running   0          11h
kube-system    kube-proxy-5p6vd                     1/1     Running   0          11h
kube-system    kube-proxy-bmt6k                     1/1     Running   0          11h
kube-system    kube-proxy-bvr48                     1/1     Running   0          11h
kube-system    kube-proxy-rz44k                     1/1     Running   0          11h
kube-system    kube-scheduler-zkc-master            1/1     Running   0          11h

# Check node status
kubectl get nodes
# Result
NAME          STATUS   ROLES                  AGE   VERSION
zkc-master    Ready    control-plane,master   11h   v1.20.1
zkc-slave-1   Ready    <none>                 11h   v1.20.1
zkc-slave-2   Ready    <none>                 11h   v1.20.1
zkc-slave-3   Ready    <none>                 11h   v1.20.1

# Check cluster component health
kubectl get cs
# Result (the cluster is reported unhealthy)
[root@zkc-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
# If the components are unhealthy, comment out the --port=0 line in
# kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests:
vim /etc/kubernetes/manifests/kube-controller-manager.yaml

vim /etc/kubernetes/manifests/kube-scheduler.yaml
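If you prefer not to edit the two manifests by hand, the same change can be scripted; a sketch, assuming the default kubeadm manifest layout:

# comment out the --port=0 flag in both static pod manifests (run on the master)
sed -i '/- --port=0/s/^/#/' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i '/- --port=0/s/^/#/' /etc/kubernetes/manifests/kube-scheduler.yaml
# the kubelet restarts the static pods automatically; wait a moment, then re-check
kubectl get cs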

# On the master, check cluster health again
kubectl get cs
# Result
[root@zkc-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

Uninstalling flannel, switching from flannel to calico, uninstalling calico

# Uninstall flannel (e.g. before reinstalling)
# 1. Delete using the original flannel yaml file
kubectl delete -f kube-flannel.yml
# 2. Remove the CNI configuration files
rm -rf /etc/cni/net.d/*
# 3. Restart the kubelet; if that is not enough, reboot the server
reboot
systemctl restart kubelet
# 4. Check that flannel is gone
kubectl get pod -n kube-system

# Switch from flannel to calico (for flannel installed from the yaml)
# 1. Delete the flannel resources
kubectl delete -f kube-flannel.yml
# 2. Remove the routes (deleting the interfaces removes their routes automatically)
ip route                    # inspect the routes
ip link delete flannel.1
ip link delete cni0
# Remove the remaining routes (only if flannel was configured with the host-gw
# backend, which this document does not cover; skip otherwise)
ip route del 10.81.1.0/24 via 192.168.88.172 dev ens33
ip route del 10.81.1.0/24 via 192.168.88.171 dev ens33
ip route del 10.81.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
# 3. Flush the network rules
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# 4. Restart the kubelet (on every node)
systemctl daemon-reload
systemctl restart kubelet

# Uninstall calico completely
# Delete (on the master)
kubectl delete -f calico.yaml
# Check every node for a tunl0 interface
ifconfig
# Remove tunl0
modprobe -r ipip
# Remove the calico configuration files (check /etc/cni/net.d/ for files such as
# 10-calico.conflist, calico-kubeconfig, calico-tls and delete them)
ls /etc/cni/net.d/
rm -rf /etc/cni/net.d/*
# Restart the kubelet (on every node)
systemctl daemon-reload
systemctl restart kubelet

Testing the Kubernetes cluster

Create a pod in the Kubernetes cluster and verify that it runs normally:
# Create an nginx deployment
kubectl create deployment nginx --image=nginx
# Check whether the nginx pod is up
kubectl get pod
# Result (this is what it should look like)
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-d444s   1/1     Running   0          58s

# Expose the port
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the exposed port
kubectl get pod,svc
# Result
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-d444s   1/1     Running   0          3m6s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        11h
service/nginx        NodePort    10.1.196.63   <none>        80:30629/TCP   12s

# Access address: http://NodeIP:Port
# any node IP plus port 30629 works
http://192.168.182.135:30629
http://192.168.182.130:30629
http://192.168.182.133:30629
# the master works too
http://192.168.182.134:30629
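The NodePort can also be checked from the shell instead of a browser; a quick sketch (30629 is the randomly assigned port from the output above, yours will differ, so the port is read from the service):

# look up the assigned NodePort and curl it on one of the nodes
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.182.135:${NODE_PORT}   # expect an HTTP 200 from nginx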

Installing the calico network plugin (on the master)

kubeadm init --apiserver-advertise-address=192.168.182.134 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.81.0.0/16

# Calico 3.14 installation guide from the official docs:
https://projectcalico.docs.tigera.io/archive/v3.14/getting-started/kubernetes/quickstart

# Download
wget https://docs.projectcalico.org/archive/v3.14/manifests/calico.yaml

# Edit (the CIDR must match --pod-network-cidr)
vim calico.yaml
-------------------------------------------------------------------------------
            # no effect. This should fall within `--cluster-cidr`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
----------------------------------- becomes -----------------------------------
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.81.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
-------------------------------------------------------------------------------

# Install the network plugin
kubectl apply -f calico.yaml
kubectl apply -f https://docs.projectcalico.org/archive/v3.14/manifests/calico.yaml
# Result (success)
configmap "calico-config" created
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ipamblocks.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "blockaffinities.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ipamhandles.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgppeers.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "hostendpoints.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworksets.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networksets.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" created
clusterrole.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrolebinding.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrole.rbac.authorization.k8s.io "calico-node" created
clusterrolebinding.rbac.authorization.k8s.io "calico-node" created
daemonset.extensions "calico-node" created
serviceaccount "calico-node" created
deployment.extensions "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

# Confirm that all pods are running
watch kubectl get pods --all-namespaces
# Confirm that the cluster has all of its nodes
kubectl get nodes -o wide

# Warning
# Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
# Fix (the apply updates this automatically afterwards; editing the yaml before
# installing should also work but was not tested, and the warning can be ignored):
# change apiVersion from apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1

# Pinning the network interface
# Problem (this one is odd: applying the fix below did not help at first, but after
# uninstalling and rebooting, a plain reinstall worked without any change. If you
# hit it, try a reboot first and only then make the change below.)
[root@zkc-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       nginx-6799fc88d8-d444s                     1/1     Running   3          47h
kube-system   calico-kube-controllers-6dfcd885bf-lr498   1/1     Running   0          16m
kube-system   calico-node-9hfh8                          1/1     Running   0          16m
kube-system   calico-node-r429k                          0/1     Running   0          16m    # 0/1: this one has a problem
kube-system   calico-node-rn5jv                          1/1     Running   0          16m
kube-system   calico-node-tkrw2                          1/1     Running   0          16m
kube-system   coredns-7f89b7bc75-gp9ff                   1/1     Running   3          2d11h
kube-system   coredns-7f89b7bc75-lqpsm                   1/1     Running   3          2d11h
kube-system   etcd-zkc-master                            1/1     Running   3          2d11h
kube-system   kube-apiserver-zkc-master                  1/1     Running   3          2d11h
kube-system   kube-controller-manager-zkc-master         1/1     Running   2          25h
kube-system   kube-proxy-5p6vd                           1/1     Running   3          2d11h
kube-system   kube-proxy-bmt6k                           1/1     Running   3          2d11h
kube-system   kube-proxy-bvr48                           1/1     Running   3          2d11h
kube-system   kube-proxy-rz44k                           1/1     Running   3          2d11h
kube-system   kube-scheduler-zkc-master                  1/1     Running   4          25h
# Fix
# Check the pod details and logs
kubectl describe pod calico-node-r429k -n kube-system
kubectl logs -f calico-node-r429k -n kube-system
# felix/int_dataplane.go 407: Can't enable XDP acceleration. error=kernel is too old (have: 3.10.0 but want at least: 4.16.0)
vim calico.yaml
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # newly added:
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth.*"     # or value: "interface=eth0"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"

kubectl commands

# Check the client version
kubectl version --client

# Node resource usage
kubectl top nodes

# Pod resource usage
kubectl top pods -n kube-system

# Cluster component health (on the master)
kubectl get cs

# Expose a port
kubectl expose deployment nginx --port=80 --type=NodePort

# Check pods and their exposed ports
kubectl get pod,svc
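Note that kubectl top only returns data once the metrics-server add-on is running, which this guide does not install. A hedged sketch of the usual install (the URL is the upstream release manifest; it was not verified against this exact cluster version):

# install metrics-server so that `kubectl top` has data to report
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# on clusters whose kubelets use self-signed certs, the deployment usually also
# needs --kubelet-insecure-tls added to the metrics-server container args
kubectl top nodes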

Summary

That covers deploying a Kubernetes cluster with kubeadm; hopefully it helps you solve the problems you run into.
