K8s v1.23 Installation -- kubeadm + containerd + Public IP Multi-Node Deployment
Introduction
Build a K8s cluster on two servers that only have public IPs: the two machines are not on the same LAN and can only reach each other over the public network. Then install the Dashboard so K8s resources can be viewed through a web page.
Environment and Machines
Two machines: one acts as the master (control-plane) node, the other as a worker node.
Both run CentOS 7; configuring the virtual NIC on CentOS 8 is a bit more troublesome.
- crio-master (master node): 121.4.190.84
- vm-20-11-centos (worker node): 106.55.227.160
System Setup Preparation
Run the following on both machines.
Following the official documentation, configure a few system settings first:
```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Edit the hosts file and add the relevant entries, for example:
[root@crio-master k8s]# cat /etc/hosts
121.4.190.84    crio-master
106.55.227.160  VM-20-11-centos
```
Since we are using public IPs, and the cloud servers have no network interface bound to the public IP, kubeadm runs into problems when deploying with a public IP.
So on both machines we create a virtual NIC bound to the machine's own public IP (a NIC created the way shown below is removed after a reboot, but it is good enough for now; a sketch for persisting it follows the commands).
```bash
# Prepare: load the tun kernel module
modprobe tun
lsmod | grep tun

# Create the repo file
vim /etc/yum.repos.d/nux-misc.repo
# Put in the following content
[nux-misc]
name=Nux Misc
baseurl=http://li.nux.ro/download/nux/misc/el7/x86_64/
enabled=0
gpgcheck=1
gpgkey=http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro

# Install tunctl
yum --enablerepo=nux-misc install tunctl

# Create the virtual NIC
tunctl -t publick -u root
# Configure the IP of the NIC; replace the IP with your machine's public IP
ifconfig publick 121.37.246.218 netmask 255.255.255.0 promisc
```
Note: the K8s deployment needs port 6443, so open port 6443 in the server's security-group rules.
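The commands above do not survive a reboot. A minimal sketch for recreating the virtual NIC at boot, assuming CentOS 7 and that /etc/rc.d/rc.local is acceptable for boot-time commands (adjust the interface name and IP to your own machine):

```bash
# Append the NIC setup to rc.local so it is recreated on every boot
cat <<'EOF' >> /etc/rc.d/rc.local
modprobe tun
tunctl -t publick -u root
ifconfig publick 121.37.246.218 netmask 255.255.255.0 promisc
EOF
# On CentOS 7, rc.local must be executable for it to run at boot
chmod +x /etc/rc.d/rc.local
```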
containerd Installation
Install it on both machines.
We use Docker all the time, but it feels rather heavyweight, so here we skip Docker and use the recommended, lower-level containerd instead (cri-o works too; so far it has not been as painful as expected).
```bash
# Clean up Docker on the machine first, otherwise it will interfere
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine
yum remove docker-ce

# We only need to install containerd
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install containerd.io

# Reload systemd
systemctl daemon-reload

# Start the service
systemctl enable containerd
systemctl start containerd
systemctl status containerd
```
crictl Installation
Install it on both machines.
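The original post does not list the crictl install commands (it only points to the crictl release packages in the references). A sketch, assuming you install from the cri-tools GitHub release tarball and want a version matching the 1.23 line:

```bash
# Download and unpack the crictl binary into /usr/local/bin
VERSION="v1.23.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz
```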
crictl is the command-line tool for the container runtime. Its commands look very much like Docker's, so you can map them almost one-to-one, for example (an optional endpoint configuration sketch follows the list):
- docker ps == crictl ps
- docker logs == crictl logs
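By default crictl probes several runtime endpoints. A minimal sketch for pinning it to containerd explicitly, assuming containerd's default socket path, so the commands below talk to the right runtime:

```bash
# Point crictl at the containerd socket
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
```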
That is all; you can try out a couple of commands:
```bash
crictl ps
crictl images
```
Error Handling Notes
FATA[0000] listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
```bash
? ~ crictl ps
FATA[0000] listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
```
Delete the containerd config file and restart the service:
```bash
rm /etc/containerd/config.toml
systemctl restart containerd
```
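Deleting config.toml works because the config shipped with the containerd.io package disables the CRI plugin, which is exactly what triggers the error above. An alternative sketch, if you would rather keep a config file (assuming you want containerd's defaults as a starting point):

```bash
# Regenerate a full default config; the default does not disable the CRI plugin
containerd config default > /etc/containerd/config.toml
# Make sure "cri" is not listed under disabled_plugins in the file, then restart
systemctl restart containerd
```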
k8s Installation
Run the following on both machines.
```bash
# Add the package repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux to permissive
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Remove older versions, if any
yum remove kubelet kubeadm kubectl
yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1

# Enable and start kubelet
systemctl enable --now kubelet

# Edit the kubelet drop-in file
mkdir -p /etc/systemd/system/kubelet.service.d/
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Put in the following content
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This file is generated by "kubeadm init" and "kubeadm join" at runtime and dynamically populates the KUBELET_KUBEADM_ARGS variable
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file the user can use as a last resort to override kubelet args.
# Users should preferably use the .NodeRegistration.KubeletExtraArgs object in the config file instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
# --node-ip: replace with the public IP of the node you are configuring
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=159.138.89.50

# Restart to apply the configuration
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet

# Check the status
[root@VM-16-14-centos k8s]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2022-05-08 14:00:11 CST; 54s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 12618 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─12618 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remot...

# Check the logs
journalctl -xeu kubelet

# Check the versions
[root@VM-16-14-centos ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:15:11Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}
[root@VM-16-14-centos ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:16:20Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
Because we are in mainland China, the official Google images cannot be pulled directly (even though we use a Chinese mirror later, possibly due to a bug the pause image is still pulled from Google).
So we pull the image manually from a Chinese mirror and retag it as the Google image; just run the following commands:
```bash
crictl pull registry.aliyuncs.com/google_containers/pause:3.6
ctr i pull registry.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
```
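Optionally, a quick check that the retag landed in the namespace kubelet actually uses (containerd's k8s.io namespace, which is what crictl and kubelet read from):

```bash
# The retagged image should show up as k8s.gcr.io/pause:3.6
ctr -n k8s.io i ls | grep pause
crictl images | grep pause
```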
Starting the Master Node
The following steps only need to be run on the master node, crio-master.
Now we finally get to the key step: initializing the K8s master node with kubeadm.
```bash
# --apiserver-advertise-address: use the public IP
# --service-cidr / --pod-network-cidr: can be copied as-is
# --image-repository: use a Chinese mirror
[root@crio-master k8s]# kubeadm init --apiserver-advertise-address=121.4.190.84 --service-cidr=10.1.0.0/16 --pod-network-cidr=192.168.0.0/16 --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'

# Log output produced during the installation
I0626 12:19:55.601776   10881 version.go:255] remote version is much newer: v1.24.2; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [crio-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 121.4.190.84]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [crio-master localhost] and IPs [121.4.190.84 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [crio-master localhost] and IPs [121.4.190.84 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.002365 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node crio-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node crio-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: m5yedd.clmq0lw4s4d961yu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# This part of the log matters: when a worker node joins, copy and run the command below
kubeadm join 121.4.190.84:6443 --token m5yedd.clmq0lw4s4d961yu \
        --discovery-token-ca-cert-hash sha256:7c3ac34b89c2c4a079dd5684286c6306abc8b5dd98fdd1ea8f1f1df8f254f256

# Run the following commands
[root@crio-master k8s]# mkdir -p $HOME/.kube
[root@crio-master k8s]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? y
[root@crio-master k8s]# chown $(id -u):$(id -g) $HOME/.kube/config
```
Next, install the network plugin, Calico:
```bash
# Run the following commands
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml

# Watch the deployment status
watch kubectl get pods -n calico-system
```
Once the installation is done, we can run the following commands to check node status and the running containers:
```bash
[root@crio-master k8s]# kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
crio-master   Ready    control-plane,master   3d1h   v1.23.1

[root@crio-master k8s]# crictl ps
CONTAINER       IMAGE           CREATED      STATE     NAME                      ATTEMPT   POD ID
386c53701c331   6cba34c44d478   3 days ago   Running   calico-apiserver          0         c96c43e126a15
e822cc35cc1c7   6cba34c44d478   3 days ago   Running   calico-apiserver          0         b439d9c51353c
335493793963d   a4ca41631cc7a   3 days ago   Running   coredns                   0         f8b273d0b213a
8e25629c36fe5   ec95788d0f725   3 days ago   Running   calico-kube-controllers   0         706ceebf584f7
f8bbdbc140cdb   a4ca41631cc7a   3 days ago   Running   coredns                   0         e6a73ececa384
5be790a6dd5a9   a3447b26d32c7   3 days ago   Running   calico-node               0         1728a55642148
f77615c9ea6a8   22336acac6bba   3 days ago   Running   calico-typha              0         dd612b2d122b8
a887f081de0f5   9735044632553   3 days ago   Running   tigera-operator           0         e7d02d9eb77c5
df1d4d706ad2d   db4da8720bcb9   3 days ago   Running   kube-proxy                0         4b7a385e851a7
142aedbdc00c1   09d62ad3189b4   3 days ago   Running   kube-apiserver            0         29033f38ec856
e457ace9d5f03   25f8c7f3da61c   3 days ago   Running   etcd                      0         b9307128a6142
28b0581787520   2b7c5a0399844   3 days ago   Running   kube-controller-manager   7         fa0f2ccf4585c
fc4f7f3d1bb6f   afd180ec7435a   3 days ago   Running   kube-scheduler            7         b7c2064e73acf
```
Joining Nodes to the Cluster
The master node is set up; next, let's set up the worker node.
The master node runs the scheduling and control components, such as the kube-controller-manager and kube-scheduler shown above.
Worker nodes run far fewer components, so they are much lighter.
On the master node, run kubeadm token list to get the token (check whether it has expired).
```bash
kubeadm token list

# If the token has expired, generate a new one
kubeadm token create --print-join-command
```
Then do a little setup on the worker node and run the join command to add it to the cluster:
```bash
modprobe br_netfilter
echo 1 > /proc/sys/net/ipv4/ip_forward

kubeadm join 121.4.190.84:6443 --token m5yedd.clmq0lw4s4d961yu \
        --discovery-token-ca-cert-hash sha256:7c3ac34b89c2c4a079dd5684286c6306abc8b5dd98fdd1ea8f1f1df8f254f256
```
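The echo above only enables IP forwarding until the next reboot. A small sketch for making it persistent, assuming you keep sysctl settings in the k8s.conf file created earlier:

```bash
# Persist IP forwarding across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
sysctl --system
```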
Back on the master node, check the status; the worker node has joined and is Ready:
```bash
[root@crio-master k8s]# kubectl get nodes
NAME              STATUS   ROLES                  AGE    VERSION
crio-master       Ready    control-plane,master   3d2h   v1.23.1
vm-20-11-centos   Ready    <none>                 3d1h   v1.23.1
```
Possible Reasons a Node Is Not Ready
1. The hosts file must be configured: each node's hostname must map to its public IP, on both the master node and the worker nodes.
```bash
[root@crio-master k8s]# cat /etc/hosts
121.4.190.84    crio-master
106.55.227.160  VM-20-11-centos
```
2. Port 6443 must be open.
Dashboard Installation and Access
Installation and Configuration
Install it on the master node.
```bash
# Run this to deploy the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

# Check the status and wait until the pods are ready
[root@crio-master k8s]# kubectl get pods --namespace=kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP              NODE              NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-799d786dbf-prrgr   1/1     Running   0          12m   192.168.3.194   vm-20-11-centos   <none>           <none>
kubernetes-dashboard-546cbc58cd-c5rs9        1/1     Running   0          12m   192.168.3.193   vm-20-11-centos   <none>           <none>

# Inspect the details of a pod
kubectl describe pod dashboard-metrics-scraper-799d786dbf-prrgr --namespace=kubernetes-dashboard

# Check the service
[root@crio-master ~]# kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.1.45.51   <none>        443/TCP   5m24s

# Edit the service and change "type: ClusterIP" at the end to "type: NodePort"
kubectl --namespace=kubernetes-dashboard edit service kubernetes-dashboard

# Check again: the service is now mapped to host port 32652, so it can be reached via node IP + port
[root@crio-master ~]# kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.1.45.51   <none>        443:32652/TCP   7m31s
```
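If you prefer not to edit the service interactively, a non-interactive sketch that makes the same type change (assuming the same namespace and service name as above):

```bash
# Switch the Dashboard service to NodePort in one command
kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
```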
Token Generation
```bash
# Create the user
vim admin-user.yaml
# Put in the following content
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

# Run the command
kubectl create -f admin-user.yaml

# Bind the user to a role
vim admin-user-role-binding.yaml
# Put in the following content
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

# Run the command
kubectl create -f admin-user-role-binding.yaml
```
If you are told along the way that a resource already exists or must be deleted first, just run kubectl delete -f with the corresponding yaml file.
Get the token by running the following command:
```bash
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
```
Accessing the Page
With the deployment above, the NODE column shows that the service is running on the second machine, vm-20-11-centos:
```bash
[root@crio-master k8s]# kubectl get pods --namespace=kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP              NODE              NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-799d786dbf-prrgr   1/1     Running   0          12m   192.168.3.194   vm-20-11-centos   <none>           <none>
kubernetes-dashboard-546cbc58cd-c5rs9        1/1     Running   0          12m   192.168.3.193   vm-20-11-centos   <none>           <none>
```
I tried accessing it the normal way, through the service, but could not get it to work; after some experimenting, the only option left was to hit the pod directly.
First, log in to the vm-20-11-centos machine and look at the running containers:
```bash
? k8s crictl ps
CONTAINER       IMAGE           CREATED        STATE     NAME                         ATTEMPT   POD ID
93c138b35118e   7801cfc6d5c07   29 hours ago   Running   dashboard-metrics-scraper    0         87f8f420a8614
c1e3fc2d6c6d9   57446aa2002e1   29 hours ago   Running   kubernetes-dashboard         0         68897036dab5f
db3684cf5e6a2   a3447b26d32c7   2 days ago     Running   calico-node                  0         cb13bab18abe7
5811551b53ad0   db4da8720bcb9   2 days ago     Running   kube-proxy                   1         2960526e1e594
```
There is the kubernetes-dashboard container. According to the docs, this service listens on port 8443, and kubectl get pods --namespace=kubernetes-dashboard -o wide shows its IP is 192.168.3.193, so the address to access is 192.168.3.193:8443.
Try it; it is indeed reachable:
```bash
? k8s curl 192.168.3.193:8443
Client sent an HTTP request to an HTTPS server.
```
Since this is a cloud server, opening too many ports is not safe, so we access it through nginx instead. Install nginx first (there are plenty of install guides online, so I will not repeat them here).
Edit the nginx configuration file /etc/nginx/nginx.conf and fill in the following content:
```nginx
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       443 ssl http2;
        listen       [::]:443 ssl http2;
        server_name  _;
        root         /usr/share/nginx/html;

        ssl_certificate "/etc/nginx/cert/selfgrowth.club_bundle.crt";
        ssl_certificate_key "/etc/nginx/cert/selfgrowth.club.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location /k8sadmin/ {
            proxy_pass https://192.168.3.193:8443/;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
```
You will need to provide a certificate yourself:
- 1. If you have your own domain, apply for an SSL certificate from the corresponding provider.
- 2. If you do not, you can generate a self-signed certificate with OpenSSL (a minimal sketch follows the list).
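A minimal sketch of creating a self-signed certificate with OpenSSL at the paths used in the nginx config above (the file names and the CN are only examples; adjust them to your setup):

```bash
# Generate a self-signed certificate and key valid for one year
mkdir -p /etc/nginx/cert
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /etc/nginx/cert/selfgrowth.club.key \
  -out /etc/nginx/cert/selfgrowth.club_bundle.crt \
  -subj "/CN=selfgrowth.club"
```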
This is only to get nginx running; in practice you do not access the Dashboard through that domain, you use the IP directly.
Then simply access it via the second node's IP and paste in the token obtained earlier.
References
- docker ce install on centos
- Installing containerd on CentOS 8.1
- kubeadm init fails with node not found
- Getting started with containerd
- kubeadm cluster creation fails with "Unable to register node with API server"
- Quickstart for Calico on Kubernetes
- Deploying and Accessing the Kubernetes Dashboard
- Web Basic Configuration (17): Kubernetes Dashboard installation and configuration
- crictl release packages
- How to get the token and other parameter values when joining a node to a k8s cluster