
Chapter 7: Kubernetes Storage


Kubernetes Storage

1.為什么需要存儲(chǔ)卷?

Container deployments generally involve three kinds of data:
·Initial data needed at startup, such as configuration files
·Temporary data produced while running that must be shared between several containers
·Persistent data produced while running

2.數(shù)據(jù)卷概述

A Volume in Kubernetes provides the ability to mount external storage into containers.
A Pod can use a Volume only after both pieces of information are set: the volume source (spec.volumes) and the mount point (spec.containers[].volumeMounts).
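For instance, a minimal sketch of those two fields working together (the Pod name, image, and mount path here are illustrative, not taken from the examples below):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:              # mount point: spec.containers[].volumeMounts
    - name: data
      mountPath: /cache        # where the volume appears inside the container
  volumes:                     # volume source: spec.volumes
  - name: data                 # matched to the volumeMount by name
    emptyDir: {}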

Search the official documentation for the full list of supported types.

Volume types supported by Kubernetes: awsElasticBlockStore, azureDisk, azureFile, cephfs, cinder, configMap, csi, downwardAPI, emptyDir, fc (fibre channel), flexVolume, flocker, gcePersistentDisk, gitRepo (deprecated), glusterfs, hostPath, iscsi, local, nfs, persistentVolumeClaim, projected, portworxVolume, quobyte, rbd, scaleIO, secret, storageos, vsphereVolume

A simple classification (illustrated in the sketch after this list):
1. Local, e.g. emptyDir, hostPath
2. Network, e.g. nfs, cephfs, glusterfs
3. Public cloud, e.g. azureDisk, awsElasticBlockStore
4. Kubernetes resources, e.g. secret, configMap
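Whichever category is used, only the volumes: entry changes; the container's volumeMounts stay the same. An illustrative comparison of three sources (the server address and paths are placeholders):

volumes:
- name: cache
  emptyDir: {}                 # local: ephemeral scratch space on the node
- name: host-logs
  hostPath:
    path: /var/log             # local: a directory on the node's filesystem
- name: shared-data
  nfs:                         # network: an NFS export
    server: 192.168.1.100      # placeholder NFS server address
    path: /exports/data        # placeholder export path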

3. Ephemeral, Node-local and Network Volumes

Ephemeral volume: emptyDir

An emptyDir creates an empty volume and mounts it into the Pod's containers; when the Pod is deleted, the volume is deleted with it.
Use case: sharing data between containers in the same Pod.
Default emptyDir working directory on the node:
/var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~empty-dir

When is it suitable to run multiple containers in a single Pod? The example below shares an emptyDir between a writer and a reader container.

emptyDir: {} — the empty braces mean no options are set, i.e. the defaults are used.
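If needed, the emptyDir source also accepts medium and sizeLimit fields instead of the empty {} — a small sketch:

volumes:
- name: data
  emptyDir:
    medium: Memory             # back the volume with tmpfs (RAM) instead of node disk
    sizeLimit: 64Mi            # the Pod is evicted if usage exceeds this limit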

[root@k8s-m1 chp7]# cat emptyDir.yml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir
spec:
  containers:
  - name: write
    image: centos
    command: ["bash","-c","for i in {1..100};do echo $i >> /data/hello;sleep 1;done"]
    volumeMounts:
    - name: data
      mountPath: /data
  - name: read
    image: centos
    command: ["bash","-c","tail -f /data/hello"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}

[root@k8s-m1 chp7]# kubectl apply -f emptyDir.yml
pod/emptydir created
[root@k8s-m1 chp7]# kubectl get po -o wide
NAME       READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
emptydir   2/2     Running   1          116s   10.244.111.203   k8s-n2   <none>           <none>

[root@k8s-n2 data]# docker ps |grep emptydir
cbaf1b92b4a8   centos                                              "bash -c 'for i in {…"   About a minute ago   Up About a minute   k8s_write_emptydir_default_df40c32a-9f0a-44b7-9c17-89c9e9725da2_3
bce0f2607620   centos                                              "bash -c 'tail -f /d…"   7 minutes ago        Up 7 minutes        k8s_read_emptydir_default_df40c32a-9f0a-44b7-9c17-89c9e9725da2_0
0b804b8db60f   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 7 minutes ago        Up 7 minutes        k8s_POD_emptydir_default_df40c32a-9f0a-44b7-9c17-89c9e9725da2_0

[root@k8s-n2 data]# pwd
/var/lib/kubelet/pods/df40c32a-9f0a-44b7-9c17-89c9e9725da2/volumes/kubernetes.io~empty-dir/data

Node-local volume: hostPath

A hostPath volume mounts a file or directory from the node's filesystem into the Pod's containers.
Use case: containers in the Pod need access to files on the host.

[root@k8s-m1 chp7]# cat hostPath.yml
apiVersion: v1
kind: Pod
metadata:
  name: host-path
spec:
  containers:
  - name: centos
    image: centos
    command: ["bash","-c","sleep 36000"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /tmp
      type: Directory

[root@k8s-m1 chp7]# kubectl apply -f hostPath.yml
pod/host-path created
[root@k8s-m1 chp7]# kubectl exec host-path -it bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@host-path data]# pwd
/data
[root@host-path data]# touch test.txt
[root@k8s-m1 ~]# ls -l /tmp/test.txt
-rw-r--r--. 1 root root 5 Aug 18 22:25 /tmp/test.txt
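type: Directory above requires /tmp to already exist on the node. hostPath also supports other type values, such as DirectoryOrCreate, File, FileOrCreate and Socket. A sketch that creates the directory if it is missing (the path is illustrative):

volumes:
- name: data
  hostPath:
    path: /data/app-logs       # illustrative host path
    type: DirectoryOrCreate    # create an empty directory (mode 0755) if it does not exist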

Network volume: NFS

yum install nfs-utils -y
[root@k8s-n2 ~]# mkdir /nfs/k8s -p
[root@k8s-n2 ~]# vim /etc/exports
[root@k8s-n2 ~]# cat /etc/exports
/nfs/k8s 10.0.0.0/24(rw,no_root_squash)
# no_root_squash: when the client user accessing the share is root, keep root privileges on the export
# (without this option root would be squashed to the anonymous user, with UID/GID mapped to nobody)

[root@k8s-n2 ~]# systemctl restart nfs
[root@k8s-n2 ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

# Test the export from another node
[root@k8s-n1 ~]# mount -t nfs 10.0.0.25:/nfs/k8s /mnt/
[root@k8s-n1 ~]# df -h |grep nfs
10.0.0.25:/nfs/k8s   26G  5.8G   21G  23% /mnt

# Check the NFS exports
[root@k8s-n2 ~]# showmount -e
Export list for k8s-n2:
/nfs/k8s 10.0.0.0/24

# Create the application
[root@k8s-m1 chp7]# cat nfs-deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-nginx-deploy
spec:
  selector:
    matchLabels:
      app: nfs-nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nfs-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 10.0.0.25
          path: /nfs/k8s

[root@k8s-m1 chp7]# kubectl apply -f nfs-deploy.yml
[root@k8s-m1 chp7]# kubectl get pod -o wide|grep nfs
nfs-nginx-deploy-848f4597c9-658ws   1/1   Running   0   2m33s   10.244.111.205   k8s-n2   <none>   <none>
nfs-nginx-deploy-848f4597c9-bzl5w   1/1   Running   0   2m33s   10.244.111.207   k8s-n2   <none>   <none>
nfs-nginx-deploy-848f4597c9-wz422   1/1   Running   0   2m33s   10.244.111.208   k8s-n2   <none>   <none>

Create an index page on the NFS share; the file is then also visible inside the containers:

[root@k8s-n2 ~]# echo "hello world" >/nfs/k8s/index.html
[root@k8s-m1 chp7]# curl 10.244.111.205
hello world

[root@k8s-m1 chp7]# kubectl exec nfs-nginx-deploy-848f4597c9-wz422 -it -- bash
root@nfs-nginx-deploy-848f4597c9-wz422:/# mount|grep k8s
10.0.0.25:/nfs/k8s on /usr/share/nginx/html type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.25,local_lock=none,addr=10.0.0.25)

4. Persistent Volumes Overview
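A PersistentVolume (PV) is a piece of storage in the cluster, provisioned by an administrator or dynamically through a StorageClass; a PersistentVolumeClaim (PVC) is a user's request for that storage, and a Pod consumes storage by referencing a PVC rather than the PV itself. A minimal sketch of that relationship (names and sizes are illustrative; the next sections show complete NFS-backed examples):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim             # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi             # the claim is matched against available PVs
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:     # the Pod references the claim, not the PV directly
      claimName: demo-claim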

5. Static PV Provisioning

[root@k8s-m1 chp7]# cat pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/k8s/pv0001
    server: 10.0.0.25
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0002
spec:
  capacity:
    storage: 15Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/k8s/pv0002
    server: 10.0.0.25
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 30Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/k8s/pv0003
    server: 10.0.0.25

# Create the PVs
[root@k8s-m1 chp7]# kubectl apply -f pv.yml
persistentvolume/pv0001 created
persistentvolume/pv0002 created
persistentvolume/pv0003 created

[root@k8s-m1 chp7]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv0001   5Gi        RWX            Recycle          Available                                   53s
pv0002   15Gi       RWX            Recycle          Available                                   53s
pv0003   30Gi       RWX            Recycle          Available                                   53s

[root@k8s-m1 chp7]# cat pvc-deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvc-ngnix
spec:
  selector:
    matchLabels:
      app: pvc-nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: pvc-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

# Create the application
[root@k8s-m1 chp7]# kubectl apply -f pvc-deploy.yml
deployment.apps/pvc-ngnix unchanged
persistentvolumeclaim/my-pvc created

[root@k8s-m1 chp7]# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Bound    pv0001   5Gi        RWX                           12m

[root@k8s-n2 pv0001]# echo "hello pvc" >index.html
[root@k8s-m1 chp7]# curl 10.244.111.209
hello pvc

AccessModes (access modes):
AccessModes describe how a user application may access the storage resource behind a PV. The supported access modes are:
ReadWriteOnce (RWO): read-write, but the volume can be mounted by only one node
ReadOnlyMany (ROX): read-only, can be mounted by many nodes
ReadWriteMany (RWX): read-write, can be mounted by many nodes


RECLAIM POLICY (reclaim policy):

A PV currently supports three reclaim policies:
Retain: keep the data; an administrator must clean it up manually
Recycle: scrub the data in the PV, equivalent to running rm -rf /ifs/kubernetes/* on the volume
Delete: delete the backend storage associated with the PV as well

STATUS (status):
During its lifecycle a PV can be in one of four phases:
Available: free, not yet bound to any PVC
Bound: the PV has been bound to a PVC
Released: the PVC was deleted, but the resource has not yet been reclaimed by the cluster
Failed: automatic reclamation of the PV failed

How a PV and a PVC bind: matching is done mainly on access mode and capacity.

Default behavior: a PV with more capacity than requested can be bound (the 5Gi claim above bound the 5Gi pv0001, while a 10Gi claim would bind the 15Gi pv0002), but a PV smaller than the request is never bound.

6. Dynamic PV Provisioning
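With dynamic provisioning, PVs do not have to be created in advance: a StorageClass names a provisioner, and when a PVC requests that class the provisioner creates a matching PV automatically. A conceptual sketch (the class and provisioner names are illustrative; the provisioner value must match one actually deployed, as in the case study in the next section):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs            # illustrative class name
provisioner: example.com/nfs   # must match the name the deployed provisioner registers
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auto-claim
spec:
  storageClassName: example-nfs    # requesting this class triggers dynamic provisioning
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi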

7. Case Study: an Application Using Persistent Volumes to Store Data

# rbac.yml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

[root@k8s-m1 nfs-client]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 10.0.0.25
        - name: NFS_PATH
          value: /nfs/k8s
      volumes:
      - name: nfs-client-root
        nfs:
          server: 10.0.0.25
          path: /nfs/k8s

[root@k8s-m1 nfs-client]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "true"

[root@k8s-m1 chp7]# cat dynamic-pvc.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dynamic-pvc-ngnix
spec:
  selector:
    matchLabels:
      app: dynamic-pvc-nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: dynamic-pvc-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: dynamic-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
spec:
  storageClassName: managed-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

[root@k8s-m1 nfs-client]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created

[root@k8s-m1 nfs-client]# kubectl apply -f deployment.yaml
serviceaccount/nfs-client-provisioner created
deployment.apps/nfs-client-provisioner created

[root@k8s-m1 nfs-client]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
[root@k8s-m1 nfs-client]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  2s

[root@k8s-m1 chp7]# kubectl apply -f dynamic-pvc.yml
deployment.apps/dynamic-pvc-ngnix created
persistentvolumeclaim/dynamic-pvc created

[root@k8s-m1 chp7]# kubectl get po |grep dynamic-pvc
dynamic-pvc-ngnix-d4d789c68-bbjj9   1/1   Running   0   51s
dynamic-pvc-ngnix-d4d789c68-l8cjc   1/1   Running   0   51s
dynamic-pvc-ngnix-d4d789c68-mdm6j   1/1   Running   0   51s

[root@k8s-m1 chp7]# kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
dynamic-pvc   Bound    pvc-2c7a9c3b-84ef-452b-9322-2d3530c01014   20Gi       RWX            managed-nfs-storage   74s

[root@k8s-m1 chp7]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                 STORAGECLASS          REASON   AGE
pv0001                                     5Gi        RWX            Recycle          Available                                                        3h29m
pv0002                                     15Gi       RWX            Recycle          Available                                                        3h29m
pv0003                                     30Gi       RWX            Recycle          Available                                                        3h29m
pvc-2c7a9c3b-84ef-452b-9322-2d3530c01014   20Gi       RWX            Delete           Bound       default/dynamic-pvc   managed-nfs-storage            110s

[root@k8s-n2 ~]# echo dynamic-pvc >/nfs/k8s/default-dynamic-pvc-pvc-2c7a9c3b-84ef-452b-9322-2d3530c01014/index.html
[root@k8s-m1 chp7]# curl 10.244.111.217
dynamic-pvc
[root@k8s-m1 chp7]# curl 10.244.111.220
dynamic-pvc

[root@k8s-m1 chp7]# kubectl delete -f dynamic-pvc.yml
deployment.apps "dynamic-pvc-ngnix" deleted
persistentvolumeclaim "dynamic-pvc" deleted
[root@k8s-m1 chp7]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv0001   5Gi        RWX            Recycle          Available                                   3h34m
pv0002   15Gi       RWX            Recycle          Available                                   3h34m
pv0003   30Gi       RWX            Recycle          Available                                   3h34m

# The deleted volume's data is archived on delete (archiveOnDelete: "true")
[root@k8s-n2 ~]# ls /nfs/k8s/
archived-default-dynamic-pvc-pvc-2c7a9c3b-84ef-452b-9322-2d3530c01014  index.html  pv0001  pv0002  pv0003
[root@k8s-m1 chp7]# cat nfs-client/class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "true"

[root@k8s-m1 chp7]# kubectl get pv --sort-by={.spec.capacity.storage}
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv0001   5Gi        RWX            Recycle          Available                                   3h41m
pv0002   15Gi       RWX            Recycle          Available                                   3h41m
pv0003   30Gi       RWX            Recycle          Available                                   3h41m

8. Deploying Stateful Applications: the StatefulSet Controller

StatefulSet:
· Deploys stateful applications
· Gives each Pod an independent lifecycle, preserving Pod startup order and uniqueness
1. Stable, unique network identifiers and persistent storage
2. Ordered, graceful deployment, scaling, deletion and termination
3. Ordered rolling updates
Use case: databases

StatefulSet: stable network identity

A Headless Service is still a Service, but its spec.clusterIP is set to None, i.e. no ClusterIP is allocated.

The StatefulSet spec adds a serviceName: "nginx" field, which tells the StatefulSet controller to use the nginx headless Service to maintain the identity of its Pods.

ClusterIP A record format:      <service-name>.<namespace-name>.svc.cluster.local
ClusterIP=None A record format: <statefulsetName-index>.<service-name>.<namespace-name>.svc.cluster.local
Example: web-0.nginx.default.svc.cluster.local

[root@k8s-m1 chp8]# cat statefulset.yml
apiVersion: v1
kind: Service
metadata:
  name: handless-nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "handless-nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web

[root@k8s-m1 chp8]# kubectl get po|grep web-
web-0   1/1   Running   0   5h49m
web-1   1/1   Running   0   5h56m
web-2   1/1   Running   0   5h55m

[root@k8s-m1 chp8]# kubectl get ep
NAME             ENDPOINTS                                                AGE
handless-nginx   10.244.111.218:80,10.244.111.226:80,10.244.111.227:80   64s

[root@k8s-m1 chp8]# kubectl run -it --rm --image=busybox:1.28.4 -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup handless-nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      handless-nginx
Address 1: 10.244.111.226 web-2.handless-nginx.default.svc.cluster.local
Address 2: 10.244.111.218 10-244-111-218.nginx.default.svc.cluster.local
Address 3: 10.244.111.227 10-244-111-227.nginx.default.svc.cluster.local

StatefulSet: stable storage

[root@k8s-m1 chp8]# cat statefulset.yml
apiVersion: v1
kind: Service
metadata:
  name: statefulset-nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "statefulset-nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web-statefulset
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 1Gi

The storageClassName: managed-nfs-storage here uses the StorageClass created earlier:

[root@k8s-m1 chp8]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  14h

The creation process:

[root@k8s-m1 chp8]# kubectl get pv|grep state
pvc-5fdecf86-1e71-4e5d-b9d8-be69ec0496a2   1Gi   RWO   Delete   Bound   default/www-statefulset-web-2   managed-nfs-storage   2m54s
pvc-67cc7471-f012-4c17-90e2-82b9618cd33c   1Gi   RWO   Delete   Bound   default/www-statefulset-web-1   managed-nfs-storage   3m33s
pvc-70268f3f-5053-411e-807a-68245b513f0a   1Gi   RWO   Delete   Bound   default/www-statefulset-web-0   managed-nfs-storage   3m51s
[root@k8s-m1 chp8]# kubectl get pvc|grep state
www-statefulset-web-0   Bound   pvc-70268f3f-5053-411e-807a-68245b513f0a   1Gi   RWO   managed-nfs-storage   3m58s
www-statefulset-web-1   Bound   pvc-67cc7471-f012-4c17-90e2-82b9618cd33c   1Gi   RWO   managed-nfs-storage   3m40s
www-statefulset-web-2   Bound   pvc-5fdecf86-1e71-4e5d-b9d8-be69ec0496a2   1Gi   RWO   managed-nfs-storage   3m1s

# Each replica gets its own independent data
[root@k8s-n2 k8s]# echo web-0 >default-www-statefulset-web-0-pvc-70268f3f-5053-411e-807a-68245b513f0a/index.html
[root@k8s-n2 k8s]# echo web-1 >default-www-statefulset-web-1-pvc-67cc7471-f012-4c17-90e2-82b9618cd33c/index.html
[root@k8s-n2 k8s]# echo web-2 >default-www-statefulset-web-2-pvc-5fdecf86-1e71-4e5d-b9d8-be69ec0496a2/index.html

[root@k8s-m1 chp8]# curl 10.244.111.227
web-0
[root@k8s-m1 chp8]# curl 10.244.111.218
web-1
[root@k8s-m1 chp8]# curl 10.244.111.226
web-2

[root@k8s-m1 chp8]# kubectl run dns-test -it --rm --image=busybox:1.28.4 -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup statefulset-nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      statefulset-nginx
Address 1: 10.244.111.221 web-1.nginx.default.svc.cluster.local
Address 2: 10.244.111.216 web-2.nginx.default.svc.cluster.local
Address 3: 10.244.111.218 10-244-111-218.nginx.default.svc.cluster.local
Address 4: 10.244.111.227 10-244-111-227.nginx.default.svc.cluster.local
Address 5: 10.244.111.226 10-244-111-226.nginx.default.svc.cluster.local
Address 6: 10.244.111.222 10-244-111-222.statefulset-nginx.default.svc.cluster.local

A StatefulSet's volumes are created from volumeClaimTemplates (volume claim templates): when the StatefulSet provisions a PersistentVolume through the template, it also creates a numbered PVC for each Pod, named <template-name>-<statefulset-name>-<ordinal> (e.g. www-statefulset-web-0 above).

Stateless versus stateful comes down to two things: network and storage.
A stateless application (nginx, for example) cares about neither: its replicas are interchangeable and can be scheduled onto any node.
A stateful application (etcd, ZooKeeper, MySQL master/slave, for example) has replicas that are not interchangeable; they differ in:
1. IP and hostname — a headless Service maintains a fixed DNS name for each Pod
2. Ports
3. Node name, distinguished by hostname
Deploying a stateful distributed application on Kubernetes mainly has to solve:
1. How to automatically generate a separate configuration file for each replica from a single image
2. What the application's topology looks like inside Kubernetes
(A user is identified by the user and groups carried in its client certificate: CN is the username, O is the group.)

StatefulSet: Summary

The difference between a StatefulSet and a Deployment: StatefulSet Pods have an identity!
The three elements of that identity:
· Domain name
· Hostname
· Storage (PVC)

9.應(yīng)用程序配置文件存儲(chǔ):ConfigMap

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume it as environment variables, command-line arguments, or configuration files in a volume.

A ConfigMap decouples environment-specific configuration from container images, which makes application configuration easy to change. For confidential data, use a Secret object instead.

Note:

A ConfigMap does not provide secrecy or encryption. If the data you want to store is confidential, use a Secret, or another third-party tool to keep the data private, rather than a ConfigMap.

After a ConfigMap is created, the data is actually stored in Kubernetes (in etcd); Pods then reference that data when they are created.
Use case: application configuration.
A Pod can consume ConfigMap data in two ways (both shown in the example below):
· injection as environment variables
· mounting as a volume
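Besides declaring a ConfigMap in YAML as in the example below, kubectl can also create one imperatively from literals or files — a quick sketch (the names and the app.properties file are illustrative):

# from key/value literals
kubectl create configmap app-config --from-literal=log_level=debug --from-literal=app_mode=test

# from a local file (the key defaults to the file name); app.properties is assumed to exist
kubectl create configmap app-file-config --from-file=app.properties

# inspect the result
kubectl get configmap app-config -o yaml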

# ConfigMap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"

  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: configmap-demo
    image: nginx
    env:
    # define environment variables
    - name: PLAYER_INITIAL_LIVES  # note the name here differs from the key in the ConfigMap
      valueFrom:
        configMapKeyRef:
          name: game-demo            # the value comes from this ConfigMap
          key: player_initial_lives  # the key to read
    - name: UI_PROPERTIES_FILE_NAME
      valueFrom:
        configMapKeyRef:
          name: game-demo
          key: ui_properties_file_name
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  # volumes are defined at the Pod level, then mounted into the Pod's containers
  - name: config
    configMap:
      # the name of the ConfigMap to mount
      name: game-demo

# ConfigMap usage example
[root@k8s-m1 chp8]# kubectl apply -f configmap.yml
configmap/game-demo unchanged
pod/configmap-demo-pod created

[root@k8s-m1 chp8]# kubectl get po -o wide |grep config
configmap-demo-pod   1/1   Running   0   85s   10.244.111.223   k8s-n2   <none>   <none>
[root@k8s-m1 chp8]# kubectl get configmap
NAME        DATA   AGE
game-demo   4      6m49s

# Inspect the injected variables and mounted files
[root@k8s-m1 chp8]# kubectl exec configmap-demo-pod -it -- bash
root@configmap-demo-pod:/# ls /config/
game.properties  player_initial_lives  ui_properties_file_name  user-interface.properties
root@configmap-demo-pod:/# cat /config/game.properties
enemy.types=aliens,monsters
player.maximum-lives=5
root@configmap-demo-pod:/# cat /config/player_initial_lives
3
root@configmap-demo-pod:/# cat /config/ui_properties_file_name
user-interface.properties
root@configmap-demo-pod:/# cat /config/user-interface.properties
color.good=purple
color.bad=yellow
allow.textmode=true
root@configmap-demo-pod:/# echo $PLAYER_INITIAL_LIVES
3
root@configmap-demo-pod:/# echo $UI_PROPERTIES_FILE_NAME
user-interface.properties

10. Storing Sensitive Data: Secret

A Secret is an object type for holding sensitive information such as passwords, OAuth tokens and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

A Secret is similar to a ConfigMap; the difference is that a Secret holds sensitive data, and all of its data is base64-encoded.
Use case: credentials.
kubectl create secret supports three types of data:
· docker-registry: stores image registry authentication information
· generic: created from files, directories or literal strings, e.g. usernames and passwords
· tls: stores certificates, e.g. HTTPS certificates
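Illustrative creation commands for the three types (the registry address, credentials and certificate file paths are placeholders):

# docker-registry: image pull credentials
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com --docker-username=deploy \
  --docker-password='S3cr3t' --docker-email=deploy@example.com

# generic: arbitrary key/value data, e.g. a username and password
kubectl create secret generic db-creds \
  --from-literal=username=admin --from-literal=password='S3cr3t'

# tls: a certificate/key pair, e.g. for HTTPS
kubectl create secret tls web-tls --cert=tls.crt --key=tls.key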


[root@k8s-m1 chp8]# vim secret.yml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: nginx
    env:
    - name: USER
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: username
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: password
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
  volumes:
  - name: secret-volume
    secret:
      secretName: db-user-pass
  restartPolicy: Never

# Secret usage example
[root@k8s-m1 chp8]# echo -n 'admin' >./username.txt
[root@k8s-m1 chp8]# echo -n '1f2dle2e67df' >./password.txt
[root@k8s-m1 chp8]# kubectl create secret generic db-user-pass --from-file=username=./username.txt --from-file=password=./password.txt
secret/db-user-pass created

[root@k8s-m1 chp8]# kubectl apply -f secret.yml
pod/secret-env-pod configured
[root@k8s-m1 chp8]# kubectl get po|grep secret
secret-env-pod   1/1   Running   0   64s
[root@k8s-m1 chp8]# kubectl exec secret-env-pod -it -- bash
root@secret-env-pod:/# echo $USER
admin
root@secret-env-pod:/# echo $PASSWORD
1f2dle2e67df
root@secret-env-pod:/# ls /etc/secret-volume/
password  username
root@secret-env-pod:/# cat /etc/secret-volume/password
1f2dle2e67df
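Because Secret values are only base64-encoded, anyone with API access to the Secret can read them back and decode them — a short sketch for inspecting the Secret created above:

# show the base64-encoded data
kubectl get secret db-user-pass -o yaml

# decode a single key
kubectl get secret db-user-pass -o jsonpath='{.data.username}' | base64 -d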
