
K8S(5)HPA

Published: 2024/3/12

Contents

  • 1. HPA Overview
  • 2. HPA Versions
  • 3. HPA Deployment
    • (1) Deploy metrics-server
    • (2) Create a Deployment
    • (3) Create an HPA based on CPU
    • (4) Create an HPA based on memory

The complete yaml file for metrics-server is included below.

1. HPA Overview

  • HPA stands for Horizontal Pod Autoscaler. It automatically increases or decreases the number of Pods based on observed CPU or memory utilization, or on custom metrics. HPA does not apply to objects that cannot be scaled out or in, such as a DaemonSet; it is usually used with a Deployment.
  • The HPA controller periodically adjusts the replica count of an RC or Deployment so that the number of Pods matches the user-defined rule.
  • Since scaling decisions are driven by CPU, memory, and similar metrics, HPA needs a component that can monitor these resources. There are several choices, such as metrics-server and Heapster; metrics-server is used here.

metrics-server collects CPU and memory usage from each node's kubelet and exposes it through the aggregated metrics API (metrics.k8s.io) on the api-server, where the HPA controller reads it.
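The scaling rule the controller applies to these metrics can be sketched in a few lines of Python. This is an illustration of the proportional algorithm, not the controller's actual code; the function name and the sample values are mine:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Proportional scaling rule: grow or shrink the replica count by the
    ratio of the observed metric to the target metric, rounding up."""
    ratio = current_metric / target_metric
    return math.ceil(current_replicas * ratio)

# 2 pods averaging 200m CPU against a 100m target -> scale up to 4 pods
print(desired_replicas(2, 200, 100))   # 4
# 5 pods averaging 20m against a 100m target -> scale down to 1 pod
print(desired_replicas(5, 20, 100))    # 1
```

In practice the controller also clamps the result between minReplicas and maxReplicas, and skips changes when the ratio is already close to 1.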

2. HPA Versions

  • List all available HPA API versions:

[root@master C]# kubectl api-versions | grep autoscaling
autoscaling/v1        # only supports CPU as the reference metric for changing the Pod replica count
autoscaling/v2beta1   # supports CPU, memory, connection counts, or custom rules as reference metrics
autoscaling/v2beta2   # largely the same as v2beta1
  • Check the version currently in use:

[root@master C]# kubectl explain hpa
KIND:     HorizontalPodAutoscaler
VERSION:  autoscaling/v1    # the default version in use is v1

DESCRIPTION:
     configuration of a horizontal pod autoscaler.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec	<Object>
     behaviour of autoscaler. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.

   status	<Object>
     current information about the autoscaler.
  • Query a specific version. This does not change the default; the version only applies to this one command:

[root@master C]# kubectl explain hpa --api-version=autoscaling/v2beta1
KIND:     HorizontalPodAutoscaler
VERSION:  autoscaling/v2beta1

DESCRIPTION:
     HorizontalPodAutoscaler is the configuration for a horizontal pod
     autoscaler, which automatically manages the replica count of any resource
     implementing the scale subresource based on the metrics specified.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata	<Object>
     metadata is the standard object metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec	<Object>
     spec is the specification for the behaviour of the autoscaler. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.

   status	<Object>
     status is the current information about the autoscaler.

3. HPA Deployment

(1) Deploy metrics-server

[root@master kube-system]# kubectl top nodes   # check node metrics; this fails because metrics-server is not installed yet
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
  • Write the yaml file; pay attention to the port and the image:
[root@master kube-system]# vim components-v0.5.0.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: registry.cn-shenzhen.aliyuncs.com/zengfengjin/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
  • Apply it:
[root@master kube-system]# kubectl apply -f components-v0.5.0.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

# check the created pod
[root@master kube-system]# kubectl get pods -n kube-system | egrep 'NAME|metrics-server'
NAME                              READY   STATUS              RESTARTS   AGE
metrics-server-5944675dfb-q6cdd   0/1     ContainerCreating   0          6s

# check the logs
[root@master kube-system]# kubectl logs metrics-server-5944675dfb-q6cdd -n kube-system
I0718 03:06:39.064633       1 serving.go:341] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0718 03:06:39.870097       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0718 03:06:39.870122       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0718 03:06:39.870159       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0718 03:06:39.870160       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0718 03:06:39.870105       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0718 03:06:39.871166       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0718 03:06:39.872804       1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0718 03:06:39.875741       1 secure_serving.go:197] Serving securely on [::]:4443
I0718 03:06:39.876050       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0718 03:06:39.970469       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0718 03:06:39.970575       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0718 03:06:39.971610       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController

# if it errors, edit the apiserver manifest (a static-pod yaml managed by k8s itself)
[root@master kube-system]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
40     - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
41     - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
42     - --enable-aggregator-routing=true        # add this line
43     image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
44     imagePullPolicy: IfNotPresent
# save and quit
[root@master kube-system]# systemctl restart kubelet   # restart kubelet after the change

# check node metrics again
[root@master kube-system]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   327m         4%     3909Mi          23%
node     148m         1%     1327Mi          8%
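As a sanity check on the kubectl top output, the CPU% column is simply usage divided by the node's allocatable CPU. A small sketch, assuming for illustration an 8-core master node (the real core count is not shown in this article):

```python
def node_cpu_percent(usage_millicores: float, allocatable_cores: int) -> int:
    """Reproduce kubectl top's CPU% column: usage / allocatable CPU, as a percent."""
    return round(usage_millicores / (allocatable_cores * 1000) * 100)

# 327m on an assumed 8-core node -> about 4%, consistent with the output above
print(node_cpu_percent(327, 8))   # 4
```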

(2) Create a Deployment

  • Create an nginx deployment:
[root@master test]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:       # a requests declaration is required for HPA to work
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
  selector:
    run: nginx
  • Test access:
[root@master test]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE   NOMINATED NODE   READINESS GATES
nginx-9cb8d65b5-tq9v4   1/1     Running   0          14m   10.244.1.22   node   <none>           <none>
[root@master test]# kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   172.16.169.27   <none>        80/TCP    15m
[root@master test]# kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            run=nginx
Annotations:       <none>
Selector:          run=nginx
Type:              ClusterIP
IP:                172.16.169.27
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.22:80
Session Affinity:  None
Events:            <none>
[root@node test]# curl 172.16.169.27   # access works
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
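The comment in the Deployment above notes that HPA needs resources.requests. That is because HPA computes utilization against the pod's request, not its limit. A small sketch; the 54m usage figure is a made-up example:

```python
def pod_cpu_utilization(usage_millicores: float, request_millicores: float) -> int:
    """HPA's notion of CPU utilization: actual usage relative to the
    pod's declared resource request, as a percentage."""
    return round(usage_millicores / request_millicores * 100)

# e.g. 54m of usage against the 200m request declared above -> 27%
print(pod_cpu_utilization(54, 200))   # 27
```

Without a request, this ratio is undefined, and the HPA reports `<unknown>` as the target.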

(3) Create an HPA based on CPU

# create an HPA with a 20% CPU utilization target, at most 10 pods and at least 1;
# no version is specified here, so the default v1 is used, and v1 only supports CPU as the metric
[root@master test]# kubectl autoscale deployment nginx --cpu-percent=20 --min=1 --max=10
horizontalpodautoscaler.autoscaling/nginx autoscaled

# TARGETS shows the current/target utilization
[root@master test]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   0%/20%    1         10        1          86s

# create a test pod to generate load; the address must be the pod's svc address
[root@master ~]# kubectl run busybox -it --image=busybox -- /bin/sh -c 'while true; do wget -q -O- http://10.244.1.22; done'

# after a minute, check the utilization again; REPLICAS is the current pod count
[root@master test]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   27%/20%   1         10        5          54m

[root@master test]# kubectl get pods   # the pod count has grown to 5
NAME                    READY   STATUS    RESTARTS   AGE
busybox                 1/1     Running   0          119s
nginx-9cb8d65b5-24dg2   1/1     Running   0          57s
nginx-9cb8d65b5-c6n98   1/1     Running   0          87s
nginx-9cb8d65b5-ksjzv   1/1     Running   0          57s
nginx-9cb8d65b5-n77fm   1/1     Running   0          87s
nginx-9cb8d65b5-tq9v4   1/1     Running   0          84m
[root@master test]# kubectl get deployments.apps
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   5/5     5            5           84m

# now stop the load test and check the pod count and utilization a few minutes later
[root@master test]# kubectl delete pod busybox   # delete the pod once the load test is stopped
[root@master test]# kubectl get hpa   # utilization is back to 0, but REPLICAS is still 5; scale-down takes a while
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   0%/20%    1         10        5          58m

# a few minutes later, the pod count is back to 1
[root@master test]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   0%/20%    1         10        1          64m
[root@master test]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-9cb8d65b5-tq9v4   1/1     Running   0          95m
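The behaviour in this transcript — scaling up at 27%/20%, and doing nothing once utilization is close to target — can be sketched as a single HPA evaluation step. This models the proportional rule with its default ~10% tolerance band and min/max clamping; it is an illustration, not the controller's code:

```python
import math

def hpa_decision(current: int, current_util: float, target_util: float,
                 min_replicas: int = 1, max_replicas: int = 10,
                 tolerance: float = 0.1) -> int:
    """One HPA evaluation: no change inside the tolerance band around the
    target, otherwise scale proportionally and clamp to [min, max]."""
    ratio = current_util / target_util
    if abs(ratio - 1.0) <= tolerance:
        return current                      # close enough to target: do nothing
    return max(min_replicas, min(max_replicas, math.ceil(current * ratio)))

print(hpa_decision(1, 27, 20))   # 27%/20% -> one pod scales up to 2
print(hpa_decision(5, 19, 20))   # within 10% of target -> stays at 5
print(hpa_decision(5, 0, 20))    # load gone -> back down to minReplicas (1)
```

Real scale-down is additionally delayed by a stabilization window, which is why the replica count above sits at 5 for several minutes after the load stops.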

(4) Create an HPA based on memory

# first delete the resources created above
[root@master test]# kubectl delete horizontalpodautoscalers.autoscaling nginx
horizontalpodautoscaler.autoscaling "nginx" deleted
[root@master test]# kubectl delete -f nginx.yaml
deployment.apps "nginx" deleted
service "nginx" deleted
  • Rewrite the yaml file:
[root@master test]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
            memory: 60Mi
          requests:
            cpu: 200m
            memory: 25Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
  selector:
    run: nginx

[root@master test]# kubectl apply -f nginx.yaml
deployment.apps/nginx created
service/nginx created
  • Create the HPA:
[root@master test]# vim hpa-nginx.yaml
apiVersion: autoscaling/v2beta1   # as covered in the versions section, a memory-based HPA needs this version
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  maxReplicas: 10   # pod count bounded between 1 and 10
  minReplicas: 1
  scaleTargetRef:   # the resource the HPA manages; apiVersion, kind, and name must match the object created above
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 50   # target 50% memory utilization
[root@master test]# kubectl apply -f hpa-nginx.yaml
horizontalpodautoscaler.autoscaling/nginx-hpa created
[root@master test]# kubectl get hpa
NAME        REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   7%/50%    1         10        1          59s
  • Switch to another terminal to test:
# run a command inside the pod to increase the memory load
[root@master ~]# kubectl exec -it nginx-78f4944bb8-2rz7j -- /bin/sh -c 'dd if=/dev/zero of=/tmp/file1'
  • Wait for the load to rise, then check the pod count and memory utilization:
[root@master test]# kubectl get hpa
NAME        REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   137%/50%   1         10        1          12m
[root@master test]# kubectl get hpa
NAME        REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   14%/50%   1         10        3          12m
[root@master test]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-78f4944bb8-2rz7j   1/1     Running   0          21m
nginx-78f4944bb8-bxh78   1/1     Running   0          34s
nginx-78f4944bb8-g8w2h   1/1     Running   0          34s
# as with CPU, pods are created automatically when memory usage rises
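The TARGETS numbers above follow from targetAverageUtilization: mean usage across the Pods divided by the per-pod request (25Mi here). A small sketch; the per-pod usage figures are made-up examples chosen to match the percentages shown:

```python
def average_utilization(usages_mi: list, request_mi: float) -> int:
    """targetAverageUtilization compares the mean usage across all pods
    to the per-pod resource request, as a percentage."""
    return round(sum(usages_mi) / len(usages_mi) / request_mi * 100)

# a single pod around 34Mi against the 25Mi request -> about 137%, as shown above
print(average_utilization([34.25], 25))      # 137
# once more pods exist, the same total usage averages out to a lower percentage
print(average_utilization([4, 3, 3.5], 25))  # 14
```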

Summary

HPA automatically adjusts a Deployment's replica count using metrics collected by metrics-server. The default autoscaling/v1 API supports only CPU as the scaling metric, while autoscaling/v2beta1 adds memory and custom metrics; in either case, the target Pods must declare resource requests so that utilization can be computed.