

A Deep Dive into kubebuilder, the Kubernetes CRD Development Tool

Published: 2025/3/21

Original article: https://blog.csdn.net/u012986012/article/details/120271091

The Conventional Development Flow

Without any Operator scaffolding, how would we implement an Operator ourselves? Roughly in the following steps:

  • Define the CRD
  • Develop the controller and write its logic
  • Test and deploy

API Definition

First, use the k8s.io/code-generator project to generate the API-related code, and define the resource's fields.

Controller Implementation

For the controller implementation, take the official sample-controller as an example, shown in the following diagram:

(Figure: client-go controller interaction — https://github.com/kubernetes/sample-controller/raw/master/docs/images/client-go-controller-interaction.jpeg)

The implementation breaks down into the following steps.

Initialize the client configuration

// Build the client config from the master URL / kubeconfig
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
	klog.Fatalf("Error building kubeconfig: %s", err.Error())
}
// kubernetes client
kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
	klog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}
// crd client
exampleClient, err := clientset.NewForConfig(cfg)
if err != nil {
	klog.Fatalf("Error building example clientset: %s", err.Error())
}

Initialize and start the informers

// k8s sharedInformer
kubeInformerFactory := kubeinformers.NewSharedInformerFactory(kubeClient, time.Second*30)
// crd sharedInformer
exampleInformerFactory := informers.NewSharedInformerFactory(exampleClient, time.Second*30)

// Initialize the controller, registering the Deployment and Foo informers
controller := NewController(kubeClient, exampleClient,
	kubeInformerFactory.Apps().V1().Deployments(),
	exampleInformerFactory.Samplecontroller().V1alpha1().Foos())
// Start the informers
kubeInformerFactory.Start(stopCh)
exampleInformerFactory.Start(stopCh)

Finally, start the controller:

if err = controller.Run(2, stopCh); err != nil {
	klog.Fatalf("Error running controller: %s", err.Error())
}

In the controller implementation, initialization happens in NewController:

func NewController(
	kubeclientset kubernetes.Interface,
	sampleclientset clientset.Interface,
	deploymentInformer appsinformers.DeploymentInformer,
	fooInformer informers.FooInformer) *Controller {

	// Create event broadcaster
	utilruntime.Must(samplescheme.AddToScheme(scheme.Scheme))
	klog.V(4).Info("Creating event broadcaster")
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartStructuredLogging(0)
	eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: kubeclientset.CoreV1().Events("")})
	recorder := eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: controllerAgentName})

	controller := &Controller{
		kubeclientset:     kubeclientset,
		sampleclientset:   sampleclientset,
		deploymentsLister: deploymentInformer.Lister(),             // read-only cache
		deploymentsSynced: deploymentInformer.Informer().HasSynced, // calling Informer() registers the informer with the shared factory
		foosLister:        fooInformer.Lister(),
		foosSynced:        fooInformer.Informer().HasSynced,
		workqueue:         workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "Foos"), // initialize the work queue
		recorder:          recorder,
	}

	klog.Info("Setting up event handlers")
	// Register the event callbacks
	fooInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: controller.enqueueFoo,
		UpdateFunc: func(old, new interface{}) {
			controller.enqueueFoo(new)
		},
	})
	deploymentInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: controller.handleObject,
		UpdateFunc: func(old, new interface{}) {
			newDepl := new.(*appsv1.Deployment)
			oldDepl := old.(*appsv1.Deployment)
			if newDepl.ResourceVersion == oldDepl.ResourceVersion {
				// Periodic resync will send update events for all known Deployments.
				// Two different versions of the same Deployment will always have different RVs.
				return
			}
			controller.handleObject(new)
		},
		DeleteFunc: controller.handleObject,
	})
	return controller
}

Starting the controller is the typical Kubernetes workflow: a control loop keeps fetching objects from the work queue and processing them until they reach the desired state.

func (c *Controller) Run(workers int, stopCh <-chan struct{}) error {
	defer utilruntime.HandleCrash()
	defer c.workqueue.ShutDown()

	// Wait for the caches to sync
	klog.Info("Waiting for informer caches to sync")
	if ok := cache.WaitForCacheSync(stopCh, c.deploymentsSynced, c.foosSynced); !ok {
		return fmt.Errorf("failed to wait for caches to sync")
	}

	// Start the workers, one goroutine each
	for i := 0; i < workers; i++ {
		go wait.Until(c.runWorker, time.Second, stopCh)
	}

	// Wait for the stop signal
	<-stopCh
	return nil
}

// A worker is simply a loop that keeps calling processNextWorkItem
func (c *Controller) runWorker() {
	for c.processNextWorkItem() {
	}
}

func (c *Controller) processNextWorkItem() bool {
	// Fetch an object from the work queue
	obj, shutdown := c.workqueue.Get()
	if shutdown {
		return false
	}
	// We wrap this block in a func so we can defer c.workqueue.Done.
	err := func(obj interface{}) error {
		defer c.workqueue.Done(obj)
		var key string
		var ok bool
		if key, ok = obj.(string); !ok {
			c.workqueue.Forget(obj)
			utilruntime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
			return nil
		}
		// Core logic: process the key
		if err := c.syncHandler(key); err != nil {
			// On failure, requeue the key
			c.workqueue.AddRateLimited(key)
			return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
		}
		// On success, do not requeue
		c.workqueue.Forget(obj)
		klog.Infof("Successfully synced '%s'", key)
		return nil
	}(obj)
	if err != nil {
		utilruntime.HandleError(err)
		return true
	}
	return true
}

The Operator Pattern

In the Operator pattern, the user only needs to implement Reconcile (reconciliation) — the equivalent of syncHandler in sample-controller; kubebuilder already implements the remaining steps for us. Let's dig in and see how kubebuilder ends up triggering the Reconcile logic, step by step.

Take mygame as an example; the main file generated by kubebuilder usually looks like this:

var (
	// Used to decode Kubernetes objects
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	// Add the custom types to the scheme
	utilruntime.Must(myappv1.AddToScheme(scheme))
	//+kubebuilder:scaffold:scheme
}

func main() {
	// ...
	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	// Initialize the controller manager
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		MetricsBindAddress:     metricsAddr,
		Port:                   9443,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "7bc453ad.qingwave.github.io",
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

	// Initialize the Reconciler
	if err = (&controllers.GameReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "Game")
		os.Exit(1)
	}

	// Initialize the Webhook
	if enableWebhook {
		if err = (&myappv1.Game{}).SetupWebhookWithManager(mgr); err != nil {
			setupLog.Error(err, "unable to create webhook", "webhook", "Game")
			os.Exit(1)
		}
	}
	//+kubebuilder:scaffold:builder

	// Start the manager
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}

kubebuilder wraps controller-runtime. The main file chiefly initializes the controller manager, plus the Reconciler and Webhook that we fill in, and finally starts the manager.

Let's look at each stage in turn.

Manager Initialization

The code is as follows:

func New(config *rest.Config, options Options) (Manager, error) {
	// Apply default options
	options = setOptionsDefaults(options)

	// Initialize the cluster
	cluster, err := cluster.New(config, func(clusterOptions *cluster.Options) {
		clusterOptions.Scheme = options.Scheme
		clusterOptions.MapperProvider = options.MapperProvider
		clusterOptions.Logger = options.Logger
		clusterOptions.SyncPeriod = options.SyncPeriod
		clusterOptions.Namespace = options.Namespace
		clusterOptions.NewCache = options.NewCache
		clusterOptions.ClientBuilder = options.ClientBuilder
		clusterOptions.ClientDisableCacheFor = options.ClientDisableCacheFor
		clusterOptions.DryRunClient = options.DryRunClient
		clusterOptions.EventBroadcaster = options.EventBroadcaster
	})
	if err != nil {
		return nil, err
	}

	// Initialize the event recorder
	recorderProvider, err := options.newRecorderProvider(config, cluster.GetScheme(), options.Logger.WithName("events"), options.makeBroadcaster)
	if err != nil {
		return nil, err
	}

	// Resource-lock config for leader election
	leaderConfig := options.LeaderElectionConfig
	if leaderConfig == nil {
		leaderConfig = rest.CopyConfig(config)
	}
	resourceLock, err := options.newResourceLock(leaderConfig, recorderProvider, leaderelection.Options{
		LeaderElection:             options.LeaderElection,
		LeaderElectionResourceLock: options.LeaderElectionResourceLock,
		LeaderElectionID:           options.LeaderElectionID,
		LeaderElectionNamespace:    options.LeaderElectionNamespace,
	})
	if err != nil {
		return nil, err
	}

	// ...
	return &controllerManager{
		cluster:                 cluster,
		recorderProvider:        recorderProvider,
		resourceLock:            resourceLock,
		metricsListener:         metricsListener,
		metricsExtraHandlers:    metricsExtraHandlers,
		logger:                  options.Logger,
		elected:                 make(chan struct{}),
		port:                    options.Port,
		host:                    options.Host,
		certDir:                 options.CertDir,
		leaseDuration:           *options.LeaseDuration,
		renewDeadline:           *options.RenewDeadline,
		retryPeriod:             *options.RetryPeriod,
		healthProbeListener:     healthProbeListener,
		readinessEndpointName:   options.ReadinessEndpointName,
		livenessEndpointName:    options.LivenessEndpointName,
		gracefulShutdownTimeout: *options.GracefulShutdownTimeout,
		internalProceduresStop:  make(chan struct{}),
		leaderElectionStopped:   make(chan struct{}),
	}, nil
}

New mainly initializes the various options — ports, leader-election settings, the eventRecorder — and, most importantly, the Cluster. The Cluster is what is used to access Kubernetes, and is initialized as follows:

// New constructs a brand new cluster
func New(config *rest.Config, opts ...Option) (Cluster, error) {
	if config == nil {
		return nil, errors.New("must specify Config")
	}

	options := Options{}
	for _, opt := range opts {
		opt(&options)
	}
	options = setOptionsDefaults(options)

	// Create the mapper provider
	mapper, err := options.MapperProvider(config)
	if err != nil {
		options.Logger.Error(err, "Failed to get API Group-Resources")
		return nil, err
	}

	// Create the cache for the cached read client and registering informers
	cache, err := options.NewCache(config, cache.Options{Scheme: options.Scheme, Mapper: mapper, Resync: options.SyncPeriod, Namespace: options.Namespace})
	if err != nil {
		return nil, err
	}

	clientOptions := client.Options{Scheme: options.Scheme, Mapper: mapper}

	apiReader, err := client.New(config, clientOptions)
	if err != nil {
		return nil, err
	}

	writeObj, err := options.ClientBuilder.
		WithUncached(options.ClientDisableCacheFor...).
		Build(cache, config, clientOptions)
	if err != nil {
		return nil, err
	}

	if options.DryRunClient {
		writeObj = client.NewDryRunClient(writeObj)
	}

	recorderProvider, err := options.newRecorderProvider(config, options.Scheme, options.Logger.WithName("events"), options.makeBroadcaster)
	if err != nil {
		return nil, err
	}

	return &cluster{
		config:           config,
		scheme:           options.Scheme,
		cache:            cache,
		fieldIndexes:     cache,
		client:           writeObj,
		apiReader:        apiReader,
		recorderProvider: recorderProvider,
		mapper:           mapper,
		logger:           options.Logger,
	}, nil
}

This mainly creates the cache and the read and write clients.

Cache Initialization

The cache-creation code:

// New initializes and returns a new Cache.
func New(config *rest.Config, opts Options) (Cache, error) {
	opts, err := defaultOpts(config, opts)
	if err != nil {
		return nil, err
	}
	im := internal.NewInformersMap(config, opts.Scheme, opts.Mapper, *opts.Resync, opts.Namespace)
	return &informerCache{InformersMap: im}, nil
}

New calls NewInformersMap to create the informer map, which is split into structured, unstructured, and metadata variants:

func NewInformersMap(config *rest.Config,
	scheme *runtime.Scheme,
	mapper meta.RESTMapper,
	resync time.Duration,
	namespace string) *InformersMap {
	return &InformersMap{
		structured:   newStructuredInformersMap(config, scheme, mapper, resync, namespace),
		unstructured: newUnstructuredInformersMap(config, scheme, mapper, resync, namespace),
		metadata:     newMetadataInformersMap(config, scheme, mapper, resync, namespace),
		Scheme:       scheme,
	}
}

All of these ultimately call newSpecificInformersMap:

// newStructuredInformersMap creates a new InformersMap for structured objects.
func newStructuredInformersMap(config *rest.Config, scheme *runtime.Scheme, mapper meta.RESTMapper, resync time.Duration, namespace string) *specificInformersMap {
	return newSpecificInformersMap(config, scheme, mapper, resync, namespace, createStructuredListWatch)
}

func newSpecificInformersMap(config *rest.Config,
	scheme *runtime.Scheme,
	mapper meta.RESTMapper,
	resync time.Duration,
	namespace string,
	createListWatcher createListWatcherFunc) *specificInformersMap {
	ip := &specificInformersMap{
		config:            config,
		Scheme:            scheme,
		mapper:            mapper,
		informersByGVK:    make(map[schema.GroupVersionKind]*MapEntry),
		codecs:            serializer.NewCodecFactory(scheme),
		paramCodec:        runtime.NewParameterCodec(scheme),
		resync:            resync,
		startWait:         make(chan struct{}),
		createListWatcher: createListWatcher,
		namespace:         namespace,
	}
	return ip
}

func createStructuredListWatch(gvk schema.GroupVersionKind, ip *specificInformersMap) (*cache.ListWatch, error) {
	// Kubernetes APIs work against Resources, not GroupVersionKinds. Map the
	// groupVersionKind to the Resource API we will use.
	mapping, err := ip.mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
	if err != nil {
		return nil, err
	}

	client, err := apiutil.RESTClientForGVK(gvk, false, ip.config, ip.codecs)
	if err != nil {
		return nil, err
	}
	listGVK := gvk.GroupVersion().WithKind(gvk.Kind + "List")
	listObj, err := ip.Scheme.New(listGVK)
	if err != nil {
		return nil, err
	}

	// TODO: the functions that make use of this ListWatch should be adapted to
	// pass in their own contexts instead of relying on this fixed one here.
	ctx := context.TODO()
	// Create a new ListWatch for the obj
	return &cache.ListWatch{
		ListFunc: func(opts metav1.ListOptions) (runtime.Object, error) {
			res := listObj.DeepCopyObject()
			isNamespaceScoped := ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot
			err := client.Get().NamespaceIfScoped(ip.namespace, isNamespaceScoped).Resource(mapping.Resource.Resource).VersionedParams(&opts, ip.paramCodec).Do(ctx).Into(res)
			return res, err
		},
		// Setup the watch function
		WatchFunc: func(opts metav1.ListOptions) (watch.Interface, error) {
			// Watch needs to be set to true separately
			opts.Watch = true
			isNamespaceScoped := ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot
			return client.Get().NamespaceIfScoped(ip.namespace, isNamespaceScoped).Resource(mapping.Resource.Resource).VersionedParams(&opts, ip.paramCodec).Watch(ctx)
		},
	}, nil
}

In newSpecificInformersMap, informersByGVK records which informer corresponds to each GVK in the scheme; at use time, the informer for a GVK can be looked up and then used for List/Get.

The createListWatcher passed to newSpecificInformersMap initializes the ListWatch object.

Client Initialization

There are several kinds of client here: apiReader reads objects directly from the apiserver, while writeObj can read from either the apiserver or the cache.

apiReader, err := client.New(config, clientOptions)
if err != nil {
	return nil, err
}

func New(config *rest.Config, options Options) (Client, error) {
	if config == nil {
		return nil, fmt.Errorf("must provide non-nil rest.Config to client.New")
	}

	// Init a scheme if none provided
	if options.Scheme == nil {
		options.Scheme = scheme.Scheme
	}

	// Init a Mapper if none provided
	if options.Mapper == nil {
		var err error
		options.Mapper, err = apiutil.NewDynamicRESTMapper(config)
		if err != nil {
			return nil, err
		}
	}

	// Per-type client cache shared by the typed and unstructured clients
	clientcache := &clientCache{
		config:                     config,
		scheme:                     options.Scheme,
		mapper:                     options.Mapper,
		codecs:                     serializer.NewCodecFactory(options.Scheme),
		structuredResourceByType:   make(map[schema.GroupVersionKind]*resourceMeta),
		unstructuredResourceByType: make(map[schema.GroupVersionKind]*resourceMeta),
	}

	rawMetaClient, err := metadata.NewForConfig(config)
	if err != nil {
		return nil, fmt.Errorf("unable to construct metadata-only client for use as part of client: %w", err)
	}

	c := &client{
		typedClient: typedClient{
			cache:      clientcache,
			paramCodec: runtime.NewParameterCodec(options.Scheme),
		},
		unstructuredClient: unstructuredClient{
			cache:      clientcache,
			paramCodec: noConversionParamCodec{},
		},
		metadataClient: metadataClient{
			client:     rawMetaClient,
			restMapper: options.Mapper,
		},
		scheme: options.Scheme,
		mapper: options.Mapper,
	}
	return c, nil
}

writeObj implements a client with separated read and write paths: writes go directly to the apiserver; reads come from the cache when the object is cached, and otherwise go through the clientset.

writeObj, err := options.ClientBuilder.
	WithUncached(options.ClientDisableCacheFor...).
	Build(cache, config, clientOptions)
if err != nil {
	return nil, err
}

func (n *newClientBuilder) Build(cache cache.Cache, config *rest.Config, options client.Options) (client.Client, error) {
	// Create the Client for Write operations.
	c, err := client.New(config, options)
	if err != nil {
		return nil, err
	}

	return client.NewDelegatingClient(client.NewDelegatingClientInput{
		CacheReader:     cache,
		Client:          c,
		UncachedObjects: n.uncached,
	})
}

// Client with separated read and write paths
func NewDelegatingClient(in NewDelegatingClientInput) (Client, error) {
	uncachedGVKs := map[schema.GroupVersionKind]struct{}{}
	for _, obj := range in.UncachedObjects {
		gvk, err := apiutil.GVKForObject(obj, in.Client.Scheme())
		if err != nil {
			return nil, err
		}
		uncachedGVKs[gvk] = struct{}{}
	}

	return &delegatingClient{
		scheme: in.Client.Scheme(),
		mapper: in.Client.RESTMapper(),
		Reader: &delegatingReader{
			CacheReader:       in.CacheReader,
			ClientReader:      in.Client,
			scheme:            in.Client.Scheme(),
			uncachedGVKs:      uncachedGVKs,
			cacheUnstructured: in.CacheUnstructured,
		},
		Writer:       in.Client,
		StatusClient: in.Client,
	}, nil
}

// Get retrieves an obj for a given object key from the Kubernetes Cluster.
func (d *delegatingReader) Get(ctx context.Context, key ObjectKey, obj Object) error {
	// Choose the reader depending on whether the object is cached
	if isUncached, err := d.shouldBypassCache(obj); err != nil {
		return err
	} else if isUncached {
		return d.ClientReader.Get(ctx, key, obj)
	}
	return d.CacheReader.Get(ctx, key, obj)
}

Controller Initialization

The controller initialization code is as follows:

func (r *GameReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		WithOptions(controller.Options{
			MaxConcurrentReconciles: 3,
		}).
		For(&myappv1.Game{}).       // the resource to reconcile
		Owns(&appsv1.Deployment{}). // watch Deployments whose owner is this resource
		Complete(r)
}

// Complete builds the Application ControllerManagedBy.
func (blder *Builder) Complete(r reconcile.Reconciler) error {
	_, err := blder.Build(r)
	return err
}

// Build builds the Application ControllerManagedBy and returns the Controller it created.
func (blder *Builder) Build(r reconcile.Reconciler) (controller.Controller, error) {
	if r == nil {
		return nil, fmt.Errorf("must provide a non-nil Reconciler")
	}
	if blder.mgr == nil {
		return nil, fmt.Errorf("must provide a non-nil Manager")
	}
	if blder.forInput.err != nil {
		return nil, blder.forInput.err
	}
	// Checking the reconcile type exist or not
	if blder.forInput.object == nil {
		return nil, fmt.Errorf("must provide an object for reconciliation")
	}

	// Set the Config
	blder.loadRestConfig()

	// Set the ControllerManagedBy
	if err := blder.doController(r); err != nil {
		return nil, err
	}

	// Set the Watch
	if err := blder.doWatch(); err != nil {
		return nil, err
	}

	return blder.ctrl, nil
}

Controller initialization calls ctrl.NewControllerManagedBy to create a Builder and fill in its options, then finishes via the Build method, which mainly does three things:

  • Set the config
  • Create the controller via doController
  • Set up the watched resources via doWatch
Let's look at the controller creation first:

func (blder *Builder) doController(r reconcile.Reconciler) error {
	ctrlOptions := blder.ctrlOptions
	if ctrlOptions.Reconciler == nil {
		ctrlOptions.Reconciler = r
	}

	gvk, err := getGvk(blder.forInput.object, blder.mgr.GetScheme())
	if err != nil {
		return err
	}

	// Setup the logger.
	if ctrlOptions.Log == nil {
		ctrlOptions.Log = blder.mgr.GetLogger()
	}
	ctrlOptions.Log = ctrlOptions.Log.WithValues("reconciler group", gvk.Group, "reconciler kind", gvk.Kind)

	// Build the controller and return.
	blder.ctrl, err = newController(blder.getControllerName(gvk), blder.mgr, ctrlOptions)
	return err
}

func New(name string, mgr manager.Manager, options Options) (Controller, error) {
	c, err := NewUnmanaged(name, mgr, options)
	if err != nil {
		return nil, err
	}

	// Add the controller as a Manager components
	return c, mgr.Add(c)
}

func NewUnmanaged(name string, mgr manager.Manager, options Options) (Controller, error) {
	if options.Reconciler == nil {
		return nil, fmt.Errorf("must specify Reconciler")
	}
	if len(name) == 0 {
		return nil, fmt.Errorf("must specify Name for Controller")
	}
	if options.Log == nil {
		options.Log = mgr.GetLogger()
	}
	if options.MaxConcurrentReconciles <= 0 {
		options.MaxConcurrentReconciles = 1
	}
	if options.CacheSyncTimeout == 0 {
		options.CacheSyncTimeout = 2 * time.Minute
	}
	if options.RateLimiter == nil {
		options.RateLimiter = workqueue.DefaultControllerRateLimiter()
	}

	// Inject dependencies into Reconciler
	if err := mgr.SetFields(options.Reconciler); err != nil {
		return nil, err
	}

	// Create controller with dependencies set
	return &controller.Controller{
		Do: options.Reconciler,
		MakeQueue: func() workqueue.RateLimitingInterface {
			return workqueue.NewNamedRateLimitingQueue(options.RateLimiter, name)
		},
		MaxConcurrentReconciles: options.MaxConcurrentReconciles,
		CacheSyncTimeout:        options.CacheSyncTimeout,
		SetFields:               mgr.SetFields,
		Name:                    name,
		Log:                     options.Log.WithName("controller").WithName(name),
	}, nil
}

doController calls controller.New to create the controller and add it to the manager. NewUnmanaged contains familiar configuration: just as in sample-controller above, the work queue, the maximum number of workers, and so on are set up here.

The doWatch code is as follows:

func (blder *Builder) doWatch() error {
	// Reconcile type
	typeForSrc, err := blder.project(blder.forInput.object, blder.forInput.objectProjection)
	if err != nil {
		return err
	}
	src := &source.Kind{Type: typeForSrc}
	hdler := &handler.EnqueueRequestForObject{}
	allPredicates := append(blder.globalPredicates, blder.forInput.predicates...)
	if err := blder.ctrl.Watch(src, hdler, allPredicates...); err != nil {
		return err
	}

	// Watches the managed types
	for _, own := range blder.ownsInput {
		typeForSrc, err := blder.project(own.object, own.objectProjection)
		if err != nil {
			return err
		}
		src := &source.Kind{Type: typeForSrc}
		hdler := &handler.EnqueueRequestForOwner{
			OwnerType:    blder.forInput.object,
			IsController: true,
		}
		allPredicates := append([]predicate.Predicate(nil), blder.globalPredicates...)
		allPredicates = append(allPredicates, own.predicates...)
		if err := blder.ctrl.Watch(src, hdler, allPredicates...); err != nil {
			return err
		}
	}

	// Do the watch requests
	for _, w := range blder.watchesInput {
		allPredicates := append([]predicate.Predicate(nil), blder.globalPredicates...)
		allPredicates = append(allPredicates, w.predicates...)

		// If the source of this watch is of type *source.Kind, project it.
		if srckind, ok := w.src.(*source.Kind); ok {
			typeForSrc, err := blder.project(srckind.Type, w.objectProjection)
			if err != nil {
				return err
			}
			srckind.Type = typeForSrc
		}
		if err := blder.ctrl.Watch(w.src, w.eventhandler, allPredicates...); err != nil {
			return err
		}
	}
	return nil
}

doWatch watches, in turn, the reconciled resource itself, the ownsInput resources (those whose owner is the current resource), and any watchesInput passed through the builder, registering each via ctrl.Watch. The eventhandler argument is the enqueue function: for the reconciled resource it is handler.EnqueueRequestForObject, and similarly handler.EnqueueRequestForOwner enqueues the object's owner.

type EnqueueRequestForObject struct{}

// Create implements EventHandler
func (e *EnqueueRequestForObject) Create(evt event.CreateEvent, q workqueue.RateLimitingInterface) {
	if evt.Object == nil {
		enqueueLog.Error(nil, "CreateEvent received with no metadata", "event", evt)
		return
	}

	// Add the request to the work queue
	q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
		Name:      evt.Object.GetName(),
		Namespace: evt.Object.GetNamespace(),
	}})
}

Watch is implemented as follows:

func (c *Controller) Watch(src source.Source, evthdler handler.EventHandler, prct ...predicate.Predicate) error {
	c.mu.Lock()
	defer c.mu.Unlock()

	// Inject Cache into arguments
	if err := c.SetFields(src); err != nil {
		return err
	}
	if err := c.SetFields(evthdler); err != nil {
		return err
	}
	for _, pr := range prct {
		if err := c.SetFields(pr); err != nil {
			return err
		}
	}

	if !c.Started {
		c.startWatches = append(c.startWatches, watchDescription{src: src, handler: evthdler, predicates: prct})
		return nil
	}

	c.Log.Info("Starting EventSource", "source", src)
	return src.Start(c.ctx, evthdler, c.Queue, prct...)
}

func (ks *Kind) InjectCache(c cache.Cache) error {
	if ks.cache == nil {
		ks.cache = c
	}
	return nil
}

func (ks *Kind) Start(ctx context.Context, handler handler.EventHandler, queue workqueue.RateLimitingInterface,
	prct ...predicate.Predicate) error {
	...
	i, err := ks.cache.GetInformer(ctx, ks.Type)
	if err != nil {
		if kindMatchErr, ok := err.(*meta.NoKindMatchError); ok {
			log.Error(err, "if kind is a CRD, it should be installed before calling Start",
				"kind", kindMatchErr.GroupKind)
		}
		return err
	}
	i.AddEventHandler(internal.EventHandler{Queue: queue, EventHandler: handler, Predicates: prct})
	return nil
}

// Informer Get implementation
func (m *InformersMap) Get(ctx context.Context, gvk schema.GroupVersionKind, obj runtime.Object) (bool, *MapEntry, error) {
	switch obj.(type) {
	case *unstructured.Unstructured:
		return m.unstructured.Get(ctx, gvk, obj)
	case *unstructured.UnstructuredList:
		return m.unstructured.Get(ctx, gvk, obj)
	case *metav1.PartialObjectMetadata:
		return m.metadata.Get(ctx, gvk, obj)
	case *metav1.PartialObjectMetadataList:
		return m.metadata.Get(ctx, gvk, obj)
	default:
		return m.structured.Get(ctx, gvk, obj)
	}
}

// If the informer does not exist yet, create one and add it to the informer map
func (ip *specificInformersMap) Get(ctx context.Context, gvk schema.GroupVersionKind, obj runtime.Object) (bool, *MapEntry, error) {
	// Return the informer if it is found
	i, started, ok := func() (*MapEntry, bool, bool) {
		ip.mu.RLock()
		defer ip.mu.RUnlock()
		i, ok := ip.informersByGVK[gvk]
		return i, ip.started, ok
	}()

	if !ok {
		var err error
		if i, started, err = ip.addInformerToMap(gvk, obj); err != nil {
			return started, nil, err
		}
	}
	...
	return started, i, nil
}

Watch injects the cache through the SetFields method and then appends the watch to the controller's startWatches queue; if the controller has already started, it calls the source's Start method to register the EventHandler callback.

Manager Startup

Finally, let's look at how the manager starts:

func (cm *controllerManager) Start(ctx context.Context) (err error) {
	if err := cm.Add(cm.cluster); err != nil {
		return fmt.Errorf("failed to add cluster to runnables: %w", err)
	}
	cm.internalCtx, cm.internalCancel = context.WithCancel(ctx)

	stopComplete := make(chan struct{})
	defer close(stopComplete)
	defer func() {
		stopErr := cm.engageStopProcedure(stopComplete)
	}()

	cm.errChan = make(chan error)

	// Serve metrics
	if cm.metricsListener != nil {
		go cm.serveMetrics()
	}

	// Serve health probes
	if cm.healthProbeListener != nil {
		go cm.serveHealthProbes()
	}

	go cm.startNonLeaderElectionRunnables()

	go func() {
		if cm.resourceLock != nil {
			err := cm.startLeaderElection()
			if err != nil {
				cm.errChan <- err
			}
		} else {
			// Treat not having leader election enabled the same as being elected.
			cm.startLeaderElectionRunnables()
			close(cm.elected)
		}
	}()

	select {
	case <-ctx.Done():
		// We are done
		return nil
	case err := <-cm.errChan:
		// Error starting or running a runnable
		return err
	}
}

The main steps are:

  • Start the metrics server
  • Start the health-probe server
  • Start the non-leader-election runnables
  • Start the leader-election runnables
For the non-leader-election runnables, the code is as follows:

func (cm *controllerManager) startNonLeaderElectionRunnables() {
	cm.mu.Lock()
	defer cm.mu.Unlock()

	cm.waitForCache(cm.internalCtx)

	// Start the non-leaderelection Runnables after the cache has synced
	for _, c := range cm.nonLeaderElectionRunnables {
		cm.startRunnable(c)
	}
}

func (cm *controllerManager) waitForCache(ctx context.Context) {
	if cm.started {
		return
	}

	for _, cache := range cm.caches {
		cm.startRunnable(cache)
	}

	for _, cache := range cm.caches {
		cache.GetCache().WaitForCacheSync(ctx)
	}

	cm.started = true
}

The caches are started first, then the other runnables. Leader-election runnables work the same way; the controller was added to the leader-election runnable queue during initialization, so the controller itself is started last:

func (c *Controller) Start(ctx context.Context) error {
	...
	c.Queue = c.MakeQueue()
	defer c.Queue.ShutDown() // needs to be outside the iife so that we shutdown after the stop channel is closed

	err := func() error {
		defer c.mu.Unlock()
		defer utilruntime.HandleCrash()

		for _, watch := range c.startWatches {
			c.Log.Info("Starting EventSource", "source", watch.src)
			if err := watch.src.Start(ctx, watch.handler, c.Queue, watch.predicates...); err != nil {
				return err
			}
		}

		for _, watch := range c.startWatches {
			syncingSource, ok := watch.src.(source.SyncingSource)
			if !ok {
				continue
			}
			if err := func() error {
				// use a context with timeout for launching sources and syncing caches.
				sourceStartCtx, cancel := context.WithTimeout(ctx, c.CacheSyncTimeout)
				defer cancel()
				if err := syncingSource.WaitForSync(sourceStartCtx); err != nil {
					err := fmt.Errorf("failed to wait for %s caches to sync: %w", c.Name, err)
					c.Log.Error(err, "Could not wait for Cache to sync")
					return err
				}
				return nil
			}(); err != nil {
				return err
			}
		}
		...
		for i := 0; i < c.MaxConcurrentReconciles; i++ {
			go wait.UntilWithContext(ctx, func(ctx context.Context) {
				for c.processNextWorkItem(ctx) {
				}
			}, c.JitterPeriod)
		}

		c.Started = true
		return nil
	}()
	if err != nil {
		return err
	}

	<-ctx.Done()
	c.Log.Info("Stopping workers")
	return nil
}

func (c *Controller) processNextWorkItem(ctx context.Context) bool {
	obj, shutdown := c.Queue.Get()
	...
	c.reconcileHandler(ctx, obj)
	return true
}

func (c *Controller) reconcileHandler(ctx context.Context, obj interface{}) {
	// Make sure that the object is a valid request.
	req, ok := obj.(reconcile.Request)
	...
	if result, err := c.Do.Reconcile(ctx, req); err != nil {
		...
	}
}

Controller startup mainly involves:

  • Waiting for the cache to sync
  • Starting multiple processNextWorkItem workers
  • Each worker calling c.Do.Reconcile to process items

This matches the sample-controller workflow: keep pulling items off the work queue and call Reconcile to reconcile them.
Flow Summary

At this point, the main logic of the kubebuilder-generated code is clear. Compared with sample-controller, the overall flow is much the same; kubebuilder, through controller-runtime, simply does a lot of the work for us, such as initializing the client and cache and providing the controller's runtime framework, so that we only need to care about the Reconcile logic:

  • Initialize the manager, creating the client and cache
  • Create the controller; for each watched resource, create the corresponding informer and register callback functions
  • Start the manager, which starts the cache and the controllers
Summary

kubebuilder greatly simplifies the process of developing an Operator. Understanding the principles behind it helps us tune an Operator and apply it in production more effectively.

References

[1] https://github.com/kubernetes/sample-controller
[2] https://book.kubebuilder.io/architecture.html
[3] https://developer.aliyun.com/article/719215

