
Automated Machine Learning (3): A Brief Overview of Neural Architecture Search (NAS)


Contents

    • Technical Introduction
      • Overview
      • Tech Stack
    • Implementation
      • Data
      • Data Loading
      • Creating and Training the Model
      • Model Prediction and Evaluation
      • Exporting the Model

Technical Introduction

Overview

Automated machine learning (AutoML) refers to methods that can build machine learning models automatically. It mainly covers three areas: first, hyperparameter optimization; second, automated feature engineering and automatic selection of machine learning algorithms; third, neural architecture search. This article focuses on the third area, neural architecture search.

The first two areas share a common trait: they only search over existing algorithms and do not create new ones. In general, when machine learning practitioners develop an application or build a model, they rarely reinvent the wheel and design a brand-new algorithm from scratch. With deep neural networks, however, the situation changes somewhat. Strictly speaking, the basic building blocks of a neural network are fixed and finite, but every time we assemble a model, a different combination of those blocks can be regarded as a new network. Against this backdrop, the third AutoML technique emerged: neural architecture search (NAS), which composes new network architectures out of basic neural network units, with the goal of producing a very strong network.

A great deal of research has already gone into this area. Because the search is carried out on top of deep neural networks, the compute requirements are usually high. One of the more capable algorithms that can run on a single GPU is ENAS (Efficient Neural Architecture Search), and the open-source library AutoKeras builds on this line of efficient NAS work; anyone can download and use it.

Under the hood, AutoKeras runs an efficient neural architecture search; depending on the application, it provides the following task-specific APIs (a short interface sketch follows the list):

  • Image classification
  • Image regression
  • Text classification
  • Text regression
  • Structured data classification
  • Structured data regression
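All of these task APIs share the same fit / predict / evaluate / export_model interface, so the image-classification workflow below transfers directly to the other tasks. As a minimal sketch only (hypothetical toy data, not part of this tutorial), a structured-data task looks like this:

import numpy as np
import autokeras as ak

# Hypothetical toy data: 100 samples, 5 numeric features, binary labels.
x = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)

# Same interface pattern as ImageClassifier: search, fit, then predict.
sd_clf = ak.StructuredDataClassifier(max_trials=1, overwrite=True)
sd_clf.fit(x, y, epochs=2)
print(sd_clf.predict(x[:5]))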
With that overview, we go straight to the more involved case and use AutoKeras for automatic image classification.

Tech Stack

  • tensorflow
  • pathlib
  • numpy
  • autokeras

Implementation

Data

For the data we use the flower_photos dataset that ships with Keras: pictures of five kinds of flowers, 3670 images in total. It can be downloaded directly with Keras's built-in download utility, as follows:

import tensorflow as tf
AUTOTUNE = tf.data.experimental.AUTOTUNE
import pathlib
import numpy as np
import os

data_dir = tf.keras.utils.get_file(
    origin='https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    fname='flower_photos', untar=True)
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.jpg')))
print("This directory: ", data_dir, " have ", image_count, "images")

CLASS_NAMES = np.array([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"])
print("CLASS_NAMES :", CLASS_NAMES, ",They are the names of the secondary directories")

Output:

This directory:  /home/fonttian/.keras/datasets/flower_photos  have  3670 images
CLASS_NAMES : ['roses' 'dandelion' 'daisy' 'sunflowers' 'tulips'] ,They are the names of the secondary directories
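To sanity-check the download, you can list a few files for one of the classes with pathlib (a small optional check, not part of the original walkthrough):

# Each class lives in its own sub-directory, e.g. 'roses'.
roses = list(data_dir.glob('roses/*'))
print(len(roses), "rose images, first file:", roses[0])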

Data Loading

之后我們將數(shù)據(jù)的讀取轉(zhuǎn)化為tfds格式,這樣使用效率會高很多,具體實(shí)現(xiàn)如下:

import warnings
warnings.filterwarnings("ignore")

print("----------------- Parameters -----------------")

BATCH_SIZE = 256
IMG_HEIGHT = 224
IMG_WIDTH = 224
STEPS_PER_EPOCH = np.ceil(image_count / BATCH_SIZE)

print("----------------- start tfds -----------------")
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'))

def get_label(file_path):
    # convert the path to a list of path components
    parts = tf.strings.split(file_path, os.path.sep)
    # The second to last is the class-directory
    return parts[-2] == CLASS_NAMES

def decode_img(img):
    # convert the compressed string to a 3D uint8 tensor
    img = tf.image.decode_jpeg(img, channels=3)
    # Use `convert_image_dtype` to convert to floats in the [0,1] range.
    img = tf.image.convert_image_dtype(img, tf.float32)
    # resize the image to the desired size.
    return tf.image.resize(img, [IMG_HEIGHT, IMG_WIDTH])

def process_path(file_path):
    label = get_label(file_path)
    # load the raw data from the file as a string
    img = tf.io.read_file(file_path)
    img = decode_img(img)
    return img, label

# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
labeled_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
print("type(labeled_ds): ", type(labeled_ds))

Output:

----------------- Parameters -----------------
----------------- start tfds -----------------
type(labeled_ds):  <class 'tensorflow.python.data.ops.dataset_ops.ParallelMapDataset'>
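If you want to inspect the pipeline or tune its performance before handing it to AutoKeras, the usual tf.data patterns apply. A small optional sketch (not required for the rest of the walkthrough, which passes labeled_ds to AutoKeras as-is):

# Peek at one example to confirm the image shape and the label encoding.
for image, label in labeled_ds.take(1):
    print("image shape:", image.shape)   # (224, 224, 3)
    print("label:", label.numpy())       # boolean vector aligned with CLASS_NAMES

# Optional performance tweaks for larger experiments.
prepared_ds = labeled_ds.cache().shuffle(1000).prefetch(AUTOTUNE)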

Creating and Training the Model

之后我們使用幾條簡單的命令進(jìn)行模型的創(chuàng)建,然后使用fit方法進(jìn)行訓(xùn)練。

print("----------------- autokeras.fit with tfds -----------------")import autokeras as ak clf = ak.ImageClassifier(overwrite=True,max_trials=1)print("type(clf) :",type(clf))# Feed the tensorflow Dataset to the classifier. # model = clf.fit(train_ds, epochs=10) clf.fit(labeled_ds, epochs=10) print("End of training") ----------------- autokeras.fit with tfds ----------------- type(clf) : <class 'autokeras.tasks.image.ImageClassifier'>

Starting new trial

Epoch 1/10
92/92 [==============================] - ETA: 8s - loss: 15.9739 - accuracy: 0.218 - ......

Trial complete

Trial summary

|-Trial ID: c908fe149791b23cd0f4595ec5bde856

|-Score: 1.596627950668335

|-Best step: 6

Hyperparameters:

|-classification_head_1/dropout_rate: 0.5

|-classification_head_1/spatial_reduction_1/reduction_type: flatten

|-image_block_1/augment: False

|-image_block_1/block_type: vanilla

|-image_block_1/conv_block_1/dropout_rate: 0.25

|-image_block_1/conv_block_1/filters_0_0: 32

|-image_block_1/conv_block_1/filters_0_1: 64

|-image_block_1/conv_block_1/kernel_size: 3

|-image_block_1/conv_block_1/max_pooling: True

|-image_block_1/conv_block_1/num_blocks: 1

|-image_block_1/conv_block_1/num_layers: 2

|-image_block_1/conv_block_1/separable: False

|-image_block_1/normalize: True

|-optimizer: adam

INFO:tensorflow:Oracle triggered exit
115/115 [==============================] - 15s 134ms/step - loss: 2.7511 - accuracy: 0.2343
End of training
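The one-line ImageClassifier searches a fairly broad space on its own. If you want to constrain the search, for example to a particular family of blocks, AutoKeras also exposes a functional-style AutoModel API. The following is a hedged sketch based on the AutoKeras 1.x block API, not something that was run for this article:

import autokeras as ak

# Restrict the image search space to ResNet-style blocks with normalization.
input_node = ak.ImageInput()
output_node = ak.ImageBlock(block_type="resnet", normalize=True, augment=False)(input_node)
output_node = ak.ClassificationHead()(output_node)

auto_model = ak.AutoModel(inputs=input_node, outputs=output_node,
                          overwrite=True, max_trials=1)
# auto_model.fit(labeled_ds, epochs=10)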

Model Prediction and Evaluation

The training process prints a large amount of output, but actually using the trained model only takes a few lines of code, and the same is true for prediction and export. The search results are saved automatically during training; however, to obtain the best trained model as a standalone object you need the export_model method.

print("----------------- Predict with the best model -----------------") # Predict with the best model. predicted_y = clf.predict(labeled_ds) # predicted_y = clf.predict(train_ds) # Evaluate the best model with testing data. print(clf.evaluate(labeled_ds)) # print(clf.evaluate(train_ds)) ----------------- Predict with the best model ----------------- WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details. WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details. 115/115 [==============================] - ETA: 0s - loss: 1.6073 - accuracy: 0.28 - ETA: 3s - loss: 1.6060 ...... loss: 1.6052 - accuracy: 0.24 - 5s 42ms/step - loss: 1.6052 - accuracy: 0.2455 [1.6052242517471313, 0.24550408124923706]

Exporting the Model

print("----------------- Export as a Keras Model -----------------" ) # Export as a Keras Model. model = clf.export_model()print(type(model)) # <class 'tensorflow.python.keras.engine.training.Model'>try:model.save("model_autokeras", save_format="tf") except:model.save("model_autokeras.h5")print("-----------------End of the program -----------------") ----------------- Export as a Keras Model ----------------- WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter ...... If using Keras pass *_constraint arguments to layers. INFO:tensorflow:Assets written to: model_autokeras/assets -----------------End of the program -----------------

The code above performs the model export. Because the library is built on top of TensorFlow, saving the model ultimately calls TensorFlow's own export routine, so loading or deploying the model later simply uses TensorFlow's standard save and load mechanisms. The exported folder is essentially an ordinary TensorFlow SavedModel directory, with the same underlying files TensorFlow itself would produce.
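To load the exported model back for inference, you can use the standard Keras loader together with AutoKeras's custom objects. A sketch along those lines, assuming ak.CUSTOM_OBJECTS is available as in recent AutoKeras versions and that the model was saved to "model_autokeras" as above:

import autokeras as ak
from tensorflow.keras.models import load_model

# Reload the exported SavedModel, registering AutoKeras's custom layers.
loaded_model = load_model("model_autokeras", custom_objects=ak.CUSTOM_OBJECTS)
loaded_model.summary()
# predictions = loaded_model.predict(batch_of_images)  # batch_of_images is hypothetical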
