MLAT-Autoencoders - Part 2: Key Code and Results (2)


Convolutional and Denoising Autoencoders

1. Import packages

from pathlib import Path

import pandas as pd
import numpy as np
from numpy.random import choice

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.datasets import fashion_mnist

import matplotlib.pyplot as plt
import seaborn as sns

2. Data preparation

(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
X_train.shape, X_test.shape

class_dict = {0: 'T-shirt/top',
              1: 'Trouser',
              2: 'Pullover',
              3: 'Dress',
              4: 'Coat',
              5: 'Sandal',
              6: 'Shirt',
              7: 'Sneaker',
              8: 'Bag',
              9: 'Ankle boot'}
classes = list(class_dict.keys())
n_classes = len(classes)  # used later to set the number of sample images plotted

# Normalize the data: reshape to (n, 28, 28, 1) and scale pixel values to [0, 1]
image_size = 28

def data_prep_conv(x, size=image_size):
    return x.reshape(-1, size, size, 1).astype('float32') / 255

X_train_scaled = data_prep_conv(X_train)
X_test_scaled = data_prep_conv(X_test)
X_train_scaled.shape, X_test_scaled.shape

Output omitted (array shapes).

# Wrap training, checkpointing, and test-set evaluation in a reusable helper
def train_autoencoder(path, model, x_train=X_train_scaled, x_test=X_test_scaled):
    callbacks = [EarlyStopping(patience=5, restore_best_weights=True),
                 ModelCheckpoint(filepath=path, save_best_only=True, save_weights_only=True)]
    model.fit(x=x_train, y=x_train, epochs=100, validation_split=.1, callbacks=callbacks)
    model.load_weights(path)
    mse = model.evaluate(x=x_test, y=x_test)
    return model, mse

3. Convolutional autoencoder

Define a three-layer encoder that uses 2D convolutions with 32, 16, and 8 filters, respectively. The encoding produced by the third layer has size 4 x 4 x 8 = 128, larger than in the previous examples.

input_ = Input(shape=(28, 28, 1), name='Input_3D')

x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same',
           name='Encoding_Conv_1')(input_)
x = MaxPooling2D(pool_size=(2, 2), padding='same', name='Encoding_Max_1')(x)
x = Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same',
           name='Encoding_Conv_2')(x)
x = MaxPooling2D(pool_size=(2, 2), padding='same', name='Encoding_Max_2')(x)
x = Conv2D(filters=8, kernel_size=(3, 3), activation='relu', padding='same',
           name='Encoding_Conv_3')(x)
encoded_conv = MaxPooling2D(pool_size=(2, 2), padding='same', name='Encoding_Max_3')(x)

x = Conv2D(filters=8, kernel_size=(3, 3), activation='relu', padding='same',
           name='Decoding_Conv_1')(encoded_conv)
x = UpSampling2D(size=(2, 2), name='Decoding_Upsample_1')(x)
x = Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same',
           name='Decoding_Conv_2')(x)
x = UpSampling2D(size=(2, 2), name='Decoding_Upsample_2')(x)
x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu',
           name='Decoding_Conv_3')(x)
x = UpSampling2D(size=(2, 2), name='Decoding_Upsample_3')(x)
decoded_conv = Conv2D(filters=1, kernel_size=(3, 3), activation='sigmoid', padding='same',
                      name='Decoding_Conv_4')(x)

autoencoder_conv = Model(input_, decoded_conv)
autoencoder_conv.compile(optimizer='adam', loss='mse')
autoencoder_conv.summary()

The decoder mirrors the encoder: the number of filters shrinks back down, and 2D upsampling is used in place of max pooling to restore the spatial dimensions. The summary output shows that the three-layer convolutional autoencoder has 12,785 parameters, just over 5% of the capacity of the deep autoencoder from the previous example.
Output omitted (model summary).
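To double-check the 4 x 4 x 8 = 128 encoding size and the parameter count mentioned above, one can wrap the encoder layers in their own model and inspect its output shape. This is a minimal sketch, not part of the original post; encoder_conv is a name introduced here for illustration:

# Hypothetical helper (not in the original code): expose the encoder as its own model
encoder_conv = Model(input_, encoded_conv, name='encoder_conv')

print(encoder_conv.output_shape)               # (None, 4, 4, 8)
print(np.prod(encoder_conv.output_shape[1:]))  # 128 values per encoded image
print(autoencoder_conv.count_params())         # should match the count reported by summary()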

# results_path is assumed to have been defined in the earlier parts of this series,
# e.g. results_path = Path('results'); adjust it to your own directory layout
path = (results_path / 'autencoder_conv.32.weights.hdf5').as_posix()

autoencoder_conv, mse = train_autoencoder(path, autoencoder_conv,
                                          x_train=X_train_scaled, x_test=X_test_scaled)

Output omitted (training log).

f'MSE: {mse:.4f} | RMSE {mse**.5:.4f}'

Output omitted (MSE/RMSE).

Training stops after 75 epochs and reduces the test RMSE by a further 9%, thanks to the convolutional filters' ability to learn from image data more efficiently and the larger encoding size.

autoencoder_conv.load_weights(path)
reconstructed_images = autoencoder_conv.predict(X_test_scaled)
reconstructed_images.shape

# Plot the first ten test images (top row) against their reconstructions (bottom row)
fig, axes = plt.subplots(ncols=n_classes, nrows=2, figsize=(20, 4))
for i in range(n_classes):
    axes[0, i].imshow(X_test_scaled[i].reshape(image_size, image_size), cmap='gray')
    axes[0, i].axis('off')
    axes[1, i].imshow(reconstructed_images[i].reshape(image_size, image_size), cmap='gray')
    axes[1, i].axis('off')

Output omitted (reconstruction plot).

4. Denoising autoencoder

Using an autoencoder for denoising only affects the training stage. Below, standard-normal noise is added to the Fashion MNIST data while the pixel values are clipped to stay within the [0, 1] range.

def add_noise(x, noise_factor=.3):
    return np.clip(x + noise_factor * np.random.normal(size=x.shape), 0, 1)

X_train_noisy = add_noise(X_train_scaled)
X_test_noisy = add_noise(X_test_scaled)

fig, axes = plt.subplots(nrows=2, ncols=5, figsize=(20, 4))
axes = axes.flatten()
for i, ax in enumerate(axes):
    ax.imshow(X_test_noisy[i].reshape(28, 28), cmap='gray')
    ax.axis('off')

Output omitted (noisy sample images).

x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same',
           name='Encoding_Conv_1')(input_)
x = MaxPooling2D(pool_size=(2, 2), padding='same', name='Encoding_Max_1')(x)
x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same',
           name='Encoding_Conv_2')(x)
encoded_conv = MaxPooling2D(pool_size=(2, 2), padding='same', name='Encoding_Max_3')(x)

x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same',
           name='Decoding_Conv_1')(encoded_conv)
x = UpSampling2D(size=(2, 2), name='Decoding_Upsample_1')(x)
x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same',
           name='Decoding_Conv_2')(x)
x = UpSampling2D(size=(2, 2), name='Decoding_Upsample_2')(x)
decoded_conv = Conv2D(filters=1, kernel_size=(3, 3), activation='sigmoid', padding='same',
                      name='Decoding_Conv_4')(x)

autoencoder_denoise = Model(input_, decoded_conv)
autoencoder_denoise.compile(optimizer='adam', loss='mse')

path = (results_path / 'autencoder_denoise.32.weights.hdf5').as_posix()

callbacks = [EarlyStopping(patience=5, restore_best_weights=True),
             ModelCheckpoint(filepath=path, save_best_only=True, save_weights_only=True)]

# Train the convolutional autoencoder on noisy inputs with the clean images as targets,
# so it learns to reproduce the uncorrupted originals
autoencoder_denoise.fit(x=X_train_noisy,
                        y=X_train_scaled,
                        epochs=100,
                        batch_size=128,
                        shuffle=True,
                        validation_split=.1,
                        callbacks=callbacks)

Output omitted (training log).

autoencoder_denoise.load_weights(path)
mse = autoencoder_denoise.evaluate(x=X_test_noisy, y=X_test_scaled)
f'MSE: {mse:.4f} | RMSE {mse**.5:.4f}'

Output omitted (MSE/RMSE).

5. Visualization

The figure shows, from top to bottom, the original images and the denoised reconstructions. It illustrates that the autoencoder produces compressed encodings from the noisy images that are very similar to those produced from the original images; a minimal plotting sketch follows below.
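The original post does not include the code that produces this comparison. A minimal sketch, assuming the trained autoencoder_denoise and the scaled/noisy test sets from above (denoised_images is an illustrative name introduced here), could look like this:

# Sketch (not from the original post): denoise the noisy test images and compare
denoised_images = autoencoder_denoise.predict(X_test_noisy)

fig, axes = plt.subplots(ncols=n_classes, nrows=2, figsize=(20, 4))
for i in range(n_classes):
    axes[0, i].imshow(X_test_scaled[i].reshape(image_size, image_size), cmap='gray')
    axes[0, i].axis('off')
    axes[1, i].imshow(denoised_images[i].reshape(image_size, image_size), cmap='gray')
    axes[1, i].axis('off')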

Summary

The above is the full content of MLAT-Autoencoders - Part 2: Key Code and Results (2); hopefully it helps you solve the problems you encounter.
