MLAT Autoencoders, Part 2: Key Code and Results (2)
Convolutional and Denoising Autoencoders
1. Importing packages
from pathlib import Path
import pandas as pd
import numpy as np
from numpy.random import choice

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.datasets import fashion_mnist

import matplotlib.pyplot as plt
import seaborn as sns

2. Data preparation
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
X_train.shape, X_test.shape

class_dict = {0: 'T-shirt/top',
              1: 'Trouser',
              2: 'Pullover',
              3: 'Dress',
              4: 'Coat',
              5: 'Sandal',
              6: 'Shirt',
              7: 'Sneaker',
              8: 'Bag',
              9: 'Ankle boot'}
classes = list(class_dict.keys())

# Normalize the data
image_size = 28
def data_prep_conv(x, size=image_size):
    return x.reshape(-1, size, size, 1).astype('float32') / 255

X_train_scaled = data_prep_conv(X_train)
X_test_scaled = data_prep_conv(X_test)
X_train_scaled.shape, X_test_scaled.shape

out: (output omitted)
3. Convolutional Autoencoder
We define a three-layer encoder that uses 2D convolutions with 32, 16, and 8 filters, respectively. The encoding produced after the third layer has size 4 x 4 x 8 = 128, larger than in the previous examples.
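The 4 x 4 x 8 figure follows from three rounds of 2 x 2 max pooling with 'same' padding, which ceil-divides each spatial dimension by 2. A quick standalone check of that arithmetic (the helper name `pooled` is ours, not from the original code):

```python
import math

def pooled(dim, rounds, pool=2):
    """Spatial size after repeated 2x2 'same'-padded max pooling (ceil division)."""
    for _ in range(rounds):
        dim = math.ceil(dim / pool)
    return dim

side = pooled(28, rounds=3)   # 28 -> 14 -> 7 -> 4
print(side, side * side * 8)  # 4 128
```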
input_ = Input(shape=(28, 28, 1), name='Input_3D')

x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same', name='Encoding_Conv_1')(input_)
x = MaxPooling2D(pool_size=(2, 2), padding='same', name='Encoding_Max_1')(x)
x = Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same', name='Encoding_Conv_2')(x)
x = MaxPooling2D(pool_size=(2, 2), padding='same', name='Encoding_Max_2')(x)
x = Conv2D(filters=8, kernel_size=(3, 3), activation='relu', padding='same', name='Encoding_Conv_3')(x)
encoded_conv = MaxPooling2D(pool_size=(2, 2), padding='same', name='Encoding_Max_3')(x)

x = Conv2D(filters=8, kernel_size=(3, 3), activation='relu', padding='same', name='Decoding_Conv_1')(encoded_conv)
x = UpSampling2D(size=(2, 2), name='Decoding_Upsample_1')(x)
x = Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same', name='Decoding_Conv_2')(x)
x = UpSampling2D(size=(2, 2), name='Decoding_Upsample_2')(x)
x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', name='Decoding_Conv_3')(x)
x = UpSampling2D(size=(2, 2), name='Decoding_Upsample_3')(x)
decoded_conv = Conv2D(filters=1, kernel_size=(3, 3), activation='sigmoid', padding='same', name='Decoding_Conv_4')(x)

autoencoder_conv = Model(input_, decoded_conv)
autoencoder_conv.compile(optimizer='adam', loss='mse')
autoencoder_conv.summary()

The matching decoder reverses the reduction in the number of filters and uses 2D upsampling in place of max pooling to restore the spatial dimensions. Note that Decoding_Conv_3 omits padding='same': its 'valid' convolution trims the 16 x 16 feature maps to 14 x 14, so the final upsampling yields the required 28 x 28 output.

out: the summary shows that the three-layer autoencoder has 12,785 parameters, slightly more than 5% of the capacity of the earlier deep autoencoder.
out: (training output omitted; the fit step, which uses early stopping and weight checkpointing, is elided in this excerpt)

f'MSE: {mse:.4f} | RMSE {mse**.5:.4f}'

out: (results omitted)
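The elided fit-and-evaluate step presumably follows the same pattern as the earlier autoencoders. Below is a standalone sketch of that step, not the article's exact code: the architecture is rebuilt compactly without layer names, random images stand in for Fashion MNIST so it runs without a download, and the hyperparameters (patience, batch size, validation split) are assumptions; checkpointing is omitted because the weights path is not given in this excerpt.

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping

def make_conv_autoencoder():
    """Same layer stack as above, without the explicit layer names."""
    inp = Input(shape=(28, 28, 1))
    x = inp
    for n in (32, 16, 8):                            # encoder: three conv/pool blocks
        x = Conv2D(n, (3, 3), activation='relu', padding='same')(x)
        x = MaxPooling2D((2, 2), padding='same')(x)  # 28 -> 14 -> 7 -> 4
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)                      # 4 -> 8
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)                      # 8 -> 16
    x = Conv2D(32, (3, 3), activation='relu')(x)     # 'valid' padding trims 16 -> 14
    x = UpSampling2D((2, 2))(x)                      # 14 -> 28
    out = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    return Model(inp, out)

model = make_conv_autoencoder()
model.compile(optimizer='adam', loss='mse')
assert model.count_params() == 12785  # matches the parameter count quoted above

# Random stand-in data so the sketch runs without downloading Fashion MNIST.
X = np.random.default_rng(0).random((64, 28, 28, 1)).astype('float32')
early_stopping = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(X, X, epochs=1, batch_size=32, validation_split=.25,
          callbacks=[early_stopping], verbose=0)
mse = model.evaluate(X, X, verbose=0)
print(f'MSE: {mse:.4f} | RMSE {mse**.5:.4f}')
```

The `assert` on `count_params()` doubles as a check that the rebuild matches the named architecture above.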
Training stops after 75 epochs and yields a further 9% reduction in test RMSE, because convolutional filters learn from image data more efficiently and the encoding is larger.
out: (output omitted)
4. Denoising Autoencoder
Applying an autoencoder to a denoising task only affects the training stage. Below, we add noise drawn from a standard normal distribution to the Fashion MNIST data while keeping the pixel values in the range [0, 1].
def add_noise(x, noise_factor=.3):
    return np.clip(x + noise_factor * np.random.normal(size=x.shape), 0, 1)

X_train_noisy = add_noise(X_train_scaled)
X_test_noisy = add_noise(X_test_scaled)

fig, axes = plt.subplots(nrows=2, ncols=5, figsize=(20, 4))
axes = axes.flatten()
for i, ax in enumerate(axes):
    ax.imshow(X_test_noisy[i].reshape(28, 28), cmap='gray')
    ax.axis('off')

out: (figure of noisy sample images omitted)
out: (the denoising model definition and training output are omitted in this excerpt)
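The elided denoising step differs from ordinary autoencoder training in only one respect: the noisy images are the inputs and the clean images are the reconstruction targets. A minimal standalone sketch of that idea, using a deliberately small toy model and random stand-in data rather than the article's architecture:

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

# Random stand-in for the scaled Fashion MNIST images; same noise recipe as above.
rng = np.random.default_rng(0)
X_clean = rng.random((64, 28, 28, 1)).astype('float32')
X_noisy = np.clip(X_clean + .3 * rng.normal(size=X_clean.shape), 0, 1).astype('float32')

# A deliberately small conv autoencoder; in practice the model defined above is used.
inp = Input(shape=(28, 28, 1))
x = Conv2D(8, (3, 3), activation='relu', padding='same')(inp)
x = MaxPooling2D((2, 2), padding='same')(x)   # 28 -> 14
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)                   # 14 -> 28
out = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
denoiser = Model(inp, out)
denoiser.compile(optimizer='adam', loss='mse')

# Key difference from plain training: noisy inputs, clean targets.
denoiser.fit(X_noisy, X_clean, epochs=1, batch_size=32, verbose=0)
mse = denoiser.evaluate(X_noisy, X_clean, verbose=0)
```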
autoencoder_denoise.load_weights(path)
mse = autoencoder_denoise.evaluate(x=X_test_noisy, y=X_test_scaled)
f'MSE: {mse:.4f} | RMSE {mse**.5:.4f}'

out: (results omitted)
5. Visualization
The figure below shows the original images (top row) and the denoised images (bottom row). It illustrates that the autoencoder successfully produces compressed encodings from the noisy images that are quite similar to those obtained from the original images.
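The plotting code for this comparison is not included in the excerpt; it presumably mirrors the noisy-image grid shown earlier. A standalone sketch of such a two-row comparison, with random arrays standing in for the originals and for the output of `autoencoder_denoise.predict(...)` (the output filename is our choice):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
originals = rng.random((5, 28, 28))
denoised = rng.random((5, 28, 28))  # stand-in for the autoencoder's reconstructions

fig, axes = plt.subplots(nrows=2, ncols=5, figsize=(20, 8))
for i in range(5):
    axes[0, i].imshow(originals[i], cmap='gray')  # top row: original images
    axes[1, i].imshow(denoised[i], cmap='gray')   # bottom row: denoised images
for ax in axes.flatten():
    ax.axis('off')
fig.savefig('denoising_comparison.png')
```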