Keras Series: Multi-class Image Classification and Fine-tuning with Bottleneck Features (Part 3)
Adapted from: http://blog.csdn.net/sinat_26917383/article/details/72861152
Chinese documentation: http://keras-cn.readthedocs.io/en/latest/
Official documentation: https://keras.io/
This article is written against Keras 2.0.
Training is mostly a matter of practice, so a few worked cases are enough to show how it is done.
.
The Keras series:
1. Keras series: the Sequential and Model APIs, basic Keras structure (Part 1)
2. Keras series: the five pre-trained models in Application, a walkthrough of the VGG16 architecture (Sequential and Model styles) (Part 2)
3. Keras series: multi-class image classification and fine-tuning with bottleneck features (Part 3)
4. Keras series: facial expression classification and recognition: OpenCV face detection + Keras emotion classification (Part 4)
5. Keras series: transfer learning: fine-tuning and prediction with InceptionV3, a complete example (Part 5)
.
I. CIFAR10 small-image classification example (Sequential style)
To train a model you first need to know what the data looks like, so let's see how the classic CIFAR10 dataset is trained.
This CIFAR10 example uses the Sequential API to build the network.
```python
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D

batch_size = 32
num_classes = 10
epochs = 200
data_augmentation = True

# Load the data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Generate one-hot multi-class labels
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# Network architecture
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))

# Training settings: initiate RMSprop optimizer
opt = keras.optimizers.rmsprop(lr=0.0001, decay=1e-6)

# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])

# Prepare the training data
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              validation_data=(x_test, y_test),
              shuffle=True)
else:
    print('Using real-time data augmentation.')
    # This will do preprocessing and realtime data augmentation:
    datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=0,  # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,  # randomly flip images
        vertical_flip=False)  # randomly flip images

    # Compute quantities required for feature-wise normalization
    # (std, mean, and principal components if ZCA whitening is applied).
    datagen.fit(x_train)

    # Fit the model on the batches generated by datagen.flow().
    model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                        steps_per_epoch=x_train.shape[0] // batch_size,
                        epochs=epochs,
                        validation_data=(x_test, y_test))
```
Just as Caffe needs data compiled into LMDB, Keras expects the data to follow its own format. Let's look at the CIFAR10 data format:
.
1. Loading the data

```python
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
```

This line downloads the data from the internet. As with the pre-trained models in Application, the download can take a while, so once it has finished you can point the loader at the local file instead.
x_train has a shape like (100, 100, 100, 3): a set of 100 images, each 100*100*3; y_train has shape (100,).
.
2. Putting multi-class labels into Keras format
Keras needs multi-class labels in a fixed one-hot format, converted as follows, where num_classes is the number of classes; suppose there are 5:

```python
y_train = keras.utils.to_categorical(y_train, num_classes)
```

The final output then has shape (100, 5).
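As a plain-Python sketch of the one-hot format that `keras.utils.to_categorical` produces (this small re-implementation is only for illustration, not the Keras code itself):

```python
def to_one_hot(labels, num_classes):
    """Turn integer class labels into one-hot rows, mimicking to_categorical."""
    return [[1.0 if j == label else 0.0 for j in range(num_classes)]
            for label in labels]

# Five classes: label 2 becomes a row with a 1 in position 2.
print(to_one_hot([0, 2, 4], 5))
# [[1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0]]
```

So 100 labels with 5 classes become a (100, 5) array, exactly the shape mentioned above.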
.
3. The image preprocessing generator ImageDataGenerator

```python
datagen = ImageDataGenerator()
datagen.fit(x_train)
```

Initialize the generator, then call datagen.fit to compute the statistics required by any data-dependent transformations.
.
4. The final training format: batches
The data is split into batches that can then be fed to the model for training; this is much faster than building an LMDB as in Caffe.

```python
datagen.flow(x_train, y_train, batch_size=batch_size)
```

This takes NumPy arrays and labels as arguments and generates batches of augmented/normalized data, returning them endlessly in an infinite loop.
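The "infinite loop of batches" behaviour can be sketched in plain Python (an illustration of the idea only, not the Keras implementation, and without augmentation):

```python
def batch_flow(x, y, batch_size):
    """Yield (x, y) batches forever, wrapping around the data like datagen.flow."""
    i = 0
    n = len(x)
    while True:
        idx = [(i + k) % n for k in range(batch_size)]
        yield [x[j] for j in idx], [y[j] for j in idx]
        i = (i + batch_size) % n

gen = batch_flow(list(range(10)), list('abcdefghij'), 4)
xb, yb = next(gen)   # first batch: [0, 1, 2, 3]
```

Because the generator never raises StopIteration, training code must bound each epoch itself, which is what steps_per_epoch does in fit_generator.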
.
II. Adapted from the official docs: a simple multi-class network (Sequential style)
Adapted from the official tutorial "Building powerful image classification models using very little data".
.
1. Data source and download
The official tutorial is a cat/dog binary classification; here it becomes a 5-class problem. For the sake of speed I grabbed a very small dataset found online, from this blog post:
Caffe learning series (12): training and testing with your own images
Dataset description:
500 images in five classes of 100 each: buses, dinosaurs, elephants, flowers, and horses.
Download: http://pan.baidu.com/s/1nuqlTnN
The file names start with 3, 4, 5, 6, or 7, one leading digit per class. From each class I picked 20 images for testing and kept the other 80 for training, giving 400 training images and 100 test images across 5 classes, as in the figure below.
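The split described above can be sketched as follows; the class-name mapping and the 20-images-per-class test split come from the description, while the file layout itself is an assumption about the downloaded data:

```python
def split_by_prefix(filenames, test_per_class=20):
    """Assign each image to a class by its leading digit (3-7), and put the
    first `test_per_class` files of each class into the test set."""
    classes = {'3': 'bus', '4': 'dinosaur', '5': 'elephant',
               '6': 'flower', '7': 'horse'}
    train, test, seen = [], [], {}
    for name in sorted(filenames):
        label = classes[name[0]]
        seen[label] = seen.get(label, 0) + 1
        (test if seen[label] <= test_per_class else train).append((name, label))
    return train, test

# 100 files per class, numbered 300-399, 400-499, ... as in the dataset:
files = ['%d.jpg' % n for p in (3, 4, 5, 6, 7)
         for n in range(p * 100, p * 100 + 100)]
train, test = split_by_prefix(files)
# len(train) == 400, len(test) == 100
```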
.
2. Loading data and building the network
Annoyingly, the Keras Chinese documentation for this part has not been kept up to date, so you need the original English site; for example the Chinese docs still say Convolution2D where the current API is Conv2D, which is a trap.
```python
# Loading and building the network
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense

model = Sequential()
# 32 filters of size 3*3; input images are 150*150*3
model.add(Conv2D(32, (3, 3), input_shape=(150, 150, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(5))  # one Dense unit per class
model.add(Activation('softmax'))  # multi-class
```
The binary and multi-class versions share everything up to the last fully connected layer, which is what has to change: with 5 classes you need Dense(5) and a softmax activation, whereas binary classification uses Dense(1) with a sigmoid activation.
I also ran into the following errors:
```
Error 1:
model.add(Convolution2D(32, 3, 3, input_shape=(3, 150, 150)))
ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_6/convolution'
(op: 'Conv2D') with input shapes: [?,1,148,32], [3,3,32,32].

Error 2:
model.add(MaxPooling2D(pool_size=(2, 2)))
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_11/MaxPool'
(op: 'MaxPool') with input shapes: [?,1,148,32].
```
Cause:
input_shape=(3, 150, 150) is the Theano ordering; TensorFlow expects (150, 150, 3).
The input shape has to be changed accordingly: this is the "channels_last" vs "channels_first" data format issue.
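The arithmetic behind the error is easy to check: under channels_last, TensorFlow reads (3, 150, 150) as a 3×150 image with 150 channels, so a "valid" 3×3 convolution reduces the height from 3 to 1, and the next one would make it negative. A sketch:

```python
def conv_out(size, kernel, padding='valid'):
    """Output length of one spatial dimension after a convolution."""
    return size if padding == 'same' else size - kernel + 1

# TensorFlow reads input_shape=(3, 150, 150) as height=3, width=150, channels=150
h, w = 3, 150
h, w = conv_out(h, 3), conv_out(w, 3)  # after the first 3x3 conv: (1, 148)
h, w = conv_out(h, 3), conv_out(w, 3)  # next conv: 1 - 3 + 1 = -1
# h is now -1: exactly the "Negative dimension size" in the errors above
```

The [?,1,148,32] in the error message is this intermediate (1, 148) shape with 32 filters.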
.
3. Setting the training parameters
```python
# Binary classification:
# model.compile(loss='binary_crossentropy',
#               optimizer='rmsprop',
#               metrics=['accuracy'])

# Multi-class classification: categorical_crossentropy, not binary_crossentropy
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
# RMSprop optimizer: apart from the learning rate, it is recommended
# to leave its other default parameters unchanged
```
The compile parameters differ slightly between binary and multi-class classification.
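To make the loss choice concrete, here is a minimal pure-Python computation of categorical cross-entropy for one sample (for illustration; Keras computes this over whole batches):

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy of one sample: -sum(t * log(p)) over the classes."""
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)

# 5-class one-hot target, class 2 is correct:
y_true = [0, 0, 1, 0, 0]
y_pred = [0.05, 0.05, 0.8, 0.05, 0.05]   # a softmax output
loss = categorical_crossentropy(y_true, y_pred)   # -log(0.8), about 0.223
```

With a one-hot target only the predicted probability of the true class matters, which is why the labels must be in the one-hot format shown earlier.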
.
4. Image preprocessing
Next we prepare the data, using .flow_from_directory() to produce data and labels directly from our jpg images.
Worth noting:
- ImageDataGenerator: generates batches of image data and supports real-time data augmentation. During training it generates data indefinitely, until the specified number of epochs is reached.
- flow_from_directory(directory): takes a directory path as its argument and generates augmented/normalized data, producing batches endlessly in an infinite loop.
```python
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    '/.../train',
    target_size=(150, 150),  # all images will be resized to 150x150
    batch_size=32,
    class_mode='categorical')  # multi-class; for binary use class_mode='binary'

validation_generator = test_datagen.flow_from_directory(
    '/.../validation',
    target_size=(150, 150),
    batch_size=32,
    class_mode='categorical')  # multi-class
```
This is the data preparation stage and can be fairly slow. For multi-class classification, class_mode must be set to "categorical". flow_from_directory computes some properties of the data; in the training stage you then simply feed in these generators.
.
5. Training
```python
model.fit_generator(train_generator,
                    samples_per_epoch=2000,
                    nb_epoch=50,
                    validation_data=validation_generator,
                    nb_val_samples=800)
# samples_per_epoch: an epoch is counted as finished once this many
# samples have passed through the model
model.save_weights('/.../first_try_animal5.h5')
```
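samples_per_epoch and nb_epoch are the Keras 1 names; Keras 2 instead takes the number of batches as steps_per_epoch, and the conversion is plain integer division. The "62/62" in the training log below follows directly from it:

```python
samples_per_epoch = 2000
batch_size = 32

# Keras 2 equivalent of samples_per_epoch=2000: the number of whole batches
steps_per_epoch = samples_per_epoch // batch_size
print(steps_per_epoch)   # 62, matching the "62/62" progress bar in the log
```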
A sample of the final results:
```
Epoch 48/50
62/62 [==============================] - 39s - loss: 0.0464 - acc: 0.9929 - val_loss: 0.3916 - val_acc: 0.9601
Epoch 49/50
62/62 [==============================] - 38s - loss: 0.0565 - acc: 0.9914 - val_loss: 0.6423 - val_acc: 0.9500
Epoch 50/50
62/62 [==============================] - 38s - loss: 0.0429 - acc: 0.9960 - val_loss: 0.4238 - val_acc: 0.9599
<keras.callbacks.History object at 0x7f049fc6f090>
```
.
6. Problems encountered
Problem 1: negative loss
Cause: a negative loss means the multi-class labels were set up incorrectly. This is a 5-class problem, but it was configured as a binary one, which drives the loss negative, like this:
```
Epoch 43/50
62/62 [==============================] - 39s - loss: -16.0148 - acc: 0.1921 - val_loss: -15.9440 - val_acc: 0.1998
Epoch 44/50
61/62 [============================>.] - ETA: 0s - loss: -15.8525 - acc: 0.2049
Segmentation fault (core dumped)
```
.
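Why the loss goes negative: with the binary setup the labels are the raw integers 0-4, and binary cross-entropy, -(y*log(p) + (1-y)*log(1-p)), is only a valid non-negative loss when y is 0 or 1. A quick check in plain Python:

```python
import math

def binary_crossentropy(y, p):
    """Binary cross-entropy for a single prediction p in (0, 1)."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(binary_crossentropy(1, 0.9))   # valid 0/1 label: small positive loss
print(binary_crossentropy(4, 0.9))   # 5-class integer label: negative "loss"
```

A label of 4 makes the (1-y) term equal to -3, flipping the sign, which is exactly the symptom in the log above.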
III. Fine-tuning, approach 1: using the pre-trained network's bottleneck features
This section is mainly based on "Building powerful image classification models using very little data".
Of course, the Chinese Keras docs are full of holes here... they haven't tracked the version updates, so a lot of the content is wrong, alas...
First, the VGG-16 network structure, shown in the figure:
The idea is to take an already trained model, extract the bottleneck features, and roll them into a next, "small" model, namely the fully connected layers.
The steps are:
- 1. Take the trained model's weights (the model);
- 2. Run it to extract the bottleneck features (the feature map of the last activation before the fully connected layers, i.e. between the convolutional part and the dense part), and save them;
- 3. Take the bottleneck data, add a dense fully connected layer on top, and fine-tune.
.
1. Importing the pre-trained weights and network
The Keras Chinese docs are wrong here; follow the original author's current blog post instead.
```python
WEIGHTS_PATH = '/home/ubuntu/keras/animal5/vgg16_weights_tf_dim_ordering_tf_kernels.h5'
WEIGHTS_PATH_NO_TOP = '/home/ubuntu/keras/animal5/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'

from keras.applications.vgg16_matt import VGG16
model = VGG16(include_top=False, weights='imagenet')
```
WEIGHTS_PATH_NO_TOP is the version with the fully connected layers removed; it can be used to extract the bottleneck features directly. Thanks to the original author.
.
2. Extracting the images' bottleneck features
Steps required:
- load the images;
- load the pre-trained model's weights;
- compute the bottleneck features.
```python
# Extracting the bottleneck features
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
import numpy as np

# (1) Load the images: initialize the image generators
datagen = ImageDataGenerator(rescale=1./255)

# Training-set image generator
train_generator = datagen.flow_from_directory(
    '/home/ubuntu/keras/animal5/train',
    target_size=(150, 150),
    batch_size=32,
    class_mode=None,   # no labels: just images, in directory order
    shuffle=False)

# Validation-set image generator
validation_generator = datagen.flow_from_directory(
    '/home/ubuntu/keras/animal5/validation',
    target_size=(150, 150),
    batch_size=32,
    class_mode=None,
    shuffle=False)

# (2) Load the pre-trained model's weights
model.load_weights('/.../vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')

# (3) Extract the bottleneck features
# The key call: the second argument is the number of rounds of data the
# generator should return (500 training images here, matching
# model.fit's samples_per_epoch)
bottleneck_features_train = model.predict_generator(train_generator, 500)
np.save(open('bottleneck_features_train.npy', 'w'), bottleneck_features_train)

# 100 validation images, matching model.fit's nb_val_samples
bottleneck_features_validation = model.predict_generator(validation_generator, 100)
np.save(open('bottleneck_features_validation.npy', 'w'), bottleneck_features_validation)
```
Note:
- class_mode: here we are in a prediction setting, just producing data, so no labels are needed, because the images are processed in order; by contrast, the train_generator used to prepare data before training does need labels.
- shuffle: when producing data for prediction, do not shuffle; during model.fit, however, it should be enabled, i.e. the input samples are randomly reordered before each epoch.
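Because shuffle=False keeps the images in directory order, the labels can be rebuilt afterwards purely from the per-class counts, which is exactly what the next step does. A sketch:

```python
def rebuild_labels(counts):
    """Recreate integer labels for features saved in class-directory order."""
    labels = []
    for class_idx, n in enumerate(counts):
        labels.extend([class_idx] * n)
    return labels

# 5 classes in the training set (the source ends up with 496 usable images):
train_labels = rebuild_labels([100, 100, 100, 100, 96])
# train_labels starts with 100 zeros and ends with 96 fours
```

Had the generator shuffled the images, this reconstruction would be impossible and the saved features would be useless.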
.
3. Fine-tuning the "small" network
Main steps:
- (1) load the bottleneck_features data;
- (2) build the labels and convert them to Keras's default format;
- (3) define the "small network"'s structure;
- (4) set the training parameters and train.
```python
import keras
import numpy as np
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense

# (1) Load the bottleneck_features data
train_data = np.load(open('bottleneck_features_train.npy'))
# the features were saved in order, so recreating the labels is easy
train_labels = np.array([0] * 100 + [1] * 100 + [2] * 100 + [3] * 100 + [4] * 96)
validation_data = np.load(open('bottleneck_features_validation.npy'))
validation_labels = np.array([0] * 20 + [1] * 20 + [2] * 20 + [3] * 20 + [4] * 16)

# (2) Convert the labels to Keras's default one-hot format
train_labels = keras.utils.to_categorical(train_labels, 5)
validation_labels = keras.utils.to_categorical(validation_labels, 5)

# (3) Define the "small network"
model = Sequential()
model.add(Flatten(input_shape=(4, 4, 512)))  # train_data.shape[1:], i.e. 4*4*512
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
# model.add(Dense(1, activation='sigmoid'))  # binary classification
model.add(Dense(5, activation='softmax'))  # multi-class

# (4) Compile and train
model.compile(loss='categorical_crossentropy',  # multi-class, not binary_crossentropy
              optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(train_data, train_labels,
          nb_epoch=50,
          batch_size=16,
          validation_data=(validation_data, validation_labels))
model.save_weights('bottleneck_fc_model.h5')
```
因?yàn)樘卣鞯膕ize很小,模型在CPU上跑的也會(huì)很快,大概1s一個(gè)epoch。
The correct results:

```
Epoch 48/50
496/496 [==============================] - 0s - loss: 0.3071 - acc: 0.7762 - val_loss: 4.9337 - val_acc: 0.3229
Epoch 49/50
496/496 [==============================] - 0s - loss: 0.2881 - acc: 0.8004 - val_loss: 4.3143 - val_acc: 0.3750
Epoch 50/50
496/496 [==============================] - 0s - loss: 0.3119 - acc: 0.7984 - val_loss: 4.4788 - val_acc: 0.5625
<keras.callbacks.History object at 0x7f25d4456e10>
```
4. Problems encountered
(1) The Flatten layer, the hardest one to get right
While wiring up the network I found Flatten to be the layer that most often goes wrong; a great many problems come down to feeding it the wrong input shape. For example:
```python
model.add(Flatten(input_shape=train_data.shape[1:]))
```

```
ValueError: Input 0 is incompatible with layer flatten_5: expected min_ndim=3, found ndim=2
```
于是要改成(4,4,512),這樣寫(xiě)(512,4,4)也不對(duì)!
(2) Label format problems
model.fit raised:

```
ValueError: Error when checking target: expected dense_2 to have shape (None, 5) but got array with shape (500, 1)
```

The label format had not been set up, an issue multi-class problems run into in particular. You need keras.utils.to_categorical():

```python
train_labels = keras.utils.to_categorical(train_labels, 5)
```

.
IV. Fine-tuning, approach 2: adjusting the weights
Neither the Keras Chinese docs nor the original author's post gets this part right!
First, look at the overall structure.
Fine-tuning takes three steps:
- build VGG-16 and load its weights, attach the previously defined fully connected network on top, and load its weights too;
- freeze part of the VGG16 network's parameters;
- train the model.
Notes:
- 1. To fine-tune, every layer should start from trained weights: for example, you must not put a randomly initialized fully connected network on top of pre-trained convolutional layers, because the large gradients produced by the random weights would wreck the convolutional layers' pre-trained weights.
- 2. We fine-tune only the last convolutional block rather than the whole network, to prevent overfitting; the full network has an enormous entropic capacity and therefore a strong tendency to overfit. The features learned by the lower blocks are more general and less abstract, so we keep the first blocks (general features) fixed and fine-tune only the later ones (more specialized features).
- 3. Fine-tuning should be done with a very low learning rate, usually with SGD rather than an adaptive-learning-rate optimizer such as RMSProp, to keep the updates small and avoid destroying the pre-trained features.
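The effect of a low learning rate is easy to see from a single SGD-with-momentum update step (a plain-Python sketch of the update rule, not Keras internals):

```python
def sgd_momentum_step(w, grad, velocity, lr=1e-4, momentum=0.9):
    """One SGD step with momentum: v = momentum*v - lr*grad; w = w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w, v = 0.5, 0.0   # a pretrained weight, starting with zero velocity
w, v = sgd_momentum_step(w, grad=2.0, velocity=v)
# the first update is only lr*grad = 2e-4: the pretrained weight barely moves
```

With lr=1e-4, even a sizeable gradient nudges each pretrained weight by a tiny amount per step, which is exactly the behaviour fine-tuning wants.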
.
1. Step one: build VGG-16 and load the weights
1.1 The Keras Chinese docs' version
The Keras Chinese documentation has it like this:
```python
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense

# Network structure
top_model = Sequential()
# top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Flatten(input_shape=(4, 4, 512)))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
# top_model.add(Dense(1, activation='sigmoid'))
top_model.add(Dense(5, activation='softmax'))

# Load the weights
top_model.load_weights(top_model_weights_path)
model.add(top_model)
The Chinese docs use the Sequential style, but I could not find the right weights for top_model_weights_path, and with the wrong weight file it fails:

```
ValueError: You are trying to load a weight file containing 16 layers into a model with 2 layers.
```

They also never explain what model is.
1.2 The original author's updated version
From the original author's code it is clear that model here is VGG16, so the author's version became:
```python
# Load the VGG16 weights + network
from keras.applications.vgg16_matt import VGG16
model = VGG16(weights='imagenet', include_top=False)

# The "small network" on top
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
# top_model.add(Flatten(input_shape=(4, 4, 512)))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(5, activation='softmax'))

# Load its weights
top_model.load_weights(top_model_weights_path)

# Join the two networks
model.add(top_model)
```
This raises the next problem: the author builds on VGG16 from applications, which is a Model-style (functional) network, while model.add() belongs to Sequential; the two are incompatible and it fails:

```
AttributeError: 'Model' object has no attribute 'add'
```
于是參考了VGG16原來(lái)網(wǎng)絡(luò)中的結(jié)構(gòu)自己寫(xiě)了:
```python
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential, Model
from keras.layers import Dropout, Flatten, Dense

# Load the VGG16 weights + network
from keras.applications.vgg16_matt import VGG16
model = VGG16(weights='imagenet', include_top=False)

# New layers on top
x = model.output
# The most problematic layer: Flatten
x = Flatten(name='flatten')(x)
# Attempt 1: x = Flatten()(x)
# Attempt 2: x = GlobalAveragePooling2D()(x)
# Attempt 3: from keras.layers import Reshape
#            x = Reshape((4, 4, 512))(x)
#            # TypeError: long() argument must be a string or a number, not 'NoneType'
x = Dense(256, activation='relu', name='fc1')(x)
x = Dropout(0.5)(x)
predictions = Dense(5, activation='softmax')(x)

vgg_model = Model(input=model.input, output=predictions)
```
Once again the Flatten() layer is the problem, and I made many attempts. The idea of this layer is to take the output of the VGG16 network-plus-weights model and hand it to Flatten() to collapse into a vector, but!
model.output has shape (?, ?, ?, 512),
so it inevitably fails:

```
ValueError: The shape of the input to "Flatten" is not fully defined (got (None, None, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.
```
(1) The original author's VGG16 code handles the Flatten layer as:

```python
x = Flatten(name='flatten')(x)
```

which fails in the same way.
(2) Borrowing from "Keras cross-domain image classification: transfer learning and fine-tuning":

```python
x = Reshape((4, 4, 512))(x)
```

also failed; presumably I simply don't know how to write this layer properly.
(3) Trying a GlobalAveragePooling2D layer instead:

```python
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu', name='fc1')(x)
x = Dropout(0.5)(x)
predictions = Dense(5, activation='softmax')(x)
```
This runs, but the fit results are:

```
Epoch 1/50
31/31 [==============================] - 10s - loss: 0.5575 - acc: 0.7730 - val_loss: 0.5191 - val_acc: 0.8000
Epoch 2/50
31/31 [==============================] - 9s - loss: 0.5548 - acc: 0.7760 - val_loss: 0.5256 - val_acc: 0.8000
...
Epoch 49/50
31/31 [==============================] - 9s - loss: 0.5602 - acc: 0.7730 - val_loss: 0.5285 - val_acc: 0.8000
Epoch 50/50
31/31 [==============================] - 9s - loss: 0.5583 - acc: 0.7780 - val_loss: 0.5220 - val_acc: 0.8000
<keras.callbacks.History object at 0x7fb90410fb10>
```

The numbers barely change from epoch to epoch, so something is still wrong; this part remains unsolved.
.
2. Freezing part of the VGG16 network's parameters
Then freeze the convolutional layers before the last convolutional block:

```python
# Freeze every layer before the last convolutional block
for layer in vgg_model.layers[:25]:
    layer.trainable = False

# Compile the model with a SGD/momentum optimizer and a very low learning rate
vgg_model.compile(loss='categorical_crossentropy',  # multi-class here (the binary original used binary_crossentropy)
                  optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
                  metrics=['accuracy'])
```

.
3. Training the model
Then train with a very low learning rate:

```python
# Prepare the data
train_data_dir = '/.../train'
validation_data_dir = '/.../validation'
img_width, img_height = 150, 150
nb_train_samples = 500
nb_validation_samples = 100
epochs = 50
batch_size = 16

# Image preprocessing generators
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=32,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=32,
    class_mode='categorical')

# Train
vgg_model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
```

If the network structure from the previous step loads correctly, everything from here on runs without problems.
Reposted from: https://www.cnblogs.com/Anita9002/p/8136578.html