Training a gesture classifier in Python: Keras classifier accuracy plateaus during training

I have the following neural network, written in Keras with TensorFlow as the backend, which I'm running with Python 3.5 (Anaconda) on Windows 10:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD

# 283 input features, five fully connected ReLU layers with dropout,
# and a 4-way sigmoid output layer (Keras 1.x API)
model = Sequential()
model.add(Dense(100, input_dim=283, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(150, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(4, init='normal', activation='sigmoid'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
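The training call itself is not shown above; judging from the log below, it would look roughly like the following sketch, where the arrays X_train and y_train and the batch size are assumptions, not taken from the question:

# Hypothetical training call (Keras 1.x API): X_train is assumed to be
# a 6120 x 283 feature array and y_train a 6120 x 4 one-hot label array
model.fit(X_train, y_train, nb_epoch=10000, batch_size=32)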

I'm training on my GPU. During training (10,000 epochs), the naive network's accuracy climbs steadily from 0.25 to somewhere between 0.7 and 0.9, then suddenly drops back to 0.25 and stays there:

Epoch 1/10000
6120/6120 [==============================] - 1s - loss: 1.5329 - acc: 0.2665
Epoch 2/10000
6120/6120 [==============================] - 1s - loss: 1.2985 - acc: 0.3784
Epoch 3/10000
6120/6120 [==============================] - 1s - loss: 1.2259 - acc: 0.4891
Epoch 4/10000
6120/6120 [==============================] - 1s - loss: 1.1867 - acc: 0.5208
Epoch 5/10000
6120/6120 [==============================] - 1s - loss: 1.1494 - acc: 0.5199
Epoch 6/10000
6120/6120 [==============================] - 1s - loss: 1.1042 - acc: 0.4953
Epoch 7/10000
6120/6120 [==============================] - 1s - loss: 1.0491 - acc: 0.4982
Epoch 8/10000
6120/6120 [==============================] - 1s - loss: 1.0066 - acc: 0.5065
Epoch 9/10000
6120/6120 [==============================] - 1s - loss: 0.9749 - acc: 0.5338
Epoch 10/10000
6120/6120 [==============================] - 1s - loss: 0.9456 - acc: 0.5696
Epoch 11/10000
6120/6120 [==============================] - 1s - loss: 0.9252 - acc: 0.5995
Epoch 12/10000
6120/6120 [==============================] - 1s - loss: 0.9111 - acc: 0.6106
Epoch 13/10000
6120/6120 [==============================] - 1s - loss: 0.8772 - acc: 0.6160
Epoch 14/10000
6120/6120 [==============================] - 1s - loss: 0.8517 - acc: 0.6245
Epoch 15/10000
6120/6120 [==============================] - 1s - loss: 0.8170 - acc: 0.6345
Epoch 16/10000
6120/6120 [==============================] - 1s - loss: 0.7850 - acc: 0.6428
Epoch 17/10000
6120/6120 [==============================] - 1s - loss: 0.7633 - acc: 0.6580
Epoch 18/10000
6120/6120 [==============================] - 4s - loss: 0.7375 - acc: 0.6717
Epoch 19/10000
6120/6120 [==============================] - 1s - loss: 0.7058 - acc: 0.6850
Epoch 20/10000
6120/6120 [==============================] - 1s - loss: 0.6787 - acc: 0.7018
Epoch 21/10000
6120/6120 [==============================] - 1s - loss: 0.6557 - acc: 0.7093
Epoch 22/10000
6120/6120 [==============================] - 1s - loss: 0.6304 - acc: 0.7208
Epoch 23/10000
6120/6120 [==============================] - 1s - loss: 0.6052 - acc: 0.7270
Epoch 24/10000
6120/6120 [==============================] - 1s - loss: 0.5848 - acc: 0.7371
Epoch 25/10000
6120/6120 [==============================] - 1s - loss: 0.5564 - acc: 0.7536
Epoch 26/10000
6120/6120 [==============================] - 1s - loss: 0.1787 - acc: 0.4163
Epoch 27/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
Epoch 28/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
Epoch 29/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
Epoch 30/10000
6120/6120 [==============================] - 2s - loss: 1.1921e-07 - acc: 0.2500
Epoch 31/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
Epoch 32/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500 ...

My guess is that the optimizer is getting stuck in a local minimum where it assigns all of the data to a single class. How can I prevent it from doing this?
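One quick way to confirm that hypothesis is to inspect the distribution of predicted classes directly. A minimal sketch, assuming the trained model from above and a feature array X_train (an assumption, not shown in the question):

import numpy as np

# Count how often each of the 4 classes is predicted; if the network
# really has collapsed, a single class should receive nearly all samples
preds = model.predict(X_train)
print(np.bincount(np.argmax(preds, axis=1), minlength=4))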

Things I have tried (which did not seem to stop this from happening):

> Using a different optimizer (Adam; see the sketch after this list)
> Making sure the training data contains an equal number of examples from each class
> Increasing the amount of training data (currently at 6,000 samples)
> Varying the number of classes between 2 and 5
> Increasing the number of hidden layers in the network from 1 to 5
> Changing the width of the layers (from 50 to 500)
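For reference, the optimizer swap in the first item would look roughly like this (a sketch; the Adam learning rate here is an assumption):

from keras.optimizers import Adam

# Recompile the same model with Adam in place of SGD
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001), metrics=['accuracy'])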

None of these helped. Any other ideas as to why this happens and/or how to suppress it? Could it be a bug in Keras? Many thanks in advance for any suggestions.

Edit:

The problem seems to have been solved by changing the final activation from sigmoid to softmax and adding a maxnorm(3) weight constraint to the last two hidden layers:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD
from keras.constraints import maxnorm

# npoints is the number of input features and ncat the number of
# classes (both defined elsewhere in the original script)
model = Sequential()
model.add(Dense(100, input_dim=npoints, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(150, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu', W_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu', W_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(ncat, init='normal', activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
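As an aside, a softmax output layer is more conventionally paired with categorical_crossentropy than with mean_squared_error; a variant of the compile step under that more common pairing would be:

# Variant compile using the loss usually combined with a softmax output
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])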

Many thanks for the suggestions.
