
Deep Learning with Recurrent Neural Networks (5): Hands-On RNN Sentiment Classification

  • 1. Dataset
  • 2. Network Model
  • 3. Training and Testing
  • Complete Code
  • Run Results

Let us now tackle the sentiment classification problem with a basic RNN. The network structure is shown in the figure below. The network contains two RNN layers that recurrently extract semantic features from the input sequence. The state vector at the last time step of the second RNN layer, $\boldsymbol h_s^{(2)}$, serves as the global semantic representation of the sentence and is fed into a fully connected classification network, which outputs the probability that the sample $\boldsymbol x$ carries positive sentiment, $P(\boldsymbol x \text{ is positive} \mid \boldsymbol x) \in [0,1]$.

Figure: network structure for the sentiment classification task
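For orientation, here is a minimal sketch of the same architecture using Keras's built-in SimpleRNN layers (the article below builds it at the cell level instead; the sizes mirror the constants defined in the next section, and model_sketch is our own name):

from tensorflow.keras import layers, Sequential

# High-level sketch of the two-layer RNN classifier (illustrative only)
model_sketch = Sequential([
    layers.Embedding(10000, 100, input_length=80),  # word embedding: [b, 80] => [b, 80, 100]
    layers.SimpleRNN(64, return_sequences=True),    # layer 1: pass the full state sequence upward
    layers.SimpleRNN(64),                           # layer 2: keep only the last state h_s^(2)
    layers.Dense(1, activation='sigmoid')           # output P(x is positive | x)
])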

1. Dataset

Here we use the classic IMDB movie-review dataset for the sentiment classification task. The IMDB dataset contains 50,000 user reviews labeled as negative or positive: reviews with an IMDB rating below 5 are labeled 0 (negative), and reviews with a rating of 7 or higher are labeled 1 (positive). 25,000 reviews are used for the training set and 25,000 for the test set.

The IMDB dataset can be loaded with the datasets utility provided by Keras, as follows:

import os

import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, losses, optimizers, Sequential
from tensorflow.python.keras.datasets import imdb

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

batchsz = 128        # batch size
total_words = 10000  # vocabulary size N_vocab
max_review_len = 80  # maximum sentence length s; longer sentences are truncated, shorter ones padded
embedding_len = 100  # word-vector feature length n
# Load the IMDB dataset; the data is integer-encoded, one integer per word
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=total_words)
# Print the shapes of the inputs and the labels
print(x_train.shape, len(x_train[0]), y_train.shape)
print(x_test.shape, len(x_test[0]), y_test.shape)


The output is:

(25000,) 218 (25000,)
(25000,) 68 (25000,)


As we can see, x_train and x_test are one-dimensional arrays of length 25,000 whose elements are variable-length lists holding the integer-encoded sentences. For example, the first training sentence contains 218 words and the first test sentence contains 68 words; every sentence begins with a start-of-sentence token ID.
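Before fixing a length threshold for the sentences, it can be useful to glance at the length distribution; a quick illustrative check (not part of the original script) might be:

# Distribution of sentence lengths in the training set
lengths = [len(s) for s in x_train]
print(np.mean(lengths), np.median(lengths), np.max(lengths))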

How is each word encoded as an integer? We can inspect the encoding table to find out, for example:

# Integer encoding table
word_index = imdb.get_word_index()
# Print each word in the table and its integer ID
for k, v in word_index.items():
    print(k, v)


The output prints every word in the vocabulary together with its integer ID.


Since the keys of the table are words and the values are IDs, we invert the encoding table and add the IDs of the special tokens, as follows:

# The first 4 IDs are reserved for special tokens
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown word
word_index["<UNUSED>"] = 3
# Invert the encoding table
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
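As a quick sanity check of the shifted table (assuming the standard IMDB index, in which the most frequent word 'the' has raw ID 1), one could probe it like this:

print(word_index['the'])       # 4: raw ID 1 shifted past the 3 extra special slots
print(reverse_word_index[1])   # '<START>'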


A digitally encoded sentence can be converted back to a string with the following function:

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

# Convert the first sentence back to a string
print(decode_review(x_train[0]))


The output is as follows:

<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all


Since the sentences vary in length, we set a length threshold. Sentences longer than the threshold are truncated, cutting words from either the beginning or the end of the sentence; sentences shorter than the threshold are padded at the beginning or the end. Truncation and padding are conveniently handled by the keras.preprocessing.sequence.pad_sequences() function, for example:

# Truncate and pad the sentences to equal length; here long sentences keep their
# trailing part and short sentences are padded at the front
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)
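Note that pad_sequences defaults to padding='pre' and truncating='pre'. If one instead wanted to keep the beginning of long sentences and pad short ones at the end, an illustrative variant (x_alt is our own name) would be:

x_alt = keras.preprocessing.sequence.pad_sequences(
    x_train, maxlen=max_review_len, truncating='post', padding='post')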


After truncation and padding to a uniform length, we wrap the data into Dataset objects and add the usual dataset processing steps, as follows:

# Build the datasets: shuffle, batch, and drop the final batch smaller than batchsz
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.batch(batchsz, drop_remainder=True)
# Print dataset statistics
print('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))
print('x_test shape:', x_test.shape)
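To verify the pipeline, one batch can be pulled out and its shapes inspected (a quick check, not part of the original script):

# Peek at a single batch: x -> (128, 80), y -> (128,)
x_batch, y_batch = next(iter(db_train))
print(x_batch.shape, y_batch.shape)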


The output shows that x_train now has shape (25000, 80) and that the labels take values in {0, 1}.


As we can see, after truncation and padding every sentence has length 80, the chosen threshold. Setting drop_remainder=True discards the final batch, whose actual size may be smaller than the preset batch size: 25,000 samples at a batch size of 128 yield 195 full batches, so the last, 40-sample batch is dropped.


2. Network Model

We create a custom model class MyRNN, subclassing the Model base class. It needs an Embedding layer, two RNN layers, and a classification network, as follows:

class MyRNN(keras.Model):
    # Build the multi-layer network cell by cell
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # [b, 64], initial state vectors for the cells, reused across batches
        self.state0 = [tf.zeros([batchsz, units])]
        self.state1 = [tf.zeros([batchsz, units])]
        # Word-vector encoding [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # Build 2 cells
        self.rnn_cell0 = layers.SimpleRNNCell(units, dropout=0.5)
        self.rnn_cell1 = layers.SimpleRNNCell(units, dropout=0.5)
        # Build the classification network for the cell output features; binary classification
        # [b, 80, 100] => [b, 64] => [b, 1]
        self.outlayer = layers.Dense(1)


Here the word-vector length is $n = 100$ and the RNN state-vector length is $h = \text{units}$. The classification network performs binary classification, so the output layer has a single node.

The forward pass works as follows: the input sequence is embedded into word vectors by the Embedding layer, then processed step by step through the two RNN layers to extract semantic features. The state vector at the last time step of the last layer is fed into the classification network, and a Sigmoid activation produces the output probability. The code is as follows:

def call(self, inputs, training=None):
    x = inputs  # [b, 80]
    # Get the word vectors: embedding: [b, 80] => [b, 80, 100]
    x = self.embedding(x)
    # Run through the 2 RNN cells: [b, 80, 100] => [b, 64]
    state0 = self.state0
    state1 = self.state1
    for word in tf.unstack(x, axis=1):  # word: [b, 100]
        out0, state0 = self.rnn_cell0(word, state0, training)
        out1, state1 = self.rnn_cell1(out0, state1, training)
    # The last output of the last layer feeds the classification network: [b, 64] => [b, 1]
    x = self.outlayer(out1, training=training)
    # Apply the activation: p(y is pos|x)
    prob = tf.sigmoid(x)
    return prob
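As a quick smoke test (illustrative only; the names below are ours), the model can be run on a random integer batch to confirm the output shape:

model = MyRNN(units=64)
# Random word IDs shaped like a real batch: [128, 80]
dummy = tf.random.uniform([batchsz, max_review_len], maxval=total_words, dtype=tf.int32)
prob = model(dummy)
print(prob.shape)  # (128, 1), values in (0, 1)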

3. Training and Testing


For simplicity, we train the network with Keras's Compile & Fit interface, setting the optimizer to RMSprop with a learning rate of 0.001, the loss to the binary cross-entropy BinaryCrossentropy, and accuracy as the test metric. The code is as follows:

# Training and testing
def main():
    units = 64   # RNN state-vector length
    epochs = 50  # number of training epochs
    model = MyRNN(units)
    # Compile
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    # Train and validate
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Test
    model.evaluate(db_test)
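For reference, here is a minimal sketch of the equivalent manual training step with tf.GradientTape, assuming the same model, loss, and optimizer objects (this is not part of the original script):

optimizer = optimizers.RMSprop(0.001)
loss_fn = losses.BinaryCrossentropy()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        prob = model(x, training=True)  # [b, 1]
        loss = loss_fn(tf.cast(y, tf.float32), tf.squeeze(prob, axis=1))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss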


Training the network for a fixed 20 epochs yields an accuracy of 80.1% on the test set.


Complete Code

import os
import ssl

import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, losses, optimizers, Sequential
from tensorflow.python.keras.datasets import imdb

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
ssl._create_default_https_context = ssl._create_unverified_context

batchsz = 128        # batch size
total_words = 10000  # vocabulary size N_vocab
max_review_len = 80  # maximum sentence length s; longer sentences are truncated, shorter ones padded
embedding_len = 100  # word-vector feature length n

# Load the IMDB dataset; the data is integer-encoded, one integer per word
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=total_words)
# Print the shapes of the inputs and the labels
print(x_train.shape, len(x_train[0]), y_train.shape)
print(x_test.shape, len(x_test[0]), y_test.shape)

# Integer encoding table
word_index = imdb.get_word_index()
# Print each word in the table and its integer ID
# for k, v in word_index.items():
#     print(k, v)

# The first 4 IDs are reserved for special tokens
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown word
word_index["<UNUSED>"] = 3
# Invert the encoding table
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])


def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

# # Convert the first sentence back to a string
# print(decode_review(x_train[0]))

# Truncate and pad the sentences to equal length; long sentences keep their
# trailing part and short sentences are padded at the front
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)

# Build the datasets: shuffle, batch, and drop the final batch smaller than batchsz
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.batch(batchsz, drop_remainder=True)
# Print dataset statistics
print('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))
print('x_test shape:', x_test.shape)


class MyRNN(keras.Model):
    # Build the multi-layer network cell by cell
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # [b, 64], initial state vectors for the cells, reused across batches
        self.state0 = [tf.zeros([batchsz, units])]
        self.state1 = [tf.zeros([batchsz, units])]
        # Word-vector encoding [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # Build 2 cells
        self.rnn_cell0 = layers.SimpleRNNCell(units, dropout=0.5)
        self.rnn_cell1 = layers.SimpleRNNCell(units, dropout=0.5)
        # Build the classification network for the cell output features; binary classification
        # [b, 80, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([layers.Dense(units),
                                    layers.Dropout(rate=0.5),
                                    layers.ReLU(),
                                    layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # Get the word vectors: embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # Run through the 2 RNN cells: [b, 80, 100] => [b, 64]
        state0 = self.state0
        state1 = self.state1
        for word in tf.unstack(x, axis=1):  # word: [b, 100]
            out0, state0 = self.rnn_cell0(word, state0, training)
            out1, state1 = self.rnn_cell1(out0, state1, training)
        # The last output of the last layer feeds the classification network: [b, 64] => [b, 1]
        x = self.outlayer(out1, training=training)
        # Apply the activation: p(y is pos|x)
        prob = tf.sigmoid(x)
        return prob


# Training and testing
def main():
    units = 64   # RNN state-vector length
    epochs = 50  # number of training epochs
    model = MyRNN(units)
    # Compile
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    # Train and validate
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Test
    model.evaluate(db_test)


if __name__ == '__main__':
    main()

Run Results


As we can see, after 45 training epochs the accuracy peaks at 80.08%.
