Deep Learning with Recurrent Neural Networks (11-b): GRU Sentiment-Classification Code
- 1. Cell approach
- Code
- Results
- 2. Layer approach
- Code
- Results
1. Cell approach

Code
```python
import os

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, losses, optimizers, Sequential
from tensorflow.keras.datasets import imdb

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

batchsz = 128         # batch size
total_words = 10000   # vocabulary size N_vocab
max_review_len = 80   # maximum sentence length s; longer reviews are truncated, shorter ones padded
embedding_len = 100   # word-vector feature length f

# Load the IMDB dataset; samples are integer-encoded, one integer per word
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=total_words)
print(x_train.shape, len(x_train[0]), y_train.shape)
print(x_test.shape, len(x_test[0]), y_test.shape)

#%%
x_train[0]

#%%
# Word-to-integer lookup table
word_index = keras.datasets.imdb.get_word_index()
# for k, v in word_index.items():
#     print(k, v)

#%%
# Shift all indices by 3 to make room for the special tokens
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown word
word_index["<UNUSED>"] = 3
# Invert the table: integer -> word
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])


def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])


decode_review(x_train[8])

#%%
# x_train: [b, 80]
# x_test:  [b, 80]
# Truncate/pad every review to the same length; long reviews keep their tail,
# short reviews are padded at the front
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)
# Build the datasets: shuffle, batch, and drop the final batch if it is smaller than batchsz
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.batch(batchsz, drop_remainder=True)
print('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))
print('x_test shape:', x_test.shape)

#%%


class MyRNN(keras.Model):
    # Build the multi-layer network Cell by Cell
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # [b, 64]: initial state vectors for the two Cells, reused across batches
        self.state0 = [tf.zeros([batchsz, units])]
        self.state1 = [tf.zeros([batchsz, units])]
        # Word embedding: [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # Two GRU Cells
        self.rnn_cell0 = layers.GRUCell(units, dropout=0.5)
        self.rnn_cell1 = layers.GRUCell(units, dropout=0.5)
        # Classification head mapping the Cell output features to the 2 classes
        # [b, 80, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([
            layers.Dense(units),
            layers.Dropout(rate=0.5),
            layers.ReLU(),
            layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # Embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # Run the Cells over the sequence step by step: [b, 80, 100] => [b, 64]
        state0 = self.state0
        state1 = self.state1
        for word in tf.unstack(x, axis=1):  # word: [b, 100]
            out0, state0 = self.rnn_cell0(word, state0, training=training)
            out1, state1 = self.rnn_cell1(out0, state1, training=training)
        # The last output of the top Cell feeds the classifier: [b, 64] => [b, 1]
        x = self.outlayer(out1, training=training)
        # p(y is pos | x)
        prob = tf.sigmoid(x)
        return prob


def main():
    units = 64   # length of the RNN state vector
    epochs = 50  # number of training epochs

    model = MyRNN(units)
    # Assemble
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    # Train and validate
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Test
    model.evaluate(db_test)


if __name__ == '__main__':
    main()
```
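The per-time-step loop in `call` is the heart of the Cell approach. Below is a minimal standalone sketch of the same shape bookkeeping; the names (`toy_x`, `b`, `s`, `f`, `h`) and sizes are illustrative stand-ins, not part of the original script:

```python
import tensorflow as tf
from tensorflow.keras import layers

b, s, f, h = 4, 80, 100, 64              # toy sizes: batch, time steps, embedding dim, state dim
toy_x = tf.random.normal([b, s, f])      # stands in for the embedded batch [b, 80, 100]

cell = layers.GRUCell(h)
state = [tf.zeros([b, h])]               # a GRUCell carries its state as a list of tensors

for word in tf.unstack(toy_x, axis=1):   # one [b, f] slice per time step
    out, state = cell(word, state)       # out: [b, h]; state is replaced each step

print(out.shape)                         # (4, 64) -- the last-step output fed to the classifier
```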
Results

The output of the run is shown below:
```
(25000,) 218 (25000,)
(25000,) 68 (25000,)
x_train shape: (25000, 80) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(0, shape=(), dtype=int64)
x_test shape: (25000, 80)
Epoch 1/50 195/195 [==============================] - 27s 75ms/step - loss: 0.5313 - accuracy: 0.7230 - val_loss: 0.3653 - val_accuracy: 0.8379
Epoch 2/50 195/195 [==============================] - 11s 58ms/step - loss: 0.3624 - accuracy: 0.8492 - val_loss: 0.4092 - val_accuracy: 0.8215
Epoch 3/50 195/195 [==============================] - 12s 63ms/step - loss: 0.3125 - accuracy: 0.8747 - val_loss: 0.3466 - val_accuracy: 0.8469
Epoch 4/50 195/195 [==============================] - 12s 60ms/step - loss: 0.2882 - accuracy: 0.8863 - val_loss: 0.3449 - val_accuracy: 0.8473
Epoch 5/50 195/195 [==============================] - 11s 58ms/step - loss: 0.2620 - accuracy: 0.8993 - val_loss: 0.3564 - val_accuracy: 0.8441
Epoch 6/50 195/195 [==============================] - 12s 60ms/step - loss: 0.2433 - accuracy: 0.9050 - val_loss: 0.3797 - val_accuracy: 0.8390
Epoch 7/50 195/195 [==============================] - 13s 65ms/step - loss: 0.2284 - accuracy: 0.9136 - val_loss: 0.3808 - val_accuracy: 0.8442
Epoch 8/50 195/195 [==============================] - 12s 60ms/step - loss: 0.2148 - accuracy: 0.9191 - val_loss: 0.4447 - val_accuracy: 0.8404
Epoch 9/50 195/195 [==============================] - 12s 62ms/step - loss: 0.2026 - accuracy: 0.9249 - val_loss: 0.4039 - val_accuracy: 0.8409
Epoch 10/50 195/195 [==============================] - 12s 60ms/step - loss: 0.1858 - accuracy: 0.9325 - val_loss: 0.4054 - val_accuracy: 0.8361
Epoch 11/50 195/195 [==============================] - 12s 62ms/step - loss: 0.1795 - accuracy: 0.9350 - val_loss: 0.4211 - val_accuracy: 0.8390
Epoch 12/50 195/195 [==============================] - 13s 66ms/step - loss: 0.1629 - accuracy: 0.9408 - val_loss: 0.4978 - val_accuracy: 0.8359
Epoch 13/50 195/195 [==============================] - 12s 62ms/step - loss: 0.1563 - accuracy: 0.9448 - val_loss: 0.4397 - val_accuracy: 0.8361
Epoch 14/50 195/195 [==============================] - 11s 58ms/step - loss: 0.1453 - accuracy: 0.9478 - val_loss: 0.5085 - val_accuracy: 0.8353
Epoch 15/50 195/195 [==============================] - 11s 59ms/step - loss: 0.1368 - accuracy: 0.9522 - val_loss: 0.5143 - val_accuracy: 0.8325
Epoch 16/50 195/195 [==============================] - 11s 59ms/step - loss: 0.1288 - accuracy: 0.9538 - val_loss: 0.6158 - val_accuracy: 0.8255
Epoch 17/50 195/195 [==============================] - 12s 60ms/step - loss: 0.1201 - accuracy: 0.9573 - val_loss: 0.5548 - val_accuracy: 0.8282
Epoch 18/50 195/195 [==============================] - 12s 61ms/step - loss: 0.1124 - accuracy: 0.9611 - val_loss: 0.6440 - val_accuracy: 0.8269
Epoch 19/50 195/195 [==============================] - 12s 62ms/step - loss: 0.1068 - accuracy: 0.9637 - val_loss: 0.6014 - val_accuracy: 0.8256
Epoch 20/50 195/195 [==============================] - 12s 64ms/step - loss: 0.1016 - accuracy: 0.9645 - val_loss: 0.6732 - val_accuracy: 0.8175
Epoch 21/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0927 - accuracy: 0.9682 - val_loss: 0.6812 - val_accuracy: 0.8219
Epoch 22/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0887 - accuracy: 0.9702 - val_loss: 0.6998 - val_accuracy: 0.8215
Epoch 23/50 195/195 [==============================] - 14s 70ms/step - loss: 0.0795 - accuracy: 0.9733 - val_loss: 0.6457 - val_accuracy: 0.8169
Epoch 24/50 195/195 [==============================] - 13s 67ms/step - loss: 0.0761 - accuracy: 0.9749 - val_loss: 0.8002 - val_accuracy: 0.8152
Epoch 25/50 195/195 [==============================] - 11s 58ms/step - loss: 0.0690 - accuracy: 0.9764 - val_loss: 0.8147 - val_accuracy: 0.8177
Epoch 26/50 195/195 [==============================] - 12s 61ms/step - loss: 0.0663 - accuracy: 0.9782 - val_loss: 0.8104 - val_accuracy: 0.8183
Epoch 27/50 195/195 [==============================] - 12s 64ms/step - loss: 0.0616 - accuracy: 0.9791 - val_loss: 0.7919 - val_accuracy: 0.8157
Epoch 28/50 195/195 [==============================] - 12s 64ms/step - loss: 0.0554 - accuracy: 0.9805 - val_loss: 0.9586 - val_accuracy: 0.8163
Epoch 29/50 195/195 [==============================] - 12s 64ms/step - loss: 0.0564 - accuracy: 0.9816 - val_loss: 0.8694 - val_accuracy: 0.8107
Epoch 30/50 195/195 [==============================] - 13s 65ms/step - loss: 0.0459 - accuracy: 0.9853 - val_loss: 1.0061 - val_accuracy: 0.8098
Epoch 31/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0417 - accuracy: 0.9863 - val_loss: 1.1465 - val_accuracy: 0.8091
Epoch 32/50 195/195 [==============================] - 13s 65ms/step - loss: 0.0390 - accuracy: 0.9870 - val_loss: 1.1344 - val_accuracy: 0.8098
Epoch 33/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0386 - accuracy: 0.9871 - val_loss: 1.2853 - val_accuracy: 0.8065
Epoch 34/50 195/195 [==============================] - 13s 67ms/step - loss: 0.0359 - accuracy: 0.9881 - val_loss: 1.1043 - val_accuracy: 0.8101
Epoch 35/50 195/195 [==============================] - 12s 64ms/step - loss: 0.0340 - accuracy: 0.9894 - val_loss: 1.2222 - val_accuracy: 0.8111
Epoch 36/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0315 - accuracy: 0.9893 - val_loss: 1.2501 - val_accuracy: 0.8058
Epoch 37/50 195/195 [==============================] - 13s 67ms/step - loss: 0.0266 - accuracy: 0.9913 - val_loss: 1.2969 - val_accuracy: 0.8077
Epoch 38/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0247 - accuracy: 0.9922 - val_loss: 1.2353 - val_accuracy: 0.8058
Epoch 39/50 195/195 [==============================] - 13s 66ms/step - loss: 0.0217 - accuracy: 0.9931 - val_loss: 1.2809 - val_accuracy: 0.8091
Epoch 40/50 195/195 [==============================] - 12s 60ms/step - loss: 0.0224 - accuracy: 0.9929 - val_loss: 1.2100 - val_accuracy: 0.8097
Epoch 41/50 195/195 [==============================] - 12s 60ms/step - loss: 0.0188 - accuracy: 0.9942 - val_loss: 1.2793 - val_accuracy: 0.8109
Epoch 42/50 195/195 [==============================] - 12s 61ms/step - loss: 0.0169 - accuracy: 0.9948 - val_loss: 1.4286 - val_accuracy: 0.8087
Epoch 43/50 195/195 [==============================] - 11s 58ms/step - loss: 0.0160 - accuracy: 0.9951 - val_loss: 1.4288 - val_accuracy: 0.8069
Epoch 44/50 195/195 [==============================] - 12s 60ms/step - loss: 0.0178 - accuracy: 0.9948 - val_loss: 1.5837 - val_accuracy: 0.8103
Epoch 45/50 195/195 [==============================] - 12s 59ms/step - loss: 0.0117 - accuracy: 0.9957 - val_loss: 1.8518 - val_accuracy: 0.8079
Epoch 46/50 195/195 [==============================] - 11s 59ms/step - loss: 0.0112 - accuracy: 0.9968 - val_loss: 1.9529 - val_accuracy: 0.8082
Epoch 47/50 195/195 [==============================] - 12s 61ms/step - loss: 0.0127 - accuracy: 0.9961 - val_loss: 1.6620 - val_accuracy: 0.8096
Epoch 48/50 195/195 [==============================] - 11s 58ms/step - loss: 0.0105 - accuracy: 0.9968 - val_loss: 1.9470 - val_accuracy: 0.8075
Epoch 49/50 195/195 [==============================] - 12s 61ms/step - loss: 0.0139 - accuracy: 0.9959 - val_loss: 1.8875 - val_accuracy: 0.8094
Epoch 50/50 195/195 [==============================] - 12s 59ms/step - loss: 0.0114 - accuracy: 0.9972 - val_loss: 1.9203 - val_accuracy: 0.8064
195/195 [==============================] - 3s 16ms/step - loss: 1.9203 - accuracy: 0.8064
```
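Note that validation accuracy peaks at about 0.847 around epoch 4 and then drifts down while training accuracy climbs past 0.99: the model overfits. One standard remedy, not used in the original script, is Keras early stopping. A sketch of the change to `main()`, assuming `model`, `db_train`, and `db_test` from the script above are in scope:

```python
from tensorflow import keras

# Hypothetical addition: stop once val_accuracy has not improved for 5 epochs,
# and roll back to the best weights seen so far.
early_stop = keras.callbacks.EarlyStopping(monitor='val_accuracy',
                                           patience=5,
                                           restore_best_weights=True)
model.fit(db_train, epochs=50, validation_data=db_test, callbacks=[early_stop])
```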
2. Layer approach

Code
```python
import os

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, losses, optimizers, Sequential
from tensorflow.keras.datasets import imdb

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

batchsz = 128         # batch size
total_words = 10000   # vocabulary size N_vocab
max_review_len = 80   # maximum sentence length s; longer reviews are truncated, shorter ones padded
embedding_len = 100   # word-vector feature length f

# Load the IMDB dataset; samples are integer-encoded, one integer per word
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=total_words)
print(x_train.shape, len(x_train[0]), y_train.shape)
print(x_test.shape, len(x_test[0]), y_test.shape)

#%%
x_train[0]

#%%
# Word-to-integer lookup table
word_index = keras.datasets.imdb.get_word_index()

#%%
# Shift all indices by 3 to make room for the special tokens
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown word
word_index["<UNUSED>"] = 3
# Invert the table: integer -> word
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])


def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])


decode_review(x_train[8])

#%%
# x_train: [b, 80]
# x_test:  [b, 80]
# Truncate/pad every review to the same length; long reviews keep their tail,
# short reviews are padded at the front
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)
# Build the datasets: shuffle, batch, and drop the final batch if it is smaller than batchsz
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.batch(batchsz, drop_remainder=True)
print('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))
print('x_test shape:', x_test.shape)

#%%


class MyRNN(keras.Model):
    # Build the multi-layer network out of GRU layers
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # Word embedding: [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # Stacked GRU layers; the first must return the full sequence for the second
        self.rnn = keras.Sequential([
            layers.GRU(units, dropout=0.5, return_sequences=True),
            layers.GRU(units, dropout=0.5)])
        # Classification head mapping the RNN output features to the 2 classes
        # [b, 80, 100] => [b, 32] => [b, 1]
        self.outlayer = Sequential([
            layers.Dense(32),
            layers.Dropout(rate=0.5),
            layers.ReLU(),
            layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # Embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # Run the stacked GRUs: [b, 80, 100] => [b, 32]
        x = self.rnn(x, training=training)
        # The final-step output feeds the classifier: [b, 32] => [b, 1]
        x = self.outlayer(x, training=training)
        # p(y is pos | x)
        prob = tf.sigmoid(x)
        return prob


def main():
    units = 32   # length of the RNN state vector
    epochs = 50  # number of training epochs

    model = MyRNN(units)
    # Assemble
    model.compile(optimizer=optimizers.Adam(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    # Train and validate
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Test
    model.evaluate(db_test)


if __name__ == '__main__':
    main()
```
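The only subtlety in the stacked setup is `return_sequences=True` on the first GRU: it must hand the full sequence of hidden states to the layer above it, while the last GRU returns only its final state. A minimal shape check (toy sizes, illustrative only):

```python
import tensorflow as tf
from tensorflow.keras import layers

b, s, f, h = 4, 80, 100, 32                            # toy sizes: batch, steps, embedding dim, state dim
toy_x = tf.random.normal([b, s, f])                    # stands in for an embedded batch

seq_out = layers.GRU(h, return_sequences=True)(toy_x)  # [b, 80, 32]: one state per time step
last_out = layers.GRU(h)(seq_out)                      # [b, 32]: final-step state only

print(seq_out.shape, last_out.shape)                   # (4, 80, 32) (4, 32)
```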
Results

The output of the run is shown below:
```
(25000,) 218 (25000,)
(25000,) 68 (25000,)
x_train shape: (25000, 80) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(0, shape=(), dtype=int64)
x_test shape: (25000, 80)
Epoch 1/50 195/195 [==============================] - 15s 63ms/step - loss: 0.5453 - accuracy: 0.7086 - val_loss: 0.3727 - val_accuracy: 0.8329
Epoch 2/50 195/195 [==============================] - 12s 59ms/step - loss: 0.3404 - accuracy: 0.8619 - val_loss: 0.3770 - val_accuracy: 0.8387
Epoch 3/50 195/195 [==============================] - 12s 62ms/step - loss: 0.2796 - accuracy: 0.8921 - val_loss: 0.3811 - val_accuracy: 0.8334
Epoch 4/50 195/195 [==============================] - 12s 61ms/step - loss: 0.2419 - accuracy: 0.9076 - val_loss: 0.4437 - val_accuracy: 0.8317
Epoch 5/50 195/195 [==============================] - 12s 60ms/step - loss: 0.2083 - accuracy: 0.9243 - val_loss: 0.5327 - val_accuracy: 0.8237
Epoch 6/50 195/195 [==============================] - 12s 59ms/step - loss: 0.1819 - accuracy: 0.9345 - val_loss: 0.5159 - val_accuracy: 0.8251
Epoch 7/50 195/195 [==============================] - 12s 63ms/step - loss: 0.1492 - accuracy: 0.9479 - val_loss: 0.6070 - val_accuracy: 0.8212
Epoch 8/50 195/195 [==============================] - 12s 63ms/step - loss: 0.1356 - accuracy: 0.9528 - val_loss: 0.6642 - val_accuracy: 0.8227
Epoch 9/50 195/195 [==============================] - 13s 65ms/step - loss: 0.1080 - accuracy: 0.9640 - val_loss: 0.6305 - val_accuracy: 0.8207
Epoch 10/50 195/195 [==============================] - 13s 65ms/step - loss: 0.0973 - accuracy: 0.9663 - val_loss: 0.8183 - val_accuracy: 0.8166
Epoch 11/50 195/195 [==============================] - 12s 64ms/step - loss: 0.0859 - accuracy: 0.9712 - val_loss: 0.8450 - val_accuracy: 0.8155
Epoch 12/50 195/195 [==============================] - 14s 70ms/step - loss: 0.0783 - accuracy: 0.9736 - val_loss: 0.7626 - val_accuracy: 0.8115
Epoch 13/50 195/195 [==============================] - 12s 61ms/step - loss: 0.0757 - accuracy: 0.9752 - val_loss: 0.9203 - val_accuracy: 0.8110
Epoch 14/50 195/195 [==============================] - 12s 61ms/step - loss: 0.0600 - accuracy: 0.9802 - val_loss: 1.0984 - val_accuracy: 0.8108
Epoch 15/50 195/195 [==============================] - 12s 60ms/step - loss: 0.0559 - accuracy: 0.9810 - val_loss: 1.0869 - val_accuracy: 0.8143
Epoch 16/50 195/195 [==============================] - 12s 60ms/step - loss: 0.0509 - accuracy: 0.9838 - val_loss: 1.1889 - val_accuracy: 0.8106
Epoch 17/50 195/195 [==============================] - 12s 61ms/step - loss: 0.0498 - accuracy: 0.9832 - val_loss: 1.1193 - val_accuracy: 0.8130
Epoch 18/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0450 - accuracy: 0.9854 - val_loss: 1.1024 - val_accuracy: 0.8119
Epoch 19/50 195/195 [==============================] - 13s 65ms/step - loss: 0.0420 - accuracy: 0.9860 - val_loss: 1.2353 - val_accuracy: 0.8086
Epoch 20/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0384 - accuracy: 0.9875 - val_loss: 1.2411 - val_accuracy: 0.8073
Epoch 21/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0382 - accuracy: 0.9876 - val_loss: 1.2832 - val_accuracy: 0.8076
Epoch 22/50 195/195 [==============================] - 13s 66ms/step - loss: 0.0376 - accuracy: 0.9879 - val_loss: 1.3139 - val_accuracy: 0.8052
Epoch 23/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0314 - accuracy: 0.9897 - val_loss: 1.2738 - val_accuracy: 0.8087
Epoch 24/50 195/195 [==============================] - 12s 64ms/step - loss: 0.0324 - accuracy: 0.9898 - val_loss: 1.3274 - val_accuracy: 0.8111
Epoch 25/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0303 - accuracy: 0.9903 - val_loss: 1.3256 - val_accuracy: 0.8083
Epoch 26/50 195/195 [==============================] - 12s 60ms/step - loss: 0.0290 - accuracy: 0.9903 - val_loss: 1.3456 - val_accuracy: 0.8090
Epoch 27/50 195/195 [==============================] - 13s 64ms/step - loss: 0.0262 - accuracy: 0.9910 - val_loss: 1.3736 - val_accuracy: 0.8090
Epoch 28/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0264 - accuracy: 0.9919 - val_loss: 1.5663 - val_accuracy: 0.8065
Epoch 29/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0279 - accuracy: 0.9915 - val_loss: 1.2952 - val_accuracy: 0.8026
Epoch 30/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0251 - accuracy: 0.9919 - val_loss: 1.3855 - val_accuracy: 0.8067
Epoch 31/50 195/195 [==============================] - 13s 69ms/step - loss: 0.0233 - accuracy: 0.9925 - val_loss: 1.5669 - val_accuracy: 0.8052
Epoch 32/50 195/195 [==============================] - 13s 68ms/step - loss: 0.0217 - accuracy: 0.9932 - val_loss: 1.5572 - val_accuracy: 0.8039
Epoch 33/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0228 - accuracy: 0.9928 - val_loss: 1.4645 - val_accuracy: 0.8022
Epoch 34/50 195/195 [==============================] - 13s 67ms/step - loss: 0.0238 - accuracy: 0.9923 - val_loss: 1.5204 - val_accuracy: 0.8077
Epoch 35/50 195/195 [==============================] - 13s 65ms/step - loss: 0.0179 - accuracy: 0.9937 - val_loss: 1.3944 - val_accuracy: 0.8080
Epoch 36/50 195/195 [==============================] - 13s 68ms/step - loss: 0.0178 - accuracy: 0.9946 - val_loss: 1.6660 - val_accuracy: 0.8063
Epoch 37/50 195/195 [==============================] - 13s 66ms/step - loss: 0.0163 - accuracy: 0.9954 - val_loss: 1.9218 - val_accuracy: 0.8047
Epoch 38/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0190 - accuracy: 0.9940 - val_loss: 1.5856 - val_accuracy: 0.8075
Epoch 39/50 195/195 [==============================] - 13s 67ms/step - loss: 0.0155 - accuracy: 0.9952 - val_loss: 1.5744 - val_accuracy: 0.8077
Epoch 40/50 195/195 [==============================] - 13s 66ms/step - loss: 0.0167 - accuracy: 0.9947 - val_loss: 1.7135 - val_accuracy: 0.8053
Epoch 41/50 195/195 [==============================] - 13s 66ms/step - loss: 0.0155 - accuracy: 0.9951 - val_loss: 1.4940 - val_accuracy: 0.8037
Epoch 42/50 195/195 [==============================] - 13s 68ms/step - loss: 0.0167 - accuracy: 0.9954 - val_loss: 1.6509 - val_accuracy: 0.8040
Epoch 43/50 195/195 [==============================] - 13s 65ms/step - loss: 0.0151 - accuracy: 0.9953 - val_loss: 1.6721 - val_accuracy: 0.8045
Epoch 44/50 195/195 [==============================] - 14s 69ms/step - loss: 0.0166 - accuracy: 0.9945 - val_loss: 1.6668 - val_accuracy: 0.8053
Epoch 45/50 195/195 [==============================] - 13s 66ms/step - loss: 0.0201 - accuracy: 0.9942 - val_loss: 1.4363 - val_accuracy: 0.8076
Epoch 46/50 195/195 [==============================] - 12s 62ms/step - loss: 0.0140 - accuracy: 0.9956 - val_loss: 1.5191 - val_accuracy: 0.8050
Epoch 47/50 195/195 [==============================] - 12s 59ms/step - loss: 0.0129 - accuracy: 0.9960 - val_loss: 1.7104 - val_accuracy: 0.8064
Epoch 48/50 195/195 [==============================] - 12s 59ms/step - loss: 0.0143 - accuracy: 0.9954 - val_loss: 1.6236 - val_accuracy: 0.8054
Epoch 49/50 195/195 [==============================] - 12s 59ms/step - loss: 0.0151 - accuracy: 0.9961 - val_loss: 1.6736 - val_accuracy: 0.8058
Epoch 50/50 195/195 [==============================] - 12s 63ms/step - loss: 0.0120 - accuracy: 0.9962 - val_loss: 1.7336 - val_accuracy: 0.8043
195/195 [==============================] - 3s 15ms/step - loss: 1.7336 - accuracy: 0.8043
```
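The layer-style run shows the same pattern: validation accuracy peaks near 0.839 at epoch 2 and then decays as training accuracy saturates. To close the loop, here is a hedged usage sketch, not part of the original post, that scores a fresh review with the trained layer-style model (the Cell-style model fixes its batch size through its initial states, so the layer variant is the convenient one for single samples). `encode_review` is a hypothetical helper; `model`, `word_index`, `total_words`, and `max_review_len` are assumed to be in scope from the script above:

```python
from tensorflow import keras

def encode_review(text):
    # Hypothetical helper: map words to their shifted IMDB indices;
    # out-of-vocabulary or rare (index >= total_words) words fall back to <UNK>
    tokens = [word_index["<START>"]]
    for w in text.lower().split():
        idx = word_index.get(w, word_index["<UNK>"])
        tokens.append(idx if idx < total_words else word_index["<UNK>"])
    return keras.preprocessing.sequence.pad_sequences([tokens], maxlen=max_review_len)

x = encode_review("this movie was a wonderful surprise")
prob = float(model.predict(x)[0, 0])   # sigmoid output: P(positive | review)
print('positive' if prob > 0.5 else 'negative', prob)
```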