Deep Learning with Recurrent Neural Networks (11-a): LSTM Sentiment Classification Code
- 1. Cell mode
- Code
- Run results
- 2. Layer mode
- Code
- Run results
1. Cell mode
Code
```python
import os

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential, layers, losses, optimizers

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

batchsz = 128        # batch size
total_words = 10000  # vocabulary size N_vocab
max_review_len = 80  # maximum sentence length s; longer sentences are truncated, shorter ones padded
embedding_len = 100  # word-vector feature length f

# Load the IMDB dataset; reviews are integer-encoded, one integer per word
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=total_words)
print(x_train.shape, len(x_train[0]), y_train.shape)
print(x_test.shape, len(x_test[0]), y_test.shape)

# Word-to-integer lookup table
word_index = keras.datasets.imdb.get_word_index()
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
# Invert the lookup table
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

decode_review(x_train[8])

# x_train: [b, 80], x_test: [b, 80]
# Truncate and pad the sentences to equal length; long sentences keep their tail,
# short sentences are padded at the front
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)

# Build the datasets: shuffle, batch, and drop the last incomplete batch
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.batch(batchsz, drop_remainder=True)
print('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))
print('x_test shape:', x_test.shape)

class MyRNN(keras.Model):
    # Build a multi-layer network out of Cells
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # [b, 64] initial Cell state vectors, reused across batches
        self.state0 = [tf.zeros([batchsz, units]), tf.zeros([batchsz, units])]
        self.state1 = [tf.zeros([batchsz, units]), tf.zeros([batchsz, units])]
        # Word embedding: [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # Build two Cells
        self.rnn_cell0 = layers.LSTMCell(units, dropout=0.5)
        self.rnn_cell1 = layers.LSTMCell(units, dropout=0.5)
        # Classification head mapping the Cell output to the 2 classes
        # [b, 80, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([layers.Dense(units),
                                    layers.Dropout(rate=0.5),
                                    layers.ReLU(),
                                    layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # Run the cells over the sequence: [b, 80, 100] => [b, 64]
        state0 = self.state0
        state1 = self.state1
        for word in tf.unstack(x, axis=1):  # word: [b, 100]
            out0, state0 = self.rnn_cell0(word, state0, training=training)
            out1, state1 = self.rnn_cell1(out0, state1, training=training)
        # The last output of the top layer feeds the classifier: [b, 64] => [b, 1]
        x = self.outlayer(out1, training=training)
        # p(y is pos | x)
        prob = tf.sigmoid(x)
        return prob

def main():
    units = 64   # length f of the RNN state vector
    epochs = 50  # training epochs

    model = MyRNN(units)
    # Compile
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    # Train and validate
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Test
    model.evaluate(db_test)

if __name__ == '__main__':
    main()
```
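The heart of the Cell-mode model is the unroll loop: two `LSTMCell` states are threaded through the time steps by hand. The gate arithmetic that each cell step performs can be sketched in plain NumPy (this is an illustrative toy with random weights and made-up shapes, not the author's code or Keras' internals):

```python
import numpy as np

def lstm_cell_step(x, state, W, U, b):
    """One LSTM cell step. x: [b, f]; state: (h, c), each [b, u].
    W: [f, 4u], U: [u, 4u], b: [4u] pack the input/forget/cell/output gate blocks."""
    h, c = state
    z = x @ W + h @ U + b
    i, f, g, o = np.split(z, 4, axis=1)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # forget old memory, write new
    h_new = sigmoid(o) * np.tanh(c_new)               # gated output
    return h_new, (h_new, c_new)

# Unroll two stacked cells over a [b, T, f] batch, threading the states,
# mirroring the `for word in tf.unstack(x, axis=1)` loop above
b_sz, T, f_dim, units = 2, 5, 8, 4
rng = np.random.default_rng(0)
params = [(rng.normal(size=(d, 4 * units)) * 0.1,
           rng.normal(size=(units, 4 * units)) * 0.1,
           np.zeros(4 * units)) for d in (f_dim, units)]
x = rng.normal(size=(b_sz, T, f_dim))
state0 = (np.zeros((b_sz, units)), np.zeros((b_sz, units)))
state1 = (np.zeros((b_sz, units)), np.zeros((b_sz, units)))
for t in range(T):
    out0, state0 = lstm_cell_step(x[:, t], state0, *params[0])
    out1, state1 = lstm_cell_step(out0, state1, *params[1])
print(out1.shape)  # last-step output of the top cell, shape (b, units)
```

As in the model above, only the final `out1` is kept; all intermediate outputs are discarded, which is why the classifier sees a `[b, units]` tensor rather than a sequence.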
Run results

The run produces the following output:
```
(25000,) 218 (25000,)
(25000,) 68 (25000,)
x_train shape: (25000, 80) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(0, shape=(), dtype=int64)
x_test shape: (25000, 80)
Epoch 1/50
195/195 [==============================] - 29s 103ms/step - loss: 0.5068 - accuracy: 0.7436 - val_loss: 0.3703 - val_accuracy: 0.8387
Epoch 2/50
195/195 [==============================] - 16s 82ms/step - loss: 0.3533 - accuracy: 0.8548 - val_loss: 0.3616 - val_accuracy: 0.8412
Epoch 3/50
195/195 [==============================] - 15s 79ms/step - loss: 0.3054 - accuracy: 0.8771 - val_loss: 0.3446 - val_accuracy: 0.8487
Epoch 4/50
195/195 [==============================] - 17s 88ms/step - loss: 0.2802 - accuracy: 0.8887 - val_loss: 0.3480 - val_accuracy: 0.8484
Epoch 5/50
195/195 [==============================] - 18s 92ms/step - loss: 0.2575 - accuracy: 0.9032 - val_loss: 0.3570 - val_accuracy: 0.8444
Epoch 6/50
195/195 [==============================] - 18s 91ms/step - loss: 0.2403 - accuracy: 0.9093 - val_loss: 0.4224 - val_accuracy: 0.8371
Epoch 7/50
195/195 [==============================] - 15s 79ms/step - loss: 0.2263 - accuracy: 0.9154 - val_loss: 0.4013 - val_accuracy: 0.8399
Epoch 8/50
195/195 [==============================] - 15s 76ms/step - loss: 0.2114 - accuracy: 0.9200 - val_loss: 0.4547 - val_accuracy: 0.8359
Epoch 9/50
195/195 [==============================] - 16s 82ms/step - loss: 0.2023 - accuracy: 0.9254 - val_loss: 0.4413 - val_accuracy: 0.8276
Epoch 10/50
195/195 [==============================] - 17s 85ms/step - loss: 0.1876 - accuracy: 0.9313 - val_loss: 0.4362 - val_accuracy: 0.8256
Epoch 11/50
195/195 [==============================] - 16s 83ms/step - loss: 0.1736 - accuracy: 0.9356 - val_loss: 0.4511 - val_accuracy: 0.8305
Epoch 12/50
195/195 [==============================] - 16s 84ms/step - loss: 0.1607 - accuracy: 0.9418 - val_loss: 0.5020 - val_accuracy: 0.8325
Epoch 13/50
195/195 [==============================] - 15s 78ms/step - loss: 0.1501 - accuracy: 0.9451 - val_loss: 0.4984 - val_accuracy: 0.8281
Epoch 14/50
195/195 [==============================] - 16s 82ms/step - loss: 0.1371 - accuracy: 0.9510 - val_loss: 0.5379 - val_accuracy: 0.8258
Epoch 15/50
195/195 [==============================] - 16s 80ms/step - loss: 0.1269 - accuracy: 0.9552 - val_loss: 0.5322 - val_accuracy: 0.8238
Epoch 16/50
195/195 [==============================] - 16s 83ms/step - loss: 0.1185 - accuracy: 0.9583 - val_loss: 0.7252 - val_accuracy: 0.8037
Epoch 17/50
195/195 [==============================] - 15s 78ms/step - loss: 0.1104 - accuracy: 0.9602 - val_loss: 0.5632 - val_accuracy: 0.8155
Epoch 18/50
195/195 [==============================] - 16s 80ms/step - loss: 0.1007 - accuracy: 0.9642 - val_loss: 0.7574 - val_accuracy: 0.8197
Epoch 19/50
195/195 [==============================] - 16s 80ms/step - loss: 0.0937 - accuracy: 0.9681 - val_loss: 0.6460 - val_accuracy: 0.8054
Epoch 20/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0857 - accuracy: 0.9702 - val_loss: 0.6929 - val_accuracy: 0.8134
Epoch 21/50
195/195 [==============================] - 16s 81ms/step - loss: 0.0796 - accuracy: 0.9725 - val_loss: 0.9142 - val_accuracy: 0.8059
Epoch 22/50
195/195 [==============================] - 15s 79ms/step - loss: 0.0705 - accuracy: 0.9746 - val_loss: 0.7580 - val_accuracy: 0.8065
Epoch 23/50
195/195 [==============================] - 15s 79ms/step - loss: 0.0659 - accuracy: 0.9771 - val_loss: 0.8530 - val_accuracy: 0.7987
Epoch 24/50
195/195 [==============================] - 16s 80ms/step - loss: 0.0630 - accuracy: 0.9793 - val_loss: 0.8808 - val_accuracy: 0.8112
Epoch 25/50
195/195 [==============================] - 16s 83ms/step - loss: 0.0540 - accuracy: 0.9819 - val_loss: 0.8355 - val_accuracy: 0.8008
Epoch 26/50
195/195 [==============================] - 16s 82ms/step - loss: 0.0463 - accuracy: 0.9842 - val_loss: 0.9459 - val_accuracy: 0.8097
Epoch 27/50
195/195 [==============================] - 15s 78ms/step - loss: 0.0470 - accuracy: 0.9840 - val_loss: 0.9115 - val_accuracy: 0.8101
Epoch 28/50
195/195 [==============================] - 17s 87ms/step - loss: 0.0429 - accuracy: 0.9862 - val_loss: 0.9409 - val_accuracy: 0.8120
Epoch 29/50
195/195 [==============================] - 16s 83ms/step - loss: 0.0383 - accuracy: 0.9877 - val_loss: 1.0979 - val_accuracy: 0.7990
Epoch 30/50
195/195 [==============================] - 16s 82ms/step - loss: 0.0351 - accuracy: 0.9884 - val_loss: 1.0956 - val_accuracy: 0.8114
Epoch 31/50
195/195 [==============================] - 16s 82ms/step - loss: 0.0353 - accuracy: 0.9883 - val_loss: 1.1614 - val_accuracy: 0.8026
Epoch 32/50
195/195 [==============================] - 18s 91ms/step - loss: 0.0320 - accuracy: 0.9899 - val_loss: 1.0825 - val_accuracy: 0.8088
Epoch 33/50
195/195 [==============================] - 19s 97ms/step - loss: 0.0301 - accuracy: 0.9907 - val_loss: 1.0638 - val_accuracy: 0.8129
Epoch 34/50
195/195 [==============================] - 20s 104ms/step - loss: 0.0269 - accuracy: 0.9910 - val_loss: 1.1916 - val_accuracy: 0.8087
Epoch 35/50
195/195 [==============================] - 18s 91ms/step - loss: 0.0249 - accuracy: 0.9921 - val_loss: 1.2331 - val_accuracy: 0.8040
Epoch 36/50
195/195 [==============================] - 15s 79ms/step - loss: 0.0243 - accuracy: 0.9921 - val_loss: 1.2130 - val_accuracy: 0.8064
Epoch 37/50
195/195 [==============================] - 19s 100ms/step - loss: 0.0235 - accuracy: 0.9931 - val_loss: 1.2002 - val_accuracy: 0.7985
Epoch 38/50
195/195 [==============================] - 19s 99ms/step - loss: 0.0219 - accuracy: 0.9931 - val_loss: 1.1760 - val_accuracy: 0.8082
Epoch 39/50
195/195 [==============================] - 19s 96ms/step - loss: 0.0229 - accuracy: 0.9928 - val_loss: 1.2577 - val_accuracy: 0.8077
Epoch 40/50
195/195 [==============================] - 17s 90ms/step - loss: 0.0193 - accuracy: 0.9932 - val_loss: 1.4784 - val_accuracy: 0.8094
Epoch 41/50
195/195 [==============================] - 16s 82ms/step - loss: 0.0214 - accuracy: 0.9935 - val_loss: 1.3102 - val_accuracy: 0.8067
Epoch 42/50
195/195 [==============================] - 16s 83ms/step - loss: 0.0171 - accuracy: 0.9950 - val_loss: 1.5331 - val_accuracy: 0.8066
Epoch 43/50
195/195 [==============================] - 15s 78ms/step - loss: 0.0180 - accuracy: 0.9947 - val_loss: 1.4144 - val_accuracy: 0.8026
Epoch 44/50
195/195 [==============================] - 16s 82ms/step - loss: 0.0148 - accuracy: 0.9952 - val_loss: 1.5186 - val_accuracy: 0.8072
Epoch 45/50
195/195 [==============================] - 16s 84ms/step - loss: 0.0167 - accuracy: 0.9953 - val_loss: 1.5549 - val_accuracy: 0.8073
Epoch 46/50
195/195 [==============================] - 19s 99ms/step - loss: 0.0163 - accuracy: 0.9946 - val_loss: 1.5496 - val_accuracy: 0.8070
Epoch 47/50
195/195 [==============================] - 16s 83ms/step - loss: 0.0154 - accuracy: 0.9954 - val_loss: 1.5598 - val_accuracy: 0.8046
Epoch 48/50
195/195 [==============================] - 18s 90ms/step - loss: 0.0161 - accuracy: 0.9952 - val_loss: 1.4052 - val_accuracy: 0.8080
Epoch 49/50
195/195 [==============================] - 16s 83ms/step - loss: 0.0132 - accuracy: 0.9956 - val_loss: 1.5454 - val_accuracy: 0.8009
Epoch 50/50
195/195 [==============================] - 17s 87ms/step - loss: 0.0144 - accuracy: 0.9962 - val_loss: 1.5731 - val_accuracy: 0.8071
195/195 [==============================] - 5s 24ms/step - loss: 1.5731 - accuracy: 0.8071
```

2. Layer mode
Code
```python
import os

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential, layers, losses, optimizers

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

batchsz = 128        # batch size
total_words = 10000  # vocabulary size N_vocab
max_review_len = 80  # maximum sentence length s; longer sentences are truncated, shorter ones padded
embedding_len = 100  # word-vector feature length f

# Load the IMDB dataset; reviews are integer-encoded, one integer per word
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=total_words)
print(x_train.shape, len(x_train[0]), y_train.shape)
print(x_test.shape, len(x_test[0]), y_test.shape)

# Word-to-integer lookup table
word_index = keras.datasets.imdb.get_word_index()
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
# Invert the lookup table
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

decode_review(x_train[8])

# x_train: [b, 80], x_test: [b, 80]
# Truncate and pad the sentences to equal length; long sentences keep their tail,
# short sentences are padded at the front
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)

# Build the datasets: shuffle, batch, and drop the last incomplete batch
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.batch(batchsz, drop_remainder=True)
print('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))
print('x_test shape:', x_test.shape)

class MyRNN(keras.Model):
    # Build the multi-layer network out of layers
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # Word embedding: [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # Build the recurrent stack; swapping in layers.LSTM is all it takes
        self.rnn = keras.Sequential([
            layers.LSTM(units, dropout=0.5, return_sequences=True),
            layers.LSTM(units, dropout=0.5)
        ])
        # Classification head mapping the recurrent output to the 2 classes
        # [b, 80, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([layers.Dense(32),
                                    layers.Dropout(rate=0.5),
                                    layers.ReLU(),
                                    layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # Run the stacked LSTM layers: [b, 80, 100] => [b, 64]
        x = self.rnn(x, training=training)
        # The last output of the top layer feeds the classifier: [b, 64] => [b, 1]
        x = self.outlayer(x, training=training)
        # p(y is pos | x)
        prob = tf.sigmoid(x)
        return prob

def main():
    units = 32   # length f of the RNN state vector
    epochs = 50  # training epochs

    model = MyRNN(units)
    # Compile
    model.compile(optimizer=optimizers.Adam(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    # Train and validate
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Test
    model.evaluate(db_test)

if __name__ == '__main__':
    main()
```
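The one subtlety in the layer-mode model is `return_sequences=True` on the first LSTM: the lower layer must hand the full sequence to the layer above it, while the top layer returns only its last step. The shape bookkeeping can be traced with a toy stand-in (a plain tanh RNN with random weights replaces the LSTM here purely for brevity; this is an illustration, not the author's code):

```python
import numpy as np

def rnn_layer(x, units, return_sequences, rng):
    """Plain tanh-RNN stand-in for layers.LSTM, used only to trace shapes."""
    b, T, f = x.shape
    Wx = rng.normal(size=(f, units)) * 0.1
    Wh = rng.normal(size=(units, units)) * 0.1
    h = np.zeros((b, units))
    outputs = []
    for t in range(T):
        h = np.tanh(x[:, t] @ Wx + h @ Wh)
        outputs.append(h)
    # return_sequences=True keeps every step, otherwise only the last one
    return np.stack(outputs, axis=1) if return_sequences else h

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 80, 100))                      # [b, 80, 100] embedded batch
h1 = rnn_layer(x, 64, return_sequences=True, rng=rng)    # [128, 80, 64]: full sequence for the next layer
h2 = rnn_layer(h1, 64, return_sequences=False, rng=rng)  # [128, 64]: last step only, for the classifier
print(h1.shape, h2.shape)
```

If the first layer omitted `return_sequences=True`, it would emit a `[b, 64]` tensor and the second recurrent layer, which expects a 3-D `[b, T, f]` input, would fail.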
Run results

The run produces the following output:
```
(25000,) 218 (25000,)
(25000,) 68 (25000,)
x_train shape: (25000, 80) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(0, shape=(), dtype=int64)
x_test shape: (25000, 80)
Epoch 1/50
195/195 [==============================] - 17s 76ms/step - loss: 0.5038 - accuracy: 0.7447 - val_loss: 0.3658 - val_accuracy: 0.8372
Epoch 2/50
195/195 [==============================] - 14s 73ms/step - loss: 0.3199 - accuracy: 0.8725 - val_loss: 0.3851 - val_accuracy: 0.8372
Epoch 3/50
195/195 [==============================] - 14s 74ms/step - loss: 0.2592 - accuracy: 0.9002 - val_loss: 0.3722 - val_accuracy: 0.8375
Epoch 4/50
195/195 [==============================] - 14s 74ms/step - loss: 0.2185 - accuracy: 0.9166 - val_loss: 0.4998 - val_accuracy: 0.8295
Epoch 5/50
195/195 [==============================] - 14s 74ms/step - loss: 0.1803 - accuracy: 0.9327 - val_loss: 0.5774 - val_accuracy: 0.8232
Epoch 6/50
195/195 [==============================] - 14s 74ms/step - loss: 0.1540 - accuracy: 0.9418 - val_loss: 0.5962 - val_accuracy: 0.8208
Epoch 7/50
195/195 [==============================] - 14s 73ms/step - loss: 0.1314 - accuracy: 0.9517 - val_loss: 0.6731 - val_accuracy: 0.8183
Epoch 8/50
195/195 [==============================] - 14s 74ms/step - loss: 0.1134 - accuracy: 0.9591 - val_loss: 0.7513 - val_accuracy: 0.8164
Epoch 9/50
195/195 [==============================] - 15s 75ms/step - loss: 0.0991 - accuracy: 0.9635 - val_loss: 0.8271 - val_accuracy: 0.8166
Epoch 10/50
195/195 [==============================] - 15s 78ms/step - loss: 0.0856 - accuracy: 0.9682 - val_loss: 0.9412 - val_accuracy: 0.8160
Epoch 11/50
195/195 [==============================] - 15s 78ms/step - loss: 0.0828 - accuracy: 0.9712 - val_loss: 0.7984 - val_accuracy: 0.8190
Epoch 12/50
195/195 [==============================] - 15s 79ms/step - loss: 0.0716 - accuracy: 0.9743 - val_loss: 0.8853 - val_accuracy: 0.8152
Epoch 13/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0601 - accuracy: 0.9792 - val_loss: 1.0944 - val_accuracy: 0.8133
Epoch 14/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0567 - accuracy: 0.9810 - val_loss: 1.0747 - val_accuracy: 0.8091
Epoch 15/50
195/195 [==============================] - 15s 75ms/step - loss: 0.0508 - accuracy: 0.9824 - val_loss: 1.2696 - val_accuracy: 0.8133
Epoch 16/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0495 - accuracy: 0.9834 - val_loss: 1.3418 - val_accuracy: 0.8093
Epoch 17/50
195/195 [==============================] - 15s 78ms/step - loss: 0.0440 - accuracy: 0.9853 - val_loss: 1.0453 - val_accuracy: 0.8102
Epoch 18/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0377 - accuracy: 0.9872 - val_loss: 1.3730 - val_accuracy: 0.8091
Epoch 19/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0375 - accuracy: 0.9889 - val_loss: 1.1634 - val_accuracy: 0.8105
Epoch 20/50
195/195 [==============================] - 15s 79ms/step - loss: 0.0362 - accuracy: 0.9878 - val_loss: 1.1654 - val_accuracy: 0.8118
Epoch 21/50
195/195 [==============================] - 16s 80ms/step - loss: 0.0346 - accuracy: 0.9897 - val_loss: 1.0447 - val_accuracy: 0.8141
Epoch 22/50
195/195 [==============================] - 16s 82ms/step - loss: 0.0311 - accuracy: 0.9898 - val_loss: 1.2387 - val_accuracy: 0.8126
Epoch 23/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0294 - accuracy: 0.9898 - val_loss: 1.0919 - val_accuracy: 0.8150
Epoch 24/50
195/195 [==============================] - 15s 79ms/step - loss: 0.0248 - accuracy: 0.9917 - val_loss: 1.3664 - val_accuracy: 0.8109
Epoch 25/50
195/195 [==============================] - 15s 78ms/step - loss: 0.0268 - accuracy: 0.9913 - val_loss: 1.4241 - val_accuracy: 0.8108
Epoch 26/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0215 - accuracy: 0.9931 - val_loss: 1.4574 - val_accuracy: 0.8125
Epoch 27/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0308 - accuracy: 0.9898 - val_loss: 1.2165 - val_accuracy: 0.8131
Epoch 28/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0221 - accuracy: 0.9926 - val_loss: 1.3817 - val_accuracy: 0.8087
Epoch 29/50
195/195 [==============================] - 15s 79ms/step - loss: 0.0269 - accuracy: 0.9915 - val_loss: 1.3401 - val_accuracy: 0.8103
Epoch 30/50
195/195 [==============================] - 16s 80ms/step - loss: 0.0207 - accuracy: 0.9934 - val_loss: 1.4784 - val_accuracy: 0.8112
Epoch 31/50
195/195 [==============================] - 14s 74ms/step - loss: 0.0217 - accuracy: 0.9935 - val_loss: 1.4824 - val_accuracy: 0.8133
Epoch 32/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0218 - accuracy: 0.9932 - val_loss: 1.6059 - val_accuracy: 0.8113
Epoch 33/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0209 - accuracy: 0.9934 - val_loss: 1.4859 - val_accuracy: 0.8153
Epoch 34/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0181 - accuracy: 0.9942 - val_loss: 1.1898 - val_accuracy: 0.8077
Epoch 35/50
195/195 [==============================] - 14s 72ms/step - loss: 0.0206 - accuracy: 0.9939 - val_loss: 1.3978 - val_accuracy: 0.8159
Epoch 36/50
195/195 [==============================] - 15s 75ms/step - loss: 0.0152 - accuracy: 0.9950 - val_loss: 1.6128 - val_accuracy: 0.8131
Epoch 37/50
195/195 [==============================] - 15s 79ms/step - loss: 0.0161 - accuracy: 0.9949 - val_loss: 1.4107 - val_accuracy: 0.8152
Epoch 38/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0169 - accuracy: 0.9952 - val_loss: 1.4496 - val_accuracy: 0.8145
Epoch 39/50
195/195 [==============================] - 14s 74ms/step - loss: 0.0173 - accuracy: 0.9948 - val_loss: 1.5069 - val_accuracy: 0.8142
Epoch 40/50
195/195 [==============================] - 14s 73ms/step - loss: 0.0220 - accuracy: 0.9936 - val_loss: 1.3958 - val_accuracy: 0.8155
Epoch 41/50
195/195 [==============================] - 14s 74ms/step - loss: 0.0174 - accuracy: 0.9946 - val_loss: 1.6310 - val_accuracy: 0.8115
Epoch 42/50
195/195 [==============================] - 14s 74ms/step - loss: 0.0138 - accuracy: 0.9961 - val_loss: 1.6134 - val_accuracy: 0.8077
Epoch 43/50
195/195 [==============================] - 15s 75ms/step - loss: 0.0196 - accuracy: 0.9942 - val_loss: 1.4115 - val_accuracy: 0.8141
Epoch 44/50
195/195 [==============================] - 15s 77ms/step - loss: 0.0177 - accuracy: 0.9944 - val_loss: 1.5318 - val_accuracy: 0.8151
Epoch 45/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0162 - accuracy: 0.9952 - val_loss: 1.5185 - val_accuracy: 0.8151
Epoch 46/50
195/195 [==============================] - 14s 74ms/step - loss: 0.0126 - accuracy: 0.9964 - val_loss: 1.5852 - val_accuracy: 0.8095
Epoch 47/50
195/195 [==============================] - 15s 75ms/step - loss: 0.0148 - accuracy: 0.9956 - val_loss: 1.5189 - val_accuracy: 0.8127
Epoch 48/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0132 - accuracy: 0.9960 - val_loss: 1.3913 - val_accuracy: 0.8155
Epoch 49/50
195/195 [==============================] - 15s 76ms/step - loss: 0.0196 - accuracy: 0.9939 - val_loss: 1.4081 - val_accuracy: 0.8131
Epoch 50/50
195/195 [==============================] - 15s 78ms/step - loss: 0.0145 - accuracy: 0.9955 - val_loss: 1.7372 - val_accuracy: 0.8095
195/195 [==============================] - 4s 21ms/step - loss: 1.7372 - accuracy: 0.8095
```