

Using an RNN to Predict Document Authorship


Contents

    • 1. Text Processing
    • 2. Text Serialization
    • 3. Dataset Splitting
    • 4. Building the RNN Model
    • 5. Training
    • 6. Testing

Reference: the book 基于深度學(xué)習(xí)的自然語言處理 (Deep Learning for Natural Language Processing)

1. Text Processing

Data preparation

# Articles by two authors (A, B), labeled 0 and 1
A = 0         # hamilton
B = 1         # madison
UNKNOWN = -1

# Merge all articles by the same author into a single string
# (preprocessing is defined below)
textA, textB = '', ''

import os
for file in os.listdir('./papers/A'):
    textA += preprocessing('./papers/A/' + file)
for file in os.listdir('./papers/B'):
    textB += preprocessing('./papers/B/' + file)
  • Merge each author's documents; strip newlines, redundant spaces, and the authors' names (to prevent data leakage):
def preprocessing(file_path):
    with open(file_path, 'r') as f:
        lines = f.readlines()
    # drop the first line (title), lowercase, and remove the author names
    text = ' '.join(lines[1:]).replace('\n', ' ').lower() \
                              .replace('hamilton', '').replace('madison', '')
    text = ' '.join(text.split())  # collapse runs of whitespace into single spaces
    return text

print("文本A的長度:{}".format(len(textA))) print("文本B的長度:{}".format(len(textB)))文本A的長度:216394 文本B的長度:230867

2. Text Serialization

  • Use a character-level tokenizer (char_level=True):
from keras.preprocessing.text import Tokenizer

char_tokenizer = Tokenizer(char_level=True)
char_tokenizer.fit_on_texts([textA + textB])  # fit the tokenizer on all text

long_seq_a = char_tokenizer.texts_to_sequences([textA])[0]  # text -> id sequence
long_seq_b = char_tokenizer.texts_to_sequences([textB])[0]

Xa, ya = make_subsequence(long_seq_a, A)  # slice into fixed-length subsequence samples
Xb, yb = make_subsequence(long_seq_b, B)  # (make_subsequence is defined below)
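A one-line sanity check of the character mapping (not in the original); the ids come from the word_index printed in the next block, where 't' is 3, 'h' is 10, and 'e' is 2:

print(char_tokenizer.texts_to_sequences(['the']))  # [[3, 10, 2]]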



  • Slice the id sequence into fixed-length subsequence samples:
SEQ_LEN = 30  # length of each subsequence (a hyperparameter)

import numpy as np

def make_subsequence(long_seq, label, seq_len=SEQ_LEN):
    num_subseq = len(long_seq) - seq_len + 1  # number of sliding windows
    X = np.zeros((num_subseq, seq_len))       # data
    y = np.zeros((num_subseq, 1))             # labels
    for i in range(num_subseq):
        X[i] = long_seq[i:i+seq_len]          # window of size seq_len
        y[i] = label
    return X, y

print('Number of distinct characters: {}'.format(len(char_tokenizer.word_index)))  # 52
# {' ': 1, 'e': 2, 't': 3, 'o': 4, 'i': 5, 'n': 6, 'a': 7, 's': 8, 'r': 9, 'h': 10,
#  'l': 11, 'd': 12, 'c': 13, 'u': 14, 'f': 15, 'm': 16, 'p': 17, 'b': 18, 'y': 19, 'w': 20,
#  ',': 21, 'g': 22, 'v': 23, '.': 24, 'x': 25, 'k': 26, 'j': 27, ';': 28, 'q': 29, 'z': 30,
#  '-': 31, '?': 32, '"': 33, '1': 34, ':': 35, '8': 36, '7': 37, '(': 38, ')': 39, '2': 40,
#  '0': 41, '3': 42, '4': 43, '6': 44, "'": 45, '!': 46, ']': 47, '5': 48, '[': 49, '@': 50,
#  '9': 51, '%': 52}

print('Training set A size: {}'.format(Xa.shape))
print('Training set B size: {}'.format(Xb.shape))

Training set A size: (216365, 30)
Training set B size: (230838, 30)
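A toy example of the sliding window (not in the original), using a short made-up sequence:

# Windows of length 3 over [1, 2, 3, 4, 5] give 5 - 3 + 1 = 3 samples
Xd, yd = make_subsequence([1, 2, 3, 4, 5], label=A, seq_len=3)
print(Xd)          # [[1. 2. 3.]
                   #  [2. 3. 4.]
                   #  [3. 4. 5.]]
print(yd.ravel())  # [0. 0. 0.]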

3. Dataset Splitting

  • Mix the A and B datasets:
# Stack the A and B training data together
X = np.vstack((Xa, Xb))
y = np.vstack((ya, yb))
  • Split into training and test sets:
# Train/test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
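train_test_split shuffles by default; for a reproducible split one can also pass a seed (a minor addition, not in the original code):

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)  # fixed seed for reproducibility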

4. Building the RNN Model

from keras.models import Sequential
from keras.layers import SimpleRNN, Dense, Embedding

Embedding_dim = 128  # dimension of the embedding output
RNN_size = 256       # number of RNN units

model = Sequential()
model.add(Embedding(input_dim=len(char_tokenizer.word_index) + 1,
                    output_dim=Embedding_dim,
                    input_length=SEQ_LEN))
model.add(SimpleRNN(units=RNN_size, return_sequences=False))
# return_sequences=False: return only the output of the last time step
model.add(Dense(1, activation='sigmoid'))  # binary classification

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()

Model structure:

Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, 30, 128) 6784 _________________________________________________________________ simple_rnn (SimpleRNN) (None, 256) 98560 _________________________________________________________________ dense (Dense) (None, 1) 257 ================================================================= Total params: 105,601 Trainable params: 105,601 Non-trainable params: 0 _________________________________________________________________

If return_sequences=True, the output shapes of the last two layers change as follows (a time-step dimension is added):

simple_rnn_1 (SimpleRNN)     (None, 30, 256)           98560
_________________________________________________________________
dense_1 (Dense)              (None, 30, 1)             257

5. Training

batch_size = 4096  # samples per gradient-descent step
epochs = 20        # number of training epochs

history = model.fit(X_train, y_train,
                    batch_size=batch_size, epochs=epochs,
                    validation_data=(X_test, y_test),
                    verbose=1)

Epoch 1/20
88/88 [==============================] - 59s 669ms/step - loss: 0.6877 - accuracy: 0.5436 - val_loss: 0.6856 - val_accuracy: 0.5540
Epoch 2/20
88/88 [==============================] - 56s 634ms/step - loss: 0.6830 - accuracy: 0.5564 - val_loss: 0.6844 - val_accuracy: 0.5550
Epoch 3/20
88/88 [==============================] - 56s 633ms/step - loss: 0.6825 - accuracy: 0.5577 - val_loss: 0.6829 - val_accuracy: 0.5563
Epoch 4/20
88/88 [==============================] - 56s 634ms/step - loss: 0.6816 - accuracy: 0.5585 - val_loss: 0.6788 - val_accuracy: 0.5641
Epoch 5/20
88/88 [==============================] - 56s 637ms/step - loss: 0.6714 - accuracy: 0.5813 - val_loss: 0.6670 - val_accuracy: 0.5877
Epoch 6/20
88/88 [==============================] - 56s 637ms/step - loss: 0.6532 - accuracy: 0.6113 - val_loss: 0.6435 - val_accuracy: 0.6235
Epoch 7/20
88/88 [==============================] - 57s 648ms/step - loss: 0.6287 - accuracy: 0.6424 - val_loss: 0.6159 - val_accuracy: 0.6563
Epoch 8/20
88/88 [==============================] - 55s 620ms/step - loss: 0.5932 - accuracy: 0.6807 - val_loss: 0.5747 - val_accuracy: 0.6971
Epoch 9/20
88/88 [==============================] - 54s 615ms/step - loss: 0.5383 - accuracy: 0.7271 - val_loss: 0.5822 - val_accuracy: 0.7178
Epoch 10/20
88/88 [==============================] - 56s 632ms/step - loss: 0.4803 - accuracy: 0.7687 - val_loss: 0.4536 - val_accuracy: 0.7846
Epoch 11/20
88/88 [==============================] - 61s 690ms/step - loss: 0.3979 - accuracy: 0.8190 - val_loss: 0.3940 - val_accuracy: 0.8195
Epoch 12/20
88/88 [==============================] - 60s 687ms/step - loss: 0.3257 - accuracy: 0.8572 - val_loss: 0.3248 - val_accuracy: 0.8564
Epoch 13/20
88/88 [==============================] - 59s 668ms/step - loss: 0.2637 - accuracy: 0.8897 - val_loss: 0.2980 - val_accuracy: 0.8742
Epoch 14/20
88/88 [==============================] - 56s 638ms/step - loss: 0.2154 - accuracy: 0.9115 - val_loss: 0.2326 - val_accuracy: 0.9023
Epoch 15/20
88/88 [==============================] - 56s 639ms/step - loss: 0.1822 - accuracy: 0.9277 - val_loss: 0.2112 - val_accuracy: 0.9130
Epoch 16/20
88/88 [==============================] - 56s 640ms/step - loss: 0.1504 - accuracy: 0.9412 - val_loss: 0.1803 - val_accuracy: 0.9267
Epoch 17/20
88/88 [==============================] - 58s 660ms/step - loss: 0.1298 - accuracy: 0.9499 - val_loss: 0.1662 - val_accuracy: 0.9331
Epoch 18/20
88/88 [==============================] - 57s 643ms/step - loss: 0.1132 - accuracy: 0.9567 - val_loss: 0.1643 - val_accuracy: 0.9358
Epoch 19/20
88/88 [==============================] - 58s 659ms/step - loss: 0.1018 - accuracy: 0.9613 - val_loss: 0.1409 - val_accuracy: 0.9441
Epoch 20/20
88/88 [==============================] - 57s 642ms/step - loss: 0.0907 - accuracy: 0.9659 - val_loss: 0.1325 - val_accuracy: 0.9475
  • Plot the training curves:
import pandas as pd
import matplotlib.pyplot as plt

pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)  # set the vertical range to [0, 1]
plt.show()
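Although the original stops at the plot, a single evaluate call would confirm the held-out numbers; since validation_data was this same split, it should roughly match the last epoch's val_loss of 0.1325 and val_accuracy of 0.9475:

loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('test loss: {:.4f}, test accuracy: {:.4f}'.format(loss, acc))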

6. Testing

# Testing
for file in os.listdir('./papers/Unknown'):
    # preprocess the test text
    unk_file = preprocessing('./papers/Unknown/' + file)
    # text -> id sequence
    unk_file_seq = char_tokenizer.texts_to_sequences([unk_file])[0]
    # slice into fixed-length subsequences, one sample per window
    X_unk, _ = make_subsequence(unk_file_seq, UNKNOWN)
    # predict each window, then take a majority vote over the document
    y_pred = model.predict(X_unk)
    y_pred = y_pred > 0.5
    votesA = np.sum(y_pred == 0)
    votesB = np.sum(y_pred == 1)
    print("Paper {} is predicted to be written by {}, votes {} : {}".format(
        file,
        "A: hamilton" if votesA > votesB else "B: madison",
        max(votesA, votesB),
        min(votesA, votesB)))

Output: the authors of all five papers were predicted correctly.

Paper paper_1.txt is predicted to be written by B: madison, votes 12211 : 8563
Paper paper_2.txt is predicted to be written by B: madison, votes 10899 : 8747
Paper paper_3.txt is predicted to be written by A: hamilton, votes 7041 : 6343
Paper paper_4.txt is predicted to be written by A: hamilton, votes 5063 : 4710
Paper paper_5.txt is predicted to be written by A: hamilton, votes 6878 : 4876
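Even though a single 30-character window is classified at only ~95% validation accuracy, aggregating thousands of windows per document makes the majority vote far more decisive. A simple confidence proxy (a hypothetical addition, not in the original) is the winning vote fraction:

# e.g. inside the loop above: fraction of windows won by the majority class
confidence = max(votesA, votesB) / (votesA + votesB)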
