
Calling a Keras/Theano model from Java: multi-label text classification with Keras / Theano and an LSTM


I am trying to run multi-label text classification with an LSTM using Keras / Theano.

I have a text/label CSV. The text is plain text, and the labels are numeric, 9 classes in total, ranging from 1 to 9.

I think I have not configured the model correctly for this problem. My code so far:

import keras.preprocessing.text
import numpy as np
# Using Theano backend.
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
import pandas

data = pandas.read_csv("for_keras_text_label.csv", sep = ',', quotechar = '"', header = 0)
x = data['text']
y = data['label']
x = x.iloc[:].values
y = y.iloc[:].values

tk = keras.preprocessing.text.Tokenizer(nb_words=2000, filters=keras.preprocessing.text.base_filter(), lower=True, split=" ")
tk.fit_on_texts(x)
x = tk.texts_to_sequences(x)

max_len = 80
print "max_len ", max_len
print('Pad sequences (samples x time)')
x = sequence.pad_sequences(x, maxlen=max_len)

# the model
max_features = 20000
model = Sequential()
model.add(Embedding(max_features, 128, input_length=max_len, dropout=0.2))
model.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(9))
model.add(Activation('softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=["accuracy"])

# run
model.fit(x, y=y, batch_size=200, nb_epoch=1, verbose=1, validation_split=0.2, shuffle=True)
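A quick check of what actually goes into `fit` already hints at the problem. This is only a sketch, assuming `x` and `y` are exactly the arrays built above:

# Sketch: inspect the padded input and the label range before training
print "x shape ", x.shape          # expected (n_samples, 80) after pad_sequences
print "labels ", np.unique(y)      # with labels from 1 to 9, the maximum value is 9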

I get this error:

IndexError: index 9 is out of bounds for axis 1 with size 9
Apply node that caused the error:
AdvancedIncSubtensor{inplace=False, set_instead_of_inc=True}(Alloc.0, TensorConstant{1}, ARange{dtype='int64'}.0, Elemwise{Cast{int32}}.0)
Toposort index: 213
Inputs types: [TensorType(float32, matrix), TensorType(int8, scalar), TensorType(int64, vector), TensorType(int32, vector)]
Inputs shapes: [(200, 9), (), (200,), (200,)]
Inputs strides: [(36, 4), (), (8,), (4,)]
Inputs values: ['not shown', array(1, dtype=int8), 'not shown', 'not shown']
Outputs clients: [[Reshape{2}(AdvancedIncSubtensor{inplace=False, set_instead_of_inc=True}.0, MakeVector{dtype='int64'}.0)]]

Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
  File "/home/ubuntu/anaconda3/envs/theano/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2827, in run_ast_nodes
    if self.run_code(code, result):
  File "/home/ubuntu/anaconda3/envs/theano/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 7, in
    model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=["accuracy"])
  File "/home/ubuntu/anaconda3/envs/theano/lib/python2.7/site-packages/keras/models.py", line 578, in compile
    **kwargs)
  File "/home/ubuntu/anaconda3/envs/theano/lib/python2.7/site-packages/keras/engine/training.py", line 604, in compile
    sample_weight, mask)
  File "/home/ubuntu/anaconda3/envs/theano/lib/python2.7/site-packages/keras/engine/training.py", line 303, in weighted
    score_array = fn(y_true, y_pred)
  File "/home/ubuntu/anaconda3/envs/theano/lib/python2.7/site-packages/keras/objectives.py", line 45, in sparse_categorical_crossentropy
    return K.sparse_categorical_crossentropy(y_pred, y_true)
  File "/home/ubuntu/anaconda3/envs/theano/lib/python2.7/site-packages/keras/backend/theano_backend.py", line 1079, in sparse_categorical_crossentropy
    target = T.extra_ops.to_one_hot(target, nb_class=output.shape[-1])
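The last frame is the important one: `K.sparse_categorical_crossentropy` one-hot encodes the targets with `nb_class = output.shape[-1]`, which is 9 for the `Dense(9)` softmax, so valid labels are 0 to 8, and a label value of 9 produces exactly the "index 9 is out of bounds for axis 1 with size 9" error, since the CSV labels run from 1 to 9. A minimal sketch of the likely fix, assuming `y` really holds the integer labels 1 to 9 as described above:

# Sketch of the likely fix, assuming y holds the integer labels 1..9 from the CSV.
# Option 1: shift the labels to 0..8 so they index the 9 softmax outputs directly.
y_zero_based = y.astype('int32') - 1
model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=["accuracy"])
model.fit(x, y=y_zero_based, batch_size=200, nb_epoch=1, verbose=1, validation_split=0.2, shuffle=True)

# Option 2: one-hot encode the shifted labels and train with categorical_crossentropy instead.
from keras.utils.np_utils import to_categorical
y_onehot = to_categorical(y.astype('int32') - 1, 9)   # shape (n_samples, 9)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=["accuracy"])
model.fit(x, y=y_onehot, batch_size=200, nb_epoch=1, verbose=1, validation_split=0.2, shuffle=True)

Either way, the model definition itself stays unchanged; the only difference is that the targets handed to `fit` now match the 9-way softmax output.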
