[TensorFlow] Action Recognition Based on a Video Time-Series LSTM


Introduction
This article uses an LSTM to perform human activity recognition. Dataset source: https://archive.ics.uci.edu/ml/machine-learning-databases/00240/

The dataset contains six activity classes (listed here in the dataset's label order, 1-6):

walking (WALKING);
walking upstairs (WALKING_UPSTAIRS);
walking downstairs (WALKING_DOWNSTAIRS);
sitting (SITTING);
standing (STANDING);
lying down (LAYING);

All six activities were recorded with smartphone inertial sensors (accelerometer and gyroscope). The raw signal files live under:

.\data\UCI HAR Dataset\train\Inertial Signals
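For readers setting up from scratch, a minimal download-and-extract sketch is shown below. The archive name 'UCI HAR Dataset.zip' under the URL above is an assumption; verify it against the index page if the download fails.

import os
import urllib.request
import zipfile

# Hypothetical archive name -- check the dataset index page if this 404s.
URL = ('https://archive.ics.uci.edu/ml/machine-learning-databases/'
       '00240/UCI%20HAR%20Dataset.zip')
os.makedirs('data', exist_ok=True)
archive = os.path.join('data', 'UCI_HAR_Dataset.zip')
if not os.path.exists(archive):
    urllib.request.urlretrieve(URL, archive)
with zipfile.ZipFile(archive) as zf:
    zf.extractall('data')   # creates data/UCI HAR Dataset/
print(os.listdir('data/UCI HAR Dataset'))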


Implementation
This experiment implements a six-class classification task. Install TensorFlow 1.13.1 first (the script relies on the TF 1.x contrib API):

pip install -i https://pypi.douban.com/simple/ --trusted-host pypi.douban.com tensorflow==1.13.1
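A quick, optional sanity check that the pinned version is the one actually imported:

import tensorflow as tf

print(tf.__version__)               # expect: 1.13.1
print(tf.test.is_gpu_available())   # False on a CPU-only build, as in the run below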


import tensorflow as tf
import numpy as np

# Model quality is mostly determined by the data: the data sets the upper
# bound, and the model can only approach it. The inputs here are raw
# recorder (inertial sensor) readings.


def load_X(X_signals_paths):
    """Load the 9 signal files and stack them into (samples, steps, signals)."""
    X_signals = []
    for signal_type_path in X_signals_paths:
        file = open(signal_type_path, 'r')
        # collapse the double spaces used as padding in the raw files
        X_signals.append(
            [np.array(serie, dtype=np.float32) for serie in
             [row.replace('  ', ' ').strip().split(' ') for row in file]])
        file.close()
    return np.transpose(np.array(X_signals), (1, 2, 0))


def load_y(y_path):
    file = open(y_path, 'r')
    y_ = np.array(
        [elem for elem in
         [row.replace('  ', ' ').strip().split(' ') for row in file]],
        dtype=np.int32)
    file.close()
    return y_ - 1  # shift labels 1..6 down to 0..5


class Config(object):
    def __init__(self, X_train, X_test):
        self.train_count = len(X_train)        # number of training records
        self.test_data_count = len(X_test)
        self.n_steps = len(X_train[0])         # 128 time steps per window
        self.learning_rate = 0.0025
        self.lambda_loss_amount = 0.0015       # L2 regularisation strength
        self.training_epochs = 300
        self.batch_size = 1500
        self.n_inputs = len(X_train[0][0])     # 9 signals collected per step
        self.n_hidden = 32                     # hidden units
        self.n_classes = 6                     # output classes
        self.W = {
            'hidden': tf.Variable(tf.random_normal([self.n_inputs, self.n_hidden])),   # input -> hidden
            'output': tf.Variable(tf.random_normal([self.n_hidden, self.n_classes]))}  # hidden -> output
        self.biases = {
            'hidden': tf.Variable(tf.random_normal([self.n_hidden], mean=1.0)),
            'output': tf.Variable(tf.random_normal([self.n_classes]))}


# Build the LSTM network
def LSTM_Network(_X, config):
    # Rearrange the input into the layout static_rnn expects
    _X = tf.transpose(_X, [1, 0, 2])  # swap the batch and time dimensions
    _X = tf.reshape(_X, [-1, config.n_inputs])
    _X = tf.nn.relu(tf.matmul(_X, config.W['hidden']) + config.biases['hidden'])  # project 9 inputs to 32 units
    _X = tf.split(_X, config.n_steps, 0)  # one tensor per time step, for the RNN

    # Two LSTM layers stacked on top of each other
    lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(config.n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(config.n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
    # outputs: the per-step results; states: the final cell states
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)
    print(np.array(outputs).shape)  # 128 per-step outputs
    lstm_last_output = outputs[-1]  # keep only the last step's output
    return tf.matmul(lstm_last_output, config.W['output']) + config.biases['output']  # class logits


def one_hot(y_):
    y_ = y_.reshape(len(y_))
    n_values = int(np.max(y_)) + 1
    return np.eye(n_values)[np.array(y_, dtype=np.int32)]


if __name__ == '__main__':
    # The nine input signals, i.e. the filename prefixes of the 9 files
    INPUT_SIGNAL_TYPES = [
        'body_acc_x_', 'body_acc_y_', 'body_acc_z_',
        'body_gyro_x_', 'body_gyro_y_', 'body_gyro_z_',
        'total_acc_x_', 'total_acc_y_', 'total_acc_z_']

    # The six activity labels, in the dataset's label order (1..6)
    LABELS = [
        'WALKING', 'WALKING_UPSTAIRS', 'WALKING_DOWNSTAIRS',
        'SITTING', 'STANDING', 'LAYING']

    # Data paths
    DATA_PATH = 'data/'
    DATASET_PATH = DATA_PATH + 'UCI HAR Dataset/'
    print('\n' + 'Dataset is now located at:' + DATASET_PATH)
    TRAIN = 'train/'
    TEST = 'test/'
    X_train_signals_paths = [DATASET_PATH + TRAIN + 'Inertial Signals/' + signal + 'train.txt'
                             for signal in INPUT_SIGNAL_TYPES]
    X_test_signals_paths = [DATASET_PATH + TEST + 'Inertial Signals/' + signal + 'test.txt'
                            for signal in INPUT_SIGNAL_TYPES]
    X_train = load_X(X_train_signals_paths)
    X_test = load_X(X_test_signals_paths)
    print('X_train:', X_train.shape)  # 7352 windows, 128 steps each, 9 signals per step
    print('X_test:', X_test.shape)

    y_train_path = DATASET_PATH + TRAIN + 'y_train.txt'
    y_test_path = DATASET_PATH + TEST + 'y_test.txt'
    y_train = one_hot(load_y(y_train_path))
    y_test = one_hot(load_y(y_test_path))
    print('y_train:', y_train.shape)  # 7352 windows, 6 classes
    print('y_test:', y_test.shape)

    config = Config(X_train, X_test)
    print("Some useful info to get an insight on dataset's shape and normalisation:")
    print("features shape, labels shape, each features mean, each features standard deviation")
    print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
    print('the dataset is therefore properly normalised, as expected.')

    X = tf.placeholder(tf.float32, [None, config.n_steps, config.n_inputs])
    Y = tf.placeholder(tf.float32, [None, config.n_classes])
    pred_Y = LSTM_Network(X, config)  # prediction logits

    # L2 penalty over all trainable variables (tf.trainable_variables()
    # lists only the variables marked trainable)
    l2 = config.lambda_loss_amount * \
        sum(tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables())
    # loss = cross-entropy + L2 penalty
    cost = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=pred_Y)) + l2
    optimizer = tf.train.AdamOptimizer(learning_rate=config.learning_rate).minimize(cost)
    correct_pred = tf.equal(tf.argmax(pred_Y, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, dtype=tf.float32))

    # tf.InteractiveSession(): the session can be built first and operations
    #   defined afterwards; tf.Session() requires all operations to be
    #   defined before the session is built.
    # tf.ConfigProto(): reports which device (CPU/GPU) each op is placed on;
    #   log_device_placement=False suppresses that per-op device log.
    sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=False))
    init = tf.global_variables_initializer()
    sess.run(init)

    best_accuracy = 0.0
    for i in range(config.training_epochs):
        # zip() pairs batch start and end indices into (start, end) tuples
        for start, end in zip(range(0, config.train_count, config.batch_size),
                              range(config.batch_size, config.train_count + 1, config.batch_size)):
            sess.run(optimizer, feed_dict={X: X_train[start:end],
                                           Y: y_train[start:end]})
        # the per-epoch metrics below could also be visualised
        pred_out, accuracy_out, loss_out = sess.run(
            [pred_Y, accuracy, cost], feed_dict={X: X_test, Y: y_test})
        print('training iter: {},'.format(i) +
              'test accuracy: {},'.format(accuracy_out) +
              'loss:{}'.format(loss_out))
        best_accuracy = max(best_accuracy, accuracy_out)

    print('')
    print('final test accuracy: {}'.format(accuracy_out))
    print("best epoch's test accuracy: {}".format(best_accuracy))
    print('')
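As an aside, the same architecture (a per-step ReLU projection to 32 units, two stacked 32-unit LSTMs, and a 6-way linear output) can be written far more compactly with tf.keras on TensorFlow 2.x. The sketch below is an illustrative equivalent of the model above, not the original script; the run output in the next section comes from the TF 1.x code.

import tensorflow as tf

# Illustrative tf.keras equivalent of LSTM_Network (TensorFlow 2.x).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 9)),                   # 128 steps, 9 signals
    tf.keras.layers.Dense(32, activation='relu'),     # per-step projection, like W['hidden']
    tf.keras.layers.LSTM(32, return_sequences=True),  # first stacked LSTM
    tf.keras.layers.LSTM(32),                         # second LSTM, keeps only the last output
    tf.keras.layers.Dense(6),                         # logits for the 6 classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(2.5e-3),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# model.fit(X_train, y_train, batch_size=1500, epochs=300,
#           validation_data=(X_test, y_test))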



Run output:

Dataset is now located at:data/UCI HAR Dataset/
X_train: (7352, 128, 9)
X_test: (2947, 128, 9)
y_train: (7352, 6)
y_test: (2947, 6)
Some useful info to get an insight on dataset's shape and normalisation:
features shape, labels shape, each features mean, each features standard deviation
(2947, 128, 9) (2947, 6) 0.09913992 0.39567086
the dataset is therefore properly normalised, as expected.
WARNING:tensorflow:From D:/WorkSpace/ai/csdn/lab-lstm-activity-recognition/lstm.py:58: BasicLSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is deprecated, please use tf.nn.rnn_cell.LSTMCell, which supports all the feature this cell currently has. Please replace the existing code with tf.nn.rnn_cell.LSTMCell(name='basic_lstm_cell').
(128,)
WARNING:tensorflow:From D:/WorkSpace/ai/csdn/lab-lstm-activity-recognition/lstm.py:141: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.
See `tf.nn.softmax_cross_entropy_with_logits_v2`.
2019-12-16 19:45:10.908801: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
training iter: 0,test accuracy: 0.4801492989063263,loss:1.9456521272659302

final test accuracy: 0.4801492989063263
best epoch's test accuracy: 0.4801492989063263

training iter: 1,test accuracy: 0.5334238409996033,loss:1.6313532590866089

final test accuracy: 0.5334238409996033
best epoch's test accuracy: 0.5334238409996033

training iter: 2,test accuracy: 0.6128265857696533,loss:1.4844205379486084

final test accuracy: 0.6128265857696533
best epoch's test accuracy: 0.6128265857696533
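The two deprecation warnings in the log suggest their own drop-in replacements (still the TF 1.13 graph API). Applying them in LSTM_Network and in the cost definition silences the warnings without changing behaviour:

# inside LSTM_Network, as suggested by the first warning:
lstm_cell_1 = tf.nn.rnn_cell.LSTMCell(config.n_hidden, forget_bias=1.0, state_is_tuple=True)
lstm_cell_2 = tf.nn.rnn_cell.LSTMCell(config.n_hidden, forget_bias=1.0, state_is_tuple=True)

# in the main block, as suggested by the second warning:
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=pred_Y)) + l2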


