

Running the MNIST dataset in Caffe

Published: 2025/4/16

1. First, note the Caffe installation directory:

CAFFE_ROOT='/home/lxc/caffe/'
2. Run the script that downloads the dataset:

cd $CAFFE_ROOT
./data/mnist/get_mnist.sh
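What the download script roughly does can be sketched as follows. This is not the verbatim contents of data/mnist/get_mnist.sh; it is shown as a dry run that only prints each command, so it can be inspected without network access (remove the echos to actually download):

```shell
# Dry-run sketch of the MNIST download step (assumed behaviour of
# data/mnist/get_mnist.sh): fetch and unpack the four IDX files.
DATA_DIR=data/mnist
for fname in train-images-idx3-ubyte train-labels-idx1-ubyte \
             t10k-images-idx3-ubyte t10k-labels-idx1-ubyte; do
  echo "wget http://yann.lecun.com/exdb/mnist/${fname}.gz -P $DATA_DIR"
  echo "gunzip $DATA_DIR/${fname}.gz"
done
```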
3. Convert the dataset into a format Caffe can read (LMDB):

./examples/mnist/create_mnist.sh
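The conversion script parses the raw IDX files downloaded in step 2. Each IDX file starts with a big-endian magic number (2051 for image files, 2049 for label files) followed by the item count. A small self-contained sketch of reading that header with od (using a fake header built here, not the real files):

```shell
# Build a minimal fake IDX3 image-file header: magic 2051 (0x00000803)
# followed by an item count of 60000 (0x0000EA60), big-endian.
printf '\000\000\010\003' >  /tmp/fake-idx3-header
printf '\000\000\352\140' >> /tmp/fake-idx3-header

# Decode the two big-endian 32-bit fields byte by byte with od + awk.
magic=$(od -An -N4 -tu1 /tmp/fake-idx3-header \
        | awk '{print $1*16777216 + $2*65536 + $3*256 + $4}')
count=$(od -An -j4 -N4 -tu1 /tmp/fake-idx3-header \
        | awk '{print $1*16777216 + $2*65536 + $3*256 + $4}')
echo "magic=$magic count=$count"   # magic=2051 count=60000
```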
4. Train on the dataset to obtain the trained model:

./examples/mnist/train_lenet.sh
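The behaviour visible in the training output below (a test pass every 500 iterations, loss printed every 100, a learning rate that decays, a snapshot at iteration 10000) is controlled by the solver definition that train_lenet.sh passes to the caffe binary. The stock examples/mnist/lenet_solver.prototxt looks roughly like this (values recalled from the standard example, not copied from this run; verify against your checkout):

```
# excerpt of examples/mnist/lenet_solver.prototxt (illustrative)
net: "examples/mnist/lenet_train_test.prototxt"
test_iter: 100
test_interval: 500
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
lr_policy: "inv"
gamma: 0.0001
power: 0.75
display: 100
max_iter: 10000
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: GPU
```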

Output produced during training:

407563 (* 1 = 0.00407563 loss)
I0314 09:58:05.226999 25029 sgd_solver.cpp:106] Iteration 7400, lr = 0.00660067
I0314 09:58:06.644390 25029 solver.cpp:337] Iteration 7500, Testing net (#0)
I0314 09:58:07.555140 25029 solver.cpp:404] Test net output #0: accuracy = 0.9909
I0314 09:58:07.555181 25029 solver.cpp:404] Test net output #1: loss = 0.0300942 (* 1 = 0.0300942 loss)
I0314 09:58:07.563323 25029 solver.cpp:228] Iteration 7500, loss = 0.00139727
I0314 09:58:07.563347 25029 solver.cpp:244] Train net output #0: loss = 0.00139717 (* 1 = 0.00139717 loss)
I0314 09:58:07.563361 25029 sgd_solver.cpp:106] Iteration 7500, lr = 0.00657236
I0314 09:58:09.002777 25029 solver.cpp:228] Iteration 7600, loss = 0.00737701
I0314 09:58:09.002853 25029 solver.cpp:244] Train net output #0: loss = 0.0073769 (* 1 = 0.0073769 loss)
I0314 09:58:09.002867 25029 sgd_solver.cpp:106] Iteration 7600, lr = 0.00654433
I0314 09:58:10.444634 25029 solver.cpp:228] Iteration 7700, loss = 0.0348312
I0314 09:58:10.444681 25029 solver.cpp:244] Train net output #0: loss = 0.0348311 (* 1 = 0.0348311 loss)
I0314 09:58:10.444694 25029 sgd_solver.cpp:106] Iteration 7700, lr = 0.00651658
I0314 09:58:11.927762 25029 solver.cpp:228] Iteration 7800, loss = 0.00242758
I0314 09:58:11.927819 25029 solver.cpp:244] Train net output #0: loss = 0.00242747 (* 1 = 0.00242747 loss)
I0314 09:58:11.927831 25029 sgd_solver.cpp:106] Iteration 7800, lr = 0.00648911
I0314 09:58:13.402936 25029 solver.cpp:228] Iteration 7900, loss = 0.00951368
I0314 09:58:13.402987 25029 solver.cpp:244] Train net output #0: loss = 0.00951358 (* 1 = 0.00951358 loss)
I0314 09:58:13.403002 25029 sgd_solver.cpp:106] Iteration 7900, lr = 0.0064619
I0314 09:58:14.867146 25029 solver.cpp:337] Iteration 8000, Testing net (#0)
I0314 09:58:15.800400 25029 solver.cpp:404] Test net output #0: accuracy = 0.9904
I0314 09:58:15.800444 25029 solver.cpp:404] Test net output #1: loss = 0.0276247 (* 1 = 0.0276247 loss)
I0314 09:58:15.808650 25029 solver.cpp:228] Iteration 8000, loss = 0.00521269
I0314 09:58:15.808675 25029 solver.cpp:244] Train net output #0: loss = 0.00521257 (* 1 = 0.00521257 loss)
I0314 09:58:15.808689 25029 sgd_solver.cpp:106] Iteration 8000, lr = 0.00643496
I0314 09:58:17.274581 25029 solver.cpp:228] Iteration 8100, loss = 0.00926444
I0314 09:58:17.274636 25029 solver.cpp:244] Train net output #0: loss = 0.00926433 (* 1 = 0.00926433 loss)
I0314 09:58:17.274651 25029 sgd_solver.cpp:106] Iteration 8100, lr = 0.00640827
I0314 09:58:18.732739 25029 solver.cpp:228] Iteration 8200, loss = 0.00703852
I0314 09:58:18.732786 25029 solver.cpp:244] Train net output #0: loss = 0.00703842 (* 1 = 0.00703842 loss)
I0314 09:58:18.732800 25029 sgd_solver.cpp:106] Iteration 8200, lr = 0.00638185
I0314 09:58:20.189698 25029 solver.cpp:228] Iteration 8300, loss = 0.0678537
I0314 09:58:20.189746 25029 solver.cpp:244] Train net output #0: loss = 0.0678536 (* 1 = 0.0678536 loss)
I0314 09:58:20.189759 25029 sgd_solver.cpp:106] Iteration 8300, lr = 0.00635567
I0314 09:58:21.628165 25029 solver.cpp:228] Iteration 8400, loss = 0.00610364
I0314 09:58:21.628206 25029 solver.cpp:244] Train net output #0: loss = 0.00610354 (* 1 = 0.00610354 loss)
I0314 09:58:21.628218 25029 sgd_solver.cpp:106] Iteration 8400, lr = 0.00632975
I0314 09:58:23.054644 25029 solver.cpp:337] Iteration 8500, Testing net (#0)
I0314 09:58:23.975236 25029 solver.cpp:404] Test net output #0: accuracy = 0.9913
I0314 09:58:23.975289 25029 solver.cpp:404] Test net output #1: loss = 0.0276445 (* 1 = 0.0276445 loss)
I0314 09:58:23.983639 25029 solver.cpp:228] Iteration 8500, loss = 0.00722562
I0314 09:58:23.983686 25029 solver.cpp:244] Train net output #0: loss = 0.00722551 (* 1 = 0.00722551 loss)
I0314 09:58:23.983702 25029 sgd_solver.cpp:106] Iteration 8500, lr = 0.00630407
I0314 09:58:25.423691 25029 solver.cpp:228] Iteration 8600, loss = 0.000844742
I0314 09:58:25.423733 25029 solver.cpp:244] Train net output #0: loss = 0.000844626 (* 1 = 0.000844626 loss)
I0314 09:58:25.423746 25029 sgd_solver.cpp:106] Iteration 8600, lr = 0.00627864
I0314 09:58:26.858505 25029 solver.cpp:228] Iteration 8700, loss = 0.00262191
I0314 09:58:26.858546 25029 solver.cpp:244] Train net output #0: loss = 0.00262179 (* 1 = 0.00262179 loss)
I0314 09:58:26.858559 25029 sgd_solver.cpp:106] Iteration 8700, lr = 0.00625344
I0314 09:58:28.296435 25029 solver.cpp:228] Iteration 8800, loss = 0.00161585
I0314 09:58:28.296476 25029 solver.cpp:244] Train net output #0: loss = 0.00161573 (* 1 = 0.00161573 loss)
I0314 09:58:28.296489 25029 sgd_solver.cpp:106] Iteration 8800, lr = 0.00622847
I0314 09:58:29.741562 25029 solver.cpp:228] Iteration 8900, loss = 0.000348777
I0314 09:58:29.741605 25029 solver.cpp:244] Train net output #0: loss = 0.000348661 (* 1 = 0.000348661 loss)
I0314 09:58:29.741631 25029 sgd_solver.cpp:106] Iteration 8900, lr = 0.00620374
I0314 09:58:31.165171 25029 solver.cpp:337] Iteration 9000, Testing net (#0)
I0314 09:58:32.077903 25029 solver.cpp:404] Test net output #0: accuracy = 0.9909
I0314 09:58:32.077941 25029 solver.cpp:404] Test net output #1: loss = 0.0273136 (* 1 = 0.0273136 loss)
I0314 09:58:32.086107 25029 solver.cpp:228] Iteration 9000, loss = 0.0154975
I0314 09:58:32.086132 25029 solver.cpp:244] Train net output #0: loss = 0.0154974 (* 1 = 0.0154974 loss)
I0314 09:58:32.086145 25029 sgd_solver.cpp:106] Iteration 9000, lr = 0.00617924
I0314 09:58:33.524173 25029 solver.cpp:228] Iteration 9100, loss = 0.00757405
I0314 09:58:33.524216 25029 solver.cpp:244] Train net output #0: loss = 0.00757394 (* 1 = 0.00757394 loss)
I0314 09:58:33.524230 25029 sgd_solver.cpp:106] Iteration 9100, lr = 0.00615496
I0314 09:58:34.966588 25029 solver.cpp:228] Iteration 9200, loss = 0.00248411
I0314 09:58:34.966630 25029 solver.cpp:244] Train net output #0: loss = 0.00248399 (* 1 = 0.00248399 loss)
I0314 09:58:34.966644 25029 sgd_solver.cpp:106] Iteration 9200, lr = 0.0061309
I0314 09:58:36.405937 25029 solver.cpp:228] Iteration 9300, loss = 0.00742113
I0314 09:58:36.405982 25029 solver.cpp:244] Train net output #0: loss = 0.007421 (* 1 = 0.007421 loss)
I0314 09:58:36.405995 25029 sgd_solver.cpp:106] Iteration 9300, lr = 0.00610706
I0314 09:58:37.850505 25029 solver.cpp:228] Iteration 9400, loss = 0.0306143
I0314 09:58:37.850548 25029 solver.cpp:244] Train net output #0: loss = 0.0306142 (* 1 = 0.0306142 loss)
I0314 09:58:37.850560 25029 sgd_solver.cpp:106] Iteration 9400, lr = 0.00608343
I0314 09:58:39.285089 25029 solver.cpp:337] Iteration 9500, Testing net (#0)
I0314 09:58:40.197036 25029 solver.cpp:404] Test net output #0: accuracy = 0.9902
I0314 09:58:40.197090 25029 solver.cpp:404] Test net output #1: loss = 0.0315135 (* 1 = 0.0315135 loss)
I0314 09:58:40.205340 25029 solver.cpp:228] Iteration 9500, loss = 0.0047698
I0314 09:58:40.205364 25029 solver.cpp:244] Train net output #0: loss = 0.00476967 (* 1 = 0.00476967 loss)
I0314 09:58:40.205379 25029 sgd_solver.cpp:106] Iteration 9500, lr = 0.00606002
I0314 09:58:41.670626 25029 solver.cpp:228] Iteration 9600, loss = 0.00133942
I0314 09:58:41.670670 25029 solver.cpp:244] Train net output #0: loss = 0.00133929 (* 1 = 0.00133929 loss)
I0314 09:58:41.670685 25029 sgd_solver.cpp:106] Iteration 9600, lr = 0.00603682
I0314 09:58:43.134160 25029 solver.cpp:228] Iteration 9700, loss = 0.0014465
I0314 09:58:43.134203 25029 solver.cpp:244] Train net output #0: loss = 0.00144637 (* 1 = 0.00144637 loss)
I0314 09:58:43.134217 25029 sgd_solver.cpp:106] Iteration 9700, lr = 0.00601382
I0314 09:58:44.587978 25029 solver.cpp:228] Iteration 9800, loss = 0.0134798
I0314 09:58:44.588019 25029 solver.cpp:244] Train net output #0: loss = 0.0134797 (* 1 = 0.0134797 loss)
I0314 09:58:44.588033 25029 sgd_solver.cpp:106] Iteration 9800, lr = 0.00599102
I0314 09:58:46.046571 25029 solver.cpp:228] Iteration 9900, loss = 0.00367283
I0314 09:58:46.046613 25029 solver.cpp:244] Train net output #0: loss = 0.0036727 (* 1 = 0.0036727 loss)
I0314 09:58:46.046627 25029 sgd_solver.cpp:106] Iteration 9900, lr = 0.00596843
I0314 09:58:47.491849 25029 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0314 09:58:47.502293 25029 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I0314 09:58:47.510488 25029 solver.cpp:317] Iteration 10000, loss = 0.00222019
I0314 09:58:47.510517 25029 solver.cpp:337] Iteration 10000, Testing net (#0)
I0314 09:58:48.453008 25029 solver.cpp:404] Test net output #0: accuracy = 0.9917
I0314 09:58:48.453052 25029 solver.cpp:404] Test net output #1: loss = 0.0266635 (* 1 = 0.0266635 loss)
I0314 09:58:48.453065 25029 solver.cpp:322] Optimization Done.
I0314 09:58:48.453073 25029 caffe.cpp:222] Optimization Done.
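The decaying lr values in the log follow Caffe's "inv" learning-rate policy, lr = base_lr * (1 + gamma * iter)^(-power). Plugging in the stock MNIST solver settings (base_lr=0.01, gamma=0.0001, power=0.75; these are assumptions taken from the standard lenet_solver.prototxt, not from this run) reproduces the logged value at iteration 8000:

```shell
# Compute the "inv" policy learning rate at iteration 8000.
# lr = 0.01 * (1 + 0.0001 * 8000)^(-0.75) ~= 0.00643496, matching the log.
lr=$(awk 'BEGIN{ printf "%.8f", 0.01 * (1 + 0.0001 * 8000)^(-0.75) }')
echo "$lr"
```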

Training ultimately produces two snapshot files, both named in the log: the trained weights (lenet_iter_10000.caffemodel) and the solver state (lenet_iter_10000.solverstate).
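The saved weights can later be evaluated with the caffe binary's test mode. A sketch (paths assume a standard Caffe checkout; shown as a dry run that only prints the command, since it needs a built Caffe tree to execute):

```shell
# Build the evaluation command for the snapshot; drop the echo to run it.
CMD="./build/tools/caffe test \
  --model=examples/mnist/lenet_train_test.prototxt \
  --weights=examples/mnist/lenet_iter_10000.caffemodel \
  --iterations=100"
echo "$CMD"
```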

Summary

The above is the complete procedure for running the MNIST dataset in Caffe; hopefully it helps you solve any problems you run into.
