
FCN Caffe: Visualizing featureMaps and Weights (C++), and Obtaining FCN Results


Why not use the C++ version of FCN to obtain the final set of segmentation-mask pixel blocks, instead of resorting to Python? To do that, we need to fetch all the featureMaps of the network's last layer: those featureMaps correspond directly to the final segmentation result and can be used as-is for mask analysis.

The Caffe source already ships code for extracting intermediate-layer featureMaps, located at tools/extract_features.cpp. Reference article: caffe模型可视化featureMaps和Weights(C++). That article has been modified heavily here; if anything reads oddly, please consult the original. FCN: Fully Convolutional Networks.


1. Code for visualizing the feature maps of the last layer (slightly modified):

// Visualize the feature maps of a named blob: run one forward pass, then
// normalize every channel globally and binarize it into an 8-bit image.
int Classifier::visualize_featuremap(const cv::Mat& img, string layer_name,
                                     std::vector<cv::Mat>& Maps) {
    Maps.resize(0);
    Blob<float>* input_layer = net_->input_blobs()[0];
    input_layer->Reshape(1, num_channels_,
                         input_geometry_.height, input_geometry_.width);
    net_->Reshape();

    std::vector<cv::Mat> input_channels;
    WrapInputLayer(&input_channels);
    Preprocess(img, &input_channels);
    net_->Forward();

    // List every blob in the network so the caller can verify layer_name.
    std::cout << "Blobs in the network:\n";
    vector<shared_ptr<Blob<float> > > blobs = net_->blobs();
    vector<string> blob_names = net_->blob_names();
    std::cout << blobs.size() << " " << blob_names.size() << std::endl;
    for (int i = 0; i < blobs.size(); i++)
        std::cout << blob_names[i] << " " << blobs[i]->shape_string() << std::endl;
    std::cout << std::endl;

    assert(net_->has_blob(layer_name));
    shared_ptr<Blob<float> > conv1Blob = net_->blob_by_name(layer_name);
    std::cout << "Shape of the feature maps for the test image: "
              << conv1Blob->shape_string() << std::endl;

    // Global min/max over the whole blob, used to normalize all channels.
    float maxValue = -10000000, minValue = 10000000;
    const float* tmpValue = conv1Blob->cpu_data();
    for (int i = 0; i < conv1Blob->count(); i++) {
        maxValue = std::max(maxValue, tmpValue[i]);
        minValue = std::min(minValue, tmpValue[i]);
    }

    int width   = conv1Blob->shape(3);  // feature-map width
    int height  = conv1Blob->shape(2);  // feature-map height
    int channel = conv1Blob->shape(1);  // number of channels
    int num     = conv1Blob->shape(0);  // batch size
    int imgHeight = (int)(1 + sqrt(channel)) * height;
    int imgWidth  = (int)(1 + sqrt(channel)) * width;
    int kk = 0;
    for (int x = 0; x < imgHeight; x += height) {
        for (int y = 0; y < imgWidth; y += width) {
            if (kk >= channel) continue;
            cv::Mat roi(height, width, CV_8UC1);
            for (int i = 0; i < height; i++) {
                for (int j = 0; j < width; j++) {
                    // data_at() is slow; a bulk copy would be faster.
                    float value = conv1Blob->data_at(0, kk, i, j);
                    value = (value - minValue) / (maxValue - minValue);
                    // Binarize at 0.5 so the class masks stand out
                    // (also avoids the 255*floor(value/0.5) overflow at 1.0).
                    roi.at<uchar>(i, j) = value >= 0.5f ? 255 : 0;
                }
            }
            Maps.push_back(roi);
            kk++;
        }
    }
    return Maps.size();
}
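A possible way to call it (a sketch; the blob name "score" and the file names are only guesses for an FCN deploy net, so pick the real name from the printed blob list):

// Dump every channel of the chosen blob to disk as a PNG.
std::vector<cv::Mat> maps;
cv::Mat input = cv::imread("test.jpg");
int n = classifier.visualize_featuremap(input, "score", maps);
for (int i = 0; i < n; ++i)
    cv::imwrite("fmap_" + std::to_string(i) + ".png", maps[i]);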


2. Getting the final FCN output

// Forward pass; the output blob holds one score map per class
// (151 maps for this model, i.e. 150 classes plus background).
vector<Blob<float>* > outBlob = net_->Forward();
int channel = outBlob[0]->shape(1);  // number of class score maps (151 here)
int hi = outBlob[0]->shape(2);
int wi = outBlob[0]->shape(3);
int area = wi * hi;
vector<shared_ptr<Blob<float> > > blobs = net_->blobs();
vector<string> blob_names = net_->blob_names();
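For reference, a pixel's class scores are strided by area in the flattened blob; a minimal sketch of reading one score (the indices here are purely illustrative):

// Score of class c at pixel (h, w), batch item 0.
const float* scores = outBlob[0]->cpu_data();
int c = 0, h = 0, w = 0;                     // illustrative indices
float score = scores[c * area + h * wi + w];
// Equivalent, bounds-checked but slower:
float same = outBlob[0]->data_at(0, c, h, w);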

Getting the per-pixel maximum label:

// Per-pixel argmax over the class score maps: write the winning class index
// of every pixel into maskMax, which must be preallocated as CV_8UC1 with
// the same height/width as the output blob. (img and thres are unused here.)
int Classifier::GetMaxMask(const cv::Mat& img, int layerIdx, double thres,
                           cv::Mat& maskMax) {
    vector<boost::shared_ptr<Blob<float> > > blobs = net_->blobs();
    vector<string> blob_names = net_->blob_names();
    int channel = net_->output_blobs()[0]->shape(1);
    int hi = net_->output_blobs()[0]->shape(2);
    int wi = net_->output_blobs()[0]->shape(3);
    int area = wi * hi;

    const boost::shared_ptr<Blob<float> > feature_blob =
        net_->blob_by_name(blob_names[layerIdx]);
    int batch_size = feature_blob->num();

    // Precompute per-channel offsets so the inner loop is a single add.
    std::vector<int> areal(channel);
    for (int i = 0; i < channel; ++i)
        areal[i] = i * area;

    int classI = 0;
    for (int n = 0; n < batch_size; ++n) {
        const float* feature_blob_data =
            feature_blob->cpu_data() + feature_blob->offset(n);
        for (int h = 0; h < hi; ++h) {
            uchar* ptr = (unsigned char*)(maskMax.data + h * maskMax.step);
            int img_index = h * wi;
            for (int w = 0; w < wi; ++w) {
                float valueG = -10000000.f;  // running maximum for this pixel
                for (int c = 0; c < channel; ++c) {
                    float value = static_cast<float>(
                        feature_blob_data[areal[c] + img_index]);
                    if (valueG < value) {
                        valueG = value;
                        classI = c;  // class with the highest score so far
                    }
                }
                *ptr = (uchar)classI;
                ++ptr;
                ++img_index;
            }
        }
    }
    return 1;
}
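A usage sketch, assuming it runs inside Classifier where net_ is visible; scoreBlobIdx is a hypothetical variable holding the index of the score blob within blob_names:

// Allocate the mask to match the network output, then fill it.
int hi = net_->output_blobs()[0]->shape(2);
int wi = net_->output_blobs()[0]->shape(3);
cv::Mat maskMax(hi, wi, CV_8UC1, cv::Scalar(0));
GetMaxMask(img, scoreBlobIdx, 0.0, maskMax);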

Getting all labels (per-class segments):

// Split the argmax mask into per-class color segments, keeping only the
// classes that cover at least nPointMin pixels.
int Classifier::getAllSeg(cv::Mat& im_inp, cv::Mat& maskMax,
                          std::vector<cv::Mat>& segs,
                          std::vector<std::pair<int, float> >& labels,
                          const int nPointMin) {
    // Histogram of pixels per class (m_nClass is the class-count member).
    std::vector<int> numsc(m_nClass);
    int h = maskMax.rows;
    int w = maskMax.cols;
    for (int i = 0; i < maskMax.rows; ++i) {
        uchar* ptrm = maskMax.ptr<uchar>(i);
        for (int j = 0; j < maskMax.cols; ++j) {
            numsc[*ptrm]++;
            ++ptrm;
        }
    }

    // Keep the classes above the pixel-count threshold; maps takes a class
    // id to its position in segs/labels.
    std::map<int, int> maps;
    int k = 0;
    for (int i = 0; i < numsc.size(); ++i) {
        if (numsc[i] > nPointMin) {
            labels.push_back(make_pair(i, 1.0f));
            maps.insert(make_pair(i, k));
            ++k;
        }
    }

    // One black canvas per kept class (initialized, unlike the raw Mat ctor).
    for (int i = 0; i < labels.size(); ++i)
        segs.push_back(cv::Mat(h, w, CV_8UC3, cv::Scalar(0, 0, 0)));

    // Copy each BGR pixel of the input into the segment its class maps to;
    // all row pointers advance in lockstep, one channel at a time.
    std::vector<uchar*> ptres(labels.size());
    for (int i = 0; i < maskMax.rows; ++i) {
        uchar* ptr = im_inp.ptr<uchar>(i);
        uchar* ptrm = maskMax.ptr<uchar>(i);
        for (int n = 0; n < labels.size(); ++n)
            ptres[n] = segs[n].ptr<uchar>(i);
        for (int j = 0; j < maskMax.cols; ++j) {
            auto l_it = maps.find(*ptrm);
            int pos = (l_it == maps.end()) ? -1 : l_it->second;
            for (int ch = 0; ch < 3; ++ch) {  // B, G, R
                if (pos > -1) *(ptres[pos]) = *ptr;
                ++ptr;
                for (int n = 0; n < labels.size(); ++n) ++ptres[n];
            }
            ++ptrm;
        }
    }
    return segs.size();
}
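A hedged end-to-end sketch tying the two routines together (the 200-pixel threshold and file names are arbitrary examples):

// Inside Classifier: argmax mask first, then per-class segments.
std::vector<cv::Mat> segs;
std::vector<std::pair<int, float> > labels;
int nseg = getAllSeg(img, maskMax, segs, labels, 200);
for (int i = 0; i < nseg; ++i)
    cv::imwrite("seg_" + std::to_string(labels[i].first) + ".png", segs[i]);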


3. Additionally, the code for visualizing weights, quoted almost verbatim:

// Tile all filters of one parameter blob into a single image.
// Assumes the weights have shape (num, channel, height, width) with
// channel <= 3, so each filter maps onto one BGR tile.
cv::Mat visualize_weights(string prototxt, string caffemodel,
                          int weights_layer_num) {
    ::google::InitGoogleLogging("0");
#ifdef CPU_ONLY
    Caffe::set_mode(Caffe::CPU);
#else
    Caffe::set_mode(Caffe::GPU);
#endif
    Net<float> net(prototxt, TEST);
    net.CopyTrainedLayersFrom(caffemodel);
    vector<shared_ptr<Blob<float> > > params = net.params();
    std::cout << "Shapes of the parameter blobs:\n";
    for (int i = 0; i < params.size(); ++i)
        std::cout << params[i]->shape_string() << std::endl;

    int width   = params[weights_layer_num]->shape(3);  // filter width
    int height  = params[weights_layer_num]->shape(2);  // filter height
    int channel = params[weights_layer_num]->shape(1);  // channels
    int num     = params[weights_layer_num]->shape(0);  // filter count
    int imgHeight = (int)(1 + sqrt(num)) * height;
    int imgWidth  = (int)(1 + sqrt(num)) * width;
    cv::Mat img(imgHeight, imgWidth, CV_8UC3, cv::Scalar(0, 0, 0));

    // Global min/max over the blob, for normalization.
    float maxValue = -1000, minValue = 10000;
    const float* tmpValue = params[weights_layer_num]->cpu_data();
    for (int i = 0; i < params[weights_layer_num]->count(); i++) {
        maxValue = std::max(maxValue, tmpValue[i]);
        minValue = std::min(minValue, tmpValue[i]);
    }

    int kk = 0;
    for (int y = 0; y < imgHeight; y += height) {
        for (int x = 0; x < imgWidth; x += width) {
            if (kk >= num) continue;
            cv::Mat roi = img(cv::Rect(x, y, width, height));
            for (int i = 0; i < height; i++)
                for (int j = 0; j < width; j++)
                    for (int k = 0; k < channel; k++) {
                        float value =
                            params[weights_layer_num]->data_at(kk, k, i, j);
                        roi.at<cv::Vec3b>(i, j)[k] =
                            (value - minValue) / (maxValue - minValue) * 255;
                    }
            ++kk;
        }
    }
    return img;
}
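Usage sketch (parameter-blob index 0 usually corresponds to the first convolution layer's weights, but verify against the printed shape list; the file names are placeholders):

cv::Mat tiles = visualize_weights("deploy.prototxt", "fcn.caffemodel", 0);
cv::imwrite("conv1_filters.png", tiles);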


4. FeatureMap results

Original image:

Segmentation result:


Reference: the classic paper Fully Convolutional Networks for Semantic Segmentation.

Its core idea is to build a "fully convolutional" network that takes input of arbitrary size and, through efficient inference and learning, produces output of the corresponding size. The paper defines and details the space of fully convolutional networks, explains their application to spatially dense prediction tasks (predicting the class each pixel belongs to), and relates them to prior models.

---------------------------

A typical CNN follows its convolutional layers with several fully connected layers, mapping the feature maps produced by convolution into a fixed-length feature vector. Classic CNN architectures such as AlexNet are suited to image-level classification and regression tasks, because in the end they expect a single numeric description (a probability) of the whole input image; AlexNet's ImageNet model, for instance, outputs a 1000-dimensional vector of per-class probabilities (softmax-normalized).


FCN instead classifies the image at the pixel level, thereby solving semantic segmentation. Unlike a classic CNN, which uses fully connected layers after the convolutions to obtain a fixed-length feature vector for classification (fully connected layers + softmax output), FCN accepts an input image of arbitrary size and uses deconvolution layers to upsample the feature map of the last convolutional layer back to the input resolution. A prediction is thus produced for every pixel while the spatial information of the original input is preserved, and classification is finally performed pixel by pixel on the upsampled feature map.
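In Caffe, this upsampling step is a Deconvolution layer. A sketch in the style of the public FCN-32s prototxt (num_output 21 matches PASCAL VOC; a 151-class model like the one above would use 151; layer and blob names may differ in your net):

layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score_fr"
  top: "upscore"
  param { lr_mult: 0 }     # bilinear upsampling weights kept fixed
  convolution_param {
    num_output: 21         # one score map per class
    bias_term: false
    kernel_size: 64
    stride: 32             # 32x upsampling back to input resolution
  }
}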


The softmax classification loss is then computed for each pixel individually, which is equivalent to treating every pixel as one training sample.
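Concretely, with per-pixel scores z and ground-truth label y at each pixel, this is the averaged per-pixel softmax cross-entropy (a standard formulation written out for clarity, not quoted from the paper):

L = -\frac{1}{HW} \sum_{h,w} \log \frac{\exp(z_{h,w,\,y_{h,w}})}{\sum_{c} \exp(z_{h,w,c})}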


Network structure:


The author's re-translated version of the diagram:


Since FCN is affected by the choice of underlying base network, the results of combining it with different base networks are as follows:

On the segmentation task, GoogLeNet, whose architecture is comparatively prone to overfitting, cannot beat the standard AlexNet architecture.

Summary:

The multi-level distribution of pooling layers ultimately feeds the per-pixel class prediction, so the granularity of the pooling layers is directly tied to the precision of the final segmentation.
