

Deep Learning Notes 7: Working with Large Images (Convolutional Feature Extraction)


Motivation

Learning features with a sparse autoencoder is feasible on small images such as 8x8 or 28x28, but learning features over an entire large image in the same way would be prohibitively slow. We therefore need to replace the fully connected design with a locally connected network.

卷積

Natural images have an inherent property: the statistics of one part of an image are the same as those of any other part. This means that a feature learned on one part of the image can also be used on another part, so the same learned features can be applied at every position in the image.
For example, we can learn some features from 8x8 patches and then apply those features anywhere in the image. Concretely, we convolve the features learned from the 8x8 patches with the original large image.
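As a minimal sketch of this idea (assuming 96x96 images and 8x8 features, which matches the 89^2 figure used in the next section; the image and feature below are random stand-ins, not the exercise's actual data), a single learned feature can be slid over one image channel with conv2:

% One learned 8x8 feature convolved with one 96x96 image channel (stand-ins).
imageDim = 96; patchDim = 8;
img     = rand(imageDim, imageDim);    % one channel of a large image
feature = rand(patchDim, patchDim);    % one learned 8x8 feature

% conv2 flips its kernel, so flip the feature first; the result is then the
% inner product of the feature with every 8x8 window of the image.
convolved = conv2(img, rot90(feature, 2), 'valid');
size(convolved)   % [89 89], since 96 - 8 + 1 = 89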

Pooling

The convolved features could be used to train a classifier directly, but their dimensionality is very high, which makes training slow and also prone to overfitting. We therefore pool each convolved feature map. For example, if the convolved features have size 89^2 x 400, there are 400 features, each an 89x89 map (89^2 values). Pooling divides each 89x89 map into non-overlapping blocks of a fixed size and represents each block by its mean or maximum value; if a map is split into 10 blocks, its 89^2 values are reduced to 10.
Pooling also makes the resulting features approximately invariant to small translations of the input.
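A minimal mean-pooling sketch for a single 89x89 convolved feature map (poolDim = 19 is an illustrative choice, not taken from the note; cnnPool in the exercise below does the same thing for all features and images at once):

% Mean-pool one 89x89 feature map into non-overlapping 19x19 blocks (stand-in data).
convolvedDim = 89; poolDim = 19;
featureMap   = rand(convolvedDim, convolvedDim);

numRegions = floor(convolvedDim / poolDim);        % 4 regions per side
pooled     = zeros(numRegions, numRegions);
for r = 1:numRegions
  for c = 1:numRegions
    block = featureMap((r-1)*poolDim+1 : r*poolDim, ...
                       (c-1)*poolDim+1 : c*poolDim);
    pooled(r, c) = mean(block(:));                 % use max(block(:)) for max pooling
  end
end
% 89^2 = 7921 values per feature map are reduced to 4 * 4 = 16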

Exercise

This part only became really clear to me after doing the exercise.
Step 1: randomly sample 8x8 patches from the large images -> apply ZCA whitening -> learn features with a sparse autoencoder. A rough sketch of this preprocessing is given below.
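The following is only an illustrative sketch of step 1; in the actual exercise, W, b, ZCAWhite and meanPatch are loaded from the earlier linear decoder exercise rather than recomputed, and the patches here are random stand-ins:

% 'patches' stands in for an (8*8*3) x numPatches matrix of color patches.
patches    = rand(8*8*3, 10000);
numPatches = size(patches, 2);

% Subtract the mean patch, then compute the ZCA whitening matrix.
meanPatch = mean(patches, 2);
patches   = bsxfun(@minus, patches, meanPatch);

sigma    = patches * patches' / numPatches;            % patch covariance
[u, s]   = svd(sigma);
epsilon  = 0.1;                                        % regularization term
ZCAWhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u';
patches  = ZCAWhite * patches;

% The whitened patches are then fed to the sparse autoencoder (linear decoder),
% which yields the W and b used by cnnConvolve below.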
Step 2: implement the convolution.

function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  patchDim - patch (feature) dimension
%  numFeatures - number of features
%  images - large images to convolve with, matrix in the form
%           images(r, c, channel, image number)
%  W, b - W, b for features from the sparse autoencoder
%  ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
%                        preprocessing
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)

numImages = size(images, 4);
imageDim = size(images, 1);
imageChannels = size(images, 3);

% Instructions:
%   Convolve every feature with every large image here to produce the
%   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1)
%   matrix convolvedFeatures, such that
%   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
% Expected running times:
%   Convolving with 100 images should take less than 3 minutes
%   Convolving with 5000 images should take around an hour
%   (So to save time when testing, you should convolve with fewer images, as
%   described earlier)

% Precompute the matrices used during the convolution, taking the whitening
% and mean subtraction into account: since the autoencoder was trained on
% W * ZCAWhite * (x - meanPatch) + b, we fold this into WT * x + b_mean.
WT = W * ZCAWhite;
b_mean = b - WT * meanPatch;

convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);
for imageNum = 1:numImages
  for featureNum = 1:numFeatures

    % convolution of image with feature matrix for each channel
    convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
    for channel = 1:imageChannels

      % Obtain the feature (patchDim x patchDim) needed during the convolution:
      % the slice of WT belonging to this feature and this color channel
      offset = (channel - 1) * patchDim * patchDim;
      fea = WT(featureNum, offset + 1 : offset + patchDim * patchDim);
      feature = reshape(fea, patchDim, patchDim);

      % Flip the feature matrix because of the definition of convolution
      feature = flipud(fliplr(squeeze(feature)));

      % Obtain the image
      im = squeeze(images(:, :, channel, imageNum));

      % Convolve "feature" with "im" using a 'valid' convolution, accumulating
      % the result over channels
      convolvedImage = convolvedImage + conv2(im, feature, 'valid');
    end

    % Subtract the bias unit (correcting for the mean subtraction as well),
    % then apply the sigmoid function to get the hidden activation
    convolvedImage = sigmoid(convolvedImage + b_mean(featureNum));

    % Store the convolved feature map, summed over all channels
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end

end

function sigm = sigmoid(x)
  sigm = 1 ./ (1 + exp(-x));
end
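For reference, a call to the finished function might look like this (a hedged sketch; the variable names are assumptions, and the exercise script actually convolves only a few features at a time to limit memory use):

% images is imageDim x imageDim x 3 x numImages; W, b come from the sparse
% autoencoder; ZCAWhite and meanPatch come from the whitening step.
convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch);
% convolvedFeatures(featureNum, imageNum, :, :) is the
% (imageDim - patchDim + 1) x (imageDim - patchDim + 1) activation map of
% one feature on one image.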

Step 3: implement the pooling.

function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(featureNum, imageNum, poolRow, poolCol)
%

numImages = size(convolvedFeatures, 2);
numFeatures = size(convolvedFeatures, 1);
convolvedDim = size(convolvedFeatures, 3);

pooledFeatures = zeros(numFeatures, numImages, floor(convolvedDim / poolDim), floor(convolvedDim / poolDim));

% Number of non-overlapping pooling regions along each dimension
numRegions = floor(convolvedDim / poolDim);
for featureNum = 1:numFeatures
  for imageNum = 1:numImages
    for row = 1:numRegions
      for col = 1:numRegions
        % Mean-pool over one poolDim x poolDim region
        region = convolvedFeatures(featureNum, imageNum, ...
                                   (row-1)*poolDim+1 : row*poolDim, ...
                                   (col-1)*poolDim+1 : col*poolDim);
        pooledFeatures(featureNum, imageNum, row, col) = mean(region(:));
      end
    end
  end
end

end
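Finally, a hedged sketch of how the pooled features would then feed a classifier (the variable names and the softmax step are assumptions based on the broader exercise, not part of this note):

% Pool the convolved features, then flatten them into one column per image.
pooledFeatures = cnnPool(poolDim, convolvedFeatures);

numImages = size(pooledFeatures, 2);
softmaxX  = reshape(permute(pooledFeatures, [1 3 4 2]), [], numImages);
% Each column of softmaxX is the pooled feature vector of one image, ready to
% be used as input to a softmax (or any other) classifier.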
