
UFLDL Tutorial: Exercise: Learning color features with Sparse Autoencoders

發(fā)布時(shí)間:2023/12/13 编程问答 34 豆豆
生活随笔 收集整理的這篇文章主要介紹了 UFLDL教程: Exercise:Learning color features with Sparse Autoencoders 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

Linear Decoders


Deep Learning and Unsupervised Feature Learning Tutorial Solutions

For a three-layer sparse autoencoder network, the output layer of the sparse autoencoder satisfies the following formula:

$a^{(3)} = f(z^{(3)}) = f(W^{(2)} a^{(2)} + b^{(2)})$

As the formula shows, the output $a^{(3)}$ is the output of the activation function $f$. In an ordinary sparse autoencoder, $f$ is usually the sigmoid function, whose range is $(0,1)$, so $a^{(3)}$ is also confined to values between 0 and 1.

We also know that in a sparse autoencoder the output layer should reproduce the input features as closely as possible, i.e. $a^{(3)} \approx x$. It follows that $x$ must also lie between 0 and 1, which means the data fed into the network has to be scaled to $[0,1]$ first. This condition holds in some settings, such as the MNIST digit-recognition experiment earlier in the tutorial, but not in others: PCA-whitened data, for example, need not fall in $[0,1]$. This is where the linear decoder comes in. In a linear decoder, the hidden layer keeps the sigmoid activation while the output layer uses a linear activation, most simply the identity function. The output layer then satisfies:

$a^{(3)} = z^{(3)} = W^{(2)} a^{(2)} + b^{(2)}$

An autoencoder consisting of a sigmoid (or tanh) hidden layer and a linear output layer is called a linear decoder.
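To make the contrast concrete, here is a minimal MATLAB sketch of the forward pass under each choice of output activation (illustrative only; W1, W2, b1, b2 and the input x follow the naming used in the exercise code below):

% Forward pass of a three-layer autoencoder on a single input x.
z2 = W1 * x + b1;
a2 = 1 ./ (1 + exp(-z2));   % hidden layer: sigmoid in both variants
z3 = W2 * a2 + b2;

a3_sigmoid = 1 ./ (1 + exp(-z3));   % ordinary sparse autoencoder: output confined to (0,1)
a3_linear  = z3;                    % linear decoder: output can take any real value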


When the output units' activation function changes, the output units' gradient changes accordingly. Recall that the error term of each output unit was defined as:

$\delta_i^{(3)} = \frac{\partial}{\partial z_i^{(3)}} \, \frac{1}{2} \left\| y - \hat{x} \right\|^2 = -\,(y_i - \hat{x}_i) \cdot f'(z_i^{(3)})$

where $y = x$ is the desired output, $\hat{x}$ is the autoencoder's output, and $f$ is the activation function. Because the output-layer activation is $f(z) = z$, we have $f'(z) = 1$, so the formula above simplifies to:

$\delta_i^{(3)} = -\,(y_i - \hat{x}_i)$

Of course, when using the backpropagation algorithm to compute the error terms of the hidden layer, as before:

$\delta^{(2)} = \left( (W^{(2)})^T \delta^{(3)} \right) \bullet f'(z^{(2)})$

Because the hidden layer uses a sigmoid (or tanh) activation $f$, the $f'(z^{(2)})$ in the formula above is still the derivative of the sigmoid (or tanh) function.

So when computing gradients with the BP algorithm, only the output-layer error term needs to change, namely:

$\delta^{(3)} = -\,(y - a^{(3)}) = a^{(3)} - x$
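In code, this means the only change to the backpropagation routine is the output-layer error term; a short sketch (omitting the sparsity penalty term for brevity; the full version appears in sparseAutoencoderLinearCost.m below):

% Sigmoid output layer, where f'(z3) = a3 .* (1 - a3):
% delta3 = -(data - a3) .* a3 .* (1 - a3);

% Linear output layer, where f'(z3) = 1:
delta3 = -(data - a3);

% The hidden-layer error term is unchanged, since the hidden layer is still sigmoid:
delta2 = (W2' * delta3) .* a2 .* (1 - a2);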


Experiment Steps


1. Initialize the parameters and write sparseAutoencoderLinearCost.m, which computes the linear decoder's cost function and its gradient; it is mostly a small modification of sparseAutoencoderCost.m. Then check that the gradient implementation is correct (sketches of the helper routines it relies on follow this list and the exercise script below).
2. Load the data and preprocess the raw patches with ZCA whitening.
3. Learn the features, i.e. train the whole linear-decoder network with the L-BFGS algorithm to obtain the network weights optTheta.
4. Visualize the features learned by the first layer.
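
The steps above call initializeParameters, which is not listed in this post. As a reference, here is a minimal sketch in the spirit of the UFLDL starter code (the symmetric interval radius r below is the usual choice there; treat the exact scheme as an assumption if your copy differs):

function theta = initializeParameters(hiddenSize, visibleSize)
% Initialize weights uniformly from [-r, r] based on the layer sizes; biases start at zero.
r = sqrt(6) / sqrt(hiddenSize + visibleSize + 1);
W1 = rand(hiddenSize, visibleSize) * 2 * r - r;
W2 = rand(visibleSize, hiddenSize) * 2 * r - r;
b1 = zeros(hiddenSize, 1);
b2 = zeros(visibleSize, 1);
% Pack all parameters into one vector, matching the unpacking order
% used in sparseAutoencoderLinearCost.m.
theta = [W1(:) ; W2(:) ; b1(:) ; b2(:)];
end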

linearDecoderExercise.m

%% CS294A/CS294W Linear Decoder Exercise

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  linear decoder exercise. For this exercise, you will only need to modify
%  the code in sparseAutoencoderLinearCost.m. You will not need to modify
%  any code in this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageChannels = 3;     % number of channels (rgb, so 3)

patchDim   = 8;        % patch dimension
numPatches = 100000;   % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units
outputSize  = visibleSize;  % number of output units
hiddenSize  = 400;          % number of hidden units

sparsityParam = 0.035;  % desired average activation of the hidden units
lambda = 3e-3;          % weight decay parameter
beta = 5;               % weight of sparsity penalty term
epsilon = 0.1;          % epsilon for ZCA whitening

%%======================================================================
%% STEP 1: Create and modify sparseAutoencoderLinearCost.m to use a linear decoder,
%          and check gradients
%  You should copy sparseAutoencoderCost.m from your earlier exercise
%  and rename it to sparseAutoencoderLinearCost.m.
%  Then you need to rename the function from sparseAutoencoderCost to
%  sparseAutoencoderLinearCost, and modify it so that the sparse autoencoder
%  uses a linear decoder instead. Once that is done, you should check
%  your gradients to verify that they are correct.

% NOTE: Modify sparseAutoencoderCost first!

% To speed up gradient checking, we will use a reduced network and some
% dummy patches

debugHiddenSize = 5;
debugvisibleSize = 8;
patches = rand([8 10]);  % 10 random samples, each an 8-dimensional column vector with entries in [0,1]
theta = initializeParameters(debugHiddenSize, debugvisibleSize);

[cost, grad] = sparseAutoencoderLinearCost(theta, debugvisibleSize, debugHiddenSize, ...
                                           lambda, sparsityParam, beta, ...
                                           patches);

% Check gradients
numGrad = computeNumericalGradient( @(x) sparseAutoencoderLinearCost(x, debugvisibleSize, debugHiddenSize, ...
                                                                     lambda, sparsityParam, beta, ...
                                                                     patches), theta);

% Use this to visually compare the gradients side by side
disp([numGrad grad]);

diff = norm(numGrad-grad)/norm(numGrad+grad);
% Should be small. In our implementation, these values are usually less than 1e-9.
disp(diff);

assert(diff < 1e-9, 'Difference too large. Check your gradient computation again');

% NOTE: Once your gradients check out, you should run step 0 again to
%       reinitialize the parameters

%%======================================================================
%% STEP 2: Learn features on small patches
%  In this step, you will use your sparse autoencoder (which now uses a
%  linear decoder) to learn features on small patches sampled from related
%  images.

%% STEP 2a: Load patches
%  In this step, we load 100k patches sampled from the STL10 dataset and
%  visualize them. Note that these patches have been scaled to [0,1]

load stlSampledPatches.mat  % defines the variable patches

displayColorNetwork(patches(:, 1:100));

%% STEP 2b: Apply preprocessing
%  In this sub-step, we preprocess the sampled patches, in particular,
%  ZCA whitening them.
%
%  In a later exercise on convolution and pooling, you will need to replicate
%  exactly the preprocessing steps you apply to these patches before
%  using the autoencoder to learn features on them. Hence, we will save the
%  ZCA whitening and mean image matrices together with the learned features
%  later on.

% Subtract mean patch (hence zeroing the mean of the patches)
meanPatch = mean(patches, 2);  % note: this subtracts the mean of each dimension (row).
% Why average over rows rather than over columns (samples) as in earlier
% exercises? Earlier the images were grayscale; here they are color, and
% averaging each column would mix the three channels together.
patches = bsxfun(@minus, patches, meanPatch);  % zero-mean each dimension

% Apply ZCA whitening
sigma = patches * patches' / numPatches;  % covariance matrix
[u, s, v] = svd(sigma);
ZCAWhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u';  % ZCA whitening matrix
patches = ZCAWhite * patches;

displayColorNetwork(patches(:, 1:100));

%% STEP 2c: Learn features
%  You will now use your sparse autoencoder (with linear decoder) to learn
%  features on the preprocessed patches. This should take around 45 minutes.

theta = initializeParameters(hiddenSize, visibleSize);

% Use minFunc to minimize the function
addpath minFunc/

options = struct;
options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';

[optTheta, cost] = minFunc( @(p) sparseAutoencoderLinearCost(p, ...
                                 visibleSize, hiddenSize, ...
                                 lambda, sparsityParam, ...
                                 beta, patches), ...
                            theta, options);

% Save the learned features and the preprocessing matrices for use in
% the later exercise on convolution and pooling
fprintf('Saving learned features and preprocessing matrices...\n');
save('STL10Features.mat', 'optTheta', 'ZCAWhite', 'meanPatch');
fprintf('Saved\n');

%% STEP 2d: Visualize learned features

W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
figure;
% Why display (W*ZCAWhite)'? Feeding a sample x through the network is
% equivalent to applying W*ZCAWhite*x, so each row of W*ZCAWhite is the
% filter of one hidden unit; displayColorNetwork shows one image patch per
% column, hence the transpose.
displayColorNetwork( (W*ZCAWhite)');
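The gradient check in STEP 1 also relies on computeNumericalGradient, not listed here either; below is a minimal centered-difference sketch consistent with how it is called above (EPSILON = 1e-4 is the value the UFLDL exercises usually suggest; treat it as an assumption):

function numgrad = computeNumericalGradient(J, theta)
% Numerically approximate the gradient of J at theta with centered differences.
EPSILON = 1e-4;  % assumed perturbation size
numgrad = zeros(size(theta));
for i = 1:numel(theta)
    e = zeros(size(theta));
    e(i) = EPSILON;
    % J returns the cost as its first output; perturb one coordinate at a time.
    numgrad(i) = (J(theta + e) - J(theta - e)) / (2 * EPSILON);
end
end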

sparseAutoencoderLinearCost.m

function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                            lambda, sparsityParam, beta, data)

% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
%   earlier exercise onto this file, renaming the function to
%   sparseAutoencoderLinearCost, and changing the autoencoder to use a
%   linear decoder.
% -------------------- YOUR CODE HERE --------------------

% Computes the linear decoder cost function and its gradient
% visibleSize:   number of input units
% hiddenSize:    number of hidden units
% lambda:        weight decay parameter
% sparsityParam: sparsity parameter
% beta:          weight of the sparsity penalty term
% data:          training set
% theta:         parameter vector containing W1, W2, b1, b2

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Loss and gradient variables (your code needs to compute these values)
m = size(data, 2);  % number of samples

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the loss for the Sparse Autoencoder and gradients
%                W1grad, W2grad, b1grad, b2grad
%
%  Hint: 1) data(:,i) is the i-th example
%        2) your computation of loss and gradients should match the size
%           above for loss, W1grad, W2grad, b1grad, b2grad

% Forward pass:
% z2 = W1 * x + b1
% a2 = f(z2)
% z3 = W2 * a2 + b2
% h_Wb = a3 = f(z3)

z2 = W1 * data + repmat(b1, [1, m]);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, [1, m]);
a3 = z3;  % linear decoder: identity activation on the output layer

rhohats = mean(a2,2);
rho = sparsityParam;
KLsum = sum(rho * log(rho ./ rhohats) + (1-rho) * log((1-rho) ./ (1-rhohats)));

squares = (a3 - data).^2;
squared_err_J = (1/2) * (1/m) * sum(squares(:));                % mean squared error term
weight_decay_J = (lambda/2) * (sum(W1(:).^2) + sum(W2(:).^2));  % weight decay term
sparsity_J = beta * KLsum;                                      % sparsity penalty term

cost = squared_err_J + weight_decay_J + sparsity_J;  % cost function value

% delta3 = -(data - a3) .* fprime(z3), but for the linear decoder
% fprime(z3) = 1, so the factor drops out:
delta3 = -(data - a3);
beta_term = beta * (- rho ./ rhohats + (1-rho) ./ (1-rhohats));
delta2 = ((W2' * delta3) + repmat(beta_term, [1,m])) .* a2 .* (1-a2);

W2grad = (1/m) * delta3 * a2' + lambda * W2;    % gradient for W2
b2grad = (1/m) * sum(delta3, 2);                % gradient for b2
W1grad = (1/m) * delta2 * data' + lambda * W1;  % gradient for W1
b1grad = (1/m) * sum(delta2, 2);                % gradient for b1

%-------------------------------------------------------------------
% Convert weights and bias gradients to a compressed form
% This step will concatenate and flatten all your gradients to a vector
% which can be used in the optimization method.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

%-------------------------------------------------------------------
% We are giving you the sigmoid function, you may find this function
% useful in your computation of the loss and the gradients.
function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

end
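A side note on the repmat calls in the forward pass above: an equivalent formulation uses bsxfun to broadcast the bias vectors without materializing the tiled copies, which saves some memory with 100,000 patches. This is an optional variation, not part of the exercise:

% Broadcast b1/b2 across all m columns instead of tiling them with repmat.
z2 = bsxfun(@plus, W1 * data, b1);
a2 = sigmoid(z2);
z3 = bsxfun(@plus, W2 * a2, b2);
a3 = z3;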

displayColorNetwork.m

function displayColorNetwork(A)

% display receptive field(s) or basis vector(s) for image patches
%
% A         the basis, with patches as column vectors

% In case the midpoint is not set at 0, we shift it dynamically
if min(A(:)) >= 0
    A = A - mean(A(:));  % zero-center
end

cols = round(sqrt(size(A, 2)));  % number of patch tiles per row of the mosaic

channel_size = size(A,1) / 3;
dim = sqrt(channel_size);        % pixels per row/column within a patch
dimp = dim+1;
rows = ceil(size(A,2)/cols);     % number of patch tiles per column of the mosaic
B = A(1:channel_size,:);                   % R channel values
C = A(channel_size+1:channel_size*2,:);    % G channel values
D = A(2*channel_size+1:channel_size*3,:);  % B channel values
B = B./(ones(size(B,1),1)*max(abs(B)));    % normalize each column
C = C./(ones(size(C,1),1)*max(abs(C)));
D = D./(ones(size(D,1),1)*max(abs(D)));

% Initialization of the image
I = ones(dim*rows+rows-1, dim*cols+cols-1, 3);

% Transfer features to this image matrix
for i=0:rows-1
    for j=0:cols-1
        if i*cols+j+1 > size(B, 2)
            break
        end
        % This sets the patch
        I(i*dimp+1:i*dimp+dim, j*dimp+1:j*dimp+dim, 1) = ...
            reshape(B(:,i*cols+j+1), [dim dim]);
        I(i*dimp+1:i*dimp+dim, j*dimp+1:j*dimp+dim, 2) = ...
            reshape(C(:,i*cols+j+1), [dim dim]);
        I(i*dimp+1:i*dimp+dim, j*dimp+1:j*dimp+dim, 3) = ...
            reshape(D(:,i*cols+j+1), [dim dim]);
    end
end

I = I + 1;  % map I from [-1,1] to [0,2]
I = I / 2;  % map I from [0,2] to [0,1]
imagesc(I);
axis equal  % equal scaling on both axes so patches display as squares
axis off    % hide axis labels, ticks, and background

end

References


Exercise: Learning color features with Sparse Autoencoders

Deep learning: Part 17 (Linear Decoders, Convolution and Pooling)

Linear Decoders (UFLDL tutorial, 线性解码器)

Andrew Ng's open course
