Solving Unconstrained Optimization Problems with the Conjugate Gradient Method in MATLAB: An FR Implementation

Some time ago I studied unconstrained optimization methods, and today I implemented the FR (Fletcher-Reeves) conjugate gradient method for unconstrained optimization in MATLAB. For an introduction to the theory behind the conjugate gradient method, see my other article, "Study Notes on Unconstrained Optimization Methods".
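
For reference, the Fletcher-Reeves update that gives the method its name builds each new search direction from the current negative gradient and the previous direction:

$$d_{k+1} = -\nabla f(x_{k+1}) + \alpha_k d_k, \qquad \alpha_k = \frac{\|\nabla f(x_{k+1})\|^2}{\|\nabla f(x_k)\|^2}, \qquad d_0 = -\nabla f(x_0).$$

The coefficient $\alpha_k$ is exactly what the implementation below computes as alpha = sumsqr(nfv) / sumsqr(nfv_pre).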

The file testConjungateGradient.m tests the conjugate gradient function. The test script defines the objective function f and its variables x, and supplies the initial point x0 and the tolerance epsilon. A show_detail flag controls whether the information for each iteration is printed.

% test conjugate gradient method
% by TomHeaven, hanlin_tan@nudt.edu.cn, 2015.08.25

%% define function and variables
syms x1 x2;
%f = x1^2 + 2*x2^2 - 2*x1*x2 + 2*x2 + 2;
f = (x1-1)^4 + (x1 - x2)^2;
%f = (1-x1)^2 + 2*(x2 - x1^2)^2;
x = {x1, x2};
% initial value
x0 = [0 0];
% tolerance
epsilon = 1e-1;

%% call conjugate gradient method
show_detail = true;
[bestf, bestx, count] = conjungate_gradient(f, x, x0, epsilon, show_detail);
% print result
fprintf('bestx = %s, bestf = %f, count = %d\n', num2str(bestx), bestf, count);

The file conjungate_gradient.m implements the conjugate gradient method itself. The variable nf stores the gradient ∇f of f (the Greek letter for the gradient symbol is nabla, hence the name nf).
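
The one-dimensional search inside the main loop is an exact line search: given the current point $x_k$ and direction $d_k$, the step size minimizes

$$\phi(\lambda) = f(x_k + \lambda d_k),$$

which the code carries out by solving $\phi'(\lambda) = 0$ symbolically with solve and keeping a real, positive root.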

function [fv, bestx, iter_num] = conjungate_gradient(f, x, x0, epsilon, show_detail)
%% conjugate gradient (FR) method
% by TomHeaven, hanlin_tan@nudt.edu.cn, 2015.08.25
% Input:
%   f           - syms function
%   x           - row cell array of input syms variables
%   x0          - initial point
%   epsilon     - tolerance
%   show_detail - boolean flag for whether to print details
% Output:
%   fv       - minimum f value
%   bestx    - minimum point
%   iter_num - iteration count

%% init
syms lambdas % suffix s indicates a symbolic variable
% n is the dimension
n = length(x);
% compute the differential of f, stored in the cell nf
nf = cell(1, n); % use row cells; column cells will result in an error
for i = 1 : n
    nf{i} = diff(f, x{i});
end
% $\nabla f(x_0)$
nfv = subs(nf, x, x0);
% init $\nabla f(x_k)$
nfv_pre = nfv;
% init count, k, and xv for the x value
count = 0;
k = 0;
xv = x0;
% initial search direction
d = - nfv;
% show initial info
if show_detail
    fprintf('Initial:\n');
    fprintf('f = %s, x0 = %s, epsilon = %f\n\n', char(f), num2str(x0), epsilon);
end

%% loop
while (norm(nfv) > epsilon)
    %% one-dimensional search
    % define $x_{k+1} = x_k + \lambda d$
    xv = xv + lambdas * d;
    % define $\phi$ and do the 1-dim search
    phi = subs(f, x, xv);
    nphi = diff(phi); % $\phi'(\lambda)$
    lambda = solve(nphi);
    lambda = double(lambda);
    % get rid of complex and negative solutions
    if length(lambda) > 1
        lambda = lambda(abs(imag(lambda)) < 1e-5);
        lambda = lambda(lambda > 0);
        lambda = lambda(1);
    end
    % if $\lambda$ is too small, stop the iteration
    if lambda < 1e-5
        break;
    end

    %% update
    % update $x_{k+1} = x_k + \lambda d$
    xv = subs(xv, lambdas, lambda);
    % convert sym to double
    xv = double(xv);
    % compute the differential
    nfv = subs(nf, x, xv);
    % update counters
    count = count + 1;
    k = k + 1;
    % compute alpha with the FR formula
    % (sumsqr is from the Deep Learning Toolbox; sum(nfv.^2) avoids the dependency)
    alpha = sumsqr(nfv) / sumsqr(nfv_pre);
    % show iteration info
    if show_detail
        fprintf('Iteration: %d\n', count);
        fprintf('x(%d) = %s, lambda = %f\n', count, num2str(xv), lambda);
        fprintf('nf(x) = %s, norm(nf) = %f\n', num2str(double(nfv)), norm(double(nfv)));
        fprintf('d = %s, alpha = %f\n', num2str(double(d)), double(alpha));
        fprintf('\n');
    end
    % update the conjugate direction
    d = -nfv + alpha * d;
    % save the previous $\nabla f(x_k)$
    nfv_pre = nfv;
    % restart: reset the conjugate direction and k when k >= n
    if k >= n
        k = 0;
        d = - nfv;
    end
end % while

%% output
fv = double(subs(f, x, xv));
bestx = double(xv);
iter_num = count;
end
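
As a usage sketch, the Rosenbrock-style function that is commented out in the test script can be minimized the same way. This call is illustrative only and assumes both files are on the MATLAB path:

% usage sketch (illustrative): minimize the commented-out test function
syms x1 x2;
f = (1 - x1)^2 + 2*(x2 - x1^2)^2;
[bestf, bestx, count] = conjungate_gradient(f, {x1, x2}, [0 0], 1e-6, false);
fprintf('bestx = %s, bestf = %f, count = %d\n', num2str(bestx), bestf, count);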

Running testConjungateGradient produces the following output:

>> testConjungateGradient
Initial:
f = (x1 - x2)^2 + (x1 - 1)^4, x0 = 0 0, epsilon = 0.100000

Iteration: 1
x(1) = 0.41025 0, lambda = 0.102561
nf(x) = 1.08e-16 -0.82049, norm(nf) = 0.820491
d = 4 0, alpha = 0.042075

Iteration: 2
x(2) = 0.52994 0.58355, lambda = 0.711218
nf(x) = -0.52265 0.10721, norm(nf) = 0.533528
d = 0.1683 0.82049, alpha = 0.422831

Iteration: 3
x(3) = 0.63914 0.56115, lambda = 0.208923
nf(x) = -0.031994 -0.15597, norm(nf) = 0.159223
d = 0.52265 -0.10721, alpha = 0.089062

Iteration: 4
x(4) = 0.76439 0.79465, lambda = 1.594673
nf(x) = -0.11285 0.060533, norm(nf) = 0.128062
d = 0.078542 0.14643, alpha = 0.646892

Iteration: 5
x(5) = 0.79174 0.77998, lambda = 0.242379
nf(x) = -0.012614 -0.023517, norm(nf) = 0.026686
d = 0.11285 -0.060533, alpha = 0.043425

bestx = 0.79174 0.77998, bestf = 0.002019, count = 5

Changing the tolerance to

epsilon = 1e-8;

yields a more accurate result:

Iteration: 6
x(6) = 0.9026 0.9122, lambda = 6.329707
nf(x) = -0.022884 0.019188, norm(nf) = 0.029864
d = 0.017515 0.020888, alpha = 1.252319

Iteration: 7
x(7) = 0.90828 0.90744, lambda = 0.247992
nf(x) = -0.0014077 -0.0016788, norm(nf) = 0.002191
d = 0.022884 -0.019188, alpha = 0.005382

Iteration: 8
x(8) = 0.97476 0.97586, lambda = 43.429293
nf(x) = -0.0022668 0.0022025, norm(nf) = 0.003161
d = 0.0015309 0.0015756, alpha = 2.080989

Iteration: 9
x(9) = 0.97533 0.97531, lambda = 0.249812
nf(x) = -2.9597e-05 -3.0461e-05, norm(nf) = 0.000042
d = 0.0022668 -0.0022025, alpha = 0.000181

Iteration: 10
x(10) = 0.99709 0.99712, lambda = 725.188481
nf(x) = -5.2106e-05 5.2008e-05, norm(nf) = 0.000074
d = 3.0006e-05 3.0063e-05, alpha = 3.004594

Iteration: 11
x(11) = 0.9971 0.9971, lambda = 0.249997
nf(x) = -4.8571e-08 -4.8663e-08, norm(nf) = 0.000000
d = 5.2106e-05 -5.2008e-05, alpha = 0.000001

Iteration: 12
x(12) = 0.99992 0.99992, lambda = 57856.826721
nf(x) = -9.3751e-08 9.3748e-08, norm(nf) = 0.000000
d = 4.8616e-08 4.8617e-08, alpha = 3.718503

Iteration: 13
x(13) = 0.99992 0.99992, lambda = 0.250000
nf(x) = -1.1858e-12 -1.1855e-12, norm(nf) = 0.000000
d = 9.3751e-08 -9.3748e-08, alpha = 0.000000

bestx = 0.99992 0.99992, bestf = 0.000000, count = 13

This is already very close to the problem's optimal solution (1, 1)^T.
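
This can be checked directly: at $(1, 1)$ both the objective and its gradient vanish,

$$f(1,1) = (1-1)^4 + (1-1)^2 = 0, \qquad \nabla f(x) = \begin{pmatrix} 4(x_1-1)^3 + 2(x_1 - x_2) \\ -2(x_1 - x_2) \end{pmatrix} \Big|_{(1,1)} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

so $(1,1)^T$ is indeed the minimizer, with optimal value 0.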

The implementation has not been tested extensively and may contain bugs in practical use. It is meant only to illustrate the basic principle; interested readers are welcome to build on and improve it.
