Optimization Algorithms: the Conjugate Gradient Method
Features: superlinear convergence; only gradient evaluations are required, so second derivatives never need to be computed.
Algorithm steps
\(step0:\)
Given an initial point \(x_0\) and a tolerance \(\epsilon\), set \(k=0\).
\(step1:\)
Compute the gradient \(g_k=\nabla f(x_k)\). If \(\|g_k\|\le\epsilon\), stop and output the current iterate \(x_k\);
otherwise go to step 2.
\(step2:\)
\[d_k=\begin{cases} -g_k, & k=0 \\ -g_k+\beta_{k-1}d_{k-1}, & k\ge 1 \end{cases}\]
\[\beta_{k-1}=\frac{g_k^Tg_k}{g_{k-1}^Tg_{k-1}}\]
\(step3:\)
Use a line search to determine the step size \(\alpha_k\) (for a quadratic objective it has a closed form; see the note below), set \(x_{k+1}=x_k+\alpha_k d_k\) and \(k=k+1\), then go to step 1.
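For intuition, it helps to look at the quadratic case from which the method originates. Assuming \(f(x)=\tfrac{1}{2}x^TQx-b^Tx\) with \(Q\) symmetric positive definite (this quadratic model is an illustrative assumption, not part of the algorithm above), the exact line search step has a closed form, the search directions are mutually \(Q\)-conjugate, and the method terminates in at most \(n\) steps:
\[g_k=Qx_k-b,\qquad \alpha_k=\frac{g_k^Tg_k}{d_k^TQd_k},\qquad d_i^TQd_j=0\ \ (i\ne j).\]
For a general nonlinear \(f\), \(\alpha_k\) is instead found by an inexact line search, as in the MATLAB code below.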
MATLAB code
function [x,val,fun_t] = conjugate_gradient(fun,gfun,x0,max_ite)
%conjugate_gradient - Fletcher-Reeves conjugate gradient method
%
% Syntax: [x,val,fun_t] = conjugate_gradient(fun,gfun,x0,max_ite)
%
% conjugate gradient algorithm with Armijo backtracking line search
maxk=max_ite;
rho=0.6;sigma=0.4;               % Armijo backtracking parameters
k=0;epsilon=1e-4;
n=length(x0);
fun_t=zeros(1,max_ite);          % objective value at each iteration
while k<maxk
    g=feval(gfun,x0);
    if norm(g)<=epsilon, break; end   % stopping test on the gradient norm
    if mod(k,n+1)==0                  % restart with steepest descent
        d=-g;
    else
        beta=(g'*g)/(g0'*g0);         % Fletcher-Reeves coefficient
        d=-g+beta*d0;
        if g'*d>=0.0, d=-g; end       % restart if not a descent direction
    end
    mk=0;   % Armijo backtracking: smallest mk satisfying the condition below
    while mk<20 && feval(fun,x0+rho^mk*d)>feval(fun,x0)+sigma*rho^mk*(g'*d)
        mk=mk+1;
    end
    x0=x0+rho^mk*d;
    fun_t(k+1)=feval(fun,x0);         % record f(x_{k+1})
    g0=g;d0=d;
    k=k+1;
end
x=x0;
val=feval(fun,x0);
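The inner backtracking loop chooses the step size as \(\alpha_k=\rho^{m_k}\), where \(m_k\) is the smallest non-negative integer \(m\) satisfying the Armijo sufficient-decrease condition
\[f(x_k+\rho^m d_k)\le f(x_k)+\sigma\rho^m g_k^Td_k,\qquad \rho=0.6,\ \sigma=0.4.\]
The direction is reset to \(-g_k\) every \(n+1\) iterations and whenever \(d_k\) fails to be a descent direction, a standard safeguard for nonlinear conjugate gradient.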
main code
%%%%%%%%conjugate gradient algorithm
clc;
close all;
fun=@(x) 100*(x(1)^2-x(2))^2+(x(1)-1)^2;
gfun=@(x) [400*(x(1)^2-x(2))*x(1)+2*(x(1)-1);-200*(x(1)^2-x(2))];
x0=[0;0];
max_ite=200; %% maximum number of iterations
[x,val,fun_t] = conjugate_gradient(fun,gfun,x0,max_ite);
disp(x);
disp(val);
figure(1);
plot(1:max_ite,fun_t);
set(get(gca, 'XLabel'), 'String', 'number of iterations');
set(get(gca, 'YLabel'), 'String', 'function value');
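As a quick sanity check that the hand-coded gradient gfun matches fun, one can compare it with a central finite-difference approximation. This is a minimal sketch, assuming fun and gfun from the script above are in the workspace; the test point x_test and the step h are arbitrary, hypothetical choices:
x_test=[0.3;-0.7];                 % arbitrary test point (hypothetical choice)
h=1e-6;                            % finite-difference step
g_fd=zeros(2,1);
for i=1:2
    e=zeros(2,1); e(i)=h;
    g_fd(i)=(fun(x_test+e)-fun(x_test-e))/(2*h);  % central difference
end
disp(norm(g_fd-gfun(x_test)));     % should be close to zero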
result
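For this test function the global minimum value is 0, attained at \(x=(1,1)^T\); the returned \(x\) should therefore be close to \((1,1)^T\) and the plotted objective values should decrease toward 0.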
conclusion
The conjugate gradient method sits between gradient descent and Newton's method: it converges faster than linearly, requires only gradient information, and avoids computing second derivatives.
reference
《最优化方法及其MATLAB程序设计》 (Optimization Methods and Their MATLAB Implementation)