The LM Algorithm Explained


1. The Gauss-Newton Method

殘差函數(shù)f(x)為非線性函數(shù),對(duì)其一階泰勒近似有:


這里的J是殘差函數(shù)f的雅可比矩陣,帶入損失函數(shù)的:

令其一階導(dǎo)等于0,得:


這就是論文里常看到的normal equation。

2. The LM (Levenberg-Marquardt) Method

LM improves on Gauss-Newton by introducing a damping factor λ into the linear system solved at each step:

(J^T J + λI) Δx = -J^T f

2.1 The role of the damping factor

The damping factor interpolates between two behaviors. When λ is large, λI dominates J^T J and the step approaches Δx ≈ -(1/λ) J^T f, a short step along the negative gradient (steepest descent), which is robust far from the minimum. When λ is small, the system approaches the Gauss-Newton normal equation, which converges quickly near the minimum. In addition, λ > 0 keeps J^T J + λI positive definite, so the linear system always has a unique solution.

2.2 Choosing the initial value of the damping factor

A simple strategy is to scale the initial value by the largest diagonal entry of J^T J:

λ₀ = τ · max_i (J^T J)_ii

where τ is chosen by the user: small (e.g. 10⁻⁶) if the initial estimate is believed to be close to the solution, larger (up to 1) otherwise.

2.3 Updating the damping factor

After solving the damped system for a candidate step Δx, compare the actual cost reduction with the reduction predicted by the linearized model via the gain ratio

ρ = ( F(x) - F(x + Δx) ) / ( L(0) - L(Δx) )

where the predicted reduction evaluates to L(0) - L(Δx) = ½ Δx^T (λ Δx + b) with b = -J^T f. The update rule used here (a variant of Nielsen's strategy, implemented in section 3.3 below) is:

- If ρ > 0, the cost actually decreased: accept the step, shrink the damping with λ ← λ · max(1/3, min(2/3, 1 - (2ρ - 1)³)), and reset ν = 2.
- Otherwise: reject the step, set λ ← λ · ν and ν ← 2ν, so the damping grows increasingly fast until an acceptable step is found.

3. Core Code Walkthrough

3.1 Building the H matrix

void Problem::MakeHessian() {
    TicToc t_h;
    // Build the full H matrix directly
    ulong size = ordering_generic_;
    MatXX H(MatXX::Zero(size, size));
    VecX b(VecX::Zero(size));

    // TODO:: accelerate, accelerate, accelerate
    //#ifdef USE_OPENMP
    //#pragma omp parallel for
    //#endif

    // Iterate over every residual (edge), compute its Jacobians, and
    // accumulate H = J^T * W * J and b = -J^T * W * f
    for (auto &edge : edges_) {
        edge.second->ComputeResidual();
        edge.second->ComputeJacobians();

        auto jacobians = edge.second->Jacobians();
        auto verticies = edge.second->Verticies();
        assert(jacobians.size() == verticies.size());

        for (size_t i = 0; i < verticies.size(); ++i) {
            auto v_i = verticies[i];
            if (v_i->IsFixed()) continue;    // a fixed vertex contributes nothing: its Jacobian is 0

            auto jacobian_i = jacobians[i];
            ulong index_i = v_i->OrderingId();
            ulong dim_i = v_i->LocalDimension();

            MatXX JtW = jacobian_i.transpose() * edge.second->Information();
            for (size_t j = i; j < verticies.size(); ++j) {
                auto v_j = verticies[j];
                if (v_j->IsFixed()) continue;

                auto jacobian_j = jacobians[j];
                ulong index_j = v_j->OrderingId();
                ulong dim_j = v_j->LocalDimension();
                assert(v_j->OrderingId() != -1);

                MatXX hessian = JtW * jacobian_j;
                // Accumulate all the information-matrix blocks into the big H
                H.block(index_i, index_j, dim_i, dim_j).noalias() += hessian;
                if (j != i) {
                    // Mirror into the symmetric lower triangle
                    H.block(index_j, index_i, dim_j, dim_i).noalias() += hessian.transpose();
                }
            }
            b.segment(index_i, dim_i).noalias() -= JtW * edge.second->Residual();
        }
    }
    Hessian_ = H;
    b_ = b;
    t_hessian_cost_ += t_h.toc();

    delta_x_ = VecX::Zero(size);  // initial delta_x = 0_n
}

3.2 Adding the damping factor to the assembled H matrix

void Problem::AddLambdatoHessianLM() {
    ulong size = Hessian_.cols();
    assert(Hessian_.rows() == Hessian_.cols() && "Hessian is not square");
    for (ulong i = 0; i < size; ++i) {
        Hessian_(i, i) += currentLambda_;
    }
}
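The walkthrough does not show the solve step itself. Between 3.2 and 3.3, something like the following has to happen. This is a hedged sketch assuming a dense Eigen LDLT factorization and a helper that undoes 3.2 so a different lambda can be tried after a rejected step; both function bodies are assumptions, not quoted from the article:

// Illustrative sketch: solve (H + lambda * I) * delta_x = b.
void Problem::SolveLinearSystem() {
    delta_x_ = Hessian_.ldlt().solve(b_);
}

// Illustrative sketch: subtract the lambda added in 3.2, restoring H so the
// next attempt can add a different lambda.
void Problem::RemoveLambdaHessianLM() {
    ulong size = Hessian_.cols();
    for (ulong i = 0; i < size; ++i) {
        Hessian_(i, i) -= currentLambda_;
    }
}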

3.3 Checking whether the solved step is acceptable (this code implements the damping-factor update strategy from 2.3)

bool Problem::IsGoodStepInLM() {
    double scale = 0;
    // Predicted reduction of the linearized model: delta_x^T (lambda * delta_x + b)
    // (the constant 1/2 from the derivation in 2.3 is omitted here)
    scale = delta_x_.transpose() * (currentLambda_ * delta_x_ + b_);
    scale += 1e-3;    // make sure it's non-zero :)

    // Recompute the total residual after the state update
    double tempChi = 0.0;
    for (auto &edge : edges_) {
        edge.second->ComputeResidual();
        tempChi += edge.second->Chi2();
    }

    double rho = (currentChi_ - tempChi) / scale;
    if (rho > 0 && std::isfinite(tempChi)) {
        // Last step was good: the error decreased, so accept it and shrink lambda
        double alpha = 1. - pow((2 * rho - 1), 3);
        alpha = std::min(alpha, 2. / 3.);
        double scaleFactor = (std::max)(1. / 3., alpha);
        currentLambda_ *= scaleFactor;
        ni_ = 2;
        currentChi_ = tempChi;
        return true;
    } else {
        // Reject the step and grow lambda increasingly fast
        currentLambda_ *= ni_;
        ni_ *= 2;
        return false;
    }
}
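For context, here is a hedged sketch of how the pieces above chain together in one outer LM iteration. UpdateStates() and RollbackStates() (apply and undo delta_x_ on the vertices) are assumed helpers, and the retry limit is arbitrary:

// Illustrative sketch of one outer LM iteration, not the verbatim solver.
void Problem::OneLMIteration() {
    MakeHessian();                            // 3.1: H = J^T W J, b = -J^T W f
    bool oneStepSuccess = false;
    int falseCnt = 0;
    while (!oneStepSuccess && falseCnt < 10) {
        AddLambdatoHessianLM();               // 3.2: H + lambda * I
        SolveLinearSystem();                  // solve for delta_x_ (sketched above)
        RemoveLambdaHessianLM();              // restore H before the next attempt
        UpdateStates();                       // assumed helper: apply delta_x_
        oneStepSuccess = IsGoodStepInLM();    // 3.3: accept or reject, update lambda
        if (!oneStepSuccess) {
            RollbackStates();                 // assumed helper: undo the update
            ++falseCnt;
        }
    }
}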

