RPCA Study Notes


It has been a long time since I last wrote study notes. At the start of the year I was busy with exams, then with courses and grading homework; having just settled down after returning home, I read the paper a friend gave me last semester. Wrestling with the math these past few days has felt great, so I am sharing the notes here and hope you find something useful in them. Following Jeffrey's suggestion, I forced myself to write the notes in English, which may make them less convenient to read; I hope you will bear with me, since reading more English material never hurts. Thanks also to the friend who worked through some of the "easily achieved" steps from the papers with me in the handwritten part of these notes. These notes are still immature, and I am new to Robust PCA, so critical comments are very welcome.


Robust PCA

Rachel Zhang


1. RPCA Brief Introduction

1. Why use Robust PCA?

It solves decomposition problems corrupted by spike noise of high magnitude, rather than by the small Gaussian-distributed noise assumed in classical PCA.


2. Main Problem

Given C = A* + B*, where A* is a sparse spike-noise matrix and B* is a low-rank matrix, the aim is to recover B*.

B* = UΣV', in which U ∈ R^{n×k}, Σ ∈ R^{k×k}, V ∈ R^{n×k}.
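To make this setup concrete, here is a minimal numpy sketch (all names are mine and purely illustrative) that builds such a C from a random rank-k B* plus a sparse matrix A* of large spikes:

import numpy as np

n, k, m = 100, 5, 200                      # matrix size, rank, number of spikes
rng = np.random.default_rng(0)

# low-rank part B* = U V' with U in R^{n x k}, V' in R^{k x n}
B = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

# sparse spike part A*: m large-magnitude entries at random locations
A = np.zeros((n, n))
idx = rng.choice(n * n, size=m, replace=False)
A.flat[idx] = 10 * rng.standard_normal(m)

C = A + B                                  # the observed matrix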


3. Difference from PCA

Both PCA and Robust PCA aim at matrix decomposition. However:

In PCA, M = L0 + N0, where L0 is a low-rank matrix and N0 is a small i.i.d. Gaussian noise matrix. PCA seeks the best rank-k estimate of L0 by minimizing ||M - L||_2 subject to rank(L) ≤ k. This problem can be solved by SVD.
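This classical estimate is a few lines of numpy; a sketch (the function name is mine):

import numpy as np

def best_rank_k(M, k):
    # Eckart-Young: the truncated SVD gives the best rank-k approximation
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]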

In RPCA, M = L0 + S0, where L0 is a low-rank matrix and S0 is a sparse spike-noise matrix. We will develop the solution in the following sections.




2. Conditions for correct decomposition

4. Ill-posed problem:

Suppose the sparse matrix A* and B* = e_i e_j^T are a solution of this decomposition problem.

1) With the assumption that B* is not only low rank but also sparse, another valid sparse-plus-low-rank decomposition might be A_1 = A* + e_i e_j^T and B_1 = 0. Therefore, we need an appropriate notion of low rank that ensures B* is not too sparse. Conditions will be imposed later that require the spaces spanned by the singular vectors U and V (i.e., the row and column spaces of B*) to be "incoherent" with the standard basis.

2) Similarly, suppose A* is sparse as well as low rank (e.g., the first column of A* is non-zero while all other columns are 0; then A* has rank 1 and is sparse). Another valid decomposition might be A_2 = 0, B_2 = A* + B* (here rank(B_2) ≤ rank(B*) + 1). Thus we need the restriction that the sparse matrix is not low rank, i.e., we assume each row/column does not have too many non-zero entries (no dense rows/columns), to avoid such issues.



5. Conditions for exact recovery / decomposition:

If A* and B* are drawn from the following classes, then we have exact recovery with high probability [1].

1) For the low-rank matrix L --- random orthogonal model [Candes and Recht 2008]:

A rank-k matrix B* with SVD B* = UΣV' is constructed in this way: the singular vectors U, V ∈ R^{n×k} are drawn uniformly at random from the collection of rank-k partial isometries in R^{n×k}. The singular vectors in U and V need not be mutually independent. No restriction is placed on the singular values.

2) For the sparse matrix S --- random sparsity model:

The matrix A* is such that support(A*) is chosen uniformly at random from the collection of all support sets of size m. No assumption is made about the values of A* at the locations specified by support(A*).

[support(M)]: the locations of the non-zero entries in M.

More recently, [2] improved on these conditions and yields the "best" known conditions.




3. Recovery Algorithms

6. Formulation

Consider the decomposition D = A + E, in which A is low rank and the error E is sparse.

1) The intuitive proposal is

min rank(A) + γ||E||_0,    (1)

However, (1) is non-convex and thus intractable (both terms are NP-hard even to approximate).

2) Relax the L0-norm to the L1-norm and replace rank with the nuclear norm:

min ||A||_* + λ||E||_1,  where ||A||_* = Σ_i σ_i(A)    (2)

This problem is convex, and under mild conditions it admits a unique minimizer.

Reason: this relaxation is motivated by observing that ||A||_* + λ||E||_1 is the convex envelope of rank(A) + γ||E||_0 over the set of (A, E) such that max(||A||_2, ||E||_∞) ≤ 1.

Moreover, there might be circumstances under which (2) perfectly recovers the low-rank matrix A_0; [3] shows this is indeed true under surprisingly broad conditions.
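For small matrices, (2) can be handed directly to a generic convex solver. A sketch using cvxpy (illustrative only; [2] suggests λ = 1/sqrt(max(m, n)) for an m×n matrix):

import cvxpy as cp

def pcp(D, lam):
    # minimize ||A||_* + lam * ||E||_1  subject to  A + E = D
    A = cp.Variable(D.shape)
    E = cp.Variable(D.shape)
    prob = cp.Problem(cp.Minimize(cp.normNuc(A) + lam * cp.sum(cp.abs(E))),
                      [A + E == D])
    prob.solve()
    return A.value, E.value

This is convenient but scales poorly, which is why the first-order methods below matter.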



7. RPCA Optimization Algorithms

We proceed in two different ways. The first approach uses a first-order method to solve the primal problem directly (e.g., Proximal Gradient or Accelerated Proximal Gradient (APG)); the computational bottleneck of each iteration is an SVD computation. The second approach is to formulate and solve the dual problem, and then retrieve the primal solution from the dual optimal solution. The dual problem to RPCA can be written as:

max_Y trace(D^T Y),  subject to J(Y) ≤ 1

where J(Y) = max(||Y||_2, λ^{-1}||Y||_∞); here ||Y||_2 is the spectral norm and ||Y||_∞ is the largest absolute value among the entries of Y. This dual problem can be solved by constrained steepest ascent.

Now let us discuss the Augmented Lagrange Multiplier (ALM) method and the Alternating Directions Method (ADM) [2, 4].


7.1. General method of ALM

For the optimization problem

min f(X),  subject to h(X) = 0    (3)

we can define the augmented Lagrange function:

L(X, Y, μ) = f(X) + <Y, h(X)> + (μ/2)||h(X)||_F^2    (4)

where Y is a Lagrange multiplier and μ is a positive scalar.

The general method of ALM is:


A generic Lagrange multiplier algorithm would solve PCP (principal component pursuit) by repeatedly setting X_k = argmin_X L(X, Y_k, μ), and then updating the Lagrange multiplier matrix via Y_{k+1} = Y_k + μ·h(X_k).
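In code, the generic scheme is just a loop. A sketch (the inner argmin is left abstract, and the geometric μ schedule is a common heuristic rather than part of the definition):

def alm(argmin_L, h, Y, mu, rho=1.5, iters=100):
    # argmin_L(Y, mu) returns argmin_X L(X, Y, mu) for the current multiplier
    for _ in range(iters):
        X = argmin_L(Y, mu)       # primal step on the augmented Lagrangian
        Y = Y + mu * h(X)         # dual ascent on the multiplier
        mu = rho * mu             # optionally increase the penalty
    return X, Y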


7.2 ALM algorithm for RPCA

In RPCA, we can define    (5)

X = (A, E),  f(X) = ||A||_* + λ||E||_1,  and h(X) = D - A - E

Then the Lagrange function is    (6)

L(A, E, Y, μ) = ||A||_* + λ||E||_1 + <Y, D - A - E> + (μ/2)||D - A - E||_F^2

The optimization flow is just like the general ALM method. The initialization Y = Y_0* is inspired by the dual problem, as it is likely to make the objective function value <D, Y_0*> reasonably large.


Theorem 1. For Algorithm 4, any accumulation point (A*, E*) of (A_k*, E_k*) is an optimal solution to the RPCA problem, and the convergence rate is at least O(μ_k^{-1}) [5].



In this RPCA algorithm, an alternating iteration strategy is adopted. Since the optimization process may be confusing, we first state two well-known facts, (7) and (8):


S_ε[W] = argmin_X ε||X||_1 + (1/2)||X - W||_F^2    (7)

U S_ε[Σ] V^T = argmin_X ε||X||_* + (1/2)||X - W||_F^2,  where W = UΣV^T is the SVD of W    (8)

which are used in the above algorithm to optimize one variable while fixing the other. In these solutions, S_ε[·] is the soft-thresholding (shrinkage) operator. Below I sketch why this holds. (To make the formulas and plots easier to write, I used a handwritten manuscript for that part.)




By the way, S_u(·) is easily implemented in two lines:

S_u(X) = max(X - u, 0);
S_u(X) = S_u(X) + min(X + u, 0);   % elementwise; equivalent to sign(X) .* max(abs(X) - u, 0)
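The singular value thresholding operator needed for (8) is equally short in numpy. A sketch (the names S and D_op are mine):

import numpy as np

def S(u, X):
    # elementwise soft thresholding, same as the two lines above
    return np.maximum(X - u, 0) + np.minimum(X + u, 0)

def D_op(u, X):
    # singular value thresholding: soft-threshold the singular values of X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * S(u, s)) @ Vt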








Now we apply facts (7) and (8) to the RPCA problem.

To optimize the objective function (6) with respect to E, we can drop the terms unrelated to E and rewrite it as:

f(E) = λ||E||_1 + <Y, D-A-E> + (μ/2)||D-A-E||_F^2

     = λ||E||_1 + (μ/2)(||D-A-E||_F^2 + 2<μ^{-1}Y, D-A-E> + ||μ^{-1}Y||_F^2) + const   // complete the square with a term irrelevant to E

     = λ||E||_1 + (μ/2)||D - A - E + μ^{-1}Y||_F^2 + const

     = μ·[(λ/μ)||E||_1 + (1/2)||E - (D - A + μ^{-1}Y)||_F^2] + const   // now in the form of (7)

This is exactly the form of (7), so in the optimization step for E we have

E = S_{λ/μ}[D - A + μ^{-1}Y],

the same as in Algorithm 4.

Similarly, minimizing (6) with respect to A, one can show the update is A = U S_{1/μ}[Σ] V^T, where UΣV^T is the SVD of D - E + μ^{-1}Y.
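Putting the two updates together gives the whole solver. Below is a sketch of the inexact-ALM iteration as I understand it from [5], with λ = 1/sqrt(max(m, n)) from [2]; the initial μ, the dual-feasible start, and the geometric schedule ρ are heuristics from the literature, not derived here:

import numpy as np

def soft(u, X):
    # elementwise soft thresholding, fact (7)
    return np.maximum(X - u, 0) + np.minimum(X + u, 0)

def rpca_alm(D, rho=1.5, tol=1e-7, iters=500):
    D = np.asarray(D, dtype=float)
    m, n = D.shape
    lam = 1 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)                           # heuristic initial penalty
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)   # dual-feasible start, cf. J(Y)
    A, E = np.zeros_like(D), np.zeros_like(D)
    for _ in range(iters):
        # A-step: singular value thresholding of D - E + Y/mu, fact (8)
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * soft(1 / mu, s)) @ Vt
        # E-step: elementwise shrinkage of D - A + Y/mu, fact (7)
        E = soft(lam / mu, D - A + Y / mu)
        Y = Y + mu * (D - A - E)    # dual update
        mu = rho * mu               # penalty schedule
        if np.linalg.norm(D - A - E, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return A, E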


8. Experiments

Here I tested on video data captured by a fixed camera. The scene is the same most of the time, so the moving part can be regarded as the error E, while the stationary/invariant part serves as the low-rank matrix A. The picture below shows the result: as a person walks in, the error matrix picks up the corresponding values. The two subplots represent the low-rank matrix and the sparse matrix, respectively.
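For reference, the experiment boils down to stacking each frame as a column of D and splitting it with the solver sketched above (a sketch; frame loading and shapes are hypothetical, and rpca_alm is the function from the previous sketch):

import numpy as np

def separate_video(frames):                  # frames: list of (h, w) grayscale arrays
    h, w = frames[0].shape
    D = np.stack([f.ravel() for f in frames], axis=1).astype(float)  # one frame per column
    A, E = rpca_alm(D)                       # low-rank A: background, sparse E: motion
    background = A[:, 0].reshape(h, w)
    foreground = [E[:, i].reshape(h, w) for i in range(E.shape[1])]
    return background, foreground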




9. References

1) E. J. Candes and B. Recht. Exact matrix completion via convex optimization. Submitted for publication, 2008.

2) E. J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis. Submitted for publication, 2009.

3) J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In NIPS, 2009.

4) X. Yuan and J. Yang. Sparse and low-rank matrix decomposition via alternating direction methods. Preprint, 2009.

5) Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Mathematical Programming, submitted, 2009.

6) Generalized power method for sparse principal component analysis.






These notes are still immature; I hope you will offer your valuable comments.

More learning materials and discussions on Machine Learning will continue to be posted; please follow this blog and my Sina Weibo, Rachel____Zhang.

