

Statistics for AI (Part 2)


Today I plan to cover the following topics: Linear independence, special matrices, and matrix decomposition.


Linear independence

A set of vectors is linearly independent if none of them can be written as a linear combination of the others. For example, V1 = (1,0) and V2 = (0,1) are linearly independent: V2 cannot be written in terms of V1. However, V3 = (3,4) is linearly dependent on them, since V3 = 3V1 + 4V2.

Mathematically, S = {V1, V2, …, Vn} is linearly independent if and only if the linear combination α1V1 + α2V2 + … + αnVn = 0 implies that all αi = 0.
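To make this concrete, here is a minimal NumPy sketch (the vectors are the V1, V2, V3 from the example above); the rank of the matrix whose columns are the vectors tells us whether they are independent:

```python
import numpy as np

# Columns are the example vectors V1=(1,0), V2=(0,1), V3=(3,4).
A = np.column_stack([(1, 0), (0, 1), (3, 4)])

# The columns are linearly independent iff the rank equals the number of columns.
print(np.linalg.matrix_rank(A))         # 2 -> the three vectors are dependent
print(np.linalg.matrix_rank(A[:, :2]))  # 2 -> V1 and V2 alone are independent
```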

Matrix operations

Matrices can transform one vector into another. For example, if A is an NxN matrix and v is an Nx1 vector, then w = Av is also an Nx1 vector.
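A tiny sketch of such a transformation (NumPy; the 2x2 matrix is an arbitrary choice for illustration):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])  # an arbitrary 2x2 matrix
v = np.array([1.0, 1.0])    # input vector

w = A @ v                   # matrix-vector product: w = Av
print(w)                    # [2. 4.]
```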

Trace of a matrix

The trace of a matrix is the sum of its diagonal elements. For a matrix A, the trace is the sum of all entries whose row index equals their column index.

Tr(A) = a11 + a22 + … + ann (trace of a matrix, image by author)

Some properties

  • Tr(A+B) = Tr(A) + Tr(B)
  • Tr(AB) = Tr(BA)
  • Tr(A) = Tr(A.T) (A.T means transpose of matrix A)
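These identities are easy to sanity-check numerically; a minimal NumPy sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))  # True
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))            # True
print(np.isclose(np.trace(A), np.trace(A.T)))                  # True
```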
The determinant of a matrix

The Laplace expansion for an NxN matrix is given by the following formula:

det(A) = Σ_j (−1)^(i+j) a_ij M_ij, expanding along any row i, where M_ij is the minor obtained by deleting row i and column j (the determinant of a matrix, image by author)

The determinant actually represents the volume formed by the column vectors. For a 2x2 matrix, it represents an area.

[Figure: interpretation of the two column vectors of a 2x2 matrix in space, image by author]
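A short numerical check of the area interpretation (the matrix below is an arbitrary example):

```python
import numpy as np

# Columns (2,0) and (1,3) span a parallelogram.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# |det(A)| equals the parallelogram's area: 2 * 3 = 6.
print(np.linalg.det(A))  # ~6.0
```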

Invertibility of a matrix

The inverse of a matrix A exists only if det(A) is not 0. Note that this automatically means the columns of A must be linearly independent. Consider the matrix below.

A = [V1 V2 … Vn] (matrix A with column vectors V1, …, Vn, image by author)

Note that V1, V2, …, Vn are the columns of A. If any one of them, say Vn, can be written as a linear combination of the rest, Vn = α1V1 + α2V2 + … + αn−1Vn−1, then a simple column operation (replace the last column with the last column minus α1V1 + α2V2 + … + αn−1Vn−1) yields a column full of zeros, which makes the determinant of the matrix 0. For a 2x2 matrix we have two column vectors V1 and V2. If they are linearly dependent, say V1 = 2V2, then the area formed by the two vectors is zero. A more geometric way to put this is that the two vectors are parallel to one another.
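A quick sketch of this in NumPy (the dependent columns are made up for the example):

```python
import numpy as np

# The second column is exactly 2x the first, so the columns are dependent.
A = np.array([[2.0, 4.0],
              [1.0, 2.0]])

print(np.linalg.det(A))  # 0.0 -> A is not invertible
# np.linalg.inv(A) would raise LinAlgError: "Singular matrix"
```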

Special matrices and vectors

  • Diagonal matrix: only the diagonal entries can be non-zero; all other elements are zero. D(i,j) = 0 if i is not equal to j.
  • Symmetric matrix: a matrix is symmetric if it is equal to its transpose.
  • Unit vector: a vector with unit length; its 2-norm is 1.
  • Orthogonal vectors: two vectors X and Y are orthogonal if (X.T)Y = 0.
  • Orthogonal matrix: if the transpose of a matrix equals its inverse, the matrix is orthogonal, and all of its columns are orthonormal. An orthogonal matrix can be used to rotate vectors while preserving volume (see the sketch after this list).
  • Orthonormal matrix: if the inverse of a matrix equals its transpose and its determinant is 1, the matrix is said to be orthonormal.
[Figures: an orthogonal matrix and an orthonormal matrix, images by author]
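As referenced in the list above, a minimal NumPy sketch of an orthogonal (rotation) matrix; the 45-degree angle is an arbitrary choice:

```python
import numpy as np

theta = np.pi / 4                       # 45-degree rotation, arbitrary choice
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Transpose equals inverse, and vector lengths are preserved.
print(np.allclose(Q.T @ Q, np.eye(2)))           # True
v = np.array([3.0, 4.0])
print(np.linalg.norm(v), np.linalg.norm(Q @ v))  # 5.0 5.0
print(np.linalg.det(Q))                          # ~1.0 (rotation preserves volume)
```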

Eigen decomposition

    Eigen decomposition is extremely useful for a square symmetric matrix. Let's look at the physical meaning of the term.


    Every real matrix can be thought of as a combination of rotation and stretching.


w = Av (the operation on vector v that generates vector w, images by author)

Here, A can be thought of as an operator that stretches and rotates a vector v to obtain a new vector w. The eigenvectors of a matrix are those special vectors that only stretch (without rotating) under the action of the matrix, and the eigenvalues are the factors by which the eigenvectors stretch. In the equation below, the eigenvector v is stretched by a factor of lambda when operated on by the matrix A.

Av = λv (eigenvalue λ of a vector v, image by author)

Say A has n linearly independent eigenvectors {V1, V2, …, Vn}. Concatenating these vectors as columns gives a single eigenvector matrix V = [V1, V2, …, Vn]. If we also collect the corresponding eigenvalues into a diagonal matrix, Λ = diag(λ1, λ2, …, λn), we get the eigendecomposition (factorization) of A as:

A = V Λ V^(−1) (eigendecomposition of A, image by author)
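A sketch of this factorization with NumPy (the 2x2 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])     # arbitrary example matrix

eigvals, V = np.linalg.eig(A)  # columns of V are the eigenvectors
Lambda = np.diag(eigvals)

# Reconstruct A = V @ Lambda @ V^(-1).
print(np.allclose(A, V @ Lambda @ np.linalg.inv(V)))  # True
```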

    Real symmetric matrices have real eigenvectors and real eigenvalues.


A = Q Λ (Q.T), with Q orthogonal (eigendecomposition of a real symmetric matrix, image by author)
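For symmetric matrices, a minimal sketch using NumPy's specialized routine (the matrix is again arbitrary):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # arbitrary real symmetric matrix

eigvals, Q = np.linalg.eigh(S)  # eigh is specialized for symmetric matrices
print(eigvals)                  # [1. 3.] -- real eigenvalues

# Q is orthogonal, so its inverse is its transpose: S = Q diag(eigvals) Q.T
print(np.allclose(S, Q @ np.diag(eigvals) @ Q.T))  # True
```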

Quadratic form and positive definite matrix

    The quadratic form can be interpreted as a ‘weighted’ length.


(x.T)Ax (the quadratic form, images by author)

A positive definite (PD) matrix has all eigenvalues greater than zero; a positive semi-definite (PSD) matrix has eigenvalues greater than or equal to zero. A PD matrix has the property that (X.T)AX > 0 for all non-zero X. For example, if A = I, the identity matrix, then (X.T)I(X) = (X.T)(X), which is greater than 0. A PSD matrix has the property that (X.T)AX ≥ 0 for all X. Similarly, a negative definite (ND) matrix has all eigenvalues less than zero, and a negative semi-definite (NSD) matrix has all eigenvalues less than or equal to zero.
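A compact sketch of checking definiteness via eigenvalues (the helper name is_positive_definite and the test matrix are made up for illustration):

```python
import numpy as np

def is_positive_definite(A: np.ndarray) -> bool:
    """Check positive definiteness of a symmetric matrix via its eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])    # eigenvalues 1 and 3 -> positive definite
print(is_positive_definite(A)) # True

x = np.array([0.5, -2.0])      # any non-zero x
print(x @ A @ x > 0)           # True: x^T A x > 0
```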

Singular value decomposition

    If A is an MxN matrix, then


A = U D (V.T) (singular value decomposition, image by author)
  • U is an MxM matrix and orthogonal
  • V is an NxN matrix and orthogonal
  • D is an MxN matrix and diagonal
  • The columns of U are the eigenvectors of A(A.T), called the left singular vectors
  • The columns of V are the eigenvectors of (A.T)A, called the right singular vectors
  • The non-zero elements of D are the square roots of the eigenvalues of (A.T)A, called the singular values (a numerical sketch follows this list)
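As mentioned in the last bullet, a short NumPy sketch of the decomposition (the 2x3 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])      # arbitrary 2x3 matrix

U, s, Vt = np.linalg.svd(A)           # s holds the singular values

# Singular values are the square roots of the eigenvalues of (A.T)A.
eigvals = np.linalg.eigvalsh(A.T @ A) # ascending order
print(np.allclose(np.sort(s**2), eigvals[-len(s):]))  # True

# Reconstruct A = U D (V.T) with D as an MxN diagonal matrix.
D = np.zeros_like(A)
np.fill_diagonal(D, s)
print(np.allclose(A, U @ D @ Vt))     # True
```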
End

    Thank you and stay tuned for more blogs on AI.


Translated from: https://towardsdatascience.com/statistics-for-ai-part-2-43d81986c87c
