Unsupervised Learning - Part 1
FAU Lecture Notes on Deep Learning
These are the lecture notes for FAU’s YouTube Lecture “Deep Learning”. This is a full transcript of the lecture video & matching slides. We hope you enjoy this as much as the videos. Of course, this transcript was created with deep learning techniques largely automatically and only minor manual modifications were performed. Try it yourself! If you spot mistakes, please let us know!
Navigation
Previous Lecture / Watch this Video / Top Level / Next Lecture
Welcome back to deep learning! So today, we want to talk about unsupervised methods and in particular, we will focus on autoencoders and GANs in the next couple of videos. We will start today with the basics, the motivation, and look into one of the rather historical methods — the restricted Boltzmann machines. We still mention them here, because they are kind of important in terms of the developments towards unsupervised learning.
Image under CC BY 4.0 from the Deep Learning Lecture.

So, let’s see what I have here for you. The main topic, as I said, is unsupervised learning. Of course, we start with our motivation. You see that the data sets we’ve seen so far were huge: they had up to millions of different training observations, many objects, and in particular only a few modalities. Most of the things we’ve looked at were essentially camera images. There may have been different cameras that were used, but typically only one or two modalities within one single dataset. However, this is not generally the case. For example, in medical imaging, you typically have very small data sets, maybe 30 to 100 patients. You have only one complex object, the human body, and many different modalities from MR and X-ray to ultrasound. All of them have a very different appearance, which means that they also have different requirements in terms of their processing. So why is this the case? Well, in Germany, we actually have 65 CT scans per thousand inhabitants. This means that in 2014 alone, we had five million CT scans in Germany. So, there should be plenty of data. Why can’t we use all of this data? Well, these data are, of course, sensitive and they contain patient health information. For example, if a CT scan contains the head, then you can render the surface of the face and you can even use an automatic system to determine the identity of this person. There are also non-obvious cues. For example, the surface of the brain is actually characteristic for a certain person. You can identify persons by the shape of their brain with an accuracy of up to 99 percent. So, you see that this is indeed highly sensitive data. If you share whole volumes, people may be able to identify the person, although you may argue that it is difficult to identify a person from a single slice image. So, there are some trends to make data like this available. But still, even if you have the data, you need labels. So, you need experts who look at the data and tell you what kind of disease is present, which anatomical structure is where, and so on. This is also very expensive to obtain.
Image under CC BY 4.0 from the Deep Learning Lecture.

So, it would be great if we had methods that could work with very few annotations or even no annotations. I have some examples here that go in this direction. One trend is weakly supervised learning. Here, you have a label for a related task. The example that we show here is localization from the class label. So let’s say you have images and you have classes like brushing teeth or cutting trees. Then, you can use these plus the associated gradient information, for example using visualization mechanisms, and you can localize the class in that particular image. This is a way to get very cheap labels, for example for bounding boxes. There are also semi-supervised techniques where you have very little labeled data and you try to apply it to a larger data set. The typical approach here would be something like bootstrapping: You create a weak classifier from a small labeled data set. Then, you apply it to a large data set and you try to estimate which of the data points in that large data set have been classified reliably. Next, you take the reliably classified points into a new training set and, with the new training set, you start over again and build a new system. Finally, you iterate until you have a better system. A minimal sketch of this idea is shown below.
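As a rough illustration of the bootstrapping idea, here is a minimal self-training sketch in Python with scikit-learn. The logistic-regression base classifier, the confidence threshold, and the number of rounds are arbitrary choices for illustration; they are not taken from the lecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_labeled, y_labeled, X_unlabeled,
                  confidence=0.95, rounds=5):
    """Bootstrapping: iteratively adopt confident pseudo-labels."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        reliable = proba.max(axis=1) >= confidence      # "reliably classified"
        if not reliable.any():
            break
        pseudo = clf.classes_[proba[reliable].argmax(axis=1)]
        # Move the confidently labeled points into the training set and retrain.
        X_train = np.vstack([X_train, pool[reliable]])
        y_train = np.concatenate([y_train, pseudo])
        pool = pool[~reliable]
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf
```

In practice, the confidence threshold controls the trade-off between how many pseudo-labels you adopt and how much label noise you inject into the new training set.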
Image under CC BY 4.0 from the Deep Learning Lecture.

Of course, there are also unsupervised techniques where you don’t need any labeled data. This will be the main topic of the next couple of videos. So let’s have a look at label-free learning. One typical application here is dimensionality reduction. Here, you have an example where the data lives in a high-dimensional space. We have a 3-D space. Actually, we’re just showing you one slice through this 3-D space. You see that the data is rolled up, and we identify similar points by similar colors in this image. You can see this 3-D manifold that is often called the Swiss roll. Now, the Swiss roll actually doesn’t need a 3-D representation. So, what you would like to do is unroll it automatically. You see that here on the right-hand side, the dimensionality is reduced, so you only have two dimensions. This has been done automatically using a nonlinear manifold learning or dimensionality reduction technique. With these nonlinear methods, you can map data sets down to a lower dimensionality. This is useful because the lower-dimensional representation is supposed to carry all the information that you need, and you can now use it as a kind of representation. A small sketch of such an unrolling is given below.
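To make this concrete, here is a minimal sketch that unrolls a Swiss roll with Isomap from scikit-learn. The lecture does not name a specific algorithm, so Isomap and the parameter values here are just one plausible choice for illustration.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Sample points from the 3-D "Swiss roll" manifold; t encodes the position
# along the roll and is what gives similar points a similar color in plots.
X, t = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)

# Nonlinear dimensionality reduction: unroll the 3-D manifold into 2-D.
embedding = Isomap(n_neighbors=12, n_components=2)
X_2d = embedding.fit_transform(X)

print(X.shape, "->", X_2d.shape)   # (2000, 3) -> (2000, 2)
```

Plotting X_2d colored by t should show the roll flattened out, similar to the right-hand side of the slide.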
Image under CC BY 4.0 from the Deep Learning Lecture.

What we’ll also see in the next couple of videos is that you can use this, for example, as network initialization. You already see the first autoencoder structure here. You train such a network with a bottleneck where you have a low-dimensional representation. Later, you take this low-dimensional representation and repurpose it. This means that you essentially remove the right-hand part of the network and replace it with a different one. Here, we use it for classification, and again our example is classifying cats and dogs. So, you can already see that if we are able to do such a dimensionality reduction and preserve the original information in a low-dimensional space, then we potentially have fewer weights to work with when approaching a classification task. By the way, this is very similar to what we have already discussed when talking about transfer learning techniques. A hedged sketch of this pretrain-then-repurpose idea follows below.
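As an illustration of this idea, the following PyTorch sketch pretrains a small fully connected autoencoder and then reuses its encoder, with a fresh classification head, for a downstream task. The layer sizes, the 784-dimensional input, and the two-class cats-vs-dogs head are illustrative assumptions, not the architecture from the slides.

```python
import torch
import torch.nn as nn

# Unsupervised pretraining: encoder -> bottleneck -> decoder, trained to reconstruct x.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
autoencoder = nn.Sequential(encoder, decoder)

recon_loss = nn.MSELoss()
ae_opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

def pretrain_step(x):
    """One reconstruction step on unlabeled data x of shape (batch, 784)."""
    ae_opt.zero_grad()
    loss = recon_loss(autoencoder(x), x)
    loss.backward()
    ae_opt.step()
    return loss.item()

# Repurposing: drop the decoder, keep the pretrained encoder, and attach a
# small classification head (here: 2 classes, e.g. cats vs. dogs).
classifier = nn.Sequential(encoder, nn.Linear(32, 2))
clf_loss = nn.CrossEntropyLoss()
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)

def finetune_step(x, y):
    """One supervised step on the few labeled examples that are available."""
    clf_opt.zero_grad()
    loss = clf_loss(classifier(x), y)
    loss.backward()
    clf_opt.step()
    return loss.item()
```

Only the fine-tuning step needs class labels; the pretraining runs on unlabeled data, which is where the label efficiency comes from.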
Image under CC BY 4.0 from the Deep Learning Lecture.

You can also use this for clustering, and you have already seen that. We used this technique in the chapter on visualization, where we had this very nice dimensionality reduction and we zoomed in and looked at the different places here.
Image under CC BY 4.0 from the Deep Learning Lecture.

You’ve seen that if you have a good learning method that extracts a good representation, then you can also use it to identify similar images in such a low-dimensional space. Well, this can also be used for generative models. Here, the task is to generate realistic images. You can tackle, for example, missing data problems with this. This then leads into semi-supervised learning, where you can also use it, for example, for augmentation. You can also use it for image-to-image translation, which is also a very cool application. We will later see the so-called CycleGAN, where you can really do a domain translation. You can also use this to simulate possible futures in reinforcement learning. So, we have all kinds of interesting domains where we could apply these unsupervised techniques as well. Here are some examples of data generation: You train with the images on the left-hand side and then you generate images like the ones on the right-hand side. This would be an appealing thing to do. You could generate images that look like real observations.
Image under CC BY 4.0 from the Deep Learning Lecture.

So today, we will talk about restricted Boltzmann machines. As already indicated, they are historically important, but, honestly, nowadays they are not so commonly used anymore. They have been part of the big breakthroughs that we’ve seen earlier, for example in Google’s DeepDream. So, I think you should know about these techniques.
Dreams of MNIST. Image created using gifify. Source: YouTube

Later, we’ll talk about autoencoders, which are essentially an emerging technology and kind of similar to the restricted Boltzmann machines. You can use them in a feed-forward network context. You can use them for nonlinear dimensionality reduction and even extend this to generative models like the variational autoencoder, which is also a pretty cool trick. Lastly, we will talk about generative adversarial networks, which are currently probably the most widely used generative models. There are many applications of this very general concept. You can use it in image segmentation, reconstruction, semi-supervised learning, and many more.
Image under CC BY 4.0 from the Deep Learning Lecture.

But let’s first look at the historical perspective. Probably these historical things like restricted Boltzmann machines are not so important if you encounter an exam with me at some point. Still, I think you should know about this technique. Now, the idea is a very simple one. You start with two sets of nodes: one consists of the visible units and the other one of the hidden units, and they are connected. So, you have the visible units v and they represent the observed data. Then, you have the hidden units h that capture the dependencies. They are latent variables and they are supposed to be binary, i.e., zeros and ones. Now, what can we do with this bipartite graph?
Image under CC BY 4.0 from the Deep Learning Lecture.

Well, you can see that the restricted Boltzmann machine is based on an energy model with a joint probability function p(v, h). It is defined in terms of an energy function, and this energy function is used inside the probability. So, you have 1/Z, which is a normalization constant, times e to the power of -E(v, h). The energy function E(v, h) that we are defining here is essentially an inner product of a bias b with v, plus an inner product of another bias c with h, plus an inner product of v and h that is weighted with the matrix W, all with a negative sign; the formulas are written out below. So, you can see that the unknowns here essentially are b, c, and the matrix W. This probability density function is called the Boltzmann distribution. It’s closely related to the softmax function. Remember that this is not simply a fully connected layer, because it’s not feed-forward. You feed v into the restricted Boltzmann machine, you determine h, and from h you can then produce v again. So, the hidden layer models the input layer in a stochastic manner and is trained unsupervised.
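Written out (following the standard RBM definition, which should match what the slide shows), the joint distribution and energy are:

```latex
p(\mathbf{v}, \mathbf{h}) = \frac{1}{Z}\, e^{-E(\mathbf{v}, \mathbf{h})},
\qquad
Z = \sum_{\mathbf{v}, \mathbf{h}} e^{-E(\mathbf{v}, \mathbf{h})},
\qquad
E(\mathbf{v}, \mathbf{h}) = -\mathbf{b}^{T}\mathbf{v} - \mathbf{c}^{T}\mathbf{h} - \mathbf{v}^{T}\mathbf{W}\mathbf{h}
```

A low energy state thus corresponds to a high probability, which is exactly what the training described below tries to achieve for the observed data.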
Image under CC BY 4.0 from the Deep Learning Lecture.

So let’s look into some details here. The visible and hidden units form this bipartite graph, as I already mentioned. You could argue that our RBMs are Markov random fields with hidden variables. Then, we want to find W such that our probability is high for low energy states and vice versa. The learning is based on gradient descent on the negative log-likelihood. So, we start with the log-likelihood, and you can see there is a small mistake on this slide: we are missing a log in front of the sum over p(v, h). We already fixed that in the next line, where we have the logarithm of 1/Z and of the sum of the exponential functions. Now, we can use the definition of Z and expand it. This allows us to write this multiplication as a second logarithmic term. Because it is 1/Z, it becomes minus the log of the definition of Z, which is the sum over v and h of the exponential function of -E(v, h). Now, if we look at the gradient, you can see that the full derivation is given in [5]. What you essentially get are two sums, written out below: one is the sum over p(h | v) times the negative partial derivative of the energy function with respect to the parameters, minus the sum over v and h of p(v, h) times the negative partial derivative of the energy function with respect to the parameters. Again, you can interpret those two terms as the expected value under the data and the expected value under the model. Generally, the expected value under the model is intractable, but you can approximate it with the so-called contrastive divergence.
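As a reconstruction of the derivation sketched above (the full derivation is in [5]), the log-likelihood of a visible vector and its gradient with respect to the parameters θ read:

```latex
\log p(\mathbf{v}) = \log \sum_{\mathbf{h}} e^{-E(\mathbf{v}, \mathbf{h})}
\;-\; \log \sum_{\mathbf{v}', \mathbf{h}} e^{-E(\mathbf{v}', \mathbf{h})}

\frac{\partial \log p(\mathbf{v})}{\partial \theta} =
\sum_{\mathbf{h}} p(\mathbf{h} \mid \mathbf{v})
\left( -\frac{\partial E(\mathbf{v}, \mathbf{h})}{\partial \theta} \right)
\;-\;
\sum_{\mathbf{v}', \mathbf{h}} p(\mathbf{v}', \mathbf{h})
\left( -\frac{\partial E(\mathbf{v}', \mathbf{h})}{\partial \theta} \right)
```

The first sum is the expectation under the data, the second one the expectation under the model; the latter runs over all configurations and is therefore intractable.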
Image under CC BY 4.0 from the Deep Learning Lecture.

Now, contrastive divergence works the following way: You take any training example as v. Then, you set the binary states of the hidden units h by computing the sigmoid function of the weighted sum over v plus the biases, which gives you essentially the probabilities of your hidden units. Then, you can run k Gibbs sampling steps, where you sample the reconstruction v tilde by computing the probabilities p(v subscript j = 1 | h), again via the sigmoid function over the weighted sum of h plus the biases. So, you are using the hidden units that you have computed in the second step. You can then use this to sample the reconstruction v tilde, which in turn allows you to resample h tilde. So, you run this for a couple of steps, and if you did so, then you can actually compute the gradient updates: the update for the matrix W is given by η times (v h transpose minus v tilde h tilde transpose), the update for the bias b is given as η times (v minus v tilde), and the update for the bias c is given as η times (h minus h tilde). This allows you to update the weights, and this way you can start computing the appropriate weights and biases. The more iterations of Gibbs sampling you run, the less biased the estimate of the gradients will be. In practice, k is simply chosen as one; a small sketch of such a CD-1 step follows below.
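Here is a minimal NumPy sketch of one CD-1 update as described above. The details of when to use binary samples versus probabilities follow common practice and are an assumption on my side, not taken verbatim from the slides.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v, W, b, c, eta=0.01, rng=np.random.default_rng(0)):
    """One contrastive divergence (k = 1) update for a binary RBM.

    v: (n_visible,) binary training example,
    W: (n_visible, n_hidden) weights, b: visible bias, c: hidden bias.
    """
    # Positive phase: hidden probabilities and binary hidden states given v.
    p_h = sigmoid(c + v @ W)
    h = (rng.random(p_h.shape) < p_h).astype(float)

    # One Gibbs step: reconstruct v~, then recompute the hidden probabilities.
    p_v_tilde = sigmoid(b + W @ h)
    v_tilde = (rng.random(p_v_tilde.shape) < p_v_tilde).astype(float)
    p_h_tilde = sigmoid(c + v_tilde @ W)

    # Gradient updates: data statistics minus reconstruction statistics.
    W += eta * (np.outer(v, p_h) - np.outer(v_tilde, p_h_tilde))
    b += eta * (v - v_tilde)
    c += eta * (p_h - p_h_tilde)
    return W, b, c
```

Looping this over a data set of binary vectors trains the RBM unsupervised; only v is ever observed, h is never given.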
Image under CC BY 4.0 from the Deep Learning Lecture.

You can expand on this into a deep belief network. The idea here is that you stack layers on top again. The idea of deep learning is layers upon layers, so we need to go deeper, and here we have one restricted Boltzmann machine on top of another restricted Boltzmann machine. You can then use this to create really deep networks; a rough sketch of this greedy stacking is given below. One additional trick you can use is to take, for example, the last layer and fine-tune it for a classification task.
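As a rough sketch of this greedy layer-wise pretraining, the following reuses sigmoid and cd1_step from the CD-1 example above; the layer sizes and the number of epochs are arbitrary illustrative choices.

```python
import numpy as np

def train_rbm(data, n_hidden, epochs=10, eta=0.01, rng=np.random.default_rng(0)):
    """Train a single RBM with CD-1 on `data` of shape (n_samples, n_visible)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b = np.zeros(n_visible)
    c = np.zeros(n_hidden)
    for _ in range(epochs):
        for v in data:
            W, b, c = cd1_step(v, W, b, c, eta=eta, rng=rng)
    return W, b, c

# Greedy stacking: train the first RBM on the raw data, then use its hidden
# activations as the "visible" data for the next RBM, and so on.
# X = ... binary training data of shape (n_samples, 784)
# W1, b1, c1 = train_rbm(X, n_hidden=256)
# H1 = sigmoid(c1 + X @ W1)          # representation learned by the first RBM
# W2, b2, c2 = train_rbm(H1, n_hidden=64)
```

The stacked hidden layers can afterwards be used to initialize a feed-forward network that is fine-tuned with labels, as mentioned above.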
Deep belief networks in action. Image created using gifify. Source: YouTube

This is one of the first successful deep architectures, as you can see in [9]. This sparked the deep learning renaissance. Nowadays, RBMs are rarely used, so deep belief networks are not that commonly used anymore.
Image under CC BY 4.0 from the Deep Learning Lecture.

So, this is the reason why we will talk about autoencoders next time. In the next couple of videos, we will then look into more sophisticated methods, for example, generative adversarial networks. So, I hope you liked this video, and if you did, then I hope to see you in the next one. Goodbye!
If you liked this post, you can find more essays here, more educational material on Machine Learning here, or have a look at our Deep Learning Lecture. I would also appreciate a follow on YouTube, Twitter, Facebook, or LinkedIn in case you want to be informed about more essays, videos, and research in the future. This article is released under the Creative Commons 4.0 Attribution License and can be reprinted and modified if referenced. If you are interested in generating transcripts from video lectures try AutoBlog.
Links
Link — Variational Autoencoders
Link — NIPS 2016 GAN Tutorial of Goodfellow
Link — How to train a GAN? Tips and tricks to make GANs work (careful, not everything is true anymore!)
Link — Ever wondered about how to name your GAN?
Translated from: https://towardsdatascience.com/unsupervised-learning-part-1-c007f0c35669