You Should Be Aware of These Common Deep Learning Terms and Terminologies

Introduction
I’ve recently gone through a set of machine learning-based projects presented in Jupyter notebooks and have noticed that there is a set of recurring terms and terminologies across all the notebooks and machine learning-based projects I’ve worked on or reviewed.
You can see this article as a way of cutting through some noise within machine learning and deep learning. Expect to find descriptions and explanations of terms and terminologies that you are bound to come across in the majority of deep learning-based projects.
I cover the definition of terms and terminologies associated with the following subject areas in a machine learning project:
Datasets
Convolutional Neural Network Architecture
Techniques
Hyperparameters
1. Datasets
Photo by Franki Chamaki on Unsplash

Training Dataset: This is the portion of our dataset used to train the neural network directly. Training data refers to the dataset partition exposed to the neural network during training.
Validation Dataset: This partition of the dataset is utilized during training to assess the performance of the network at various iterations.
Test Dataset: This partition of the dataset evaluates the performance of our network after the completion of the training phase.
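As an illustration, here is a minimal sketch of how a single dataset might be partitioned into these three groups, assuming hypothetical NumPy arrays `X` (features) and `y` (labels) and using scikit-learn's `train_test_split`:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical data: 1000 samples, 20 features each.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

# Hold out 20% of the data as the test set.
X_train_val, X_test, y_train_val, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Split the remainder again to obtain a validation set
# (25% of the remainder, i.e. 20% of the full dataset).
X_train, X_val, y_train, y_val = train_test_split(
    X_train_val, y_train_val, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200
```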
2. Convolutional Neural Networks
Photo by Alina Grubnyak on Unsplash

Convolutional layer: A convolution is a mathematical term that describes a dot product multiplication between two sets of elements. Within deep learning, the convolution operation acts on the filters/kernels and image data array within the convolutional layer. Therefore, a convolutional layer simply houses the convolution operation that occurs between the filters and the images passed through a convolutional neural network.
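To make the dot-product view concrete, below is a minimal NumPy sketch of a single-channel "valid" convolution (strictly speaking a cross-correlation, which is what most deep learning libraries implement); the image and kernel values are made up for illustration:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image and take a dot product at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # element-wise multiply, then sum = dot product
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])       # toy 2x2 filter
print(convolve2d(image, kernel))                   # 3x3 feature map
```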
Batch Normalization layer: Batch normalization is a technique that mitigates the effect of unstable gradients within a neural network through the introduction of an additional layer that performs operations on the inputs from the previous layer. These operations standardize and normalize the input values; the normalized values are then transformed through scaling and shifting operations.
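A rough NumPy sketch of the per-feature computation described above, with made-up values for the learnable scale (gamma) and shift (beta) parameters:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standardize a batch of activations, then scale and shift."""
    mean = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                       # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)   # standardized inputs
    return gamma * x_hat + beta               # learnable scale and shift

batch = np.random.randn(8, 4) * 3.0 + 5.0     # 8 samples, 4 features, unnormalized
out = batch_norm(batch, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # roughly 0 and 1 per feature
```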
MaxPooling layer: Max pooling is a variant of sub-sampling where the maximum pixel value within the receptive field of a unit in the sub-sampling layer is taken as the output. A typical max-pooling operation, sketched below, has a 2x2 window that slides across the input data, outputting the maximum of the pixels within the receptive field of the kernel.
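A small NumPy sketch of 2x2 max pooling with a stride of 2 over a toy 4x4 input, showing that only the largest value in each window survives:

```python
import numpy as np

def max_pool_2x2(x):
    """Take the maximum over non-overlapping 2x2 windows (stride 2)."""
    h, w = x.shape
    out = np.zeros((h // 2, w // 2))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            out[i // 2, j // 2] = x[i:i + 2, j:j + 2].max()
    return out

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 8, 1],
              [3, 4, 5, 9]], dtype=float)
print(max_pool_2x2(x))  # [[6. 4.] [7. 9.]]
```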
Flatten layer: Takes the multi-dimensional input (for example, image data or feature maps) and flattens it into a one-dimensional array.
Dense Layer: A dense layer contains an arbitrary number of units/neurons embedded within it. Each neuron is a perceptron.
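Putting the layer types above together, here is a minimal, hypothetical Keras model sketch (assuming TensorFlow is installed and that the inputs are 28x28 grayscale images belonging to 10 classes); it only illustrates how these layers typically appear in a convolutional neural network:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # convolutional layer
    layers.BatchNormalization(),                                            # batch normalization layer
    layers.MaxPooling2D(pool_size=(2, 2)),                                  # max pooling layer
    layers.Flatten(),                                                       # flatten layer
    layers.Dense(64, activation='relu'),                                    # dense layer
    layers.Dense(10, activation='softmax'),                                 # output layer
])
model.summary()
```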
3. Techniques
Photo by Markus Spiske on Unsplash

Activation Function: A mathematical operation that transforms the result or signals of neurons into a normalized output. The purpose of an activation function as a component of a neural network is to introduce non-linearity within the network. The inclusion of an activation function enables the neural network to have greater representational power and solve complex functions.
Rectified Linear Unit Activation Function (ReLU): A type of activation function that transforms the value results of a neuron. The transformation imposed by ReLU on values from a neuron is represented by the formula y = max(0, x). The ReLU activation function clamps any negative values from the neuron to 0, while positive values remain unchanged. The result of this mathematical transformation is utilized as the output of the current layer and used as input to the subsequent layer within a neural network.
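The formula y = max(0, x) translates to a one-line NumPy expression; the input values here are made up:

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
relu = np.maximum(0, x)   # negative values clamped to 0, positives unchanged
print(relu)               # [0.  0.  0.  1.5 3. ]
```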
Softmax Activation Function: A type of activation function that is utilized to derive the probability distribution of a set of numbers within an input vector. The output of a softmax activation function is a vector whose values represent the probability of an occurrence of each class or event. The values within the vector all add up to 1.
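A minimal NumPy sketch of the softmax computation on a made-up vector of scores, including the usual subtraction of the maximum for numerical stability:

```python
import numpy as np

def softmax(z):
    """Convert a vector of scores into a probability distribution."""
    shifted = z - np.max(z)   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs, probs.sum())     # the probabilities sum to 1.0
```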
Dropout: The dropout technique works by randomly reducing the number of interconnecting neurons within a neural network. At every training step, each neuron has a chance of being left out, or rather, dropped out of the collated contributions from connected neurons.
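A rough NumPy sketch of what dropout does during a single training step, assuming a made-up activation vector and a drop rate of 0.5 (libraries such as Keras expose this as a `Dropout` layer and disable it at inference time):

```python
import numpy as np

def dropout(activations, rate=0.5):
    """Randomly zero out a fraction of activations and rescale the rest (inverted dropout)."""
    mask = (np.random.rand(*activations.shape) >= rate).astype(float)
    return activations * mask / (1.0 - rate)   # rescale so the expected value is unchanged

a = np.array([0.2, 1.5, 0.7, 2.1, 0.9])
print(dropout(a, rate=0.5))                    # roughly half the entries become 0
```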
4. Hyperparameters
Photo by Marko Blažević on Unsplash

Loss function: A method that quantifies ‘how well’ a machine learning model performs. The quantification is an output (cost) based on a set of inputs, which are referred to as parameter values. The parameter values are used to estimate a prediction, and the ‘loss’ is the difference between the predictions and the actual values.
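As a concrete (if simplified) example, the mean squared error loss is just the average squared difference between predictions and actual values; the numbers below are made up:

```python
import numpy as np

predictions = np.array([2.5, 0.0, 2.1, 7.8])
actuals = np.array([3.0, -0.5, 2.0, 7.5])

mse = np.mean((predictions - actuals) ** 2)   # mean squared error
print(mse)                                    # 0.15
```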
Optimization Algorithm: An optimizer within a neural network is an algorithmic implementation that facilitates the process of gradient descent within a neural network by minimizing the loss values provided via the loss function. To reduce the loss, it is paramount the values of the weights within the network are selected appropriately.
Learning Rate: An integral component of a neural network implementation, as it is the factor that determines the size of the updates made to the values of the network’s weights. The learning rate is a type of hyperparameter.
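The role of the learning rate is easiest to see in a plain gradient descent update, sketched below on a made-up one-dimensional example (minimizing (w - 3)^2):

```python
# Minimize f(w) = (w - 3)**2 with plain gradient descent.
learning_rate = 0.1
w = 0.0

for step in range(50):
    gradient = 2 * (w - 3)              # derivative of (w - 3)**2
    w = w - learning_rate * gradient    # the learning rate scales the size of each update

print(round(w, 4))                      # close to 3.0, the minimum
```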
Epoch: This is a numeric value that indicates the number of times a network has been exposed to all the data points within the training dataset.
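These hyperparameters typically come together when a model is compiled and trained. Below is a hypothetical Keras sketch that reuses the `model` from the architecture sketch earlier and made-up arrays in place of a real dataset:

```python
import numpy as np
import tensorflow as tf

# Made-up data standing in for a real dataset (60 training and 20 validation images).
x_train = np.random.rand(60, 28, 28, 1).astype('float32')
y_train = np.random.randint(0, 10, size=60)
x_val = np.random.rand(20, 28, 28, 1).astype('float32')
y_val = np.random.randint(0, 10, size=20)

# Optimizer with an explicit learning rate, plus a loss function to minimize.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'],
)

# Each epoch is one full pass over all the data points in the training dataset;
# the validation data is used to assess performance after every epoch.
history = model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))
```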
Conclusion
There are obviously tons more terms and terminologies that you are bound to come across as you undertake and complete machine learning projects.
In future articles, I’ll probably expand on more complex concepts within machine learning that appear frequently.
Feel free to save the article or share it with machine learning practitioners who are at the start of their learning journey or career.
I hope you found the article useful.
To connect with me or find more content similar to this article, do the following:
Subscribe to my email list for weekly newsletters
Follow me on Medium
Connect and reach me on LinkedIn
Translated from: https://towardsdatascience.com/you-should-be-aware-of-these-common-deep-learning-terms-and-terminologies-26e0522fb88b