Deep Learning - Artificial Neural Network (ANN)
Building your first neural network in less than 30 lines of code.
1. What is Deep Learning?
Deep learning is the branch of AI that can learn features directly from data without any human intervention, where the data can be unstructured and unlabeled.
1.1 Why deep learning?
Traditional ML techniques became insufficient as the amount of data increased. Until the last decade, the success of a model relied heavily on feature engineering, and those models fell under the category of machine learning. Deep learning models, by contrast, find these features automatically from the raw data.
1.2 Machine learning vs deep learning
ML vs DL (Source: https://www.kaggle.com/kanncaa1/deep-learning-tutorial-for-beginners)
2. What is an Artificial Neural Network?
2.1 Structure of a neural network:
In a neural network, as the structure suggests, there is at least one hidden layer between the input and output layers. The hidden layers do not see the inputs directly. The word “deep” is a relative term that refers to how many hidden layers a neural network has.
While counting the layers of a network, the input layer is ignored. For example, the picture below shows a 3-layer neural network because, as mentioned, the input layer is not counted.
Layers in an ANN:
1. Dense or fully connected layers
2. Convolution layers
3. Pooling layers
4. Recurrent layers
5. Normalization layers
6. Many others
Different layers perform different types of transformations on the input. A convolution layer is mainly used to perform convolution operations when working with image data. A recurrent layer is used when working with time-series data. A dense layer is a fully connected layer. In a nutshell, each layer has its own characteristics and is used to perform a specific task.
Structure of a neural network (Source: https://www.gabormelli.com/RKB/Neural_Network_Hidden_Layer)
2.2 Structure of a 2-layer neural network:
Structure of a 2-layer neural network (Source: https://ibb.co/rQmCkqG)
Input layer: Each node in the input layer represents an individual feature of each sample in our data set that will be passed to the model.
Hidden layer: Consider the connections between the input layer and the hidden layer; each of these connections transfers the output of the previous unit as input to the receiving unit. Each connection has its own assigned weight. Each input is multiplied by its weight, and the output is an activation function applied to the weighted sum of the inputs.
To recap: we have a weight assigned to each connection, and we compute the weighted sum arriving at each neuron (node) of the next layer. That sum is passed through an activation function that transforms the output into a number between 0 and 1, which is then passed on to the neurons of the next layer. This process occurs over and over again until the output layer is reached.
Let's consider the part 1 connections between the input layer and the hidden layer, as in the figure above. Here the activation function we are using is the tanh function.
Z1 = W1 X + b1
A1 = tanh(Z1)
Let's consider the part 2 connections between the hidden layer and the output layer, as in the figure above. Here the activation function we are using is the sigmoid function.
Z2 = W2 A1 + b2
A2 = σ(Z2)
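To make the two parts concrete, here is a minimal NumPy sketch of this forward pass; the dimensions (3 input features, 4 hidden units, 1 output unit) are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 1))    # one sample with 3 features, as a column vector
W1 = rng.normal(size=(4, 3))   # weights for part 1: input -> hidden
b1 = np.zeros((4, 1))
W2 = rng.normal(size=(1, 4))   # weights for part 2: hidden -> output
b2 = np.zeros((1, 1))

# Part 1: input layer -> hidden layer, tanh activation
Z1 = W1 @ X + b1
A1 = np.tanh(Z1)

# Part 2: hidden layer -> output layer, sigmoid activation
Z2 = W2 @ A1 + b2
A2 = sigmoid(Z2)               # a number between 0 and 1
```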
During this process the weights keep changing as the model continues to learn from the data, in order to reach the optimized weight for each connection.
Output layer: If it's a binary classification problem, say classifying cats versus dogs, the output layer has 2 neurons. In general, the output layer consists of one neuron for each possible outcome or category of outcomes.
Please note that the number of neurons in the hidden layer is a hyperparameter, like the learning rate.
3. Building your first neural network with Keras in less than 30 lines of code
3.1 What is Keras?
There are a lot of deep learning frameworks. Keras is a high-level API written in Python which runs on top of popular frameworks such as TensorFlow and Theano, providing the machine learning practitioner with a layer of abstraction that reduces the inherent complexity of writing neural networks.
3.2 Time to work on the GPU:
In this tutorial we will be using Keras with the TensorFlow backend. We will use pip commands to install both in an Anaconda environment:
· pip3 install Keras
· pip3 install Tensorflow
Make sure that you set up the GPU runtime if you are using Google Colab.
Google Colab GPU activation
We are using the MNIST data set in this tutorial. The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
We start by importing the necessary modules.
Next we load the data set, split into training and test sets.
Now, with our training and test data, we are ready to build our neural network.
In this example we will be using dense layers; a dense layer is nothing but a layer of fully connected neurons, which means each neuron receives input from all the neurons in the previous layer. The shape of our input is [60000, 28, 28]: 60,000 images with a pixel height and width of 28 × 28.
Here, 784 and 10 refer to the dimensions of the output space of each layer, which becomes the number of inputs to the subsequent layer. We are solving a classification problem with 10 possible categories (the digits 0 to 9), hence the final layer has 10 output units.
Activation functions come in different types, of which relu is the most widely used. In the output layer we are using softmax here.
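A sketch of the model definition along these lines (the hidden layer size of 784 follows the text above; the original gist may have used a different size):

```python
network = models.Sequential()
# Hidden dense layer: 784 units, relu activation,
# fed with flattened 28 x 28 images (784 input features)
network.add(layers.Dense(784, activation='relu', input_shape=(28 * 28,)))
# Output layer: one unit per digit class (0-9), softmax activation
network.add(layers.Dense(10, activation='softmax'))
```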
Now that our neural network is defined, we compile it with adam as the optimizer, categorical cross-entropy as the loss function, and accuracy as the metric. These can be changed based on the need.
AIWA!!! You have just built your first neural network.
You probably have questions in your mind about the terms we used in model building, like relu, softmax, and adam. These require in-depth explanations, so I suggest you read the book Deep Learning with Python by François Chollet, which inspired this tutorial.
We reshape our data set, keeping the split between 60,000 training images and 10,000 test images.
We use categorical encoding on the labels so that each label becomes a numerical vector with one entry per class.
Our data set is split into train and test, our model is compiled, and the data is reshaped and encoded. The next step is to train our neural network (NN).
Here we pass in the training images and training labels, as well as the number of epochs. One epoch is one pass of the entire data set forward and backward through the neural network. The batch size is the number of samples that propagate through the network together.
Finally, we measure the performance of our model to see how well it performs. You will get a test accuracy of around 98%, which means our model predicted the correct digit 98% of the time while running its tests.
This is what a neural network looks like at first glance. That is not the end, just a beginning, before we take a deep dive into the different aspects of neural networks. You have just taken the first step of a long and exciting journey.
Stay focused, keep learning, stay curious.
“Don’t take rest after your first victory because if you fail in second, more lips are waiting to say that your first victory was just luck.” — Dr APJ Abdul Kalam
Reference: Deep Learning with Python, François Chollet, ISBN 9781617294433
Stay connected — https://www.linkedin.com/in/arun-purakkatt-mba-m-tech-31429367/
翻譯自: https://medium.com/analytics-vidhya/deep-learning-artificial-neural-network-ann-13b54c3f370f
ann人工神經(jīng)網(wǎng)絡(luò)
總結(jié)
以上是生活随笔為你收集整理的ann人工神经网络_深度学习-人工神经网络(ANN)的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問(wèn)題。
- 上一篇: 支付宝双v尊享权益怎么取消
- 下一篇: 扫描二维码读取文档_使用深度学习读取和分