Papers on Convolutional Neural Network Design
I recently organized the papers related to convolutional neural network design (the repo currently lists only the most important ones; more will be added over time):
Neural network architecture design (github.com)
1. Handcrafted
1.1 Efficient
- [1608.08021] PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection
- [1610.02357] Xception: Deep Learning with Depthwise Separable Convolutions
- [1612.08242] YOLO9000: Better, Faster, Stronger
- [1704.04861] MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- [1707.01083] ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
- [1708.05234] FaceBoxes: A CPU Real-time Face Detector with High Accuracy
- [1711.07264] Light-Head R-CNN: In Defense of Two-Stage Object Detector
- [1801.04381] MobileNetV2: Inverted Residuals and Linear Bottlenecks
- [1803.10615] SqueezeNext: Hardware-Aware Neural Network Design
- [1807.11164] ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
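Several of the papers above (Xception, MobileNets, ShuffleNet) build their efficiency on depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise convolution that mixes channels, cutting the multiply-adds roughly by a factor of the kernel area. A minimal PyTorch sketch of the idea (class and variable names are my own; the actual MobileNet block also applies BN/ReLU after the depthwise stage, simplified here):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: per-channel 3x3 spatial filtering,
    then a 1x1 pointwise convolution to mix information across channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch makes the 3x3 conv operate on each channel independently
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
y = DepthwiseSeparableConv(32, 64)(x)  # -> shape (1, 64, 56, 56)
```

For a k x k kernel this replaces roughly k²·Cin·Cout multiplies per position with k²·Cin + Cin·Cout, which is where most of the speedup comes from.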
1.2 High accuracy
- [2012] ImageNet Classification with Deep Convolutional Neural Networks
- [1409.1556] Very Deep Convolutional Networks for Large-Scale Image Recognition
- [1409.4842] Going Deeper with Convolutions
- [1512.00567] Rethinking the Inception Architecture for Computer Vision
- [1512.03385] Deep Residual Learning for Image Recognition
- [1602.07261] Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
- [1603.05027] Identity Mappings in Deep Residual Networks
- [1608.06993] Densely Connected Convolutional Networks
- [1804.02767] YOLOv3: An Incremental Improvement
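The backbone idea behind ResNet (1512.03385) and its refinement (1603.05027) is residual learning: each block outputs y = x + F(x), so the stack only has to learn a correction F on top of an identity shortcut, which keeps gradients flowing through very deep networks. A minimal PyTorch sketch of a basic residual block (my naming; the projection shortcut used when shapes change is omitted):

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = x + F(x), where F is a small stack of convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: gradient can bypass F

x = torch.randn(2, 64, 28, 28)
y = BasicResidualBlock(64)(x)  # same shape as x
```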
2. Automated
- [1707.07012] Learning Transferable Architectures for Scalable Image Recognition
- [1807.11626] MnasNet: Platform-Aware Neural Architecture Search for Mobile
- [1812.00332] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
- [1812.03443] FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search
- [1812.08934] ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation
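Each paper above pairs a search space with a search strategy (reinforcement learning in NASNet/MnasNet, gradient-based relaxation in ProxylessNAS/FBNet, predictor-guided adaptation in ChamNet). Stripped of those strategies, the shared skeleton is simply: sample an architecture, estimate its quality, keep the best. A toy random-search sketch of that skeleton (entirely my own illustration, not the algorithm of any listed paper; the search space and scoring function are placeholders):

```python
import random

# Placeholder search space: per-architecture hyperparameter choices.
SEARCH_SPACE = {
    "num_layers": [4, 8, 12],
    "width": [16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def sample_architecture():
    """Draw one candidate architecture uniformly from the space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Placeholder: a real system trains the candidate (or a cheap proxy)
    and returns validation accuracy, often penalized by measured latency."""
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):  # search budget
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print(best_arch, best_score)
```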
3. Useful component
- [1502.03167] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- [1603.05201] Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units # CReLU
- [1709.01507] Squeeze-and-Excitation Networks # SE
- [1708.02002] Focal Loss for Dense Object Detection
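As a concrete example of such a component, the SE block (1709.01507) recalibrates channels: global-average-pool each channel to a scalar, pass the resulting vector through a small bottleneck MLP, and use sigmoid gates to rescale the feature map. A minimal PyTorch sketch (my naming):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels with learned global gates."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)  # squeeze
        self.fc2 = nn.Linear(channels // reduction, channels)  # excite

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))                                # global average pool
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))  # per-channel gates in (0, 1)
        return x * s.view(b, c, 1, 1)                         # rescale channels

x = torch.randn(2, 64, 32, 32)
y = SEBlock(64)(x)  # same shape, channels reweighted
```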
4. Activation function
- [1502.01852] Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification # PReLU
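PReLU generalizes ReLU by making the negative slope a learnable parameter:

$$f(y_i) = \begin{cases} y_i, & y_i > 0 \\ a_i\, y_i, & y_i \le 0 \end{cases}$$

where $a_i$ is learned jointly with the network; it reduces to ReLU when $a_i = 0$ and to Leaky ReLU when $a_i$ is a small fixed constant.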
5. Weight initialization
- [2018] Residual Learning Without Normalization via Better Initialization # ZeroInit
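The core trick, as I understand the paper: initialize so that every residual block is the identity at the start of training, which lets deep residual networks train without BatchNorm (the full scheme also rescales the other layers in each branch, omitted here). The key step is zero-initializing the last layer of each residual branch, e.g.:

```python
import torch.nn as nn

# Last conv of a residual branch (a standalone example, not a full network):
conv_last = nn.Conv2d(64, 64, 3, padding=1, bias=False)
nn.init.zeros_(conv_last.weight)  # branch outputs 0, so the block starts as the identity
```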
Whenever you work on a sub-direction, it's worth organizing and summarizing it periodically: it makes future reference easier, and the act of organizing deepens your understanding of the problem. While putting this survey together, I became increasingly convinced of the importance of automated network design.

In December 2018 alone, three papers on automated network design appeared on arXiv at once, and I expect such papers to keep multiplying. Different business scenarios and hardware environments place different demands on a neural network; almost no single network is a one-size-fits-all solution, so in practice networks are usually designed specifically for a given scenario and hardware target.

Early on, with few business scenarios and fairly uniform hardware, purely manual design was workable. But as AI develops, scenarios multiply and hardware diversifies. Designing every network by hand will become increasingly costly, so automating neural network design is an inevitable trend.

It's much like early news portals, which showed every user the same content. With a small user base, it was still feasible to find the greatest common divisor of user interests by hand; as the user base grew, that common ground became harder and harder to find, to the point of barely existing. If no common denominator can be found, the answer is to personalize for each user, but doing that manually is prohibitively expensive, and so recommendation-driven news apps emerged.

The same pattern played out with image feature descriptors. Early descriptors were hand-crafted (SIFT, HOG, LBP, Haar, and so on); designing an effective descriptor is hard, and only a few people with deep domain insight could do it. Then deep learning arrived: deep convolutional networks learn their feature descriptors automatically, and on most tasks the learned descriptors beat the hand-crafted ones. In fact, every time we train a model we are implicitly designing a new set of feature descriptors, whether we realize it or not.

Once automated network design matures, each run of an architecture-search program will produce a new network architecture. At that point, network design will be like feature design today: in the course of training a model, the program will automatically design the most suitable architecture for you.