DL之DenseNet: A Detailed Guide to the DenseNet Algorithm: Introduction (Paper Overview), Architecture Details, and Case Applications (with Figures)
Contents
Introduction to the DenseNet Algorithm (Paper Overview)
DenseNet Architecture Details
3. DenseNet architectures for ImageNet
4. Experimental Results
Case Applications of the DenseNet Algorithm
Introduction to the DenseNet Algorithm (Paper Overview)
DenseNet (Densely Connected Convolutional Networks) to some extent draws on ideas from ResNet; its paper won the CVPR 2017 Best Paper Award.
Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
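To make the connectivity pattern concrete, here is a minimal PyTorch-style sketch of a dense block, in which each layer receives the channel-wise concatenation of all preceding feature maps. This is not the authors' implementation; the module names (SimpleDenseLayer, SimpleDenseBlock) and the hyperparameter values are illustrative assumptions.

```python
# A minimal sketch of DenseNet-style dense connectivity in PyTorch.
# Names and hyperparameters are illustrative, not the official code
# from https://github.com/liuzhuang13/DenseNet.
import torch
import torch.nn as nn

class SimpleDenseLayer(nn.Module):
    """One BN-ReLU-Conv layer that emits `growth_rate` new feature maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))

class SimpleDenseBlock(nn.Module):
    """Each layer sees the concatenation of all earlier feature maps."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            SimpleDenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Reuse every earlier feature map as input to this layer.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Example: a 4-layer block with growth rate k=12 on a 16-channel input.
block = SimpleDenseBlock(in_channels=16, growth_rate=12, num_layers=4)
y = block(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32]) -> 16 + 4*12 channels
```

Note how the channel count grows linearly with depth (by k per layer), which is why even a modest growth rate yields very wide concatenated inputs deep inside a block.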
Conclusion
We proposed a new convolutional network architecture, which we refer to as Dense Convolutional Network (DenseNet). It introduces direct connections between any two layers with the same feature-map size. We showed that DenseNets scale naturally to hundreds of layers, while exhibiting no optimization difficulties. In our experiments, DenseNets tend to yield consistent improvement in accuracy with growing number of parameters, without any signs of performance degradation or overfitting. Under multiple settings, it achieved state-of-the-art results across several highly competitive datasets. Moreover, DenseNets require substantially fewer parameters and less computation to achieve state-of-the-art performances. Because we adopted hyperparameter settings optimized for residual networks in our study, we believe that further gains in accuracy of DenseNets may be obtained by more detailed tuning of hyperparameters and learning rate schedules.
Whilst following a simple connectivity rule, DenseNets naturally integrate the properties of identity mappings, deep supervision, and diversified depth. They allow feature reuse throughout the networks and can consequently learn more compact and, according to our experiments, more accurate models. Because of their compact internal representations and reduced feature redundancy, DenseNets may be good feature extractors for various computer vision tasks that build on convolutional features, e.g., [4, 5]. We plan to study such feature transfer with DenseNets in future work.
Paper
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger.
Densely Connected Convolutional Networks. CVPR 2017 (CVPR Best Paper Award).
https://arxiv.org/pdf/1608.06993.pdf
GitHub
https://github.com/liuzhuang13/DenseNet
DenseNet is a network architecture where each layer is directly connected to every other layer in a feed-forward fashion (within each dense block). For each layer, the feature maps of all preceding layers are treated as separate inputs whereas its own feature maps are passed on as inputs to all subsequent layers. This connectivity pattern yields state-of-the-art accuracies on CIFAR10/100 (with or without data augmentation) and SVHN. On the large scale ILSVRC 2012 (ImageNet) dataset, DenseNet achieves a similar accuracy as ResNet, but using less than half the amount of parameters and roughly half the number of FLOPs.
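Beyond the authors' repository, torchvision also ships DenseNet variants, which makes quick experiments easy. A small usage sketch (assuming torchvision >= 0.13 is installed, since it uses the weights enum API) might look like:

```python
# Loading a pretrained DenseNet-121 via torchvision (assumes torchvision
# >= 0.13 for the weights enum). Real images should be preprocessed with
# weights.transforms() before inference.
import torch
from torchvision import models

weights = models.DenseNet121_Weights.IMAGENET1K_V1
model = models.densenet121(weights=weights).eval()

dummy = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 1000]) -> ImageNet class scores
```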
DenseNet Architecture Details
3. DenseNet architectures for ImageNet
The growth rate for all the networks is k = 32. Note that each "conv" layer shown in the table corresponds to the sequence BN-ReLU-Conv.
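As a sketch of what one such "conv" entry expands to, here is a hedged PyTorch rendering of the bottleneck composite used in the ImageNet models (DenseNet-B): BN-ReLU-Conv(1x1) producing 4k channels, followed by BN-ReLU-Conv(3x3) producing k new channels. The class name is illustrative; the official implementation lives in the repository linked above.

```python
import torch
import torch.nn as nn

# Illustrative DenseNet-B bottleneck layer: each "conv" in the table is the
# composite BN-ReLU-Conv. The 1x1 conv first maps the wide concatenated
# input down to 4*k channels, then the 3x3 conv emits k new feature maps
# (k = growth rate, 32 for the ImageNet models).
class BottleneckLayer(nn.Module):
    def __init__(self, in_channels, growth_rate=32):
        super().__init__()
        inter_channels = 4 * growth_rate
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, inter_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(inter_channels, growth_rate,
                      kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return self.net(x)

layer = BottleneckLayer(in_channels=256, growth_rate=32)
print(layer(torch.randn(1, 256, 14, 14)).shape)  # torch.Size([1, 32, 14, 14])
```

The 1x1 bottleneck keeps the 3x3 convolution cheap even when the concatenated input has hundreds of channels, which is what makes deep dense blocks computationally tractable.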
4. Experimental Results
1. Results on CIFAR-10
2. Results on ImageNet
The top-1 and top-5 error rates on the ImageNet validation set, with single-crop / 10-crop testing.
Results on ImageNet: a DenseNet-based classifier reaches the same classification accuracy on ImageNet with only about half the parameters of a comparable ResNet.
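The parameter-efficiency claim is easy to sanity-check. A hedged snippet (assuming torchvision is installed; the approximate counts in the comments are my own figures, not from the paper) that compares densenet121 and resnet50, a commonly compared pair of similar-accuracy models:

```python
from torchvision import models

# Rough sanity check of the parameter-efficiency claim (requires torchvision).
def count_params(model):
    return sum(p.numel() for p in model.parameters())

densenet = models.densenet121(weights=None)  # roughly 8M parameters
resnet = models.resnet50(weights=None)       # roughly 25.6M parameters
print(f"DenseNet-121: {count_params(densenet) / 1e6:.1f}M")
print(f"ResNet-50:    {count_params(resnet) / 1e6:.1f}M")
```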
Case Applications of the DenseNet Algorithm
To be updated……