

Computer Vision Paper Roundup

發(fā)布時(shí)間:2025/3/17 编程问答 18 豆豆
生活随笔 收集整理的這篇文章主要介紹了 计算机视觉整理 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

經(jīng)典論文

  • ImageNet Classification
  • Object Detection
  • Object Tracking
  • Low-Level Vision
  • Edge Detection
  • Semantic Segmentation
  • Visual Attention and Saliency
  • Object Recognition
  • Human Pose Estimation
  • Understanding CNNs
  • Image and Language
  • Image Captioning
  • Video Captioning
  • Image Generation

    ImageNet Classification

    Microsoft ResNet

    Paper: Deep Residual Learning for Image Recognition

    Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

    Link: http://arxiv.org/pdf/1512.03385v1.pdf
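
    To make the residual idea concrete, here is a minimal NumPy sketch of a two-layer residual block (fully connected rather than convolutional for brevity; the names are illustrative, not the paper's code):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """y = ReLU(F(x) + x), where F(x) = W2 @ ReLU(W1 @ x).

    The identity shortcut (+ x) lets gradients flow around the learned
    transformation, which is the core idea of deep residual learning.
    """
    out = relu(W1 @ x)    # first weight layer + nonlinearity
    out = W2 @ out        # second weight layer
    return relu(out + x)  # identity shortcut, then final ReLU

# Toy usage: input and output widths must match for the shortcut.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1, W2 = rng.standard_normal((8, 8)) * 0.1, rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, W1, W2)
```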

    Microsoft PReLU (Parametric Rectified Linear Unit / weight initialization)

    Paper: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

    Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

    Link: http://arxiv.org/pdf/1502.01852.pdf
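
    The key difference from plain ReLU is that the negative-side slope is learned (one coefficient per channel in the paper). A one-function NumPy sketch:

```python
import numpy as np

def prelu(x, a):
    """PReLU: f(x) = x for x > 0, a * x otherwise; `a` is a learned
    parameter rather than the fixed 0 of ReLU or 0.01 of Leaky ReLU."""
    return np.where(x > 0, x, a * x)

print(prelu(np.array([-2.0, -0.5, 0.0, 1.5]), a=0.25))
# [-0.5   -0.125  0.     1.5  ]
```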

    Google Batch Normalization

    Paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

    Authors: Sergey Ioffe, Christian Szegedy

    Link: http://arxiv.org/pdf/1502.03167.pdf
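
    Batch normalization standardizes each feature over the mini-batch and then applies a learned scale and shift. A minimal training-mode sketch in NumPy (inference uses running averages of the batch statistics, omitted here):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: (batch, features). Normalize each feature to zero mean and
    unit variance across the batch, then rescale with the learned
    parameters gamma (scale) and beta (shift)."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardize
    return gamma * x_hat + beta

x = np.random.default_rng(1).standard_normal((32, 4)) * 3.0 + 7.0
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
# y now has per-feature mean ~0 and variance ~1 across the batch.
```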

    Google GoogLeNet

    Paper: Going Deeper with Convolutions, CVPR 2015

    Authors: Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich

    Link: http://arxiv.org/pdf/1409.4842.pdf

    Oxford VGG-Net

    Paper: Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR 2015

    Authors: Karen Simonyan & Andrew Zisserman

    Link: http://arxiv.org/pdf/1409.1556.pdf

    AlexNet

    Paper: ImageNet Classification with Deep Convolutional Neural Networks

    Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton

    Link: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

    Object Detection

    PVANET

    Paper: PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection

    Authors: Kye-Hyeon Kim, Sanghoon Hong, Byungseok Roh, Yeongjae Cheon, Minje Park

    Link: http://arxiv.org/pdf/1608.08021

    紐約大學(xué)OverFeat

    論文:使用卷積網(wǎng)絡(luò)進(jìn)行識(shí)別、定位和檢測(cè)(OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks),ICLR 2014

    作者:Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun

    鏈接:http://arxiv.org/pdf/1312.6229.pdf

    Berkeley R-CNN

    Paper: Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, CVPR 2014

    Authors: Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik

    Link: http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf

    Microsoft SPP

    Paper: Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV 2014

    Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

    Link: http://arxiv.org/pdf/1406.4729.pdf

    Microsoft Fast R-CNN

    Paper: Fast R-CNN

    Author: Ross Girshick

    Link: http://arxiv.org/pdf/1504.08083.pdf

    Microsoft Faster R-CNN

    Paper: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

    Authors: Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun

    Link: http://arxiv.org/pdf/1506.01497.pdf

    牛津大學(xué)R-CNN minus R

    論文:R-CNN minus R

    作者:Karel Lenc, Andrea Vedaldi

    鏈接:http://arxiv.org/pdf/1506.06981.pdf

    End-to-End People Detection

    Paper: End-to-end People Detection in Crowded Scenes

    Authors: Russell Stewart, Mykhaylo Andriluka

    Link: http://arxiv.org/pdf/1506.04878.pdf

    實(shí)時(shí)物體檢測(cè)

    論文:你只看一次:統(tǒng)一實(shí)時(shí)物體檢測(cè)(You Only Look Once: Unified, Real-Time Object Detection)

    作者:Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi

    鏈接:http://arxiv.org/pdf/1506.02640.pdf

    Inside-Outside Net

    Paper: Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks

    Authors: Sean Bell, C. Lawrence Zitnick, Kavita Bala, Ross Girshick

    Link: http://arxiv.org/abs/1512.04143

    Microsoft ResNet

    Paper: Deep Residual Learning for Image Recognition

    Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

    Link: http://arxiv.org/pdf/1512.03385v1.pdf

    R-FCN

    Paper: R-FCN: Object Detection via Region-based Fully Convolutional Networks

    Authors: Jifeng Dai, Yi Li, Kaiming He, Jian Sun

    Link: http://arxiv.org/abs/1605.06409

    SSD

    Paper: SSD: Single Shot MultiBox Detector

    Authors: Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg

    Link: http://arxiv.org/pdf/1512.02325v2.pdf

    速度/精度權(quán)衡

    論文:現(xiàn)代卷積物體檢測(cè)器的速度/精度權(quán)衡(Speed/accuracy trade-offs for modern convolutional object detectors)

    作者:Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy

    鏈接:http://arxiv.org/pdf/1611.10012v1.pdf
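
    Most of the detectors above (the R-CNN family, YOLO, SSD) emit many overlapping candidate boxes and prune them with greedy non-maximum suppression. A minimal NumPy sketch, assuming boxes in (x1, y1, x2, y2) format:

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box (4,) against an array of boxes (N, 4)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_boxes = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_box + area_boxes - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop
    all remaining boxes that overlap it by more than iou_thresh."""
    order = np.argsort(scores)[::-1]      # indices, best score first
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        keep.append(int(best))
        order = rest[iou(boxes[best], boxes[rest]) <= iou_thresh]
    return keep
```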

    Object Tracking

    • Paper: Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network

    Authors: Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han

    Link: arXiv:1502.06796.

    • Paper: DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking

    Authors: Hanxi Li, Yi Li and Fatih Porikli

    Published: BMVC, 2014.

    • 論文:視覺(jué)跟蹤中,學(xué)習(xí)深度緊湊圖像表示(Learning a Deep Compact Image Representation for Visual Tracking)

    作者:N Wang, DY Yeung

    發(fā)表:NIPS, 2013.

    • 論文:視覺(jué)跟蹤的分層卷積特征(Hierarchical Convolutional Features for Visual Tracking)

    作者:Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang

    發(fā)表: ICCV 2015

    • 論文:完全卷積網(wǎng)絡(luò)的視覺(jué)跟蹤(Visual Tracking with fully Convolutional Networks)

    作者:Lijun Wang, Wanli Ouyang, Xiaogang Wang, and Huchuan Lu,

    發(fā)表:ICCV 2015

    • 論文:學(xué)習(xí)多域卷積神經(jīng)網(wǎng)絡(luò)進(jìn)行視覺(jué)跟蹤(Learning Multi-Domain Convolutional Neural Networks for Visual Tracking)

    作者:Hyeonseob Namand Bohyung Han

    對(duì)象識(shí)別(Object Recognition)

    論文:卷積神經(jīng)網(wǎng)絡(luò)弱監(jiān)督學(xué)習(xí)(Weakly-supervised learning with convolutional neural networks)

    作者:Maxime Oquab,Leon Bottou,Ivan Laptev,Josef Sivic,CVPR,2015

    鏈接:?
    http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Oquab_Is_Object_Localization_2015_CVPR_paper.pdf

    FV-CNN

    Paper: Deep Filter Banks for Texture Recognition and Segmentation

    Authors: Mircea Cimpoi, Subhransu Maji, Andrea Vedaldi, CVPR, 2015.

    Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Cimpoi_Deep_Filter_Banks_2015_CVPR_paper.pdf

    人體姿態(tài)估計(jì)(Human Pose Estimation)

    • 論文:使用 Part Affinity Field的實(shí)時(shí)多人2D姿態(tài)估計(jì)(Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields)

    作者:Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, CVPR, 2017.

    • 論文:Deepcut:多人姿態(tài)估計(jì)的聯(lián)合子集分割和標(biāo)簽(Deepcut: Joint subset partition and labeling for multi person pose estimation)

    作者:Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele, CVPR, 2016.

    • 論文:Convolutional pose machines

    作者:Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh, CVPR, 2016.

    • 論文:人體姿態(tài)估計(jì)的 Stacked hourglass networks(Stacked hourglass networks for human pose estimation)

    作者:Alejandro Newell, Kaiyu Yang, and Jia Deng, ECCV, 2016.

    • 論文:用于視頻中人體姿態(tài)估計(jì)的Flowing convnets(Flowing convnets for human pose estimation in videos)

    作者:Tomas Pfister, James Charles, and Andrew Zisserman, ICCV, 2015.

    • 論文:卷積網(wǎng)絡(luò)和人類(lèi)姿態(tài)估計(jì)圖模型的聯(lián)合訓(xùn)練(Joint training of a convolutional network and a graphical model for human pose estimation)

    作者:Jonathan J. Tompson, Arjun Jain, Yann LeCun, Christoph Bregler, NIPS, 2014.

    Understanding CNNs

    • Paper: Understanding Image Representations by Measuring Their Equivariance and Equivalence

    Authors: Karel Lenc, Andrea Vedaldi, CVPR, 2015.

    Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Lenc_Understanding_Image_Representations_2015_CVPR_paper.pdf

    • Paper: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

    Authors: Anh Nguyen, Jason Yosinski, Jeff Clune, CVPR, 2015.

    Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf

    • Paper: Understanding Deep Image Representations by Inverting Them

    Authors: Aravindh Mahendran, Andrea Vedaldi, CVPR, 2015

    Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf

    • Paper: Object Detectors Emerge in Deep Scene CNNs

    Authors: Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, ICLR, 2015.

    Link: http://arxiv.org/abs/1412.6856

    • Paper: Inverting Visual Representations with Convolutional Networks

    Authors: Alexey Dosovitskiy, Thomas Brox, arXiv, 2015.

    Link: http://arxiv.org/abs/1506.02753

    • Paper: Visualizing and Understanding Convolutional Networks

    Authors: Matthew Zeiler, Rob Fergus, ECCV, 2014.

    Link: http://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf
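
    One of the simplest diagnostics from this line of work is the occlusion-sensitivity experiment of Zeiler & Fergus: slide a gray patch over the image and watch how the class score drops. A hedged NumPy sketch, where `score_fn` is a placeholder for any trained classifier's class-probability output (an assumption, not a specific API):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, stride=8, fill=0.5):
    """Slide a gray patch over `image` (H, W[, C]) and record how much
    score_fn(image) drops when each region is hidden."""
    H, W = image.shape[:2]
    base = score_fn(image)  # score on the unmodified image
    heat = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1))
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # gray out one patch
            heat[i, j] = base - score_fn(occluded)     # score drop
    return heat  # large values mark regions the classifier relies on
```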

    圖像與語(yǔ)言

    圖像說(shuō)明(Image Captioning)

    UCLA / Baidu

    用多模型循環(huán)神經(jīng)網(wǎng)絡(luò)解釋圖像(Explain Images with Multimodal Recurrent Neural Networks)

    Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, arXiv:1410.1090

    http://arxiv.org/pdf/1410.1090

    Toronto

    使用多模型神經(jīng)語(yǔ)言模型統(tǒng)一視覺(jué)語(yǔ)義嵌入(Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models)

    Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, arXiv:1411.2539.

    http://arxiv.org/pdf/1411.2539

    Berkeley

    用于視覺(jué)識(shí)別和描述的長(zhǎng)期循環(huán)卷積網(wǎng)絡(luò)(Long-term Recurrent Convolutional Networks for Visual Recognition and Description)

    Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, arXiv:1411.4389.

    http://arxiv.org/pdf/1411.4389

    Google

    看圖寫(xiě)字:神經(jīng)圖像說(shuō)明生成器(Show and Tell: A Neural Image Caption Generator)

    Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, arXiv:1411.4555.

    http://arxiv.org/pdf/1411.4555

    Stanford

    用于生成圖像描述的深度視覺(jué)語(yǔ)義對(duì)齊(Deep Visual-Semantic Alignments for Generating Image Description)

    Andrej Karpathy, Li Fei-Fei, CVPR, 2015.

    Web:http://cs.stanford.edu/people/karpathy/deepimagesent/

    Paper:http://cs.stanford.edu/people/karpathy/cvpr2015.pdf

    UML / UT

    使用深度循環(huán)神經(jīng)網(wǎng)絡(luò)將視頻轉(zhuǎn)換為自然語(yǔ)言(Translating Videos to Natural Language Using Deep Recurrent Neural Networks)

    Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, NAACL-HLT, 2015.

    http://arxiv.org/pdf/1412.4729

    CMU / Microsoft

    學(xué)習(xí)圖像說(shuō)明生成的循環(huán)視覺(jué)表示(Learning a Recurrent Visual Representation for Image Caption Generation)

    Xinlei Chen, C. Lawrence Zitnick, arXiv:1411.5654.

    Xinlei Chen, C. Lawrence Zitnick, Mind’s Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR 2015

    http://www.cs.cmu.edu/~xinleic/papers/cvpr15_rnn.pdf

    Microsoft

    從圖像說(shuō)明到視覺(jué)概念(From Captions to Visual Concepts and Back)

    Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, CVPR, 2015.

    http://arxiv.org/pdf/1411.4952

    Univ. Montreal / Univ. Toronto

    Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

    Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, arXiv:1502.03044 / ICML 2015

    http://www.cs.toronto.edu/~zemel/documents/captionAttn.pdf
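
    The soft-attention step at the heart of Show, Attend and Tell reweights the image's annotation vectors at every decoding step. A simplified NumPy sketch (the paper scores locations with a small MLP; the single bilinear form W here is an illustrative simplification):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def soft_attention(features, h, W):
    """features: (L, D) annotation vectors, one per image location;
    h: (H,) decoder hidden state; W: (D, H) scoring matrix."""
    scores = features @ (W @ h)   # relevance of each location, (L,)
    alpha = softmax(scores)       # attention weights, sum to 1
    context = alpha @ features    # (D,) expected annotation vector
    return context, alpha
```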

    Idiap / EPFL / Facebook

    基于短語(yǔ)的圖像說(shuō)明(Phrase-based Image Captioning)

    Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, arXiv:1502.03671 / ICML 2015

    http://arxiv.org/pdf/1502.03671

    UCLA / Baidu

    像孩子一樣學(xué)習(xí):從圖像句子描述快速學(xué)習(xí)視覺(jué)的新概念(Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images)

    Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, arXiv:1504.06692

    http://arxiv.org/pdf/1504.06692

    MS + Berkeley

    探索圖像說(shuō)明的最近鄰方法( Exploring Nearest Neighbor Approaches for Image Captioning)

    Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, arXiv:1505.04467

    http://arxiv.org/pdf/1505.04467.pdf

    圖像說(shuō)明的語(yǔ)言模型(Language Models for Image Captioning: The Quirks and What Works)

    Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, arXiv:1505.01809

    http://arxiv.org/pdf/1505.01809.pdf

    Adelaide

    Image Captioning with an Intermediate Attributes Layer

    Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, arXiv:1506.01144

    Tilburg

    Learning language through pictures

    Grzegorz Chrupala, Akos Kadar, Afra Alishahi, arXiv:1506.03694

    Univ. Montreal

    Describing Multimedia Content using Attention-based Encoder-Decoder Networks

    Kyunghyun Cho, Aaron Courville, Yoshua Bengio, arXiv:1507.01053

    Cornell

    Image Representations and New Domains in Neural Image Captioning

    Jack Hessel, Nicolas Savva, Michael J. Wilber, arXiv:1508.02091

    MS + City Univ. of Hong Kong

    Learning Query and Image Similarities with Ranking Canonical Correlation Analysis

    Ting Yao, Tao Mei, and Chong-Wah Ngo, ICCV, 2015

    Video Captioning

    Berkeley

    Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.

    UT / UML / Berkeley

    Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.

    Microsoft

    Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.

    UT / UML / Berkeley

    Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence–Video to Text, arXiv:1505.00487.

    蒙特利爾大學(xué)/ 舍布魯克

    Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029

    MPI / Berkeley

    Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698

    多倫多大學(xué) / MIT

    Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724

    Univ. Montreal

    Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053

    TAU / 美國(guó)南加州大學(xué)

    Dotan Kaufman, Gil Levi, Tal Hassner, Lior Wolf, Temporal Tessellation for Video Annotation and Summarization, arXiv:1612.06950.

    Image Generation

    Convolutional / Recurrent Networks

    • Paper: Conditional Image Generation with PixelCNN Decoders

    Authors: Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu

    • Paper: Learning to Generate Chairs with Convolutional Neural Networks

    Authors: Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox

    Published: CVPR, 2015.

    • Paper: DRAW: A Recurrent Neural Network For Image Generation

    Authors: Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra

    Published: ICML, 2015.

    對(duì)抗網(wǎng)絡(luò)

    • 論文:生成對(duì)抗網(wǎng)絡(luò)(Generative Adversarial Networks)

    作者:Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio

    發(fā)表:NIPS, 2014.

    • 論文:使用對(duì)抗網(wǎng)絡(luò)Laplacian Pyramid 的深度生成圖像模型(Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks)

    作者:Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus

    發(fā)表:NIPS, 2015.

    • Paper: A Note on the Evaluation of Generative Models

    Authors: Lucas Theis, Aäron van den Oord, Matthias Bethge

    Published: ICLR 2016.

    • 論文:變分自動(dòng)編碼深度高斯過(guò)程(Variationally Auto-Encoded Deep Gaussian Processes)

    作者:Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence

    發(fā)表:ICLR 2016.

    • 論文:用注意力機(jī)制從字幕生成圖像 (Generating Images from Captions with Attention)

    作者:Elman Mansimov, Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov

    發(fā)表: ICLR 2016

    • 論文:分類(lèi)生成對(duì)抗網(wǎng)絡(luò)的無(wú)監(jiān)督和半監(jiān)督學(xué)習(xí)(Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks)

    作者:Jost Tobias Springenberg

    發(fā)表:ICLR 2016

    • 論文:用一個(gè)對(duì)抗檢測(cè)表征(Censoring Representations with an Adversary)

    作者:Harrison Edwards, Amos Storkey

    發(fā)表:ICLR 2016

    • 論文:虛擬對(duì)抗訓(xùn)練實(shí)現(xiàn)分布式順滑 (Distributional Smoothing with Virtual Adversarial Training)

    作者:Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii

    發(fā)表:ICLR 2016

    • Paper: Generative Visual Manipulation on the Natural Image Manifold

    Authors: Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros

    Published: ECCV 2016.

    • 論文:深度卷積生成對(duì)抗網(wǎng)絡(luò)的無(wú)監(jiān)督表示學(xué)習(xí)(Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks)

    作者:Alec Radford, Luke Metz, Soumith Chintala

    發(fā)表: ICLR 2016

    問(wèn)題回答

    弗吉尼亞大學(xué) / 微軟研究院

    論文:VQA: Visual Question Answering, CVPR, 2015 SUNw:Scene Understanding workshop.

    作者:Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh

    MPI / Berkeley

    Paper: Ask Your Neurons: A Neural-based Approach to Answering Questions about Images

    Authors: Mateusz Malinowski, Marcus Rohrbach, Mario Fritz

    Published: arXiv:1505.01121.

    Toronto

    Paper: Image Question Answering: A Visual Semantic Embedding Model and a New Dataset

    Authors: Mengye Ren, Ryan Kiros, Richard Zemel

    Published: arXiv:1505.02074 / ICML 2015 deep learning workshop.

    百度/ 加州大學(xué)洛杉磯分校

    作者:Hauyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, 徐偉

    論文:Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering

    發(fā)表: arXiv:1505.05612.

    POSTECH(韓國(guó))

    論文:Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction

    作者:Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han

    發(fā)表: arXiv:1511.05765

    CMU / Microsoft Research

    Paper: Stacked Attention Networks for Image Question Answering

    Authors: Yang, Z., He, X., Gao, J., Deng, L., & Smola, A. (2015)

    Published: arXiv:1511.02274.

    MetaMind

    Paper: Dynamic Memory Networks for Visual and Textual Question Answering

    Authors: Xiong, Caiming, Stephen Merity, and Richard Socher

    Published: arXiv:1603.01417 (2016).

    首爾國(guó)立大學(xué) + NAVER

    論文:Multimodal Residual Learning for Visual QA

    作者:Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang

    發(fā)表:arXiv:1606:01455

    UC Berkeley + Sony

    Paper: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding

    Authors: Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach

    Published: arXiv:1606.01847

    POSTECH

    Paper: Training Recurrent Answering Units with Joint Loss Minimization for VQA

    Authors: Hyeonwoo Noh and Bohyung Han

    Published: arXiv:1606.03647

    首爾國(guó)立大學(xué) + NAVER

    論文: Hadamard Product for Low-rank Bilinear Pooling

    作者:Jin-Hwa Kim, Kyoung Woon On, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhan

    發(fā)表:arXiv:1610.04325.

    視覺(jué)注意力和顯著性

    ?
    論文:Predicting Eye Fixations using Convolutional Neural Networks

    作者:Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu

    發(fā)表:CVPR, 2015.

    學(xué)習(xí)地標(biāo)的連續(xù)搜索

    作者:Learning a Sequential Search for Landmarks

    論文:Saurabh Singh, Derek Hoiem, David Forsyth

    發(fā)表:CVPR, 2015.

    視覺(jué)注意力機(jī)制實(shí)現(xiàn)多物體識(shí)別

    論文:Multiple Object Recognition with Visual Attention

    作者:Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu,

    發(fā)表:ICLR, 2015.

    視覺(jué)注意力機(jī)制的循環(huán)模型

    作者:Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu

    論文:Recurrent Models of Visual Attention

    發(fā)表:NIPS, 2014.

    低級(jí)視覺(jué)

    超分辨率

    • Iterative Image Reconstruction

    Sven Behnke: Learning Iterative Image Reconstruction. IJCAI, 2001.

    Sven Behnke: Learning Iterative Image Reconstruction in the Neural Abstraction Pyramid. International Journal of Computational Intelligence and Applications, vol. 1, no. 4, pp. 427-438, 2001.

    • Super-Resolution (SRCNN); see the sketch after this list

    Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.

    Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.

    • Very Deep Super-Resolution

    Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Accurate Image Super-Resolution Using Very Deep Convolutional Networks, arXiv:1511.04587, 2015.

    • Deeply-Recursive Convolutional Network

    Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Deeply-Recursive Convolutional Network for Image Super-Resolution, arXiv:1511.04491, 2015.

    • Cascade-Sparse-Coding-Network

    Zhaowen Wang, Ding Liu, Wei Han, Jianchao Yang and Thomas S. Huang, Deep Networks for Image Super-Resolution with Sparse Prior. ICCV, 2015.

    • Perceptual Losses for Super-Resolution

    Justin Johnson, Alexandre Alahi, Li Fei-Fei, Perceptual Losses for Real-Time Style Transfer and Super-Resolution, arXiv:1603.08155, 2016.

    • SRGAN

    Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, arXiv:1609.04802v3, 2016.
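
    SRCNN (the second entry in this list) frames super-resolution as three convolutional stages applied to a bicubic-upscaled input: patch extraction, non-linear mapping, reconstruction. A single-channel, single-filter-per-stage sketch with SciPy (filter contents are placeholders; the real network learns banks of 9x9, 1x1 and 5x5 filters):

```python
import numpy as np
from scipy.signal import convolve2d

def srcnn_like(upscaled, f1, f2, f3):
    """Three conv stages in the spirit of SRCNN, applied to a
    bicubic-upscaled single-channel image. f1, f2, f3 are placeholder
    2-D kernels standing in for the learned filter banks."""
    h = np.maximum(convolve2d(upscaled, f1, mode='same'), 0)  # features + ReLU
    h = np.maximum(convolve2d(h, f2, mode='same'), 0)         # mapping + ReLU
    return convolve2d(h, f3, mode='same')                     # reconstruction

# Toy call with random placeholder kernels (a trained model learns these).
rng = np.random.default_rng(0)
img = rng.random((32, 32))
out = srcnn_like(img,
                 rng.standard_normal((9, 9)) * 0.01,
                 rng.standard_normal((1, 1)),
                 rng.standard_normal((5, 5)) * 0.04)
```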

    其他應(yīng)用

    Optical Flow (FlowNet)

    Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.

    Compression Artifacts Reduction

    Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.

    Blur Removal

    Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf, Learning to Deblur, arXiv:1406.7444

    Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015

    Image Deconvolution

    Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, Deep Convolutional Neural Network for Image Deconvolution, NIPS, 2014.

    Deep Edge-Aware Filter

    Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.

    Computing the Stereo Matching Cost with a Convolutional Neural Network

    Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.

    Colorful Image Colorization

    Richard Zhang, Phillip Isola, Alexei A. Efros, Colorful Image Colorization, ECCV, 2016

    Feature Learning by Inpainting

    Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, Context Encoders: Feature Learning by Inpainting, CVPR, 2016

    Edge Detection

    Holistically-Nested Edge Detection

    Saining Xie, Zhuowen Tu, Holistically-Nested Edge Detection, arXiv:1504.06375.

    DeepEdge

    Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.

    DeepContour

    Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR, 2015.

    語(yǔ)義分割

    SEC: Seed, Expand and Constrain

    Alexander Kolesnikov, Christoph Lampert, Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation, ECCV, 2016.

    Adelaide

    Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. (1st ranked in VOC2012)

    Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1508.02108. (4th ranked in VOC2012)

    Deep Parsing Network (DPN)

    Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 (2nd ranked in VOC 2012)

    CentraleSuperBoundaries, INRIA

    Iasonas Kokkinos, Surpassing Humans in Boundary Detection using Deep Learning, arXiv:1411.07386 (4th ranked in VOC 2012)

    BoxSup

    Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640. (6th ranked in VOC2012)

    POSTECH

    Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. (7th ranked in VOC2012)

    Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924.

    Seunghoon Hong, Junhyuk Oh, Bohyung Han, and Honglak Lee, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, arXiv:1512.07928

    Conditional Random Fields as Recurrent Neural Networks

    Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (8th ranked in VOC2012)

    DeepLab

    Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly-and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. (9th ranked in VOC2012)

    Zoom-out

    Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015

    Joint Calibration

    Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation, arXiv:1507.01581.

    Fully Convolutional Networks for Semantic Segmentation

    Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.

    Hypercolumn

    Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015.

    Deep Hierarchical Parsing

    Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015.

    Learning Hierarchical Features for Scene Labeling

    Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.

    Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.

    University of Cambridge

    Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.” arXiv preprint arXiv:1511.00561, 2015.

    Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla “Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding.” arXiv preprint arXiv:1511.02680, 2015.

    Princeton

    Fisher Yu, Vladlen Koltun, “Multi-Scale Context Aggregation by Dilated Convolutions”, ICLR 2016
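
    Dilated convolutions, the mechanism behind this entry, space the kernel taps `dilation` samples apart, enlarging the receptive field without losing resolution or adding parameters. A 1-D NumPy sketch (the paper uses 2-D dilated convolutions; 1-D keeps the indexing visible):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid 1-D dilated convolution: each output taps len(w) input
    samples spaced `dilation` apart."""
    k = len(w)
    span = (k - 1) * dilation + 1        # receptive field of one output
    n_out = len(x) - span + 1
    y = np.zeros(n_out)
    for i in range(n_out):
        # sample the input every `dilation` steps under the kernel
        y[i] = np.dot(w, x[i : i + span : dilation])
    return y

x = np.arange(10, dtype=float)
print(dilated_conv1d(x, np.array([1.0, 1.0, 1.0]), dilation=2))
# [ 6.  9. 12. 15. 18. 21.]  e.g. x[0] + x[2] + x[4] = 6
```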

    Univ. of Washington, Allen AI

    Hamid Izadinia, Fereshteh Sadeghi, Santosh Kumar Divvala, Yejin Choi, Ali Farhadi, “Segment-Phrase Table for Semantic Segmentation, Visual Entailment and Paraphrasing”, ICCV, 2015

    INRIA

    Iasonas Kokkinos, “Pushing the Boundaries of Boundary Detection Using Deep Learning”, ICLR 2016

    UCSB

    Niloufar Pourian, S. Karthikeyan, and B.S. Manjunath, “Weakly supervised graph based semantic segmentation by learning communities of image-parts”, ICCV, 2015

    Other Resources

    Courses

    Deep Vision

    [Stanford] CS231n: Convolutional Neural Networks for Visual Recognition

    [CUHK] ELEG 5040: Advanced Topics in Signal Processing (Introduction to Deep Learning)

    · More deep learning course recommendations

    [Stanford] CS224d: Deep Learning for Natural Language Processing

    [Oxford] Deep Learning by Prof. Nando de Freitas

    [NYU] Deep Learning by Prof. Yann LeCun

    Books

    Free online books

    Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville

    Neural Networks and Deep Learning by Michael Nielsen

    Deep Learning Tutorial by LISA lab, University of Montreal

    Videos

    Talks

    Deep Learning, Self-Taught Learning and Unsupervised Feature Learning By Andrew Ng

    Recent Developments in Deep Learning By Geoff Hinton

    The Unreasonable Effectiveness of Deep Learning by Yann LeCun

    Deep Learning of Representations by Yoshua Bengio

    Software

    Frameworks

    • Tensorflow: An open source software library for numerical computation using data flow graphs, by Google [Web]
    • Torch7: Deep learning library in Lua, used by Facebook and Google DeepMind [Web]
    • Torch-based deep learning libraries: [torchnet],
    • Caffe: Deep learning framework by the BVLC [Web]
    • Theano: Mathematical library in Python, maintained by LISA lab [Web]
    • Theano-based deep learning libraries: [Pylearn2], [Blocks], [Keras], [Lasagne]
    • MatConvNet: CNNs for MATLAB [Web]
    • MXNet: A flexible and efficient deep learning library for heterogeneous distributed systems with multi-language support [Web]
    • Deepgaze: A computer vision library for human-computer interaction based on CNNs [Web]

    應(yīng)用

    • 對(duì)抗訓(xùn)練 Code and hyperparameters for the paper “Generative Adversarial Networks” [Web]
    • 理解與可視化 Source code for “Understanding Deep Image Representations by Inverting Them,” CVPR, 2015. [Web]
    • 詞義分割 Source code for the paper “Rich feature hierarchies for accurate object detection and semantic segmentation,” CVPR, 2014. [Web] ; Source code for the paper “Fully Convolutional Networks for Semantic Segmentation,” CVPR, 2015. [Web]
    • 超分辨率 Image Super-Resolution for Anime-Style-Art [Web]
    • 邊緣檢測(cè) Source code for the paper “DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection,” CVPR, 2015. [Web]
    • Source code for the paper “Holistically-Nested Edge Detection”, ICCV 2015. [Web]

    Tutorials

    • [CVPR 2014] Tutorial on Deep Learning in Computer Vision
    • [CVPR 2015] Applied Deep Learning for Computer Vision with Torch

    Blogs

    • Deep down the rabbit hole: CVPR 2015 and beyond@Tombone’s Computer Vision Blog
    • CVPR recap and where we’re going@Zoya Bylinskii (MIT PhD Student)’s Blog
    • Facebook’s AI Painting@Wired
    • Inceptionism: Going Deeper into Neural Networks@Google Research
    • Implementing Neural Networks
    新人創(chuàng)作打卡挑戰(zhàn)賽發(fā)博客就能抽獎(jiǎng)!定制產(chǎn)品紅包拿不停!
