A Roundup of the Top 100 Most Influential Computer Vision Papers, from ResNet to AlexNet
Compiled by 新智元 (AI Era)
Source: GitHub
Compiled and edited by: the 新智元 editorial team
[新智元 Introduction] Computer vision has advanced rapidly in recent years and represents one of the frontier directions of deep learning research. This article surveys the major milestones in computer vision from 2012 to 2017, focusing on papers and other practical resources, with links attached. It covers more than one hundred papers, organized into topics including ImageNet classification, object detection, object tracking, object recognition, image and language, and image generation.
This February, 新智元 introduced the 100 most-cited deep learning papers of the past five years, covering ten subfields including optimization/training methods, unsupervised/generative models, convolutional network models, and image segmentation/object detection.
That list of the 100 most-cited deep learning papers is an open-source project on GitHub that any member of the community can contribute to. Through it, we discovered another project, Deep Vision, a collection of computer vision resources that gathers the papers, books, and blogs that have had the greatest influence on the field in recent years. In its papers section, the authors likewise organize the material into several directions, including ImageNet classification, object detection, object tracking, object recognition, image and language, and image generation.
Classic Papers
ImageNet Classification
Object Detection
Object Tracking
Low-Level Vision
Super-Resolution
Other Applications
Edge Detection
Semantic Segmentation
Visual Attention and Saliency
Object Recognition
Human Pose Estimation
Understanding CNNs
Image and Language
Image Captioning
Video Captioning
Question Answering
Image Generation
The figure above is a heat map generated from keywords drawn from these papers, authors, and institutions.
ImageNet Classification
Image source: the AlexNet paper
Microsoft ResNet
Paper: Deep Residual Learning for Image Recognition
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
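To make the core idea concrete, here is a minimal sketch of a basic residual block, written for this article as a PyTorch illustration (it is not the authors' code and omits downsampling/projection shortcuts): the block learns a residual mapping F(x) and adds it back to the input through an identity shortcut, so the output is ReLU(F(x) + x).

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """A minimal residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: gradients flow directly to x

# Example: a 64-channel block applied to a 56x56 feature map
# y = BasicResidualBlock(64)(torch.randn(1, 64, 56, 56))
```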
Microsoft PReLU (parametric rectified linear units / weight initialization)
Paper: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Link: https://arxiv.org/pdf/1502.01852.pdf
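For reference, the PReLU activation studied in this paper keeps positive inputs unchanged and scales negative inputs by a coefficient a_i that is learned jointly with the network weights (in LaTeX notation):

$$f(x_i) = \begin{cases} x_i, & x_i > 0 \\ a_i\, x_i, & x_i \le 0 \end{cases}$$

Setting a_i = 0 recovers the ordinary ReLU.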
Google Batch Normalization
Paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Authors: Sergey Ioffe, Christian Szegedy
Link: https://arxiv.org/pdf/1502.03167.pdf
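For reference, the batch normalization transform proposed in the paper normalizes each activation over a mini-batch B = {x_1, ..., x_m} and then applies a learned scale and shift:

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma\,\hat{x}_i + \beta$$

where \mu_B and \sigma_B^2 are the mini-batch mean and variance, \epsilon is a small constant for numerical stability, and \gamma and \beta are learned parameters.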
Google GoogLeNet
Paper: Going Deeper with Convolutions, CVPR 2015
Authors: Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
Link: https://arxiv.org/pdf/1409.4842.pdf
Oxford VGG-Net
Paper: Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR 2015
Authors: Karen Simonyan & Andrew Zisserman
Link: https://arxiv.org/pdf/1409.1556.pdf
AlexNet
Paper: ImageNet Classification with Deep Convolutional Neural Networks
Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
Object Detection
Image source: the Faster R-CNN paper
PVANET
Paper: PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection
Authors: Kye-Hyeon Kim, Sanghoon Hong, Byungseok Roh, Yeongjae Cheon, Minje Park
Link: https://arxiv.org/pdf/1608.08021
NYU OverFeat
Paper: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, ICLR 2014
Authors: Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun
Link: https://arxiv.org/pdf/1312.6229.pdf
Berkeley R-CNN
Paper: Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, CVPR 2014
Authors: Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik
Link: http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf
Microsoft SPP
Paper: Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV 2014
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Link: https://arxiv.org/pdf/1406.4729.pdf
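To make the pooling idea concrete, below is a minimal sketch (a PyTorch illustration written for this article, not the authors' code): convolutional feature maps of arbitrary spatial size are max-pooled over a small pyramid of grids and concatenated into a fixed-length vector, so fully connected layers can follow regardless of the input image size.

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(features, levels=(1, 2, 4)):
    """Pool a (N, C, H, W) feature map into a fixed-length vector per image."""
    pooled = []
    for n in levels:
        # max-pool into an n x n grid regardless of the input H and W
        grid = F.adaptive_max_pool2d(features, output_size=n)
        pooled.append(grid.flatten(start_dim=1))  # shape (N, C * n * n)
    return torch.cat(pooled, dim=1)

# Example: whatever H and W are, the output length is C * (1 + 4 + 16)
# spatial_pyramid_pool(torch.randn(2, 256, 13, 17)).shape == (2, 256 * 21)
```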
Microsoft Fast R-CNN
Paper: Fast R-CNN
Author: Ross Girshick
Link: https://arxiv.org/pdf/1504.08083.pdf
Microsoft Faster R-CNN
Paper: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
Authors: Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun
Link: https://arxiv.org/pdf/1506.01497.pdf
Oxford R-CNN minus R
Paper: R-CNN minus R
Authors: Karel Lenc, Andrea Vedaldi
Link: https://arxiv.org/pdf/1506.06981.pdf
End-to-End People Detection
Paper: End-to-end People Detection in Crowded Scenes
Authors: Russell Stewart, Mykhaylo Andriluka
Link: https://arxiv.org/pdf/1506.04878.pdf
Real-Time Object Detection (YOLO)
Paper: You Only Look Once: Unified, Real-Time Object Detection
Authors: Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi
Link: https://arxiv.org/pdf/1506.02640.pdf
Inside-Outside Net
Paper: Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks
Authors: Sean Bell, C. Lawrence Zitnick, Kavita Bala, Ross Girshick
Link: https://arxiv.org/abs/1512.04143
R-FCN
Paper: R-FCN: Object Detection via Region-based Fully Convolutional Networks
Authors: Jifeng Dai, Yi Li, Kaiming He, Jian Sun
Link: https://arxiv.org/abs/1605.06409
SSD
Paper: SSD: Single Shot MultiBox Detector
Authors: Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg
Link: https://arxiv.org/pdf/1512.02325v2.pdf
Speed/Accuracy Trade-offs
Paper: Speed/accuracy trade-offs for modern convolutional object detectors
Authors: Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy
Link: https://arxiv.org/pdf/1611.10012v1.pdf
Object Tracking
Paper: Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network
Authors: Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han
Link: arXiv:1502.06796
Paper: DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking
Authors: Hanxi Li, Yi Li, Fatih Porikli
Published: BMVC, 2014.
Paper: Learning a Deep Compact Image Representation for Visual Tracking
Authors: N. Wang, D. Y. Yeung
Published: NIPS, 2013.
Paper: Hierarchical Convolutional Features for Visual Tracking
Authors: Chao Ma, Jia-Bin Huang, Xiaokang Yang, Ming-Hsuan Yang
Published: ICCV 2015
Paper: Visual Tracking with Fully Convolutional Networks
Authors: Lijun Wang, Wanli Ouyang, Xiaogang Wang, Huchuan Lu
Published: ICCV 2015
Paper: Learning Multi-Domain Convolutional Neural Networks for Visual Tracking
Authors: Hyeonseob Nam, Bohyung Han
Object Recognition
Paper: Weakly-Supervised Learning with Convolutional Neural Networks
Authors: Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic, CVPR, 2015
Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Oquab_Is_Object_Localization_2015_CVPR_paper.pdf
FV-CNN
Paper: Deep Filter Banks for Texture Recognition and Segmentation
Authors: Mircea Cimpoi, Subhransu Maji, Andrea Vedaldi, CVPR, 2015.
Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Cimpoi_Deep_Filter_Banks_2015_CVPR_paper.pdf
Human Pose Estimation
Paper: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
Authors: Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, CVPR, 2017.
Paper: DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation
Authors: Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele, CVPR, 2016.
Paper: Convolutional Pose Machines
Authors: Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh, CVPR, 2016.
Paper: Stacked Hourglass Networks for Human Pose Estimation
Authors: Alejandro Newell, Kaiyu Yang, and Jia Deng, ECCV, 2016.
Paper: Flowing ConvNets for Human Pose Estimation in Videos
Authors: Tomas Pfister, James Charles, and Andrew Zisserman, ICCV, 2015.
Paper: Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation
Authors: Jonathan J. Tompson, Arjun Jain, Yann LeCun, Christoph Bregler, NIPS, 2014.
Understanding CNNs
Paper: Understanding Image Representations by Measuring Their Equivariance and Equivalence
Authors: Karel Lenc, Andrea Vedaldi, CVPR, 2015.
Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Lenc_Understanding_Image_Representations_2015_CVPR_paper.pdf
Paper: Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images
Authors: Anh Nguyen, Jason Yosinski, Jeff Clune, CVPR, 2015.
Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf
Paper: Understanding Deep Image Representations by Inverting Them
Authors: Aravindh Mahendran, Andrea Vedaldi, CVPR, 2015
Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf
Paper: Object Detectors Emerge in Deep Scene CNNs
Authors: Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, ICLR, 2015.
Paper: Inverting Visual Representations with Convolutional Networks
Authors: Alexey Dosovitskiy, Thomas Brox, arXiv, 2015.
Paper: Visualizing and Understanding Convolutional Networks
Authors: Matthew Zeiler, Rob Fergus, ECCV, 2014.
Link: https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf
Image and Language
Image Captioning
Figure: (from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.)
UCLA / Baidu
Explain Images with Multimodal Recurrent Neural Networks
Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, arXiv:1410.1090
Toronto
Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models
Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, arXiv:1411.2539.
Berkeley
Long-term Recurrent Convolutional Networks for Visual Recognition and Description
Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, arXiv:1411.4389.
Google
Show and Tell: A Neural Image Caption Generator
Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, arXiv:1411.4555.
Stanford
Deep Visual-Semantic Alignments for Generating Image Descriptions
Andrej Karpathy, Li Fei-Fei, CVPR, 2015.
UML / UT
Translating Videos to Natural Language Using Deep Recurrent Neural Networks
Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, NAACL-HLT, 2015.
CMU / Microsoft
Learning a Recurrent Visual Representation for Image Caption Generation
Xinlei Chen, C. Lawrence Zitnick, arXiv:1411.5654.
Xinlei Chen, C. Lawrence Zitnick, Mind's Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR 2015
http://www.cs.cmu.edu/~xinleic/papers/cvpr15_rnn.pdf
Microsoft
From Captions to Visual Concepts and Back
Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, CVPR, 2015.
Univ. Montreal / Univ. Toronto
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, arXiv:1502.03044 / ICML 2015
http://www.cs.toronto.edu/~zemel/documents/captionAttn.pdf
Idiap / EPFL / Facebook
Phrase-based Image Captioning
Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, arXiv:1502.03671 / ICML 2015
UCLA / Baidu
Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images
Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, arXiv:1504.06692
MS + Berkeley
Exploring Nearest Neighbor Approaches for Image Captioning
Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, arXiv:1505.04467
Language Models for Image Captioning: The Quirks and What Works
Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, arXiv:1505.01809
Adelaide
Image Captioning with an Intermediate Attributes Layer
Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, arXiv:1506.01144
Tilburg
Learning Language through Pictures
Grzegorz Chrupala, Akos Kadar, Afra Alishahi, arXiv:1506.03694
Univ. Montreal
Describing Multimedia Content using Attention-based Encoder-Decoder Networks
Kyunghyun Cho, Aaron Courville, Yoshua Bengio, arXiv:1507.01053
Cornell
Image Representations and New Domains in Neural Image Captioning
Jack Hessel, Nicolas Savva, Michael J. Wilber, arXiv:1508.02091
MS + City Univ. of Hong Kong
Learning Query and Image Similarities with Ranking Canonical Correlation Analysis
Ting Yao, Tao Mei, and Chong-Wah Ngo, ICCV, 2015
Video Captioning
Berkeley
Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.
UT / UML / Berkeley
Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.
Microsoft
Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.
UT / UML / Berkeley
Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487.
Univ. Montreal / Univ. Sherbrooke
Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029
MPI / Berkeley
Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698
Univ. Toronto / MIT
Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724
Univ. Montreal
Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053
TAU / USC
Dotan Kaufman, Gil Levi, Tal Hassner, Lior Wolf, Temporal Tessellation for Video Annotation and Summarization, arXiv:1612.06950.
Image Generation
Convolutional / Recurrent Networks
Paper: Conditional Image Generation with PixelCNN Decoders
Authors: Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu
Paper: Learning to Generate Chairs with Convolutional Neural Networks
Authors: Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox
Published: CVPR, 2015.
Paper: DRAW: A Recurrent Neural Network For Image Generation
Authors: Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
Published: ICML, 2015.
Adversarial Networks
Paper: Generative Adversarial Networks
Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
Published: NIPS, 2014.
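For reference, the paper casts training as a two-player minimax game between a generator G and a discriminator D, optimizing

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] \; + \; \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

where D is trained to tell real samples from generated ones and G is trained to fool D.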
Paper: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
Authors: Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus
Published: NIPS, 2015.
Paper: A Note on the Evaluation of Generative Models
Authors: Lucas Theis, Aäron van den Oord, Matthias Bethge
Published: ICLR 2016.
Paper: Variationally Auto-Encoded Deep Gaussian Processes
Authors: Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence
Published: ICLR 2016.
Paper: Generating Images from Captions with Attention
Authors: Elman Mansimov, Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov
Published: ICLR 2016
Paper: Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Author: Jost Tobias Springenberg
Published: ICLR 2016
Paper: Censoring Representations with an Adversary
Authors: Harrison Edwards, Amos Storkey
Published: ICLR 2016
Paper: Distributional Smoothing with Virtual Adversarial Training
Authors: Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii
Published: ICLR 2016
Paper: Generative Visual Manipulation on the Natural Image Manifold
Authors: Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros
Published: ECCV 2016.
Paper: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
Authors: Alec Radford, Luke Metz, Soumith Chintala
Published: ICLR 2016
Question Answering
Figure: (from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop)
Virginia Tech / Microsoft Research
Paper: VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop.
Authors: Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh
MPI / Berkeley
Paper: Ask Your Neurons: A Neural-based Approach to Answering Questions about Images
Authors: Mateusz Malinowski, Marcus Rohrbach, Mario Fritz
Published: arXiv:1505.01121.
Toronto
Paper: Image Question Answering: A Visual Semantic Embedding Model and a New Dataset
Authors: Mengye Ren, Ryan Kiros, Richard Zemel
Published: arXiv:1505.02074 / ICML 2015 deep learning workshop.
Baidu / UCLA
Paper: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering
Authors: Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu
Published: arXiv:1505.05612.
POSTECH (Korea)
Paper: Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction
Authors: Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han
Published: arXiv:1511.05765
CMU / Microsoft Research
Paper: Stacked Attention Networks for Image Question Answering
Authors: Yang, Z., He, X., Gao, J., Deng, L., & Smola, A. (2015)
Published: arXiv:1511.02274.
MetaMind
Paper: Dynamic Memory Networks for Visual and Textual Question Answering
Authors: Caiming Xiong, Stephen Merity, and Richard Socher
Published: arXiv:1603.01417 (2016).
Seoul National University + NAVER
Paper: Multimodal Residual Learning for Visual QA
Authors: Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang
Published: arXiv:1606.01455
UC Berkeley + Sony
Paper: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
Authors: Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach
Published: arXiv:1606.01847
POSTECH
Paper: Training Recurrent Answering Units with Joint Loss Minimization for VQA
Authors: Hyeonwoo Noh and Bohyung Han
Published: arXiv:1606.03647
Seoul National University + NAVER
Paper: Hadamard Product for Low-rank Bilinear Pooling
Authors: Jin-Hwa Kim, Kyoung Woon On, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang
Published: arXiv:1610.04325.
Visual Attention and Saliency
Mr-CNN
Learning a Sequential Search for Landmarks
Multiple Object Recognition with Visual Attention
Recurrent Models of Visual Attention
Paper: Predicting Eye Fixations using Convolutional Neural Networks
Authors: Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu
Published: CVPR, 2015.
Paper: Learning a Sequential Search for Landmarks
Authors: Saurabh Singh, Derek Hoiem, David Forsyth
Published: CVPR, 2015.
Paper: Multiple Object Recognition with Visual Attention
Authors: Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu
Published: ICLR, 2015.
Paper: Recurrent Models of Visual Attention
Authors: Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu
Published: NIPS, 2014.
Low-Level Vision
Super-Resolution
Iterative Image Reconstruction
Sven Behnke: Learning Iterative Image Reconstruction. IJCAI, 2001.
Sven Behnke: Learning Iterative Image Reconstruction in the Neural Abstraction Pyramid. International Journal of Computational Intelligence and Applications, vol. 1, no. 4, pp. 427-438, 2001.
Super-Resolution (SRCNN)
Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.
Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.
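As a rough illustration of how compact the SRCNN pipeline is, here is a minimal sketch (a PyTorch illustration written for this article, assuming the 9-1-5 filter sizes and 64/32 channel widths of the basic setting described in the paper; it is not the authors' code): a bicubically upscaled low-resolution image is mapped to a high-resolution estimate with just three convolutions.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Patch extraction -> non-linear mapping -> reconstruction (9-1-5 setting)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction and representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, upscaled_lr_image):
        # input: the low-resolution image already upsampled (e.g. bicubic) to the target size
        return self.net(upscaled_lr_image)
```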
Very Deep Super-Resolution
Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Accurate Image Super-Resolution Using Very Deep Convolutional Networks, arXiv:1511.04587, 2015.
Deeply-Recursive Convolutional Network
Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Deeply-Recursive Convolutional Network for Image Super-Resolution, arXiv:1511.04491, 2015.
Cascade-Sparse-Coding-Network
Zhaowen Wang, Ding Liu, Wei Han, Jianchao Yang and Thomas S. Huang, Deep Networks for Image Super-Resolution with Sparse Prior. ICCV, 2015.
Perceptual Losses for Super-Resolution
Justin Johnson, Alexandre Alahi, Li Fei-Fei, Perceptual Losses for Real-Time Style Transfer and Super-Resolution, arXiv:1603.08155, 2016.
SRGAN
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, arXiv:1609.04802v3, 2016.
Other Applications
Optical Flow (FlowNet)
Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.
Compression Artifacts Reduction
Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.
Blur Removal
Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf, Learning to Deblur, arXiv:1406.7444
Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015
Image Deconvolution
Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, Deep Convolutional Neural Network for Image Deconvolution, NIPS, 2014.
Deep Edge-Aware Filter
Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.
Computing the Stereo Matching Cost with a Convolutional Neural Network
Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.
Colorful Image Colorization
Richard Zhang, Phillip Isola, Alexei A. Efros, Colorful Image Colorization, ECCV, 2016
Feature Learning by Inpainting
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, Context Encoders: Feature Learning by Inpainting, CVPR, 2016
Edge Detection
Holistically-Nested Edge Detection
Saining Xie, Zhuowen Tu, Holistically-Nested Edge Detection, arXiv:1504.06375.
DeepEdge
Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.
DeepContour
Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR, 2015.
Semantic Segmentation
Image source: the BoxSup paper
SEC: Seed, Expand and Constrain
Alexander Kolesnikov, Christoph Lampert, Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation, ECCV, 2016.
Adelaide
Guosheng Lin, Chunhua Shen, Ian Reid, Anton van dan Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. (1st ranked in VOC2012)
Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1508.02108. (4th ranked in VOC2012)
Deep Parsing Network (DPN)
Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 (2nd ranked in VOC 2012)
CentraleSuperBoundaries, INRIA
Iasonas Kokkinos, Surpassing Humans in Boundary Detection using Deep Learning, arXiv:1511.07386 (4th ranked in VOC 2012)
BoxSup
Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640. (6th ranked in VOC2012)
POSTECH
Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. (7th ranked in VOC2012)
Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924.
Seunghoon Hong, Junhyuk Oh, Bohyung Han, and Honglak Lee, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, arXiv:1512.07928
Conditional Random Fields as Recurrent Neural Networks
Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (8th ranked in VOC2012)
DeepLab
Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly-and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. (9th ranked in VOC2012)
Zoom-out
Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015
Joint Calibration
Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation, arXiv:1507.01581.
Fully Convolutional Networks for Semantic Segmentation
Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.
Hypercolumn
Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015.
Deep Hierarchical Parsing
Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015.
Learning Hierarchical Features for Scene Labeling
Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.
Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.
University of Cambridge
Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." arXiv preprint arXiv:1511.00561, 2015.
Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla "Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding." arXiv preprint arXiv:1511.02680, 2015.
Princeton
Fisher Yu, Vladlen Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions", ICLR 2016
Univ. of Washington, Allen AI
Hamid Izadinia, Fereshteh Sadeghi, Santosh Kumar Divvala, Yejin Choi, Ali Farhadi, "Segment-Phrase Table for Semantic Segmentation, Visual Entailment and Paraphrasing", ICCV, 2015
INRIA
Iasonas Kokkinos, "Pushing the Boundaries of Boundary Detection Using Deep Learning", ICLR 2016
UCSB
Niloufar Pourian, S. Karthikeyan, and B.S. Manjunath, "Weakly supervised graph based semantic segmentation by learning communities of image-parts", ICCV, 2015
Other Resources
Courses
• Deep Vision
[Stanford] CS231n: Convolutional Neural Networks for Visual Recognition
[CUHK] ELEG 5040: Advanced Topics in Signal Processing (Introduction to Deep Learning)
• More Deep Learning Courses
[Stanford] CS224d: Deep Learning for Natural Language Processing
[Oxford] Deep Learning by Prof. Nando de Freitas
[NYU] Deep Learning by Prof. Yann LeCun
Books
Free Online Books
• Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
• Neural Networks and Deep Learning by Michael Nielsen
• Deep Learning Tutorial by LISA lab, University of Montreal
Videos
Talks
• Deep Learning, Self-Taught Learning and Unsupervised Feature Learning by Andrew Ng
• Recent Developments in Deep Learning by Geoff Hinton
• The Unreasonable Effectiveness of Deep Learning by Yann LeCun
• Deep Learning of Representations by Yoshua Bengio
Software
Frameworks
• TensorFlow: An open source software library for numerical computation using data flow graphs, by Google [Web]
• Torch7: Deep learning library in Lua, used by Facebook and Google DeepMind [Web]
• Torch-based deep learning libraries: [torchnet]
• Caffe: Deep learning framework by the BVLC [Web]
• Theano: Mathematical library in Python, maintained by LISA lab [Web]
• Theano-based deep learning libraries: [Pylearn2], [Blocks], [Keras], [Lasagne]
• MatConvNet: CNNs for MATLAB [Web]
• MXNet: A flexible and efficient deep learning library for heterogeneous distributed systems with multi-language support [Web]
• Deepgaze: A computer vision library for human-computer interaction based on CNNs [Web]
Applications
• Adversarial training: code and hyperparameters for the paper "Generative Adversarial Networks" [Web]
• Understanding and visualizing: source code for "Understanding Deep Image Representations by Inverting Them," CVPR, 2015 [Web]
• Semantic segmentation: source code for the paper "Rich feature hierarchies for accurate object detection and semantic segmentation," CVPR, 2014 [Web]; source code for the paper "Fully Convolutional Networks for Semantic Segmentation," CVPR, 2015 [Web]
• Super-resolution: Image Super-Resolution for Anime-Style Art [Web]
• Edge detection: source code for the paper "DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection," CVPR, 2015 [Web]; source code for the paper "Holistically-Nested Edge Detection," ICCV 2015 [Web]
Tutorials
• [CVPR 2014] Tutorial on Deep Learning in Computer Vision
• [CVPR 2015] Applied Deep Learning for Computer Vision with Torch
Blogs
• Deep down the rabbit hole: CVPR 2015 and beyond @ Tombone's Computer Vision Blog
• CVPR recap and where we're going @ Zoya Bylinskii (MIT PhD student)'s blog
• Facebook's AI Painting @ Wired
• Inceptionism: Going Deeper into Neural Networks @ Google Research
• Implementing Neural Networks