
ICML 2018 Accepted Papers Announced! An Overview of the Hottest Machine Learning Papers and Research Trends

Recommended by 新智元

Source: 專知 (Zhuanzhi)

Editor: 克雷格 (Craig)

[新智元 Editor's Note] ICML 2018 announced its list of accepted papers last week, and organizations and leading researchers, including Google Brain, DeepMind, Facebook, Microsoft, and major universities, have been celebrating on Twitter and sharing their accepted papers. Congratulations! We have compiled some of the papers drawing the most attention on Twitter to give readers a look at the hottest current research directions in machine learning.

1. Differentiable Dynamic Programming for Structured Prediction and Attention

The hottest paper is this one. Its first author, Arthur Mensch, is from Inria Parietal in France and is also one of the authors of scikit-learn. The paper is about differentiable dynamic programming for structured prediction and attention.

The author highlights: sparsity and backpropagation in CRF-like inference layers using max-smoothing, with applications to text and time series (NER, NMT, DTW).

The tweet has received about 600 likes on Twitter so far.

Paper link:

http://www.zhuanzhi.ai/document/34c4176a60e002b524b56b5114db0e78
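To make the max-smoothing idea concrete, here is a minimal sketch (not the authors' implementation): it replaces the hard max inside a DTW-style dynamic program with an entropy-smoothed max whose gradient is a softmax, which is what makes the whole recursion differentiable. The toy scoring matrix and the `soft_alignment_score` parameterization below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of smoothed-max dynamic programming (illustrative only).
import numpy as np

def smoothed_max(x, gamma=1.0):
    """Entropy-smoothed max: gamma * logsumexp(x / gamma); recovers hard max as gamma -> 0."""
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + gamma * np.log(np.exp((x - m) / gamma).sum())

def smoothed_argmax(x, gamma=1.0):
    """Gradient of smoothed_max: a softmax distribution over the inputs."""
    x = np.asarray(x, dtype=float)
    z = np.exp((x - x.max()) / gamma)
    return z / z.sum()

def soft_alignment_score(theta, gamma=1.0):
    """DTW-like DP where the hard max over predecessor cells is smoothed.

    theta: (m, n) matrix of per-cell alignment scores (a toy assumption,
    not the paper's exact parameterization).
    """
    m, n = theta.shape
    V = np.full((m + 1, n + 1), -np.inf)
    V[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            prev = np.array([V[i - 1, j], V[i, j - 1], V[i - 1, j - 1]])
            V[i, j] = theta[i - 1, j - 1] + smoothed_max(prev, gamma)
    return V[m, n]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = rng.normal(size=(4, 5))
    print("smoothed DP value:", soft_alignment_score(theta, gamma=0.5))
    print("soft argmax of [1, 2, 3]:", smoothed_argmax([1.0, 2.0, 3.0], gamma=0.1))
```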

One commenter rated it very highly: "one of the most innovative deep learning papers!"

We encourage everyone to read it!

2. WaveRNN and Parallel WaveNet

Two papers from DeepMind on speech synthesis.

WaveRNN: http://arxiv.org/abs/1802.08435

Parallel WaveNet: http://arxiv.org/abs/1711.10433

WaveNet is already well known; the parallel version is about a thousand times faster than the original, produces more natural-sounding speech, and is already used in Google's own product, Google Assistant.

3. Analysis of GAN Performance

From the Goodfellow team at Google Brain: "Is Generator Conditioning Causally Related to GAN Performance?" The findings: (1) the spectrum of the generator's input-output Jacobian predicts the Inception Score; (2) intervening to change this spectrum strongly affects the scores.

Paper link: https://t.co/cXQDEE2Uee
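As a rough illustration of the quantity being studied, the sketch below (not the paper's code) computes the singular-value spectrum of the generator's input-output Jacobian dG(z)/dz for a toy two-layer generator, averaged over sampled latent points. The generator architecture and dimensions used here are illustrative assumptions.

```python
# Toy illustration: singular-value spectrum of a generator's Jacobian.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, hidden_dim, data_dim = 8, 32, 16

# Toy two-layer generator G(z) = W2 @ tanh(W1 @ z + b1)
W1 = rng.normal(scale=0.5, size=(hidden_dim, latent_dim))
b1 = rng.normal(scale=0.1, size=hidden_dim)
W2 = rng.normal(scale=0.5, size=(data_dim, hidden_dim))

def generator_jacobian(z):
    """Analytic Jacobian dG(z)/dz of the toy generator at latent point z."""
    h = np.tanh(W1 @ z + b1)
    # d tanh(u)/du = 1 - tanh(u)^2, applied elementwise on the hidden layer
    return W2 @ np.diag(1.0 - h ** 2) @ W1

# Average the singular-value spectrum over sampled latent points as a
# rough proxy for the generator's "conditioning".
spectra = []
for _ in range(100):
    z = rng.normal(size=latent_dim)
    spectra.append(np.linalg.svd(generator_jacobian(z), compute_uv=False))
spectra = np.array(spectra)

print("mean singular values:", np.round(spectra.mean(axis=0), 3))
print("mean condition number:", (spectra[:, 0] / spectra[:, -1]).mean())
```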

4. Analysis of the Adam Optimizer

Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients

Paper link: https://arxiv.org/abs/1705.07774
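To make the quantities in the title concrete, here is a minimal sketch of a standard Adam step, written so that the sign and the magnitude of the update are explicit. The hyperparameter defaults are the usual ones from the Adam paper, chosen here as an assumption rather than taken from this article.

```python
# Minimal sketch: one Adam step, factored into sign and magnitude.
import numpy as np

def adam_step(g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for gradient g; returns (step, m, v)."""
    m = beta1 * m + (1 - beta1) * g          # running mean of gradients
    v = beta2 * v + (1 - beta2) * g ** 2     # running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    step = -lr * m_hat / (np.sqrt(v_hat) + eps)
    return step, m, v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.normal(size=5)                   # a toy stochastic gradient
    step, m, v = adam_step(g, m=np.zeros(5), v=np.zeros(5), t=1)
    # Factor the update into its sign and its magnitude.
    print("sign of the step:     ", np.sign(step))
    print("magnitude of the step:", np.abs(step))
```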

5. Image Transformer

Paper link: https://arxiv.org/abs/1802.05751

Other accepted papers:


Bayesian Quadrature for Multiple Related Integrals

https://arxiv.org/abs/1801.04153

Stein Points

https://arxiv.org/abs/1803.10161

Active Learning with Logged Data

https://arxiv.org/abs/1802.09069

Analyzing the Robustness of Nearest Neighbors to Adversarial Examples

https://arxiv.org/abs/1706.03922

Hierarchical Imitation and Reinforcement Learning

https://arxiv.org/abs/1803.00590

Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model

https://arxiv.org/abs/1802.04551

Detecting and Correcting for Label Shift with Black Box Predictors

https://arxiv.org/abs/1802.03916

Yes, but Did It Work?: Evaluating Variational Inference

https://arxiv.org/abs/1802.02538

MAGAN: Aligning Biological Manifolds

https://arxiv.org/abs/1803.00385

Does Distributionally Robust Supervised Learning Give Robust Classifiers?

https://arxiv.org/abs/1611.02041

Knowledge Transfer with Jacobian Matching

https://arxiv.org/abs/1803.00443

Kronecker Recurrent Units

https://arxiv.org/abs/1705.10142

Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors

https://arxiv.org/abs/1712.09376

The Manifold Assumption and Defenses Against Adversarial Perturbations

https://arxiv.org/abs/1711.08001

Overcoming catastrophic forgetting with hard attention to the task

https://arxiv.org/abs/1801.01423

On the Opportunities and Pitfalls of Nesting Monte Carlo Estimators

https://arxiv.org/abs/1709.06181

Tighter Variational Bounds are Not Necessarily Better

https://arxiv.org/abs/1802.04537

LaVAN: Localized and Visible Adversarial Noise

https://arxiv.org/abs/1801.02608

Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples

https://arxiv.org/abs/1711.09576

Geometry Score: A Method For Comparing Generative Adversarial Networks

https://arxiv.org/abs/1802.02664

(This article is reprinted with permission from 專知, WeChat ID: Quan_Zhuanzhi)
