Text-2015-01-14
Latest revision as of 06:47, 12 January 2015 (Monday)
share paper
- E. Strubell, L. Vilnis, and A. McCallum, "Training for fast sequential prediction using dynamic feature selection" [1] (Dong Wang)
- "Predictive Property of Hidden Representations in Recurrent Neural Network Language Models." (Xiaoxi Wang)
- "embedding word tokens using a linear dynamical system" [2] (Bin Yuan)
choose paper
list paper
Deep Learning and Representation Learning Workshop: NIPS 2014 -- Accepted papers
- Oral presentations:
cuDNN: Efficient Primitives for Deep Learning (#49) - Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, Evan Shelhamer
Distilling the Knowledge in a Neural Network (#65) - Geoffrey Hinton, Oriol Vinyals, Jeff Dean
Supervised Learning in Dynamic Bayesian Networks (#54) - Shamim Nemati, Ryan Adams
Deeply-Supervised Nets (#2) - Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, Zhuowen Tu
- Posters, morning session (11:30-14:45):
Unsupervised Feature Learning from Temporal Data (#3) - Ross Goroshin, Joan Bruna, Arthur Szlam, Jonathan Tompson, David Eigen, Yann LeCun
Autoencoder Trees (#5) - Ozan Irsoy, Ethem Alpaydin
Scheduled denoising autoencoders (#6) - Krzysztof Geras, Charles Sutton
Learning to Deblur (#8) - Christian Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf
A Winner-Take-All Method for Training Sparse Convolutional Autoencoders (#10) - Alireza Makhzani, Brendan Frey
"Mental Rotation" by Optimizing Transforming Distance (#11) - Weiguang Ding, Graham Taylor
On Importance of Base Model Covariance for Annealing Gaussian RBMs (#12) - Taichi Kiwaki, Kazuyuki Aihara
Ultrasound Standard Plane Localization via Spatio-Temporal Feature Learning with Knowledge Transfer (#14) - Hao Chen, Dong Ni, Ling Wu, Sheng Li, Pheng Heng
Understanding Locally Competitive Networks (#15) - Rupesh Srivastava, Jonathan Masci, Faustino Gomez, Jurgen Schmidhuber
Unsupervised pre-training speeds up the search for good features: an analysis of a simplified model of neural network learning (#18) - Avraham Ruderman
Analyzing Feature Extraction by Contrastive Divergence Learning in RBMs (#19) - Ryo Karakida, Masato Okada, Shun-ichi Amari
Deep Tempering (#20) - Guillaume Desjardins, Heng Luo, Aaron Courville, Yoshua Bengio
Learning Word Representations with Hierarchical Sparse Coding (#21) - Dani Yogatama, Manaal Faruqui, Chris Dyer, Noah Smith
Deep Learning as an Opportunity in Virtual Screening (#23) - Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Wenger, Hugo Ceulemans, Sepp Hochreiter
Revisit Long Short-Term Memory: An Optimization Perspective (#24) - Qi Lyu, J Zhu
Locally Scale-Invariant Convolutional Neural Networks (#26) - Angjoo Kanazawa, David Jacobs, Abhishek Sharma
Deep Exponential Families (#28) - Rajesh Ranganath, Linpeng Tang, Laurent Charlin, David Blei
Techniques for Learning Binary Stochastic Feedforward Neural Networks (#29) - Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh
Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition (#30) - Phong Le, Willem Zuidema
Deep Multi-Instance Transfer Learning (#32) - Dimitrios Kotzias, Misha Denil, Phil Blunsom, Nando De Freitas
Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models (#33) - Ryan Kiros, Ruslan Salakhutdinov, Richard Zemel
Retrofitting Word Vectors to Semantic Lexicons (#34) - Manaal Faruqui, Jesse Dodge, Sujay Jauhar, Chris Dyer, Eduard Hovy, Noah Smith
Deep Sequential Neural Network (#35) - Ludovic Denoyer, Patrick Gallinari
Efficient Training Strategies for Deep Neural Network Language Models (#71) - Holger Schwenk
- Posters, afternoon session (17:00-18:30):
Deep Learning for Answer Sentence Selection (#36) - Lei Yu, Karl Moritz Hermann, Phil Blunsom, Stephen Pulman
Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition (#37) - Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
Learning Torque-Driven Manipulation Primitives with a Multilayer Neural Network (#39) - Sergey Levine, Pieter Abbeel
SimNets: A Generalization of Convolutional Networks (#41) - Nadav Cohen, Amnon Shashua
Phonetics embedding learning with side information (#44) - Gabriel Synnaeve, Thomas Schatz, Emmanuel Dupoux
End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results (#45) - Jan Chorowski, Dzmitry Bahdanau, KyungHyun Cho, Yoshua Bengio
BILBOWA: Fast Bilingual Distributed Representations without Word Alignments (#46) - Stephan Gouws, Yoshua Bengio, Greg Corrado
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling (#47) - Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, Yoshua Bengio
Reweighted Wake-Sleep (#48) - Jorg Bornschein, Yoshua Bengio
Explain Images with Multimodal Recurrent Neural Networks (#51) - Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan Yuille
Rectified Factor Networks and Dropout (#53) - Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
Towards Deep Neural Network Architectures Robust to Adversarials (#55) - Shixiang Gu, Luca Rigazio
Making Dropout Invariant to Transformations of Activation Functions and Inputs (#56) - Jimmy Ba, Hui Yuan Xiong, Brendan Frey
Aspect Specific Sentiment Analysis using Hierarchical Deep Learning (#58) - Himabindu Lakkaraju, Richard Socher, Chris Manning
Deep Directed Generative Autoencoders (#59) - Sherjil Ozair, Yoshua Bengio
Conditional Generative Adversarial Nets (#60) - Mehdi Mirza, Simon Osindero
Analyzing the Dynamics of Gated Auto-encoders (#61) - Daniel Im, Graham Taylor
Representation as a Service (#63) - Ouais Alsharif, Joelle Pineau, Philip Bachman
Provable Methods for Training Neural Networks with Sparse Connectivity (#66) - Hanie Sedghi, Anima Anandkumar
Trust Region Policy Optimization (#67) - John D. Schulman, Philipp C. Moritz, Sergey Levine, Michael I. Jordan, Pieter Abbeel
Document Embedding with Paragraph Vectors (#68) - Andrew Dai, Christopher Olah, Quoc Le, Greg Corrado
Backprop-Free Auto-Encoders (#69) - Dong-Hyun Lee, Yoshua Bengio
Rate-Distortion Auto-Encoders (#73) - Luis Sanchez Giraldo, Jose Principe