Text-2015-01-07

Read papers

  • Clustering words by projection entropy [Tianyi Luo]
  • Bootstrapping dialog systems with word embedding [Rong Liu]
  • Neural machine translation by jointly learning to align and translate [Dongxu Zhang]





Deep Learning and Representation Learning Workshop: NIPS 2014

Accepted papers

Oral presentations:


cuDNN: Efficient Primitives for Deep Learning (#49) Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, Evan Shelhamer

Distilling the Knowledge in a Neural Network (#65) Geoffrey Hinton, Oriol Vinyals, Jeff Dean

Supervised Learning in Dynamic Bayesian Networks (#54) Shamim Nemati, Ryan Adams

Deeply-Supervised Nets (#2) Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, Zhuowen Tu


Posters, morning session (11:30-14:45):

Unsupervised Feature Learning from Temporal Data (#3) Ross Goroshin, Joan Bruna, Arthur Szlam, Jonathan Tompson, David Eigen, Yann LeCun

Autoencoder Trees (#5) Ozan Irsoy, Ethem Alpaydin

Scheduled denoising autoencoders (#6) Krzysztof Geras, Charles Sutton

Learning to Deblur (#8) Christian Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf

A Winner-Take-All Method for Training Sparse Convolutional Autoencoders (#10) Alireza Makhzani, Brendan Frey

"Mental Rotation" by Optimizing Transforming Distance (#11) Weiguang Ding, Graham Taylor

On Importance of Base Model Covariance for Annealing Gaussian RBMs (#12) Taichi Kiwaki, Kazuyuki Aihara

Ultrasound Standard Plane Localization via Spatio-Temporal Feature Learning with Knowledge Transfer (#14) Hao Chen, Dong Ni, Ling Wu, Sheng Li, Pheng Heng

Understanding Locally Competitive Networks (#15) Rupesh Srivastava, Jonathan Masci, Faustino Gomez, Jurgen Schmidhuber

Unsupervised pre-training speeds up the search for good features: an analysis of a simplified model of neural network learning (#18) Avraham Ruderman

Analyzing Feature Extraction by Contrastive Divergence Learning in RBMs (#19) Ryo Karakida, Masato Okada, Shun-ichi Amari

Deep Tempering (#20) Guillaume Desjardins, Heng Luo, Aaron Courville, Yoshua Bengio

Learning Word Representations with Hierarchical Sparse Coding (#21) Dani Yogatama, Manaal Faruqui, Chris Dyer, Noah Smith

Deep Learning as an Opportunity in Virtual Screening (#23) Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Wenger, Hugo Ceulemans, Sepp Hochreiter

Revisit Long Short-Term Memory: An Optimization Perspective (#24) Qi Lyu, J Zhu

Locally Scale-Invariant Convolutional Neural Networks (#26) Angjoo Kanazawa, David Jacobs, Abhishek Sharma

Deep Exponential Families (#28) Rajesh Ranganath, Linpeng Tang, Laurent Charlin, David Blei

Techniques for Learning Binary Stochastic Feedforward Neural Networks (#29) Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh

Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition (#30) Phong Le, Willem Zuidema

Deep Multi-Instance Transfer Learning (#32) Dimitrios Kotzias, Misha Denil, Phil Blunsom, Nando De Freitas

Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models (#33) Ryan Kiros, Ruslan Salakhutdinov, Richard Zemel

Retrofitting Word Vectors to Semantic Lexicons (#34) Manaal Faruqui, Jesse Dodge, Sujay Jauhar, Chris Dyer, Eduard Hovy, Noah Smith

Deep Sequential Neural Network (#35) Ludovic Denoyer, Patrick Gallinari

Efficient Training Strategies for Deep Neural Network Language Models (#71) Holger Schwenk



Posters, afternoon session (17:00-18:30):

Deep Learning for Answer Sentence Selection (#36) Lei Yu, Karl Moritz Hermann, Phil Blunsom, Stephen Pulman

Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition (#37) Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman

Learning Torque-Driven Manipulation Primitives with a Multilayer Neural Network (#39) Sergey Levine, Pieter Abbeel

SimNets: A Generalization of Convolutional Networks (#41) Nadav Cohen, Amnon Shashua

Phonetics embedding learning with side information (#44) Gabriel Synnaeve, Thomas Schatz, Emmanuel Dupoux

End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results (#45) Jan Chorowski, Dzmitry Bahdanau, KyungHyun Cho, Yoshua Bengio

BILBOWA: Fast Bilingual Distributed Representations without Word Alignments (#46) Stephan Gouws, Yoshua Bengio, Greg Corrado

Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling (#47) Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, Yoshua Bengio

Reweighted Wake-Sleep (#48) Jorg Bornschein, Yoshua Bengio

Explain Images with Multimodal Recurrent Neural Networks (#51) Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan Yuille

Rectified Factor Networks and Dropout (#53) Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter

Towards Deep Neural Network Architectures Robust to Adversarials (#55) Shixiang Gu, Luca Rigazio

Making Dropout Invariant to Transformations of Activation Functions and Inputs (#56) Jimmy Ba, Hui Yuan Xiong, Brendan Frey

Aspect Specific Sentiment Analysis using Hierarchical Deep Learning (#58) Himabindu Lakkaraju, Richard Socher, Chris Manning

Deep Directed Generative Autoencoders (#59) Sherjil Ozair, Yoshua Bengio

Conditional Generative Adversarial Nets (#60) Mehdi Mirza, Simon Osindero

Analyzing the Dynamics of Gated Auto-encoders (#61) Daniel Im, Graham Taylor

Representation as a Service (#63) Ouais Alsharif, Joelle Pineau, Philip Bachman

Provable Methods for Training Neural Networks with Sparse Connectivity (#66) Hanie Sedghi, Anima Anandkumar

Trust Region Policy Optimization (#67) John D. Schulman, Philipp C. Moritz, Sergey Levine, Michael I. Jordan, Pieter Abbeel

Document Embedding with Paragraph Vectors (#68) Andrew Dai, Christopher Olah, Quoc Le, Greg Corrado

Backprop-Free Auto-Encoders (#69) Dong-Hyun Lee, Yoshua Bengio

Rate-Distortion Auto-Encoders (#73) Luis Sanchez Giraldo, Jose Principe