[Image: Bicstoken.png]

Special session on BICS 2016: Deep and/or Sparse Neural Models for Speech and Language Processing

Introduction

Large-scale deep neural models, e.g., deep neural networks (DNNs) and recurrent neural networks (RNNs), have demonstrated significant success on various challenging speech and language processing (SLP) tasks, including, among others, speech recognition, speech synthesis, document classification, and question answering. This growing impact corroborates the neurobiological evidence for layer-wise deep processing in the human brain.

On the other hand, sparse coding representations have achieved similar success in SLP, particularly in signal processing, suggesting that sparsity is another important neurobiological characteristic, one that may be responsible for the efficient functioning of the human neural system. One question of particular interest to researchers in both areas concerns the interrelationship of these two key aspects, depth and sparsity: do they function independently, or are they intertwined?
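
For readers approaching from the deep learning side, the standard sparse coding objective (a textbook formulation, given here only for reference and not specific to this session) reconstructs a signal x from an overcomplete dictionary D under an l1 penalty:

  \min_{D,\alpha} \; \tfrac{1}{2}\,\lVert x - D\alpha \rVert_2^2 \;+\; \lambda\,\lVert \alpha \rVert_1

where \alpha is the code vector and \lambda trades reconstruction accuracy against sparsity; the l1 term drives most entries of \alpha to zero.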

Traditionally, deep learning and sparse coding have been studied by different research communities. This special session at BICS 2016 (http://bii.ia.ac.cn/bics-2016/index.html) aims to offer a timely opportunity for researchers in the two areas to share their complementary results and methods, and to help promote the development of new theories and methodologies for hybrid deep and sparsity-based models, particularly in the field of SLP.

Scope

This special session addresses recent advances in hybrid deep and sparsity-based neural models, with particular emphasis on SLP. It will provide a forum for scientists and researchers working on deep and sparse computing to learn from each other and to jointly develop new methodologies for next-generation deep-sparse and sparse-deep models and applications. Target research topics of interest include, but are not limited to, the following (a short illustrative sketch follows the list):

  •  Theories and methods for deep-sparse or sparse-deep models
  •  Theories and methods for hybrid deep neural models in SLP
  •  Theories and methods for hybrid sparse models in SLP
  •  Comparative studies of deep/sparse neural and Bayesian-based models
  •  Applications of deep and/or sparse models in SLP
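
As a small illustrative sketch of the first topic above (a hypothetical example of ours, not drawn from the session or any submission): the following self-contained numpy script trains a one-hidden-layer autoencoder with an l1 penalty on its hidden activations, so that the learned neural code is also sparse.

  # Minimal "deep + sparse" hybrid: an autoencoder whose hidden code
  # is regularised toward sparsity with an l1 penalty.
  import numpy as np

  rng = np.random.default_rng(0)

  # Toy data: 200 signals of dimension 20, each a sparse mix of 5 atoms.
  n, d, k = 200, 20, 5
  atoms = rng.normal(size=(k, d))
  coeffs = rng.normal(size=(n, k)) * (rng.random((n, k)) < 0.3)
  X = coeffs @ atoms

  h = 10                                   # hidden (code) dimension
  W1 = rng.normal(scale=0.1, size=(d, h))  # encoder weights
  W2 = rng.normal(scale=0.1, size=(h, d))  # decoder weights
  lam, lr = 0.05, 0.01                     # sparsity weight, learning rate

  for step in range(2000):
      A = np.maximum(X @ W1, 0.0)          # ReLU hidden code
      R = A @ W2                           # reconstruction
      err = R - X
      # Loss: 0.5*||X - A W2||^2 + lam*||A||_1, averaged over samples.
      grad_A = (err @ W2.T + lam * np.sign(A)) * (A > 0)  # ReLU gradient
      W2 -= lr * (A.T @ err) / n
      W1 -= lr * (X.T @ grad_A) / n

  A = np.maximum(X @ W1, 0.0)
  print("reconstruction MSE:", float(np.mean((A @ W2 - X) ** 2)))
  print("fraction of zero activations:", float(np.mean(A == 0)))

The l1 term plays the same role here as in the sparse coding objective above; stacking more hidden layers before the penalised code would turn this sketch into a deeper deep-sparse model.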


Important dates

  • Paper submission: July 20, 2016
  • Acceptance notification: August 10, 2016
  • Camera-ready due: September 10, 2016
  • Special session date at BICS 2016: TBD (conference dates: November 28-30, 2016)

Submission and publication

Organizers

Dong Wang(+), Qiang Zhou(+), and Amir Hussain(*)

  • (+)Center for Speech and Language Technology, Research Institute of Information Technology, Tsinghua University, China
Email: wangdong99@mails.tsinghua.edu.cn; zq-lxd@mail.tsinghua.edu.cn
  • (*)Cognitive Big Data Informatics Research Lab, Computing Science & Maths, School of Natural Science, University of Stirling, Scotland, UK
Email: ahu@cs.stir.ac.uk