ASR-events-BICS16

Special session on BICS: Deep and/or Sparse Neural Models for Speech and Language Processing

Organizers

  • Dong Wang(+), Qiang Zhou(+) and Amir Hussain(*)
  • (+)Center for Speech and Language Technology,

Research Institute of Information Technology

Tsinghua University, China

Email: wangdong99@mails.tsinghua.edu.cn; zq-lxd@mail.tsinghua.edu.cn

  • (*)Cognitive Big Data Informatics Research Lab,

Computing Science & Maths, School of Natural Science,

University of Stirling, Scotland, UK

Email: ahu@cs.stir.ac.uk

Introduction

Large-scale deep neural models, e.g., deep neural networks (DNN) and recurrent neural networks (RNN), have demonstrated significant success in solving various challenging tasks of speech and language processing (SLP), including, amongst others, speech recognition, speech synthesis, document classification, and question answering. This growing impact corroborates the neurobiological evidence concerning the presence of layer-wise deep processing in the human brain.

On the other hand, sparse coding representations have gained similar success in SLP, particularly in signal processing, demonstrating that sparsity is another important neurobiological characteristic that may be responsible for the efficient functioning of the human neural system. One question of particular interest to both neuroscience and sparsity researchers concerns the interrelationship of these two key aspects, depth and sparsity: do they function independently, or are they intertwined?

Traditionally, deep learning and sparse coding have been studied by different research communities. This special session aims to offer a timely opportunity for researchers in the two areas to share their complementary results and methods, and to mutually promote the development of new theories and methodologies for hybrid deep and sparsity-based models, particularly in the field of SLP.
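
As an illustration only (not part of this call), the sketch below shows one simple way depth and sparsity can be combined: a small feed-forward network trained with an L1 penalty on its hidden activations, so that many hidden units stay inactive. All names, hyper-parameters and the toy data are illustrative assumptions, not a reference implementation of any particular method discussed above.

 # Minimal NumPy sketch: a one-hidden-layer network with an L1 sparsity
 # penalty on hidden activations. Purely illustrative toy example.
 import numpy as np
 
 rng = np.random.default_rng(0)
 
 # Toy regression data: 200 samples, 20 input features, 1 target.
 X = rng.normal(size=(200, 20))
 y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(200, 1))
 
 # One hidden layer with ReLU activations.
 W1 = rng.normal(scale=0.1, size=(20, 64))
 b1 = np.zeros(64)
 W2 = rng.normal(scale=0.1, size=(64, 1))
 b2 = np.zeros(1)
 
 lr, l1 = 0.01, 1e-3   # learning rate and sparsity (L1) weight
 
 for step in range(500):
     # Forward pass.
     h = np.maximum(0.0, X @ W1 + b1)          # hidden activations
     pred = h @ W2 + b2
     err = pred - y
 
     # Loss = mean squared error + L1 penalty on hidden activations.
     loss = np.mean(err ** 2) + l1 * np.mean(np.abs(h))
 
     # Backward pass (manual gradients).
     n = X.shape[0]
     d_pred = 2.0 * err / n
     dW2 = h.T @ d_pred
     db2 = d_pred.sum(axis=0)
     d_h = d_pred @ W2.T + l1 * np.sign(h) / (n * h.shape[1])
     d_pre = d_h * (h > 0)                     # ReLU derivative
     dW1 = X.T @ d_pre
     db1 = d_pre.sum(axis=0)
 
     # Gradient descent update.
     W1 -= lr * dW1; b1 -= lr * db1
     W2 -= lr * dW2; b2 -= lr * db2
 
 # Fraction of hidden activations driven to exactly zero by the L1 term.
 sparsity = np.mean(np.maximum(0.0, X @ W1 + b1) == 0.0)
 print(f"final loss {loss:.4f}, fraction of inactive hidden units {sparsity:.2f}")

Raising the assumed l1 weight makes more hidden units inactive at the cost of fit quality, which is the kind of depth/sparsity trade-off the session invites contributions on.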

Scope

This special session addresses recent advances in hybrid deep and sparsity-based neural models, with a particular focus on SLP. It will provide a forum for scientists and researchers working in deep and sparse computing to learn from each other and to jointly develop new methodologies for next-generation deep-sparse and sparse-deep models and applications. Topics of interest include, but are not limited to, the following:

  •  Theories and methods for deep sparse or sparse deep models
  •  Theories and methods for hybrid deep neural models in SLP
  •  Theories and methods for hybrid sparse models in SLP
  •  Comparative study of deep/sparse neural and Bayesian based models
  •  Applications of deep and/or sparse models in SLP

Bios of organizers

  • Dong Wang

 Dr. Dong Wang received his Ph.D. degree from CSTR, University of Edinburgh, in 2010. He worked with Oracle China (2002–2004) and IBM China (2004–2006). From 2010 to 2011, he was with EURECOM as a Postdoctoral Fellow, and from 2011 to 2012 he was a Senior Research Scientist with Nuance. He is now an Assistant Professor at Tsinghua University. Dr. Wang works on speech and language processing, particularly deep neural models for speech recognition and semantic computing. He has published more than 70 academic papers and currently serves as NCMMSC general secretary, APSIPA SLA secretary, ISCSLP 2015 special session co-chair, and ISCSLP 2016 plenary talk co-chair.

  • Qiang Zhou

 Dr. Qiang Zhou received the B.S. degree in Computer Science and Technology from Tsinghua University, Beijing, China, in 1990, and the M.S. and Ph.D. degrees in Computer Science and Technology from Peking University, Beijing, China, in 1993 and 1996, respectively. He joined the State Key Laboratory of Intelligence Technology and Systems, Department of Computer Science and Technology, Tsinghua University, in 1998, after two years of postdoctoral research supervised by Prof. Changning Huang. He is now a senior researcher in the Centre for Speech and Language Technology, Research Institute of Information Technology, Tsinghua University.

  • Amir Hussain

 Prof. Amir Hussain obtained his BEng (with the highest 1st Class Honours) and PhD (in novel neural network architectures and algorithms) from the University of Strathclyde in Glasgow, UK, in 1992 and 1997, respectively. Following a Research Fellowship at the University of Paisley, UK (1996–98), and a Research Lectureship at the University of Dundee, UK (1998–2000), he joined the University of Stirling in 2000, where he is currently Professor of Computing Science and founding Director of the Cognitive Big Data Informatics (CogBDI) Research Laboratory. He has authored nearly 300 publications (including over a dozen books and around 100 journal papers), conducted and led collaborative research with industry, partnered in major European research programmes, and supervised more than 20 PhDs. He is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems and the IEEE Computational Intelligence Magazine, and founding Editor-in-Chief of Springer's Cognitive Computation journal, Springer/BioMed Central's Big Data Analytics journal, SpringerBriefs in Cognitive Computation, and the Springer Book Series on Socio-Affective Computing. He holds several Visiting Professorships and serves as an International Advisor to various governmental higher education and research councils, universities and companies. He has served as invited/keynote speaker and general/program/organizing (co-)chair for over 50 international conferences and workshops, including IEEE WCCI, IJCNN and IEEE SSCI. He is a member of several Technical Committees of the IEEE Computational Intelligence Society (CIS), founding publications co-chair of the INNS Big Data Section and its annual INNS Conference on Big Data, and Chapter Chair of the IEEE UK & RI Industry Applications Society. He is a Senior Fellow of the Brain Sciences Foundation.