Oriental Language Recognition (OLR) 2017 Challenge

Oriental languages exhibit interesting characteristics. The OLR challenge series aims to boost language recognition technology for oriental languages. Following the success of OLR Challenge 2016, the 2017 challenge keeps the same theme but sets more challenging tasks in two respects:

  • More languages: OLR 2016 involved 7 languages; OLR 2017 involves 10.
  • Shorter speech segments: OLR 2017 sets separate tasks for 1-second, 3-second, and 5-second segments.

As with the first OLR challenge, we will publish the results at a special session of APSIPA 2017. See the AP17 special session for more details.

Data

The challenge is based on two multilingual databases: AP16-OL7, which was designed for OLR Challenge 2016, and a new complementary database, AP17-OL3. AP16-OL7 is provided by SpeechOcean (www.speechocean.com); AP17-OL3 is provided by Tsinghua University, Northwest National University, and Xinjiang University, under the M2ASR project supported by the NSFC.


The features of AP16-OL7 are:

  • Mobile channel
  • 7 languages in total
  • 24 speakers (18 for training/development, 6 for test)
  • 71 hours of speech signals in total
  • Transcriptions and lexica are provided
  • The data profile is here
  • The licence for the data is here

The features of AP17-OL3 are:

  • Mobile channel
  • 3 languages in total
  • 24 speakers (18 for training/development, 6 for test)
  • 30 hours of speech signals in total
  • Transcriptions and lexica are provided
  • The data profile is here
  • The licence for the data is here

Evaluation plan

Evaluation tools

  • The Kaldi-based baseline scripts are here
  • The evaluation toolkit is here
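
The OLR series (as in OLR 2016) evaluates systems with the average detection cost C_avg, alongside EER. As a rough illustration only, below is a minimal NumPy sketch of a pair-wise C_avg computation; the score-matrix layout, the hypothetical function name cavg, and the hard decision threshold are assumptions here, and the official evaluation toolkit linked above gives the authoritative definition.

import numpy as np

# Minimal sketch (assumptions noted above): C_avg averages, over each
# target language L_t, the miss rate on L_t trials plus the mean
# false-alarm rate on trials of the other N-1 languages.
def cavg(scores, labels, p_target=0.5, threshold=0.0):
    """scores: (n_utts, n_langs) detection scores, one column per language.
    labels: (n_utts,) integer indices of each utterance's true language.
    A trial is accepted when its score exceeds `threshold` (an assumption)."""
    n_langs = scores.shape[1]
    decisions = scores > threshold
    cost = 0.0
    for t in range(n_langs):                          # target language L_t
        p_miss = 1.0 - decisions[labels == t, t].mean()
        p_fa = np.mean([decisions[labels == nt, t].mean()
                        for nt in range(n_langs) if nt != t])
        cost += p_target * p_miss + (1.0 - p_target) * p_fa
    return cost / n_langs

For a sanity check, random scores over three languages, e.g. cavg(np.random.randn(300, 3), np.repeat([0, 1, 2], 100)), should come out near 0.5, the cost of a chance-level system.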

Participation rules

  • Participants from both academia and industry are welcome
  • Publications based on the data provided by the challenge should cite the following papers:

Dong Wang, Lantian Li, Difei Tang, Qing Chen, AP16-OL7: A multilingual database for oriental languages and a language recognition baseline, APSIPA 2016.

Zhiyuan Tang, Dong Wang, Yixiang Chen, Qing Chen, AP17-OLR: Data, plan, and baseline, submitted to APSIPA 2017.

Important dates

  • June 11: AP17-OL7 training data release
  • Oct. 1: test data release
  • Oct. 2, 12:00 PM Beijing time: submission deadline
  • APSIPA 2017: results announcement

Registration procedure

If you are interested in participating in the challenge, or if you have any other questions, comments, or suggestions about the challenge, please send an email to the organizers:

  • Dr. Zhiyuan Tang (tangzy@cslt.riit.tsinghua.edu.cn)
  • Dr. Dong Wang (wangdong99@mails.tsinghua.edu.cn)
  • Ms. Qing Chen (chenqing@speechocean.com)

Organizers

  • Zhiyuan Tang, Tsinghua University
  • Dong Wang, Tsinghua University
  • Qing Chen, SpeechOcean