
Oriental Language Recognition (OLR) 2021 Challenge

Oriental languages exhibit interesting characteristics. The OLR challenge series aims to improve the performance of language recognition and speech recognition systems in multilingual scenarios. Following the success of OLR Challenge 2016, OLR Challenge 2017, OLR Challenge 2018, OLR Challenge 2019, and OLR Challenge 2020, the new challenge in 2021 focuses on more practical and challenging problems, with four tasks:

  • Task 1: constrained LID is a closed-set identification task, which means the language of each utterance is among the 13 known target languages, but utterances were recorded in different environments. Only the data provided by the organizers can be used to build the LID system (a minimal closed-set decision sketch follows this list).
  • Task 2: unconstrained LID is also a closed-set identification task, but the test data come from the wild, i.e., utterances are obtained from real-life environments, which makes it more challenging than the constrained task. In this task, any data (except the evaluation data) you can access may be used to build the system.
  • Task 3: constrained multilingual ASR is a task with constrained data resources: only the data provided by the organizers can be used.
  • Task 4: unconstrained multilingual ASR is a task with unconstrained data resources: any data can be used for training and optimization.
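
As an illustration of what "closed-set identification" means in the constrained LID task, the minimal Python sketch below picks, for one utterance, the highest-scoring language among the 13 known targets. The score values and function name are hypothetical and are not part of the official baseline.

  # Closed-set LID decision: the answer is always one of the 13 target languages.
  TARGET_LANGUAGES = [
      "Indonesian", "Japanese", "Russian", "Korean", "Vietnamese", "Mandarin",
      "Cantonese", "Sichuanese", "Shanghainese", "Hokkien", "Tibetan",
      "Kazakh", "Uyghur",
  ]

  def identify_language(scores):
      """Return the highest-scoring language among the known targets."""
      return max(TARGET_LANGUAGES, key=lambda lang: scores[lang])

  # Hypothetical per-language scores for a single utterance:
  scores = {lang: 0.0 for lang in TARGET_LANGUAGES}
  scores["Mandarin"] = 2.3
  print(identify_language(scores))  # -> Mandarin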

News

  • Jul. 23, challenge registration open.
  • Jul. 30, evaluation plan release and OLR21 training set and progress subset release.

Data

The challenge is based on two multilingual databases: OLR16-OL7, which was designed for the OLR challenge 2016, and AP17-OL3, which was designed for the OLR challenge 2017. For the OLR 2021 Challenge, a standard test set, OLR21-test, is also provided.

OLR16-OL7 is provided by Speechocean (www.speechocean.com), and AP17-OL3 is provided by Tsinghua University, Northwest Minzu University and Xinjiang University, under the M2ASR project supported by NSFC.

The features of OLR16-OL7 include:

  • Mobile channel
  • 7 languages in total
  • 71 hours of speech signals in total
  • Transcriptions and lexica are provided
  • The data profile is here
  • The License for the data is here

The features of AP17-OL3 include:

  • Mobile channel
  • 3 languages in total
  • Tibetan provided by Prof. Guanyu Li@Northwest Minzu Univ.
  • Uyghur and Kazak provided by Prof. Askar Hamdulla@Xinjiang University.
  • 35 hours of speech signals in total
  • Transcriptions and lexica are provided
  • The data profile is here
  • The License for the data is here

For the OLR 2021 Challenge, the trials of the four tasks will be divided into two subsets: a progress subset and a test subset. The progress subset will comprise 30% of the trials and will be used to monitor progress on the leaderboard. The remaining 70% of the trials will form the test subset, which will be used to generate the official results on which the final ranking is based (a toy split sketch follows the list below). The OLR21-test database is the standard test set for the OLR 2021 challenge and contains two parts: OLR21-cross-domain-test and OLR21-wild-test.

  • OLR21-cross-domain-test: This subset is designed for three tasks: the constrained LID task, the constrained multilingual ASR task, and the unconstrained multilingual ASR task. It contains 13 languages and was recorded with different recording equipment in different environments. The 13 languages are Indonesian, Japanese, Russian, Korean, Vietnamese, Mandarin, Cantonese (China), Sichuanese (China), Shanghainese (China), Hokkien (China), Tibetan (China), Kazakh (China), and Uyghur (China).
  • OLR21-wild-test: This subset is designed for the unconstrained LID task and contains 17 languages: Indonesian, Japanese, Russian, Korean, Vietnamese, Thai, Malay, Telugu, Hindi, English (British and American), Kazakh (China), Tibetan (China), Uyghur (China), Mandarin, Sichuanese (China), Shanghainese (China), and Hokkien (China). Utterances in this subset are obtained from real-life environments.
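
As a toy illustration of the 30%/70% progress/test split described above, the Python sketch below randomly partitions a list of trial IDs. This is only illustrative: the official partition is fixed by the organizers, and the trial IDs and random seed here are hypothetical.

  import random

  def split_trials(trial_ids, progress_ratio=0.3, seed=2021):
      """Shuffle the trials and cut off roughly 30% as the progress subset;
      the remaining ~70% form the test subset. Illustrative only."""
      rng = random.Random(seed)
      shuffled = list(trial_ids)
      rng.shuffle(shuffled)
      cut = int(len(shuffled) * progress_ratio)
      return shuffled[:cut], shuffled[cut:]

  progress, test = split_trials(["trial_%04d" % i for i in range(1000)])
  print(len(progress), len(test))  # -> 300 700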

Evaluation plan

Refer to the OLR 2021 Challenge paper (Binling Wang et al., OLR 2021 Challenge: Datasets, Rules and Baselines) listed under Participation rules below.
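
For orientation only: previous OLR challenges scored the LID tasks with the average cost Cavg (as in NIST LRE) together with EER. Assuming OLR 2021 keeps that metric, the sketch below computes a simplified Cavg from hard (one-best) decisions; the official scoring operates on full score vectors as specified in the evaluation plan, so treat this as an approximation, and note that the trial format used here is hypothetical.

  def simplified_cavg(trials, languages, p_target=0.5):
      """Simplified average cost over hard decisions.
      trials: list of (true_language, predicted_language) pairs.
      Cavg = (1/N) * sum over Lt of [ P_target * P_miss(Lt)
             + (1 - P_target) / (N - 1) * sum over Ln != Lt of P_fa(Lt, Ln) ]"""
      n = len(languages)
      total = 0.0
      for lt in languages:
          lt_preds = [pred for true, pred in trials if true == lt]
          p_miss = sum(pred != lt for pred in lt_preds) / max(len(lt_preds), 1)
          fa_sum = 0.0
          for ln in languages:
              if ln == lt:
                  continue
              ln_preds = [pred for true, pred in trials if true == ln]
              fa_sum += sum(pred == lt for pred in ln_preds) / max(len(ln_preds), 1)
          total += p_target * p_miss + (1 - p_target) / (n - 1) * fa_sum
      return total / n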

Evaluation tools

  • The Kaldi and PyTorch recipes for the baselines. [1]

Participation rules

  • Participants from both academia and industry are welcome.
  • Publications based on the data provided by the challenge should cite the following papers:

Dong Wang, Lantian Li, Difei Tang, Qing Chen: AP16-OL7: a multilingual database for oriental languages and a language recognition baseline, APSIPA ASC 2016. pdf

Zhiyuan Tang, Dong Wang, Yixiang Chen, Qing Chen: AP17-OLR Challenge: Data, Plan, and Baseline, APSIPA ASC 2017. pdf

Zhiyuan Tang, Dong Wang, Qing Chen: AP18-OLR Challenge: Three Tasks and Their Baselines, submitted to APSIPA ASC 2018. pdf

Zhiyuan Tang, Dong Wang, Liming Song: AP19-OLR Challenge: Three Tasks and Their Baselines, submitted to APSIPA ASC 2019. pdf

Zheng Li, Miao Zhao, Qingyang Hong, Lin Li, Zhiyuan Tang, Dong Wang, Liming Song and Cheng Yang: AP20-OLR Challenge: Three Tasks and Their Baselines, submitted to APSIPA ASC 2020. pdf

Binling Wang, Wenxuan Hu, Jing Li, Yiming Zhi, Zheng Li, Qingyang Hong, Lin Li, Dong Wang, Liming Song and Cheng Yang: OLR 2021 Challenge: Datasets, Rules and Baselines, submitted to APSIPA ASC 2021. (The article will be available for download on Jul. 26.)

Important dates

  • Jul. 23, challenge registration open.
  • Jul. 30, evaluation plan release and OLR21 training set release.
  • Aug. 9, progress subset release.
  • Oct. 1, registration deadline.
  • Nov. 1, test subset release.
  • Nov. 20, 24:00, Beijing time, submission deadline.
  • Dec. 10, results announcement.

(Due to COVID-19, the seminar and award ceremony will be adjusted according to the actual situation.)

Registration procedure

If you intend to participate in the challenge, or if you have any questions, comments or suggestions about it, please email the organizers (olr_challenge@163.com). Participants are required to provide the following information, and to sign the Data License Agreement on behalf of an organization/company working in speech research/technology and send back the scanned copy by email.

 - Team Name: 
 - Institute & Nationality: 
 - Participants: 
 - Duty person: 
 - Homepage or published papers in the speech field of the person/organization/company:

Organization Committee

  • Qingyang Hong, Xiamen University [home]
  • Lin Li, Xiamen University [home]
  • Binling Wang, Xiamen University
  • Wenxuan Hu, Xiamen University
  • Dong Wang, Tsinghua University [home]
  • Zhiyuan Tang, Tsinghua University [home]
  • Ming Li, Duke-Kunshan University
  • Xiaolei Zhang, NWPU
  • Liming Song, Speechocean
  • Cheng Yang, Speechocean