Difference between revisions of "150308-Lantian Li"

From cslt Wiki
Latest revision as of 14:13, 9 March 2015 (Monday)

Weekly Summary

1. Ran a series of d-vector-based experiments (testing on sentences 2 and 7).

1). Comparison experiments on input data: one text / two texts / 15 texts.

2). Comparison experiments on different hidden layers: the last hidden layer with vs. without sigmoid normalization.
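The extraction compared in 2) can be sketched as follows. This is a minimal illustration only, with a toy feed-forward network and made-up dimensions (not the actual model or features used in these experiments): frame-level features are forwarded through the hidden layers, the last hidden layer is taken with or without sigmoid normalization, and the frame-level activations are averaged into one utterance-level d-vector.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_dvector(frames, weights, biases, normalize_last=True):
    """Forward frames through a toy feed-forward DNN and average the
    last-hidden-layer activations into a single d-vector.

    frames: (num_frames, feat_dim) acoustic features.
    weights/biases: per-layer parameters (illustrative only).
    normalize_last: apply sigmoid to the last hidden layer, or leave it linear.
    """
    h = frames
    for i, (w, b) in enumerate(zip(weights, biases)):
        h = h @ w + b
        is_last = (i == len(weights) - 1)
        if not is_last or normalize_last:
            h = sigmoid(h)
    # Utterance-level d-vector: mean over frame-level activations.
    return h.mean(axis=0)

# Toy example: 10 frames of 20-dim features, two hidden layers (16 -> 8).
rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 20))
weights = [rng.normal(size=(20, 16)), rng.normal(size=(16, 8))]
biases = [np.zeros(16), np.zeros(8)]
dv_sig = extract_dvector(frames, weights, biases, normalize_last=True)
dv_lin = extract_dvector(frames, weights, biases, normalize_last=False)
```

With `normalize_last=True` every d-vector component lies in (0, 1); without it, the last layer stays linear and unbounded, which is the contrast measured in the EER comparison below.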

The experimental results (compared by EER (%)) are:

1). two texts < 15 texts < one text (especially under the LDA condition); the d-vector can be used in pseudo speaker recognition.

2). last hidden layer without sigmoid normalization < last hidden layer with sigmoid normalization (under the LDA condition, regardless of input data).
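The EER used for these comparisons can be computed as in this minimal sketch (the scores below are toy values, not the actual trial scores): scan candidate thresholds over the verification scores and take the operating point where the false-accept rate and false-reject rate meet.

```python
import numpy as np

def eer(target_scores, impostor_scores):
    """Equal error rate: the threshold where the false-accept rate (FAR)
    equals the false-reject rate (FRR), found by scanning thresholds."""
    best_gap, best_eer = 1.0, None
    for t in np.sort(np.concatenate([target_scores, impostor_scores])):
        frr = np.mean(target_scores < t)     # targets wrongly rejected
        far = np.mean(impostor_scores >= t)  # impostors wrongly accepted
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Toy scores: targets score higher than impostors on average.
tgt = np.array([0.9, 0.8, 0.7, 0.6, 0.4])
imp = np.array([0.5, 0.3, 0.2, 0.1, 0.05])
print(f"EER = {eer(tgt, imp) * 100:.1f}%")  # prints "EER = 20.0%"
```

A lower EER means better separation of target and impostor trials, which is the sense in which "A < B" is read in the results above.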

2. Trained text-content-based neural networks and extracted d-vectors from these networks.

Next Week

1. Continue with task 1 and task 2.