(One intermediate revision by one user not shown)

Line 1:
− {| class="wikitable"
− !Date !! People !! Last Week !! This Week
− |-
− | rowspan="6"|2017/1/3
− |Yang Feng ||
− ||
− |-
− |Jiyuan Zhang ||
− * reproduced the planning neural network [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/38/Planning_neural_network_initial_decode.pdf results]
− ||
− * reproduce the planning neural network
− |-
− |Andi Zhang ||
− * added source masks in attention_decoder where attention is calculated and in gru_cell where new states are calculated (see the masking sketch after this diff)
− * found the attribute sentence_length; it may work better than my code
− ||
− |-
− |Shiyue Zhang ||
− * figured out the problem with attention: the initial value of V should be around 0 (see the initialization sketch after this diff)
− * tested different modifications, such as adding a mask and initializing b with 0
− * compared the results and concluded that only changing the initial value of V is best
− ||
− * try to get the right attention on memory
− |-
− |Peilun Xiao ||
− ||
− |}
+ [[NLP Status Report 2017-3-6]]
+ [[ASR Status Report 2017-3-6]]
+ [[FIN Status Report 2017-3-6]]
+ [[FreeNeb Status Report 2017-3-6]]
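A side note on the source masks mentioned in Andi Zhang's row: the sketch below shows, in plain NumPy, one common way to mask padded source positions when computing attention weights, so that padding receives zero weight. The names (masked_attention, logits, src_mask) are illustrative assumptions, not the project's actual attention_decoder or gru_cell code.

<syntaxhighlight lang="python">
# A minimal sketch, assuming `logits` are unnormalized attention scores over
# source positions and `src_mask` is 1 for real tokens, 0 for padding.
import numpy as np

def masked_attention(logits, src_mask):
    """Softmax over source positions, with padded positions masked out."""
    # Push padded positions to -inf so they get exactly zero attention weight.
    masked = np.where(src_mask.astype(bool), logits, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.5, 0.3])
src_mask = np.array([1, 1, 0, 0])          # last two positions are padding
print(masked_attention(logits, src_mask))  # padding gets weight 0
</syntaxhighlight>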
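Likewise, for Shiyue Zhang's observation that the initial value of V should be around 0: the sketch below assumes a Bahdanau-style score e_i = v^T tanh(W s + U h_i) and shows that drawing v from a small-variance distribution centered on 0 keeps the initial attention near-uniform instead of dominated by random noise. All shapes and names here are assumptions for illustration, not the project's code.

<syntaxhighlight lang="python">
# A minimal sketch of Bahdanau-style attention scores and why v should
# start near 0: with v ≈ 0 all scores are ≈ 0, so attention begins
# near-uniform over the source positions.
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 5                        # hidden size, source length (assumed)
W = rng.normal(0, 0.1, (d, d))     # projects the decoder state s
U = rng.normal(0, 0.1, (d, d))     # projects the encoder states h_i
v = rng.normal(0, 0.01, d)         # init v around 0 with small variance

s = rng.normal(size=d)             # decoder state
H = rng.normal(size=(T, d))        # encoder states, one row per position

scores = np.tanh(s @ W.T + H @ U.T) @ v   # e_i = v^T tanh(W s + U h_i)
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()
print(alpha)                       # close to uniform 1/T when v is near 0
</syntaxhighlight>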
Latest revision as of 09:09, 7 March 2017
NLP Status Report 2017-3-6
ASR Status Report 2017-3-6
FIN Status Report 2017-3-6
FreeNeb Status Report 2017-3-6