Chapter 18: Frontiers of Deep Learning
Teaching Materials
- Teaching references
- Slides
- 小清爱提问: What is a word embedding? [1]
- 小清爱提问: What is a sequence-to-sequence model? [2]
- 小清爱提问: What is an attention mechanism? [3]
- 小清爱提问: What is self-supervised learning? [4]
- 小清爱提问: What is a generative adversarial network? [5]
- 小清爱提问: What is a variational autoencoder? [6]
Further Reading
- AI100问: What is a residual network? [7]
- AI100问: What is a word embedding? [8]
- AI100问: What is a sequence-to-sequence model? [9]
- AI100问: What is an attention mechanism? [10]
- AI100问: What is self-supervised learning? [11]
- AI100问: What is a generative adversarial network? [12]
- AI100问: What is a variational autoencoder? [13]
Video Demonstrations
Demo links
Developer Resources
For Advanced Readers
- He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778. [18]
- Bengio Y, Ducharme R, Vincent P. A neural probabilistic language model[J]. Advances in neural information processing systems, 2000, 13. [19]
- Mikolov T, Chen K, Corrado G, et al. Efficient estimation of word representations in vector space[J]. arXiv preprint arXiv:1301.3781, 2013. [20]
- Schroff F, Kalenichenko D, Philbin J. Facenet: A unified embedding for face recognition and clustering[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 815-823. [21]
- Li C, Ma X, Jiang B, et al. Deep speaker: an end-to-end neural speaker embedding system[J]. arXiv preprint arXiv:1705.02304, 2017. [22]
- Lin Y, Liu Z, Sun M, et al. Learning entity and relation embeddings for knowledge graph completion[C]//Twenty-ninth AAAI conference on artificial intelligence. 2015. [23]
- Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks[J]. Advances in neural information processing systems, 2014, 27. [24]
- Liu Y, Liu D, Lv J, et al. Generating Chinese poetry from images via concrete and abstract information[C]//2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020: 1-8. [25]
- Sun D, Ren T, Li C, et al. Learning to write stylized chinese characters by reading a handful of examples[J]. arXiv preprint arXiv:1712.06424, 2017. [26]
- Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014. [27]
- Xu K, Ba J, Kiros R, et al. Show, attend and tell: Neural image caption generation with visual attention[C]//International conference on machine learning. PMLR, 2015: 2048-2057. [28]
- Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30. [29]
- Liu X, Zhang F, Hou Z, et al. Self-supervised learning: Generative or contrastive[J]. IEEE Transactions on Knowledge and Data Engineering, 2021. [30]
- Schneider S, Baevski A, Collobert R, et al. wav2vec: Unsupervised pre-training for speech recognition[J]. arXiv preprint arXiv:1904.05862, 2019. [31]
- Noroozi M, Favaro P. Unsupervised learning of visual representations by solving jigsaw puzzles[C]//European conference on computer vision. Springer, Cham, 2016: 69-84. [32]
- Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018. [33]
- Brown T, Mann B, Ryder N, et al. Language models are few-shot learners[J]. Advances in neural information processing systems, 2020, 33: 1877-1901. [34]
- Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.
- Ethayarajh K. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings[J]. arXiv preprint arXiv:1909.00512, 2019. [35]
- Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[J]. Advances in neural information processing systems, 2014, 27. [36]
- Kingma D P, Welling M. Auto-encoding variational bayes[J]. arXiv preprint arXiv:1312.6114, 2013. [37]
- Wang D. Introduction to Machine Learning, Chapter 3: Neural Models[M]. Tsinghua University Press, 2021. [38]
- Goodfellow I, Bengio Y, Courville A. Deep Learning[M]. MIT Press, 2016. [39]