Chapter 18: Frontiers of Deep Learning
Teaching Materials
- Teaching reference
- Courseware
- 小清爱提问: What is a word embedding? [1]
- 小清爱提问: What is a sequence-to-sequence model? [2]
- 小清爱提问: What is an attention mechanism? [3]
- 小清爱提问: What is self-supervised learning? [4]
- 小清爱提问: What is a generative adversarial network? [5]
- 小清爱提问: What is a variational autoencoder? [6]
Extended Reading
- AI100问: What is a residual network? [7]
- AI100问: What is a word embedding? [8]
- AI100问: What is a sequence-to-sequence model? [9]
- AI100问: What is an attention mechanism? [10]
- AI100问: What is self-supervised learning? [11]
- AI100问: What is a generative adversarial network? [12]
- AI100问: What is a variational autoencoder? [13]
Video Demos
- Demo links
Developer Resources
- Word embedding [16][17] (a minimal lookup sketch follows this list)
- A simple GAN implementation [18]
- PyTorch-GAN: a large collection of GAN implementations [19] (an adversarial training loop is sketched below)
- A collection of generative models (GAN/VAE/RBM) [20] (a VAE sketch is also given below)
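The word-embedding links above point at tutorials; as a companion, here is a minimal sketch of an embedding lookup in PyTorch. The vocabulary, dimension, and vectors are illustrative assumptions (randomly initialized), not trained word2vec vectors from the linked pages.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy vocabulary; a trained model would have tens of thousands of entries.
    vocab = {"king": 0, "queen": 1, "man": 2, "woman": 3}
    emb = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)  # random init

    def vec(word):
        # Look up the (untrained) embedding vector for a word.
        return emb(torch.tensor(vocab[word]))

    # Cosine similarity between two word vectors; with trained embeddings,
    # semantically related words score higher than unrelated ones.
    print(float(F.cosine_similarity(vec("king"), vec("queen"), dim=0)))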
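The GAN repositories above contain many full models; the loop below is only a minimal sketch of the adversarial game itself (discriminator vs. generator) in PyTorch, trained on a toy 2-D Gaussian. Network sizes, learning rates, and the data are illustrative assumptions, not values from those repositories.

    import torch
    import torch.nn as nn

    latent_dim, data_dim, batch = 8, 2, 64

    # Generator maps latent noise z to a fake sample; discriminator scores
    # a sample as real (logit > 0) or fake (logit < 0).
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(batch, data_dim) * 0.5 + 3.0  # toy "real" data
        fake = G(torch.randn(batch, latent_dim))

        # Discriminator step: push real samples toward 1, fakes toward 0.
        # detach() keeps this update from flowing back into G.
        loss_d = bce(D(real), torch.ones(batch, 1)) + \
                 bce(D(fake.detach()), torch.zeros(batch, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: make D score fresh fakes as real.
        loss_g = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()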
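For the VAE entries in that collection, the sketch below shows the reparameterization trick and the ELBO loss from auto-encoding variational Bayes (the Kingma and Welling paper cited under Advanced Readers). The single-layer encoder/decoder and the dimensions are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyVAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=16):
            super().__init__()
            self.enc = nn.Linear(x_dim, 2 * z_dim)  # predicts mean and log-variance
            self.dec = nn.Linear(z_dim, x_dim)

        def forward(self, x):  # x must lie in [0, 1] for the BCE term
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
            # so gradients can flow through the sampling step.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            x_hat = torch.sigmoid(self.dec(z))
            # Negative ELBO = reconstruction loss + KL(q(z|x) || N(0, I)).
            rec = F.binary_cross_entropy(x_hat, x, reduction="sum")
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return rec + kl

    loss = TinyVAE()(torch.rand(4, 784))  # one forward pass on random inputs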
Advanced Readers
- He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.[21]
- Bengio Y, Ducharme R, Vincent P. A neural probabilistic language model[J]. Advances in neural information processing systems, 2000, 13. [22]
- Mikolov T, Chen K, Corrado G, et al. Efficient estimation of word representations in vector space[J]. arXiv preprint arXiv:1301.3781, 2013. [23]
- Schroff F, Kalenichenko D, Philbin J. FaceNet: A unified embedding for face recognition and clustering[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 815-823. [24]
- Li C, Ma X, Jiang B, et al. Deep speaker: an end-to-end neural speaker embedding system[J]. arXiv preprint arXiv:1705.02304, 2017. [25]
- Lin Y, Liu Z, Sun M, et al. Learning entity and relation embeddings for knowledge graph completion[C]//Twenty-ninth AAAI conference on artificial intelligence. 2015. [26]
- Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks[J]. Advances in neural information processing systems, 2014, 27. [27]
- Liu Y, Liu D, Lv J, et al. Generating Chinese poetry from images via concrete and abstract information[C]//2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020: 1-8. [28]
- Sun D, Ren T, Li C, et al. Learning to write stylized Chinese characters by reading a handful of examples[J]. arXiv preprint arXiv:1712.06424, 2017. [29]
- Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014. [30]
- Xu K, Ba J, Kiros R, et al. Show, attend and tell: Neural image caption generation with visual attention[C]//International conference on machine learning. PMLR, 2015: 2048-2057. [31]
- Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30. [32]
- Liu X, Zhang F, Hou Z, et al. Self-supervised learning: Generative or contrastive[J]. IEEE Transactions on Knowledge and Data Engineering, 2021. [33]
- Schneider S, Baevski A, Collobert R, et al. wav2vec: Unsupervised pre-training for speech recognition[J]. arXiv preprint arXiv:1904.05862, 2019. [34]
- Noroozi M, Favaro P. Unsupervised learning of visual representations by solving jigsaw puzzles[C]//European conference on computer vision. Springer, Cham, 2016: 69-84. [35]
- Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018. [36]
- Brown T, Mann B, Ryder N, et al. Language models are few-shot learners[J]. Advances in neural information processing systems, 2020, 33: 1877-1901. [37]
- Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.
- Ethayarajh K. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings[J]. arXiv preprint arXiv:1909.00512, 2019. [38]
- Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[J]. Advances in neural information processing systems, 2014, 27. [39]
- Kingma D P, Welling M. Auto-encoding variational bayes[J]. arXiv preprint arXiv:1312.6114, 2013. [40]
- Wang Dong. Introduction to Machine Learning, Chapter 3: Neural Models[M]. Tsinghua University Press, 2021. [41]
- Goodfellow I, Bengio Y, Courville A. Deep learning[M]. MIT Press, 2016. [42]