Chapter 19: Problems of Deep Learning
Teaching Materials
Extended Reading
Video Demonstrations
- Deep neural networks are easy to fool [6]
Demo Links
- Demo of adversarial attack [7]
Developer Resources
- AI安全之对抗样本入门 (Introduction to Adversarial Examples for AI Security) [8]; a minimal adversarial-example sketch follows below.
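To make the adversarial-example idea behind these resources concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard one-step attack from this literature. It is an illustration only, not the method of any specific reference above; it assumes a PyTorch image classifier whose forward pass returns logits and inputs scaled to [0, 1], and the names fgsm_attack and epsilon are chosen for this sketch.

```python
# Minimal FGSM sketch (assumptions: a PyTorch classifier returning logits,
# images scaled to [0, 1]; all names here are illustrative).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarial version of x that raises the loss on label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true labels
    loss.backward()                          # gradient of loss w.r.t. pixels
    # Take one signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage, given a trained model and a labeled batch (x, y):
# x_adv = fgsm_attack(model, x, y)
# fooled = (model(x_adv).argmax(dim=1) != y).float().mean()
```

The key point the demos above illustrate is visible here: a perturbation bounded by a small epsilon, often imperceptible to humans, can be enough to flip a high-confidence prediction.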
For Advanced Readers
- What is adversarial machine learning [9]
- Fong R C, Vedaldi A. Interpretable explanations of black boxes by meaningful perturbation[C]//Proceedings of the IEEE international conference on computer vision. 2017: 3429-3437. [10]
- Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013. [11]
- Nguyen A, Yosinski J, Clune J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 427-436. [12]
- Eykholt K, Evtimov I, Fernandes E, et al. Robust physical-world attacks on deep learning visual classification[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 1625-1634. [13]
- 可解释人工智能导论 (Introduction to Explainable AI) [14]