Chapter 19: Problems of Deep Learning
==Teaching materials==

==Extended reading==
* Wikipedia: Explainable artificial intelligence [http://aigraph.cslt.org/courses/19/Explainable_artificial_intelligence.pdf][http://aigraph.cslt.org/courses/19/可解釋人工智慧.pdf]
* Zhihu: Explainable AI [https://zhuanlan.zhihu.com/p/354233093]
* Fragile neural networks: UC Berkeley explains how adversarial examples are generated [https://www.jiqizhixin.com/articles/2018-01-31-5]
* Why adversarial examples matter: unsolved research problems and realistic threat models [https://cloud.tencent.com/developer/article/1418617]

==Video demonstrations==
* Deep neural networks are easy to fool [http://aigraph.cslt.org/courses/19/easy-fool.mp4]

==Demo links==
* Demo of adversarial attack [https://kennysong.github.io/adversarial.js/]

==Developer resources==
* AI安全之对抗样本入门 (An introduction to adversarial examples for AI security) [https://github.com/duoergun0729/adversarial_examples]
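The demo and repository above are both built around adversarial examples: inputs changed by a small, targeted perturbation so that a model's prediction flips. Below is a minimal sketch (assumed illustrative code, not taken from either resource) of the Fast Gradient Sign Method on a toy linear classifier; the weights, input, and epsilon are all hypothetical.

<syntaxhighlight lang="python">
# Minimal Fast Gradient Sign Method (FGSM) sketch on a toy logistic-regression
# "classifier". Everything here (weights, input, epsilon) is made up for
# illustration; it is not code from the demo or repository linked above.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model: p(y=1 | x) = sigmoid(w @ x + b).
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A clean input, which we treat as belonging to class 1.
x = rng.normal(size=10)
y_true = 1.0

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
# for this model (derived analytically; a deep net would use autograd).
p_clean = predict(x)
grad_x = (p_clean - y_true) * w

# FGSM: take one step of size epsilon along the sign of the input gradient.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"prediction on clean input:       {p_clean:.3f}")
print(f"prediction on adversarial input: {predict(x_adv):.3f}")
</syntaxhighlight>

For this linear toy model the perturbation provably lowers the predicted probability of class 1; for a deep network the only change is that the input gradient comes from backpropagation rather than a closed-form expression.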
==Advanced readers==
* What is adversarial machine learning [9]
* Fong R C, Vedaldi A. Interpretable explanations of black boxes by meaningful perturbation[C]//Proceedings of the IEEE international conference on computer vision. 2017: 3429-3437. [10]
* Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013. [11]
* Nguyen A, Yosinski J, Clune J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 427-436. [https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf]
* Eykholt K, Evtimov I, Fernandes E, et al. Robust physical-world attacks on deep learning visual classification[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 1625-1634. [http://openaccess.thecvf.com/content_cvpr_2018/papers/Eykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf]
* 可解释人工智能导论 (Introduction to Explainable AI) [https://item.jd.com/13700578.html]