Chapter 19: Problems of Deep Learning




Teaching Materials


Further Reading

  • Wikipedia: Explainable artificial intelligence [http://aigraph.cslt.org/courses/19/Explainable_artificial_intelligence.pdf][http://aigraph.cslt.org/courses/19/可解釋人工智慧.pdf]
  • Zhihu: Explainable AI [https://zhuanlan.zhihu.com/p/354233093]
  • Fragile neural networks: UC Berkeley explains how adversarial examples are generated [https://www.jiqizhixin.com/articles/2018-01-31-5] (see the generation sketch after this list)
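
As a rough illustration of the generation mechanism discussed in the article above, the sketch below implements the fast gradient sign method (FGSM, Goodfellow et al.), which perturbs an input one step along the sign of the loss gradient. This is a minimal sketch of one standard technique, not necessarily the exact method in the linked article; the toy model, the random input, and the epsilon value are illustrative assumptions.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Fast gradient sign method: one step of size epsilon along the
        # sign of the loss gradient w.r.t. the input, which raises the
        # classification loss and often flips the predicted label.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range

    # Toy usage with an untrained linear "classifier" (illustrative only).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # a fake 28x28 grayscale image
    y = torch.tensor([3])          # its assumed true label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max().item())  # perturbation is bounded by epsilon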


Video Demonstrations

Demo Links

Developer Resources

Advanced Readers

  • What is adversarial machine learning [5]
  • Fong R C, Vedaldi A. Interpretable explanations of black boxes by meaningful perturbation[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 3429-3437. [6] (a simplified sketch of the masking objective follows this list)
  • Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013. [https://arxiv.org/abs/1312.6199]
  • Nguyen A, Yosinski J, Clune J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 427-436. [https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf]
  • Eykholt K, Evtimov I, Fernandes E, et al. Robust physical-world attacks on deep learning visual classification[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1625-1634. [http://openaccess.thecvf.com/content_cvpr_2018/papers/Eykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf]
  • 可解释人工智能导论 (Introduction to Explainable AI) [https://item.jd.com/13700578.html]
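
The Fong & Vedaldi paper above explains a black-box prediction by optimizing a "meaningful perturbation": a mask that deletes the parts of the input most responsible for the class score. The sketch below is a simplified rendering of that objective; the zero baseline (the paper perturbs with blur or noise), the hyperparameters, and the assumed batched input shape are assumptions for illustration, not the paper's exact procedure.

    import torch

    def meaningful_perturbation(model, x, target_class, steps=100, lam=0.05, lr=0.1):
        # Learn a per-pixel mask: regions with a large mask value are replaced
        # by a baseline, and the optimizer pushes the mask to suppress the
        # target class score while an L1-style penalty keeps the mask small.
        mask = torch.zeros_like(x, requires_grad=True)
        baseline = torch.zeros_like(x)  # simplification: the paper uses blur/noise
        opt = torch.optim.Adam([mask], lr=lr)
        for _ in range(steps):
            m = mask.sigmoid()                   # keep mask values in (0, 1)
            x_pert = x * (1 - m) + baseline * m  # masked regions are deleted
            score = model(x_pert).softmax(dim=1)[0, target_class]
            loss = score + lam * m.mean()        # lower the score, keep mask sparse
            opt.zero_grad()
            loss.backward()
            opt.step()
        return mask.sigmoid().detach()  # high values mark the model's evidence

Regions where the learned mask ends up large are those whose removal most lowers the class score, which is what the paper reads as the model's evidence for its decision.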