(11 intermediate revisions by the same user not shown) |
Line 2: |
Line 2: |
| *[[教学参考-13|Teaching reference]] | | *[[教学参考-13|Teaching reference]] |
| *[http://aigraph.cslt.org/courses/13/course-13.pptx Slides] | | *[http://aigraph.cslt.org/courses/13/course-13.pptx Slides] |
− | *小清爱提问: What is hill climbing? [https://mp.weixin.qq.com/s?__biz=Mzk0NjIzMzI2MQ==&mid=2247486886&idx=1&sn=a9959dfb953fd7383589236676d6bb08&chksm=c3080764f47f8e72809d3502ecdd0da6940d680fc9ecb0423f7aeec504bcb83026cdb544a11a&scene=178#rd] | + | *小清爱提问: How do supervised and unsupervised learning differ? [] |
− | *小清爱提问: What is simulated annealing? [https://mp.weixin.qq.com/s?__biz=Mzk0NjIzMzI2MQ==&mid=2247486965&idx=1&sn=30da3c422773f7cb530eb6047d91b30e&chksm=c3080737f47f8e21802ca8650d8a39d434102f09ef9693b8041f4c24f8888f7d5c1c5a6fc05c&scene=178#rd] | + | *小清爱提问: What is reinforcement learning? [] |
− | *小清爱提问: What is Occam's razor? [https://mp.weixin.qq.com/s?__biz=Mzk0NjIzMzI2MQ==&mid=2247486241&idx=1&sn=328b83f1c63103ffff86b1d38c3ac048&chksm=c30801e3f47f88f539f0e68f4cfc5a1e8a46e861ea0f2c732ed370530c4996e40b49a2ee6da6&scene=178#rd]
| + | |
− | *小清爱提问: Why is data the food of artificial intelligence? [https://mp.weixin.qq.com/s?__biz=Mzk0NjIzMzI2MQ==&mid=2247485586&idx=1&sn=1892fe37396e19e57b1728604402e186&chksm=c3080250f47f8b46a9b96f88739e3c698b89fd24d90c1b2cad41fa9a3fb4956abc5306a5c7b5&scene=178#rd]
| + | |
− | | + | |
− | | + | |
| | | |
| ==Further Reading== | | ==Further Reading== |
| | | |
− | * Wikipedia: No free lunch theorem [http://aigraph.cslt.org/courses/12/No_free_lunch_theorem.pdf] | + | *DeepMind AlphaGo blog [https://www.deepmind.com/research/highlighted-research/alphago] |
− | * Wikipedia: Gradient descent [http://aigraph.cslt.org/courses/12/梯度下降法.pdf][http://aigraph.cslt.org/courses/12/Gradient_descent.pdf] | + | *Wikipedia: AlphaGo [http://aigraph.cslt.org/courses/13/AlphaGo.pdf][http://aigraph.cslt.org/courses/13/AlphaGo_chs.pdf] |
− | * Baidu Baike: Gradient descent [https://baike.baidu.com/item/%E6%A2%AF%E5%BA%A6%E4%B8%8B%E9%99%8D/4864937][http://baike.baidu.com/l/FdY9mFXE] | + | *DeepMind AlphaStar blog [https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii] |
− | * Zhihu: Gradient descent [https://zhuanlan.zhihu.com/p/36902908]
| + | *Wikipedia: AlphaStar [http://aigraph.cslt.org/courses/13/AlphaStar.pdf] |
− | * Zhihu: Mini-batch gradient descent [https://zhuanlan.zhihu.com/p/72929546]
| + | |
− | * 机器之心: Momentum gradient descent [https://www.jiqizhixin.com/graph/technologies/d6ee5e5b-43ff-4c41-87ff-f34c234d0e32][]
| + | ==Video Demos== |
− | * Wikipedia: Simulated annealing [http://aigraph.cslt.org/courses/12/模拟退火.pdf][http://aigraph.cslt.org/courses/12/Simulated_annealing.pdf] | + | *小清爱提问: What is clustering? [https://mp.weixin.qq.com/s?__biz=Mzk0NjIzMzI2MQ==&mid=2247487378&idx=1&sn=bd2ec82d7baf0d4c3074f2b09bd678aa&chksm=c3080550f47f8c46308ce16dfe3facff9f9f09482c0da5ceb50c2ed0ba0043e5e3960bbf7df6&scene=178#rd] |
− | * Baidu Baike: Simulated annealing [https://baike.baidu.com/item/%E6%A8%A1%E6%8B%9F%E9%80%80%E7%81%AB%E7%AE%97%E6%B3%95/355508][http://baike.baidu.com/l/Smyp3NfN] | + | *小清爱提问: What is manifold learning? [] |
− | * Zhihu: Simulated annealing explained [https://zhuanlan.zhihu.com/p/266874840] | + | *小清爱提问: What regression models are there in machine learning? [] |
− | * Wikipedia: Newton's method [http://aigraph.cslt.org/courses/12/Newton's_method.pdf][http://aigraph.cslt.org/courses/12/牛顿法.pdf] | + | *小清爱提问: What classification models are there in machine learning? [https://mp.weixin.qq.com/s?__biz=Mzk0NjIzMzI2MQ==&mid=2247486850&idx=1&sn=313502e7f4533d70fc627240df7fc4db&chksm=c3080740f47f8e56dbb88a8f9bdbf4486843b3a6a5b4a6dd31061cbcbca1f5f0f97cfbee87b6&scene=178#rd] |
− | * Wikipedia: Occam's razor [http://aigraph.cslt.org/courses/12/奥卡姆剃刀.pdf][http://aigraph.cslt.org/courses/12/Occam's_razor.pdf]
| + | *UC Berkeley scientists teach a robot to stand, grasp, and perform other motions in one hour [http://aigraph.cslt.org/courses/08/DayDreamer.mp4][https://arxiv.org/pdf/2206.14176.pdf][https://danijar.com/project/daydreamer/] |
− | * Baidu Baike: Occam's razor [https://baike.baidu.com/item/%E5%A5%A5%E5%8D%A1%E5%A7%86%E5%89%83%E5%88%80%E5%8E%9F%E7%90%86/10900565][http://baike.baidu.com/l/HUkXrXzT] | + | |
− | * Wikipedia: Overfitting [http://aigraph.cslt.org/courses/12/Overfitting.pdf][http://aigraph.cslt.org/courses/12/過適.pdf] | + | |
− | * Wikipedia: GPT-3 [http://aigraph.cslt.org/courses/12/GPT-3-zh.pdf]
| + | |
− | * 机器之心: When we talk about fairness in machine learning, what should we be talking about? [https://www.jiqizhixin.com/articles/2020-06-03-11]
| + | |
− | * 机器之心: Data augmentation [https://www.jiqizhixin.com/articles/2019-12-04-10]
| + | |
− | * Zhihu: Data augmentation [https://zhuanlan.zhihu.com/p/38345420][https://zhuanlan.zhihu.com/p/41679153]
| + | |
− | * What is model pre-training [https://paddlepedia.readthedocs.io/en/latest/tutorials/pretrain_model/pretrain_model_description.html]
| + | |
− | * Transfer learning [https://baike.baidu.com/item/%E8%BF%81%E7%A7%BB%E5%AD%A6%E4%B9%A0/22768151]
| + | |
| | | |
| | | |
Line 35: |
Line 23: |
| | | |
| | | |
− | * Online demo of optimization methods [https://www.benfrederickson.com/numerical-optimization/] | + | * Regression demo [https://www.benfrederickson.com/numerical-optimization/] |
− | * Neural-network binary classification demo [https://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html] | + | * Classification demo [https://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html] |
| + | * Clustering demo (see the k-means sketch after this list) [http://alekseynp.com/viz/k-means.html] |
| + | * Manifold learning demo (t-SNE) [https://cs.stanford.edu/people/karpathy/tsnejs/csvdemo.html][https://projector.tensorflow.org/] |
| | | |
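For readers curious what the k-means demo above animates, the following is a minimal clustering sketch in plain NumPy. It is an illustrative implementation written for this page, not code from the course or from the linked demo, and the three-blob toy data is invented for the example.

<syntaxhighlight lang="python">
# Minimal k-means: alternate between assigning points to the nearest
# centroid and moving each centroid to the mean of its assigned points.
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to its cluster's mean
        # (keep the old centroid if a cluster ends up empty).
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged
        centroids = new_centroids
    return centroids, labels

# Toy data: three Gaussian blobs in 2-D (invented for illustration).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 0], [0, 3])])
centroids, labels = kmeans(X, k=3)
print(centroids)
</syntaxhighlight>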
| ==Developer Resources== | | ==Developer Resources== |
| + | |
| + | * Sklearn, the Python toolkit for machine learning, with libraries for regression, classification, clustering, manifold learning, and more (see the usage sketch below) [https://scikit-learn.org/stable/] |
| + | |
| | | |
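As a starting point, here is a minimal sketch of how the four task families named in the item above map onto scikit-learn's uniform fit/predict estimator API. The synthetic data and hyperparameter values are invented for illustration and are not part of the course materials.

<syntaxhighlight lang="python">
# One estimator API for four task families: construct, fit, then
# predict/transform. The data below is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y_reg = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)  # continuous target
y_cls = (y_reg > 0).astype(int)                              # binary target

# Regression: fit a linear model to a continuous target.
reg = LinearRegression().fit(X, y_reg)

# Classification: fit a logistic-regression classifier to binary labels.
clf = LogisticRegression().fit(X, y_cls)

# Clustering (unsupervised): group the rows without using any labels.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Manifold learning: embed the 5-D points into 2-D with t-SNE,
# the same technique behind the t-SNE demos listed above.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)

print(reg.score(X, y_reg), clf.score(X, y_cls), km.labels_[:5], emb.shape)
</syntaxhighlight>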
| ==Advanced Readers== | | ==Advanced Readers== |
| | | |
− | * Wang Dong, Introduction to Machine Learning, Chapter 1 "Introduction" and Chapter 11 "Optimization Methods" [http://mlbook.cslt.org] | + | * Wang Dong, Introduction to Machine Learning, Tsinghua University Press, 2021 [http://mlbook.cslt.org] |
− | * Wolpert, David (1996), "The Lack of A Priori Distinctions between Learning Algorithms", Neural Computation, pp. 1341–1390 [https://web.archive.org/web/20161220125415/http://www.zabaras.com/Courses/BayesianComputing/Papers/lack_of_a_priori_distinctions_wolpert.pdf] | + | * Zhou Zhihua, Machine Learning, 2016 [https://item.jd.com/11867803.html] |
− | * Sebastian Ruder, An overview of gradient descent optimization algorithms, 2017 [https://arxiv.org/pdf/1609.04747.pdf]
| + | |
− | * Kirkpatrick, S.; Gelatt Jr, C. D.; Vecchi, M. P. (1983). "Optimization by Simulated Annealing". Science. 220 (4598): 671–680. [https://sci2s.ugr.es/sites/default/files/files/Teaching/GraduatesCourses/Metaheuristicas/Bibliography/1983-Science-Kirkpatrick-sim_anneal.pdf]
| + | |
− | * Brown et al., Language Models are Few-Shot Learners, 2020 [https://arxiv.org/pdf/2005.14165.pdf]
| + | |