Chapter 43: AI Music Composition

==Teaching materials==

* Teaching reference
* [http://aigraph.cslt.org/courses/43/course-43.pptx Course slides]
* 小清爱提问 (Xiao Qing Asks): How do computers compose music?

==Extended reading==
* AI100问 (AI 100 Questions): How do computers compose music? [http://aigraph.cslt.org/ai100/AI-100-76-计算机如何谱曲.pdf]
* Wikipedia: Algorithmic composition (see the dice-game sketch below) [http://aigraph.cslt.org/courses/43/算法作曲.pdf][http://aigraph.cslt.org/courses/43/Algorithmic_composition.pdf]
* Wikipedia: Illiac Suite [http://aigraph.cslt.org/courses/43/Illiac_Suite.pdf]
* Wikipedia: Lejaren Hiller [http://aigraph.cslt.org/courses/43/Lejaren_Hiller.pdf]
* DeepMind Perceiver AR [https://www.deepmind.com/publications/perceiver-ar-general-purpose-long-context-autoregressive-generation]
* Magenta [https://magenta.tensorflow.org/]
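The musical dice games behind early algorithmic composition (and the subject of the Hedges paper under Advanced readers below) amount to a lookup table of precomposed measures indexed by dice rolls. A minimal sketch of the idea; the measure labels are made up for illustration:

<syntaxhighlight lang="python">
# Toy musical dice game in the spirit of the 18th-century examples surveyed
# by Hedges: every bar of a 16-bar minuet is picked from a table of
# precomposed options according to the total of two dice.
import random

N_BARS = 16
N_OPTIONS = 11          # two six-sided dice give totals 2..12 -> 11 options

# Table of precomposed measures: table[bar][dice_total - 2] -> measure label.
table = [[f"bar{bar + 1:02d}_option{opt + 1:02d}" for opt in range(N_OPTIONS)]
         for bar in range(N_BARS)]

def roll_minuet(rng=random):
    """Roll two dice per bar and read the corresponding measure from the table."""
    piece = []
    for bar in range(N_BARS):
        total = rng.randint(1, 6) + rng.randint(1, 6)   # 2..12
        piece.append(table[bar][total - 2])
    return piece

print(roll_minuet())
</syntaxhighlight>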
  
 
==Video demonstrations==

* Perceiver AR [http://aigraph.cslt.org/courses/43/autoregressive-long-context-music-generation.mp4]
* DDSP-VST: turns any sound into music by modeling time-varying pitch and loudness (see the sketch below) [http://aigraph.cslt.org/courses/43/multi-instrument-transitions.mp4]
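The DDSP-VST clip above rests on resynthesizing audio from two slowly varying control curves, fundamental frequency and loudness. A minimal NumPy sketch of that idea (not the DDSP or DDSP-VST code; the pitch glide and envelope values are arbitrary choices):

<syntaxhighlight lang="python">
# Minimal sketch of the DDSP idea: a harmonic synthesizer driven by
# time-varying fundamental frequency (f0) and loudness curves.
import numpy as np
import wave

SR = 16000                               # sample rate (Hz)
N = int(SR * 2.0)                        # two seconds of audio

# Control curves: a one-octave pitch glide and a fading loudness envelope.
f0 = np.linspace(220.0, 440.0, N)        # Hz
loudness = np.linspace(1.0, 0.2, N)      # linear amplitude

# Harmonic synthesis: integrate f0 to get phase, then sum a few harmonics.
phase = 2.0 * np.pi * np.cumsum(f0) / SR
audio = sum(a * np.sin(k * phase)
            for k, a in enumerate([1.0, 0.5, 0.25, 0.125], start=1))
audio *= loudness / np.max(np.abs(audio))

# Write a mono 16-bit WAV so the result can be auditioned.
with wave.open("ddsp_sketch.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes((audio * 32767).astype("<i2").tobytes())
</syntaxhighlight>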
  
 
==Demo links==

* Performance RNN [https://magenta.tensorflow.org/performance-rnn]
* Perceiver AR [https://magenta.tensorflow.org/perceiver-ar]
* Paint With Music [https://magenta.tensorflow.org/paint-with-music][https://artsandculture.google.com/experiment/paint-with-music/YAGuJyDB-XbbWg]
* Listen to Transformer [https://magenta.github.io/listen-to-transformer/#a1_11806.mid]
* Magic Sketchpad [https://magic-sketchpad.glitch.me/]
* Other Magenta demos [https://magenta.tensorflow.org/demos]
  
 
==Developer resources==

* DeepJ [https://github.com/calclavia/DeepJ/tree/icsc/archives/v1]
* Performance RNN (see the event-representation sketch below) [https://github.com/magenta/magenta/tree/main/magenta/models/performance_rnn]
* Perceiver AR [https://github.com/google-research/perceiver-ar]
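Performance RNN models a performance as a single stream of MIDI-like events, so expressive timing and dynamics are predicted along with the notes. A toy sketch of that event vocabulary and its decoding (a simplified illustration, not the Magenta implementation):

<syntaxhighlight lang="python">
# Toy version of the event vocabulary behind Performance RNN: NOTE_ON,
# NOTE_OFF, TIME_SHIFT and VELOCITY events in one sequence.
import random

EVENTS = (
    [("NOTE_ON", p) for p in range(128)] +                # start MIDI pitch p
    [("NOTE_OFF", p) for p in range(128)] +               # release MIDI pitch p
    [("TIME_SHIFT", ms) for ms in range(10, 1010, 10)] +  # advance 10 ms .. 1 s
    [("VELOCITY", v) for v in range(1, 33)]               # set loudness bin
)

def decode(events):
    """Turn an event stream into (pitch, start_sec, end_sec, velocity) notes."""
    time, velocity, open_notes, notes = 0.0, 16, {}, []
    for kind, value in events:
        if kind == "TIME_SHIFT":
            time += value / 1000.0
        elif kind == "VELOCITY":
            velocity = value
        elif kind == "NOTE_ON":
            open_notes[value] = (time, velocity)
        elif kind == "NOTE_OFF" and value in open_notes:
            start, vel = open_notes.pop(value)
            notes.append((value, start, time, vel))
    return notes

# The real model samples the next event from an RNN; uniform sampling here
# only shows how a generated stream is decoded into notes.
stream = [random.choice(EVENTS) for _ in range(200)]
print(decode(stream)[:5])
</syntaxhighlight>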
  
 
==Advanced readers==

* Hedges S A. Dice music in the eighteenth century. Music & Letters, 1978, 59(2): 180-187.
* Herremans D, Chuan C H, Chew E. A functional taxonomy of music generation systems. ACM Computing Surveys (CSUR), 2017, 50(5): 1-30.
* Hiller L A, Isaacson L M. Musical composition with a high-speed digital computer. Audio Engineering Society Convention 9, 1957. [https://www.aes.org/e-lib/browse.cfm?elib=189]
* Performance RNN: Generating Music with Expressive Timing and Dynamics [https://magenta.tensorflow.org/performance-rnn]
* Mao H H, Shin T, Cottrell G. DeepJ: Style-specific music generation. 2018 IEEE 12th International Conference on Semantic Computing (ICSC), IEEE, 2018: 377-382. [https://arxiv.org/pdf/1801.00887.pdf][https://github.com/calclavia/DeepJ/tree/icsc/archives/v1]
* Fernández J D, Vico F. AI methods in algorithmic composition: A comprehensive survey. Journal of Artificial Intelligence Research, 2013, 48: 513-582. [https://www.jair.org/index.php/jair/article/download/10845/25883/]
* Hawthorne C, Jaegle A, Cangea C, et al. General-purpose, long-context autoregressive modeling with Perceiver AR. arXiv preprint arXiv:2202.07765, 2022. [https://arxiv.org/pdf/2202.07765] (core idea sketched below)
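Perceiver AR (Hawthorne et al. above) handles long contexts by letting a small set of latents, one per target position, cross-attend causally to the full input sequence before running a short stack of causal self-attention layers. A simplified PyTorch sketch of that structure (not the google-research/perceiver-ar code; positional encodings, feed-forward blocks, and training are omitted):

<syntaxhighlight lang="python">
# Simplified sketch of the Perceiver AR structure: causally masked
# cross-attention compresses a long sequence into a few latents, which then
# pass through causal self-attention before next-token prediction.
import torch
import torch.nn as nn

class TinyPerceiverAR(nn.Module):
    def __init__(self, vocab=256, d=128, n_latents=32, n_layers=2, n_heads=4):
        super().__init__()
        self.n_latents = n_latents
        self.embed = nn.Embedding(vocab, d)
        self.cross = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.self_attn = nn.ModuleList(
            [nn.MultiheadAttention(d, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.out = nn.Linear(d, vocab)

    def forward(self, tokens):                       # tokens: (B, N) ints
        B, N = tokens.shape
        M = self.n_latents
        x = self.embed(tokens)                       # (B, N, d)
        q = x[:, -M:, :]                             # latents = last M positions
        # Cross-attention mask: latent i sits at input position N-M+i and may
        # only attend to positions <= N-M+i (True = blocked).
        pos_q = torch.arange(N - M, N).unsqueeze(1)  # (M, 1)
        pos_k = torch.arange(N).unsqueeze(0)         # (1, N)
        z, _ = self.cross(q, x, x, attn_mask=pos_k > pos_q)
        # Ordinary causal self-attention over the M latents.
        causal = torch.triu(torch.ones(M, M, dtype=torch.bool), diagonal=1)
        for layer in self.self_attn:
            z = z + layer(z, z, z, attn_mask=causal)[0]
        return self.out(z)                           # (B, M, vocab) logits

# Smoke test: 1024 context tokens, predictions for the last 32 positions.
logits = TinyPerceiverAR()(torch.randint(0, 256, (2, 1024)))
print(logits.shape)                                  # torch.Size([2, 32, 256])
</syntaxhighlight>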
