(3) is what figures out (2), so there's no need for (2) to exist independently of (3). (1) is the open-source code. By design, this model is trained on MIDIs and generates MIDIs. The README explains how to use the code, but it's designed to run on Linux (specifically, the sort of GPU AWS instance the author mentions in the article). However, once you've trained a model, you can reuse it over and over to generate MIDIs without needing a fancy GPU. What I'm getting at is: if you can feed MIDI files into your thing, you're good to go.
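Just to illustrate that last point, here's a minimal sketch of checking that a generated file is ordinary MIDI you can feed into something else. It assumes the `mido` library and a file called `output.mid` produced by the trained model; neither is part of the project itself, they're just stand-ins for whatever you'd actually use.

```python
# Minimal sketch: verify a generated MIDI file is usable downstream.
# Assumes the `mido` library and an illustrative file name "output.mid";
# neither comes from the original project.
import mido

mid = mido.MidiFile("output.mid")
print(f"Type {mid.type}, {len(mid.tracks)} track(s), {mid.length:.1f} s long")

# Count note-on events per track to confirm there's actual musical content.
for i, track in enumerate(mid.tracks):
    note_ons = sum(1 for msg in track
                   if msg.type == "note_on" and msg.velocity > 0)
    print(f"Track {i} ({track.name!r}): {note_ons} note-on events")
```

If that prints sensible note counts, any tool that accepts standard MIDI files should be able to consume the output.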