kleinbl00  ·  3398 days ago  ·  post: Composing music with Neural Networks

It's interesting, anyway. I'm trying to gauge the distance between "me reading this" and "me having something I can feed into Reaktor" and it's a depressingly long hike.

empty  ·  3398 days ago

What would you need for Reaktor to make use of it?

kleinbl00  ·  3398 days ago

Well...

First let me say I'm not a programmer. I've occasionally aspired to be one, but I'm held back by my nightmare experiences in Turbo Pascal and Fortran. So I see this, and I know that, logically, I have the chops to accomplish what I want, but, practically, I lack the discipline to get a clean compile quickly enough to stave off project-killing frustration.

So what we have here (did you write this? or did you find it?) is a compositional engine whose behavior is determined parametrically, correct? The mix of the nodes is parametric, the bias is parametric, the activation function is parametric, the feedback is parametric. All of those are the kind of parameters that have to be set for the thing to function at all - mess with them too much and you'll pull the whole affair out of the usable realm.
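If I've got that right, the coarse knobs would look something like this - the names are my guesses at the shape of it, not whatever the actual code calls them:

    # Hypothetical sketch of the coarse knobs - names are illustrative,
    # not the actual variables in the project's code.
    network_config = {
        "hidden_sizes": [300, 300],  # how many nodes, per layer
        "activation": "tanh",        # the activation function
        "bias": 0.0,                 # starting bias for the units
        "recurrent": True,           # whether output feeds back in
    }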

But then we've also got the fine parameters -

- pitchclass

- vicinity

- context

- beat

etc. - inputs that effectively shape the behavior of the network. But on top of that, there's also training, whereby the network is refined through iterative analysis of a large body of work.
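If I squint, those per-note inputs look like they get packed into one feature vector per note - something like this, though the real encoding is surely different:

    # My rough guess (not the project's actual encoding) at how the
    # per-note inputs above might combine into one feature vector.
    def note_features(midi_note, beat_in_measure, prev_vicinity, prev_context):
        pitchclass = [0] * 12
        pitchclass[midi_note % 12] = 1   # one-hot: which of the 12 pitch classes
        beat = [(beat_in_measure >> i) & 1 for i in range(4)]  # binary beat counter
        # vicinity: what sounded near this note last step;
        # context: how much of each pitchclass sounded last step.
        return pitchclass + beat + prev_vicinity + prev_context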

Here's my summary of what's needed to make this thing function:

1) Precise architecture of the neural network so that it will work

2) Parametric architecture of the functions so that it can work well

3) Iterative training of the network so that it can work this way

Tell me if I'm close, and where I'm wrong, so that I don't say something stupid next.

empty  ·  3398 days ago

(3) is what figures out (2), so there's no need to have (2) independent of (3). (1) is the open-source code.
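To see why (3) subsumes (2), here's training in miniature - one weight instead of millions, but the same idea: the loop finds the parameter, nobody sets it by hand.

    # Toy gradient descent: learn w so that w * x fits the data.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # x, y pairs (y = 2x)
    w, learning_rate = 0.0, 0.05
    for step in range(200):
        # gradient of mean squared error (w*x - y)**2 with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad                 # nudge w downhill
    print(w)  # ends up near 2.0 without anyone picking it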

By design, this model is trained on MIDIs and generates MIDIs. The README explains how to use the code, but it's meant to run on Linux (specifically, the sort of GPU AWS instance the author mentions in the article). However, once you've trained a model, you can just use it over and over again to generate MIDIs, without needing a fancy GPU.
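The whole workflow is roughly this shape - every function name here is a hypothetical stand-in, not the repo's actual API, so check its README for the real calls:

    # Rough shape of the workflow; all names are hypothetical stand-ins.
    pieces = load_midi_folder("music/")        # your training MIDIs
    model = build_network()                    # (1) the architecture
    train(model, pieces)                       # (3) slow; wants a GPU
    save_weights(model, "params.pkl")          # train once...

    load_weights(model, "params.pkl")          # ...then, on any machine,
    write_midi(generate(model), "output.mid")  # generate as often as you like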

What I'm getting at is: If you can feed MIDI files into your thing, you're good to go.
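And getting a .mid file out of Python is the easy part. Here's a small, runnable example using the mido library (pip install mido); the notes are a placeholder scale standing in for whatever the network spits out:

    # Write a minimal MIDI file with mido; the notes here are a
    # placeholder for the network's actual output.
    import mido

    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for note in [60, 62, 64, 65, 67]:  # C major fragment
        track.append(mido.Message('note_on', note=note, velocity=64, time=0))
        track.append(mido.Message('note_off', note=note, velocity=64, time=240))
    mid.save('output.mid')  # drag this into Reaktor or any DAW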