Thanks for the resources. I'd never heard of Dan Tepfer or Extempore - what a great way of imagining music!
What I was planning is something simpler - much like generating sound from a written score, but, as in a live classical performance, the generated sound reacts to the player's cues.
I'm not all that familiar with Magenta, but what I'm currently trying to implement (at a very early stage) is Wave2Midi2Wave, which was released alongside the MAESTRO dataset as part of Magenta [1]. I'm not sure whether they've released any code for it as well.
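To make the idea concrete, here's a toy sketch of the wave -> MIDI -> wave shape of that pipeline. Everything below is a hypothetical stand-in I wrote for illustration, not Magenta code: the transcription stage is a crude energy threshold instead of a real Onsets and Frames model, and the synthesis stage is a decaying sine instead of a conditional WaveNet.

```python
import numpy as np

def transcribe(audio, sr=16000, frame_size=512, threshold=0.1):
    """Wave -> 'MIDI': flag frames whose RMS energy exceeds a threshold
    as note onsets (a toy stand-in for a transcription model)."""
    events = []
    for i in range(len(audio) // frame_size):
        frame = audio[i * frame_size:(i + 1) * frame_size]
        if np.sqrt(np.mean(frame ** 2)) > threshold:
            events.append(i * frame_size / sr)  # onset time in seconds
    return events

def synthesize(events, duration, sr=16000, freq=440.0):
    """'MIDI' -> wave: render each onset as a short decaying sine
    (a toy stand-in for a neural synthesizer)."""
    out = np.zeros(int(duration * sr))
    t = np.arange(sr // 4) / sr  # 250 ms note envelope
    note = np.sin(2 * np.pi * freq * t) * np.exp(-8 * t)
    for onset in events:
        start = int(onset * sr)
        end = min(start + len(note), len(out))
        out[start:end] += note[:end - start]
    return out

# Round trip on a toy input: one loud burst surrounded by silence.
sr = 16000
audio = np.zeros(sr)
audio[4000:6000] = 0.5 * np.sin(2 * np.pi * 440 * np.arange(2000) / sr)
events = transcribe(audio, sr)
resynth = synthesize(events, duration=1.0, sr=sr)
```

The interesting part for the reactive-performance idea would be replacing the fixed event list with one that is updated live from the player's input before synthesis.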
[1] - https://magenta.tensorflow.org/maestro-wave2midi2wave