Gray Wolf
Active member
This thread focuses on AI instruments. Not AI-generated music, but using AI as a sound source in place of traditional VST instruments. I’m intrigued by this emerging frontier.
To be clear, this is not an endorsement of ACE Studio. They simply happen to be implementing AI instruments in a compelling way, which makes it a useful reference point for discussing the technology more broadly. I’m sure other companies are exploring similar approaches.
Of course, it’s 100% your music. You played it. The only thing changing here is the sound source, the instrument generating audio from the MIDI notes you performed. In that respect, there’s really nothing to argue about when it comes to AI in this particular context.
In these cases, it's not triggering samples like a traditional Kontakt library; it's driving a trained neural network model that generates audio in real time from your musical intent (rough code sketch after the list). The neural network learns, for example:
- How a violin sounds at different pitches
- How it transitions between notes
- How vibrato evolves over time
- How dynamics shape the tone
- How phrasing affects timbre
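To make that concrete, here's a minimal, illustrative sketch in PyTorch of how a MIDI-driven neural instrument could be wired up: note events get rasterized into per-frame pitch/dynamics curves, and a model maps those curves to audio samples. The model here is an untrained stand-in, and everything about it (names, sizes, frame rate) is my own assumption, not how ACE Studio actually works.

```python
import torch
import torch.nn as nn

FRAME_RATE = 250                      # control frames per second (assumed)
SAMPLE_RATE = 48_000
HOP = SAMPLE_RATE // FRAME_RATE       # audio samples rendered per control frame

class TinyNeuralInstrument(nn.Module):
    """Stand-in for a trained model: per-frame controls in, audio out."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, HOP)    # one audio chunk per frame

    def forward(self, controls):              # controls: (batch, frames, 2)
        h, _ = self.rnn(controls)
        return self.head(h).flatten(1)        # (batch, frames * HOP) samples

def midi_to_controls(notes, total_frames):
    """Rasterize (pitch, velocity, start_s, end_s) note events into
    per-frame [pitch, dynamics] curves - the 'musical intent'."""
    ctrl = torch.zeros(1, total_frames, 2)
    for pitch, vel, start, end in notes:
        a, b = int(start * FRAME_RATE), int(end * FRAME_RATE)
        ctrl[0, a:b, 0] = pitch / 127.0       # pitch contour (normalized)
        ctrl[0, a:b, 1] = vel / 127.0         # dynamics shaping the tone
    return ctrl

model = TinyNeuralInstrument().eval()         # real use: load trained weights
notes = [(60, 90, 0.0, 1.0), (64, 70, 1.0, 2.0)]    # C4 then E4
with torch.no_grad():
    audio = model(midi_to_controls(notes, total_frames=2 * FRAME_RATE))
print(audio.shape)                            # torch.Size([1, 96000]) = 2 s
```

In a real product the model would be trained on recordings of the actual instrument, which is where the learned vibrato, transitions, and phrasing come from; the plumbing above just shows why MIDI stays the input and the model replaces the sample pool.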
Here is a real-world example of that with strings, where a composer feeds the model MIDI tracks that were previously playing through BBCSO.
Those strings sound pretty good to me, but YMMV. The solo violin from that model was also pretty impressive:
I'm very curious about the CPU, RAM and latency footprint of a larger collection of neural network instruments like that.
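For anyone who wants to ballpark that themselves: real-time rendering means the plugin has to produce each audio buffer faster than the buffer plays back. Here's a rough timing harness for a toy GRU; all numbers and the model itself are illustrative assumptions, not measurements of ACE Studio or any shipping product.

```python
import time
import torch
import torch.nn as nn

SAMPLE_RATE, BUFFER = 48_000, 512             # typical DAW settings
budget_ms = 1000 * BUFFER / SAMPLE_RATE       # ~10.7 ms to render one buffer

# Toy stand-in model; a production model would be larger and optimized.
toy = nn.GRU(input_size=2, hidden_size=256, num_layers=2, batch_first=True).eval()
controls = torch.zeros(1, 4, 2)               # a few control frames per buffer

with torch.no_grad():
    toy(controls)                             # warm-up
    t0 = time.perf_counter()
    for _ in range(200):
        toy(controls)                         # simulate per-buffer inference
    per_buffer_ms = (time.perf_counter() - t0) * 1000 / 200

weights_mb = sum(p.numel() * p.element_size() for p in toy.parameters()) / 2**20
print(f"budget/buffer: {budget_ms:.1f} ms | forward: {per_buffer_ms:.2f} ms | "
      f"weights: {weights_mb:.1f} MB")
```

The catch for a full template is that dozens of these would run concurrently, so whatever the per-instance CPU and RAM numbers are, they get multiplied across the whole orchestra.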
- Are neural network models perhaps the future of software instruments?
- Will we at some point have orchestral libraries that don't take up terabytes of hard drive space or take hours to download?