• Hi and welcome to the Studio One User Forum!

    Please note that this is an independent, user-driven forum and is not endorsed by, affiliated with, or maintained by PreSonus. Learn more in the Welcome thread!

AI Instruments

Gray Wolf

Active member
This thread focuses on AI instruments. Not AI-generated music, but using AI as a sound source in place of traditional VST instruments. I’m intrigued by this emerging frontier.

To be clear, this is not an endorsement of ACE Studio. They simply happen to be implementing AI instruments in a compelling way, which makes it a useful reference point for discussing the technology more broadly. I’m sure other companies are exploring similar approaches.

Of course, it’s 100% your music. You played it. The only thing changing here is the sound source, the instrument generating audio from the MIDI notes you performed. In that respect, there’s really nothing to argue about when it comes to AI in this particular context.

In these cases, it's not triggering samples like a traditional Kontakt library; it's driving a trained neural network model that generates audio in real time based on musical intent. For example, the neural network learns:
  • How a violin sounds at different pitches
  • How it transitions between notes
  • How vibrato evolves over time
  • How dynamics shape the tone
  • How phrasing affects timbre
It learns patterns, not samples.
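To make the "patterns, not samples" idea concrete, here's a toy sketch; this is my own illustration, not ACE Studio's (or anyone's) actual architecture, and the network here is randomly initialized rather than trained. The point is just the shape of the idea: per-frame musical intent in, audio samples out.

```python
import numpy as np

# Toy sketch: a per-frame conditioning vector (pitch, velocity, vibrato
# phase, dynamic level) is mapped to audio samples by a small neural
# network. All shapes and the network itself are illustrative.

rng = np.random.default_rng(0)

FRAME = 256          # audio samples generated per conditioning frame
FEATURES = 4         # pitch, velocity, vibrato, dynamics
HIDDEN = 32

# Randomly initialized weights stand in for a trained model.
W1 = rng.standard_normal((FEATURES, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, FRAME)) * 0.1

def render_frame(pitch_hz, velocity, vibrato, dynamics):
    """Map one frame of musical intent to FRAME audio samples."""
    x = np.array([pitch_hz / 1000.0, velocity, vibrato, dynamics])
    h = np.tanh(x @ W1)      # nonlinearity is what lets a net learn timbre
    return np.tanh(h @ W2)   # bounded output in [-1, 1], like audio

# A note at 440 Hz with slowly evolving vibrato and a crescendo.
frames = [render_frame(440.0, 0.8, np.sin(i * 0.1), i / 100.0)
          for i in range(100)]
audio = np.concatenate(frames)
print(audio.shape)  # (25600,)
```

A trained model would have learned W1/W2 (and far more layers) from recordings, which is why changing the vibrato or dynamics input changes the timbre rather than just swapping a sample.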

Here is an example of that with strings, where a composer feeds it MIDI tracks that were previously playing BBCSO.

[embedded video]

Those strings sound pretty good to me, YMMV. The solo violin from that model was also pretty impressive:

[embedded video]

I'm very curious about the CPU, RAM and latency footprint of a larger collection of neural network instruments like that.
  • Are neural network models perhaps the future of software instruments?
  • Will we at some point have orchestral libraries that don't take up terabytes of hard drive space and take hours to download?
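For a rough sense of why the storage question is interesting, here's some back-of-envelope arithmetic; every number is an illustrative guess, not a measurement of any real product:

```python
# Back-of-envelope comparison (all figures are hypothetical).

sample_library_gb = 500            # a large orchestral sample library
model_params = 50_000_000          # a mid-sized neural instrument model
bytes_per_param = 4                # 32-bit float weights

model_gb = model_params * bytes_per_param / 1e9
print(f"model: {model_gb:.1f} GB vs library: {sample_library_gb} GB")
print(f"ratio: {round(sample_library_gb / model_gb)}x smaller")
```

The catch, of course, is that the saved disk space is traded for real-time inference cost, which is exactly the CPU/latency question.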
Thoughts?
 
Yes to both questions, but I've yet to see any part of the software industry that hasn't expanded its products to fit the assumed space available, so I'm sure that AI instrument makers will find a way to use any space freed up by not needing so many samples!

I'm less impressed than this guy by the results he's getting in this test, but I'm sure that - at least in part - that's because we've become used to hearing 'perfect' renditions by VSTs, and getting variations more representative of human players actually generates a less perfect result! There's no doubt that the technology is impressive, and it will undoubtedly get better. I do like that he emphasises that it's a tool for perhaps improving mock-ups, not for creating commercial music.
 
You're casting your AI net very wide. In general, AI is associated with autonomous learning, reasoning, problem-solving, perception, and decision-making (Wikipedia). A system that by itself can't deliberately go beyond its initial programming (e.g. an operating system) is not considered to have artificial intelligence. Makes AI a bit more special ;)
 
It depends on the definition of AI I guess, which may be different depending on where you live. But somehow a machine that correctly deduces, from me pressing a button, that I want coffee shouldn't qualify as AI when it is a coffee machine ;)
 

I knew I was doing something wrong!
 
Hence my often-used gumball machine analogy. Anywhere there's an algorithm, there's practically someone claiming "AI". There's so much sleight of hand because there's a market for it. So for the purposes of what's been presented here, sure. Acoustic and dynamic variability, sympathetic vibrations, and any way to gain realism of an instrument or timbre is likely a good thing. It's just that this kind of development has been going on continually anyway. So dropping in the label "AI" implies something more that still needs to be explained.
JMO, but once a person has asked some well-known artist to sit in on their song to create some line they couldn't come up with themselves, they've just bastardized their product. Some of you may not agree with that, but then I wouldn't care anyway.
 
Hmm, then a pencil would be intelligent too: you push one end and the other end converts your thoughts to text. I think we can agree that's far-fetched (as are the lightbulbs). :p
 
I'm not seeing the lightbulb analogy either. I mean, these are incremental developments that man has achieved using different alloys, filaments, and such, so let's give a little credit to actual human intelligence and its milestones on their own merit. For example, if any type of light turns on due to motion sensing, that's not AI, no matter how one spins it. My Keurig coffee machine isn't AI. We're getting ahead of ourselves. ; )

Why doesn't it surprise me that this thread runs in all directions, regardless of what the OP (respectfully) tried to keep on the rails.
...let's see.
 
I'm just the messenger. ;)

Ask Google AI . . .

Google AI says some LED lights and Keurig coffee machines are AI devices depending on their specific capabilities and behaviors.

Some of the newer Keurig coffee machines can "talk" to Amazon Alexa--itself clearly an AI entity--and order refills and do other stuff automagically.

Considering it has been suggested that I am AI, well it's . . .

Lots of FUN :)
Spoken like a true AI! :) I have a question for you, just wondering: are you a real person or an AI bot? Just asking, no offense intended. I'm just curious to know.
 
"Ask Google AI" ?
Here's a thought. If I am any form of intelligence in this universe, I know I wouldn't want my species to be labeled "artificial" in front of our level of intelligence. That phrase was pigeonholed by humans. That said, I'm perfectly fine with calling ninety percent of humans "artificial humans". Seems fitting from that perspective.

* * * * *
French philosopher René Descartes, who coined the phrase “I think, therefore I am,” laid the foundation for modern philosophy, asserting the certainty of one's existence through the act of thought.
 
I think that information could have been consolidated.

Most of us know East West, and its huge assortment of library sampled sounds.
Pay the $22 and see what you come up with, for ACE Studio for example. Post it, even. We can determine (more importantly, you can) how real it is, or ultimately whether it works for you. In the final analysis, if you like it, that is what is important.

Respectfully: if you don't mind, it might be more helpful to trim your responses down in size. Thanks.
If you have to write such lengthy responses, I honestly don't have the time to read them, nor watch the videos (some posts being numerous videos). Just sayin'
😊
 

A couple of thoughts...

I imagine raw computing power will be the inhibiting factor, but hey, who knows, perhaps AI will reinvent itself in an all-encompassing form of itself?

I suppose one day in the not-too-distant future we'll be jammin' along with an "AI Joe's Garage Band". The question I keep asking myself is whether this is going to satisfy folk who get a kick out of sharing a musical space. You know, that shared musical conversation, whether master or student?

[embedded video]

You could ask: is this the best use case for AI in the grand scheme of things?

Kindest regards to all. :)
 
You could ask is this the best use case for Ai in the grand scheme of things ?

"Best" is a pretty subjective word but I do think it's one of the least problematic uses in the music arena for sure.

Problem: at least in the professional orchestral library domain, libraries are very large, take up tons of hard drive space, use a lot of CPU, and take some time to download and install or copy. The overall cost exceeds the cost of the products themselves, as you also have to invest in very large and fast hard drives at a minimum. Of course, you also need very fast RAM and CPU for tons of gigabytes of samples to load and unload in a reasonable amount of time.

Potential solutions: physical modeling (SWAM, etc.) is one answer to that problem: small footprint, fast loading, great sound for some things. AI neural network modeling as described here may be the ultimate answer in the future.
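To give a concrete taste of how little code a physical model can be, here's the classic Karplus-Strong plucked-string algorithm, a much simpler ancestor of what SWAM-style modeling does. No samples are stored anywhere; the tone comes entirely from a filtered delay line:

```python
import numpy as np

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Karplus-Strong plucked string: a noise burst circulates through a
    delay line and is lowpass-filtered on each pass, so the tone starts
    bright and decays naturally."""
    n = int(sample_rate * duration_s)
    delay = int(sample_rate / freq_hz)       # delay length sets the pitch
    buf = np.random.default_rng(1).uniform(-1, 1, delay)   # the "pluck"
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # average of two neighbors = simple lowpass; damping shortens decay
        buf[i % delay] = damping * 0.5 * (buf[i % delay]
                                          + buf[(i + 1) % delay])
    return out

note = karplus_strong(220.0, 1.0)   # one second of an A3 "string"
print(note.shape)                    # (44100,)
```

Commercial physical models are vastly more sophisticated (bowed-string friction, body resonance, and so on), but the footprint story is the same: equations instead of gigabytes.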

I guess we'll have to just wait and see what unfolds.
 
SWAM is interesting. I don't have any SWAM VSTis, but I've been reading some of the info about them, and yes, I agree.
A small footprint and sound modelling present a different method of being expressive, other than "articulation = sample" etc.
When you think about the human experience and the rate of developing tech, I do believe we will have to grow with it or surrender control to an algo.

Yes. The neural network is a fascinating subject, and the possibilities, I guess, become exponential.
There's a lot of money being poured into AI, and the question of whether machine thought processes equal the human philosophical experience has been argued ad infinitum. Source: the Wikipedia article "The Chinese Room", along with an ever-growing discussion spread across the web.

The end game for me is to bridge the gap that my aged and failing bones are creating, so if tech can help and keep it enjoyable, I'm in.

Kindest regards
 
Just so I better understand the technical differences myself, I asked AI to illuminate the differences between those two things.

These links never scroll to the top so you'll have to scroll up to read it from the beginning. I actually didn't know that Pianoteq was physical modeling.

 

Yeah, we have come a long way since Pythagorean sound theory; the comparison between neural networks and sound modelling is quite revealing.

Have you had a cruise round the Audio Modeling website or watched any of the YouTube stuff on SWAM instruments?

If you have 17 minutes to spare, here's an interesting vid. All the usual caveats apply, and real-world use may require specific settings for best results.
Anyhow...

[embedded video]

Kindest regards

PS: excuse my poor choice of words above; substitute "desired" for "best" in the above comments...
 
Last edited:
That's impressive, the video.
It really is impressive! While watching the video, the thing that kept entering my mind was: as good as this application is, it would be wasted on me, as to get the most out of it, I'd have to think like a string player, not a guitar player.

Which is where the best apps fall short, I suspect. There's so much more to it than what notes to play, even in the most sophisticated AI orchestra pack... it's how to play them.
 
The Osmose keyboard/synth plays an important role by way of its MPE capabilities; how well a DAW might capture those nuances is what might be the break point.
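For anyone wondering what "capturing those nuances" means at the wire level: MPE gives each note its own MIDI channel, so per-note pitch bend and pressure don't collide. A sketch, with the note data invented for illustration (the message layout follows the standard MIDI spec):

```python
# Sketch of MPE's core idea: channel-per-note.

def note_on(channel, note, velocity):
    # 0x90 is the note-on status nibble; the low nibble is the channel
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, value_14bit):
    # 14-bit bend, 8192 = center; split into 7-bit LSB/MSB data bytes
    return bytes([0xE0 | channel, value_14bit & 0x7F, value_14bit >> 7])

# Two simultaneous notes on their own MPE member channels, each with an
# independent bend -- impossible on a single shared channel.
msgs = [
    note_on(1, 60, 100),        # middle C on channel 2 (0-indexed 1)
    note_on(2, 64, 100),        # E on channel 3
    pitch_bend(1, 8192 + 512),  # bend only the C upward
    pitch_bend(2, 8192),        # leave the E centered
]
for m in msgs:
    print(m.hex())
```

If the DAW merges everything back onto one channel, those per-note bends collapse into a single global bend, which is exactly where things can break.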

Hey, @Ken Morgan as you mention the guitar player...
The one that got me was Marco Parisi playing along to Little Wing on a Seaboard Rise; he gets 10 out of 10 for the air drumming. :ninja:

[embedded video]

Kindest regards, and happy surfin', folks. Have fun :cool:
 