• Hi and welcome to the Studio One User Forum!

    Please note that this is an independent, user-driven forum and is not endorsed by, affiliated with, or maintained by PreSonus. Learn more in the Welcome thread!

"Smart" Plugins

BCnSTL

New member
I've tried several smart plugins (the Sonible smart:EQ/smart:comp, the Neutron suite, Faster Master from Mastering The Mix, etc.) and so far I'm unimpressed.
The concept seems appealing, but in practice I don't see worthwhile or trustworthy results.

Take Faster Master: pick a genre, have it listen and analyze, then get a result. OK, sure. But pick a different genre and it creates a *wildly* different EQ curve, like ±10 dB for a given band. What the...? Maybe I've been doing it wrong all this time, but my approach has been to create a generally frequency-balanced mix. The different sound/feel came from the arrangement, the timbre and selection of the instrumentation, tempo, etc. Not cutting or boosting the low end by 5+ dB because the song was "rock" vs. "pop", say. Same experience with Neutron etc.

Anyone have a different experience? What's your usage workflow? I'm curious.
 
I bought into Gullfoss by Soundtheory. This is a very intelligent plugin that analyses harsh and excessive resonances in your mix and creates notches that fly in and out all over the spectrum by varying degrees. To be honest I think it is amazing and worth every penny, and every time I use it the mix sounds better. The better the mix, of course, the less it does its thing. But when I am mastering final mixes from other clients, they can be a bit out of whack sometimes, and even after careful use of EQ (from that fabulous AMEK 200) things may still not be quite right. If you let loose with Gullfoss, though, the end result often borders on amazing. Changes can be very noticeable, but they are almost always for the better. This is all in real time, and latency is negligible with a powerful computer.

I love using it in situations that require correction and improvement. There are different versions of it, including a mastering-grade version. It does a host of other stuff too, like level changes and brightening. Not only does it tame resonances, it can also recover things that are missing or a little low. So it can go into a restorative state as well as a corrective state, all under your control, and you even decide how much of the spectrum you want Gullfoss to work on: all of it or just a section of it. That is very cool indeed. It can be used over individual stereo/mono tracks and stems of course.

When you put a spectrum analyser after it, you can definitely see what it is always trying to do: balance out the spectrum and flatten or smooth it a little so everything sounds more in balance. You can also dial in that perfect high-end slope of a well-mastered mix.
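To make "balance out the spectrum" a bit more concrete, here is a minimal measurement sketch. It has nothing to do with Soundtheory's actual algorithm; it just estimates a signal's per-octave energy slope, the same quantity you eyeball on an analyser. By this measure pink noise is flat (equal energy per octave), while white noise should come out near +3 dB/octave:

```python
# Rough sketch (not Gullfoss's algorithm): estimate the slope of a
# signal's per-octave energy, the "tilt" you see on an analyser.
import numpy as np

def octave_band_levels(x, sr, f0=62.5, n_bands=8):
    """Return (center_freqs, dB level) for each octave band of x."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    centers, levels = [], []
    for k in range(n_bands):
        lo, hi = f0 * 2**k / np.sqrt(2), f0 * 2**k * np.sqrt(2)
        band = power[(freqs >= lo) & (freqs < hi)]
        if band.size:
            centers.append(f0 * 2**k)
            levels.append(10 * np.log10(band.sum()))
    return np.array(centers), np.array(levels)

def slope_db_per_octave(centers, levels):
    """Least-squares slope of band level vs. log2(frequency)."""
    return np.polyfit(np.log2(centers), levels, 1)[0]

# White noise has a flat power spectrum, so its per-octave energy
# rises about +3 dB/octave (each octave spans twice the bandwidth).
sr = 44100
rng = np.random.default_rng(0)
white = rng.standard_normal(sr)
c, L = octave_band_levels(white, sr)
print(f"white noise slope: {slope_db_per_octave(c, L):+.1f} dB/octave")
```

Run the same measurement on a mix bus capture before and after Gullfoss and you can quantify how much tilt it dialed in.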
 

I think there may be an "experience threshold" with the do-it-all plug-ins (as opposed to a more focused plug-in like Gullfoss). If you're below that threshold, then the "smart" plug-ins produce a better result than you would obtain by yourself. If you're above that threshold, you have a better idea of the results you want, and the odds are that the "smart" plug-ins will have a harder time delivering that.
 
I sometimes use smart EQ as a second pair of ears. I'll make my moves, then bypass my EQ and see what the smart EQ suggests. More often than not I'm happier with my decisions, but every now and again the smart EQ will pick up on something I missed, which can be helpful. Like all this AI stuff in the world, I think it's best used in conjunction with your own brain, not as a replacement for it. I've had similar experiences with ChatGPT. It's rare that I'll just accept its suggestions as gospel, but often it'll surprise me or make me see something from a different perspective. They're all just tools, not replacements for skill and knowledge.
Things like Soothe 2 can be real time savers. I might hear some grim resonances somewhere around 1–3 kHz, so rather than fiddling about ducking them all out with dynamic EQ, I'll just slap Soothe on, target the area I'm hearing, and let it do its thing.
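The "find the resonance for me" part is essentially peak detection against a spectral envelope. Here is a toy illustration (nothing like Soothe 2's actual DSP; the 12 dB threshold and smoothing width are arbitrary picks) that flags narrow spikes standing well above a smoothed version of the spectrum:

```python
# Toy resonance finder (NOT Soothe 2's algorithm): flag FFT bins that
# poke more than threshold_db above a moving-average spectral envelope.
import numpy as np

def find_resonances(x, sr, threshold_db=12.0, smooth_bins=51):
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    db = 20 * np.log10(mag + 1e-12)
    # Crude spectral envelope: moving average of the dB spectrum.
    kernel = np.ones(smooth_bins) / smooth_bins
    envelope = np.convolve(db, kernel, mode="same")
    return freqs[db - envelope > threshold_db]

# A steady 2 kHz tone buried in noise should be flagged at 2000 Hz.
sr = 44100
t = np.arange(sr) / sr
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 2000 * t) + 0.05 * rng.standard_normal(sr)
hits = find_resonances(x, sr)
print(hits)  # flagged frequencies; the 2 kHz tone should be among them
```

A real processor then ducks those bins dynamically instead of just reporting them, which is exactly the fiddly work being saved.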
I think a really good use of AI which doesn’t seem to have been focused on by developers would be to help with mundane, non creative parts of our work, such as intelligent session prep or content aware export routing etc. things that take time but aren’t the fun creative parts. I’ve said it before, but AI needs to stay out of the creative parts of life, that’s our special human realm!
 
I've used some of these AI tools, and frankly, most of them just apply a template. The template may be AI-developed, or the code may simply be AI-completed; either way, it's hard for me to call this kind of thing "AI". At the same time, a significant portion of these tools are poorly tuned and offer little to control. The full version of Ozone does a great job here, and I like it, but it's still in the minority. Most only give you a few knobs with no real indication of what they're doing. That's beginner-friendly and can get a good sound after analysis, but for engineers with professional needs it's a bit limited.

In fact, there is a better way to use AI: let it handle the tedious tasks rather than replace our creativity. We have already seen that some solutions exist, but Studio One has not yet followed up:
  • DAW AI assistant. I've heard that the new generation of FL Studio will come with an AI assistant that is familiar with the user manual. That way, when people run into problems, they can ask the assistant directly instead of flipping through the manual, and get better answers.

  • Automatic naming, grouping, and color assignment. This kind of repetitive, meaningless work definitely eats up a lot of time. A year ago such AI already existed on the market, but it seems only Pro Tools users could use it.
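Even without any AI, the grouping-and-coloring part can be approximated with plain keyword rules. A toy sketch of the idea (the keyword table and color names are invented, and no real DAW API is involved):

```python
# Rule-based take on "auto grouping and coloring": match keywords in
# track names. The keyword/color tables below are made up for the demo.
GROUPS = {
    "drums":  ("kick", "snare", "hat", "tom", "overhead", "room"),
    "bass":   ("bass", "sub", "808"),
    "guitar": ("gtr", "guitar"),
    "vocals": ("vox", "vocal", "lead", "bgv", "harm"),
}
COLORS = {"drums": "red", "bass": "purple", "guitar": "orange",
          "vocals": "blue", "other": "gray"}

def classify(track_name):
    """Return (group, color) for a track name; 'other' if no match."""
    name = track_name.lower()
    for group, keywords in GROUPS.items():
        if any(k in name for k in keywords):
            return group, COLORS[group]
    return "other", COLORS["other"]

for track in ["Kick In", "SnareTop", "Lead Vox dbl", "Synth Pad"]:
    print(track, "->", classify(track))
```

Where AI would genuinely help is the fuzzy cases that keyword matching misses ("Audio 14" that is actually a shaker), but the boring 90% is just rules like these.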

I have always believed that humans invented AI to free us from tedious, repetitive work, saving time and energy to create more wonderful things, not to replace us in producing music or making decisions for us while we carry on doing the tedious, mechanical tasks ourselves.
 

At least for now, I'd settle for "click here to create an archive folder that does the following: back up your song, render all tracks as audio without effects, render all tracks as audio with effects, save all presets used in the song, generate a .dawproject file, and save any mastering page-related project with the most recent digital release files." That would be soooooo helpful.
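Since Studio One exposes no public scripting hook for this (as far as I know), here is a hypothetical sketch of that archive button done outside the DAW. It only lays out the folder structure and collects what already exists on disk; rendering the dry/wet stems and exporting the .dawproject would still have to happen inside the DAW. All folder names are invented:

```python
# Hypothetical "archive folder" sketch: build the target structure and
# copy the .song files as the backup. No real Studio One API exists
# for the render/export steps, so those folders start out empty.
import shutil
import tempfile
from pathlib import Path

def archive_song(song_dir, dest):
    src = Path(song_dir)
    out = Path(dest) / f"{src.name}_archive"
    for sub in ("song_backup", "stems_dry", "stems_wet",
                "presets", "dawproject", "masters"):
        (out / sub).mkdir(parents=True, exist_ok=True)
    for f in src.glob("*.song"):  # back up the song files themselves
        shutil.copy2(f, out / "song_backup" / f.name)
    return out

# Tiny demo against a throwaway folder:
tmp = Path(tempfile.mkdtemp())
(tmp / "MyTrack").mkdir()
(tmp / "MyTrack" / "MyTrack.song").write_bytes(b"demo")
archived = archive_song(tmp / "MyTrack", tmp)
print(sorted(p.name for p in archived.iterdir()))
```

One button driving something like this, with the DAW filling in the rendered stems and exports, is exactly the non-creative drudgery AI (or even plain automation) should be eating.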
 
That would be AMAZING!
 