• Hi and welcome to the Studio One User Forum!

    Please note that this is an independent, user-driven forum and is not endorsed by, affiliated with, or maintained by PreSonus. Learn more in the Welcome thread!

Rendering in Studio One (sic)

darren

Member
While I was investigating the best way to "archive" my songs I played about with all sorts of different rendering options like "mixdown selection", "export stems", "transform to rendered audio" and so forth. I thought I understood them all but there were still some nuances which I hadn't noted down.

However, in working out how each of them behaved I ended up trying to null the render against the original track (by flipping the phase on one) to make 100% sure my ears weren't tricking me. And I couldn't help noticing that, most of the time, my renders didn't null that well even when they were doing what they should be doing.

So I've just gone back and loaded up a totally empty song without any plugins and one track (drum overheads) to test the rendering and... it's a bit concerning. I tried three different methods (mixdown selection, export stems channels, export stems tracks) and only one of them nulled, which was export stems channels. Neither of the other two nulled, and they weren't even close. Also, out of interest, I tried to null the mixdown selection against the export stems tracks and they didn't null against each other, so they were both different to the original track but in different ways.

My song is 24-bit 48kHz and all my renders are using the same values, so there shouldn't be any conversion going on. Indeed the export stems channels nulls exactly. Edit: just to add that the audio files I'm trying to render are also 24/48.

As you can see from my signature, I'm still on 6.6, which is why I put Studio One in the thread title, but I'm wondering if this is a bug, if it has been fixed in Studio Pro, or if this is somehow expected behaviour?

I don't think this is academic because when I render my songs for release to the world I'm not expecting them to be different to what I'm hearing in my DAW.

Can anybody shed some light on this?
 
I’m currently dealing with a similar topic.

After upgrading to Fender Studio Pro 8, I did a direct comparison with Studio One 7.
I loaded the same material into a new project and flipped the phase on one track. The tracks did not cancel out in the high frequencies.

My impression is that Fender Studio Pro 8 sounds noticeably better after the bounce: more width, more detailed and natural reverb tails, and overall more depth in the mix. The spatial information feels clearer and more realistic.

From my perspective, it seems that something in the rendering behavior has improved in version 8.

I’d be curious to hear if others have done similar A/B tests and noticed the same results.
 
Are the differences in your exports only related to high frequencies, or do you also find differences in medium or low frequency regions?
 
One thing you have to keep in mind: if you use a lot of non-linear stuff, or sounds that have other random things going on, a null test will not fully null. As an example, a virtual drum kit with round-robin samples might not play the same sample next time. Only just in case... :)
 
@Edu, probably worse at "high frequencies" but it's across the board.

@Navar, I'm using the OH from a live mic'd drum kit. But any of the mics does the same thing. And, like I say, the export stems channels nulls perfectly.
 
Sorry but I have no idea what this conversation is about. Can someone tell me what is meant by "nulling the render"?
 
If you have two tracks which are exactly the same, then flipping the phase on one leaves you with no sound, because the two tracks cancel each other out. So if you render a track, you can compare the render to the original by flipping the phase on one: if you hear nothing, they're exactly the same.
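To make the idea concrete, here's a minimal sketch of a null test in Python with NumPy. This is purely illustrative (a synthetic sine stands in for a rendered track; nothing here reflects Studio One's internals): invert one copy and sum.

```python
import numpy as np

# Illustrative null test: a 1-second 440 Hz "track" at 48 kHz stands in for a render.
sr = 48000
t = np.arange(sr) / sr
original = 0.5 * np.sin(2 * np.pi * 440 * t)

copy = original.copy()          # pretend this is the rendered file
null = original + (-copy)       # flip the polarity of the copy and sum

peak = np.max(np.abs(null))
print(peak)                     # 0.0 -> a perfect null: the two are identical
```

If the render really is sample-identical to the source, the residue peaks at exactly zero; any processing difference shows up as a non-zero residue you can meter or listen to.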
 
OK, I get it. I had no idea that flipping the phase was a thing that one could do in Studio One. My question is: does it make any difference if you publish a song flipped or not flipped? Also, is flipping only useful for comparing two versions of the same song, or is there another application for it?
 
SwitchBack, so if dither adds noise why add it? I'm learning here so please forgive my ignorance of these things.
 
Also check the dither settings. Dither adds random noise which is, well, random. It means that especially in the high frequencies signals won't null completely.
Unless I'm mistaken, dithering is only relevant when changing bit depth. Which I'm not doing - it's all 24 bit.
 
Digital audio processing (and the quantising that comes with it) adds noise with peaks at specific frequencies. Adding a little random noise (dithering) brings those peaks down, which is good for the signal-to-noise ratio (because the loudest noise peaks are gone). It also means that no two renderings will be exactly the same when dithering is active. But the difference should be small relative to the signal level.

And technically speaking you don't flip the phase but the polarity, which is similar to swapping + and - (black and red) on your speakers.
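The quantisation-and-dither point can be sketched numerically. A toy NumPy example, quantising a sine to 16-bit with and without TPDF dither (the TPDF scheme is an assumption for illustration, not Studio One's actual dither):

```python
import numpy as np

# Toy quantisation example: quantise a sine to 16-bit with and without
# TPDF dither. Illustrative only, not Studio One's implementation.
rng = np.random.default_rng(0)
sr = 48000
x = 0.25 * np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)

q = 1 / 2**15                                   # one 16-bit quantisation step
undithered = np.round(x / q) * q
tpdf = (rng.random(sr) - rng.random(sr)) * q    # triangular-PDF dither, +/- 1 LSB
dithered = np.round((x + tpdf) / q) * q

# The undithered error is bounded by q/2 but correlated with the signal
# (tonal distortion); the dithered error is slightly larger but random noise.
print(np.max(np.abs(undithered - x)) <= q / 2)   # True
print(np.max(np.abs(dithered - x)) <= 1.5 * q)   # True
```

Null the dithered version against the undithered one and the residue is exactly this small random noise, which is why dithered material nulls down to a very low but non-zero floor.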
 
But a) I'm pretty confident you only dither when converting bit depth, and b) export stems channels nulls 100%, so clearly no dithering.
 
Just asking: is there a possibility that a DC offset might be a factor?

Kindest regards
 
"And at the end you dither" is the mantra, but what is the end? When you export a mix? Or when you export tracks? Or when you export stems? So dithering is an option when exporting after processing, as processing adds quantisation noise. So check the setting. As for exporting raw stems: No processing there so no need for dithering either. But I could be wrong so time to ask the experts :)
 
Dithering in Studio One is turned on by default, and is automatically applied during a Mixdown or Export that involves a bit depth conversion. It’s also applied during playback when the original file resolution is 24-bits or higher while the Song format is 16-bit to avoid conversion artifacts during playback.
From version 5 blog but don't think it's changed
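If that default dither is active on an export path, it would also explain renders that don't null exactly against each other: the dither noise is drawn fresh each time. A hypothetical NumPy sketch at 24-bit (again an illustration, not how Studio One actually dithers):

```python
import numpy as np

# Two "renders" of the same audio, each with freshly drawn TPDF dither.
# Hypothetical 24-bit sketch, not Studio One's implementation.
rng = np.random.default_rng(1)
sr = 48000
x = 0.25 * np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)
q = 1 / 2**23                        # one 24-bit LSB

def render(signal):
    tpdf = (rng.random(signal.size) - rng.random(signal.size)) * q
    return np.round((signal + tpdf) / q) * q

a = render(x)
b = render(x)
# The renders are not bit-identical, so they won't fully null against
# each other, but the residue stays down at the LSB level.
print(np.array_equal(a, b))
print(np.max(np.abs(a - b)) <= 3 * q)   # True
```

The residue between two such renders is bounded by a few LSBs, far below the signal, which matches the earlier point that dither differences should be small relative to the signal level.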

 
I dither whenever I go down in bit depth or go from floating to fixed point. The second one happens when you render to a 24-bit file out of the 32- or 64-bit floating point audio engine of FSP.

But I'm pretty sure this would not make or break a null test.
What I would have a good look at is whether there are any plugins with any kind of randomness in the song. Like some saturation algos, but also plugins like chorus/flanger/doubler/shifter/tape-sim wow/flutter (even when timing is synced to host tempo, because oscillators don't necessarily start at the same point each time you render), or dynamic EQs sometimes don't null (like TBT Kirchhoff for instance). Spectral plugins like Gullfoss or Soothe are also candidates to look at. And of course synth plugins. ALL of these need to be rendered down to audio first to be able to do a meaningful null test comparison of song exports.
 
As I said in my post, it's one audio track with NO plugins. I'm just doing it as a test because my suspicions were raised.
 
Well, if it doesn't matter then see what happens when you turn it off. Worth a try :)
Hmmm... on the one hand it didn't make any difference. However, on the other hand I tried a different audio file and, while NONE of the renders were exact copies of the original audio, they did all null against each other, so they were all exactly the same "wrongness".

That's different to what happened earlier.

What a palaver.
 