Question about Main levels in Editor and the "Loudness" adjustment on export

Huertaaj

Active member
I'm a little confused about setting the "Main" levels in the Studio One Editor and what happens when I export a finished song using the "Adjust Loudness" setting. Here is my flow...

1) Compose a song with multiple tracks

2) Adjust each track so that it sounds good and all the tracks are nice and clear (plus other processing like EQ if needed, etc.)

3) Play with the volume of those tracks and the Main level fader so that the loudest sound on the Main meter is at about -3 to -6 dB.
The Main fader at this point is typically at around 0 dB. The song now sounds good to my ears.

4) In order to export the song, I have read that I should set "Adjust Loudness" to about -14 LUFS for Spotify and other platforms, so I do that.

5) After exporting the song as a .WAV file, I import it back into Studio One and play it. What I notice is that with the Main fader at 0 dB the song definitely sounds less loud, and the observed maximum levels on the Main meter are lower than before the export; I see the imported song peaking at about -6 dB or less. Is that what I should expect, or am I missing something in the process of producing songs for publishing? In other words, is my process more or less correct, or is there some glaring error in my thinking?

Thanks for any input.
 
You don't want it too loud for Spotify/YouTube etc., as it goes through their own internal compression algorithms. These algorithms require headroom; if the track is too loud you will lose out on quality (it won't clip, but the dynamics will end up a lot more squashed after their compression than usual). Spotify normalises the volume to its own reference anyway.

Somebody can correct me if I'm wrong.
 
When you tick the Adjust Loudness box, all S1 does is turn your track up or down to hit the LUFS target you set, so in your case -14 LUFS. If your track was louder than -14 LUFS, S1 will simply turn it down to reach the target. If your track was quieter than -14 LUFS, S1 can only increase the gain up to the peak ceiling (probably -1 dBFS), so getting all the way up to the target loudness might not be possible unless you apply some limiting/compression/clipping to your mix before rendering… and this is where mastering comes in :)
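
In other words, Adjust Loudness behaves like one static gain applied at render time. Here is a minimal sketch of that arithmetic in Python, assuming a single static gain (S1's exact implementation isn't documented) and using the pyloudnorm library as the BS.1770 loudness meter:

```python
import numpy as np
import pyloudnorm as pyln  # BS.1770 loudness meter

def loudness_adjust_gain(data, rate, target_lufs=-14.0, ceiling_dbfs=-1.0):
    """Sketch of a static 'adjust loudness' pass: one gain for the whole file."""
    meter = pyln.Meter(rate)
    measured = meter.integrated_loudness(data)  # integrated LUFS of the mix
    gain_db = target_lufs - measured            # negative if mix is louder than target

    # Gain up is capped so the sample peak never exceeds the ceiling.
    peak_dbfs = 20 * np.log10(np.max(np.abs(data)) + 1e-12)
    gain_db = min(gain_db, ceiling_dbfs - peak_dbfs)

    return data * 10 ** (gain_db / 20)
```

If the OP's mix measured louder than -14 LUFS, the export simply turned the whole file down, which is exactly why the re-imported peaks now sit lower than before.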
 
Spotify/YouTube will just turn a track down if it's louder than their loudness reference; they won't apply extra dynamic compression. So loudness is more of a taste thing than a hard rule.
You do, however, have to watch out for intersample peaks, which can cause distortion/artifacts when the platforms encode your track, hence the guideline of keeping true peak under -1 dB.
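
For reference, true peak is measured on an oversampled version of the signal, because intersample peaks fall between the actual samples. A rough sketch of the idea in Python (BS.1770-4 specifies a particular 4x interpolation filter, so treat this generic resampler as an approximation):

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(data, oversample=4):
    """Rough true-peak estimate: oversample, then take the sample peak."""
    upsampled = resample_poly(data, oversample, 1)  # 4x polyphase upsampling
    return 20 * np.log10(np.max(np.abs(upsampled)) + 1e-12)
```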
 

I'm not talking about dynamic compression. I'm talking about the lossy compression that reduces file sizes and bandwidth when streaming on these services. Spotify uses Ogg Vorbis or AAC, and that is lossy. Other services may use MP3, or MP4 for video with audio. Regardless, they all work from the same principles of lossy compression and just use different algorithms, which are more or less efficient.

Lossy compression algorithms need a certain amount of headroom, otherwise they make a poor job of it. If you don't give them headroom, the music will get squashed in parts by the codec itself, or detail will simply be discarded, and that's just the way it goes.

Additionally, normalisation is also par for the course (making tracks louder or quieter) so the volume is consistent across their playlists.

The exception is Spotify Premium subscribers, or other lossless services (where available), which offer lossless FLAC compression; then that isn't an issue (the track is just normalised). But not all Spotify customers use that tier.
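
If you want to see the overshoot risk for yourself, you can round-trip a master through a lossy codec and compare peaks before and after. A sketch using the soundfile library's Ogg Vorbis support (this assumes a libsndfile build with Vorbis enabled, and the file names are placeholders):

```python
import numpy as np
import soundfile as sf

# Round-trip a master through Ogg Vorbis and compare sample peaks.
data, rate = sf.read("master.wav")  # placeholder for your exported mix
sf.write("roundtrip.ogg", data, rate, format="OGG", subtype="VORBIS")
decoded, _ = sf.read("roundtrip.ogg")

peak_in = 20 * np.log10(np.max(np.abs(data)) + 1e-12)
peak_out = 20 * np.log10(np.max(np.abs(decoded)) + 1e-12)
print(f"peak before: {peak_in:.2f} dBFS, after codec: {peak_out:.2f} dBFS")
# A master slammed right up to 0 dBFS will often decode with peaks above
# 0 dBFS, which is the overshoot the -1 dBTP guideline protects against.
```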
 
That's why I use the Youlean Loudness Meter.
I put it last in the effects chain when mastering and get a ballpark reading.
But the most accurate reading comes from the full paid version, where you can drag and drop your exported mix into it. That gives an instant reading with very accurate true peak and LUFS values.
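
For anyone who would rather script that offline check, the pyloudnorm library gives a comparable integrated LUFS reading from a rendered file (the file name is a placeholder):

```python
import soundfile as sf
import pyloudnorm as pyln

# Measure the integrated loudness of an exported mix, offline.
data, rate = sf.read("exported_mix.wav")  # placeholder file name
meter = pyln.Meter(rate)                  # BS.1770 meter
print(f"integrated loudness: {meter.integrated_loudness(data):.1f} LUFS")
```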

By rights you should not master a song during mixing; it is a second step. But with the right technique you can get away with it.
My other tool is the LoudMax brick-wall limiter by Thomas Mundt. I tested every limiter I had on hand, and it was the only one that came within 0.2 dB of the readings from the Youlean meter.
I set it at -1.0 dB, and generally that is what I get under normal conditions.
If I push the mix into it, it will hang in there, but if you start seeing steady peak reduction it might only hold at -0.8 dB, and that drops the harder you push it.

Almost all of the other limiters I tested never maintained -1.0 dB, and many add unwanted harmonic content. LoudMax is squeaky clean.
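
For the curious, here is a naive sketch of what a lookahead brick-wall limiter does internally: duck the gain before a peak arrives, then recover slowly. This is purely illustrative and is not LoudMax's actual algorithm, which isn't public:

```python
import numpy as np

def brickwall_limit(x, ceiling_db=-1.0, lookahead=64, release=0.9995):
    """Naive lookahead brick-wall limiter for a mono float signal."""
    ceiling = 10 ** (ceiling_db / 20)
    padded = np.concatenate([x, np.zeros(lookahead)])
    out = np.empty_like(x)
    gain = 1.0
    for n in range(len(x)):
        # Loudest upcoming sample within the lookahead window.
        peak = np.max(np.abs(padded[n:n + lookahead])) + 1e-12
        target = min(1.0, ceiling / peak)
        # Attack instantly when gain must drop; release slowly otherwise.
        gain = target if target < gain else release * gain + (1 - release) * target
        out[n] = gain * x[n]
    return out
```

Note that this only guarantees the sample-peak ceiling; intersample peaks can still poke slightly above it, which may be part of why a limiter's measured ceiling drifts when you push it hard.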
 
Sure, I think we might be confusing two different things. The lossy compression itself isn’t dynamically reacting to the loudness of your track. The main thing is avoiding clipping, which is why mastering engineers usually leave around -1 dBTP to control intersample peaks. As long as true peaks are under control, how loud you make the master is really more a creative choice. The codecs don’t apply extra dynamic compression, they just remove data to reduce file size.

When they talk about normalisation, that's simply the platform turning tracks up or down to keep a consistent level across the service, much like Studio One's loudness target option. For example, a master measuring -9 LUFS on a platform that targets -14 LUFS is just played back 5 dB quieter.
 

Not confused here. I never said the algorithm was dynamically reacting to the loudness of the track; it's just how the algorithm works. It's a necessary evil.

The lossy compression algorithm which Spotify (or whatever) applies after upload squashes the audio and discards detail; that's its job, and one hopes it does this without it being too noticeable. How much it squashes it (very little if mastered correctly), how much detail is discarded, and how efficiently the data is stored all depend on the algorithm used. Without decent headroom it does this far too much, and it becomes very audible.

Normalisation is the second step, and as stated, that just sorts out the volume levels against all the other tracks they have.

I'm not talking about mastering, which of course is an entirely different thing, but of course a file without large fluctuations is going to help the lossy compression algorithm.
 