
'Copy External Files': Do the references get moved?

Hey gang, just need to double-check this: when you right-click in the Pool > Copy External Files, it brings all used pool audio into the media folder if it's not there already.

I think the file references stay the same, i.e. they still point to the external locations and only use the media folder file if that reference gets broken?

Is this correct or does the reference itself get overwritten to the media folder?

Thanks a bunch!
 
The idea of this feature is to copy all used media to the song folder so that the song is no longer dependent on the original files (which may be on a camera SD card or other external storage).

So yes, the references should point to the new location. You can easily test this (which I always do before deleting footage) by renaming or moving one of the dependencies and seeing if the song still opens without problems. Or, even easier, right-click on the file in the Pool and select "Show in Explorer/Finder". It should locate the file in your song folder instead of the original location.
 
If you copy external files, these files will be copied into the song folder. And in the song the reference will point to that new copy inside of the song folder. The whole idea of this feature is to make sure you have everything in one folder for easy back-up and organizing.
 
It also makes all file references relative to the location of the song's root folder. So after 'copy external files' you can drag the song's root folder to a different location and then open the song from that new location without a hitch :)
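
To picture what "relative to the song's root folder" buys you, here's a tiny generic illustration (plain Python with hypothetical paths; this is not how Studio One stores its references internally, just the general idea):

```python
from pathlib import Path

# Hypothetical song folder layout, for illustration only
song_root = Path("D:/Songs/MySong")
media_file = song_root / "Media" / "guitar_take3.wav"

# A reference stored relative to the song root...
relative_ref = media_file.relative_to(song_root)   # Media/guitar_take3.wav

# ...still resolves after the whole song folder is dragged somewhere else
new_root = Path("E:/Backup/MySong")
print(new_root / relative_ref)                      # resolves under the new root
```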
 
Thanks for clarifying the behavior guys.

I'm in a bit of a pickle I guess, because in order to get VSX systemwide (headphone-correction software) to function, I have to switch to 48kHz... but I record in 44.1, and I just tracked an entire song.

Thus I've decided, for continuity's sake and for ease of use going forward, to upsample everything in my current song to 48 kHz; the only trouble is, my project is filled with loads of 'extra' audio -- stereo backing tracks for all my instrument rehearsals, A/B files for shootouts and demoing, imported stem separations via Spectralayers for spectral analysis in S1, etc.

I'd like to be able to JUST upsample my damn tracks for the song (it's one of those months-long things) and be done with it... but apparently S1 has to resample everything in the pool anyway (which makes perfect sense), and then my cache folder is suddenly 15-20 GB.

😣

I suppose the lesson to learn is, resampling is generally expensive AF in terms of storage overhead.
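
Some quick napkin math on where that kind of overhead comes from (the pool size is just a guess for illustration; the per-file growth is simply the rate ratio for uncompressed PCM):

```python
# Back-of-envelope estimate of the storage hit from keeping the original pool
# plus resampled copies. Numbers are assumptions, not Studio One measurements.

def wav_gb(minutes, sample_rate, channels=2, bit_depth=24):
    """Uncompressed PCM size in GB for a given duration."""
    return minutes * 60 * sample_rate * channels * (bit_depth // 8) / 1e9

pool_minutes = 600   # hypothetical: ~10 hours of takes, backing tracks, stems, A/B files

at_44k1 = wav_gb(pool_minutes, 44100)
at_48k  = wav_gb(pool_minutes, 48000)

print(f"pool at 44.1 kHz            : {at_44k1:.1f} GB")
print(f"resampled copies at 48 kHz  : {at_48k:.1f} GB  (+{48000/44100 - 1:.1%} per file)")
print(f"both on disk during the move: {at_44k1 + at_48k:.1f} GB")
```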


This time I guess I'll just eat the 18GB worth of extra overhead and plan on NEVER moving away from 48 kHz ever again, lol
 
FWIW, I suppose a more modern interface would not have issues running Windows 10/WDM drivers @ 48kHz in parallel to Studio One running ASIO @ 44.1 -- but my Fireface 800 driver does not like this one bit and presents many ear-shattering artifacts in protest of my efforts -- something akin to a swarm of giant chainsaw-wielding bees.

Or maybe I'm wrong and running ASIO 44.1 and WDM 48 at the same time on Windows might never be a good idea.

🤷‍♂️
 
Two things: The interface doesn't determine whether you can run two audio drivers at the same time, the OS does.

Second: Yes, upsampling is done automatically. But did you know you can easily delete all unused files by right-clicking in the Pool and selecting that option? So if you decide to bounce all events in order to not have that process running in the background, you can then delete the original files. And if you have copied these, it will only affect the ones in your song folder.
 
Two things: The interface doesn't determine whether you can run two audio drivers at the same time, the OS does.
Yeah no kidding, I've been wanting to set up parallel stereo outputs in W10 for years (I've heard W11 finally supports this after decades but cannot confirm)

It is a very interesting point though -- Are you aware of any notable examples of someone pulling off dual-sample rates, one for the DAW and another for OS apps?

you can easily delete all unused files by right-clicking in the Pool and selecting that option? So if you decide to bounce all events in order to not have that process running in the background, you can then delete the original files. And if you have copied these, it will only affect the ones in your song folder.
Great tip!

My prospective fix to clean up the old files was to simply save the 'pre-SR-conversion' song version in order to go back to it after 'copy external' and clean up those pool-references manually, but 'delete unused' post-SR-conversion is a much more elegant solution.

👌
 
I seriously doubt that one interface can smoothly run two different sample rates at the same time, ever. Imagine the real-time sampling hardware and buffers having to deal with a clock jumping between two different speeds all the time. So one interface running two different drivers at the same sample rate yes (MacOS does it already), two interfaces each running a different driver at a different sample rate probably fine, but one interface on two different sample rates will surely have to flush the buffers every time it switches, causing glitches and lost samples. At least on a Mac that's the case.
 
Just extending this idea: unless your reason for always starting at 44.1 kHz is that you're primarily recording for CDs and burning to them, there's really no need not to transition to 48 kHz from the start (needs may vary). Not looking to create a discussion of the two common resolutions here, but it's been my experience that it's easier to convert 48 kHz down to 44.1 kHz than the other way around. Mostly for the subtly higher resolution in the critical upper-mid frequencies human hearing resolves, but also because there's less aliasing going on (no dither discussion need apply). If your Fireface 800 is protesting, and it has better clocking than most commercial audio interfaces, it might be time to let go of 44.1 kHz altogether if and where you can. I say "if", not knowing whether you have your own very good reasons why. Just saying, I let go of 44.1 kHz several years ago, and regardless of what anyone thinks, my audiophile ears and playback system continually reinforce which sounds better, and that I made the right decision. MMV of course.

Hopefully that sets out some rationale towards considering it. It may not be your direction, but hey... it's all good in the end with what you're doing. Having to convert sort of sucks when you need to carry double the load. So maybe decide what can be offloaded.
 
Here's the thing lokey, I respect your take (as always) and hear what you're saying, but...

Render a song in your DAW at 44.1, then render the same song at 48k. (If you can avoid resampling by finding someone who recorded the same performance at two different sample rates, even better, but that would be tough to find in practice; I'm sure there's something similar online.)

If you can hear the difference in a blind A/B 20 times in a row on your setup (the statistical '100%' challenge), I will believe you can hear the difference between 44.1 and 48k, but until then let's just say I've seen enough big studies of audiophile-type dudes who actually could not hear the difference that I remain highly dubious.

What I will not argue is that 48k is just a better overall decision for engineers and producers of all kinds these days, since it's the standard the video guys (and other audio collaborators) want to see and work with.

That alone is enough reason to use 48khz regardless of audible differences; it's just so much more compatible with other ecosystems.

As for why I use 44.1 personally?

Because I happen to not be able to tell the difference in a blind test 20 times in a row, and 44.1 gives me roughly 10% more DSP/CPU headroom when tracking and mixing (48 kHz means ~9% more samples to process per second).

:ninja:

That said I'll probably start my future projects in 48khz from now on. The compatibility stuff is just too valuable to pass up in terms of time saved.
 
"......I'll probably start my future projects in 48khz from now on. The compatibility stuff is just too valuable to pass up in terms of time saved."
Excellent, TD. That was my only real intention in my response. You likely came to that conclusion already, and that is a good thing going forward.

I see your system specs, and I know you're putting your best foot forward my friend.

I support your position, and others', any way that I can, and your right to believe them. On the tests by professionals, let's simply agree to disagree. You or anyone might not hear and confirm those differences, because each of those twenty trials can have its own contributing and unique differences. If I step off the A train from work and perform a test an hour later, it's not likely I'll hear any difference, even though under better conditions I'd bet that I could tell the difference. What I will say to you is beware of such reports of "tested experts", for a host of reasons, starting with the source material. All too often, comparisons are drawn from hi-fi recordings where the so-called higher-res recordings were not that at all; they were often upsampled. That's one of many ways tests get skewed. Second, I wouldn't give a hoot what experts heard or couldn't hear. Are they effectively hearing what is transparent in a mix? Did the test allow for very discriminating time to hear the subtle differences of even 0.2 dB in a mix or mastering session? Someone with particularly acute hearing, and time "clocked in" on a mix session, will (seriously). It's also true that after too much time, their ability to discern 0.2 dB (and to tell you closely at what frequency) will diminish. What I'm saying is that the difference in quality CAN be heard, and not just by a bunch of self-declared experts, but by many people with the ability to hear those transparent frequencies dancing around, because the resolution difference allows for it. In fact, there'll be times when 44.1 k will simply sound better in a particular track/song as well. The difference is there. Varied resolutions will prompt certain mixes to simply sound better, based on the material.

I, like most of us, can't hear as well as I did in my younger years. I'm only good up to around 13k to 14k, but that's plenty. I'm not banking on some team of experts because some article stated so, because I've seen those articles and just as many that disprove their sources or integrity. In the final analysis, anyone recording their own 44.1 vs 48k at 24-bit can draw their own conclusions. So it's a lot more than what "they" say. Have "they" give me a call. 😉

Awesome on turning to 48 kHz for the reasons you mentioned.
Best to you 👍
 
Here are my two cents regarding 44.1 kHz compared to 48 kHz:

First off, people do not hear anything above 20 kHz, and that is before life starts bringing that number down. Lokeyfly is not wrong though, there are many situations where you can actually hear that difference, or even a difference between 96 kHz and 192 kHz sample rates, but that has everything to do with the plugins you use, or other processing. For example, if you use plugins that add harmonics then chances are you hear a clear difference between these sample rates. A guitar through an amp sim can indeed sound different at each sample rate, simply because of the folding back harmonics, making the higher sample rates much more clear and "clean", and the lower sample rates sometimes even messy.
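
If you want to see that effect for yourself, here's a small sketch (assuming numpy and scipy are installed) that distorts a test tone once natively at 44.1 kHz and once with 8x oversampling, so you can compare how much energy ends up aliased; tanh here is just a stand-in for an amp sim's clipping stage, not any particular plugin:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs                      # 1 second
x = np.sin(2 * np.pi * 5000 * t)            # 5 kHz test tone

def distort(sig):
    return np.tanh(4.0 * sig)               # simple odd-harmonic clipping stage

# 1) Distort natively at 44.1 kHz: harmonics above Nyquist (22.05 kHz) fold back.
y_naive = distort(x)

# 2) Oversample 8x, distort, decimate back: the high harmonics are filtered out
#    before they can alias.
y_os = resample_poly(distort(resample_poly(x, 8, 1)), 1, 8)

# The only "true" in-band harmonics of a 5 kHz tone through tanh are 5 and 15 kHz;
# everything else below Nyquist is aliasing.
freqs = np.fft.rfftfreq(fs, 1 / fs)
wanted = (np.abs(freqs - 5000) < 50) | (np.abs(freqs - 15000) < 50)

for name, y in [("native 44.1 kHz", y_naive), ("8x oversampled ", y_os)]:
    spec = np.abs(np.fft.rfft(y * np.hanning(fs))) ** 2
    alias_ratio = spec[~wanted].sum() / spec.sum()
    print(f"{name}: {10 * np.log10(alias_ratio):6.1f} dB of energy outside the true harmonics")
```

The oversampled version should come out dramatically cleaner, which is the whole point.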

I agree, Lokeyfly, that some of the tests may be faulty, but the ones that are not still show the same results. A similar test was done with a digital and an analog mix, where even the well-trained mixing engineers were wrong 50% of the time. But then again, taking my example of mixing at different sample rates: if you use these harmonics-adding plugins without oversampling, then upsampling afterwards indeed does not get you the difference. The funny thing is, the same is true the other way around. If you record and mix at 192 kHz and downsample afterwards to 44.1, there will not be an audible difference, and the resulting wavs will null in a null test as well. The only difference will be digital noise that is so extremely low in level that it's impossible to hear. You can make it audible by recording the nulled signal and then normalizing it.

I have done the testing, even leaving my ears out of it so I could not be fooled by their shortcomings, and when doing the test properly you will not be able to tell the difference in a blind test. Nobody can. But, again, in many situations there will be a difference when mixing, and it can even be quite drastic. I still hear up to 15 kHz, which for 53 is not bad at all (many my age are down to 12), but I hear a difference between noise generated at 44.1 kHz and 48 kHz, and even between 96 kHz and 192 kHz. Not because I can hear frequencies outside of human hearing, but because of artefacts that are present in the audible range, even below 15 kHz. So your point remains very valid: mixing at 48 kHz is better than 44.1 kHz. And in some situations you may even need to go higher than that for good results.
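
For anyone who wants to repeat the null test I described, a minimal sketch (the file names are hypothetical, and it assumes the soundfile package plus two already-rendered, sample-aligned files of the same rate and length):

```python
import numpy as np
import soundfile as sf   # assumption: the soundfile package is installed

a, fs_a = sf.read("mix_rendered_from_44k1.wav")
b, fs_b = sf.read("mix_rendered_from_192k_downsampled.wav")
assert fs_a == fs_b and a.shape == b.shape, "renders must match in rate and length"

residual = a - b
rms = np.sqrt(np.mean(residual ** 2))
peak = np.max(np.abs(residual))

print(f"residual RMS : {20 * np.log10(rms + 1e-12):.1f} dBFS")
print(f"residual peak: {20 * np.log10(peak + 1e-12):.1f} dBFS")

# 'Normalizing' the nulled signal to make it audible, as described above
if peak > 0:
    sf.write("null_residual_normalized.wav", residual / peak, fs_a)
```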
 
Trying to put things in perspective: Sample rates and bit depths have led to loads of tests and opinions ever since there was more than one option to choose from. Current interfaces and DAWs are great at 44.1 and 48 kHz alike for final mixes. Differences, if any, between the two are completely insignificant compared to all the processing we put the original material through from raw tracks to the final mix. And in the end there will be only one final mix as the finished article. If it's great, it's great for its content, not for the sample rate used ;)
 
there are many situations where you can actually hear that difference, or even a difference between 96 kHz and 192 kHz sample rates, but that has everything to do with the plugins you use, or other processing. For example, if you use plugins that add harmonics then chances are you hear a clear difference between these sample rates. A guitar through an amp sim can indeed sound different at each sample rate, simply because of the folding back harmonics, making the higher sample rates much more clear and "clean", and the lower sample rates sometimes even messy.

If that's true, then it would imply you could run a bunch of 44.1 audio files through plugins that produce the type of artifacts you laid out, with the DAW operating @ 96k or even 192 kHz, render both the 44.1 project-SR version and the 192 kHz version, and pick them out in a blind A/B.

Better yet, simply enabling oversampling on any of those amp sims should make the difference audible; then you just have to render the channel output(s) @ whatever SR the oversampling is hitting.


This is a very interesting prospect, and something I will have to look into; I'd be very interested to see if I could hear a difference between some of the new UAD guitar amps with this technique on some crispy/breakup gain guitar tracks, for instance.
Might post a FunkyAB of it here afterwards.

A similar test was done with a digital and an analog mix, where even the well trained mixing engineers were wrong 50% of the time

And let's not forget (lest we become overconfident): wrong 50% of the time = 100% unable to detect a difference. I've seen guys respond to blind A/Bs with 'I can hear it 70% of the time, so I passed!' No, just no -- that is not how this works, lol -- it's not a math test! Assuming a pristine render of audio files with nothing but the differences you are trying to hear (this is not trivial, fwiw), coupled with a flat, well-treated listening environment (also anything but trivial), if you cannot tell the difference 10 out of 10 times (preferably 20, to reduce statistical anomalies even further), you can't hear it.
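
The arithmetic behind that bar, for anyone curious (plain Python, nothing audio-specific):

```python
from math import comb

# Chance of acing a blind A/B by pure guessing
for n in (10, 20):
    p = 0.5 ** n
    print(f"{n}/{n} correct by guessing: 1 in {int(round(1 / p)):,}")

# Chance of scoring at least 14/20 ("I heard it 70% of the time") by guessing
n, k = 20, 14
p_at_least = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f">={k}/{n} correct by guessing: {p_at_least:.1%}")
```

20/20 by luck is roughly a one-in-a-million shot, while "70% of the time" is something a pure guesser pulls off about 6% of the time.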

This is why blind A/B is so invaluable; it lets you confirm what you think you hear, or find out that you can't -- which is no less informative when it happens, and just might save you some money by letting you sell something you don't actually need.

Nice insights as always Dave!
 
Hey guys I'm gonna have to bump back in here a little bit;

I eventually went through with my SR conversion from 44.1 to 48, and for the most part it went pretty smoothly, but all my timestretched audio tracks are suddenly messed up, and nothing's on the beat.

I imagine this is because at their core the timestretch algos are sample-based (?)

There are several reasons why this could be, I guess, but if you guys have any ideas I'm all ears... I do remember the manual at one point warning about event-based timestretching settings, but I can't recall what was said... gonna dig there a bit and see what I find.

Almost through this thing -- help me get to the finish line!
 
UPDATE:

Yeah... so I do a lot of manual audio timestretching in Studio One; I'll listen after something is tracked and realize I want to slow it down or speed it up, so I'll go ~5% in either direction.

I achieve this by inputting the tempo that I first tracked the part at, then setting the inspector to 'Timestretch'. Works so well and follows all adjustments to the tempo track.
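
(For the curious, the arithmetic behind that workflow is just a ratio -- here's a rough sketch of the general relationship as I understand it, not Studio One's internal code:)

```python
# Audio tracked at file_tempo has to be stretched by file_tempo / song_tempo
# to stay on the grid at the current song tempo.

def stretched_length(original_seconds, file_tempo, song_tempo):
    """Playback length of a timestretched event at the current song tempo."""
    return original_seconds * (file_tempo / song_tempo)

take = 8.0  # an 8-second phrase tracked at 120 BPM
print(stretched_length(take, file_tempo=120, song_tempo=114))  # ~8.42 s (about 5% slower)
print(stretched_length(take, file_tempo=120, song_tempo=126))  # ~7.62 s (about 5% faster)

# If the file tempo is lost after resampling, this ratio can no longer be
# computed, which is exactly why the events fall off the grid.
```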

After I converted the files to 48kHz I was dismayed to see that all the 'File Tempo' settings for those parts/events were gone. The result is that all the timestretching has been completely obliterated.

My guess would be that when the files are upsampled, the tempo information is not correctly written to the new .wavs.

Can anyone comment on/confirm this?

It honestly seems more like a bug than any kind of user error; the events should not have their tempo information wiped, or there ought to be an option to preserve it when resampling.

Pretty bummed. Looks like I am going to finish this project at 44.1 after all.
 
Well, changing the sample rate of an existing song is a bit like changing the canvas after finishing the painting: There will be some pain depending on the complexity of the work done. Consider bouncing timestretched tracks before converting the song, to avoid timing complications. Also, Studio One allows various ways to change the tempo of a song and/or tracks in that song which will likely produce different results when changing the sample rate. Hard to say how you went about it and how much of the original 'editing history' can be preserved in the converted version of the song.

Other idea: Timestretching a track stores the stretching operations in the .song file and then creates a new audio file for the track in the Cache/Audio folder. The song will use that cached file (and not the stretching operations in the .song file) as long as it's there. You want to avoid that. So somewhere between changing the sample rate of the raw track files and 'continuing with the song' you have to close the song, delete the Cache directory and reopen the song. Doing what exactly when may need some trial and error.
 
Well, changing the sample rate of an existing song is a bit like changing the canvas after finishing the painting

I hear what you're saying, but resampling has become so transparent these days that it really doesn't have to be this way; I just don't think heavy resampling is something that is very common at all so it's not given much development priority.
Hard to say how you went about it and how much of the original 'editing history' can be preserved in the converted version of the song
Well I endeavored to go about timestretching in the 'purest' way, given the multitude of methods as you mentioned. Setting a 'file tempo' and setting the track to 'timestretch' is such a good way to work, because you can actually move the take after that anywhere on the timeline (assuming your file tempo is true to the original take!) and it will be on the beat wherever you happen to drop it in the arrangement.

Just doesn't work with resampling, plain and simple.

Consider bouncing timestretched tracks before converting the song, to avoid timing complications
Yeah, sadly I came to the conclusion last night that the error going on here was completely internal to S1, and that I'd be forced to bounce and lose all my takes/comps.

Setting the 'File Tempo' in each event back to its correct value (which was lost in resampling) also does not fix the issue, because apparently the events got moved around on the timeline as well.

I do believe however that this is 'Software Shortcoming' territory -- Presonus simply forgot to add a mechanism which accounts for 'Event-based' file tempos.

For those interested, I believe the crux of why this is happening is that the 'File Tempo' you select in an event inspector is not encoded into the audio file, merely stored in the event inspector. That shouldn't be an issue by itself; however, after resampling a bunch of files, S1 apparently attempts to read the file tempi from the newly replaced files, sees that there aren't any, and writes new empty values to the event inspector tempi.

I believe the fix would be as simple as adding an option to 'ignore timestretched file tempos on resampling', to prevent S1 from reading an empty file tempo and overwriting the event tempo and thus breaking the project.

As it stands, the only other thing I can think of is manually encoding the tempo information for each TS track after resampling -- which is so far in the weeds I'm not even going to attempt it. Might be a useful experiment, though.
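
If I ever do attempt it, step one would just be checking whether the resampled .wavs carry any tempo metadata at all -- e.g. a quick RIFF-chunk walk like this ('acid' is the chunk ACIDized loops commonly use for tempo; I'm not claiming that's where S1 keeps its File Tempo):

```python
import struct
import sys

def list_riff_chunks(path):
    """Return the chunk IDs found in a RIFF/WAVE file."""
    chunks = []
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE", "not a RIFF/WAVE file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            cid, csize = struct.unpack("<4sI", header)
            chunks.append(cid.decode("ascii", "replace"))
            f.seek(csize + (csize & 1), 1)   # chunk data is word-aligned
    return chunks

for path in sys.argv[1:]:
    chunks = list_riff_chunks(path)
    print(f"{path}: chunks={chunks}  tempo metadata ('acid'): {'yes' if 'acid' in chunks else 'no'}")
```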

Thanks guys for helping me think through this problem
 