
[Solved] How to produce an accurate bass tab from an MP3 file

OneBass

New member
I'm trying to produce a bass tab from an MP3 file downloaded from YouTube and am experiencing some inaccuracies, shown in the screenshot. My process is listed below if anyone wants to duplicate it. Thanks for any suggestions.


  1. Download the YouTube video and convert it to MP3
  2. Drag the MP3 file into S1 and normalize it (a scripted sketch of this step appears after the list)
  3. Detect the tempo, display the tempo track and drag the Audio Track onto the Tempo Track.
  4. Notice that the tempo is not constant; delete all the tempo points and set the first point to 133 BPM to force a constant tempo.
  5. Open the Marker track and set the End Flag at Bar #114
  6. Activate the metronome and display the metronome settings
  7. In the metronome settings, render the click from Timeline Start to Song End
  8. Drag the beginning of the Audio Track to Bar #2 and play both to verify that the click and audio tracks are synchronized
  9. Right-click the audio track > Audio > Detect Chords
  10. Open the Chord Track and drag the audio to the Chord Track
  11. Right-click the audio, Separate Stems and select Bass
  12. Normalize the bass stem
  13. Visually notice any gaps in the bass stem and play it to confirm. Bar #19 is an example of some missing bass notes
  14. Select the bass stem and Ctrl-M to bring up the MIDI info in Melodyne. [Side note, when I did this, it did create the MIDI info but also took me to my web browser and brought up a page saying that Melodyne version 5.4.2 is freely available. This surprised me as I thought Studio One updates would automatically include Melodyne updates. I chose not to upgrade Melodyne in this way and at this time.]
  15. We now have the Melodyne “blobs” visible, close that window.
  16. Create an Instrument Track, name it “MIDI Bass Guitar” and select “Existing”.
  17. Drag the bass stem onto the just-created instrument track
  18. Display the browser Instruments > PreSonus > Bass and drag “Fingered Bass” onto the MIDI Bass Guitar track
  19. Mute all tracks except the MIDI Bass Guitar track to verify we have sound. A bass sound was confirmed.
  20. Double-click the MIDI Bass Guitar track to bring up the piano roll and on the “Track” tab, pull down the “Apply Staff Preset” menu and select Guitar/Basses and Electric Bass.
  21. Click the Full Score Page Layout button
  22. Save the tab in PDF format by clicking the Print tool and selecting “Microsoft Print to PDF”.
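
Side note for anyone who prefers scripting: steps 2 and 12 (normalize) can also be done outside the DAW. This is just a rough sketch, assuming librosa and soundfile are installed; the file names are placeholders.

```python
import librosa
import soundfile as sf

# Load the song at its original sample rate, keeping stereo if present.
audio, sr = librosa.load("song.mp3", sr=None, mono=False)

# Peak-normalize to just under full scale (roughly what S1's Normalize does).
peak = abs(audio).max()
if peak > 0:
    audio = audio / peak * 0.99

# soundfile expects (frames, channels); librosa returns (channels, frames) for stereo.
sf.write("song_normalized.wav", audio.T if audio.ndim == 2 else audio, sr)
```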

Questions:
  1. How can I help S1 improve the accuracy of the generated bass tab? Would pre-processing the audio track with low-pass filtering or noise reduction help? (A rough filtering sketch appears after this list.)
  2. Is there a better process than what I’ve shown above?
  3. I'm assuming the PreSonus programmers trained an AI algorithm on hundreds of songs to separate bass stems. If so, does it make sense to suggest that they incorporate knowledge of music, music theory and how bass players actually play into the algorithm? The tempo, chord progression and genre of the music could be inputs for this.
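
To make question 1 concrete, this is the kind of low-pass pre-filtering I have in mind, as a rough Python sketch; the 250 Hz cutoff and file names are just guesses to experiment with.

```python
import librosa
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

y, sr = librosa.load("song_normalized.wav", sr=None, mono=True)

# Steep 8th-order low-pass around 250 Hz to push everything above the bass
# register down before stem separation / pitch detection.
sos = butter(8, 250, btype="lowpass", fs=sr, output="sos")
sf.write("song_lowpassed.wav", sosfiltfilt(sos, y), sr)
```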
 

Attachments

  • Bass Tab.jpg
Creating a tab from a 1965 recording? There's an old saying about dancing bears... It's not how well the bear dances, it's that the bear can dance at all.

IMO, we are definitely at the dancing-bear stage of this technology. Stem separation will definitely get better over time, but I was surprised to see how good PreSonus's first crack at this was. Last week, I took a video of my three-piece band shot on an iPhone, split it into four tracks, and was able to process the vocal and drums nicely. The biggest surprise is that I didn't have to touch the bass; it sounded great as is. But that's the simplest case.

In a more complex arrangement, there's a lot of other parts that the AI has to contend with, and the quality may not be as good.

There are a few things I might try to improve the results. First, if there's a constant drum track through the song, I'd try separating it first and using it to detect the tempo. Sharper transients should improve the detection accuracy.
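
In Python terms, that first idea might look something like this (just a sketch; it assumes librosa is installed and that you've already exported a drum stem):

```python
import librosa

# Beat-track the separated drum stem; its sharp transients give the detector
# a much easier job than the full mix. The file name is only an example.
y, sr = librosa.load("drums_stem.wav", sr=None, mono=True)
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
print("Estimated tempo (BPM):", tempo)
print("Beats found:", len(beats))
```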

Second, after getting the MIDI, I'd look at it and listen to it against the bass audio. I'd probably put it on a different-sounding instrument and pitch it up an octave to make it easier to hear. Assuming all is OK, I would then quantize the MIDI, check it again and edit any notes that weren't right.
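
If you wanted to do that audition step outside the editor, a rough pretty_midi sketch might look like this (the file names, the 133 BPM figure and the sixteenth-note grid are all assumptions to tweak):

```python
import pretty_midi

BPM = 133                      # whatever your tempo map says
GRID = 60.0 / BPM / 4          # sixteenth-note grid, in seconds

pm = pretty_midi.PrettyMIDI("bass_from_melodyne.mid")
for inst in pm.instruments:
    for note in inst.notes:
        note.pitch += 12                               # up an octave for auditioning
        length = note.end - note.start
        note.start = round(note.start / GRID) * GRID   # snap the start to the grid
        note.end = note.start + length
pm.write("bass_check.mid")
```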

This is all theoretical, but might give you a few ideas.
 
Yes, I should have mentioned in my original post that PreSonus has actually done a great job. This is new technology, and being able to approximate a separated bass stem from a cloud of competing low-end frequencies really is an accomplishment. And the quality of the source audio is not what you'd call high.

I'm not entirely following your first idea. Based on the tempo map, the tempo of the song is not constant; that's why I forced it constant at 133 BPM, with the idea that this might help the separation algorithm. Are you saying to try consolidating the vocals, bass and other, and then separate the bass from the consolidated track, because the kick drum wouldn't be present to confuse it?
 
The "Audio Tracker" feature in Toontrack EZBass works well for this kind of project.
They offer a 10 day trial if you're interested in checking it out.

 
I'm not entirely following your first idea. Based on the tempo map, the tempo of the song is not constant; that's why I forced it constant at 133 BPM, with the idea that this might help the separation algorithm. Are you saying to try consolidating the vocals, bass and other, and then separate the bass from the consolidated track, because the kick drum wouldn't be present to confuse it?

No, sorry I was unclear. What I meant was that tempo detection works best if you have a signal with sharp, clean, unambiguous transients on the beat. By extracting the drum part first, then using only that for tempo detection, it should help to get a cleaner tempo map. Again, in theory.

I've used Melodyne tempo detection a lot. One of my projects is a "covers for drunks" duo where we play songs from the 60s to today. For many of the songs, I build drum tracks where I attempt to capture the feel. To do that, I start with the original track in S1, and tempo map it.

I've learned that tempo map results are, um, varied. In the best case, you'll see minor speedups and slowdowns throughout, even if the source track was recorded to a click. In other cases, you can get a wild increase partway through one bar, and a compensating wild decrease in the next. You can manually tweak that in the Melodyne tempo map, but that isn't fun.

The toughest songs to map, I find, are the oldest ones. Time was much more elastic. I did a track for James Brown's "I Got You (I Feel Good)", which, to the ear, has a strong, unambiguous groove (hey, it's James), but it made Melodyne collapse into a crying heap.
 
Thanks for the EZBass link, Trucky. I viewed the Audio Tracker video, where he talks about it working best on sources free of distortion; in general, my sources will contain distortion. He also says that monophonic recordings give the most accurate results.

This led me to explore options for cleaning up audio and removing distortion, and I'm finding this is a huge ocean in its own right. Tools like iZotope RX, Acon Digital Acoustica and Steinberg SpectraLayers keep coming up.

Before going down any of these paths, I wonder if anyone here has a contact at PreSonus whom I could ask whether they have an internal document called Best Practices for Pre-processing Song Files for Best Bass Tablature Results, or similar. At a minimum they may have a set of tips, since they know their software best. I understand they shut down their forum.
 
Before going down any of these paths, I wonder if anyone here has a contact at PreSonus whom I could ask whether they have an internal document called Best Practices for Pre-processing Song Files for Best Bass Tablature Results, or similar.
I doubt PreSonus has a "very specific instructions for a single user's very specific question" document. 🤓
I would suggest you use a stem separation tool to free the bass from the rest of the Arrangement and then try the EZBass audio tracker. If you are using Studio One 7 you already have stem separation built in.
 
Before going down any of these paths, I wonder if anyone here has a contact at PreSonus whom I could ask whether they have an internal document called Best Practices for Pre-processing Song Files for Best Bass Tablature Results, or similar.
I don't need to ask any of my contacts at PreSonus to tell you that there's no such document ;)

And I agree with Tommy that stem separation is a good starting point.
 
I doubt PreSonus has a "very specific instructions for a single user's very specific question" document. 🤓
I would suggest you use a stem separation tool to free the bass from the rest of the Arrangement and then try the EZBass audio tracker. If you are using Studio One 7 you already have stem separation built in.
Yes, I get really good results using the following steps...
1. Use "Any Video Converter" to create a wave file for the song.
2. Load the wave audio file into Studio One Pro 7 and use the "Stem Separator" feature to isolate and create the bass wave audio file (an open-source alternative is sketched after this list).
3. Load the Bass wave audio file into EZBass "Audio Tracker".
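
For anyone without Studio One Pro 7, a scriptable stand-in for step 2 is the open-source Demucs separator. A minimal sketch, assuming `pip install demucs` and a local song.wav; this is not the built-in Stem Separator, just an alternative:

```python
# Writes bass.wav and no_bass.wav, typically under ./separated/htdemucs/<song name>/.
from demucs import separate

separate.main(["--two-stems", "bass", "song.wav"])
```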
 
One of my projects is a "covers for drunks" duo where we play songs from the 60s to today.

I have played in bands for years. The best way is to find a good bass player; most good players can work up the parts in less than 30 minutes. There are sites like "airgigs" where you can hire a bass player to do this for you really cheaply. However, it is an interesting project and there are some really good suggestions here for trying to get it done.

As a side note, I would be interested in the "covers for drunks" playlist if you have one!! Sounds like a great playlist to put together on Spotify for parties! Or maybe you already have it on Spotify? :)
 
OneBass, you got me curious, which is my fatal flaw. I came up with an approach that seems to work well. I don't have time to go into too much detail, but the issue with "You've Got Your Troubles" is that the bass was in the left channel only and buried, and the tempo was inconsistent. Stem separation didn't stand a chance. For it to work, you have to make the bass part more audible. Here's what I did:
  1. I started by mapping the tempo of the whole track using Studio One 7's tempo detection.
  2. By eyeballing the tempo map, I set the file to a static tempo (in this case, it was 135 bpm).
  3. I then used a combo of multiband compression, a steep low-pass filter and MaxxBass to make the bass more audible.
  4. Next, I split the channels into two mono channels and discarded the right.
  5. Now, Melodyne. It detected the bass track as a polyphonic audio track. You want that. You'll see a clear bass part in Melodyne, as well as a ton of other low-level notes.
  6. Delete all those low-level notes (a scripted take on this is sketched below). You'll be left with a monophonic bass part.
  7. Use Melodyne's quantization macro to tighten up the timing.
  8. Translate into MIDI, then the chart.
Attached you'll see what I ended up with. Not sure if the bar lines are in the right place, but it's clean. FWIW, I first experimented with a more modern recording where the stem separation worked well. No bass cleanup was needed - worked great.
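
If anyone prefers to do the step 6 cleanup on the exported MIDI rather than inside Melodyne, here's a rough pretty_midi sketch (the velocity threshold and file names are guesses):

```python
import pretty_midi

pm = pretty_midi.PrettyMIDI("detected_polyphonic.mid")
for inst in pm.instruments:
    # Drop quiet "ghost" notes, then keep only the stronger of any overlapping pair
    # so the line stays monophonic.
    loud = sorted((n for n in inst.notes if n.velocity >= 40), key=lambda n: n.start)
    mono = []
    for n in loud:
        if mono and n.start < mono[-1].end and n.velocity <= mono[-1].velocity:
            continue
        mono.append(n)
    inst.notes = mono
pm.write("bass_mono.mid")
```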
 

Many thanks js1,
Your procedure is more refined than mine and your results are better too.
What you have produced is an example of what I was hoping PreSonus would have already done.

One question: did you have to use any third-party tools to accomplish Step 3? Or was Studio One adequate?
 
The problem I was trying to solve was to make the bass more audible. If you can't hear it clearly, then the stem separation can't "hear" it either.

I used third-party tools for step 3 out of habit, but I could have done it with the PreSonus plug-ins. My thought was to filter out the non-bass content and use multiband compression to make the bass level steady. I did use Waves MaxxBass to manufacture some bass harmonics, but I don't know if that was necessary.
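
Just to illustrate the "make the bass steady" idea without any particular plug-in, here's a very rough single-band compressor sketch in Python; the threshold, ratio and file names are arbitrary, and a real multiband plug-in will do this far better:

```python
import numpy as np
import soundfile as sf

def simple_compress(x, sr, threshold_db=-24.0, ratio=4.0, attack_ms=10.0, release_ms=120.0):
    """Crude downward compressor: one-pole envelope follower plus gain reduction."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):            # track the signal level
        coeff = atk if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(env_db - threshold_db, 0.0)
    gain = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    return x * gain

y, sr = sf.read("bass_lowpassed.wav")
if y.ndim == 2:
    y = y[:, 0]                                  # keep things simple: one channel
sf.write("bass_steadier.wav", simple_compress(y, sr), sr)
```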
 
In an attempt to give something back here, I thought I'd share a recent discovery that may be of use to those attempting the same thing.

I'm noticing that extraneous musical information can find its way from the audio bass stem into the MIDI representation, which causes unwanted MIDI notes to be generated. The discovery is Strip Silence in the Audio menu. I haven't yet found guidance on how to apply it to bass guitar, but the video linked below explains how to apply it to vocals. It looks powerful.

[Embedded video: applying Strip Silence to a vocal track]
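
For the bass stem specifically, the same idea can be prototyped in Python before committing to it in the DAW. A rough sketch, assuming librosa is installed (the 35 dB threshold and file names are guesses):

```python
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("bass_stem.wav", sr=None, mono=True)

# Keep only regions within 35 dB of the peak; zero out the quiet bleed between
# notes so it can't turn into stray MIDI notes later.
intervals = librosa.effects.split(y, top_db=35)
gated = np.zeros_like(y)
for start, end in intervals:
    gated[start:end] = y[start:end]
sf.write("bass_stem_stripped.wav", gated, sr)
```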
 