Surf.Whammy
New member
- Genre
- Soundtrack
- Instruments
- Hammond B-3X, MODO Bass, SampleTank 4 (IK Multimedia), EW ComposerCloud, Kontakt (Native Instruments), UVI.Net, Addictive Drums 2 (XLN Audio), ElevenLabs AI Voices, Realivox Blue (RealiTone)
- Effects
- White 2A, Brickwall Limiter, EQP-1A Vintage Tube Program Equalizer (IK Multimedia), EC300 (McDSP), Vocal Bender (Waves), TrackPlug (Wave Arts), Pro-C 2, Pro-Q 3, Timeless 4 (FabFilter), Butch Vig Vocals (Waves), VocalSynth (iZotope), Xtreme FX 2, Whoosh, Walker 2 (UVI)
- Special techniques
- The instruments are played via music notation in Studio One. I do some of the voice-overs, but other voices are done by ElevenLabs AI. This is mixed for headphone listening, and I use Sony MDR-7506 headphones. All the voices (real and AI) have elaborate effects, including vocal transformations for some of the voices.
- Released when
- November 2024
This is the second chapter in my ongoing old-time science fiction radio play series ("Extreme Gravity"), which I started approximately 20 years ago. The story is based on the hypothesis that (a) just as an electric guitar pickup (copper wire coiled around magnets) produces a signal when energized by vibrating metal guitar strings, (b) gravity can be generated by sending an energized laser beam through a coil of fiber optic cable, hence the "gravity generator" and "Extreme Gravity".
At first, I did everything with a real Fender American Deluxe Stratocaster and an Alesis ION analog synthesizer and simply read the story, but now I am redoing the first few chapters with VSTi virtual instruments in Studio One using music notation. There are 27 chapters, and the later chapters are equally elaborate and are done in Studio One the same way, although occasionally I play some of the electric guitar parts myself, mostly for things like slides and glissandi, which are not easy or even possible to do with music notation. Lately I have discovered ElevenLabs AI, which provides a virtual festival of voices that say whatever I write, usually with Australian or British accents, which is fine with me.
I compose the stories and music. The initial reason for adding music was to prevent companies from using the chapters as audiobooks: music folks are smarter and have better attorneys than audiobook folks, as evidenced by the strict rules for selling and renting music as contrasted with audiobooks. With audiobooks, someone can buy one copy and then rent it many times without paying royalties to the author for each rental; but with music, every copy must be licensed and royalties must be paid, even when rented, which is the way it works on YouTube as well.
I am planning to release these chapters on Audible with accompanying Kindle eBooks, but not for a while. At present, my strategy is to release prototype mixes for each chapter on my YouTube channel at no charge, because most folks like me have no money. So, if you see "PT 1" or "PT n" in a title, it's a version; for some chapters there might be 25 or more versions, and the version with the highest number is the final prototype. I will do final mixing for the Audible versions, but the versions on YouTube are mixed nicely. For Audible I need to lower the volume levels of the music, which I can do in Studio One by moving the instruments and voices to bus tracks and mixing accordingly; that also requires following certain loudness standards, which I will check with the WLM Plus Loudness Meter (Waves).
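To make the loudness part concrete, here is a rough Python sketch of the kind of measurement a meter like WLM Plus reports. It uses the open-source soundfile and pyloudnorm libraries rather than anything in Studio One, and the file name and the -18 LUFS target are illustrative assumptions, not Audible's actual specification:

```python
# Measure the integrated loudness (BS.1770, in LUFS) of an exported
# mix, the same scale loudness meters such as WLM Plus display.
# The file path and target below are illustrative assumptions.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("chapter_01_mix.wav")   # hypothetical export

meter = pyln.Meter(rate)                     # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # integrated LUFS

print(f"Integrated loudness: {loudness:.1f} LUFS")
if loudness > -18.0:                         # example target only
    print("Hotter than the target: lower the music bus and re-export.")
```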
The current mixing strategy is designed to get good levels when played by YouTube. It's a combination of mixing in Studio One and then adjusting the Studio One mix after hearing what YouTube does when it processes the audio, something YouTube does very nicely and which always improves the sound. In that respect YouTube is like the broadcast signal processors used in the 1950s to ensure radio stations conformed to FCC broadcasting rules and regulations, which now are emulated nicely by IK Multimedia (the White 2A Leveling Amplifier, Black 76 Limiting Amplifier, and so forth) and by other VST effects plug-in companies. This took a while to discover and required a lot of experimentation, including "ducking" and a few other advanced techniques. There are more things which can be done for YouTube and Apple, but they require certified third-party mastering, which at present costs too much. I don't get the same loudness and other characteristics as, for example, Metallica, but when you listen with studio-quality headphones I think the various instruments sound good.
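For anyone wondering what "ducking" means in practice: the voice signal drives the music level down so the narration stays on top. Here is a toy numpy sketch of the concept; it is not how Studio One or any particular plug-in implements it, and every number in it is a made-up example:

```python
# Toy sidechain "ducking": lower the music wherever the voice is loud.
# A real compressor works on a dB scale with proper attack/release
# ballistics; this shows only the concept, with illustrative numbers.
import numpy as np

def duck(music, voice, rate, depth=0.4, release_s=0.25):
    alpha = np.exp(-1.0 / (release_s * rate))  # one-pole release
    env = np.empty_like(voice)
    level = 0.0
    for i, v in enumerate(np.abs(voice)):
        level = max(v, alpha * level)          # fast attack, slow release
        env[i] = level
    peak = env.max()
    if peak > 0.0:
        env /= peak                            # normalize to 0..1
    gain = 1.0 - depth * env                   # loud voice -> quiet music
    return music * gain

# Example: the music dips under a one-second burst of "voice".
rate = 44100
t = np.linspace(0, 3, 3 * rate, endpoint=False)
music = 0.5 * np.sin(2 * np.pi * 220 * t)
voice = np.zeros_like(t)
voice[rate:2 * rate] = 0.8 * np.sin(2 * np.pi * 440 * t[rate:2 * rate])
mixed = duck(music, voice, rate) + voice
```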
Since I do everything, one of the important strategies here in the sound isolation studio is to keep things as simple as practical, which for music notation maps to defining that there are 12 notes and 10 or so octaves. In turn, this maps to using treble staves for everything and using transposition to specify how the notes are played: for example, staves for bass are played two octaves lower than notated, while staves for guitar are played one octave lower than notated. This way, instead of needing to remember 120 notes, I just need to remember 12 notes, which depending on octave and intent can be deep, middle, or high.
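The "12 notes, transpose per staff" idea maps directly to MIDI arithmetic, where a staff-level transposition is just a fixed semitone offset. A small sketch of my own to show the bookkeeping; the staff names and offsets simply match the examples above:

```python
# "Remember 12 notes, transpose per staff": every written note is a
# name plus an octave, and each staff carries a fixed semitone offset.
# The offsets match the examples in the post (guitar one octave down,
# bass two octaves down); the staff names are illustrative.
NOTE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
        "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

STAFF_OFFSET = {"lead": 0, "guitar": -12, "bass": -24}  # semitones

def midi_number(name, octave, staff="lead"):
    # Scientific pitch notation: C4 (middle C) = MIDI note 60.
    return 12 * (octave + 1) + NOTE[name] + STAFF_OFFSET[staff]

print(midi_number("C", 4))            # 60: middle C as written
print(midi_number("C", 4, "guitar"))  # 48: sounds one octave lower
print(midi_number("C", 4, "bass"))    # 36: sounds two octaves lower
```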
I also do everything in the key of C in 4/4 time and explicitly specify sharps, flats, and naturals in each measure rather than in a key signature, which is the only way to make music notation WYSIWYG: if you specify any other key signature, a note might appear to be "Middle C" when it actually sounds as C#4 rather than C4 in scientific pitch notation. Additionally, I avoid specifying articulations, styles, and all that visually cluttering, thoroughly pointless nonsense, which is necessary and useful when producing sheet music to be played by trained musicians and singers, but for digital music done primarily with VSTi virtual instruments and sampled-sound libraries it is unnecessary and extraordinarily frivolous.
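The WYSIWYG point is easy to show in miniature: a key signature adds an implied accidental to the written note, so what you see is no longer what sounds. A tiny illustration, modeling a key signature as the set of letter names it sharpens:

```python
# Why explicit accidentals keep notation WYSIWYG: a key signature
# silently sharpens certain letter names, so the written note head
# and the sounding pitch disagree. This is only an illustration.
A_MAJOR = {"F", "C", "G"}   # the three sharps of A major

def sounding(letter, midi_if_natural, sharps):
    return midi_if_natural + (1 if letter in sharps else 0)

print(sounding("C", 60, set()))     # 60: in C, written C sounds as C
print(sounding("C", 60, A_MAJOR))   # 61: written C sounds as C sharp
```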
Instead, I use sampled-sound libraries where instruments are played in specific articulations and styles. If I want a legato violin, I select a set of samples where a trained and proficient violinist played the notes legato, and I do everything else with VST effects plug-ins. For example, I use tremolo and vibrato plug-ins to avoid the problem caused when a sampled-sound library is not chromatically sampled (I call this "diatonically sampled": only every other note actually is sampled), which requires the computer to synthesize the non-sampled notes and, for motion effects like tremolo and vibrato, changes the speed, rate, and intensity. Instead, I use a dry set of sampled sounds and then add tremolo or vibrato as a VST effects plug-in, which keeps the speed, rate, and intensity steady for every note, sampled or synthesized.
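Adding vibrato after the sampler, the way a VST effect does, means a single LFO treats every note identically, sampled or synthesized. Here is a rough numpy sketch of pitch vibrato done with a modulated delay line; real plug-ins are more sophisticated, and the rate and depth here are illustrative:

```python
# Vibrato as a post-effect: one LFO modulates a short delay line, so
# the rate and depth are the same for every note that passes through.
# Parameters are illustrative assumptions.
import numpy as np

def vibrato(signal, rate, lfo_hz=5.0, depth_ms=1.5):
    n = len(signal)
    t = np.arange(n) / rate
    max_delay = depth_ms / 1000.0 * rate            # in samples
    delay = max_delay * (1.0 + np.sin(2 * np.pi * lfo_hz * t)) / 2.0
    idx = np.clip(np.arange(n) - delay, 0, n - 1)   # fractional index
    lo = np.floor(idx).astype(int)                  # linear interpolation
    hi = np.minimum(lo + 1, n - 1)
    frac = idx - lo
    return (1 - frac) * signal[lo] + frac * signal[hi]

# Example: a steady 5 Hz vibrato on a dry 440 Hz "sample".
rate = 44100
t = np.linspace(0, 1, rate, endpoint=False)
wet = vibrato(np.sin(2 * np.pi * 440 * t), rate)
```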
P.S. I listed the Genre as "Soundtrack", but (a) it's an old-time science fiction radio play and (b) there is no genre for it in the drop-down list, but so what.