I have started using this strategy, and it's working nicely here in the sound isolation studio.
I am experimenting with the technique of instantiating an Instrument Track with its corresponding VSTi virtual instrument, using it for an instrument part, recording the generated audio to a matching Audio Track, and then disabling and hiding the Instrument Track.
Since I do everything with music notation, one of the nice features of this strategy is that the corresponding music notation continues to show in the Edit window. So even though the Instrument Track has been replaced with an Audio Track generated and recorded from it, I can still see the music notation; and if necessary I can re-enable the corresponding Instrument Track to make edits and updates, then render the audio and repeat the process.
There are a few intermediate steps, but (a) the process is not complex, and (b) browsing the different instruments, articulations, dynamics, and so forth is what takes the most time, at least when I have a general idea of what I want but nothing super specific. For example, I might want a Latin percussion instrument but wander into browsing Middle Eastern or Indian instruments until I find an instrument sound, articulation, and playing style that I like.
This YouTube video is the current version of the song I am developing. It has six "sparkled" instruments which are used to create and play approximately 75 "sparkles"; but there need to be twice as many, based on the number of "sparkles" in "Billie Jean" (Michael Jackson) and "Who Owns My Heart" (Miley Cyrus), which were produced by Quincy Jones and Rock Mafia, respectively. In those songs the "sparkles" are instruments and voices done intentionally by design rather than by the serendipity of pleasing "mistakes".
For reference, some of the instrument parts are variations of "Billie Jean" (Michael Jackson) and "I Feel Fine" (Beatles), since I make an effort to avoid having original ideas.
The "sparkles" are done with {Guiro (Percussion Factory, UVI.net), Hollywood Pop Brass (EW ComposerCloud+), Whoosh (UVI.net), Kutu Wapa (World Suite 3, UVI.net), Kemenche (Silk, Persian Empire, EW ComposerCloud+), Mongol Khomus (World Suite 3, UVI.net)}, where the key insight for arranging is that there are no restrictions on the genre, style, or country of origin of instruments and voices. It's all a matter of tones and textures rather than origins and traditional rules of arranging.
By the time you consider Creative Access (Waves), EW ComposerCloud+, IK Multimedia, Music Production Suite Pro (iZotope), Native Instruments, RealiTone, Sonic Pass (UVI.net), Studio One Pro+ (PreSonus), ElevenLabs AI, and a virtual festival of libraries of foley sounds, (a) there are tens of thousands of instruments and voices, and (b) when you include advanced synthesizers there are millions of tones and textures.
One might suppose it would be nice to know every possible tone and texture; but all these virtual instruments, voices, and foley sounds can be browsed and auditioned as you look for a specific type of tone and texture.
For example, what's a Kemenche, Kutu Wapa, and Mongol Khomus, or any of the other exotic World Instruments?
I have no idea, but I like the way they sound!
[NOTE: This is mixed for headphone listening, specifically SONY MDR-7506 headphones; and the focus is on being able to hear the "sparkles", so they are more dominant in this mix by design.]