A Reflection on AI and Music Production: Using Dreamtonic’s Synthesizer V Studio 2 Pro for Vocal Productions

Introduction
After I started using Dreamtonic’s Synthesizer V Studio 2 Pro recently, I posted a thread about music and AI on Mastodon.
It turns out I have a lot to say on the subject – and this is a nice opportunity to compile my ideas in a more coherent form. Morten Mosgaard on Mastodon added some interesting thoughts, and turned it into a conversation. So with his kind approval, I’m presenting a part of this post as a discussion.
About Synthesizer V. In their own words, it’s “vocal and song production software for creating realistic-sounding original songs or vocal covers by simply entering notes and lyrics, choosing a voice, customizing expressions, and letting our virtual singers sing.”
It’s a plugin that can be used in any DAW, and it also comes with a standalone version. Notes can be entered with a mouse or a MIDI keyboard, and the advanced algorithm instantly suggests realistic-sounding phrases that can subsequently be customized and altered to your heart’s content.
Use Case – Synthesizer V Studio 2 Pro: TMO – “If We Lose This Love”
This is my first tune where I’m using Synthesizer V for all the vocals – it’s a retro disco track that takes us all the way back to the late 1970s and, dare I say it, Studio 54. I wrote and recorded the song and the backing track last year, but didn’t add any vocals, because my own voice isn’t suited to the genre – at all – and, as a casual dabbler in music, I don’t have access to pro-level vocalists.
The lead vocalist here is Felicia, a “powerfully classic yet modern feminine voice native in English, versatile across pop, opera, and rock genres.” Dozens of voice models are available, and you can manipulate them in lots of different ways, so there will be many “versions” of Felicia out there.
I won’t get into the workflow of Synthesizer V, since this is more of a discussion about AI and music in general. But YouTube, as ever, is a great resource for this.
The Discussion
Terje: AI and music is an increasingly interesting topic. When I say that, I don’t mean prompt-based toys like Suno and Udio. I’m talking about serious AI tools that have quickly become an important part of most musicians’ and engineers’ workflows. It’s interesting in the way it’s being used, but also in terms of how it’s defined, how it’s perceived, and how it’s presented. What is AI music? What are AI-assisted processes? And where do you draw the line between different definitions?
At this point in history, I assume many professional musicians, composers, and engineers will be wary of revealing that they’re using AI in productions. There’s a lot of stigma attached to it – and people will be quick to judge. I started thinking about it when I bought this voice plugin that can produce such uncannily real vocal performances.
I’m using it as a tool in my own tunes. I do all the work I usually do – writing, performing, arranging, mixing, producing and mastering the tracks myself. I’m fairly confident in my abilities in other departments, and everything in my process is “manual” – even my arpeggio loops.
But I did need a vocal tool like this badly, because my weak voice is useless for most things. And yet, there’s this nagging feeling that when I use Synthesizer V, I owe people an “explanation” for what sounds like a very decent vocal performance. I don’t agree that this should be the case for reasons mentioned below, but… there it is.
Dreamtonic defines their technology as AI, but there are some important distinctions that set it apart from the way most people probably think about AI. For one, the voice models are based on the input of singers they hired for this exact purpose – there’s no wholesale scraping of the entire Spotify catalog, no intrusive data training unbeknownst to creators, and no lame, prompt-based and auto-generated lyrics (which also means, unfortunately, that any lyrical lameness here is entirely my own fault). I don’t understand the technicalities, but the way the original voice recordings are manipulated is apparently what makes this plugin AI.
From a musician’s perspective, it’s basically like any old virtual instrument (VSTi). The plugin and all its dependencies are stored locally on your hard drive, and it won’t strain the local power grid whenever you produce a vocal. And you still have to put in the work – you have to write the lyrics and the melody, and perform and manipulate the performance with keyboards, controllers, and an advanced set of parameters. Just like you do with an orchestral sample library and a Tina Guo cello for, say, a soundtrack.
It’s not exactly a sample library, though. Until recently, the entire expression range of a sampled cello was, more or less, limited to physical recordings of the instrument. Synthesizer V goes far beyond this, and manipulates sound in ways that (…well, I can’t explain it, can I??).
I’m convinced that in a very short time, traditional sampling and NI Kontakt will be old hat; virtual violins, saxophones and neys will all be “generated with AI” – technologically – whilst still being played and manipulated by musicians, like we do today.
But will producers ever tell you – the listener – that this is the case? And will you – the listener – think of this as “cheating,” or “not real”? And if so, do you also think Adele’s sad, sampled, soft pianos from 2011 are “cheating”? When does an “instrument” become an instrument? And when doesn’t it?
Is it okay for a mastering engineer to use a locally stored and, comparatively speaking, equally “morally defensible” AI tool to get the mix to 80% of its potential, and concentrate on the last 20% – or is it cheating? Is a voice replica somehow more problematic than other “instruments” and tools because it’s so much more personal, idiomatic, and closely attached to human identity? So many double quotes, so many questions!
I think this kind of “AI” – whatever it is – is in the same place today that the gramophone, microphones, loops & samples, and autotune once were: there’s a lot of resistance, prejudice, and misunderstanding. But it’s just as undeniably a central part of music’s future as those other contraptions are of its past. And terrible dreck and lack of talent will certainly remain present – Suno sucks, and people who for the life of them can’t write or produce a decent tune will still somehow make these “magical” tools – tools that can make everything sound smooth and professional in an instant – sound, well, awful.
A lot of things will change for professional musicians and composers. Revenue sources will dry up. And automatically generated slop stock music will soar. But I don’t worry for a second that new technologies will ever replace creative people. Because however fierce the competition, people will never stop loving singing, playing, composing, arranging and engineering music. And we will always find ever new and creative ways to use, manipulate and profit from new technology – even ones as scary as this one.
Morten: Personally, I find the discussion today lacks nuance. AI isn’t new at all, and lots of tools have been using artificial intelligence for years.
LLMs are still quite new, and they bring a lot of questionable practices: insane use of resources, unethical use, pirated content, steal first and ask later (or wait for a trial), capitalistic exploitation of a hyped market, and so on. But this is not AI as a whole; there are big differences among the various players in this field.
Terje: Yes, I agree. I suppose there are roughly three main strands of resistance to AI in music:
- What you mention – a backlash against massive, resource-heavy models driven by mega-capitalism without moral grounding, trained on data where creators are neither informed nor compensated.
- That it’s a dishonest way of making music – that listeners are being misled, and that musicians and engineers, by using AI, are taking shortcuts and handing over the work to machines, that they’re not really making the music themselves, and that this flattens and marginalizes creativity.
- That it will take jobs and income away from hardworking people in the music industry.
Point 1: I completely agree it’s a real problem that needs to be addressed and regulated.
Point 2 is the big moral and philosophical debate about art, expression, technology, authenticity, and integrity, where prejudice, lack of knowledge, misunderstandings, conflated concepts, and strong opinions will shape the conversation for a long time to come. In many ways, it reminds me of the debate in the 1980s about whether it was even possible to create music with soul and personality using synthesizers (when the only politically correct and widely accepted answer in circulation seemed to be: “Yes, it’s possible, but only if you’re Kraftwerk”).
And right now, you’re seeing some fairly political, almost boilerplate statements, where platforms like Bandcamp and Pond5 ban AI, while (in Bandcamp’s case) only offering vague definitions of what is actually acceptable and what isn’t, and (in Pond5’s case) imposing wholesale bans that effectively exclude everyone except those with an acoustic guitar and a 4-track recorder.
And then there’s this track that was recently banned from the Swedish charts (a lovely and harmless little tune, if you ask me!) because AI was involved, and because the artist’s identity is… um, “unclear”? 😄
This point is going to become a real battleground — and one I’ll be following with great interest.
Point 3 I think is true. But I also think, unfortunately, that it’s a shift that can’t really be stopped – beyond trying to create new opportunities within a new reality.
Morten: I agree with so much of what you’re saying, and I think it’s such an interesting topic.
In developments like these, you will always have those against, those defending, and those trying to find a place in the middle. Right now there is very little “in the middle” – at least here in my corner of the fediverse – and a lot of being either against everything AI or totally pro-AI. And what people mostly mean is LLMs.
Afterthoughts
I think Morten’s point about nuance in the discussion is very important. Every commercial DAW will come with AI capabilities. Every EQ, compressor, and limiter will eventually come with an AI setting. Every virtual instrument will have AI-assisted vibrato, expression, and portamento. Microphones will be shipped with AI-guided dynamics and room-correction software.
“Banning AI-assisted workflows” is almost an oxymoron – it’s everywhere!
And in this landscape, how can you possibly navigate AI bans, or even recognize and avoid AI as a listener? I think it’s impossible. And I don’t necessarily understand why it’s an imperative. But let’s discuss it, let’s get the definitions right – and let’s not throw the baby out with the bathwater. One thing is certain: AI has a future in music, whether you like it or not.
Thanks to Morten for allowing me to post his thoughts here. Follow him on Mastodon – he’s a cool guy!
And here’s a link to the original thread on Mastodon.
