A Reflection on AI and Music Production: Using Dreamtonic’s Synthesizer V Studio 2 Pro for Vocal Productions

Introduction

After I started using Dreamtonic’s Synthesizer V Studio 2 Pro recently, I posted a thread about music and AI on Mastodon.

It turns out I have a lot to say on the subject – and this is a nice opportunity to compile my ideas in a more coherent form. Morten Mosgaard on Mastodon added some interesting thoughts, and turned it into a conversation. So with his kind approval, I’m presenting a part of this post as a discussion.

About Synthesizer V. In their own words, it’s “vocal and song production software for creating realistic-sounding original songs or vocal covers by simply entering notes and lyrics, choosing a voice, customizing expressions, and letting our virtual singers sing.”

It’s a plugin that can be used in any DAW – and it also comes with a standalone version. Notes can be entered with a mouse or a MIDI keyboard, and the advanced algorithm instantly suggests realistic-sounding phrases that can then be customized and altered to your heart’s content.

Use Case – Synthesizer V Studio 2 Pro: TMO – “If We Lose This Love”

TMO – “If We Lose This Love” (2025/6)

This is my first tune where I’m using Synthesizer V for all the vocals – it’s a retro disco track that takes us all the way back to the late 1970s and, dare I say it, Studio 54. I wrote and recorded the song and the backing track last year, but didn’t add any vocals, because my own voice isn’t suited to the genre – at all – and, as a casual dabbler in music, I don’t have access to pro-level vocalists.

The lead vocalist here is Felicia, a “powerfully classic yet modern feminine voice native in English, versatile across pop, opera, and rock genres.” Dozens of voice models are available, and you can manipulate them in lots of different ways, so there will be many “versions” of Felicia out there.

I won’t get into the workflow of Synthesizer V, since this is more of a discussion about AI and music in general. But YouTube, as ever, is a great resource for this.

The Discussion

Terje: AI and music is an increasingly interesting topic. When I say that, I don’t mean prompt-based toys like Suno and Udio. I’m talking about serious AI tools that have quickly become an important part of most musicians’ and engineers’ workflows. It’s interesting in the way it’s being used, but also in terms of how it’s defined, how it’s perceived, and how it’s presented. What is AI music? What are AI-assisted processes? And where do you draw the line between different definitions?

At this point in history, I assume many professional musicians, composers, and engineers will be wary of revealing that they’re using AI in productions. There’s a lot of stigma attached to it – and people will be quick to judge. I started thinking about it when I bought this voice plugin that can produce such uncannily real vocal performances.

I’m using it as a tool in my own tunes. I do all the work I usually do – writing, performing, arranging, mixing, producing and mastering the tracks myself. I’m fairly confident in my abilities in other departments, and everything in my process is “manual” – even my arpeggio loops.

But I did need a vocal tool like this badly, because my weak voice is useless for most things. And yet, there’s this nagging feeling that when I use Synthesizer V, I owe people an “explanation” for what sounds like a very decent vocal performance. I don’t agree that this should be the case for reasons mentioned below, but… there it is.

Dreamtonic defines their technology as AI, but there are some important distinctions that set it apart from the way most people probably think about AI. For one, the voice models are based on the input of singers they hired for this exact purpose – there’s no wholesale scraping of the entire Spotify catalog, no intrusive data training unbeknownst to creators, and no lame, prompt-based and auto-generated lyrics (which also means, unfortunately, that any lyrical lameness here is entirely my own fault). I don’t understand the technicalities, but the way the original voice recordings are manipulated is apparently what makes this plugin AI.

From a musician’s perspective, it’s basically like any old virtual instrument (VSTi). The plugin and all its dependencies are stored locally on your hard drive, and won’t drain your local power supplier whenever you produce a vocal. And you still have to put in the work – you have to write the lyrics and the melody, and perform and manipulate the performance with keyboards, controllers, and an advanced set of parameters. Just like you do with an orchestral sample library and a Tina Guo cello for, say, a soundtrack.

It’s not exactly a sample library, though. Until recently, the entire expression range of a sampled cello was, more or less, limited to physical recordings of the instrument. Synthesizer V goes far beyond this, and manipulates sound in ways that (…well, I can’t explain it, can I??).

I’m convinced that in a very short time, traditional sampling and NI Kontakt will be old hat; virtual violins, saxophones and neys will all be “generated with AI” – technologically – whilst still being played and manipulated by musicians, like we do today.

But will producers ever tell you – the listener – that this is the case? And will you – the listener – think of this as “cheating,” or “not real”? And if so, do you also think Adele’s sad, sampled, soft pianos from 2011 are “cheating”? When does an “instrument” become an instrument? And when doesn’t it?

Is it okay for a mastering engineer to use a locally stored, and, comparatively speaking, equally “morally defensible” AI tool to get the mix to 80% of its potential, and concentrate on the last 20% – or is it cheating? Is a voice replica somehow more problematic than other “instruments” and tools because it’s so much more personal, idiomatic and closely attached to human identity? So many double quotes, so many questions!

I think this kind of “AI” – whatever it is – is in the same place today that the gramophone, microphones, loops & samples, and autotune once were: there’s a lot of resistance, prejudice, and misunderstanding. But it’s as undeniably a central part of music’s future as these other contraptions are of its past. And terrible dreck and lack of talent will certainly remain present – Suno sucks, and people who, for the life of them, can’t write or produce a decent tune will still somehow manage to make these “magical” tools, which can make everything sound smooth and professional in an instant, sound, well, awful.

A lot of things will change for professional musicians and composers. Revenue sources will dry up. And automatically generated slop stock music will soar. But I don’t worry for a second that new technologies will ever replace creative people. Because however fierce the competition, people will never stop loving singing, playing, composing, arranging and engineering music. And we will always find ever new and creative ways to use, manipulate and profit from new technology – even ones as scary as this one.

Morten: Personally, I miss more nuance in the discussion today. AI isn’t new at all, and lots of tools have been using artificial intelligence for years.

LLMs are still quite new, and they bring a lot of questionable practices, like insane use of resources, unethical use, pirated content, steal first, ask later (or wait for a trial), capitalistic exploitation of a hyped market, and so on. But this is not AI as a whole; there are big differences between the various players in this field.

Terje: Yes, I agree. I suppose there are roughly three main strands of resistance to AI in music:

  1. What you mention – a backlash against massive, resource-heavy models driven by mega-capitalism without moral grounding, trained on data where creators are neither informed nor compensated.
  2. That it’s a dishonest way of making music – that listeners are being misled, and that musicians and engineers, by using AI, are taking shortcuts and handing over the work to machines, that they’re not really making the music themselves, and that this flattens and marginalizes creativity.
  3. That it will take jobs and income away from hardworking people in the music industry.

Point 1: I completely agree it’s a real problem that needs to be addressed and regulated.

Point 2 is the big moral and philosophical debate about art, expression, technology, authenticity, and integrity, where prejudice, lack of knowledge, misunderstandings, conflated concepts, and strong opinions will shape the conversation for a long time to come. In many ways, it reminds me of the debate in the 1980s about whether it was even possible to create music with soul and personality using synthesizers (when the only politically correct and widely accepted answer in circulation seemed to be: “Yes, it’s possible, but only if you’re Kraftwerk“).

And right now, you’re seeing some fairly political, almost boilerplate statements, where platforms like Bandcamp and Pond5 ban AI, while (in Bandcamp’s case) only offering vague definitions of what is actually acceptable and what isn’t, and (in Pond5’s case) imposing wholesale bans that effectively exclude everyone except those with an acoustic guitar and a 4-track recorder.

And then there’s this track that was recently banned from the Swedish charts (a lovely and harmless little tune, if you ask me!) because AI was involved, and because the artist’s identity is… um, “unclear”? 😄

This point is going to become a real battleground — and one I’ll be following with great interest.

Point 3 I think is true. But I also think, unfortunately, that it’s a shift that can’t really be stopped – beyond trying to create new opportunities within a new reality.

Morten: I agree with so much of what you’re saying, and I think it’s such an interesting topic.

In developments like these, you will always have those against, those defending, and those trying to find a place in the middle. Right now there is very little “in the middle” – at least here in my corner of the fediverse – and a lot of being either against everything AI or totally pro-AI. And what people mostly mean is LLMs.

Afterthoughts

I think Morten’s point about nuance in the discussion is very important. Every commercial DAW will come with AI capabilities. Every EQ, compressor and limiter will eventually come with an AI setting. Every virtual instrument will have AI-assisted vibrato, expression and portamento. Microphones will be shipped with AI-guided dynamics and room software.

“Banning AI-assisted workflows” is almost an oxymoron – it’s everywhere!

And in this landscape, how can you possibly navigate AI bans, or even recognize and avoid AI as a listener? I think it’s impossible. And I don’t necessarily understand why it’s an imperative. But let’s discuss it, let’s get the definitions right – and let’s not throw the baby out with the bathwater. One thing is certain: AI has a future in music, whether you like it or not.

Thanks to Morten for allowing me to post his thoughts here. Follow him on Mastodon – he’s a cool guy!

And here’s a link to the original thread on Mastodon.

January 3, 2025: Early Lessons – “Neither Barc Nor Bacharolle”

January 2nd brought a terrible track, so let’s quickly forget about that. It’s published on YouTube, as per the definition of the task – look it up at your own risk.

Today’s selection has more going for it, but this little chamber piece has some issues as well. If there’s anything to take away from the Ted Mountainé 2025 Challenge at this early stage, it’s that these sketches need more clarity. When there’s no deadline involved, we can chip away at details for weeks on end to carve out the essence of a track.

And, indeed, that’s often the way it’s done around here:

  1. Throw everything into the pot in a confusing hodgepodge of crossing wires and heavily stacked loose ends – nothing’s too complicated, nothing’s too silly, and, most importantly, nothing’s too much.
  2. Go treasure hunting for golden nuggets in these overloaded tracks.

A track a day requires a different strategy. The original idea needs to be solid and clear. The Jan 2 track was a mess because there was no scope, no recognizable form, arrangement, melody… or much of anything else for that matter.

The Jan 3 track has more form (the sonata), a framework (an expanded string quartet), and some recognizable stylistic elements. But it’s still too messy and complicated, with individual lines getting in the way of each other. And there’s harmonic exploration that was fun to play around with at the time, but that doesn’t have the support needed to make sense within the musical context – especially during the middle part.

So an important reminder for the future is to build a better foundation to support the load of the choices we make. Keep it simple from the start – no ornaments or spiral staircases until the load-bearing walls and floors are in place.

But it’s the weekend, so, to stay with the architectural vernacular, we’ll probably stick to building blocks for a few days, with focus on synth patches, sampling and rhythm beds. Results will be posted. Ted Mountainé out!

The Ted Mountainé 2025 Challenge

Ted Mountainé starts off the new year with a brand new project: The plan is to publish a piece of music every day of the year, at least until general fatigue sets in. For a time traveller this makes absolutely no sense at all, but that’s what Ted Mountainé is all about!

We start off with a mood piece in a style reminiscent of the intro to Vangelis’ The City, or, indeed, Blade Runner – a trick Ted often resorts to when he needs an “ouverture” to capture the feel of a new project. It’s probably a bit more romantic and optimistic in tone than Vangelis’ more dystopian sounds in these examples. And it might be too optimistic considering that the year we are now entering, 2025, could be a turbulent ride. But let’s hope Ted proves us wrong.

April, 1988: Victor Herbert Goes Rogue

LeRoy and Schiing testing fake Remo conga skins at our facility in the Dolomite Mountains

The inventor Paul LeRoy (28 at this time and, therefore, currently unaware of his own death) met up with our percussionist Schiing the other day. They discussed, amongst other things, the incident that shook the time-travelling community last month (or, last month thirty-five years ago, to be precise), when a young and out-of-time Victor Herbert somehow managed to delay David Mamet’s play Speed-the-Plow.

It was rescheduled from April 13 to May 3, 1988, and it caused all kinds of havoc in the time-space continuum. Safe to say, Herbert is not a popular man at this particular moment in time. Everyone involved in the business knows that if you reschedule a Mamet play by a certain number of days, all his other plays will be rescheduled accordingly. This interferes with the actors’ schedules, and, because these are often big names involved in a variety of projects, important movies and plays will be moved or cancelled.

Indeed, Mamet wrote Our American Cousin under his Tom Taylor pseudonym in 1857, and as a consequence this episode reintroduced the 1865 Abraham Lincoln assassination to history. This hasn’t occurred since the George VI tea incident.

One can only speculate what was on Herbert’s mind, but it is well known that since learning about his fate and reputation post-Eileen (1917), when he was forced to compose in a simpler style in a misguided attempt to pander to newer musical sensibilities, he has become a bitter man, and many believe that he was also responsible for the recent HarperCollins Bridgerton misprint scandal.

LeRoy and Schiing also discussed the impact of fake Remo conga skins in the broader context of pop and easy listening, and how this might have affected the popularity of Peter Allen’s classic live version of I Go to Rio across the different time rifts.

We hope to bring you a YouTube video of the entire discussion soon.

Robots in Suits and the Mercedes-Benz 123 Series

Back in ’76, at the Frankfurt Motor Show, Ted Mountainé discovered the Mercedes-Benz 123 series for the first time, which marked the beginning of a long-lasting love affair.

Unlike his band leader peers, who often went for flashy luxury cars, Mountainé appreciated the solid and unassuming aura of a car that looked more like a car than any other car he had ever seen, the new Mercedes 200.

And in many ways this mobile entity, practically lifted directly from Plato’s aspatial, atemporal Forms into the physical world by pure German industrial strength, became a symbol of the no-nonsense, utilitarian easy listening landscape that shaped a certain part (elevators, mostly) of the following decades with Ted Mountainé behind the steering wheel.

Unfortunately, due to the temporal whims of the space-time continuum, most people reading these words will not be aware of Mountainé’s moderate reputation in some of our parallel worlds.

But we are lucky enough to be able to present a few of his sound recordings here, to showcase the subliminal audial presence he commanded in some other – choice, naturally – worlds. Here is Robots in Suits, a musical taster from another 1983.

Floating in Space (Cocktails) ’90

Ted Mountainé says: “Hey, put on your retro sports jacket and get in my spaceship for a bit of action before we watch another episode of ‘30something’!”

We’ll bring you three drinks and the tab in the spacebar. Not because we’re cheap, but because we’re in constant search of a joke that’ll make your toes curl.

This is a massive undertaking – an intergalactic smooth jazz ride with too many references to mention. Sadly, it was recorded before we famously reevaluated and fine-tuned our mixing philosophy (essentially: “learned to mix”), so even though we stand by the music 100%, mistakes were made in the sound engineering department. This causes a certain lack of overall energy during the proceedings, and some rather embarrassing balancing issues. But we hope it’s still an enjoyable ride.

Waiting for Summer

Ted Mountainé is, as we know, currently busy in his role as a spokesperson for the International Association of Introverted Jet-Setters Travelling in the Past (IAoIJSTitP). The micro-organizational aspects of this engagement are profound in their minimality, simply due to the nature of the (incredibly annoying) people occupied with these matters.

It has tested the patience of many a regular jet-setter who has accidentally wound up on this particular yacht, as it were. We won’t easily forget the year when Leonard Bernstein unwittingly found himself as the Jet Set Miniature Assembly Kit Ceremony Master, haha.

Regardless, a couple of years ago, Ted took some time off to create this exciting montage consisting of scenery from his beautiful life and music. He didn’t really know where to stop, though (a common affliction for time travelers), so the video and the tune go on for far too long – which we suppose only proves what we suspected long ago: You can get too much of a good thing.

But Ted stands rigorously by his work: “If every man and woman stepped out of their sepia-toned lives and took a break from their generally depressing world views, and picked up some cues from the happy distortion of cardboard people in escapist entertainment and advertisement settings instead, I honestly think we would all be better off for it,” he claims.

So, until we meet again, please enjoy this little bundle of overbaked joy from the Ted Mountainé Orchestra, for as long as you can take it.

Update 2018: A Letter from the Staff

It’s been a few years since our last post. You may remember Paul LeRoy (arrow) from our first article back in 2011? It turns out he was right about time travel all along, and as a result of our journalistic investigations into the area, we’ve been going back and forth through history during the past couple of years.

All our correspondents have been busy in alternate realities, and all this time our editor, Ted Mountainé, has been churning out mediocre synth music to redirect the attention of the general public and stay out of view from suspicious governmental bodies.

We will go into all of this in greater detail if time permits. Before any of that, though, we want to take the opportunity to apologize for Donald Trump. You see, that was our fault – it turns out that Ted Lewis’ Hat experimented with some probability theory events in what he thought to be a dummy universe – it wasn’t.

It is an easy mistake to make, so we hope you’ll forgive him, as we have.

We hope to bring you an interview with Paul LeRoy himself in the near future. He’s currently busy playing lead clarinet in the 1973 incarnation of Raymond Lefèvre’s orchestra, but we should be able to lure him out. It’s an incredibly boring job – Lefèvre is, after all, all about the string section.

Cowell: Stupid Might Beat Boring But I’m Tired of Throwing the Punches

Knob manufacturers Arton and Leonard Brother have negotiated a deal with Simon Cowell for a new television show.

American Preservationist will debut on ABC on March 16, 2012 and it will be hosted by Stephen Hawking. Executive producers are Brother’s sister Susie Brother Love, former CEO of Hallmark’s NASA division, and folk singer Gordon Lightfoot. Lightfoot will arrange the title tune, a “sunny keyboard version” of Natalie Cole’s “Miss You Like Crazy.”

Simon Cowell describes the show as “an Idol show for anyone, or anything, on the verge of extinction.” The difference between the new show and Cowell’s previous endeavors, he says, is that this time he’s turning the spotlight on things actually worth saving. Cowell claims that he’s “dead tired” of disposable teen idols, and painfully regrets signing new three-year deals with ITV for Britain’s Got Talent and The X Factor. He says that the prospect of having to endure “these ridiculous fatheads” well into his sixties scares him shitless.

The Brothers are excited about the opportunity to get a break from the knob business. “You know, it’s a broad concept. We can move from Tony Orlando to Mike Love, and then to a moth-ridden desk, in an instant. Hopefully we’ll squeeze some knobs in there as well, but that’s not important, really. It’s an opportunity for us to do something completely different — and it’s nice not to worry about profits for a change, because honestly, no one expects the concept to catch on with sponsors. We’re doing it because it’s not knobs.”

Stephen Hawking doesn’t want to comment on his role. When we asked him about the show at a local protest targeting plans to enlarge a hotel complex on the River Cam recently, he simply shrugged his shoulders. Metaphorically speaking, that is.

Shaoncé vs. Foucault: The Use of Pleasure

Pop darling Shaoncé has revealed details about her new album scheduled for release in early 2012. After the tremendous success of her previous album, Shock, people are curious to see if she can live up to the hype — but judging by yesterday’s press conference in Paris there is no need for concern. Sporting an outfit that would make Lady Gaga proud, the young singer won the world press over with her sharp wit and acute observations. Her warm, conversational style created a unique and friendly atmosphere, and even old codgers like myself had to let down our professional guard a bit to enjoy the exhilarating intelligence and humanity of the encounter.

Produced by the people behind the soundtrack of the hit TV series Glee, her new album is titled The Use of Pleasure. It’s primarily based on original texts by the French philosopher Michel Foucault, translated into English by acclaimed American songwriter Diane Warren, occasionally assisted by Ms. Shaoncé herself.

On her new album, Shaoncé is particularly preoccupied with Foucault’s disassociation from the structuralist movement and how it has affected what she perceives as impotent principles of application in computational semiotics. This is underscored by the extensive use of FM synthesis and autotuning. “I would never use autotune under normal circumstances,” she says. “But for this album we really had no choice — there is no better way to represent the concepts of artificial intelligence, computer-human interaction — all that stuff that we’re dealing with on the album. So you may say that autotune is the recurring theme here, a kind of non-musical leitmotif.”

I asked her if this meta-perspective might not get lost on her audience in a musical landscape where autotune is, per se, the default. “But, you see, what you’re saying, that is the beauty of it. No one knows for certain whether it’s one thing or the other. That’s my point — our ears are so accustomed to autotune that we don’t question it anymore. It’s the same with semiotics. We are so entrenched in one way of thinking — the wrong way, I’d say — that we can’t really see what’s right anymore.”

The Use of Pleasure will be released on EMI in 2012.
