How I used Suno AI to cover my own demo album

When I was young and had too much free time, I used to write and record music. I wasn’t very successful—mostly because I couldn’t sing or play any instruments. The songs always ended up sounding much worse than they did in my head.

But now, it turns out I can just feed my old demo songs to AI and ask it to do a better job.

So that’s what I did—and the result exceeded all my expectations. It’s crazy how well it can follow the song structure, including the lyrics, instrumental parts, and so on.

But let me show you some examples.

In my car

This song has an upbeat pop melody—mostly guitar, but also some piano and strings. My favorite part is the instrumental section in the middle.

I’m afraid you’ll have to listen to the original. I know, I know.

So I threw it at Suno AI. It immediately recognized the lyrics—not perfectly, but close enough. I fixed the mistakes, added genres, and tweaked some sliders.

The important sliders

It takes about 30 seconds to generate a couple of versions. I played around with genres a bit and picked the one I liked most.

The AI-generated version has the same overall mood. Some musical parts are different—not identical, but it keeps the same vibe. I really like the version with the female voice.

My favorite instrumental section is sorta there. But for some reason, she sings the melody that was originally played by strings. Still, it works.

Hold on to the boy

This song is a melancholic, slow-tempo, psychedelic… OK, I’m not really sure what it is, to be honest.

And now the AI version.

It perfectly captures the mood and intonations of the original. The guitar part after the chorus? Absolutely nailed. And the ending—with that intense singing/shouting—I actually like it even more than the original.

The only thing I miss is the guitar part during the chorus. It’s either missing or just kind of buried in the mix—middy, and not clearly distinguishable.

Good night

This song is again melancholic, rather slow, a bit drowsy.

I picked it for this article because I think it was the easiest one for the AI to get right: it quite clearly falls under the trip-hop genre, and it's also very simple.

You can hear that the bass line is there, the drums are kinda like in the original (but better), and all the intonations in the lyrics are there.

My takeaways

Here's what I've learned so far.

  1. It’s funny how the AI sometimes rearranges parts of the song—like moving a guitar solo from the end to the middle, or turning a melody into a bass line. Unexpected, but interesting.

  2. The audio influence slider works too well. At 90%, it not only replicates all the parts but also copies my voice—with the exact same intonation and even the same missed notes. It even mirrors the drum parts I wish it would skip.

  3. The lyric handling is amazing. You can edit the text, and the singer follows it to the letter—while still trying to match the original intonation and timing. It’s pure magic. Total cinema.

  4. When multiple instruments play a polyphonic melody (like piano and guitar), the AI struggles to separate them cleanly. You end up with a muddy hybrid that sounds a bit like both. This probably comes down to the quality of the original recording.

  5. Not a single song is quite ready to publish. It gets 90% of the way there, but there’s always some rough edge—off intonation, messy guitar solo, something that gives away it’s AI-generated. Still, I think most of those issues stem from the low quality of the source material.

#AI #Music
