Key Takeaways
- AI music falls short of human music quality despite impressive advancements in generation technology.
- AI tracks carry a persistent background hiss, so they lack the clean, high-fidelity sound of professionally produced music.
- AI music has yet to produce a hit song, and AI music companies face legal challenges over training on copyrighted music without permission.
AI music has come far in recent years, but it still pales in comparison to the music humans make. When you listen past the surface-level shine, it just doesn’t sound as good as what real musicians and producers can create.
How Exactly Is AI Music Generated?
Before AI came along, making music meant sitting down and playing an instrument, like a guitar or a synthesizer, or singing. You have to physically move your body to make a sound, and then, on top of that, spend hours and hours putting all the pieces together into a song.
How AI systems create music is entirely different.
AI uses machine learning algorithms to process a dataset of recorded music and learn its patterns: melodies, chord progressions, instrumentation, and genre conventions. To recreate the kind of music that skilled musicians and producers make, an AI system first has to break music down into these fundamental building blocks.
Once the model has learned to recreate music convincingly, generative tools like Suno, or earlier attempts like Meta’s MusicGen, let you interact with it through prompts. You simply describe the music you want in a few words or sentences, and the generator produces a track.
Compared to how we’ve been making music for thousands of years, it’s a bizarre way to create something that’s meant to be filled with meaning and emotion. You can try an AI music generator to see how it works for yourself.
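If you'd like to see what that prompt-driven workflow looks like in practice, here's a minimal sketch using Meta's open-source MusicGen through its audiocraft library. The model checkpoint, prompt text, clip length, and output filename are illustrative choices on my part, not anything recommended by Meta or Suno.

```python
# Minimal sketch of prompt-based music generation with Meta's MusicGen
# (audiocraft library). Checkpoint, prompt, and duration are illustrative.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained checkpoint; smaller models generate faster but sound worse.
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=15)  # roughly 15 seconds of audio

# The entire "performance" is a short text description.
prompts = ["upbeat pop track with punchy drums and a catchy synth hook"]
wav = model.generate(prompts)  # tensor of generated waveforms, one per prompt

# Save the first result to disk as a loudness-normalized audio file.
audio_write("generated_track", wav[0].cpu(), model.sample_rate, strategy="loudness")
```

That's the whole process: a single sentence of text stands in for the instruments, the performance, and most of the production work.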
AI Music Is Lo-Fi, but Not the Cool Kind
Nailing even the basic structure of a pop song was a big stumbling block in the early days; Meta’s MusicGen is a good example of how limited those first attempts were. But now we have sites like Suno that, on a technical front, create seriously impressive full music tracks.
That said, don’t be fooled into thinking the results stack up against the high-fidelity music you may not even realize you’re used to. It’s a bit like video quality: streaming at 1080p or above is normal now, but in the past, screens and bandwidth limits meant not everyone could get that.
The tell-tale sign of lo-fi audio quality is hearing a lot of white noise in the track. Think of vinyl on an old record player, or even a cassette tape. Remember the hiss and crackle? You can hear something similar in AI music tracks, as if the music is being played through an old radio.
Although it’s not as bad as it once was, this background noise is in just about every AI music track that I have listened to on Suno.
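If you want more than your ears to go on, one rough way to check for that broadband hiss is spectral flatness, a standard measure that sits near 1 for white-noise-like audio and much closer to 0 for clean, tonal material. The sketch below uses the librosa library; the filename is just a placeholder for a downloaded track, and a proper analysis would look more carefully at quiet passages rather than blindly averaging the whole song.

```python
# Rough sketch: gauge how "noise-like" a track is via spectral flatness.
# White noise scores near 1; clean tonal music scores much closer to 0.
# "some_track.mp3" is a placeholder for any downloaded audio file.
import librosa
import numpy as np

y, sr = librosa.load("some_track.mp3", sr=None, mono=True)

# Frame-by-frame spectral flatness and loudness (RMS); default frame settings match.
flatness = librosa.feature.spectral_flatness(y=y)[0]
rms = librosa.feature.rms(y=y)[0]

# Hiss is easiest to measure in the quietest 10% of frames,
# where there's little music masking the noise floor.
quiet = rms < np.percentile(rms, 10)
print(f"Median flatness overall:         {np.median(flatness):.3f}")
print(f"Median flatness in quiet frames: {np.median(flatness[quiet]):.3f}")
```

In a cleanly produced human track, the quiet moments tend toward near-silence; a noticeably elevated flatness there suggests the noise floor is baked into the audio itself.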
Here’s one example from Suno, a track called ‘Strongest Duo’, where the noise runs all the way through the vocals.
If I were producing this track myself, I wouldn’t add distortion to this voice, because a vocal sounds punchier and tighter when it’s crisp and clean.
Here’s another track on Suno, this time featuring a classical violin. Play it for a trained violinist or an audio engineer, and they’ll tell you right away that the strings don’t sound anywhere near as good as they should.
There’s a whole genre called lo-fi that’s built around this kind of warm, noisy sound, but in modern music production that character is added on purpose. With AI music, it clearly isn’t meant to be there, and AI companies have yet to figure out how to get rid of it.
AI Still Hasn’t Produced a Hit Song
AI-generated music has yet to hit the big time and land a spot on the music charts, which is a good indicator that it isn’t as good as you might think.
Instead, the most notable examples, such as the rap diss track “BBL Drizzy”, which sampled a portion of an AI-generated track, have arrived alongside massive copyright lawsuits. The Verge reports that Suno is under legal fire from the world’s biggest record labels (Universal Music Group, Sony, and Warner Records), as is another AI music company called Udio.
The uncomfortable fact about AI music generators is that they wouldn’t exist if they hadn’t sucked up vast amounts of copyrighted music catalogs without permission. Building AI plugins that assist with music production is arguably a far more positive use of the technology than generating tracks wholesale and bypassing music production altogether.
In any case, there’s more to music than how it sounds. So many people adore Taylor Swift or Billie Eilish because there’s a great story behind how they got to where they are. We don’t just want to listen to their music; we want to hear their personal stories, copy how they dress, and glimpse behind the curtain at their glamorous lives.
Can AI music produce the same kind of intrigue? Not at all.
Music Is Far Too Complex for AI to Replicate
Taking a song from an initial idea to a recorded, released track can take weeks, months, or years. Unlike AI music, which takes barely a few minutes to produce from a handful of words fed into a prompt, real music demands an incredible amount of skill, imagination, and emotion.
No matter how complex AI music algorithms are, human music-making is far more complex. Even if AI companies overcome the quality problem, there is no way that AI can produce music that truly sounds great, because at the end of the day, we’re interested in so much more than the basic structure of a song. We’re interested in the human behind the music.