Every era of music brings new technology: the electric guitar, the sampler, autotune and now AI — powerful enough to mimic a voice, but unable to feel the life behind it. AI has found its way into the music world, despite pushback, with some creatives using it to make new samples, alter existing tracks and even generate entire songs.
Many artists — such as Ye (formerly known as Kanye West) and Paul McCartney — have used AI in some form in their music. However, there’s a difference between using AI to enhance a track and using it to fully generate one. AI-generated music should not replace human-made music, nor should it become the standard, but it can be used responsibly.
Just as autotune can’t completely carry an artist to stardom, AI cannot and should not replace real talent. While it is often disappointing to see many well-known, talented artists use AI as a shortcut in their creative process, not all usage is necessarily the same.
For example, AI may be used to separate the different stems, or layers, of a song, which makes sampling much more accessible. Artists can experiment with samples even when the original stems are unavailable to them. Paul McCartney even used the technology to isolate and enhance John Lennon’s vocals from an unreleased Beatles demo, releasing “Now and Then” in 2023. There are acceptable uses of AI; it’s just a matter of defining what is ethically acceptable and what is not.
The current lack of systematic AI regulation doesn’t help either. While recent legislation, such as the “Take It Down Act” – which prohibits knowingly posting harmful deepfakes of others in the U.S. – begins to set boundaries for responsible AI use, there is still little accountability for users. For example, it is now comically easy to go online, type something up and have an AI voice read it while mimicking a real one without permission. There must be more regulation on the usage of AI voices, especially as public figures and figures of authority can be mimicked more easily than ever before.
Machines Meet the Studio
While creativity and artificial intelligence are seemingly opposites, there are instances in which AI can enhance a creative vision. For example, on the JPEGMAFIA song “either on or off the drugs,” the flipped sample is an AI-generated cover of Future’s “Turn on the Lights,” created to mimic a slower rendition of the song. However, this is still an imperfect process, as JPEGMAFIA could have found a real person to record the sample instead.
Some usage comes off as just flat-out lazy. Earlier this year, Ye posted multiple versions of an upcoming album titled “BULLY” on X, each featuring AI-generated vocals. While Ye said in the post that he would re-record the vocals with his own voice, some of the songs have since been uploaded to streaming platforms with very few changes.
As AI’s usage becomes normalized, it poses an important question: How will we differentiate AI-generated music from its “real” counterparts? As artists increasingly use distortion and autotune as part of their sonic brand, will they have to avoid certain styles to dodge associations with AI usage? I don’t believe so, but this is just one of many dilemmas in the age of AI in music.
There are many valid concerns regarding AI’s use in music production, just as there are in other fields like writing, coding and graphic design. As AI becomes increasingly influential in the public sphere, only time will tell how the creative world will adapt to its evolution. Will AI eventually become as ubiquitous as autotune, or will it fade into obscurity as a relic?