During a recent appearance on the Rolling Stone Music Now podcast, Living Colour guitarist Vernon Reid was asked about the potential and pitfalls he sees in AI technology becoming more prevalent in music-making. And he's got serious concerns.
"I've used technology a great deal and I dig the things that it can do, but I see things on the horizon that are very troubling," said Reid. "Because it's not just about a single technology, it's about a compendium of technologies – each of them going in their own directions and at a certain nexus they're going to link up. It's going to have a huge impact on how music is produced, how music in consumed.
While fellow guitarists like Joe Walsh have been somewhat dismissive of AI technology as a potentially influential force in the way music will be made in the future, Reid has clearly given the issue much more nuanced thought, and sees it as part of a wider ongoing story.
"Already technology has had a tremendous impact, throughout the whole of recorded music, technology has been at the centre of what's sellable, what's not sellable, the choices that musicians and artists make, and the choices that consumers make. And particularly when I look at the streaming services and how low the pay scale is for musicians.
"The superstars will always make their money, but for indie artists, midrange artists, it's just a different situation. I can see a time where generative AI, combined with the technology being able to capture and model phonemes – the elements that make our voices individual –are going to combine and be leveraged in a way that I don't think people is really thinking of the unintentional consequences of what that is.
We're already seeing the potential pitfalls that could lie ahead with the actors' and writers' strikes in Hollywood – fears that if the digital likeness of performers can be captured, it could devalue the need for real performers in the future. The beginnings of this in music can be seen with AI cover songs and AI-generated versions of famous vocalists being used to create new music, without any input from or permission of that singer.
This technology will inevitably get better at capturing the details of artists – the phonemes Reid references. He now warns of what the next level of that could be.
"The voices become indistinguishable from the actual voices of the artist," Reid imagines as the future of generative AI. "The first attempts are clever and they're amusing but at a certain point we have enough material of people's speaking voices and their singing voices, and the thing is these technologies are not static – they're not stable where they are currently.
"Most of these things, maybe they go viral or maybe they don't, but the idea of having Whitney Houston sing again, or having Prince sing again, and have them sing a song that's completely composed by generative AI. You can take a songwriter, say Stevie Wonder. You could take a time period – the '70s with Talking Book, Fulfillingness' First Finale, Music Of My Mind, and ask ChatGPT, 'Write a lyric that Stevie Wonder never wrote, and I want you to focus on a time period of the 1970s.' And you could just hit the regenerate button until something is, 'Wow, that's actually pretty good.'"
It's not so much a possibility as an inevitability, but what about guitar parts? Surely AI can't write a solo that can touch the emotive expression of the greats?
"This is where it's tricky," notes Reid. "Because it's not just about playing superfast notes. It will have to do things like bending a note and doing a vibrato, that's a physical activity. I'm talking about sliding your finger on the fretboard, not even talking using an actual slide. Those are very particular and very personal signature things. But the idea that can't be modelled; I would like to think it can't be modelled, I would like to think so, but [already] now there are plenty of bands that don't have amps onstage.
The genie can't go back in the bottle now, or, as Reid says, "the toothpaste back in the tube", no matter how much certain areas of the music industry dismiss the technology as a fad. AI-generated songs based around popular artists might be a gimmick now, but at the point when they get good, really good, that's where things take a very different turn for music made with minimal human involvement.
"We're creating a situation where we have no idea what the endpoint is," Warns Reid. "The concern for me is what would stop a music streaming service from creating a completely artificial artist, give that person a name and a biography just to see what happens – see if they get likes. Because people are just taking music as a utility for them, as opposed to getting involved with, 'This person's interesting, who are they? Where are they from? What's their biography?' You can create an electronic artist that doesn't have to exist and [the streaming services] don't have to pay anybody, they're paying themselves.
"The idea that we privilege our human uniqueness and that can never be challenged, I think that is really under threat now."
Check out the full podcast above.