If you’re a music tech nerd, guitar enthusiast, or just an inquisitive listener, you’ve probably heard of Rick Beato: the silver-haired YouTube personality is one of the most notable figures in a particular corner of the internet that’s devoted to unpacking how music works and how it’s made.
With over three million subscribers, Beato’s one of the most - if not the most - popular online creators focused exclusively on music-making. His channel covers a variety of subjects within the broader spheres of musical performance, composition, theory, production and education, and his videos are, on the whole, thoroughly informative and entertaining.
Though the majority of his content exists to educate, Beato isn’t shy about sharing his own personal take on modern music and the tools used to make it. In his more opinionated video essays, the creator’s previously explored topics such as why he believes Gen Z “doesn’t care about music”, how “computers ruined rock music” and why a scroll through the world’s top ten viral songs should tell you that we’re “doomed”.
The latest phenomenon to enter Beato’s crosshairs is Auto-Tune, the famed Antares plugin that alters the pitch and formants of vocal recordings. In a video essay entitled “How Auto-Tune DESTROYED Popular Music”, Beato lays into the plugin, claiming that the ubiquity of vocal pitch correction will make it harder for future audiences to judge the authenticity of ‘fake’ pop music generated by AI-powered tools.
The title of Beato’s video suggests that he’s going to explain how Auto-Tune has “destroyed” popular music. His actual explanation, though, centres on how he believes Auto-Tune could lead music to be threatened by AI in the future. So… has Auto-Tune destroyed popular music, or not? We’d argue that Auto-Tune hasn’t done anything of the sort, but instead radically changed music for the better.
Beato begins by recalling the first time he heard Auto-Tune, on Cher’s landmark 1998 single “Believe”. “Little did I know that 24 years in the future, that effect would be on virtually every pop song, every hip-hop song, almost every rock song,” Beato continues. “It’s so invasive in contemporary music.” He goes on to run through noticeably Auto-Tuned songs, pointing out heavily processed vocals in Maroon 5’s “She Will Be Loved”, The Kid LAROI and Justin Bieber's “Stay” and BTS’s “Black Swan”.
Beato argues that the impending AI-assisted annihilation of the music industry will be caused, in part, by the fact that listeners can no longer “notice when something has been digitally altered”. After sharing an excerpt of Luis Fonsi’s reggaeton hit “Despacito”, Beato observes: “it sounds very robotic. I can completely imagine a computer creating this thing. This is one of the things that heavy use of Auto-Tune has done.”
It’s not just Auto-Tune that Beato believes is threatening the future of popular music, but also beat correction, quantization and drum programming more generally. “I’ve harped about this on my channel for years [...] I’ve talked about the dangers of music becoming completely computerized and lacking humanity,” he says. (Has anyone told Rick about electronic music?)
“If you use Auto-Tune so much that the voice sounds robotic and it sounds like a completely synthesized voice, then when AI creates that synthesized voice, people aren’t going to be able to tell the difference,” Beato continues. “Computers can easily do this. They will get better at this over time. The general listener doesn’t know the difference, and they frankly don’t care. And I don’t think they’re going to care when musicians and songwriters are replaced by AI.”
We certainly share Beato’s concerns about where the unbridled advancement of AI-powered creative tools could lead us in the future. The question of how this technology might one day threaten the livelihoods of musicians, producers and artists across all disciplines is a vitally important one. That train, however, has unfortunately left the station, and it was always going to do so, whether Adam Levine used Auto-Tune or not.
If the sophistication of AI tech continues advancing at its current pace, there's no doubt it'll be able to fool audiences into believing AI-generated vocals are real. This will happen regardless of whether audiences have been desensitized to fakery by Auto-Tune, or whether they’ve grown accustomed to digitally altered music more generally. The urgent question isn't how we'll be able to spot the fake voices when they appear, but how we'll be able to ensure the real ones remain employed.
Besides, the scenario that Beato imagines isn’t some distant future - it’s right around the corner. Today’s tech already comes within a hair’s breadth of imitating the human singing voice. Though not yet sophisticated enough to fool a casual listener, mass-market plugins like Emvoice and the technology behind them will only become more advanced, and there’s little doubt that soon, we’ll be able to generate entirely convincing synthesized voices in our DAWs that can hoodwink even the most attentive ear.
Artist and academic Holly Herndon has already trained an AI to mimic the sound of her singing voice, a feat which she demonstrated in a mind-blowing TED Talk last year. (The AI has since covered Dolly Parton's "Jolene".) We’ll leave it to you to decide whether you think it’s a credible imitation, but to our ears, it’s pretty damn close. Only a whisker of artificiality remains between this and the real thing. If that’s what AI can do now, then it’s not hard to imagine where the next decade might lead.
The real problems with Beato’s argument become apparent when he begins to talk about T-Pain, the pioneering rapper, songwriter and producer known for his liberal use of Auto-Tune. “There are artists that have built their entire career on Auto-Tune, like T-Pain. He even has his own Auto-Tune plugin,” Beato remarks, opening up a video of T-Pain's Tiny Desk Concert to showcase his natural singing voice. “The amazing thing about T-Pain is that he can actually sing really well, you just rarely hear it.”
Here, Beato misses the point to an outrageous degree. T-Pain didn’t start using Auto-Tune because he was struggling to hit the right notes. He used it to develop his own sound, in order to express his musical ideas more effectively - the same reason someone might pick up any other instrument. In doing so, he unwittingly sparked one of the most influential trends in modern music. This hybrid of singing and rapping, processed heavily with Auto-Tune, influenced countless artists working in hip-hop, trap and R&B, inspiring everyone from Kanye West to Lil Wayne.
When T-Pain first heard Auto-Tune in the Jennifer Lopez song “If You Had My Love”, the plugin wasn’t widespread in the production community. The rapper wasn’t exactly sure what he was hearing, but he knew he had to have it. The song sparked a two-year search for the technology behind the sound he’d come across. “I’d get these cracked CDs of plugins from my friends, and I’d go through every preset, and go ‘One of these has got to be it’,” T-Pain told Berklee students in a 2020 workshop.
When he discovered Auto-Tune, the artist was still finding his own creative path and searching for a way to differentiate himself from other rappers. “I was getting drowned out by the sound of everyone wanting to be a rapper [...] I was like, 'Man, I’ll never be heard if I’m just that voice in the crowd. I gotta be that one that stands out.’” Auto-Tune helped T-Pain do just that. “Nobody had really heard Auto-Tune the way I heard it,” he continues. “Not to say that it hadn’t been done, it just hadn’t been done the way I did it.”
By processing his voice, T-Pain found his voice. What Beato doesn’t acknowledge is that while the technologies he’s taken issue with - Auto-Tune, pitch and timing correction, drum programming - might be shaping a future in which the kind of music he personally likes to listen to is less popular and less prevalent, they’re also being used by new generations of artists and producers to expand and augment their creativity. Artists are making music that could never have been made without these tools, and audiences are responding to it.
Beato’s expectation that only those who cannot sing would want to use Auto-Tune represents a vast failure of the imagination. “The Auto-Tune plug-in has become one of my most treasured creative tools,” classically trained vocalist and electronic musician Eliza Bagg told us in a recent interview. When she’s not soloing with the New York Philharmonic, Bagg records experimental vocal music as Lisel. Her most recent album, Patterns for Auto-Tuned Voices and Delay, is made up of vocal improvisations processed with a variety of software, including Auto-Tune.
A virtuoso vocalist, Lisel uses Auto-Tune to expand the potential of her chosen instrument, her voice. “I developed a vocal processing system that allowed me to change the idea of what my instrument is,” she says. “Now, what begins inside my body and continues on the computer is one process, and the ideas that result from it are my instrument. I feel like I’ve developed a new instrument, that is my voice in relation to the various Auto-Tune plugins and everything they do.”
Perhaps the most striking example of an artist repurposing Auto-Tune to augment their sound can be found in the music of Bon Iver. While recording their third album, 22, A Million, frontman Justin Vernon and co-producer Chris Messina developed a vocal processing method nicknamed ‘The Messina’. Based around Auto-Tune, the set-up allows Vernon to pitch-shift and layer his vocals into harmonies played via keyboard, all in real time. The resulting effect played a fundamental role in defining the sound of the entire project.
In 2016, Frank Ocean chose to use Auto-Tune on “Nikes”, the lead single from his era-defining second album Blonde, prompting fans to theorize that the effect was intended to create an alternative high-pitched character that sings lyrics from the perspective of his younger, inexperienced self. Auto-Tune isn’t just modulating voices; artists are using it to explore new versions of themselves. It’s even helping some explore their experiences surrounding gender identity through their creative practice.
A number of transgender artists, including Katie Dey, Lyra Pramuk and the late SOPHIE, have used vocal manipulation as a means of reckoning with identity in their music. Many trans people choose to undergo voice therapy and/or voice surgery as part of their transition. For these artists, vocal processing can serve as a means of examining, through their work, the complex meanings and associations of the voice and how it relates to their gender identity. By dissolving the binaries of male and female, technologies like Auto-Tune allow them to render the lived experiences behind their music audible in sound.
Though it’s since been reoriented in new creative directions, let’s not forget Auto-Tune was originally developed to correct out-of-tune vocals. While Beato believes this might have prompted the demise of musicianship, others would argue that it’s played a democratizing role in empowering artists who weren’t blessed with natural vocal talents. One such artist is Preston-based Rainy Miller, who records saturnine alt-R&B that's centred around his Auto-Tuned voice.
“I didn’t want to rap, and I couldn’t really sing, so it’s something I kinda fell into,” Miller told Loud & Quiet when asked about his reasons for adopting the technique. “It feels really central to what I do now. I’m not musically trained, I’ve never had money to buy loads of instruments, and I wasn’t raised by a musical family. But to me, that shouldn’t stop you from making music if that’s what you want to do.” Auto-Tune didn't just help Miller find his voice, it gave him one.
It’s fair to say that Beato may not have had the artists we’ve mentioned here in mind when he said that Auto-Tune destroyed popular music. If we’re using a narrow definition of popular music, then it might not encompass some of the more innovative approaches we’ve surveyed. The point, though, still stands. How much better off would modern music really be if Luis Fonsi hadn’t used Auto-Tune in “Despacito”? And if these Auto-Tuned songs have sold copies in their millions - which they have - who are we to say they should sound any different?
Auto-Tune may have been invented to correct off-key vocal recordings, but it’s since taken on a life of its own, and changed the sound of contemporary music for the better. The majority of artists aren’t using the plugin to fix up their vocals, but instead making an artistic choice to use Auto-Tune to transform their sound. Like so many other creative tools, Auto-Tune has been far removed from its original purpose, reimagined by musicians and producers dreaming up new ways to express themselves.
As new technologies are developed, established paradigms are threatened, and people will invariably catastrophize about the looming danger they believe that change represents. When the synthesizer was invented, we heard a similar outcry. Angry musicians protested concerts that featured the instrument, and the Musicians’ Union even passed a motion to ban synths among its members, driven by the fear that they, too, might be replaced by machines.
But if new techniques aren’t given the space to flourish, if new tools aren’t explored, exploited and repurposed, then how will music evolve? Happily, synthesizers have endured, finding a harmonious place in our musical world. Auto-Tune, like the synth before it and the AI-powered tech to come, will continue to help musicians and producers do what they do best: express new ideas, invent new genres, and discover new sounds. Exactly what making music should be all about.