“To my knowledge, no one else has tried a project like this”: Reinier Zonneveld is playing B2B with an AI clone of his “musical brain”

(Image credit: Press/Reinier Zonneveld)

Science fiction novelist Philip K. Dick once asked: do androids dream of electric sheep? More than five decades later, techno star Reinier Zonneveld is asking whether AI models have ever envisioned producing stadium-sized techno.

Science fiction is the reality we’re living in, with rapidly advancing technology now redrawing every aspect of our lives. In electronic music, Reinier is leading the charge with the R² project, an AI model trained on his own “musical brain” that triggered sounds, composed melodies and even physically played instruments as part of his 10-hour headline performance at his R² festival earlier this summer.

The motivations were twofold: to demonstrate AI’s ample creative possibilities and to respond to a climate in which creatives are increasingly fearful of being replaced by this kind of technology.

“The first time I did a show was in 2024. Back then there were all these emerging AI platforms and models that were being trained on copyrighted material without paying the artist,” Reinier says. “This was my way of stating that this is copyright infringement. You can’t just take people’s work and not pay them for it if you’re training a piece of software or machine on it. But it was also to show what an AI model does, to show the creative possibilities.”

The R² 2025 show, performed in front of a crowd of 20,000 at Spaarnwoude in the Netherlands, saw Reinier take his ambitions even further. This time, he performed alongside a hologram to more clearly demonstrate the role of his AI-powered creative partner. More than just another electronic instrument in his live set-up, it’s a genuine collaborator, analyzing the nuances of his performance and responding in real time with its own musical output.

“It’s a cliché to say but techno has always been really forward-thinking: Jeff Mills, Speedy J, all these legends, they made music that no one had ever heard before,” says Reinier. “At first with AI, many people were scared, and understandably, if you have an online application that can generate video, pictures and audio unrestricted based on other people’s work. But I want R² to empower people to see this as something that can help them be incredibly more productive, not replaced.”


(Image credit: Press/Reinier Zonneveld)

Reinier has lived a life deep in the rave trenches since the 2010s. Renowned for his love of experimentation – and a good party – he emerged from a classical background before embracing electronic music, producing and DJing alongside running his own Filth on Acid label, now celebrating its 200th release. Boundary-pushing is an integral part of his make-up, not only within technology but in terms of musical stamina, earning an acknowledgement from Guinness World Records for the longest electronic music live set.

It was in 2019 that Reinier first sparked conversations around the possibilities of machine learning and artificial intelligence, becoming curious about how these technologies could help to shape the blistering acid techno that made up his marathon live sets.

“At the time AI was a blank space, it did not exist in the same way it does now,” Reinier says. “So we started experimenting with inputting music, just playing with the software. Then there was a point during our development where the profile of AI suddenly accelerated and many more people became aware of its possibilities.”

It was then that Reinier and his team decided to “clone his musical brain”; to train an AI model on his extensive archive of recordings and DJ sets. “The idea was to fight against all these emerging AI platforms and models that were being trained on copyrighted material without paying the artist, something I’m very much against,” he states.


(Image credit: Cooper Seykens)

Reinier’s team at the beginning included himself and his programmer Cas. With more than 80 days of recordings available, there was a huge amount of material to draw on when training the AI model.

“To my knowledge, no one else has tried a project like this, partly because I don’t think anyone has access to as much data as we did,” Reinier says. “I’m such a music nerd, I have all of my recordings, even if an idea or track is not great, I always finish something and capture it - which has put me in this unique position of being able to train AI purely through my music.”

Reinier’s team designed three different AI models - one generates random sounds, another can be prompted to produce music in a specific style or tempo, and the third is capable of analyzing Reinier’s own music in real time and using that as a starting point for its own performance. This is the model used in Reinier’s performances today.

“This has an audio input so, as soon as I input a command or prompt on my laptop, it records the audio I’m playing, then when I stop, the audio goes into the AI model that then continues from this point,” he says. “I’m not 100 percent sure where it will go, but the starting point is the music I make in the moment.”

“I’m not 100 percent sure where it will go, but the starting point is the music I make in the moment”

At the core of Reinier’s hardware set-up is a Prism Sound Titan interface, connected to a MacBook running Ableton Live. Sequencing goes mainly through Native Instruments Maschine, sending MIDI via an interface to a Moog Sub 37, Roland SH-101 and TR-909. Once prompted by Zonneveld’s input, R² generates its own MIDI data; this is then sent back to the hardware, where it can trigger synthesizers, drum machines, and more. The AI also ‘plays’ the synthesizers via several custom-built robots equipped with hammers for striking keys.
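The capture-then-continue loop described above can be sketched in a few lines. This is purely an illustrative stand-in, not Reinier’s actual software: the model here is a toy that echoes the captured phrase in the same pitch set, standing in for the real AI, and all names (`NoteEvent`, `ContinuationModel`, `live_loop`) are hypothetical.

```python
# Hypothetical sketch of the "continuation" loop: the performer's phrase is
# captured, then a model generates MIDI that continues it in the same key.
# The model below is a toy stand-in for the real AI, for illustration only.
from dataclasses import dataclass


@dataclass
class NoteEvent:
    note: int       # MIDI note number
    velocity: int   # 0-127
    step: int       # position in a 16-step sequence


class ContinuationModel:
    """Toy stand-in: continues a phrase using the same pitch set
    (i.e. staying 'in key'), with a new accent pattern."""

    def continue_phrase(self, captured: list[NoteEvent]) -> list[NoteEvent]:
        pitches = sorted({e.note for e in captured})
        return [
            NoteEvent(note=pitches[i % len(pitches)],
                      velocity=100 if i % 4 == 0 else 70,  # accent every 4th step
                      step=i)
            for i in range(16)
        ]


def live_loop(captured: list[NoteEvent], model: ContinuationModel) -> list[NoteEvent]:
    # 1) performer plays, notes are captured; 2) on stop, the model
    # generates a continuation; 3) in the real rig this MIDI would be
    # routed out to the Sub 37, SH-101 and TR-909.
    return model.continue_phrase(captured)
```

In a real set-up, the returned events would be sent to the hardware through a MIDI library rather than returned as a Python list, but the record → generate → play-out shape is the same.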

“The AI is integrated into the system through a supercomputer located at the back of the stage. As soon as you let AI do its job, it sounds like you went to the studio and finished your groove – it’s a little layer on top,” Reinier explains. “Sometimes, you can ask it to make a minor continuation, with a little build-up or key change, which can be tricky but is really fun to do. With the continuation model, it will stay in the same key and tempo, but it might introduce a new melody or build-up. It’s like playing with someone B2B and you’re not quite sure what they’re going to do.”

According to Reinier, the main challenge with the model is uncertainty: it’s never possible to be completely sure what it will do. However, much like a human collaborator, it can push you to venture into new creative territories that might otherwise have gone unexplored.

“Using AI like this means it’s more tailored to your style and performance, it’s almost like someone producing on top of what you’re doing,” he says. “However, what we found in 2024 was that the audience was not sure of what was happening on stage. We had this incredible system where it wasn’t possible to work out where I ended and the AI began.”


(Image credit: Press/Reinier Zonneveld)

In 2025, Reinier and his team worked to make R²’s role clearer to the audience, pairing the music with visual elements that embodied the AI in performance. These came in the form of a hologram and a robot developed to physically trigger the synths. As soon as R² was activated during this year’s set, a hologram came to life above the stage.

“I was surprised at how well it worked, as we didn’t have that much time to test it,” says Reinier. “The AI model did have its own channel for the synth, so if I didn’t like it, I could reduce the volume. But it was a really special experience, and really fun. I was very stressed beforehand as there are so many things that could go wrong.”

Reinier’s programmer was at the back of the stage maintaining the computer that had become so hot the previous year that it almost overheated. This year, an air conditioning unit was added to keep it cool, but there were still myriad pitfalls to swerve.

“We knew there were way more things that could go wrong,” says Reinier. “Could the robot last for the whole show? Would the vibrations of the show disrupt it? This was 20,000 capacity so it’s incredibly loud up there.”

“We had the changeover too - so we built the set-up the night before, but then it had to be brought on stage without anything breaking. It was very stressful. Still, as soon as we started playing, it made sense: this was cool.”

On stage, Reinier’s laptop hosts a dashboard which allows him to manipulate as many as 20 AI models trained on different parts of his artistry. One excels at making grooves and is highly percussive, as it’s been trained on the introductions to his extended mixes.

“This model is great at layering extra grooves on top and something I use a lot,” he explains. “It feels like you went into the studio for an hour to finish what you were improvising, so really adding to the quality of what you’ve done - and something that I could not just do with my normal live set.”

“From a robotic perspective, it’s very complex as you have to deal with latency,” Reinier continues. “You have to use these hammers, and they need to be within ten milliseconds, going up and down to press the right notes at the right point on the synthesizers. The next step is sound design live on stage with the synths, then it’s really a mirror of what I’m doing.”
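The ten-millisecond constraint Reinier mentions is essentially a scheduling problem: the trigger must fire early enough to cover the hammer’s mechanical travel so the key lands on the beat. A minimal sketch of that compensation, with entirely assumed numbers and hypothetical function names:

```python
# Hypothetical latency-compensation sketch for hammer robots striking keys.
# The travel and jitter figures are illustrative assumptions, not measured
# values from Reinier's rig.

HAMMER_TRAVEL_MS = 8.0   # assumed time for the hammer to reach the key
JITTER_BUDGET_MS = 2.0   # headroom so total error stays within ~10 ms


def trigger_time_ms(note_on_ms: float) -> float:
    """When to fire the actuator so the key is struck at note_on_ms."""
    return note_on_ms - (HAMMER_TRAVEL_MS + JITTER_BUDGET_MS)


def schedule_triggers(note_times_ms: list[float]) -> list[float]:
    # Emit triggers in playback order, each offset by the travel time.
    return sorted(trigger_time_ms(t) for t in note_times_ms)
```

The design choice here is the standard one for any physical actuator driven by a sequencer: subtract the known mechanical latency from the musical timestamp, and keep a jitter budget so the total error stays inside the audible threshold.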


(Image credit: Press/Reinier Zonneveld)

Many in the music industry are fearful of sophisticated AI models making artists and musicians redundant, but Reinier is hoping for a more positive outcome. “I can envision how it can be used to enhance sounds, mixdowns, music vibes, what you’ve never heard before,” he states. “There is a whole new layer of creativity accessible which we can explore rather than looking to make what has already been made before.”

“However it is important to know that if it is trained on people’s work, then these people should be paid for it, this is a key factor,” he adds. “Without the training data, then AI is worthless - and I’ve seen positive conversations at the European Commission to protect creatives.”

However, as the technology is still developing, there are clearly some challenges for Reinier and his team to overcome. It can be time-consuming to train, sometimes with disappointing results. It also requires a huge amount of processing power, a demand that will ramp up as the system grows in complexity. There are also billions of calculations at play, which can make it challenging to rectify any coding issues.

“With one of the first models we had, it was super good at melodic house and techno with brilliant melodies,” Reinier says. “But we never got to recreate this - we put in more data as we thought it would be a better combo but it didn’t work. You have to remember that this is an ongoing exploration.”

Many emerging producers may be concerned by technological advances, asking why they should try and patiently hone their craft in the studio when tech may have the potential to outdo them. But Reinier believes that producers are underestimating their capabilities by adopting this mindset.

“AI is trained on humans - the AI cannot train itself on itself. The quality only goes down if you try so this human element is essential,” he says. “I wouldn’t be afraid of mastering any skill at the moment and looking to be proficient at something in the studio.”

“On the flip side, there will be people wanting to monetise this; there could be thousands and thousands of AI tracks on Spotify. But there are already steps being made to remove AI music from streaming services. As long as we can be honest with ourselves and artists, then I’m hopeful for a musical future.”

Citing the experimental attitudes of Richie Hawtin and Speedy J as an inspiration, Reinier is excited for what he believes could be a bold new dawn of music creation and discovery. “I’m super happy I met such an amazing programmer who wants to invest time in putting this together,” he says.

“Imagine far in the future you can just link your thoughts to the speakers for example and imagine the music, somehow control and come up with new music without having to touch anything, this is far in the future but there are so many possibilities. In the meantime, let’s try to push things forward: it’s good to explore.”

Click here to pre-register for R² Festival 2026.

Jim Ottewill

Jim Ottewill is an author and freelance music journalist with more than a decade of experience writing for the likes of Mixmag, FACT, Resident Advisor, Hyponik, MusicTech and MusicRadar. Alongside journalism, Jim's dalliances in dance music include partying everywhere from cutlery factories in South Yorkshire to warehouses in Portland, Oregon. As a distinctly small-time DJ, he's played records to people in a variety of places stretching from Sheffield to Berlin, broadcast on Soho Radio and promoted early gigs from the likes of the Arctic Monkeys and more.
