A research team at Google has developed NSynth Super, an experimental open source instrument that uses machine learning and neural networks to generate sounds.
The ongoing Magenta project was set up to explore how machine learning tools can help people create art and music in new ways; one of its earlier creations was the NSynth Neural Synthesizer. This uses a deep neural network to learn the characteristics of sounds, and then creates new sounds based on those characteristics.
As part of a bid to make this technology more accessible, the Magenta team has now developed NSynth Super in collaboration with Google Creative Lab. This open source hardware features a touchscreen interface and enables musicians to generate sounds from four different sources.
From 16 original source sounds across 15 pitches, it’s said to be possible to generate more than 100,000 new sounds. One sound source is assigned to each of the four dials, which musicians use to select the source sounds they want to explore between. Using the touchscreen, it’s then possible to navigate the new sounds that combine the acoustic qualities of the original four.
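To give a rough sense of how four corner sounds can be blended from a single touch position, here is a minimal sketch of bilinear interpolation between four learned sound embeddings. This is an illustrative assumption, not NSynth Super's actual implementation: the real system interpolates embeddings produced by a WaveNet-style autoencoder and decodes the result back into audio, and the function and variable names below are hypothetical.

```python
import numpy as np

def blend_embeddings(tl, tr, bl, br, x, y):
    """Bilinearly interpolate four corner embeddings.

    tl, tr, bl, br: NumPy arrays representing the embeddings of the
    four source sounds placed at the touchscreen's corners
    (top-left, top-right, bottom-left, bottom-right).
    x, y: touch position in [0, 1], where (0, 0) is the top-left
    corner and (1, 1) is the bottom-right.
    """
    top = (1 - x) * tl + x * tr        # blend along the top edge
    bottom = (1 - x) * bl + x * br     # blend along the bottom edge
    return (1 - y) * top + y * bottom  # blend between the two edges

# Toy 2-dimensional "embeddings" standing in for learned sound vectors.
tl = np.array([1.0, 0.0])
tr = np.array([0.0, 1.0])
bl = np.array([0.0, 0.0])
br = np.array([1.0, 1.0])

# Touching a corner returns that corner's sound unchanged;
# touching the centre returns an equal mix of all four.
corner = blend_embeddings(tl, tr, bl, br, 0.0, 0.0)
centre = blend_embeddings(tl, tr, bl, br, 0.5, 0.5)
```

In the real instrument, a blended embedding like this would then be passed through a neural decoder to synthesise the hybrid sound.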