The conflict of man versus machine is as old as humankind. However, the ever-accelerating technological advancements of the last decade have put the topic at the forefront of public debate. Will artificial intelligence replace human brains? Will virtual reality take over tangible reality? We don’t know yet. In the meantime, some people are choosing to work ‘hand in hand’ with the machines, like British duo Emptyset.
James Ginzburg and Paul Purgas released Blossoms earlier this year, an album that, as they explain, “was generated entirely from the output of a neural network-based artificial intelligence system”. After they trained the software with hours of improvised recordings as well as their own older tracks, the system generated a wealth of sounds that, thanks to Emptyset’s human action and touch, have become ten songs – whose titles, just like the album’s, are related to nature and flowers: Petal, Pollen, Bloom or Filament, among others. As they explain, “the record itself is a biomimetic phenomenon. A machine enacting a biological process in an entirely synthetic way.” Today we speak with them about the future of music and technology, the challenges of performing an AI-created album live, and what keeps them excited about the future.
As you describe on your website, Emptyset is a project that “examines the material properties of sound and its correspondence with architecture, performance and physical modes of production”. So I guess you don’t define yourselves as ‘just’ musicians, right? How would you describe what you do?
Music is certainly what brought us together, although the overlaps between the other disciplines we are interested in are what have kept the project engaging for us over time. Broadly considering the possibilities of the sonic has been the central focus of how we approach our output, whether in its more relatable and legible forms or in more ephemeral and abstracted iterations. So often, we are navigating the edges of what is agreed to be sound and what could be determined as music, and this is territory we continually find ourselves working within.
After several acclaimed albums, you’ve released another one: Blossoms. This one in particular differs from the rest as it “was generated entirely from the output of a neural network-based artificial intelligence system”. Basically, you had some software do the work for you! (Laughs) In what ways was the creative and production process of this album different from, and similar to, previous ones?
The process of making this album spanned two years, the first year and a half of which was occupied with working with various experts in the field of applying neural networks to the synthesis of sound. It wasn’t until this past April that we had a system capable of creating results interesting enough to feel that we could build a body of work from it. The next step was working out what to train the system with – we took the approach of training it on our back catalogue and adding three days of improvised recordings that we performed on wood and metal pieces.
The logic was to teach it a sense of the sonic aesthetic of our existing material and then to cross-pollinate that with the kind of performance approaches we have taken in projects such as Skin, and with the large-scale instruments we designed for a performance at the David Roberts Art Foundation in 2017. We did this with the idea that it would avoid the problem of the system simply reproducing the average of all Emptyset tracks – we wanted to induce variation and push the system to create a new iteration.
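The duo don’t specify their tooling, but the corpus-building step they describe – slicing back-catalogue tracks and long improvised recordings into short training excerpts – could be sketched in Python roughly as follows. The directory names, the clip length and the use of the soundfile library are illustrative assumptions, not details from the interview.

```python
# Minimal sketch of assembling a training corpus from long recordings.
# Paths, clip length and file layout are hypothetical.
import os
import soundfile as sf  # pip install soundfile

CLIP_SECONDS = 4                # short windows, echoing the 1-5 s outputs
OUT_DIR = "training_clips"

def slice_recording(path: str, clip_seconds: int = CLIP_SECONDS) -> None:
    """Split one long recording into fixed-length mono clips."""
    audio, sr = sf.read(path)
    if audio.ndim > 1:          # mix multi-channel material down to mono
        audio = audio.mean(axis=1)
    step = int(clip_seconds * sr)
    base = os.path.splitext(os.path.basename(path))[0]
    for i in range(0, len(audio) - step, step):
        out = os.path.join(OUT_DIR, f"{base}_{i // step:05d}.wav")
        sf.write(out, audio[i:i + step], sr)

os.makedirs(OUT_DIR, exist_ok=True)
# Back catalogue plus the improvised wood-and-metal sessions:
for source in ("back_catalogue", "improvised_sessions"):
    for name in os.listdir(source):
        if name.lower().endswith(".wav"):
            slice_recording(os.path.join(source, name))
```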
Once you trained it, what was next?
Once we trained the neural network, it spent a few weeks thinking and outputting sounds. We fine-tuned the models, and by early June, the system started to create surprising results. Each output was one to five seconds long, and we ended up with 100 hours of material. Sifting through it all and working out how to create a listenable experience took quite a lot of time, and we had to enter into the strange logic of the system to find patterns in the audio that we could consolidate into tracks.
At the same time, because the system had its own sense of musicality, the results could be quite extreme, so there was a process of working out how to control the audio. In a sense, the production side of the album didn’t differ much from our previous work – whether working with structures made from microtonal sine waves and noise, with long-distance radio transmissions on Signal, or with instruments we designed on Borders – though here, the process of arriving at the raw material was certainly an entirely new experience.
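To give a sense of scale: 100 hours of clips at a few seconds each comes to somewhere in the region of a hundred thousand files. The band sifted this material by ear, but a crude automated ranking – say, by loudness – is one imaginable way to surface the most extreme outputs first. The sketch below is purely illustrative; the nn_outputs directory and the ranking criterion are our assumptions.

```python
# Illustrative sifting aid: rank generated clips by RMS loudness so the
# most extreme outputs can be auditioned first. Not the band's actual tool.
import os
import numpy as np
import soundfile as sf

def rms(path: str) -> float:
    """Root-mean-square level of one clip, a rough proxy for intensity."""
    audio, _ = sf.read(path)
    return float(np.sqrt(np.mean(np.square(audio))))

clips = [os.path.join("nn_outputs", f)
         for f in os.listdir("nn_outputs") if f.endswith(".wav")]

# Print the twenty loudest clips as a starting point for listening.
for path in sorted(clips, key=rms, reverse=True)[:20]:
    print(f"{rms(path):.4f}  {path}")
```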
There are several artists and collectives working together with artificial intelligence software: from painters to creative coders, musicians and filmmakers. Until now, many believed that ‘machines’ could never replace human creativity and inspiration. Were they wrong? How do you see the increasingly important role of intelligent software and devices in the arts?
We are currently entering a new phase of artificial intelligence where we will find it harder and harder to tell the difference between what an intelligent system and a human have created. By the end of working on our album, the system was producing results that were unusable because they were too convincing – they sounded too much like a literal cross between the source materials, so they lacked the fascinating and unsettling component we were interested in, this idea of a sound emerging into being. That quality came from hearing the system arrive at an understanding and reveal its process, rather than from the more polished products of its final conclusions.
While systems will need a training data set to create new work, we as a species have produced so much media that everything is in place to provide systems with all the material they need to learn and create new works. Whilst there will initially be weaknesses, the systems will soon learn how to create better systems and ultimately how to create their own data sets to learn from.
The songs on the album have a dark, distorted, uncanny, even unsettling vibe. The artificial intelligence system created them after hours of extensive audio training: you provided it with lots of material – a combination of existing sounds and ten hours of improvised recordings using wood, metal and drum skins. In making this selection, did you more or less calculate the outcome?
The process we used was something of an educated guess, as this technology had only just emerged. At the time, we had no sense of what it would sound like. The systems available up to that point were not sonically engaging for us, or even capable of producing high enough audio quality to be usable. It was only in April this year that we found a system that could realise what we wanted and that seemed like a viable foundation for developing Blossoms.
From there, after a few rounds of tuning the system to create more directed results, it was down to the neural network to create the sonic outputs, and down to us to interpret and structure them into ‘music’. We were both extremely surprised by what came out of the system – it was equal parts fascinating to watch and frightening in its implications.
After taking part in festivals such as LEV and Unsound, known for their very well-curated line-ups of avant-garde artists, I’m curious to know how you face the challenge of performing live the songs that a piece of software has created.
This has been one of the biggest challenges of manifesting this record live, as much of the software requires amounts of computing power and rendering time that are, at present, impossible to achieve in real time. But while the machine produced the raw sonic output, we structured and arranged it into tracks, so the act of performing live can still be shaped in the areas we have worked in before – live arrangement, dynamics shaping and effects. We are currently keeping our ear to the ground for new developments in real-time neural network systems, as once this technology becomes accessible, it would be an exciting element to explore within a live context.
The entire album has a floral theme: its title is Blossoms, and the songs have names like Petal, Bloom, Bulb, Stem or Pollen. Do you draw any parallel between the birth and blossoming of flowers and the birth and blossoming of new technologies and software? Is it a way to reconcile the natural world with technology?
The thematic approach of the album makes a direct link to nature and biological processes, but equally, the record itself is a biomimetic phenomenon – a machine enacting a biological process in an entirely synthetic way, its behaviour at times even mirroring the developmental behaviour of cognition and understanding that seems familiar from animals and humans. Growth, development and expansion are processes that recur across both nature and technology, so we wanted to acknowledge that, whilst it was likewise very apparent that these obscure and abstracted sonics certainly didn’t feel natural or organic in origin. So there was a certain play on this territory of harmony and contradiction within the approach to titling.
Cutting-edge, avant-garde, pioneering: your work is always at the forefront of musical creation, and year after year, you keep pushing the boundaries further. What keeps you going? And what are some of the most exciting technologies, devices or software you’re working with or exploring right now?
We enjoy the process of learning and applying that learning to creating work. Fascination with whatever we are working on, or with, is the primary engine pushing the project forward. That is often incremental – we made this album because we had an idea; if we didn’t, we would leave things fallow until we did. A sense of urgency and curiosity is what drives us to make work.
At the moment, most of our focus is on the continuing development of the live version of Blossoms, but in terms of future work, we will see what new ideas appear in the coming years. We are not technologists, and we work with old media as much as new; it’s more a question of whether something – a concept, principle or technology – creates the possibility of expanding on the work we have already done and offers a potentially inspiring voice for our methodology.
In addition to this release, you’ve been working quite a lot individually: curating festivals, speaking at conferences and participating in talks, releasing solo works, writing for books, running record labels, etc. So, when you’re not working – does that happen at all? –, what do you like to do in your free time? Being so involved in the creative and artistic fields, is there any skill of yours that remains unknown to the public, like cooking or gardening?
We are both working on many projects simultaneously, but in reality, most of them revolve around spending time with people we like and enjoy collaborating with. We also share a common recreational interest in watching supernatural films, making curry and hanging out with a cat called Parsnip.