Ethnomusicology in the Uncanny Valley: Jennifer Walshe and the age of AI


by Andrew Chung

Jennifer Walshe, extended-vocal-technique extraordinaire, has been performing and speaking at Darmstadt on and off since 2000 – nearly twenty years now. “When I was ten,” she said with a wink during this year’s lecture. Walshe is an artist whose practice can seem ageless: constantly ahead of the curve and deeply tuned in to the present – whichever present happens to present itself. Her recent work mines Twitter as a poetic anthology, for instance, or projects into song a popular internet meme of a cheeky and grammatically insouciant shiba inu, in a performance that is somehow more “doge”-like than the doge meme itself. Wow! So linguistic trends! Very syntax! O-M-G!

In her lectures, Walshe tends to collect a handful of themes and send them hurtling towards the whirring, whimsy machine of her aesthetic thinking, like a discursive particle collider experiment generating undiscovered forms of radiation. Her Darmstadt presentations – last time it was her New Discipline manifesto, the time before that her ideas on music, flarf poetry, and cell phone videos in an “extended field” – peek under all sorts of rocks and pause to smell patches of moss even as they travel particular topical paths.

This year, she presented her thoughts alongside some of her most recent work, which engages with artificial intelligence and machine learning. She brought up the robotics pioneer Masahiro Mori, who coined the term “uncanny valley”: that strange zone where beings that partially resemble (healthy) humans cause a special revulsion. This “uncanniness” is the playground of zombies and creepy automata, which, according to the experts, remind us of our own mortality and thereby trigger an innate fear of death. Walshe used the uncanny to name the fears and reservations in popular culture around increasingly sophisticated AI paradigms that threaten to replace humans in all sorts of contexts – from the ethically problematic adoption of computerised policing in the United States, to the unnervingly accurate way Facebook data can be used to predict whether a relationship will collapse in the next two months. Walshe predicted that very soon – within the next forty years, even – there will be AI paradigms capable of creating music indistinguishable from human-made music. The prospect triggers some reflex in me to recoil in horror – and I don’t think I’m alone.

And yet, Walshe thinks AI will yield unprecedented opportunities for artists, if only we can turn our wholesale revulsion into a new fascination. There is an alternative, in other words, to falling victim to now-clichéd affects: the fear of the species’ enslavement by intelligent, Skynet-like machines, or the (much scarier) pragmatic worry that AI will make human workers irrelevant or redundant in factory and office settings. In the meantime, Walshe framed mechanised deep-learning models as strange collaborators with whom to cohabit the uncanny valley. Presenting some of her latest experiments, she demonstrated an AI that Dadabots (CJ Carr and Zack Zukowski) had trained on her own vocal improvisations. Forcing a machine to learn about her music in its own way, they essentially asked it to produce a new Walshe-style aria, in order to hear what the hidden layers of the machine’s furtive workings would cough up. The results demonstrated what she might sound like to the tympanum of the network. “We get to be ethnomusicologists doing our fieldwork in the uncanny valley,” she enthused.

When she finished up her zany, far-reaching talk in the sweltering lecture hall at Darmstadt’s Lichtenbergschule, a hand shot up in the audience for a question. “How do we know that you’re the real Jenny Walshe? How do we know that you’re not just a hologram or an android?” Without skipping a beat, Walshe deadpanned: “because I wouldn’t let the android version of me sweat.”

***

I started to wonder whether, in the prophesied coming of AI-generated art, this sweat is still worth something. Is there still a stubborn exceptionality to the saltiness or viscosity of people’s labour, in an age when humans seem increasingly fungible – and costly – in comparison to machines? I’m no expert in AI, but there’s a thought game I like to play with people who, knowing I’m a musicologist, ask me what I think about the use of machine-learning to analyse musical structure (an active field called computational music theory) and even to compose new musical works. The thought experiment is this: imagine building a bot that can create amazing paintings, indistinguishable from the ones produced by human painters. Now imagine hanging them in a white cube gallery in Soho and releasing them into the art market. What would happen to their values at auction if you revealed they had been made by an AI? I have a hard time believing nothing would happen to the price calculus once collectors caught wind of this revelation. But I would be totally fascinated to see what would happen if a piece written by an AI were programmed at a major music festival under a perfectly human-seeming pseudonym. What would happen in the reception of this piece if the revelation of its origins took place sometime after its premiere?

I wondered whether AI would mean the cheapening of musical labour, and whether its output could even be approached aesthetically in the same way as “human-made” music. In 2009, as an undergraduate, in an otherwise unremarkable fit of desultory Facebook-scrolling procrastination, I happened upon a deep-learning-based generator for jazz piano solos. I recently tried to locate it again, but could only find a great demonstration of the same principle: the deepjazz project by Princeton computer science student Ji-Sung Kim. Trained on a corpus of Pat Metheny songs, deepjazz produces pretty convincing counterfeits, although they play back in plonky MIDI on its SoundCloud page. What strikes me about my reaction to this posthuman jazz piano, both then and now, is how impressed I was – not with the musical result, but with the proficiency of the machine.
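
To give a sense of how such a generator works under the hood, here is a minimal sketch of the general technique: a recurrent network learns to predict the next note from the previous few, then gets fed its own predictions to “improvise”. To be clear, this is not deepjazz’s actual code – the vocabulary size, the window length, and the toy random-walk corpus are all invented for illustration.

```python
# A minimal sketch (NOT deepjazz's actual code) of an LSTM melody generator:
# learn to predict the next note from the previous few, then sample the
# model's own output, note by note, to "improvise".
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB = 48    # hypothetical pitch vocabulary (e.g. four octaves of MIDI notes)
SEQ_LEN = 16  # context window: how many past notes the model sees

# Toy corpus: a random walk standing in for transcribed solos.
corpus = np.cumsum(np.random.randint(-2, 3, size=5000)) % VOCAB
X = np.array([corpus[i:i + SEQ_LEN] for i in range(len(corpus) - SEQ_LEN)])
y = corpus[SEQ_LEN:]

model = Sequential([
    Embedding(VOCAB, 32),                # map note indices to vectors
    LSTM(128),                           # summarise the recent context
    Dense(VOCAB, activation="softmax"),  # distribution over the next note
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Generation: sample a note, append it, and predict again.
melody = list(corpus[:SEQ_LEN])
for _ in range(32):
    probs = model.predict(np.array([melody[-SEQ_LEN:]]), verbose=0)[0]
    probs = probs.astype("float64") / probs.sum()  # renormalise for sampling
    melody.append(int(np.random.choice(VOCAB, p=probs)))
print(melody[SEQ_LEN:])  # 32 machine-"improvised" note indices
```

The uncanny part is that nothing in this loop knows what a phrase, a chord, or Pat Metheny is; whatever style emerges lives entirely in the statistics of the training corpus.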

Is this necessarily something to worry about? Musical AI has already arrived – and arguably it arrived a long time ago. Less obviously techy versions of music-writing algorithms have been with us for centuries: think of Mozart’s musical dice games – an early form of algorithmic composition – or the machine-grade hit factory that is K-pop’s fabulously efficient song-production system. Walshe questioned whether it matters who (or what) writes our elevator music or the tunes in our video games. The answer is that, at least in some situations, it wouldn’t matter much one way or the other whether a machine or a human wrote the music.
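
The dice-game logic, in the version usually attributed to Mozart, is simple enough to fit in a few lines: two dice choose one of eleven pre-composed bars for each of sixteen slots in a minuet. The sketch below uses placeholder indices rather than the historical lookup table, but the algorithm is the same.

```python
# The dice-game algorithm in miniature: for each of 16 bars of a minuet,
# roll two dice and use the sum to pick one of 11 pre-composed measures.
# The table holds placeholder indices, not Mozart's historical one.
import random

N_BARS = 16  # the minuet section of the game is 16 bars long

# table[bar][dice_sum - 2] -> index of a pre-composed measure
table = [[bar * 11 + option for option in range(11)] for bar in range(N_BARS)]

def roll_minuet():
    minuet = []
    for bar in range(N_BARS):
        dice_sum = random.randint(1, 6) + random.randint(1, 6)  # 2..12
        minuet.append(table[bar][dice_sum - 2])
    return minuet

print(roll_minuet())  # one of roughly 11**16 possible minuets
```

Roughly 11^16 minuets lurk in that little table – a combinatorial “composer” centuries before anyone spoke of neural networks.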

In conversation after a performance by the workshop she co-leads at Darmstadt with David Helbich, Walshe suggested to me that there would be little incentive to create AI paradigms to replace human instrumentalists and composers. Art music is enough of a niche product, commercially speaking, that its robotisation wouldn’t be viable, for financial reasons as much as aesthetic ones. Turns out the sweat does have a value. The use of AI to generate music would be its own aesthetic choice with aesthetic ramifications, not just another silent and efficient replacement of the worker.

***

With the increasingly impressive technological advances in neural networks and deep-learning paradigms, it seems that new thresholds are constantly being surpassed. Walshe and I chatted about the latest version of AlphaGo Zero, from Google’s DeepMind division, which trains itself to play go, the abstract board game that originated in China some 2,500 years ago. After only a few hours of training, AlphaGo Zero was able to beat – and I mean really cream – the previous version. This, as Jenny told me, freaked out the scientists.

Amid various dire or excited pronouncements from the code-proficient that the great singularity is coming, or that AI will be the force that necessitates universal basic income legislation, composers and music-makers will simply have to adapt their practices – and their understandings of their roles and agency – to a field in which artificial intelligence is an intelligent player. The future, Walshe predicted, would be less Skynet than a shift in compositional and musical practices towards treating AIs and machines as collaborators. In other words, the composer’s role is to be a curator of what machines can do for us. What she did assert was that musicians should be savvy about the kinds of sophistication that are already possible, and will soon be possible, with machine-learning paradigms.

I should say, in the interests of disclosure, this: the diffident reservations I have expressed here are totally at odds with my philosophical commitments as an academic and musicologist. In my reactions to AI, forms of human exceptionalism keep cropping up. Yet, rationally speaking, I am dedicated to the idea that there is no reason to believe in a strict human–machine dichotomy – that we have always been technological, always been offloading our thinking onto machines to do some of that thinking for us. Why is AI categorically any different from using a calculator as a prosthetic extension of the arithmetic centres of our brains? And yet, for a reason I can’t quite articulate, a suspicious residue remains – one I’m not sure I know how to deal with. It’s a residue that fears for composers in an age threatening to make them expendable, at least for some applications. And if music that draws intensely from a wide range of disparate traditions can be produced with a powerful enough AI and a large enough training corpus – without the programmer needing a deep and considered knowledge of musical history and theory – would that make me, as a musicologist, irrelevant?

***

Walshe told me she thinks one of the exciting frontier spaces in AI research is explainable artificial intelligence, or XAI. In contrast to the black-box paradigms common today, in which the criteria the machines use to make their choices are relatively opaque, XAI would give us the chance to gain insight into how an algorithm thinks. It would let us in on the decision-making behind what, say, it believes constitutes a typical Celtic folk ballad chorus, or why a text-score-producing bot thinks “get a new girlfriend” should form the second movement of the text-score sonata it generates. Walshe told her lecture audience that music generated by AI could be an opportunity to hear styles that humans would never drum up on their own – to hear voices from the future, and to draw on those voices in robustly human practices. Grinning, she ended her lecture: “what a time to be alive!”
