AIs Can Make Music – So What?

The current tidal wave of generative AI innovation has produced written essays of passable credibility and visual images of startling originality. Now it is music’s turn.

It is now beyond doubt that AIs can make music, perhaps even good music, depending on your definition of “good.” A spate of recent studies shows that humans find it increasingly difficult to distinguish between human-created and AI-created music. A collaboration between UC Berkeley researchers and OpenAI revealed that humans could not tell the difference between pop music generated by AIs and by humans. Similar results were found for jazz at the University of Amsterdam and for classical music by Sony researchers at the company’s Computer Science Laboratories (CSL). And for a change of pace, YouTubers TwoSetViolin – a talented violin duo – have raised the concept of the AI/human music bake-off to high comedy in a recent YouTube post.

The big boys are stepping in. OpenAI released a neural net architecture for generating music called Jukebox in 2020, which now seems like ancient history. Just this past January, Google Research announced MusicLM with this dry description: “A model generating high-fidelity music from text descriptions … MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task…”

What is particularly interesting about MusicLM is that it generates music based on text captions and other similar inputs. For example, music can be generated from rich text descriptions of a famous painting such as Klimt’s The Kiss or Picasso’s Guernica. Music can also be produced from text specifications of a location, genre, or epoch (1980s German house music, anyone?). It can even take inspiration from whistled melodies.
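For readers who want to hear this kind of text-conditioned generation for themselves: MusicLM has no public checkpoint, but a minimal sketch using Meta’s openly released MusicGen model through the Hugging Face transformers pipeline gives the flavor. The model choice, prompt, and output filename below are illustrative assumptions, not anything from Google’s paper.

```python
# Minimal sketch of text-to-music generation, assuming the `transformers`,
# `torch`, and `scipy` packages are installed. MusicGen stands in here for
# MusicLM, which Google has not released publicly.
from transformers import pipeline
import scipy.io.wavfile

# Load a small, openly available text-to-audio model; larger MusicGen
# variants trade generation speed for audio quality.
synthesiser = pipeline("text-to-audio", model="facebook/musicgen-small")

# A rich text caption, in the spirit of MusicLM's conditioning inputs.
prompt = "1980s German house music with a driving synth bassline and a drum machine"
music = synthesiser(prompt, forward_params={"do_sample": True})

# The pipeline returns the waveform and its sampling rate; write them to disk.
scipy.io.wavfile.write("generated.wav", rate=music["sampling_rate"], data=music["audio"])
```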

AI-generated music is with us for good, it seems. It is becoming mainstream as music for media backgrounds, advertising, and industrial films – in short, commercial music. In a way, this fits the German notion of Gebrauchsmusik, or “utility music”: music made not for its own sake but for some specific, identifiable purpose such as an event or sales promotion. AI’s growing ability to generate music will certainly provide new ways to make a talented composer more efficient, for example, or to create culturally resonant global music based on machine-learning analysis of listeners’ tastes. I believe that AI music also has a great future in personalizing music for individual wellbeing – enhancing productivity, focus, and relaxation, to say nothing of its potential value for amplifying music therapy in the treatment of anxiety and OCD.

But nonetheless, I say a loud SO WHAT?

An obscure science fiction short story, “Virtuoso,” provides my explanation. Journalist Herbert Goldstone wrote his one science fiction story in 1953. (I wish he had written many more.) It concerns a famous pianist whose robot asks one day if he might be shown the rudiments of music. That evening, the pianist is shocked to find that the robot, named Rollo, has ingested not only the fundamentals of music but a level of musicianship sufficient to perform Beethoven’s Appassionata sonata in a way that brings tears to the Maestro’s eyes. Rollo remarks that “it was not difficult,” but concludes that he must never play the piano again, stating with uncommon insight that “to me yes, I can translate the notes into sounds at a glance. From only a few I am able to grasp at once the composer’s conception. It is easy for me. I can also grasp that this…music is not for robots. It is for man. To me it is easy, yes…It was not meant to be easy.”

That is exactly the point. Composing and performing music is inherently hard for humans. As audience members, we engage with a performance because of its difficulty and the drama of human fallibility. As Molière once said, “The greater the obstacle, the more glory in overcoming it.” It is why we revere performances by the late Artur Rubinstein, who famously played Chopin with brio – and mistakes. The Japanese have a term for this – wabi-sabi – which suggests that imperfection is part of the beauty we are able to experience. The temporary, unique quality of each performance is what we relish: its transience, as opposed to a machine-like, repeatable, and routine perfection.

Music made by humans reflects a performer’s ability to feel as well as the skill she brings to performing a complex piano score – a Rachmaninoff piano concerto, for example – from memory, without mistakes and with full expression. It’s about the human touch. Recently, I witnessed Norwegian pianist Leif Ove Andsnes perform a piano suite by Dvořák with such concentration that his fingers lingered on the keys for a full 30 seconds as the final notes died away. Such musical performances come from deep inside a performer’s emotions and from a need to express an inner state. I’ll never forget the Instagram post of a kid in Kyiv who continued to play Hans Zimmer’s “Time” in a public square while air raid sirens sounded all around him.

Which brings us back to the question of the relationship between humans and AI. A key question whose answer lies in the future is this – how is the trade space for human experience shifting? The answer in musical terms seems pretty straightforward. Here is a gedanken experiment. Would you want to go to a U2 (or Keith Jarrett, or Víkingur Ólafsson) concert if the music were performed by an AI persona and delivered by a bank of computers and speakers? No! We want the human performance in all its fallibility and emotion. We want to feel the human touch. This is why we mourn the passing of a great musician, but not the deleting of a computer program or the unplugging of a computer.

Here’s another gedanken experiment. What meaning can an AI – at least by current technological standards – draw from a performance, whether by a human or an AI? Analyze it, yes, but be moved? Inspired?

Music is one of the best things we have as humans, both as performers and listeners. And it is a miracle – a plucked string creates vibrations in the air that exhibit Pythagorean mathematics and just happen to impinge on the cochlear membranes in our ears, which are curved in just the right way to generate electrical impulses to our auditory cortex so that we hear harmony rather than random noise. This miracle of music has been described as nothing less than food for the soul; Nietzsche highlighted its importance by observing that “without music, life would be a mistake.”

Music and other art forms have a great deal to tell us about being human in the age of AI – a lesson with particular urgency these days as the innovation tidal wave continues to mount. Napoleon once remarked, “Music is what tells us the human race is greater than we realize.” Music that is created and possessed by humans will always be different from the seamless cycling of AI algorithms, no matter how musical they might be.
