Jason Lopez: The idea of a computer writing music in the style of a composer is not new. Twenty-five years ago David Cope, a University of California Santa Cruz music professor, now retired, produced music with a program called Experiments in Musical Intelligence, also known as EMI, which he had been developing since the early 1980s. Among the composers EMI was learning from? Bach.
David Cope: The reason I chose the inventions as sort of an initial output was because most of us who've taken piano lessons as children ended up struggling with these little contrapuntal nightmares because they are difficult, but they pose wonderfully interesting physical challenges and teach certain techniques that are indispensable, that no other pieces seem to do in quite the same way. What I want the project to do is to teach me about musical style and hopefully produce some good music that I like to listen to. Because I think that's a very, very important quality.
Jason Lopez: Cope developed the program to the point of releasing albums of computer-generated music. The originality of EMI’s output depends on what you program into it.
David Cope: You need a minimum of two works of the type you want to emulate. The more works, the more unique or individual the output will be. And the fewer works, the more the results will be like one of the works you input. So therefore if you put two works in, there's a good chance that you will recognize, pretty clearly, some of the musical ideas relevant to one of the works in the database. If you put in a hundred works, the chances are remote that you'll be able to draw immediate connections by ear, at least to any one of the works that appear in the database.
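Cope's point, that more input works make the output harder to trace to any single source, can be illustrated with a toy sketch. This is not Cope's actual algorithm (EMI's recombinance analysis is far more sophisticated); it is a minimal first-order Markov model over hypothetical note sequences, showing how adding works multiplies the candidate continuations for each note:

```python
import random

def build_model(works):
    """Map each note to every note that follows it across all input works."""
    model = {}
    for work in works:
        for a, b in zip(work, work[1:]):
            model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, rng):
    """Walk the model, picking a random learned continuation at each step."""
    out = [start]
    while len(out) < length and out[-1] in model:
        out.append(rng.choice(model[out[-1]]))
    return out

# Hypothetical "works" as note-name sequences (illustrative only).
works = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "D", "C"],
    ["G", "F", "E", "F", "G"],
]
model = build_model(works)

# With three works in the database, "E" already has four possible
# continuations, so a generated line resembles no single input closely.
print(sorted(set(model["E"])))  # prints ['C', 'D', 'F', 'G']
print(generate(model, "C", 8, random.Random(0)))
```

With only one or two works in the database, most notes would have a single continuation and the output would largely replay the input, which matches Cope's observation about small databases.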
Jason Lopez: By the early 2000s, Cope had developed EMI to such a point that he renamed the program Emily Howell and released several albums of music under her name. At the end of this podcast, we’ll play music by EMI as well as by Bach and see if you can tell the difference.
Ahmed Elgammal: Can we teach AI to learn from classical composers, learning how to take a motif like that and develop it into a whole segment in a movement? And then, with the second motif, develop the second segment?
Jason Lopez: Aside from being a computer science professor, Elgammal is the developer of an AI program called Playform, a collaboration tool for artists. It makes AI programming accessible by allowing creators to build and train their own AI. His reason for completing Beethoven's 10th Symphony goes beyond being a classical music lover: it was to advance the capabilities of artificial intelligence.
Ahmed Elgammal: There's a lot of things that need to be done in order to be able to take scores that are very fragmented and make a whole movement or two movements. To write minutes of music out of that, it's a big challenge, a lot has to really be done.
Jason Lopez: That to-do list was a bit daunting. They had to feed the AI Beethoven's styles of writing forms like scherzos or trios. The way he develops melodic lines and harmonizes them. Counterpoint. His use of repeats, bridges and codas. And then there's how Beethoven likes to divide the parts up among instruments in orchestrations. To take scant material, just a handful of fragments, required far more than filling in the blanks. It meant the AI had to learn what Beethoven would do.
Ahmed Elgammal: So, Beethoven just left small phrases or motifs. Exactly, like when you hear the 5th Symphony and there are these four notes beginning the symphony. These four notes Beethoven would take and develop into a whole movement.
Jason Lopez: AI finishing Beethoven's 10th demonstrates something very powerful: the process itself. It was not a matter of feeding a few scraps of music into a computer, hitting a button, and out pops a symphony. The first results from the AI didn't sound like what Beethoven would do. It took working with machine learning, iteratively, to develop the piece.
David Cope: Computers were made by human beings… …not so mysterious.
Jason Lopez: Remember the scene, the dawn of humanity, in 2001: A Space Odyssey? The primates discover tools for the first time… bones lying on the ground which they pick up as weapons. One of the apes, victorious, throws it into the air. As it descends back to earth the image jump cuts millions of years into the future, replaced by a spaceship headed for a space station. A none-too-subtle message that a jawbone and a space station are basically the same thing. Add to that, AI. In computer speak, AI thinks. But that’s jargon. It no more does actual thought than books actually remember information. Many technologists would say, if intelligence is defined as what humans do, then…
Wendy Pfeiffer: There is no artificial intelligence in the world.