Can artificial intelligence create music successfully? The answer probably depends on how music is defined. Is it an organized combination of sounds and silence, characterized by traits like rhythm and harmony? Or is it defined by its personal and cultural context and its emotional impact? Can AI create an art form that is central to so many lives? Major tech companies like Sony and IBM are experimenting with these questions, and a growing number of artists are using AI music tools. These tools tend to use musical components like style, mood and cadence to classify, co-write and independently author new songs, at times producing both instrumentals and lyrics. Are these efforts successful? Listen below.
The Flow Machines Project, led by Francois Pachet at Sony Computer Science Laboratories and Pierre and Marie Curie University, is an example of one such AI composition program. It is named after “flow,” a term coined by the Hungarian-American psychologist Mihaly Csikszentmihalyi for the state in which a creator is fully absorbed in their work and highly productive. The machines aim to facilitate, enhance and prolong this experience for artists. As the project site puts it:
The Flow Machine Project is about designing and implementing the next generation of authoring tools, in particular for musical composition and literary text writing. These tools intend to boost individual creativity by helping users create their own style.
The research team works from the principle that individual creative works are not usually unique, but styles are. Flow Machines uses the concept of musical style as an avenue for collaborating with human musicians, learning many styles from songs in a large database.
The project treats style as “a tangible object,” one that is both “faithful and flexible.” As computational objects, musical styles can be explored, combined and altered to meet a user’s preferred constraints and configurations.
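The notion of a style as a computational object that can be learned, combined and sampled can be sketched in a few lines. This is not Flow Machines’ actual algorithm; it is a minimal illustration, assuming a simple Markov-chain view in which a “style” is just a table of note-to-note transitions learned from songs:

```python
import random
from collections import defaultdict

def learn_style(songs):
    """Build a transition table: for each note, the notes observed to follow it."""
    table = defaultdict(list)
    for song in songs:
        for a, b in zip(song, song[1:]):
            table[a].append(b)
    return dict(table)

def blend_styles(style_a, style_b):
    """Combine two learned styles by pooling their transition candidates."""
    blended = {}
    for note in set(style_a) | set(style_b):
        blended[note] = style_a.get(note, []) + style_b.get(note, [])
    return blended

def generate(style, start, length=8, seed=0):
    """Sample a new melody by walking the transition table."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        candidates = style.get(melody[-1])
        if not candidates:
            break
        melody.append(rng.choice(candidates))
    return melody

# Two toy "songs" standing in for a database of training material
songs = [["C", "E", "G", "E", "C"], ["C", "D", "E", "D", "C"]]
style = learn_style(songs)
melody = generate(style, "C", length=6, seed=1)
```

In this toy model, blending two learned styles and then sampling yields melodies whose transitions come from either source, a crude analogue of exploring, combining and altering styles to meet a user’s constraints.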
“Exploiting unique combinations of style transfer, optimization and interaction techniques, Flow Machines compose novel songs in many styles,” according to the project write-up. Composers can run with what the system produces or alter the track. They can even have it analyze, reproduce and potentially refine their own style.
Examples of Flow Machines’ work include “Daddy’s Car,” the AI’s attempt to write in the style of The Beatles, which can be heard below. It also produced “DeepBach,” chorales written in the style of Johann Sebastian Bach. While the AI is capable of working with syllables and lyrics, French composer Benoît Carré teamed up with Flow Machines to arrange and produce these songs and write their words.
Other examples of AI-based musical experiments and products include the Artificial Intelligence Virtual Artist, or Aiva. It can write new classical compositions after studying famous composers, and focuses on making music for commercials, movies, trailers and games. Here is its “Genesis” Symphonic Fantasy in A minor, Op. 21:
Jukedeck is another AI composer, first used as a video background-music generator. Users can set parameters such as genre, mood and beats per minute, then alter the tracks it writes. It has notably been used by the K-pop star Highteen:
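Of the parameters mentioned above, beats per minute is the most mechanical: it maps directly to the timing grid a generator schedules notes on. The sketch below is a generic illustration of that mapping, not Jukedeck’s actual interface:

```python
def beat_times(bpm, num_beats):
    """Timestamps in seconds of each beat at a given tempo."""
    if bpm <= 0:
        raise ValueError("bpm must be positive")
    # One beat lasts 60/bpm seconds, so beat i falls at i * 60/bpm
    return [round(i * 60.0 / bpm, 3) for i in range(num_beats)]

# At 120 BPM, beats land every half second
grid = beat_times(120, 4)
```

Doubling the BPM halves the spacing of the grid, which is why changing this one parameter audibly changes the energy of an otherwise identical track.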
IBM Watson Beat applies its artificial neural network, which is modeled on a human brain’s network of neurons, to finding and analyzing patterns in a song’s basic features, like rhythm and key. It then uses this understanding to build new tracks, and assists “human musician[s] in coming up with fresh new music,” according to IBM.
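Watson Beat’s internals are not public, but the kind of basic feature analysis described above can be illustrated with a toy example: counting the twelve pitch classes in a melody and taking the most frequent one as a crude guess at the tonal center. Real key-finding is far more sophisticated; this only shows what “analyzing patterns in a song’s basic features” might mean at the simplest level:

```python
from collections import Counter

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_histogram(midi_notes):
    """Count how often each of the 12 pitch classes occurs."""
    counts = Counter(n % 12 for n in midi_notes)
    return [counts.get(pc, 0) for pc in range(12)]

def crude_key_guess(midi_notes):
    """Toy analysis: take the most frequent pitch class as the tonal center."""
    hist = pitch_class_histogram(midi_notes)
    return PITCH_CLASSES[hist.index(max(hist))]

# A C-major-ish fragment (MIDI numbers: 60 = C4, 64 = E4, 67 = G4)
notes = [60, 64, 67, 60, 62, 64, 60, 67, 60]
```

Features extracted this way, rhythm, key, chord movement and so on, are the sort of structured inputs a neural network can then learn patterns from.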
“To teach the system, we broke music down into its core elements, such as pitch, rhythm, chord progression and instrumentation. We fed a huge number of data points into the neural network and linked them with information on both emotions and musical genres,” software engineer Richard Daskas said of working with Watson Beat. He helped IBM create the score for this Red Bull Racing video:
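The idea of linking musical data points with emotion labels can be sketched, in a deliberately simplified form, as a nearest-neighbor lookup over labeled examples. The feature set, labels and scaling below are all invented for illustration and bear no relation to IBM’s actual training data:

```python
import math

# Hypothetical training examples: (tempo_bpm, is_major_key) -> mood label
EXAMPLES = [
    ((140, 1), "energetic"),
    ((70, 0), "somber"),
    ((95, 1), "calm"),
    ((160, 0), "tense"),
]

def nearest_mood(tempo, major):
    """1-nearest-neighbor lookup: label a track by its closest example."""
    def dist(features):
        t, m = features
        # Scale tempo so a 100-BPM gap weighs like a major/minor flip
        return math.hypot((t - tempo) / 100.0, m - major)
    return min(EXAMPLES, key=lambda ex: dist(ex[0]))[1]
```

A real system would learn these associations from many data points rather than four hand-written rows, but the principle of mapping measurable musical features to emotional labels is the same.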