The Evolution Of Music

by Anitra Pavlico

In a recent study, data scientists based in Japan found that classical music over the past several centuries has followed laws of evolution. How can non-living cultural expression adhere to these rules?

Evolution is an “algorithmic process applied to populations of individuals.” [1] Individuals vary, and certain individuals’ traits are passed on while others are culled. These steps are repeated many times. In biology, scientists can study the gene as a “unit of inheritance,” but an analogous unit of inheritance has to be selected in a study of a cultural practice. Eita Nakamura at Kyoto University and Kunihiko Kaneko at the University of Tokyo decided to look at distinctive musical features such as the tritone–a dissonant interval spanning three whole tones, or six semitones–and measure the number of occurrences in Western musical compositions over the centuries.
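To make the “unit of inheritance” concrete, here is a minimal sketch of how one might count tritones in a digitized melody. This is illustrative only–not the authors’ code–and it assumes the score has already been reduced to a list of MIDI pitch numbers, counting only melodic (note-to-note) tritones:

    # Illustrative sketch: count melodic tritones in a list of MIDI pitches.
    # (Not Nakamura and Kaneko's actual feature extraction.)

    TRITONE = 6  # a tritone spans six semitones (three whole tones)

    def tritone_frequency(pitches):
        """Fraction of consecutive melodic intervals that are tritones."""
        intervals = [abs(b - a) % 12 for a, b in zip(pitches, pitches[1:])]
        if not intervals:
            return 0.0
        return sum(1 for i in intervals if i == TRITONE) / len(intervals)

    # The "Ma-ri-a" motif from West Side Story (pitches in an arbitrary key):
    # a tritone that then resolves upward by a semitone.
    print(tritone_frequency([60, 66, 67]))  # 0.5: one tritone in two intervals

Run over a whole corpus of scores, a frequency like this becomes the trait whose distribution can be tracked across composers and centuries.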

According to Nakamura and Kaneko, “The mean and standard deviation of the frequency (probability) of tritones steadily increased during the years 1500-1900.” Because this might have been just a function of individual composers’ preferences or “social communities” and not necessarily governed by statistical evolutionary laws, they developed a mathematical model of evolution to tell the difference. The tritone is a relatively rare musical event, but its use has spread over the centuries in a way that the study’s authors say follows precise statistical rules. [2]
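Their actual model is specified in the paper [2], but a toy simulation conveys the flavor: if each new “generation” of compositions inherits a feature rate from a predecessor, with a little random variation and a mild bias toward novelty, the population’s mean and standard deviation both drift upward over time. The generation counts, rates, and bias below are invented purely for illustration:

    # Toy illustration (not the authors' model): a rare feature's frequency
    # drifting upward across generations of compositions. Each new composition
    # inherits its feature rate from a randomly chosen predecessor, plus a
    # small random mutation with a mild bias toward novelty.
    import random

    def simulate(generations=400, pop_size=200, mutation=0.01, bias=0.001):
        rates = [0.005] * pop_size  # start: the feature is very rare
        history = []
        for _ in range(generations):
            rates = [
                min(max(random.choice(rates) + random.gauss(bias, mutation), 0.0), 1.0)
                for _ in range(pop_size)
            ]
            mean = sum(rates) / pop_size
            std = (sum((r - mean) ** 2 for r in rates) / pop_size) ** 0.5
            history.append((mean, std))
        return history

    history = simulate()
    print("first generation:", history[0])
    print("last generation: ", history[-1])  # both mean and std have grown

Whether real composers’ choices actually match such statistics–rather than jumping around with individual tastes–is exactly the question the authors’ model is designed to answer.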

*    *    *

Music, and likely other forms of art, has progressed in a way that involves carrying on some traditions from the past–in part so that people recognize it as the art it purports to be. (Many complain, for example, that modern music or art is “not music” or “not art.”) At the same time, musicians intermittently introduce new features to capture listeners’ interest. The researchers say their model can be used to analyze this balance between typicality and novelty, which may well exist in other cultural areas. It appears their approach is already being used to examine evolutionary rules at play in the development of language, other musical genres, and scientific topics.

When I googled “evolution in music,” I kept being diverted to a disagreement over whether music is “hardwired into our brains by evolution,” as Philip Ball writes in The Music Instinct, or whether, as Steven Pinker controversially pronounced, it is “auditory cheesecake”–pleasurable, but a mere by-product of language and not a “main dish.” In other words, as Ball writes, an “aesthetic parasite.” I wonder how the recent study from Nakamura and Kaneko affects the equation: if humans follow evolutionary rules, and music follows the same rules, then isn’t there something to the fact that we are apparently progressing in tandem?

Jay Schulkin and Greta Raglan have argued that the cognitive and neural connections facilitating musical expression and understanding underscore its importance to “our socialization and well-being as a species.” [3] Humans may have sung even before we spoke using syntax. Schulkin and Raglan write that “Music, like food ingestion, is rooted in biology,” and that “our evolution is tightly bound to music and to the body as an instrument.” Not only that, but they note that music helps to facilitate social understanding, nurturing, and cooperation–which, one imagines, helps to perpetuate our survival.

While it may not be as necessary for human life as food or water, it is clear that music has greatly enhanced our existence. It has been with us practically from the beginning and has grown up with us over tens of thousands of years. Then again, it is reductive to characterize musical expression as a biological necessity along the lines of fins or lungs. Dutch scholar Henkjan Honing notes that “rather than a by-product of evolution, music or more precisely musicality is likely to be a characteristic that survived natural selection in order to stimulate and develop our mental faculties.” [4] The squabble over whether music is hardwired in us is academic, because as Honing writes, “the purely evolutionary explanations for the origins of music largely overlook the experience of music we all share.”

*    *    *

I confess I am not terribly exercised either way about music’s precise role in human evolution. It is a fascinating topic that I look forward to exploring further, but I’m with Honing when he says that we shouldn’t underemphasize music’s indescribable effect on our psyches. I am an amateur pianist and have loved music for as long as I can remember. What interests me more about the new study is that it signals that music is in the hands of data scientists. As in other areas of machine learning, I wish we could use it in limited, discrete ways to address particular items of interest, but not have it run roughshod over an entire discipline. As you might imagine, though, AI has started to be harnessed not just in the analysis but also in the creation of music.

In a panel on AI and the music industry in April of this year, one of the panelists, Lydia Gregory of the firm FeedForward AI (which claims on its website to use AI “to enable and augment human creativity”), said that AI will “have an impact [in] automating bits that creators find boring, or finding other ways to support the creative process.” Panelist Simon Wheeler from the record company Beggars Group agreed that “the interesting ideas [for AI] are around assisting creativity. Being an accompaniment to a performance, to giving creative inspiration when you’ve got writer’s block… something that can introduce something, maybe based on what you’ve already created, or maybe it’s out of the blue.”

Whether music has evolved as a language intertwined with spoken language, or whether it is an evolutionary parasite that has grown with us simply because we love having it around, it is clear that it emanated from humans’ brains and not from an algorithm or otherwise from zeroes and ones. We are well on our way, however, to having AI compose music that at some point soon we may hear on the radio or buy on iTunes–in fact, I suspect I already own some electronic music that humans did not write, per se. In 2016, scientists from Sony CSL Research Labs showcased the first song to be “composed by AI,” an odd tune called “Daddy’s Car” that features Beatles-esque harmonies. [5] CSL apparently looked at more than 13,000 songs to help it develop “Daddy’s Car,” but describes it as “composed in the style of The Beatles.” As it turns out, French composer Benoît Carré arranged and produced the song and wrote the lyrics, which discuss Daddy’s car and the backseat and something about “turns me on.” I was originally sure that a computer wrote those lyrics, so it’s unclear what role the AI actually played.

Isn’t this what musicians do now, anyway–study the characteristics of current songs, especially very popular ones such as those by the Beatles, and emulate them? Yes, but humans writing songs can never manage to be “off” in quite the same way a computer can. I worry that at some point it will be much cheaper for corporate entities to purchase terrible music made by computers than music made by humans. Then AI music will be everywhere–at the mall, in commercials, in TV theme songs, and so on.

One particular concern I have about the recent study out of Japan is that “forward-thinking” musicians and AI programmers will try to game the system by looking at examples of novelty noted by Nakamura and Kaneko and trying to “evolve” their music at an artificially accelerated rate so as to sound like the most modern music. One thing to note about the tritone, one of the musical features the authors studied, is that it sounds horrible. [6] If you try to vault ahead of the evolutionary curve, as determined by machine learning, by sprinkling tritones liberally into your compositions, the result will not sound like music to many people. Hopefully, omnipresent AI music will not have desensitized us, and we will be able to recognize it as bad and reject it, as we have for thousands of years with music we do not like. Of course, we need to keep making music ourselves, tricking the algorithms, and influencing music’s evolution according to our own desires.

 

[1] See MIT Technology Review, “Data mining reveals the hidden laws of evolution behind classical music,” Sept. 28, 2018, at https://www.technologyreview.com/s/612194/data-mining-reveals-the-hidden-laws-of-evolution-behind-classical-music.

[2] The authors say they found evidence of these “statistical evolutionary laws” in the Western classical music data they studied: “1. Beta-like distribution of frequency features 2. Steady increase of the mean and standard deviation 3. Nearly constant ratio of the mean and standard deviation 4. (Possibly) exponential-like growth of the mean,” Nakamura, E. and Kaneko, K. (2018), “Statistical Evolutionary Laws in Music Styles,” available at https://arxiv.org/abs/1809.05832.

[3] Schulkin, J., and Raglan, G. (2014), “The Evolution of Music and Human Social Capability,” available at https://www.frontiersin.org/articles/10.3389/fnins.2014.00292/full.

[4] Henkjan Honing, “Was Steven Pinker Right After All?”, Music Matters blog, Psychology Today, Sept. 2013, at https://www.psychologytoday.com/intl/blog/music-matters/201309/was-steven-pinker-right-after-all.

[5] Sony CSL Research Labs, “Daddy’s Car,” at https://youtu.be/LSHZ_b05W7o.

[6] https://www.youtube.com/watch?v=_u9laoGPB_o. Try watching this video all the way through; I dare you. You will be running back to “Daddy’s Car.” I should add that tritones are often quickly resolved to a non-dissonant chord or interval. For example, the first two notes of The Simpsons theme (“The Simp-”) and the first two notes of West Side Story’s “Maria” (“Ma-ri”) resolve directly upward to a consonant note. Admittedly this sounds pleasing, but I think overusing the ploy would weary the listener.

The image accompanying this post is by Gellinger via Pixabay.