Here is brain (and ear) food from musician David Byrne in an excerpt from an article he published last month, which is itself an excerpt from his new book, How Music Works. The article goes way further than what I’ve excerpted here; it pokes into not only how machines change music and how music relates to human evolution but also research about what music is as a neurological entity in your brain. Typical lightweight David Byrne stuff.
Technology has altered the way music sounds, how it’s composed and how we experience it. It has also flooded the world with music. The world is awash with (mostly) recorded sounds. We used to have to pay for music or make it ourselves; playing, hearing and experiencing it was exceptional, a rare and special experience. Now hearing it is ubiquitous, and silence is the rarity that we pay for and savor.
Does our enjoyment of music—our ability to find a sequence of sounds emotionally affecting—have some neurological basis? From an evolutionary standpoint, does enjoying music provide any advantage? Is music of any truly practical use, or is it simply baggage that got carried along as we evolved other, more obviously useful adaptations? Paleontologist Stephen Jay Gould and biologist Richard Lewontin wrote a paper in 1979 claiming that some of our skills and abilities might be like spandrels—the architectural negative spaces above the curve of the arches of buildings—details that weren’t originally designed as autonomous entities, but that came into being as a result of other, more practical elements around them.
Dale Purves, a professor at Duke University, studied this question with his colleagues David Schwartz and Catherine Howe, and they think they might have some answers. They discovered that the sonic range that matters and interests us the most is identical to the range of sounds we ourselves produce. Our ears and our brains have evolved to catch subtle nuances mainly within that range, and we hear less, or often nothing at all, outside of it. We can’t hear what bats hear, or the subharmonic sound that whales use. For the most part, music also falls into the range of what we can hear. Though some of the harmonics that give voices and instruments their characteristic sounds are beyond our hearing range, the effects they produce are not. The part of our brain that analyzes sounds in those musical frequencies that overlap with the sounds we ourselves make is larger and more developed—just as the visual analysis of faces is a specialty of another highly developed part of the brain.
The Purves group also added to this the assumption that periodic sounds—sounds that repeat regularly—are generally indicative of living things, and are therefore more interesting to us. A sound that occurs over and over could be something to be wary of, or it could lead to a friend, or a source of food or water. We can see how these parameters and regions of interest narrow down toward an area of sounds similar to what we call music. Purves surmised that human speech therefore influenced the evolution of the human auditory system as well as the part of the brain that processes those audio signals. Our vocalizations, and our ability to perceive their nuances and subtlety, co-evolved.
– David Byrne, from “How Do Our Brains Process Music?”