Now that World Listening Day has come and gone, what are we to do with the remaining 364 days? One suggestion is to take up a listening practice, a routine of setting aside a few minutes each day in which hearing can expand into listening. To enhance my practice, and as an endcap to my “esteemed listener” interview series, I corresponded with composer and sound arts educator Brian Heller.
In planning the Walker’s recent observance of World Listening Day (WLD), I thought at great length about why listening matters and what we can learn from skilled listeners. From your perspective, what is it about listening—and about our relationship to listening—that merits attention?
More and more, I am of the belief that we need to acknowledge listening as a valid activity unto itself. And then we need to put that into practice. Having a day dedicated to listening encourages us to slow down and take time to experience the process of listening. No matter what you’re listening to (a natural environment, an artificial one, an artistic work, etc.), there is simply no shortcut for taking the time to experience it. WLD is a chance to consider closely our sense of hearing and our perception of sound (two different things). WLD also provides an exchange of ideas that helps us get more out of the listening process.
Speaking of the theoretical side of listening, I’ll mention R. Murray Schafer, a pioneer in the area of sound studies whose birthday is honored on WLD. In Schafer’s endeavor to understand our total sonic environment, he developed “ear cleaning” exercises which train the ears to listen more discriminatingly. Are there acoustic phenomena that for you serve as a baseline to hone your sense of hearing?
I had the pleasure of studying with Schafer for a short time and “ear cleaning” became absolutely essential. In everyday life, I try to be aware of my “noise floor,” to use a technical term from audio engineering, and I prefer it to be quite low. For example, my days of having music on while doing everything stopped some time ago. I gradually became more intentional and purposeful about listening and turned off the running soundtrack. We all know people who have the TV continuously running in the background, and I could never do that. There are many reasons why someone might choose to have the TV on, but I think it’s in part a consequence of the elevated noise floor we’ve collectively acclimated to in modern life.
Yes, I recently heard someone use the term “hyperdrone” to refer to background noise, meaning everything from humming refrigerators to roaring traffic. The value of escaping the hyperdrone and lowering our “noise floor” is beautifully articulated by Gordon Hempton, an acoustic ecologist who describes silence as an endangered species.
I definitely try to notice when I’m in a particularly quiet space. It’s not always a place you might expect, so you have to remain open wherever you are. Last winter, my lovely girlfriend and I spent some time in northern California’s Redwood forests. We were hiking amid huge trees and all the other things that live in and around them, appreciating a wonderful variety of noises. But on one particular day, in one particular place, I noticed an amazing silence. I felt like I couldn’t even hear the air! Aside from the few moments I’ve spent in an anechoic chamber, that was easily the deadest silence I’ve experienced. And it was in a place brimming with life.
Do you have a listening regimen that lets you hear with fresh ears?
For an audio engineer, the problem of a “listening regimen” is enormous. The habits and practices that keep your ears fresh are among the most important ones you have. We know, for example, that when working with recorded music our brains quickly adapt to and accept the sonic qualities of whatever we’re listening to. This means that if we spend twenty minutes or so really getting into what we’re listening to, we’ll end up thinking it’s the best-sounding thing we’ve ever heard. (I’m talking sonically, not necessarily artistically.) This is just what the brain does. Although there are a few objective measuring tools, it gets very subjective very quickly. One way to stay objective and critical is to keep a set of preselected “reality check” recordings and return to them for comparison throughout the process.
One of the things I’ve most enjoyed learning from skilled listeners is how they talk about sound and the vocabulary used to describe sound’s qualities.
The old saying that “talking about music [sound] is like dancing about architecture” seems more and more true to me each day…it’s really tough!
But I’m sure you’ve got a handle on it, as I imagine terminology features prominently in your teaching. One thing I’m curious about: Are there differences between “aural,” “sonic,” and “acoustic”? It seems these words are used interchangeably but I’m sure they denote different things.
You’re correct, although sometimes the context determines meaning. Differentiating the terms (and others that might be used similarly) requires first understanding that there are at least three different things going on that get us to hear a sound:
The physical fact of the way the air moves in the world (acoustics)
The sensing of that air and its translation to mechanical energy in our ears (aural/sonic)
And then the translation of that motion in our hearing to chemical energy for processing in the brain (psychological or psychoacoustic).
Just like anything else, any time there’s a translation or conversion from one state to another, it gets complicated.
From the standpoint of working with beginning students, I believe they need to re-imagine and then reconnect with sound as its own physical and psychological thing, and not only a carrier for music. This helps build an understanding of the technical vocabulary, which can be quite imposing. Eventually, we can go about connecting that technical language to an aesthetic one. I’ve found a key part of my role as a teacher is to show how vocabulary gives us better precision when talking about sound, and how essential that is. From the standpoint of being an audio engineer working with artists, however, subjectivity comes first. Just yesterday I recorded a concert where a very skilled and talented artist was having a problem with her stage monitor and asked for it to sound “more womanly.” I’ve also been asked, among other things, to make something sound “more chocolate.” It sounds a little strange, but without that technical vocabulary there’s no obvious term for what they’re talking about. There’s no “chocolate” knob on the equipment, so you begin the process of understanding the intention and translating it into something sonic.
The more I continue to learn—especially about psychoacoustics—the more I think humility is in order for all of us. Whenever there’s any doubt, question, or opposition, we like to respond definitively with the phrase “I know what I hear.” The truth is that we often don’t. This is innocent enough, because we don’t know that we don’t know. One of the core parts of my job is precisely “to know what I hear.” It’s such a rich area that there are always ways to know *better*, no matter how much you know now.
Language is one tool to describe sound, but there are also notation systems. Have you ever encountered a notation system so unusual or unconventional that it influenced your thoughts about musical performance or composition?
For some reason, I’ve always been attracted to the notational problems of composers. When I was in music school, I spent afternoons in the music library picking out random 20th-century scores that looked like they might be interesting. I got a great deal out of this, in part because it led me to see that all composers must not only have sounds in mind, but must also solve the grand problem of communicating physical instructions that let those sounds come into existence. When it comes to notation, there are some rather fearless models out there, and they encouraged me to do whatever best gets the message across and to be open to whatever that solution looks like.
I also considered how composers use notation to get across (what I would call) different layers of meaning in their work. For example, I saw that in George Crumb, although the actual staff notation was sometimes not terribly unconventional, his layouts conveyed conceptual and symbolic aspects that might otherwise go unnoticed. I also saw smaller things Crumb did that improve clarity when one reads over a score to find relationships tying together disparate parts. In contrast to Crumb’s detailed work necessitating precise notation and complex techniques, you have Herbert Brün and John Cage using graphic devices that intentionally circumvent the ‘need’ for composer-determined precision. Some of the resulting notation systems look nothing like a conventional musical score, but they get at the essence of what a score is: a practical tool for getting the intended sound into the air.
Brian Heller is an artist and technician who approaches composition, recording, and education with a unique blend of skills. Since graduating from The Hartt School, he has been working as a freelance composer, recording engineer, and educator in both the public and private sectors. This has included work for Minnesota Public Radio, Antenna Audio Tours, Innova Records, Line 6, Zeitgeist, and numerous independent composers and performers. He has also published reviews and feature stories for Electronic Musician magazine, and held senior staff engineering positions at The Banff Centre, the Tanglewood Music Center, and the Aspen Music Festival and School. His compositional activities have included grants and commissions from several organizations, and performances and broadcasts across the United States, Canada, and the Czech Republic. He currently directs the Sound Arts program at Minneapolis Community and Technical College.