An Audio Professional’s Guide to Hearing Loss


I live in a world without birds.

Okay, that’s not exactly true. I do occasionally see them around, but they no longer create the joyful cacophony I remember from my childhood. Maybe they just have nothing left to sing about.

My microwave is mute, and so I watch its timer to let me know when the leftovers are warm. While in the kitchen, I have apparently left the fridge door ajar. I know this because my wife is hollering from the loft at the other end of the house to tell me so—somehow alerted by a chime too subtle for me to detect from a foot away.

My bicycle has a magic bell to warn pedestrians on the path ahead. I’m amused at the startled reaction from a gaggle of preoccupied tweens on their phones, my approach silently announced with a mere flick of my thumb.

I am gregarious by nature. And yet I dread the stress of social situations. It seems that people these days mumble and fail to articulate properly. I am incredulous that everyone else around me seems to be tracking the conversation. Surely they are just gleaning bits and pieces of useful information and faking it, like I am. They all seem to do a better job than I of hiding their concern that they’ve just answered a question that wasn’t asked or laughed at something that wasn’t funny.

And yes, there is a statistically significant probability that it is me you have been following down the freeway for the last 7 miles with my turn signal on.

My world is by no means silent, though. It sounds like the world sounds. It's just that the world doesn't make the same sounds that it used to. The high-pitched sounds I once lived among have been replaced by an ever-present, steady, high-pitched tone. Two, actually. The one on the left is the louder, and sits at an annoyingly nonmusical interval lower in pitch than the one on the right. I can hear them if I'm standing next to the ocean. I can hear them when waiting at the pedestrian crossing for the freight train to pass. I can hear them over everything, and they never stop.

I find it ironic that I live in a world without birds, but also in one without silence.

It’s the End of the World as We Know It

For many years, I lived and breathed sound. This is a story of how that ended in one sense, while beginning in another. I still think about sound in ways that normal people just don’t think about sound. To this day, I awaken quite regularly from dreams that have taken place in recording studios. My perceptions of my own hearing loss are therefore inevitably put under the same scrutiny that I once applied to my work as a sound designer, and what that scrutiny has uncovered is this: Most hearing people harbor misconceptions about what being hearing impaired is actually like.

I offer the following to those interested in a deeper understanding of hearing loss, articulated by someone who has spent his life consciously analyzing the emotional, physical, and philosophical aspects of our sonic experience. This isn’t medical advice, and it may or may not even be scientifically accurate. So just take it as a personal account of things you might be interested to know should you ever face hearing loss, or if there is a loved one in your life who is constantly saying, “Huh?”

Sweet Dreams Are Made of This

I’m 7 years old, and I’m lying with my chest across the arm of the sofa. On the floor below is a portable record player, and I can’t stop just watching it. Later, I ruin several of my older brother’s 45s with my homemade gramophone. Using a sewing needle scotch-taped to an old cereal box crudely fashioned into a cone, I rest the needle in the groove of a record and spin the unpowered turntable with my finger. The fact that sound is somehow encoded in the grooves of that plastic disc and that I can hear it coming from my cardboard cone is a miracle to me. As I continue to come of age, it’s building crystal radio sets and lying in bed with a walkie-talkie, listening to truckers from who knows where.

I have other early memories that may or may not also play a part in this story: a high fever and my panicked mom resorting to dunking me in a cold bath to try and break it. A high school dance where I am a moth drawn to the flame of a PA speaker stack at the left front of the stage, feeling compelled to stand a couple feet away in order to feel the sonic energy shaking my innards while creating a tickling sensation in my left ear, which is turned to directly face the sonic onslaught. The salesperson at the local SpeakerLab store, annoyed when I show up asking to hear “Dark Side of the Moon” on the K-Horns. Again.

When I was in 10th grade, a shop teacher played a film in class, something like Careers in Electronics. Buried within it was a brief sequence of a recording engineer setting up microphones in front of musicians before retreating to a glass-walled room fronted by an enormous control console loaded with knobs and sliders. That. Was. It. I sat there among my sleepy and/or stoned first-period-elective classmates awash in a beam of revelatory light from above. I knew that somehow, this would be my vocation. It had to be. I became obsessed. There was no second choice. No plan B. The rest of my high school education was marked by frustration that none of the required or elective class offerings had anything remotely to do with becoming a recording engineer.

However, I soon discovered that a friend's older sister, a musician, was dating an assistant engineer who worked at a recording studio in Seattle, not far from my home in the gritty, blue-collar suburb of White Center. And not just ANY recording studio, Kaye-Smith Studios! My new crushes Ann and Nancy Wilson and their band Heart created hits like "Barracuda" there. Bachman-Turner Overdrive recorded their biggest hits like "Takin' Care of Business" at Kaye-Smith at about the same time Steve Miller recorded "Fly Like an Eagle." The Spinners even recorded "The Rubber Band Man" there. This was the BIG time—at least as big as things got in our sleepy little corner of the country in the mid-seventies. Friend's sister's boyfriend? That was the only crack of daylight I needed. Immediately I started a letter-writing campaign, begging for a tour, and it eventually bore fruit. Now, to enlist my older brother to accompany me for the visit, since I didn't yet have a driver's license and felt this pilgrimage far too momentous for public transportation.

I was granted a cursory tour by a dude clearly more interested in checking a favor for his girlfriend off his list than he was in mentoring a wide-eyed, acne-faced wannabe without a driver's license. But it didn't matter. I was in the inner sanctum. An engineer was playing back music from a 24-track, 2" wide tape—the coolest thing I had ever seen—occasionally soloing the bass track and turning knobs to adjust its tone—the coolest thing I'd ever heard. My guide offered me a few discouraging words about the incredibly slim odds of ever getting a job even emptying wastebaskets there, while offering a warning/brag about rampant sex and cocaine use in the control room creating a challenging work environment that I would clearly be unable to handle. Unfazed, I promised myself right then and there, "I'm gonna work here someday!"

After what seemed an eternity as the kid with his face pressed against the glass, on the outside looking in, I finally got a foot in the door. My first job was in the dub room, supporting the operations of Seattle’s leading commercial production studio, Lawson Productions. Soon, I was doing simple voice sessions and moving my way up, and for the next thirty-some years, I made my living as a recording engineer. For most people, that job title calls to mind presiding over a 20-foot-long console banging out hits, almost ASKING for hearing loss. But that wasn’t me. My craft involved things like voice-overs, commercials, creating soundscapes. The term “sound design” had yet to be coined, but that’s what it was, and I LOVED it. I was among the anointed elite who understood the alchemy of the sound designer—creating emotional impact using tools and techniques that are mysterious to mere mortals.

A handful of years into my career, Lawson Productions expanded and moved a couple blocks west, taking over the now mostly silent Kaye-Smith Studios. I will never forget the day I was working on a commercial for a local retailer when I had a sudden and vivid flashback: Me standing in THIS VERY ROOM years before, promising my future self that I would work here one day. And here I was. A chill went through me, and tears came to my eyes.

Divine destiny. I mean, obviously.

Eventually I joined forces with a colleague to open our own studio, Clatter&Din, down behind the Pike Place Market. Somewhere between high school shop class and Lawson’s dub room, I had graduated from The Evergreen State College in Olympia—right about the same time as Sub Pop Records co-founder Bruce Pavitt. During my tenure at Lawson’s (later renamed Bad Animals after the Heart album of the same name), I witnessed the birth of grunge. Kurt Cobain’s death announced its decline the same month that Clatter opened its doors. All this time, I had been operating (literally) next door to, rather than within, this scene. My interest was sound design. It may not have been as glamorous as being part of an emerging musical zeitgeist, but the hours were better, paychecks more predictable, and it wasn’t as loud. The warned-about sex and cocaine never materialized, but I was fine with that. I had career longevity to think about!

What a Fool Believes

I’m now about 15 years into my career, and another successful session is in the books as my clients—an agency producer, creative director, and account manager—say their goodbyes and pile into an airport cab. They are heading home to Honolulu while I retreat back into my studio to do billing and document session details, while 2 interns clear lunch dishes and pint glasses. Audio post is absolutely the best part of the commercial production process. It’s the last stop. The big decisions have already been made, my client’s client has already signed off on the edit and chosen to skip the audio mix session to catch an earlier flight, and so everyone left is in a good mood. The vast majority of my clients represent long-term intermittent friendships. Between bites of Thai food, inappropriate jokes, and catching up on each other’s lives, we make hundreds of little technical and creative decisions that add up to a polished mix of music, voice, and creative sound design. Truth be told, I make most of those decisions myself while the party continues behind me. If a high-end TV commercial (as they were still called in 2006) is a gift, it’s one I receive already boxed. Audio post is simply the process of tying on the bow before giving it to the world. (This analogy was, of course, purposely chosen to illustrate how the advertising industry thinks more highly of what it does than, well, literally everybody else on the planet.)

The following morning, I am prepping sound effects for a later session when I get a call from the agency producer in Hawaii.

“Hey Vince! Everybody is loving the mix, but there’s one little thing we heard when we got back, and I was wondering if you could give it a quick fix.”

This is certainly a common occurrence, made so much easier to accommodate of late due to the emergence of the internet and the replacement of physical tape with digital files. It takes mere minutes and little expense to execute a quick tweak to the mix and put it on the FTP server for them to download.

“Sure! What’s up?”

The commercial is for a luxury resort property, and it begins with an exterior shot that slowly dissolves into a shot of the lobby interior—the camera moving down from the ceiling past a chandelier in the foreground to reveal happy, shiny people enjoying the good life. During that dissolve from the outside to the interior, my client explains, they hear a high-pitched squeal in the background. It is brief and faint, and I suppose we had just missed it. I quickly surmise that one layer of the outdoor city ambiance track that I’d placed in the opening of the spot probably contained a truck brake, and it occurred just as the sound effect was crossfading to the interior ambiance track.

I open the session on my workstation and, holding the phone against my right ear with my shoulder, hit “PLAY”. There is no squeal to be heard. “When does this squeal happen? I’ve got nothing on this end.”

He assures me that it is there. Right at 4 seconds and 10 frames in. I assure him that it isn’t there. (After all, hadn’t he called me “golden ears” just yesterday?) After some back and forth, comparing notes to make sure we’re both listening to the same version of the mix and trying to conceive of anything that could have created an audio artifact in only his digital copy, I ask him to hang on a second. I put the phone down, face forward toward my monitors and turn them up loud. Palms resting on the mixing surface, I lean in, face down and eyes closed—like I always did when I was trying to listen to something REALLY HARD. Hitting play, I wait. Four seconds and 10 frames later, I hear it. Subtle, but definitely there, definitely a problem, and definitely easy to fix. I am flummoxed as to why I hadn’t heard it before. And then I remember the phone pressed against my ear. Going back to the top, I hit play again, this time holding my index finger in my right ear where the phone had been, and wait 4 seconds. No squeal. Repeat. Finger removed—squeal.

OH. FUCK.

I quickly mutter some BS excuse about my monitors being out of alignment, offer a quick apology, and have a revised mix in his hands a few minutes later. But I am shaken. At that instant, I know that I am screwed. Years—perhaps a lifetime—of blissful self-delusion are shards at my feet.

“Golden ears,” my ass.

If this were a movie, say, last year's Sound of Metal, the next line would read, "That was the day my world shattered. Eventually I learned ASL, made peace with my fate, and attained a fulfilling next chapter. The End."

But of course, hearing loss like mine isn't anything like that. My dark epiphany, recounted above, taught me that there are three distinct phases to hearing loss, and they happen in this order: First, your hearing deteriorates slowly over time. Second, you become aware that your hearing has deteriorated. And finally, you admit that your hearing loss is actually affecting your life and you do something about it. Due to our brain's amazing plasticity coupled with our innate capacity for self-deception, these three distinct phases of hearing loss can be separated from each other by years.

Ghost in the Machine

It’s easy to think of the ears as two microphones that deliver a stereo feed directly to you, the listener. That’s why movies that attempt to portray what the world sounds like to someone with partial hearing loss resort to simply rolling off the high end. (That’s sound engineer speak for “turning down the treble.”) The effect is a muffled sound, like someone slipped a sock puppet over those two microphones. It works for the movies, but, of course, that isn’t what it’s actually like. That’s because in addition to the “microphones,” there are complex cables that carry the audio signals to a super-advanced audio processor, and both of these systems sit between the “microphones” and you, the listener. That complex audio processor is a subsystem of the device that you probably just used to infer what I’m talking about: your brain.

I guess you could say that my brain and I have a lot in common; back in my sound designer days, I would often get pretty crappy source elements to work with. My job was to edit, apply noise reduction, replace some sounds entirely, apply equalization, adjust the balance, etc., in order to produce an end result that was greater than the sum of its parts. And that is pretty much exactly what that brain/processor does before presenting the sounds of the world around you as something up to the exacting standards of you, the listener.

This is why it is inaccurate to assume that my world—one in which high frequencies are virtually non-existent—sounds muffled. It doesn't. The sound processor in my head has compensated to make it sound natural to me, the listener. When I hear people in front of me speak, it sounds like I'm hearing the full range of their voices. But that's just a trick of the sound processor in my head doing its job. The high-frequency information isn't really there, and so to me, it just sounds like people aren't speaking clearly. They're mumbling and failing to employ good diction. People who are aware of my hearing challenges—even my wife, who has lived through my entire decline—will tend to talk LOUDER when I've asked for something to be repeated. But the problem isn't volume. It's the fact that my ears can't tell your S's from your F's and your T's from your P's. And lacking enough other information, the processor will sometimes screw up while trying to "fix-it-in-the-mix" for me, the listener. That "other information" can be things like the context of the sentence and, perhaps most importantly, visual information.

That’s right! It turns out that I’m a lip reader! I just didn’t know it until I realized I have a more difficult time understanding people who I’m not actually watching speak. When I was a sound designer, I knew that a certain sound effect had more or less impact depending on what it was synced with. The exact same massive “thud” sound effect sounded WAAAY more badass when synced up with a giant boulder hitting the ground than it did when synced up with a small boulder hitting the ground. This is because our brains are wired to look for clues to augment the signal coming in from our ears—our perceptions enhanced by that added context. A mouth forming an “S” looks different from a mouth forming an “F,” and so that visual information is passed on to the processor, which uses it to calculate for me, the listener, its best guess at which word was spoken. If the speaker’s mouth is out of my eye line, or is perhaps covered with a mask for some reason (!), my processor is flying blind, so to speak, and it will often either guess wrong or throw up its hands and say, “I dunno. Your guess is as good as mine.”

I’ll share here another misconception, one that isn’t particularly useful to know but which I find fascinating because it shows just how hard the audio processor in my brain is working to make sense of the world. As you might recall from my “phantom brake squeal” story, my hearing loss is asymmetric. It is considerably worse in my left ear (the one blasted by the PA cabinet back in 9th grade) than in my right. At this point, I can still hear some higher frequencies with my right ear, but pretty much none with my left.

So what happens when I face a pair of quality speakers and listen to music that has a broad stereo field? You would think that it would sound like the balance knob is turned toward the right, with everything on the left sounding muffled and at a lower volume. But that's not what happens. It just sounds like a full stereo mix. I know intellectually that my central processor isn't receiving a full stereo mix; however, a full stereo mix is what it dutifully presents to me, the listener. How does that even work!? Take, for instance, a mandolin solo that is panned to the left. (I like bluegrass. Don't judge.) My ears apparently hear the percussive undertones of the strings and the processor detects subtle phase relationships and temporal differences and says to me, the listener, "Yup. That's over there on the left."

Then, and this is the truly amazing part, although the malfunctioning left microphone detects none of the mandolin’s high frequencies, the right microphone does detect some of them, and the processor says, “I’m pretty sure those high frequencies match up with those percussive tones, so I’m just going to go ahead and pan those over to the left as well.”

I can test this by plugging my right ear. The mandolin (and everything else) becomes a muffled, muddy mess. Unplug my right ear, and there’s the mandolin in all of its (relative) glory—on the LEFT!

This takes me back to my own "phase 2" process of becoming aware of my hearing loss. I finally started to recognize that for years, when I was trying to understand someone in a noisy environment, I would unconsciously turn my head to present my "good" ear to them. Since my early twenties, my wife and I have taken long walks to ward off the effects of our sedentary careers. All that time, I've had a strong preference for being on her left so that my "good" ear faced her. Nowadays, walking on her left has become more of a necessity than a preference, but in the self-delusion phase, I would awkwardly circle behind her while crossing the street so that I ended up on her left without knowing why. It just felt better.

And when I was in my late teens, I made an observation that best illustrates how completely my capacity for self-delusion protected me from even the slightest notion that my hearing was anything less than perfect. Like most people, I have a favorite ear to use when talking on the phone, and for me, that is the right ear. I noticed that on the rare occasions I would shift the phone to my left ear—to write something down or whatever—the phone sounded radically different. I was fascinated by this and chalked it up to the brain's capacity for shaping our perceptions. When I had the phone to my right ear, it sounded like it was "supposed to." Like I was talking with someone on the phone. But when I moved it to my left ear, it sounded like a cheap, tinny little speaker through which I could hear a distant voice speaking. Which, of course, is exactly what it was. My left ear was reporting, accurately, exactly what it was hearing, whereas my right ear, so acclimated to talking on the phone, had adapted to make the call sound better, more present, and more real. How brilliant was I at such a young age to notice this amazing capability of my brain? At least that's what I thought at the time.

Now, I do still think that brain plasticity and/or left/right dominance has something to do with this phenomenon, and I've seen research to that effect. However, in my case, I now realize that I've likely had compromised hearing in my left ear for most, if not all, of my life! I'd convinced myself that the difference in how the phone sounded in each of my ears was evidence of the acuity of my hearing, when instead, it was likely evidence of its deficits.

As the above stories show, my journey from phase one (loss) to phase two (realization) took probably more than a decade. In hindsight, it took a surprisingly long time for the clues to add up, finally arriving at a squealing-brake tipping point. It turns out that my colleagues may actually not have been full of shit when they raved about the quality of this brand of mic preamplifier versus another; I had privately judged them mere poseurs, since I myself could hear no difference. By now I've made peace with the fact that I may have successfully pulled off a 30-plus-year, award-laden career as a respected audio professional despite having crappy hearing!

Stop Making Sense

Thus far I've spoken at length about the sound processor in my brain. Now I turn my attention to the other piece of equipment that sits between my ears and me, the listener. I earlier referred to it as the signal cables; I am, of course, talking about the auditory nerves. Problems with the "microphones" are well documented. The "microphone" is a tiny nautilus-shaped structure (the cochlea) that is lined with microscopic hair cells of various lengths that are "tuned" to vibrate in response to various frequencies—from about 20 Hz (cycles per second) up to as much as 20,000 Hz. When these "hairs" are damaged, due to exposure to loud sounds, chemical assaults, age, bad genetic luck, or any combination thereof, they can no longer generate the signals that would normally be sent along the "signal cable" to the "processing unit." This damage is usually quantified by sitting in a quiet booth at the audiologist's office (or Costco) listening through headphones to pure tones at carefully calibrated levels and frequencies. What you can or cannot hear is charted as an audiogram—your personal response curve that generally serves to quantify exactly how much your hearing does or does not suck.
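For the technically inclined, here is a minimal sketch, in Python, of what an audiogram boils down to: a threshold in decibels (dB HL) at each test frequency, plus one commonly used scheme for labeling the overall degree of loss. The numbers below are invented for illustration; they aren't my results, and a real report is more involved than this.

```python
# A toy audiogram: hearing threshold in dB HL at the standard test frequencies.
# These threshold values are invented for illustration, not real test results.
audiogram_left = {
    250: 20,    # frequency in Hz : threshold in dB HL
    500: 25,
    1000: 35,
    2000: 60,
    4000: 85,
    8000: 95,
}

def pure_tone_average(audiogram, freqs=(500, 1000, 2000)):
    """Average threshold across the core speech frequencies (the 'PTA')."""
    return sum(audiogram[f] for f in freqs) / len(freqs)

def degree_of_loss(pta):
    """One commonly used way of labeling hearing loss by pure-tone average."""
    if pta <= 25:
        return "normal"
    if pta <= 40:
        return "mild"
    if pta <= 55:
        return "moderate"
    if pta <= 70:
        return "moderately severe"
    if pta <= 90:
        return "severe"
    return "profound"

pta = pure_tone_average(audiogram_left)
print(f"PTA: {pta:.0f} dB HL -> {degree_of_loss(pta)} loss")
```

A real test charts each ear separately (mine would show two very different curves), but the idea is the same: a personal response curve per ear.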

However, wear and tear on the signal cable can also affect what you hear and how you hear it. The jury is out on the most likely causes of this damage, but the effect has become known as “hidden hearing loss.” It’s called this because some people who can ace the pure tone audiogram test—in other words, their microphones are working fine—can still have incredible difficulty functioning in noisy environments. Conversation in a restaurant becomes impossible even though the tones in the soundproof booth come through loud and clear. Something is scrambling the signal.

Of course, it makes sense that whatever has assaulted the little hair receptors in your “microphones” has possibly also messed with your signal cable. And that can lead to some pretty weird perceptions that most people have never heard of. Allow me to illuminate with a couple examples.

Most people know the neo-hippie anthem "Home" by Edward Sharpe & The Magnetic Zeros. I know it well, and I remember exactly what it used to sound like back in 2010. So I know that at 1:37 into the song, there is a chorus whistling the melody. In 2021, I can still hear it. But, and this is the weird part, THEY'RE WHISTLING THE WRONG NOTES! No matter how hard I try to mentally "squint" to force what I know to be the correct melody into focus, I can't. The third note, for example, is a whole note that is precisely one half step above what it should be! It's horribly unmusical, and I can't mentally change it no matter how many times I play it.

Digging deeper into my boomer playlist, the same thing happens with the higher end of the sax in Gerry Rafferty's 1978 hit "Baker Street" as well as Bruce Hornsby's piano solo in the middle of "The Way It Is". Again, my neural audio processor is presenting these songs to me as full-fidelity productions. It's just that they now sound like crap. The glockenspiel line during the breakdown at the 3:05 mark of Joe Jackson's "Steppin' Out" has been completely removed from my 2021 re-release, but I'll take that over hearing it played out of tune any day.

My neural audio processor is a pretty good sound designer, but it has its limits. If I’m listening to speech, and my processor can determine from the context of a sentence plus input from the lip-reading subsystem what a given word should be, even though the microphones didn’t pick it up, it will be inserted into the stream presented to me, the listener, and I won’t even know it happened. Good job, audio processor! The same can be true with music that I’m familiar with. My memory can fill gaps and make a familiar song sound better than music I’m NOT familiar with. (Maybe that’s why some of us, as we age, tend to stay with music we know.) However, if the signal cable screws up and sends a signal for the wrong note, and does so with a high degree of confidence, the audio processor weighs everything and dutifully decides to send the wrong note to me, the listener. It may agree with me, the listener, that some horrible musical choices are being made, but hey, it’s just doing its job.

Bad signal cables also do a poor job of separating the signal from the noise. That is why even people with good "microphones" can still have problems with speech in a noisy room. And this brings me to another misconception that I'll now address: that volume equals clarity. Zoom calls are a constant struggle for me. I am sometimes amazed that everyone seems to be following the conversation while I can easily become exhausted by the mental effort of trying to comprehend. This leads me to "tune out" to the point that I suddenly become aware I've stopped paying attention to the conversation, and then become anxious that perhaps I've just been asked to respond to something, or am about to be, even though I don't know what anyone is talking about. It isn't an issue of volume. I wear quality headphones or AirPods Pro earbuds to eliminate distraction. Everyone is eager and willing to try to better accommodate me, but they can't because they think it's just an issue of volume. However, the real issue is something else—something that virtually nobody else seems to understand: While some people on a video call wear headset mics that place the microphone inches from their mouths, most people just rely on the built-in mic on their laptop. The room they are in may be noisy and echoey, and so automatic noise-suppression circuitry kicks in. That's not a good thing, because along with the background noise, intelligibility is also suppressed, at least for people like me with faulty "signal cables" in their heads.

Here's something from my audio knowledge base that I wish more people understood about microphones (the actual electronic devices) and intelligibility: Let's say a microphone is placed 2 inches from a speaker's mouth. The vast majority of the sound hitting the microphone is "direct", with a small percentage of the sound consisting of general room noise plus reflections of the speaker's voice from walls and ceilings. Now, if we double that mic distance from 2 inches to 4 inches, you would think that would double the ratio of reflected vs. direct sound. But that isn't what happens. Direct sound falls off with the square of the distance (the inverse-square law), so the ratio of reflected to direct sound roughly quadruples with each doubling of distance. So double that a couple more times to 16 inches and things are just starting to get out of hand—especially in a reflective and/or noisy room. Now double that again to something representing the distance between your mouth and a built-in laptop microphone or car Bluetooth microphone, and you have something that no amount of volume is going to make intelligible for people like me.
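To put rough numbers on that, here is a small Python sketch of the geometry. It assumes only the inverse-square law: direct sound power falls with the square of the distance while the room's reflected wash stays roughly constant, so their ratio quadruples with each doubling of distance. The starting ratio at 2 inches is an arbitrary assumption, chosen only to make the trend visible.

```python
# Toy model of mic distance vs. clarity. Direct sound power falls off as
# 1/distance^2 (inverse-square law), while the diffuse reflected "wash" of the
# room stays roughly constant, so their ratio quadruples per doubling.
# The ratio at the 2-inch reference distance is an arbitrary assumption.

REFERENCE_DISTANCE_IN = 2.0
REFLECTED_TO_DIRECT_AT_REF = 0.05  # assume reflections are 5% of direct at 2"

def reflected_to_direct(distance_in):
    """Reflected-to-direct power ratio at a given mic distance (in inches)."""
    return REFLECTED_TO_DIRECT_AT_REF * (distance_in / REFERENCE_DISTANCE_IN) ** 2

for d in (2, 4, 8, 16, 32, 64):  # 64" is roughly a laptop or car Bluetooth mic
    ratio = reflected_to_direct(d)
    verdict = "voice dominates" if ratio < 1 else "room dominates"
    print(f'{d:>3}": reflected/direct = {ratio:6.2f}  ({verdict})')
```

Somewhere along that doubling sequence the room takes over from the voice, and no amount of playback volume on my end will pull the two apart again.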

The Great Pretender

As the above example illustrates, failure of that “signal cable” can go a long way toward explaining the limitations of hearing aids. Don’t get me wrong: hearing aids have drastically improved the quality of my life. For about 3 years after the phantom brake squeal incident, I continued working in a state of denial—sound designing and mixing as though everything was dandy. The thought of NOT doing so was panic-inducing, and so I faked it as best I could. It wasn’t that hard, really. I recall once working on some sound design that involved a layer of general residential background sound—light traffic, a distant lawnmower, a far-away barking dog…and birds. My client was a young agency producer. At one point she asked me if I could “do something about those birds.” Unable to hear them AT ALL myself, I probed like a showbiz psychic, “Yeah, they work, but they ARE a little piercing, I suppose.”

“Exactly,” she agreed, “I like em, but they just cut through too much.”

Nodding in agreement, I reached for the channel's equalizer and turned it down about 6 dB between 3 kHz and 6 kHz, then turned to make eye contact with her. "Better?"

Greeted with a thumbs up, I went back to work placing the voice track, while behind me the stream of banter, gossip, and jokes continued. I loved joining in, but realized that I couldn't follow the conversation. People would address me, and I wouldn't answer right away. I'm sure I probably came across more as the absent-minded professor focused on his work than as the deaf audio guy. But I knew the truth.

I recall this period in my life with great sadness. I wasn’t being forced into the stigma of wearing hearing aids by my inability to do my job, but rather by my inability to interact with my clients, who, in many cases, were also my friends. Here I am in an acoustically perfect room that has an extremely low level of self-noise, and I can’t converse with my clients unless I am turned to face them. I have to ask them to repeat a suggestion or important piece of information one too many times, and then turn my head to present my “good ear” when they do so. Not a tenable situation. And so I finally went to get fitted for my first pair of hearing aids.

Surprised by the wide assortment of fashion colors to choose from, I went with the silver model and kept my silver hair long enough to cover the tops of my ears. I told only a few people about them, and for the most part, nobody else noticed. The day I got them, I went into a studio control room that I knew to be empty at the time and put on a familiar piece of music. The hearing aids lifted enough high-frequency content back above my threshold of hearing that the result was stunning. I immediately and unexpectedly burst into tears. My "audio processor" had been fooling me, the listener, for so long that I had no idea how much I had lost. It was thrilling and devastating all at the same time.

For a few years, I continued my career using the hearing aids, although they made me feel like the audio equivalent of an MLB umpire calling balls and strikes while wearing thick glasses. But eventually I ran up against their limitations. As my hearing loss progressed, the aids were increasingly in the position of trying to amplify frequencies that I could simply no longer hear. Hearing aids work on the principle of multiplication: A threefold amplification of a frequency I can't hear isn't 3 + 0. It's 3 X 0. And so the aids no longer allow me to hear birds or microwave oven timers. In their place, I hear, or more accurately feel, what can best be described as a faint sound akin to someone blowing into a microphone, but turned way, way down. What my aids CAN still do is boost those S's and T's enough to help me understand speech. At the same time, however, they convert a whole slew of other sounds in that frequency range into a piercing, fatiguing, and generally unpleasant audio assault. That bag of chips you're opening or that silverware scraping your plate can be like an ice pick in my ear. My hearing aids help me navigate the world of social interaction better than I can without them. But I can't rip them out of my ears fast enough at the end of the day. And they do nothing to cure the flaws in the "signal cables" that make speech so difficult to understand in social situations and on video calls, nor can they silence the eternal high-pitched ringing from which sleep and distraction are the only escape.
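Here is that multiplication principle as one last Python sketch, with made-up numbers: a per-band residual response standing in for what my hair cells still deliver, and a per-band hearing aid gain. Real aids are programmed in decibels of gain with compression and other tricks, so treat this as a cartoon of the arithmetic, not a description of how they actually work.

```python
# Toy illustration of why more gain can't restore a "dead" frequency band.
# residual_response: how much of each band my ears still deliver (1.0 = normal,
# 0.0 = nothing). All numbers here are invented, not taken from a real fitting.
residual_response = {
    "500 Hz": 0.8,
    "1 kHz": 0.6,
    "2 kHz": 0.3,
    "4 kHz": 0.1,
    "8 kHz": 0.0,   # birds, microwave beeps, and whistled choruses live up here
}

hearing_aid_gain = {
    "500 Hz": 1.0,
    "1 kHz": 1.5,
    "2 kHz": 3.0,
    "4 kHz": 6.0,
    "8 kHz": 10.0,  # the aid works hardest exactly where it can help least
}

for band, response in residual_response.items():
    perceived = hearing_aid_gain[band] * response  # multiplication, not addition
    print(f"{band:>6}: gain {hearing_aid_gain[band]:>4} x response {response:.1f}"
          f" = {perceived:.1f} perceived")
```

Where the residual response is zero, no gain, however heroic, produces anything. And the heavy boost in the bands where some response does remain is exactly what turns that bag of chips into an ice pick.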

Eventually the limitations of my hearing aids became too great, my ability to fake it too challenging and exhausting, so I asked my producer to quit booking me for anything other than the most basic, and preferably clientless, sessions. Soon I had transitioned to video editing, and then slipped out the back door pretty much unnoticed.

Parents are often told that the day will come when they will pick up their child and then set her down for the last time—never again to be lifted into their arms. And they almost never remember, looking into the rearview mirror, exactly when it was that that day came and went. That’s pretty much what working, laughing, and collaborating with a room full of people in my audio control room was like. It’s been years now, and I don’t recall exactly the last time it happened. There was no ceremony, no toasts, nothing to mark what in hindsight was a significant occurrence. I still dabble with audio on my computer and still work with media, but the 30-year party in the studio has ended. That’s what it felt like: a nonstop good time. I am forever grateful for it, but I’d be lying to say that I don’t mourn its loss.

In the movie Amadeus, the character Salieri, beautifully portrayed by F. Murray Abraham, shocks the young priest with his rant in which he rages at the cruelty of a god who would bestow upon him such a deep love of music while denying him the talent which He had instead conferred upon his crude and irreverent nemesis, Mozart. In my darker times, I relate to Salieri's anger. Life is unfair. While this seems to be a universal truth, I am nonetheless too often bitter that I've not somehow managed to be spared it, asking why I should be granted such a deep love of and fascination with the world of sound, working at the only vocation I could imagine in which I could find such daily joy and satisfaction, one to which I was obviously drawn by Divine intent—only to have my ears fail me.

Kind of a dick move, God.

Oh well.

I am not proud of the time I've wasted thrashing around the Salieri cesspool of pity. I like to think I'm better than that. Looking at the whole of my life, there is no objective reason that the scales which too often are weighted toward mourning and loss should not be more than balanced by gratitude. The kid with his face against the glass never left me, even to this day. I am happy for him, that he got in and made the most of his shot. I know neither birdsong nor silence, and yet am acutely aware that countless people struggling with their own version of "life is unfair" would trade their lot for mine in a heartbeat. I know this intellectually, but a loss is still a loss, and mourning takes time. Maybe a lifetime.

We all lose some hearing as we get older, and hearing loss is, in fact, an epidemic, with our children coming of age in an increasingly noisy world. My hope is that sharing my experience with you, the listener, eases the journey a bit, and perhaps helps to dispel the loneliness that can sometimes set in when the only voice you can understand is your own.

©2021 Vince Werner – All rights reserved