
Listening with two ears

Studies of barn owls offer insight into just how the brain combines acoustic signals from two sides of the head into a single spatial perception

Why do people have two ears? We can, after all, make sense of sounds quite well with a single ear. One task, however, requires input from both organs: pinpointing the exact direction from which a sound, such as the cry of a baby or the growl of a dog, is emanating. In a process called binaural fusion, the brain compares information received from each ear and then translates the differences into a unified perception of a single sound issuing from a specific region of space.

Extensive research has shown that the spatial cues extracted by the human brain are differences in the arrival time and the intensity, or force, of sound waves reaching the ears from a given spot. Differences arise because of the distance between the ears. When a sound comes from a point directly in front of us, the waves reach both ears at the same time and exert equal force on the receptive surfaces that relay information to the brain. But if a sound emanates from, say, left of center, the waves will reach the right ear slightly after the left. They will also be somewhat less intense at the right because, as they travel to the far ear, some fraction of the waves will be absorbed or deflected by the head.
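
The rough size of these timing cues can be estimated with simple geometry. The sketch below is a crude far-field approximation that treats the ears as two points separated by an assumed head width; it is meant only to show why interaural delays fall in the range of a few hundred microseconds, not to reproduce any measurement described in this article.

```python
import math

SPEED_OF_SOUND = 343.0   # meters per second in air (room temperature)
HEAD_WIDTH = 0.17        # meters; assumed human interaural distance

def interaural_time_difference_s(azimuth_deg):
    """Far-field, two-point approximation: ITD = d * sin(azimuth) / c.

    Zero azimuth is straight ahead; positive azimuth means the source lies
    to the right, so the sound reaches the right ear first.
    """
    return HEAD_WIDTH * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

for azimuth in (0, 15, 45, 90):
    itd_us = interaural_time_difference_s(azimuth) * 1e6
    print(f"azimuth {azimuth:3d} deg -> ITD of roughly {itd_us:4.0f} microseconds")
```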

The brain's use of disparities in timing and intensity becomes especially obvious when tones are delivered separately to each ear through a headset. Instead of perceiving two distinct signals, we hear one signal--a phantom--originating from somewhere inside or outside the head. If the stimuli fed to the ears are equally intense (equally loud) and are conveyed simultaneously, we perceive one sound arising from the middle of the head. If the volume is lowered in just one ear or if delivery to that ear is delayed, the source seems to move in the direction of the opposite ear.


This much has long been known. What is less clear is how the brain manages to detect disparities in timing and intensity and how it combines the resulting information into a unified spatial perception. My colleagues and I at the California Institute of Technology have been exploring this question for more than 25 years by studying the behavior and brain of the barn owl (Tyto alba). We have uncovered almost every step of the computational process in these animals. (The only other vertebrate sensory system that has been defined as completely belongs to a fish.) We find that the owl brain combines aural signals relating to location not all at once but through an amazing series of steps. Information about timing and intensity is processed separately in parallel pathways that converge only at a late stage. It is highly probable that humans and other mammals achieve binaural fusion in much the same manner.

Turning Heads

I FIRST THOUGHT of examining the neural basis of sound location in owls in 1963, when I heard Roger S. Payne, now at the Whale Conservation Institute in Lincoln, Mass., report that the barn owl can catch a mouse readily in darkness, solely by relying on acoustic cues. I had recently earned a doctorate in zoology and wanted to know more about how animals identify the position of a sound source, but I had yet to choose a species to study. Three years later, at Princeton University, I observed the exquisite aural abilities of barn owls for myself after I obtained three of them from a bird-watcher. When I watched one of the owls through an infrared-sensitive video camera in a totally dark room, I was impressed by the speed and accuracy with which it turned its head toward a noise. I concluded that the head-turning response might help uncover whether such animals use binaural fusion in locating sound. If they did, studies of their brain could help elucidate how such fusion is accomplished.

As I had anticipated, the head-turning response did prove extremely useful to me and my postdoctoral fellows, particularly after I established a laboratory at Caltech in 1975. In some of our earliest research there, Eric I. Knudsen, now at Stanford University, and I obtained indirect evidence that barn owls, like humans, must merge information from the two ears to locate a sound. When one ear was plugged, the animals turned the head in response to noise from a loudspeaker, but they did not center on the speaker.

In the early 1980s Andrew Moiseff and I additionally showed that the barn owl extracts directional information from disparities in the timing and the intensity of signals reaching the two ears--technically called interaural time differences and interaural intensity differences. As part of that effort, we measured the differences that arose as we moved a speaker across the surface of an imaginary globe around an owl's head. Microphones we had placed in the ears relayed the signals reaching each ear to a device that measured arrival time and volume. When we eased the speaker from the midline of the face (zero angle) 90 degrees to the left or right, the difference in arrival time at the two ears increased systematically. Those results resembled the findings of human studies.

In contrast to human findings, the difference in intensity did not vary appreciably as the speaker was moved horizontally. But it did increase as the speaker was moved up or down from eye level--at least when the sound included waves of frequencies higher than three kilohertz, or 3,000 cycles per second. Payne, who had seen the same intensity changes in earlier studies, has attributed them, apparently correctly, to an asymmetry in the placement of the owl's ears. The left ear is higher than eye level but points downward, whereas the right ear is lower but points upward. The net result is that the left ear is more sensitive to sounds coming from below, and the right is more sensitive to sounds from above.

Satisfied that arrival time and intensity often differ for the two ears, we could go on to determine whether the owl actually uses specific combinations of disparities in locating sound sources. We intended to put a standard headset on tame animals and to convey a noise separately to each ear, varying the difference in delivery time or volume, or both. We would then see whether particular combinations of time and intensity differences caused the animals to turn the head reliably in specific directions. Unfortunately, we did not receive cooperation from our subjects. When we tried to affix the earphones, each owl we approached shook its head and backed off. We managed to proceed only after we acquired tiny earphones that could be inserted into the owl's ear canal.

We also had to devise a way to measure the direction of head turning, determining both the horizontal and vertical components of the response to each set of stimuli. We solved the problem mainly by applying the search-coil technique that Gary G. Blasdel, now at Northwestern University's Feinberg School of Medicine, had designed a few years earlier. We fit two small coils of copper wire, arranged perpendicularly to each other, on an owl's head. We positioned the owl between two big coils carrying electric current. As the head moved, the large coils induced currents in the small ones. Variations in the flow of current in the smaller coils revealed both the horizontal and vertical angles of the head turning.
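
The principle behind the search-coil measurement can be illustrated for a single rotation axis. In the toy sketch below, the signal induced in a head coil is taken to be proportional to the cosine of the angle between the coil's axis and the field of the large coils, and the perpendicular coil's signal to the sine; the gains are assumed equal and noise is ignored, whereas the real apparatus recovers both horizontal and vertical angles.

```python
import math

def head_angle_deg(v_aligned, v_perpendicular):
    """Toy version of the search-coil readout for one rotation axis.

    v_aligned: signal from the head coil initially aligned with the field
               (proportional to the cosine of the head angle).
    v_perpendicular: signal from the perpendicular head coil
               (proportional to the sine of the head angle).
    """
    return math.degrees(math.atan2(v_perpendicular, v_aligned))

# Readings of 0.87 and 0.50 on the two coils imply a turn of about 30 degrees.
print(round(head_angle_deg(0.87, 0.50)))   # -> 30
```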

Sure enough, the owl responded rapidly to signals from the earphones, just as if it had heard noise arising from outside the head. When the sound in one ear preceded that in the other ear, the head turned in the direction of the leading ear. More precisely, if we held the volume constant but issued the sound to one ear slightly before the other ear, the owl turned its head mostly in the horizontal direction. The longer we delayed delivering the sound to the second ear, the further the head turned.

Similarly, if we varied intensity but held timing constant, the owl tended to move its head up or down. If we issued sounds so that both the delivery time and the intensity of signals to the left ear differed from those of the right, the owl moved its head horizontally and vertically. Indeed, combinations of interaural timing and intensity differences that mimicked the combinations generated from a speaker at particular sites caused the animal to turn toward exactly those same sites. We could therefore be confident that the owl brain does fuse timing and intensity data to determine the horizontal and vertical coordinates of a sound source. The process by which barn owls calculate distance is less clear.
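
The division of labor suggested by these experiments can be caricatured as two independent conversions, one from timing difference to azimuth and one from intensity difference to elevation. The conversion factors and sign conventions below are hypothetical, chosen only to make the idea concrete.

```python
def head_turn_from_cues(itd_us, iid_db,
                        deg_per_us=0.4,   # hypothetical conversion factors,
                        deg_per_db=5.0):  # not values measured in the owl
    """Cartoon of the cue-to-coordinate mapping inferred from the earphone
    experiments: the interaural time difference sets mainly the horizontal
    component of the head turn, the interaural intensity difference mainly
    the vertical one.  Conventions (assumed): positive ITD means the right
    ear leads, so the owl turns right; positive IID means the sound is louder
    in the right ear, which for the barn owl signals a source above eye level.
    """
    azimuth_deg = deg_per_us * itd_us
    elevation_deg = deg_per_db * iid_db
    return azimuth_deg, elevation_deg

# Right ear leads by 50 microseconds and hears the sound 3 decibels louder:
print(head_turn_from_cues(50, 3))   # -> (20.0, 15.0), i.e., up and to the right
```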

Fields in Space

TO LEARN HOW the brain carries out binaural fusion, we had to examine the brain itself. Our research plan built on work Knudsen and I had completed several years earlier. We had identified cells that are now known to be critical to sound location. Called space-specific neurons, they react only to acoustic stimuli originating from specific receptive fields, or restricted areas in space [see box on opposite page]. These neurons reside in a region of the brain called the external nucleus, which is situated within the auditory area of the midbrain (the equivalent of the mammalian inferior colliculus). Collectively, the space-specific neurons in the left external nucleus form a map of primarily the right side of auditory space (the broad region in space from which sounds can be detected), and those of the right external nucleus form a map of primarily the left half of auditory space, although there is some overlap.

We identified the space-specific cells by resting a microelectrode, which resembles a sewing needle, on single neurons in the brain of an anesthetized animal. As we held the electrode in place, we maneuvered a speaker across the surface of our imaginary globe around the owl's head. Certain neurons fired impulses only if the noise emanated from a particular receptive field. For instance, in an owl facing forward, one space-specific neuron might respond only if a speaker were placed within a receptive field extending roughly 20 degrees to the left of the owl's line of sight and some 15 degrees above or below it. A different neuron would fire when the speaker was transferred elsewhere on the globe.
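
A space-specific neuron can be pictured as a simple gate on source direction. The sketch below uses hypothetical field boundaries loosely modeled on the example in the text; real receptive fields are not crisp rectangles.

```python
def fires(azimuth_deg, elevation_deg,
          center_az=-20.0, center_el=0.0,        # hypothetical field center
          half_width_az=10.0, half_width_el=15.0):
    """Toy receptive-field test for a space-specific neuron: the cell responds
    only when the source direction falls inside its patch of auditory space.
    Negative azimuth means left of the owl's line of sight.
    """
    return (abs(azimuth_deg - center_az) <= half_width_az and
            abs(elevation_deg - center_el) <= half_width_el)

print(fires(-22.0, 5.0))   # True: speaker inside the field, so the neuron fires
print(fires(30.0, 5.0))    # False: speaker elsewhere on the globe, it stays quiet
```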

How did these neurons obtain directional information? Did they process the relevant cues themselves? Or were the cues extracted and combined to some extent at one or more lower way stations (relay centers) in the brain [see box on pages 34 and 35], after which the results were simply fed upward?

Moiseff and I intended to answer these questions by carrying out experiments in which we would deliver sounds through earphones. But first we had to be certain that signals able to excite particular space-specific neurons truly mimicked the interaural time and intensity differences that caused the neurons to fire under more natural conditions--namely, when a sound emanated from a spot in the neuron's receptive field. A series of tests gave us the encouragement we needed. In these studies, we issued sounds through the earphones and monitored the response of individual neurons by again holding a microelectrode on or near the cells. As we hoped, we found that cells responded to specific combinations of signals. Further, the sets of timing and intensity differences that triggered strong firing by space-specific neurons corresponded exactly to the combinations that caused an owl to turn its head toward a spot in the neuron's receptive field. This congruence affirmed that our proposed approach was sensible.

In our initial efforts to trace the steps by which the brain circuitry accomplishes binaural fusion, Moiseff and I tried to find neurons sensitive to interaural timing or intensity differences in the way stations that relay signals from the auditory nerve up to the midbrain. These preliminary investigations, completed in 1983, suggested that certain stations are sensitive only to timing cues, whereas others are sensitive solely to intensity cues. The brain, it seemed, functioned like a parallel computer, processing information about timing and intensity through separate circuits.

Parallel Processing

SUCH CLUES led us to seek further evidence of parallel processing. Joined by Terry T. Takahashi, now at the University of Oregon, we began by examining the functioning of the lowest way stations in the brain--the cochlear nuclei. Each cerebral hemisphere has two: the magnocellular nucleus and the angular nucleus. In owls, as in other birds, each fiber of the auditory nerve--that is, each signal-conveying axon projecting from a neuron in the ear--divides into two branches after leaving the ear. One branch enters the magnocellular nucleus; the other enters the angular nucleus.

We wondered how the space-specific neurons would behave if we prevented nerve cells from firing in one of the two cochlear nuclei. We therefore injected a minute amount of a local anesthetic into either the magnocellular or angular nucleus. The results were dramatic: the drug in the magnocellular nucleus altered the response of space-specific neurons to interaural time differences without affecting the response to intensity differences. The converse occurred when the angular nucleus received the drug. Evidently, timing and intensity are indeed processed separately, at least at the lowest way stations of the brain; the magnocellular neurons convey timing data, and the angular neurons convey intensity data.

These exciting results spurred me to ask Takahashi to map the trajectories of the neurons that connect way stations in the auditory system. His work eventually revealed that two separate pathways extend from the cochlear nuclei to the midbrain. The anatomical evidence, then, added further support to the parallel-processing model.

While Takahashi was conducting his mapping research, W. E. Sullivan and I explored the ways magnocellular and angular nuclei extract timing and intensity information from signals arriving from the auditory nerve. To understand our discoveries, one must be aware that most sounds in nature are made up of several waves, each having a different frequency. When the waves reach a receptive surface in the ear, called the basilar membrane, the membrane begins to vibrate, but not uniformly. Different parts of the membrane vibrate maximally in response to particular frequencies. In turn, neurons that are connected to the maximally vibrating areas (and thus are tuned to specific frequencies) become excited. These neurons propagate impulses along the auditory nerve to the brain.

We and others find that the intensity of a sound wave of a given frequency is conveyed to the brain from the ear by the firing rate of auditory neurons tuned to that frequency. This much makes intuitive sense. Our next result is less obvious. Neurons of the auditory nerve also exhibit what is called phase locking: they fire at characteristic points, or phase angles, along the sound wave [see bottom illustration in box at left]. That is, a neuron tuned to one frequency will tend to fire, for example, when the wave is at baseline (zero degrees), although it does not necessarily fire every time the wave reaches that position. A neuron tuned to a different frequency will tend to fire at a different phase angle, such as when a wave is cresting (at the point called 90 degrees, which is a quarter of the way through a full 360-degree wave cycle), or reaches some other specific point. In both ears, impulses produced by neurons tuned to the same frequency will lock to the same phase angle. But, depending on when the signals reach the ears, the train of impulses generated in one ear may be delayed relative to the impulse train generated in the opposite ear.
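
A toy simulation makes the idea of phase locking concrete. In the sketch below, a fiber tuned to a given frequency fires on only a fraction of the wave's cycles, but every spike falls at the fiber's characteristic phase angle; the firing probability and other parameters are invented for illustration.

```python
import random

def phase_locked_spike_train(freq_hz, preferred_phase_deg, duration_s,
                             firing_prob=0.3, seed=1):
    """Toy phase-locked auditory-nerve fiber: it skips many cycles, but when
    it does fire, the spike lands at the same phase angle of the sound wave.
    """
    rng = random.Random(seed)
    period_s = 1.0 / freq_hz
    offset_s = (preferred_phase_deg / 360.0) * period_s
    spikes = []
    cycle_start = 0.0
    while cycle_start < duration_s:
        if rng.random() < firing_prob:
            spikes.append(cycle_start + offset_s)
        cycle_start += period_s
    return spikes

# Fibers tuned to the same frequency in the two ears lock to the same phase;
# if the sound reaches the right ear 100 microseconds late, the right-ear
# train is simply the left-ear train shifted by 100 microseconds.
left_train = phase_locked_spike_train(500, 90, 0.02)
right_train = [t + 100e-6 for t in left_train]
```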

It turns out that cells of the magnocellular nucleus exhibit phase locking. But they are insensitive to intensity; changes in the volume of a tone do not affect the rate of firing. In contrast, few angular neurons show phase locking, although they respond distinctly to changes in intensity. These and other results indicate that the owl depends on trains of phase-locked impulses relayed from the magnocellular nucleus for measuring interaural time differences, and the animal relies on the rate of impulses fired by the angular nucleus for gauging interaural intensity differences. Overall, then, our analyses of the lowest way stations of the brain established that the cochlear nuclei serve as filters that pass along information about timing or intensity, but not both.

Way Stations in the Brain

WE THEN PROCEEDED to explore higher regions, pursuing how the brain handles timing data in particular. Other studies, discussed later, addressed intensity. We learned that when phase-locked impulses induced by sound waves of a single frequency (a pure tone) leave the magnocellular nucleus on each side of the brain, they travel to a second way station: the laminar nucleus. Impulses from each ear are transmitted to the nucleus on both the opposite and the same side of the head. The laminar nucleus is, therefore, the first site where information from the two ears comes together.

The general problem of how the brain combines timing data has been a subject of speculation for decades. Lloyd A. Jeffress put forth a reasonable model in 1948, while spending a sabbatical leave at Caltech. Jeffress proposed that the nerve fibers carrying time-related signals from the ears (called delay lines) vary in how rapidly they deliver signals to way stations in the brain. They ultimately converge at neurons (known as coincidence detectors) that fire only when impulses from the two sides arrive simultaneously.

Signals reaching the ears at different times would attain coincidence--arrive at coincidence detectors in unison--if the sum of a sound wave's transit time to an ear and the travel time of impulses emanating from that ear to a coincidence detector were equal for the two sides of the head. Consider a sound that reached the left ear five microseconds before it reached the right ear. Impulses from the two ears would meet simultaneously at a coincidence detector in, say, the right hemisphere if the delay lines from the left ear (the near ear) prolonged the transit time of impulses from that ear to a coincidence detector by five microseconds over the time it would take impulses to traverse fibers from the right ear [see top illustration in box on opposite page].
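
The scheme can be sketched as a bank of candidate internal delays: shift one ear's spike train by each candidate, count near-coincident spikes, and the delay that wins reports the interaural time difference. The code below is a minimal illustration of the Jeffress idea, with made-up spike times and an arbitrary coincidence window; it is not a model of the owl's actual circuitry.

```python
def best_coincidence_delay_us(left_spikes_s, right_spikes_s,
                              candidate_delays_us, window_us=10):
    """Jeffress-style readout: delay the left-ear train by each candidate
    internal delay and count spike pairs that arrive within the coincidence
    window.  The winning delay compensates (and thereby reports) the
    interaural time difference.
    """
    def coincidences(delay_us):
        shifted = [t + delay_us * 1e-6 for t in left_spikes_s]
        return sum(1 for ts in shifted
                   if any(abs(ts - tr) <= window_us * 1e-6
                          for tr in right_spikes_s))
    return max(candidate_delays_us, key=coincidences)

# Sound reaches the left ear 100 microseconds before the right ear, so a
# 100-microsecond delay on the left-ear line produces the most coincidences.
left = [0.000, 0.002, 0.004, 0.006]              # spike times in seconds
right = [t + 100e-6 for t in left]
print(best_coincidence_delay_us(left, right, range(0, 201, 20)))   # -> 100
```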

Since 1948, physiological studies examining neuronal firing in dogs and cats and anatomical studies of chicken brains have suggested that the brain does in fact measure interaural time differences by means of delay lines and coincidence detection. In 1986 Catherine E. Carr, now at the University of Maryland, and I demonstrated in the barn owl that nerve fibers from magnocellular neurons serve as delay lines and neurons of the laminar nucleus serve as coincidence detectors.

Firing Squad

BUT THE OWL'S detection circuit, like those of the mammals that have been examined, differs somewhat from the Jeffress model. Neurons of the laminar nucleus respond most strongly to coincidence brought about by particular time differences. Yet they also respond, albeit less strongly, to signals that miss perfect coincidence. The number of impulses declines gradually as the interaural time difference increases or decreases from the value that produces coincidence--that is, until the waves reaching one ear are 180 degrees (half a cycle) out of phase from the position that would bring about coincidence. At that point, firing virtually ceases. (The neurons also respond, at an intermediate level, to signals delivered to just one ear.)

In a way, then, coincidence detectors, by virtue of the delay lines feeding them, can be said to be maximally sensitive to specific time differences. They are not, however, totally selective as to when they produce a peak response. They can be induced to fire with rising strength as the phase difference increases beyond 180 degrees from the value that produces coincidence. When the displacement reaches a full 360 degrees, the arrival time of sound waves at one ear is delayed by the time it takes for a sound wave to complete a full cycle. In that situation, and at every 360-degree difference, coincidence detectors will repeatedly be hit by a series of synchronous impulses and will fire maximally. Thus, the same cell can react to more than one time difference. This phenomenon is called phase ambiguity.
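
Both properties, the gradual fall-off around the best delay and the repeating peaks every full cycle, can be captured by a cosine-shaped tuning curve. The sketch below is only a cartoon: the firing rates are invented, and real laminar neurons also respond at an intermediate level to monaural input.

```python
import math

def coincidence_response(itd_us, best_itd_us, freq_hz, max_rate=100.0):
    """Cartoon tuning curve for a coincidence detector driven by a pure tone:
    the rate peaks when the interaural time difference equals the cell's best
    delay (or differs from it by a whole period), declines gradually as the
    mismatch grows, and is near zero half a period away.
    """
    period_us = 1e6 / freq_hz
    phase = 2 * math.pi * (itd_us - best_itd_us) / period_us
    return max_rate * (1 + math.cos(phase)) / 2

# For an 8-kilohertz tone (period 125 microseconds), a cell with a best delay
# of 50 microseconds also fires maximally at 175 microseconds: phase ambiguity.
for offset in (0, 62.5, 125):
    print(offset, round(coincidence_response(50 + offset, 50, 8000), 1))
# -> 0 100.0 | 62.5 0.0 | 125 100.0
```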

After a coincidence detector in the laminar nucleus on one side of the brain determines the interaural time difference produced by a sound of a given frequency, it simply passes the result upward to higher stations, including the core region of the midbrain auditory area on the opposite side of the head. Consequently, the higher areas inherit from the laminar nucleus not only selectivity for frequency and interaural time differences but also phase ambiguity. The information in the core, in turn, is passed to a surrounding area--known as the shell of the midbrain auditory area--on the opposite side of the brain, where it is finally combined with information about intensity.

The Intensity Pathway

MY COLLEAGUES AND I understand less about the operation of the intensity pathway that converges with the time pathway in the shell. But we have made good progress. Unlike the magnocellular nucleus, which projects up only one stage, to the laminar nucleus, the intensity-detecting angular nucleus projects directly to many higher stations (though not to the external nucleus). Among them is the posterior lateral lemniscal nucleus.

The posterior lateral lemniscal nucleus receives inhibitory signals from its counterpart on the other side of the head and gets excitatory signals from the angular nucleus on that side as well. The balance between excitatory and inhibitory signals determines the rate at which the lemniscal neurons fire in response to intensity differences in sound between the ears. Geoffrey A. Manley and Christine Köppl of the Technical University of Munich showed in my laboratory that the strength of the inhibition declines systematically from dorsal to ventral within the nucleus. This finding indicates that neurons selective for different intensity disparities form an orderly array within the nucleus. To the barn owl, sounds that are louder in the left ear indicate down, and sounds that are louder in the right ear indicate up. The posterior lemniscal nucleus is, therefore, the first site that computes and maps sound source elevation.
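
The arrangement can be caricatured as a row of neurons that all receive the same excitation and inhibition but weight the inhibition differently, so that each responds best to a different interaural intensity difference. All the numbers below are invented; only the excitation-versus-inhibition logic comes from the experiments described above.

```python
def lemniscal_rate(contra_level_db, ipsi_level_db, inhibition_weight):
    """Cartoon of a posterior lateral lemniscal neuron: excitation grows with
    the sound level at one ear, inhibition (relayed from the counterpart
    nucleus) with the level at the other ear, and the firing rate reflects
    the balance.  Negative drive is clipped to silence.
    """
    return max(0.0, contra_level_db - inhibition_weight * ipsi_level_db)

# A dorsoventral array with gradually weakening inhibition: for one and the
# same pair of sound levels, different neurons in the array fire at different
# rates, forming a map of interaural intensity difference (and thus elevation).
for weight in (1.4, 1.2, 1.0, 0.8, 0.6):
    print(weight, lemniscal_rate(contra_level_db=40, ipsi_level_db=35,
                                 inhibition_weight=weight))
```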

The next higher station is the lateral shell of the midbrain auditory area; neurons from the posterior lemniscal nucleus on each side of the brain send signals to the shell in the opposite hemisphere. In the shell, most neurons respond strongly to both interaural intensity and interaural timing differences generated by sounds within a narrow range of frequencies. This station does not provide the owl with sufficient information to ensure accurate sound location, however, because phase ambiguity persists.

The ambiguity disappears only at the level of the external nucleus, home of the space-specific neurons. These neurons are broadly tuned to frequency, receiving timing and intensity data from many frequency channels. This convergence supplies the input needed for the brain to select the correct coordinates of a sound source. The selectivity of space-specific neurons, then, results from the parallel processing of time and intensity data and from the combination of the results in the shell and in the external nucleus itself.
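
Why convergence across frequencies removes the ambiguity can be seen with a simple voting scheme: each frequency channel proposes the true interaural time difference plus its period-shifted aliases, and only the true value appears in every channel. The sketch below is purely illustrative; real space-specific neurons combine graded excitation and inhibition rather than counting discrete votes, and the 180-microsecond range is an assumed physiological limit.

```python
from collections import Counter

def resolve_itd_us(true_itd_us, channel_freqs_hz, max_itd_us=180):
    """Across-frequency convergence as a vote: every channel lists the ITDs
    it cannot distinguish (the true one plus whole-period shifts), and the
    value shared by all channels collects the most votes.
    """
    votes = Counter()
    for freq in channel_freqs_hz:
        period_us = 1e6 / freq
        aliases = set()
        k = 0
        while k * period_us <= 2 * max_itd_us:
            for itd in (true_itd_us + k * period_us,
                        true_itd_us - k * period_us):
                if abs(itd) <= max_itd_us:
                    aliases.add(round(itd, 1))
            k += 1
        votes.update(aliases)
    return votes.most_common(1)[0][0]

# Five channels between 4 and 8 kilohertz disagree on the aliases but all
# agree on the true difference of 50 microseconds.
print(resolve_itd_us(50, [4000, 5000, 6000, 7000, 8000]))   # -> 50.0
```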

We have not yet resolved the number of space-specific neurons that must fire in order for an owl to turn its head toward a sound source. Nevertheless, we know that individual neurons can carry the needed spatial data. This fact belies the view of some researchers that single neurons cannot represent such complex information and that perceptions arise only when whole groups of cells that reveal nothing on their own fire impulses collectively in a particular pattern.

Neural Algorithms

TOGETHER our neurological explorations have elucidated much of the algorithm, or step-by-step protocol, by which the owl brain achieves binaural fusion. Presumably, we humans follow essentially the same algorithm (although some of the processing stations might differ). Recall, for example, that several lines of evidence suggest mammals rely on delay lines and coincidence detection in locating sounds.

We can extrapolate even further. The only other neural algorithm for a sensory task that has been deciphered in equal detail is one followed by electricity-emitting fish of the genus Eigenmannia. Walter F. Heiligenberg of the University of California, San Diego, and his associates worked out the rules enabling these fish to determine whether their electric waves are of higher or lower frequency than those of other Eigenmannia in the immediate vicinity. (In response, a fish might alter the frequency of the wave it emits.) Like the owl, Eigenmannia rely on parallel pathways that process different attributes of the sensory signal separately. Also, relevant information is processed in steps; the parallel pathways converge at a high station; and neurons at the top of the hierarchy respond selectively to precise combinations of cues. The fish algorithm is thus remarkably similar to that of the barn owl, even though the problems that are solved, the sensory systems involved, the sites of processing in the brain and the species are different. The similarities suggest that brains follow certain general rules for information processing that are common to different sensory systems and species.

Carver A. Mead, here at Caltech, thinks the owl algorithm may also teach something to designers of analog silicon chips, otherwise known as VLSI (Very Large Scale Integrated) circuits. In 1988 he and John Lazzaro, then his graduate student, constructed an owl chip that reproduces the steps through which the barn owl measures interaural time differences. The model, about 73 square millimeters in area, contains only 64 auditory nerve fibers in each ear (many fewer than truly exist) and some 21,000 delay lines. (It also has 200,000 transistors, mainly to regulate the delay lines.) Even in its pared-down version, the electronic nervous system takes up much more space and energy than does the biological system. Historically, engineers have constructed chips according to principles drawn from electronics, physics and chemistry. The economy of the biological circuit suggests that natural principles may help engineers build analog chips that consume less energy and take up less space than usual.

My laboratory's research into the owl brain is by no means finished. Beyond filling in some of the gaps in our knowledge of binaural fusion, we hope to begin addressing other problems. For example, the late Alvin M. Liberman of Haskins Laboratories in New Haven, Conn., proposed that the human brain processes speech sounds separately from nonspeech sounds. By the same token, we can ask whether the owl separately processes signals for sound location and other acoustic information. Some brain stations that participate in spatial orientation may also take part in other sensory activities, such as making owls selectively attuned to the calls of mates and chicks. How does the owl, using one set of neurons, sort out the algorithms for different sensory tasks? By solving such riddles for the owl, we should begin to answer some of the big questions that relate to more complex brains and, perhaps, to all brains.

THE AUTHOR

MASAKAZU KONISHI has been Bing Professor of Behavioral Biology at the California Institute of Technology since 1980. He earned a doctorate in zoology from the University of California, Berkeley, in 1963. Three years later he joined the faculty of Princeton University, where he studied hearing and vocalization in songbirds as well as sound localization in owls. Konishi moved to Caltech as a professor in 1975. In his free time, he enjoys hiking, skiing and training dogs for sheepherding.