Essay

Against All Odds

When Ingeborg and Erwin Hochmair started developing a cochlear implant that would allow users to understand speech, they were told it would never work. Here, Erwin describes how, through ingenuity, belief and dogged determination, they proved the critics wrong.



by Erwin Hochmair

"The auditory nerve has 20,000 fibres – and you want to build a cochlear implant (CI) with eight channels? It will never work!” This was the verdict of an esteemed physiologist back in the mid 1970s when we began our work. He wasn’t the only sceptic – physiologists in particular knew about the complexity of the hearing system and the fragility of the inner ear. The view was that eight channels would be nowhere near enough to stimulate all the fibres of the auditory nerve.

In 1975, Ingeborg (then Desoyer) and I approached the task far more optimistically when Kurt Burian, head of the Second Ear, Nose and Throat University Clinic at Vienna General Hospital, asked us whether we might be interested in developing a CI. He had heard of similar work going on in the USA.

Ingeborg had only just finished her thesis in electrical engineering at the Vienna University of Technology. She was immediately enthusiastic about the idea, and her positivity was infectious. We regarded electrical stimulation of the auditory nerve as a problem that could be solved by engineering means. Luckily, I had already worked at great length with semiconductors, integrated circuits and circuit technology as an assistant at the Institute for Physical Electronics. During two years as a visiting researcher at NASA’s Marshall Space Flight Center in Huntsville, Alabama, USA, I had developed circuits using technology that seemed well suited to the circuitry of an implant.

Illustration: Speech. © Lym Moreno

The first implant

On 16 December 1977, after an incredibly short development period of just two years, our first CI was implanted by Kurt Burian. It was the world’s first microelectronic multichannel cochlear implant. The advantage of multiple channels is that they allow the user to hear different pitches, so high and low tones are actually heard as high or low rather than as the monotone heard by users of the first single-channel implants.

To protect the electronics from the damp biological surroundings of the inner ear, we placed the circuitry inside an airtight case with airtight cable exits. This implant was the prototype for all modern systems, since it already had all the characteristics of today’s devices. If the Continuous Interleaved Sampling (CIS) coding strategy – which makes speech easier to understand – had already been devised then, I am sure our very first patient could have understood a few words. Instead, he could perceive the rhythms of sentences and sounds, but no speech.

Understanding speech

The possibilities for encoding speech are broad. Initially we carried out psychoacoustic measurements and experiments: we played specific tones to CI users and asked for their feedback – for instance, whether a sound was loud or sharp. The breakthrough came with a system developed a short time later. It had only four stimulation channels, yet it enabled the understanding of single words in quiet surroundings. In March 1980, patient CK took a pocket speech processor home; she was the first person in the world able to understand speech via a portable processor.

Over the following decades, hearing with implants has improved dramatically. One reason was progress in speech coding – the way the speech processor splits incoming sound into frequency bands and converts it into electrical signals. Another milestone was bilateral implantation, in which both ears are implanted. Hearing on both sides means you can understand speech better – even in noisy environments – and determine the direction of a sound, which is especially beneficial in road traffic. Over the past few years, the devices have been refined even further to make the sound signal clearer and easier to perceive.
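To make the idea of speech coding a little more concrete, here is a minimal sketch in Python (with assumed band edges and sampling rate; it is not MED-EL’s actual algorithm) of how incoming sound can be split into frequency bands whose envelopes could, in principle, drive the individual stimulation channels.

```python
# A minimal, illustrative sketch (not MED-EL's actual implementation) of the
# band-splitting idea behind CIS-style coding: filter the incoming sound into
# a few frequency bands and extract each band's slowly varying envelope, which
# would then drive one stimulation channel.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16000  # assumed audio sampling rate in Hz

def band_envelopes(audio, edges=(200, 500, 1000, 2000, 4000, 7000)):
    """Split `audio` into adjacent frequency bands and return each band's envelope."""
    envelopes = []
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, audio)
        envelopes.append(np.abs(hilbert(band)))  # amplitude envelope of this band
    return np.array(envelopes)  # one row per (hypothetical) channel

# Example: a 1.5 kHz tone mainly excites the 1000-2000 Hz band.
t = np.arange(0, 0.1, 1 / FS)
env = band_envelopes(np.sin(2 * np.pi * 1500 * t))
print(env.mean(axis=1))  # average envelope amplitude per band
```

In an actual CIS strategy the channels are stimulated with brief, interleaved electrical pulses rather than continuous signals; that timing aspect is omitted from this sketch.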

But how can this great success in speech comprehension be explained? When we first started our work, not even we – and certainly no one else – dared to dream that we’d get this far. One important factor has been the brain’s ability to adapt, which had previously been hugely underestimated. Speech is also a robust communication tool: even if some details are lost, understanding is only slightly affected.

Illustration: Guitar. © Lym Moreno

The future

Compared to speech, music demands a lot more from the hearing system. One of MED-EL’s research projects aims to develop existing coding strategies even further. The goal is not only to get more enjoyment out of music but also to achieve even better sound localisation and speech understanding in noisy environments. Fully implantable systems are an attractive prospect for many people with hearing problems, too, which is why MED-EL is also pursuing development in this area. However, there are still hurdles to overcome, such as developing a rechargeable battery pack and high-quality microphones that pick up sound from outside the body rather than from inside it.
