
Periodic Reporting for period 2 - RobSpear (Robust Speech Encoding in Impaired Hearing)

Teaser

The prevalence of hearing impairment amongst the elderly is a stunning 33%, while the younger generation is vulnerable to noise-induced hearing loss through increasingly loud urban life and lifestyles. Yet, hearing impairment is inadequately diagnosed and treated because we fail...

Summary

The prevalence of hearing impairment amongst the elderly is a stunning 33%, while the younger generation is vulnerable to noise-induced hearing loss through increasingly loud urban life and lifestyles. Yet, hearing impairment is inadequately diagnosed and treated because we fail to understand how the components that constitute a hearing loss impact robust speech encoding.

In 2009, a ground-breaking discovery demonstrated that the most sensitive structures of the cochlea are the auditory-nerve fibers which synapse onto the inner-hair-cells. Until then, it was believed that damaged outer-hair-cells were the dominant source of sensorineural hearing loss, and that its diagnosis through a standard clinical audiogram sufficiently characterized listening difficulties among the hearing impaired. This new type of sensorineural hearing loss - cochlear synaptopathy (or cochlear neuropathy) - occurs after ageing, noise exposure or ototoxic drugs and permanently degrades the quality with which audible sound can be processed in challenging listening backgrounds (such as noisy restaurants). Because synaptopathy occurs before outer-hair-cells are damaged, its prevalence among the ageing and noise-exposed population is expected to be high, and much higher than predicted by clinically abnormal audiograms.

Synaptopathy poses a challenge for understanding how sensorineural hearing loss reduces speech perception, because (i) it can presently only be quantified using post-mortem histology techniques, and (ii) synaptopathy and outer-hair-cell deficits have different functional consequences for sound encoding; hence the new generation of hearing-aid algorithms should take both hearing deficits into account in their fitting strategies.

RobSpear aims to (i) develop non-invasive methods which quantify synaptopathy in humans and can, in the future, be adopted in clinical hearing diagnostics. The hearing profile we develop will quantify both the outer-hair-cell and the synaptopathy aspect of sensorineural hearing loss to yield an individualized hearing-loss profile which is much more sensitive than present practice, but needed if we want to provide the best-matching hearing-loss treatment. (ii) We use a combined computational-modeling, EEG and sound-perception approach to study how synaptopathy affects the robust coding of speech in noisy listening scenarios. This step is necessary to understand which aspect of sensorineural hearing loss most affects sound perception in the different frequency regions of auditory processing. Lastly (iii), the hearing-loss profile from (i) yields individualized computational models of auditory processing, which are used as front-ends for an individualized hearing-loss algorithm that will be optimized using model-based and machine-learning approaches to mitigate both the synaptopathy and the outer-hair-cell loss aspect of sensorineural hearing loss.

Using an interdisciplinary approach, RobSpear targets hearing deficits along the ascending stages of the auditory pathway to revolutionize how hearing impairment is diagnosed and treated. RobSpear can yield immense reductions in health-care costs through effective treatment of currently misdiagnosed patients, and it studies the impact of noise-induced hearing deficits on our society.

Work performed

We simultaneously progressed on three topic areas: (i) developing brainstem-EEG methods to diagnose and quantify synaptopathy in humans, (ii) understanding the relative weight of synaptopathy and outer-hair-cell deficits in degrading sound and speech perception after hearing damage, and (iii) developing a model- and machine-learning-based framework to design individualized hearing-restoration algorithms which mitigate both the synaptopathy and the outer-hair-cell loss aspect of sensorineural hearing damage.

(i) We developed auditory stimuli which target synaptopathy, based on simulations with a computational model of the human auditory periphery which predicts brainstem-EEG responses and their decline due to either outer-hair-cell loss or synaptopathy. We identified two paradigms, one based on the derived-band envelope-following response (Keshishzadeh et al., 2019, accepted) and one based on square-wave envelope-following responses (Vasilkov et al., ARO poster, manuscript in prep.). We experimentally validated how well our stimuli capture hearing deficits, frequency specificity and synaptopathy (even when outer-hair-cell deficits are also present) in listeners with normal audiograms, impaired audiograms and self-reported hearing difficulties. Two full-length research papers detailing these experiments are in preparation, and the work was recently presented at the International Hearing Loss Conference (podium talk) and the Association for Research in Otolaryngology meeting (posters). We furthermore reduced the measurement procedure to 1 hour and translated our paradigms to a clinical measurement setup, enabling recordings on a larger population of normal-hearing, hearing-impaired and tinnitus patients, which will be collected over the summer/fall of 2019.
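As a rough illustration of the second paradigm, the sketch below generates a rectangular-envelope (square-wave) amplitude-modulated tone and extracts an envelope-following-response (EFR) magnitude at the modulation frequency and its harmonics from an epoch-averaged EEG recording. All parameter values (sampling rates, carrier and modulation frequencies, duty cycle, number of harmonics) are illustrative assumptions, not the values used in the RobSpear recordings.

```python
import numpy as np
from scipy.signal import square

# --- Stimulus: rectangular-envelope (square-wave) amplitude-modulated tone ---
fs = 48000          # audio sampling rate (Hz), assumed
f_carrier = 4000.0  # carrier frequency (Hz), illustrative
f_mod = 120.0       # modulation rate (Hz), illustrative
dur = 0.4           # stimulus duration (s), illustrative

t = np.arange(int(fs * dur)) / fs
# Rectangular envelope between 0 and 1 with a 25% duty cycle (illustrative)
envelope = 0.5 * (square(2 * np.pi * f_mod * t, duty=0.25) + 1.0)
stimulus = envelope * np.sin(2 * np.pi * f_carrier * t)

# --- Analysis: EFR magnitude from an epoch-averaged EEG recording ---
def efr_magnitude(eeg_epochs, fs_eeg, f_mod, n_harmonics=4):
    """Sum the spectral magnitudes at f_mod and its first harmonics from
    the epoch-averaged response (a common EFR summary metric)."""
    avg = np.asarray(eeg_epochs).mean(axis=0)   # average over epochs
    spec = np.abs(np.fft.rfft(avg)) / len(avg)  # single-sided magnitude
    freqs = np.fft.rfftfreq(len(avg), 1.0 / fs_eeg)
    bins = [np.argmin(np.abs(freqs - k * f_mod))
            for k in range(1, n_harmonics + 1)]
    return spec[bins].sum()
```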

(ii) Because we (uniquely) include listeners with and without outer-hair-cell loss in our studies investigating the role of cochlear synaptopathy in degraded sound perception, we were able to show that synaptopathy is far more detrimental than outer-hair-cell loss to the perceptual cues necessary to perform two basic auditory perception tasks, namely amplitude-modulation detection and tone-in-noise detection (Verhulst et al., Acta Acustica, 2018; Osses et al., ICA, accepted). We used a model-based approach which includes both synaptopathy and outer-hair-cell loss simulations to reach this conclusion, and our results imply that it is not outer-hair-cell loss, but rather synaptopathy (which co-exists with outer-hair-cell deficits), which has a detrimental effect on the temporal precision with which audible sound is processed. Furthermore, our recent results extend this finding to the high-pass portions of speech-in-noise encoding (Garrett et al., in prep.), providing the first evidence that synaptopathy is important for speech encoding and is reflected in both brainstem-EEG metrics and sound perception.
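To make the model-based reasoning concrete, the toy sketch below shows one way synaptopathy can enter a population-response simulation: as a selective loss of low- and medium-spontaneous-rate auditory-nerve fibers contributing to the summed response, which reduces the modulation depth of the population response to an amplitude-modulated input. The rate function and fiber counts are illustrative stand-ins, not the RobSpear model equations.

```python
import numpy as np

fs = 10000.0                        # model sampling rate (Hz), assumed
t = np.arange(int(fs * 0.2)) / fs   # 200 ms of time
env = 0.5 * (1 + np.sin(2 * np.pi * 100 * t))  # 100-Hz AM envelope (0..1)

# Toy rate parameters per fiber type: (spontaneous rate, maximum rate)
fiber_types = {"LSR": (0.1, 150.0), "MSR": (10.0, 250.0), "HSR": (60.0, 300.0)}

def fiber_rate(env, spont, max_rate):
    """Illustrative saturating rate function standing in for a full
    auditory-nerve fiber model."""
    return spont + (max_rate - spont) * np.tanh(2.0 * env)

def population_response(counts):
    """Summed response of all fibers innervating one inner hair cell."""
    return sum(counts[k] * fiber_rate(env, *fiber_types[k]) for k in counts)

healthy = {"LSR": 3, "MSR": 3, "HSR": 13}        # approx. fibers per IHC
synaptopathic = {"LSR": 0, "MSR": 1, "HSR": 10}  # selective LSR/MSR loss

for label, counts in [("healthy", healthy), ("synaptopathy", synaptopathic)]:
    r = population_response(counts)
    depth = (r.max() - r.min()) / (r.max() + r.min())
    print(f"{label}: population modulation depth = {depth:.3f}")
```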

(ii) In the first period of RobSpear, we set up the closed-loop framework necessary for model and machine-learning based individualized hearing algorithms. Using the hearing loss profile from (i), we are presently developing a method which extracts individualized model parameters of synaptopathy and outer-hair-cell loss (Keshishzadeh et al., ISAAR abstract submitted). These individualized models are then placed in an optimization loop in which we optimize the signal processing which needs to be applied to the input speech to yield transformed speech at the output of the hearing-impaired model is identical to that of a reference normal-hearing model. One the one hand, we are using the simulated brainstem speech signal from the computational model to “manually” find appropriate signal processing strategies, and on the other, we are developing a machine-learning based method which will allow us to backpropagate through the system while minimizing a loss term at the level of the cochlea and the brainstem. To enable backpropagation, we developed a neural net approximation of our norma
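A minimal sketch of such a closed optimization loop, assuming PyTorch: Enhancer stands in for the trainable hearing-aid signal-processing stage, and two frozen convolutional layers stand in for the differentiable normal-hearing and individualized hearing-impaired periphery models. The loss matches the impaired model's output for the processed speech to the normal-hearing reference, and gradients flow only into the enhancer. All modules and parameters are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class Enhancer(nn.Module):
    """Trainable hearing-aid signal-processing stage (illustrative: a
    small 1-D convolutional network applied to the speech waveform)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 33, padding=16), nn.Tanh(),
            nn.Conv1d(16, 1, 33, padding=16),
        )

    def forward(self, x):
        return self.net(x)

# Placeholder periphery models: in the actual framework these would be
# differentiable (CoNNear-style) neural approximations of the normal-hearing
# and the individualized hearing-impaired auditory periphery.
nh_model = nn.Conv1d(1, 8, 65, padding=32)   # "normal-hearing" stand-in
hi_model = nn.Conv1d(1, 8, 65, padding=32)   # "hearing-impaired" stand-in
for p in list(nh_model.parameters()) + list(hi_model.parameters()):
    p.requires_grad_(False)                  # periphery models stay frozen

enhancer = Enhancer()
opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)

speech = torch.randn(8, 1, 4800)             # placeholder speech batch
for step in range(100):
    opt.zero_grad()
    processed = enhancer(speech)
    # Match the impaired model's output for the processed speech to the
    # normal-hearing reference; gradients flow only into the enhancer.
    loss = nn.functional.mse_loss(hi_model(processed), nh_model(speech))
    loss.backward()
    opt.step()
```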

Final results

Because we are using a computational modeling approach which incorporates the direct physiological evidence for synaptopathy (Kujawa and Liberman, 2019; Valero et al., 2017; Wu et al., 2018) into the model framework to predict how it impacts brainstem-EEG signals, we were well positioned to force a breakthrough in the field and develop EEG metrics which are most sensitive in isolating the synaptopathy aspect of sensorineural hearing loss. While several labs are still adopting purely experimental approaches in humans to investigate which stimuli are most promising (e.g. Bharadwaj et al., 2015; Bramhall et al., 2019; Prendergast et al., 2017, 2018), our model-based approach is faster, more specific and has yielded two different stimulus sets (Keshishzadeh et al., ARO, 2019; Vasilkov et al., ARO, 2019) which can be used for this purpose. On the basis of the results in these first two studies, we are planning a new data collection over the summer with a 1-hour optimized diagnostic test battery. These measurements will serve as a basis for the development of a numerical method which extracts the frequency-specific parameters of synaptopathy and outer-hair-cell loss, such that individualized models of the auditory periphery can be built (2019/2020). These models will then be used to develop individualized hearing-restoration strategies, which we will also validate experimentally in listeners with normal or impaired audiograms (2020/2021).
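As a rough sketch of what such a numerical method could look like, the example below fits per-band surviving-fiber fractions so that a toy forward model reproduces an individual's measured EFR magnitudes, with per-band outer-hair-cell gains treated as known (e.g. from audiogram-type measures). The forward model, baseline values and band structure are hypothetical placeholders for the actual periphery model and test battery.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative measured EFR magnitudes in three frequency bands
measured_efr = np.array([0.21, 0.15, 0.09])
# Per-band residual OHC gain (0..1), assumed known from audiogram-type
# measures; values are illustrative
ohc_gain = np.array([1.0, 0.8, 0.5])

def simulate_efr(fiber_frac):
    """Toy forward model: EFR magnitude per band scales with the surviving
    auditory-nerve fiber fraction and the residual OHC gain. A stand-in
    for running the full periphery model per band."""
    baseline = np.array([0.30, 0.28, 0.25])  # healthy-model EFRs, assumed
    return baseline * fiber_frac * (0.5 + 0.5 * ohc_gain)

def residuals(fiber_frac):
    return simulate_efr(fiber_frac) - measured_efr

# Bounded least-squares fit, starting near a healthy profile
fit = least_squares(residuals, x0=np.full(3, 0.9), bounds=(0.0, 1.0))
print("estimated per-band surviving fiber fractions:", fit.x)
```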

The same model-based approach was able to tackle the experimentally challenging task of identifying the role of synaptopathy in degraded sound perception. This aspect is crucial to decide which speech features need to be “enhanced” in hearing-aid algorithms, but since synaptopathy can currently only be diagnosed directly using post-mortem histology, experimental approaches can only speculate about its existence in humans. The model includes functional and physiological aspects of synaptopathy and outer-hair-cell loss and can study the role of each deficit in sound perception separately and in combination. So far, this approach has brought us to understand how the high-frequency portions of speech are encoded, which will yield “perceptually relevant” hearing-aid algorithms near the end of the project. We plan to build upon our successful model-based approach to also understand how the low-pass portions of speech-in-noise are encoded (crucial for reliable sound perception), which will provide important information on how the low-frequency information in hearing-aid algorithms should be processed.

Regarding the machine-learning approach to model individualized hearing-loss profiles and to embed them within a fully differentiable feedback loop, RobSpear has gone well beyond the state of the art. Not only are neural-net approaches which perform end-to-end speech enhancement still very rare, we have shown that our methods can work successfully (Baby et al., 2019, ICASSP) and in real time (CoNNear, patent application in prep.). Moreover, we are already one step further in setting up a real-time backpropagation framework for hearing-aid signal processing, which can truly become a game-changer in the hearing-aid field if the progress within RobSpear keeps this pace. Lastly, biophysically realistic computational models of the auditory periphery (Verhulst et al., 2018) are very slow to compute and hence rarely adopted in applications such as robotics, sound-perception modeling and speech enhancement. Our CoNNear model offers an answer to this problem and might in the future be widely adopted in those applications, hence our patent application. In the second phase of RobSpear, we plan to finalize the closed-loop system for individualized hearing-aid fitting, and we will test its success in restoring speech intelligibility in listeners with synaptopathy and combined synaptopathy/outer-hair-cell damage.
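As an illustration of what a fast neural-net stand-in for a biophysical cochlear model can look like, the sketch below uses a convolutional encoder-decoder that maps an audio waveform to one output channel per simulated cochlear section. The layer sizes, kernel widths and number of sections are illustrative assumptions, not the published CoNNear architecture.

```python
import torch
import torch.nn as nn

class CochlearCNN(nn.Module):
    """Toy encoder-decoder approximating a cochlear model: waveform in,
    one basilar-membrane output channel per cochlear section out."""
    def __init__(self, n_sections=201):
        super().__init__()
        # Strided convolutions downsample the waveform...
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 64, 15, stride=2, padding=7), nn.Tanh(),
            nn.Conv1d(64, 128, 15, stride=2, padding=7), nn.Tanh(),
        )
        # ...and transposed convolutions restore the time resolution,
        # mapping to one output channel per simulated cochlear section.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(128, 64, 16, stride=2, padding=7), nn.Tanh(),
            nn.ConvTranspose1d(64, n_sections, 16, stride=2, padding=7),
        )

    def forward(self, audio):            # audio: (batch, 1, time)
        return self.decoder(self.encoder(audio))

model = CochlearCNN()
bm_outputs = model(torch.randn(1, 1, 2048))   # shape: (1, 201, 2048)
```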

Website & more info

More info: https://www.ugent.be/en/research/research-ugent/trackrecord/trackrecord-h2020/erc-h2020/sarah-verhulst.htm