

Periodic Reporting for period 2 - ORIENT (Goal-directed eye-head coordination in dynamic multisensory environments)

Teaser

Problem: Rapid object identification is crucial for the survival of all organisms, but it poses daunting challenges when many stimuli compete for attention and multiple sensory and motor systems are involved in processing, programming, and generating an eye-head gaze-orienting...

Summary

Problem: Rapid object identification is crucial for the survival of all organisms, but it poses daunting challenges when many stimuli compete for attention and multiple sensory and motor systems are involved in processing, programming, and generating an eye-head gaze-orienting response to a selected goal. How do normal and sensory-impaired brains decide which signals to integrate (“goal”) and which to suppress (“distracter”)? Audiovisual (AV) integration only helps for spatially and temporally aligned stimuli. However, sensory inputs differ markedly in their reliability, reference frames, and processing delays, presenting the brain with considerable spatial-temporal uncertainty. Vision and audition use coordinates that misalign whenever the eyes and head move, and their sensory acuities vary across space and time in fundamentally different ways. As a result, assessing AV alignment poses major computational problems, which so far have only been studied for the simplest stimulus-response conditions.
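
To make the computational problem concrete, the short Python sketch below shows the standard reliability-weighted (maximum-likelihood) fusion rule for a visual and an auditory location estimate under independent Gaussian noise. The stimulus locations and variances are hypothetical illustration values, not project data, and the rule is the textbook cue-combination scheme rather than the Action's own model.

    def fuse_av(x_vis, var_vis, x_aud, var_aud):
        """Reliability-weighted (maximum-likelihood) fusion of a visual and an
        auditory location estimate, assuming independent Gaussian noise.
        Each cue is weighted by its inverse variance; the fused variance is
        smaller than either unimodal variance."""
        w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
        x_fused = w_vis * x_vis + (1.0 - w_vis) * x_aud
        var_fused = 1.0 / (1.0 / var_vis + 1.0 / var_aud)
        return x_fused, var_fused

    # Hypothetical numbers: vision at 10 deg (variance 1 deg^2) is more
    # reliable than audition at 14 deg (variance 4 deg^2).
    x_fused, var_fused = fuse_av(10.0, 1.0, 14.0, 4.0)
    print(f"fused estimate: {x_fused:.1f} deg, variance: {var_fused:.2f} deg^2")
    # -> fused estimate: 10.8 deg, variance: 0.80 deg^2

With these assumed numbers the fused estimate lies closer to the more reliable visual cue and its variance is lower than either unimodal variance, which is the behavioural signature typically tested in AV-integration experiments.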

Impact: Understanding the underlying neuro-computational principles and control mechanisms is crucial for diagnosing and alleviating disorders in sensory-impaired or motor-impaired patients, and for understanding and helping to overcome sensorimotor degradation in the elderly population.

Approach: We tackle these problems at different levels, by applying dynamic eye-head coordination paradigms in complex environments while systematically manipulating visual-vestibular-auditory context and uncertainty. We parametrically vary AV goal/distracter statistics, stimulus motion, and active vs. passively evoked body movements. We perform advanced psychophysics on healthy subjects and on patients with well-defined sensory (auditory, visual, or vestibular) disorders. We probe the sensorimotor strategies of normal and impaired systems by quantifying how they acquire priors about the (changing) environment and how they use feedback about actively or passively induced self-motion of the eyes and head. We challenge current eye-head control models by incorporating top-down adaptive processes and eye-head motor feedback into realistic cortical-midbrain networks. In a collaborative effort with the Robotics Institute in Lisbon, our computational modelling will be critically tested on an autonomously learning humanoid robot, equipped with binocular foveal vision, multiple-degree-of-freedom ocular and neck-muscular systems, and human-like audition.
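
To illustrate the class of eye-head control model referred to here, the Python sketch below simulates a toy gaze-error feedback loop in which eye and head each receive a share of the remaining gaze motor error, and the eye saturates at its oculomotor range. The gains, step count, target amplitude, and range limit are illustrative assumptions only and do not represent the Action's actual model.

    def simulate_gaze_shift(target=40.0, g_eye=0.06, g_head=0.015,
                            eye_range=35.0, n_steps=200):
        """Toy discrete-time gaze-error feedback loop (illustrative only).
        Gaze = eye-in-head + head-in-space; the remaining gaze motor error
        drives both plants, and the eye saturates at its oculomotor range."""
        eye = head = 0.0
        for _ in range(n_steps):
            error = target - (eye + head)        # dynamic gaze motor error
            eye = min(max(eye + g_eye * error, -eye_range), eye_range)
            head += g_head * error               # head follows more slowly
        return eye, head

    eye, head = simulate_gaze_shift()
    print(f"gaze = {eye + head:.1f} deg (eye {eye:.1f} deg, head {head:.1f} deg)")

In this toy run the 40-degree gaze shift is completed with roughly a 32-degree eye and an 8-degree head contribution. The project's models add top-down adaptive processes and realistic cortical-midbrain dynamics on top of such a feedback skeleton.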

Work performed

Midterm report (Months 1-30):

The project started in January 2017 with setting up the collaboration with the Visual Lab of the Robotics Institute at the Instituto Superior Técnico (IST) in Lisbon. The project coordinator paid several visits to IST, where the parties jointly drafted the Memorandum of Understanding (MoU) setting out the terms of the collaboration. The MoU was signed by the Deans of the Nijmegen and Lisbon universities in Sept. 2017. In Sept. 2018, IST was made a second beneficiary, enabling the partner to engage in the scientific activities of the Action. So far, 24 research papers have resulted from the work in this Action: 17 from Subproject 1, 6 from Subproject 2, and 1 from Subproject 3. Overall, the project is a great success.

Research progress:

Subproject 1: Human multisensory gaze control in complex environments: psychophysics.

The multisensory two-axis vestibular chair at the Faculty of Science (Radboud Research Facilities) became fully available for the psychophysical experiments of Subproject 1 in June 2018 (for a video, see: http://www.mbfys.ru.nl/~johnvo/OrientWeb/VestibChairDemo.mov). In Sept. 2017, J Heckman (PhD 1) was appointed on Subproject 1, and per Oct. 2018, A Barsingerhorn (Postdoc 1). Experiments on the neural mechanisms underlying sound localization in noisy environments and on the Bayesian mechanisms underlying audiovisual integration were published in 2017-2019: Van Opstal et al., 2017; Bremen et al., 2018; Van Bentum et al., 2017; Ege et al., 2018a,b, 2019; Zonooz et al., 2018a,b, 2019.
The PI has given several presentations on this work at international conferences and invited seminars, e.g. at the NCM meetings in Santa Fe, NM, USA, and in Toyama, Japan; in Rovereto, Italy; in Kosice, Slovakia; and in Alicante, Spain. PhD 1 presented his work at the Gordon Research Conference on eye movements in Lewiston, ME, USA. Postdoc 1 presented her results at the European Conference on Eye Movements in Alicante, Spain.
We hired Prof. A Snik per Sept. 1, 2017 on this Subproject as an expert audiologist, to work on sensory-deprived patients in collaboration with our applicants. He is a world-recognized expert on auditory technology and audiology, and fits perfectly with the Action's aims. In Jan. 2019, we attracted Francesca Rocchi (Italy) as Postdoc 2 to work on audio-visual psychophysics and plasticity/adaptation.
To set up the auditory patient work within Subproject 1 under the supervision of Prof. Snik, we hired two young PhD researchers for a period of six months (Jan-June 2019): S Sharma and S Ausili. They performed sound-localization studies in our lab with hearing-impaired patients equipped with a cochlear implant (unilateral or bilateral) and a hearing aid (so-called bimodal electro-acoustic hearing). So far, five publications have appeared from this work (Snik et al., 2019; Huinck et al., 2019; Vogt et al., 2018, 2019; Sharma et al., 2019; Ausili et al., 2019), and about six more papers are expected to follow soon.

Subproject 2: Computational modelling of eye-head gaze control.

During the first months of the Action (Feb-May 2017), the PI appointed B Kasap (PhD student) to work on a computational model of the midbrain by implementing a novel spiking neural network algorithm. Six manuscripts have arisen from this work: in J Neurophysiology (2018), Neurocomputing (2018), Front Appl Math and Stat (2018), and PLoS Comput Biol (2019), and two papers in Progr Brain Res Vols. 248 and 249. In April 2019, the PI appointed Postdoc 3 on this Subproject, Dr Arizoo Alizadeh (Iran), who will extend the spiking network modelling to 3D eye-head coordination.
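
As background for readers unfamiliar with spiking network models, the Python sketch below simulates a minimal population of leaky integrate-and-fire neurons driven by a noisy constant input. The parameters (membrane time constant, threshold, drive, population size) are generic textbook values; the sketch only shows the basic integrate-and-fire mechanism on which such networks are built and does not reproduce the published midbrain model.

    import numpy as np

    def lif_population(n=50, t_max=0.2, dt=1e-4, tau=0.02,
                       v_rest=-0.070, v_thresh=-0.050, v_reset=-0.070,
                       drive=0.025, noise_sd=0.005, seed=0):
        """Minimal leaky integrate-and-fire population (illustrative only).
        Each neuron integrates a noisy constant drive towards threshold;
        when the membrane potential crosses threshold, a spike is recorded
        and the potential is reset."""
        rng = np.random.default_rng(seed)
        v = np.full(n, v_rest)
        spike_count = 0
        for _ in range(int(t_max / dt)):
            noise = noise_sd * rng.standard_normal(n)
            v += (-(v - v_rest) + drive + noise) * (dt / tau)
            fired = v >= v_thresh
            spike_count += int(fired.sum())
            v[fired] = v_reset
        return spike_count / (n * t_max)   # mean firing rate (spikes/s)

    print(f"mean firing rate: {lif_population():.1f} spikes/s")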

Subproject 3: Humanoid robotic model of the eye-head gaze-control system.

The collaboration with Prof. Bernardino on Subproject 3 is going very well. Between April and Oct. 2017, a Master's student (Miguel Lucas) designed and tested a first prototype robotic eye (Master's Thesis report, Oct. 2017; see Web

Final results

The publications (N=26) that have already arisen from the project exceed the expected output so far. The project will attain most of its planned goals: the advanced 3D eye-head human psychophysics (already in progress), the work on sensory-deprived patients (currently auditory; later also visual and vestibular), and the computational modelling in Nijmegen (well on its way). The robotics project with the Lisbon group will deliver a fully controllable 3D eye-head humanoid system at the end of the project, but may have to refrain from implementing the human auditory system, given the nontrivial (mechanical) complexities we encountered and had to resolve for the oculomotor and head-motor systems.

Website & more info

More info: http://www.mbfys.ru.nl/.