Opendata, web and dolomites

RETICULUS (terminated)

Integration of retinal inputs by distinct collicular cell types


Project "RETICULUS" data sheet

The following table provides information about the project.

Coordinator
AARHUS UNIVERSITET 

Organization address
address: NORDRE RINGGADE 1
city: AARHUS C
postcode: 8000
website: www.au.dk

contact info: n.a.

 Coordinator country: Denmark [DK]
 Total cost: 200,194 €
 EC max contribution: 200,194 € (100%)
 Programme: H2020-EU.1.3.2. (Nurturing excellence by means of cross-border and cross-sector mobility)
 Call code: H2020-MSCA-IF-2015
 Funding scheme: MSCA-IF-EF-ST
 Starting year: 2017
 Duration: from 2017-09-01 to 2019-08-31

 Partnership

Take a look at the project's partnership.

 #   participant          country          role         EC contrib. [€]
 1   AARHUS UNIVERSITET   DK (AARHUS C)    coordinator  200,194.00


 Project objective

Animals have a diverse set of behaviors that are triggered by specific sensory stimuli, such as motion or looming. In the visual system, this process begins in the retina, where the visual scene is divided into 20 parallel information channels before reaching the brain. The superior colliculus is one of the main recipients of retinal output, and it mediates tractable visually guided behaviors such as eye movements, orienting, and escape. However, it remains unknown how visual signals from individual retinal ganglion cell types are processed by neurons in the superior colliculus to achieve specific computations relevant to behavior.

To address this question, I will use transgenic mouse lines recently identified by the host laboratory, in which specific types of retino-recipient neurons of the superior colliculus are labeled with Cre recombinase. First, I will characterize the response properties of individual Cre-labeled cell types using in vivo two-photon calcium imaging during visual stimulation. Next, I will initiate calcium-sensor-functionalized trans-synaptic viral tracing from Cre-labeled collicular cell types and perform two-photon calcium imaging of labeled presynaptic retinal ganglion cells and starter collicular neurons. With this approach I will relate the activity of individual neurons to the activity of their connected networks, and evaluate the degree of convergence and divergence in retino-collicular connectivity.

By linking cell types, circuits, and computations, this work will provide mechanistic insight into the circuit basis for parallel processing of visual information and various visual functions in the healthy system, while possibly disclosing novel therapeutic targets for visuomotor diseases.

 Publications

List of publications (year, authors and title, journal, last update):

2018. Ana F. Oliveira, Keisuke Yonehara, "The Mouse Superior Colliculus as a Model System for Investigating Cell Type-Based Mechanisms of Visual Motor Transformation". Frontiers in Neural Circuits 12. ISSN: 1662-5110, DOI: 10.3389/fncir.2018.00059. Last update: 2019-08-29.

Are you the coordinator (or a participant) of this project? Please send me more information about the "RETICULUS" project.

For instance: the website URL (not yet provided by EU open data), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), a Twitter account, a LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will add them to your project's page as soon as possible.

Thanks. And then please add a link to this page on your project's website.

The information about "RETICULUS" is provided by the European Open Data Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.3.2.):

EGeoCC (2019): Ethnic geography and civil conflict

EOBRECA (2019): Differential Roles of Estrogens in Obesity-mediated ER+ Breast Cancer Development

EyeGestLearn (2019): Applying eye-tracking to investigate information uptake from co-speech gestures in online learning environments