
SOUNDSCENE (status: SIGNED)

How does the brain organize sounds into auditory scenes?


 SOUNDSCENE project word cloud

Explore the word cloud of the SOUNDSCENE project. It gives you a very rough idea of what the "SOUNDSCENE" project is about.

computational    hearing    impairments    provides    situations    exist    competing    networks    digital    faced    perceived    strategies    auditory    composite    aging    real    appropriately    manipulation    functional    channel    world    arrives    record    cortex    network    organizes    barrier    parse    active    engage    neuronal    behavioural    critical    interactions    optogenetic    regions    neuro    recreate    waveform    elucidate    listen    paradigm    mixture    negatively    normal    healthy    solves    sound    central    ear    cellular    unknown    underpins    young    scene    manipulate    ease    highlight    circuit    sources    inability    noisy    brain    neural    electrophysiological    combine    recordings    operate    understand    single    area    humans    hippocampus    guide    imaging    sense    advancing    perceptual    listeners    cell    conjunction    models    prefrontal    listening    animal    mechanisms    little    attentional    rehabilitative    count    reconstruct   

Project "SOUNDSCENE" data sheet

The following table provides information about the project.

Coordinator
UNIVERSITY COLLEGE LONDON

Organization address
address: GOWER STREET
city: LONDON
postcode: WC1E 6BT
website: n.a.

Contact info: n.a.

Coordinator country: United Kingdom [UK]
Total cost: 1,999,999 €
EC max contribution: 1,999,999 € (100%)
Programme: 1. H2020-EU.1.1. (EXCELLENT SCIENCE - European Research Council (ERC))
Call: ERC-2017-COG
Funding scheme: ERC-COG
Starting year: 2018
Duration: from 2018-09-01 to 2023-08-31

 Partnership

Take a look at the project's partnership.

#  participant                 country      role         EC contrib. [€]
1  UNIVERSITY COLLEGE LONDON   UK (LONDON)  coordinator  1,999,999.00


 Project objective

Real-world listening involves making sense of the numerous competing sound sources that exist around us. The neuro-computational challenge faced by the brain is to reconstruct these sources from the composite waveform that arrives at the ear, a process known as auditory scene analysis. While young, normal-hearing listeners can parse an auditory scene with ease, the neural mechanisms that allow the brain to do this are unknown, and we are not yet able to recreate them with digital technology. Hearing loss, aging, impairments in central auditory processing, or an inability to appropriately engage attentional mechanisms can negatively impact the ability to listen in complex and noisy situations; an understanding of how the healthy brain organizes a sound mixture into perceptual sources may therefore guide rehabilitative strategies targeting these problems.
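The mixing step described above can be made concrete with a minimal sketch in Python (an illustration only, not part of the SOUNDSCENE project; the sample rate, frequencies and amplitudes are made-up values): summing two sources into one composite waveform is trivial, but recovering the sources from the mixture alone is ill-posed, because many different decompositions reproduce the same waveform.

```python
# Illustrative sketch only (not the project's method): mixing is addition,
# un-mixing is the hard, ill-posed inverse problem of auditory scene analysis.
import numpy as np

fs = 16_000                                   # assumed sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)               # one second of time samples

# Two hypothetical competing sound sources.
source_a = 0.6 * np.sin(2 * np.pi * 220 * t)             # a 220 Hz tone
source_b = 0.4 * np.sign(np.sin(2 * np.pi * 330 * t))    # a 330 Hz square wave

# The composite waveform that arrives at the ear is just the sum of the sources.
mixture = source_a + source_b

# The brain's task is the inverse problem: recover the sources given only
# `mixture`. Without extra constraints (harmonicity, onsets, spatial cues,
# attention), many decompositions fit the same waveform, e.g. this trivially
# different but equally consistent split:
alt_a = mixture * 0.5
alt_b = mixture * 0.5
assert np.allclose(alt_a + alt_b, mixture)
```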

While functional imaging studies in humans highlight a network of brain regions that support auditory scene analysis, little is known about the cellular and circuit-based mechanisms that operate within these networks. A critical barrier to advancing our understanding of how the brain solves the challenge of scene analysis has been a failure to combine behavioural testing, which provides a crucial measure of how any given sound mixture is perceived, with methods to record and manipulate neuronal activity in animal models. Here, I propose to use a novel behavioural paradigm in conjunction with high-channel-count electrophysiological recordings and optogenetic manipulation to elucidate how auditory cortex, prefrontal cortex and hippocampus enable scene analysis during active listening. These methods will allow us to record single-cell activity from a range of brain regions more typical of functional imaging studies, in order to understand how processing within each area, and the interactions between these areas, underpin auditory scene analysis.
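As a rough illustration of what such recordings yield analytically (an assumed, simplified example, not the project's actual analysis pipeline), the sketch below bins hypothetical spike times from one unit per recorded region into a peri-stimulus time histogram and reports a mean firing rate.

```python
# Illustrative sketch only (assumed analysis, fabricated spike times): a
# peri-stimulus time histogram per recorded brain region.
import numpy as np

rng = np.random.default_rng(0)
regions = ["auditory cortex", "prefrontal cortex", "hippocampus"]
n_trials, window, bin_width = 50, 1.0, 0.05        # assumed values, in seconds
bins = np.arange(0, window + bin_width, bin_width)

for region in regions:
    # Fake spike times for one hypothetical single unit, 50 trials of 1 s each.
    spikes = [np.sort(rng.uniform(0, window, rng.poisson(8)))
              for _ in range(n_trials)]
    counts = sum(np.histogram(trial, bins=bins)[0] for trial in spikes)
    rate = counts / (n_trials * bin_width)          # firing rate in spikes/s
    print(f"{region}: mean rate {rate.mean():.1f} spikes/s across the window")
```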

 Deliverables

List of deliverables.
Title                  Type                       Date
Data Management Plan   Open Research Data Pilot   2019-10-03 18:12:33

Take a look at the deliverables list in detail: detailed list of SOUNDSCENE deliverables.

Are you the coordinator (or a participant) of this project? Please send me more information about the "SOUNDSCENE" project.

For instance: the website URL (it has not been provided by EU open data yet), the logo, a more detailed description of the project (in plain text, as an RTF file or a Word file), some pictures (as picture files, not embedded in any Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.

Thanks. And then please add a link to this page on your project's website.

The information about "SOUNDSCENE" is provided by the European Open Data Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.1.)

evolSingleCellGRN (2019)

Constraint, Adaptation, and Heterogeneity: Genomic and single-cell approaches to understanding the evolution of developmental gene regulatory networks


IMMUNOTHROMBOSIS (2019)

Cross-talk between platelets and immunity - implications for host homeostasis and defense


AST (2019)

Automatic System Testing
