AVISPIRE

Audio-VIsual Speech Processing for Interaction in Realistic Environments

 Coordinator: NATIONAL CENTER FOR SCIENTIFIC RESEARCH "DEMOKRITOS"

 Organization address: Patriarchou Gregoriou Str.
City: AGHIA PARASKEVI
Postcode: 15310

Contact info
Title: Ms.
First name: Marina
Last name: Fontara
Telephone: -6503276
Fax: -6503319

 Coordinator nationality: Greece [EL]
 Total cost: 87,500 €
 EC contribution: 87,500 €
 Programme: FP7-PEOPLE
Specific programme "People" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013)
 Call code: FP7-PEOPLE-2009-RG
 Funding scheme: MC-IRG
 Start year: 2009
 Period (year-month-day): 2009-10-01   -   2013-03-31

 Participants

# participant  country  role  EC contrib. [€]
1  NATIONAL CENTER FOR SCIENTIFIC RESEARCH "DEMOKRITOS"  EL (AGHIA PARASKEVI)  coordinator  87,500.00


 Word cloud

Explore the word cloud to get a rough idea of the project.

acoustic    visual    extracted    life    voice    ideal    quality    interaction    recognition    audio    significant    environments    human    speech    real   

 Objective

'The topic of audio-visual speech processing has attracted significant interest over the past 15 years. Relevant research has focused on recruiting visual speech information, extracted from the speaker's mouth region, as a means to improve the robustness of traditional, unimodal, acoustic-only speech processing. Nevertheless, to date, most work has been limited to ideal-case scenarios, where the visual data are of high quality, typically with a steady frontal head pose, high resolution, and uniform lighting, while the audio signal contains speech by a single subject, in most cases artificially contaminated by noise in order to demonstrate significant improvements in speech system performance. Obviously, these conditions remain far from unconstrained, multi-party human interaction; thus, not surprisingly, practical audio-visual speech systems have yet to be deployed in real life. In this proposal, we aim to work towards expanding the state of the art from ideal "toy" examples to realistic human-computer interaction in difficult environments such as the office, the automobile, broadcast news, and meetings. Successful audio-visual speech processing there requires progress beyond the state of the art in the processing and robust extraction of visual speech information, as well as its efficient fusion with the acoustic modality, due to the varying quality of the extracted stream information. We propose to study a number of speech technologies in such environments (e.g., speech recognition, activity detection, diarization, separation), which stand to benefit from multimodality. The envisaged work will span 42 months of activity, and is planned as a natural evolution of the research efforts of the candidate, Dr. Gerasimos Potamianos, while at AT&T Labs and IBM Research in the US, to be conducted jointly with the host organization, the Institute of Informatics and Telecommunications at the National Center for Scientific Research "Demokritos", in Athens, Greece.'
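As a rough illustration of the kind of stream fusion mentioned above, the sketch below shows a minimal weighted combination of per-class log-likelihood scores from an acoustic and a visual classifier, with a single reliability weight deciding how much to trust each stream. This is a generic multi-stream fusion example, not the project's actual method; the function name, class counts, and scores are hypothetical.

```python
# Minimal sketch of audio-visual (multi-stream) decision fusion:
# per-class log-likelihoods from an acoustic and a visual classifier are
# combined with a reliability weight before picking the best class.
# All names and numbers here are hypothetical examples.
import numpy as np

def fuse_av_loglikelihoods(audio_ll, visual_ll, audio_weight):
    """Combine per-class log-likelihoods from the two streams.

    audio_ll, visual_ll : arrays of shape (num_classes,)
    audio_weight        : lambda in [0, 1]; higher = trust audio more
                          (e.g. lowered when the acoustic SNR is poor,
                          raised when the visual track is low quality).
    """
    lam = float(np.clip(audio_weight, 0.0, 1.0))
    return lam * audio_ll + (1.0 - lam) * visual_ll

if __name__ == "__main__":
    # Toy example: 3 candidate classes (e.g. word or phone hypotheses).
    audio_ll = np.array([-4.1, -2.0, -7.3])   # noisy audio favours class 1
    visual_ll = np.array([-1.5, -3.8, -6.0])  # clean video favours class 0

    for lam in (0.9, 0.5, 0.1):               # sweep the stream weight
        fused = fuse_av_loglikelihoods(audio_ll, visual_ll, lam)
        print(f"lambda={lam:.1f} -> best class {int(np.argmax(fused))}")
```

In this toy setting the chosen class flips from the audio-preferred hypothesis to the video-preferred one as the weight moves towards the visual stream, which is the basic effect a quality-dependent fusion scheme exploits when one modality degrades.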

Introduction (Teaser)

If computers could read lips just like humans do, what techniques would be required to capture voice effectively using inexpensive equipment? This is the issue addressed by EU researchers working to improve speech recognition systems so that they can distinguish the voices of multiple speakers under real-life conditions.

Other projects from the same programme (FP7-PEOPLE)

IMOTEC-BOX (2011)

Isotopic and molecular techniques for determining the efficiency of in-situ bioremediation and chemical oxidation of chlorinated compounds


VTG-CDG (2008)

Vesicular Golgi trafficking deficiencies in unsolved CDG type II patients


COGNITIVE-AMI (2013)

SEMANTIC AND COGNITIVE DESCRIPTIONS OF SCENES FOR REASONING AND LEARNING IN AMBIENT INTELLIGENCE
