DYMO SIGNED

Dynamic dialogue modelling


Project "DYMO" data sheet

The following table provides information about the project.

Coordinator
HEINRICH-HEINE-UNIVERSITAET DUESSELDORF 

Organization address
address: UNIVERSITAETSSTRASSE 1
city: DUSSELDORF
postcode: 40225
website: www.uni-duesseldorf.de

contact info: n.a.

 Coordinator Country Germany [DE]
 Total cost 1,499,956 €
 EC max contribution 1,499,956 € (100%)
 Programme 1. H2020-EU.1.1. (EXCELLENT SCIENCE - European Research Council (ERC))
 Code Call ERC-2018-STG
 Funding Scheme ERC-STG
 Starting year 2019
 Duration (year-month-day) from 2019-09-01   to  2024-08-31

 Partnership

Take a look at the project's partnership.

#  participant                               country           role         EC contrib. [€]
1  HEINRICH-HEINE-UNIVERSITAET DUESSELDORF   DE (DUSSELDORF)   coordinator  1,499,956.00
2  UNIVERSITAT DES SAARLANDES                DE (SAARBRUCKEN)  participant  0.00


 Project objective

With the prevalence of information technology in our daily lives, our ability to interact with machines in increasingly simplified and more human-like ways has become paramount. Information is becoming ever more abundant, but our access to it is limited not least by technological constraints. Spoken dialogue systems address this issue by providing an intelligent speech interface that facilitates swift, human-like acquisition of information.

The advantages of speech interfaces are already evident from the rise of personal assistants such as Siri, Google Assistant, Cortana or Amazon Alexa. In these systems, however, the user is limited to a simple query, and the systems attempt to provide an answer within one or two turns of dialogue. To date, significant parts of these systems are rule-based and do not readily scale to changes in the domain of operation. Furthermore, rule-based systems can be brittle when speech recognition errors occur.
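The brittleness described above can be illustrated with a toy sketch (this is not project code; the rules and utterances are invented for illustration): an exact-match rule fails on a plausible speech-recognition transcription error, while a similarity-based fallback still recovers the intended intent.

```python
# Illustrative sketch only: a toy rule-based intent matcher showing why
# exact-match rules are brittle under speech-recognition errors.
# The rule table and utterances below are hypothetical examples.
import difflib

RULES = {
    "what is the weather": "weather_query",
    "set an alarm": "set_alarm",
}

def exact_match(utterance: str) -> str:
    # Exact string matching fails on any deviation from the rule text.
    return RULES.get(utterance, "unknown")

def fuzzy_match(utterance: str, cutoff: float = 0.7) -> str:
    # A similarity-based fallback tolerates small transcription errors.
    close = difflib.get_close_matches(utterance, RULES, n=1, cutoff=cutoff)
    return RULES[close[0]] if close else "unknown"

# "whats the weather" is a plausible ASR mis-transcription of
# "what is the weather".
print(exact_match("whats the weather"))  # unknown
print(fuzzy_match("whats the weather"))  # weather_query
```

Statistical and learned dialogue models generalise this idea: instead of hand-written similarity thresholds, they learn robustness to recognition noise from data.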

The vision of this project is to develop novel dialogue models that provide natural human-computer interaction beyond simple information-seeking dialogues and that continuously evolve as they are being used by exploiting both dialogue and non-dialogue data. Building such robust and intelligent spoken dialogue systems poses serious challenges in artificial intelligence and machine learning. The project will tackle four bottleneck areas that require fundamental research: automated knowledge acquisition, optimisation of complex behaviour, realistic user models and sentiment awareness. Taken together, the proposed solutions have the potential to transform the way we access information in areas as diverse as e-commerce, government, healthcare and education.

Are you the coordinator (or a participant) of this project? Please send me more information about the "DYMO" project.

For instance: the website URL (it has not been provided by EU-opendata yet), the logo, a more detailed description of the project (in plain text, as an RTF file or a Word file), some pictures (as picture files, not embedded in a Word file), a Twitter account, a LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.

Thanks. And please add a link to this page on your project's website.

The information about "DYMO" is provided by the European Open Data Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.1.)

QNets (2019)

Open Quantum Neural Networks: from Fundamental Concepts to Implementations with Atoms and Photons


EffectiveTG (2018)

Effective Methods in Tame Geometry and Applications in Arithmetic and Dynamics


QUAHQ (2019)

PROBING EXOTIC QUANTUM HALL STATES WITH HEAT QUANTUM TRANSPORT
