Opendata, web and dolomites

MeMAD (SIGNED)

Methods for Managing Audiovisual Data: Combining Automatic Efficiency with Human Accuracy


 MeMAD project word cloud

Explore the word cloud of the MeMAD project. It gives a rough idea of what the "MeMAD" project is about.

vision    memories    ways    speech    mobile    data    achievements    millions    images    sequences    internationally    asset    consumption    techniques    descriptions    automatic    watching    storytelling    aligned    action    industries    cope    20th    videos    history    media    modern    network    21st    actually    collected    handle    memad    benefit    words    revolutionize    material    matching    human    records    framework    repositories    visual    video    creative    digital    invasive    fast    communicate    interactively    listening    multilingual    entertain    semantic    attract    resource    translation    auditory    audio    neural    latest    mainly    impaired    big    english    re    broadcasting    deep    objects    processed    sounds    portion    bases    hearing    purpose    people    content    recognizes    computer    efficient    description    intermodal    centuries    audiovisual    learning    films    machine    time    anyone    integrates    verbalisations    surrogates    moving    source    ing    extraction    visually    created   

Project "MeMAD" data sheet

The following table provides information about the project.

Coordinator
AALTO KORKEAKOULUSAATIO SR 

Organization address
address: OTAKAARI 1
city: ESPOO
postcode: 02150
website: http://www.aalto.fi/en/


 Coordinator country: Finland [FI]
 Project website: https://memad.eu
 Total cost: €3,431,593
 EC max contribution: €3,431,593 (100%)
 Programme: H2020-EU.2.1.1. (INDUSTRIAL LEADERSHIP - Leadership in enabling and industrial technologies - Information and Communication Technologies (ICT))
 Call: H2020-ICT-2017-1
 Funding scheme: RIA
 Starting year: 2018
 Duration: from 2018-01-01 to 2020-12-31

 Partnership

Take a look at the project's partnership.

#  | Participant | Country (city) | Role | EC contrib. [€]
1  | AALTO KORKEAKOULUSAATIO SR | FI (ESPOO) | coordinator | 752,207.00
2  | LIMECRAFT NV | BE (GENT) | participant | 633,080.00
3  | EURECOM | FR (BIOT) | participant | 401,598.00
4  | HELSINGIN YLIOPISTO | FI (HELSINGIN YLIOPISTO) | participant | 397,398.00
5  | UNIVERSITY OF SURREY | UK (GUILDFORD) | participant | 370,680.00
6  | YLEISRADIO OY | FI (HELSINKI) | participant | 354,850.00
7  | INSTITUT NATIONAL DE L'AUDIOVISUEL | FR (BRY-SUR-MARNE) | participant | 212,528.00
8  | LINGSOFT OY | FI (HELSINKI) | participant | 157,717.00
9  | LINGSOFT LANGUAGE SERVICES OY | FI (TURKU) | participant | 151,532.00


 Project objective

Audiovisual media content created and used in films and videos is key to how people communicate and entertain. It has also become an essential resource of modern history, since a large portion of the memories and records of the 20th and 21st centuries are audiovisual. To fully benefit from this asset, fast and effective methods are needed to cope with the rapidly growing audiovisual big data collected in digital repositories and used internationally.

MeMAD will provide novel methods for the efficient re-use and re-purposing of multilingual audiovisual content, revolutionizing video management and digital storytelling in broadcasting and media production. We go far beyond state-of-the-art automatic video description methods by making the machine learn from the human. The resulting description is thus not only a time-aligned semantic extraction of objects: it also makes use of the audio and recognizes action sequences. While current methods work mainly for English, MeMAD will handle multilingual source material and produce multilingual descriptions, thus enhancing the user experience. Our method interactively integrates the latest research achievements in deep neural network techniques for computer vision with knowledge bases and human and machine translation in a continuously improving machine learning framework.

This results in detailed, rich descriptions of moving images, speech, and audio, which enable people working in the creative industries to access and use audiovisual information in more effective ways. Moreover, the intermodal translation from images and sounds into words will attract millions of new users to audiovisual media, including the visually and hearing impaired. Anyone using audiovisual content will also benefit from these verbalisations: they are non-invasive surrogates for visual and auditory information that can be processed without actually watching or listening, matching the new pattern of video consumption on mobile devices.

 Deliverables

List of deliverables.

Deliverable | Type | Last update
Specification of the data interchange format, intermediate version | Documents, reports | 2019-09-05
Data management plan | Open Research Data Pilot | 2019-05-28
Libraries and tools for multimodal content analysis | Other | 2019-05-28
Multimodally annotated dataset of described video | Other | 2019-05-28
TV programme annotation model | Documents, reports | 2019-05-28
Setup of website with presentation of project and consortium partners | Websites, patent filings, videos etc. | 2019-05-28
Report on multimodal machine translation | Documents, reports | 2019-05-28
Data management plan, update 1 | Open Research Data Pilot | 2019-09-05
Specification of the data interchange format, initial version | Documents, reports | 2019-09-05
Evaluation report, initial version | Documents, reports | 2019-09-05

For full details, see the detailed list of MeMAD deliverables.

 Publications

List of publications (year, authors, title, venue, last update).

2018 | Francis, Danny; Huet, Benoit; Merialdo, Bernard
EURECOM participation in TrecVid VTT 2018
TRECVID 2018, 22nd International Workshop on Video Retrieval Evaluation, November 13-15, 2018, Gaithersburg, USA (last update 2019-08-05)

2018 | Sjöberg, Mats; Tavakoli, Hamed R.; Xu, Zhicun; Laria Mantecon, Hector; Laaksonen, Jorma
PicSOM Experiments in TRECVID 2018
TRECVID 2018, 22nd International Workshop on Video Retrieval Evaluation, November 13-15, 2018, Gaithersburg, USA (last update 2019-08-05)

2018 | Cohendet, Romain; Demarty, Claire-Hélène; Duong, Ngoc Q.K.; Sjöberg, Mats; Ionescu, Bogdan; Do, Thanh Toan
MediaEval 2018: Predicting Media Memorability
CEUR Workshop Proceedings 2283, ISSN: 1613-0073 (last update 2019-08-05)

2018 | Sulubacak, Umut; Tiedemann, Jörg; Rouhe, Aku; Grönroos, Stig-Arne; Kurimo, Mikko
The MeMAD Submission to the IWSLT 2018 Speech Translation Task
Proceedings of the International Workshop on Spoken Language Translation, pages 89-94 (last update 2019-08-05)

Are you the coordinator (or a participant) of this project? Please send me more information about the "MeMAD" project.

For instance: the website URL (not yet provided by EU open data), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will add them to your project's page as soon as possible.

Thanks. Please also add a link to this page on your project's website.

The information about "MeMAD" is provided by the European Open Data Portal: CORDIS open data.

More projects from the same programme (H2020-EU.2.1.1.)

EuConNeCts4 (2019)

European Conferences on Networks and Communications (EuCNC)

Read More  

OpertusMundi (2020)

A Single Digital Market for Industrial Geospatial Data Assets

Read More  

NEoteRIC (2020)

NEuromorphic Reconfigurable Integrated photonic Circuits as artificial image processor

Read More