Opendata, web and dolomites

VUAD SIGNED

Video Understanding for Autonomous Driving


Project "VUAD" data sheet

The following table provides information about the project.

Coordinator
KOC UNIVERSITY 

Organization address
address: RUMELI FENERI YOLU SARIYER
city: ISTANBUL
postcode: 34450
website: www.ku.edu.tr

contact info
title: n.a.
name: n.a.
surname: n.a.
function: n.a.
email: n.a.
telephone: n.a.
fax: n.a.

 Coordinator Country Turkey [TR]
 Total cost 145,355 €
 EC max contribution 145,355 € (100%)
 Programme 1. H2020-EU.1.3.2. (Nurturing excellence by means of cross-border and cross-sector mobility)
 Code Call H2020-MSCA-IF-2019
 Funding Scheme MSCA-IF-EF-ST
 Starting year 2020
 Duration 2020-04-01 to 2022-03-31 (24 months)

 Partnership

Take a look at the project's partnership.

#  participant  country (city)  role  EC contrib. [€]
1  KOC UNIVERSITY  TR (ISTANBUL)  coordinator  145,355.00


 Project objective

Autonomous vision aims to solve computer vision problems related to autonomous driving. Autonomous vision algorithms achieve impressive results on single images for tasks such as object detection and semantic segmentation; however, this success has not yet been fully extended to video sequences. In computer vision it is commonly acknowledged that video understanding lags years behind single-image understanding, mainly for two reasons: the processing power required for reasoning across multiple frames, and the difficulty of obtaining ground truth for every frame in a sequence, especially for pixel-level tasks such as motion estimation. Based on these observations, two directions are likely to boost the performance of video-understanding tasks in autonomous vision: unsupervised learning, and object-level rather than pixel-level reasoning.

Following these directions, we propose to tackle three relevant problems in video understanding. First, we propose a deep learning method for multi-object tracking on graph-structured data. Second, we extend it to joint video object detection and tracking, exploiting temporal cues to improve both detection and tracking performance. Third, we propose to learn a background motion model for the static parts of the scene in an unsupervised manner. Our long-term goal is also to learn detection and tracking in an unsupervised manner. Once these stepping stones are in place, we plan to combine the proposed algorithms into a unified video understanding module and compare its performance against static counterparts as well as state-of-the-art video understanding algorithms.
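To make the tracking problem concrete, here is a minimal sketch of the standard tracking-by-detection baseline that graph-based learned trackers (such as the one the project proposes) aim to improve on: detections in consecutive frames are matched by maximising total bounding-box overlap via the Hungarian algorithm. This is a generic illustration, not the project's actual method; the function names and the IoU threshold are our own choices for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(prev_boxes, curr_boxes, iou_threshold=0.3):
    """Match detections across two frames by maximising total IoU.

    Returns a list of (prev_index, curr_index) pairs; pairs whose IoU
    falls below the threshold are dropped (unmatched detections would
    start new tracks or end old ones in a full tracker).
    """
    cost = np.zeros((len(prev_boxes), len(curr_boxes)))
    for i, p in enumerate(prev_boxes):
        for j, c in enumerate(curr_boxes):
            cost[i, j] = -iou(p, c)  # negate: the solver minimises cost
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if -cost[i, j] >= iou_threshold]

# Two objects that swap order between frames are still matched correctly:
prev = [(0, 0, 10, 10), (20, 20, 30, 30)]
curr = [(21, 21, 31, 31), (1, 1, 11, 11)]
print(associate(prev, curr))  # [(0, 1), (1, 0)]
```

A learned graph-based tracker replaces the hand-crafted IoU cost with edge scores predicted by a neural network over a detection graph, which is what allows appearance and longer-range temporal cues to enter the association step.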

Are you the coordinator (or a participant) of this project? Please send me more information about the "VUAD" project.

For instance: the website URL (not yet provided by EU opendata), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), a Twitter account, a LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will add them to your project's page as soon as possible.

Thanks. And then please add a link to this page on your project's website.

The information about "VUAD" is provided by the European Opendata Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.3.2.)

MarshFlux (2020)

The effect of future global climate and land-use change on greenhouse gas fluxes and microbial processes in salt marshes

Read More  

SingleCellAI (2019)

Deep-learning models of CRISPR-engineered cells define a rulebook of cellular transdifferentiation

Read More  

MetAeAvIm (2019)

The Role of the Metabolism in Mosquito Immunity against Dengue virus in Aedes aegypti

Read More