Periodic Reporting for period 1 - VI-DAS (Vision Inspired Driver Assistance Systems)

Teaser

Road accidents continue to be a major public safety concern. Human error is the main cause of accidents. Intelligent driver systems that can monitor the driver’s state and behaviour show promise for our collective safety. VI-DAS will progress the design of next-gen 720°...

Summary

Road accidents continue to be a major public safety concern. Human error is the main cause of accidents. Intelligent driver systems that can monitor the driver’s state and behaviour show promise for our collective safety. VI-DAS will progress the design of next-gen 720° connected ADAS (scene analysis, driver status). Advances in sensors, data fusion, machine learning and user feedback provide the capability to better understand driver, vehicle and scene context, facilitating a significant step along the road towards truly semi-autonomous vehicles. On this path there is a need to design vehicle automation that can gracefully hand over control to, and take it back from, the driver.

VI-DAS advances in computer vision and machine learning will introduce non-invasive, vision-based sensing capabilities to vehicles and enable contextual driver behaviour modelling. The technologies will be based on inexpensive and ubiquitous sensors, primarily cameras. Predictions of outcomes in a scene will be created to determine the best reaction to feed to a personalised HMI component that proposes optimal behaviour for safety, efficiency and comfort. VI-DAS will employ a cloud platform to improve ADAS sensor and algorithm design and to store and analyse data at a large scale, thus enabling the exploitation of vehicle connectivity and cooperative systems. VI-DAS will address human error analysis through the study of real accidents, in order to understand patterns and consequences as an input to the technologies.

VI-DAS will also address legal, liability and emerging ethical aspects, because with such technology come new risks and justifiable public concern. The insurance industry will be key to the adoption of next-generation ADAS and autonomous vehicles, and a stakeholder in reaching Level 3 automation. VI-DAS is ideally positioned at the point in the automotive value chain where Europe is dominant and where value can be added. The project will contribute to reducing accidents and to economic growth and continued innovation.
The innovative Human-Centred Method implemented by VI-DAS started from the analysis of accidents and driving errors to support the design of VI-DAS prototypes at each phase of the development process.
VI-DAS use cases tackle complex situations rather than simple actions and manoeuvre descriptions in different scenarios. One of the main objectives of VI-DAS is to address the hand-over and hand-back between manual and automated driving modes, focusing on the driver’s status and scene interpretation while always keeping the driver in the loop.

Work performed

From M1 to M12 the project activity focused on completing the first iteration of the development and integration of the alpha prototype. First, a short definition stage was completed, generating the initial set of requirements, specifications and architecture. After the specification, the RTD activities started by developing the main modules of the overall VI-DAS system: Outside Sensing, Inside Sensing, Understand, Advise/Act, Connect and Risk. Once the modules were defined and a first version was ready, the project focused on validating the developments. The activities carried out during this period concentrated on defining the testing and validation methodology and integrating the first VI-DAS prototype.

Dissemination and communication play a key role in the success of the VI-DAS project. Thus, dedicated activities have been carried out to plan and generate dissemination material.

After closing the first integration cycle (Alpha prototype), the second cycle started in M13. This cycle has the objective of achieving the integration and validation of the second prototype (Beta). This prototype will be made available in M24 of the project and will consist of an integrated system containing the new developments carried out by the consortium during the RTD tasks. This system will serve as the basis for conducting testing activities, and the results of the Beta tests will feed back into the RTD tasks for the final version. This version of the platform will be considered feature-complete. Taking these objectives into account, together with the feedback obtained from the Alpha prototype, the first task in the second RTD cycle was to review the requirements, specifications and architecture and to generate their second version.

During this second cycle, special emphasis has been placed on producing the intelligence embedded in each functional module, as well as on the legal, insurance and ethical issues.

Final results

The progress achieved in the VI-DAS project will have an important impact on the following areas:

- Automotive sensory data fusion and aggregation
One of the challenges when addressing 720° activity monitoring in real time in a vehicle is the capacity to handle very high-bandwidth parallel data streams while also controlling the latency of the processed data. The software framework developed in VI-DAS will address the need for high computational performance by adapting to modern on-board hardware architectures based on multi-core, multi-CPU and more general distributed system architectures, thus increasing the computational capabilities while still preserving time coherency and proper management of data latencies in critical data fusion and synchronisation tasks.
In the VI-DAS software platform, the standardised concepts will be integrated and extended, or complemented, with representations of the driver, his/her physiological and mental state (considering pose, gaze direction, drowsiness, etc.), and information on other vehicle occupants.
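As a purely illustrative example (not part of the project specification), the following sketch shows one common way to keep multi-rate sensor streams time-coherent: each stream is buffered with its capture timestamps, and fusion at a given instant picks the temporally closest sample per stream while reporting its latency. All names (SensorBuffer, fuse_at) are hypothetical.

```python
# Illustrative sketch (hypothetical names): time-coherent fusion of
# multi-rate sensor streams using per-stream timestamped buffers.
import bisect
from collections import deque

class SensorBuffer:
    """Keeps recent (timestamp, sample) pairs for one sensor stream."""
    def __init__(self, maxlen=256):
        self.times = deque(maxlen=maxlen)
        self.samples = deque(maxlen=maxlen)

    def push(self, t, sample):
        self.times.append(t)
        self.samples.append(sample)

    def nearest(self, t):
        """Return the buffered sample whose timestamp is closest to t."""
        idx = bisect.bisect_left(list(self.times), t)
        candidates = [i for i in (idx - 1, idx) if 0 <= i < len(self.times)]
        best = min(candidates, key=lambda i: abs(self.times[i] - t))
        return self.times[best], self.samples[best]

def fuse_at(t, buffers):
    """Gather the temporally closest sample from every stream at time t."""
    fused = {}
    for name, buf in buffers.items():
        ts, sample = buf.nearest(t)
        fused[name] = {"timestamp": ts, "latency": t - ts, "data": sample}
    return fused

# Example: a 30 Hz camera and a 10 Hz radar fused at a common instant.
buffers = {"front_camera": SensorBuffer(), "radar": SensorBuffer()}
for k in range(30):
    buffers["front_camera"].push(k / 30.0, f"frame_{k}")
for k in range(10):
    buffers["radar"].push(k / 10.0, f"scan_{k}")
print(fuse_at(0.5, buffers))
```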

- Driver state monitoring
In VI-DAS, personalised driver models will be learned and adapted in situ for long-term facial feature detection and tracking as well as for appropriate alertness-level assessment.
Driver distraction arises from competing activities placing demands on cognitive, visual, auditory, verbal, motor and other resources, separately or in any combination. Moreover, other factors such as the driver’s skills or expertise, emotional load, stress level or impairment substantially contribute to the true assessment of the effective ability to perform the driving task.
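As an illustration of how an alertness-level estimate could be derived from tracked facial features, the sketch below computes a PERCLOS-style score (the fraction of time the eyes are mostly closed over a sliding window). The class name and thresholds are hypothetical and not taken from the project.

```python
# Illustrative sketch (hypothetical thresholds): a PERCLOS-style alertness
# estimate from per-frame eye-aperture values produced by a face tracker.
from collections import deque

class AlertnessEstimator:
    def __init__(self, window_frames=900, closed_threshold=0.2):
        # window_frames: e.g. 30 s at 30 fps; closed_threshold: eye aperture
        # (0 = fully closed, 1 = fully open) below which the eye counts as closed.
        self.window = deque(maxlen=window_frames)
        self.closed_threshold = closed_threshold

    def update(self, eye_aperture):
        """Add one frame's measurement and return the current PERCLOS value."""
        self.window.append(eye_aperture < self.closed_threshold)
        return sum(self.window) / len(self.window)

    def is_drowsy(self, perclos, limit=0.15):
        # A PERCLOS above roughly 15 % over the window is a common drowsiness cue.
        return perclos > limit

# Usage: feed the estimator with aperture values from a facial landmark tracker.
estimator = AlertnessEstimator()
for aperture in [0.8, 0.7, 0.1, 0.05, 0.9, 0.1]:
    perclos = estimator.update(aperture)
print("PERCLOS:", round(perclos, 2), "drowsy:", estimator.is_drowsy(perclos))
```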

- Confidence estimation to support risk estimation
In VI-DAS, we will research and advance confidence estimation techniques for next-generation real-time artificial intelligence, specifically deep learning.
In addition, VI-DAS will explore an advanced simulation-in-the-loop concept.
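The project text does not name a specific confidence estimation technique; as one commonly used illustration for deep networks, the sketch below applies Monte Carlo dropout: several stochastic forward passes are averaged, and their disagreement is reported alongside the prediction. The toy model and all names are hypothetical.

```python
# Illustrative sketch: Monte Carlo dropout as one way to attach a confidence
# estimate to a deep network's prediction, shown on a toy classifier.
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    def __init__(self, in_dim=16, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_confidence(model, x, n_samples=30):
    """Average softmax outputs over stochastic forward passes (dropout kept on)."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    confidence = mean_probs.max(dim=-1).values    # top-class probability
    spread = probs.std(dim=0).max(dim=-1).values  # disagreement across passes
    return mean_probs.argmax(dim=-1), confidence, spread

model = ToyClassifier()
x = torch.randn(2, 16)
labels, confidence, spread = predict_with_confidence(model, x)
print(labels, confidence, spread)
```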

- Efficient, customisable, and optimised HMI
In the VI-DAS HMI, modalities will be allocated on the fly, adapting to changes sensed in the holistic environment (driver, car, ambience and traffic) or to other relevant factors.
The major innovation in the HMI field will be the development of personalised cognitive-aware modality allocation systems based on driving models and scene understanding obtained from the sensors. By including the driver’s information-processing characteristics, personal driving modes and situational context, automatic adaptive multimodal HMI systems will be generated to improve safety and comfort.
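As a minimal, purely illustrative sketch of on-the-fly modality allocation, the rule-based function below selects output channels from a few context estimates (visual load, cabin noise, hands on wheel). In VI-DAS this allocation would be personalised and derived from driving models and scene understanding; the rules and thresholds here are hypothetical.

```python
# Illustrative sketch (hypothetical rules): selecting HMI output modalities
# on the fly from an estimated driver/scene context.
def allocate_modalities(context):
    """Return the output modalities to use for a warning, given context flags."""
    modalities = []
    if context.get("visual_load", 0.0) < 0.7:
        modalities.append("head_up_display")      # visual channel still available
    if context.get("cabin_noise_db", 0) < 70:
        modalities.append("voice_prompt")         # auditory channel usable
    if context.get("hands_on_wheel", True):
        modalities.append("steering_wheel_haptics")
    return modalities or ["seat_vibration"]       # always keep one fallback channel

# Example: a visually loaded driver in a quiet cabin gets audio and haptics only.
print(allocate_modalities({"visual_load": 0.9, "cabin_noise_db": 55,
                           "hands_on_wheel": True}))
```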

- Connected component security
VI-DAS will complement the existing approaches for vehicular threat analysis by proposing the joint development of a threat analysis method and an intrusion detection system.
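As a simple illustration of one heuristic that such an intrusion detection system might include, the sketch below flags CAN message identifiers whose arrival rate deviates strongly from a learned baseline (e.g. message flooding) or that were never seen during baseline learning. The class, thresholds and message format are hypothetical.

```python
# Illustrative sketch (hypothetical thresholds): a frequency-based intrusion
# detection heuristic for in-vehicle CAN traffic.
from collections import defaultdict

class CanRateMonitor:
    def __init__(self, tolerance=3.0):
        self.baseline = {}            # can_id -> expected messages per second
        self.tolerance = tolerance    # allowed ratio between observed and expected

    def learn_baseline(self, messages, duration_s):
        counts = defaultdict(int)
        for can_id, _payload in messages:
            counts[can_id] += 1
        self.baseline = {cid: n / duration_s for cid, n in counts.items()}

    def check_window(self, messages, duration_s):
        """Return the CAN IDs whose rate in this window looks anomalous."""
        counts = defaultdict(int)
        for can_id, _payload in messages:
            counts[can_id] += 1
        anomalies = []
        for can_id, n in counts.items():
            expected = self.baseline.get(can_id)
            if expected is None:                       # unseen ID is suspicious
                anomalies.append(can_id)
            elif n / duration_s > self.tolerance * expected:
                anomalies.append(can_id)               # e.g. message flooding
        return anomalies

# Usage: learn rates on clean traffic, then check sliding windows of live traffic.
monitor = CanRateMonitor()
monitor.learn_baseline([(0x120, b"\x00")] * 100 + [(0x2A0, b"\x01")] * 10, 10.0)
print(monitor.check_window([(0x120, b"\x00")] * 400 + [(0x7FF, b"\xff")], 10.0))
```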

Website & more info

More info: http://vi-das.eu.