Autonomous vehicles, although still in their early stages, have demonstrated huge potential to shape future lifestyles. However, to be accepted by ordinary users, autonomous vehicles must solve one critical problem: trustworthy collision detection. No one wants an autonomous car that is doomed to a collision every few years or months. In the real world, collisions happen every second: more than 1.3 million people are killed in road accidents every single year. Current approaches to vehicle collision detection, such as vehicle-to-vehicle communication, radar, laser-based LiDAR and GPS, are far from acceptable in terms of reliability, cost, energy consumption and size. For example, radar is too sensitive to metallic materials; LiDAR is too expensive and performs poorly on absorbing or reflective surfaces; GPS-based methods struggle in cities with tall buildings; vehicle-to-vehicle communication cannot detect pedestrians or any unconnected objects; segmentation-based vision methods demand too much computing power to be miniaturised; and normal vision sensors cannot cope with fog, rain or dim conditions at night. To save lives and to make autonomous vehicles safe enough to serve human society, a new type of trustworthy, robust, low-cost and low-energy vehicle collision detection and avoidance system is badly needed.
This consortium proposes an innovative solution based on brain-inspired, multi-layered, multi-modal information processing for trustworthy vehicle collision detection. It exploits the low-cost spatial-temporal and parallel computing capacity of bio-inspired visual neural systems, combined with multi-modal data inputs, to extract potential collision cues under complex weather and lighting conditions.
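To illustrate the kind of bio-inspired collision-cue extraction the proposal describes, the sketch below implements a minimal LGMD-style (lobula giant movement detector) model of the sort surveyed in the project's publications: luminance change between frames excites a photoreceptor layer, a delayed lateral-inhibition layer suppresses non-expanding motion, and the summed residual drives a membrane potential that rises for looming (expanding) objects. All layer structure, weights and thresholds here are illustrative assumptions, not the consortium's actual models.

```python
import numpy as np

def lgmd_collision_cue(frames, inhibition_weight=0.5):
    """Minimal LGMD-style collision-cue extractor (illustrative sketch only).

    frames: sequence of 2-D grayscale arrays of the same shape.
    Returns one membrane potential in (0, 1) per consecutive frame pair;
    a rising potential suggests an expanding (looming) edge pattern.
    """
    potentials = []
    prev_frame = None
    prev_excitation = None
    for frame in frames:
        frame = frame.astype(float)
        if prev_frame is not None:
            # P layer: luminance change between consecutive frames
            excitation = np.abs(frame - prev_frame)
            # I layer: delayed lateral inhibition, spread from the previous
            # excitation to the 4-neighbourhood (a crude spatial blur)
            if prev_excitation is not None:
                inhibition = (
                    np.roll(prev_excitation, 1, axis=0)
                    + np.roll(prev_excitation, -1, axis=0)
                    + np.roll(prev_excitation, 1, axis=1)
                    + np.roll(prev_excitation, -1, axis=1)
                ) / 4.0
                excitation = np.maximum(
                    excitation - inhibition_weight * inhibition, 0.0
                )
            # S layer: sum the surviving excitation over the whole field
            k = excitation.sum()
            # LGMD cell: squash into (0, 1) with a sigmoid membrane potential
            potentials.append(1.0 / (1.0 + np.exp(-k / excitation.size)))
            prev_excitation = excitation
        prev_frame = frame
    return potentials
```

Fed with frames of a square that grows towards the camera, the returned potential climbs as the expanding edges outrun the delayed inhibition; translating (non-looming) motion is largely cancelled. Real models of this family add directional inhibition, feed-forward suppression and spike-frequency adaptation on top of this skeleton.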
Deliverables
List of deliverables.
Preliminary visual neural system models for collision cues extraction
Publications

Jin Xiao, Yuhang Tian, Ling Xie, Xiaoyi Jiang, Jing Huang, "A Hybrid Classification Framework Based on Clustering," IEEE Transactions on Industrial Informatics, vol. 16, no. 4, pp. 2177-2188, 2019. ISSN: 1551-3203. DOI: 10.1109/tii.2019.2933675

Qinbing Fu, Hongxin Wang, Cheng Hu, Shigang Yue, "Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review," Artificial Life, vol. 25, no. 3, pp. 263-311, 2019. ISSN: 1064-5462. DOI: 10.1162/artl_a_00297

Qinbing Fu, Cheng Hu, Jigen Peng, F. Claire Rind, Shigang Yue, "A Robust Collision Perception Visual Neural Network With Specific Selectivity to Darker Objects," IEEE Transactions on Cybernetics, pp. 1-15, 2019. ISSN: 2168-2267. DOI: 10.1109/tcyb.2019.2946090

Daqi Liu, Nicola Bellotto, Shigang Yue, "Deep Spiking Neural Network for Video-Based Disguise Face Recognition Based on Dynamic Facial Movements," IEEE Transactions on Neural Networks and Learning Systems, pp. 1-10, 19 July 2019. ISSN: 2162-237X. DOI: 10.1109/tnnls.2019.2927274

Hongxin Wang, Jigen Peng, Xuqiang Zheng, Shigang Yue, "A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds," IEEE Transactions on Neural Networks and Learning Systems, pp. 1-15, 1 May 2019. ISSN: 2162-237X. DOI: 10.1109/TNNLS.2019.2910418
The information about the "ULTRACEPT" project is provided by the European Open Data Portal (CORDIS open data). The project is funded under the H2020-EU.1.3.3 programme.