
Periodic Reporting for period 1 - SELFCEPTION (Robotic self/other distinction for interaction under uncertainty)

Teaser

In 2019 there were around 35 million private and non-industrial robots in the world, a market worth 18.7 billion euros. More specifically, the collaborative robotics and home-care robotics sectors are expected to grow roughly tenfold and fourfold, respectively, by 2020. However...

Summary

In 2019 there were around 35 million private and non-industrial robots in the world, a market worth 18.7 billion euros. More specifically, the collaborative robotics and home-care robotics sectors are expected to grow roughly tenfold and fourfold, respectively, by 2020. However, autonomous robot technology in Europe is not yet ready to meet these high expectations, due to the lack of robust functionality in uncertain environments. While robotics is progressively revolutionizing industrial sectors, applications involving unconstrained or open-ended scenarios still lack robust solutions, and several user-end initiatives and SMEs have disappeared.
Hence, a key challenge for robotics and artificial intelligence research is developing systems that can autonomously interact with humans and their surrounding environment in situations involving varying degrees of uncertainty. While humans continuously learn from their experiences and perceive their body as a whole as they interact with the world, robots do not yet have these capabilities. Providing humanoid robots, and artificial agents in general, with the capacity to perceive their body as humans do is a breakthrough technology that goes beyond any single discipline and even has philosophical and societal implications. However, it is a challenging problem that requires revisiting established state-of-the-art action and perception algorithms. Therefore, SELFCEPTION defined a roadmap for incorporating characteristics of human perception and action into artificial agents.

The project developed a computational model for self/other distinction in robots, inspired by current findings in neuroscience and psychology: a synthetic probabilistic model of the sensorimotor relationships that captures what the robot perceives (its sensory responses) and the actions it exerts, enabling the machine to differentiate its own body from other elements in the environment. In essence, the robot had to answer a simple question: “is this my body?”. The unique interdisciplinary and inter-sectorial vision of this project, connecting cognitive psychology, neuroscience, artificial intelligence and robotics, had two main scientific implications: i) it reinforced the materialisation of the next generation of perceptive robots, able to build their own perceptual schema and distinguish their actions from those of other entities; and ii) it provided insights into how humans unconsciously maintain their own perceptual representation.
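As a rough illustration of this question (a minimal sketch, not the project's published implementation), the Python snippet below assumes a hypothetical learned forward model that predicts the visual motion the robot's own motor commands should cause; a stimulus is attributed to the robot's body when the prediction error of that sensorimotor model stays small. The function names and the Gaussian noise model are assumptions made for this example.

```python
import numpy as np

def self_attribution_score(joint_velocities, observed_flow, forward_model,
                           noise_std=0.05):
    """Toy self/other attribution: compare the visual motion predicted from
    the robot's own motor commands with the motion actually observed.

    forward_model: callable mapping joint velocities to predicted optical
    flow (a learned sensorimotor model; the interface is assumed here).
    Returns a likelihood-like score that is high for self-generated motion.
    """
    prediction_error = observed_flow - forward_model(joint_velocities)
    # Gaussian likelihood of the prediction error under the sensor noise model
    return np.exp(-0.5 * np.sum(prediction_error ** 2) / noise_std ** 2)

def is_my_body(score, threshold=0.5):
    """Answer 'is this my body?' by thresholding the attribution score."""
    return score > threshold
```

In this toy scheme, motion whose visual consequences the robot can predict from its own commands is classified as self; motion it cannot predict is attributed to other entities.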

Several achievements have been accomplished:
• First implementation of the Active Inference construct for body perception and action on a real humanoid robot.
• First replication of a body illusion in an artificial agent.
• A humanoid robot passed a non-appearance-based mirror test.
The project drew the following conclusions:
• AI and robotics:
o Self/other distinction can be achieved without the agent being conscious.
o Perception and action form a single flexible process that continuously approximates the body and the world.
o Combining artificial neural networks with algorithmic knowledge allows large-scale cross-modal and inter-modal sensory decoding and adaptation.
• Humans:
o Under predictive coding theory, body models are affected by instantaneous bottom-up sensory cues, which bias inference problems such as body localization (illustrated in the sketch after this list).
o We identified that action reflexes could play a role in sensorimotor conflicts.
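The toy example below (written for illustration here, not taken from the project's publications) shows the standard precision-weighted fusion that underlies this conclusion: a precise bottom-up visual cue pulls the estimated hand position away from the body-model prediction, producing the kind of localization bias described above. The numerical values are assumed.

```python
def precision_weighted_estimate(prior_mu, prior_sigma, cue_mu, cue_sigma):
    """Fuse a top-down body-model prediction (prior) with a bottom-up sensory
    cue, weighting each by its precision (inverse variance), as in standard
    predictive-coding accounts of multisensory integration."""
    prior_precision = 1.0 / prior_sigma ** 2
    cue_precision = 1.0 / cue_sigma ** 2
    return (prior_precision * prior_mu + cue_precision * cue_mu) / \
           (prior_precision + cue_precision)

# Proprioception predicts the hand at 0.00 m; a (possibly fake) visual hand
# appears at 0.15 m. The more precise visual cue biases the estimate toward it.
print(precision_weighted_estimate(prior_mu=0.0, prior_sigma=0.05,
                                  cue_mu=0.15, cue_sigma=0.03))  # ~0.11 m
```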

Work performed

The project had three key objectives, implemented in three Work Packages (WPs), plus a dedicated dissemination WP:
WP1. Multisensory self/other distinction model.
1. A secondment was performed at Leiden University during the initial phase of the project with Prof. Bernhard Hommel from the Department of General Psychology. A roadmap of the human perception characteristics to be implemented in robots was defined.
2. Afterwards, a first computational model of perception with multisensory learning was designed, based on predictive coding theory (Lanillos & Cheng, 2018; Diez-Valencia et al., 2018).
3. The model was further extended to include body perception and action, following the theoretical construct of Active Inference (Oliver & Lanillos, 2019); a minimal sketch of this construct is given after this list.
4. Finally, an algorithm was designed to enable a robot to distinguish its body from other entities (Lanillos & Cheng, 2019).
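The sketch referenced in item 3 follows. It is a simplified, static rendering of the general Active Inference recipe under assumed interfaces (the generative model g, its Jacobian dg_dmu and the variances are placeholders), not the exact formulation of the cited papers: both the belief about the body state and the motor command descend the precision-weighted prediction errors, so perception and control share one objective.

```python
import numpy as np

def active_inference_step(mu, action, s_prop, s_vis, g, dg_dmu,
                          sigma_prop=1.0, sigma_vis=1.0, dt=0.01):
    """One gradient step on a simplified variational free energy.

    mu        : internal estimate of the body state (e.g. joint angles)
    action    : motor command, assumed to drive the proprioceptive reading
    s_prop    : observed proprioceptive input
    s_vis     : observed visual input (e.g. hand position in the image)
    g, dg_dmu : assumed generative (forward) model of vision and its Jacobian
    """
    e_prop = s_prop - mu              # proprioceptive prediction error
    e_vis = s_vis - g(mu)             # visual prediction error
    # Perception: update the belief to reduce both precision-weighted errors
    dmu = e_prop / sigma_prop + dg_dmu(mu).T @ (e_vis / sigma_vis)
    # Action: change the command so that proprioception matches the belief
    da = -e_prop / sigma_prop
    return mu + dt * dmu, action + dt * da
```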

WP2. Experimental evaluation of self-perception and the self/other model on a humanoid robot. We proposed a novel evaluation based on quantitative benchmarks from cognitive psychology. For that purpose, we adapted experimental paradigms so that results obtained with humans and robots are comparable.
1. We evaluated the influence of different sensory sources (visual, tactile and proprioceptive) when estimating the location of the end-effector. Our model reproduced similar hand-location drifts, caused by the propagation of prediction errors.
2. Body learning and estimation were tested on a humanoid robot.
3. A full construct of body perception and action based on Active Inference was validated on the humanoid robot iCub. This is the first time that a model of the free-energy principle has been successfully implemented on a real humanoid robot; previous works were purely theoretical or simulated, and thus partially biased by the simplifications of the models. The robot was able to perform robust dual-arm reaching and visual object tracking using the same mathematical model (a toy closed-loop illustration follows this list).
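As a toy closed-loop illustration of how a single error-minimisation scheme can yield both reaching and tracking (a 1-DOF stand-in written for this report, not the iCub implementation), the simulation below encodes the goal as the desired visual observation: keeping the goal fixed gives reaching, while replacing it at each step with the object's current position gives tracking.

```python
# Toy 1-DOF arm: the same precision-weighted error minimisation updates the
# belief (perception) and the velocity command (action). All values assumed.
dt = 0.05
q = 0.0          # true joint angle of the simulated arm
mu = 0.0         # believed joint angle
a = 0.0          # velocity command
target = 0.3     # desired (or tracked) visual position; identity visual model

for _ in range(2000):
    e_prop = q - mu              # proprioceptive prediction error
    e_vis = target - mu          # visual prediction error w.r.t. the goal
    mu += dt * (e_prop + e_vis)  # perception: descend the free-energy gradient
    a += dt * (-e_prop)          # action: make proprioception match the belief
    q += dt * a                  # simulated plant integrates the command

print(round(q, 3), round(mu, 3))  # both settle at the 0.3 target
```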

WP3. Algorithms and models evaluation on an SME humanoid robot. The last stage was performed in collaboration with PAL Robotics, a robotics SME based in Barcelona. We deployed the researched algorithms in a final experiment on self/other distinction in front of a mirror. A demo video was presented (https://youtu.be/3l9N972xjD8).

WP4. Exploitation and dissemination. Several types of dissemination and communication channels were used. More information can be found on the project website (www.selfception.eu).
Gender dimension: a special event to reduce gender imbalance, named WeLead, was organised (weleadwomen.wordpress.com).

Final results

The SELFCEPTION project contributed the first implementation of neuro-inspired (predictive coding) body perception and action, including non-appearance-based self/other distinction, on a real humanoid robot. Before this project, only theoretical and simulation approaches had been investigated. This allowed us both i) to evaluate these models for interaction under uncertainty and ii) to validate the mathematical theories in real-world experiments. All publications and results generated in this project were conceived by the principal investigator and recipient of the Marie Skłodowska-Curie grant, Pablo Lanillos. Project results have also influenced other subareas, such as tactile systems (Kaboli et al., 2017), prosthetics (Tayeb et al., 2019), cognitive psychology (Hinz et al., 2018) and computational psychiatry (Lanillos et al., 2019).

Furthermore, disseminating the results directly to non-academic players reduced the distance between industrial and academic goals. SELFCEPTION has participated in strategic groups, building a significant network of interdisciplinary and inter-sectorial leaders that supports continuation and future collaborations. This strengthens the relevance of embodied artificial intelligence and the need for interdisciplinary approaches to solve core challenges of robotics and artificial intelligence, placing Europe at the centre of the scientific community in interdisciplinary basic and technological science.

Website & more info

More info: http://www.selfception.eu.