Periodic Reporting for period 1 - InStance (Intentional stance for social attunement)

Teaser

The InStance project focuses on the question of whether (and under what conditions) people adopt an intentional mindset towards robots, a mindset that is typically adopted towards other humans. An intentional mindset is what the philosopher Daniel Dennett termed the “intentional stance”.

Summary

The InStance project focuses on the question of whether (and under what conditions) people adopt an intentional mindset towards robots, a mindset that is typically adopted towards other humans. An intentional mindset is what the philosopher Daniel Dennett termed the “intentional stance” - predicting and explaining behaviour with reference to the agent’s mental states, such as beliefs, desires and intentions. To give an example: when I see a person gazing at a glass filled with water and extending their arm in its direction, I automatically surmise that the person intends to grasp it, because they feel thirsty, believe that the water will ease their thirst, and hence want to drink from the glass. The terms “intend”, “feel” and “believe” all refer to mental states, and the assumption is that by referring to mental states, I can understand and explain someone else’s behaviour. For non-intentional systems (such as man-made artefacts), however, we often adopt the design stance - assuming that the system has been designed to behave in a particular way (for example, a car slows down when one pushes the brakes not because the car intends to be slower, but because it has been designed to slow down when the brake pedal is pushed).

Adopting either the intentional stance or the design stance is crucial not only for predicting others’ behaviour but presumably also for becoming engaged in a social interaction. That is, when I adopt the intentional stance, I direct my attention to where somebody is pointing, and hence we establish a joint focus of attention, thereby becoming socially attuned. By contrast, if I see that a machine’s artificial arm is pointing somewhere, I might be unwilling to attend to that location, as I do not believe that the machine wants to show me something, i.e., that there is any intentional communicative content in the gesture.

This raises the question: to what extent are humans ready to adopt the intentional stance towards robots with human-like appearance, and to attune socially with them?

It might be that once a robot imitates human-like behaviour at the level of subtle (and often implicit) social signals, humans automatically perceive its behaviour as reflecting mental states. This would presumably evoke social cognition mechanisms to the same (or a similar) extent as in human-human interactions, allowing social attunement. By social attunement we mean a collection of mechanisms of social cognition that the brain employs during interactions with others: for example, joint attention, visual-spatial perspective taking and theory of mind. Joint attention is a mechanism through which two or more individuals attend to the same event or object in the environment. Engagement in joint attention often occurs by directing others’ attention to where one is attending, for example with gaze direction or a pointing gesture. Visual-spatial perspective taking is a mechanism that allows one to take someone else’s perspective in representing space (for example, I understand that my “right” is “left” for my interaction partner, who is sitting opposite me). Finally, theory of mind is a more sophisticated mechanism that allows one to understand that someone else’s mental states might differ from one’s own, or might even represent reality differently (as in false-belief tasks: one observes an agent leaving the room, and during their absence items in the room are rearranged; theory of mind allows the observer to understand that the absent person, upon return, will have a representation of the state of affairs that differs from reality). In daily interactions with other humans we employ such mechanisms automatically. But would we also employ similar mechanisms in interactions with humanoid robots?
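
To make the false-belief example above concrete, the following toy sketch (purely illustrative, not part of the project's materials; all class and variable names are hypothetical) tracks both the actual state of the world and the absent agent's outdated belief about it, and predicts where the agent will search upon returning.

class Observer:
    """Tracks the real state of the world and another agent's belief about it."""

    def __init__(self, world):
        self.world = dict(world)   # the actual state of affairs
        self.beliefs = {}          # agent name -> that agent's believed state

    def agent_sees_world(self, agent):
        # While the agent is present, their belief matches reality.
        self.beliefs[agent] = dict(self.world)

    def rearrange(self, item, new_location):
        # Reality changes while the agent is absent; their belief is NOT updated.
        self.world[item] = new_location

    def predict_search_location(self, agent, item):
        # Theory of mind: predict behaviour from the agent's belief,
        # not from the true state of the world.
        return self.beliefs[agent][item]


observer = Observer({"keys": "drawer"})
observer.agent_sees_world("Anna")    # Anna sees the keys in the drawer, then leaves
observer.rearrange("keys", "shelf")  # the keys are moved during Anna's absence

print(observer.world["keys"])                            # shelf  (reality)
print(observer.predict_search_location("Anna", "keys"))  # drawer (Anna's false belief)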

The objectives of the InStance project are to understand the various factors that contribute to activating mechanisms of social attunement in interaction with humanoid robots.

Work performed

In the first 18 months of the InStance project, I established an entirely new lab, the Social Cognition in Human-Robot Interaction (S4HRI) lab, at the Italian Institute of Technology in Genova, Italy, where I decided to carry out my ERC project.
The activities of the S4HRI lab are to a large extent linked with the InStance grant. I designed the lab space, which consists of a sound-attenuated experimental cabin for human-robot interaction and EEG studies, and an external area with three experimental workstations for various psychophysical experiments. I have also equipped the lab with an iCub humanoid robot, a Cozmo robot, a 64-channel EEG system and a mobile Tobii eye-tracking system.
I recruited members of the InStance team: three PhD students, one postdoctoral fellow, a student assistant, and two research fellows. The members of InStance are assisted by other members of my team (recruited around the same time): four postdoctoral fellows, one PhD student, and one software engineer.

The InStance project has progressed according to plan. We have been addressing the following questions:
- The impact of human-like behaviour of the robot on social attunement and adoption of the intentional stance (work package 1)
- The role of communicative social signals (work package 2): here, we have addressed the issue of mutual gaze in social contexts, as well as action expectations and behavioural reciprocity (robot behaviour contingent on the human’s behaviour)
- Means and methods of probing adoption of the intentional stance (a topic that cuts across all work packages)

(1) In order to address the question of how much human-like behaviour impacts social attunement, we first needed to collect data on parameters of human behaviour in various tasks. To this end, we conducted three experiments in which we measured human eye and head movements, which were then to be implemented first on a robot simulator and subsequently on the embodied robot; an illustrative sketch of this kind of feature extraction is given after point (3) below. We successfully extracted the relevant features of human head and eye movements for producing human-like movements on the iCub robot.
(2) To address questions related to communicative signals in social interaction, we conducted three experiments on mutual gaze in joint attention, and one experiment on mutual gaze in deception. The results showed that mutual gaze influences engagement in joint attention with a robot, and that it becomes more difficult for participants to provide false information to the robot after the robot directs its gaze towards the participants’ eyes. This shows that mutual gaze - being a potent social signal - has an impact on treating an artificial agent as a social entity. In addition to this series of studies, we conducted two experiments in which we examined the impact of contingent (reciprocal) behaviour on social engagement with a robot (a sketch of such a gaze-contingent behaviour is given after point (3) below). We showed that when the robot follows participants’ gaze direction, participants engage with the robot more, like it more, and perceive it as more human-like. We also addressed the question of gaze as a communicative signal in action expectations. We showed that violations of expectations regarding gaze communication in the context of an action sequence have an impact on engagement in joint attention.
(3) Finally, to address the issue of measuring adoption of the intentional stance, we developed a questionnaire in which a series of pictures depicting a short “story” is presented to participants. In the questionnaire, participants are asked to choose between a mentalistic and a more mechanistic interpretation of what is happening in the depicted story (an illustrative scoring sketch is given below). Results of a first study in which we administered the questionnaire online showed that in some contexts participants selected the mentalistic over the mechanistic interpretation, suggesting that in principle it is possible that the intentional stance is adopted towards a robot. In addition to developing an explicit measure of adoption of the intentional stance, we are also aiming to find more implicit measures.
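
Regarding point (1) above, the report does not detail which movement features were extracted; as a minimal sketch of that kind of analysis, the snippet below assumes gaze recordings sampled at a fixed rate and derives simple saccade parameters (amplitude, peak velocity, duration) that could then parameterise human-like movements on a robot simulator. The sampling rate, velocity threshold and data layout are assumptions, not the project's actual pipeline.

import numpy as np

# Minimal sketch: derive simple saccade parameters from recorded gaze data.
# Sampling rate, velocity threshold and data layout are illustrative assumptions.

SAMPLE_RATE_HZ = 500.0       # assumed eye-tracker sampling rate
VELOCITY_THRESHOLD = 30.0    # deg/s, assumed saccade-detection threshold

def saccade_features(gaze_deg):
    """gaze_deg: (n_samples, 2) array of horizontal/vertical gaze angles in degrees."""
    dt = 1.0 / SAMPLE_RATE_HZ
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / dt  # deg/s
    moving = velocity > VELOCITY_THRESHOLD                             # saccade samples
    if not moving.any():
        return None
    onset = int(np.argmax(moving))
    offset = len(moving) - int(np.argmax(moving[::-1])) - 1
    return {
        "amplitude_deg": float(np.linalg.norm(gaze_deg[offset + 1] - gaze_deg[onset])),
        "peak_velocity_deg_s": float(velocity[onset:offset + 1].max()),
        "duration_ms": (offset - onset + 1) * dt * 1000.0,
    }

# Example with synthetic data: a 20-degree horizontal gaze shift lasting a few tens of ms.
t = np.linspace(0.0, 1.0, int(SAMPLE_RATE_HZ))
x = 20.0 / (1.0 + np.exp(-(t - 0.5) * 200.0))
print(saccade_features(np.column_stack([x, np.zeros_like(x)])))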
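
Regarding point (2), the gaze-contingent (reciprocal) manipulation can be sketched as a simple control loop in which the robot either follows the participant's detected gaze direction or looks elsewhere. The functions read_gaze_target() and look_at() are hypothetical placeholders, not the actual eye-tracker or iCub interfaces used in the project.

import random
import time

def read_gaze_target():
    """Placeholder: return the location the participant is currently looking at."""
    return random.choice(["left_object", "right_object", "robot_face"])

def look_at(target):
    """Placeholder: command the robot to orient its gaze towards a target."""
    print(f"robot looks at: {target}")

def run_trial(contingent, duration_s=5.0, rate_hz=2.0):
    """In the contingent condition the robot follows the participant's gaze;
    in the non-contingent condition it looks at a random location instead."""
    end_time = time.time() + duration_s
    while time.time() < end_time:
        participant_target = read_gaze_target()
        if contingent:
            look_at(participant_target)                       # reciprocal behaviour
        else:
            look_at(random.choice(["left_object", "right_object"]))
        time.sleep(1.0 / rate_hz)

run_trial(contingent=True, duration_s=2.0)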
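
Regarding point (3), the report does not specify how the questionnaire is scored; one simple way to summarise such forced-choice responses, shown below as an illustrative sketch under an assumed coding scheme, is to compute the proportion of scenarios for which the mentalistic interpretation was chosen.

from statistics import mean

# Illustrative scoring sketch for a forced-choice questionnaire of the kind
# described in point (3). The coding scheme and data layout are assumptions,
# not the project's actual procedure.

# One record per depicted "story": which interpretation the participant chose.
responses = [
    {"scenario": 1, "choice": "mentalistic"},
    {"scenario": 2, "choice": "mechanistic"},
    {"scenario": 3, "choice": "mentalistic"},
    {"scenario": 4, "choice": "mentalistic"},
]

def mentalistic_score(records):
    """Proportion of scenarios in which the mentalistic interpretation was chosen
    (1 = always mentalistic, 0 = always mechanistic)."""
    return mean(1 if r["choice"] == "mentalistic" else 0 for r in records)

print(f"mentalistic score: {mentalistic_score(responses):.2f}")  # 0.75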

Final results

Although social attunement (in the sense of social cognition mechanisms such as joint attention or spatial perspective taking) has been widely investigated in the fields of social neuroscience and psychology, the specific factors influencing social attunement with various types of agents (artificial agents in particular) have not yet been systematically examined. It is not clear whether humans can socially attune to embodied artificial agents in real-time interactions, and what precisely are the behavioural characteristics of those agents that would allow for such attunement. InStance offers a novel avenue of research in which various factors influencing social attunement are manipulated and their influence on fundamental mechanisms of social cognition is examined. In more detail, InStance uses real-time interaction protocols with humanoid robots and examines the behavioural characteristics (gaze behaviour, communicative gestures) that have an impact on adoption of the intentional stance and on social attunement. Furthermore, in the next steps of the project, we will address higher-level factors that might play a role in adoption of the intentional stance, such as cultural embedding or experience with robots. More specifically, we will address the question of whether Asian cultures differ from Western cultures in the likelihood of adopting the intentional stance towards artefacts, and if so, what key elements are involved in such cultural differences.

InStance’s results shall allow us to understand whether certain parameters of robot behaviour (or certain other factors) are more crucial than others for engaging the human user in social interaction with a robot. Moreover, InStance’s findings will provide a set of guidelines for roboticists regarding robot behaviours that should facilitate social attunement. In addition, one of InStance’s outcomes shall be a test battery that probes adoption of the intentional stance towards other agents, combining explicit and implicit measures. Further, we shall be able to identify not only behavioural but also neural markers of the intentional stance. Finally, the findings of InStance should explain to what extent cultural background, individual differences and experience shape attitudes towards humanoid robots.

In summary, InStance offers a unique and interdisciplinary contribution to the fields of social neuroscience and engineering (social robotics in particular), as it aims to systematically investigate the specific parameters of an agent’s behaviour that allow for attribution of mental states, and thereby social attunement. The area of social and cognitive neuroscience will benefit from InStance by gaining knowledge about what aspects of human behaviour lead us to be perceived as intentional agents whose actions appear to result from mental operations. This is a fundamental question for social cognition in social interactions, because being perceived as an intentional agent allows for establishing a common social context with others. Such issues are addressed in fundamental research on social cognition and also in philosophy. InStance offers a unique approach to studying this question, as the embodied presence of a complex artificial agent (a humanoid robot) allows for natural, human-like social interactions on the one hand and experimental control on the other. In the context of applied areas of engineering, InStance will help in designing machines that are acceptable to human users at the most fundamental level of implicit cognitive processes. Although InStance’s focus is on complex humanoid robots, as they offer an embodied human-like presence, the findings of InStance might generalise to other devices that interact with humans (e.g., virtual personal assistants on mobile phones, interfaces of autonomous vehicles). In terms of the specific field of social robotics, InStance will provide information on the characteristics of robot behaviour that facilitate social attunement.

Website & more info

More info: https://instanceproject.eu/.