However, for non-intentional systems (such as man-made artefacts), we achieve the most efficient predictions by adopting the Design Stance - assuming that the system has been designed to behave in a particular way (e.g., the car will slow down when the brakes are pressed). Adopting either the intentional stance or the design stance has profound consequences not only for predicting others' behaviour but also for becoming engaged in a social situation. That is, when I adopt the intentional stance, I direct my attention to where somebody is pointing, and hence we establish a joint focus of attention, thereby becoming socially attuned. By contrast, if I see that a machine's artificial arm is pointing somewhere, I might be unwilling to attend there, as I do not believe that the machine wants to show me something, i.e., there is no intentional communicative content in the gesture.
This raises the question: to what extent are humans ready to attune socially with artificial systems that have a human-like appearance, such as humanoid robots? Once a robot imitates human-like behaviour at the level of subtle (and often implicit) social signals, humans might automatically perceive its behaviour as reflecting mental states. This would presumably evoke social cognition mechanisms to the same (or a similar) extent as in human-human interactions, allowing social attunement.