Synopsis

Autonomous robots need motor cognition to perform complex tasks efficiently in real-world environments: this means not only acquiring specific motor skills, but also handling novel situations beyond learned schemas, generalising what has been learned to unexpected scenarios. Human infants learn through exploration the consequences of their actions on the objects they play with (i.e. the objects' affordances). This knowledge is progressively extended to interactions among different objects, eventually leading to the discovery of tool use and to the emergence of prediction and planning capabilities, which are exploited to solve complex tasks through powerful generalisation mechanisms. Interestingly, other animals (e.g. primates and crows) show similar cognitive capabilities.
Inspired by these observations, roboticists have been working on computational models for learning object affordances and tool use, which can be employed for prediction and planning, thus enhancing robot autonomy.

The goal of this workshop is to depict the current state of the art in the development of such models for humanoid robots, and to sketch the main challenges and future directions. We foster a multi-disciplinary discussion that aims to understand how collaboration between roboticists with different backgrounds (e.g. developmental robotics, artificial intelligence, control theory) and researchers from the life sciences (e.g. psychology, neuroscience, linguistics) can improve both the abilities of autonomous robots and our knowledge of the human brain.

In particular, some fundamental questions will drive the discussion:
- are object affordances a pre-requisite for prediction and planning?
- are the basic mechanisms involved in the learning of affordances similar to those that lead to tool use?
- what are the algorithms and strategies that can support the emergence of this knowledge in robots?
- can affordances enable generalisation (and even creativity) in cognitive robots?