Invited speakers

1. Alex Kacelnik, Oxford University, UK

2. Jacqueline Fagard, University Paris Descartes, France
How do infants discover tool use?

How do infants understand that they can act on an out-of-reach object by using another object (a “tool”)? Using the example of tool use, I will present several results illustrating how “understanding”, i.e. knowledge of objects’ physical properties, emerges through a succession of steps: from low-level detection of regularities, to knowledge within a specific context, and finally to more abstract generalization of that knowledge.
In particular, I will illustrate (1) how infants go from fetal manual babbling to being able to use their hands as tools to grasp a within-reach object (e.g. the tool); and (2) how, through their own exploration of objects and observation of others around them, infants progressively come to understand a tool’s affordances independently of the context.


3. Luciano Fadiga, University of Ferrara, Italy
The motor side of object semantics
Objects are not only physical entities located in the environment. Objects, often created by us, are sites of interaction with our body, and their brain representation automatically implies motor interaction. This concept is very close to the classical idea of motor affordance originally proposed by Gibson. Empirical confirmation of this perspective comes from electrophysiological studies of the monkey cortical circuits subserving motor control of the grasping hand, the visuomotor transformation of objects into hand poses, and the interrelation between pragmatic and semantic representations of objects and actions. In humans, a similar picture emerges from neuroimaging and prompts interesting links with language as well. In my presentation I will show and discuss the most recent state of the art on the topic, integrating this perspective with some very recent empirical results from our laboratory.

4. Sinan Kalkan, Middle East Technical University, Turkey
Affordances, Concepts and Context
We, as highly cognitive animals, seamlessly recognize and use objects, or concepts, and their affordances. Moreover, we use them differently in different spatial, temporal, or social situations, i.e., contexts. In this talk, I will present our recent efforts to integrate these crucial capabilities into a framework that links contextualized concepts and affordances and allows them to be used in various robotic settings across different contexts.

5. José Santos-Victor, Instituto Superior Tecnico, Portugal
Learning object affordances for tool use and problem solving
One of the hallmarks of human intelligence is the ability to predict the consequences of actions and to efficiently plan behaviors based on such predictions. This ability is supported by internal models that human babies acquire incrementally during development through sensorimotor experience, i.e. by interacting with objects in the environment while observing the resulting sensory perceptions. An elegant and powerful concept to represent these internal models has been proposed in developmental psychology under the name of object affordances: the action possibilities that an object offers to an agent. Affordances are learned ecologically by the agent and exploited for action planning. Clearly, endowing artificial agents with such cognitive capabilities is a fundamental challenge in both artificial intelligence and robotics. We propose a learning framework in which an embodied agent (in our case, the humanoid robot iCub) autonomously explores the environment and learns object affordances as probabilistic dependencies between actions, object visual properties, and observed effects; we use Bayesian Networks to encode this probabilistic model. Making inferences across the learned dependencies enables a number of cognitive skills, e.g. i) predicting the effect of an action on an object, or ii) selecting the best action to obtain a desired effect. By exploring object-object interactions, the robot can develop the concept of a tool (i.e. a handheld object that makes it possible to obtain a desired effect on another object) and eventually use the acquired knowledge to plan sequences of actions that attain a desired goal (i.e. problem solving).
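A minimal sketch of this idea, assuming discrete actions, object shapes, and effect categories; the trial data, variable names, and simple tabular model are purely illustrative and stand in for the richer Bayesian Network used in the actual iCub framework:

```python
# Illustrative sketch (not the authors' implementation): an affordance model as
# conditional probabilities P(effect | action, object_shape) estimated from
# exploratory trials, then queried for prediction and action selection.
from collections import Counter, defaultdict

# Hypothetical exploration data: (action, object_shape, observed_effect).
trials = [
    ("tap",   "sphere", "rolls_far"),
    ("tap",   "sphere", "rolls_far"),
    ("tap",   "box",    "slides_short"),
    ("push",  "sphere", "rolls_far"),
    ("push",  "box",    "slides_short"),
    ("grasp", "box",    "lifted"),
]

counts = defaultdict(Counter)            # (action, shape) -> Counter over effects
for action, shape, effect in trials:
    counts[(action, shape)][effect] += 1

def p_effect(effect, action, shape):
    """Maximum-likelihood estimate of P(effect | action, shape)."""
    c = counts[(action, shape)]
    total = sum(c.values())
    return c[effect] / total if total else 0.0

# i) Prediction: what happens if we tap a sphere?
print(p_effect("rolls_far", "tap", "sphere"))                 # -> 1.0

# ii) Action selection: which action most likely lifts a box?
actions = {a for a, _, _ in trials}
print(max(actions, key=lambda a: p_effect("lifted", a, "box")))  # -> "grasp"
```

The actual framework encodes these dependencies with Bayesian Networks over richer (and possibly continuous) variables; the sketch only illustrates the two inference queries mentioned in the abstract.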
 
6. Giorgio Metta, Italian Institute of Technology, Italy
Learning grasp-dependent tool affordances on the iCub humanoid robot
The ability to learn about and efficiently use tools is a desirable property for general-purpose humanoid robots, as it allows them to extend their capabilities beyond the limitations of their own body. Yet it is a topic that has been tackled only recently by the robotics community. Most of the studies published so far use tool representations that allow knowledge to be generalized among similar tools only in a very limited way. Moreover, most studies assume that the tool is always grasped in its common or canonical grasp position, thus ignoring the influence of the grasp configuration on the outcome of the actions performed with the tool. In this talk we present a method that tackles both issues simultaneously by using an extended set of functional features and a novel representation of the tool-use effect, which implicitly account for the grasp configuration and allow generalization among tools. Moreover, learning happens in a self-supervised manner: first, the robot autonomously discovers the affordance categories of the tools by clustering the effects of their usage; these categories are subsequently used as a teaching signal to associate visually obtained functional features with the tool’s expected affordance. In the experiments, we show how this technique can be effectively used to select, given a tool, the best action to achieve a desired effect.
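A minimal sketch of this two-stage, self-supervised scheme, with scikit-learn standing in for the actual learning machinery; the two-dimensional effect measurements, tool features, and all numbers are illustrative assumptions, not the iCub data or code:

```python
# Stage 1: discover affordance categories by clustering observed effects.
# Stage 2: use those categories as labels to learn a mapping from the
#          grasped tool's functional features to its expected affordance.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical effects: object displacement (dx, dy) after using the tool.
effects = np.vstack([rng.normal([0.20, 0.0], 0.02, size=(30, 2)),    # "pull"-like
                     rng.normal([-0.20, 0.0], 0.02, size=(30, 2))])  # "push"-like

# Hypothetical functional features of the tool as grasped
# (e.g. effective length, hook orientation).
features = np.vstack([rng.normal([0.30, 1.0], 0.05, size=(30, 2)),
                      rng.normal([0.30, -1.0], 0.05, size=(30, 2))])

# 1) Unsupervised: cluster the effects to obtain affordance categories.
categories = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(effects)

# 2) Supervised: learn features -> affordance category from those labels.
clf = LogisticRegression().fit(features, categories)

# Predict the affordance category of a new tool from its functional features.
print(clf.predict([[0.31, 0.9]]))
```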

7. Vaishak Belle, KU Leuven, Belgium

8. Justus Piater, Innsbruck University, Austria
Stacked learning for bootstrapping symbols from sensorimotor experience
General-purpose autonomous robots for deployment in unstructured domains such as service and household settings require a high level of understanding of their environment. For example, they need to understand how to handle objects, how to operate devices, the function of objects and their important parts, etc. How can such understanding be made available to robots? Hard-coding is not feasible, and conventional machine learning approaches will not work in such high-dimensional, continuous perception-action spaces with realistic amounts of training data. One way to get robots to learn higher-level concepts may be to focus on simple learning problems first, and then learn harder problems in ways that make use of the simpler problems already learned. For example, learning problems can be stacked by making the output of lower-level learners available as input to higher-level learning problems, effectively turning hard problems into easier ones by expressing them in terms of highly predictive attributes. This talk discusses how this can be done, including further boosting learning efficiency through active learning and the automatic, unsupervised structuring of sets of learning problems and their interconnections. Following a stacked-learning approach, we discuss how symbolic planning operators can be formed in the continuous sensorimotor space of a manipulator robot that explores its world, and how the acquired symbolic knowledge can be further used to develop higher-level reasoning skills.
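A minimal sketch of the stacking idea under simplified assumptions (synthetic data, scikit-learn decision trees, and the hypothetical attributes "rollable" and "stackable" standing in for learned sensorimotor concepts); it is not the speaker's system, only an illustration of feeding a lower-level learner's output into a higher-level one:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic raw sensorimotor features for 200 object interactions.
X_raw = rng.uniform(size=(200, 5))

# A low-level attribute and a harder, higher-level target that depends on it.
rollable = (X_raw[:, 0] > 0.5).astype(int)
stackable = ((rollable == 0) & (X_raw[:, 1] > 0.4)).astype(int)

# 1) Lower-level learner: raw features -> "rollable" attribute.
low = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_raw, rollable)

# 2) Higher-level learner: raw features + predicted attribute -> "stackable".
#    The lower-level prediction becomes a highly predictive input attribute.
X_stacked = np.column_stack([X_raw, low.predict(X_raw)])
high = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_stacked, stackable)

# At run time, the same two-stage pipeline is applied to a new observation.
x_new = rng.uniform(size=(1, 5))
print(high.predict(np.column_stack([x_new, low.predict(x_new)])))
```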

9. Norbert Kruger, University of Southern Denmark, Denmark
What we can learn from the primate visual system
We discuss the impact (or lack thereof) that biologically motivated vision has had on computer vision in recent decades. We then summarize a number of computer vision and robotics problems for which biological models can indicate how they might be addressed. Finally, we review important findings about the primate visual system and draw from them a number of conclusions for the development of algorithms.


10. Yannis Aloimonos, University of Maryland, USA