We focus on scenarios where robots take the role of assistants, trainers, or collaborators in physical manipulation tasks. We use machine-learning algorithms to observe and model human behavior, and we build robot implementations that replicate the task in a human-like manner or detect human intention.
We look at tasks requiring sequences of actions that range from single-arm motions to fine manipulation and coordinated behaviors. We use knowledge acquired by observing humans performing a task or by directly guiding the robot through kinesthetic demonstrations.
Perfect robotic control can never exist, so we devise control strategies that cope with unexpected changes. We develop algorithms and controllers so that our robots can react quickly in fast-changing environments.
The human hand has remarkable functionality. Our research focuses on understanding this capacity and replicating it in robotic hands with very different kinematics.
Artificial agents that learn through imitation and social interaction provide important insights into human social cognition. We focus on modelling the cognitive mechanisms involved in social interaction, such as intention attribution and agency. This research enables us to design human-like robotic behaviors that make human-robot interactions more realistic.