Student Project Proposals

If you are interested in one of the projects below (Semester Projects, Master Projects at EPFL, or Master Projects in Industry), please contact the person of reference indicated in each description by telephone, by email, or by visiting the LASA offices directly.


Semester Projects


Activating grasp with a prior from the muscular activity

Neuroprosthetic devices are used to restore motor abilities lost to pathology or trauma. For upper-limb prostheses, an essential functionality is grasping. To restore this ability, the prosthesis should rapidly achieve a secure grasp, which requires adapting promptly to the characteristics of the object, such as its size and texture. The object's characteristics (mass, surface friction) directly determine the force that must be applied to it, and an inaccurate estimation of these can lead to an unsuccessful grasp. To apply the proper forces, the device should use the information coming from tactile sensors [1] to determine the actual weight and friction.
The configuration of the fingers, on the other hand, should depend on the user's intention. The prosthetic device should accurately identify the grasping intention and perform the desired grasp type. The grasping intention can be decoded from the muscular activity of the arm in the early stages of the reaching motion [2]. The prosthesis could therefore predict the grasp type from the muscular activity, command the fingers to close in the desired configuration, and use the tactile information to grasp the object securely.
The goal of this semester project is to combine the aforementioned approaches and implement them on a robotic hand. The student will become familiar with machine learning methods for processing electromyographic (EMG) and tactile signals while developing a real-time robotic application. The candidate should have taken at least one machine learning / pattern recognition course and have good programming skills in C++ or Python.
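To illustrate the decoding side, here is a minimal sketch of predicting a grasp type from windowed EMG signals. Everything in it is hypothetical: the channel count, the synthetic amplitude profiles standing in for real grasp-specific muscle activations, and the nearest-centroid classifier (real pipelines would use richer features and a stronger classifier).

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_features(window):
    """Root-mean-square amplitude per EMG channel (rows = channels)."""
    return np.sqrt(np.mean(window ** 2, axis=1))

# Hypothetical data: 8-channel EMG windows for two grasp types, where each
# grasp type activates the channels with a different amplitude profile.
def synth_window(profile):
    return profile[:, None] * rng.standard_normal((8, 200))

power_profile = np.array([1.0, 1.2, 0.2, 0.3, 1.1, 0.9, 0.2, 0.1])
pinch_profile = np.array([0.2, 0.1, 1.0, 1.3, 0.3, 0.2, 1.1, 0.9])

train = {"power": [synth_window(power_profile) for _ in range(20)],
         "pinch": [synth_window(pinch_profile) for _ in range(20)]}

# Nearest-centroid classifier in RMS-feature space.
centroids = {label: np.mean([rms_features(w) for w in windows], axis=0)
             for label, windows in train.items()}

def predict(window):
    f = rms_features(window)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))
```

In a real-time application the predicted grasp type would select the finger pre-shape, while the tactile feedback loop closes the grasp.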

References:
[1] M. Li, K. Hang, D. Kragic and A. Billard, "Dexterous Grasping under Shape Uncertainty," Robotics and Autonomous Systems, 2015.
[2] I. Batzianoulis, S. El-Khoury, E. Pirondini, M. Coscia, S. Micera and A. Billard, "EMG-Based Decoding of Grasp Gestures in Reaching-to-Grasping Motions," Robotics and Autonomous Systems, 2017.

Project: Semester Project
Period: 12.01.2018 - 07.07.2018
Section(s): EL MA ME MT MX
Type: 20% theory, 10% software, 70% implementation
Knowledge(s): Machine learning, Robotics, C++/Python
Subject(s): Prosthesis Control, Multisensory robotic system
Responsible(s): Iason Batzianoulis

Adaptive human-robot interaction: From human intention to compliant robotic behavior

Robots are mainly here to assist us with tasks that are repetitive and burdensome. Machine learning and control theory provide a variety of techniques for teaching robots to perform such tasks. However, the ability of robots to adapt their tasks to their environment or to the intention of their human user is still limited. Providing robots with such adaptive abilities would unlock new possibilities for assistive robotics. Consider polishing as a task for a robotic arm. The robot learns how to polish from human demonstrations. During polishing, however, the human user can safely grab the robot and change the polishing direction by applying a few repetitions of the movement in a new desired direction. The robot then quickly adapts its motion to the intention of the human, thus assisting him/her in performing the new task.

Previously, as a first step, we proposed a method for adapting the robot's behavior to the intention of a human user. This method has been implemented and tested in simulation, and is well documented here. As the next step, the student will implement this method on a real robot, the 7-DOF KUKA LWR 4+. An impedance controller will be provided to control the end-effector of the robot, and the student will mostly focus on adaptive motion planning using dynamical systems. The method will be implemented in C++ using ROS libraries. In the end, we expect a compliant robot that polishes a surface and adapts its behavior (i.e., the location and shape of the polishing) to the motions of the human.
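The adaptation idea above can be sketched in a few lines. This is not the lab's published method, only a toy illustration under simple assumptions: a linear dynamical system drives the end-effector toward an attractor, and repeated human corrections shift that attractor (the update rate and correction model are made up for the example).

```python
import numpy as np

def ds_velocity(x, attractor, gain=1.0):
    """Linear dynamical system: converge to the attractor."""
    return -gain * (x - attractor)

def adapt_attractor(attractor, x, x_human, rate=0.5):
    """Shift the attractor toward where the human pushes the end-effector.
    x_human is the position after the human's correction."""
    return attractor + rate * (x_human - x)

# Simulate: attractor at the origin; the human repeatedly nudges the
# end-effector toward [1, 0], so the attractor should drift that way.
attractor = np.zeros(2)
x = np.array([0.5, 0.5])
dt = 0.05
for step in range(200):
    x = x + dt * ds_velocity(x, attractor)
    if step % 20 == 0:  # occasional human correction toward [1, 0]
        x_pushed = x + 0.2 * (np.array([1.0, 0.0]) - x)
        attractor = adapt_attractor(attractor, x, x_pushed)
        x = x_pushed
```

On the real robot, the impedance controller provides the compliance that makes grabbing and correcting the arm safe, while the dynamical system above generates the nominal motion.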

Project: Semester Project
Period: 01.08.2017 - 01.02.2018
Section(s): IN MA ME MT MX
Type: 10% theory, 30% software, 60% implementation
Knowledge(s): Basics of robotics and control, C++ programming
Subject(s): Physical Human-robot interaction, Adaptive control
Responsible(s): Mahdi Khoramshahi

Learning Manipulation with 4 Robotic Arms

Many industrial tasks require several robotic arms to work on the same piece simultaneously. This is very difficult, as we want the robots to perform the task without colliding with each other. The joint workspace of the robots is highly non-convex and cannot be expressed in closed form. This project will apply machine learning techniques to learn a representation of the feasible workspace of the 4 robotic arms. This representation will then be used in an inverse kinematics controller to control the robots' motions at run time. The algorithm will be validated by controlling 4 robotic arms in the lab that must manipulate objects on a moving conveyor belt.
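As a toy illustration of learning a feasibility representation from samples, the sketch below trains a k-nearest-neighbour classifier on labelled workspace points. The feasibility oracle here (a reachable annulus for one planar arm) is entirely made up; in the actual project each sample would be labelled with the robots' kinematics and a collision checker.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the real feasibility oracle.
def is_feasible(p):
    r = np.linalg.norm(p)
    return 0.4 < r < 1.0  # reachable annulus of one planar arm

# Sample and label candidate end-effector positions.
X = rng.uniform(-1.2, 1.2, size=(2000, 2))
y = np.array([is_feasible(p) for p in X])

def knn_feasible(p, k=7):
    """k-nearest-neighbour estimate of workspace feasibility."""
    d = np.linalg.norm(X - p, axis=1)
    idx = np.argpartition(d, k)[:k]
    return np.mean(y[idx]) > 0.5
```

At run time, such a learned predicate can be queried inside the inverse kinematics loop to keep commanded poses inside the feasible region.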

Project: Semester Project
Period: 01.01.2017 - 15.07.2018
Section(s): EL IN MA ME MT PH
Type:
Knowledge(s):
Subject(s): Robotics, Machine Learning
Responsible(s): Aude Billard

Master Projects at EPFL


Reaching for a moving object with YuMi

The use of multi-arm robotic systems allows for highly complex manipulation of heavy objects that would otherwise be impossible for a single-arm robot. In our work [1], we propose a unified coordinated control architecture for reaching and grabbing a moving object with a multi-arm robotic system. Due to the complexity of the task and of the system, each arm must coordinate not only with the object's motion but also with the motion of the other arms, in both task and joint spaces. At the task-space level, the proposed unified dynamical system coordinates the motion of each arm with the rest of the arms, and the resultant motion of the arms with that of the object. At the joint-space level, the coordination between the arms is achieved by introducing a centralized inverse kinematics (IK) solver under data-driven self-collision avoidance constraints, formulated as a quadratic programming (QP) problem and solved in real time.
The aim of this project is to implement the unified framework on YuMi, a dual-arm robotic system developed by ABB. The student will first review the related literature and become familiar with the Robot Operating System (ROS) and the provided libraries [2,3,4]. The proposed control architecture will then be implemented in C/C++ in a simulator in a Linux environment and, finally, on the real robot, performing a handover scenario where an operator holds a tray and hands it over to YuMi.
[1] Mirrazavi Salehian, S. S., Figueroa, N. and Billard, A. (2017) A Unified Framework for Coordinated Multi-Arm Motion Planning. (Under review).
[2] Mirrazavi Salehian, S. S., Centralized motion generator, https://github.com/sinamr66/Multiarm_ds
[3] Mirrazavi Salehian, S. S., Centralized IK solver, https://github.com/sinamr66/QP_IK_solver
[4] Mirrazavi Salehian, S. S., Constructing data set for SCA, https://github.com/sinamr66/SCA_data_construction
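To give a feel for the joint-space layer, here is a much-simplified sketch of velocity-level IK. It is not the QP solver of [3]: the data-driven collision constraints are omitted, a planar 2-link arm stands in for the 7-DOF chains, and the joint-velocity limits are enforced by clipping a damped least-squares solution rather than by proper QP box constraints.

```python
import numpy as np

def jacobian(q, l1=0.5, l2=0.5):
    """Jacobian of a planar 2-link arm (stand-in for the 7-DOF chain)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_step(J, v, qdot_max=1.0, damping=1e-8):
    """Damped least-squares joint velocities for a desired end-effector
    velocity v. Clipping is a crude stand-in for the QP's box constraints."""
    JJt = J @ J.T
    qdot = J.T @ np.linalg.solve(JJt + damping * np.eye(JJt.shape[0]), v)
    return np.clip(qdot, -qdot_max, qdot_max)

q = np.array([0.5, 0.8])
v = np.array([0.1, 0.05])   # desired end-effector velocity
J = jacobian(q)
qdot = ik_step(J, v)
```

The real centralized solver additionally stacks the Jacobians of all arms into one QP so that the self-collision constraints couple their joint velocities.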

Project: Master Project at EPFL
Period: 01.01.2018 - 01.08.2018
Section(s): EL IN ME MT MX PH
Type: 20% theory 60% software, 20% hardware
Knowledge(s): C++, ROS, Machine learning, Robotics
Subject(s): Motion planning, Self-collision avoidance
Responsible(s): Seyed Sina Mirrazavi Salehian

Robot teleoperation: Combining muscular activity with gaze

An important part of neuroprosthetic control is decoding the user's motion intention, which is then converted into appropriate movements of the prosthetic or assistive device. When controlling prosthetic hand-arm systems, eye movements are a natural way to determine the object the user intends to grasp. However, eye movements give only the direction in which the object of interest may be located, not its exact location.
In this project we will examine potential improvements in the localization of the object by fusing gaze detection with the monitoring of the muscular activity (EMG) of the arm. An estimate of the target position in 2D space would come from the gaze, while the EMG could be used to train two machine learning regression algorithms to predict the hand position in the x- and y-directions. Combining these two systems is not trivial due to the noise introduced by random eye movements and head motion, and to the non-stationary nature of the EMG signals.
The student will gain experience with state-of-the-art computer vision methods as well as machine learning regression methods applied to noisy biomedical signals. The goal of the project is a teleoperation system using machine learning methods, in which a user remotely controls a robotic arm and hand.
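One simple way to realize the fusion described above is inverse-variance weighting of the two position estimates. The sketch below is purely illustrative: the linear EMG-to-position model, the synthetic data, and the fixed noise variances are all assumptions, and real EMG would need nonlinear, non-stationary-aware regression.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: EMG features linearly related to hand x/y.
n, d = 500, 6
W_true = rng.standard_normal((d, 2))
emg = rng.standard_normal((n, d))
hand = emg @ W_true + 0.1 * rng.standard_normal((n, 2))

# One ridge regressor per coordinate (closed form), as the project suggests.
lam = 1e-2
W = np.linalg.solve(emg.T @ emg + lam * np.eye(d), emg.T @ hand)

def emg_estimate(features):
    return features @ W

def fuse(gaze_xy, emg_xy, var_gaze, var_emg):
    """Inverse-variance weighted fusion of the two 2-D position estimates."""
    w_g = 1.0 / var_gaze
    w_e = 1.0 / var_emg
    return (w_g * gaze_xy + w_e * emg_xy) / (w_g + w_e)
```

The weighting naturally trusts the less noisy channel more; in practice the variances would themselves be estimated online, e.g. from fixation stability and EMG signal quality.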

Project: Master Project at EPFL
Period: 12.11.2017 - 07.07.2018
Section(s): EL ME MT MX
Type: 40% theory, 10% software, 50% implementation
Knowledge(s): Machine learning, Robotics, C++/Python
Subject(s): Prosthesis Control, Multisensory robotic system
Responsible(s): Iason Batzianoulis

Robust Bimanual Reaching motion for ABB-Yumi Robot

To perform many of our daily tasks, we use both our arms (and hands). This allows us to have better control over our environment (better perception, higher precision, more degrees of actuation, and higher applied forces). Given the uncertainties in our surrounding environment, our ability to coordinate the motion of our arms is extraordinary. Endowing robots with the same ability would improve their performance when interacting with uncertain environments. Imagine a scenario where the robotic task is to grasp an object whose location is imprecise (only a probability distribution is available). This imprecision can be due to noisy perception or to the fact that the object is moving with unknown dynamics. In such conditions, taking the maximum likelihood for granted and performing the task in a deterministic fashion may lead to poor performance and even failure. The robot can, however, perform exploratory motions to gain better knowledge of the environment (i.e., a probability distribution with higher confidence for the target), which in turn increases the performance of the task.

As the first step of this project, the student will focus on formulating a simple algorithm for simultaneous estimation and motion planning (using Kalman filters and dynamical systems). As the second step, the student will implement and test this algorithm on the ABB YuMi robot. The implementation will be done in C++ using ROS libraries, with the robot controlled in position. In the end, we expect a bimanual robot that grasps objects efficiently under environmental uncertainty.
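The estimation-plus-planning coupling can be sketched in one dimension: a scalar Kalman filter refines the target estimate from noisy measurements while a dynamical system drives the end-effector toward the current estimate. The target model (static), noise level, and gains are all illustrative assumptions, not the project's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar Kalman filter for a static target observed with noisy measurements.
target_true = 0.8
meas_var = 0.05 ** 2

est, est_var = 0.0, 1.0          # prior: very uncertain
x, dt, gain = -0.5, 0.01, 2.0    # robot end-effector (1-D), DS gain

for _ in range(300):
    z = target_true + rng.normal(0.0, 0.05)   # noisy target measurement
    k = est_var / (est_var + meas_var)        # Kalman gain (static target)
    est = est + k * (z - est)
    est_var = (1.0 - k) * est_var
    x = x + dt * (-gain * (x - est))          # DS tracks the current estimate
```

The interesting extension in the project is to let the remaining uncertainty (`est_var`) shape the motion itself, e.g. slowing down or exploring while confidence is low.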

Project: Master Project at EPFL
Period: 01.08.2017 - 01.02.2018
Section(s): EL IN MA ME MT MX
Type: 30% theory, 30% software, 40% implementation
Knowledge(s): Basics of Robotics and control, and C++ programming
Subject(s): Estimation, Motion planning, and Control
Responsible(s): Mahdi Khoramshahi

Learning Manipulation with 4 Robotic Arms

Many industrial tasks require several robotic arms to work on the same piece simultaneously. This is difficult, as the robots should not collide with each other while performing the task. The joint workspace of the robots is highly non-convex and cannot be expressed in closed form. This project will apply machine learning techniques to learn a representation of the feasible workspace of the 4 robotic arms. This representation will then be used in an inverse kinematics controller to control the robots' motions at run time. The algorithm will be validated by controlling 4 robotic arms in the lab that must manipulate objects on a moving conveyor belt. The project will also extend the approach to enable manipulation of the object under perturbations, such as when the conveyor belt slows down or accelerates rapidly.

Project: Master Project at EPFL
Period: 01.01.2017 - 15.07.2018
Section(s): EL IN MA ME MT PH
Type:
Knowledge(s):
Subject(s): Robotics, Machine Learning
Responsible(s): Aude Billard

Master Projects in Industry

No projects are available at the moment.




Last update: 01/03/2012