Student Project Proposals

If you are interested in one of the projects below (Semester Projects, Master Projects at EPFL, or Master Projects in Industry), please contact the first person of reference indicated in the description, either by telephone, by email, or by visiting us directly at the LASA offices.


Semester Projects

   
   

Learning optimal robot trajectories online using Inverse Reinforcement Learning

Inverse Reinforcement Learning (IRL) is the problem of learning the desired objectives, values, or rewards of a system by observing its behavior. In this particular scenario, the goal is to learn online the optimal motion parameters of a robot arm so that it successfully avoids obstacles.

In this project, the user directs the robot's motion with a joystick while their brain activity is monitored. It has been shown that erroneous or unexpected behavior during a task can elicit error-related potentials (ErrPs) in the brain activity. The aim of this project is to use the ErrPs and/or the joystick feedback to predict online the optimal robot trajectories for obstacle avoidance.

The student will study the problem of predicting online whether a trajectory generated by a given set of parameters will be acceptable to the user, aiming for a robust classifier that achieves this and converges quickly from a small number of samples. They will implement a real-time IRL approach on a KUKA LWR robot arm and gain hands-on experience in machine learning and robot control.
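
As a rough illustration of the online-prediction step described above, the sketch below implements a simple online logistic-regression classifier over a trajectory's parameter vector, updated after every trial from a binary acceptance label (which, in the project, would come from ErrP detection or joystick feedback). The parameter dimensionality, learning rate, and simulated labels are illustrative assumptions only, not part of the project setup.

# Minimal sketch (not the lab's implementation): an online logistic-regression
# classifier that predicts, from a trajectory's parameter vector, whether the
# user will accept it. Labels would come from ErrP detection or joystick
# feedback; here they are simulated with a hidden "ground-truth" preference.
import numpy as np

class OnlineAcceptabilityClassifier:
    def __init__(self, n_params, lr=0.5):
        self.w = np.zeros(n_params + 1)   # weights plus bias
        self.lr = lr

    def _features(self, theta):
        return np.append(theta, 1.0)      # append bias term

    def predict_proba(self, theta):
        z = self.w @ self._features(theta)
        return 1.0 / (1.0 + np.exp(-z))   # P(trajectory accepted)

    def update(self, theta, accepted):
        """One stochastic-gradient step after each user trial."""
        p = self.predict_proba(theta)
        self.w += self.lr * (accepted - p) * self._features(theta)

# Simulated interaction loop: the "user" accepts trajectories whose
# (hypothetical) clearance parameter theta[0] exceeds a threshold.
rng = np.random.default_rng(0)
clf = OnlineAcceptabilityClassifier(n_params=3)
for trial in range(50):
    theta = rng.uniform(0, 1, size=3)      # candidate motion parameters
    accepted = float(theta[0] > 0.6)       # stand-in for ErrP/joystick label
    clf.update(theta, accepted)

print("P(accept | good clearance):", clf.predict_proba(np.array([0.9, 0.5, 0.5])))
print("P(accept | poor clearance):", clf.predict_proba(np.array([0.1, 0.5, 0.5])))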

Project: Semester Project
Period: 11.07.2018 - 31.10.2018
Section(s): EL MA ME MT MX
Type: 30% theory, 10% software, 50% implementation
Knowledge(s): Machine learning, Robotics, C++/Python
Subject(s): Robot teleoperation, Inverse Reinforcement Learning
Responsible(s): Iason Batzianoulis
 
   
   
   

Robot teleoperation: combining muscular activity with gaze

An important part of neuroprosthetic control is decoding the user's motion intention. This intention is then converted into appropriate movements of the prosthetic or assistive device. When controlling prosthetic hand-arm systems, eye movements offer a natural way to determine which object the user intends to grasp. However, eye movements indicate only the direction in which the object of interest may be located, not its exact location.

In this project we will examine potential improvements in object localization by fusing gaze detection with monitoring of the muscular activity (EMG) of the arm. An estimate of the target position in 2D space would come from the gaze, while the EMG could be used to train two machine-learning regressors that predict the hand position in the x- and y-directions. Combining these two systems is not trivial due to the noise introduced by random eye movements and head motion, and the non-stationary nature of the EMG signals.

The student will gain experience in state-of-the-art computer vision methods as well as machine-learning regression methods applied to noisy biomedical signals. The goal of the project is a teleoperation system based on machine-learning methods, with which a user remotely controls a robotic arm and hand.
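
As a minimal sketch of the regression stage mentioned above, the example below trains two independent ridge regressors that map windowed EMG features to the hand position along x and y. The data are synthetic and the feature dimensionality is an assumption made only for illustration.

# Two regressors, one per axis, trained on synthetic "EMG features".
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, n_emg_features = 500, 8           # e.g. one RMS value per channel per window
X = rng.normal(size=(n_windows, n_emg_features))
true_map = rng.normal(size=(n_emg_features, 2))
hand_xy = X @ true_map + 0.05 * rng.normal(size=(n_windows, 2))  # noisy targets

X_tr, X_te, y_tr, y_te = train_test_split(X, hand_xy, random_state=0)

reg_x = Ridge(alpha=1.0).fit(X_tr, y_tr[:, 0])   # predicts hand x-position
reg_y = Ridge(alpha=1.0).fit(X_tr, y_tr[:, 1])   # predicts hand y-position

print("R^2 (x):", reg_x.score(X_te, y_te[:, 0]))
print("R^2 (y):", reg_y.score(X_te, y_te[:, 1]))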

Project: Semester Project
Period: 11.07.2018 - 31.10.2018
Section(s): EL MA ME MT MX
Type: 40% theory, 10% software, 50% implementation
Knowledge(s): Machine learning, Robotics, C++/Python
Subject(s): Rehabilitation robotics, Multisensory robotic system
Responsible(s): Iason Batzianoulis
   
   
   

Learning Manipulation with 4 Robotic Arms (this project is assigned)

Many industrial tasks require several robotic arms to work on the same piece simultaneously. This is very difficult, as we want the robots to perform the task without colliding with each other. The joint workspace of the robots is highly non-convex and cannot be expressed analytically. This project will apply machine-learning techniques to learn a representation of the feasible workspace of the 4 robotic arms. This representation will then be used in an inverse kinematics controller to control the robots' motions at run time. The algorithm will be validated on 4 robotic arms in the lab that must manipulate objects on a moving conveyor belt.
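
The sketch below illustrates one possible way to learn such a feasibility representation: sample joint configurations, label them with a (here drastically simplified) collision check, and fit a classifier that an IK controller could later query at run time. The two planar 2-link arms and the distance-based collision proxy are stand-ins chosen purely for illustration; they are not the lab's setup.

# Learn a binary "feasible / infeasible" model of a joint workspace from samples.
import numpy as np
from sklearn.svm import SVC

def end_effector(base_x, q, link=0.5):
    """Planar 2-link forward kinematics."""
    x = base_x + link * np.cos(q[0]) + link * np.cos(q[0] + q[1])
    y = link * np.sin(q[0]) + link * np.sin(q[0] + q[1])
    return np.array([x, y])

rng = np.random.default_rng(2)
Q = rng.uniform(-np.pi, np.pi, size=(3000, 4))        # [q1a, q2a, q1b, q2b]
labels = np.array([
    np.linalg.norm(end_effector(0.0, q[:2]) - end_effector(0.8, q[2:])) > 0.2
    for q in Q
]).astype(int)                                        # 1 = feasible (crude proxy check)

workspace_model = SVC(kernel="rbf", gamma=2.0).fit(Q, labels)

# At run time, an IK controller could reject (or penalize) candidate joint
# configurations that the learned model classifies as infeasible:
q_candidate = rng.uniform(-np.pi, np.pi, size=4)
print("candidate feasible?", bool(workspace_model.predict(q_candidate[None, :])[0]))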

Project: Semester Project
Period: 01.01.2017 - 15.07.2018
Section(s): EL IN MA ME MT PH
Type:
Knowledge(s): Robotics, Machine Learning, C++, ROS
Subject(s): Robotics, Machine Learning
Responsible(s): Aude Billard
   

Master Projects at EPFL

   
   

Robot teleoperation: combining muscular activity with gaze

An important part of neuroprosthetic control is decoding the user's motion intention. This intention is then converted into appropriate movements of the prosthetic or assistive device. When controlling prosthetic hand-arm systems, eye movements offer a natural way to determine which object the user intends to grasp. However, eye movements indicate only the direction in which the object of interest may be located, not its exact location.

In this project we will examine potential improvements in object localization by fusing gaze detection with monitoring of the muscular activity (EMG) of the arm. An estimate of the target position in 2D space would come from the gaze, while the EMG could be used to train two machine-learning regressors that predict the hand position in the x- and y-directions. Combining these two systems is not trivial due to the noise introduced by random eye movements and head motion, and the non-stationary nature of the EMG signals.

The student will gain experience in state-of-the-art computer vision methods as well as machine-learning regression methods applied to noisy biomedical signals. The goal of the project is a teleoperation system based on machine-learning methods, with which a user remotely controls a robotic arm and hand.
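
Complementary to the regression sketch under the corresponding semester-project listing above, the snippet below shows one simple way the two information sources could be fused: an inverse-variance weighted combination of the gaze-based target estimate and the EMG-based hand-position prediction. The variances and positions are illustrative placeholders, not measured values.

# Inverse-variance fusion of two independent 2-D estimates of the target.
import numpy as np

def fuse(gaze_xy, gaze_var, emg_xy, emg_var):
    """Weight each estimate by the inverse of its (assumed) noise variance."""
    w_gaze = 1.0 / gaze_var
    w_emg = 1.0 / emg_var
    fused = (w_gaze * gaze_xy + w_emg * emg_xy) / (w_gaze + w_emg)
    fused_var = 1.0 / (w_gaze + w_emg)
    return fused, fused_var

gaze_estimate = np.array([0.42, 0.10])   # from gaze tracking (metres, 2-D plane)
emg_prediction = np.array([0.38, 0.14])  # from the two EMG regressors
fused, var = fuse(gaze_estimate, 0.02, emg_prediction, 0.01)
print("fused target estimate:", fused, "variance:", var)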

Project: Master Project at EPFL
Period: 15.07.2018 - 31.10.2018
Section(s): CH EL GC ME MT MX
Type: 50% theory, 50% implementation
Knowledge(s): Machine learning, Matlab, Robotics, C++/Python
Subject(s): Rehabilitation robotics, Multisensory robotic system
Responsible(s): Iason Batzianoulis, Aude Billard
   
   
   

Learning optimal robot trajectories online using Inverse Reinforcement Learning

Inverse Reinforcement Learning (IRL) is the problem of learning the desired objectives, values, or rewards of a system by observing its behavior. In this particular scenario, the goal is to learn online the optimal motion parameters of a robot arm so that it successfully avoids obstacles.

In this project, the user directs the robot's motion with a joystick while their brain activity is monitored. It has been shown that erroneous or unexpected behavior during a task can elicit error-related potentials (ErrPs) in the brain activity. The aim of this project is to use the ErrPs and/or the joystick feedback to predict online the optimal robot trajectories for obstacle avoidance.

The student will study the problem of predicting online whether a trajectory generated by a given set of parameters will be acceptable to the user, aiming for a robust classifier that achieves this and converges quickly from a small number of samples. They will implement a real-time IRL approach on a KUKA LWR robot arm and gain hands-on experience in machine learning and robot control.
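
Complementary to the classifier sketch under the corresponding semester-project listing, the snippet below shows a deliberately simplified, IRL-flavoured update of a linear reward over trajectory features, learned from pairwise feedback (e.g. a detected ErrP indicating that one motion was worse than another). The features, the simulated preference, and the learning rate are hypothetical and serve only to illustrate the idea.

# Learn linear reward weights over trajectory features from pairwise preferences.
import numpy as np

def trajectory_features(traj, obstacle):
    """Hypothetical features: minimum obstacle clearance and path smoothness."""
    clearance = np.min(np.linalg.norm(traj - obstacle, axis=1))
    smoothness = -np.mean(np.linalg.norm(np.diff(traj, n=2, axis=0), axis=1))
    return np.array([clearance, smoothness])

rng = np.random.default_rng(3)
w = np.zeros(2)                     # reward weights to be learned
obstacle = np.array([0.5, 0.5])
eta = 0.1                           # learning rate

for trial in range(200):
    traj_a = np.cumsum(rng.normal(0, 0.05, size=(20, 2)), axis=0)  # candidate A
    traj_b = np.cumsum(rng.normal(0, 0.05, size=(20, 2)), axis=0)  # candidate B
    phi_a = trajectory_features(traj_a, obstacle)
    phi_b = trajectory_features(traj_b, obstacle)
    # Stand-in for user feedback (ErrP / joystick): prefer larger clearance.
    preferred, other = (phi_a, phi_b) if phi_a[0] > phi_b[0] else (phi_b, phi_a)
    # Perceptron-style update: correct the reward only when it misranks the pair.
    if w @ preferred <= w @ other:
        w += eta * (preferred - other)

print("learned reward weights [clearance, smoothness]:", w)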

Project: Master Project at EPFL
Period: 15.07.2018 - 31.10.2018
Section(s): EL MA ME MT MX
Type: 50% theory, 50% implementation
Knowledge(s): Machine learning, Robotics, C++/Python
Subject(s): Robot teleoperation, Inverse Reinforcement Learning
Responsible(s): Iason Batzianoulis, Aude Billard
   
   
   

Reaching for a moving object with YuMi

The use of multi-arm robotic systems allows for highly complex manipulation of heavy objects that would otherwise be impossible for a single-arm robot. In our work [1], we propose a unified coordinated control architecture for reaching and grabbing a moving object with a multi-arm robotic system. Due to the complexity of the task and of the system, each arm must coordinate not only with the object's motion but also with the motion of the other arms, in both task and joint space. At the task-space level, the proposed unified dynamical system coordinates the motion of each arm with the other arms, and the resulting motion of the arms with that of the object. At the joint-space level, coordination between the arms is achieved by introducing a centralized inverse kinematics (IK) solver under data-driven self-collision avoidance constraints, formulated as a quadratic programming (QP) problem and solved in real time.
The aim of this project is to implement the unified framework on YuMi, a dual-arm robotic system developed by ABB. The student will first review the related literature and become familiar with the Robot Operating System (ROS) and the provided libraries [2,3,4]. The proposed control architecture will then be implemented in C/C++ in a simulator in a Linux environment and, finally, on the real robot, for a handover scenario in which an operator holds a tray and hands it over to YuMi. A minimal sketch of the QP-based differential-IK step is given after the references below.
[1] Mirrazavi Salehian, S. S., Figueroa, N. and Billard, A. (2017) A Unified Framework for Coordinated Multi-Arm Motion Planning. (Under review).
[2] Mirrazavi Salehian, S. S., Centralized motion generator, https://github.com/sinamr66/Multiarm_ds
[3] Mirrazavi Salehian, S. S., Centralized IK solver, https://github.com/sinamr66/QP_IK_solver
[4] Mirrazavi Salehian, S. S., Constructing data set for SCA, https://github.com/sinamr66/SCA_data_construction
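
To give a concrete flavour of the QP formulation mentioned above, the sketch below solves a single-arm differential-IK problem as a box-constrained least-squares program: find joint velocities that track a desired end-effector twist subject to joint-velocity limits. The real centralized solver of [1,3] additionally couples both arms and adds data-driven self-collision constraints; the Jacobian and limits below are placeholders, not YuMi's kinematics.

# min_qdot ||J qdot - x_dot_des||^2   s.t.  |qdot_i| <= q_dot_max
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(4)
n_joints = 7
J = rng.normal(size=(6, n_joints))          # placeholder task Jacobian
x_dot_des = np.array([0.05, 0.0, 0.02, 0.0, 0.0, 0.1])   # desired end-effector twist
q_dot_max = 0.5                             # joint-velocity limit (rad/s)

res = lsq_linear(J, x_dot_des, bounds=(-q_dot_max, q_dot_max))
q_dot = res.x

print("joint velocities:", np.round(q_dot, 3))
print("task-space tracking error:", np.linalg.norm(J @ q_dot - x_dot_des))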

Project: Master Project at EPFL
Period: 01.01.2018 - 01.08.2018
Section(s): EL IN ME MT MX PH
Type: 20% theory, 60% software, 20% hardware
Knowledge(s): C++, ROS, Machine learning, Robotics
Subject(s): Motion planning, Self-collision avoidance
Responsible(s): Seyed Sina Mirrazavi Salehian
   
   
   

Learning Manipulation with 4 Robotic Arms

Many industrial tasks require several robotic arms to work on the same piece simultaneously. This is difficult, as the robots must not collide with each other while performing the task. The joint workspace of the robots is highly non-convex and cannot be expressed analytically. This project will apply machine-learning techniques to learn a representation of the feasible workspace of 4 robotic arms. This representation will then be used in an inverse kinematics controller to control the robots' motions at run time. The algorithm will be validated on 4 robotic arms in the lab that must manipulate objects on a moving conveyor belt. The project will also extend the approach to enable manipulation of the object under perturbations, such as when the conveyor belt slows down or accelerates rapidly.
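
As a small illustration of the perturbation aspect mentioned above, the sketch below estimates the conveyor-belt object's velocity online from a sliding window of position measurements and extrapolates an interception target, so that a controller could react when the belt slows down or speeds up. The tracking model, window size, and numbers are assumptions made purely for illustration.

# Online least-squares estimate of belt velocity over a sliding window.
import numpy as np
from collections import deque

class BeltTracker:
    def __init__(self, window=10, dt=0.01):
        self.buf = deque(maxlen=window)
        self.dt = dt

    def update(self, position):
        self.buf.append(position)

    def predict(self, horizon):
        """Fit position = p0 + v*t over the window, extrapolate by `horizon` seconds."""
        t = np.arange(len(self.buf)) * self.dt
        p = np.array(self.buf)
        A = np.vstack([np.ones_like(t), t]).T
        coeff, *_ = np.linalg.lstsq(A, p, rcond=None)   # rows: [p0, v]
        p0, v = coeff
        return p0 + v * (t[-1] + horizon), v

tracker = BeltTracker()
x = 0.0
for k in range(30):
    speed = 0.2 if k < 20 else 0.05      # the belt suddenly slows down
    x += speed * tracker.dt
    tracker.update(np.array([x, 0.0]))

target, v_est = tracker.predict(horizon=0.5)
print("estimated belt velocity:", np.round(v_est, 3))
print("predicted interception point:", np.round(target, 3))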

Project: Master Project at EPFL
Period: 01.01.2017 - 15.07.2018
Section(s): EL IN MA ME MT PH
Type:
Knowledge(s):
Subject(s): Robotics, Machine Learning
Responsible(s): Aude Billard
   

Master Projects in Industry

No projects are currently available.




Last update: 01/03/2012