Student Project Proposals

If you are interested in one of the projects below (Semester Projects, Master Projects at EPFL, or Master Projects in Industry), please contact the first person of reference indicated in each description by telephone, by email, or by visiting the LASA offices directly.


Semester Projects

   
   

Task-adaptation for assistive robotics using switching dynamical systems

Robots are mainly here to assist us with tasks that are repetitive and burdensome. Machine learning and control theory provide a variety of techniques for teaching robots to perform such tasks. However, robots' ability to adapt their tasks to their environment, or to the intention of their human user, is limited. Providing robots with such adaptive abilities will unlock new possibilities for assistive robotics. Consider polishing as a task for a robotic arm. The robot learns how to polish from human demonstrations. During polishing, however, the human user can safely grab the robot and change the polishing direction by applying a few repetitions of movement in a new, desired direction. The robot then quickly adapts its motion to the intention of the human, assisting him/her in performing the new task.

In the first step of this project, the student will implement a simple polishing task for a given surface and a given polishing direction. A dynamical system will be used to generate the desired trajectories for the robotic arm's end-effector, and an impedance controller will be used to control the end-effector, allowing the human to interact safely with the robot. In the second part, we will design an adaptive controller for the polishing direction; i.e., using the interaction forces, the polishing direction will be adapted to the intention of the human user. Implementations will be tested in Matlab simulations. If there is time and interest, we will apply the work on a real robot and test it in interaction with humans.
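The two ingredients of the project — a dynamical system generating polishing motions and an adaptation law driven by interaction forces — can be sketched as follows. This is a minimal illustration in Python (the project itself uses Matlab); the function names, gains, and the specific adaptation rule (tilting the direction toward the tangential component of the measured force) are assumptions for the sketch, not the project's prescribed method.

```python
import numpy as np

def polish_velocity(x, direction, attractor, speed=0.1, k=2.0):
    """Hypothetical DS for the end-effector: move along `direction`
    while being pulled back toward the polishing line through `attractor`."""
    err = x - attractor
    err_perp = err - np.dot(err, direction) * direction  # off-line component
    return speed * direction - k * err_perp

def adapt_direction(direction, force, rate=0.5):
    """Tilt the polishing direction toward the tangential human force."""
    f_tan = force - np.dot(force, direction) * direction
    new_dir = direction + rate * f_tan
    return new_dir / np.linalg.norm(new_dir)
```

A human pushing sideways (force orthogonal to the current direction) would rotate the unit direction vector toward the push, while forces along the direction leave it unchanged.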

Project: Semester Project
Period: 15.02.2017 - 09.06.2017
Section(s): EL IN MA ME MT MX
Type: 25% theory, 25% software, 50% testing
Knowledge(s): Control systems, Robotics, Matlab, C++
Subject(s): Human-robot interaction, dynamical systems
Responsible(s): Mahdi Khoramshahi
   

Master Projects at EPFL

   
   

Improving the stability of an EMG classifier

Electromyographic (EMG) signals represent neuromuscular activity and are effective biological signals for expressing movement intent for external device control. EMG pattern recognition is an advanced, intelligent signal-processing technology and has been proposed as a potential method for reliable user-intent classification. A typical pattern recognition algorithm extracts information from EMG signals and then classifies the user's intended movement for external device control. The benefit of pattern recognition algorithms is that they can increase the neural information extracted from EMG signals using a small number of monitored muscles, and they allow intuitive control of external devices.

Most of the related literature on pattern recognition-based myoelectric control reports high classification accuracy across multiple discriminated motions. However, there is a gap between classification accuracy and the usability of practical myoelectric control, especially under long-term usage. In this project, we aim to address these challenges by developing an algorithm that can adapt to the variability of EMG signals across different days of trials. Furthermore, the student will use Inverse Reinforcement Learning (IRL) techniques to adapt the most suitable optimal control policy for reaching a new target, based on the robot's performance. The project will be conducted in collaboration with the CNBI lab of Prof. Jose Millan and the TNE lab of Prof. Silvestro Micera.
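To make the pipeline concrete, here is a minimal sketch of the two pieces the description mentions: time-domain EMG features and a classifier that drifts with the signal to handle day-to-day variability. The feature set (mean absolute value, waveform length, zero crossings) is standard in the EMG literature; the adaptive nearest-centroid classifier and all names here are illustrative assumptions, not the lab's actual algorithm.

```python
import numpy as np

def emg_features(window):
    """Classic time-domain features for one 1-D EMG window."""
    mav = np.mean(np.abs(window))               # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))        # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)  # zero crossings
    return np.array([mav, wl, zc], dtype=float)

class AdaptiveCentroidClassifier:
    """Nearest-centroid classifier whose centroids drift toward newly
    classified samples, tracking slow changes in the EMG distribution."""
    def __init__(self, rate=0.05):
        self.rate = rate
        self.centroids = {}

    def fit(self, X, y):
        for label in set(y):
            self.centroids[label] = X[y == label].mean(axis=0)
        return self

    def predict(self, x, adapt=True):
        dists = {c: np.linalg.norm(x - mu) for c, mu in self.centroids.items()}
        label = min(dists, key=dists.get)
        if adapt:  # pull the winning centroid toward the new sample
            self.centroids[label] += self.rate * (x - self.centroids[label])
        return label
```

The adaptation step is the point of interest: without it, a classifier trained on day one degrades as electrode placement and skin conditions change; with it, the class models follow the signal.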

Project: Master Project at EPFL
Period: 20.02.2017 - 23.06.2017
Section(s): ME
Type: 70% theory, 20% software, 10% testing
Knowledge(s): Machine learning Techniques, Matlab, C++/Python
Subject(s): Machine learning, Prosthesis Control
Responsible(s): Iason Batzianoulis
 
   
   
   

Decoding the motion intention through Electromyography and Electroencephalography

We use our hands for a large variety of tasks in daily life as we work, play, and communicate. The loss of hand functionality therefore affects all aspects of life. Robotic devices are frequently used to restore motor abilities lost to pathology or trauma, such as exoskeletons that assist patients during rehabilitation and prostheses for amputees. As these devices are in direct physical contact with the user, it is important that the device react promptly once the user's intention is detected. Most commonly, such devices decode the user's intention from bioelectrical signals, such as electromyography (EMG) or electroencephalography (EEG), and use this information to generate the corresponding trajectories. Smooth and seamless control of the device demands both a rapid estimation of the user's intention and its accurate expression by the device.

In this project, the student will examine the muscular activity of the arm as well as the brain activity of able-bodied subjects performing reaching motions. The project aims at analysing EMG and EEG signals to decode, first, the existence of a grasping intention and, then, the grasping gesture. The project will conclude with the development of an approach that combines the aforementioned sensory information for the control of a robotic hand. The project will be conducted in collaboration with the CNBI lab of Prof. Jose Millan and the TNE lab of Prof. Silvestro Micera.
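One simple way to combine the two decoders the project describes is to fuse their per-class probabilities. The weighted-product (naive-Bayes-style) fusion below is only one plausible scheme, sketched under the assumption that each modality's decoder outputs a probability per grasp class; the function name and weighting are illustrative.

```python
import numpy as np

def fuse_intent(p_emg, p_eeg, w_emg=0.6):
    """Combine per-class probabilities from an EMG decoder and an EEG
    decoder by a weighted geometric mean, then renormalize."""
    p = (np.asarray(p_emg, dtype=float) ** w_emg) \
        * (np.asarray(p_eeg, dtype=float) ** (1.0 - w_emg))
    return p / p.sum()
```

The weight `w_emg` lets the fusion trust the faster or more reliable modality more; tuning it per subject would itself be part of the decoding study.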

Project: Master Project at EPFL
Period: 20.02.2017 - 23.06.2017
Section(s): EL IN MA ME MT MX
Type: 50% theory, 30% software, 20% testing
Knowledge(s): Machine learning Techniques, Robotics, Matlab, C++/Python
Subject(s): Machine learning, Prosthesis Control
Responsible(s): Iason Batzianoulis, Denys Lamotte
 
   
   
   

Learning from noisy demonstrations: the role of compliance in exploration-exploitation trade-off

Learning by demonstration provides a powerful framework for enabling a robot to perform desirable tasks. However, real-world demonstrations are prone to noise and other uncertainties, especially when a teacher (i.e., the person who provides the demonstrations) can only provide sub-optimal solutions. While these noisy demonstrations can speed up learning at the beginning, it is favorable if the robot can go beyond this sub-optimality and reach the optimal solution for the task. This requires a delicate balance between exploiting the noisy demonstrations and exploring for the optimal solution. As humans, one key element that enables us to exhibit this behavior naturally is our physical compliance. When we start to learn a new task (e.g., learning to dance from a teacher), we stay compliant. This allows the teacher to easily provide us with new demonstrations (in other words, we exploit the teacher). Upon reaching satisfactory confidence in our performance, we start to reduce our compliance (i.e., neglect the demonstrations) and search for small improvements. This project aims to study this approach from a machine learning point of view.

The student will start by formulating the problem as an RL problem in which, given an initial condition, the learning agent tries to reach a goal state. At each trial, however, there is a noisy demonstration which, with low probability, provides a sub-optimal solution. As part of the learning process, the agent learns either to reject or to comply with this solution. We are interested in investigating the hypothesis that the agent prefers to comply with such noisy demonstrations at the beginning of learning, and starts to reject them as it learns to reach the goal on its own. Once the underlying principles of this task are understood, we can move on to more realistic scenarios involving physical human-robot interaction.
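The formulation above can be sketched as a toy Q-learning agent on a 1-D chain: a noisy teacher suggests an action at every step, and a scalar "compliance" gives the probability of following the teacher instead of the agent's own greedy policy. The decay schedule for compliance is a hand-picked assumption standing in for the learned reject/comply decision the project actually targets.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, GOAL = 10, 9

def demo_action(state):
    """Noisy teacher: usually points toward the goal, sometimes backwards."""
    return 1 if rng.random() > 0.2 else -1

Q = np.zeros((N_STATES, 2))  # actions: 0 -> left, 1 -> right
compliance = 1.0             # probability of deferring to the teacher

for episode in range(200):
    s = 0
    while s != GOAL:
        if rng.random() < compliance:            # exploit the demonstration
            a = 1 if demo_action(s) == 1 else 0
        else:                                    # act on the agent's own estimate
            a = int(np.argmax(Q[s]))
        s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else -0.01         # small step cost, goal reward
        Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2
    compliance = max(0.05, compliance * 0.98)    # grow autonomy over episodes
```

Early on the teacher's (mostly correct) suggestions drive the agent to the goal and fill in the Q-table; as compliance decays, the agent relies on its own greedy policy, which mirrors the comply-then-reject hypothesis the project wants to study.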

Project: Master Project at EPFL
Period: 20.02.2017 - 23.06.2017
Section(s): EL IN MA ME MT MX
Type: 35% theory, 15% software, 50% testing
Knowledge(s): Reinforcement learning, Matlab
Subject(s): Machine learning
Responsible(s): Mahdi Khoramshahi, Laura Cohen
   
   
   

Shared and Adaptive Robotic-Workspace for Intelligent and Safe Human-Robot Interaction

In classical industrial settings, conveyor belts are often used as feeders for the next component of a production line: moving in only one direction at a constant speed, they carry objects from the workspace of one robot to the workspace of another. By intelligently controlling the movement of a conveyor belt, however, robotic technology can open new possibilities for industrial applications. First, by treating the conveyor belt as an additional degree of freedom of the robot, we can expand the workspace of our robotic manipulators. For example, by moving a large object (i.e., larger than the robotic arm's workspace) on the conveyor, the robotic arm can operate on any region of interest on the object. Second, the conveyor belt can be used as a means to share workspaces among several stationary robots. For example, two robots at opposite ends of a conveyor belt can pass an object back and forth for different types of operations. With regard to safety, this approach has two advantages over the classical approach (where two stationary robots sit close to each other to share their workspaces): (1) it removes the risk of the two robots colliding with each other, and (2) it provides a safer setting for human interaction. For example, upon detecting a human intervention, the conveyor belt can bring the object to the middle of the belt (away from the workspaces of the two arms) for the human to reach.

For this project, the student will start by implementing a motion generator in Matlab using dynamical systems (DS), producing reference trajectories for the robotic arm's end-effector and the conveyor belt. This motion generator enables the robotic arm to reach/grasp an approaching object on the conveyor belt. In the next step, the student will extend the motion generator to the case of two robots at opposite ends of the belt; the two robots take turns operating on the object, and the conveyor belt moves the object between them. In the final step, the student will consider human intervention: first detecting the human, and then switching to a new motion generator that brings the object to the middle of the belt. Implementations will be tested in Matlab simulations and, if time and progress permit, on more realistic setups.
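The first step (a DS that lets the arm reach an object approaching on the belt) amounts to a dynamical system with a moving attractor. The sketch below, in Python rather than the project's Matlab, adds the belt's feed-forward velocity to a simple attraction term; the gain and geometry are illustrative assumptions.

```python
import numpy as np

def reach_velocity(x_eef, x_obj, belt_speed, gain=2.0):
    """DS toward a moving attractor: track the object on the belt by
    adding the belt's feed-forward velocity to the attraction term."""
    belt_vel = np.array([belt_speed, 0.0])  # belt moves along +x
    return gain * (x_obj - x_eef) + belt_vel

# Simulate: the object drifts on the belt, the end-effector converges to it.
dt, belt_speed = 0.01, 0.05
x_obj = np.array([0.0, 0.3])   # object on the belt
x_eef = np.array([0.5, 0.0])   # end-effector start
for _ in range(1000):
    v = reach_velocity(x_eef, x_obj, belt_speed)
    x_eef = x_eef + dt * v
    x_obj = x_obj + dt * np.array([belt_speed, 0.0])
```

With the feed-forward term the tracking error decays to zero; without it, the end-effector would lag the object by a constant offset of `belt_speed / gain`, which is why the belt state must enter the motion generator.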

Project: Master Project at EPFL
Period: 20.02.2017 - 23.06.2017
Section(s): EL IN MA ME MT MX
Type: 20% theory, 30% software, 50% testing
Knowledge(s): Control systems, Robotics, Matlab, C++
Subject(s): Collaborative robotics, Human-robot interaction
Responsible(s): Mahdi Khoramshahi
 
   

Master Projects in Industry

   
   

Detection of product purchases from shelves in unconstrained, uncalibrated, heavily cluttered environments

Our work requires us to track changes within shelves in retail environments, corresponding to shoppers picking up products and buying them or putting them back on the shelves. Our methodology involves analysing video from one or more cameras recording up to 16 h of store activity.
The task is complex: large numbers of shoppers move through and occlude the scene when approaching the shelves, viewing angles sometimes drift throughout the day, and there is no opportunity to calibrate the recording equipment when deploying in store.
The goal of the project is to build on the existing background subtraction system by integrating shopper occlusion and gradual camera movement into the detection process.
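As a point of departure, the core idea behind background subtraction with occlusion handling can be sketched in a few lines. This toy running-average model (plain NumPy, standing in for a full system such as OpenCV's MOG2 subtractor) freezes its update when an occlusion is flagged, so a shopper standing in front of the shelf does not get absorbed into the shelf model; the class name, thresholds, and occlusion flag are assumptions for the sketch.

```python
import numpy as np

class RunningBackground:
    """Exponential running-average background model. Pixels far from the
    model are flagged as foreground; the update is frozen under occlusion
    so the shelf model survives shoppers, while slow camera drift is
    absorbed by the low learning rate."""
    def __init__(self, alpha=0.05, thresh=30.0):
        self.alpha, self.thresh = alpha, thresh
        self.bg = None

    def apply(self, frame, occluded=False):
        frame = frame.astype(float)
        if self.bg is None:               # first frame initializes the model
            self.bg = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.bg) > self.thresh
        rate = 0.0 if occluded else self.alpha   # freeze under occlusion
        self.bg += rate * (frame - self.bg)
        return mask
```

The project's actual contribution would replace the binary `occluded` flag with a detected occlusion region and add compensation for gradual camera motion before the per-pixel comparison.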
Knowledge requirements: Very good knowledge of C++ and principles of machine learning/computer vision (knowledge of OpenCV is a welcome plus).
Interested candidates should send an email to aude.billard@epfl.ch with a copy of their CV and grades.

Project: Master Project in Industry
Period: 01.09.2016 - 01.08.2017
Section(s): EL IN MT
Type:
Knowledge(s): Machine Learning, Programming in C++
Subject(s): Machine Learning
Responsible(s): Basilio Norris
URL: Click here
   
   
   

Extraction of profiles of shopping and purchase patterns

In the scope of our shopper-behaviour studies, we are confronted with the task of segmenting shopper profiles according to the type of purchases they make (categories of products, departments visited, …).
The goal of this project is to investigate the structure of shopping profiles by clustering sparse, high-dimensional data from hundreds of “shopping carts”, and to extract the main purchasing profiles and product-category “proximities” according to shopping behaviour rather than relative location in store.
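The behavioural "proximity" between product categories mentioned above can be made concrete from the cart data alone: categories that often appear in the same cart are close. The cosine-normalized co-occurrence below is a minimal sketch of one such measure, assuming carts are encoded as binary category vectors; the function name and normalization are illustrative choices, and a full study would feed such a matrix into clustering or dimensionality reduction.

```python
import numpy as np

def product_proximity(carts):
    """Co-occurrence-based proximity between product categories.
    `carts` is an (n_carts, n_categories) binary matrix: entry [i, j]
    is 1 if cart i contains a product of category j."""
    carts = np.asarray(carts, dtype=float)
    co = carts.T @ carts                 # joint purchase counts
    counts = np.diag(co).copy()          # per-category purchase counts
    counts[counts == 0] = 1.0            # avoid division by zero
    return co / np.sqrt(np.outer(counts, counts))  # cosine-style normalization
```

The normalization keeps frequently bought categories from dominating: the diagonal is 1 and off-diagonal entries measure how often two categories co-occur relative to how often each is bought at all.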
Knowledge requirements: Good understanding of machine learning concepts and tools (clustering, dimensionality reduction / scaling, …). Good knowledge of at least one machine-learning-oriented programming language (e.g. Matlab, R, Python, C++).

Interested candidates should send an email to aude.billard@epfl.ch with a copy of their CV and grades.

Project: Master Project in Industry
Period: 01.09.2016 - 01.08.2017
Section(s): EL IN MT
Type:
Knowledge(s): Machine Learning, Programming in C++ / Python / Matlab / R
Subject(s): Machine Learning
Responsible(s): Basilio Norris
URL: Click here
   



Last update: 01/03/2012