Student Project Propositions

If you are interested in one of the projects below (Semester Projects, Master Projects at EPFL, or Master Projects in Industry), please contact the first person of reference indicated in each description, either by telephone, by email, or by visiting us directly at the LASA offices.


Semester Projects

   
   

Learning Geometric Properties of Objects based on Tactile Information

Taking advantage of the sense of touch, humans are able to grasp and manipulate objects with different physical properties with high compliance and dexterity. A robot's ability to reproduce such dexterity, however, depends largely on the object's physical and geometric properties, such as its shape and elasticity. It is therefore important to estimate these properties of target objects in order to plan and generate the desired robotic motion. Tactile sensors can provide direct and informative feedback to the robotic system from many simultaneous contact points during physical interaction. This project aims at exploring and learning the geometric properties of different objects using tactile sensors mounted on robotic hands. To achieve this goal, we will first estimate the shape of target objects based on measured tactile information, and then validate the developed approach in robotic exploration/manipulation tasks on a robotic platform (Allegro Hand equipped with a SynTouch biomimetic sensor and a Tekscan pressure sensor). A toy sketch of the shape-estimation step is given after the references.

References:
[1] Sommer N, Billard A. Multi-contact haptic exploration and grasping with tactile sensors. Robotics and Autonomous Systems. 2016;85:48-61.
[2] Kaboli M, Yao K, Feng D, Cheng G. Tactile-based active object discrimination and target object search in an unknown workspace. Autonomous Robots. 2019;43(1):123-52.
[3] Jamali N, Ciliberto C, Rosasco L, Natale L. Active perception: Building objects' models using tactile exploration. In: 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pp. 179-185.
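To make the first step concrete, here is a minimal Python sketch of one common way to estimate shape from sparse contacts, a Gaussian process implicit surface (GPIS). Everything in it is illustrative: the toy sphere contacts, the kernel length scale, and the inside/outside labeling are assumptions, not the project's prescribed method.

    # Minimal GPIS sketch: contacts on the surface are labeled 0, a point
    # inside the object -1, points pushed outward +1; the GP's zero level set
    # is the shape estimate and its variance guides where to touch next.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Toy data standing in for tactile contacts on a sphere of radius 0.1 m.
    rng = np.random.default_rng(0)
    contacts = rng.normal(size=(40, 3))
    contacts = 0.1 * contacts / np.linalg.norm(contacts, axis=1, keepdims=True)

    centroid = contacts.mean(axis=0)
    outside = centroid + 1.5 * (contacts - centroid)     # crude exterior points
    X = np.vstack([contacts, centroid[None, :], outside])
    y = np.hstack([np.zeros(len(contacts)), [-1.0], np.ones(len(outside))])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05), alpha=1e-4)
    gp.fit(X, y)

    # Query a random cloud: near-zero mean -> estimated surface; the highest
    # predictive variance marks the most informative next contact.
    query = rng.uniform(-0.15, 0.15, size=(2000, 3))
    mean, std = gp.predict(query, return_std=True)
    surface = query[np.abs(mean) < 0.02]
    print(len(surface), "surface candidates; next touch at", query[np.argmax(std)])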

Project: Semester Project
Period: 01.02.2020 - 01.08.2020
Section(s): EL IN MA ME
Type: 40% theory, 60% implementation
Knowledge(s): Programming skills (MATLAB/Python/C++); Robotics; ROS
Subject(s): Robotics, data processing, machine learning
Responsible(s): Farshad Khadivar, Kunpeng Yao
URL: Click here
   
   
   

Human-Robot Collision Simulation and Experimental Validation

In this semester project, we propose an experimental robotics application of control and analysis for assessing the safety of mobile robots around human pedestrians. Objective: developing an experimental setting for human-robot collision analysis. You will benefit from access to multiple mobile robots, learning to control them and implementing different controllers (velocity control, impedance control) for gathering collision data. Moreover, you will learn how to use a motion capture system and a set of sensors for assessing collisions between a robot and a human mannequin in multiple scenarios. The student is expected to have good experience in C++ or Python and solid knowledge of control; a good understanding of solid mechanics and deformation analysis would be a plus.
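On the data-gathering side, collision events must first be segmented out of the logged sensor streams. The snippet below is a minimal sketch, assuming a force signal sampled at 1 kHz and an arbitrary 5 N threshold; the real experiment would use the lab's sensors and motion-capture data instead of this synthetic signal.

    # Sketch: flag a collision in logged force data by thresholding the
    # high-pass-filtered force, so slow task loads are ignored.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 1000.0                               # assumed sampling rate [Hz]
    t = np.arange(0, 2.0, 1.0 / fs)
    force = 0.5 * np.sin(2 * np.pi * 0.5 * t)  # slow baseline load [N]
    force[1200:1220] += 25.0                   # simulated impact spike

    # High-pass filter removes quasi-static loads so only transients remain.
    b, a = butter(2, 20.0 / (fs / 2), btype="highpass")
    transient = filtfilt(b, a, force)

    threshold = 5.0                            # assumed impact threshold [N]
    hits = np.flatnonzero(np.abs(transient) > threshold)
    if hits.size:
        print(f"collision detected at t = {t[hits[0]]:.3f} s")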

Project: Semester Project
Period: 01.02.2020 - 01.08.2020
Section(s): EL IN MA ME MT PH
Type: 20% theory, 30% software, 50% implementation
Knowledge(s): C++, Python/Matlab, Robotics, Mechanics
Subject(s): Human-Robot Interaction, Mechanics
Responsible(s): Diego F. Paez Granados, David Julian Gonon
URL: Click here
   

Master Projects at EPFL

   
   

Learning Physical Properties of Deformable Objects based on Tactile Information

Taking advantage of the sense of touch, humans are able to grasp and manipulate objects with different physical properties with high compliance and dexterity. A robot's ability to reproduce such dexterity, however, depends largely on the object's physical properties, such as its shape and elasticity, especially when it comes to interaction with deformable objects. It is therefore important to estimate these physical properties of target objects in order to plan and generate the desired robotic motion. Tactile sensors can provide direct and informative feedback to the robotic system from many simultaneous contact points during physical interaction. This thesis aims at exploring and learning the physical properties (shape and stiffness) of deformable objects using tactile sensors mounted on robots. To achieve this goal, we will (1) estimate the shape and stiffness of target objects based on measured tactile information, (2) construct probabilistic models of the physical properties of deformable objects, and (3) validate the developed approach in robotic exploration/manipulation tasks on a robotic platform (Allegro Hand equipped with a SynTouch biomimetic sensor and a Tekscan pressure sensor). A toy sketch of the stiffness-estimation step is given after the references.

References:
[1] Sommer N, Billard A. Multi-contact haptic exploration and grasping with tactile sensors. Robotics and Autonomous Systems. 2016;85:48-61.
[2] Kaboli M, Yao K, Feng D, Cheng G. Tactile-based active object discrimination and target object search in an unknown workspace. Autonomous Robots. 2019;43(1):123-52.
[3] Jamali N, Ciliberto C, Rosasco L, Natale L. Active perception: Building objects' models using tactile exploration. In: 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pp. 179-185.
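As a toy illustration of step (1) for stiffness: if the fingertip's indentation depth and normal force are both measured, a local stiffness estimate falls out of a least-squares fit of Hooke's law. The synthetic data, the linear contact model, and the residual-based confidence below are all simplifying assumptions.

    # Sketch: estimate local stiffness from force/indentation pairs gathered
    # while a finger presses into a deformable object.
    import numpy as np

    rng = np.random.default_rng(0)
    depth = np.linspace(0.0, 0.01, 25)         # indentation depth [m]
    true_k = 800.0                             # N/m, unknown in practice
    force = true_k * depth + rng.normal(0, 0.05, depth.shape)  # measured [N]

    # Least-squares fit of F = k * d gives a point estimate of stiffness; the
    # residual spread gives a crude confidence that could seed a probabilistic
    # model over the object's surface.
    k_hat, _, _, _ = np.linalg.lstsq(depth[:, None], force, rcond=None)
    residual_std = np.std(force - depth * k_hat[0])
    print(f"estimated stiffness: {k_hat[0]:.1f} N/m "
          f"(residual std {residual_std:.3f} N)")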

Project: Master Project at EPFL
Period: 01.02.2020 - 01.10.2020
Section(s): EL IN MA ME MT PH
Type: 40% theory, 60% implementation
Knowledge(s): Programming skills (MATLAB/Python/C++); Robotics; ROS
Subject(s): Robotics, data processing, machine learning
Responsible(s): Kunpeng Yao, Farshad Khadivar
URL: Click here
   
   
   

Human Modeling for Human-Robot Interaction

OpenSim-based model for human-robot interaction. Objective: developing a human model with real-time usability for analyzing the human partner's state in a collaborative setting with a robot. In this project, the student will develop a simulation tool that connects accurate biomechanical simulation of the human body based on the OpenSim software [1] with real-time motion capture data, in order to estimate the workload of multiple muscle groups and ultimately derive a multidimensional measure of the ergonomics of a task [2]. First step: developing a software framework in which the human model can be rapidly updated for different human users, i.e., a framework that adapts to changes in the user (e.g., amputees, limited limb motion, different inertial properties). Second step: developing a model of ergonomics that goes beyond current discrete models such as REBA [3], towards a continuous cost function of the multidimensional task cost in terms of the estimated load on the human user; a sketch of such a cost appears after the references. From there, the student can focus on the theoretical aspects (validity of the model in comparison with previous methods) or the practical aspects (generalization and implementation in real scenarios using existing controllers at LASA) of the project.

References:
[1] https://opensim.stanford.edu/
[2] https://www.inesc-id.pt/publications/13850/pdf/
[3] S. Hignett and L. McAtamney, "Rapid entire body assessment (REBA)," Applied Ergonomics, vol. 31, no. 2, pp. 201-205, 2000.
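A minimal sketch of what a continuous replacement for REBA's discrete bins could look like: a smooth penalty on deviation from a neutral posture plus an average muscle-load term. The joint names, neutral angles, and weights are invented placeholders; the real cost would be identified from the OpenSim analysis.

    # Hypothetical continuous ergonomic cost: smooth in the joint angles, so a
    # controller can optimize it, unlike discrete REBA scores.
    import numpy as np

    NEUTRAL = {"trunk_flexion": 0.0, "shoulder_elev": 0.2, "elbow_flexion": 1.4}  # rad
    WEIGHTS = {"trunk_flexion": 3.0, "shoulder_elev": 2.0, "elbow_flexion": 1.0}

    def ergonomic_cost(joint_angles, muscle_loads=None):
        """Weighted squared deviation from neutral posture, optionally adding
        the mean normalized muscle activation (e.g., streamed from OpenSim)."""
        posture = sum(WEIGHTS[j] * (joint_angles[j] - NEUTRAL[j]) ** 2
                      for j in NEUTRAL)
        load = np.mean(list(muscle_loads.values())) if muscle_loads else 0.0
        return posture + load

    print(ergonomic_cost(
        {"trunk_flexion": 0.5, "shoulder_elev": 1.0, "elbow_flexion": 1.2},
        {"deltoid": 0.4, "biceps": 0.2}))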

Project: Master Project at EPFL
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 30% theory, 50% software, 20% implementation
Knowledge(s): C++, Python/Matlab, Robotics
Subject(s): Physical Human-Robot Interaction, Biomechanics
Responsible(s): Diego F. Paez Granados, Baptiste Busch
 
   
   
   

Finding creative solutions to hard-to-solve robotic problems

Most interesting real-world robotic tasks do not have a well-defined continuous reward function; for example, imagine a robot that has to autonomously clean up a room: it is hardly conceivable (if not impossible) to come up with a reward function that is continuous and does not impose any bias on the solution. Recently, Uber AI Labs published a new algorithm, called Go-Explore [1][2], that can effectively explore very large spaces with little domain knowledge and few assumptions, but they do not provide any results on a robotics task or real-world application. The Go-Explore algorithm consists of two stages: (1) learning how to solve a task in simulation in the raw action space, and (2) using learning from demonstrations (LfD) to find a robust/reactive controller to apply to the actual (possibly stochastic) system. The goal of this project is three-fold: (1) define (or learn) an appropriate state representation to feed the Go-Explore algorithm (a good starting point is to learn a state representation using a combination of the robotic-priors loss functions by Jonschkowski and Brock [3], but many more can be exploited [4]); (2) investigate intelligent exploration policies instead of randomly sampled actions (one way could be to define primitive policies based on dynamical systems that perform a few parameterized behaviors, e.g., point-reaching); and (3) investigate how to insert dynamical-systems-based LfD approaches in the second stage of the algorithm. The algorithm will be evaluated in one of the following tasks: (1) USB key insertion, (2) an irregularly-shaped peg-in-hole task that involves dual-arm manipulation, or (3) a robot that needs to clean up a desk and put each object in a specific basket.
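For orientation, here is a heavily simplified sketch of Go-Explore's first stage (archive, return, explore), written against the classic OpenAI Gym API with MountainCar as a stand-in task. The grid-cell discretization and uniform cell selection are deliberate simplifications of [1][2], and exactly what goals (1) and (2) of this project would replace with a learned representation and smarter exploration policies.

    # Simplified Go-Explore exploration phase: keep an archive of visited
    # "cells", return to an archived cell by replaying its action sequence in a
    # deterministic simulator, then explore onward with random actions.
    import random
    import numpy as np
    import gym  # assumed installed; classic (pre-0.26) step/reset API

    env = gym.make("MountainCar-v0")

    def deterministic_reset(env):
        env.seed(0)              # Go-Explore's "return" step needs determinism
        return env.reset()

    def cell_of(obs):
        # Placeholder discretization; the project would learn this mapping.
        return tuple(np.round(np.asarray(obs) * 10).astype(int))

    obs = deterministic_reset(env)
    archive = {cell_of(obs): {"actions": [], "score": 0.0}}

    for _ in range(500):
        cell = random.choice(list(archive))       # uniform; [1] weights cells
        obs = deterministic_reset(env)
        score = 0.0
        for a in archive[cell]["actions"]:        # return to the chosen cell
            obs, r, done, _ = env.step(a)
            score += r
        actions = list(archive[cell]["actions"])
        for _ in range(30):                       # explore from the frontier
            a = env.action_space.sample()
            obs, r, done, _ = env.step(a)
            actions.append(a)
            score += r
            c = cell_of(obs)
            if c not in archive or score > archive[c]["score"]:
                archive[c] = {"actions": list(actions), "score": score}
            if done:
                break

    print(len(archive), "cells in archive")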

References:

  • [1]: https://eng.uber.com/go-explore/
  • [2]: https://arxiv.org/abs/1901.10995
  • [3]: Jonschkowski, R. and Brock, O., 2015. Learning state representations with robotic priors. Autonomous Robots, 39(3), pp.407-428.
  • [4]: Lesort, T., Díaz-Rodríguez, N., Goudou, J.F. and Filliat, D., 2018. State representation learning for control: An overview. Neural Networks.

Project: Master Project at EPFL
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 60% theory, 40% implementation
Knowledge(s): Python, C++
Subject(s): Reinforcement Learning, Robot Control, Machine Learning
Responsible(s): Konstantinos Chatzilygeroudis
 
   

Master Projects in Industry

   
   

Integrating tactile and visual information for faster inference of object geometry

This project aims to explore a new approach to integrating visual and contact sensing to perceive objects better and faster. Information about the shape, size, and pose of objects in the environment is crucial for many basic robotic scenarios, such as reaching, grasping, and manipulation tasks. In laboratory experiments, a set of several (10-20) cameras is often employed to detect objects, which would not be feasible if robots were to operate outside the laboratory. Computer vision algorithms, such as three-dimensional reconstruction from multiple cameras, cannot provide the speed and accuracy required for grasping and manipulating objects. Furthermore, the object is occluded by the robotic hand while grasped, which hinders image-processing algorithms. On the other hand, a tactile exploration algorithm developed in our lab [1] can detect objects even while they are grasped, but object exploration that is not guided by vision is a time-consuming process. Human beings often rely on a combination of visual and tactile information while grasping objects or performing other day-to-day activities. Taking a cue from this human expertise, we will develop a similar strategy for robots to efficiently combine the data from the two sensors. The robotic setup consists of a KUKA arm with an Allegro hand mounted on it. The phalanges of the hand are covered with Tekscan pressure sensors to obtain information about contact with the object. For visual information, an Intel RealSense depth camera is employed, which yields a 3D point cloud of the objects in its field of view. As a pilot study, we used the two together to reconstruct a model of a toy rooster. In this project, we will try to reconstruct something simpler, such as a big bowl, which can be grasped in different ways. The project will start with the preliminary goals of estimating the pose, shape, and size of objects. Depending on time and progress, more interesting tasks such as detecting feasible grasp points on the object will be tackled. Once implemented, the output of the project will be tested for grasping objects using a robot controlled by EMG sensors (in collaboration with the TNE lab). The project offers a lot of scope for creative ideas. Interested students should have adequate programming skills (Python, C++, MATLAB, ROS) and a background in machine learning and computer vision. The project should be carried out in the spring semester, but the dates are flexible.
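As a rough sketch of the fusion idea, the code below merges a synthetic camera point cloud with a handful of synthetic tactile contacts, up-weights the tactile points where vision is occluded, and reads a coarse pose and size off the principal axes. Frame registration, the weighting factor, and the data are all assumptions for illustration.

    # Toy visual-tactile fusion: weight the sparse-but-trusted tactile contacts
    # more heavily, then estimate pose (centroid + principal axes) and size.
    import numpy as np

    rng = np.random.default_rng(0)
    camera_points = rng.normal(size=(500, 3)) * [0.06, 0.04, 0.02]  # visible side
    tactile_points = rng.normal(size=(20, 3)) * [0.06, 0.04, 0.02]  # occluded side

    # Both sets are assumed already expressed in the same frame (solved by the
    # arm/hand kinematics); tactile points are replicated as a crude weighting.
    points = np.vstack([camera_points, np.repeat(tactile_points, 5, axis=0)])

    centroid = points.mean(axis=0)
    _, _, axes = np.linalg.svd(points - centroid, full_matrices=False)
    proj = (points - centroid) @ axes.T
    size = proj.max(axis=0) - proj.min(axis=0)

    print("pose (centroid):", centroid)
    print("principal axes:\n", axes)
    print("approx. size along the axes:", size)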

Project: Master Project in Industry
Period: 01.02.2020 - 01.08.2020
Section(s): EL ME MT
Type: 30% theory, 30% software, 30% implementation
Knowledge(s):
Subject(s): Programming, machine learning, computer graphics
Responsible(s): Saurav Aryan
URL: Click here
   
   
   

Neuroprosthetic control II, incremental learning

An emerging application of robotic control is neuroprosthetics, where robotics can offer solutions for improving the rehabilitation of individuals with motor disabilities. One specific application is the control of a hand prosthesis. Hand neuroprostheses help individuals with an upper-limb amputation restore some of their lost ability to grasp [1]. To do so, the neuroprosthesis decodes the motion intention of the user, usually from the muscle activity of the residual limb [2]. This intention is then converted into proper motor commands for activating the hand prosthesis [1,2]. All of these processes are handled by a microcontroller placed inside the device. Due to constraints on the available space inside the device as well as on its power supply, the memory capacity and computational speed may be limited. It is hence crucial for the machine learning algorithm responsible for decoding the motion intention to exploit the limited resources of the microcontroller efficiently.

Another important aspect of performance in neuroprosthetic control is the accuracy of the decoder after a period of usage. Due to the low signal-to-noise ratio of the muscle signals and small shifts in the location of the sensors, the accuracy of the decoder decreases over time. This leads to frequent retraining of the decoder, a time-consuming procedure. This project will attempt to address the issue with an alternative approach: incremental learning [3]. In this way, the algorithm can adapt to changes in the input signals and avoid a full retraining.

The student will design an experimental framework for recording data and testing the developed approach in conditions as close as possible to real use. Furthermore, the student will develop a machine learning approach based on incremental learning that is able to run on a microcontroller with constrained memory and computational resources; a sketch of the adaptation step appears below. The project will be done in collaboration with CSEM (the Swiss Center for Electronics and Microtechnology [4]). The project is set for the spring semester of 2020.
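A minimal sketch of the adaptation step, using scikit-learn's partial_fit as a stand-in for a microcontroller-friendly incremental learner. The feature dimensionality, the three gesture classes, and the synthetic electrode-shift drift are invented for illustration.

    # Incremental decoder update: small labeled batches adapt the model to
    # drifting EMG statistics without a full retrain.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    N_FEATURES = 16                      # e.g., one RMS feature per EMG channel
    CLASSES = np.array([0, 1, 2])        # e.g., pinch, power, open
    rng = np.random.default_rng(0)

    def batch(n_per_class, drift=0.0):
        y = np.repeat(CLASSES, n_per_class)
        X = rng.normal(size=(len(y), N_FEATURES)) + 0.5 * y[:, None] + drift
        return X, y

    decoder = SGDClassifier()            # linear model: cheap on an MCU
    X0, y0 = batch(100)                  # initial calibration session
    decoder.partial_fit(X0, y0, classes=CLASSES)

    for session in range(1, 6):          # electrode shift drifts the signal
        Xs, ys = batch(10, drift=0.05 * session)
        print(f"session {session}: pre-update accuracy "
              f"{decoder.score(Xs, ys):.2f}")
        decoder.partial_fit(Xs, ys)      # in-place update, no full retrain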

References:
[1] Zhuang et al. 2019, “Shared human–robot proportional control of a dexterous myoelectric prosthesis”, Nature Machine Intelligence, 9 (1).
[2] Batzianoulis et al. 2018, “Decoding the Grasping Intention from Electromyography during Reaching Motions”, Journal of NeuroEngineering and Rehabilitation, 15 (57).
[3] Chefrour 2019, “Incremental supervised learning: algorithms and applications in pattern recognition”, Evolutionary Intelligence, 12(2)
[4] CSEM SA: https://www.csem.ch
[5] Neuroprosthetic control demonstrations: link 1, link 2, link 3

Project: Master Project in Industry
Period: 17.12.2019 - 31.03.2020
Section(s): EL IN MA ME MT MX
Type: 40% theory, 40% implementation, 20% testing
Knowledge(s): Machine learning, C++, Python, Robotics
Subject(s): Machine learning, Prosthesis Control, Microcontroller
Responsible(s): Iason Batzianoulis, Aude Billard, Marc Pons
 
   
   
   

Neuroprosthetic control I, on-chip user intent decoder

An emerging application of robotic control is neuroprosthetics, where robotics can offer solutions for improving the rehabilitation of individuals with motor disabilities. One specific application is the control of a hand prosthesis. Hand neuroprostheses help individuals with an upper-limb amputation restore some of their lost ability to grasp [1]. To do so, the neuroprosthesis decodes the motion intention of the user, usually from the muscle activity of the residual limb [2]. This intention is then converted into proper motor commands for activating the hand prosthesis [1,2]. All of these processes are handled by a microcontroller placed inside the device. Due to constraints on the available space inside the device as well as on its power supply, the memory capacity and computational speed may be limited. It is hence crucial for the machine learning algorithm responsible for decoding the motion intention to manage the available resources of the microcontroller efficiently.

In the framework of the European H2020 DeTOP project [3], CSEM (the Swiss Center for Electronics and Microtechnology [4]) has designed an SoC specially tailored for controlling myoelectric prostheses [5]. The SoC has been demonstrated [6] using the BioPatRec [7] open-source software running on a computer.

The student will implement three machine learning algorithms on CSEM's SoC for identifying three hand gestures: pinch grasp, power grasp, and open hand. The three algorithms will be an Echo State Network (a form of recurrent neural network), a Support Vector Machine, and Linear Discriminant Analysis. The student will also perform a systematic comparison of the performance, computational footprint, and memory requirements of the algorithms when controlling a robotic hand in real time; the sketch below illustrates such a comparison offline. The project will be done in collaboration with CSEM, using the icyflex2 32-bit low-power processor [8] included in CSEM's SoC. The project is set for the spring semester of 2020.
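The snippet below sketches the offline part of such a comparison for two of the three algorithms (LDA and a linear SVM) on synthetic EMG-like features, reporting accuracy, a parameter count as a memory proxy, and per-sample inference time. Everything here is an assumption for illustration; the real benchmark would run the fixed-point ports on the icyflex2 core itself.

    # Offline comparison of two candidate decoders: accuracy, parameter count
    # (a rough proxy for on-chip memory), and inference time per sample.
    import time
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    y = np.repeat([0, 1, 2], 200)                 # pinch / power / open
    X = rng.normal(size=(600, 20)) + 0.7 * y[:, None]
    perm = rng.permutation(len(y))                # shuffle before splitting
    X, y = X[perm], y[perm]
    X_tr, X_te, y_tr, y_te = X[:450], X[450:], y[:450], y[450:]

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", LinearSVC())]:
        clf.fit(X_tr, y_tr)
        n_params = clf.coef_.size + clf.intercept_.size
        t0 = time.perf_counter()
        acc = clf.score(X_te, y_te)
        per_sample = (time.perf_counter() - t0) / len(X_te)
        print(f"{name}: accuracy={acc:.2f}, params={n_params}, "
              f"{per_sample * 1e6:.1f} us/sample")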

References:
[1] Zhuang et al. 2019, “Shared human–robot proportional control of a dexterous myoelectric prosthesis”, Nature Machine Intelligence, 9 (1).
[2] Batzianoulis et al. 2018, “Decoding the Grasping Intention from Electromyography during Reaching Motions”, Journal of NeuroEngineering and Rehabilitation, 15 (57).
[3] H2020 DeTOP project: http://www.detop-project.eu/
[4] CSEM SA: https://www.csem.ch
[5] M. Pons et al., “A 20 Channel EMG SoC with an Integrated 32b RISC Core for Real-Time Wireless Prosthetic Control,” IEEE ESSCIRC 2019.
[6] Neuroprosthetic control demonstrations: link 1, link 2, link 3
[7] BioPatRec
[8] J.-L. Nagel et al., “The icyflex2 processor architecture,” CSEM Scientific and Technical Report 2009.

Project: Master Project in Industry
Period: 17.12.2019 - 31.03.2020
Section(s): EL IN MA ME MT MX
Type: 60% implementation, 20% theory, 20% testing
Knowledge(s): Machine learning, C++, Robotics
Subject(s): Machine learning, Prosthesis Control
Responsible(s): Iason Batzianoulis, Aude Billard, Marc Pons
 
   
   
   

Robotic soft manipulation for grasping applications using RGB-D point clouds

Perceiving and grasping soft and deformable objects is challenging, yet essential for many robotic applications. In this project, we intend to prototype fast and reliable algorithms for perceiving and grasping such objects. More specifically, for the perception part we use an Intel RealSense camera, which provides color and depth information. For the grasping part, we employ the Allegro robotic hand (mounted on a KUKA IIWA robotic arm), which we control in impedance mode, allowing us to regulate the interaction forces with the objects; a sketch of such an impedance law appears below. This project is funded by Samsung but will be carried out in the LASA laboratory. The student is expected to spend 20 hours per week and will receive the usual stipend for a master project in industry. Please do not hesitate to contact us for more details.
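For reference, a minimal sketch of the Cartesian impedance law mentioned above: a pose error and velocity are mapped through assumed stiffness/damping gains and the manipulator Jacobian to joint torques. The gains and the random Jacobian are placeholders; on the real platform these come from the KUKA/Allegro drivers.

    # Cartesian impedance sketch: tau = J^T (K (x_des - x) - D v).
    import numpy as np

    K = np.diag([300.0, 300.0, 300.0])   # translational stiffness [N/m], assumed
    D = np.diag([25.0, 25.0, 25.0])      # damping [N*s/m], assumed

    def impedance_torques(J, x, x_des, v):
        """Compliant attraction toward x_des; forces scale with the pose error,
        so contact with a soft object yields bounded interaction forces."""
        wrench = K @ (x_des - x) - D @ v
        return J.T @ wrench

    J = np.random.rand(3, 7)             # placeholder 3x7 positional Jacobian
    tau = impedance_torques(J, x=np.zeros(3),
                            x_des=np.array([0.10, 0.00, 0.20]), v=np.zeros(3))
    print("joint torque command:", tau)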

Project: Master Project in Industry
Period: 01.10.2019 - 01.03.2020
Section(s): EL IN MA ME MT MX PH
Type: 30% theory, 50% implementation, 20% experimentation
Knowledge(s): C++, Python, basic knowledge of robotics
Subject(s): manipulation, grasping
Responsible(s): Mahdi Khoramshahi
 
   


