EPFL > STI > IMT > LASA > Teaching > Student Projects
Student Project Proposals for Autumn 2014

If you are interested in one of the projects below, either a Semester Project or a Master Project, please contact the first person of reference indicated in its description by telephone or email, or by visiting us directly at the LASA offices. If you are looking for a project for Spring 2015, please click here.


Semester Projects

Identification of modelling parameters for the KUKA LBR IIWA robots

Identifying the kinematic and dynamic parameters of a robot is one of the critical ingredients for developing high-quality control algorithms. In our lab, we own a few KUKA LBR IIWA robots. While we have very accurate kinematic parameters, our dynamic parameters need to be improved. This project aims at developing the code to control the robots, collecting the appropriate data, and experimenting with several techniques for identifying the dynamic parameters of our robots. This will allow us to control the robots with greater accuracy and safety.
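
Rigid-body dynamics are linear in the dynamic parameters, so identification typically reduces to linear least squares on a regressor matrix built from recorded joint positions, velocities, accelerations, and torques. A minimal single-joint sketch of the idea (synthetic data, hypothetical parameter values, plain NumPy rather than the lab's control code):

```python
import numpy as np

# Ground-truth dynamic parameters of a 1-DOF pendulum (hypothetical values):
# theta = [inertia m*l^2, viscous friction b, gravity term m*g*l].
theta_true = np.array([0.05, 0.1, 1.2])

rng = np.random.default_rng(0)
n = 200
q = rng.uniform(-np.pi, np.pi, n)   # joint positions
qd = rng.uniform(-2.0, 2.0, n)      # joint velocities
qdd = rng.uniform(-5.0, 5.0, n)     # joint accelerations

# The torque is linear in the parameters: tau = Y(q, qd, qdd) @ theta,
# with regressor Y = [qdd, qd, sin(q)] for this pendulum model.
Y = np.column_stack([qdd, qd, np.sin(q)])
tau = Y @ theta_true + 0.01 * rng.standard_normal(n)  # noisy torque readings

# Least-squares identification of the dynamic parameters.
theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
```

On a real arm the same structure applies with the standard multi-joint rigid-body regressor, and the excitation trajectories must be designed so that the regressor matrix is well-conditioned.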

Project: Semester Project
Period: 01.09.2019 - 01.02.2020
Section(s): EL IN MA ME MT PH
Type: 70% software, 30% theory
Knowledge(s): C++, Linear Algebra, Basics of robot modelling
Subject(s): Model Identification, Robot Control, Machine Learning
Responsible(s): Konstantinos Chatzilygeroudis

Adapting Human Kinematic Models from Interaction Data (assigned)

Objective: developing a framework for data-driven construction of human kinematic models for human-robot cooperative work. In this project, the student will develop a simulation tool that connects to accurate biomechanical simulation of the human body based on the OpenSim software [1]. The work starts with developing a tool that takes motion-capture data from human users (labelled markers) and derives a plausible kinematic chain (through model learning) that could serve as the basis of a whole human model. That is, the aim is to learn models of human kinematics flexible enough to cover amputees, people with limited limb motion, or simplified models based on the performed motions [2].

References:

  • [1]: https://opensim.stanford.edu/
  • [2]: Zakotnik, J., Matheson, T., & Dürr, V. (2004). A posture optimization algorithm for model-based motion capture of movement sequences. Journal of Neuroscience Methods, 135(1–2), 43–54. https://doi.org/10.1016/j.jneumeth.2003.11.013
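
One simple way to bootstrap a kinematic chain from labelled markers is to test which marker pairs keep a near-constant distance across frames: a near-zero variance suggests a rigid link, and the mean distance then estimates its length. A minimal sketch on synthetic planar data (marker names and link lengths are hypothetical placeholders, not OpenSim output):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300  # number of motion-capture frames

# Synthetic 2-link planar arm standing in for labelled mocap markers.
l1, l2 = 0.30, 0.25
q1 = rng.uniform(0.0, np.pi, T)
q2 = rng.uniform(0.5, 2.5, T)
shoulder = np.zeros((T, 2))
elbow = np.column_stack([l1 * np.cos(q1), l1 * np.sin(q1)])
wrist = elbow + np.column_stack([l2 * np.cos(q1 + q2), l2 * np.sin(q1 + q2)])

def link_stats(a, b):
    """Mean and std of the inter-marker distance across frames. A near-zero
    std suggests a rigid link; the mean then estimates its length."""
    d = np.linalg.norm(a - b, axis=1)
    return d.mean(), d.std()

mean_se, std_se = link_stats(shoulder, elbow)  # true link: constant distance
mean_ew, std_ew = link_stats(elbow, wrist)     # true link as well
mean_sw, std_sw = link_stats(shoulder, wrist)  # no direct link: distance varies
```

Real marker data is noisy, so the rigid/non-rigid decision becomes a threshold on the distance variance rather than an exact test, and the surviving pairs define the candidate chain to fit.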

Project: Semester Project
Period: 01.09.2019 - 01.04.2020
Section(s): EL IN MA ME MT PH
Type: 40% theory, 30% software, 30% implementation
Knowledge(s): C++, ML, Robotics
Subject(s): Machine Learning, Human-Robot Interaction
Responsible(s): Diego F. Paez Granados, Baptiste Busch

Learning human postures for online human-robot dynamics estimation (assigned)

Understanding human posture in different human-robot interaction settings is crucial for achieving proper control and satisfactory interactions for human users. In this project, we would like to develop a simulation of the human-robot interaction for the robot platform Qolo, a standing mobility device for people with spinal cord injury [1]. The main goal is to learn, from motion-capture data, estimates of the center-of-mass (COM) location using inertial models of a simplified human kinematic chain, and, in parallel, to use less accurate posture information (embedded pressure sensors) to achieve a similar COM estimation model. In other words, the aim is an estimation algorithm of the human body motion during interaction, and thereby of the COM of the human-robot dyad. Further work can focus on a whole dynamic model of the platform for real-time stability and motion-tracking use.

References:

  • [1]: Qolo: https://diegofpaez.wordpress.com/portfolio/crowdbot-qolo/
  • [2]: https://ieeexplore.ieee.org/document/8594199
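
The quantity the learned estimator would be trained to reproduce is the mass-weighted average of the segment COMs of a kinematic chain. A minimal planar sketch of that ground-truth computation (segment masses, lengths, and COM ratios are hypothetical placeholders, not validated anthropometric data):

```python
import numpy as np

# Two-segment planar chain: each segment's COM sits partway along its length.
segments = [
    {"mass": 40.0, "length": 0.8, "com_ratio": 0.5},   # trunk-like segment
    {"mass": 10.0, "length": 0.4, "com_ratio": 0.45},  # thigh-like segment
]

def chain_com(joint_angles, segments):
    """Whole-chain COM: mass-weighted average of the segment COMs."""
    origin = np.zeros(2)   # current joint position
    angle = 0.0            # accumulated absolute segment angle
    total_mass = 0.0
    weighted = np.zeros(2)
    for seg, q in zip(segments, joint_angles):
        angle += q
        direction = np.array([np.cos(angle), np.sin(angle)])
        seg_com = origin + seg["com_ratio"] * seg["length"] * direction
        weighted += seg["mass"] * seg_com
        total_mass += seg["mass"]
        origin = origin + seg["length"] * direction  # move to the next joint
    return weighted / total_mass

com = chain_com([np.pi / 2, 0.0], segments)  # both segments pointing straight up
```

Pairs of (pressure-sensor readings, COM computed this way from mocap) would then form the training set for the less-accurate-sensor estimation model.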

Project: Semester Project
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 30% theory, 40% software, 30% implementation
Knowledge(s): C++, Matlab/Python, Robotics
Subject(s): Robotics, Machine Learning
Responsible(s): Diego F. Paez Granados

Master Projects

Human Modeling for Human-Robot Interaction

An OpenSim-based model for human-robot interaction. Objective: developing a human model usable in real time for analyzing the human partner's state in a collaborative setting with a robot. In this project, the student will develop a simulation tool that connects accurate biomechanical simulation of the human body based on the OpenSim software [1] with real-time motion-capture data, in order to estimate the workload of multiple muscle groups and ultimately build a multidimensional measure of the ergonomics of a task [2].

First step: developing a software framework in which a human model can be rapidly updated for different human users, i.e., a framework that adapts to changes of the user (e.g., amputees, limited limb motion, different inertial properties).

Second step: developing a model of ergonomics that goes beyond current discrete models such as REBA [3], achieving a continuous cost function of the multidimensional task cost in terms of the estimated load on the human user.

For further work, the student can focus on the theoretical aspects of the project (validity of the model in comparison with previous methods) or the practical ones (generalization and implementation in real scenarios using existing controllers at LASA).

References:

  • [1]: https://opensim.stanford.edu/
  • [2]: https://www.inesc-id.pt/publications/13850/pdf/
  • [3]: S. Hignett and L. McAtamney, "Rapid entire body assessment (REBA)," Applied Ergonomics, vol. 31, no. 2, pp. 201–205, 2000.
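
As a toy illustration of what a continuous ergonomics model could look like, a smooth posture cost can replace a discrete scoring table. This is only a minimal sketch: the neutral posture, comfort ranges, and weights below are hypothetical placeholders, not values from REBA or OpenSim:

```python
import numpy as np

# Neutral posture, per-joint comfort ranges, and weights (hypothetical values,
# in radians; e.g. the first joint could be trunk flexion, weighted highest).
q_neutral = np.array([0.0, 0.2, 0.0])
q_range = np.array([0.8, 1.0, 0.6])
weights = np.array([2.0, 1.0, 1.5])

def ergonomic_cost(q):
    """Smooth posture cost: zero at the neutral posture, growing quadratically
    with normalized joint deviation. Unlike a discrete scoring table, this is
    differentiable and can be minimized by a continuous optimizer."""
    z = (q - q_neutral) / q_range
    return float(weights @ z ** 2)

cost = ergonomic_cost(np.array([0.8, 0.2, 0.0]))  # one joint at its range limit
```

Because the cost is smooth in the joint angles, it can be plugged directly into a trajectory optimizer or a robot controller as a penalty term, which an integer table score cannot.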

Project: Master Project at EPFL
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 30% theory, 50% software, 20% implementation
Knowledge(s): C++, Python/Matlab, Robotics
Subject(s): Physical Human-Robot Interaction, Biomechanics
Responsible(s): Diego F. Paez Granados, Baptiste Busch

Finding creative solutions to hard-to-solve robotic problems

Most interesting real-world robotic tasks do not have a well-defined continuous reward function; for example, imagine a robot that has to autonomously clean up a room: it is hardly conceivable (if not impossible) to come up with a reward function that is continuous and/or does not impose any bias on the solution. Recently, Uber AI Labs published a new algorithm, called Go-Explore [1][2], that can effectively explore very large spaces with little domain knowledge and few assumptions, but they do not provide any results on a robotics task or real-world application. The Go-Explore algorithm consists of two stages: (1) learning how to solve the task in simulation in the raw action space, and (2) using learning from demonstrations (LfD) to obtain a robust/reactive controller that can be applied to the actual (possibly stochastic) system. The goal of this project is three-fold: (1) define (or learn) an appropriate state representation to feed to the Go-Explore algorithm (a good starting point is to learn a state representation using a combination of some of the robotic-priors loss functions by Jonschkowski and Brock [3], but many more can be exploited [4]); (2) investigate intelligent exploration policies instead of randomly sampled actions (one way could be to define primitive policies based on dynamical systems that perform a few parameterized behaviors, e.g., point-reaching); and (3) investigate how to insert dynamical-system-based LfD approaches in the second stage of the algorithm. The algorithm will be evaluated on one of the following tasks: (1) USB key insertion, (2) an irregularly-shaped peg-in-the-hole task that involves dual-arm manipulation, or (3) a robot that needs to clean up a desk and put each object in a specific basket.

References:

  • [1]: https://eng.uber.com/go-explore/
  • [2]: https://arxiv.org/abs/1901.10995
  • [3]: Jonschkowski, R. and Brock, O., 2015. Learning state representations with robotic priors. Autonomous Robots, 39(3), pp.407-428.
  • [4]: Lesort, T., Díaz-Rodríguez, N., Goudou, J.F. and Filliat, D., 2018. State representation learning for control: An overview. Neural Networks.
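
The first (exploration) stage of Go-Explore rests on a simple mechanism: discretize states into cells, archive the shortest known action sequence to each cell, and repeatedly return to an archived cell and explore onward from it. A toy 1-D sketch of that mechanism (the environment and cell mapping are hypothetical stand-ins for a deterministic robot simulator):

```python
import random

# Toy deterministic environment standing in for a robot simulator:
# states are integers in [0, 20], the agent starts at 0, actions move it +/-1.
def step(state, action):
    return max(0, min(20, state + action))

def cell_of(state):
    # Go-Explore maps states to coarse "cells"; here, plain downsampling.
    return state // 2

random.seed(0)
archive = {cell_of(0): []}  # cell -> shortest known action sequence reaching it

for _ in range(500):
    # 1. Select an archived cell and return to it by replaying its actions
    #    (valid because the environment is deterministic).
    cell = random.choice(list(archive))
    actions = list(archive[cell])
    state = 0
    for a in actions:
        state = step(state, a)
    # 2. Explore from there with random actions, archiving new/shorter cells.
    for _ in range(5):
        a = random.choice([-1, 1])
        state = step(state, a)
        actions.append(a)
        c = cell_of(state)
        if c not in archive or len(actions) < len(archive[c]):
            archive[c] = list(actions)
```

In terms of the project goals above, goal (1) amounts to replacing `cell_of` with a learned state representation, and goal (2) replaces the random action choice with dynamical-system primitives.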

Project: Master Project at EPFL
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 60% theory, 40% implementation
Knowledge(s): Python, C++
Subject(s): Reinforcement Learning, Robot Control, Machine Learning
Responsible(s): Konstantinos Chatzilygeroudis

Fast adaptation via policy search for high-dimensional robots

The most data-efficient algorithms for reinforcement learning in robotics are model-based policy search algorithms, which alternate between learning a dynamical model of the robot and optimizing a policy to maximize the expected return given the model and its uncertainties [1]. Nevertheless, most model-based policy search approaches do not scale to high-dimensional state/action spaces. Recently, a new model-learning procedure was introduced [2] that leverages parameterized black-box priors of the dynamics (e.g., simulators) and Gaussian processes to (1) scale up to high-dimensional systems, and (2) be robust to large inaccuracies of the prior information, in order to adapt to unforeseen situations (e.g., damage). One of the major drawbacks of this approach is its large computational cost. The goal of this project is to implement methods for speeding up the learning procedure while keeping the adaptation capabilities of the algorithm. The student will augment the algorithm with: (1) faster regression techniques (e.g., sparse Gaussian processes [3] or Gaussian Mixture Regression (GMR)); (2) local models [4] (i.e., partitioning the policy space and learning a different model for each partition); (3) learned actuator models [5]; and/or (4) learned compensatory actions [6]. The student will evaluate the approach on: (1) an iCub humanoid robot performing a lift-and-place task (i.e., lifting an object and placing it in a different location) where the weight/dynamics of the object cannot be anticipated beforehand, and/or (2) an irregularly-shaped peg-in-the-hole task in which the friction coefficients and peg specifications cannot be fully determined in simulation.

References:

  • [1]: "A survey on policy search algorithms for learning robot controllers in a handful of trials", Konstantinos Chatzilygeroudis, Vassilis Vassiliades, Freek Stulp, Sylvain Calinon and Jean-Baptiste Mouret. arXiv preprint arXiv:1807.02303, 2018.
  • [2]: "Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics", Konstantinos Chatzilygeroudis, Jean-Baptiste Mouret. ICRA, 2018.
  • [3]: "Distributed Gaussian processes", Marc Deisenroth and Jun Wei Ng. arXiv preprint arXiv:1502.02843, 2015.
  • [4]: "Guided policy search", Sergey Levine, and Vladlen Koltun. International Conference on Machine Learning, 2013.
  • [5]: "Learning agile and dynamic motor skills for legged robots", J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun and M. Hutter. Science Robotics, 2019.
  • [6]: "Model-plant Mismatch Compensation Using Reinforcement Learning", I. Koryakovskiy, M. Kudruss, H. Vallery, R. Babuška and W. Caarls. IEEE Robotics and Automation Letters, 2018.
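
The core idea of [2], learning only the residual between a simulator prior and the real system with a Gaussian process, can be sketched in a few lines on a 1-D system (the dynamics, kernel length-scale, and noise level below are hypothetical):

```python
import numpy as np

# 1-D "real robot" dynamics and an inaccurate simulator prior (both made up):
def f_real(x):
    return 0.9 * x + 0.5 * np.sin(3 * x)

def f_sim(x):  # black-box prior: misses the sinusoidal effect entirely
    return 0.9 * x

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, 25)   # observed states
y = f_real(X) - f_sim(X)     # the GP is trained on the residual only

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Standard GP regression (zero mean, RBF kernel) on the residual.
K = rbf(X, X) + 1e-4 * np.eye(len(X))  # jitter for numerical stability
alpha = np.linalg.solve(K, y)          # posterior-mean coefficients

def predict(x_new):
    """Prior mean from the simulator plus the GP's learned correction."""
    return f_sim(x_new) + rbf(x_new, X) @ alpha

x_test = np.linspace(-1.5, 1.5, 50)
err = np.max(np.abs(predict(x_test) - f_real(x_test)))
prior_err = np.max(np.abs(f_sim(x_test) - f_real(x_test)))
```

The speed-ups targeted by goals (1) and (2) amount to replacing the exact `solve` over all data with sparse-GP or GMR approximations, whose cost does not grow cubically with the number of observations.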

Project: Master Project at EPFL
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 50% implementation, 50% theory
Knowledge(s): C++, Python, Basic Linear Algebra
Subject(s): Reinforcement Learning, Robot Control, Machine Learning
Responsible(s): Konstantinos Chatzilygeroudis



Last update: 09/28/2014