EPFL > STI > IMT > LASA > Teaching > Student Projects
Student Project Proposals for Autumn 2014

If you are interested in one of the projects below, whether a Semester Project or a Master Project, please contact the first person of reference indicated in the description, either by telephone, by email, or by visiting the LASA offices directly. If you are looking for a project for Spring 2015, please click here.


Semester Projects

Adapting Human Kinematic Models from Interaction Data (assigned)

Objective: Developing a framework for the data-driven construction of human kinematic models for human-robot cooperative work. In this project, the student will develop a simulation tool that connects to an accurate biomechanics simulation of the human body based on the OpenSim software [1]. The work starts with a tool that takes motion capture data from human users (labeled markers) and derives a plausible kinematic chain (through model learning) that can serve as the basis of a whole human model; that is, learning models of human kinematics flexible enough to cover amputees, people with limited limb motions, or simplified models based on the performed motions [2].

References:

  • [1]: https://opensim.stanford.edu/
  • [2]: Zakotnik, J., Matheson, T., & Dürr, V. (2004). A posture optimization algorithm for model-based motion capture of movement sequences. Journal of Neuroscience Methods, 135(1–2), 43–54. https://doi.org/10.1016/j.jneumeth.2003.11.013

Project: Semester Project
Period: 01.09.2019 - 01.04.2020
Section(s): EL IN MA ME MT PH
Type: 40% theory, 30% software, 20% implementation
Knowledge(s): C++, ML, Robotics
Subject(s): Machine Learning, Human-Robot Interaction
Responsible(s): Diego F. Paez Granados, Baptiste Busch
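To give a flavor of what "deriving a kinematic chain from labeled markers" involves, the sketch below (assuming NumPy; the function name and the algebraic sphere-fit formulation are illustrative choices, not part of the project description) estimates a joint's rotation center from the trajectory of a marker that moves rigidly about that joint:

```python
import numpy as np

def fit_joint_center(markers):
    """Estimate a joint (rotation) center from the trajectory of one
    labeled marker rotating about a fixed point, via an algebraic
    sphere fit: each observed position p satisfies
    ||p||^2 = 2 p . c + (r^2 - ||c||^2), which is linear in the
    unknowns (c, r^2 - ||c||^2).
    markers: (T, 3) array of marker positions over T frames.
    Returns the estimated 3D joint center."""
    markers = np.asarray(markers, dtype=float)
    A = np.hstack([2.0 * markers, np.ones((len(markers), 1))])
    b = np.sum(markers ** 2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]  # x[3] holds r^2 - ||c||^2 and is discarded
```

Repeating such fits for each pair of adjacent segments yields candidate joint locations from which a kinematic chain can be assembled; a full solution would of course also have to handle marker noise and segment labeling.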

Learning human postures for online human-robot dynamics estimation (assigned)

Understanding human posture in different human-robot interaction settings is crucial for achieving proper control and satisfactory interactions for human users. In this project, we would like to develop a simulation of the human-robot interaction for the robot platform Qolo [1] - a standing mobility device for people with spinal cord injury. The main goal is to learn, from motion capture data, estimates of the center of mass (COM) location using inertia models of a simplified human kinematic chain, and in parallel to use less accurate posture information (embedded pressure sensors) to achieve a similar COM estimation model. The resulting algorithm should estimate the motion of the human body during the interaction and thus the COM of the human-robot dyad. Further work can focus on a whole dynamic model of the platform for real-time stability and motion tracking.

References:

  • [1]: Qolo: https://diegofpaez.wordpress.com/portfolio/crowdbot-qolo/
  • [2]: https://ieeexplore.ieee.org/document/8594199

Project: Semester Project
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 30% theory, 40% software, 30% implementation
Knowledge(s): C++, Matlab/Python, Robotics
Subject(s): Robotics, Machine Learning
Responsible(s): Diego F. Paez Granados
URL: Click here
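The core COM computation over a simplified kinematic chain reduces to a mass-weighted average of segment COMs. A minimal sketch, assuming segment COM positions are already available from the chain; the mass fractions below are placeholder values loosely based on standard anthropometric tables, and all names are hypothetical:

```python
import numpy as np

# Hypothetical segment mass fractions (sum to 1.0). Real values would
# come from anthropometric tables and be adapted per user.
SEGMENT_MASS_FRACTION = {
    "trunk": 0.497,
    "head": 0.081,
    "upper_arms": 0.056,
    "forearms_hands": 0.044,
    "thighs": 0.200,
    "shanks_feet": 0.122,
}

def whole_body_com(segment_coms, body_mass, fractions=SEGMENT_MASS_FRACTION):
    """Whole-body COM as the mass-weighted average of segment COMs.
    segment_coms: dict mapping segment name -> (3,) COM position,
    e.g. computed from motion capture via the kinematic chain."""
    total = np.zeros(3)
    for name, com in segment_coms.items():
        total += fractions[name] * body_mass * np.asarray(com, dtype=float)
    return total / body_mass
```

The project's second data source (pressure sensors) would then be used to regress a model that reproduces this COM estimate without the motion capture system.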
   

Master Projects


Human Modeling for Human-Robot Interaction

OpenSim-based model for human-robot interaction. Objective: Developing a human model that is usable in real time for analyzing the human partner's state in a collaborative setting with a robot. In this project, the student will develop a simulation tool that connects an accurate biomechanics simulation of the human body based on the OpenSim software [1] with real-time motion capture data, in order to estimate the workload of multiple muscle groups and ultimately derive a multidimensional measure of the ergonomics of a task [2]. First step: developing a software framework in which the human model can be rapidly updated for different human users, that is, a framework that adapts to changes of the user, e.g., amputees, limited limb motions, different inertia properties. Second step: developing a model of ergonomics that goes beyond current discrete models such as REBA [3], towards a continuous cost function of the multidimensional task cost in terms of the estimated load on the human user. As further work, the student can focus on the theoretical aspects (validity of the model in comparison with previous methods) or the practical aspects (generalization and implementation in real scenarios using existing controllers at LASA) of the project.

References:

  • [1]: https://opensim.stanford.edu/
  • [2]: https://www.inesc-id.pt/publications/13850/pdf/
  • [3]: S. Hignett and L. McAtamney, "Rapid entire body assessment (REBA)," Applied Ergonomics, vol. 31, no. 2, pp. 201–205, 2000.

Project: Master Project at EPFL
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 30% theory, 50% software, 20% implementation
Knowledge(s): C++, Python/Matlab, Robotics
Subject(s): Physical Human-Robot Interaction, Biomechanics
Responsible(s): Diego F. Paez Granados, Baptiste Busch
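To make the contrast with discrete scoring concrete: REBA bins joint-angle deviations into integer scores, whereas a continuous cost is differentiable and can sit inside an optimizer or controller. In its simplest possible form such a cost could look like the sketch below (hypothetical function and weights, far simpler than what the project asks for):

```python
import numpy as np

def posture_cost(joint_angles, neutral, weights):
    """A minimal continuous posture cost: weighted squared deviation of
    each joint angle (radians) from a neutral reference pose.
    All three arguments are (n_joints,) arrays; the weights encode how
    heavily each joint's deviation is penalized and are hypothetical."""
    d = np.asarray(joint_angles, dtype=float) - np.asarray(neutral, dtype=float)
    return float(np.sum(np.asarray(weights, dtype=float) * d ** 2))
```

A full ergonomics model would fold in muscle workloads estimated from the OpenSim simulation rather than joint angles alone, but the key property - a smooth scalar cost instead of discrete bins - is the same.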

Finding creative solutions to hard-to-solve robotic problems

Most interesting real-world robotic tasks do not have a well-defined continuous reward function. For example, imagine a robot that has to autonomously clean up a room: it is hardly conceivable (if not impossible) to come up with a reward function that is continuous and does not impose any bias on the solution. Recently, Uber AI Labs published a new algorithm, called Go-Explore [1][2], that can effectively explore very large spaces with little domain knowledge and few assumptions, but they do not provide any results on a robotics task or real-world application. The Go-Explore algorithm consists of two stages: (1) learning how to solve the task in simulation in the raw action space, and (2) using learning from demonstrations (LfD) to find a robust/reactive controller that can be applied to the actual (possibly stochastic) system.

The goal of this project is three-fold: (1) define (or learn) an appropriate state representation to feed to the Go-Explore algorithm (a good starting point is to learn a state representation using a combination of the robotic-priors loss functions by Jonschkowski and Brock [3], but many more can be exploited [4]); (2) investigate intelligent exploration policies instead of randomly sampled actions (one way could be to define primitive policies based on dynamical systems that perform a few parameterized behaviors, e.g., point-reaching); and (3) investigate how to insert dynamical-system-based LfD approaches into the second stage of the algorithm. The algorithm will be evaluated on one of the following tasks: (1) USB key insertion, (2) an irregularly-shaped peg-in-hole task that involves dual-arm manipulation, or (3) a robot that needs to clean up a desk and put each object in a specific basket.

References:

  • [1]: https://eng.uber.com/go-explore/
  • [2]: https://arxiv.org/abs/1901.10995
  • [3]: Jonschkowski, R. and Brock, O., 2015. Learning state representations with robotic priors. Autonomous Robots, 39(3), pp.407-428.
  • [4]: Lesort, T., Díaz-Rodríguez, N., Goudou, J.F. and Filliat, D., 2018. State representation learning for control: An overview. Neural Networks.

Project: Master Project at EPFL
Period: 01.09.2019 - 01.05.2020
Section(s): EL IN MA ME MT PH
Type: 60% theory, 40% implementation
Knowledge(s): Python, C++
Subject(s): Reinforcement Learning, Robot Control, Machine Learning
Responsible(s): Konstantinos Chatzilygeroudis
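The first Go-Explore stage described above (keep an archive of visited "cells", return to a promising cell in a resettable simulator, then explore from it) can be sketched on a toy deterministic environment as follows; the environment, the cell definition (here just the raw state), and all names are illustrative, not from the paper, and the second stage (robustification via LfD) is omitted:

```python
import random

class ToyEnv:
    """Deterministic toy environment: the agent moves on a line and the
    sparse goal is to reach position 10 - a stand-in for a hard
    exploration task with no continuous reward."""
    def __init__(self):
        self.pos = 0

    def get_state(self):
        return self.pos

    def set_state(self, s):
        # Go-Explore's first stage assumes the simulator can be reset
        # to any previously visited state.
        self.pos = s

    def step(self, action):
        # action is -1 or +1; positions are clipped at 0.
        self.pos = max(0, self.pos + action)
        return self.pos, self.pos >= 10  # (new state, task solved?)

def go_explore(env, iters=3000, seed=0):
    """Minimal sketch of Go-Explore stage 1: maintain an archive of
    discovered cells, repeatedly pick a cell, return to it, and run a
    short random rollout, archiving any new cells found."""
    rng = random.Random(seed)
    archive = {env.get_state()}
    for _ in range(iters):
        cell = rng.choice(sorted(archive))  # "select" (uniform here)
        env.set_state(cell)                 # "return"
        for _ in range(5):                  # "explore": short random rollout
            s, done = env.step(rng.choice([-1, 1]))
            archive.add(s)
            if done:
                return archive, True
    return archive, False
```

In the project, the interesting questions are exactly the parts this sketch trivializes: what the cell/state representation should be (goal 1) and what replaces the random rollout (goal 2).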
Last update: 09/28/2014