Chih-Hsuan Chen

Multimodal modelling and motion prediction in dynamic environments

Principal Supervisor
Dr.-Ing. Birgit Graf
Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA

Collaboration partners:

  • Universität Hamburg
  • Fondazione Istituto Italiano Di Tecnologia

Competence Area: Embodiment


Objectives

Tracking the precise location of humans and dynamic obstacles allows service robots to distinguish between places where they can operate safely and places where collisions, and thus safety-critical situations, are likely to occur. In this project, vision-based environment modelling, i.e. 3-D point cloud generation and geometric mapping, will therefore be integrated with other perception modalities such as sound-source localisation and touch sensors. Furthermore, motion tracking and prediction functions will be developed based on the integrated sensor input and provided in real time to higher-level, cognitive modules.
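
As a concrete illustration of the kind of motion-prediction function mentioned above, the sketch below tracks a single dynamic obstacle with a constant-velocity Kalman filter and extrapolates its position one second ahead. The state layout, time step DT, and noise covariances Q and R are illustrative assumptions, not the models actually used in the project.

import numpy as np

# State: [x, y, vx, vy]; a 2-D constant-velocity model for one tracked obstacle.
DT = 0.1  # prediction time step in seconds (assumed)

F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 0.05                        # measurement noise (assumed)


def predict(x, P):
    """Propagate the state estimate one time step ahead."""
    return F @ x, F @ P @ F.T + Q


def update(x, P, z):
    """Correct the prediction with a position measurement z = [x, y]."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P


if __name__ == "__main__":
    x = np.array([0.0, 0.0, 0.5, 0.0])   # initial estimate: obstacle moving along x
    P = np.eye(4)
    for z in [np.array([0.05, 0.00]), np.array([0.11, 0.01]), np.array([0.16, 0.00])]:
        x, P = predict(x, P)
        x, P = update(x, P, z)
    # Extrapolate the obstacle position one second ahead for collision checking.
    x_future = np.linalg.matrix_power(F, round(1.0 / DT)) @ x
    print("predicted position in 1 s:", x_future[:2])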


Expected Results

Methods to efficiently process and store the resulting enhanced environment map, specifically suited to real-time collision avoidance, will be designed and implemented. Besides providing fast feedback on the risk level of planned actions, and thereby directly affecting behaviour by prohibiting dangerous actions, the real-time 3-D environment maps will also serve as input for higher-level, cognitive safety modules, enabling them to make faster decisions based on more precise predictions.
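
To illustrate the kind of fast risk feedback on planned actions described above, the following sketch queries a 2-D occupancy grid for the maximum risk along a planned path and rejects actions that exceed a threshold. The grid layout, resolution, and risk threshold are illustrative assumptions, not the data structures actually developed in the project.

import numpy as np

# A minimal sketch of a fast risk query against a 2-D occupancy grid.
RESOLUTION = 0.05     # metres per cell (assumed)
RISK_THRESHOLD = 0.5  # above this, a planned action is rejected (assumed)

# Occupancy values in [0, 1]; higher means a dynamic obstacle is more likely there.
occupancy = np.zeros((200, 200))
occupancy[90:110, 90:110] = 0.9   # a hypothetical obstacle near the grid centre


def to_cell(point):
    """Convert a metric (x, y) point to grid indices."""
    return int(point[0] / RESOLUTION), int(point[1] / RESOLUTION)


def path_risk(path):
    """Return the maximum occupancy value along a planned path (list of (x, y) points)."""
    return max(occupancy[i, j] for i, j in (to_cell(p) for p in path))


def is_action_safe(path):
    """Fast feedback on a planned action: reject it if any waypoint is too risky."""
    return path_risk(path) < RISK_THRESHOLD


if __name__ == "__main__":
    safe_path = [(x * 0.1, 1.0) for x in range(20)]       # stays clear of the obstacle
    risky_path = [(x * 0.1, 5.0) for x in range(20, 60)]  # crosses the occupied region
    print(is_action_safe(safe_path), is_action_safe(risky_path))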


Biography

Chih-Hsuan Chen has been a PhD Fellow at Fraunhofer IPA since June 2017, focusing on research in perception for service robotics.
He has more than two years of experience in industrial robotics and more than three years of experience in humanoid robotics. He also holds a Marie Curie Fellowship as an Early Stage Researcher.