FullFusion: A Framework for Semantic Reconstruction of Dynamic Scenes

Mihai Bujanca, Mikel Luján, Barry Lennox

Research output: Conference contribution (chapter in book / conference proceeding), peer-reviewed


Abstract

Assuming that scenes are static is common in SLAM research. However, the world is complex, dynamic, and features interactive agents. Mobile robots operating in real-life scenarios across a variety of environments require an advanced level of understanding of their surroundings. It is therefore crucial to find effective ways of representing the world in its dynamic complexity, beyond the geometry of static scene elements.

We present a framework that enables incremental reconstruction of semantically annotated 3D models in dynamic settings using commodity RGB-D sensors. Our method is the first to perform semantic reconstruction of non-rigidly deforming objects alongside a static background. FullFusion is a step towards enabling robots to have a deeper and richer understanding of their surroundings, and can facilitate the study of interaction and scene dynamics.
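The abstract describes the per-frame logic only at a high level, so the following is a minimal, purely illustrative Python sketch under our own assumptions: every name (FullFusionSketch, segment, integrate, the background label 0, and the component objects) is hypothetical and not part of the published framework.

    # Hypothetical sketch of the per-frame loop described above: split each
    # RGB-D frame into static background and dynamic objects, then fuse each
    # part into its own model. Not the authors' implementation.
    import numpy as np

    class FullFusionSketch:
        def __init__(self, segmenter, static_fusion, nonrigid_fusion):
            self.segmenter = segmenter              # per-pixel semantic/motion labels
            self.static_fusion = static_fusion      # e.g. a TSDF volume for the background
            self.nonrigid_fusion = nonrigid_fusion  # deformable model for moving objects

        def process_frame(self, rgb: np.ndarray, depth: np.ndarray) -> None:
            labels = self.segmenter.segment(rgb, depth)
            static_mask = labels == 0  # assume label 0 marks the static background
            # Fuse background geometry into the static reconstruction.
            self.static_fusion.integrate(rgb, depth, mask=static_mask)
            # Track and fuse each dynamic object with a non-rigid warp.
            for obj_id in np.unique(labels[~static_mask]):
                self.nonrigid_fusion.integrate(rgb, depth,
                                               mask=(labels == obj_id),
                                               obj=int(obj_id))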

To showcase the potential of FullFusion, we provide a quantitative and qualitative evaluation of a baseline implementation that employs specific reconstruction and segmentation pipelines. It is important to highlight, however, that the modular design of the framework allows any of its components to be easily replaced with new or existing counterparts.
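To make the modularity claim concrete, the component boundary might look like the interface sketch below. Again, this is hypothetical: the Protocol names and method signatures are our own illustration, not the framework's API.

    # Hypothetical component interfaces: anything matching these signatures
    # could be swapped into the pipeline sketched earlier.
    from typing import Protocol
    import numpy as np

    class Segmenter(Protocol):
        def segment(self, rgb: np.ndarray, depth: np.ndarray) -> np.ndarray: ...

    class Reconstructor(Protocol):
        def integrate(self, rgb: np.ndarray, depth: np.ndarray,
                      mask: np.ndarray, **kwargs) -> None: ...

    # Replacing a component is then a one-line change at construction time,
    # e.g. pipeline = FullFusionSketch(my_segmenter, my_tsdf, my_warp_model)

Structural typing of this kind keeps the pipeline agnostic to the concrete segmentation or reconstruction backend, which is one plausible way to realise the interchangeability the abstract describes.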
Original language: English
Title of host publication: The IEEE International Conference on Computer Vision (ICCV) Workshops
Publication status: Published - Oct 2019
