NGD-SLAM: Towards Real-Time Dynamic SLAM without GPU

Yuhao Zhang, Mihai Bujanca, Mikel Luján

Research output: Preprint/Working paper

Abstract

Many existing visual SLAM methods can achieve high localization accuracy in dynamic environments by leveraging deep learning to mask moving objects. However, these methods incur significant computational overhead because camera tracking must wait for the deep neural network to generate a mask for each frame, and they typically require GPUs for real-time operation, which restricts their practicality in real-world robotic applications. Therefore, this paper proposes a real-time dynamic SLAM system that runs exclusively on a CPU. Our approach incorporates a mask propagation mechanism that decouples camera tracking from deep learning-based masking for each frame. We also introduce a hybrid tracking strategy that integrates ORB features with optical flow methods, enhancing both robustness and efficiency by selectively allocating computational resources to input frames. Compared to previous methods, our system maintains high localization accuracy in dynamic environments while achieving a tracking frame rate of 60 FPS on a laptop CPU. These results demonstrate the feasibility of utilizing deep learning for dynamic SLAM without GPU support. Since most existing dynamic SLAM systems are not open-source, we make our code publicly available at: https://github.com/yuhaozhang7/NGD-SLAM
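The core idea in the abstract, decoupling camera tracking from per-frame deep masking, can be illustrated with a toy sketch. In the sketch below, an expensive segmentation step runs only on every Nth frame, and on intermediate frames the last mask is cheaply propagated forward so tracking never blocks on the network. The `segment` and `dilate` functions here are hypothetical stand-ins: the paper's actual system uses a deep neural network for masking and optical flow for propagation, neither of which is reproduced here.

```python
import numpy as np

def segment(frame):
    # Hypothetical stand-in for a deep segmentation network: simply
    # thresholds intensity to mark "dynamic" pixels.
    return frame > 0.5

def dilate(mask, r=1):
    # Grow the mask by r pixels in each direction; a crude stand-in for
    # warping the mask forward with optical flow, so the propagated mask
    # still covers a slightly moved object.
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def track_sequence(frames, seg_every=4):
    """Produce a dynamic-object mask for every frame without waiting
    for segmentation on every frame: a fresh mask is computed only
    every `seg_every` frames, and in between the last mask is
    propagated, so per-frame tracking cost stays low."""
    masks = []
    mask = None
    for i, frame in enumerate(frames):
        if i % seg_every == 0:
            mask = segment(frame)   # expensive, done sparsely
        else:
            mask = dilate(mask)     # cheap propagation on every frame
        masks.append(mask)
    return masks
```

This only conveys the scheduling idea; in the real system the network would run asynchronously (e.g., in a separate thread) while the tracker consumes the most recent propagated mask.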
Original language: English
Publisher: arXiv.org
Pages: 1-7
Number of pages: 7
DOIs
Publication status: Published - Oct 2025
