Removing Dynamic 3D Objects from Point Clouds of a Moving RGB-D Camera
Canben Yin, Shaowu Yang, Xiaodong Yi, Zhiyuan Wang, Yanzhen Wang, Bo Zhang, Yuhua Tang
HPCL | School of Computer, National University of Defense Technology
Most state-of-the-art visual simultaneous localization and mapping (SLAM) systems are designed for applications in static environments. However, during a SLAM process, dynamic objects in the field of view of the camera will affect the accuracy of visual odometry and loop-closure detection. In this paper, we present a solution for removing dynamic objects from RGB images and their corresponding depth images when an RGB-D camera is mounted on a mobile robot for visual SLAM. We transform two selected successive images into the same image coordinate frame through feature matching. Then we detect candidate image pixels of dynamic objects by applying a threshold to the image difference between the two images. Furthermore, we utilize depth information at the candidate pixels to decide whether true dynamic objects are found. Finally, in order to extract complete 3-dimensional (3D) dynamic objects, we find the correspondence between each object and a cluster of the point cloud computed from the RGB-D images. To evaluate the performance of detecting and removing dynamic objects, we conduct experiments in various indoor scenarios, which demonstrate the efficiency of the proposed algorithm.
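The detection pipeline described above (image differencing followed by depth verification) can be sketched roughly as follows. This is an illustrative, hedged sketch rather than the authors' implementation: it assumes the two frames have already been warped into the same image coordinate frame via feature matching (that step is omitted), and the function name `dynamic_mask` and both thresholds are placeholders, not values from the paper.

```python
import numpy as np

def dynamic_mask(gray_prev, gray_curr, depth_prev, depth_curr,
                 intensity_thresh=25, depth_thresh=0.1):
    """Return a boolean mask of pixels judged to belong to dynamic objects.

    Assumes both frames are already transformed into the same image
    coordinate frame (the paper does this via feature matching).
    intensity_thresh and depth_thresh are illustrative parameters only.
    """
    # Step 1: candidate dynamic pixels from the absolute intensity
    # difference between the two aligned frames.
    candidates = np.abs(gray_curr.astype(np.int16)
                        - gray_prev.astype(np.int16)) > intensity_thresh
    # Step 2: verify candidates with depth -- a true dynamic object also
    # changes the measured depth at those pixels, which filters out
    # intensity changes caused by, e.g., lighting.
    depth_change = np.abs(depth_curr - depth_prev) > depth_thresh
    return candidates & depth_change
```

In a full system, the resulting mask would then be matched against Euclidean clusters of the RGB-D point cloud so that the whole 3D object, not just the changed pixels, is removed.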
paper (4.3MB) code (coming soon...) slides (coming soon...)
This work is supported by Research on Foundations of Major Applications, Research Programs of NUDT, Project ZDYYJCYJ20140601.