The Inventors’ SLAM system comprises two main components: the frontend (for camera tracking and surface extraction) and the backend (for pose graph and map optimization). The frontend is the Kintinuous mapping system, based on the KinectFusion™ system. This real-time dense mapping system integrates all depth measurements into a volumetric data structure; camera pose estimation is carried out via a dense Iterative Closest Point (ICP) alignment. As the volumetric structure moves, the region of the surface that leaves the volume is extracted in point cloud form. Each of the resulting “cloud slices” along the camera trajectory has an associated camera pose. The final component in the frontend is a visual place recognition module that detects visual loop closures and computes the corresponding relative camera pose constraints.
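The dense ICP pose estimation mentioned above can be illustrated with a minimal sketch. This is not the Inventors’ implementation: it shows one Gauss-Newton step of point-to-plane ICP with correspondences assumed already known and a small-angle linearisation of the rotation; the function name and simplifications are illustrative only.

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One linearised step of point-to-plane ICP (illustrative sketch).

    Given corresponding points src[i] -> dst[i] and the surface normals
    n[i] at the destination, solves for a small twist
    (rx, ry, rz, tx, ty, tz) minimising
        sum_i ( n_i . (R @ src_i + t - dst_i) )^2
    under the small-angle approximation R = I + [omega]_x.
    """
    # Linearised residual: n_i . (src_i + omega x src_i + t - dst_i),
    # so the coefficient of omega is (src_i x n_i) and of t is n_i.
    A = np.hstack([np.cross(src, normals), normals])      # (N, 6) system matrix
    b = np.einsum('ij,ij->i', normals, dst - src)         # (N,) residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    rx, ry, rz, tx, ty, tz = x
    # Recompose the small-angle rotation from the solved twist.
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    t = np.array([tx, ty, tz])
    return R, t
```

In a real frontend this step runs every frame against the depth map raycast from the volumetric model, iterated with re-projected correspondences rather than known ones.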
The backend performs incremental pose graph optimization coupled with incremental non-rigid dense map optimization. A dense every-frame pose graph is optimized according to the loop closure constraints. The optimized dense camera trajectory is used in conjunction with matched visual features to constrain a non-rigid space deformation of the map. The inventors apply an embedded deformation approach to the large-scale dense maps captured with the Kintinuous frontend and derive efficient incremental methods to prepare the map for deformation. By combining pose graph optimization with non-rigid deformation of a dense map, they obtain highly accurate dense maps over large-scale trajectories that are both locally and globally consistent.
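The core of the embedded deformation idea can be sketched as follows. This is a simplified illustration, not the Inventors’ method: each map point is warped by blending the local rigid transforms of its k nearest deformation-graph nodes (the weighting scheme and function names here are assumptions, and the node transforms are taken as given rather than optimized from loop closure constraints).

```python
import numpy as np

def embedded_deform(points, nodes, rotations, translations, k=4):
    """Warp map points through a deformation graph (illustrative sketch).

    Each point p is deformed as a convex blend over its k nearest nodes g_j:
        p' = sum_j w_j(p) * ( R_j @ (p - g_j) + g_j + t_j )
    i.e. each node contributes a local rigid transform centred on itself.
    """
    out = np.zeros_like(points)
    for i, p in enumerate(points):
        # Find the k nearest graph nodes to this point.
        d = np.linalg.norm(nodes - p, axis=1)
        idx = np.argsort(d)[:k]
        # Inverse-distance weights, normalised over the k neighbours
        # (a simple stand-in for the usual distance-falloff weights).
        w = 1.0 / (d[idx] + 1e-8)
        w /= w.sum()
        for j, wj in zip(idx, w):
            out[i] += wj * (rotations[j] @ (p - nodes[j]) + nodes[j] + translations[j])
    return out
```

In the full backend, the node rotations and translations would be solved by a non-linear optimization that balances the loop closure and feature constraints against local rigidity of the graph, which is what lets a large dense map bend into global consistency without breaking local surface detail.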