Visual-inertial navigation systems (VINS) fuse measurements from visual and inertial sensors to provide robust navigation capabilities. Because the two sensor modalities are complementary and sensor costs continue to fall, recent improvements have made VINS widely applicable in areas such as robotic navigation and autonomous driving. This paper reviews the latest developments in VINS, introducing visual simultaneous localization and mapping (SLAM) and its role in VINS. Key technologies are discussed, including direct and indirect image-processing methods for visual odometry and Inertial Measurement Unit (IMU) preintegration for robot motion estimation. Both filter-based and optimization-based state estimation methods in VINS are examined: filter-based methods, such as the extended Kalman filter, offer real-time state updates for attitude tracking and map construction, while optimization-based methods minimize reprojection or other error metrics to improve localization accuracy and robustness. The application of dynamic SLAM, which addresses dynamic objects in complex environments, is also explored. The paper concludes by summarizing current research challenges and proposing future directions in the field of dynamic SLAM.
Keywords: visual-inertial navigation system, visual simultaneous localization and mapping, extended Kalman filter, dynamic simultaneous localization and mapping, semantic methods