2022 May 24;16:866294. doi: 10.3389/fnbot.2022.866294

Algorithm 1.

Weighted asynchronous fusion SLAM algorithm.

Input
1. Determine which sensor measurements fall within the current sampling interval
2. If there is only camera data in the sampling interval (i = 1), the estimation model reduces to a Kalman filter that estimates the pose from the single sensor's data at the camera sampling rate [Eq. (10)]. Otherwise, the sampling interval contains pose estimates from both the laser sensor and the vision sensor; then:
3. (1) Take the laser sensor's frequency as the sampling rate and use its estimate as the pre-measurement
    (2) Fuse in the pose estimated by the camera at the current moment
    (3) Re-fuse the pose result estimated by the laser sensor, and take the estimate after the two fusions as the final pose estimation result [Eqs. (11)–(12)]
4. According to the operating state of the robot, introduce an angle-based weighting factor [Eq. (13)]. If the robot moves in a straight line, increase the confidence in the vision estimate [Eq. (16)]; otherwise, when the robot rotates, increase the confidence in the laser estimate [Eq. (17)]
5. Let k = k + 1 and return to step 1
Output
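The loop above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes an identity measurement model, represents "confidence" as a simple scaling of the measurement covariances, and uses a hypothetical yaw-rate threshold `rot_thresh` to distinguish straight-line motion from rotation; all function and parameter names are illustrative.

```python
import numpy as np

def kalman_update(x, P, z, R):
    """One Kalman measurement update, assuming an identity measurement model."""
    K = P @ np.linalg.inv(P + R)          # Kalman gain
    x_new = x + K @ (z - x)               # corrected state
    P_new = (np.eye(len(x)) - K) @ P      # corrected covariance
    return x_new, P_new

def fuse_interval(x, P, cam, laser, yaw_rate, R_cam, R_laser, rot_thresh=0.1):
    """One sampling interval of the weighted asynchronous fusion (sketch).

    cam / laser are pose estimates from each sensor (None if absent in the
    interval); yaw_rate and rot_thresh implement the angle-based weighting.
    """
    # Step 4: angle-based weighting factor — trust vision on straight runs,
    # trust the laser when the robot is rotating (scaling factor is assumed).
    if abs(yaw_rate) < rot_thresh:
        R_cam = 0.5 * R_cam               # straight line: boost vision confidence
    else:
        R_laser = 0.5 * R_laser           # rotation: boost laser confidence

    if laser is None and cam is not None:
        # Step 2: camera-only interval — single-sensor Kalman update [Eq. (10)]
        return kalman_update(x, P, cam, R_cam)

    # Step 3: both sensors present — fuse the camera estimate first, then
    # re-fuse the laser estimate; the twice-fused value is the final pose
    x, P = kalman_update(x, P, cam, R_cam)        # first fusion
    x, P = kalman_update(x, P, laser, R_laser)    # second fusion [Eqs. (11)-(12)]
    return x, P
```

In a full system this would run once per sampling interval (step 5, k = k + 1), with the fused state and covariance carried forward as the prior for the next interval.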