| Algorithm 1. The overall steps: image to point cloud generation. | |
| Input: Image pairs | |
| Output: The 3D point cloud of the environment | |
| 1: | Initialize the encoder and decoder models |
| 2: | Set the appropriate model and input size |
| 3: | Initialize the calibration parameters, such as the intrinsic and projection matrices |
| 4: | while image frames are available do |
| 5: | Read image pairs |
| 6: | Convert the images to torch tensors |
| 7: | Concatenate the image pair |
| 8: | Extract the features using the encoder network |
| 9: | Compute the depth output using the decoder network |
| 10: | Interpolate the output to the original image size if the sizes differ |
| 11: | Squeeze the output tensor into an array |
| 12: | Project the disparity map to 3D points |
| 13: | Convert the points to a point field for visualization |
| 14: | end |
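The loop body of Algorithm 1 (steps 5 to 12) maps onto only a few lines of PyTorch. The sketch below is illustrative rather than the authors' exact implementation: it assumes the image pair is concatenated along the channel axis, that the encoder and decoder are passed in as generic callables, and that a pinhole model with focal lengths `fx`, `fy`, principal point `(cx, cy)`, and stereo baseline is used for the disparity-to-point projection (depth computed as z = fx·b/d).

```python
# Minimal sketch of Algorithm 1's per-frame body; all function and parameter
# names are illustrative assumptions, not the paper's actual code.
import numpy as np
import torch
import torch.nn.functional as F


def disparity_to_points(disp, fx, fy, cx, cy, baseline):
    """Step 12: back-project a disparity map to an N x 3 point array."""
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 1e-3                       # skip zero/near-zero disparities
    z = fx * baseline / disp[valid]           # depth from disparity (pinhole/stereo model)
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1).astype(np.float32)


def image_pair_to_points(encoder, decoder, left, right,
                         fx, fy, cx, cy, baseline):
    """Steps 5-12 for one image pair (H x W x 3 uint8 arrays)."""
    # step 6: HWC uint8 images -> normalized CHW float tensors
    to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        # step 7: concatenate the pair along the channel axis, add a batch dim
        pair = torch.cat([to_tensor(left), to_tensor(right)], dim=0).unsqueeze(0)
        # steps 8-9: encoder features, then disparity/depth from the decoder
        disp = decoder(encoder(pair))
        # step 10: interpolate back to the original resolution if the sizes differ
        if disp.shape[-2:] != left.shape[:2]:
            disp = F.interpolate(disp, size=left.shape[:2],
                                 mode="bilinear", align_corners=False)
        # step 11: squeeze batch/channel dims and move to a NumPy array
        disp_np = disp.squeeze().cpu().numpy()
    # step 12: project the disparity map to 3D points
    return disparity_to_points(disp_np, fx, fy, cx, cy, baseline)
```

For step 13, the returned N x 3 float32 array would typically be packed into a point-cloud message with one point field per x, y, z coordinate (e.g., a sensor_msgs/PointCloud2 in ROS) before being published for visualization; the exact message layout depends on the middleware used.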