PeerJ Comput Sci. 2021 May 12;7:e529. doi: 10.7717/peerj-cs.529

Table 1. Main characteristics of the most relevant state-of-the-art methods.

| Reference | Single/multiple frames | Single/multiple objects | Static/dynamic objects | Input type | Method |
| --- | --- | --- | --- | --- | --- |
| Kulikajevas (2019) | Single frame | Single object | Static object | RGB-D sensor | Hybrid neural network |
| Kulikajevas (2019) | Single frame | Single object | Static object | 3D models | GAN-based neural network |
| Widya et al. (2019) | Multiple (two image sequences) | Single object | Static object | Monocular endoscope | Structure from motion (SfM) |
| Wang (2018) | Single frame | Single object | Static object | RGB-D sensor | Monocular SLAM |
| Yang (2020) | Single frame | Multiple (full scene) | Static scene (dynamic objects removed) | Monocular RGB | Online incremental mesh generation |
| Shimada (2020) | Single frame | Single object | Dynamic object | Monocular RGB | Markerless 3D human motion capture |
| Peng (2020) | Single frame | Single object | Dynamic object | Monocular RGB | Graph convolutional network (GCN) |
| Ku (2019) | Single frame | Cropped single object | Dynamic object | Monocular RGB | Geometric priors, shape reconstruction, and depth prediction |
| Lu (2020) | Multiple (two consecutive point clouds) | Multiple (full scene) | Dynamic objects | Outdoor LiDAR datasets | LSTM and GRU networks |
| Weng et al. (2020) | Single frame | Multiple (full scene) | Dynamic objects | Outdoor LiDAR datasets | Next-scene prediction using an LSTM |
| Akhter (2010) | Single frame | Multiple objects | Dynamic objects | Monocular RGB | Structure from motion |
| Fragkiadaki et al. (2014) | Multiple frames | Single object | Dynamic object | Monocular RGB | Non-rigid structure from motion (NRSfM) |
| Ranftl (2016) | Multiple (two consecutive frames) | Multiple (full scene) | Dynamic objects | Monocular RGB | Segmentation of the optical flow field into a set of motion models |
| Kumar, Dai & Li (2019) | Multiple (two frames) | Multiple (full scene) | Dynamic objects | Monocular RGB | Superpixel over-segmentation |
| Proposed framework | Multiple (whole video frame sequence) | Multiple (full scene) | Dynamic objects | Monocular RGB | Unsupervised learning and point cloud fusion |
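To make the "point cloud fusion" entry in the last row more concrete, the sketch below shows one common way such fusion can be realized: per-frame depth maps are back-projected through the camera intrinsics, moved into a shared world frame with the estimated camera poses, and concatenated into a single scene cloud. This is a minimal illustration, not the authors' implementation; it assumes depth maps and camera-to-world poses are already available (e.g., predicted by an unsupervised network), and the names `backproject`, `fuse_point_clouds`, and the intrinsic matrix `K` are illustrative.

```python
# Minimal sketch of multi-frame point cloud fusion (assumed pipeline, not the
# paper's code): back-project each depth map, transform it with the estimated
# camera pose, and concatenate all frames into one world-frame point cloud.
import numpy as np

def backproject(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Convert an HxW depth map into an Nx3 point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def fuse_point_clouds(depths, poses, K):
    """Fuse per-frame clouds into a single world-frame cloud.

    depths: list of HxW depth maps (one per video frame)
    poses:  list of 4x4 camera-to-world transformation matrices
    K:      3x3 camera intrinsic matrix
    """
    fused = []
    for depth, T in zip(depths, poses):
        pts_cam = backproject(depth, K)                           # Nx3, camera frame
        pts_h = np.hstack([pts_cam, np.ones((len(pts_cam), 1))])  # homogeneous coords
        pts_world = (T @ pts_h.T).T[:, :3]                        # Nx3, world frame
        fused.append(pts_world)
    return np.concatenate(fused, axis=0)

if __name__ == "__main__":
    # Toy example with two constant-depth frames and a 10 cm camera translation.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    depths = [np.full((480, 640), 2.0), np.full((480, 640), 2.1)]
    poses = [np.eye(4), np.eye(4)]
    poses[1][0, 3] = 0.1
    cloud = fuse_point_clouds(depths, poses, K)
    print(cloud.shape)  # (614400, 3)
```

In practice such a naive concatenation would be followed by filtering or voxel downsampling to remove redundant and noisy points, but the core idea of registering per-frame clouds into a common coordinate frame is the same.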