
Table 16. Literature Analysis: Graph-based Representation for 3DOR.

Point-GCNN [69]
Detector category: One-stage
Environment: Outdoor
Scenario: 3D object detection from a LiDAR point cloud using a graph neural network.
Advantage(s): Detects multiple objects by predicting their category and shape in a single shot, using an auto-registration mechanism.
Limitation(s): Does not maintain accuracy with down-sampled data at the hard and moderate difficulty levels.

RGNet [70]
Detector category: Two-stage
Environment: Indoor
Scenario: 3D object proposal generation and relationship extraction in point clouds using a relation graph network.
Advantage(s): Extracts uniform appearance features through a point attention pooling method; holds appearance and position relationships between 3D objects by building a relation graph.
Limitation(s): Performs poorly when detecting thin objects.

HGNet [71]
Detector category: Two-stage
Environment: Indoor
Scenario: Direct 3D bounding-box prediction from raw point clouds.
Advantage(s): Learns semantics via a hierarchical graph representation; applies multi-level semantics by capturing the relationships among points to detect 3D objects.
Limitation(s): The ProRe module contributes little to detection when object features have already been adequately learned.

S-AT GCN [72]
Detector category: Two-stage
Environment: Outdoor
Scenario: Local geometric feature extraction.
Advantage(s): FE layers boost the contrast ratio of the feature map and increase the 3D recognition (true-positive) rate of the subsequent CNN for small and sparse objects.
Limitation(s): Run-time speed drops with the added FE layers.
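To make the shared mechanism behind these graph-based detectors concrete, the sketch below builds a k-nearest-neighbour graph over a toy point cloud and runs one GNN-style message-passing step (edge MLP, max aggregation, residual vertex update). This is a minimal NumPy illustration of the graph-based representation idea, not code from any of the cited works; the function names, the choice of k, and the feature widths are assumptions made for the example.

```python
# Minimal sketch (not the cited authors' code): k-NN graph construction over a
# point cloud plus one message-passing step. All names and sizes are illustrative.
import numpy as np

def build_knn_graph(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Return an (N, k) array of neighbour indices for each point."""
    # Pairwise squared Euclidean distances (N, N); acceptable for small toy clouds.
    diff = points[:, None, :] - points[None, :, :]
    dist2 = np.einsum("ijk,ijk->ij", diff, diff)
    np.fill_diagonal(dist2, np.inf)          # exclude self-loops
    return np.argsort(dist2, axis=1)[:, :k]  # k nearest neighbours per point

def message_passing_step(points, features, neighbors, w_edge, w_update):
    """One graph update: compute edge messages, max-pool them, update vertex features."""
    # Edge input: relative neighbour position concatenated with neighbour features.
    rel_pos = points[neighbors] - points[:, None, :]                   # (N, k, 3)
    edge_in = np.concatenate([rel_pos, features[neighbors]], axis=-1)  # (N, k, 3+F)
    messages = np.maximum(edge_in @ w_edge, 0.0)                       # ReLU edge MLP
    aggregated = messages.max(axis=1)                                  # max over neighbours
    # Residual vertex update, as commonly used in GNN-style detectors.
    return features + np.maximum(aggregated @ w_update, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-10.0, 10.0, size=(256, 3))   # toy "point cloud"
    feat = rng.normal(size=(256, 16))               # initial per-point features
    nbrs = build_knn_graph(pts, k=8)
    w_e = rng.normal(scale=0.1, size=(3 + 16, 32))
    w_u = rng.normal(scale=0.1, size=(32, 16))
    out = message_passing_step(pts, feat, nbrs, w_e, w_u)
    print(out.shape)  # (256, 16)
```

In detectors of this kind, several such iterations refine the per-vertex features before classification and bounding-box regression heads are applied to each vertex.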