Sensors. 2022 Aug 9;22(16):5946. doi: 10.3390/s22165946

Table 1. Review of scientific papers on the application and analysis of lidar systems in modern road transportation.

Reference | Description of Application | Conclusion

Autonomous driving
[1] | A review of state-of-the-art lidar technologies and the associated perception algorithms for application in autonomous driving | The limitations and challenges of lidar technology are presented, along with the impressive results of the analyzed algorithms
[21] | Discussion of the role of lidar systems in autonomous driving applications | Lidar plays a vital role in monitoring fixed and moving objects in traffic
[22] | A review of lidar applications in the automated extraction of road features and a discussion of challenges and future research | Lidar is used for various transportation applications, including the extraction of on-road (road surface, lane, and road edge), roadside (traffic signs, objects), and geometric (road crossing, vertical alignment, pavement condition, sight distance, vertical clearance) information
[23] | Simultaneous localization and mapping (SLAM)-based indoor navigation for autonomous vehicles, based directly on the three-dimensional (3D) spatial information from lidar point cloud data | A comparative analysis of different navigation methods is conducted, based on extensive experiments in real environments
[24] | Extensive analysis of automotive lidar performance in adverse weather conditions, such as dense fog and heavy rain | Perception and detection of objects are poor during rain and fog; the proposed rain and fog classification method provides satisfactory results
[25] | Testing of a lidar system for outdoor unmanned ground vehicles in adverse weather conditions, including rain, dust, and smoke | Signal attenuation due to scattering, reflection, and absorption of light, along with a reduced detection distance, is identified
[26] | Analysis of the effects of fog on lidar-based visibility distance estimation for autonomous vehicles on roads | The visibility distances obtained by lidar systems are in the same range as those obtained by human observers; a correlation between the decrease in optical power and the decrease in visual acuity in fog conditions is established
[27] | Analysis of the performance of a time-of-flight (ToF) lidar in fog environments of different densities | The relations between ranging performance and different types of fog are investigated, and a machine learning-based model is developed to predict the minimum fog visibility that allows successful ranging
[28] | Application of a Kalman filter and nearby point cloud denoising to reconstruct lidar measurements from autonomous vehicles in adverse weather conditions, including rain, thick smoke, and their combination | Experiments in a 2 × 2 × 0.6 m space show a 10–30% improvement in reconstructing the normal-weather 3D signal from lidar data acquired in adverse weather
[29] | Analysis of the influence of adverse environmental factors on the ToF lidar detection range, considering the 905 nm and 1550 nm laser wavelengths | A significant difference in the performance of the two laser types is identified; a 905 nm laser is recommended for poor environmental conditions
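The adverse-weather studies in [24–29] all trace back to the same physics: fog and rain attenuate the laser pulse and shrink the usable detection range. As a rough illustration only (not the model of any cited paper), a single-scattering range equation with Beer–Lambert extinction can be sketched as follows; the transmit power, target reflectivity, aperture area, detection threshold, and extinction coefficients are assumed placeholder values.

```python
import math

def received_power(p_t, reflectivity, aperture_area, r, alpha):
    """Simplified single-scattering lidar range equation with
    Beer-Lambert atmospheric extinction:
    P_r = P_t * rho * A / (pi * R^2) * exp(-2 * alpha * R).
    The factor 2 accounts for the two-way path of the pulse."""
    return (p_t * reflectivity * aperture_area
            / (math.pi * r ** 2) * math.exp(-2.0 * alpha * r))

def max_range(p_t, reflectivity, aperture_area, alpha, p_min,
              r_step=0.1, r_limit=300.0):
    """Largest range at which the return still exceeds the detector
    threshold p_min, found by simple stepping up to r_limit."""
    r = r_step
    last_ok = 0.0
    while r <= r_limit:
        if received_power(p_t, reflectivity, aperture_area, r, alpha) >= p_min:
            last_ok = r
        r += r_step
    return last_ok

# Illustrative extinction coefficients [1/m], not measured values:
clear_air = 1e-4
dense_fog = 0.05
print(max_range(1.0, 0.1, 1e-3, clear_air, 1e-10))
print(max_range(1.0, 0.1, 1e-3, dense_fog, 1e-10))
```

With these placeholder numbers the detection range collapses from the instrument limit down to a few tens of meters in dense fog, which is the qualitative effect [24–27] quantify experimentally.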
Road detection
[30] | Deep learning road detection based on simple and fast fully convolutional neural networks (FCNs) using only lidar data, where a top-view representation of the point cloud is considered, thus reducing road detection to a single-scale problem | High accuracy of road segmentation in all lighting conditions, accompanied by fast inference suitable for real-time applications
[31] | Automatic traffic lane detection based on roadside lidar data of vehicle trajectories, where the proposed method consists of background filtering and road boundary identification | Two case studies confirm the method's ability to detect lane boundaries on curvy roads while remaining unaffected by the presence of pedestrians
[32] | Deep learning road detection based on FCNs using camera and lidar data fusion | High system accuracy is achieved by the multimodal approach, in contrast to the poor detection results obtained using only a camera
[33] | Road detection based on lidar data as input to a system integrating building information modeling (BIM) and a geographic information system (GIS) | Accurate road detection is achieved by lidar data classification, but additional manual adjustments are still required
[34] | Lidar-histogram method for detecting roads and obstacles based on the linear classification of obstacle projections with respect to the line representing the road | Promising results in urban and off-road environments, with the proposed method being suitable for real-time applications
[35] | Road-segmentation-based pavement edge detection for autonomous vehicles using 3D lidar sensors | The accuracy, robustness, and fast processing time of the proposed method are demonstrated on experimental data acquired by a self-driving car
[36] | An automated algorithm based on the parametric active contour model for detecting road edges from terrestrial mobile lidar data | Tests on various road types show satisfactory results, with a dependence on the algorithm's parameter settings
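Several of the lidar-only road detection methods above, [30] in particular, start by rasterizing the point cloud into a top-view (bird's-eye-view) grid so that a standard FCN can segment it like an image. The sketch below is a generic version of that preprocessing step; the grid extents, cell size, and channel choice (max height and point count) are illustrative assumptions, not the cited pipeline.

```python
import numpy as np

def point_cloud_to_topview(points, x_range=(0.0, 40.0),
                           y_range=(-10.0, 10.0), cell=0.2):
    """Rasterize an (N, 3) lidar point cloud (x forward, y left, z up)
    into two top-view grids: maximum height per cell and point count
    per cell. Points outside the grid extents are discarded."""
    nx = int(round((x_range[1] - x_range[0]) / cell))
    ny = int(round((y_range[1] - y_range[0]) / cell))
    height = np.full((nx, ny), -np.inf)
    count = np.zeros((nx, ny), dtype=np.int32)

    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for i, j, z in zip(ix[valid], iy[valid], points[valid, 2]):
        height[i, j] = max(height[i, j], z)
        count[i, j] += 1
    height[count == 0] = 0.0  # neutral value for empty cells
    return height, count
```

The two channels can then be stacked and fed to an FCN as a fixed-size single-scale input, which is what makes the top-view formulation attractive for real-time inference.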
Object recognition on and along the road
[37] | Visual localization of an autonomous vehicle in an urban environment based on a 3D lidar map and a monocular camera | The possibility of using a single monocular camera for visual localization on a 3D lidar map is confirmed, achieving performance close to state-of-the-art lidar-only vehicle localization while using a much cheaper sensor
[38] | Probabilistic localization of an autonomous vehicle combining lidar data with Kalman-filtered Global Navigation Satellite System (GNSS) data | Improved localization, with smooth transitions from GNSS data to lidar and map data
[39] | Generation of high-definition 3D maps based on the integration of autonomous vehicle sensor data, including GNSS, inertial measurement unit (IMU), and lidar data | Existing autonomous vehicle sensor systems can be successfully utilized to generate high-resolution maps with centimeter-level accuracy
[40] | Vehicle localization consisting of curb detection based on ring compression analysis and least trimmed squares, road marking detection based on road segmentation, and Monte Carlo localization | Experimental tests in urban environments show high detection accuracy, with lateral and longitudinal errors of less than 0.3 m
[41] | Vehicle localization based on a free-resolution probability distributions map (FRPDM) using lidar data | Efficient object representation with a reduced map size and good position accuracy in urban areas
[42] | Optimal vehicle pose estimation based on an ensemble learning network utilizing spatial tightness and time series obtained from lidar data | Improved pose estimation accuracy, even on curved roads
[43] | Autonomous vehicle localization based on an IMU, a wheel encoder, and lidar odometry | Accurate and high-frequency localization results in diverse environments
[44] | Automatic recognition of road markings from mobile lidar point clouds | Good performance in recognizing road markings; further research is needed for more complex markings and intersections
[45] | Development and implementation of a strategy for the automatic extraction of road markings from mobile lidar data, based on two-dimensional (2D) georeferenced feature images, modified inverse distance weighted (IDW) interpolation, weighted neighboring difference histogram (WNDH)-based dynamic thresholding, and multiscale tensor voting (MSTV) | Experimental tests in a subtropical urban environment show more accurate and complete recognition of road markings with fewer errors
[46] | Automatic detection of traffic signs, road markings, and pole-shaped objects | Experimental tests on a two-kilometer-long road in an urban area show that the proposed method is suitable for detecting individual signs, although it has difficulties distinguishing multiple signs mounted on the same structure
[47] | Recognition of traffic signs for lidar-equipped vehicles based on the latent structural support vector machine (SVM)-based weakly supervised metric learning (WSMLR) method | Experiments indicate the effectiveness and efficiency of the proposed method for both single-view and multi-view sign recognition
[48] | Automatic highway sign extraction based on multiple filtering and clustering of mobile lidar point cloud data | Tests conducted on three different highways show that the proposed straightforward method achieves high accuracy and can be efficiently used to create an accurate inventory of traffic signs
[49] | Pedestrian and vehicle detection and tracking at intersections using roadside lidar data, density-based spatial clustering of applications with noise (DBSCAN), a backpropagation artificial neural network (BP-ANN), and a Kalman filter | Experimental tests with a 16-laser lidar show an accuracy above 95% and a detection range of about 30 m
[50] | Vehicle tracking using roadside lidar data and a method consisting of background filtering, lane identification, and vehicle position and speed tracking | Satisfactory vehicle detection and speed tracking in experimental case studies, with a detection range of about 30 m; difficulties in vehicle type identification
[51] | Vehicle detection from Velodyne 64E 3D lidar data using a 2D FCN, where the data are transformed into 2D point maps | An end-to-end (E2E) detection method with excellent performance and the possibility of further improvement by including more training data and designing deeper networks
[52] | Convolutional neural network (CNN)-based multimodal vehicle detection using three data modalities from a color camera and 3D lidar (dense-depth map, reflectance map, and red-green-blue (RGB) image) | The proposed data fusion approach provides higher accuracy than the individual modalities on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset
[53] | Camera and lidar data fusion for pedestrian detection using CNNs, where lidar data features (horizontal disparity, height above ground, and angle) are fused with RGB images | Tests on the KITTI pedestrian detection dataset show that the proposed approach outperforms the one using only camera imagery
[54] | CNN-based classification of objects using camera and lidar data from autonomous vehicles, where lidar point cloud data are upsampled and converted into a pixel-level depth feature map, which is then fused with the RGB images and fed to a deep CNN | Results obtained on a public dataset support the effectiveness and efficiency of the data fusion and object classification strategies, with the proposed approach outperforming those using only RGB or depth data
[55] | Real-time detection of non-stationary (moving) objects based on a CNN using intensity data in automotive lidar SLAM | It is demonstrated that non-stationary objects can be detected using CNNs trained with 2D intensity grayscale images in a supervised or unsupervised manner, while achieving improved map consistency and localization results
[56] | Target detection for autonomous vehicles in complex environments based on a dual-modal instance segmentation deep neural network (DM-ISDNN) using camera and lidar data fusion | The experimental results show the robustness and effectiveness of the proposed approach, which outperforms competing methods
[57] | Road segmentation, obstacle detection, and vehicle tracking based on an encoder-decoder FCN, an extended Kalman filter, and camera, lidar, and radar sensor fusion for autonomous vehicles | Experimental results indicate that the proposed affordable, compact, and robust fusion system outperforms benchmark models and can be used efficiently in real time for perceiving the vehicle's environment
[58] | CNN-based real-time semantic segmentation of 3D lidar data for autonomous vehicle perception, based on the projection method and the adaptive break point detector method | Practical implementation and satisfactory speed and accuracy of the proposed method
[59] | E2E self-driving algorithm using a CNN that predicts the vehicle's longitudinal and lateral control values from input camera images and 2D lidar point cloud data | Experimental tests in complex real-world urban environments show promising results
[60] | Pedestrian recognition and tracking for autonomous vehicles using an SVM classifier and Velodyne 64 lidar data, generating alarms when pedestrians are detected on the road or close to curbs | The validity of the method was confirmed on an autonomous vehicle platform in two scenarios: with the vehicle stationary and while driving
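Several of the roadside-lidar pipelines above, notably [48] and [49], group foreground points into object candidates by clustering before classification and tracking; [49] uses DBSCAN specifically. The compact from-scratch 2D version below conveys the idea; the eps and min_samples values are illustrative, and a real system would use an optimized implementation such as scikit-learn's rather than this O(n^2) sketch.

```python
import math

def dbscan(points, eps=0.8, min_samples=3):
    """Label 2D points with cluster ids (0, 1, ...) or -1 for noise.
    Brute-force neighbor search; fine for a sketch, not for full scans."""
    n = len(points)
    labels = [None] * n  # None = not yet visited

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if math.hypot(xi - xj, yi - yj) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_samples:
            labels[i] = -1  # provisionally noise
            continue
        cluster += 1        # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point
            if labels[j] is not None:
                continue             # already assigned; do not expand
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_samples:
                queue.extend(j_seeds)  # j is also a core point: expand
    return labels
```

Each resulting cluster would then be passed to the downstream classifier (a BP-ANN in [49]) and associated across frames by a Kalman filter for tracking.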