Abstract
Collisions arising from lane departures contribute to traffic accidents causing millions of injuries and tens of thousands of casualties per year worldwide. Many related studies have shown that single-vehicle lane departure crashes, in which a vehicle drifts out of the roadway, account for a large share of road traffic deaths. Hence, automotive safety has become a concern for road users, as many road casualties occur because drivers misjudge the vehicle's path. This paper proposes a vision-based lane departure warning framework for lane departure detection under daytime and night-time driving environments. The traffic flow and road surface conditions of both urban roads and highways in the city of Malacca are analysed in terms of lane detection rate and false positive rate. The proposed vision-based lane departure warning framework includes lane detection followed by the computation of a lateral offset ratio. The lane detection is composed of two stages: pre-processing and detection. In the pre-processing stage, colour space conversion, region of interest extraction, and lane marking segmentation are carried out. In the subsequent detection stage, the Hough transform is used to detect lanes. Lastly, the lateral offset ratio is computed to yield a lane departure warning based on the detected X-coordinates of the bottom end-points of each lane boundary in the image plane. For lane detection and lane departure detection performance evaluation, real-life datasets for both urban roads and highways in daytime and night-time driving environments, under varying traffic flows and road surface conditions, are considered. The experimental results show that the proposed framework yields satisfactory results: on average, a lane detection rate of 94.71% and a lane departure detection rate of 81.18% were achieved using the proposed framework. In addition, benchmark lane marking segmentation methods and the Caltech lanes dataset were also considered for comparative evaluation of lane detection. Challenges to lane detection and lane departure detection, such as worn lane markings, low illumination, arrow signs, and occluded lane markings, are highlighted as contributors to the false positive rates.
Keywords: Computer science, Vision-based, Lane departure detection, Lateral offset ratio, Lane detection, Lane departure warning framework
1. Introduction
Crashes due to lane departures account for the majority of highway fatalities, causing hundreds of deaths, thousands of injuries, and billions of dollars in losses each year. It is reported in Jacobs and Aeron-Thomas (1999) and Jacobs et al. (2000) that Malaysia has been ranked as the country with the highest fatalities per 100,000 of population in the world since 1996. From a global point of view, in 1999 the regional distribution of the 750,000 annual fatalities placed almost half of them in Asia (Jacobs and Aeron-Thomas, 1999; Jacobs et al., 2000). A similar proportion of road fatalities in Asia holds for 2014 (Al-Madani, 2018). Furthermore, the road fatality statistics found in Al-Madani (2018) and Jacobs and Aeron-Thomas (1999) show an increasing trend in worldwide traffic fatalities, which agrees with the predicted future development of road fatalities in different regions of the world (Kopits and Cropper, 2005; Wegman, 2017). Particularly in the region of South Asia, the predicted number of road fatalities for 2020 is more than 3.5 times higher than the total road fatalities recorded in 1990.
In response to this problem, systems such as Lane Keeping Assistance (LKA) (Yoshida et al., 2015), Lane Departure Warning (LDW) (Haloi and Jayagopi, 2015; Narote et al., 2018; Viswanath et al., 2016), Lane Following (LF) (Jung and Kelber, 2004; Jung and Kelber, 2005b), Lateral Control (LC), Intelligent Cruise Control (ICC), Collision Warning (CW) (Meng and Spence, 2015), and Autonomous Vehicle Guidance (Vacek et al., 2007) have been pursued by vehicle manufacturers to enhance vehicle safety.
In general, such driver assistance systems can be divided into two classes. The first consists of those without control units, such as LDW and collision warning (CW). These passive safety systems only collect information and send a warning or reminder when necessary; they are usually implemented in a Driving Assistant System (DAS) rather than in the vehicle as part of autonomous vehicle guidance. The second class is designed with feedback, namely an active safety system that functions as a controller with the aim of affecting the vehicle's behaviour.
In the present paper, a passive type of LDW is proposed: a vision-based lane departure warning (VBLDW) framework. VBLDW includes vision-based lane detection (VBLD) followed by the computation of the lateral offset ratio (LOR). VBLD is composed of a pre-processing stage followed by a detection stage. The pre-processing stage of VBLD is made up of colour space conversion, region-of-interest extraction, and lane marking segmentation. The finite impulse response saturation autothreshold (FIRSA) lane marking segmentation method used in this paper is implemented by a 2D-Finite impulse response (2D-FIR) filter followed by a saturation function and autothreshold. The detection stage of VBLD is formed by the Hough transformation and peak detection, reverse Hough transformation, and the drawing of the lines detected in the original image. For performance evaluation of VBLDW, clips from real-life datasets have been used to evaluate the lane boundary detection rate and the lane departure detection rate.
The rest of this paper is organized as follows. Section 2 begins with the introduction of VBLDW, which presents VBLD in subsection 2.1 and the LOR in subsection 2.2. The experimental test bed and real-life datasets are presented in Section 3, where the camera used in this work is described in subsection 3.1 and the real-life datasets are presented in subsection 3.2. The experimental results and a discussion are presented in Section 4. Some conclusions and some avenues for future research are presented in Section 5.
2. Methodology
As an essential part of an Intelligent Transport System (ITS), LDW plays a vital role in reducing road fatalities by giving a lane departure warning to drivers for any unintended lane departure. The proposed VBLDW is based on VBLD. ‘Lane departure’ means that a travelling vehicle is departing from its current lane or has a tendency to go across a lane boundary. As a result, the driver or monitoring system will see only one lane boundary moving horizontally towards the middle in the front view. Based on this fact, a lane departure can be determined by checking the horizontal position of each lane boundary (Lu, 2015), which corresponds to the X-coordinate of the bottom end-point of that lane boundary in the image plane. A warning is issued by the system when the vehicle comes within a defined distance from the lane boundary. Fig. 1 shows a flow chart involving VBLD and the proposed VBLDW. In this section, the method of VBLD is introduced first in subsection 2.1, which is then followed by a description of the LOR for the proposed VBLDW in subsection 2.2.
Figure 1.

Flow chart for vision-based lane detection and vision-based lane departure warning.
2.1. Vision-based lane detection
VBLD was developed in MATLAB Simulink. Basically, it has two stages: pre-processing and detection. The pre-processing stage has three main components, applied in order: colour space conversion, Region of Interest extraction, and FIRSA lane marking segmentation. The pre-processing stage uses the input image (Step 1 in Fig. 1) from a Logitech C525 camera (Logitech, 2018), which is fixed at the centre of the front windscreen to capture road footage (Sternlund et al., 2017; Sudhakaran, 2017). For simplicity, the camera is assumed to be set up so as to make the baseline horizontal, which ensures that the horizon is in the image and is parallel to the X-axis of the image plane. In this paper, the image input to the pre-processing stage is assumed to be a Red, Green, Blue (RGB) colour image.
The initial step is to convert the input RGB colour image to greyscale (Step 2 in Fig. 1). The reason for this is to reduce the processing time by lessening the amount of data to be processed. Equation (1) represents the function which is to be applied to the RGB image for this conversion. The original image from frame 1 of clip #1 is shown in Fig. 2, while Fig. 3 shows the converted greyscale image.
$$I_{grey}(x, y) = 0.299\,R(x, y) + 0.587\,G(x, y) + 0.114\,B(x, y) \qquad (1)$$
where
- $R(x, y)$ - red component of the image,
- $G(x, y)$ - green component of the image,
- $B(x, y)$ - blue component of the image.
Figure 2.

Original frame 1 image from clip #1.
Figure 3.

RGB to greyscale conversion of the original frame 1 image from clip #1.
A pre-processing stage is employed because videos taken in the real world contain significant outliers other than the actual lane markings. The images need to be properly prepared, with outliers filtered out, depending on the requirements of different computer vision systems. This can be accomplished by extracting a Region of Interest (ROI) and then applying lane marking segmentation. Fig. 4 shows the greyscale image after extracting the ROI from the original image. One can see that the bottom half of the image has been selected as the ROI (Narote et al., 2018) (Step 3 in Fig. 1). Usually, the upper region of the image is considered to contain outliers due to unwanted features such as vehicles, road signs, and roadside trees. Since only the bottom half of the frame is retained, the resolution of the ROI is half that of the input image.
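As a minimal illustration of Steps 2 and 3 (colour space conversion and ROI extraction), a short MATLAB sketch is given below. The input file name is a placeholder and is not part of the real-life datasets.

```matlab
% Minimal sketch of Steps 2-3: greyscale conversion and ROI extraction.
% 'frame0001.png' is a placeholder file name, not one of the dataset files.
rgbFrame  = imread('frame0001.png');   % RGB input frame
greyFrame = rgb2gray(rgbFrame);        % colour space conversion, Eq. (1)

% Keep only the bottom half of the frame as the Region of Interest,
% since the upper half mostly contains outliers (sky, vehicles, trees).
numRows = size(greyFrame, 1);
roi = greyFrame(floor(numRows/2)+1:end, :);

imshow(roi);                           % inspect the extracted ROI
```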
Figure 4.

Region of Interest in greyscale of the original frame 1 image from clip #1.
The flow chart of the FIRSA lane marking segmentation method has three main blocks: a 2-Dimensional Finite Impulse Response (2-D FIR) filter, a Saturation block, and an Autothreshold block, as illustrated in Fig. 5. It is applied in Step 4 of VBLD, as shown in Fig. 1. The 2-D FIR filter is a basic filter for image processing, and is used to support edge extraction (Chandrasekar and Ryu, 2016) and noise removal (Shah, 1997). Edge extraction is essential in image processing because edges represent a significant portion of the characteristics contained in an image, such as, in the context of lane detection, the lane markings painted on the road surface. In this paper, a 2-D FIR filter (Terrell and Simpson, 1986) is used to extract these lane markings. The 2-D FIR edge extraction is designed for regions where the light intensity changes slowly. It can determine an edge and smooth the noise in a noisy image, simultaneously. It is also computationally more efficient than the popular Laplacian of Gaussian method (Shah, 1997).
Figure 5.
Flow chart of Finite Impulse Response Saturation Autothreshold lane marking segmentation.
As in Terrell and Simpson (1986), a 2-D FIR filtering operation as a spatial convolution may be carried out using a neighbourhood averaging method, in which each input pixel value $f(x, y)$ is replaced by the output pixel value $g(x, y)$, as shown in Equation (2). In Equation (2), A denotes the filter kernel and ⁎ denotes the operation of convolution. The edge detection filter kernel in this case is a single-row kernel, and its pixel values are as shown in Equation (3).
$$g(x, y) = f(x, y) \ast A = \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} A(i, j)\, f(x - j,\, y - i) \qquad (2)$$
| (3) |
where the output coordinates $(x, y)$ span the full output size $(M + m - 1) \times (N + n - 1)$ of the convolution; replicate padding is applied for the reference pixels outside the image boundary. Here $g(x, y)$ is the filtered image, $f(x, y)$ is the ROI greyscale image, $A$ is the filter kernel, $(i, j)$ are the discrete spatial coordinates of the filter kernel, $M \times N$ are the original image dimensions, and $m \times n$ are the filter kernel dimensions. Suppose that the filter kernel pixels are indexed as shown in Fig. 6 and every pixel of the filter kernel satisfies Equation (3). Since the filter kernel in Equation (3) is a single-row filter kernel, Equation (2) can be reduced to Equation (4).
$$g(x, y) = \sum_{j=0}^{n-1} A(0, j)\, f(x - j,\, y) \qquad (4)$$
Figure 6.
Discrete spatial coordinates of the filter kernel.
Figs. 7a-7b show the input ROI greyscale image and its surface plot for frame 1 of clip #1, respectively. An irregularity is observed in the surface plot of the input ROI greyscale image, as shown in Fig. 7b, due to unwanted noise appearing on the road surface and roadside. Still, the lane markings appear as white pixels in Fig. 7a and translate into peaks along the Z-axis of the surface plot in Fig. 7b. In order to remove the unwanted noise pixels appearing in the input ROI greyscale image and distinctly extract the lane markings, the 2-D FIR filter is applied to the input ROI greyscale image. Figs. 7c-7d show the filtered image and its surface plot for frame 1 of clip #1, respectively. From these, it can be seen that the lane markings' white pixels in Fig. 7c clearly stand out from the rest of the image pixels, which translates into distinct peaks above the flat reference surface at zero intensity in Fig. 7d. However, non-positive distinct peaks are also observed in the surface plot of the filtered image in Fig. 7d due to the filter kernel used in the 2-D FIR. Hence, in order to remove the non-positive distinct peaks appearing in Fig. 7d, a saturation is applied to the filtered image, as shown in the flow chart of the FIRSA lane marking segmentation method in Fig. 5.
Figure 7.
Images (a), (c), and (e) represent the input Region of Interest greyscale image, filtered image, and saturated image for frame 1 of clip #1, respectively. Images (b), (d), and (f) represent the surface plot of the greyscale image, the surface plot of the filtered image, and the surface plot of the saturated image for frame 1 of clip #1, respectively.
The output signal of the 2-D FIR filter is connected to a saturation block, as shown in Fig. 5, which produces an output signal equal to the input signal bounded to an upper saturation value of 1 and a lower saturation value of 0. The specified upper and lower limits are used to saturate the unwanted range of signals (the negative filter responses). Figs. 7e-7f show the saturated image and the surface plot of the saturated image for frame 1 of clip #1, respectively. It can be seen that the non-positive distinct peaks have been saturated to the reference intensity value of 0. The saturated signal is then connected to the autothreshold block, as shown in Fig. 5.
The autothreshold block in the flow chart of the FIRSA lane marking segmentation method, as shown in Fig. 5, is a binarization step, which is required to highlight the lane markings in the filtered image. The core problem of binarization is how to determine the optimal threshold: if the threshold is too large, lane edge points will be missed, whereas if it is too small, redundant information will be detected. The FIRSA lane marking segmentation method employs Otsu thresholding for the binarization (Otsu, 1979). In Otsu thresholding, the histogram of the image pixel values is examined so as to choose a threshold automatically. The idea of Otsu thresholding is to search for two peaks, one representing foreground pixel values and one representing background pixel values, and to choose a point between these two peaks as the threshold value. The filtered frame 1 of clip #1 is chosen and Otsu thresholding is applied to obtain a normalized threshold value. The normalized threshold value (red solid line) is then plotted together with the histogram of the filtered frame 1 of clip #1, as shown in Fig. 8. The thresholded image is shown in Fig. 9.
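To make the chain concrete, a hedged MATLAB sketch of the FIRSA blocks in Fig. 5 follows, continuing from the ROI extracted above. The exact single-row kernel of Equation (3) is not reproduced here, so the kernel used below is only an illustrative horizontal edge kernel, not the actual FIRSA coefficients.

```matlab
% Hedged sketch of FIRSA lane marking segmentation (Fig. 5).
% The [-1 0 1] kernel is a placeholder; the actual single-row kernel of
% Eq. (3) may use different coefficients.
roiD = im2double(roi);                      % work with intensities in [0, 1]

kernel   = [-1 0 1];                        % placeholder single-row FIR kernel
filtered = imfilter(roiD, kernel, 'replicate', 'conv', 'full');   % 2-D FIR block

% Saturation block: bound the filter response to [0, 1], removing the
% non-positive peaks discussed for Fig. 7d.
saturated = min(max(filtered, 0), 1);

% Autothreshold block: Otsu's method selects the normalized threshold.
level   = graythresh(saturated);
lanesBW = imbinarize(saturated, level);     % binarized lane-marking image
```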
Figure 8.

Histogram and a normalized threshold (red solid line) for the filtered frame 1 of clip #1.
Figure 9.
Thresholded image for filtered frame 1 of clip #1.
The Hough transform (HT) (Duda and Hart, 1972) is a technique that recognizes a specific configuration of points in an image, such as a line segment, curve, or other pattern. The fundamental principle is that the form sought may be expressed via a known function depending on a set of parameters. A particular instance of the form sought is therefore entirely specified by the values taken by this set of parameters.
For example, by taking Equation (5) as the representation of straight lanes, any straight lane is entirely specified by the values of the parameters $(m, c)$. Equivalently, if one takes a different type of representation, as in Equation (6), the straight lane is completely specified by the pair $(\rho, \theta)$.
$$y = m x + c \qquad (5)$$
$$\rho = x \cos\theta + y \sin\theta \qquad (6)$$
To illustrate the operation of the detection stage, edge detection is performed on frame 1 of clip #1, shown in Fig. 9, in order to find the left and right edges of the segmented lane markings in the binary image. In this paper, the standard Hough transformation is used for the edge detection of the lane markings (Step 5 in Fig. 1). Fig. 10 shows the result of the Hough transformation in the $(\rho, \theta)$ parameter plane for the frame 1 image of clip #1. The $\theta$ resolution is specified in radians and the $\rho$ resolution is configured to 1. The range of $\theta$ is set from −1.2217 radians to 1.1868 radians. Local maxima detection is applied to the parameter plane to extract the peaks that correspond to the left and right lane boundaries (Step 5 in Fig. 1). Fig. 10 shows the results of this local maxima detection, with the two highlighted red squares indicating the right and left lane boundaries. The configuration for the extraction of the local maxima is that two local maxima are sought within a fixed neighbourhood size, with the threshold set to 1.
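A hedged MATLAB sketch of this step, continuing from the binarized image of the previous sketch, is shown below. The θ range mirrors the radian range quoted above (roughly −70° to 68°); the 0.5° θ step and the use of the default neighbourhood size are assumptions, since those values are not reproduced in the text.

```matlab
% Sketch of Step 5: standard Hough transform and peak (local maxima)
% detection on the binarized lane-marking image.
% The theta range of about -70 to 68 degrees matches the quoted radian
% range; the 0.5-degree step is an assumption.
[H, theta, rho] = hough(lanesBW, 'Theta', -70:0.5:68, 'RhoResolution', 1);

% Two peaks are sought, one per lane boundary, with a threshold of 1 so
% that every non-empty accumulator cell is a candidate peak.
peaks = houghpeaks(H, 2, 'Threshold', 1);

peakTheta = theta(peaks(:, 2));   % theta of the two lane boundaries (degrees)
peakRho   = rho(peaks(:, 1));     % rho of the two lane boundaries (pixels)
```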
Figure 10.

Hough transformation and Hough peaks (two red squares) in parameter plane for frame 1 of clip #1.
A reverse Hough transformation is carried out on the Hough peaks in order to display the detected lane boundaries for frame 1 of clip #1 (Step 6 in Fig. 1). This step involves identifying the Cartesian coordinates of the intersections between the reference image boundary lines and the lines described by the constant pairs $(\rho, \theta)$. Fig. 11 shows an example of Line 1 and Line 2 each intersecting the boundaries of the reference image at two points. The Cartesian coordinates of these intersection points can be calculated from Equation (6). Fig. 12 shows the results of the reverse Hough transformation applied to the ROI greyscale image for frame 1 of clip #1 as the ground truth. The image boundary intersection points are highlighted with black ‘X’ marks and the solid black lines represent the Hough lines, as shown in Fig. 12. Both Hough lines are in line with the ground truth for the left and right lane boundaries: one Hough peak corresponds to the left lane boundary and the other to the right lane boundary.
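The conversion from a Hough peak back to Cartesian end-points can be sketched as below, by intersecting the line of Equation (6) with the top and bottom rows of the ROI image; it assumes the detected lines are not horizontal (cos θ ≠ 0) and continues from the previous sketch.

```matlab
% Sketch of Step 6: recover Cartesian end-points of each detected line
% inside the ROI from its (rho, theta) pair via Eq. (6),
% rho = x*cos(theta) + y*sin(theta).
roiRows = size(lanesBW, 1);                    % ROI height in pixels
for k = 1:numel(peakRho)
    t = deg2rad(peakTheta(k));                 % hough() returns theta in degrees
    r = peakRho(k);
    % x-coordinates where the line crosses the top (y = 1) and the
    % bottom (y = roiRows) rows of the ROI; assumes cos(t) ~= 0.
    xTop    = (r - 1       * sin(t)) / cos(t);
    xBottom = (r - roiRows * sin(t)) / cos(t);
    fprintf('Boundary %d: top x = %.1f, bottom x = %.1f\n', k, xTop, xBottom);
end
```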
Figure 11.
Example of intersection of Hough lines.
Figure 12.

Hough lines of the Region of Interest greyscale image for frame 1 of clip #1.
Because the bottom half of the image was selected as the ROI, the Cartesian coordinates of the intersection points must be offset vertically by another 90 pixels to map them back onto the original image. The two end-points of the left lane boundary are linked with an overlaid red line that corresponds to the left lane boundary (Step 7 in Fig. 1), and the two end-points of the right lane boundary are linked with another overlaid red line that corresponds to the right lane boundary (Step 7 in Fig. 1). Both overlaid red lines show a correct lane detection for the original frame 1 image of clip #1, as shown in Fig. 13.
Figure 13.

Correct lane detection for frame 1 of clip #1.
2.2. Lateral offset ratio
The lateral offset of the vehicle with respect to the centre of the lane has been used to predict lane departure, as in Risack et al. (2000) and Marino et al. (2009). However, the existing techniques depend on some kind of camera calibration procedure to obtain the offset, whereas the proposed VBLDW framework does not require any intrinsic or extrinsic camera parameters (Jung and Kelber, 2005a). In this paper, both X-coordinates of the bottom end-points ($X_L$ and $X_R$) of the lane boundaries in the image plane, as described in subsection 2.1, are analysed in each frame for the computation of the LOR (Step 8 in Fig. 1):
$$\mathrm{LOR} = \frac{\min\left(\left|X_m - X_L\right|,\ \left|X_R - X_m\right|\right) - T_w X_m}{T_w X_m} \qquad (7)$$
where $T_w$ is the lane departure warning threshold, set to a constant value of 0.8, $X_L$ is the detected left bottom end-point of the left lane boundary, $X_R$ is the detected right bottom end-point of the right lane boundary, and $X_m$ is one-half of the horizontal width of the image plane. ISO 17361:2007 (International Organization for Standardization [ISO], 2007) does not specify how early the warning should be issued before crossing the lane. However, a warning threshold reference is provided in ISO 17361:2007, which states that the reference warning threshold is placed roughly at 80% of the lane width from the centre of the lane. The absolute value $\left|X_m - X_L\right|$ is the horizontal pixel distance between the detected $X_L$ and $X_m$ in the image plane, and the absolute value $\left|X_R - X_m\right|$ is the horizontal pixel distance between the detected $X_R$ and $X_m$. The min function selects the smaller of these two distances for each frame. The difference between this minimum and $T_w X_m$ indicates the horizontal pixel distance between the warning threshold and the detected $X_L$ or $X_R$. Hence, the LOR is defined as the ratio between this distance and the horizontal pixel distance $T_w X_m$ between the warning threshold and the image centre $X_m$. The projection of a road in the image plane for determining the LOR can be seen in Fig. 14.
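A minimal MATLAB sketch of Equation (7) and the warning decision is given below; the image width and end-point values are illustrative assumptions, not measurements from the datasets.

```matlab
% Hedged sketch of the lateral offset ratio, Eq. (7), and Step 9.
imWidth = 320;                    % assumed horizontal image width in pixels
xL = 40;                          % illustrative left-boundary bottom end-point
xR = 300;                         % illustrative right-boundary bottom end-point

Tw = 0.8;                         % warning threshold (ISO 17361:2007 reference)
xM = imWidth / 2;                 % one-half of the horizontal image width

dMin = min(abs(xM - xL), abs(xR - xM));   % distance to the nearer boundary
LOR  = (dMin - Tw * xM) / (Tw * xM);      % ranges from -1 up to 0.25

if LOR <= 0
    disp('Lane Departure');       % warning issued in the lane departure zone
end
```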
Figure 14.
Projection of a road in image plane.
If the vehicle is travelling parallel to the lane boundaries and exactly at the centre of the lane, then the LOR, as a function of the left bottom end-point of the left lane boundary $X_L$, the right bottom end-point of the right lane boundary $X_R$, one-half the horizontal width of the image plane $X_m$, and the warning threshold $T_w$, will be constant and equal to 0.25. If the vehicle is travelling parallel to the lane boundaries but displaced from the centre of the lane, the LOR will still be constant, but with a value smaller than 0.25 (in fact, this value can be close to −1 if the vehicle is close to a lane boundary). In this case, no lane departure warning signal should be triggered, because the vehicle does not appear to be leaving its lane. If the vehicle is travelling towards a lane boundary from the centre of the lane, the LOR will decrease from 0.25 towards −1. In this case, a lane departure warning signal should be triggered, because the vehicle appears to be leaving its lane. Hence, the ‘Lane Departure’ text is displayed in the sequence of images in video clip #5, as shown in Figs. 15b-15h.
Figure 15.
Sequence of images in video clip #5 demonstrating the detected lane departure.
Table 1 shows the lane departure identification using LOR (Step 9 in Fig. 1). LOR ranges from −1 to 0.25. The range of values of LOR for a lane departure zone is −1≤ LOR ≤0. In addition, a LOR value equal to zero indicates the vehicle is crossing the warning threshold. Hence, a lane departure warning signal is issued. The range of values of LOR for the no lane departure zone is 0< LOR ≤0.25 with the maximum value of LOR indicating that the vehicle is located at the centre of the lane. Hence, no lane departure warning signal is issued. The minimum value of LOR =−1 indicates that the vehicle is crossing one of the lane boundaries. Hence, a lane departure warning signal is issued.
Table 1.
Lane departure identification using lateral offset ratio.
| LOR | Lane Departure Identification |
|---|---|
| LOR = 0.25 | No lane departure |
| 0< LOR ≤0.25 | No lane departure |
| LOR =0 | Lane Departure |
| −1≤ LOR ≤0 | Lane departure |
| LOR =−1 | Lane Departure |
Fig. 16 shows the LOR and the lane departure detection (LOR ≤ 0) for the sequence of images, at intervals of 20 frames, of video clip #5 used in Fig. 15. Fig. 16 illustrates the sequence of images of a vehicle departing to the right. At frame 5600 of clip #5, as shown in Fig. 15a, the LOR has a value of 0.1875, which reflects the fact that the vehicle is still within the no lane departure zone (LOR > 0). Hence, no ‘Lane Departure’ text is displayed in frame 5600. At frame 5620 of clip #5, as shown in Fig. 15b, the LOR is equal to −0.0234, which reflects the fact that the vehicle has moved from the centre of the lane and crossed the warning threshold (LOR ≤ 0). Hence, the text ‘Lane Departure’ is displayed in frame 5620. For frames 5640 and 5660, as shown in Figs. 15c-15d, a decreasing trend of LOR values (−0.3281 to −0.7266) is observed in Fig. 16, which reflects the fact that the vehicle is about to cross the lane boundary and is still in the lane departure zone (LOR ≤ 0). Hence, the text ‘Lane Departure’ is displayed in frames 5640 and 5660.
Figure 16.

Lateral offset ratio and lane departure detection (LOR ≤0) for the sequence of images in video clip #5 as shown in Fig. 15.
For frames 5680 to 5720 in Figs. 15e-15g, an increasing trend of LOR (−0.8281 to −0.4688 to −0.1719) is observed in Fig. 16, which reflects the fact that the vehicle has just crossed the lane boundary and is still moving within the lane departure zone (LOR ≤ 0). Hence, the text ‘Lane Departure’ is displayed in frames 5680, 5700, and 5720. For frame 5740, as shown in Fig. 15h, the LOR is 0, which reflects the fact that the vehicle is crossing the warning threshold (LOR = 0). Hence, the text ‘Lane Departure’ is displayed in frame 5740. For frames 5760 and 5780, as shown in Figs. 15i-15j, the LOR is 0.0547 and 0.0781, respectively, which reflects the fact that the vehicle is located within the no lane departure zone (LOR > 0). Hence, no ‘Lane Departure’ text is displayed in frames 5760 and 5780.
3. Instrumentation
The proposed VBLDW has been implemented with MATLAB and Simulink, under the Windows 10 operating system, using an Intel processor with 4 cores @ 1.6 GHz and 4 GB RAM. Fig. 17 shows the flow chart of road footage acquisition in off-line mode, so that the video clip footage could be trimmed before running the proposed VBLDW simulation. The experimental test bed is shown in Fig. 18 and a schematic diagram of the camera position is shown in Fig. 19.
Figure 17.
Test bed for acquiring real-life datasets.
Figure 18.

The experimental setup.
Figure 19.

Attached camera position in sectional view.
3.1. Logitech C525 camera
The video sensing device used to collect the road footage was a Logitech C525 (Logitech, 2018). This camera was attached to the front windscreen at a height of 1.045 m above the ground and located at the centre of the windscreen, as shown in Fig. 19. The frame rate of the road footage is 30 frames per second (FPS). Since the frame size has an enormous impact on the total speed of the system, the image resolution was reduced after capture from the camera for real-time image processing.
3.2. Real-life datasets
The real-life datasets of road footage were acquired at the same frame rate (30 FPS) and trimmed into the clips numbered in Table 2 and Table 3 for daytime and night-time driving environments, respectively. The real-life datasets contain road footage of daytime and night-time driving environments on urban roads and a highway. Table 2 describes the captured daytime road footage clips on urban roads, except clip #1 on a highway, which contained multiple occurrences of lane departure. Road surface conditions such as worn lane markings, glare, reflections, wet surfaces, other road markings like arrow marks, and occluded lane markings were observed in the daytime driving environment.
Table 2.
Road footage descriptions for lane detection and lane departure detection in a daytime driving environment.
| Clip no. | Traffic | Lane marking type | Road surface condition | No. of frame | No. of lane |
|---|---|---|---|---|---|
| 1 (Em, 2018a) | Moderate | Solid, broken, straight | Occluded lane markings, worn lane markings | 1946 | 7784 |
| 2 (Em, 2018b) | Heavy | Solid, broken, straight, curved | Occluded lane markings, other road markings, worn lane markings | 3296 | 13184 |
| 3 (Em, 2018c) | Light | Solid, broken, straight, curved | Occluded lane markings, other road markings, wet, worn lane markings, reflections | 4496 | 13488 |
| 4 (Em, 2018d) | Moderate | Solid, broken, straight | Occluded lane markings, wet, worn lane markings, reflections | 1556 | 4668 |
| 5 (Em, 2019s) | Light | Solid, broken, straight, curved | Occluded lane markings, other road markings, glare, worn lane markings, reflections | 7049 | 22802 |
| 6 (Em, 2019t) | Very heavy | Solid, broken, straight, curved | Other road markings, reflections | 2400 | 8021 |
| 7 (Em, 2019u) | Moderate | Solid, broken, straight, curved | Other road markings, occluded lane markings, worn lane markings | 8400 | 28098 |
| 8 (Em, 2019v) | Moderate | Solid, broken, straight, curved | Other road markings, occluded lane markings, worn lane markings | 2550 | 8827 |
| 9 (Em, 2019w) | Heavy | Solid, broken, straight, curved | Other road markings, occluded lane markings, worn lane markings | 3750 | 12335 |
| 10 (Em, 2019a) | Moderate | Solid, broken, straight, curved | Other road markings, occluded lane markings | 2100 | 6300 |
| 11 (Em, 2019b) | Light | Solid, broken, straight | Other road markings, occluded lane markings | 2100 | 6549 |
| 12 (Em, 2019c) | Moderate | Solid, broken, straight, curved | Other road markings, occluded lane markings | 2520 | 9820 |
| 13 (Em, 2019d) | Light | Solid, broken, straight | Other road markings, occluded lane markings | 539 | 1617 |
| Total | 42702 | 143493 |
Table 3.
Road footage descriptions for lane detection and lane departure detection in a night-time driving environment.
| Clip no. | Traffic | Lane marking type | Road surface condition | No. of frame | No. of lane |
|---|---|---|---|---|---|
| 14 (Em, 2019e) | Moderate | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare, worn lane markings | 12719 | 50876 |
| 15 (Em, 2019f) | Light | Solid, broken, straight | Other road markings, occluded lane markings, night glare | 510 | 1530 |
| 16 (Em, 2019g) | Moderate | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare | 1829 | 5487 |
| 17 (Em, 2019h) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare | 1079 | 3237 |
| 18 (Em, 2019i) | Light | Solid, broken, straight, curved | Other road markings, night glare | 329 | 987 |
| 19 (Em, 2019j) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare, worn lane markings | 1739 | 5217 |
| 20 (Em, 2019k) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare | 239 | 717 |
| 21 (Em, 2019l) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare | 869 | 2607 |
| 22 (Em, 2019m) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare | 630 | 1890 |
| 23 Em (2019n) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare | 2070 | 6210 |
| 24 (Em, 2019o) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare | 900 | 2700 |
| 25 (Em, 2019p) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare, worn lane markings | 2550 | 7650 |
| 26 (Em, 2019q) | Light | Solid, broken, straight, curved | Other road markings, occluded lane markings, other road markings, night glare, worn lane markings | 720 | 2160 |
| 27 (Em, 2019r) | Moderate | Solid, broken, straight, curved | Other road markings, occluded lane markings, night glare, worn lane markings | 1079 | 3237 |
| Total | 27262 | 94505 |
Table 3 shows the descriptions of the captured night-time road footage clips on urban roads, except clip #14 on a highway, which contained multiple occurrences of lane departure. Road surface conditions such as worn lane markings, night glare, other road markings like arrow marks, and occluded lane markings during the night-time driving environment are also considered in the experimental investigation. The evaluation is done on a frame-by-frame basis (Cualain et al., 2012). In total, the real-life datasets contained approximately 69964 frames and 237998 lane boundaries under daytime and night-time driving environments.
4. Results & discussion
The effectiveness of VBLD and VBLDW is investigated in this section. The real-life datasets shown in Table 2 and Table 3 contain all three test scenarios, namely straight roads, curved roads, and false alarms. The performance of VBLD and VBLDW in lane detection and lane departure detection is evaluated under daytime and night-time driving environments. The lane detection results for VBLD are compared with existing lane detection algorithms reported in the literature, and the overall lane departure detection results for VBLDW are presented. The performance evaluation of lane detection and lane departure detection is summarised in subsection 4.1, while the comparison of lane detection results and the overall lane departure detection results for VBLDW are summarised in subsection 4.2.
4.1. Performance evaluation of lane detection and lane departure detection
The lane detection results under daytime and night-time driving environments are presented in Table 4 and Table 5, respectively. In order to evaluate the performance of VBLD, the number of lanes per frame, the number of detected lanes per frame, the number of correctly detected lanes per frame, and the number of false positive lanes per frame were manually counted on a frame-by-frame basis. Equations (8), (9), and (10) present the formulas used for the total number of detected lanes, the detection rate, and the false positive rate, respectively.
$$N_{\mathrm{TD}} = N_{\mathrm{CD}} + N_{\mathrm{FP}} \qquad (8)$$
$$\text{Lane detection rate} = \frac{N_{\mathrm{CD}}}{N_{\mathrm{TD}}} \times 100\% \qquad (9)$$
$$\text{False positive rate} = \frac{N_{\mathrm{FP}}}{N_{\mathrm{TD}}} \times 100\% \qquad (10)$$
where
- $N_{\mathrm{TD}}$ - total number of detected lanes,
- $N_{\mathrm{CD}}$ - total number of correctly detected lanes,
- $N_{\mathrm{FP}}$ - total number of false positive lanes.
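As a worked example using the clip #1 entries in Table 4 below: $N_{\mathrm{TD}} = 3754 + 138 = 3892$ detected lanes, giving a lane detection rate of $3754/3892 \times 100\% = 96.45\%$ and a false positive rate of $138/3892 \times 100\% = 3.55\%$.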
Table 4.
Lane detection results for VBLD in a daytime driving environment.
| Clip no. | No. of detected lane | No. of correctly detected lane | Lane detection rate, % | No. of false positive lane | False positive rate, % |
|---|---|---|---|---|---|
| 1 | 3892 | 3754 | 96.45 | 138 | 3.55 |
| 2 | 6592 | 5680 | 86.17 | 912 | 13.83 |
| 3 | 8992 | 8770 | 97.53 | 222 | 2.47 |
| 4 | 3112 | 2838 | 91.20 | 274 | 8.80 |
| 5 | 14094 | 13212 | 93.74 | 882 | 6.26 |
| 6 | 4800 | 4669 | 97.27 | 131 | 2.73 |
| 7 | 16796 | 15886 | 94.57 | 912 | 5.43 |
| 8 | 5100 | 4966 | 97.37 | 134 | 2.63 |
| 9 | 7500 | 7273 | 96.97 | 227 | 3.03 |
| 10 | 4200 | 4126 | 98.24 | 74 | 1.76 |
| 11 | 4200 | 4083 | 97.21 | 117 | 2.79 |
| 12 | 5039 | 4940 | 98.04 | 99 | 1.96 |
| 13 | 1077 | 840 | 77.99 | 237 | 22.01 |
| Total | 85394 | 81035 | 94.06 | 4359 | 5.94 |
Table 5.
Lane detection results for VBLD in a night-time driving environment.
| Clip no. | No. of detected lane | No. of correctly detected lane | Lane detection rate, % | No. of false positive lane | False positive rate, % |
|---|---|---|---|---|---|
| 14 | 25438 | 24821 | 97.57 | 617 | 2.43 |
| 15 | 1020 | 989 | 96.96 | 31 | 3.04 |
| 16 | 3658 | 3582 | 97.92 | 78 | 2.13 |
| 17 | 2158 | 2129 | 98.66 | 29 | 1.34 |
| 18 | 658 | 648 | 98.48 | 10 | 1.52 |
| 19 | 3478 | 3100 | 89.13 | 378 | 10.87 |
| 20 | 478 | 455 | 95.19 | 23 | 4.81 |
| 21 | 1738 | 1730 | 99.54 | 8 | 0.46 |
| 22 | 1260 | 1219 | 96.75 | 41 | 3.25 |
| 23 | 4140 | 4025 | 97.22 | 115 | 2.78 |
| 24 | 1800 | 1774 | 98.56 | 26 | 1.44 |
| 25 | 5100 | 4514 | 88.51 | 586 | 11.49 |
| 26 | 1440 | 1296 | 90.00 | 144 | 10 |
| 27 | 2158 | 1955 | 90.59 | 203 | 9.41 |
| Total | 54524 | 52237 | 95.36 | 2289 | 4.64 |
In clips #1-#13, 96.45%, 86.17%, 97.53%, 91.20%, 93.74%, 97.27%, 94.57%, 97.37%, 96.97%, 98.24%, 97.21%, 98.04%, and 77.99% of the 3892, 6592, 8992, 3112, 14094, 4800, 16796, 5100, 7500, 4200, 4200, 5039, and 1077 detected lanes were detected correctly, with false positive rates of 3.55%, 13.83%, 2.47%, 8.80%, 6.26%, 2.73%, 5.43%, 2.63%, 3.03%, 1.76%, 2.79%, 1.96%, and 22.01% in the daytime driving environment, respectively, as shown in Table 4. Although the worn lane markings in clips #1-#5 and #7-#9 are more significant than in the other daytime clips, the lane detection rates in clips #1, #3, #7, #8, and #9 remain above 94%. In fact, the worn lane markings found in clips #1-#5 and #7-#9 affected only a small portion of the painted lane markings on the road surface and were still visible in the daytime driving environment. Consequently, VBLD is still capable of detecting the worn lane markings successfully in moderate to heavy traffic flow. It is also observed that the shadow of a moving vehicle, as shown in Fig. 20d, does not affect the lane detection result of VBLD: Fig. 20d shows a correct lane detection containing the shadow of a moving vehicle in a daytime, heavy-rain driving environment.
Figure 20.
Frame images of correct lane detection and lane departure detection in a daytime driving environment using VBLDW.
Frame images of correct lane detection can be seen at frames 1183, 1341, 1356, 1341, 6776, 0564, 7437, 1696, 0110, 0255, 1224, 1397, and 0365 in clips #1-#13, as shown in Figs. 20a-20m, respectively. Frame images of false positive lane detection, caused by worn lane markings, wet road surfaces, other road markings such as arrow signs painted on the road surface, occluded lane markings, junction boxes, glare, and reflections, shown in Figs. 21a-21m, were observed at frames 1511, 0373, 4480, 0231, 2915, 1718, 7298, 1004, 0398, 0378, 1910, 2000, and 0231 in clips #1-#13, respectively.
Figure 21.
Frame images of false positive lane detection and lane departure detection results in a daytime driving environment using VBLDW.
In clips #14-#27, 97.57%, 96.96%, 97.92%, 98.66%, 98.48%, 89.13%, 95.19%, 99.54%, 96.75%, 97.22%, 98.56%, 88.51%, 90.00%, and 90.59% of the 25438, 1020, 3658, 2158, 658, 3478, 478, 1738, 1260, 4140, 1800, 5100, 1440, and 2158 detected lanes were detected correctly, with false positive rates of 2.43%, 3.04%, 2.13%, 1.34%, 1.52%, 10.87%, 4.81%, 0.46%, 3.25%, 2.78%, 1.44%, 11.49%, 10.00%, and 9.41% in the night-time driving environment, respectively, as shown in Table 5. Although clips #14-#27 were captured at night-time and clips #1-#13 in the daytime, the lane detection rates in the night-time driving environment are comparable to those of the daytime clips. Hence, the driving environment is not the decisive factor affecting the performance of lane detection. Instead, sufficient illumination in the ROI to reveal road features such as lane markings is an essential factor in determining the performance of VBLD in a night-time driving environment. Moreover, VBLD also successfully detected the lane markings shown in Fig. 22c, which were captured in near-total darkness with the only illumination coming from the vehicle headlights. Based on observation, the majority of the frame images in clips #14-#18 and #21-#24 were captured under sufficient illumination from road lamps, which resulted in lane detection rates above 96% in the night-time driving environment. In addition, the quality of the road surface conditions, such as the clearness of road features like lane markings and sufficient illumination to reveal the road surface, was fundamentally better on the highway than on urban roads.
Figure 22.
Frame images of correct lane detection and lane departure detection results in a night-time driving environment using VBLDW.
Frame images of correct lane detection can be seen at frames 04079, 0070, 0144, 0713, 0270, 0758, 0077, 0119, 0066, 1653, 0096, 2497, 0131, and 0125 in clips #14-#27, as shown in Figs. 22a-22n, respectively. Frame images of false positive lane detection, caused by worn lane markings, occluded lane markings, night glare, and other road markings such as arrow signs, shown in Figs. 23a-23n, were detected at frames 06224, 0184, 1814, 0227, 0293, 0466, 0201, 0025, 0319, 0097, 0887, 1913, 0305, and 0741 in clips #14-#27, respectively. The experimental results show the effectiveness of VBLD in detecting lane markings under challenging driving environments. On average, 94.06% and 95.36% of lane markings were detected correctly in daytime and night-time driving environments, respectively.
Figure 23.
Frame images of false positive lane detection and lane departure detection results in a night-time driving environment using VBLDW.
The lane departure detection results based on VBLDW in daytime and night-time driving environments are presented in Table 6 and Table 7, respectively. For the performance evaluation of lane departure detection, the number of lane departure frames, the number of detected lane departure frames, and the number of false positive frames were counted manually on a frame-by-frame basis. Equations (11), (12), and (13) present the formulas used for the total number of detected lane departure frames, the detection rate, and the false positive rate, respectively.
$$F_{\mathrm{TD}} = F_{\mathrm{CD}} + F_{\mathrm{FP}} \qquad (11)$$
$$\text{Lane departure detection rate} = \frac{F_{\mathrm{CD}}}{F_{\mathrm{TD}}} \times 100\% \qquad (12)$$
$$\text{False positive rate} = \frac{F_{\mathrm{FP}}}{F_{\mathrm{TD}}} \times 100\% \qquad (13)$$
where
- $F_{\mathrm{TD}}$ - total number of detected lane departure frames,
- $F_{\mathrm{CD}}$ - total number of correctly detected lane departure frames,
- $F_{\mathrm{FP}}$ - total number of false positive lane departure frames.
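As a worked example using the clip #1 entries in Table 6 below: $F_{\mathrm{TD}} = 257 + 45 = 302$ detected lane departure frames, giving a lane departure detection rate of $257/302 \times 100\% = 85.10\%$ and a false positive rate of $45/302 \times 100\% = 14.90\%$.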
Table 6.
Lane departure detection results for VBLDW in a daytime driving environment.
| Clip no. | No. of detected lane departure frame | No. of correctly detected lane departure frame | Lane departure detection rate, % | No. of false positive lane departure frame | False positive rate, % |
|---|---|---|---|---|---|
| 1 | 302 | 257 | 85.10 | 45 | 14.90 |
| 2 | 783 | 599 | 76.50 | 184 | 23.50 |
| 3 | 566 | 475 | 83.92 | 91 | 16.08 |
| 4 | 221 | 103 | 46.60 | 118 | 53.40 |
| 5 | 2020 | 1746 | 86.44 | 274 | 13.56 |
| 6 | 789 | 754 | 95.56 | 35 | 4.44 |
| 7 | 3166 | 2969 | 93.78 | 197 | 6.22 |
| 8 | 80 | 49 | 61.25 | 31 | 38.75 |
| 9 | 469 | 457 | 97.44 | 12 | 2.56 |
| 10 | 20 | 0 | 0.00 | 20 | 100.00 |
| 11 | 831 | 831 | 100.00 | 0 | 0.00 |
| 12 | 806 | 771 | 95.66 | 35 | 4.34 |
| 13 | 359 | 359 | 100.00 | 0 | 0.00 |
| Total | 10412 | 9370 | 78.63 | 1042 | 21.37 |
Table 7.
Lane departure detection results for VBLDW in a night-time driving environment.
| Clip no. | No. of detected lane departure frame | No. of correctly detected lane departure frame | Lane departure detection rate, % | No. of false positive lane departure frame | False positive rate, % |
|---|---|---|---|---|---|
| 14 | 2295 | 1846 | 80.44 | 449 | 19.56 |
| 15 | 107 | 97 | 90.65 | 10 | 9.35 |
| 16 | 498 | 438 | 87.95 | 60 | 12.05 |
| 17 | 148 | 136 | 91.89 | 12 | 8.11 |
| 18 | 153 | 143 | 93.46 | 10 | 6.54 |
| 19 | 692 | 385 | 55.64 | 307 | 44.36 |
| 20 | 121 | 114 | 94.21 | 7 | 5.79 |
| 21 | 188 | 181 | 96.28 | 7 | 3.72 |
| 22 | 328 | 315 | 96.04 | 13 | 3.96 |
| 23 | 818 | 741 | 90.59 | 77 | 9.41 |
| 24 | 170 | 149 | 87.65 | 21 | 12.35 |
| 25 | 905 | 618 | 68.29 | 287 | 31.71 |
| 26 | 273 | 205 | 75.09 | 68 | 24.91 |
| 27 | 400 | 256 | 64.00 | 144 | 36.00 |
| Total | 7096 | 5624 | 83.73 | 1472 | 16.27 |
In clips #1-#13, 85.10%, 76.50%, 83.92%, 46.60%, 86.44%, 95.56%, 93.78%, 61.25%, 97.44%, 0.00%, 100.00%, 95.66%, and 100.00% of the 302, 783, 566, 221, 2020, 789, 3166, 80, 469, 20, 831, 806, and 359 detected lane departure frames were detected correctly, with false positive rates of 14.90%, 23.50%, 16.08%, 53.40%, 13.56%, 4.44%, 6.22%, 38.75%, 2.56%, 100.00%, 0.00%, 4.34%, and 0.00% in the daytime driving environment, respectively. Although clip #10 had a zero lane departure detection rate, the number of falsely detected lane departure frames (20 frames) was too small to be statistically significant. Nevertheless, clips #6, #9, #11, #12, and #13 achieved lane departure detection rates above 95% in the daytime driving environment.
Frame images of correct lane departure detection can be seen at frames 1183, 1341, 1356, 1341, 6776, 0564, 7437, 1696, 0110, 0255, 1224, 1397, and 0365 in clips #1-#13, shown in Figs. 20a-20m, respectively. Frame images of false positive lane departure detection can be seen at frames 1511, 0373, 4480, 0231, 2915, 1718, 7298, 1004, 0398, 0378, 1910, 2000, and 0231 in clips #1-#13, shown in Figs. 21a-21m, respectively. Falsely detected arrow signs painted on the road surface, occluded lane markings, and worn lane markings were the root causes of false positive lane departure detection, since they led to an incorrect LOR computation. Hence, the ‘Lane Departure’ text was wrongly displayed in those frame images, in spite of the fact that the vehicle was at the centre of the lane. It was also noticed that the lane departure detection results of VBLDW are highly reliant on the lane detection results of the preceding VBLD in a daytime driving environment.
In clips #14-#27, 80.44%, 90.65%, 87.95%, 91.89%, 93.46%, 55.64%, 94.21%, 96.28%, 96.04%, 90.59%, 87.65%, 68.29%, 75.09%, and 64.00% of the 2295, 107, 498, 148, 153, 692, 121, 188, 328, 818, 170, 905, 273, and 400 detected lane departure frames were detected correctly, with false positive rates of 19.56%, 9.35%, 12.05%, 8.11%, 6.54%, 44.36%, 5.79%, 3.72%, 3.96%, 9.41%, 12.35%, 31.71%, 24.91%, and 36.00% in the night-time driving environment, respectively. It was observed that the lane departure detection performance deteriorated under conditions of low illumination, which reflects the dependence of VBLDW's lane departure detection results on the preceding lane detection. In fact, illumination that is insufficient to reveal the condition of the road features was the essential factor contributing to the higher false positive rate of lane departure detection in a total-darkness driving environment. This error propagated into the LOR computation, which resulted in false lane departure warnings even though the driven vehicle was at the centre of the lane.
Frame images of correct lane departure detection can be seen at frames 04079, 0070, 0144, 0713, 0270, 0758, 0077, 0119, 0066, 1653, 0096, 2497, 0131, and 0125 in clips #14-#27, shown in Figs. 22a-22n, respectively. Frame images of false positive lane departure detection can be seen at frames 06224, 0184, 1814, 0227, 0293, 0466, 0201, 0025, 0319, 0097, 0887, 1913, 0305, and 0741 in clips #14-#27, as shown in Figs. 23a-23n, respectively. In spite of the fact that the vehicle was being driven in the centre of the lane, worn lane markings, falsely detected vehicles at the side, falsely detected arrow signs, and tree shadows found in clips #14-#27 caused false positive lane departure detections, and the ‘Lane Departure’ text was wrongly displayed in those frame images. The experimental results show the effectiveness of VBLDW in detecting lane departures under challenging driving environments, namely daytime and night-time. On average, 78.63% and 83.73% of lane departures were detected correctly in daytime and night-time driving environments, respectively.
4.2. Comparison of lane detection and lane departure detection
The FIRSA and benchmark lane marking segmentation methods are implemented and compared in Table 8 using clips #1-#4. The benchmark methods use lane marking segmentation based on the Canny, Sobel, Prewitt, and Roberts algorithms, followed by the Hough transform to detect lane markings. Table 8 shows that the FIRSA lane marking segmentation method performs better than the benchmark lane marking segmentation methods for clips #1 and #3, which represent common situations on the highway and urban roads, respectively. In spite of achieving the highest lane detection rate in clips #1 and #3, the FIRSA lane marking segmentation method shows some instability on clip #2, where it achieves a lane detection rate of 86.17% and a false positive rate of 13.83%. The FIRSA lane marking segmentation method encountered a slightly higher false positive rate mainly because of incorrect lane marking detection, particularly of the right lane marking, as illustrated in Fig. 21b: instead of detecting the right lane marking of the driving direction, it falsely detected the right lane marking of the oncoming direction, resulting in false positive detections. Clip #4 gives a slight advantage to the Roberts lane marking segmentation method, while in the other two scenarios, clips #1 and #3, the FIRSA lane marking segmentation method performs better than the Roberts method. This can be related to the poor illumination during heavy rain and the unpredictable illumination interference from vehicles, under which the FIRSA lane marking segmentation method may perform worse than it does in sunny weather, as reflected experimentally in Table 8. It is concluded that, during rainy weather, the success of VBLD using the FIRSA lane marking segmentation method depends mainly on the clarity of the painted lane markings on the road surface and, at times, on the illumination available on the road.
Table 8.
Comparison of lane marking segmentation methods for clips #1-#4.
| Clip | Method | Lane detection rate, % | False positive rate, % |
|---|---|---|---|
| 1 | FIRSA | 96.45 | 3.55 |
| 1 | Canny | 43.63 | 56.37 |
| 1 | Sobel | 74.20 | 25.80 |
| 1 | Prewitt | 72.61 | 27.39 |
| 1 | Roberts | 76.57 | 23.43 |
| 2 | FIRSA | 86.17 | 13.83 |
| 2 | Canny | 64.08 | 35.92 |
| 2 | Sobel | 90.66 | 9.34 |
| 2 | Prewitt | 89.96 | 10.04 |
| 2 | Roberts | 91.66 | 8.34 |
| 3 | FIRSA | 97.53 | 2.47 |
| 3 | Canny | 31.94 | 68.06 |
| 3 | Sobel | 95.86 | 4.14 |
| 3 | Prewitt | 95.75 | 4.25 |
| 3 | Roberts | 96.71 | 3.29 |
| 4 | FIRSA | 91.20 | 8.80 |
| 4 | Canny | 24.87 | 75.13 |
| 4 | Sobel | 90.49 | 9.51 |
| 4 | Prewitt | 89.65 | 10.35 |
| 4 | Roberts | 92.29 | 7.71 |
In order to verify the validity of VBLD's lane detection results in subsection 4.1, the Caltech lanes dataset (Aly, 2008) was chosen for the comparison of lane detection methods because it is widely used in lane detection research for comparative studies. The Caltech lanes dataset consists of four clips (cordova1, cordova2, washington1, and washington2) totalling 1225 frames, which include many types of complex traffic scenes, namely tree shadows, passing vehicles, and other road markings. Table 9 tabulates the clip descriptions of the Caltech lanes dataset.
Table 9.
Caltech lanes dataset descriptions.
| Clip | Traffic | Lane marking type | Road surface condition | No. of frame | No. of lane |
|---|---|---|---|---|---|
| cordova1 | Light | Solid, broken, straight, curved | Occluded lane markings, other road markings, worn lane markings | 250 | 919 |
| cordova2 | Light | Solid, broken, straight, curved | Occluded lane markings, other road markings, glare, worn lane markings | 406 | 1048 |
| washington1 | Moderate | Solid, broken, straight | Occluded lane markings, other road markings, glare | 337 | 1274 |
| washington2 | Light | Solid, broken, straight | Occluded lane markings, other road markings | 232 | 931 |
| Total | 1225 | 4172 |
For the lane detection comparison on the Caltech lanes dataset, the same evaluation metrics are used, namely the lane detection rate and the false positive rate in lane detection of Equations (9)-(10), respectively. The results show the effectiveness of VBLD in detecting lanes on urban streets with varying conditions, achieving an average lane detection rate of 90.55% and a false positive rate of 9.45% on the Caltech lanes dataset, as shown in Table 10. The cordova2 clip obtained the lowest lane detection rate and the highest false positive rate among the four clips because false positives were mostly found for the right lane of a street with no right lane boundary (Aly, 2008); VBLD detects the curb as the right lane boundary, which resulted in a higher false positive rate and a lower lane detection rate for the cordova2 clip. For the washington1 clip, most of the false positives were detected under heavy tree shadows in the beginning frames, which contributed a 9.94% false positive rate in lane detection. The cordova1 and washington2 clips both achieved lane detection rates above 93.00% under clear conditions, except for cross stop lines and crosswalks, which contributed to false positives in lane detection. Figure 24 and Figure 25 show samples of correct and false positive lane detection results of VBLD for each clip of the Caltech lanes dataset, respectively.
Table 10.
Lane detection results for VBLD using Caltech lanes dataset.
| Clip | No. of detected lane | No. of correctly detected lane | Lane detection rate, % | No. of false positive lane | False positive rate, % |
|---|---|---|---|---|---|
| cordova1 | 500 | 465 | 93.00 | 35 | 7.00 |
| cordova2 | 812 | 690 | 84.98 | 122 | 15.02 |
| washington1 | 674 | 607 | 90.06 | 67 | 9.94 |
| washington2 | 464 | 437 | 94.18 | 27 | 5.82 |
| Total | 2450 | 2199 | 90.55 | 251 | 9.45 |
Figure 24.
Samples of correct lane detection results of VBLD under Caltech lanes dataset.
Figure 25.
Samples of false positive lane detection results of VBLD under Caltech lanes dataset.
Besides VBLD, other well-known lane detection methods were also compared on the Caltech lanes dataset using the same evaluation metrics, namely the lane detection rate and the false positive rate. Since the Caltech lanes dataset consists of four individual clips, each clip's lane detection rate and false positive rate were averaged. Table 11 shows the performance comparison of different lane detection methods on the Caltech lanes dataset. The comparison was based on two-lane mode. In addition to the evaluation metrics, the processing environment and runtime were also considered in the comparison in Table 11. The runtime represents the average computation time to process one frame.
Table 11.
The performance comparison of different lane detection methods under Caltech lanes dataset (Aly, 2008).
| Method | Average lane detection rate, % | Average false positive rate, % | Environment | Runtime, ms |
|---|---|---|---|---|
| Aly (2008) | 96.30 | 12.08 | – (OpenCV + C++) | 191 |
| Ye et al. (2018) | 98.48 | 2.43 | 2 cores @ 3.50 GHz (Matlab) | 121 |
| Gupta and Choudhary (2018) | 97.28 | 1.75 | 2 cores @ 3.00 GHz (OpenCL + C++) | 29.7 |
| Hoang et al. (2017) | 87.50 | 12.34 | 6 cores @ 3.47 GHz (OpenCV + C++) | 26.1 |
| Kim et al. (2017) | 98.55 | – | 4 cores @ 3.6 GHz | 10-13 min (training time) |
| Niu et al. (2016) | 96.33 | 2.85 | 4 cores @ 2.83 GHz (C++) | 19.9 |
| Hou et al. (2016) | 92.50 | 5.85 | 2 cores @ 2.30 GHz (OpenCV + C++) | 180 |
| Shin et al. (2015) | 92.78 | – | 4 cores @ 3.30 GHz (OpenCV + C++) | 60.7 |
| Guo et al. (2015) | 95.75 | 9.91 | 2 cores @ 3.0 GHz | 40 |
| Liu and Li (2013) | 93.15 | 4.67 | 2 cores @ 2.53 GHz (Matlab) | 400 |
| Ruyi et al. (2011) | 95.90 | 2.40 | 2 cores @ 2.5 GHz (OpenCV + C) | 62 |
| VBLD | 90.55 | 9.45 | 4 cores @ 1.6GHz (Matlab) | 13 |
| VBLD with real-life datasets | 94.71 | 5.29 | 4 cores @ 1.6GHz (Matlab) | 4.9 |
The comparison was carried out in two-lane mode, in which lane detection is applied only to the two boundaries of the host lane in the Caltech lanes dataset. The VBLD results were compared with the lane detection methods described by Aly (2008); Ye et al. (2018); Gupta and Choudhary (2018); Hoang et al. (2017); Kim et al. (2017); Niu et al. (2016); Hou et al. (2016); Shin et al. (2015); Guo et al. (2015); Liu and Li (2013); and Ruyi et al. (2011) on the Caltech lanes dataset. The main reason for selecting the work of Aly (2008) is that Aly generated the public Caltech lanes dataset for evaluation, and most researchers evaluate the performance of their lane detection methods on this dataset. As shown in Table 11, the average lane detection rate of VBLD was about 8% lower than that of the state-of-the-art lane detection method of Kim et al. (2017) and most of the lane detection methods in Table 11. This is mostly because the state-of-the-art lane detection methods used more powerful processing environments, such as higher processor clock speeds and core counts, and some methods, such as Gupta and Choudhary (2018); Hou et al. (2016); Shin et al. (2015); and Ruyi et al. (2011), applied a tracking stage after the detection stage to enhance the lane detection rate. However, VBLD outperforms most of the state-of-the-art lane detection methods in the runtime evaluation: the average computation time of VBLD on the Caltech lanes dataset is 13 ms per frame at the dataset's image resolution. Some of the state-of-the-art lane detection methods used a top-view-based approach to enhance the lane detection rate, which requires intensive computing power for Inverse Perspective Mapping (IPM) and prolongs the runtime per frame. Hence, a lane detection method with a shorter runtime, like VBLD, is ideal for real-time implementation of lane detection applications. Most of the state-of-the-art lane detection methods were also implemented in highly optimised environments like OpenCV and C++, which contributed to the better lane detection rates reported in Table 11. In addition, the field of view (FOV) of the real-life datasets differs from that of the Caltech lanes dataset: the real-life datasets have a narrower FOV, whereas the Caltech lanes dataset has a wider FOV. Hence, the FOV of the camera does affect the lane detection results of VBLD; the narrower FOV yielded better lane detection than the wider FOV, since a wider FOV admits more outliers.
VBLD is not only compared with the state-of-the-art lane detection methods under the Caltech lanes dataset; VBLD with the real-life datasets is also presented in Table 11 to verify the validity of the lane detection results generated in subsection 4.1. Table 11 shows that VBLD achieved comparable results, with an average lane detection rate above 90% and an average false positive rate below 10% for both datasets. It is noted that the real-life datasets contain frames of lower resolution, which resulted in a shorter average runtime of 4.9 ms per frame. The main purpose of the performance comparison in Table 11 is not to compete with the state-of-the-art lane detection methods on evaluation metrics, but to provide a means of identifying reference points for VBLD in terms of evaluation metrics, processing environment, and runtime relative to the state-of-the-art methods under a common dataset.
As for the lane departure detection comparison, no public or well-known dataset was identified that would allow a fair comparison with VBLDW. Hence, VBLDW's lane departure detection results generated in subsection 4.1 are presented here under the real-life datasets. The real-life datasets consist of 69964 frames of road footage for the lane departure detection analysis of clips #1-#27. Since the real-life datasets consist of 27 individual clips, each clip's lane departure detection rate and false positive rate in lane departure detection were averaged. Out of the total road footage frames, 17508 frames were lane departure warning frames in VBLDW.
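For clarity, the per-clip averaging can be expressed as in the sketch below; the per-clip numbers shown are placeholders, not the measured values.

```python
from statistics import mean

# clip id -> (lane departure detection rate %, false positive rate %); placeholder values.
clip_results = {
    1: (80.2, 19.8),
    2: (77.5, 22.5),
    # ... entries for clips #3 to #26 ...
    27: (84.1, 15.9),
}

avg_detection = mean(rate for rate, _ in clip_results.values())
avg_false_positive = mean(fp for _, fp in clip_results.values())
print(f"Average lane departure detection rate: {avg_detection:.2f}%")
print(f"Average false positive rate: {avg_false_positive:.2f}%")
```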
Table 12 presents the overall lane departure detection results for VBLDW. Both daytime and night-time driving environments were considered in terms of the lane departure detection rate and the false positive rate. In the daytime driving environment, VBLDW achieved an average lane departure detection rate of 78.63% and a false positive rate of 21.37% under the real-life datasets. In the night-time driving environment, VBLDW achieved an average lane departure detection rate of 83.73% and a false positive rate of 16.27%. Although these results show the consistency of VBLDW across driving environments, the false positive rate in lane departure detection is still considered high, at above 16%, owing to the limitations of a vision system in overcoming disturbances such as worn lane markings, low illumination, other road markings such as arrow signs, occluded lane markings, and tree shadows.
Table 12.
Overall lane departure detection results for VBLDW under real-life datasets.
| Method | Average lane departure detection rate, % (Daytime) | Average lane departure detection rate, % (Night-time) | Average false positive rate in lane departure detection, % (Daytime) | Average false positive rate in lane departure detection, % (Night-time) | Environment | Runtime, ms |
|---|---|---|---|---|---|---|
| VBLDW | 78.63 | 83.73 | 21.37 | 16.27 | 4 cores @ 1.6 GHz (Matlab) | 5.1 |
5. Conclusion
In this paper, a vision-based lane departure warning (VBLDW) framework has been presented, composed of a vision-based lane detection (VBLD) framework followed by a computation of the lateral offset ratio. VBLD comprises two stages: pre-processing and detection. In the pre-processing stage, a colour space conversion, an extraction of the region of interest, and a finite impulse response saturation autothreshold (FIRSA) lane marking segmentation method are implemented, in that order. In the detection stage, the Hough transformation and peak detection, the reverse Hough transformation, and the drawing of the detected lines in the original image are implemented. Lastly, the lateral offset ratio is computed to determine whether to issue a lane departure warning. This computation is based on the detected X-coordinates of the bottom end-points of each lane boundary in the image plane.
Road footage from urban roads and a highway in the city of Malacca was collected for evaluating the performance in lane detection and lane departure detection. The lane detection rate and the false positive rate were studied. Average lane detection rates of 94.06% and 95.36% were achieved in daytime and night-time driving environments, respectively; the corresponding false positive rates were 5.94% and 4.64%. The experimental results show the effectiveness of VBLD in detecting lanes under challenging driving conditions, namely daytime and night-time. The evaluation also shows that neither the driving environment nor the traffic flow is an essential factor affecting the performance of the proposed VBLD. Instead, road surface conditions, particularly worn lane markings, were found to be the main contributor to the false positive rate. Benchmark lane marking segmentation methods were also implemented and compared with the FIRSA lane marking segmentation method in clips #1-#4; overall, the FIRSA method achieved a higher lane detection rate and a lower false positive rate than the benchmark methods. Besides the real-life datasets, the well-known Caltech lanes dataset was used for lane detection comparison with state-of-the-art methods, on which VBLD achieved, on average, a 90.55% lane detection rate and a 9.45% false positive rate.
In the evaluation of lane departure detection, detection rates of 78.63% and 83.73% were obtained in daytime and night-time driving environments, respectively; the corresponding false positive rates were 21.37% and 16.27%. The experimental results show the effectiveness of the proposed VBLDW in detecting lane departures in daytime and night-time driving environments. Although lane departure detection performed better in the night-time driving environment, low illumination under night-time conditions lowered the lane departure detection rate, as observed in clips #14 and #25, and the false positive rate was intensified where lane markings were worn. The proposed system performed effectively in the daytime driving environment, with the exception of clip #4, which was recorded in heavy rain. However, in real-world situations, low illumination and road surface interference are often encountered in daily driving.
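For concreteness, the sketch below illustrates the overall flow in the style of VBLDW: pre-processing, Hough-based detection of the two host-lane boundaries, and a warning decision from the bottom end-point X-coordinates. It is a simplified approximation rather than the authors' implementation; an Otsu threshold stands in for the FIRSA segmentation, and the lateral offset ratio formula and warning threshold are assumptions.

```python
import cv2
import numpy as np

def detect_lane_boundaries(frame_bgr):
    """Return bottom-row X-coordinates (x_left, x_right) of the host-lane boundaries, or None."""
    h, w = frame_bgr.shape[:2]
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    roi = grey[h // 2:, :]                        # keep the lower half as region of interest
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # stand-in for FIRSA
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=20)
    if lines is None:
        return None
    left, right = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x2 == x1:
            continue
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:                      # discard near-horizontal clutter
            continue
        # Extrapolate the segment to the bottom row of the ROI.
        x_bottom = x1 + (roi.shape[0] - 1 - y1) / slope
        (left if slope < 0 else right).append(x_bottom)
    if not left or not right:
        return None
    return float(np.median(left)), float(np.median(right))

def lateral_offset_ratio(x_left, x_right, image_width):
    """Offset of the lane centre from the image centre, normalised by half the lane width
    (hypothetical formulation of the ratio)."""
    lane_centre = 0.5 * (x_left + x_right)
    half_width = 0.5 * (x_right - x_left)
    return (image_width / 2.0 - lane_centre) / half_width

def departure_warning(frame_bgr, ratio_threshold=0.5):
    ends = detect_lane_boundaries(frame_bgr)
    if ends is None:
        return False
    r = lateral_offset_ratio(*ends, frame_bgr.shape[1])
    return abs(r) > ratio_threshold               # warn when the vehicle drifts off-centre
```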
Thus, vision-based systems, including the lane detection system proposed in this paper as well as the proposed VBLDW system, fall short in terms of reliability under complex driving environments and road surface conditions. Examples of such complex driving environments and road surface conditions encountered in the experiments are low illumination at night, worn lane markings, arrow signs, and occluded lane markings. Future research should mainly extend the evaluation to all-weather test environments. As part of an intelligent transportation system, lane detection performance can be further enhanced by including a tracking element for smoother lane detection results. In addition, the vehicle's dynamic state can be incorporated to enhance the lane departure detection rate under various road conditions.
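As an illustration of the tracking element suggested above, a simple exponential smoother over the detected bottom end-point X-coordinates could be used to suppress frame-to-frame jitter; the smoothing factor below is an assumption.

```python
class LaneSmoother:
    """Exponential smoothing of (x_left, x_right) detections across frames."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha           # weight given to the newest detection
        self.state = None            # last smoothed (x_left, x_right)

    def update(self, detection):
        """detection: (x_left, x_right) in pixels, or None when detection fails."""
        if detection is None:
            return self.state        # hold the last smoothed estimate
        if self.state is None:
            self.state = detection
        else:
            self.state = tuple(self.alpha * d + (1 - self.alpha) * s
                               for d, s in zip(detection, self.state))
        return self.state
```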
Declarations
Author contribution statement
Em Poh Ping: Conceived and designed the experiments; Performed the experiments; Analysed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
J. Hossen: Conceived and designed the experiments.
Fitrian Imaduddin: Conceived and designed the experiments.
Wong Eng Kiong: Conceived and designed the experiments.
Funding statement
The research described in this paper was supported by the Multimedia University (MMU) Mini Fund (Grant No. MMUI/180170) and by the Malaysian Ministry of Higher Education (MOHE) under the Fundamental Research Grant Scheme (FRGS/1/2018/TK03/MMU/02/1).
Competing interest statement
The authors declare no conflict of interest.
Additional information
Supplementary content related to this article has been published online at:
Clip #1. URL: https://data.mendeley.com/datasets/rkyfwv9hw8/1, https://doi.org/10.17632/RKYFWV9HW8.1.
Clip #2. URL: https://data.mendeley.com/datasets/ynmympptmy/1, https://doi.org/10.17632/YNMYMPPTMY.1.
Clip #3. URL: https://data.mendeley.com/datasets/m88955h7vb/1, https://doi.org/10.17632/M88955H7VB.1.
Clip #4. URL: https://data.mendeley.com/datasets/vry8n5nkrf/1, https://doi.org/10.17632/VRY8N5NKRF.1.
Clip #5. URL: https://data.mendeley.com/datasets/f24x2p6b5h/1, https://doi.org/10.17632/F24X2P6B5H.1.
Clip #6. URL: https://data.mendeley.com/datasets/xskxs82mz6/1, https://doi.org/10.17632/XSKXS82MZ6.1.
Clip #7. URL: https://data.mendeley.com/datasets/dppstzh8n6/2, https://doi.org/10.17632/DPPSTZH8N6.2.
Clip #8. URL: https://data.mendeley.com/datasets/hgt5whhj6n/1, https://doi.org/10.17632/HGT5WHHJ6N.1.
Clip #9. URL: https://data.mendeley.com/datasets/bvbykc4hxf/2, https://doi.org/10.17632/BVBYKC4HXF.2.
Clip #10. URL: https://data.mendeley.com/datasets/g98zzcn6nr/1, https://doi.org/10.17632/G98ZZCN6NR.1.
Clip #11. URL: https://data.mendeley.com/datasets/z3yjbd4567/1, https://doi.org/10.17632/Z3YJBD4567.1.
Clip #12. URL: https://data.mendeley.com/datasets/ytn823rw8j/1, https://doi.org/10.17632/YTN823RW8J.1.
Clip #13. URL: https://data.mendeley.com/datasets/946jzttn7n/1, https://doi.org/10.17632/946JZTTN7N.1.
Clip #14. URL: https://data.mendeley.com/datasets/cww75348bj/1, https://doi.org/10.17632/CWW75348BJ.1.
Clip #15. URL: https://data.mendeley.com/datasets/k74tdgbhjm/1, https://doi.org/10.17632/K74TDGBHJM.1.
Clip #16. URL: https://data.mendeley.com/datasets/hps9jsjwxp/2, https://doi.org/10.17632/HPS9JSJWXP.2.
Clip #17. URL: https://data.mendeley.com/datasets/bxmmttx535/1, https://doi.org/10.17632/BXMMTTX535.1.
Clip #18. URL: https://data.mendeley.com/datasets/smx7tbx29p/1, https://doi.org/10.17632/SMX7TBX29P.1.
Clip #19. URL: https://data.mendeley.com/datasets/kcxpm835gw/1, https://doi.org/10.17632/KCXPM835GW.1.
Clip #20. URL: https://data.mendeley.com/datasets/m25z57438h/1, https://doi.org/10.17632/M25Z57438H.1.
Clip #21. URL: https://data.mendeley.com/datasets/cjptbmddpk/2, https://doi.org/10.17632/CJPTBMDDPK.2.
Clip #22. URL: https://data.mendeley.com/datasets/yhd2j7ddxc/1, https://doi.org/10.17632/YHD2J7DDXC.1.
Clip #23. URL: https://data.mendeley.com/datasets/5zjf62drv7/1, https://doi.org/10.17632/5ZJF62DRV7.1.
Clip #24. URL: https://data.mendeley.com/datasets/r8vm7nbgvm/1, https://doi.org/10.17632/R8VM7NBGVM.1.
Clip #25. URL: https://data.mendeley.com/datasets/642n3xx8s6/1, https://doi.org/10.17632/642N3XX8S6.1.
Clip #26. URL: https://data.mendeley.com/datasets/wmymrk79tg/1, https://doi.org/10.17632/WMYMRK79TG.1.
Clip #27. URL: https://data.mendeley.com/datasets/wb4hgnr6k3/1, https://doi.org/10.17632/WB4HGNR6K3.1.
References
- Al-Madani H.M. Global road fatality trends' estimations based on country-wise micro level data. Accid. Anal. Prev. 2018;111:297–310. doi: 10.1016/j.aap.2017.11.035.
- Aly M. IEEE Intelligent Vehicles Symposium, Proceedings. 2008. Real time detection of lane markers in urban streets; pp. 7–12. arXiv:1411.7113v1.
- Chandrasekar P., Ryu J.Y. Design of programmable digital FIR/IIR filter with excellent noise cancellation. Int. J. Appl. Eng. Res. 2016;11:8467–8470.
- Cualain D.O., Glavin M., Jones E. Multiple-camera lane departure warning system for the automotive environment. IET Intell. Transp. Syst. 2012;6:223.
- Duda R.O., Hart P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM. 1972;15:11–15.
- Em P.P. Clip #1. 2018. https://data.mendeley.com/datasets/rkyfwv9hw8/1
- Em P.P. Clip #2. 2018. https://data.mendeley.com/datasets/ynmympptmy/1
- Em P.P. Clip #3. 2018. https://data.mendeley.com/datasets/m88955h7vb/1
- Em P.P. Clip #4. 2018. https://data.mendeley.com/datasets/vry8n5nkrf/1
- Em P.P. Clip #10. 2019. https://data.mendeley.com/datasets/g98zzcn6nr/1
- Em P.P. Clip #11. 2019. https://data.mendeley.com/datasets/z3yjbd4567/1
- Em P.P. Clip #12. 2019. https://data.mendeley.com/datasets/ytn823rw8j/1
- Em P.P. Clip #13. 2019. https://data.mendeley.com/datasets/946jzttn7n/1
- Em P.P. Clip #14. 2019. https://data.mendeley.com/datasets/cww75348bj/1
- Em P.P. Clip #15. 2019. https://data.mendeley.com/datasets/k74tdgbhjm/1
- Em P.P. Clip #16. 2019. https://data.mendeley.com/datasets/hps9jsjwxp/2
- Em P.P. Clip #17. 2019. https://data.mendeley.com/datasets/bxmmttx535/1
- Em P.P. Clip #18. 2019. https://data.mendeley.com/datasets/smx7tbx29p/1
- Em P.P. Clip #19. 2019. https://data.mendeley.com/datasets/kcxpm835gw/1
- Em P.P. Clip #20. 2019. https://data.mendeley.com/datasets/m25z57438h/1
- Em P.P. Clip #21. 2019. https://data.mendeley.com/datasets/cjptbmddpk/2
- Em P.P. Clip #22. 2019. https://data.mendeley.com/datasets/yhd2j7ddxc/1
- Em P.P. Clip #23. 2019. https://data.mendeley.com/datasets/5zjf62drv7/1
- Em P.P. Clip #24. 2019. https://data.mendeley.com/datasets/r8vm7nbgvm/1
- Em P.P. Clip #25. 2019. https://data.mendeley.com/datasets/642n3xx8s6/1
- Em P.P. Clip #26. 2019. https://data.mendeley.com/datasets/wmymrk79tg/1
- Em P.P. Clip #27. 2019. https://data.mendeley.com/datasets/wb4hgnr6k3/1
- Em P.P. Clip #5. 2019. https://data.mendeley.com/datasets/f24x2p6b5h/1
- Em P.P. Clip #6. 2019. https://data.mendeley.com/datasets/xskxs82mz6/1
- Em P.P. Clip #7. 2019. https://data.mendeley.com/datasets/dppstzh8n6/2
- Em P.P. Clip #8. 2019. https://data.mendeley.com/datasets/hgt5whhj6n/1
- Em P.P. Clip #9. 2019. https://data.mendeley.com/datasets/bvbykc4hxf/2
- Guo J., Wei Z., Miao D. Lane detection method based on improved RANSAC algorithm. Proceedings - 2015 IEEE 12th International Symposium on Autonomous Decentralized Systems; ISADS 2015; 2015. pp. 285–288.
- Gupta A., Choudhary A. Real-time lane detection using spatio-temporal incremental clustering. IEEE Conference on Intelligent Transportation Systems, Proceedings; ITSC; 2018. pp. 1–6.
- Haloi M., Jayagopi D.B. IEEE Intelligent Vehicles Symposium, Proceedings 2015-August. 2015. A robust lane detection and departure warning system; pp. 126–131.
- Hoang T.M., Baek N.R., Cho S.W., Kim K.W., Park K.R. Road lane detection robust to shadows based on a fuzzy system using a visible light camera sensor. Sensors. 2017;17. doi: 10.3390/s17112475.
- Hou C., Hou J., Yu C. An efficient lane markings detection and tracking method based on vanishing point constraints. Chinese Control Conference; CCC; 2016. pp. 6999–7004.
- International Organization for Standardization [ISO]. Intelligent transport systems – lane departure warning systems – performance requirements and test procedures. 2007. https://www.iso.org/standard/41105.html
- Jacobs G., Aeron-Thomas A. RoSPA Road Safety Congress. 1999. A review of global road accident fatalities; pp. 1–15.
- Jacobs G., Thomas A.A., Astrop A. Department for International Development; London: 2000. Estimating Global Road Fatalities. Technical Report.
- Jung C.R., Kelber C.R. Computer Graphics and Image Processing, 2004. Proceedings. 17th Brazilian Symposium on. 2004. A robust linear-parabolic model for lane following; pp. 72–79.
- Jung C.R., Kelber C.R. Intelligent Transportation Systems, Proceedings. IEEE; 2005. A lane departure warning system using lateral offset with uncalibrated camera; pp. 102–107.
- Jung C.R., Kelber C.R. Computer Graphics and Image Processing, 2005. SIBGRAPI 2005. 18th Brazilian Symposium on. 2005. An improved linear-parabolic model for lane following and curve detection; pp. 131–138.
- Kim J., Kim J., Jang G.J., Lee M. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection. Neural Netw. 2017;87:109–121. doi: 10.1016/j.neunet.2016.12.002.
- Kopits E., Cropper M. Traffic fatalities and economic growth. Accid. Anal. Prev. 2005;37:169–178. doi: 10.1016/j.aap.2004.04.006.
- Liu W., Li S. An effective lane detection algorithm for structured road in urban. Intelligent Science and Intelligent Data Engineering; IScIDE 2012; Heidelberg: Springer; 2013. pp. 759–767.
- Logitech. Logitech C525 HD Webcam. 2018. https://www.logitech.com/en-us/product/hd-webcam-c525
- Lu G. University of Ottawa; 2015. A Lane Detection, Tracking and Recognition System for Smart Vehicles. Ph.D. thesis.
- Marino R., Scalzi S., Orlando G., Netto M. American Control Conference. 2009. A nested PID steering control for lane keeping in vision based autonomous vehicles; pp. 2885–2890.
- Meng F., Spence C. Tactile warning signals for in-vehicle systems. Accid. Anal. Prev. 2015;75:333–346. doi: 10.1016/j.aap.2014.12.013.
- Narote S.P., Bhujbal P.N., Narote A.S., Dhane D.M. A review of recent advances in lane detection and departure warning system. Pattern Recognit. 2018;73:216–234.
- Niu J., Lu J., Xu M., Lv P., Zhao X. Robust lane detection using two-stage feature extraction with curve fitting. Pattern Recognit. 2016;59:225–233.
- Otsu N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979;9:62–66.
- Risack R., Mohler N., Enkelmann W. A video-based lane keeping assistant. Proceedings of the IEEE Intelligent Vehicles Symposium; Dearborn, MI; 2000. pp. 356–361.
- Ruyi J., Reinhard K., Tobi V., Shigang W. Lane detection and tracking using a new lane model and distance transform. Mach. Vis. Appl. 2011;22:721–737.
- Shah M. 1997. Fundamentals of Computer Vision. Orlando, FL.
- Shin B.S., Tao J., Klette R. A superparticle filter for lane detection. Pattern Recognit. 2015;48:3333–3345.
- Sternlund S., Strandroth J., Rizzi M., Lie A., Tingvall C. The effectiveness of lane departure warning systems—a reduction in real-world passenger car injury crashes. Traffic Injury Prev. 2017;18:225–229. doi: 10.1080/15389588.2016.1230672.
- Sudhakaran A. Eindhoven University of Technology; 2017. Lateral Control Using a MobilEye Camera for Lane Keeping Assist. Ph.D. thesis.
- Terrell T., Simpson R. Two-dimensional FIR filter for digital image processing. J. Inst. Electr. Radio Eng. 1986;56:103–106.
- Vacek S., Schimmel C., Dillmann R. EMCR. 2007. Road-marking analysis for autonomous vehicle guidance; pp. 1–6.
- Viswanath P., Chitnis K., Swami P., Mody M., Shivalingappa S., Nagori S., Mathew M., Desappan K., Jagannathan S., Poddar D., Jain A., Garud H., Appia V., Mangla M., Dabral S. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 2016. A diverse low cost high performance platform for advanced driver assistance system (ADAS) applications; pp. 819–827.
- Wegman F. The future of road safety: a worldwide perspective. IATSS Res. 2017;40:66–71.
- Ye Y.Y., Hao X.L., Chen H.J. Lane detection method based on lane structural analysis and CNNs. IET Intell. Transp. Syst. 2018;12:513–520.
- Yoshida H., Omae M., Wada T. Toward next active safety technology of intelligent vehicle. J. Robot. Mechatron. 2015;27:610–616.