. 2022 Apr 20;81(23):33549–33567. doi: 10.1007/s11042-022-12919-8

Geometric invariant features for the detection and analysis of vehicle

Mallikarjun Anandhalli 1, A Tanuja 1, Pavana Baligar 2
PMCID: PMC9018969  PMID: 35463223

Abstract

The intelligent traffic management system (ITS) is an active research area, and vehicle detection plays a major role in traffic analysis. In this paper, an approach to detecting vehicles is proposed based on features posed by the vehicle. Foreground pixels are extracted from the image by histogram-based foreground segmentation. After segmentation, Hu-Moment and Eigen value features are extracted and normalized, and classifiers are trained with them. Experiments are conducted on different benchmark datasets, and the results are analysed in terms of overall classification accuracy. The results of the algorithm are satisfactory and acceptable in real time.

Keywords: Vehicle detection, Hu-Moments, Eigen values

Introduction

Traffic congestion is one of the major issues of this era. The COVID-19 pandemic, which has hit all around the world, has pushed the public to use private vehicles over public transportation. According to recent research in the COVID-19 situation, automobile companies have grown by at least 5-10% due to the rapid rise in vehicle sales, which has led to traffic congestion.

To deal with traffic congestion, vehicle detection [9] and tracking analysis [2] are among the most important tasks to be considered. Many algorithms have addressed the issues of vehicle detection and tracking under varying environmental conditions.

The motivation behind the proposed work is to understand the flow of traffic. The traffic control department needs a traffic model to address the congestion that occurs at peak periods.

Research has been carried out in other domains, such as watermarking and sign detection, considering different invariants [12-14], which motivated the use of invariants for the detection of vehicles.

The main contribution of the proposed system is the detection of vehicles based on shape descriptors with rule-based classifiers to obtain the utmost accuracy. The use of Eigen vectors along with the classifiers in vehicle detection reveals the intrinsic nature of the training samples and allows them to be detected at a fine level.

The methodology involves extracting features using a shape descriptor and the contour of the input image, thereby deciding whether the given input is a vehicle or a non-vehicle. These features, along with a rule-based classifier, help decide whether an object is a vehicle.

A vehicle has a defined set of orientations with reference to pattern, shape, corners, etc. Thus, considering the shape of the vehicle in particular, contours of the input training samples are extracted. Hu-Moments are used to define the shape feature vector of the input training samples because they characterize and quantify the shape of a vehicle. After calculating the Hu-Moments, a log transform is applied to them to suppress noise and spikiness in the extracted features.

The Hu-Moments [10] and Eigen values [16] are used as shape descriptors to extract features from the region of interest. A classifier is trained with the extracted features for the classification of vehicle and non-vehicle images. A detailed comparison with the state of the art is carried out to check the robustness of the proposed system.

The outline of the paper is structured as follows: Section 2 presents the literature survey with a detailed explanation of the work carried out, Section 3 the proposed methodology, Section 4 the results and discussion, and Section 5 the conclusion and future scope.

Literature Survey

The following section reviews research in the field of vehicle detection aimed at solving real-time traffic problems by considering car features. Existing algorithms, techniques and merits of detection are discussed.

Cao et al. [5] and [4] determined HOG features of vehicles and trained SVM classifiers. To improve the accuracy of the algorithm, the authors proposed an LPS-HOG feature method and trained these features with a linear SVM. The algorithm still has scope for improvement with respect to detection accuracy.

Tuermer et al. [18] segmented the road region to limit the search for HOG features to the road region only, using a disparity map. The limitation of the work is that it needs higher resolution to improve accuracy.

Gupte et al. [7] proposed a system for classifying cars and non-cars. The steps involved are detection, tracking and classification; vehicle detection is based on the dimensions of the vehicle. Besides classification, the methodology also provides other parameters such as the velocity and location of the vehicle. The algorithm has scope to classify vehicles more precisely.

Lai et al. [11] proposed a methodology for detecting vehicles by considering vehicle length. The work is based on virtual loop assignment to identify the type of vehicle. Vehicles are classified into four categories, namely motorcycle, fire truck, van and sedan, and the analysis of each vehicle type is represented with a 1D signature chart. The authors considered the length of the vehicle as the only parameter for classifying vehicle type.

Yun Wei et al. [19] proposed a two-step detection algorithm combining Haar and Histogram of Oriented Gradients (HOG) features. The target region is extracted using the strong descriptive ability of the HOG feature after segmenting the region of interest based on Haar features, with a detection accuracy of 97.96%.

Hassaballah et al. [8] proposed a methodology for vehicle detection based on local binary patterns (LBP). The LBP is used to extract the local structure of the vehicle, and the histogram generated in concurrence with the LBP is used to measure the dissimilarity between regions of training images. The histogram fit is generated using random forests clustering. Latent features obtained from clustering discriminative codebooks are used to search different LBP features of random regions using the Chi-square dissimilarity measure. The map obtained in the training phase is used to determine the vehicle location in test images.

Achyunda Putra et al. [15] proposed a methodology where region-of-interest features are extracted using the Histogram of Oriented Gradients. A KNN classifier is used to classify car versus non-car with an accuracy of 84%. However, the authors compared their work only with an SVM classifier.

Aleena Ajay et al. [1] proposed a methodology to detect small vehicles in aerial images. In the training phase, features are extracted using singular value decomposition (SVD) and histograms of oriented gradients (HOG) and used to train an SVM classifier to distinguish vehicular from non-vehicular images. In the detection phase, features are extracted from the region of interest and vehicle detection is performed. A comparison between the SVD and HOG methods shows accuracies of 63.31% and 63.23%, respectively.

Qian Tian et al. [17] proposed a model using Hu features of vehicle and non-vehicle images. Adaptive thresholding and edge detection are carried out prior to the extraction of the Hu features, and an SVM is trained with the features for vehicle detection, achieving an accuracy of 88%. The authors believe that greater accuracy can be reached by finding the best match between the shape descriptor and the classifier.

Proposed Methodology

The research is carried out in five phases: 1. histogram-based foreground segmentation, 2. pre-processing, 3. feature extraction, 4. classification, and 5. prediction, as shown in Fig. 1. The architecture of the proposed method is illustrated in Fig. 2.

Fig. 1. Flowchart of proposed system

Fig. 2. Architecture of proposed system

Histogram Based Foreground Segmentation

To obtain the region of interest, the proposed system nullifies the maximum intensity values in the histogram so that only the region of interest remains.

The images from the dataset are split into three channels, i.e. the R, G and B channels, and their histograms are computed to obtain the foreground of the image.

The mathematical expression defining a foreground pixel based on the histogram is given by (1):

P_f = \begin{cases} P_x(n), & \text{if } f(n) < 0.6\, f_{\max} \\ 0, & \text{if } f(n) \ge 0.6\, f_{\max} \end{cases}   (1)

where P_x(n) is the probability of pixel value n, f(n) denotes the total number of occurrences of pixel value n, and f_max is the maximum occurrence count calculated from the histogram.

After successful trials of histogram-based foreground segmentation on different images, it was found that the foreground is segmented without much loss of foreground pixels. As defined in (1), the segmentation is performed using the threshold 0.6 · f_max.
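As an illustration, the thresholding of (1) can be sketched in plain Python for a single channel; the 4 × 4 toy channel and the helper name below are ours, not from the paper:

```python
from collections import Counter

def histogram_foreground(channel, ratio=0.6):
    """Zero out pixels whose intensity occurs at least ratio * f_max
    times in the channel histogram (Eq. 1); those dominant, high-frequency
    intensities are treated as background."""
    flat = [p for row in channel for p in row]
    hist = Counter(flat)                 # f(n): occurrences of intensity n
    f_max = max(hist.values())           # maximum occurrence count
    keep = {n for n, f in hist.items() if f < ratio * f_max}
    return [[p if p in keep else 0 for p in row] for row in channel]

# Toy 4x4 channel: intensity 200 dominates (background), 50 is sparse (foreground).
channel = [
    [200, 200, 200, 200],
    [200,  50, 200, 200],
    [200,  50,  50, 200],
    [200, 200, 200, 200],
]
fg = histogram_foreground(channel)  # the 200s are nullified, the 50s survive
```

The same function would be applied to each of the R, G and B channels in turn.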

Pre-processing

The images are categorized into two groups based on the presence or absence of a vehicle: if a vehicle is present the image is considered positive, otherwise it is considered negative. All images are converted to binary images for a faster computation rate; Canny edge detection is then applied and contours are extracted.

Feature Extraction

After the pre-processing stage, each image is imposed on an 8 × 8 grid, and features are extracted from each cell using Hu-Moments and Eigen values.

Hu-Moments

Hu-Moments are a widely used technique in image processing and pattern recognition. With the aid of algebraic invariant theory, the features of an object do not change under the following operations:

  • Translation (position change)

  • Rotation (orientation change)

  • Scaling (size change)

  • Reflection

Using central moments, better object-matching results can be achieved because they are invariant to translation. The Hu moments computed from them are invariant to reflection, position, size and orientation changes. The central moments M_ji are calculated as:

M_{ji} = \sum_{a,b} f(a,b)\,(a - a')^{j}\,(b - b')^{i}   (2)

(a′, b′) being the centre of mass, calculated as:

a' = M_{10} / M_{00}   (3)
b' = M_{01} / M_{00}   (4)

Hu invariants can be derived accordingly,

H_1 = P_{20} + P_{02}   (5)
H_2 = (P_{20} - P_{02})^2 + 4P_{11}^2   (6)
H_3 = (P_{30} - 3P_{12})^2 + (3P_{21} - P_{03})^2   (7)
H_4 = (P_{30} + P_{12})^2 + (P_{21} + P_{03})^2   (8)
H_5 = (P_{30} - 3P_{12})(P_{30} + P_{12})[(P_{30} + P_{12})^2 - 3(P_{21} + P_{03})^2] + (3P_{21} - P_{03})(P_{21} + P_{03})[3(P_{30} + P_{12})^2 - (P_{21} + P_{03})^2]   (9)
H_6 = (P_{20} - P_{02})[(P_{30} + P_{12})^2 - (P_{21} + P_{03})^2] + 4P_{11}(P_{30} + P_{12})(P_{21} + P_{03})   (10)
H_7 = (3P_{21} - P_{03})(P_{30} + P_{12})[(P_{30} + P_{12})^2 - 3(P_{21} + P_{03})^2] - (P_{30} - 3P_{12})(P_{21} + P_{03})[3(P_{30} + P_{12})^2 - (P_{21} + P_{03})^2]   (11)

The contour image is superimposed with the 8 × 8 grid as shown in Fig. 3. Hu moments are calculated on each cell of the grid for both vehicle and non-vehicle regions, generating seven Hu-moment values per cell; these values are added to obtain one single value for each cell of the 8 × 8 matrix using (12).

M = H_1 + H_2 + H_3 + H_4 + H_5 + H_6 + H_7   (12)

The single Hu-Moment value is calculated for each cell and represented in matrix form for both vehicle and non-vehicle images, as shown in Tables 1 and 2.

Fig. 3. 8 × 8 matrix imposition on the extracted edge features of an image

Table 1.

Normalized Hu-Moment values of vehicular Image

10.2349 5.93348 10.445 11.0871 16.1451 8.2234 8.2234 9.72738
6.85407 4.63136 7.99248 6.13923 6.8358 3.62077 6.72236 8.77499
5.02306 0 0.49044 11.9779 4.83834 8.93275 11.7424 5.20786
6.15424 0 0 0 1.255 8.76542 6.40891 5.7371
11.3367 9.10404 2.85066 0 0 6.35331 0 8.2234
8.2234 0 5.52104 8.32002 8.28637 0.58106 0 8.2234
8.2234 0 0 0 0 0 0 8.2234
9.72738 8.2234 8.2234 8.2234 8.2234 8.2234 8.2234 9.22046
Table 2.

Normalized Hu-Moment values of non- vehicular Image

6.4633 2.8918 4.372 8.8492 8.2234 5.0864 3.7042 4.3601
0 0 12.206 7.6572 4.818 3.8534 0 4.4935
0 0 2.4362 8.3229 7.3826 3.6353 0 0
0 0 3.7173 5.931 14.731 2.5973 0 0
3.9908 4.3136 4.0735 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0.8159
6.4548 7.5016 3.9301 2.9164 3.3009 5.2215 7.3927 5.9606

Eigen Values

Eigen values and Eigen vectors are necessarily computed on square matrices. Eigen values may take the value zero, but Eigen vectors are always non-zero.

Let the image matrix E be of size (m × m).

If the product Ev is a scalar multiple u = ϕv of v, then v is an eigenvector of the linear transformation E, and ϕ is the scalar eigenvalue corresponding to that eigenvector.

Ev = u = \phi v   (13)

Equation (13) can be written as given in (14).

(E - \phi I)\,v = 0   (14)

In (14), non-zero solutions exist only if the determinant of the matrix (E − ϕI) is zero, where I is the (m × m) identity matrix. Therefore, the eigenvalues of E are the values of ϕ that satisfy (15).

|E - \phi I| = 0   (15)

Equation (15) is known as the secular or characteristic equation of E. The fundamental theorem of algebra tells us that the characteristic polynomial of an (m × m) matrix E, being of degree m, factors into m linear terms as in (16):

|E - \phi I| = (\phi_1 - \phi)(\phi_2 - \phi) \cdots (\phi_m - \phi)   (16)
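For the 2 × 2 case, the characteristic equation (15) reduces to a quadratic, which the sketch below solves directly (the function name is ours); the complex roots it can return are exactly the values the proposed method later normalizes away:

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of E = [[a, b], [c, d]] from |E - phi*I| = 0,
    i.e. phi^2 - (a + d)*phi + (a*d - b*c) = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

diag = eig2(2, 0, 0, 3)      # diagonal matrix: eigenvalues are the diagonal entries
rot = eig2(0, -1, 1, 0)      # 90-degree rotation: a purely imaginary complex pair
```

A general m × m matrix would use an iterative solver rather than the closed form, but the characteristic-equation view is the same.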

The Eigen values are calculated for each cell of the 8 × 8 grid. These values are normalized to eliminate complex values and to obtain a single value per cell. The resulting Eigen values of an image are shown in Table 3.

Table 3.

Eigen values of vehicular image

10.2 20.4 (10.19999999999999 + 0j) 10.2 10.2 10.2 10.2 20.4
20.4 -3.41E-15 10.2 (20.40000000000003 + 0j) (20.39999999999997 + 0j) 0 10.2 10.2
20.4 0 0 (30.600000000000016 + 0j) 20.4 10.2 40.8 20.4
30.6 0 0 0 0 10.2 (30.599999999999973 + 0j) (20.39999999999998 + 0j)
10.2 61.2 0 0 0 20.4 0 10.2
10.2 0 0 10.2 10.2 10.2 0 10.2
10.2 0 0 0 0 0 0 10.2
10.2 20.4 10.2 10.2 10.2 10.2 10.2 10.2

Hu-Moments represent suitable shape features that are invariant to scaling, translation and rotation, while Eigen values are a compact form of image features that represent the principal components of an image matrix. The uniqueness of these shape descriptors motivated the experiments on vehicle detection.

Classification

The Hu and Eigen features extracted from the images are used to classify vehicle and non-vehicle images with supervised learning algorithms. The train-test split technique, which helps in evaluating the performance of the classifiers, is applied in the ratio 80:20.

Support Vector Machine [SVM]

The Support Vector Machine is a supervised machine learning algorithm widely used in face detection, image classification and handwriting recognition, with high accuracy compared to traditional query-based refinement.

The aim of the SVM algorithm is to determine a hyperplane that classifies the sample data in an N-dimensional space, where N is the number of features. A hyperplane is drawn to separate the two sets of data points; the main aim is to determine the plane with the maximum distance between the points of the two sets, i.e. the maximum margin. Hyperplanes act as decision boundaries that classify the sample data points, and an unoptimized decision boundary may lead to greater misclassification of new data.

The dimension of the hyperplane depends on the number of features: as the number of features increases, so does the dimension of the hyperplane. Support vectors are the sample data points that lie closest to the hyperplane and control its orientation; these points are used to build the SVM model.

K Nearest Neighbor

KNN is mainly used in the field of data mining. A new object is classified based on its similarity to the nearest objects: it is assigned the most common class among its nearest neighbours, identified using the Euclidean distance as the measure. The best results are obtained with larger sample sizes, and the algorithm can also be used for regression-based applications.
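The neighbour vote described above can be sketched in a few lines of plain Python; the toy 2-D features and labels below are illustrative stand-ins for the 64 per-cell descriptor values:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k Euclidean-nearest
    training samples. `train` is a list of (feature_vector, label)."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2-D features standing in for the Hu/Eigen descriptors.
train = [((0.0, 0.0), "non-vehicle"), ((0.2, 0.1), "non-vehicle"),
         ((0.1, 0.3), "non-vehicle"), ((5.0, 5.2), "vehicle"),
         ((5.1, 4.9), "vehicle"), ((4.8, 5.0), "vehicle")]
```

A query near the vehicle cluster, e.g. `knn_predict(train, (5.0, 5.0))`, is voted "vehicle" by its three nearest neighbours.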

Logistic Regression

Logistic regression is a supervised learning method mainly applied to binary classification.

Logistic regression has four variants for assessing the relationship between one or more predictor variables and a response variable:

  • Binary logistic regression – the number of categories is 2 and the characteristic has 2 levels. It is used when the dependent variable is divided into two branches; the method is non-parametric and the error term does not vary with the independent variable. E.g. Pass or Fail, Yes or No, High or Low.

  • Ordinal logistic regression – the number of categories is 3 or more and the levels have a natural ordering. E.g. medical condition (Critical, Serious, Stable, Good).

  • Nominal logistic regression – the number of categories is 3 or more and the levels have no natural ordering. E.g. colour (Red, Green, Blue), subjects (Science, Maths and Art).

  • Poisson regression – the response is the number of times an event occurs. E.g. 0, 1, 2, 3, 4, 5, ..., etc.
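For the binary case used here (vehicle vs. non-vehicle), the model reduces to a sigmoid over a weighted sum of the features; a minimal sketch, with illustrative weights rather than fitted ones:

```python
import math

def sigmoid(z):
    # Maps the linear score z to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, features, threshold=0.5):
    """Label 1 (vehicle) when the estimated probability crosses the threshold."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if sigmoid(z) >= threshold else 0
```

In practice the weights would be fitted on the 80% training split, e.g. with sklearn's `LogisticRegression`, as in the experiments.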

Gradient Boosting Classifiers [XBoost]

It is also called Extreme Gradient Boosting. The classifier is widely used because of its scalability, its outstanding performance compared with other algorithms, and its low computational complexity.

A gradient boosting classifier uses decision trees to create a strong prediction model, combining weak learning models to build a strong one.

In the XBoost classifier algorithm, many models are trained sequentially. Each model reduces the loss function of the whole system using the gradient descent method. To provide an accurate estimate of the response, the learning procedure fits new models consecutively; the main idea behind the XBoost algorithm is to build base learners that correlate with the negative gradient of the loss function.

Given the aforementioned unique features of these classifiers, all four are included in the proposed system to evaluate the performance of vehicle detection.

Prediction

During this phase, histogram-based foreground segmentation is performed on images that were not part of the training phase. The pre-processing step is applied to the segmented images, and shape features are extracted using Hu-Moments and Eigen values. The classification algorithms then classify car versus non-car based on the extracted features.

Results and Discussions

Experiments are conducted on an i3 processor with 8 GB RAM and implemented in Python using libraries such as OpenCV, PIL and sklearn. The algorithm is trained on the datasets; some of the images are shown in Fig. 4. The images are segmented using the histogram and pre-processed as described above. The pre-processed images are used to extract Hu and Eigen value features, which are passed to the different classifiers to check the classification performance of the shape descriptors. In the testing phase, images not included in the training phase are used to evaluate how effectively the algorithm distinguishes vehicular from non-vehicular images.

Fig. 4. Fraction of positive and negative image dataset

Performance analysis

Precision, Recall and F1-score

Precision, Recall and F1-score are the metrics used to measure the performance of all classifiers and determine the best model. A classification report is generated for each classifier to compare precision, recall and F1-score.

Precision is the ratio of accurately predicted positive images to the sum of true positive and false positive images. A high precision value is a sign of a low false positive rate, which indicates the model is performing well.

Precision = \frac{TP}{TP + FP}   (17)

Recall is the ratio of accurately predicted positive images to the actual positive images, obtained as the sum of true positives and false negatives. It indicates how many of the images labelled positive actually are positive; if the measure is above 0.5, the model is said to be good.

Recall = \frac{TP}{TP + FN}   (18)

The F1-score is necessary when the distribution of positive and negative classes is uneven. It is a better metric when a balance between recall and precision is needed; a score above 0.5 is an indication of a good model.

F1\text{-}score = \frac{2\,(Recall \times Precision)}{Recall + Precision}   (19)
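The three definitions in (17)-(19) are a few lines of Python; as a consistency check, plugging the SVM row of Table 4 (precision 0.86, recall 0.92) into (19) reproduces its reported F1-score of 0.89:

```python
def precision(tp, fp):
    return tp / (tp + fp)           # Eq. (17)

def recall(tp, fn):
    return tp / (tp + fn)           # Eq. (18)

def f1_score(p, r):
    return 2 * p * r / (p + r)      # Eq. (19)

# Consistency check against the SVM row of Table 4:
svm_f1 = f1_score(0.86, 0.92)       # ~0.889, i.e. 0.89 after rounding
```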

Table 4 gives a comparative analysis of predicting vehicles based on the features extracted with Hu-Moments and classified by the different algorithms. Experimental results show a precision rate ranging from 84 to 88%, an F1-score in the range 0.89-0.92, and a recall value in the range 0.92-0.96. The recall and F1-score are above 0.5, which is an indication of a good model.

Table 4.

Classifiers metric results of Hu-Moments based detection

Classifier Precision Recall F1-Score
SVM 0.86 0.92 0.89
KNN 0.84 0.94 0.89
Logistic regression 0.87 0.96 0.91
Gradient booster 0.88 0.96 0.92

Table 5 shows the precision of the different classifiers when features are extracted using Eigen values. The experimental results show that Eigen features give good results, in the range 80-97%. It can be observed that the Gradient Booster gives better results than the other classifiers on both Eigen values and Hu-moments.

Table 5.

Classifier metric results of Eigen value based detection

Classifier Precision Recall F1-Score
SVM 0.8 0.93 0.89
KNN 0.97 0.85 0.9
Logistic regression 0.8 0.97 0.89
Gradient booster 0.97 0.99 0.98

ROC and Accuracy

The Receiver Operating Characteristic (ROC) curve is an important metric used to visualize the performance of a classification model using predicted probabilities. The ROC curve plots the true positive rate (TPR) on the y-axis against the false positive rate (FPR) on the x-axis, and is used to evaluate binary classification problems.

The TPR gives the proportion of positive classes that are predicted correctly:

TPR\ (True\ Positive\ Rate) = \frac{TP}{TP + FN}   (20)

The FPR gives the proportion of negative classes that are incorrectly predicted as positive:

FPR\ (False\ Positive\ Rate) = \frac{FP}{TN + FP}   (21)

The random-classifier diagonal of the ROC is the baseline for measuring classifier performance: a curve above the baseline indicates good performance, and one below it indicates bad performance.

The area under the curve (AUC) of the ROC gives the degree of separability of the classes by the classifiers: the higher the area under the curve, the better the prediction of classes. If the curve is given by y = g(x) between the two points x = c and x = d, then

AUC = \int_{c}^{d} g(x)\,dx   (22)
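In practice (22) is evaluated numerically over the measured (FPR, TPR) points; a trapezoidal sketch (the function name is ours):

```python
def roc_auc(points):
    """Trapezoidal area under an ROC curve given (FPR, TPR) pairs."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

random_auc = roc_auc([(0, 0), (1, 1)])           # diagonal baseline: 0.5
ideal_auc = roc_auc([(0, 0), (0, 1), (1, 1)])    # perfect separation: 1.0
```

The two endpoints above are exactly the baseline and ceiling against which the classifiers' curves in Figs. 5 and 6 are judged.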

Figures 5 and 6 show the area under the curve for all four classifiers on Hu-Moment and Eigen value features, which determines the performance of the proposed system. The area under the curve is above 0.5 in all cases, as expected.

Fig. 5. ROC of classifiers based on Hu-Moments

Fig. 6. ROC of classifiers based on Eigen values

Table 6 shows the implementation results with accuracy based on Hu-Moments and Eigen values; Eigen-value-based prediction of test samples reaches up to 97% accuracy. From Table 6 it can be stated that the Gradient Booster classifier with Eigen-value-based feature extraction gives satisfactory results.

Table 6.

Accuracy comparison of classifiers

Classifier Hu-Moment accuracy Eigen value accuracy
SVM 0.81 0.92
KNN 0.81 0.94
Logistic regression 0.85 0.96
Gradient booster 0.86 0.97

Comparative Study

Table 7 gives observations on the proposed system and the other methodologies described in the literature survey. This paper uses Hu-Moment and Eigen-value-based feature extraction to predict vehicle or non-vehicle images using different classifiers. The dataset consists of 10,000 images, and the results of all four classifiers are satisfactory. The best results are obtained with the Gradient Booster classifier on the Eigen value shape descriptor.

Table 7.

Comparative study between proposed system and state of art

Sl. Author Observations Accuracy in %
1 Qian Tian et al. [17] Feature extraction: Hu-Moments; Classifier: SVM 85-88
2 Shijie Dai et al. [6] Feature extraction: Tchebichef Moments; Classifier: SVM 92
3 A. Ajay et al. [1] Feature extraction: SVD and HOG; Classifier: SVM SVD: 63.31, HOG: 63.23
4 Yun Wei et al. [19] Feature extraction: Haar and HOG; Classifier: SVM 97.96
5 Proposed System Feature extraction: Hu-Moments and Eigen values; Classifiers: SVM, KNN, Logistic regression, Gradient Booster Hu-Moments (XBoost): 86; Eigen values (XBoost): 97

Comparison with benchmark dataset

The proposed method is tested on benchmark datasets such as GTI [3], CaltechGraz and Stanford [20] to evaluate its performance. The detailed study is shown in Table 8. ROC curves of Hu- and Eigen-value-based detection with the different classifiers on the GTI dataset are plotted in Figs. 7 and 8; Figs. 9 and 10 give the detection accuracy on the CaltechGraz dataset, and Figs. 11 and 12 show the performance of the proposed method on the Stanford dataset. As the figures show, the proposed method gives satisfactory results with the Eigen value shape descriptor classified using XBoost.

Table 8.

Comparison with benchmark dataset

Dataset No. of images Classifier Hu- Moments (accuracy) Eigen (accuracy)
GTI 7325 SVM 0.71 0.63
KNN 0.74 0.75
Logistic regression 0.74 0.57
XBoost 0.78 0.83
CaltechGraz 675 SVM 0.90 0.86
KNN 0.90 0.95
Logistic regression 0.94 0.74
XBoost 0.70 0.74
Stanford 8144 SVM 0.76 0.84
KNN 0.83 0.68
Logistic regression 0.85 0.95
XBoost 0.70 0.74
Fig. 7. ROC of Hu-moments on GTI dataset

Fig. 8. ROC of eigen values on GTI dataset

Fig. 9. ROC of Hu-moments on CaltechGraz dataset

Fig. 10. ROC of eigen values on CaltechGraz dataset

Fig. 11. ROC of Hu-moments on Stanford dataset

Fig. 12. ROC of eigen values on Stanford dataset

Conclusion and Future Enhancement

The proposed system focused on a comparative analysis of vehicle detection using Hu-Moment and Eigen-value-based feature extraction with different classifiers. The Eigen values with the Gradient Booster classifier achieve 97% accuracy, which is satisfactory. However, better accuracy can be achieved with a larger dataset, and the method can be enhanced to detect vehicles in video.

Acknowledgements

The research is supported by the Science and Engineering Research Board under the Start-up Research Grant Program in Engineering Science, File No. SERB/SRG/2019/002277, and is gratefully acknowledged.

Biographies

Mallikarjun Anandhalli

is currently working as Associate Professor in the Department of E & C, KLSGIT, Belagavi. He completed his B.E. in 2011, M.Tech in 2013 and Ph.D. in 2018, all from VTU, Belagavi. He has 5 years of research experience and 2 years of academic experience, and is currently the principal investigator for the Smart Intelligent Traffic Management System project funded by SERB, India. His research interests include computer vision, image and video processing, and machine learning. He has published a good number of papers in peer-reviewed journals in the area of image and video processing.

A. Tanuja

is currently working as a Research Associate on the Smart Intelligent Traffic Management System project funded by SERB, India. She completed her M.Tech in 2016. Her research interests are machine learning, computer vision and deep learning.

Pavana Baligar

received a B.E. degree in Computer Science Engineering from SKSVM Agadi College of Engineering & Technology, India in 2013 and an M.Tech degree in Computer Science from AMC College of Engineering, Bangalore in 2015. Her research interests are speech emotion recognition, image processing, machine learning and deep learning.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Mallikarjun Anandhalli, Email: malliarjun71@gmail.com.

A. Tanuja, Email: a.tanuja3@gmail.com

References

  • 1.Ajay A, Sowmya V, Soman KP (2017) Vehicle detection in aerial imagery using eigen features. In: 2017 international conference on communication and signal processing (ICCSP), Chennai, pp 1620–1624
  • 2.Anandhalli M, Baligar VP. A novel approach in real-time vehicle detection and tracking using Raspberry Pi. Alexandria Engineering Journal. 2018;57(3):1597–1607. doi: 10.1016/j.aej.2017.06.008. [DOI] [Google Scholar]
  • 3.Arróspide J, Salgado L, Nieto M (2012) Video analysis based vehicle detection and tracking using an MCMC sampling framework. EURASIP Journal on Advances in Signal Processing, vol. 2012, Article ID 2012:2
  • 4.Cao X, Wu C, Lan J, Yan P, Li X. Vehicle detection and motion analysis in low-altitude airborne video under urban environment. IEEE Transactions on Circuits and Systems for Video Technology. 2011;21(10):1522–1533. doi: 10.1109/TCSVT.2011.2162274. [DOI] [Google Scholar]
  • 5.Cao X, Wu C, Yan P, Li X (2011) Linear SVM classification using boosting HOG features for vehicle detection in low-altitude airborne videos. In: 2011 18th IEEE international conference on image processing, Brussels, Belgium, pp 2421–2424
  • 6.Dai S, Huang H, Gao Z, Li K, Xiao S (2009) Vehicle-logo Recognition Method Based on Tchebichef Moment Invariants and SVM. 2009 WRI World Congress on Software Engineering, Xiamen, China, pp 18–21
  • 7.Gupte S, Masoud O, Martin RFK, Papanikolopoulos NP. Detection and classification of vehicles. IEEE Trans Intell Transport Syst. 2002;3(1):37–47. doi: 10.1109/6979.994794. [DOI] [Google Scholar]
  • 8.Hassaballah M, Kenk MA, El-Henawy IM. Local binary pattern-based on-road vehicle detection in urban traffic scene. Pattern Anal Applic. 2020;23:1505–1521. doi: 10.1007/s10044-020-00874-9. [DOI] [Google Scholar]
  • 9.Hassaballah M, Kenk MA, Muhammad K, Minaee S Vehicle detection and tracking in adverse weather using a deep learning framework. In: IEEE transactions on intelligent transportation systems
  • 10.Hu M-K. Visual pattern recognition by moment invariants. IRE Trans Inform Theory. 1962;8(2):179–187. doi: 10.1109/TIT.1962.1057692. [DOI] [Google Scholar]
  • 11.Lai AHS, Yung NHC. Vehicle-type identification through automated virtual loop assignment and block-based direction-biased motion estimation. IEEE Trans Intell Transport Syst. 2000;1(2):86–97. doi: 10.1109/6979.880965. [DOI] [Google Scholar]
  • 12.Li L, Li S, Abraham A, Pan J-S. Geometrically invariant image watermarking using Polar Harmonic Transforms. Inform Sci. 2012;199:1–19. doi: 10.1016/j.ins.2012.02.062. [DOI] [Google Scholar]
  • 13.Li L, Pan J, Yuan X. High capacity watermark embedding based on invariant regions of visual saliency. IEICE Trans Fundam Electron Commun Comput Sci. 2011;94-A:889–893. doi: 10.1587/transfun.E94.A.889. [DOI] [Google Scholar]
  • 14.Li L, Yuan X, Lu Z, Pan J-S. Rotation invariant watermark embedding based on scale-adapted characteristic regions. Inform Sci. 2010;180(15):2875–2888. doi: 10.1016/j.ins.2010.04.009. [DOI] [Google Scholar]
  • 15.Putra A, Al Islama F, Fitri U, Wayan Firdaus M. HOG feature extraction and KNN classification for detecting vehicle in the highway. IJCCS (Indonesian Journal of Computing and Cybernetics Systems), [S.l.] 2020;14(3):231–242. doi: 10.22146/ijccs.54050. [DOI] [Google Scholar]
  • 16.Sharma P, Aneja A, Kumar A, Kumar S. Face recognition using neural network and eigen values with distinct block processing. Int J Sci Eng Res. 2011;2(5):ISSN 2229–5518. [Google Scholar]
  • 17.Tian Q, Zhong T, Li H (2013) A new method for vehicle detection using MexicanHat wavelet and moment invariants. SiPS 2013 Proceedings, Taipei, Taiwan, pp 289–294
  • 18.Tuermer S, Kurz F, Reinartz P, Stilla U. Airborne vehicle detection in dense urban areas using HoG features and disparity maps. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2013;6(6):2327–2337. doi: 10.1109/JSTARS.2013.2242846. [DOI] [Google Scholar]
  • 19.Wei Y, Tian Q, Guo J, Huang W, Cao J (2019) Multi-vehicle detection algorithm through combining Harr and HOG features. Mathematics and Computers in Simulation, vol 155
  • 20.Krause J, Stark M, Deng J, Fei-Fei L (2013) 3D object representations for fine-grained categorization. In: 4th IEEE Workshop on 3D Representation and Recognition (3dRR-13), at ICCV 2013, Sydney, Australia

Articles from Multimedia Tools and Applications are provided here courtesy of Nature Publishing Group
