Sensors. 2019 Jun 10;19(11):2629. doi: 10.3390/s19112629

Table 6.

Advantages and limitations of different classification models.

k-Nearest Neighbor

  Advantages:
  - Easy to understand and easy to implement
  - Training is very fast
  - Robust to noisy training data
  - Particularly well suited to multimodal classes

  Limitations:
  - Sensitive to the local structure of the data
  - High memory requirements, since the whole training set must be stored
  - A lazy supervised learner: all computation is deferred to prediction time, so classification runs slowly
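
As an illustration of the lazy-learning trade-off, here is a minimal k-NN sketch assuming scikit-learn and synthetic toy data (neither is from the paper): fit() merely stores the training set, which is why training is fast while prediction is slow.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic toy data, for illustration only.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" just stores the data; all distance computation is
# deferred to predict/score, which is why k-NN runs slowly at test time.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```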

Neural Network

  Advantages:
  - Handles noisy inputs efficiently
  - High computational rate
  - Its parallel nature lets it keep working even when individual elements fail

  Limitations:
  - Semantically poor: the learned weights are hard to interpret
  - Choosing an appropriate network architecture is difficult
  - Large networks require long processing times
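
A minimal multilayer-perceptron sketch with scikit-learn's MLPClassifier (an assumed choice; the paper does not prescribe a library). The hidden_layer_sizes argument is exactly the architecture choice flagged as difficult above.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# hidden_layer_sizes encodes the architecture choice noted above;
# larger networks raise training time accordingly.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X, y)
print("training accuracy:", mlp.score(X, y))
```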

Gaussian Mixture Model

  Advantages:
  - Training is very fast
  - Performs well on data of different sizes and densities

  Limitations:
  - Results are not stable (they depend on the random initialization)
  - Sensitive to violations of its distributional assumptions
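
A minimal Gaussian-mixture sketch with scikit-learn and synthetic data (both assumed for illustration). Refitting with different random seeds can change the result, which is the instability noted above; n_init softens it by keeping the best of several initializations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),   # tight cluster
               rng.normal(5, 2, (150, 2))])  # looser, larger cluster

# n_init=5 refits from five random starts and keeps the best,
# softening the sensitivity to initialization.
gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(X)
print("component weights:", gmm.weights_)
```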

Hidden Markov Model

  Advantages:
  - Convenient for modeling sequential data
  - Learning can take place directly from raw data

  Limitations:
  - Often has a large number of unstructured parameters
  - Unable to capture higher-order correlations
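
A minimal HMM sketch using the third-party hmmlearn package (an assumption; the paper names no library). The model is fitted directly on raw, concatenated observation sequences, matching the "learning from raw data" advantage.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party package, assumed installed

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))  # two concatenated raw observation sequences
lengths = [150, 150]           # lengths of the individual sequences

# Fit directly on the raw sequential data; note that the parameter count
# (transition matrix, per-state means/covariances) grows with n_components.
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
hmm.fit(X, lengths)
print(hmm.predict(X[:150])[:10])  # most likely hidden states
```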

Decision Tree

  Advantages:
  - Requires little data preparation
  - Nonlinear relationships between parameters do not affect tree performance
  - Easy to interpret and explain
  - Performs well on large data sets in a short time

  Limitations:
  - Trees can become complex
  - The same sub-tree may be duplicated along different paths
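
A minimal decision-tree sketch with scikit-learn (assumed). export_text prints the fitted tree as if-then rules, which is what makes trees easy to interpret; unscaled features are used deliberately, since little data preparation is needed.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)  # raw, unscaled features are fine

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted model prints as human-readable if-then rules,
# illustrating why decision trees are easy to interpret and explain.
print(export_text(tree))
```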

Support Vector Machine

  Advantages:
  - Produces very accurate classifiers
  - Less over-fitting; robust to noise
  - Especially popular in text classification, where very high-dimensional spaces are the norm

  Limitations:
  - Memory-intensive
  - Requires both positive and negative examples
  - A good kernel function must be selected
  - Inherently a binary classifier; multi-class problems must be decomposed into binary ones, e.g., pairwise (one-vs-one) or one-class-against-all-others (one-vs-rest) classifications
  - Numerical stability problems can arise when solving the constrained quadratic programming (QP) problem
  - Computationally expensive, and thus slow to run
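
A minimal SVM sketch with scikit-learn's SVC (assumed). SVC handles the three-class iris problem by internally training pairwise (one-vs-one) binary classifiers, the decomposition described above, and the kernel argument is the kernel-selection choice.

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # three classes

# SVC is a binary classifier at heart; for multi-class data it
# internally trains one-vs-one (pairwise) binary classifiers.
svm = SVC(kernel="rbf", C=1.0)  # the kernel choice is a key decision
svm.fit(X, y)
print("training accuracy:", svm.score(X, y))
```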

Self-Organizing Map

  Advantages:
  - Simple and easy to understand
  - A topological, unsupervised clustering algorithm that works with nonlinear data sets
  - Its ability to project high-dimensional data onto a 1- or 2-dimensional map makes it especially useful for visualization and dimensionality reduction

  Limitations:
  - The algorithm is time-consuming
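
A minimal SOM sketch using the third-party MiniSom package (an assumption, not from the paper), mapping 4-dimensional inputs onto a 2-dimensional 10x10 grid; the iterative training loop is where the time consumption shows up.

```python
import numpy as np
from minisom import MiniSom  # third-party package, assumed installed

rng = np.random.default_rng(0)
X = rng.random((200, 4))  # 4-dimensional inputs

# Project the 4-D data onto a 2-D 10x10 grid of nodes.
som = MiniSom(10, 10, 4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, 1000)  # the iterative training is the slow part

print("winning grid node for the first sample:", som.winner(X[0]))
```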

k-Means

  Advantages:
  - Low complexity

  Limitations:
  - The number of clusters k must be specified in advance
  - Sensitive to noise and outlier data points
  - Clusters are sensitive to the initial assignment of centroids
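
A minimal k-means sketch with scikit-learn (assumed). n_clusters is the k that must be specified up front, and n_init reruns the algorithm from several initial centroid assignments to soften the initialization sensitivity.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# k (n_clusters) must be chosen in advance; n_init=10 restarts from
# ten different initial centroid assignments and keeps the best result.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("inertia:", km.inertia_)
```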

Fuzzy Measure

  Advantages:
  - Handles uncertainty efficiently
  - Properties are described by identifying various stochastic relationships
  - Allows a data point to belong to multiple clusters

  Limitations:
  - Output quality is poor without prior knowledge
  - The precision of the solution depends on the direction of the decision
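
To illustrate soft cluster membership, here is a tiny fuzzy c-means sketch in plain NumPy (fuzzy c-means is one common fuzzy clustering algorithm, chosen here as an assumption; the table's "Fuzzy Measure" entry is broader). Each point receives a membership degree in every cluster rather than a single hard label.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Tiny fuzzy c-means sketch; u[i, j] is point i's membership in cluster j."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))              # closer centers get higher membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
centers, u = fuzzy_c_means(X, c=2)
print("memberships of first point:", u[0])  # partly in both clusters
```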

Expectation-Maximization (EM)

  Advantages:
  - The model can easily be changed to adapt to a different distribution of the data
  - The number of parameters does not increase as the training data grows

  Limitations:
  - Convergence can be slow in some cases
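
A minimal, explicit EM loop in NumPy for a two-component 1-D Gaussian mixture (illustrative assumptions throughout). The parameter set, two weights, means, and variances, stays the same size however many samples are used, and the fixed iteration count reflects the possibly slow convergence.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 300)])

# Parameters: 2 weights, 2 means, 2 std devs, regardless of sample count.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

for _ in range(100):  # convergence can be slow, hence many iterations
    # E-step: responsibility of each component for each point
    pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)

print("means:", mu, "weights:", w)
```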

Bayesian Classifier

  Advantages:
  - Improves classification performance by removing irrelevant features
  - Good performance
  - Short computational time

  Limitations:
  - Information-theoretically infeasible in the general case (the exact class-conditional distributions are rarely known)
  - Computationally infeasible in the general case
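
As a tractable special case, here is a naive Bayes sketch with scikit-learn (an assumption: the table's Bayesian classifier is generic, and naive Bayes sidesteps the general infeasibility by assuming the features are conditionally independent given the class).

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# The conditional-independence assumption makes fitting cheap and fast,
# trading away the (infeasible) fully general Bayesian model.
nb = GaussianNB().fit(X, y)
print("training accuracy:", nb.score(X, y))
```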