Biosensors. 2022 Jul 25;12(8):562. doi: 10.3390/bios12080562

Table 1.

Comparison of applications, advantages, and disadvantages of SVMs, NNs, and other common AI algorithms used in biomedical applications.

AI Algorithm | Applications in Medical Sciences | Advantages | Disadvantages
Support Vector Machine (SVM)

Applications:
  • Biomarker imaging in neurological and psychiatric disorders [18]
  • Human–machine interface [19]
  • Cancer diagnosis [20]
  • Early detection of Alzheimer’s disease [21]
  • Cardiac monitoring [22]
  • Predicting surgical site infection [23]
  • Glucose monitoring [24]
  • Surgery [25,26,27]
  • Pandemic resource management [28]
  • Healthcare monitoring system [29]

Advantages:
  • High accuracy and fast convergence; solves complex problems.
  • Scales well to high-dimensional data and needs relatively few training samples.

Disadvantages:
  • Performance depends on selecting an appropriate kernel function; training is slow and computationally costly on large datasets.
  • The final model is hard to understand and interpret, as are its variable weights and their individual impacts.
  • Handles missing values poorly and is prone to overfitting.
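As a hedged illustration of the kernel-selection point in the SVM row above, a minimal scikit-learn sketch (the two-cluster dataset and labels are invented toy stand-ins, not data from the cited studies):

```python
# Toy sketch (assumes scikit-learn is installed): the kernel is the key
# SVM hyperparameter flagged as a disadvantage in the table.
from sklearn.svm import SVC

# Two well-separated 2-D clusters standing in for biomarker features.
X = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
     [2.0, 2.0], [2.2, 1.9], [1.9, 2.1]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear", C=1.0)  # swap in kernel="rbf" for nonlinear boundaries
clf.fit(X, y)
print(clf.predict([[0.1, 0.1], [2.1, 2.0]]))  # → [0 1]
```

Note how few training points suffice here, matching the small-sample advantage listed above.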

Neural Network (NN)

Applications:
  • Cancer diagnosis [13,30,31,32]
  • Identifying Parkinson’s disease [33]
  • Image-based cardiac monitoring [22]
  • Alzheimer’s disease [34,35]
  • Surgery [25,26,27]
  • Sensor applications [36,37]
  • Diabetes prediction [38]
  • Human–machine interface [39]
  • Pandemic resource management [40]
  • Computer vision [41]

Advantages:
  • Efficient, fast, and flexible.
  • Computes outputs without hand-programmed rules, and continuously learns and improves with data.
  • Supports multitasking across a wide range of applications; works with nonlinear and complex datasets.

Disadvantages:
  • Requires long training times and large datasets.
  • High hardware cost; implementations can be lengthy and complex.
  • Hard to interpret and modify because of its black-box nature.
  • Prone to overfitting; heavy data dependency can produce faulty results.
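A minimal sketch of the NN row above, assuming scikit-learn (the dataset is an invented toy, and the network size is an arbitrary illustrative choice): the weights are learned from data rather than programmed as rules, which is also why the fitted model is the black box noted in the disadvantages.

```python
# Toy sketch (assumes scikit-learn): a small multilayer perceptron learns
# the input-output mapping from examples alone -- no programmed rules.
from sklearn.neural_network import MLPClassifier

X = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
     [2.0, 2.0], [2.2, 1.9], [1.9, 2.1]]
y = [0, 0, 0, 1, 1, 1]

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    random_state=0, max_iter=2000)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy set
```

Inspecting why the fitted weights produce a given prediction is nontrivial even at this scale, which is the interpretability cost the table records.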

Naïve Bayes (NB)

Applications:
  • Disease prediction [42]
  • Medical diagnosis [43,44]
  • Systems performance management [44]
  • Pandemic resource management [29]

Advantages:
  • Easy to implement, with fast learning and classification.
  • Copes well with overfitting, noisy data, and missing values.
  • Predicts the class of unseen test data; well suited to multi-class prediction problems.

Disadvantages:
  • Biased when the training set is not representative.
  • Struggles with regression and with co-dependent features.
  • Not suitable for complex problems.
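A hedged sketch of the NB row above, assuming scikit-learn's Gaussian variant (the three-class 1-D dataset is invented for illustration): the model treats features as conditionally independent given the class, which is exactly why co-dependent features are listed as a weakness.

```python
# Toy sketch (assumes scikit-learn): Gaussian naive Bayes on three classes,
# illustrating the multi-class strength noted in the table.
from sklearn.naive_bayes import GaussianNB

X = [[0.0], [0.2], [0.1],
     [2.0], [2.1], [1.9],
     [4.0], [4.2], [3.9]]
y = [0, 0, 0, 1, 1, 1, 2, 2, 2]

clf = GaussianNB()
clf.fit(X, y)  # learning is just per-class mean/variance estimates, hence fast
print(clf.predict([[0.1], [2.0], [4.1]]))  # → [0 1 2]
```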

K-Nearest Neighbor (KNN)

Applications:
  • Glucose monitoring for diabetes [24]
  • Pandemic resource management [28]
  • Disease prediction [45]
  • Computer-aided diagnosis [46]
  • Heart-disease prediction [47]
  • Healthcare-monitoring system [29]

Advantages:
  • Simple algorithm; makes no assumptions about the features or output of the dataset.
  • Robust to noisy data; manages large datasets.
  • Stable performance, fast learning, and good overfitting management.

Disadvantages:
  • Computationally expensive at prediction time and sensitive to local data structure.
  • Moderate accuracy and slow classification.
  • Handles correlated data poorly.
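A minimal sketch of the KNN row above, assuming scikit-learn (the glucose readings and the normal/high labels are hypothetical, not clinical thresholds): KNN simply stores the training set and votes among the k nearest points, so "learning" is fast but every prediction must scan the data — the prediction-time cost noted in the disadvantages.

```python
# Toy sketch (assumes scikit-learn): k-nearest-neighbor classification of
# hypothetical glucose readings; k=3 neighbors vote on each query point.
from sklearn.neighbors import KNeighborsClassifier

X = [[70.0], [75.0], [80.0],     # hypothetical "normal" readings (class 0)
     [180.0], [190.0], [200.0]]  # hypothetical "high" readings (class 1)
y = [0, 0, 0, 1, 1, 1]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)  # no model is built; the data itself is the model
print(clf.predict([[78.0], [185.0]]))  # → [0 1]
```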

Decision Tree (DT)

Applications:
  • Glucose monitoring for diabetes [24]
  • Surgery [26,27]
  • Medical diagnosis [44]
  • Systems performance management [44]
  • Healthcare-monitoring system [29]

Advantages:
  • Very fast, efficient, and simple to understand and interpret.
  • Handles a wide variety of data types.
  • High computational, learning, and classification speed.

Disadvantages:
  • Calculations grow complex, time-consuming, and computationally expensive as trees deepen.
  • Handles overfitting, noisy data, and correlated data poorly.
  • Weak at regression and only moderately accurate.
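A hedged sketch of the DT row above, assuming scikit-learn (same invented glucose-style toy data as elsewhere in this table): the tree's threshold splits are what make it simple to interpret, per the advantages column.

```python
# Toy sketch (assumes scikit-learn): a shallow decision tree whose single
# threshold split is directly readable -- the interpretability advantage.
from sklearn.tree import DecisionTreeClassifier

X = [[70.0], [75.0], [80.0], [180.0], [190.0], [200.0]]
y = [0, 0, 0, 1, 1, 1]

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)
print(clf.predict([[78.0], [185.0]]))  # → [0 1]
print(clf.get_depth())                 # one split separates the two classes
```

Capping `max_depth` is one common guard against the overfitting weakness listed above.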

Random Forest (RF)

Applications:
  • Disease prediction [48,49]
  • Healthcare-monitoring system [29]
  • Heart-disease prediction [22]

Advantages:
  • Manages noisy data well; high classification speed.
  • Handles large, heterogeneous databases.
  • Derives feature importance automatically; input features do not require normalization.

Disadvantages:
  • Complex working mechanism; difficult to implement.
  • Moderate accuracy, slow learning, and poor handling of missing values.
  • Prone to overfitting; tree depth and the number of trees must be chosen carefully.
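A minimal sketch of the RF row above, assuming scikit-learn (toy data; the `n_estimators` and `max_depth` values are arbitrary illustrative choices): these two parameters are precisely the depth and number-of-trees settings the disadvantages column says must be chosen carefully.

```python
# Toy sketch (assumes scikit-learn): a random forest averages many shallow
# trees; n_estimators and max_depth are its key overfitting controls.
from sklearn.ensemble import RandomForestClassifier

X = [[70.0], [75.0], [80.0], [180.0], [190.0], [200.0]]
y = [0, 0, 0, 1, 1, 1]

clf = RandomForestClassifier(n_estimators=25, max_depth=2, random_state=0)
clf.fit(X, y)
print(clf.predict([[78.0], [185.0]]))  # → [0 1]
print(clf.feature_importances_)        # automatic feature-importance scores
```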

Logistic Regression

Applications:
  • Image-based cardiac monitoring [22]
  • Glucose monitoring for diabetes [24]
  • Pandemic resource management [28]
  • Healthcare-monitoring system [29,50]

Advantages:
  • Simple to implement and interpret.
  • Trains efficiently; outputs well-calibrated class probabilities.
  • No empirical parameter tuning required; good accuracy on simple datasets.

Disadvantages:
  • Cannot solve nonlinear problems.
  • Assumes a linear relationship between the independent variables and the log-odds of the outcome.
  • Prone to overfitting on high-dimensional datasets; highly dependent on the chosen parameters and features.
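A hedged sketch of the logistic-regression row above, assuming scikit-learn (toy data as in the other sketches): unlike most classifiers in this table, it outputs a class probability rather than only a label, which is the calibration advantage noted above; the decision boundary itself remains linear in the inputs.

```python
# Toy sketch (assumes scikit-learn): logistic regression yields class
# probabilities via a linear model of the log-odds.
from sklearn.linear_model import LogisticRegression

X = [[70.0], [75.0], [80.0], [180.0], [190.0], [200.0]]
y = [0, 0, 0, 1, 1, 1]

clf = LogisticRegression()
clf.fit(X, y)
print(clf.predict([[78.0], [185.0]]))     # → [0 1]
print(clf.predict_proba([[78.0]])[0, 1])  # probability of the "high" class
```

The linear-in-log-odds assumption is why the disadvantages column rules out nonlinear problems without feature engineering.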