2025 Aug 8;15:29102. doi: 10.1038/s41598-025-13306-7

Table 1.

Hyperparameters of Machine Learning Models.

| Model | Tuned Parameters | Search Space |
| --- | --- | --- |
| AdaBoost | n_estimators = 100, learning_rate = 0.1 | n_estimators: [50, 100, 150, 200], learning_rate: [0.01, 0.05, 0.1, 0.5, 1.0] |
| Decision Tree | max_depth = 10, min_samples_split = 5, min_samples_leaf = 2 | max_depth: [5, 10, 15, None], min_samples_split: [2, 5, 10], min_samples_leaf: [1, 2, 4] |
| Extra Trees | n_estimators = 150, max_depth = 12, min_samples_split = 4 | n_estimators: [100, 150, 200], max_depth: [10, 12, 15], min_samples_split: [2, 4, 6] |
| Gradient Boosting | n_estimators = 200, learning_rate = 0.05, max_depth = 5 | n_estimators: [100, 200, 300], learning_rate: [0.01, 0.05, 0.1], max_depth: [3, 5, 7] |
| K-Nearest Neighbors | n_neighbors = 5, weights = 'distance', metric = 'minkowski' | n_neighbors: [3, 5, 7, 9], weights: ['uniform', 'distance'], metric: ['euclidean', 'manhattan', 'minkowski'] |
| Linear Regression | fit_intercept = True, normalize = False | fit_intercept: [True, False], normalize: [True, False] |
| Neural Network | hidden_layer_sizes = (100, 50), activation = 'relu', solver = 'adam', alpha = 0.0001 | hidden_layer_sizes: [(50,), (100,), (100, 50)], activation: ['relu', 'tanh', 'logistic'], solver: ['adam', 'sgd'], alpha: [0.0001, 0.001] |
| Random Forest | n_estimators = 200, max_depth = 15, min_samples_split = 4, min_samples_leaf = 2 | n_estimators: [100, 200, 300], max_depth: [10, 15, 20, None], min_samples_split: [2, 4, 6], min_samples_leaf: [1, 2, 4] |
| XGBoost | n_estimators = 300, learning_rate = 0.05, max_depth = 6, subsample = 0.8, colsample_bytree = 0.8 | n_estimators: [100, 200, 300, 400], learning_rate: [0.01, 0.05, 0.1], max_depth: [4, 6, 8], subsample: [0.6, 0.8, 1.0], colsample_bytree: [0.6, 0.8, 1.0] |
| PINN | n_layers = 4, Hidden_unit_0 = 102, Hidden_unit_1 = 127, Hidden_unit_2 = 37, Hidden_unit_3 = 127, learning_rate = 0.00595, Physics_weight = 0.0243 | n_layers: [2, 3, 4, 5, 6], Hidden_unit_0: [32, 64, 96, 128, 160], Hidden_unit_1: [32, 64, 96, 128, 160], Hidden_unit_3: [16, 32, 48, 64], learning_rate: [1e-4, 5e-4, 1e-3, 5e-3, 1e-2] (log-uniform sampling), Physics_weight: [1e-3, 1e-2, 5e-2, 0.1, 0.2] (log-uniform sampling) |
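As an illustration of how a search space from Table 1 maps onto a tuning run, the sketch below performs an exhaustive grid search over the Decision Tree space with scikit-learn's `GridSearchCV`. This is a minimal sketch, not the paper's pipeline: the synthetic dataset, the 3-fold cross-validation, and the R² scoring metric are placeholder assumptions; only the parameter grid is taken from the table.

```python
# Hypothetical grid search over the Decision Tree search space from Table 1.
# The dataset below is synthetic; cv and scoring are illustrative choices.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# Placeholder regression data standing in for the study's dataset.
X, y = make_regression(n_samples=200, n_features=8, noise=0.1, random_state=0)

# Search space copied verbatim from the Decision Tree row of Table 1.
param_grid = {
    "max_depth": [5, 10, 15, None],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

search = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid,
    cv=3,            # assumed fold count
    scoring="r2",    # assumed metric
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_)  # best combination found on this placeholder data
```

The same pattern extends to the other rows of the table by swapping the estimator and `param_grid`; the PINN row, with its log-uniform sampling, would instead call for a randomized or Bayesian search (e.g. `RandomizedSearchCV` with log-uniform distributions) rather than an exhaustive grid.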