Algorithm 3 main
Input: DataSpectre.xlsx
Output: Score, Modelchoice
Begin
Reading data:
  data = Read Excel file: “DataSpectre.xlsx”
Extracting features and labels:
X = all feature columns, excluding the target variable “Class”
y = the target variable “Class”
Splitting data into training and testing sets:
X_train, X_test, y_train, y_test = Training and testing data split (X, y)
Creating classifiers and models:
Classifiers = [‘SVM’, ‘Logistic’, ‘RandomForest’, ‘AdaBoost’, ‘DecisionTree’, ‘KNeighbors’, ‘XGBoost’]
models = [SVC(), LogisticRegression(), RandomForestClassifier(), AdaBoostClassifier(), DecisionTreeClassifier(), KNeighborsClassifier(), XGBClassifier()]
Training, testing, and evaluating the models:
Score, Modelchoice = Train_test_evaluate(X_train, X_test, y_train, y_test, models)
Applying the genetic algorithm for feature selection:
  G, S = Genetic(X, y, X_train, X_test, y_train, y_test, Modelchoice)
  ind = indices of the individuals in G with the highest scores in S
  for vi in ind
    indices_true1 = empty list
    for i, value in enumerate(G[vi])
      if value is True then
        append i to indices_true1
      end
    end
    X_train_GA = X_train with the columns selected by the indices in indices_true1
    X_test_GA = X_test with the columns selected by the indices in indices_true1
    Score, Modelchoice = Train_test_evaluate(X_train_GA, X_test_GA, y_train, y_test, models)
  end
End
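
For concreteness, the following Python sketch mirrors Algorithm 3 using pandas, scikit-learn, and xgboost. It is a minimal illustration under stated assumptions, not the authors' implementation: the helper names train_test_evaluate and genetic, the 80/20 split, the GA settings (population 30, 20 generations, one-point crossover, 5% bit-flip mutation, elitist selection), and the use of test accuracy as the GA fitness are all choices made here.

# Minimal sketch of Algorithm 3, assuming scikit-learn, xgboost, and pandas.
# Helper names, GA parameters, and the 80/20 split are illustrative choices.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

def train_test_evaluate(X_train, X_test, y_train, y_test, names, models):
    # Fit every candidate model and return the best test accuracy and its name.
    scores = {}
    for name, model in zip(names, models):
        model.fit(X_train, y_train)
        scores[name] = model.score(X_test, y_test)
    best = max(scores, key=scores.get)
    return scores[best], best

def genetic(X_train, X_test, y_train, y_test, model,
            pop_size=30, n_generations=20, mutation_rate=0.05, seed=0):
    # Toy GA over boolean feature masks; fitness retrains `model` on the
    # selected columns and scores it on the test set (an assumed fitness).
    rng = np.random.default_rng(seed)
    n_features = X_train.shape[1]
    pop = rng.random((pop_size, n_features)) < 0.5   # random initial masks

    def fitness(mask):
        if not mask.any():
            return 0.0
        model.fit(X_train[:, mask], y_train)
        return model.score(X_test[:, mask], y_test)

    for _ in range(n_generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]   # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_features) < mutation_rate    # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])

    scores = np.array([fitness(ind) for ind in pop])
    return pop, scores                                         # G, S in the pseudocode

# --- main ---
data = pd.read_excel("DataSpectre.xlsx")
X = data.drop(columns=["Class"]).to_numpy()
y = LabelEncoder().fit_transform(data["Class"])                # XGBoost needs 0..k-1 labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

names = ["SVM", "Logistic", "RandomForest", "AdaBoost",
         "DecisionTree", "KNeighbors", "XGBoost"]
models = [SVC(), LogisticRegression(max_iter=1000), RandomForestClassifier(),
          AdaBoostClassifier(), DecisionTreeClassifier(),
          KNeighborsClassifier(), XGBClassifier()]

score, model_choice = train_test_evaluate(X_train, X_test, y_train, y_test, names, models)

# Rerun the model comparison on the GA-selected feature subset.
G, S = genetic(X_train, X_test, y_train, y_test, models[names.index(model_choice)])
best_mask = G[np.argmax(S)]
indices_true1 = [i for i, value in enumerate(best_mask) if value]
score_ga, model_choice_ga = train_test_evaluate(
    X_train[:, indices_true1], X_test[:, indices_true1], y_train, y_test, names, models)
print(score, model_choice, score_ga, model_choice_ga)

Note that scoring the GA fitness on the same test split that reports the final accuracy risks an optimistic estimate; in practice, a separate validation split or cross-validation inside the fitness function would be safer.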