Algorithm 2: The training algorithm of a DL model
Input:   LC25000 dataset DL
Output:  Trained model
BEGIN
    Input_shape ← 224 × 224 × 3
    Batch_S ← 25
    Lr ← 0.0001
    Dropout ← 0.4
    Batch_N ← {momentum = 0.99, epsilon = 0.001}
    L ← {0, 1, 2, 3, 4}
    G_A_P ← Global average pooling
    Dense ← 4
    FOR EACH image IN DL
        Data_images.append(image)
    END FOR
    Split DL
    Model.fc ← {G_A_P, Dropout, Dense}
    Model ← Model(inputs = X.inputs, outputs = Model.fc)
    OPT ← Adamax(Lr)
    FOR EACH lay IN Model.layers[-20:]
        IF lay is not a Batch_N layer
            lay.trainable ← True
        END IF
    END FOR
    Model.compile(OPT, loss = "sparse_categorical_crossentropy")
END
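
To make the training procedure concrete, the sketch below renders Algorithm 2 in TensorFlow/Keras under stated assumptions: the backbone network (MobileNetV2 is used only as a placeholder, since the algorithm refers to the base model simply as X), the softmax activation on the final dense layer, and the accuracy metric are assumptions not given in the listing; the input shape, batch size, learning rate, dropout rate, batch-normalization settings (momentum = 0.99, epsilon = 0.001, which are the Keras defaults), the Adamax optimizer, the selective unfreezing of the last 20 non-batch-normalization layers, and the sparse categorical cross-entropy loss follow the algorithm above.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

# Hyperparameters listed in Algorithm 2.
INPUT_SHAPE = (224, 224, 3)
BATCH_SIZE = 25
LEARNING_RATE = 1e-4
DROPOUT_RATE = 0.4
NUM_CLASSES = 4  # Dense = 4 in Algorithm 2

# Backbone: placeholder choice, since Algorithm 2 names the base model only as X.
base = MobileNetV2(include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)

# Classification head Model.fc = {G_A_P, Dropout, Dense}.
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(DROPOUT_RATE)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)  # softmax is assumed
model = models.Model(inputs=base.input, outputs=outputs)

# Freeze all layers, then unfreeze the last 20 layers except BatchNormalization,
# mirroring the second FOR EACH loop in Algorithm 2.
for lay in model.layers:
    lay.trainable = False
for lay in model.layers[-20:]:
    if not isinstance(lay, layers.BatchNormalization):
        lay.trainable = True

# Adamax optimizer (lr = 0.0001) with sparse categorical cross-entropy loss.
model.compile(
    optimizer=tf.keras.optimizers.Adamax(learning_rate=LEARNING_RATE),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Training call; train_ds and val_ds are assumed tf.data pipelines built from LC25000:
# model.fit(train_ds.batch(BATCH_SIZE), validation_data=val_ds.batch(BATCH_SIZE), epochs=...)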