Algorithm 1. The VGG-19 model description.
Input: CXR images of dimension 500 × 500 pixels from the Training CXR Dataset
Output: trained VGG-19 model weights
1. epochs ← 100
2. for each image in the dataset do
3.  resize the image to 224 × 224 pixels
4.  normalize the image pixel values from [0, 255] to [0, 1]
5. end
6. Load the VGG-19 model pre-trained on the ImageNet dataset
7. Remove the last layer of the model
8. Freeze all layers of the model (make them non-trainable)
9. Add a Flatten layer to the model output to obtain a 1-D array of features
10. Apply batch normalization to the 1-D array of features
11. Add a fully connected layer with 256 hidden neurons
12. Apply dropout with a rate of 40% to deactivate units in the previous layer
13. Add a fully connected layer with 128 hidden neurons
14. Apply dropout with a rate of 60% to deactivate units in the previous layer
15. Apply batch normalization
16. Add a fully connected layer with four output units and a softmax activation function
17. Optimize the model with the Adam optimizer, with learning_rate = 0.01 and decay = learning_rate/epochs
18. Train the model for the given number of epochs and a batch size of 32
19. Save the final model
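For reference, the following is a minimal Keras/TensorFlow sketch of Algorithm 1, not the authors' own implementation. The ReLU activations on the 256- and 128-unit layers, the categorical cross-entropy loss, and the placeholders x_train, y_train, and "vgg19_cxr_model.h5" are illustrative assumptions not specified by the algorithm; the layer sizes, dropout rates, batch size, and optimizer settings follow the numbered steps above.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19
from tensorflow.keras.optimizers import Adam

EPOCHS = 100          # step 1
LEARNING_RATE = 0.01  # step 17

def preprocess(images):
    # Steps 2-5: resize the 500 x 500 CXR images to 224 x 224 and
    # scale pixel values from [0, 255] to [0, 1].
    images = tf.image.resize(images, (224, 224))
    return tf.cast(images, tf.float32) / 255.0

# Steps 6-8: load VGG-19 pre-trained on ImageNet, drop the top
# classification layer, and freeze the convolutional base.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Steps 9-16: classification head on top of the frozen base.
# The ReLU activations are an assumption; the algorithm does not name them.
model = models.Sequential([
    base,
    layers.Flatten(),                       # step 9: 1-D feature array
    layers.BatchNormalization(),            # step 10
    layers.Dense(256, activation="relu"),   # step 11
    layers.Dropout(0.4),                    # step 12
    layers.Dense(128, activation="relu"),   # step 13
    layers.Dropout(0.6),                    # step 14
    layers.BatchNormalization(),            # step 15
    layers.Dense(4, activation="softmax"),  # step 16: four output classes
])

# Step 17: Adam with learning_rate = 0.01 and decay = learning_rate / epochs.
# The `decay` argument is the legacy Keras time-based decay; newer TF/Keras
# versions may require tf.keras.optimizers.legacy.Adam or an
# InverseTimeDecay learning-rate schedule instead.
optimizer = Adam(learning_rate=LEARNING_RATE, decay=LEARNING_RATE / EPOCHS)
model.compile(optimizer=optimizer,
              loss="categorical_crossentropy",  # assumed loss for 4-class softmax
              metrics=["accuracy"])

# Steps 18-19: train for the given number of epochs with batch size 32, then save.
# x_train and y_train stand in for the preprocessed Training CXR Dataset and its labels.
# model.fit(preprocess(x_train), y_train, epochs=EPOCHS, batch_size=32)
# model.save("vgg19_cxr_model.h5")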