Diagnostics. 2021 Aug 28;11(9):1562. doi: 10.3390/diagnostics11091562
Algorithm A1. The pseudocode of model training.
Input: load CNN model name MODEL, mini-batch size N, pre-trained config CONFIG, optimizer function OPTIM, loss function LOSS, data path PATH, training iteration EPOCH
1  model = LoadModel(MODEL)
2  if CONFIG has pre-trained parameters then
3    Load pre-trained parameters
4  if CONFIG needs to freeze some layers then
5    Set requires_grad = False for those layers
6  model.to('cuda:0')
7  if CONFIG has optimizer parameters then
8    Set parameters of the optimizer (e.g., Adam or SGD)
9    optimizer = OPTIM(learning_rate = 0.1 or 0.01)
10  if CONFIG needs to adjust the learning rate then
11    Set lr_scheduler to adjust the learning rate
12  if CONFIG has more fine-tuning settings then
13    Add other fine-tuning settings (e.g., Batch Normalization)
14  criterion = LOSS()
15  if PATH is valid then
16    Prepare the training data loader
17    train_loader = DataLoader(batch_size=N)
18    Prepare the validation data loader
19    valid_loader = DataLoader(batch_size=N)
20  for each epoch in EPOCH do
21    for training data in train_loader do
22     train on a mini-batch of N training samples
23     zero the gradient buffers
24     calculate the training loss
25     backpropagate the error
26     update the weights
27    if log training history then
28      Log accuracy and loss of each epoch in history
29    Test the trained model on the validation data set
30    model.eval()
31    for validation data in valid_loader do
32      calculate validation accuracy and keep the best
33  Save the trained model
34  Save the training history
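The steps of Algorithm A1 can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the two-layer classifier, synthetic data, and hyperparameters (batch size, learning rate, epoch count) are placeholder assumptions standing in for MODEL, PATH, and CONFIG.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Lines 1-6: load the model, freeze some layers, move to the device.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
for p in model[0].parameters():          # freeze the first layer
    p.requires_grad = False
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model.to(device)

# Lines 7-14: optimizer (trainable parameters only), scheduler, loss.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)
criterion = nn.CrossEntropyLoss()

# Lines 15-19: data loaders; random tensors stand in for data on PATH.
x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))
train_loader = DataLoader(TensorDataset(x[:48], y[:48]), batch_size=16)
valid_loader = DataLoader(TensorDataset(x[48:], y[48:]), batch_size=16)

history, best_acc = [], 0.0
for epoch in range(3):                   # lines 20-26: training loop
    model.train()
    for xb, yb in train_loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()            # zero the gradient buffers
        loss = criterion(model(xb), yb)  # calculate the training loss
        loss.backward()                  # backpropagate the error
        optimizer.step()                 # update the weights
    scheduler.step()
    history.append(loss.item())          # lines 27-28: log history

    model.eval()                         # lines 29-32: validation pass
    correct = total = 0
    with torch.no_grad():
        for xb, yb in valid_loader:
            pred = model(xb.to(device)).argmax(dim=1)
            correct += (pred == yb.to(device)).sum().item()
            total += yb.numel()
    best_acc = max(best_acc, correct / total)

torch.save(model.state_dict(), "model.pt")  # lines 33-34: save the model
```

Only parameters with `requires_grad=True` are passed to the optimizer, so the frozen layer is excluded from weight updates, mirroring lines 4-5 of the algorithm.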