Table 5. Test AUC/UAR of the four models (CNN-4, ResNet-6, VGG-7, and MobileNet-6) under single learning and the three transfer learning strategies.
| | | Layers | CNN-4 | ResNet-6 | VGG-7 | MobileNet-6 |
|---|---|---|---|---|---|---|
| Single Learning | FluSense | — | 93.55/65.27 | 93.91/64.76 | 93.23/63.86 | 91.26/58.24 |
| | COUGHVID | — | 66.14/59.43 | 68.86/60.43 | 65.15/56.42 | 64.17/54.83 |
| Transfer Learning | Parameters | FC | 58.59/53.68 | 61.35/57.50 | 54.68/54.14 | 56.91/53.93 |
| | | conv/block 3 & FC | 68.04/57.04 | 67.01/57.97 | 64.97/57.15 | 67.88/59.71 |
| | | conv/block 2–3 & FC | 69.05/60.98 | 67.89/59.25 | 64.92/59.79 | 67.94/58.93 |
| | | conv/block 1–3 & FC | 69.43/55.54 | 66.23/56.31 | 67.31/56.17 | 65.21/55.64 |
| | Embeddings Cat | conv/block 3 | 67.73/60.65 | 67.21/59.45 | 65.85/58.27 | 64.32/56.46 |
| | | conv/block 2 | 67.30/57.81 | 66.17/55.59 | 65.58/52.30 | 67.36/52.31 |
| | | conv/block 1 | 65.15/59.30 | 65.35/59.77 | 58.67/51.92 | 66.37/53.77 |
| | Embeddings Add | conv/block 3 | 66.76/59.30 | 64.27/58.88 | 66.08/60.17 | 65.94/58.24 |
| | | conv/block 2 | 66.39/58.82 | 64.55/57.27 | 67.77/58.55 | 64.37/57.19 |
| | | conv/block 1 | 65.91/57.17 | 64.63/58.21 | 63.85/58.97 | 64.17/56.60 |
Single learning indicates training from scratch, and transfer learning includes “Parameters” (transferring parameters), “Embeddings Cat,” and “Embeddings Add” (incorporating embeddings). The models' performance with transfer learning is reported on the COUGHVID dataset. For “Parameters,” the “Layers” column indicates the layers that are randomly initialised and trainable during training, while the remaining layers are frozen and initialised from the pre-trained FluSense models; for “Embeddings Cat” and “Embeddings Add,” the “Layers” column lists the convolutional layer/block (conv/block) after which the embeddings are incorporated. For convenience, the best test AUC and test UAR of every model under the three transfer learning strategies are shown in bold face.
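
As an illustration of the “Parameters” strategy, the following PyTorch-style sketch re-initialises and trains selected layers while keeping the remaining layers frozen with pre-trained FluSense parameters. The `CNN4` class, layer names, and checkpoint path are hypothetical placeholders for illustration, not the implementation used for the results above.

```python
# Minimal sketch of the "Parameters" transfer strategy (illustrative only).
import torch
import torch.nn as nn

class CNN4(nn.Module):
    """Hypothetical 4-layer CNN standing in for the CNN-4 model."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.conv3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.conv3(self.conv2(self.conv1(x)))
        return self.fc(x.flatten(1))

# Initialise from a pre-trained FluSense checkpoint (path is a placeholder).
model = CNN4()
model.load_state_dict(torch.load("flusense_cnn4.pt"), strict=False)

# Row "conv/block 3 & FC": conv3 and fc are re-initialised and trainable,
# while conv1 and conv2 stay frozen with the transferred FluSense parameters.
trainable = ["conv3", "fc"]
for name, module in model.named_children():
    if name in trainable:
        for m in module.modules():            # re-initialise the trainable layers
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                m.reset_parameters()
    else:
        for p in module.parameters():         # freeze the transferred layers
            p.requires_grad = False

# Optimise only the trainable (re-initialised) parameters on COUGHVID.
optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```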
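
For “Embeddings Cat” and “Embeddings Add,” a minimal sketch of the incorporation step follows, assuming that the embedding is the frozen FluSense model's feature map after the chosen conv/block and that it is fused with the COUGHVID model's feature map at the same stage by channel-wise concatenation or element-wise addition; the shapes and fusion details are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of embedding incorporation after a chosen conv/block.
import torch

def fuse(target_feat: torch.Tensor,
         flusense_feat: torch.Tensor,
         mode: str = "cat") -> torch.Tensor:
    """Combine the COUGHVID model's features with the FluSense embedding."""
    if mode == "cat":                 # "Embeddings Cat": channel-wise concatenation
        return torch.cat([target_feat, flusense_feat], dim=1)
    if mode == "add":                 # "Embeddings Add": element-wise addition
        return target_feat + flusense_feat
    raise ValueError(f"unknown fusion mode: {mode}")

# Usage after conv/block 2 (shapes are assumed to match; "Cat" doubles the
# channel dimension, so the next layer must accept the wider input).
target_feat = torch.randn(8, 32, 16, 16)     # COUGHVID model, after conv/block 2
flusense_feat = torch.randn(8, 32, 16, 16)   # frozen FluSense model, same stage
fused_cat = fuse(target_feat, flusense_feat, mode="cat")   # -> (8, 64, 16, 16)
fused_add = fuse(target_feat, flusense_feat, mode="add")   # -> (8, 32, 16, 16)
```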