2023 Jul 21;23:131. doi: 10.1186/s12911-023-02215-2

Table 4.

Overall prediction performance, measured as mean absolute error (MAE), of the proposed Transformer models (bottom three rows) on the test dataset for ASBP, ADBP, and SpO2, in comparison with a traditional Transformer and a U-Net model as baselines. MLM: fine-tuned with masked language modeling

| Methods                            | Parameter count, ×10⁶ | SpO2, %     | ASBP, mmHg  | ADBP, mmHg  |
|------------------------------------|-----------------------|-------------|-------------|-------------|
| Transformer                        | 1.4                   | 1.65 ± 1.94 | 6.76 ± 5.24 | 3.57 ± 4.39 |
| U-Net                              | 13.3                  | 1.02 ± 1.46 | 5.03 ± 4.78 | 2.98 ± 3.41 |
| Transformer w/ multi-task          | 2.2                   | 1.28 ± 1.67 | 6.44 ± 5.32 | 3.42 ± 4.17 |
| MLM-Transformer                    | 2.2                   | 0.75 ± 1.04 | 4.97 ± 4.72 | 2.99 ± 2.39 |
| MLM-Transformer w/ personalization | 2.2                   | 0.56 ± 0.79 | 2.41 ± 2.72 | 1.31 ± 1.77 |
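For reference, the MAE reported in the table is the mean absolute error between predicted and reference values, averaged over the test set. A minimal sketch (the numeric values below are hypothetical, not from the paper's dataset):

```python
def mean_absolute_error(y_true, y_pred):
    # MAE: average of absolute differences between reference and predicted values
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical ASBP readings in mmHg (illustrative only)
reference = [120.0, 118.0, 125.0]
predicted = [118.0, 121.0, 124.0]
print(mean_absolute_error(reference, predicted))  # → 2.0
```

The ± values in the table would correspondingly be the standard deviation of the per-sample absolute errors.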