Sci Rep. 2021 Jul 12;11:14301. doi: 10.1038/s41598-021-92776-x

Figure 6.

Schematic of the deep transfer learning approach. DS denotes input data from a source domain, in this case a HAR dataset, used to learn a task TS represented by the label space YS (the HAR activity classes). DT denotes the target domain, in this case the FLOODLIGHT data, where YT comprises the disease classification outputs HC, PwMSmild or PwMSmod for the target task TT. During transfer learning, the parameters and learned weights f(·) of a model trained on DS are used to initialise a model on the target domain DT. The source model’s layers are transferred with their weights and parameters “frozen”, and the new model is then re-trained (i.e. fine-tuned) on DT data for the new target task TT. Downstream layers in the network are fine-tuned towards the new target decision space YT.
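The freeze-and-fine-tune procedure in the caption can be sketched in PyTorch. This is a minimal illustration, not the paper’s actual architecture: the layer sizes, the six-class HAR source head, and the dummy tensors standing in for FLOODLIGHT data are all assumptions chosen to keep the example self-contained.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a model pretrained on the HAR source domain D_S.
# A small MLP keeps the sketch self-contained; the paper's network differs.
source_model = nn.Sequential(
    nn.Linear(128, 64),  # feature-extraction layers learned on D_S
    nn.ReLU(),
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 6),    # source head for the 6 assumed HAR classes (Y_S)
)

# Transfer: freeze the source feature-extraction layers ...
for param in source_model[:4].parameters():
    param.requires_grad = False

# ... and attach a new head for the target task T_T
# (3 classes: HC, PwMSmild, PwMSmod -> Y_T).
target_model = nn.Sequential(
    *source_model[:4],
    nn.Linear(32, 3),
)

# Fine-tune only the unfrozen (downstream) parameters on D_T data.
optimizer = torch.optim.Adam(
    (p for p in target_model.parameters() if p.requires_grad), lr=1e-4
)
x = torch.randn(8, 128)        # dummy batch standing in for D_T features
y = torch.randint(0, 3, (8,))  # dummy Y_T labels
loss = nn.CrossEntropyLoss()(target_model(x), y)
loss.backward()
optimizer.step()
```

Only the new head receives gradient updates here; the frozen layers act purely as a fixed feature extractor, which mirrors the “frozen weights, fine-tuned downstream layers” flow the figure describes.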