J Imaging. 2022 Sep 28;8(10):263. doi: 10.3390/jimaging8100263

Table 3. Detailed results of Task I. Precision, recall, F1-score, and their mean values are reported for each team.

(a) VisionLabs
               Precision   Recall   F1-score
Real             0.89       0.88      0.89
Deepfake         0.95       0.96      0.96
Accuracy                              0.94
Macro avg        0.92       0.92      0.92
Weighted avg     0.94       0.94      0.94
(b) DC-GAN (Amped Team)
               Precision   Recall   F1-score
Real             0.86       0.78      0.82
Deepfake         0.92       0.95      0.93
Accuracy                              0.90
Macro avg        0.89       0.87      0.88
Weighted avg     0.90       0.90      0.90
(c) Team Nirma
               Precision   Recall   F1-score
Real             0.55       0.80      0.65
Deepfake         0.90       0.74      0.81
Accuracy                              0.75
Macro avg        0.72       0.77      0.73
Weighted avg     0.80       0.75      0.76
(d) AIMH Lab
               Precision   Recall   F1-score
Real             0.52       0.49      0.51
Deepfake         0.80       0.82      0.81
Accuracy                              0.73
Macro avg        0.66       0.66      0.66
Weighted avg     0.72       0.73      0.72
(e) PRA Lab—Div. Biometria
               Precision   Recall   F1-score
Real             0.43       0.76      0.55
Deepfake         0.86       0.59      0.70
Accuracy                              0.64
Macro avg        0.64       0.68      0.62
Weighted avg     0.74       0.64      0.66
(f) Team Wolfpack
               Precision   Recall   F1-score
Real             0.05       0.06      0.05
Deepfake         0.59       0.55      0.57
Accuracy                              0.41
Macro avg        0.32       0.30      0.31
Weighted avg     0.44       0.41      0.42
(g) SolveKaro
               Precision   Recall   F1-score
Real             0.17       0.31      0.22
Deepfake         0.59       0.39      0.47
Accuracy                              0.37
Macro avg        0.38       0.35      0.34
Weighted avg     0.47       0.37      0.40
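
Each sub-table above follows the layout of a standard per-class classification report: precision, recall, and F1-score for the real and deepfake classes, followed by overall accuracy, the macro average, and the support-weighted average. As a minimal sketch only (the challenge's actual evaluation code is not reproduced here, and the labels below are hypothetical), a report in this form can be generated with scikit-learn:

    from sklearn.metrics import classification_report

    # Hypothetical binary labels: 0 = real, 1 = deepfake.
    # y_true holds ground-truth labels, y_pred a team's predictions.
    y_true = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
    y_pred = [1, 1, 0, 0, 1, 1, 0, 1, 1, 0]

    # Prints per-class precision, recall, and F1-score, plus accuracy,
    # macro average, and weighted average -- the same rows reported
    # for each team in Table 3.
    print(classification_report(y_true, y_pred, target_names=["Real", "Deepfake"]))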