Table 5. Confusion matrices of model test results for the Model-Naïve paragraph (P3).

| | Predicted authentic | Predicted fake | Accuracy |
|---|---|---|---|
| **Model-trained participants** | | | |
| Authentic | 19 | 8 | 0.704 |
| Model-trained generator | 7 | 29 | 0.806 |
| Model-naïve generator | 0 | 14 | 1.00 |
| Overall | —^a | — | 0.805 |
| **Model-naïve participants** | | | |
| Authentic | 16 | 6 | 0.727 |
| Model-trained generator | 1 | 7 | 0.875 |
| Model-naïve generator | 1 | 2 | 0.667 |
| Overall | — | — | 0.758 |
| **Pregenerated TTS^b** | | | |
| Authentic | — | — | — |
| Model-trained generator | 3 | 21 | 0.875 |
| Model-naïve generator | 0 | 2 | 1.00 |
| Overall | — | — | 0.885 |

^a Not applicable.

^b TTS: text-to-speech.
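The accuracy values in Table 5 follow directly from the row counts: for authentic samples the correct prediction is "authentic," and for generator samples it is "fake," with the overall accuracy pooling the correct and total counts across rows. The snippet below is a minimal sketch that recomputes the figures for the model-trained participant block from the counts in the table (the other blocks follow the same pattern); the variable and function names are illustrative, not part of the study's code.

```python
# Minimal sketch: recompute the Table 5 accuracies from the raw counts.
# Each entry maps a row label to (correct predictions, total samples),
# where "correct" means predicted authentic for authentic samples and
# predicted fake for generator samples.
model_trained_participants = {
    "Authentic": (19, 19 + 8),
    "Model-trained generator": (29, 7 + 29),
    "Model-naïve generator": (14, 0 + 14),
}

def row_accuracies(rows):
    """Per-row accuracy plus the pooled overall accuracy, rounded to 3 decimals."""
    out = {label: round(correct / total, 3)
           for label, (correct, total) in rows.items()}
    correct_sum = sum(c for c, _ in rows.values())
    total_sum = sum(t for _, t in rows.values())
    out["Overall"] = round(correct_sum / total_sum, 3)
    return out

print(row_accuracies(model_trained_participants))
# {'Authentic': 0.704, 'Model-trained generator': 0.806,
#  'Model-naïve generator': 1.0, 'Overall': 0.805}
```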