Table 7. Summary of key findings on video quality and engagement across platforms.
| Key finding | Data support | Interpretation |
|---|---|---|
| No significant difference in video quality between professional and non-professional uploaders | Table 5: All p > 0.05 for PEMAT, VIQI, GQS, mDISCERN | Professional credentials do not guarantee higher-quality public health communication. |
| No major inter-platform differences in information reliability (mDISCERN) | Table 4: mDISCERN-sum scores similar across platforms (p > 0.05) | All platforms face similar challenges in conveying reliable medical information. |
| Weak-to-moderate correlations between engagement and quality metrics | Table 6: Most r-values between 0.2 and 0.4; strongest correlation: Bilibili views vs. VIQI (r = 0.701, p < 0.001) | Audience engagement is a poor predictor of video quality; high view counts do not imply high quality (see the illustrative sketch after this table). |
| YouTube leads in understandability and actionability (PEMAT) | Table 4: PEMAT-T and PEMAT-A significantly higher on YouTube (p < 0.001) | YouTube videos are more structured and easier to understand and act upon. |
| Bilibili and TikTok lead in production quality (VIQI, GQS) | Table 4: VIQI and GQS significantly higher on Bilibili and TikTok (p < 0.001) | These platforms excel in visual and production quality but may lack depth. |
| TikTok has the highest engagement despite the shortest videos | Table 1: Median length = 114 s; highest likes, comments, and shares | Algorithmic promotion and the short-video format favor high interaction, not necessarily high quality. |
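
The sketch below outlines one way the two analyses summarized in this table could be run: comparing quality scores across platforms and correlating engagement with quality. It is a minimal illustration under stated assumptions, not the authors' pipeline; the choice of Kruskal-Wallis and Spearman tests, the single-table data layout, and every column name (platform, views, likes, pemat_u, viqi, gqs, mdiscern, etc.) are hypothetical.

```python
# Minimal sketch (not the authors' code) of the two statistical steps summarized
# in Table 7: (1) comparing quality scores across platforms and (2) correlating
# engagement with quality. Test choices and column names are assumptions.
import pandas as pd
from scipy.stats import kruskal, spearmanr

QUALITY_COLS = ["pemat_u", "pemat_a", "viqi", "gqs", "mdiscern"]
ENGAGEMENT_COLS = ["views", "likes", "comments", "shares"]

def compare_across_platforms(df: pd.DataFrame) -> pd.DataFrame:
    """Kruskal-Wallis test of each quality score across the 'platform' groups
    (e.g. YouTube, TikTok, Bilibili)."""
    rows = []
    for col in QUALITY_COLS:
        groups = [g[col].dropna() for _, g in df.groupby("platform")]
        h, p = kruskal(*groups)
        rows.append({"score": col, "H": round(h, 2), "p": round(p, 4)})
    return pd.DataFrame(rows)

def engagement_quality_correlations(df: pd.DataFrame) -> pd.DataFrame:
    """Spearman rank correlation of each engagement metric against each
    quality score (rank-based, since view counts are heavily skewed)."""
    rows = []
    for e in ENGAGEMENT_COLS:
        for q in QUALITY_COLS:
            r, p = spearmanr(df[e], df[q], nan_policy="omit")
            rows.append({"engagement": e, "quality": q, "r": round(r, 3), "p": round(p, 4)})
    return pd.DataFrame(rows)

# Usage (hypothetical file and column layout, one row per video):
# videos = pd.read_csv("videos.csv")
# print(compare_across_platforms(videos))
# print(engagement_quality_correlations(videos))
```

If the underlying analysis used parametric tests (ANOVA, Pearson's r) rather than rank-based ones, the corresponding scipy.stats calls (f_oneway, pearsonr) would replace kruskal and spearmanr.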