BMC Public Health. 2025 May 24;25:1919. doi: 10.1186/s12889-025-23130-3

Quality and reliability evaluation of pancreatic cancer-related video content on social short video platforms: a cross-sectional study

Yuting Lei 1,#, Foqiang Liao 1,#, Xin Li 1,#, Yin Zhu 1
PMCID: PMC12102813  PMID: 40413453

Abstract

Background

Pancreatic cancer (PC) has become one of the leading causes of cancer-related deaths worldwide. Social media platforms are widely used for health information dissemination because of their visual appeal and entertainment value. This study evaluates the content, quality, and reliability of PC-related information on domestic short video platforms.

Methods

A total of 265 PC-related videos were retrieved from three short video-sharing platforms: TikTok, Bilibili, and Kwai. The Global Quality Scale (GQS) and the modified DISCERN score were employed to evaluate the quality and content of the videos, respectively. Correlation analysis was conducted to explore the relationships between different video variables.

Results

The overall quality of the video content was low, with median scores of 2 (IQR: 2–3) for both the GQS and the modified DISCERN score. Most of the videos related to PC were posted by healthcare professionals (219/265, 82.6%). Videos from specialists received more likes on short social video platforms than those from nonspecialists did (median: 678 vs. 270, P = 0.005). Educational videos scored highest on both the GQS (median: 3, IQR 2–3) and the modified DISCERN score (median: 2.5, IQR 2–3). The GQS was positively correlated with video duration (r = 0.31, P < 0.01) and with the modified DISCERN score (r = 0.434, P < 0.001).

Conclusion

The quality and reliability of the videos on these platforms were generally unsatisfactory in terms of source and content. Videos produced by healthcare professionals or institutions are more informative in terms of comprehensiveness, quality of information, and reliability than are those produced by non-healthcare professionals.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12889-025-23130-3.

Keywords: Pancreatic cancer, Short video, Health information, Quality, Reliability, TikTok, Bilibili, Kwai, GQS, DISCERN

Introduction

Pancreatic cancer (PC) is one of the leading causes of cancer-related deaths globally, and its global burden has more than doubled over the past two decades [1]. According to the Global Cancer Observatory (GLOBOCAN) 2020 [2], there were approximately 420,000 new cases of PC worldwide in 2020, with a corresponding death toll of approximately 410,000. According to 2023 cancer statistics from the United States (US), PC ranks fourth among all cancers as a cause of death [3], and it is projected to rise to the second leading cause by 2030 [4]. In China, there were an estimated 95,000 new cases (crude incidence rate: 6.92/100,000) and 85,000 deaths (crude mortality rate: 6.16/100,000) due to PC in 2015, ranking 10th and 6th, respectively, among all malignant tumors [5]. The prognosis for PC is generally poor, with an overall 5-year relative survival rate of approximately 10%, and effective treatments are currently lacking [3, 6].

Early detection, diagnosis, and treatment play crucial roles in the prognosis of PC patients, significantly improving their 5-year survival rate [7]. It is therefore essential for patients to have a comprehensive understanding of PC and to actively participate in screening.

As indispensable tools for disseminating information, social media platforms are increasingly used to share health-related information, given people’s preference for obtaining such information online [8, 9]. In recent years, short-form video-sharing platforms such as TikTok, Kwai and Bilibili have grown in popularity. Unlike traditional text-based information, short videos are more engaging and visually stimulating, making them easier for the public to accept and remember [10, 11]. Recent data show that actively using social media to access disease-related information is linked to positive patient outcomes: it allows patients to better self-manage their condition and helps alleviate the economic burden on healthcare [12]. In oncology, Shusen Zheng et al. [13], Lian-Shuo Li et al. [14], and Ren-hao Hu et al. [15] evaluated the content and accuracy of videos related to liver, colorectal, and gastric cancer on social media platforms. However, comparable evaluations of PC-related videos are lacking. This study therefore aims to assess the role of domestic PC-related videos in science popularization and information dissemination, filling the research gap in this field.

Materials and methods

Data collection

Between January 22 and January 24, 2024, an observational retrospective study was conducted to evaluate the quality of short videos related to PC on three short video-sharing platforms (TikTok, Kwai and Bilibili) in China, using the Chinese term for “pancreatic cancer” as the search keyword. The top 100 videos in each platform’s comprehensive ranking were screened. The exclusion criteria were as follows: duplicate videos, videos irrelevant to the subject, videos without sound, and videos not in Chinese. Each video’s basic information and content quality were recorded and evaluated by two independent reviewers (Lei YT and Li X), and any disagreements were resolved by consensus involving a third investigator (Liao FQ). The following video information was recorded: name; identity authentication of the uploader; publication date; video duration; video content; and numbers of likes, shares and collections. We also counted videos that presented erroneous views related to PC.

Evaluation methodologies and procedure

Two investigators carefully reviewed the videos and used the Global Quality Scale (GQS) and the modified DISCERN tool to assess their content. The GQS is a commonly used tool for assessing the quality of health information presented in videos, with a rating scale ranging from 1 (very poor) to 5 (excellent) [16, 17]. The modified DISCERN tool was used to assess the reliability of the videos based on five parameters: clarity, relevance, traceability, robustness, and fairness [18, 19]. Two investigators scored each video on these five parameters, with scores ranging from 0 to 5 [20, 21]. We assessed the completeness of the video content based on whether it covered the following five aspects: symptoms, risk factors, diagnosis, treatment, and prognosis. For each aspect, videos were rated as not involved (0 points), partial explanation (1 point), or full explanation (2 points). The uploaders were divided into two main groups: healthcare professionals and non-healthcare professionals. The healthcare professionals’ group included all qualified doctors and nurses; the non-healthcare professionals’ group comprised science bloggers and patients.
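As an illustration, the per-aspect completeness ratings described above can be combined into a single score. The sketch below is ours, not the authors' code: the aspect names and the summing of the five 0–2 ratings into a 0–10 total are assumptions inferred from the score ranges reported in the Results.

```python
# Five content aspects rated per video, as described in the Methods:
# 0 = not involved, 1 = partial explanation, 2 = full explanation.
ASPECTS = ("symptoms", "risk_factors", "diagnosis", "treatment", "prognosis")

def completeness_score(ratings):
    """Sum the five per-aspect ratings into a 0-10 completeness score
    (assumed aggregation; the paper reports medians on this scale)."""
    assert set(ratings) == set(ASPECTS), "rate all five aspects"
    assert all(r in (0, 1, 2) for r in ratings.values())
    return sum(ratings.values())

# Hypothetical ratings for one video
video = {"symptoms": 2, "risk_factors": 1, "diagnosis": 0,
         "treatment": 1, "prognosis": 0}
score = completeness_score(video)
```

Under this scheme a video fully covering all five aspects would score 10, consistent with the upper IQR bounds in Table 3.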

Statistical analysis

Categorical variables are reported as frequencies and percentages and were compared with chi-square tests or Fisher’s exact tests. Continuous variables were first tested for normality using the Kolmogorov-Smirnov test. Variables that did not follow a normal distribution are reported as medians and interquartile ranges (IQRs); normally distributed variables are reported as means and standard deviations (SDs). Group comparisons were performed with Student’s t test or the Mann-Whitney U test, as appropriate. Cohen’s kappa coefficient (κ) was used to assess overall rating agreement, following a previous study [22]; Cohen’s κ for this study was greater than 0.8, indicating good interrater reliability. Spearman analysis was used to assess correlations between variables. Differences among multiple groups were compared with Bonferroni correction. A two-tailed P value < 0.05 was considered statistically significant. SPSS version 26.0 (IBM; Chicago, IL, USA) and R statistical software version 4.2.3 (www.r-project.org) were used for the statistical analyses.
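For illustration, the interrater agreement statistic used above can be computed directly. This is a minimal plain-Python sketch of Cohen's κ on invented reviewer scores (the ratings are hypothetical, not the study's data; the authors used SPSS/R):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical scores:
    kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical GQS ratings from two reviewers for ten videos
reviewer_1 = [2, 3, 2, 2, 4, 3, 2, 1, 3, 2]
reviewer_2 = [2, 3, 2, 2, 4, 3, 2, 2, 3, 2]
kappa = cohens_kappa(reviewer_1, reviewer_2)
```

With one disagreement in ten ratings, κ here exceeds 0.8, the threshold the paper cites for good interrater reliability.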

Results

Selection of short videos

A total of 300 videos were collected from TikTok, Kwai, and Bilibili. Nine duplicate videos, 7 silent videos, 17 videos unrelated to the subject, and 2 non-Chinese videos were excluded. A total of 265 short videos were ultimately included and evaluated (Fig. 1).

Fig. 1.

Fig. 1

Search strategy for short videos on pancreatic cancer

Short video characteristics

Among the 265 short videos included in this study, 84 (31.7%) originated from TikTok, 88 (33.2%) from Kwai, and the remaining 93 (35.1%) from Bilibili. The videos came primarily from two sources: healthcare professionals and non-healthcare professionals. Most videos related to PC were uploaded by healthcare professionals (219/265, 82.6%), whom we further categorized into PC specialists (148/219, 67.6%) and nonspecialists (71/219, 32.4%). Among non-healthcare professionals (46/265, 17.4%), we identified two categories, science bloggers and individual users, with science bloggers contributing more videos than individual users (37 vs. 9). The median video duration was 80 s (IQR: 44–160 s). The median number of likes was 610 (IQR: 147–2091), and the median numbers of shares and collections were 67 (IQR: 17–268) and 149 (IQR: 40–451), respectively (Table 1).

Table 1.

Video characteristics

Characteristic N = 265
Short-video sharing platforms [n(%)]
TikTok 84(31.7)
Kwai 88(33.2)
Bilibili 93(35.1)
Video source [n(%)]
Specialists 148 (55.8)
Non-specialists 71 (26.8)
Science blogger 37 (14.0)
Individual user 9 (3.4)
Number of likes [median(IQR)] 610 (147–2091)
Number of shares [median(IQR)] 67 (17–268)
Number of collections [median(IQR)] 149 (40–451)
Video duration [s, median(IQR)] 80 (44–160)
Misleading information [n(%)] 3 (1.1)
Completeness score [median(IQR)] 3 (2–4)
GQS scores [median(IQR)] 2 (2–3)
DISCERN scores [median(IQR)] 2 (2–3)

The sources of videos and audience preferences on TikTok, Kwai, and Bilibili are shown in Table 2. On TikTok, most videos were uploaded by specialists (56/84, 66.7%), followed by nonspecialists and science bloggers (12/84, 14.3% each) and individual users (4/84, 4.7%). On Kwai and Bilibili, videos uploaded by specialists likewise outnumbered those uploaded by nonspecialists, and science bloggers outnumbered individual users. Additionally, videos posted on TikTok received more likes (median: 1817 vs. 376 vs. 190, P < 0.001), shares (251 vs. 49 vs. 26, P < 0.001), and collections (366 vs. 106 vs. 80, P < 0.001) than those posted on the other two platforms.

Table 2.

Comparison of different short-video sharing platforms

Variables TikTok (N = 84) Kwai (N = 88) Bilibili (N = 93) p value
Video source [n(%)] 0.002
 Specialists 56 (66.7) 55 (62.5) 37 (39.8)
 Non-specialists 12 (14.3) 24 (27.3) 35 (37.6)
 Science blogger 12 (14.3) 8 (9.1) 17 (18.3)
 Individual user 4 (4.7) 1 (1.1) 4 (4.3)
Number of likes [median(IQR)] 1817 (680–6977) 376 (158–1145) 190 (53–943) < 0.001
Number of shares [median(IQR)] 251 (80–884) 49 (13–157) 26 (9–98) < 0.001
Number of collections [median(IQR)] 366 (152–1187) 106 (28–333) 80 (20–213) < 0.001
Completeness score [median(IQR)] 2 (2–3) 3 (2–4) 3 (2–5) < 0.001
GQS scores [median(IQR)] 2 (2–3) 2 (2–2) 2 (2–3) 0.001
DISCERN scores [median(IQR)] 2 (2–2) 2 (2–2) 2 (2–3) 0.223

Compared with nonspecialists, specialists received more likes on short video platforms (median: 678 vs. 270, P = 0.005), whereas the two groups did not differ significantly in shares or collections. Videos posted by specialists and nonspecialists also showed no significant differences in the completeness, information quality, or reliability of their content. In the comparison between science bloggers’ and patients’ videos, there were no significant differences in likes, shares, or collections, but the videos from science bloggers outperformed those from patients in content completeness (median: 4.5 vs. 2, P < 0.001) and reliability (median: 2.5 vs. 2, P = 0.001) (Table 3).

Table 3.

Comparison of different video source

Variables Non-specialists (N = 71) Specialists (N = 148) Science blogger (N = 37) Individual user (N = 9) p value
Number of likes [median(IQR)] 270 (71–1196) 678 (197–2041) 641 (114–9745) 3347 (908–15000) 0.005
Number of shares [median(IQR)] 45 (11–163) 69 (23–217) 139 (33–1681) 252 (61–703) 0.005
Number of collections [median(IQR)] 130 (22–443) 150 (48–376) 184 (55–1849) 312 (168–1308) 0.058
Completeness score [median(IQR)] 3 (2–5) 3 (2–4) 4.5 (3–6) 2 (1–4.5) < 0.001
GQS scores [median(IQR)] 2 (2–2.25) 2 (2–3) 3 (2–3) 2 (2–2.5) 0.003
DISCERN scores [median(IQR)] 2 (2–2) 2 (2–2) 2.5 (2–3) 2 (1–2) 0.001

We also analyzed the videos over time, dividing them into two groups: those published before 2023 and those published from 2023 onward. The results revealed an increase in the proportion of videos uploaded by healthcare professionals in the later period, but the difference was not statistically significant. Additionally, the two groups did not differ significantly in the number of likes or shares, completeness scores, GQS scores, or modified DISCERN scores (Table S1).

Short video content analysis

Drawing on the five main aspects outlined in the Methods section (symptoms, risk factors, diagnosis, treatment, and prognosis), the content completeness of each video was assessed. The findings revealed that few videos offered comprehensive coverage of PC-related content. As shown in Table 4, 70.6% of the videos either only partially addressed or completely omitted information about PC symptoms, 73.2% entirely overlooked PC risk factors, 61.5% failed to mention diagnostic methods for PC, and only 49.8% touched on some treatment measures. Furthermore, only a small fraction of the videos (3%) provided detailed descriptions of the prognosis of PC.

Table 4.

Completeness of video content

Video content Not involved (0 points) Partial explanation (1 point) Full explanation (2 points)
Symptoms, n (%) 104 (39.3) 83 (31.3) 78 (29.4)
Risk factors, n (%) 194 (73.2) 38 (14.3) 33 (12.5)
Diagnosis, n (%) 163 (61.5) 65 (24.5) 37 (14.0)
Treatment, n (%) 101 (38.1) 132 (49.8) 32 (12.1)
Prognosis, n (%) 133 (50.2) 124 (46.8) 8 (3.0)

Video quality and reliability assessments

As shown in Table 2, overall, the quality of the video content was relatively low, with a median score of 2 (IQR: 2–3) for both the GQS and the modified DISCERN score. On the TikTok and Bilibili platforms, the GQS scores of the videos were superior to those on the Kwai platform (P = 0.001); however, there was no significant difference in the GQS scores between the TikTok and Bilibili platforms. The modified DISCERN scores did not differ significantly across the three platforms (P = 0.223).

Figure 2 compares the completeness, GQS and the modified DISCERN scores for videos from different platforms and sources (Fig. 2A–C). Comparisons of completeness scores, GQS and the modified DISCERN scores in videos from different platforms are displayed in violin plots (Fig. 2D–F).

Fig. 2.

Fig. 2

Comparison between the modified DISCERN scores and Global Quality Scores (GQS) across video sources

Correlation analysis

We used Spearman correlation analysis to examine the relationships between the ratings and various video variables. The findings revealed positive correlations between the GQS and video duration (r = 0.310, P < 0.001) and between the GQS and the modified DISCERN score (r = 0.434, P < 0.001) (Table 5). Moreover, a modest correlation was observed between the number of video collections and the modified DISCERN score (r = 0.122, P = 0.048), whereas no significant correlation was found with the GQS. No notable correlations existed between the numbers of likes or shares and either the GQS or the modified DISCERN score.

Table 5.

Correlation analysis between video quality score and video features

GQS Modified DISCERN
r p value r p value
GQS - - 0.434 < 0.001
Modified DISCERN 0.434 < 0.001 - -
Likes -0.059 0.338 0.092 0.136
Shares -0.074 0.230 0.075 0.226
Collections -0.018 0.766 0.122 0.048
Video duration 0.310 < 0.001 0.413 < 0.001
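The rank correlations in Table 5 can be reproduced conceptually with a small amount of code. Below is a plain-Python Spearman sketch on invented toy data (the study used SPSS/R; the function names and numbers here are ours, for illustration only):

```python
def average_ranks(xs):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # Extend j over the run of values tied with xs[order[i]].
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based position of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Toy example: longer videos tend to score higher on the GQS
duration_s = [30, 45, 60, 90, 120, 180]
gqs = [2, 2, 3, 2, 4, 5]
rho = spearman_rho(duration_s, gqs)
```

Because both the GQS and the modified DISCERN are ordinal scales, a rank-based correlation such as Spearman's rho is the appropriate choice here, as the paper's Methods state.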

Discussion

Principal findings

In this study, we assessed the content, quality, and reliability of health education videos about PC on TikTok, Kwai, and Bilibili. Our findings indicated that the overall quality of PC-related short videos on these platforms was unsatisfactory, which may be attributed to relatively low barriers to posting and a lack of video review mechanisms. The median scores for video quality and reliability on the three platforms were both 2 points. Video quality on TikTok and Bilibili was similar and significantly superior to that on Kwai, whereas differences in video reliability across the three platforms were not statistically significant. One potential explanation is that videos on TikTok often garner more likes, indicating broader viewership and potential popularity; additionally, the shorter average duration of videos on Kwai may contribute to lower video quality. Alexander J. Didier et al. examined 39 YouTube videos related to PC to identify resources that accurately provide information on the diagnosis and treatment of PC [23]. That study highlighted that video quality could be improved but was limited by a small sample drawn from a single platform. Additionally, the average length of those videos was longer than on domestic short-video platforms (5.1 min vs. 1.3 min), which may affect viewer concentration. Furthermore, foreign videos tend to focus more on disease symptoms and signs, whereas domestic videos emphasize treatment and prognosis. There is thus an urgent need, both internationally and domestically, to strengthen the regulation of video content quality. Future research should encompass a wider range of platforms, include larger sample sizes, and present more comprehensive content to improve the scientific and educational value of PC-related videos.

Quality of the short videos on pancreatic cancer

In this study, we evaluated the content of health education videos on PC across the TikTok, Kwai, and Bilibili platforms, with a focus on completeness, information quality, and reliability. Generally, the videos we analyzed were of poor quality, with only a few offering comprehensive information on the five key aspects of PC: symptoms, risk factors, diagnosis, treatment, and prognosis. This situation may be attributed to the lack of oversight during video upload.

Moreover, our findings indicate that videos created by healthcare professionals have greater instructional value in content completeness, information quality, and reliability than those created by non-healthcare professionals. This may be because healthcare professionals or institutions have a better grasp of the consensus and literature surrounding PC and are more attuned to the latest knowledge. However, the content analysis also revealed that nonspecialists hold certain misunderstandings and make errors in their knowledge of PC, which may stem from differences in professional background and insufficient professional training, leading to an incomplete understanding of the disease. Overall, only three Traditional Chinese Medicine (TCM) practitioners expressed incorrect views when disseminating knowledge about PC: they ignored internationally recognized treatment guidelines and promoted scientifically unverified herbal therapies, which to a certain extent reduced the overall quality of videos released by healthcare professionals. Additionally, videos posted by non-healthcare professionals, particularly patients, may introduce biases, as they often rely on personal experiences and opinions [24]. Such biased content may not only delay patients’ treatment but also negatively affect their long-term outcomes and prognosis. These findings highlight the critical need to ensure the accuracy and scientific integrity of medical information disseminated through video platforms. Although TikTok introduced a policy in March 2021 under which only certified institutions and physicians can publish healthcare content, subject to professional team review, with certification revoked for violations [25], other video platforms do not impose similar restrictions on uploaders, and many non-healthcare professionals still upload medical science videos. Moreover, our study revealed that the proportion of videos uploaded by non-healthcare professionals was greater on Bilibili (22.6%) than on the other two platforms, which may contribute to a decline in the quality of medical-related videos on that platform.

Furthermore, the audience for popular science short videos typically does not include healthcare professionals, which presents certain challenges for content creators. To accommodate time constraints, uploaders often need to explain concepts concisely and understandably, resulting in relatively simple content. In contrast to traditional forms of medical knowledge dissemination, audiences tend to prefer content with a journalistic or storytelling character, so video creators need a higher level of expressive ability. Combining case studies with knowledge dissemination can effectively engage viewers and convey medical information in less time. Notably, in our study, some videos uploaded by professionals were recorded by doctors during patient consultations; these videos were well received by audiences, garnering many likes, collections, and shares.

Additionally, the characteristics of platform user groups play a crucial role in the variations in video quality. Bilibili appeals to a younger audience with a preference for animation and gaming; Kwai is popular among users in third- and fourth-tier cities and rural areas; and TikTok attracts a wide range of age groups because of its short video format. These differences in users may affect the focus and quality of video content, which in turn may affect the way users access information about PC. Although the sampling strategy in this study reduced bias through standardized search protocols, it did not fully circumvent the potential influence of the platforms’ inherent algorithms (e.g., platform-specific recommendation systems prioritizing high-engagement content). Future studies could adopt the following approaches to enhance data representativeness: (1) incorporating stratified sampling based on temporal dimensions (e.g., evenly selecting videos by month); (2) collaborating with platforms to obtain raw backend data streams.

In conclusion, employing short videos for public health education is a viable strategy. When executed properly, this method is effective, convenient, and essentially free. However, video quality varies, and some inaccurate health information is being disseminated. Hence, platform moderators must rigorously vet video content and the credentials of uploaders to ensure the dissemination of high-quality information. In the future, developing machine learning-based plugins that automatically assess the quality of user-uploaded videos appears to be an effective way to achieve video quality regulation [26–28]. In this way, short science education videos can contribute positively to health education and disease management.

Practical significance

As internet technology continues to advance and the demand for higher health standards increases, interest in internet-based health education is on the rise. The internet has shifted the role of patients from passive information receivers to active information seekers [29].

A randomized controlled trial revealed that, compared with written pamphlets, online video education significantly improved the disease knowledge and clinical outcomes of patients with atopic dermatitis [30]. Another randomized controlled trial, by Molavynejad et al., demonstrated significant reductions in weight, blood glucose parameters, and lipid levels in the video education group compared with the control group [31]. However, inconsistent video quality raises many concerns; for example, some videos deceive consumers by providing incorrect information.

Undoubtedly, we need to pay more attention to high-quality video content. An excellent health promotion video should integrate scientific rigor, accessibility, and clarity. Therefore, assessing the quality of videos is crucial for providing audiences with reliable information. Future research should propose recommendations on how to establish and develop these platforms.

Future directions and limitations

As the internet becomes the primary source of information, it is drawing more of people’s attention. Recently, the Chinese government issued a document titled “Guiding Opinions on Publishing and Disseminating Health Science Knowledge through Various Media” [32], further highlighting the importance of popularizing health science. Considering the widespread popularity of video-sharing platforms, it is necessary to explore the basic standards for content on these platforms. Therefore, we propose the following recommendations to promote short videos as an effective tool for health education. First, it is recommended that the government and video platforms collaborate to establish a monitoring body, consisting of medical experts, responsible for reviewing video content before it is uploaded. In parallel, considering the potential of machine learning technology for video quality recognition, developing machine learning-based plugins to automatically assess the quality of user-uploaded videos would significantly improve the efficiency of content regulation. Second, both the government and video platforms should actively encourage and support healthcare professionals in producing high-quality health education videos. Government agencies or relevant institutions could organize experts to produce authoritative videos, distributing them through official channels to ensure that the public receives scientifically reliable health guidance. Finally, healthcare professionals should make an effort to avoid the overuse of technical jargon in their videos. Instead, they should focus on using clear, accessible language and straightforward explanations to improve the accessibility and understanding of the information for a wider audience.

The proposed intervention framework is feasible across technological, resource, and legislative dimensions. Technically, the rapid development of artificial intelligence provides strong support: machine learning-based video quality recognition plugins can integrate natural language processing, image feature analysis, and related technologies to achieve accurate assessment, providing technical means for supervision. In terms of resources, the selected platforms have high traffic, receive substantial government and institutional financial support, and are increasingly joined by certified large hospitals and authoritative medical institutions. At the legal level, a number of laws and regulations have been issued to govern the reliability of video content, and policy, legislative, and regulatory measures can push platforms to strengthen the management of information authenticity and improve the quality and reliability of video content.

Our study has several strengths. First, we focused on the three largest short video platforms in China, covering audiences of all ages and varying cultural backgrounds. This broad coverage renders our findings more realistic and reliable while avoiding the limitations associated with using a single research platform. Second, in evaluating video content, we utilized GQS to assess video quality and the modified DISCERN to evaluate reliability, allowing for a multidimensional video information analysis. Finally, this study represents the first analysis of the quality of short videos related to PC across three social media platforms in China.

However, our study has certain limitations. First, we included only the top 100 videos from each platform. Although this represents a relatively small proportion of the available videos, we believe the top-ranked videos are sufficiently representative, as they are the ones viewers are most likely to encounter. Second, we included only videos uploaded to domestic short video-sharing platforms, so the results may not generalize to platforms in other languages (such as YouTube); subsequent cross-language research is needed to fill this gap. Additionally, cultural and regulatory differences may affect the generalizability of these results to other platforms. Finally, the assessment tools we selected are subjective and may lack comparability across studies.

Conclusion

In our study, we collected 300 PC-related videos from three major short-form video-sharing platforms in China and evaluated the information quality of the 265 that met our inclusion criteria. We found that the quality and reliability of videos on these platforms were subpar in terms of both source and content. Overall, videos created by healthcare professionals or institutions were more promising in terms of content completeness, information quality, and reliability than those created by non-healthcare professionals. Therefore, as video-sharing platforms become increasingly popular, strengthening supervision and quality control over these platforms is particularly important. People should also exercise caution when obtaining healthcare information on short video platforms.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1 (14.1KB, docx)

Abbreviations

PC

Pancreatic cancer

GQS

The Global Quality Scale

GLOBOCAN

The Global Cancer Observatory

US

The United States

IQR

Interquartile range

SDs

Standard deviations

κ

Cohen’s coefficient

TCM

Traditional Chinese Medicine

Author contributions

YT.L. and X.L. designed the study, participated in searching related videos and literature, collected data, and drafted the manuscript. YT.L. and FQ.L. summarized the data and conducted the statistical analysis. Y.Z. edited and reviewed the manuscript. All authors have read and approved the final manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (82370661, 82260133), Jiangxi Province’s Thousand Talents Plan for introducing and cultivating high-level talents in innovation and entrepreneurship (jxsp2019201028), Science and Technology Innovation Team cultivation project of the First Affiliated Hospital of Nanchang University (YFYKCTDPY202202).

Data availability

No datasets were generated or analysed during the current study.

Declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yuting Lei, Foqiang Liao and Xin Li contributed equally to this work as first authors.



