Preventive Medicine Reports. 2024 May 15;43:102761. doi: 10.1016/j.pmedr.2024.102761

Enhancing infectious diseases early warning: A deep learning approach for influenza surveillance in China

Liuyang Yang a,b,1, Jiao Yang b,1, Yuan He c,1, Mengjiao Zhang a, Xuan Han b, Xuancheng Hu b, Wei Li d, Ting Zhang b, Weizhong Yang b
PMCID: PMC11127166  PMID: 38798906

Abstract

Objective

This study aimed to develop a universally applicable, feedback-informed Self-Excitation Attention Residual Network (SEAR) model. The model dynamically adapts to evolving disease trends and changes in surveillance systems, accommodating various scenarios and thereby enhancing the effectiveness of early warning systems.

Methods

Surveillance data on influenza-like illness (ILI) were collected from several regions: Northern China, Southern China, Beijing, and Yunnan. The instantaneous reproduction number (Rt) was estimated to determine the threshold for issuing warnings. The Self-Excitation Attention Residual Network (SEAR) was devised using deep learning algorithms and was trained, validated, and tested. The SEAR model's efficacy was assessed with five metrics: precision, recall, F1 score, the confusion matrix, and the receiver operating characteristic (ROC) curve.

Results

With the warning lead time set at three days, the SEAR model outperformed five baseline models (logistic regression, support vector machine, random forest, Extreme Gradient Boosting, and Long Short-Term Memory) on all five evaluation metrics. Notably, the model's warning performance declined as the warning threshold and the number of lead days increased, although the ROC value remained above 0.7 in all scenarios.

Conclusion

The SEAR model demonstrated robust early warning performance for influenza in diverse Chinese regions with high accuracy and specificity. This novel model, augmenting traditional systems, supports widespread application for respiratory disease outbreak monitoring. Future evaluations could incorporate alternative indicators, with the model continuously updating through data feedback, thus enhancing its universal applicability. Ongoing optimization, using iterative feedback and expert judgment, heralds a transformative approach to surveillance-based early warning strategies.

Keywords: Influenza transmissibility, Deep learning, Early warning, Infectious disease surveillance

1. Introduction

Over the past century, respiratory infectious disease pandemics have occurred four times: the Spanish influenza pandemic of 1918, H2N2 in 1957, H3N2 in 1968, and H1N1 in 2009. The 1918 H1N1 pandemic alone caused an estimated 50 million deaths and a 10–12 year decrease in average life expectancy (Johnson and Mueller, 2002). Despite significant improvements in humanity's ability to cope with influenza pandemics, influenza epidemics still occur every year, with an estimated 1 billion people worldwide infected annually and 290,000–650,000 deaths from influenza-related respiratory disease (Iuliano et al., 2018, Geneva, 2023). Influenza therefore remains a severe threat to human health worldwide.

The establishment of influenza early warning thresholds and the issuance of early warning signals are essential for the timely detection of potential epidemics or pandemics and the implementation of mitigating measures. Various methods have been employed to develop influenza epidemic thresholds, such as the Average Epidemic Curve (AEC) method recommended by the WHO (World Health Organization, 2023), the Standard Deviation Multiplier method used by the United States (Hong Kong Department of Health, 2022, NCIRD, 2023), the Moving Epidemic Method (MEM) developed by Vega et al. (Rakocevic et al., 2019, Vega et al., 2013), and the Cumulative Sum and Control Chart (CUSUM) method (Tan et al., 2021). Nonetheless, there is no universal standard for such thresholds (Boyle et al., 2011).

The field of respiratory infectious disease surveillance and prediction has benefited greatly from the digital era, with vast amounts of real-time data and improved computational capabilities now available. This has opened new avenues for researchers to explore novel approaches and develop more robust and accurate early warning models. The emergence of machine learning and artificial intelligence presents an opportunity to transform influenza early warning systems: these methods can analyze large volumes of data, identify patterns and correlations, and generate predictive models that adapt to changing circumstances. By harnessing machine learning algorithms, researchers can potentially identify subtle signals or early indicators of influenza outbreaks that traditional models may miss. Establishing a universal standard for epidemic thresholds remains an ongoing challenge, but by embracing new technologies and methodologies we can move towards more accurate and adaptable early warning systems for influenza.

Previous studies have indicated that the pattern of influenza epidemics varies between provinces in China: provinces in Northern China generally have a regular winter peak, whereas some regions in Southern China, such as Shanghai, have both winter and summer peaks (Lei et al., 2021). This disparity in epidemic characteristics between Northern and Southern China makes it challenging to apply a uniform approach to developing influenza epidemic thresholds that is both accurate and timely. For instance, Cheng et al. found that the MEM method was ineffective for developing influenza epidemic thresholds in China, with a sensitivity of 54% and a specificity of 81%, and that it performed worse in southern provinces than in northern provinces (Cheng et al., 2016). The AEC and Standard Deviation Multiplier methods both base their thresholds on the assumptions of a single peak and normally distributed data; however, influenza activity in many southern Chinese regions, such as Shanghai, Guangdong, and Hong Kong, is multi-modal and not normally distributed. Cowling et al. have shown that CUSUM is also not effective for early warning of influenza epidemics, with the area under the receiver operating characteristic (ROC) curve below 0.6 (Cowling et al., 2006). China spans approximately 50° of latitude and 60° of longitude, encompassing roughly ten climatic types, and these climatic variations are accompanied by substantial differences in cultural practices and economic development across the nation. Therefore, to enhance cooperation between regions and enable joint application of monitoring and early warning systems at larger scales, it is necessary to establish a more universal surveillance and early warning technology that does not rely on predictive models of epidemic trends.

Deep learning is an area of artificial intelligence (AI) and computer science that uses data and algorithms to imitate the way humans learn, progressively improving its accuracy. In infectious disease research, deep learning can analyze existing surveillance data and identify patterns to forecast future trends, thereby supporting the prevention and control of infectious diseases (Santangelo et al., 2023). Unlike the four influenza threshold methods described above, deep learning imposes no assumptions on the distribution or type of the data, making it well suited to a unified influenza epidemic warning approach in a country such as China, where epidemic characteristics differ greatly between north and south. The primary objective of this study was to develop a novel deep learning model, the Self-Excitation Attention Residual Network (SEAR), tailored to influenza data with diverse epidemic characteristics. The model was built on Chinese influenza surveillance data to predict and provide early warnings of epidemic trends within China, and its performance was subsequently evaluated. SEAR fuses ResNet with the Squeeze-and-Excitation (SE) attention mechanism (Hu et al., 2020). The model first applies SE attention to pinpoint salient features in the dataset and then uses ResNet to amplify these essential features, which in turn improves the model's accuracy. The unique strength of SEAR lies in its ability to focus on and extract the indispensable features of the data while disregarding features that contribute little to the final outcome.

2. Method

2.1. Data sources

We selected four regions for this study: Northern China (excluding Beijing), Southern China (excluding Yunnan), Beijing, and Yunnan Province. We collected the influenza-like illness (ILI) percentage (ILI%) for the four regions from the Chinese National Influenza Surveillance Network, as shown in Fig. S1. Since the actual weekly influenza incidence rate (h) was not accessible, h was estimated from ILI% and the ILI positive rate (IPR), a combination that existing studies regard as a reliable proxy for influenza incidence (Wong et al., 2013, Ali et al., 2018).

2.2. Estimation of instantaneous reproduction number

Due to the uneven distribution of sentinel hospitals and population size and density within cities, the incidence rate is not a suitable evaluation index to assess the transmission capacity of influenza (Te Beest et al., 2013). Ali et al. proposed that the instantaneous reproduction number (Rt), defined as the average number of secondary cases generated by an infected person at time t, is a more appropriate indicator to measure influenza transmission capacity (Te Beest et al., 2013, Ali et al., 2018). Therefore, we estimated the daily influenza transmission capacity, Rt, in this study, based on weekly influenza surveillance data. The Rt measures the risk and speed of infectious disease transmission, and a higher value indicates a greater risk of rapid epidemic spread. When Rt is greater than 1, the epidemic will quickly spread in the population, and when Rt is less than 1, the number of infections in the population will gradually decrease. The estimation process is outlined below:

(1) A time series of weekly incidence counts was created by multiplying the weekly incidence rate by a constant of 10,000, the reciprocal of the sentinel site coverage in the study province, and then rounding the value to the nearest integer (Ali et al., 2022).

(2) Weekly incidence counts were interpolated with a spline function to generate daily incidence counts, which were then used to estimate Rt (Ali et al., 2018, Ali et al., 2022).

(3) Rt was estimated within the Bayesian branching process framework proposed by Cori et al. (Cori et al., 2013), with the estimation procedure described in the Supplementary Material.
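To make these steps concrete, the sketch below shows one possible Python implementation: weekly counts are spline-interpolated to daily counts, and Rt is computed as the posterior mean under the gamma-Poisson renewal model of Cori et al. (2013). The scaling constant, serial interval parameters (gamma, mean 2.6 days, SD 1.3 days), prior, and smoothing window are illustrative assumptions, not values reported in this paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import gamma

def weekly_to_daily_counts(weekly_rate, scale=10_000):
    """Steps (1)-(2): scale weekly incidence rates to counts, then spline-interpolate to daily counts."""
    weekly_counts = np.rint(np.asarray(weekly_rate, dtype=float) * scale)  # scaling constant is illustrative
    weeks = np.arange(len(weekly_counts))
    days = np.linspace(0, len(weekly_counts) - 1, (len(weekly_counts) - 1) * 7 + 1)
    daily = CubicSpline(weeks, weekly_counts)(days) / 7.0   # spread weekly totals across days
    return np.clip(daily, 0, None)

def serial_interval_pmf(max_days=14, mean=2.6, sd=1.3):
    """Discretized gamma serial interval (assumed influenza values, not from this paper)."""
    shape, scale = (mean / sd) ** 2, sd ** 2 / mean
    pmf = np.diff(gamma.cdf(np.arange(max_days + 1), a=shape, scale=scale))
    return pmf / pmf.sum()

def estimate_rt(daily_incidence, window=7, a_prior=1.0, b_prior=5.0):
    """Step (3): posterior mean Rt under the Cori et al. (2013) gamma-Poisson model."""
    w = serial_interval_pmf()
    I = np.asarray(daily_incidence, dtype=float)
    # Total infectiousness Lambda_t = sum_s I_{t-s} * w_s
    lam = np.array([np.sum(I[max(0, t - len(w)):t][::-1] * w[:min(t, len(w))])
                    for t in range(len(I))])
    rt = np.full(len(I), np.nan)
    for t in range(window, len(I)):
        sum_i, sum_lam = I[t - window + 1:t + 1].sum(), lam[t - window + 1:t + 1].sum()
        if sum_lam > 0:
            rt[t] = (a_prior + sum_i) / (1.0 / b_prior + sum_lam)  # gamma posterior mean
    return rt
```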

2.3. Data labeling

This study aimed to predict the early warning categories of daily Rt using a supervised learning method. The prediction result was a binary variable, 0 or 1. If the Rt value was greater than the warning threshold, it was labeled 1, signifying the need for a warning; if the Rt value was less than the warning threshold, it was labeled 0, indicating that no warning was required. The warning threshold is adjustable depending on the warning task. Fig. S2 illustrates the time points at which warnings for influenza Rt in Beijing would be issued under different warning thresholds between 1.00 and 1.10, with red indicating that a warning is necessary and blue indicating that no warning is required. Subplots a, c, d, e, and f correspond to the different warning thresholds.

As shown in Fig. S2, as the warning threshold increases, the number of time points requiring a warning decreases. When the warning threshold is greater than 1.06, some rapidly rising stretches of Rt are labeled as no warning, producing a large number of false negative labels. In contrast, when the warning threshold is less than 1.02, some stretches of lateral oscillation are labeled as warnings, producing a large number of false positive labels. Labels with many false negatives or false positives clearly cannot serve the purpose of early warning. The warning threshold should therefore be determined according to the actual task: if a more sensitive warning result is desired, a lower threshold should be used; if a more specific result is desired, a higher threshold should be used. In Fig. S2, warning thresholds between 1.02 and 1.04 are the most meaningful, as the numbers of warning and non-warning time points are relatively balanced, which satisfies the class-balance requirement of deep learning classification datasets.
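As a simple illustration of this labeling and threshold-selection step, the Python sketch below labels daily Rt values against a candidate threshold and reports the share of warning days at each threshold; the threshold grid and variable names are assumptions for the example, not the authors' code.

```python
import numpy as np

def label_warnings(rt, threshold):
    """1 = warning needed (Rt above the threshold), 0 = no warning."""
    return (np.asarray(rt, dtype=float) > threshold).astype(int)

def class_balance(rt, thresholds=(1.00, 1.02, 1.04, 1.06, 1.08, 1.10)):
    """Share of warning days at each candidate threshold; shares near 0 or 1
    signal the label imbalance discussed above."""
    return {thr: float(label_warnings(rt, thr).mean()) for thr in thresholds}

# Example: class_balance(rt_beijing) would typically point to thresholds around
# 1.02-1.04, where warning and non-warning days are roughly balanced.
```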

2.4. Model construction

This study aimed to construct and train a model that identifies the intrinsic features of the input data and establishes a mapping from Rt features to warning categories, so that Rt can be classified accurately for warning purposes. To this end, we propose a deep learning model called SEAR, as depicted in Fig. 1.

Fig. 1. Structural diagram of the SEAR model.

To construct the SEAR model, a network was designed to map the Rt data to warning categories and accurately identify the Rt features while keeping the network simple, lightweight, and efficient and optimizing all evaluation metrics. Because the input data are one-dimensional vectors, a self-attention structure is used so that the model can focus on and extract the essential features of the data while ignoring features that have minimal effect on the results.

First, the feature part of the data is fed into a global pooling layer, which helps the model capture the overall characteristics of the data. After the global pooling layer, the feature data are split into four paths. Path 1 is connected to the SE attention module; Path 2 is linked to the residual module after a convolution and global pooling layer; Path 3 is joined to both the SE attention module and the residual module, with two residual connections; and Path 4 is not connected to any module but is instead concatenated directly with the outputs of Paths 1, 2, and 3.

Path 1 employs an SE attention module, which enables the model to focus on and extract the essential features of the data while disregarding those that have little effect on the results.

Path 2 performs feature extraction through the residual module, which is composed of 24 convolutional layers; shortcut connections are added to reduce complexity, prevent overfitting, and avoid vanishing gradients.

After Paths 1, 2, 3, and 4 are concatenated, a sigmoid activation function in the fully connected layer predicts the probability of the 0 and 1 alert categories; the prediction for each alert category is a probability value between 0 and 1.
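For illustration, the tf.keras sketch below implements the two core building blocks named above (an SE attention block and a residual convolution block) on one-dimensional Rt windows and chains them into a toy classifier. It is a simplified sketch under assumed layer sizes and a 10-day input window, not the authors' exact four-path SEAR network with its 24-layer residual module.

```python
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, ratio=4):
    """Squeeze-and-Excitation: reweight channels by globally pooled importance scores."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)                 # squeeze
    s = layers.Dense(max(channels // ratio, 1), activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)    # excitation
    s = layers.Reshape((1, channels))(s)
    return layers.Multiply()([x, s])                       # recalibrate the feature maps

def residual_block(x, filters=32, kernel_size=3):
    """Two 1D convolutions with a shortcut connection to ease gradient flow."""
    shortcut = layers.Conv1D(filters, 1, padding="same")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

def build_toy_sear(window=10):
    """Toy SE-attention + residual classifier mapping a 10-day Rt window to a warning probability."""
    inputs = layers.Input(shape=(window, 1))
    x = layers.Conv1D(32, 3, padding="same", activation="relu")(inputs)
    x = se_block(x)          # focus on salient features
    x = residual_block(x)    # amplify them via residual convolutions
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)     # warning probability
    return tf.keras.Model(inputs, outputs)
```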

2.5. Model training

2.5.1. Dataset construction

(1) Design of a single data structure

This study utilized a single data structure design comparable to the backward construction method shown in Fig. S3, albeit with a few variations. As depicted in Fig. S3, if the goal is to forecast the warning category on December 2, 2022, the advance segment is set to a number of days prior to that date and the observation segment is set to the preceding 10 days of Rt, with both segments sliding forward in tandem.

(2) Dataset segmentation

This section used 3181 days of data, from June 8, 2011, to February 21, 2020, for Southern and Northern China, Beijing, and Yunnan Province to construct the training and testing sets. Following the conventional division of deep learning data into training, validation, and test sets, and taking into account the practical significance of ILI% prediction, the period from June 8, 2011, to October 9, 2018, was used as the training set (2681 days), and the period from October 9, 2018, to February 21, 2020, was used as the test set (500 days). The training set includes seven ILI% activity peaks, and the test set contains two.
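The sketch below illustrates one way to implement this backward sliding-window construction and chronological split in Python; the window length (10 days), lead time (3 days), and split sizes come from the text, while the variable names and exact indexing are illustrative assumptions.

```python
import numpy as np

def build_samples(rt, labels, window=10, lead=3):
    """Pair each 10-day Rt observation segment with the warning label `lead` days after it ends."""
    X, y = [], []
    for end in range(window, len(rt) - lead + 1):
        X.append(rt[end - window:end])       # observation segment
        y.append(labels[end + lead - 1])     # label `lead` days ahead
    return np.asarray(X)[..., None], np.asarray(y)

# Chronological split mirroring the paper (approximate, since windowing trims the first days):
# X, y = build_samples(rt_daily, warning_labels)
# X_train, y_train = X[:-500], y[:-500]   # roughly June 2011 - October 2018
# X_test,  y_test  = X[-500:], y[-500:]   # last 500 days, October 2018 - February 2020
```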

2.6. Training workflow

Ten-fold cross-validation is a model training technique commonly used in deep learning classification tasks that improves on conventional training. The dataset is divided randomly and equally into ten subsets, with one subset used as the validation set and the remaining nine as the training set in each fold. This more comprehensive use of the data allows a more accurate evaluation of model performance while reducing the overfitting caused by repeated training on a fixed subset of the data. In this study, 10-fold cross-validation was applied to the training set, and the model obtained in each fold was tested to assess its prediction and generalization performance.

2.7. Training environment and parameters

Model training was conducted on a 64-bit Windows 7 operating system with an Intel(R) Core(TM) i7-6700K 4.00 GHz processor, 16 GB of RAM, and an NVIDIA GeForce RTX 2060 graphics card with 6 GB of video memory. Deep learning was implemented with Python 3.6 and TensorFlow-GPU, using 100 epochs and a batch size of 128. For comparison, lead times of three, five, and seven days and warning thresholds of 1.02, 1.03, and 1.04 were evaluated.
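Combining the two subsections above, the sketch below shows one way to run the 10-fold cross-validated training with the stated 100 epochs and batch size of 128, reusing the toy model builder from Section 2.4; the optimizer, loss, and shuffling choices are assumptions rather than settings reported in the paper.

```python
from sklearn.model_selection import KFold

def train_cross_validated(X_train, y_train, build_model, n_splits=10):
    """Train one model per fold and return the Keras training histories."""
    histories = []
    for tr_idx, va_idx in KFold(n_splits=n_splits, shuffle=True, random_state=42).split(X_train):
        model = build_model()  # e.g., build_toy_sear() from the Section 2.4 sketch
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        histories.append(model.fit(X_train[tr_idx], y_train[tr_idx],
                                   validation_data=(X_train[va_idx], y_train[va_idx]),
                                   epochs=100, batch_size=128, verbose=0))
    return histories
```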

2.8. Model evaluation

Precision, recall, F1 score, the confusion matrix, and the receiver operating characteristic (ROC) curve were the five evaluation indexes used to assess the early warning results.
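As an illustration, the snippet below computes these five measures with scikit-learn from a model's predicted warning probabilities on the test set; the 0.5 classification cut-off is an assumption made for the example.

```python
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             confusion_matrix, roc_auc_score)

def evaluate_warnings(y_true, y_prob, cutoff=0.5):
    """Return the five evaluation measures for binary warning predictions."""
    y_pred = (np.asarray(y_prob) >= cutoff).astype(int)
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_prob),
    }
```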

3. Results

After training and validation, the SEAR model was used to predict the warning categories for 500 days of data, from October 9, 2018, to February 21, 2020, for Northern and Southern China, Beijing, and Yunnan Province. Fig. 2, Fig. 3, Fig. 4 and Fig. 5 compare, for each region respectively, the true warning labels with the labels predicted by the model with a lead time of 3 days and a warning threshold of 1.02.

Fig. 2. Comparison of true/predicted warning time points in Northern China.

Fig. 3. Comparison of true/predicted warning time points in Southern China.

Fig. 4. Comparison of true/predicted warning time points in Beijing.

Fig. 5. Comparison of true/predicted warning time points in Yunnan Province.

In each figure, the upper panel indicates the time points at which a warning should be issued at the warning threshold of 1.02, while the lower panel shows the warning time points predicted by the SEAR model.

As shown in Fig. 2, Fig. 3, Fig. 4 and Fig. 5, the SEAR model accurately predicted the warning categories over the 500-day period from October 9, 2018, to February 21, 2020, in Southern and Northern China, Beijing, and Yunnan Province.

To assess the model's efficacy, five evaluation indexes were employed, namely precision, recall, F1 score, the confusion matrix, and the ROC curve. Figs. S4–S7 and Tables S1–S4 show that the SEAR model can effectively provide early warning of ILI% levels in different regions; however, as the lead time and warning threshold increase, the warning performance in each region tends to decline.

To highlight the advantages of the SEAR model proposed in this study, we also compared it with other common models, namely logistic regression, support vector machine, random forest, XGBoost, and LSTM, with the lead time set to 3 days and the Rt threshold set to 1.02; the comparison results are shown in Table 1.

Table 1.

Contrast experiment results.

Model  Class  Precision  Recall  F1 Score  ROC Score
Northern China
RR 0 0.79 0.91 0.84 0.73
1 0.77 0.54 0.64
SVM 0 0.82 0.99 0.89 0.79
1 0.97 0.58 0.73
XGboost 0 0.86 0.97 0.92 0.86
1 0.94 0.75 0.84
RF 0 0.87 0.98 0.92 0.86
1 0.96 0.74 0.83
LSTM 0 0.84 0.97 0.90 0.86
1 0.93 0.65 0.77
SEAR 0 0.95 0.98 0.97 0.96
1 0.97 0.90 0.93
Southern China
RR 0 0.86 0.89 0.88 0.74
1 0.66 0.58 0.61
SVM 0 0.87 0.99 0.93 0.78
1 0.96 0.57 0.71
XGboost 0 0.90 0.98 0.94 0.84
1 0.91 0.71 0.80
RF 0 0.88 0.98 0.93 0.80
1 0.91 0.63 0.75
LSTM 0 0.88 0.85 0.86 0.80
1 0.60 0.65 0.63
SEAR 0 0.91 0.98 0.94 0.91
1 0.92 0.73 0.82
Beijing
RR 0 0.78 0.91 0.84 0.72
1 0.76 0.53 0.62
SVM 0 0.87 0.99 0.92 0.85
1 0.97 0.72 0.83
XGboost 0 0.94 0.94 0.94 0.92
1 0.89 0.90 0.89
RF 0 0.94 0.94 0.94 0.92
1 0.89 0.89 0.89
LSTM 0 0.95 0.92 0.94 0.92
1 0.86 0.91 0.89
SEAR 0 0.97 0.97 0.97 0.96
1 0.94 0.94 0.94
Yunnan
RR 0 0.83 0.92 0.87 0.69
1 0.65 0.46 0.54
SVM 0 0.85 0.99 0.92 0.74
1 0.95 0.49 0.65
XGboost 0 0.92 0.88 0.90 0.83
1 0.69 0.78 0.73
RF 0 0.92 0.95 0.93 0.85
1 0.84 0.74 0.79
LSTM 0 0.92 0.92 0.92 0.88
1 0.74 0.77 0.76
SEAR 0 0.93 0.98 0.95 0.93
1 0.93 0.77 0.84

RR: Ridge Regression.

SVM: Support Vector Machine.

XGboost: Extreme Gradient Boosting.

RF: Random Forest.

LSTM: Long Short-Term Memory.

SEAR: Self-Excitation Attention Residual Network.

The comparison experiments show that, relative to other common models, the SEAR model achieves comparatively good results in the warning tasks for Southern and Northern China, Beijing, and Yunnan Province.

4. Discussion

This study proposes a novel deep learning approach to the early warning task for ILI activity levels, and its scientific soundness, validity, and generalizability were verified and evaluated through multiple methods. The main innovations of this study are as follows:

First, in traditional early warning studies, early warning thresholds are typically determined from static values, historical baselines, and similar references based on disease prevalence trends. This study took a different approach that does not rely on prediction models of epidemic trends, thereby avoiding the limitation that the detectability of abnormal signals depends on the accuracy of those prediction models. Instead, the aim was to predict future warning signals directly, enabling earlier detection of epidemics.

Second, this research investigated the detection of irregularities in the transmission rate of a disease, expressed as the Rt value, as a means of early warning regardless of the size of the epidemic. Previous studies have mainly focused on uncovering abnormalities in epidemiological data such as infection rates and case counts, which is only effective once the number of infections is already large and the epidemic is well established. In contrast, this study recognizes that the rate of disease spread is not contingent upon the size of the outbreak or the stage of the epidemic, and instead uses the average number of secondary cases to gauge the potential rate of transmission. This approach can be extended to the early warning of other respiratory infectious diseases.

Third, the SEAR early warning model was constructed with deep learning methods, improving on the traditional early warning system. The method was shown to be more effective for early warning and can be tailored and optimized for different regions and data. Multiple training, validation, and testing experiments demonstrated the effectiveness of the SEAR model in predicting ILI% warnings in Northern and Southern China, Beijing, and Yunnan Province. The proposed method is a valuable supplement to the traditional early warning system and can provide a data-based reference and basis for disease prevention and control decision-making.

More importantly, we anticipate that this model can self-optimize through further use. At present, the model is designed around fixed early warning thresholds, which stand in for expert decisions in the real world, primarily because expert opinions were not available. In different usage scenarios involving different users, the model can be further optimized by incorporating feedback and corrections from expert judgments made during the early warning process. As the number of uses and users grows, machine learning can mimic human decision-making more closely.

This study has certain limitations. For instance, the model was not evaluated across all climate types; studies have indicated that factors such as temperature and humidity can influence influenza prevalence, and the regions examined here all have temperate monsoon or subtropical monsoon climates, so the applicability of the model to other environments remains to be established. Additionally, the ILI data included in this study were all weekly data, which may influence the outcomes.

In conclusion, this study developed a SEAR model for forecasting influenza Rt warnings in southern and northern China, Beijing, and Yunnan Province. The results showed that the model had good early warning performance; however, performance decreased as the lead time and the warning threshold increased. This could be attributed to the weakening correlation between Rt (features) and the warning category (labels) and to the growing number of non-warning days in the dataset; the decrease in the number of days requiring a warning exacerbated the data imbalance, leading to poorer model learning. Practitioners are therefore advised to adjust the lead time and threshold according to the results of this study, combined with the requirements of the early warning task, to achieve the best early warning effect.

CRediT authorship contribution statement

Liuyang Yang: Writing – review & editing, Visualization, Validation, Software, Methodology, Conceptualization. Jiao Yang: Writing – original draft. Yuan He: Resources, Formal analysis. Mengjiao Zhang: Formal analysis. Xuan Han: Data curation. Xuancheng Hu: Data curation. Wei Li: Supervision, Funding acquisition. Ting Zhang: Writing – review & editing, Supervision. Weizhong Yang: Supervision, Funding acquisition, Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

Our deepest gratitude goes to the anonymous reviewers and editors for their careful review and comments and thoughtful suggestions that have helped improve this paper substantially.

1. This research was supported by Biomedical High Performance Computing Platform, Chinese Academy of Medical Sciences.

2. This research was funded by:

(1) The Chinese Academy of Medical Sciences (CAMS) Innovation Fund for Medical Sciences under Grants 2021-I2M-1-044.

(2) The National Natural Science Foundation of China, grant number 71764035.

Footnotes

Appendix A

Supplementary data to this article can be found online at https://doi.org/10.1016/j.pmedr.2024.102761.

Contributor Information

Ting Zhang, Email: zt0416@126.com.

Weizhong Yang, Email: yangweizhong@cams.cn.

Appendix A. Supplementary data

The following are the Supplementary data to this article:

Supplementary Data 1
mmc1.docx (805.9KB, docx)

Data availability

Data will be made available on request.

References

  1. Ali S.T., Wu P., Cauchemez S., et al. Ambient ozone and influenza transmissibility in Hong Kong. Eur. Respir. J. 2018;51(5) doi: 10.1183/13993003.00369-2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Ali S.T., Cowling B.J., Lau E.H.Y., et al. Mitigation of Influenza B Epidemic with School Closures, Hong Kong, 2018. Emerg. Infect. Dis. 2018;24(11):2071–2073. doi: 10.3201/eid2411.180612. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Ali S.T., Cowling B.J., Wong J.Y., et al. Influenza seasonality and its environmental driving factors in mainland China and Hong Kong. Sci. Total Environ. 2022;818 doi: 10.1016/j.scitotenv.2021.151724. [DOI] [PubMed] [Google Scholar]
  4. Boyle J.R., Sparks R.S., Keijzers G.B., et al. Prediction and surveillance of influenza epidemics. Med. J. Aust. 2011;194(4):S28–S33. doi: 10.5694/j.1326-5377.2011.tb02940.x. [DOI] [PubMed] [Google Scholar]
  5. Cheng X., Chen T., Shu Y., et al. Assessing the application of Moving Epidemic Method for influenza in determining starting and ending thresholds of influenza epidemic in 15 northern provinces of China. Chin. J. Health Stat. 2016;33(06):979–982. [Google Scholar]
  6. Cori A., Ferguson N.M., Fraser C., et al. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am. J. Epidemiol. 2013;178(9):1505–1512. doi: 10.1093/aje/kwt133. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Cowling B.J., Wong I.O., Ho L.M., et al. Methods for monitoring influenza surveillance data. Int. J. Epidemiol. 2006;35(5):1314–1321. doi: 10.1093/ije/dyl162. [DOI] [PubMed] [Google Scholar]
  8. Geneva: World Health Organization[EB/OL]. [2023-03-12]. https://www.who.int/fr/news-room/fact-sheets/detail/influenza-(seasonal).
  9. Hong Kong Department of Health. Flu Express[EB/OL]. [2022-05-10]. https://www.chp.gov.hk/en/resources/29/304.html.
  10. Hu J., Shen L., Albanie S., et al. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020;42(8):2011–2023. doi: 10.1109/TPAMI.2019.2913372. [DOI] [PubMed] [Google Scholar]
  11. Iuliano A.D., Roguski K.M., Chang H.H., et al. Estimates of global seasonal influenza-associated respiratory mortality: a modelling study. Lancet. 2018;391(10127):1285–1300. doi: 10.1016/s0140-6736(17)33293-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Johnson N.P., Mueller J. Updating the accounts: global mortality of the 1918–1920 “Spanish” influenza pandemic. Bull. Hist. Med. 2002;76(1):105–115. doi: 10.1353/bhm.2002.0022. [DOI] [PubMed] [Google Scholar]
  13. Lei H., Jiang H., Zhang N., et al. Increased urbanization reduced the effectiveness of school closures on seasonal influenza epidemics in China. Infect. Dis. Poverty. 2021;10(1):127. doi: 10.1186/s40249-021-00911-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. U.S. Centers for Disease Control and Prevention, National Center for Immunization and Respiratory Diseases (NCIRD). [EB/OL]. [2023-03-12]. https://www.cdc.gov/flu/weekly/overview.htm.
  15. Rakocevic B., Grgurevic A., Trajkovic G., et al. Influenza surveillance: determining the epidemic threshold for influenza by using the Moving Epidemic Method (MEM), Montenegro, 2010/11 to 2017/18 influenza seasons. Euro Surveill. 2019;24(12) doi: 10.2807/1560-7917.Es.2019.24.12.1800042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Santangelo O.E., Gentile V., Pizzo S., et al. Machine Learning and Prediction of Infectious Diseases: A Systematic Review. 2023;5(1):175–198. [Google Scholar]
  17. Tan Y., Lai X., Wang J., et al. Risk-adjusted zero-inflated Poisson CUSUM charts for monitoring influenza surveillance data. BMC Med. Inf. Decis. Making. 2021;21(Suppl 2):96. doi: 10.1186/s12911-021-01443-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Te Beest D.E., Van Boven M., Hooiveld M., et al. Driving factors of influenza transmission in the Netherlands. Am. J. Epidemiol. 2013;178(9):1469–1477. doi: 10.1093/aje/kwt132. [DOI] [PubMed] [Google Scholar]
  19. Vega T., Lozano J.E., Meerhoff T., et al. Influenza surveillance in Europe: establishing epidemic thresholds by the moving epidemic method. Influenza Other Respi. Viruses. 2013;7(4):546–558. doi: 10.1111/j.1750-2659.2012.00422.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Wong J.Y., Wu P., Nishiura H., et al. Infection fatality risk of the pandemic A(H1N1)2009 virus in Hong Kong. Am. J. Epidemiol. 2013;177(8):834–840. doi: 10.1093/aje/kws314. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. World Health Organization. Global Epidemiological Surveillance Standards for Influenza[EB/OL]. [2023-03-12]. https://www.who.int/publications/i/item/9789241506601.
