eBioMedicine. 2016 Jan 12;4:191–196. doi: 10.1016/j.ebiom.2016.01.008

Enabling Precision Medicine With Digital Case Classification at the Point-of-Care

Patrick Obermeier a,b, Susann Muehlhans a,b, Christian Hoppe a,b, Katharina Karsch a, Franziska Tief a,b, Lea Seeber a,b, Xi Chen a,b, Tim Conrad c, Sindy Boettcher d, Sabine Diedrich d, Barbara Rath a,b
PMCID: PMC4776059  PMID: 26981582

Abstract

Infectious and inflammatory diseases of the central nervous system are difficult to identify early. Case definitions for aseptic meningitis, encephalitis, myelitis, and acute disseminated encephalomyelitis (ADEM) are available, but rarely put to use. The VACC-Tool (Vienna Vaccine Safety Initiative Automated Case Classification-Tool) is a mobile application enabling immediate case ascertainment based on consensus criteria at the point-of-care. The VACC-Tool was validated in a quality management program in collaboration with the Robert-Koch-Institute. Results were compared to ICD-10 coding and retrospective analysis of electronic health records using the same case criteria. Of 68,921 patients attending the emergency room in 10/2010–06/2013, 11,575 were hospitalized, with 521 eligible patients (mean age: 7.6 years) entering the quality management program. Using the VACC-Tool at the point-of-care, 180/521 cases were classified successfully and 194/521 ruled out with certainty. Of the 180 confirmed cases, 116 had been missed by ICD-10 coding, 38 misclassified. By retrospective application of the same case criteria, 33 cases were missed. Encephalitis and ADEM cases were most likely missed or misclassified. The VACC-Tool enables physicians to ask the right questions at the right time, thereby classifying cases consistently and accurately, facilitating translational research. Future applications will alert physicians when additional diagnostic procedures are required.

Keywords: Encephalitis, Meningitis, ADEM, Surveillance, Mobile health, Data standards

Highlights

  • Routine medical records often lack important clinical information.

  • Mobile applications can help to enhance data quality and granularity in real-time.

  • Digital tools should alert physicians instantly when pertinent data are missing.

We developed an evidence-based mobile health application for immediate case classification based on consensus criteria for aseptic meningitis, encephalitis, myelitis, and acute disseminated encephalomyelitis. Use of the ViVI Automated Case Classification Tool (VACC-Tool) at the point-of-care helped to achieve significantly enhanced data quality and granularity compared to ICD coding or retrospective data mining.

Future applications can be integrated into the physician workflow facilitating timely and consistent case ascertainment in compliance with international case criteria and regulatory data standards. This will provide accurate, high-resolution clinical data enabling syndromic surveillance, precision medicine, and measurable improvement in patient outcomes.

1. Introduction

Patients with acute infections of the central nervous system (CNS) or post-infectious neuroinflammatory disease may present with a variety of clinical signs and symptoms, which are often subtle or inconsistent (Granerod et al., 2010, Koelman and Mateen, 2015). This poses a major challenge in clinical practice, translational medicine, clinical trials, and surveillance settings. Actual case numbers may be underestimated, delaying the detection of disease outbreaks and important safety signals (Zwaan et al., 2010, Kelly et al., 2013, Gundlapalli et al., 2007). The timely and accurate classification of clinical cases constitutes an important prerequisite for precision medicine and timely access to therapy (Gundlapalli et al., 2007, Hughes and Jackson, 2012, Duffy, 2015). The ability to gain access to accurate clinical data in real-time will enable healthcare providers and public health stakeholders to overcome barriers to therapeutic and diagnostic innovation (Duffy, 2015).

Abbreviations: ADEM, acute disseminated encephalomyelitis; VACC-Tool, Vienna Vaccine Safety Initiative Automated Case Classification Tool; CNS, central nervous system; ICD, International Classification of Diseases; EHR, electronic health record; QM, quality management; IRB, Institutional Review Board; CDISC, Clinical Data Interchange Standards Consortium; FDA, Food and Drug Administration; PPA, positive percent agreement; NPA, negative percent agreement; ORA, overall rate of agreement; KL, Kullback-Leibler; VAERS, Vaccine Adverse Event Reporting System.

For billing purposes and in routine care, ICD-codes (International Classification of Diseases) are commonly used. ICD-codes do not distinguish between symptoms and diagnoses and are of limited value for systematic epidemiological research (St Germaine-Smith et al., 2012). For example, the same case of aseptic meningitis can be coded as either ‘headache’ or ‘meningitis’.

ICD coding may thus result in considerable inconsistencies across sites, which can only be avoided if pre-defined case criteria are implemented universally (Zwaan et al., 2010, Gundlapalli et al., 2007, St Germaine-Smith et al., 2012, Horwitz and Yu, 1984, Rath et al., 2010, Prins et al., 2002). Standardized case criteria are particularly important; consensus definitions have been developed for several complex neurological/autoimmune diseases, including aseptic meningitis, encephalitis, myelitis, and acute disseminated encephalomyelitis (ADEM) (Tapiainen et al., 2007, Sejvar et al., 2007). The application of these case criteria to electronic health records (EHR) has been shown to provide reproducible and consistent datasets as well as a significant advantage over ICD-codes assigned in routine care (Zwaan et al., 2010, Muehlhans et al., 2012, Lankinen et al., 2004).

The same proof-of-concept study, however, revealed that the retrospective application of standardized case criteria results in a certain amount of missing data and indeterminate results with regard to the case definitions. This was the case whenever critical data required for the case definition had not been documented in the EHR (Rath et al., 2010). In other words, if a specific symptom such as ‘headache’ was not mentioned in the EHR, it remains unclear to the assessor whether the patient did not have any headaches to begin with, or whether this question had not been raised during the physician encounter (Horwitz and Yu, 1984).

This observation led to the recommendation that standardized case criteria should be implemented immediately at the point-of-care, when the patient is still accessible and all pertinent data can be obtained (Rath et al., 2010).

This translational research project aimed to evaluate whether immediate data collection with the use of innovative mobile applications might enable the physician to ask the right questions at the right time, thereby ensuring that all relevant information is collected at the point-of-care. This would increase the yield of cases that can either be classified and confirmed, or ruled out with certainty.

2. Methods

This study builds on a previous proof-of-concept study performed at a pediatric hospital in Switzerland, where standardized case criteria for aseptic meningitis, encephalitis, myelitis, and ADEM were applied retrospectively to hospital discharge summaries (Rath et al., 2010).

Now, the same four consensus case definitions (Tapiainen et al., 2007, Sejvar et al., 2007) were integrated into a web user interface (electronic case report form, eCRF) as well as into a mobile application for standardized case ascertainment at the point-of-care: the VACC-Tool (Vienna Vaccine Safety Initiative Automated Case Classification Tool, www.vi-vi.org). This innovative mobile application, facilitating reliable case ascertainment at the patient's bedside, was designed based on the user-centered and solution-focused principles of Design Thinking (Seeber et al., 2015). Installation of the VACC-Tool on an Android-based handheld device, using Java as a platform-independent programming language, required approximately 5 min. For full transparency, audit trails and double data entry verification procedures were enabled, and data entry was restricted to authorized personnel. Following the instructions of the Tool, anonymized datasets required for the respective case definitions were collected within approximately 20 min at the point-of-care. A dynamic adaptation system, with subordinate questions linked to previously provided clinical information, contributed to time-efficient assessments using the VACC-Tool. Automated algorithms compared the collected data with the criteria of published case definitions for meningitis, encephalitis, myelitis, and ADEM (Tapiainen et al., 2007, Sejvar et al., 2007). Results were provided immediately at the patient's bedside. In accordance with the published case definitions, cases were classified into three distinct levels of diagnostic certainty, with Level 1 being closest to the gold standard and Levels 2 and 3 less stringent but still conclusive; Level 4 represented a case with insufficient data, whereas Level 5 indicated a definitive “rule-out” (Tapiainen et al., 2007, Sejvar et al., 2007).
Data entered into the VACC-Tool were fully compliant with standards issued by CDISC, the Clinical Data Interchange Standards Consortium (www.cdisc.org) (Souza et al., 2007). Mapping of all data elements to the Clinical Data Acquisition Standards Harmonization (CDASH), Study Data Tabulation Model (SDTM), and Biomedical Research Integrated Domain Group (BRIDG) models enables instant data read-outs and exports, including the automated submission of reports to regulatory authorities (Linder et al., 2010).
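The Level 1–5 output scheme described above can be illustrated with a small rule-based sketch. This is a hypothetical, much-simplified rule set: the function name, variables, and decision logic are illustrative only and do not reproduce the published Tapiainen/Sejvar consensus criteria.

```python
# Illustrative sketch of level-based case classification (hypothetical rules,
# not the published consensus criteria). Each variable is True (present),
# False (documented absent), or None (never assessed).

def classify_aseptic_meningitis(record):
    """Return a diagnostic-certainty level from 1 (closest to gold standard)
    to 5 (definitive rule-out), with 4 meaning insufficient data."""
    required = ("csf_pleocytosis", "bacterial_culture_negative",
                "clinical_meningitis_signs")
    values = {k: record.get(k) for k in required}

    # Level 5: a mandatory criterion is demonstrably absent -> rule out.
    if (values["csf_pleocytosis"] is False
            or values["bacterial_culture_negative"] is False):
        return 5
    # Level 4: a required variable was never assessed -> indeterminate.
    if any(v is None for v in values.values()):
        return 4
    # Level 1: all criteria positively documented.
    if all(values.values()):
        return 1
    # Levels 2-3 (less stringent combinations) would sit between these
    # branches in the full algorithm; collapsed to Level 3 here.
    return 3

print(classify_aseptic_meningitis(
    {"csf_pleocytosis": True, "bacterial_culture_negative": True,
     "clinical_meningitis_signs": True}))  # -> 1
```

The key design point, as in the text, is that an unassessed variable (None) yields an indeterminate Level 4 rather than being silently treated as absent, which is precisely what retrospective EHR review cannot distinguish.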

The VACC-Tool was available as a web user interface (eCRF) as well as a mobile application and was validated in the context of a quality management (QM) program at the Charité Department of Pediatrics in collaboration with the Robert-Koch-Institute in Berlin, Germany. All hospitalized children (0–18 years) with suspected CNS infection or inflammation meeting pre-defined QM entry criteria underwent standardized assessments by a specifically trained QM team using the VACC-Tool (Karsch et al., 2015). The QM team did not interfere with routine diagnostics or physician orders for laboratory or imaging studies. The QM program was approved by the Charité Institutional Review Board (IRB) (EA2/161/11). Informed consent procedures were waived by the IRB for the purpose of quality improvement. This work received no outside funding.

2.1. Retrospective Application of Pre-defined Case Criteria

For comparison, the same 521 clinical cases were re-classified applying the exact same case criteria and algorithms, but now retrospectively using routine EHR rather than queries raised by the QM staff at the point-of-care. As per consensus for algorithm-based datasets, any undocumented clinical signs or symptoms were reported as ‘absent’. Data abstraction and double entry verification were performed by specifically trained data entry staff in compliance with Good Clinical Practice guidelines.

2.2. Validating a New Diagnostic Standard Against an Imperfect Reference Standard

Statistical analysis was performed using SPSS 21.0 software. In the absence of an accepted gold standard for the differential diagnosis of aseptic meningitis, encephalitis, myelitis, and ADEM, analyses were conducted according to Food and Drug Administration (FDA) guidelines and suggested terminologies for the reporting of results from studies evaluating diagnostic tests. Positive and negative percent agreement (PPA, NPA) and overall rates of agreement (ORA) were calculated to test a new diagnostic test against the imperfect reference standard (Fig. 1) (Rath et al., 2010).

Fig. 1.

Fig. 1

Calculation of positive and negative percent agreement (PPA, NPA) and overall rates of agreement (ORA).

Cross tabulations were used to compare the retrospective and prospective application of case criteria in the same cohort. Statistical power describes the probability of identifying an actual difference with the statistical test used. Following Chow et al. (2008), we calculated the power of our test to be larger than 0.99 with n = 521 and alpha = 0.05.

Kappa coefficients were calculated to assess the coincidence of concordant/discordant results. P-values of less than 0.05 were considered statistically significant. Reported results were calculated with 95% confidence intervals based on the total sample size of 521, with a point estimate of 0.5 corresponding to the point of largest variance within a binomial distribution (Rath et al., 2010).

Feature selection analysis was performed using the Correlation Feature Selection measure (Hall, 1998). This method identifies the best set of features based on two criteria:

  • (1) Features are highly correlated with the class to predict.

  • (2) Features are not correlated with each other.

The algorithm accounts for missing values by distributing them across other values in proportion to their frequency.
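The two criteria are typically combined in a single subset "merit" score. The sketch below implements the usual formulation of the CFS merit heuristic (Hall, 1998); the correlation values are hypothetical, and real use would estimate them from the clinical dataset:

```python
# Sketch of the CFS merit heuristic: reward feature-class correlation,
# penalize redundancy among features. Inputs here are illustrative.
import math

def cfs_merit(feature_class_corrs, feature_feature_corrs):
    """Merit of a feature subset of size k:
    merit = k * r_cf / sqrt(k + k*(k-1) * r_ff),
    where r_cf is the mean |feature-class| correlation and
    r_ff the mean |feature-feature| correlation (pairs i < j)."""
    k = len(feature_class_corrs)
    r_cf = sum(feature_class_corrs) / k
    r_ff = (sum(feature_feature_corrs) / len(feature_feature_corrs)
            if feature_feature_corrs else 0.0)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Two class-correlated but mutually uncorrelated features score higher
# than the same pair when they are redundant with each other:
print(cfs_merit([0.8, 0.7], [0.1]) > cfs_merit([0.8, 0.7], [0.9]))  # -> True
```

A search strategy (e.g. greedy forward selection) would then pick the subset maximizing this merit, which is how criteria (1) and (2) are traded off in practice.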

The Kullback-Leibler (KL)-divergence was used to measure the relative distance of two probability distributions. The KL-divergence between two distributions P and Q is defined as KL(P‖Q) = Σ_{x∈X} P(x) · log(P(x)/Q(x)) (Kullback, 1987).

The KL-method was chosen as it does not make any assumption about the distribution of dependent variables. In particular, it does not assume that the dependent variable is normally distributed within each comparison group. By definition, the KL-divergence cannot be negative and increases if two distributions become more different from each other (Kullback, 1987).
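The definition above translates directly into code. The following minimal sketch computes the KL-divergence for two discrete distributions; the example distributions are illustrative:

```python
# Minimal KL-divergence for discrete distributions, matching the definition
# KL(P || Q) = sum over x of P(x) * log(P(x) / Q(x)).
import math

def kl_divergence(p, q):
    """p, q: sequences of probabilities over the same support, with
    q[i] > 0 wherever p[i] > 0. Terms with P(x) = 0 contribute zero."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # -> 0.0 (identical distributions)
print(kl_divergence([0.9, 0.1], [0.5, 0.5]) > 0)  # -> True (diverging distributions)
```

As stated in the text, the result is never negative and grows as the two distributions move apart; note also that it is asymmetric in P and Q.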

3. Results

From 11/2010–06/2013, a total of 68,921 patients seen in the emergency room were screened prospectively, with 11,575 patients hospitalized at the Charité Department of Pediatrics and 521 patients (4.5%) fulfilling QM entry criteria (Karsch et al., 2015); mean age: 7.6 years (0.03; 18.03); gender: 51% male.

A flow diagram illustrating the case selection process is shown in Fig. 2.

Fig. 2.

Fig. 2

Case selection process.

3.1. Using the VACC-Tool at the Point-of-Care

Using the VACC-Tool at the point-of-care, 34.6% of patients (180/521) were successfully classified as either ‘aseptic meningitis’, ‘encephalitis’, or ‘ADEM’. None of the 521 cases fulfilled myelitis case criteria. Of the 341 unclassified cases, 194 were ruled out with certainty (Fig. 3).

Fig. 3.

Fig. 3

Case classification results for aseptic meningitis (ASM), encephalitis (ENC), and acute disseminated encephalomyelitis (ADEM) among the same clinical cases (N = 521) applying

a) automated VACC-Tool classification at the point-of-care versus

b) retrospective case classification using identical algorithms based on medical records.

3.2. Comparison Between Retrospective and Prospective Case Classification

Applying the same algorithms to EHR retrospectively, 33 cases (6.3%, 33/521) would have been missed and 38 cases (7.3%, 38/521) would have been misclassified. In contrast to use of the VACC-Tool at the point-of-care, important clinical data were not available in the EHR of these 33 patients.

3.3. Comparison Between ICD-10 and VACC-Tool at the Point-of-Care

Comparison of ICD-10 codes with use of the VACC-Tool at the point-of-care revealed that 22.3% of cases (116/521) fulfilled any of the four case definitions, but were not coded as such. The most commonly missed diagnosis by ICD coding was ADEM (89/116, 76.7%). An additional 38 cases would have been misclassified (e.g. encephalitis falsely coded as aseptic meningitis).

3.4. Analysis of Diagnostic Accuracy With the VACC-Tool

As expected with a confirmatory diagnostic tool, NPA were usually higher than PPA, suggesting that specific disease entities can be ruled out with high levels of certainty.

Discrepancies between prospective and retrospective case classification were highest for complex disease entities such as encephalitis and ADEM. Kappa coefficients for assessing coincidence of concordant/discordant results were almost perfect for aseptic meningitis, whereas they were considerably lower for encephalitis and ADEM (Table 1).

Table 1.

Comparison of VACC-Tool case classification at the point-of-care (VACC) with retrospective case classification (RETRO) based on the same algorithms with overall rates of agreement (ORA), positive percent agreement (PPA), negative percent agreement (NPA), and kappa scores (k) (N = 521).

Categories          | Aseptic meningitis | Encephalitis  | ADEM
VACC +              | 63                 | 65            | 76
VACC −              | 458                | 456           | 445
RETRO +             | 61                 | 65            | 69
RETRO −             | 460                | 456           | 452
VACC +/RETRO +      | 59                 | 40            | 54
VACC +/RETRO −      | 4                  | 25            | 22
VACC −/RETRO +      | 2                  | 25            | 15
VACC −/RETRO −      | 456                | 431           | 430
ORA [95% CI]        | 99% [95; 100]      | 90% [86; 94]  | 93% [89; 97]
PPA [95% CI]        | 97% [93; 100]      | 62% [58; 66]  | 78% [74; 82]
NPA [95% CI]        | 99% [95; 100]      | 95% [91; 99]  | 95% [91; 99]
κ                   | 0.95⁎⁎⁎            | 0.56⁎⁎⁎       | 0.70⁎⁎⁎

⁎⁎⁎ p < 0.001.

3.5. Key Features of VACC-Tool Classifications at the Point-of-Care

KL-divergence analysis revealed that pleocytosis in the cerebrospinal fluid (KL 8.34), followed by negative gram stain (KL 4.75) and bacterial culture results (KL 3.38) were critical for the classification of aseptic meningitis.

For ADEM and encephalitis, characteristic histological findings yielded the highest KL-values (7.6), simultaneously leading to classifications with the highest level of evidence (‘Level 1’ of diagnostic certainty). The second-highest KL-values for ADEM and encephalitis were assigned to clinical signs of cranial nerve deficits (KL 0.88 and 0.68, respectively).

Unbiased feature selection results confirmed the combination of clinical and laboratory/neuroimaging data as ‘relevant’. Table 2 displays unbiased feature selection results for VACC-Tool classifications including KL-divergences for key features (Kullback, 1987).

Table 2.

Feature selection results are displayed for VACC-Tool classification of aseptic meningitis, encephalitis, and ADEM. Dark gray background color indicates a positive correlation (i.e. the presence of a symptom is important) between clinical sign/laboratory finding and case classification whereas light gray background color indicates a negative correlation (i.e. the absence of a symptom is important). Numbers are KL-divergences, indicating increasing importance with increasing numerical values.


4. Discussion

We report the successful implementation of an evidence-based eCRF and mobile application for the automated case classification of aseptic meningitis, encephalitis, myelitis, and ADEM at the point-of-care. Use of the VACC-Tool at the patient's bedside led to significantly enhanced data quality compared to standard of care (ICD coding) or retrospective data mining using identical algorithms. The VACC-Tool allows physicians to acquire all clinical parameters that are necessary to fulfill the respective case definitions, thus generating meta-analyzable, highly standardized data for real-time surveillance and precision medicine.

Standardized case ascertainment based on pre-defined case definitions has been shown to be reproducible and consistent (Rath et al., 2010, Muehlhans et al., 2012). The retrospective classification of clinical cases, however, is often challenged by missing data in medical records, leading to unclassifiable and indeterminate results (Horwitz and Yu, 1984, Rath et al., 2010). Undocumented signs and symptoms are usually indistinguishable from absent signs or symptoms, unless the physician decided to explicitly document pertinent negative values (Horwitz and Yu, 1984). Lack of standardization and data granularity are key challenges to data mining and other downstream applications based on medical data generated in routine care. This may lead to underestimation of actual case numbers, with significant consequences for infectious disease surveillance, public health, and pharmacovigilance (Linder et al., 2010, Al-Tawfiq et al., 2014). Real-time data capture based on pre-defined case criteria, on the other hand, may increase the number of suspected cases while ruling out true negatives (Rath et al., 2010). The reported validation study confirmed that with consistent use of case definitions at the point-of-care, the number of suspected cases may increase compared to retrospective case classification. With the help of the VACC-Tool, complete clinical datasets were captured.

There is an inherent limitation to the design of an independent validation study. The VACC-Tool assessments were performed independently by QM staff, whereas any orders for laboratory or neuroimaging procedures remained with the treating physician and the routine hospital workflow. The organizational structure allowed independent validation of the Tool, but with the limitation that treating physicians did not receive any information about the results of the VACC-Tool classification or any additional tests that would have been required to complete the diagnostic algorithm. To resolve all suspected cases in the present study, additional laboratory and neuroimaging/electrophysiology data would have been required in 147 indeterminate cases. For example, lumbar punctures and neuroimaging and/or EEG studies had not been ordered in 37 suspected cases, rendering the classification into ‘aseptic meningitis versus encephalitis’ impossible. This drawback will be avoided in future use, when the VACC-Tool issues digital reminders in cases where pertinent laboratory, neuroimaging, or electrophysiology studies are indicated according to the respective case definition.

Another limitation of this proof-of-concept study is that the initial VACC-Tool included only four case definitions. The limited range of algorithms may have resulted in some effect modification and ascertainment bias in favor of the pre-defined disease entities represented in the VACC-Tool. An unbiased and conclusive differentiation of the four disease entities from related disease entities such as Guillain-Barré-Syndrome or Multiple Sclerosis will require the inclusion of additional case definitions into the VACC-Tool and the refinement of any areas of overlap. With increasing complexity of case algorithms, the VACC-Tool will provide a clear advantage for the automated discrimination of closely related but complex disease entities while providing CDISC-compliant standardized data at the point-of-care.

During the development of the VACC-Tool and multiple rounds of iterations, the direct feedback from clinicians was instrumental, as they were able to test the Tool in their everyday workflow and in busy clinical care settings. The key motivation for clinicians to use the VACC-Tool is the ability to discriminate several, including complex, neurologic disease entities reliably and consistently. Difficult cases, for example patients presenting with clinical symptoms consistent with both ADEM and encephalitis, were classified as belonging to a pre-defined area of overlap between ADEM and encephalitis (Level 3A), indicating a need for further imaging or histopathological studies to differentiate the two. Therefore, the VACC-Tool promotes timely diagnoses while decreasing interrater variability, but it also opens the possibility to trigger important follow-up diagnostic procedures, if indicated.

Diagnostic algorithms require the uniform assignment of positive as well as negative values to each variable. Bioinformaticians interested in machine learning and advanced precision medicine analyses will ask for a clear discrimination between positive and pertinent negative values for each variable. Use of the VACC-Tool at the point-of-care increased the number of conclusively and consistently classified cases, thereby decreasing the risk of missing data. Therefore, use of the VACC-Tool facilitated the downstream application of unbiased feature selection and machine learning algorithms to the clinical dataset. Feature selection analysis also confirmed that the case classifications could have been 100% complete if additional laboratory, neuroimaging, and electrophysiology studies had been obtained where required according to the respective consensus case definition. White cell counts, culture results, and gram stains from cerebrospinal fluid, for example, are crucial whenever a clinical suspicion of aseptic meningitis or encephalitis is raised. When such pertinent laboratory data are lacking, the number of indeterminate cases will increase.

The agreement between retrospective and prospective case classification was almost perfect for aseptic meningitis. This means that the physician assessment in routine care was not too different from the results of the VACC Tool. In cases of ADEM or encephalitis however, rates of agreement were lower, which may be attributed to the imperfection of the reference standard (retrospective use of case criteria based on incomplete datasets in health records). In other words, the more complex the disease entity and the attribution of diagnoses in routine care, the greater the added value of using the VACC-Tool.

In the future, the VACC-Tool will include a reminder-function assisting the clinician to obtain laboratory, neuroimaging or electrophysiology studies, if necessary.

The potential for quality improvement through mobile health tools is evident: The physician will be enabled to ask the right questions at the right time, thereby achieving greater data granularity and accuracy. This may contribute to timely diagnoses and ultimately, improved patient satisfaction with the physician encounter (Lin et al., 2001). If standards are integrated effectively into the clinical workflow, for example through mobile devices or pop-up windows in EHR, physicians will be enabled to generate complete datasets that are compatible with datasets obtained by their peers (Linder et al., 2010, Dal Pan, 2010). For maximum data interoperability across sites, the VACC-Tool was developed in full compliance with HIPAA (Health Insurance Portability and Accountability Act) privacy rules and CDISC data standards (Souza et al., 2007). Consistency across sites is particularly important in pharmacovigilance settings, where the ability to prove the absence of adverse events provides the greatest challenge (Linder et al., 2010, Oubari et al., 2015).

Automated case classification may be of great value to disease surveillance as well as vaccine and drug pharmacovigilance. If all relevant questions are answered in real-time, i.e. at the point-of-care, the denominator of eligible cases will be known immediately. The VACC-Tool was specifically designed to enable precision medicine as it helps to identify small subpopulations of patients, who may be at risk for adverse events following immunization or rare disease presentations following infectious and other autoimmune triggers. Only when these patients are diagnosed at the point-of-care, can biomarker studies and other subsequent studies be performed in a consistent manner. Use of the VACC-Tool will generate well-defined clinical datasets as they will be required for precision medicine approaches and new, evidence-based strategies to improve medical care for highly vulnerable patient populations (Duffy, 2015, Jaffe, 2015).

Downstream applications may include additional algorithms for causality assessments following successful case ascertainment. This may help the clinician for example, to discriminate whether a specific adverse event was triggered by ‘natural’ infection, vaccine or drug administration, autoimmune disease, or other causes. The timely paper-free transmission of unbiased safety data to health authorities represents one potential application of the VACC-Tool (Linder et al., 2010, Al-Tawfiq et al., 2014). To date, vaccine safety monitoring relies on passive surveillance systems such as VAERS (Vaccine Adverse Event Reporting System), large linked databases, and ICD-codes, with significant limitations with respect to data standardization (Lankinen et al., 2004, Baker et al., 2015).

Automated case classification will contribute significantly to the timely diagnosis of important infectious diseases, which may be under-recognized in routine care and therefore under-represented in surveillance datasets (Kelly et al., 2013, Dale, 2003). The recent outbreak of enterovirus 71-associated CNS infections provides a practical example, where automated case classification in real-time might have been useful to surveillance clinics and reference laboratories (Zander et al., 2014). Further development and improvements of the VACC-Tool could include the ability to ‘flag’ any clinically suspect cases prompting automated reporting of anonymized data signals. If integrated into the routine EHR workflow, the VACC-algorithms could be programed to prompt consultations by infectious disease or infection control specialists, thereby improving patient outcomes and quality of care (Zwaan et al., 2010, Jaffe, 2015).

In a modified version adjusted to self-reported outcomes, the VACC-Tool could also be used to strengthen the patient voice in disease surveillance (Duffy, 2015). The language would need to be adjusted to accommodate the direct reporting by laypersons willing to participate in surveillance programs, thereby providing immediate feedback to healthcare providers. For the long-term monitoring of infectious disease outcomes, patients could be encouraged to report any improvements over time.

Finally, automated case classification should be made available to promote standardized, user-centered patient care in both high- and low-resource settings (Rath et al., 2010). The case criteria accommodate application in diverse settings, as any specific case criteria may be met with different levels of diagnostic certainty. If used widely, the VACC-Tool will enable precision medicine based on unbiased assessments in real-time, regardless of the setting.

5. Conclusion

Automated case classification at the point-of-care will support the physician to save time and improve patient care. The usual pitfalls of underestimating or misclassifying infectious and neuroimmunological diseases in routine care and ICD coding procedures can be avoided effectively. Innovative mobile health tools will facilitate the timely identification of relevant clinical cases at the point-of-care, including neuroinfectious diseases as well as rare, neuroinflammatory adverse events. Future studies will explore the integration of automated case classification systems into the busy hospital workflow, facilitating translational research, surveillance systems, and the reporting of CDISC-compliant anonymized meta-data to health authorities.

Author Contributions

Study concept and design: Rath. Acquisition of data: Obermeier, Muehlhans, Karsch, Seeber, Tief, Chen. Analysis and interpretation of data: Obermeier, Hoppe, Conrad, Rath. Drafting of the manuscript: Obermeier. Data management, data quality check and quality assurance: Hoppe, Obermeier. Critical revision of the manuscript for important intellectual content: Muehlhans, Hoppe, Boettcher, Diedrich, Conrad, Rath. Statistical analysis: Obermeier, Conrad. Obtained funding (in kind): Rath, Diedrich. Administrative, technical, and material support: Hoppe, Rath, Diedrich, and Boettcher.

Conflict of Interest Statements

The authors have no conflict of interest to declare.

Acknowledgments

None.

Footnotes

This work obtained no outside funding.

References

  1. Al-Tawfiq J.A., Zumla A., Gautret P. Surveillance for emerging respiratory viruses. Lancet Infect. Dis. 2014;14(10):992–1000. doi: 10.1016/S1473-3099(14)70840-0.
  2. Baker M.A., Kaelber D.C., Bar-Shain D.S. Advanced clinical decision support for vaccine adverse event detection and reporting. Clin. Infect. Dis. 2015. doi: 10.1093/cid/civ430.
  3. Chow S., Shao J., Wang H. Sample Size Calculations in Clinical Research. Chapman & Hall/CRC Biostatistics Series; 2008.
  4. Dal Pan G.J. Commentary on "secondary use of electronic health record data: spontaneous triggered adverse drug event reporting" by Linder et al. Pharmacoepidemiol. Drug Saf. 2010;19(12):1216–1217. doi: 10.1002/pds.2050.
  5. Dale R.C. Acute disseminated encephalomyelitis. Semin. Pediatr. Infect. Dis. 2003;14(2):90–95. doi: 10.1053/spid.2003.127225.
  6. Duffy D.J. Problems, challenges and promises: perspectives on precision medicine. Brief. Bioinform. 2015. doi: 10.1093/bib/bbv060.
  7. Granerod J., Ambrose H.E., Davies N.W. Causes of encephalitis and differences in their clinical presentations in England: a multicentre, population-based prospective study. Lancet Infect. Dis. 2010;10(12):835–844. doi: 10.1016/S1473-3099(10)70222-X.
  8. Gundlapalli A.V., Tang H., Tonnierre C. Validity of electronic medical record-based rules for the early detection of meningitis and encephalitis. AMIA Annu. Symp. Proc. 2007:299–303.
  9. Hall M.A. Correlation-Based Feature Subset Selection for Machine Learning. University of Waikato, Hamilton, New Zealand; 1998.
  10. Horwitz R.I., Yu E.C. Assessing the reliability of epidemiologic data obtained from medical records. J. Chronic Dis. 1984;37(11):825–831. doi: 10.1016/0021-9681(84)90015-8.
  11. Hughes P.S., Jackson A.C. Delays in initiation of acyclovir therapy in herpes simplex encephalitis. Can. J. Neurol. Sci. 2012;39(5):644–648. doi: 10.1017/s0317167100015390.
  12. Jaffe S. Planning for US precision medicine initiative underway. Lancet. 2015;385(9986):2448–2449. doi: 10.1016/S0140-6736(15)61124-2.
  13. Karsch K., Obermeier P., Seeber L. Human parechovirus infections associated with seizures and rash in infants and toddlers. Pediatr. Infect. Dis. J. 2015. doi: 10.1097/INF.0000000000000802.
  14. Kelly T.A., O'Lorcain P., Moran J. Underreporting of viral encephalitis and viral meningitis, Ireland, 2005–2008. Emerg. Infect. Dis. 2013;19(9):1428–1436. doi: 10.3201/eid1909.130201.
  15. Koelman D.L., Mateen F.J. Acute disseminated encephalomyelitis: current controversies in diagnosis and outcome. J. Neurol. 2015. doi: 10.1007/s00415-015-7694-7.
  16. Kullback S. Letter to the Editor: The Kullback–Leibler Distance. Am. Stat. 1987;41(4):340–341.
  17. Lankinen K.S., Pastila S., Kilpi T., Nohynek H., Makela P.H., Olin P. Vaccinovigilance in Europe: need for timeliness, standardization and resources. Bull. World Health Organ. 2004;82(11):828–835.
  18. Lin C.T., Albertson G.A., Schilling L.M. Is patients' perception of time spent with the physician a determinant of ambulatory patient satisfaction? Arch. Intern. Med. 2001;161(11):1437–1442. doi: 10.1001/archinte.161.11.1437.
  19. Linder J.A., Haas J.S., Iyer A. Secondary use of electronic health record data: spontaneous triggered adverse drug event reporting. Pharmacoepidemiol. Drug Saf. 2010;19(12):1211–1215. doi: 10.1002/pds.2027.
  20. Muehlhans S., Richard G., Ali M. Safety reporting in developing country vaccine clinical trials: a systematic review. Vaccine. 2012;30(22):3255–3265. doi: 10.1016/j.vaccine.2012.02.059.
  21. Oubari H., Tuttle R., Rath B., Bravo L. Communicating vaccine safety to the media and general public. Curr. Drug Saf. 2015;10(1):80–86. doi: 10.2174/157488631001150407111312.
  22. Prins H., Kruisinga F.H., Buller H.A., Zwetsloot-Schonk J.H. Availability and usability of data for medical practice assessment. Int. J. Qual. Health Care. 2002;14(2):127–137. doi: 10.1093/oxfordjournals.intqhc.a002599.
  23. Rath B., Magnus M., Heininger U. Evaluating the Brighton Collaboration case definitions, aseptic meningitis, encephalitis, myelitis, and acute disseminated encephalomyelitis, by systematic analysis of 255 clinical cases. Vaccine. 2010;28(19):3488–3495. doi: 10.1016/j.vaccine.2010.02.053.
  24. Seeber L., Michl B., Rundblad G. A design thinking approach to effective vaccine safety communication. Curr. Drug Saf. 2015;10(1):31–40. doi: 10.2174/157488631001150407105400.
  25. Sejvar J.J., Kohl K.S., Bilynsky R. Encephalitis, myelitis, and acute disseminated encephalomyelitis (ADEM): case definitions and guidelines for collection, analysis, and presentation of immunization safety data. Vaccine. 2007;25(31):5771–5792. doi: 10.1016/j.vaccine.2007.04.060.
  26. Souza T., Kush R., Evans J.P. Global clinical data interchange standards are here! Drug Discov. Today. 2007;12(3–4):174–181. doi: 10.1016/j.drudis.2006.12.012.
  27. St Germaine-Smith C., Metcalfe A., Pringsheim T. Recommendations for optimal ICD codes to study neurologic conditions: a systematic review. Neurology. 2012;79(10):1049–1055. doi: 10.1212/WNL.0b013e3182684707.
  28. Tapiainen T., Prevots R., Izurieta H.S. Aseptic meningitis: case definition and guidelines for collection, analysis and presentation of immunization safety data. Vaccine. 2007;25(31):5793–5802. doi: 10.1016/j.vaccine.2007.04.058.
  29. Zander A., Britton P.N., Navin T., Horsley E., Tobin S., McAnulty J.M. An outbreak of enterovirus 71 in metropolitan Sydney: enhanced surveillance and lessons learnt. Med. J. Aust. 2014;201(11):663–666. doi: 10.5694/mja14.00014.
  30. Zwaan L., de Bruijne M., Wagner C. Patient record review of the incidence, consequences, and causes of diagnostic adverse events. Arch. Intern. Med. 2010;170(12):1015–1021. doi: 10.1001/archinternmed.2010.146.