PLOS One. 2023 Feb 16;18(2):e0281929. doi: 10.1371/journal.pone.0281929

Using a data-driven approach for the development and evaluation of phenotype algorithms for systemic lupus erythematosus

Joel N Swerdel 1,2,*, Darmendra Ramcharran 1,¤, Jill Hardin 1,2
Editor: Luca Navarini
PMCID: PMC9934349  PMID: 36795690

Abstract

Background

Systemic lupus erythematosus (SLE) is a chronic autoimmune disease of unknown origin. The objective of this research was to develop phenotype algorithms for SLE suitable for use in epidemiological studies using empirical evidence from observational databases.

Methods

We used a process for empirically determining and evaluating phenotype algorithms for health conditions to be analyzed in observational research. The process started with a literature search to discover prior algorithms used for SLE. We then used a set of Observational Health Data Sciences and Informatics (OHDSI) open-source tools to refine and validate the algorithms. These included tools to discover codes for SLE that may have been missed in prior studies and to determine possible low specificity and index date misclassification in algorithms for correction.

Results

We developed four algorithms using our process: two algorithms for prevalent SLE and two for incident SLE. The algorithms for both incident and prevalent cases comprise a more specific version and a more sensitive version. Each of the algorithms corrects for possible index date misclassification. After validation, we found the highest positive predictive value estimate for the prevalent, specific algorithm (89%). The highest sensitivity estimate was found for the sensitive, prevalent algorithm (77%).

Conclusion

We developed phenotype algorithms for SLE using a data-driven approach. The four final algorithms may be used directly in observational studies. The validation of these algorithms provides researchers with an added measure of confidence that the algorithms are selecting subjects correctly and allows for the application of quantitative bias analysis.

Introduction

Systemic lupus erythematosus (SLE) is a chronic autoimmune disease of unknown origin. Clinical manifestations include fatigue, arthropathy, and involvement of nearly all organ systems, particularly cardiac and renal [1–4]. A review by Stojan and Petri of research on multi-country incidence rate estimates found the incidence rate of SLE to be between 1–9 cases per 100,000 person-years (PY) [5].

The use of real-world evidence (RWE) from observational data, including administrative claims and electronic health record (EHR) datasets, is critical for studying the epidemiology and clinical manifestations of SLE. A phenotype algorithm is the translation of the case definition of a health condition, or phenotype, into an executable algorithm based on clinical data elements in a database [6]. Developing and applying accurate phenotype algorithms is central to using observational data for analyses of health conditions. We performed a literature search and found 58 journal articles that included phenotype algorithms for SLE. The algorithms used included those requiring one or more codes for SLE in a patient's record. Many included requirements for laboratory results from anti-nuclear antibody (ANA) tests. Others included prescriptions for anti-malarial drugs or oral corticosteroids.

We found five studies published after 2010 that validated SLE algorithms using clinical adjudication; each was performed on a single dataset. The research by Barnado et al provided the most recent example of the validation of algorithms for SLE. They evaluated a wide variety of algorithms using variations of the number of SLE codes in a patient's record (1–4 codes), the presence of drugs for SLE, including anti-malarials, corticosteroids, and disease-modifying antirheumatic drugs (DMARDs), and results from ANA tests. They performed their analyses in a single database, the Vanderbilt University Medical Center's Electronic Health Record system. The algorithms were evaluated in a group of 200 randomly selected patients whose medical charts were reviewed and adjudicated by clinicians to confirm or refute the presence of SLE. The best-performing algorithm required ≥ 3 counts of the SLE International Classification of Diseases, Ninth Revision (ICD-9) code 710.0, a positive ANA laboratory result, ever use of DMARDs, and ever use of corticosteroids; it achieved a positive predictive value (PPV) of 91%. The PPV rose to 95% when subjects with dermatomyositis (ICD-9 code 710.3) or systemic sclerosis (ICD-9 code 710.1) were excluded. Another algorithm, requiring ≥ 3 SLE ICD-9 code counts and ever antimalarial use, had a PPV of 88% without excluding subjects with dermatomyositis and systemic sclerosis and a PPV of 91% with those exclusions. Limitations of this study included the use of data from a single health center and the calculation of sensitivity based only on subjects with a mention of SLE in their record; specificity was not measured.
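The best-performing Barnado et al rule is a simple conjunction of claims-derived features. A minimal sketch of that rule follows; the function and parameter names are illustrative, not taken from the original study's code:

```python
# Illustrative sketch of the best-performing Barnado et al rule as
# summarized above. Inputs are hypothetical per-patient features that
# would be derived from the source data.
def barnado_rule(sle_code_count: int, ana_positive: bool,
                 ever_dmard: bool, ever_corticosteroid: bool) -> bool:
    """True when a patient has >= 3 ICD-9 710.0 code counts, a positive
    ANA laboratory result, ever use of DMARDs, and ever use of
    corticosteroids."""
    return (sle_code_count >= 3 and ana_positive
            and ever_dmard and ever_corticosteroid)
```

Under this rule, a patient with only two SLE code counts is excluded regardless of the other criteria, which is what drives the specificity of code-count thresholds.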

The objective of this research was to develop and evaluate phenotype algorithms for SLE suitable for use in epidemiological studies using empirical evidence from data in observational databases. The development process was performed on multiple databases from several countries to increase generalizability. The evaluation process included all performance characteristics, including sensitivity and specificity, which were assessed independently of the inclusion of SLE codes in each subject's health record and without utilizing medical chart review.

Materials and methods

We applied a rigorous process to develop the algorithms in this study. We used an extensive literature review (details and results are in S1 File), along with several tools within the Observational Health Data Sciences and Informatics (OHDSI) tool stack, for empirical analysis. The goal of instituting this process was to enhance the science of phenotype algorithm development; the full details of the process are described in S2 File. We developed and evaluated four phenotype algorithms for SLE to be analyzed in observational research (Table 1): two incident algorithms requiring at least 365 days of prior look-back and either at least one SLE code ("Incident, 1X") or at least two codes, with a code for SLE 31–365 days after the first code ("Incident, 2X"); and two prevalent algorithms requiring either at least one SLE code ("Prevalent, 1X") or at least two codes, with a code for SLE 31–365 days after the first code ("Prevalent, 2X"). Requiring a 365-day look-back for the incident algorithms will reduce their sensitivity, especially in databases where subjects are observed for shorter periods of time.

Each of the algorithms was corrected for possible cohort entry date (index date) misclassification. This was achieved by allowing signs and symptoms of SLE to mark the start of the condition if they were followed by a code for SLE within 90 days (Fig 1). We found a 90-day window to be optimal for achieving good coverage of prior signs, symptoms, and drugs: in the period 365 to 91 days before the first diagnosis code for SLE, the rates of these signs and symptoms were significantly lower than in the 90 days before the index date. Signs and symptoms included codes for malaise, joint pain, and low back pain, consistent with the signs and symptoms of SLE [2,7–9]. There were also prescriptions for drugs often used in the treatment of SLE and its symptoms, such as prednisone and methylprednisolone. The presence of these diagnosis and drug codes in the 90 days prior to the index date may indicate that the condition began before the appearance of the first SLE diagnosis code and the clinical diagnosis. From these results we inferred that there was likely index date misclassification, and corrections accounting for the prior signs, symptoms, and drugs were applied to improve index date accuracy. The complete set of ICD, Read, and SNOMED codes used in the algorithms is in S3 File.
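The index date correction can be sketched as follows. This is a simplified illustration of the 90-day rule described above, not the authors' actual OHDSI cohort-definition logic; the function name and inputs are hypothetical:

```python
from datetime import date, timedelta

# Simplified sketch of the index date correction: if a qualifying sign,
# symptom, or SLE-related drug code occurs in the 90 days before the
# first SLE diagnosis code, the earliest such event becomes the cohort
# entry (index) date.
WINDOW = timedelta(days=90)

def corrected_index_date(first_sle_code: date,
                         prior_events: list[date]) -> date:
    """Return the corrected index date for one subject.

    prior_events: dates of SLE signs/symptoms or related drug exposures.
    """
    candidates = [d for d in prior_events
                  if timedelta(0) <= first_sle_code - d <= WINDOW]
    # Earliest qualifying event wins; otherwise keep the diagnosis date.
    return min(candidates) if candidates else first_sle_code
```

For example, a malaise code 47 days before the first SLE diagnosis code would move the index date back to the malaise code, while an event more than 90 days prior would not.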

Table 1. Names, characteristics, and external links of phenotypes for Systemic lupus erythematosus (SLE).

| Cohort | Attributes | Github Link | Cohort Diagnostics Shiny ID |
| --- | --- | --- | --- |
| Systemic lupus erythematosus incident and correction for index date | incident population, sensitive performance characteristics | Cohort 1 | C2–2409 |
| Systemic lupus erythematosus prevalent and correction for index date | prevalent population, sensitive performance characteristics | Cohort 2 | C5–3627 |
| Systemic lupus erythematosus incident with 2nd diagnosis code and correction for index date | incident population, specific performance characteristics | Cohort 3 | C3–2410 |
| Systemic lupus erythematosus prevalent with 2nd diagnosis code and correction for index date | prevalent population, specific performance characteristics | Cohort 4 | C6–3628 |

Fig 1. Diagram of the timeline for the phenotype algorithms for Systemic lupus erythematosus.


We evaluated the algorithms against a network of nine observational databases. All databases were standardized to the OMOP Common Data Model, which provides the capability for the phenotype algorithms to be developed and consistently applied across the data sources. The nine databases were a mix of administrative insurance claims, electronic health record, and general practitioner databases from Germany, France, Australia, Japan, and the US. The description and details for each of the databases are in Table 2. For each of the cohorts in each of the databases, we extracted patient-level data on patient demographics, diagnostic conditions, laboratory measurements, diagnostic and medical procedures, and drug exposures. The Optum and IBM MarketScan databases used in this study were reviewed by the New England Institutional Review Board (IRB) and were determined to be exempt from broad IRB approval, as this research project did not involve human subject research. Based on the Ethical Guidelines for Epidemiological Research issued by the Japanese Ministry of Health, Labor and Welfare, ethics approval and informed consent for the JMDC database were not applicable for this study. The data from IQVIA were deemed commercial assets; there was no IRB applicable to the usage and dissemination of these result sets, and registration of the protocol with additional ethics oversight was not required.

Table 2. Description of databases used in the study.

| Name (Abbreviation) | Years | Country | Data Type | Clinical Visits Included | Number of Persons (millions) | Average Age at First Observation | Percent Female | Median Length of Follow-up (years) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IQVIA Australian Longitudinal Patient Data v1945 (Australia) | 1996–2020 | Australia | General practitioner data | Outpatient | 5 | 37 | 22* | 0.5 |
| IBM MarketScan Commercial Claims and Encounters v2136 (CCAE) | 2000–2021 | US | Insurance claims | Inpatient/outpatient | 157 | 31 | 51 | 1.56 |
| IQVIA Disease Analyzer–France v1943 (France) | 2016–2021 | France | General practitioner data | Outpatient | 4 | 37 | 52 | 0.9 |
| IQVIA Disease Analyzer–Germany v1944 (Germany) | 2011–2021 | Germany | General practitioner data with supplemental data from participating specialists | Outpatient | 31 | 43 | 56 | 0.5 |
| Japan Medical Data Center v2129 (JMDC) | 2000–2021 | Japan | Insurance claims | Inpatient/outpatient | 12 | 31 | 49 | 3.29 |
| IBM MarketScan Multi-State Medicaid v2128 (MDCD) | 2006–2020 | US | Insurance claims | Inpatient/outpatient | 31 | 23 | 56 | 1.52 |
| IBM MarketScan Medicare Supplemental v2135 (MDCR) | 2000–2021 | US | Insurance claims | Inpatient/outpatient | 10 | 71 | 55 | 2.46 |
| Optum Clinformatics Extended Data Mart—Date of Death v2050 (Optum) | 2007–2021 | US | Insurance claims | Inpatient/outpatient | 71 | 37 | 51 | 1.48 |
| Optum Pan-Therapeutic Electronic Health Records v2137 (Optum EHR) | 2007–2021 | US | Electronic health records | Inpatient/outpatient | 99 | 37 | 53 | 2.63 |

* 59% of subjects do not have a designated sex.

We applied the OHDSI CohortDiagnostics tool (https://github.com/OHDSI/CohortDiagnostics) to evaluate and compare algorithms at a population-level, characterizing overall count, incidence over time, index date breakdown, cohort overlap, and temporal characterization.

We applied the PheValuator [10] method to evaluate the performance characteristics of the algorithms against the databases. This method provides the complete set of performance characteristics, i.e., sensitivity, specificity, and positive and negative predictive value. Using this method, we also evaluated algorithms from the Barnado et al study for comparison [11]. The two algorithms from Barnado et al were "Systemic Lupus 3X plus ever anti-malarial drugs" and "Systemic Lupus 3X plus ever anti-malarial drugs excluding dermatomyositis and systemic sclerosis". Using the semi-automated phenotype algorithm evaluation method PheValuator, we eliminated the need for obtaining and reviewing subjects' records. While algorithm validation results from chart review are considered the "gold standard", we have compared the results from PheValuator with prior studies using chart review and found excellent agreement between the two methods [12].
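The complete set of performance characteristics follows from the standard confusion-matrix definitions. As a generic sketch (not PheValuator's implementation, which estimates the matrix from a probabilistic reference standard rather than observed counts):

```python
# Generic confusion-matrix performance characteristics: sensitivity,
# specificity, positive predictive value (PPV), and negative predictive
# value (NPV) from true/false positive/negative counts.
def performance(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Return the four standard performance characteristics."""
    return {
        "sensitivity": tp / (tp + fn),   # of true cases, fraction flagged
        "specificity": tn / (tn + fp),   # of non-cases, fraction not flagged
        "ppv": tp / (tp + fp),           # of flagged, fraction truly cases
        "npv": tn / (tn + fn),           # of not flagged, fraction truly non-cases
    }
```

In rare diseases such as SLE, specificity and NPV are dominated by the large true-negative count, which is why the tables below report values near 1.000 for those metrics even when PPV varies widely.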

The demographic and clinical characteristics of those identified by the Incident, 2X algorithm were compared to those of a randomly selected non-SLE cohort. Non-SLE subjects were matched to the SLE cohort by age, sex, and year and month of SLE diagnosis in a ratio of 1 SLE:10 non-SLE subjects; the first clinical visit of each randomly selected non-SLE subject was matched to the same year and month as the SLE diagnosis. We also required the same minimum look-back period, 365 days, in the matched cohort as was specified in the SLE cohort. Comparisons were made for characteristics observed in the year prior to SLE diagnosis (SLE cohort) or the matching visit date (non-SLE cohort). Standardized differences of the mean were calculated to assess differences between the cohorts [13].
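For a binary characteristic (e.g., the proportion of subjects with a given diagnosis code), the standardized difference of the mean has a standard closed form. A small sketch follows; this is the generic formula, not the authors' code:

```python
from math import sqrt

# Standardized difference of the mean for a binary characteristic with
# prevalence p1 in one cohort and p2 in the other, using the pooled
# standard deviation of the two Bernoulli distributions.
def smd_binary(p1: float, p2: float) -> float:
    """Return the standardized mean difference for two proportions."""
    pooled_var = (p1 * (1 - p1) + p2 * (1 - p2)) / 2
    return (p1 - p2) / sqrt(pooled_var)
```

Because the difference is scaled by the pooled standard deviation rather than the sample size, the conventional 0.1 imbalance threshold can be applied uniformly across databases of very different sizes.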

The source code to implement the process used to develop and evaluate the algorithms is publicly available at https://github.com/OHDSI/PhenotypeEvaluations/tree/main/SLE. This repository contains the literature search algorithm, the cohort definitions in JSON format, and the code to run CohortDiagnostics and PheValuator.

Results

We examined the characteristics of the subjects included by the algorithms. These characteristics may be viewed interactively at https://data.ohdsi.org/SLECohortDiagnostics. Subject counts in each of the databases ranged from 117 subjects in France to over 183,000 in CCAE for the Incident, 1X algorithm (Table 3). The counts were as expected based on the relative sizes of the databases, indicating that all codes used were appropriate for each database. The counts were much higher in the US databases than in the databases from outside the US. The reduction in the number of subjects in the Incident, 2X algorithm compared to the Incident, 1X algorithm was significant, ranging from about an 80% reduction in Germany and Australia to about a 50% reduction in Japan. This is graphically depicted in Fig 2, which shows the proportion of overlap between the algorithms in each database. Fig 2 also shows that the Incident, 2X algorithm is a proper subset of the Incident, 1X algorithm, i.e., all subjects in the 2X cohort were also in the 1X cohort.

Table 3. Comparison of subject counts across databases for the selected algorithms for SLE.

| Database | Incident and correction for index date | Incident with 2nd dx and correction for index date | Prevalent and correction for index date | Prevalent with 2nd dx and correction for index date |
| --- | --- | --- | --- | --- |
| Australia | 319 | 67 | 951 | 252 |
| CCAE | 183086 | 60225 | 419196 | 196490 |
| France | 117 | 26 | 260 | 75 |
| Germany | 3687 | 710 | 10137 | 1887 |
| JMDC | 10404 | 5528 | 23921 | 17063 |
| MDCD | 35547 | 15152 | 88097 | 47474 |
| MDCR | 24680 | 8039 | 50183 | 22496 |
| Optum | 101025 | 32891 | 234861 | 108853 |
| Optum EHR | 142159 | 59031 | 217048 | 88152 |

Australia—IQVIA Australian Longitudinal Patient Data; CCAE—IBM® MarketScan® Commercial Database; Optum—Optum® Clinformatics® Data Mart; France—IQVIA Disease Analyzer–France; Germany—IQVIA Disease Analyzer–Germany; JMDC—Japan Medical Data Center; MDCD—IBM® MarketScan® Multi-State Medicaid Database; MDCR—IBM® MarketScan® Medicare Supplemental Database; Optum EHR—Optum® longitudinal EHR repository.

Fig 2. Graphical depiction of the overlap in subjects between the two incidence cohorts and the two prevalence cohorts.


We compared the diagnosed conditions, prescribed drugs, laboratory measurements, and clinical procedures of the subjects included in cohorts defined by the algorithms to see how the populations differed. Fig 3 shows a comparison between subjects in the Incident, 1X algorithm and the Incident, 2X algorithm for three selected datasets with different demographic characteristics. The CCAE database, an insurance claims database of employed individuals and their families generally under 65 years old, showed the largest disparity between the two algorithms in the period 31–365 days after the index date. Some of the differences were from higher proportions of diagnosis codes for SLE (87% Incident, 2X v. 48% Incident, 1X; standardized mean difference (SMD) 0.64) and prescriptions for hydroxychloroquine (38% Incident, 2X v. 19% Incident, 1X; SMD 0.31). There were also differences, albeit fewer, in the Medicaid (MDCD) population, generally of lower socioeconomic status. This dataset also showed differences in diagnosis codes for SLE (92% Incident, 2X v. 57% Incident, 1X; SMD 0.61) and prescriptions for hydroxychloroquine (27% Incident, 2X v. 15% Incident, 1X; SMD 0.21). The Medicare (MDCR) dataset, generally individuals 65 years and older, showed fewer significant differences in proportions between the two cohorts, although diagnosis codes for SLE (83% Incident, 2X v. 49% Incident, 1X; SMD 0.54) and prescriptions for hydroxychloroquine (29% Incident, 2X v. 15% Incident, 1X; SMD 0.24) still differed. Overall, the relative proportions for the majority of the characteristics in MDCR were closer to the 45° line, indicating more similar proportions between the cohorts than in Commercial Claims and Encounters (CCAE) and MDCD.

Fig 3. Comparison between proportion of subjects in the “Incident, 1X algorithm” compared to the “Incident, 2X algorithm” for three selected datasets with different demographic characteristics.


Points closest to the diagonal indicate similar proportions between the comparators; points farther from the diagonal indicate more disparate proportions.

We examined subject characteristics for the Incident, 2X algorithm across the databases. A higher proportion of females than males with SLE was identified. The largest disproportionality was in MDCD, where 91% of the subjects were female; Japan had the lowest disproportionality by sex, with 68% of the subjects female. The type of clinical visit at first diagnosis of SLE was most commonly an outpatient or office visit. Less than 5% of first diagnoses were made in an inpatient or emergency room visit, with the exception of MDCD, where about 25% of first diagnoses were made in an emergency room visit. This is consistent with other published studies examining the higher use of the emergency room by MDCD recipients [14]. Several differences were found in the demographic and clinical characteristics of the Incident, 2X cohort compared to the matched non-SLE cohort. Standardized differences of the mean greater than 0.1 are considered to indicate imbalance [15]. Imbalanced characteristics between the two cohorts included Black and White race (in MDCD and Optum EHR) and diagnoses of rheumatoid arthritis, heart disease, and renal impairment. There were also differences in prescriptions for immunosuppressants and anti-thrombotic agents.

We also examined the index event, i.e., the diagnosis code that initiated the subject into the cohort. In the Incident, 2X algorithm, the most prevalent index event was a diagnosis code of "Systemic lupus erythematosus" (SNOMED code 257628; ICD-10CM and ICD-10GM M32.9, "Systemic lupus erythematosus, unspecified"; ICD-9CM 710.0, "Systemic lupus erythematosus"). A significant proportion of subjects had an index event for a sign or symptom of SLE, such as malaise, fatigue, anemia, or low back pain. These accounted for about 40–50% of index events, indicating that there was likely a large amount of index date misclassification in subjects with SLE.

Incidence rates for SLE were similar from 2015 to 2020 across many of the databases. This may be partially attributed to the standardized use of ICD-10 coding in the US starting in 2015. In CCAE, MDCD, MDCR, Optum EHR, and Japan, the rates were about 16 per 100,000 person-years (PY); in Germany, the rate was 1 per 100,000 PY. Rates in Australia and France varied considerably, likely due to the small sample sizes. The small number of SLE subjects in the Australian and French databases is likely due to these databases being limited to general practitioners; in databases that include specialists, such as rheumatologists, the sample sizes and incidence rates are higher and more stable. Incidence rates peaked in the 50–59-year-old age group, with the exception of Japan, where incidence rates continued to increase with age through ages 70–79. Incidence rates in females were about 23–30 per 100,000 PY, with the exception of Japan, where the rate in females was about 18 per 100,000 PY. Incidence rates in males ranged from about 3–8 per 100,000 PY.

The performance characteristics of the four final algorithms were assessed using the PheValuator method (Table 4). Due to low subject counts, we were unable to calculate the performance characteristics for Australia and France: PheValuator requires a minimum of 200 subjects with a high likelihood of having SLE to produce an accurate model, and this number was not reached in the Australian or French databases. As noted earlier, this may be due to the limitation of these databases to general practitioners. In general, the highest PPV estimates were for the two algorithms where a second diagnosis code for SLE was required 31–365 days after index, and the highest sensitivity estimates were found in the two prevalent cohorts. The mean PPVs for the two algorithms where a second code was required were 87% (incident) and 88% (prevalent); these dropped to 57% (incident) and 58% (prevalent) where only a single diagnosis code for SLE was required. The sensitivities for the two prevalent algorithms were 82% (single code required) and 55% (two codes required); these decreased to 39% (single code required) and 24% (two codes required) for the two incident cohorts. The highest mean F1 score was found for the prevalent, single-code algorithm (65%) and the lowest for the incident algorithm requiring two SLE diagnosis codes (37%). PPV estimates were higher in the US databases compared to Germany and Japan, while the sensitivity estimates were similar between the databases. The PPV estimates for the Barnado et al algorithms were 90% both for the algorithm using ≥ 3 SLE ICD-9 code counts and ever antimalarial use and for the same algorithm excluding subjects with dermatomyositis and systemic sclerosis. This is similar to the values found by Barnado et al (88% including subjects with dermatomyositis and systemic sclerosis; 91% excluding them).
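The F1 scores quoted above combine sensitivity and PPV as their harmonic mean, so an algorithm must do reasonably well on both to score well. For reference, the standard formula:

```python
# F1 score as the harmonic mean of sensitivity (recall) and PPV
# (precision): a single summary that penalizes algorithms that trade
# one metric heavily for the other.
def f1_score(sensitivity: float, ppv: float) -> float:
    """Return the F1 score for a given sensitivity and PPV."""
    return 2 * sensitivity * ppv / (sensitivity + ppv)
```

This explains why the highly specific two-code incident algorithms, despite PPVs near 90%, have the lowest F1 scores: their sensitivities are in the 20% range, and the harmonic mean is pulled toward the smaller value.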

Table 4. Performance characteristics of the algorithms for SLE.

| Phenotype Algorithm | Database | Sensitivity (95% CI) | PPV (95% CI) | Specificity (95% CI) | NPV (95% CI) |
| --- | --- | --- | --- | --- | --- |
| Systemic Lupus 3X plus ever anti-malarial drugs (per Barnado, 2017) [11] | CCAE | 0.437 (0.424–0.449) | 0.969 (0.962–0.975) | 1.000 (1.000–1.000) | 0.998 (0.998–0.998) |
| | Optum | 0.425 (0.415–0.435) | 0.939 (0.931–0.946) | 1.000 (1.000–1.000) | 0.997 (0.997–0.997) |
| | Germany | 0.388 (0.327–0.453) | 0.817 (0.735–0.883) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | JMDC | 0.069 (0.056–0.085) | 0.706 (0.615–0.786) | 1.000 (1.000–1.000) | 0.999 (0.999–0.999) |
| | MDCD | 0.279 (0.271–0.288) | 0.953 (0.945–0.960) | 1.000 (1.000–1.000) | 0.995 (0.995–0.995) |
| | MDCR | 0.292 (0.283–0.301) | 0.953 (0.945–0.960) | 1.000 (1.000–1.000) | 0.996 (0.996–0.996) |
| | Optum EHR | 0.317 (0.305–0.329) | 0.960 (0.950–0.969) | 1.000 (1.000–1.000) | 0.998 (0.998–0.998) |
| Systemic Lupus 3X plus ever anti-malarial drugs excluding DM and SSc (per Barnado, 2017) [11] | CCAE | 0.408 (0.395–0.420) | 0.969 (0.961–0.975) | 1.000 (1.000–1.000) | 0.998 (0.998–0.998) |
| | Optum | 0.401 (0.391–0.411) | 0.939 (0.931–0.946) | 1.000 (1.000–1.000) | 0.997 (0.997–0.997) |
| | Germany | 0.347 (0.287–0.411) | 0.808 (0.719–0.878) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | JMDC | 0.063 (0.050–0.079) | 0.706 (0.612–0.790) | 1.000 (1.000–1.000) | 0.999 (0.999–0.999) |
| | MDCD | 0.261 (0.253–0.269) | 0.953 (0.945–0.960) | 1.000 (1.000–1.000) | 0.995 (0.995–0.995) |
| | MDCR | 0.273 (0.264–0.282) | 0.953 (0.945–0.961) | 1.000 (1.000–1.000) | 0.996 (0.996–0.996) |
| | Optum EHR | 0.305 (0.293–0.317) | 0.960 (0.950–0.969) | 1.000 (1.000–1.000) | 0.998 (0.998–0.998) |
| Systemic lupus erythematosus incident and correction for index date | CCAE | 0.368 (0.356–0.380) | 0.598 (0.582–0.614) | 0.999 (0.999–0.999) | 0.998 (0.998–0.998) |
| | Optum | 0.323 (0.313–0.332) | 0.538 (0.525–0.551) | 0.999 (0.999–0.999) | 0.997 (0.997–0.997) |
| | Germany | 0.368 (0.307–0.432) | 0.335 (0.278–0.395) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | JMDC | 0.572 (0.544–0.600) | 0.484 (0.458–0.511) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | MDCD | 0.377 (0.369–0.386) | 0.694 (0.682–0.705) | 0.999 (0.999–0.999) | 0.996 (0.996–0.996) |
| | MDCR | 0.314 (0.304–0.323) | 0.545 (0.532–0.558) | 0.999 (0.998–0.999) | 0.996 (0.996–0.996) |
| | Optum EHR | 0.434 (0.421–0.447) | 0.808 (0.793–0.822) | 1.000 (1.000–1.000) | 0.998 (0.998–0.998) |
| Systemic lupus erythematosus prevalent and correction for index date | CCAE | 0.855 (0.846–0.864) | 0.633 (0.622–0.643) | 0.998 (0.998–0.998) | 1.000 (1.000–1.000) |
| | Optum | 0.862 (0.855–0.869) | 0.625 (0.617–0.634) | 0.997 (0.997–0.998) | 0.999 (0.999–0.999) |
| | Germany | 0.983 (0.958–0.995) | 0.244 (0.217–0.272) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | JMDC | 0.859 (0.838–0.878) | 0.495 (0.474–0.517) | 0.999 (0.999–0.999) | 1.000 (1.000–1.000) |
| | MDCD | 0.816 (0.809–0.823) | 0.712 (0.705–0.720) | 0.998 (0.998–0.998) | 0.999 (0.999–0.999) |
| | MDCR | 0.716 (0.707–0.725) | 0.597 (0.588–0.606) | 0.997 (0.997–0.997) | 0.998 (0.998–0.998) |
| | Optum EHR | 0.641 (0.628–0.653) | 0.742 (0.730–0.755) | 0.999 (0.999–0.999) | 0.999 (0.999–0.999) |
| Systemic lupus erythematosus incident with 2nd diagnosis code and correction for index date | CCAE | 0.209 (0.198–0.219) | 0.950 (0.937–0.961) | 1.000 (1.000–1.000) | 0.998 (0.997–0.998) |
| | Optum | 0.205 (0.197–0.214) | 0.922 (0.909–0.933) | 1.000 (1.000–1.000) | 0.996 (0.996–0.996) |
| | Germany | 0.264 (0.210–0.325) | 0.771 (0.666–0.856) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | JMDC | 0.355 (0.328–0.383) | 0.563 (0.527–0.598) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | MDCD | 0.234 (0.227–0.242) | 0.942 (0.933–0.950) | 1.000 (1.000–1.000) | 0.995 (0.995–0.995) |
| | MDCR | 0.175 (0.168–0.183) | 0.924 (0.911–0.935) | 1.000 (1.000–1.000) | 0.995 (0.995–0.995) |
| | Optum EHR | 0.251 (0.240–0.263) | 0.994 (0.988–0.997) | 1.000 (1.000–1.000) | 0.998 (0.998–0.998) |
| Systemic lupus erythematosus prevalent with 2nd diagnosis code and correction for index date | CCAE | 0.582 (0.570–0.594) | 0.961 (0.954–0.967) | 1.000 (1.000–1.000) | 0.999 (0.999–0.999) |
| | Optum | 0.644 (0.634–0.654) | 0.943 (0.937–0.948) | 1.000 (1.000–1.000) | 0.998 (0.998–0.998) |
| | Germany | 0.636 (0.572–0.697) | 0.782 (0.717–0.837) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | JMDC | 0.559 (0.530–0.587) | 0.576 (0.547–0.604) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
| | MDCD | 0.582 (0.573–0.590) | 0.935 (0.929–0.941) | 1.000 (1.000–1.000) | 0.997 (0.997–0.997) |
| | MDCR | 0.468 (0.458–0.478) | 0.946 (0.939–0.952) | 1.000 (1.000–1.000) | 0.997 (0.997–0.997) |
| | Optum EHR | 0.347 (0.335–0.360) | 0.994 (0.989–0.997) | 1.000 (1.000–1.000) | 0.998 (0.998–0.998) |

CI–Confidence interval; PPV–Positive predictive value; NPV–Negative predictive value; CCAE—IBM® MarketScan® Commercial Database; Optum—Optum's Clinformatics® Data Mart; Germany—IQVIA Disease Analyzer–Germany; JMDC—Japan Medical Data Center; MDCD—IBM® MarketScan® Multi-State Medicaid Database; MDCR—IBM® MarketScan® Medicare Supplemental Database; Optum EHR—Optum's longitudinal EHR repository; DM–Dermatomyositis; SSc–Systemic sclerosis.

Discussion

In this study we used a data-driven approach to develop phenotype algorithms for SLE. The final four selected algorithms were developed to discriminate between incident and prevalent SLE populations, each with either higher sensitivity or higher specificity. In these algorithms, corrections were applied to address possible index date misclassification that might occur if the algorithms were limited to diagnosis codes for SLE alone, by adjusting the index date to account for prior drugs, signs, and symptoms. We validated each algorithm using the PheValuator method to estimate sensitivity, specificity, and positive and negative predictive values. To our knowledge, these are the first empirically derived phenotype algorithm definitions for SLE with the full set of validation performance characteristics. These performance characteristics may be used in studies requiring quantitative bias analysis (QBA). QBA provides quantitative estimates of the direction, magnitude, and uncertainty arising from systematic errors [16]; for example, it may be used to estimate and correct systematic errors in incidence rates, providing more robust estimates. In QBA, the sensitivity and specificity of the phenotype algorithm used within the study are included as parameters in the corrective equations, and the results from the current study may be used for these corrections. The use of these algorithms with QBA in SLE observational research may improve confidence in study results.
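As a concrete illustration of how sensitivity and specificity enter QBA corrective equations, one common correction for outcome misclassification (a Rogan-Gladen-style estimator, shown here as a generic sketch rather than the exact equations any particular study would apply) recovers a true proportion from an observed one:

```python
# Rogan-Gladen-style misclassification correction: given a proportion
# observed through an imperfect phenotype algorithm, recover an estimate
# of the true proportion using the algorithm's sensitivity and
# specificity. Valid when sensitivity + specificity > 1.
def corrected_proportion(observed: float, sensitivity: float,
                         specificity: float) -> float:
    """Return the misclassification-corrected proportion."""
    return (observed + specificity - 1) / (sensitivity + specificity - 1)
```

With a perfect algorithm (sensitivity = specificity = 1) the observed proportion is returned unchanged; with sensitivity well below 1, as for the incident algorithms here, the corrected proportion is larger than the observed one, reflecting missed cases.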

Researchers may choose an algorithm for use in their observational studies based on fit for function according to the estimated performance characteristics. We found a large increase in PPV when a second code for SLE was added to the algorithm with a concomitant large decrease in sensitivity compared to the single code algorithm. These differences indicate that both the single code and the two code algorithms should be considered for use in studies depending on the study requirements. For studies requiring a more sensitive algorithm, the Prevalent, 1X algorithm should be used; for studies requiring a more specific algorithm, the Incident or Prevalent, 2X algorithm should be used.

There are several strengths to the present study. First, this study developed phenotypes using data from nine large datasets covering five countries and reflect subjects of a wide range of ages and from various socioeconomic backgrounds. Many previous studies examining the performance characteristics of algorithms for SLE used smaller datasets [1720]. The data used for the development of the phenotypes was analyzed using multiple approaches, providing ancillary verification for each of our decisions in determining algorithm logic. The approach we used in this study uses publicly available, open-source software providing the capability for full result replication. Included in the supplemental information are the JSON files which provide fully reproducible phenotype algorithms. There were also several limitations to our study, which included the use of administrative datasets primarily maintained for insurance billing that is well-known to have significant deficits including coding inaccuracies [21]. In addition, the estimation of performance characteristics using the PheValuator methodology is dependent on the quality of the data in the dataset, which can vary substantially [22]. Incomplete signs and symptoms documentation in data could affect the accuracy of the index date. This may reflect differences between insurance claims databases, e.g., CCAE and EHR databases, e.g, Optum EHR. The generalizability of findings to uninsured populations may differ from the insured population observed in this study. Prescription drug treatments in claims and EHR data are not specifically associated with an indication which could affect the signs and symptoms and correction for index date misclassification and associated metrics. The algorithm validation was performed using a method involving predictive modeling of SLE rather than case review. This method does have the advantage of providing performance characteristics for multiple databases. 
It also provides the full set of performance metrics, including sensitivity and specificity, which are rarely provided in validation studies using case review [10]. Using this method, we found results similar to a prior validation study of SLE. The results from PheValuator have also been compared to the results from previously published validation studies and have demonstrated excellent agreement [12]. In the algorithm where the incident cohort was defined with only one diagnostic code, it was not possible to determine whether any of these were rule-out diagnoses. Lastly, our study observed data only from subjects who presented for medical attention; those who had the disease but did not seek medical attention were not included, which may affect the metrics in this study. Those with less severe disease, in particular, may not have sought medical attention.
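For readers unfamiliar with probabilistic validation, the core idea behind a PheValuator-style evaluation can be sketched in a few lines: instead of binary chart-review labels, each subject carries a model-predicted probability of the condition, and expected confusion-matrix counts are accumulated from those probabilities. This is an illustrative simplification, not the actual PheValuator implementation; the function name and data layout below are our own.

```python
# Illustrative sketch only -- not the actual PheValuator implementation.
# Each subject carries a model-predicted probability of SLE (p) and a flag
# indicating whether the phenotype algorithm selected them (in_cohort).
# Expected confusion-matrix cells are accumulated from the probabilities
# instead of from binary chart-review labels.

def estimate_performance(subjects):
    """subjects: list of (p, in_cohort) pairs; returns (sensitivity, specificity, PPV)."""
    tp = sum(p for p, in_cohort in subjects if in_cohort)          # expected true positives
    fp = sum(1 - p for p, in_cohort in subjects if in_cohort)      # expected false positives
    fn = sum(p for p, in_cohort in subjects if not in_cohort)      # expected false negatives
    tn = sum(1 - p for p, in_cohort in subjects if not in_cohort)  # expected true negatives
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)
```

Because the reference standard is probabilistic, all four confusion-matrix cells can be estimated, which is why this approach yields sensitivity and specificity in addition to the PPV usually reported by chart-review studies.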

Conclusions

This study developed and thoroughly evaluated phenotype algorithms for SLE using a data-driven approach. The effort yielded four final algorithms, which may be applied by other researchers to observational studies of SLE. The four algorithms include options for prevalent vs. incident cohorts as well as sensitive vs. specific definitions. The validation metrics provided with these algorithms increase confidence that the algorithms correctly identify SLE subjects and also enable the application of quantitative bias analysis.
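As an illustration of the quantitative bias analysis that validated metrics enable, a minimal form is the standard back-correction of an observed case count for nondifferential outcome misclassification (see Lash et al. [16] for full methods). The function name and the sensitivity/specificity values in the example are hypothetical, not estimates from this study.

```python
# Hypothetical helper for simple quantitative bias analysis (QBA): back-correct
# an observed case proportion for nondifferential outcome misclassification.
# The function name and all numbers used with it are illustrative, not study estimates.

def corrected_risk(observed_cases, n, sensitivity, specificity):
    """Return the misclassification-corrected case proportion among n subjects."""
    if sensitivity + specificity <= 1.0:
        raise ValueError("correction requires Se + Sp > 1")
    # Subtract the expected false positives, then scale up for cases the
    # algorithm missed (standard matrix-correction formula).
    true_cases = (observed_cases - n * (1.0 - specificity)) / (sensitivity + specificity - 1.0)
    return true_cases / n
```

For example, with a hypothetical sensitivity of 0.77 and specificity of 0.999, 500 algorithm-positive subjects out of 100,000 back-correct to roughly 520 true cases: the formula first removes the expected false positives and then inflates the remainder to account for missed cases.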

Supporting information

S1 File. Description of and results from the literature search.

(DOCX)

S2 File. Description of the process for phenotype algorithm development.

(DOCX)

S3 File. The complete set of ICD, Read, and SNOMED codes used in the algorithms.

(XLSX)

Acknowledgments

The authors would like to thank Gayle Murray for her expert work on the literature search.

Abbreviations

ANA

Anti-nuclear antibody

CCAE

Commercial Claims and Encounters

DMARD

Disease-modifying antirheumatic drugs

EHR

Electronic health records

ICD-9

International Classification of Diseases, Ninth Revision

MDCD

Medicaid

MDCR

Medicare

OHDSI

Observational Health Data Sciences and Informatics

PPV

Positive predictive value

PY

Person years

QBA

Quantitative bias analysis

RWE

Real-world evidence

SLE

Systemic lupus erythematosus

SMD

Standardized mean difference

SNOMED

Systematized Nomenclature of Medicine

YO

Year old

Data Availability

The data that support the findings of this study are available from IBM, Optum, JMDC, and IQVIA, but restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available. Data are, however, available from the authors upon reasonable request and with permission of IBM, Optum, JMDC, and IQVIA. The authors of the present study had no special privileges in accessing these datasets that other interested researchers would not have. To request access to the datasets used in this study, researchers should use the information (database name and version number) supplied in Table 2.

Funding Statement

No sources of funding were used to conduct this study or prepare this manuscript. Johnson and Johnson will be the sponsor of Open Access, if applicable. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Jump RL, Robinson ME, Armstrong AE, Barnes EV, Kilbourn KM, Richards HB. Fatigue in systemic lupus erythematosus: contributions of disease activity, pain, depression, and perceived social support. J Rheumatol. 2005;32(9):1699–705.
  • 2. Greco CM, Rudy TE, Manzi S. Adaptation to chronic pain in systemic lupus erythematosus: applicability of the multidimensional pain inventory. Pain Med. 2003;4(1):39–50. doi: 10.1046/j.1526-4637.2003.03001.x
  • 3. Miner JJ, Kim AH. Cardiac manifestations of systemic lupus erythematosus. Rheum Dis Clin North Am. 2014;40(1):51–60.
  • 4. Danila MI, Pons-Estel GJ, Zhang J, Vilá LM, Reveille JD, Alarcón GS. Renal damage is the most important predictor of mortality within the damage index: data from LUMINA LXIV, a multiethnic US cohort. Rheumatology (Oxford). 2009;48(5):542–5. doi: 10.1093/rheumatology/kep012
  • 5. Stojan G, Petri M. Epidemiology of systemic lupus erythematosus: an update. Curr Opin Rheumatol. 2018;30(2):144–50. doi: 10.1097/BOR.0000000000000480
  • 6. Overby CL, Pathak J, Gottesman O, Haerian K, Perotte A, Murphy S, et al. A collaborative approach to developing an electronic health record phenotyping algorithm for drug-induced liver injury. J Am Med Inform Assoc. 2013;20(e2):e243–e52. doi: 10.1136/amiajnl-2013-001930
  • 7. Tench CM, McCurdie I, White PD, D’Cruz DP. The prevalence and associations of fatigue in systemic lupus erythematosus. Rheumatology (Oxford). 2000;39(11):1249–54. doi: 10.1093/rheumatology/39.11.1249
  • 8. Iaboni A, Ibanez D, Gladman DD, Urowitz MB, Moldofsky H. Fatigue in systemic lupus erythematosus: contributions of disordered sleep, sleepiness, and depression. J Rheumatol. 2006;33(12):2453–7.
  • 9. Cezarino RS, Cardoso JR, Rodrigues KN, Magalhães YS, Souza TY, Mota L, et al. Chronic low back pain in patients with systemic lupus erythematosus: prevalence and predictors of back muscle strength and its correlation with disability. Rev Bras Reumatol Engl Ed. 2017;57(5):438–44. doi: 10.1016/j.rbre.2017.03.003
  • 10. Swerdel JN, Hripcsak G, Ryan PB. PheValuator: Development and evaluation of a phenotype algorithm evaluator. J Biomed Inform. 2019;97:103258. doi: 10.1016/j.jbi.2019.103258
  • 11. Barnado A, Casey C, Carroll RJ, Wheless L, Denny JC, Crofford LJ. Developing electronic health record algorithms that accurately identify patients with systemic lupus erythematosus. Arthritis Care Res (Hoboken). 2017;69(5):687–93. doi: 10.1002/acr.22989
  • 12. Swerdel JN, Schuemie M, Murray G, Ryan PB. PheValuator 2.0: Methodological improvements for the PheValuator approach to semi-automated phenotype algorithm evaluation. J Biomed Inform. 2022:104177. doi: 10.1016/j.jbi.2022.104177
  • 13. Austin PC. Using the standardized difference to compare the prevalence of a binary variable between two groups in observational research. Commun Stat Simul Comput. 2009;38(6):1228–34.
  • 14. Mortensen K, Song PH. Minding the gap: a decomposition of emergency department use by Medicaid enrollees and the uninsured. Med Care. 2008;46(10):1099–107. doi: 10.1097/MLR.0b013e318185c92d
  • 15. Ryan PB, Schuemie MJ, Welebob E, Duke J, Valentine S, Hartzema AG. Defining a reference set to support methodological research in drug safety. Drug Saf. 2013;36 Suppl 1:S33–47. doi: 10.1007/s40264-013-0097-8
  • 16. Lash TL, Fox MP, MacLehose RF, Maldonado G, McCandless LC, Greenland S. Good practices for quantitative bias analysis. Int J Epidemiol. 2014;43(6):1969–85. doi: 10.1093/ije/dyu149
  • 17. Turner CA, Jacobs AD, Marques CK, Oates JC, Kamen DL, Anderson PE, et al. Word2Vec inversion and traditional text classifiers for phenotyping lupus. BMC Med Inform Decis Mak. 2017;17(1):126. doi: 10.1186/s12911-017-0518-1
  • 18. Hanly JG, Thompson K, Skedgel C. Identification of patients with systemic lupus erythematosus in administrative healthcare databases. Lupus. 2014;23(13):1377–82. doi: 10.1177/0961203314543917
  • 19. Arkema EV, Jönsen A, Rönnblom L, Svenungsson E, Sjöwall C, Simard JF. Case definitions in Swedish register data to identify systemic lupus erythematosus. BMJ Open. 2016;6(1):e007769. doi: 10.1136/bmjopen-2015-007769
  • 20. Klein NP, Ray P, Carpenter D, Hansen J, Lewis E, Fireman B, et al. Rates of autoimmune diseases in Kaiser Permanente for use in vaccine adverse event safety studies. Vaccine. 2010;28(4):1062–8. doi: 10.1016/j.vaccine.2009.10.115
  • 21. Tyree PT, Lind BK, Lafferty WE. Challenges of using medical insurance claims data for utilization analysis. Am J Med Qual. 2006;21(4):269–75. doi: 10.1177/1062860606288774
  • 22. Peabody JW, Luck J, Jain S, Bertenthal D, Glassman P. Assessing the accuracy of administrative data in health information systems. Med Care. 2004;42(11):1066–72. doi: 10.1097/00005650-200411000-00005

Decision Letter 0

Clare Mc Fadden

14 Aug 2022

PONE-D-22-09946: Using a Data-driven Approach for the Development and Evaluation of Phenotype Algorithms for Systemic Lupus Erythematosus. PLOS ONE

Dear Dr. Swerdel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The manuscript has been assessed by two reviewers, and their comments are appended below. The reviewers have raised concerns about the level of detail provided in the methods section, and the clarity of some aspects of the developmental process and cohort selection. Additionally, it was noted that limitations are not adequately discussed. 

Could you please revise the manuscript to carefully address the concerns raised?

Please submit your revised manuscript by Sep 28 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Clare Mc Fadden

Editorial Office

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section: 

Authors JS, DR, and JH are employees of Janssen Research and Development and shareholders of Johnson & Johnson.

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. 

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

"Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. 

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this study, the authors use 9 different administrative data sets from around the world to develop phenotype algorithms for systemic lupus erythematosus. The study has several strengths. First, it uses multiple data sets and larger numbers of patients than prior studies. Second, there is a correction for misclassification of the index date by allowing for signs and symptoms of lupus to mark the index date if within 90 days of the first lupus diagnosis code. The source code is made available to increase reproducibility. Finally, they use a predictive algorithm previously published (PheValuator) as the gold standard for likelihood of the presence of lupus.

However, there are several weaknesses that must be addressed. There is no nested subset in which a clinical diagnosis of SLE is compared against the PheValuator likelihoods. Thus, there is no true sensitivity, specificity, or positive predictive value without the clinical gold standard. This should be discussed further in the methods and noted as a limitation in the discussion.

The paper is very dense and sometimes difficult to read without rereading.

In Table 1 and Figure 2, one should use the same nomenclature as in the methods (i.e., Incident 1X, etc., instead of Cohort 1, 2, 3…). In Figure 2, it is very difficult to tell what the figure is trying to depict the way it is currently labeled. It might be easier to understand if the legend distinguishing blue and gray simply said gray = incident plus prevalent, blue = incident. The top can then be labeled single diagnosis code (1X), while the bottom can be labeled two diagnosis codes (2X).

It is difficult to understand how a minimum look back of 365 days was required for incident SLE, when not all datasets contain data spanning 365 days (IQVIA, table 2).

It is very concerning that low back pain was considered an early manifestation of lupus, as this is not a manifestation of lupus at all. How does removing this affect the correction for misclassified index date?

On page 13 the concept of a 3X algorithm is introduced in Table 4, but it is not described in the methods nor in the results text.

On page 13, fifth line from the bottom, the sentence "this is decreased to 39%…" does not describe the condition under which the sensitivity is reduced versus the preceding sentence. This is confusing.

In the discussion, a bit more explanation of QBA may be necessary.

Reviewer #2: The authors present a manuscript documenting the estimated performance of a new phenotyping algorithm for systemic lupus erythematosus (SLE). This study uses a previously published tool by the lead author. The strengths of this paper include the number of datasets, the published algorithm implementation for comparison, and the additional resources provided. However, the evidence provided for the performance of the algorithm is weak, and it is unclear how the approach is “data-driven” as per the title and conclusion. This manuscript suggests a large body of work that contributes to the study of an important complex disease, but it read to this reviewer as an extended technical demonstration.

General Comments:

1) On the development of the algorithm presented: While the authors refer to a large amount of work behind their development process, there is little transparency into the factors that ultimately defined their decision making. This is also true of the “data-driven” and “empirical evidence” phrases used for “development and evaluation”; it is not clear how it was applied to the development portion. For example, the authors describe a literature search identifying an impressive 59 articles! The details of this investigation are not included, and the treatment of these findings amounts to selecting the condition codes and signs/symptom/treatment codes from several studies. What was used to establish the 90-day look-back window as “optimal” is not discussed, either. The authors may be doing everything according to best practices, but it is not shared with the audience. The start of the discussion reads as “The final four selected algorithms”, which again implies there was much more happening behind the scenes for this selection.

2) On the validation of the algorithms using PheValuator: Acknowledging that Dr. Swerdel is the first author on both this and the PheValuator manuscript, please allow a brief summary: PheValuator is a tool that allows estimation of algorithm performance characteristics by generating a fuzzy silver standard on a population for evaluation, taking a strong definition of cases and controls and using a classification method to estimate likelihood across the entire population. The reliability of the estimates of algorithm performance is strongly predicated on the reliability of the ‘extremely specific (“xSpec”), sensitive, and prevalence cohorts’. Biases in those cohorts or unaccounted-for differences in the source data can greatly impact the reliability of the resulting estimates. However, the authors do not make clear or justify the details of this cohort selection, nor provide information readers might use to evaluate the reliability of these estimates. In attempting to find the details, it looks like this OHDSI Atlas cohort definition describes it: https://github.com/OHDSI/PhenotypeEvaluations/blob/main/SLE/inst/cohorts/22370.json . I struggled to interpret exactly what this means, but it looks like it is expecting only a couple of SLE codes with 1 year of observation and perhaps a 21-day window separating codes. The accuracy of this interpretation aside, it should be clear to the reader what was done and justified as a foundation for the downstream estimation.

3) The authors describe several features of the identified cohorts and in comparison with the general population. These descriptions may benefit from a clinical perspective. For example on page 12: “A higher proportion of females compared to males with SLE were identified. The largest disproportionality was in MDCD where 91% of the subjects were female.” This seems expected for this population. Perhaps making more of these details available broadly while noting methodologically or clinically significant details in the discussion would be helpful to readers. Grounding this work in the user’s/reader’s needs and helping address questions of “can I use this with my data” or “are there biases I may wish to further address by extending this approach” may be very powerful.

4) The names and references among the materials are not always consistent, which can make it hard to understand and connect the data provided. The SLE Cohort Diagnostics tool is neat to see, but the Cohort numbers in the tool do not align with the Cohort numbers in Figure 2, nor do the Cohort IDs in the tool match the Cohort IDs in the Github Repository. Table 2 is not sorted the same as Table 3 (eg, by name and not by abbreviation).

5) There are some areas where clarity could be improved with regard to the cohort selection and process, which is especially important when “any non-case is a control”. For example, it wasn’t clear if the incident population evaluation only considered individuals with at least 365 days of data (as individuals with less data could not possibly qualify as a case).

Specific comments:

In the Cohort Diagnostics report, some of the cohorts appear to have extraneous information, eg the exclusion condition concept set in: https://github.com/OHDSI/PhenotypeEvaluations/blob/main/SLE/inst/cohorts/22370.json

Table 3: Including denominators for these (or presenting additionally as rates) would be helpful for interpretation.

Page 5, line 100: This appendix appears to only contain the SNOMED codes, not the expanded / mapped set of codes as described in the text.

Page 10, “ex-US databases”. Is this intended to be “non-US” or “extra-US”?

Page 11, text lines 7-8: The differences in this and the definitions of the algorithms, ie, these statistics consider a window based on the index date while the algorithm window is defined based on the SLE code date, seem to make the numbers harder to interpret on the surface.

Page 12, “MDCD where about 25% of the first diagnoses were made in an emergency room visit”- It may be worth interpreting this later. Is this a feature of the population that is expected or is it a signal that the algorithm is not performing as expected or something else?

Page 12, near end: Particularly given the international cohorts used, specifying where the Clinical Modification versions of the ICD systems were applicable is important for clarity.

Page 13, “Rates in Australia and France varied considerably, likely due to the small sample size.” and “Due to low subject counts, we were unable to calculate the performance characteristics for Australia and France.”: This merits further treatment. If the authors’ method cannot analyze 4- or 5-million-person datasets, that suggests challenges for many potential users who do not have access to such large sets. Is this a matter of the low prevalence of SLE? Was there a metric that showed you could not use datasets of this size, or some error reported by PheValuator? It would be helpful for readers and users/implementers to understand. The data presented suggest to this reviewer that perhaps the algorithm does not work in these datasets: comparing the IQVIA France and Germany datasets, there are dramatically different rates observed (considering Tables 2 and 3). One might conjecture that the IQVIA GP data are insufficient for SLE identification, as it did function in Germany but in neither France nor Australia, though there are certainly other possibilities.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Feb 16;18(2):e0281929. doi: 10.1371/journal.pone.0281929.r002

Author response to Decision Letter 0


26 Sep 2022

Reviewer Response

Reviewer #1: In this study, the authors use 9 different administrative data sets from around the world to develop phenotype algorithms for systemic lupus erythematosus. The study has several strengths. First, it uses multiple data sets and larger numbers of patients than prior studies. Second, there is a correction for misclassification of the index date by allowing for signs and symptoms of lupus to mark the index date if within 90 days of the first lupus diagnosis code. The source code is made available to increase reproducibility. Finally, they use a predictive algorithm previously published (PheValuator) as the gold standard for likelihood of the presence of lupus.

However, there are several weaknesses that must be addressed. There is no nested subset in which a clinical diagnosis of SLE is compared against the PheValuator likelihoods. Thus, there is no true sensitivity, specificity, or positive predictive value without the clinical gold standard. This should be discussed further in the methods and noted as a limitation in the discussion.

Thank you for bringing up this very important point. While the validation method, PheValuator, does not use chart review for validation, we think the predicted probabilities accurately assess the likelihood of SLE in our trained cohort. This thinking is strengthened by our recent publication (https://www.sciencedirect.com/science/article/pii/S1532046422001885) where we compared the results from PheValuator to previously published chart validation studies for 17 different phenotypes encompassing 86 phenotype algorithms. We found close agreement between PheValuator results and those from chart review. We have included the following in the methods section:

Page 9, Line 11:

“While algorithm validation results from chart review are considered the “gold standard”, we have compared the results from PheValuator with prior studies using chart review and found excellent agreement between the two methods. (12)”

In the limitations section:

Page 18, Line 45:

“The results from PheValuator have also been compared to the results from previously published validation studies and have demonstrated excellent agreement.(12)”

The paper is very dense and sometimes difficult to read without rereading.

In Table 1 and Figure 2, one should use the same nomenclature as in the methods (i.e., Incident 1X, etc., instead of Cohort 1, 2, 3…).

Thank you for your comment. This has been corrected in both Table 1 and Figure 2.

In figure 2, it is very difficult to tell what this is trying to depict the way it is currently labeled. It might be easier to understand if the Legend distinguishing blue and gray would simply say (gray = incident plus prevalent, blue equals incident). The top can then be labeled single diagnosis code, (1x) while the bottom can be labeled two diagnosis codes (2x).

Thank you for this correction. We have made this change in Figure 2.

It is difficult to understand how a minimum look back of 365 days was required for incident SLE, when not all datasets contain data spanning 365 days (IQVIA, table 2).

Thank you for this observation. While the median follow-up time in some of the databases was indeed less than one year, we were able to observe patients with 365 day lookback as shown in Table 3. The lookback time requirement did have an impact on the number of subjects observed. For example, in IQVIA France the number of subjects in the incident cohort requiring 365 day lookback was 117 whereas the number of subjects in the prevalent cohort was 260, over a 50% reduction, obviously impacting the sensitivity of the incident algorithm.

We have added the following to Page 5, Line 88:

“Requiring a 365 day look-back for the incident algorithm will impact the sensitivity of the algorithm especially in databases where subjects are observed for shorter periods of time.”

It is very concerning that low back pain was considered an early manifestation of lupus, as this is not a manifestation of lupus at all. How does removing this affect the correction for misclassified index date?

Thank you for this comment. The high prevalence of back pain in SLE was found by Cezarino et al (2017). We have added this citation to the list of citations on Page 5, Line 96. The method we used to determine possible index date misclassification is empirically based and we included low back pain based on the relatively high prevalence of low back pain in the SLE population (> 15%) prior to initial diagnosis of SLE. Our prevalence of low back pain was similar to Cezarino et al.

On page 13 the concept of a 3X algorithm is introduced in Table 4, but it is not described in the methods nor in the results text.

Thank you for this correction. We have added the following line to Page 9, Line 7:

“The two algorithms from Barnado et al were “Systemic Lupus 3X plus ever anti-malarial drugs” and “Systemic Lupus 3X plus ever anti-malarial drugs excluding dermatomyositis and systemic sclerosis”.”

On page 13, fifth line from the bottom, the sentence "This decreased to 39%…" does not describe the condition under which the sensitivity was reduced versus the preceding sentence. This is confusing.

Thank you for picking up this error. The comparison was to the incident cohorts and the sentence has been corrected at Page 14, Line 108 from:

“This decreased to 39% (single code required) and 24% (two codes required).”

To:

“This decreased to 39% (single code required) and 24% (two codes required) for the two incident cohorts.”

In the discussion, a bit more explanation of QBA may be necessary.

We appreciate your comment. We have added the following to Page 17, Line 13:

“In QBA, the sensitivity and the specificity of the phenotype algorithm used within the study are included as parameters in the corrective equations. The results from the current study may be used for these corrections.”
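As a concrete illustration of the kind of corrective equation QBA applies (our own sketch, not part of the manuscript text; the numbers are hypothetical), the Rogan-Gladen estimator recovers a true prevalence from the apparent prevalence given the phenotype algorithm's sensitivity and specificity:

```python
def corrected_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen correction: recover the true prevalence from the
    apparent (observed) prevalence of a misclassified phenotype."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

# Hypothetical values: apparent prevalence 0.8%, Se = 0.77, Sp = 0.999
true_prev = corrected_prevalence(0.008, 0.77, 0.999)
```

With a perfect algorithm (Se = Sp = 1) the correction leaves the observed prevalence unchanged; as specificity falls toward 1 minus the apparent prevalence, the corrected estimate shrinks toward zero.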

Reviewer #2: The authors present a manuscript documenting the estimated performance of a new phenotyping algorithm for Systemic Lupus Erythematosus (SLE). This study uses a previously published tool by the lead author. The strengths of this paper include the number of datasets, the published algorithm implementation for comparison, and the additional resources provided. However, the evidence provided for the performance of the algorithm is weak and it is unclear how the approach is “data-driven” as per the title and conclusion. This manuscript suggests a large body of work that contributes to the study of an important complex disease, but it read to this reviewer as an extended technical demonstration.

General Comments:

1) On the development of the algorithm presented: While the authors refer to a large amount of work behind their development process, there is little transparency into the factors that ultimately defined their decision making. This is also true of the “data-driven” and “empirical evidence” phrases used for “development and evaluation”; it is not clear how it was applied to the development portion. For example, the authors describe a literature search identifying an impressive 59 articles! The details of this investigation are not included, and the treatment of these findings amounts to selecting the condition codes and signs/symptom/treatment codes from several studies.

The goal of this paper was to provide researchers with empirically-derived and validated phenotype algorithms that they may use in their research on SLE. We think that the methodology we used to derive and validate these algorithms is a significant improvement over previous studies that published algorithms derived with less evidence. The rigor applied in our algorithm development process means that the algorithms may be used directly within others’ research studies with confidence of reduced bias. We respectfully disagree with the assessment that the evidence provided for the performance of the algorithm is weak. As noted in our response to Reviewer 1, we have now shown in our latest publication on PheValuator (https://pubmed.ncbi.nlm.nih.gov/35995107/) that the results from this tool compare favorably with prior validation studies using chart review.

Your comments about our methods for this empirically-driven process being unclear are well received. In order to conserve space in the article, we provided a very brief description of the process. This was not sufficient. To remedy that error, we have provided two appendices to be included in the on-line supplemental information. Supplemental information 1 provides details of the literature search, including the search strategy in PUBMED, the authors and citations for the articles we found, and a table of the diagnosis codes used within algorithms in the previously published articles. Supplemental information 2 provides a flow diagram and description of the complete process we used to develop the algorithms using our data-driven approach.

To provide better clarity for the process we used to develop the algorithms we have changed the text on Page 4, Line 79 from:

“After reviewing the results from the literature search and determining all the diagnosis codes for SLE in the different vocabularies, i.e., International Classification of Diseases, Ninth (ICD-9) or Tenth (ICD-10) Revision and Read codes, codes were translated each into the Systemized Nomenclature of Medicine (SNOMED) vocabulary using the OHDSI open-source ATLAS tool (https://github.com/OHDSI/Atlas). Translating codes from disparate vocabularies into SNOMED has been shown to be effective and improves the efficiency and transportability of research.(7)”

To:

“We applied a rigorous process to develop the algorithms in this study. We used an extensive literature review (details and results are in Supplemental information 1), along with several tools within the Observational Health Data Sciences and Informatics (OHDSI) tool stack for empirical analysis. The goal of instituting this process was to enhance the science of phenotype algorithm development. The full details of the process are detailed in Supplemental information 2.”

What was used to establish the 90-day look-back window as “optimal” is not discussed, either.

Thank you for this observation. We chose a 90-day look back window as optimal as the rates for the signs and symptoms for SLE were highest during this period and significantly reduced in prevalence prior to 90 days. We have added to Page 5, Line 93:

“In the period 365 to 91 days prior to the first diagnosis code for SLE, the rates of these signs and symptoms were significantly lower than in the 90 to 0 days prior to index date. Thus, we chose the 90 days prior to index as the optimal time period.”

The authors may be doing everything according to best practices, but it is not shared with the audience. The start of the discussion reads as “The final four selected algorithms”, which again implies there was much more happening behind the scenes for this selection.

Thank you for pointing out this lack of detail. We have increased the level of detail as described above using two new documents as supplemental information.

2) On the validation of the algorithms using PheValuator: Acknowledging that Dr. Swerdel is the first author on both this and the PheValuator manuscript, please allow a brief summary: PheValuator is a tool that allows estimation of algorithm performance characteristics by generating a fuzzy silver standard on a population for evaluation, taking a strong definition of cases and controls and using a classification method to estimate likelihood across the entire population. The reliability of the estimates of algorithm performance is strongly predicated on the reliability of the ‘extremely specific (“xSpec”), sensitive, and prevalence cohorts’. Biases in those cohorts or unaccounted-for differences in the source data can greatly impact the reliability of the resulting estimates. However, the authors do not make clear or justify the details of this cohort selection, nor provide information readers might use to evaluate the reliability of these estimates.

Thank you for your comment. We felt that understanding the full details of the PheValuator process is beyond the scope of this article. We use many open-source tools within our process, the details and source code of which are available on-line. As Reviewer 1 noted, the article is dense in detail; this supports our decision not to include more detail on the utilized tools but to rely on the interested reader to use the on-line references for more detail. Also, based on a comment from Reviewer 1, we have added additional detail on the validity of the PheValuator process in the Methods section. The results presented in the PheValuator follow-up article demonstrate the close agreement of the results from PheValuator with the results from previously published articles on phenotype algorithm validation using chart review. We feel this agreement justifies the structure of the cohorts in the PheValuator process. Also, as noted above, we have provided significantly more detail on the overall process, including the PheValuator process, in supplemental information documents.

Page 9, Line 11:

“While algorithm validation results from chart review are considered the “gold standard”, we have compared the results from PheValuator with prior studies using chart review and found excellent agreement between the two methods. (12)”
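For readers curious how performance estimates can be derived from a probabilistic (“silver standard”) reference rather than binary chart labels, the following sketch (our own illustration, not PheValuator’s actual code; all names and numbers are hypothetical) computes an expected confusion matrix by weighting each subject by their predicted probability of SLE:

```python
def estimate_performance(probs, algorithm_positive):
    """Expected confusion matrix from a probabilistic reference: each
    subject contributes fractionally to TP/FP/FN according to their
    predicted probability of truly having the phenotype."""
    tp = sum(p for p, pos in zip(probs, algorithm_positive) if pos)
    fp = sum(1 - p for p, pos in zip(probs, algorithm_positive) if pos)
    fn = sum(p for p, pos in zip(probs, algorithm_positive) if not pos)
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return ppv, sensitivity

# Hypothetical predicted probabilities and algorithm inclusion flags
probs = [0.95, 0.90, 0.10, 0.85, 0.05, 0.02]
calls = [True, True, True, False, False, False]
ppv, sens = estimate_performance(probs, calls)
```

Note how the third subject, flagged by the algorithm but with a low predicted probability, contributes mostly to the false-positive count and pulls the PPV estimate down, while the fourth (missed by the algorithm, high probability) lowers the sensitivity estimate.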

In attempting to find the details, it looks like this OHDSI Atlas cohort definition describes it: https://github.com/OHDSI/PhenotypeEvaluations/blob/main/SLE/inst/cohorts/22370.json . I struggled to interpret exactly what this means, but it looks like it is expecting only a couple of SLE codes with 1 year of observation and perhaps a 21-day window separating codes. The accuracy of this interpretation aside, it should be clear to the reader what was done and justified as a foundation for the downstream estimation.

This is an important comment. The details of the PheValuator process used for the analyses within this paper are thoroughly described in the article published since the SLE article was submitted for review. The details of the process for interested readers are cited above (and now in the SLE article) and may be found at https://pubmed.ncbi.nlm.nih.gov/35995107/. We have also provided significantly more detail on the overall process including the PheValuator process in supplemental information documents.

3) The authors describe several features of the identified cohorts and in comparison with the general population. These descriptions may benefit from a clinical perspective. For example on page 12: “A higher proportion of females compared to males with SLE were identified. The largest disproportionality was in MDCD where 91% of the subjects were female.” This seems expected for this population. Perhaps making more of these details available broadly while noting methodologically or clinically significant details in the discussion would be helpful to readers. Grounding this work in the user’s/reader’s needs and helping address questions of “can I use this with my data” or “are there biases I may wish to further address by extending this approach” may be very powerful.

Thank you for the suggestion. The goal of this paper was to provide researchers with empirically-derived and validated phenotype algorithms for studies of SLE. We feel we have additionally provided a wealth of information that readers may use for their own hypothesis generation based upon the data within the cohort diagnostics shiny application. We did provide some insights into the data, as you noted above. However, a thorough clinical investigation of the data is beyond the scope of this research article. We felt that a future separate article describing the clinical ramifications of the SLE phenotype algorithms would be more appropriate.

4) The names and references among the materials are not always consistent, which can make it hard to understand and connect the data provided. The SLE Cohort Diagnostics tool is neat to see, but the Cohort numbers in the tool do not align with the Cohort numbers in Figure 2, nor do the Cohort IDs in the tool match the Cohort IDs in the Github Repository. Table 2 is not sorted the same as Table 3 (eg, by name and not by abbreviation).

Thank you for noting these errors. We have re-sorted Tables 2 and 3 and now they are in the same sort order based on abbreviation. We have renamed the .json files in the Github repository to match the names in Table 2 and eliminated duplicate cohorts in the repository. We have also added a reference column in Table 2 to show the cohort numbers in the cohort diagnostic shiny application.

5) There are some areas where clarity could be improved with regards to the cohort selection and process, which is especially important when “any non-case is a control”. For example, it wasn’t clear if the incident population evaluation only considered individuals with at least 365 days of data (as others could not possibly qualify as a case).

Thank you for this comment. In the analyses from PheValuator, cohorts are matched to possible controls based on the required observation period of the algorithm. For example, in the incident algorithms requiring a 365-day lookback, possible controls are drawn from subjects whose compared time period had at least 365 days of prior lookback.

Specific comments:

In the Cohort Diagnostics report, some of the cohorts appear to have extraneous information, eg the exclusion condition concept set in: https://github.com/OHDSI/PhenotypeEvaluations/blob/main/SLE/inst/cohorts/22370.json

Thank you for your diligent effort in finding that error. All the cohorts have been re-checked and all extraneous concept sets have been removed.

Table 3: Including denominators for these (or presenting additionally as rates) would be helpful for interpretation.

Thank you for the comment. The denominators for these counts are the entire population of each database. The database populations may be found in the sixth column of Table 2 (Number of Persons (millions)). The incidence rates for different strata (e.g., age group, sex) can be found in the shiny app.

Page 5, line 100: This appendix appears to only contain the SNOMED codes, not the expanded / mapped set of codes as described in the text.

Thank you for finding this error. The new table in supplemental information 3 now contains all the SNOMED codes as well as the codes from the native classification schemes, e.g., ICD-9.

Page 10, “ex-US databases”. Is this intended to be “non-US” or “extra-US”?

Thank you for pointing out this confusing usage. We have made the change from “ex-US databases” to “databases from outside the US” on Page 10, line 33.

Page 11, text lines 7-8: The differences between this and the definitions of the algorithms, i.e., these statistics consider a window based on the index date while the algorithm window is defined based on the SLE code date, seem to make the numbers harder to interpret on the surface.

The final analyses for differences in the algorithms are all compared between algorithms with the index date correction applied. The analyses use the same windows from the index date and no longer use windows starting from the SLE code date only (the original index date prior to correction). For example, in the line you have cited, the differences were between the 2X algorithm with the index date corrected and the 1X algorithm with the index date corrected. Because the index date correction moves the index date to a date closer to the start of the clinical occurrence of SLE, these comparisons are now appropriate.

Page 12, “MDCD where about 25% of the first diagnoses were made in an emergency room visit”- It may be worth interpreting this later. Is this a feature of the population that is expected or is it a signal that the algorithm is not performing as expected or something else?

Excellent insight, thank you. It is well known that Medicaid recipients use the emergency room more than other health insurance subscribers. To address the issue we have added the following to Page 12, Line 73:

“This follows with other published studies examining the higher use of the emergency room in MDCD recipients.(15)”

Page 12, near end: Particularly given the international cohorts used, specifying which Clinical Modification versions of the ICD systems were applicable is important for clarity.

We have changed the text at Page 12, Line 80 from:

“In the Incident, 2X algorithm, the most prevalent index event was a diagnosis code of “Systemic lupus erythematosus” (SNOMED code 257628; ICD-10 M32.9 (“Systemic lupus erythematosus, unspecified”); ICD-9 710.0).”

To:

“In the Incident, 2X algorithm, the most prevalent index event was a diagnosis code of “Systemic lupus erythematosus” (SNOMED code 257628; ICD-10CM and ICD-10GM M32.9 (“Systemic lupus erythematosus, unspecified”); ICD-9CM 710.0 (“Systemic lupus erythematosus”)).”

Page 13, “Rates in Australia and France varied considerably, likely due to the small sample size.” And “Due to low subject counts, we were unable to calculate the performance characteristics for Australia and France.” This merits further treatment. If the authors’ method cannot analyze 4- or 5-million-person datasets, that suggests challenges for many potential users who do not have access to such large sets. Is this a matter of the low prevalence of SLE? Was there a metric that showed you could not use datasets of this size, or some error reported by PheValuator? It would be helpful for readers and users/implementers to understand. The data presented suggests to this reviewer that perhaps the algorithm does not work in these datasets: comparing the IQVIA France and Germany datasets, there are dramatically different rates observed (considering Tables 2 and 3). One might conjecture that the IQVIA GP data is insufficient for SLE identification, as it did function in Germany but in neither France nor Australia, though there are certainly other possibilities.

There are multiple factors likely contributing to the low subject counts in France and Australia. The most likely reason is that these two databases, as well as the IQVIA German database, are exclusively general practitioner databases. It is likely that many subjects with SLE are treated by specialists. To aid the reader, we are adding the following on Page 13, Line 91:

“The small number of SLE subjects in the Australian and French databases is likely due to these databases being limited to general practitioners. In other databases which include specialists, such as rheumatologists, the sample size and incidence rates are higher and more stable.”

There are limitations to the use of PheValuator regarding sample size. The limiting factor is the xSpec cohort which requires at least 200 subjects for the development of an accurate model. This subject count was not satisfied in several of the databases. To clarify this point we are adding to Page 13, Line 100:

“PheValuator requires a minimum of 200 subjects with a high likelihood of having SLE to produce an accurate model. This number was not satisfied in the Australian or French databases. As noted earlier, this may be due to the limitation of these databases to general practitioners.”

Attachment

Submitted filename: Reviewer Response.docx

Decision Letter 1

Luca Navarini

5 Feb 2023

Using a Data-driven Approach for the Development and Evaluation of Phenotype Algorithms for Systemic Lupus Erythematosus

PONE-D-22-09946R1

Dear Dr. Swerdel,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Luca Navarini

Academic Editor

PLOS ONE

Acceptance letter

Luca Navarini

8 Feb 2023

PONE-D-22-09946R1

Using a Data-driven Approach for the Development and Evaluation of Phenotype Algorithms for Systemic Lupus Erythematosus

Dear Dr. Swerdel:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Luca Navarini

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Description of and results from the literature search.

    (DOCX)

    S2 File. Description of the process for phenotype algorithm development.

    (DOCX)

    S3 File. The complete set of ICD, Read, and SNOMED codes used in the algorithms.

    (XLSX)


    Data Availability Statement

    The data that support the findings of this study are available from IBM, Optum, JMDC, and IQVIA but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are, however, available from the authors upon reasonable request and with permission of IBM, Optum, JMDC, and IQVIA. The authors of the present study had no special privileges in accessing these datasets which other interested researchers would not have. To request access to the datasets used in this study, researchers should use the information, database name and version number, supplied in Table 2.

