Abstract
Healthcare systems are hampered by incomplete and fragmented patient health records. Record linkage is widely accepted as a solution for improving the quality and completeness of patient records. However, no systematic approach exists for manually reviewing patient records to create gold standard record linkage data sets. We propose a robust framework for creating and evaluating manually reviewed gold standard data sets used to measure the performance of patient matching algorithms. Our 8-point approach covers data preprocessing, blocking, record adjudication, linkage evaluation, and reviewer characteristics. This framework can help record linkage method developers provide the transparency needed when creating and validating gold standard reference matching data sets. In turn, this transparency will support both the internal and external validity of record linkage studies and improve the robustness of new record linkage strategies.
Keywords: JAMIA, manual review, gold standard, patient matching, matching algorithms
INTRODUCTION
The specialization of healthcare services and the mobility of patient populations have led to fragmented and incomplete health data in the US. Healthcare providers require comprehensive patient records to ensure the quality, accuracy, safety, and cost-effectiveness of care. Despite widespread adoption of health information systems, such as electronic health records, patient information is typically not compiled into a single longitudinal record. Instead, many patients’ complete healthcare record is held across multiple siloed clinical repositories.1 This fragmentation can compromise patient safety, lead to duplicated testing, and impact physician clinical decisions.2 For example, incomplete patient and medication data account for nearly half of all medication errors.3 Inconsistent data also drive hospital costs through inefficient care. Duplicate records cost an average of $1100 for repeated tests and delays in care, and over $800 per emergency department visit.4,5 One-third of rejected insurance claims are attributed to inaccurate patient identification, which costs the average hospital $1.5 million and the US healthcare system $6 billion annually.5 These limitations have led healthcare systems to invest millions in record linkage and data management.6,7
Creating complete longitudinal patient records requires approaches that effectively integrate data for the same patient across information systems and organizations. A logical solution for efficiently matching patient information would be a unique patient identifier (UPI). However, the United States is the last industrialized nation without such a system.8 Congress has barred funding for developing a UPI for over 2 decades, primarily citing privacy concerns.9 Removing the prohibition requires a Congressional vote to lift the ban, which may take several years, followed by additional time to research and implement an identifier system.
Thus, without a UPI, US hospitals rely on probabilistic and heuristic patient matching algorithms driven by patient demographic information, social security numbers, and other identifiers extracted from existing medical records. Although these algorithms can have strong predictive power, the lack of data standardization and incomplete patient information hinder matching accuracy. Further, some studies have found that human reviewers reject 15%–37% of the links proposed by linkage algorithms.10 To overcome these barriers, there must be high-quality, standardized reporting of data elements and reproducible methods to evaluate algorithmic quality, including accuracy and potential for bias.
Currently accepted and peer-reviewed methodologies for record linkage, such as GUILD, describe each step of the linkage pathway and recommend methods to assess or account for linkage error; however, they fail to describe the manual review processes that provide the basis for adjudicating performance.11–14 Prior record linkage studies that included a manual review process have failed to adequately describe each step.15–17 Therefore, a more transparent and detailed approach to conducting manual review in record linkage studies fills a key gap in the literature and may improve data integration efforts aimed at reducing healthcare costs and improving quality and patient safety.
Objective
We propose a novel, robust framework for the consistent and reproducible evaluation of manually reviewed gold standard data sets used to measure patient matching algorithm performance.
METHODS
We describe recommended manual review reporting elements for consistent gold standard data set creation and evaluation (Table 1). Because steps may differ in individual studies, we describe a general approach, and then variations for each element.
Table 1.
Steps 1–4 describe the recommended manual review reporting elements for preparing a gold standard data set and record pairs through data description, preprocessing, blocking, and sampling; steps 5–8 describe human training, adjudication processes, result analysis, and a description of software and reviewers
| Reporting element | Description |
|---|---|
| 1. Data set description | How a complete data set is collected and from where is it sourced. This may include type of data source, provenance, and population coverage. |
| 2. Preprocessing: Field Selection and Data Standardization | How data fields are selected and standardized prior to any record pair selection for the matching software. This includes removing all invalid values, using agreed-upon formats, and imputing values when needed. |
| 3. Record pair grouping | Processes, such as blocking, that group record pairs based on schemes to reduce computational complexity. |
| 4. Procedure for sampling and matching pairs | Methods used to sample record pairs based on blocking schemas and match them prior to human review. |
| 5. Training and process for judging record matches and nonmatches | Training and instructions provided to and used by reviewers to judge a record pair as a “match” or “non-match”. This includes how record pairs are assigned to reviewers, and criteria that reviewers use to judge matches and nonmatches. This also includes any iterative review or other steps taken to adjudicate discordance across reviews judging the same record pairs, and to determine the final match status. |
| 6. Inter-rater reliability measures | Any inter-rater reliability metrics, such as Cohen’s Kappa, record pair discordance and overall matches/nonmatches, their values, and a description of how they are used in the review process. |
| 7. Review software or tools used | Software, forms, or other support tools used to present record pairs to reviewers and for reviewers to record their adjudications. |
| 8. Reviewer characteristics | Total number of reviewers, and ideally each reviewer’s age, gender, race, cultural background, and prior experience with clinical or public health data and record linkage. |
Data set descriptions
Designing a reproducible process for record linkage requires well-described data sets. Furthermore, external validation of a linkage algorithm requires standardized metadata descriptors covering the data set and its origin. Record linkage algorithms have been tested on various data sets, including cancer patients,18 newborns,19 and indigenous tribes,20 all with differing data quality. Thus, the data source should be clearly described, whether it is electronic health records, public health records, social security master files, or a clinical registry, as differences in data sources may affect an algorithm’s generalizability to other data sources or types.13,21 The data set’s quality can be described through its provenance, collection techniques, and variables measured. Additionally, studies should describe the quality of variables in terms of completeness and accuracy. Poor data quality may lead to the clustering of identifier errors, resulting in linkage errors with unmatched or misclassified records and potential selection bias.22
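For instance, a minimal machine-readable descriptor makes these metadata easy to publish alongside the data set. The sketch below is illustrative; the field names and example values are assumptions, not elements prescribed by this framework.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDescriptor:
    """Illustrative metadata descriptor for a record linkage source data set."""
    name: str                       # human-readable data set name
    source_type: str                # e.g., "EHR", "public health registry", "SSA master file"
    provenance: str                 # who collected the data, when, and how
    population_coverage: str        # geography, time period, demographics covered
    variables: dict = field(default_factory=dict)  # variable -> completeness/accuracy notes

example = DatasetDescriptor(
    name="Statewide ED encounters, 2015-2020",
    source_type="EHR",
    provenance="Extracted from a statewide health information exchange",
    population_coverage="All emergency department visits in one US state, 2015-2020",
    variables={"first_name": {"completeness": 0.97}, "dob": {"completeness": 0.99}},
)
```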
Preprocessing: field selection and data standardization
This component contains 2 intermediate steps: field selection and data standardization.
First, a set of fields is selected to describe the individual characteristics of each record. Some data sets may contain a patient’s social security number, which can function as a unique identifier. In the absence of such an identifier, fields such as date of birth and last name can be combined as quasi-identifiers to uniquely identify patients.23 Additionally, chosen fields should contain accurate and complete data.10 For example, if the first-name field is empty for many records, additional fields may need to be selected to ensure record uniqueness.
After appropriate field selection, an agreed-upon preprocessing method is needed to standardize records for accurate review; address standardization alone can decrease unlinked records by up to 20%.24 Date-of-birth fields should also follow a consistent format, such as “MM/DD/YYYY”.25 Accepted standards such as the US Postal Service address definitions and the Uniform Hospital Discharge Data Set already exist but require broader adoption to establish a national standard format for data element capture.26,27
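As a concrete illustration, the sketch below shows one way such preprocessing might be applied to a single record; the field names, placeholder values, and accepted date formats are assumptions, not standards mandated by this framework.

```python
import re
from datetime import datetime

def standardize_record(rec: dict) -> dict:
    """Illustrative preprocessing for one record: trim and upper-case names,
    normalize date of birth to MM/DD/YYYY, and blank out placeholder values."""
    out = dict(rec)
    for name_field in ("first_name", "last_name"):
        value = (out.get(name_field) or "").strip().upper()
        # Treat common placeholder strings as missing rather than real values.
        out[name_field] = "" if value in {"UNKNOWN", "N/A", "NONE"} else value
    dob_raw = (out.get("dob") or "").strip()
    parsed = None
    for fmt in ("%m/%d/%Y", "%Y-%m-%d", "%m-%d-%Y"):
        try:
            parsed = datetime.strptime(dob_raw, fmt)
            break
        except ValueError:
            continue
    out["dob"] = parsed.strftime("%m/%d/%Y") if parsed else ""
    # Keep only digits in phone numbers so differing formats compare consistently.
    out["phone"] = re.sub(r"\D", "", out.get("phone") or "")
    return out
```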
Record pair grouping (blocking)
When matching 2 data sets, every record in one data set must be compared with every record in the other. Researchers use blocking to subset a large data set into smaller groups that share common attributes.28 Blocking schemes efficiently reduce the computational complexity of record comparison, and increase the proportion of true matches among candidate pairs, by comparing only records within the same block. Ideal blocking fields have a high variety of values and high rates of completeness.24 However, blocking may reduce review accuracy by removing true matches if performed without proper discretion.29
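The sketch below illustrates the mechanics of blocking under an assumed scheme (first letter of last name plus birth year); a real study would choose blocking fields based on their completeness and discriminating power.

```python
from collections import defaultdict
from itertools import combinations

def block_records(records, block_key):
    """Group records by a blocking key; only pairs within a block are compared."""
    blocks = defaultdict(list)
    for rec in records:
        key = block_key(rec)
        if key is not None:          # skip records missing the blocking fields
            blocks[key].append(rec)
    return blocks

def candidate_pairs(blocks):
    """Yield within-block record pairs instead of the full cross product."""
    for recs in blocks.values():
        yield from combinations(recs, 2)

# Assumed blocking scheme: first letter of last name plus year of birth.
blocks = block_records(
    records=[{"last_name": "SMITH", "dob": "01/02/1980"},
             {"last_name": "SMYTH", "dob": "01/02/1980"}],
    block_key=lambda r: (r["last_name"][:1], r["dob"][-4:]) if r["last_name"] and r["dob"] else None,
)
pairs = list(candidate_pairs(blocks))
```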
Procedure for sampling and matching pairs
Records from each blocking group must be sampled to create a representative training set and final data set for manual review. Depending on the blocking scheme and the size of the data set, studies may use proportional random or stratified sampling. Studies should accurately report the sampling methodology to support reproducibility and mitigate bias. Using a sample that is representative across multiple dimensions, such as culture, location, and age, reduces bias in the record linkage manual review process because reviewers can make clearer adjudications between record pairs.30,31
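A minimal sketch of proportional random sampling within blocks is shown below; the sampling rate, per-block floor, and fixed random seed are illustrative choices rather than framework requirements.

```python
import random

def sample_pairs_by_block(pairs_by_block, rate=0.05, minimum=10, seed=42):
    """Draw a proportional random sample of candidate pairs from each blocking group."""
    rng = random.Random(seed)             # fixed seed so the sample is reproducible
    sampled = {}
    for block, pairs in pairs_by_block.items():
        n = max(minimum, int(len(pairs) * rate))
        sampled[block] = rng.sample(pairs, min(n, len(pairs)))
    return sampled
```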
Process for judging record matches and nonmatches
Studies should include a comprehensive overview of reviewer training. The manual review process begins with reviewer instruction, and includes steps to assess records, evaluate biases and results, and resolve discordance between reviewers. Experts should design and curate a training record linkage data set, along with a gold standard, for the chosen reviewers to train with. Of note, to verify natural language processors for radiology reports, researchers have similarly formed expert review teams to review reports and create a reference validation data set for algorithms.32,33
Afterwards, researchers should review matches and mismatches between each reviewer and the gold standard to measure discordance. If significant disagreement is present, reviewers may receive additional training. Reviewers should use the same software for training and for final annotation. If reviewers disagree on a record pair’s status, another annotator may serve as a tiebreaker,33 and remaining discrepancies may be resolved through group discussion.34
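The following sketch shows one reasonable adjudication policy consistent with this description: unanimous judgments are finalized, ties go to an additional annotator, and anything still unresolved is escalated to group discussion. The function and label names are hypothetical.

```python
from collections import Counter

def adjudicate(pair_id, reviews, tiebreaker=None):
    """Combine independent reviewer judgments ('match' / 'nonmatch') for one record pair."""
    counts = Counter(reviews)
    if len(counts) == 1:                   # all reviewers agree
        return reviews[0]
    if tiebreaker is not None:             # an additional annotator breaks the tie
        return tiebreaker(pair_id)
    return "needs_group_discussion"        # escalate unresolved discordance
```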
Inter-rater reliability measures
Researchers should measure inter-rater reliability to understand variation between reviewers who adjudicate record pair matches and nonmatches.35 In manual review studies, reliability is typically measured with Cohen’s Kappa (2 raters) or Fleiss’ Kappa (an adaptation of Cohen’s for 3 or more raters). Cohen’s Kappa compares observed agreement with the agreement expected by chance; a value of 1 indicates perfect agreement between reviewers and a value of 0 indicates agreement no better than chance. Other reliability measures include percent agreement and Pearson’s r correlation coefficient, though these may poorly reflect true discordance.36 In addition, algorithm performance may be assessed through positive predictive value and F1 scores.21
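As an illustration, Cohen’s Kappa and percent agreement can be computed directly from paired reviewer judgments. The example below uses scikit-learn’s cohen_kappa_score on hypothetical data; Fleiss’ Kappa for 3 or more raters would require a different routine (eg, from statsmodels).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical judgments from two reviewers over the same six record pairs
# (1 = match, 0 = nonmatch).
reviewer_a = [1, 1, 0, 1, 0, 0]
reviewer_b = [1, 0, 0, 1, 0, 1]

percent_agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
kappa = cohen_kappa_score(reviewer_a, reviewer_b)   # chance-corrected agreement
print(f"percent agreement={percent_agreement:.2f}, Cohen's kappa={kappa:.2f}")
```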
Review software or tools used
A description of the software used should detail its features to support future reproduction. At a minimum, review software should: (1) import, query, and tabulate data from different data sets; (2) match records across distinct data sets; and (3) record and store each reviewer’s adjudications and discordance results. Ideal review software streamlines the review process to reduce possible biases and, through an accessible user interface, focuses reviewer attention solely on matching records. For example, Link Plus, a CDC probabilistic record linkage program with manual review capabilities, and Febrl, an open-source linkage package, automatically sort and group records and color-code blocking variable match status for ease of review.37,38 They also automatically treat null values as missing data and provide designated keyboard shortcuts. Software should also display previously matched records beneath the records under review, so reviewers can consult prior patient information, such as former addresses and phone numbers, for accurate judgment.
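For requirement (3), even a simple append-only audit log can capture per-reviewer adjudications for later discordance analysis. The sketch below is a minimal illustration with assumed column names; it does not describe how Link Plus or Febrl store results.

```python
import csv
from datetime import datetime, timezone

def record_adjudication(path, reviewer_id, pair_id, decision, notes=""):
    """Append one reviewer decision to a CSV audit log for per-reviewer analysis."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),  # when the judgment was made
            reviewer_id,
            pair_id,
            decision,                                # "match" / "nonmatch" / "uncertain"
            notes,
        ])
```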
Reviewer characteristics
Ideally, studies should describe reviewer characteristics, including the total number of reviewers and each reviewer’s age, gender, race, cultural background, and prior experience with clinical or public health data and record linkage research. Because assessing matches among record pairs often involves comparing names that vary across demographic, social, and cultural dimensions, a lack of diversity on a review team may introduce bias into matching decisions and the curated gold standard data set. Diverse teams are more likely to remain objective and examine facts with greater scrutiny.39 Thus, by reporting reviewer characteristics, researchers help readers and users of the data set understand potential biases in the data set, and in record linkage algorithms validated with it, which is critical to creating robust and trustworthy record linkage approaches.
DISCUSSION
Automated methods for record linkage are becoming increasingly important as healthcare systems have widely adopted electronic health records and federal interests advocate for accurate patient matching.7,9 A critical step in creating and validating patient matching algorithms is establishing a gold standard against which to evaluate such algorithms. To create high-quality gold standard data sets, there must be a framework that facilitates transparent reporting of manual review processes, thereby enabling critical evaluation and comparison of methodologies.
This framework builds upon prior work to provide detailed guidelines for reproducible evaluation of the manual review process for patient matching algorithms. Manual review is a crucial component that prior literature establishing gold standards for record linkage outlined but did not fully describe. Such reporting enhances rigor and reproducibility and allows end-users to better evaluate the external validity and potential biases of gold standard data sets, as well as of matching algorithms created using those data sets. More generally, this framework will provide critical support to technology developers and healthcare organizations in developing a nationwide strategy and approach to patient matching.
Unlike current patient matching methods, this framework supports evidence-based practices; health IT policymakers should therefore explore strategies to expand the evidence base for real-world matching system performance and encourage more consistent approaches to data collection and standardization.24,40,41 The Department of Health and Human Services and the ONC have made recent efforts to standardize patient matching, including Project US@, a unified industry-wide specification for addresses; the US Core Data for Interoperability, which standardizes health data classes for national information exchange; and the development of patient demographic matching specifications with the Interoperability Standards Advisory.42–44 Such efforts are crucial, as research has shown that last name and address standardization can improve record linkage accuracy by up to 8%.24
Limitations
This framework has 2 main limitations. First, curating a gold standard through human review requires significant personnel and technical costs.17,45 However, a number of institutions have formed manual review teams supported by federal funding, which has made record linkage manual review not only more feasible but also a growing area of study.15,46 Second, while this framework details several key elements of reproducible manual review, we do not intend it to be a final framework; rather, it sets the stage for future work by establishing general guidelines. It must be applied to real-world data to determine its robustness in practice and, based on those results, brought to public consensus for wide-scale acceptance and systematic use.
CONCLUSION
This 8-point framework provides consistent guidelines for manual review of record pairs when creating gold standard data sets for assessing patient matching algorithm performance. Such reporting provides the transparency and rigor that are important to trustworthy and unbiased record linkage technology in the United States. Moreover, this framework provides new methods that will support healthcare organizations and policymakers in developing a nationwide strategy for data integration efforts critical to reducing healthcare costs and improving quality.
FUNDING
This work was supported by the Agency for Healthcare Research and Quality, grant number 5R01HS023808-04.
AUTHOR CONTRIBUTIONS
SJG, CAH, and SNK contributed to the conception, design, acquisition, and analysis for the work. HX and XL performed analysis and contributed to design. MMR contributed to conception and design. AKG drafted the initial manuscript.
CONFLICT OF INTEREST STATEMENT
None declared.
Contributor Information
Agrayan K Gupta, Indiana University, Indianapolis, Indiana, USA.
Suranga N Kasthurirathne, Center for Biomedical Informatics, Regenstrief Institute, Indianapolis, Indiana, USA; Department of Family Medicine, Indiana University School of Medicine, Indianapolis, Indiana, USA; Black Dog Institute, University of New South Wales, Sydney, New South Wales, Australia.
Huiping Xu, Department of Biostatistics, Indiana University School of Medicine, Indianapolis, Indiana, USA.
Xiaochun Li, Department of Biostatistics, Indiana University School of Medicine, Indianapolis, Indiana, USA.
Matthew M Ruppert, Department of Medicine, University of Florida Health, Gainesville, Florida, USA; Precision and Intelligent Systems in Medicine (PrismaP), University of Florida, Gainesville, Florida, USA.
Christopher A Harle, Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA.
Shaun J Grannis, Center for Biomedical Informatics, Regenstrief Institute, Indianapolis, Indiana, USA; Department of Family Medicine, Indiana University School of Medicine, Indianapolis, Indiana, USA.
Data Availability
No new data were generated or analyzed in support of this research.
REFERENCES
- 1. Finnell JT, Overhage JM, Grannis S. All health care is not local: an evaluation of the distribution of Emergency Department care delivered in Indiana. AMIA Annu Symp Proc 2011; 2011: 409–16.
- 2. Friedman CP, Wong AK, Blumenthal D. Achieving a nationwide learning health system. Sci Transl Med 2010; 2 (57): 57cm29.
- 3. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA 1995; 274 (1): 35–43.
- 4. Lusk K. A decade of standardization: data integrity as a foundation for trustworthiness of clinical information. J AHIMA 2015; 86 (10): 54–7.
- 5. Black Book Research. Improving Provider Interoperability Congruently Increasing Patient Record Error Rates, Black Book Survey. April 10, 2018. https://blackbookmarketresearch.newswire.com/news/improving-provider-interoperability-congruently-increasing-patient-20426295. Accessed April 3, 2022.
- 6. Park A. HealthVerity Locks Down $100M to Grow Real-World Data Management Platform. 2021. https://rwr-news.com/healthverity-locks-down-100m-to-grow-real-world-data-management-platform-fiercebiotech/. Accessed April 3, 2022.
- 7. Amato K. Healthcare Investing Trends Report. 2021. https://www.himss.org/resources/healthcare-investing-trends-report. Accessed April 3, 2022.
- 8. Hillestad R, Bigelow JH, Chaudhry B, et al. Identity Crisis? Approaches to Patient Identification in a National Health Information Network. Santa Monica, CA: RAND Corporation; 2008.
- 9. HIMSS. HIMSS Applauds Senate in Removing Ban on Unique Patient Identifier from Labor-HHS Bill. 2021. https://www.himss.org/news/himss-applauds-senate-removing-ban-unique-patient-identifier-labor-hhs-bill. Accessed April 3, 2022.
- 10. Bailey M, Cole C, Henderson M, Massey C. How well do automated linking methods perform? Lessons from U.S. historical data. J Econ Lit 2020; 58 (4): 997–1044.
- 11. Privacy and Security Solutions for Interoperable Health Information Exchange. Perspectives on Patient Matching: Approaches, Findings, and Challenges. The Office of the National Coordinator for Health Information Technology; 2009. https://www.healthit.gov/sites/default/files/patient-matching-white-paper-final-2.pdf. Accessed April 10, 2022.
- 12. Gilbert R, Lafferty R, Hagger-Johnson G, et al. GUILD: GUidance for Information about Linking Data sets. J Public Health 2018; 40 (1): 191–8.
- 13. Pratt NL, Mack CD, Meyer AM, et al. Data linkage in pharmacoepidemiology: a call for rigorous evaluation and reporting. Pharmacoepidemiol Drug Saf 2020; 29 (1): 9–17.
- 14. Nechuta S, Mukhopadhyay S, Krishnaswami S, Golladay M, McPheeters M. Record linkage approaches using Prescription Drug Monitoring Program and mortality data for public health analyses and epidemiologic studies. Epidemiology 2020; 31 (1): 22–31.
- 15. Joffe E, Byrne MJ, Reeder P, et al. A benchmark comparison of deterministic and probabilistic methods for defining manual review datasets in duplicate records reconciliation. J Am Med Inform Assoc 2014; 21 (1): 97–104.
- 16. Libuy N, Harron K, Gilbert R, Caulton R, Cameron E, Blackburn R. Linking education and hospital data in England: linkage process and quality. Int J Popul Data Sci 2021; 6 (1): 1671.
- 17. Antonie L, Inwood K, Lizotte DJ, Andrew Ross J. Tracking people over time in 19th century Canada for longitudinal analysis. Mach Learn 2014; 95 (1): 129–46.
- 18. van Herk-Sukel MP, van de Poll-Franse LV, Lemmens VE, et al. New opportunities for drug outcomes research in cancer patients: the linkage of the Eindhoven Cancer Registry and the PHARMO Record Linkage System. Eur J Cancer 2010; 46 (2): 395–404.
- 19. Wang Y, Caggana M, Sango-Jordan M, Sun M, Druschel CM. Long-term follow-up of children with confirmed newborn screening disorders using record linkage. Genet Med 2011; 13 (10): 881–6.
- 20. Johnson JC, Soliman AS, Tadgerson D, et al. Tribal linkage and race data quality for American Indians in a state cancer registry. Am J Prev Med 2009; 36 (6): 549–54.
- 21. Ramezani M, Ilangovan G, Kum H-C. Evaluation of machine learning algorithms in a human–computer hybrid record linkage system. CEUR Workshop Proc 2021; 2846 (4): 25.
- 22. Harron K, Hagger-Johnson G, Gilbert R, Goldstein H. Utilising identifier error variation in linkage of large administrative data sources. BMC Med Res Methodol 2017; 17 (1): 23.
- 23. Winkler WE. Chapter 14 – Record linkage. In: Rao CR, ed. Handbook of Statistics. Amsterdam: Elsevier; 2009: 351–80.
- 24. Grannis SJ, Xu H, Vest JR, et al. Evaluating the effect of data standardization and validation on patient matching accuracy. J Am Med Inform Assoc 2019; 26 (5): 447–56.
- 25. Morris G, Farnum G, Afzal S, Robinson C. Patient Identification and Matching Final Report. Office of the National Coordinator for Health Information Technology: Audacious Inquiry; 2014.
- 26. Lusk K. Patient Matching in Health Information Exchanges. 2014. https://perspectives.ahima.org/patient-matching-in-health-information-exchanges/. Accessed April 3, 2022.
- 27. Office of the National Coordinator for Health Information Technology. HHS Releases Project US@ Draft Technical Specification Version 1.0 for Comment. 2021. https://www.hhs.gov/about/news/2021/06/16/hhs-releases-project-us-draft-technical-specification-for-comment.html. Accessed April 3, 2022.
- 28. A Comparison of Blocking Methods for Record Linkage. Cham: Springer International Publishing; 2014.
- 29. Learning Blocking Schemes for Record Linkage. AAAI; 2006.
- 30. Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI. Machine learning applications in cancer prognosis and prediction. Comput Struct Biotechnol J 2015; 13: 8–17.
- 31. Xu H, Hui SL, Grannis S. Optimal two-phase sampling design for comparing accuracies of two binary classification rules. Stat Med 2014; 33 (3): 500–13.
- 32. O’Connor SD, Silverman SG, Ip IK, Maehara CK, Khorasani R. Simple cyst-appearing renal masses at unenhanced CT: can they be presumed to be benign? Radiology 2013; 269 (3): 793–800.
- 33. Wadia R, Akgun K, Brandt C, et al. Comparison of natural language processing and manual coding for the identification of cross-sectional imaging reports suspicious for lung cancer. JCO Clin Cancer Inform 2018; 2: 1–7.
- 34. Casey A, Davidson E, Poon M, et al. A systematic review of natural language processing applied to radiology reports. BMC Med Inform Decis Mak 2021; 21 (1): 179.
- 35. Borsboom D, Mellenbergh GJ, van Heerden J. The concept of validity. Psychol Rev 2004; 111 (4): 1061–71.
- 36. Stemler S. A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. PARE 2004; 9 (4): 1–4. doi:10.7275/96jp-xz07.
- 37. Christen P. Febrl: an open source data cleaning, deduplication and record linkage system with a graphical user interface. In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Las Vegas, NV: Association for Computing Machinery; August 24–27, 2008: 1065–68.
- 38. Centers for Disease Control and Prevention. National Program of Cancer Registries (NPCR). 2020. https://www.cdc.gov/cancer/npcr/tools/registryplus/lp.htm. Accessed April 3, 2022.
- 39. Rock D, Grant H. Why Diverse Teams Are Smarter. 2016. https://hbr.org/2016/11/why-diverse-teams-are-smarter. Accessed April 3, 2022.
- 40. VanHouten JB. Universal Patient Identification: What It Is and Why the US Needs It. 2021. https://www.healthaffairs.org/do/10.1377/forefront.20210701.888615. Accessed April 3, 2022.
- 41. Grannis SJ, Williams JL, Kasthuri S, Murray M, Xu H. Evaluation of real-world referential and probabilistic patient matching to advance patient identification strategy. J Am Med Inform Assoc 2022; 29 (8): 1409–15.
- 42. Steven Posnack CS. Project US@. 2022. https://www.healthit.gov/buzz-blog/health-data/todays-the-day-for-project-us. Accessed April 3, 2022.
- 43. United States Core Data for Interoperability (USCDI). 2022. https://www.healthit.gov/isa/united-states-core-data-interoperability-uscdi. Accessed April 3, 2022.
- 44. Interoperability Standards Advisory. 2022. https://www.healthit.gov/isa/. Accessed April 3, 2022.
- 45. Guillet F, Hamilton HJ. Quality Measures in Data Mining. Berlin: Springer; 2007.
- 46. Bailey SR, Heintzman JD, Marino M, et al. Measuring preventive care delivery: comparing rates across three data sources. Am J Prev Med 2016; 51 (5): 752–61.
