Author manuscript; available in PMC: 2024 Aug 1.
Published in final edited form as: Eur J Radiol. 2023 Jun 30;165:110955. doi: 10.1016/j.ejrad.2023.110955

Taking PI-QUAL beyond the Prostate: Towards a Standardized Radiological Image Quality Score (RI-QUAL)

A single-center retrospective pilot analysis to evaluate interreader agreement

Anton S Becker 1,2,*, Francesco Giganti 3,4, Andrei Purysko 5, Jonathan Fainberg 6, Hebert Alberto Vargas 1,2, Sungmin Woo 1
PMCID: PMC10404469  NIHMSID: NIHMS1917028  PMID: 37421773

Abstract

Purpose:

To compare the interreader agreement of a novel quality score, called the Radiological Image Quality Score (RI-QUAL), to the existing Prostate Imaging Quality (PI-QUAL) score for magnetic resonance imaging (MRI) of the prostate.

Methods:

A total of 43 consecutive scans were evaluated by two subspecialized radiologists who assigned scores using both the RI-QUAL and PI-QUAL methods. The interreader agreement was analyzed using three statistical methods: concordance correlation coefficient (CCC), intraclass correlation coefficient (ICC), and Cohen's kappa. Time needed to arrive at a quality judgment was measured and compared using the Wilcoxon signed rank test.

Results:

The interreader agreement for RI-QUAL and PI-QUAL scores was comparable, as evidenced by the high CCC (0.76 vs. 0.77, p=0.93) and ICC (0.86 vs. 0.87, p=0.93) values and the substantial Cohen's kappa values (0.61 vs. 0.64, p=0.85). Moreover, RI-QUAL assessment was faster than PI-QUAL (19 vs. 40 seconds, p=0.001).

Conclusion:

RI-QUAL is a new quality score that has comparable interreader agreement to the existing PI-QUAL score, but with the potential to be applied to different MRI protocols and even different modalities. Like PI-QUAL, RI-QUAL may also facilitate communication about quality to referring physicians, as it provides a standardized and easily interpretable score. Further studies are warranted to validate the usefulness of RI-QUAL in larger patient cohorts and for other imaging modalities.

Introduction

Magnetic resonance imaging (MRI) is a widely used imaging modality in clinical practice due to its high soft tissue contrast and lack of ionizing radiation. As with any other imaging test, the diagnostic accuracy of MRI relies heavily on image quality, which can be affected by multiple factors, such as patient motion, scanner hardware and software, and the image acquisition protocol. To ensure high-quality MR images, quality control procedures are generally implemented, such as scanner monitoring and protocol and hardware optimization [1]. Moreover, radiologists are encouraged to communicate with referring physicians about image quality to facilitate accurate diagnosis and treatment decisions [2].

Quality assessment in the diagnostic process is an important step to disentangle aleatoric and epistemic uncertainty. Aleatoric uncertainty (“how likely is it that this combination of findings represents a diagnosis?”) is captured through various processes, for example the ‘- reporting and data system’ (-RADS) scores [3] and standardized lexica for reporting certainty [4]. Epistemic uncertainty (“how certain are we that our assessment is correct and has not been influenced by specific quality aspects of this particular examination?”) is typically included as free text in radiology reports. This free text relies on subjective vocabulary reflecting the radiologist's or technologist's visual assessment of the images, which is prone to interobserver variability and lacks standardization. For example, the same artifact may be described as “motion artifact”, “movement artifact”, “peristalsis”, or “blurring” at different locations in the report.

The orchestration of quality control in large healthcare systems presents unique challenges. Typically, images from a single scanner are distributed among several radiologists, while any given individual radiologist may be tasked with reviewing images from a multitude of scanners. This dispersed workflow makes pattern recognition of technical issues an uphill task. It can be difficult for radiologists to identify scanner-specific or protocol-specific issues when they are viewing a diverse set of images from various scanners and using various protocols in their daily practice. Subtle but systematic artifacts or recurrent quality issues may go unnoticed or unattributed to their root cause because of the wide dispersion of exams among radiologists.

Recent advancements have given rise to the Prostate Imaging Quality (PI-QUAL) score, a 1-to-5 measure of prostate MRI quality [5]. It standardizes the evaluation of multiparametric prostate MRI against the objective technical recommendations of the Prostate Imaging Reporting and Data System (PI-RADS) guidelines [6], and also against a set of more subjective criteria (i.e., visual assessment) for each sequence. Standardized scores such as PI-QUAL may address the abovementioned quality control issues. By providing a structured language for radiologists to annotate image quality and technical issues, this information can be systematically extracted from reports. It could then be used to populate interactive dashboards displaying quality scores and accompanying issues, organized by scanner, protocol, and time. This would provide a centralized and objective perspective on scanner performance and offer a robust tool for tracking quality control over time. By transforming the often unstructured and dispersed knowledge about image quality into actionable data, this could revolutionize how radiologists, technologists, and physicists monitor and optimize MRI quality in large healthcare systems. It can enable pattern recognition at a system level, thereby promoting proactive maintenance and consistent imaging quality across scanners and protocols. However, PI-QUAL is inherently limited to MRI of the prostate and is currently only in its first iteration [5].
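To illustrate how such structured scores could feed a dashboard, the following minimal R sketch extracts a score from free-text reports and tabulates it by scanner. This is a sketch under stated assumptions: the "PI-QUAL: n" report phrasing, the column names, and the data are hypothetical, not part of any published reporting template.

  # Minimal sketch: pull a standardized quality score out of free-text
  # reports and aggregate it by scanner (all names and data hypothetical).
  library(dplyr)
  library(stringr)

  extract_piqual <- function(report_text) {
    # Capture the numeric grade from a standardized phrase such as "PI-QUAL: 4"
    as.integer(
      str_match(report_text, regex("PI-QUAL:\\s*([1-5])", ignore_case = TRUE))[, 2]
    )
  }

  reports <- data.frame(
    scanner = c("MR1", "MR1", "MR2"),
    report  = c("... PI-QUAL: 4 ...", "... PI-QUAL: 2 ...", "... PI-QUAL: 5 ...")
  )

  # Score distribution per scanner, ready to plot on a dashboard
  reports |>
    mutate(piqual = extract_piqual(report)) |>
    count(scanner, piqual)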

In this paper, we present a novel radiological image quality scoring system (RI-QUAL), which offers both standardization and the potential flexibility to be extended to other body parts and modalities. We aimed to assess its interreader agreement and assessment time in comparison with the established PI-QUAL score in the context of prostate MRI.

Methods

Study population

The study population consisted of 43 consecutive prostate MRI scans performed at a single institution in April 2023. Because this analysis was conducted as a quality assurance (QA) initiative, it was exempt from institutional review board approval.

Quality Rating

The RI-QUAL score was defined as a subjective 4-point scale from A to D (Figure 1):

Figure 1: RI-QUAL Scoring Card. (*This score must be accompanied by an explanation along with a recommendation for further management, if applicable.)

  A. Diagnostic - No artifacts or limitations. Excellent quality.

  B. Diagnostic - Mild/slight artifacts or limitations, unlikely (~10%) to have a negative effect on diagnostic confidence.

  C. Diagnostic - Moderate artifacts or limitations, possible (~50%) negative effect on diagnostic confidence.

  D. Non-diagnostic - Marked/severe artifacts or limitations; probable (~75%) negative effect = non-diagnostic exam. In clinical routine, this score must be accompanied by an explanation and, if applicable, a recommendation for further management (imaging or non-imaging).

Letters were chosen instead of numbers to avoid confusion with existing scoring systems. There were no other fixed criteria, but several examples for various modalities were provided in the clinical standard operating procedure document (see attached Supplement 1).
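For downstream analysis, the scoring card can be encoded as a lookup table and an ordered factor in R. This encoding is our illustrative assumption; the score itself is defined only by the card and the examples in Supplement 1.

  # RI-QUAL categories from Figure 1 as a lookup table (illustrative encoding)
  riqual_card <- data.frame(
    score      = c("A", "B", "C", "D"),
    diagnostic = c(TRUE, TRUE, TRUE, FALSE),
    definition = c(
      "No artifacts or limitations; excellent quality",
      "Mild/slight artifacts; unlikely (~10%) negative effect on confidence",
      "Moderate artifacts; possible (~50%) negative effect on confidence",
      "Marked/severe artifacts; probable (~75%) negative effect; non-diagnostic"
    )
  )

  # Ordered factor (D < C < B < A) so that downstream statistics treat the
  # scale as ordinal
  as_riqual <- function(x) factor(x, levels = c("D", "C", "B", "A"), ordered = TRUE)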

Modified PI-QUAL Assessment

We also evaluated the interreader agreement of PI-QUAL scoring in prostate MRI. PI-QUAL is a previously proposed and validated measure of prostate MRI quality that ranges from 1 to 5, with higher scores (i.e., PI-QUAL 4 and 5) indicating diagnostic image quality. The PI-QUAL score is based on the assessment of artifacts and technical acquisition parameters, such as slice thickness, field of view, and in-plane resolution, as per the PI-RADS technical recommendations and as explained in detail in [5]. We allowed for several modifications to the published PI-QUAL criteria:

  1. We allowed for the assessment of biparametric MRI, i.e., without contrast-enhanced sequences. This results in 4 being the maximum PI-QUAL score.

  2. We allowed for a slice thickness of 4 mm in T2-weighted imaging (as opposed to 3 mm) without deduction from the final score.

  3. Rather than systematically completing the whole checklist, we performed a stepwise assessment, omitting time-consuming steps (e.g., looking up scan parameters and checking slice angulation) if a previous step had already shown limited quality (see the sketch after this list). For example, if one series already has significant motion artifacts, looking up its detailed scan parameters becomes redundant for scoring purposes.

To distinguish this scoring approach from the officially recommended PI-QUAL scoring, we will henceforth refer to it as mPI-QUAL.
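The early-exit logic of modification 3 can be sketched in R as follows. The helper predicates and the simplified score mapping are hypothetical placeholders for the reader's visual checks and parameter look-ups; the actual PI-QUAL decision rules are more granular [5].

  # Hypothetical helper predicates operating on a simple list describing an exam
  has_severe_artifacts     <- function(exam) isTRUE(exam$severe_artifacts)
  meets_technical_criteria <- function(exam) isTRUE(exam$t2w_slice_mm <= 4)
  is_biparametric          <- function(exam) !isTRUE(exam$has_dce)

  score_mpiqual <- function(exam) {
    # Step 1: visual check first; severe artifacts settle the score and make
    # the time-consuming parameter look-up redundant (early exit).
    if (has_severe_artifacts(exam)) return(2L)
    # Step 2: only now check acquisition parameters against the modified
    # technical criteria (e.g., T2w slice thickness up to 4 mm allowed).
    if (!meets_technical_criteria(exam)) return(3L)
    # Step 3: biparametric exams (no DCE) cap at mPI-QUAL 4; mpMRI can reach 5.
    if (is_biparametric(exam)) 4L else 5L
  }

  score_mpiqual(list(severe_artifacts = FALSE, t2w_slice_mm = 3, has_dce = FALSE))  # 4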

Image Analysis

Two subspecialized genitourinary radiologists (AB and SW), with more than 4 and 8 years of experience in body imaging since board certification, respectively, and more than 400 clinically reported prostate MRIs annually, independently evaluated the 43 prostate MRI exams using both the RI-QUAL and mPI-QUAL scores. The readers were blinded to each other's scores and to the patients' clinical information. The readers also manually measured the readout time with a stopwatch, i.e., the time from when the images finished loading in PACS to when the score was decided in each case.

Statistical Analysis

Descriptive statistics were used to summarize the patient characteristics and MRI data. The interreader agreement of RI-QUAL and mPI-QUAL was assessed using the intraclass correlation coefficient (ICC), Lin's concordance correlation coefficient (CCC), and weighted Cohen's kappa. While the ICC is a more general form of Pearson's correlation, the CCC considers not only the correlation but also the agreement between the two variables and is thus more robust. Lastly, Cohen's kappa measures the degree of agreement in classification over what would be expected by chance. These three measures were chosen because they capture slightly different aspects of agreement and are widely used in the radiological literature. The resulting agreement scores were interpreted as follows: slight agreement (κ<0.20), fair (κ=0.20-0.39), moderate (κ=0.40-0.59), substantial (κ=0.60-0.79), or excellent (κ>0.80) agreement [7]. Student's t-test was performed on Fisher's z-transformed estimates to calculate two-sided p-values. Readout times were compared using the Wilcoxon signed-rank test. A p<0.05 was considered indicative of statistical significance. The statistical analyses were performed using R version 4.3.0 (R Foundation for Statistical Computing, Vienna, Austria).
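A minimal sketch of this analysis, assuming numerically coded scores (e.g., RI-QUAL A-D mapped to 4-1) and using the irr and DescTools packages; the paper does not specify its package choices or the kappa weighting scheme, so these and the toy data are assumptions.

  library(irr)        # icc(), kappa2()
  library(DescTools)  # CCC(): Lin's concordance correlation coefficient

  # Toy scores from two readers (not the study data); higher = better quality
  r1 <- c(4, 3, 3, 2, 4, 4, 1, 3)
  r2 <- c(4, 3, 2, 2, 3, 4, 1, 2)

  icc(cbind(r1, r2), model = "twoway", type = "agreement", unit = "single")
  CCC(r1, r2)$rho.c                         # estimate with 95% CI
  kappa2(cbind(r1, r2), weight = "equal")   # linearly weighted Cohen's kappa

  # Paired comparison of readout times (toy data, in seconds)
  t_riqual  <- c(8, 9, 7, 10, 8, 12, 6, 9)
  t_mpiqual <- c(33, 40, 28, 42, 35, 39, 30, 36)
  wilcox.test(t_riqual, t_mpiqual, paired = TRUE)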

Results

Both RI-QUAL and mPI-QUAL exhibited substantial to excellent interreader agreement between Reader 1 and Reader 2. The CCC values were 0.76 (95% CI, 0.59-0.86) for RI-QUAL and 0.77 (95% CI, 0.61-0.86) for mPI-QUAL. The ICC values were 0.86 (95% CI, 0.75-0.93) for RI-QUAL and 0.87 (95% CI, 0.76-0.93) for mPI-QUAL, indicating equally excellent agreement (both p=0.93). Cohen's kappa values were 0.61 (95% CI, 0.43-0.80) for RI-QUAL and 0.64 (95% CI, 0.45-0.83) for mPI-QUAL, indicating substantial and comparable (p=0.85) agreement.

The confusion matrices for RI-QUAL and mPI-QUAL scores of both readers are shown in Tables 1 & 2.

Table 1:

Confusion matrix of RI-QUAL scores (rows: Reader 1 (R1); columns: Reader 2 (R2)).

RI-QUAL   D   C   B   A
D         1   0   0   0
C         1   7   1   0
B         0   1   9   6
A         0   0   6  11

Table 2:

Confusion matrix of mPI-QUAL scores (rows: Reader 1 (R1); columns: Reader 2 (R2)). Note the maximum score of 4, since the majority of our scans (n = 33, 77%) were biparametric MRI (bpMRI; i.e., without IV contrast), and none of the mpMRI scans reached a perfect score.

mPI-QUAL   1   2   3   4
1          1   0   0   0
2          0   3   2   0
3          0   1   7   1
4          0   0   8  20
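Conceptually, such confusion matrices are plain cross-tabulations of the two readers' scores; in R this is a one-line table() call, shown here with toy data (not the study's scores).

  # Cross-tabulation of two readers' ordinal scores (toy data)
  lev <- c("D", "C", "B", "A")
  r1 <- factor(c("A", "B", "B", "C", "A", "D"), levels = lev, ordered = TRUE)
  r2 <- factor(c("A", "B", "C", "C", "B", "D"), levels = lev, ordered = TRUE)
  table(R1 = r1, R2 = r2)   # rows: Reader 1, columns: Reader 2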

Median readout times for RI-QUAL were 8 seconds (IQR: 7-10 seconds) for Reader 1 and 19 seconds (IQR: 15-23 seconds) for Reader 2. Both readers were significantly slower when performing the mPI-QUAL assessment, with 33 seconds (IQR: 27-42 seconds) for Reader 1 and 40 seconds (IQR: 36-44 seconds) for Reader 2 (both p<0.001).

Discussion

In this study, we evaluated the interreader agreement of our newly developed RI-QUAL and compared it to a variation of the established PI-QUAL for prostate MRI [5]. The results showed that both methods had high interreader agreement between the two subspecialized radiologists, with similar CCC, ICC, and Cohen's kappa values. These findings suggest that the proposed RI-QUAL could be a valuable tool to identify quality issues in large radiology practices, and for standardizing communication about quality to referring physicians.

The choice of rating scale is an important consideration in the development of any assessment tool. In our study, we used a 4-point rating scale for RI-QUAL. The main advantage of a 4-point scale, as opposed to a 5- or 3-point scale, is that it forces the rater to make a clear decision rather than relying on the midpoint as an uncertain choice [8]. Another advantage of using fewer points on a scale is the reduction of interrater variability: as the number of response options is limited, it is easier to reach consensus between raters [9]. Categories can also be collapsed post hoc during statistical analysis, for example PI-QUAL 4/5 and 1/2 [9] (see the sketch after this paragraph). Whether the distinction between A and B in our scale is necessary or useful, or whether it would be advantageous to collapse the scale further, leaving one relatively broad category in the middle, should be the subject of further investigation, and the answer may well vary between clinical contexts. For example, for assessment of the prostate after focal treatment, a three-point scale has recently been proposed which ties in with clinical management recommendations [10]. Furthermore, quality rating in ultrasound for HCC screening has an officially recommended 3-point scale, roughly translating to the lesion size that may be obscured [11]. Lastly, our interreader agreement values are in line with the published literature: Hötker et al. recently reported a kappa of 0.58 for both subjective assessment (5-point scale) and the PI-QUAL score [12]. For PI-QUAL alone, both Pötsch et al. [13] and Karanasios et al. [14] reported a kappa of 0.51, while Girometti et al. reported a slightly higher kappa of 0.55 [15]. Overall, our study suggests that a 4-point scale may be a suitable alternative to the popular 5-point Likert-like scale for rating systems in quality assessment tools for medical imaging.
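A minimal sketch of the post-hoc collapsing mentioned above, assuming a 5-point PI-QUAL vector; the forcats package and the three-group labels are our choices, with the groupings following the 1/2 vs. 4/5 example in [9].

  library(forcats)

  piqual <- factor(c(1, 2, 3, 4, 5, 4, 3, 5), levels = 1:5)
  collapsed <- fct_collapse(piqual,
    "1-2" = c("1", "2"),   # inadequate quality
    "3"   = "3",           # middle category
    "4-5" = c("4", "5")    # diagnostic quality
  )
  table(collapsed)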

Our results also showed that median readout times for RI-QUAL were significantly shorter than for mPI-QUAL, indicating that RI-QUAL could potentially improve workflow efficiency in a busy clinical setting. However, the difference was rather small. This was certainly in part because our cohort was composed mostly of bpMRI, obviating the need to assess the dynamic contrast-enhanced images. At our institution, mpMRI is only used in the post-treatment setting [16,17], so the maximum mPI-QUAL score assigned in our cohort was 4 instead of 5. Another reason for our relatively short readout time of <1 minute compared with the PI-QUAL literature (circa 6-8 minutes in [18]) is that we did not fill out a form for every case but rather performed a stepwise mPI-QUAL assessment, as described in the Methods. Both readers in our study are subspecialized radiologists and probably took imaging parameters implicitly into account when subjectively scoring the scans. Future studies could investigate whether this is the case for less experienced readers, who may exhibit higher variability in subjective RI-QUAL assessment and may thus benefit from the detailed and robust framework that PI-QUAL provides [6].

The aim of this study was not to replace PI-QUAL or other targeted quality scoring systems, but rather to compare RI-QUAL to an established reference standard. In fact, if such specific scoring systems are available, they should be preferred over the generic RI-QUAL unless there are strong arguments to do otherwise. One such argument may be standardization across a whole department or institution, although systems like PI-QUAL may well be integrated into such institutional guidelines. Since quality assessment is usually not at the center of the report, we would not advocate giving multiple scores in a single report, as this may unnecessarily complicate interpretation and hamper acceptance by radiologists and referring physicians.

Our study has some limitations that should be considered when interpreting the results. First, although RI-QUAL was conceived with general applicability in mind, we only evaluated the interreader agreement of RI-QUAL and mPI-QUAL in a small sample of prostate MRI scans read by two subspecialized radiologists. Future investigations should aim to evaluate its application in a more heterogeneous environment, including general radiologists, other organs and modalities, and potentially multiple healthcare systems. Furthermore, as PI-QUAL is an evolving initiative, improvements in future versions, for example the inclusion of bpMRI, will need to be taken into account. Second, since this was a QA initiative, we were unable to assess the diagnostic performance of RI-QUAL or mPI-QUAL in terms of sensitivity, specificity, or accuracy. A growing body of literature suggests an association between PI-QUAL and diagnostic accuracy [12,13]; such comparisons should be performed for RI-QUAL in the future. Similarly, since the proposed RI-QUAL was conceived with a clear quality assurance/improvement aim, it explicitly does not include endogenous factors (T1-hyperintense post-biopsy changes, (post)inflammatory peripheral zone heterogeneity), as evaluated previously by Hötker et al. [12,19]. Whether the merit of cleanly separating different sources of uncertainty outweighs the added complexity in workflow and report text remains to be investigated. Third, neither PI-QUAL nor RI-QUAL assigns a specific weight or priority to any of the pulse sequences. In practice, DCE is sometimes used to compensate for artifacts in DWI; however, the overall performance of biparametric MRI without DCE has been shown to be comparable [17]. Fourth, whenever multiple sequences and anatomical regions are summarized into a single score, there is an inevitable loss of information. In RI-QUAL, we suggest the possibility of "bumping" the score up if an artifact does not affect an important region. For example, if no prostate tumor is seen, an artifact obscuring a few locoregional lymph nodes may be deemed inconsequential. Nevertheless, assigning a standardized score does not absolve the radiologist of the duty to describe the location and magnitude of artifacts in the (ideally structured) report under the respective anatomical section, and/or in the impression or conclusion of the report.

The automated extraction of these scores from radiology reports or from the images themselves, and their incorporation into interactive dashboards, should be a future priority. This capability would enable real-time tracking of scanner performance, facilitate early identification of systematic issues, and may promote overall quality improvement in MRI operations. As these data become more accessible, they might also encourage proactive discussions about quality between radiologists, technologists, and referring physicians.

In conclusion, our study showed that our newly developed RI-QUAL has high interreader agreement and shorter readout times compared to mPI-QUAL in prostate MRI. The proposed RI-QUAL holds promise for standardizing quality assessment and communication in radiology. Further research should focus on its broader implementation, i.e., investigating interreader agreement in other MRI protocols and cross-sectional modalities, and on integration into routine clinical workflows. In particular, its real-world potential for improving MRI quality and operations in large, diverse healthcare systems should be prospectively investigated.

Supplementary Material

Supplement 1: Clinical standard operating procedure document with RI-QUAL examples for various modalities.

Acknowledgements:

The authors would like to thank the members of the departmental committee for standardized reporting, and the legal department, for their support of this QA initiative.

Funding:

The institution (MSKCC) receives funding from the NIH/NCI Cancer Center Support Grant P30 CA008748. The Department of Radiology (MSKCC) receives funding from the Peter Michael foundation. The funders had no role in study design, data collection, data analysis, interpretation, or writing of the report.

Footnotes

COI: The authors declare no competing interests. FG and ASB are section editors for this journal. Neither of them had any involvement in the editorial or peer-review process of this article.

Availability of data and code: All data contained within the article.

Ethics approval: Exempt (health care operations/quality assurance project).

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

[1] Stocker D, Manoliu A, Becker AS, Barth BK, Nanz D, Klarhöfer M, et al. Impact of different phased-array coils on the quality of prostate magnetic resonance images. Eur J Radiol Open 2021;8:100327.

[2] European Society of Radiology (ESR). ESR communication guidelines for radiologists. Insights Imaging 2013;4:143–6. doi:10.1007/s13244-013-0218-z.

[3] Turkbey B, Rosenkrantz AB, Haider MA, Padhani AR, Villeirs G, Macura KJ, et al. Prostate Imaging Reporting and Data System version 2.1: 2019 update of Prostate Imaging Reporting and Data System version 2. Eur Urol 2019;76:340–51. doi:10.1016/j.eururo.2019.02.033.

[4] Panicek DM, Hricak H. How sure are you, doctor? A standardized lexicon to describe the radiologist's level of certainty. AJR Am J Roentgenol 2016;207:2–3. doi:10.2214/AJR.15.15895.

[5] Giganti F, Allen C, Emberton M, Moore CM, Kasivisvanathan V; PRECISION study group. Prostate Imaging Quality (PI-QUAL): a new quality control scoring system for multiparametric magnetic resonance imaging of the prostate from the PRECISION trial. Eur Urol Oncol 2020;3:615–9. doi:10.1016/j.euo.2020.06.007.

[6] Giganti F, Kirkham A, Kasivisvanathan V, Papoutsaki M-V, Punwani S, Emberton M, et al. Understanding PI-QUAL for prostate MRI quality: a practical primer for radiologists. Insights Imaging 2021;12:59. doi:10.1186/s13244-021-00996-6.

[7] Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159–74. doi:10.2307/2529310.

[8] Chyung SYY, Roberts K, Swanson I, Hankinson A. Evidence-based survey design: the use of a midpoint on the Likert scale. Perf Improv 2017;56:15–23. doi:10.1002/pfi.21727.

[9] Giganti F, Dinneen E, Kasivisvanathan V, Haider A, Freeman A, Kirkham A, et al. Inter-reader agreement of the PI-QUAL score for prostate MRI quality in the NeuroSAFE PROOF trial. Eur Radiol 2022;32:879–89. doi:10.1007/s00330-021-08169-1.

[10] Giganti F, Dickinson L, Orczyk C, Haider A, Freeman A, Emberton M, et al. Prostate Imaging after Focal Ablation (PI-FAB): a proposal for a scoring system for multiparametric MRI of the prostate after focal therapy. Eur Urol Oncol 2023. doi:10.1016/j.euo.2023.04.007.

[11] Morgan TA, Maturen KE, Dahiya N, Sun MRM, Kamaya A; American College of Radiology Ultrasound Liver Imaging and Reporting Data System (US LI-RADS) Working Group. US LI-RADS: ultrasound liver imaging reporting and data system for screening and surveillance of hepatocellular carcinoma. Abdom Radiol (NY) 2018;43:41–55. doi:10.1007/s00261-017-1317-y.

[12] Hötker AM, Njoh S, Hofer LJ, Held U, Rupp NJ, Ghafoor S, et al. Multi-reader evaluation of different image quality scoring systems in prostate MRI. Eur J Radiol 2023;161:110733. doi:10.1016/j.ejrad.2023.110733.

[13] Pötsch N, Rainer E, Clauser P, Vatteroni G, Hübner N, Korn S, et al. Impact of PI-QUAL on PI-RADS and cancer yield in an MRI-TRUS fusion biopsy population. Eur J Radiol 2022;154:110431. doi:10.1016/j.ejrad.2022.110431.

[14] Karanasios E, Caglic I, Zawaideh JP, Barrett T. Prostate MRI quality: clinical impact of the PI-QUAL score in prostate cancer diagnostic work-up. Br J Radiol 2022;95:20211372. doi:10.1259/bjr.20211372.

[15] Girometti R, Blandino A, Zichichi C, Cicero G, Cereser L, De Martino M, et al. Inter-reader agreement of the Prostate Imaging Quality (PI-QUAL) score: a bicentric study. Eur J Radiol 2022;150:110267. doi:10.1016/j.ejrad.2022.110267.

[16] Becker AS, Kirchner J, Sartoretti T, Ghafoor S, Woo S, Suh CH, et al. Interactive, up-to-date meta-analysis of MRI in the management of men with suspected prostate cancer. J Digit Imaging 2020;33:586–94. doi:10.1007/s10278-019-00312-1.

[17] Woo S, Suh CH, Kim SY, Cho JY, Kim SH, Moon MH. Head-to-head comparison between biparametric and multiparametric MRI for the diagnosis of prostate cancer: a systematic review and meta-analysis. AJR Am J Roentgenol 2018;211:W226–41. doi:10.2214/AJR.18.19880.

[18] Giganti F, Lindner S, Piper JW, Kasivisvanathan V, Emberton M, Moore CM, et al. Multiparametric prostate MRI quality assessment using a semi-automated PI-QUAL software program. Eur Radiol Exp 2021;5:48. doi:10.1186/s41747-021-00245-x.

[19] Hötker AM, Dappa E, Mazaheri Y, Ehdaie B, Zheng J, Capanu M, et al. The influence of background signal intensity changes on cancer detection in prostate MRI. AJR Am J Roentgenol 2019;212:823–9. doi:10.2214/AJR.18.20295.
