2021 Oct 13;3(6):e210136. doi: 10.1148/ryai.2021210136

Figure 3:

Workflow diagram of the annotation process for producing the ground truth. For pneumothorax annotations, each image was first given a preliminary positive or negative classification by an algorithm that applied text analysis to the image's radiology report. All preliminary positive images, plus five times that number of preliminary negative images randomly selected from each year, were then presented to the annotators. These images were first read by tier 1 and/or tier 2 annotators and then passed to the appropriate tier 3 annotator: images judged clearly pneumothorax-positive or pneumothorax-negative were reviewed by a tier 3B annotator (radiology registrar), whereas images meeting the criteria for "unsure positive" were reviewed by a tier 3A annotator (radiology consultant). Images that met the exclusion criteria were tagged as such. When a tier 3 annotator was not confident, the image was referred for adjudication by a thoracic radiologist (who also served as a tier 3A annotator). Once the appropriate tier 3 annotator had given their ground truth opinion, each image underwent a quality validation process in which it was manually reviewed to ensure the quality of both annotation and anonymization. The result of this validation process was taken as the ground truth expressed in the final public dataset. At all stages of this process, annotators also marked images as positive or negative for rib fractures and chest tubes, with higher-tier annotators able to overrule lower-tier annotators when they deemed it appropriate.
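The tiered routing described in the caption can be sketched as a small decision function. This is purely illustrative: the function names, category strings, and tier labels below are hypothetical assumptions, since the paper does not publish annotation-pipeline code.

```python
# Hypothetical destination labels for an image moving through the tiers;
# these names are invented for illustration, not taken from the paper.
TIER_3A = "tier_3A_consultant"       # radiology consultant
TIER_3B = "tier_3B_registrar"        # radiology registrar
ADJUDICATION = "thoracic_radiologist"
EXCLUDED = "excluded"


def route_image(lower_tier_reading: str, meets_exclusion: bool = False) -> str:
    """Route an image after its tier 1/2 reading, per the caption's workflow.

    lower_tier_reading: one of "clear_positive", "clear_negative",
    or "unsure_positive" (assumed category names).
    """
    if meets_exclusion:
        return EXCLUDED
    if lower_tier_reading in ("clear_positive", "clear_negative"):
        # Clearly positive or negative images go to a tier 3B annotator.
        return TIER_3B
    if lower_tier_reading == "unsure_positive":
        # "Unsure positive" images go to a tier 3A annotator.
        return TIER_3A
    raise ValueError(f"unknown reading: {lower_tier_reading!r}")


def escalate_if_unconfident(tier3_is_confident: bool, current: str) -> str:
    """A tier 3 annotator who is not confident refers the image onward
    for adjudication by the thoracic radiologist."""
    return current if tier3_is_confident else ADJUDICATION
```

For example, a clearly negative image would be routed to the registrar, and only escalated to the thoracic radiologist if the registrar was not confident in the call.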
