Abstract
Background:
Accurate classification of acetabular fractures remains difficult. To aid in the classification and teaching of acetabular fractures, our department developed a diagnostic algorithm that uses 1 standardized 3-dimensional (3D) reconstruction of a computed tomography (CT) scan (an exopelvic view without the femoral head) with 8 anatomical landmarks. The algorithm was integrated into a smartphone application (app). The main objective of this study was to test the efficacy of this algorithm and smartphone app.
Methods:
Fourteen reviewers (3 experts, 3 fellows, 3 residents, and 5 novice reviewers) evaluated a set of 35 CT scans of acetabular fractures in 2 phases. During the first phase, the scans (including axial 2-dimensional [2D] views and 3D multiplanar reconstruction views) were assessed by each reviewer twice, with an interval of 4 weeks between the readings to decrease recall bias. During that phase, the reviewers were provided with a diagram of the Letournel classification system with no guidelines for interpretation. During the second phase, performed 4 weeks after the first phase, 1 standardized 3D reconstruction (an exopelvic view without the femoral head) was reviewed twice, again with an interval of 4 weeks between the readings. During that phase, the reviewers used the smartphone app. The primary outcome was the accuracy of classification. Interobserver reliability, reading time, and the time needed for accurate classification were also recorded.
Results:
The accuracy of fracture classification was 64.5% when the standard method of analysis was used and 83.4% when the app was used (p < 0.001). Improvement was noted in all groups, with the expert group showing the least improvement (88.6% to 97.2%, p = 0.04) and the novice group showing the most improvement (42.0% to 75.5%, p < 0.001). Furthermore, use of the app greatly increased the accuracy of classification of complex fractures. The average reading time was 71.8 minutes when the standard method was used and 37.4 minutes when the app was used. The interobserver reliability improved in all groups, reaching excellent reliability (intraclass correlation coefficient [ICC] > 0.79).
Conclusions:
The Letournel classification system is difficult to understand and to learn but remains the only system guiding the surgical strategy for acetabular fractures. The impact of diagnostic algorithms is debatable. The most important finding of the present study is the high accuracy for inexperienced groups when the app was used. Another important finding is the high reliability of this method for the diagnosis of complex acetabular fractures.
Classification of acetabular fractures is difficult, and while the work of Judet and Letournel1-3 helped to improve the general understanding of these fractures, accurate categorization remains a challenge because of the complex 3-dimensional (3D) anatomy of the pelvis and the rarity of certain acetabular fracture variants. Various studies have shown that the rate of correct classification remains low among orthopaedic surgeons, even with the use of 2-dimensional (2D) and 3D computed tomography (CT) scans4,5. For example, it has been reported that inexperienced readers have an accuracy of only 11% when diagnosing these fractures on the basis of radiographs, a rate no better than chance6,7, and that general orthopaedists and senior orthopaedic residents have an accuracy of <65%7,8. Considering these results, some authors have tried to simplify the analysis of acetabular fractures by offering new classification systems or diagnostic algorithms9-13. Some have introduced relatively complex redefinitions of acetabular landmarks that do not result in a real clarification of the Letournel classification9,10. Others have opted for the development of a new classification system involving 3 main categories with subcategories11,13. To our knowledge, the accuracy of these new concepts has not been evaluated, and reducing the number of possible fracture patterns from 10 in the Letournel system to 3 principal categories introduces a certain amount of confusion. On the other hand, we believe that the Letournel system provides the clearest method with which to determine the classification and to define the surgical strategy, even if the 10 different patterns make the system difficult to understand for non-expert orthopaedic surgeons and radiologists.
Our department developed an easy, fast, and reliable method to correctly diagnose acetabular fractures on the basis of the Letournel classification system14. This teaching method relies on CT scans, especially 3D reconstructions, which have been demonstrated to be superior to radiographs and to improve the accuracy of classification5,7,8,14,15. In our method, 8 radiographic landmarks are systematically examined for fracture lines, including 3 anterior landmarks (iliac wing, linea arcuata, and anterior wall of the acetabulum), 3 “no man’s land” landmarks (roof of the acetabulum, quadrilateral surface, and obturator ring), and 2 posterior landmarks (posterior border of the iliac bone and posterior wall of the acetabulum)14.
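For readers who prefer a compact summary, the landmark checklist can be written as a small lookup structure. This is only a restatement of the list above; the key names and grouping labels are taken verbatim from the text, and the structure itself is illustrative rather than taken from the app's source code.

```python
# The 8 landmarks examined by the method, grouped as described in the text
# (ref. 14). Illustrative only; not the app's internal representation.
LANDMARKS = {
    "anterior": [
        "iliac wing",
        "linea arcuata",
        "anterior wall of the acetabulum",
    ],
    "no man's land": [
        "roof of the acetabulum",
        "quadrilateral surface",
        "obturator ring",
    ],
    "posterior": [
        "posterior border of the iliac bone",
        "posterior wall of the acetabulum",
    ],
}
```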
Software tools can improve the effectiveness of existing methods by facilitating standardization in how they are used16-19. Therefore, we integrated our classification method into a smartphone application (app). This smartphone app was designed to make our method available for a wide audience, to aid in these difficult diagnoses while minimizing the risk of errors, and to improve communication between medical professionals when transferring patients to referral centers. The objective of the present study was to analyze the accuracy, repeatability, and time required for correct classification using this smartphone app.
Materials and Methods
Analysis was conducted with use of a 3D exopelvic reconstruction of the fractured acetabulum with the femoral head removed, oriented to allow assessment of both the anterior and posterior edges of the hemipelvis (Fig. 1). This 3D view was chosen because it is the only view that is both necessary and sufficient for analyzing all of the landmarks used in this method. The evaluated algorithm requires analysis of some, but not all, landmarks and consists of 10 scenarios (1 for each pattern defined in the Letournel system). The algorithm is designed to offer a classification after 2 to 5 questions (Fig. 2; a schematic sketch in code follows the figure legends below). The 2 questions that begin the algorithm focus on possible fracture of the iliac wing and of the posterior edge of the ischium. The presence of an attachment between the sacroiliac joint and a fragment of the acetabular roof differentiates both-column fractures from anterior-column fractures associated with posterior hemitransverse fractures. Questions about a fracture of the obturator ring and the anterior edge of the acetabulum (pelvic inlet) direct the reviewer to the remaining predefined scenarios. An independent fragment of the posterior wall indicates (1) a simple posterior-wall fracture, (2) a posterior-wall fracture associated with a transverse fracture, or (3) a posterior-column fracture associated with a posterior-wall fracture. The absence of a posterior-wall fracture indicates an anterior-column fracture, a transverse fracture, a simple posterior-column fracture, or a T-shaped fracture, depending on the presence or absence of fracture of the obturator ring and of the linea arcuata. All of the above options were incorporated into the app, which also allows for the selection of the side of involvement (left or right). Each question is accompanied by an image with the associated areas of the acetabulum highlighted to facilitate fracture interpretation and classification (Fig. 3).
Fig. 1.
3D exopelvic reconstruction of an acetabulum with the femoral head removed.
Fig. 2.
Flowchart illustrating the diagnostic algorithm used in the application (app).
Fig. 3.
Screenshots from the app. Each image was accompanied by a question pertaining to specific findings: (A) Is there a fracture affecting the iliac wing? (B) Is there a fracture affecting the posterior edge of the ischial spine? (C) Is the roof of the acetabulum an independent part from the sacroiliac joint? (D) Is there a fracture to the obturator ring? (E) Is there a fracture to the anterior edge of the acetabulum? (F) Is there an independent posterior wall fracture?
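To make the branching concrete, the following is a minimal sketch of the question flow in the same spirit as Fig. 2. The branch ordering and the discriminators chosen for the posterior-wall and simple-pattern subgroups are assumptions made for illustration from the prose above; the app's actual flowchart resolves all 10 Letournel patterns in 2 to 5 questions.

```python
def classify_acetabular_fracture(answers: dict) -> str:
    """Simplified sketch of the diagnostic algorithm (cf. Fig. 2).

    `answers` maps landmark questions to boolean findings on the 3D
    exopelvic view, e.g. answers["iliac_wing"] is True when a fracture
    line crosses the iliac wing.
    """
    # Opening question: is the iliac wing fractured?
    if answers["iliac_wing"]:
        # A roof fragment that remains attached to the sacroiliac joint
        # indicates an anterior-column fracture with a posterior
        # hemitransverse component; a detached roof indicates a
        # both-column fracture.
        if answers["roof_attached_to_sacroiliac_joint"]:
            return "anterior column + posterior hemitransverse"
        return "both-column"

    if answers["independent_posterior_wall_fragment"]:
        # An independent posterior-wall fragment narrows the choice to 3
        # patterns; the discriminators below are illustrative assumptions.
        if answers["obturator_ring"]:
            return "posterior column + posterior wall"
        if answers["posterior_edge_of_ischium"]:
            return "transverse + posterior wall"
        return "posterior wall"

    # No posterior-wall fragment: the obturator ring and linea arcuata
    # separate the remaining patterns, as described in the text.
    if answers["obturator_ring"] and answers["linea_arcuata"]:
        if answers["posterior_edge_of_ischium"]:
            return "T-shaped"
        return "anterior column"
    if answers["obturator_ring"]:
        return "posterior column"
    return "transverse"


# Example: iliac wing fractured, roof detached from the sacroiliac joint.
print(classify_acetabular_fracture({
    "iliac_wing": True,
    "roof_attached_to_sacroiliac_joint": False,
    "independent_posterior_wall_fragment": False,
    "posterior_edge_of_ischium": True,
    "obturator_ring": True,
    "linea_arcuata": True,
}))  # -> both-column
```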
Thirty-five cases from the records of our acetabular fracture referral center database (containing records of >450 acetabular fractures treated between 2007 and 2015) were chosen by the operating surgeons (P.J. and G.R.) and were classified according to the Letournel system on the basis of the initial radiograph, the CT scans (2D and 3D reconstructions), and the surgical findings. Cases were disregarded and replaced when an agreement on classification could not be reached. Seventeen simple fracture patterns and 18 complex fracture patterns were chosen to constitute the 35-case study sample, matching the proportion of each fracture type in our series and in the literature20.
Fourteen reviewers were recruited for this study and were assigned to one of 4 groups depending on their experience level. The first group consisted of 3 experts (experienced surgeons) who had completed their learning curve in acetabular fracture surgery (>50 cases each)6, the second group consisted of 3 first-year acetabular fracture fellows, the third group consisted of 3 senior orthopaedic residents (postgraduate year [PGY]-4 and PGY-5) with little experience in pelvic and acetabular trauma surgery, and the fourth group consisted of 5 novice readers (junior residents [PGY-1] and final-year medical students).
Representative CT scans of the fractured acetabulum in each patient were assessed in 2 phases. During the first phase, axial 2D views and 3D multiplanar reconstruction views were provided on the Vue PACS system (Carestream). During this phase, reviewers were provided with only a diagram of the Letournel classification system, with no guidelines for interpretation. During the second phase, performed 4 weeks after the first phase, only 1 standardized 3D reconstruction (an exopelvic view without the femoral head) was provided, and the reviewers used the smartphone app to determine and record the classification. The total time for interpretation of each set of images was also recorded. The average time needed to obtain an accurate classification (ATAC) was calculated with use of the following formula:
ATAC = overall reading time / number of accurately classified fractures
The ATAC represents the actual power of the method to obtain an accurate diagnosis. To evaluate intraobserver reliability, the classification of these fractures was performed twice during each phase, but only the first reading in each phase was used for accuracy testing. To minimize recall bias, the interval between each reading was 4 weeks and the order of the cases was randomized for each reading.
Statistical analysis was performed with use of SPSS 18.0 software (IBM). The accuracy of the classifications for each group and each reading was compared with use of the Fisher exact test. The time needed for the readings was compared with use of the Student t test. Intraobserver agreement in each group, as well as interobserver reliability within each group, was calculated with use of the intraclass correlation coefficient (ICC). Intraobserver agreement was calculated as the mean ICC for each evaluator in a given group, whereas interobserver agreement was calculated between different members of the same group because the different groups did not have the same level of experience and were not comparable. These coefficients were interpreted and translated into levels of agreement with use of the Landis and Koch grading system, with 0.00 to 0.20 indicating slight agreement; 0.21 to 0.40, fair agreement; 0.41 to 0.60, moderate agreement; 0.61 to 0.80, substantial agreement; and ≥0.81, almost perfect agreement21. The level of significance was set at p = 0.05.
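The Landis and Koch thresholds used here translate directly into a small lookup; this is a sketch of the grading step only (the function name is ours, not part of the study's software).

```python
def landis_koch(icc: float) -> str:
    """Translate a correlation coefficient into the Landis and Koch
    level of agreement used in this study (ref. 21)."""
    if icc >= 0.81:
        return "almost perfect"
    if icc >= 0.61:
        return "substantial"
    if icc >= 0.41:
        return "moderate"
    if icc >= 0.21:
        return "fair"
    return "slight"  # 0.00 to 0.20


print(landis_koch(0.89))  # -> almost perfect (e.g., novice group with the app)
```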
Results
Accuracy
The accuracy of classification for the selected sample was 64.5% when the standard method of CT analysis was used and 83.4% when the algorithm was used (p < 0.001) (Table I). As expected, the expert group showed the least improvement, from 88.6% to 97.2%; nonetheless, this improvement was significant (p = 0.04). The fellow and resident groups showed good improvement, from 74.0% to 89.7% (p = 0.026) and from 50.1% to 72.5% (p = 0.002), respectively. The novice group showed the most improvement, from 42.0% to 75.5% (p < 0.001). When the app was used, there were no significant differences in accuracy between the expert and fellow groups or between the resident and novice groups; however, the expert and fellow groups had significantly better accuracy than the resident and novice groups (p < 0.05).
TABLE I.
Diagnostic Accuracy According to Expertise
| Group | Standard CT Analysis | Application Analysis | P Value |
| --- | --- | --- | --- |
| Expert group | 88.6% | 97.2% | 0.042 |
| Fellow group | 74.0% | 89.7% | 0.026 |
| Resident group | 50.1% | 72.5% | 0.002 |
| Novice group | 42.0% | 75.5% | <0.001 |
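As a rough check of the headline comparison, the reported p value can be reproduced from counts reconstructed out of the percentages. This sketch assumes 14 reviewers × 35 cases = 490 classifications per phase and treats readings as independent, which the paired study design does not strictly guarantee; it is an approximation, not the study's actual computation.

```python
from scipy.stats import fisher_exact

# Counts reconstructed from the reported accuracies (an approximation).
n = 14 * 35                            # 490 classifications per phase
standard_correct = round(0.645 * n)    # ~316 correct without the app
app_correct = round(0.834 * n)         # ~409 correct with the app

table = [
    [standard_correct, n - standard_correct],
    [app_correct, n - app_correct],
]
_, p = fisher_exact(table)
print(f"p = {p:.1e}")  # far below 0.001, consistent with the reported p < 0.001
```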
The fractures were then divided into 2 groups: simple fractures (anterior wall, anterior column, posterior wall, posterior column, and transverse fractures) and complex fractures (posterior column with posterior wall, transverse with posterior wall, anterior column or wall with posterior hemitransverse, T-shaped, and both-column fractures) (Table II). Only the novice group showed significant improvement when using the app to diagnose simple fractures, although the fellow group and the resident group showed a tendency toward significance. However, all groups showed significant improvement when using the app to diagnose complex fractures. The most common error that the reviewers made when using the app was confusing both-column fractures with anterior-column fractures associated with posterior hemitransverse fractures, and vice versa. We found that the question asking whether a fragment of the acetabular roof remains attached to the sacroiliac joint was the most difficult to answer.
TABLE II.
Diagnostic Accuracy According to Complexity of Fracture
| Group and Fracture Type | Standard CT Analysis | Application Analysis | P Value |
| --- | --- | --- | --- |
| Expert group | | | |
| Simple | 89% | 98% | 0.06 |
| Complex | 89% | 96.3% | 0.016 |
| Fellow group | | | |
| Simple | 81% | 97% | 0.08 |
| Complex | 69% | 87% | 0.028 |
| Resident group | | | |
| Simple | 62% | 85% | 0.07 |
| Complex | 38% | 66% | 0.008 |
| Novice group | | | |
| Simple | 47% | 87% | <0.001 |
| Complex | 35% | 70% | <0.001 |
Time for Interpretation
The average time for interpretation was 71.8 minutes when the standard method was used and only 37.4 minutes when the app was used. Table III shows the subgroup analysis of the time required for interpretation. One important measure was the ATAC, which improved from 1.6 minutes without the app to 1.03 minutes with the app in the expert group (p = 0.1), from 2.77 minutes to 1.5 minutes in the fellow group (p < 0.01), from 4.65 minutes to 1.6 minutes in the resident group (p < 0.05), and from 7.32 minutes to 1.54 minutes in the novice group (p < 0.01). The ATAC was significantly different between groups for the 2 readings without the app, but not for the 2 readings with the app.
TABLE III.
Time for Interpretation According to Expertise*
| Group | Standard CT Analysis (min)* | Application Analysis (min)* | P Value |
| --- | --- | --- | --- |
| Expert group | 48.3 (1.4) | 34.3 (1.0) | 0.1 |
| Fellow group | 69.8 (2.0) | 33.3 (1.0) | 0.015 |
| Resident group | 77.5 (2.2) | 38.3 (1.1) | 0.033 |
| Novice group | 83.8 (2.4) | 41.0 (1.2) | <0.001 |
*The first value in each column represents the average total time needed by the reviewers in each group to classify all 35 fractures. The value in parentheses represents the average reading time per fracture, calculated as overall reading time/number of fractures; it differs slightly from the ATAC reported in the text, which divides by the number of accurately classified fractures.
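For concreteness, the expert group's first-phase numbers reproduce both quantities. This is a sketch; small discrepancies with the in-text values come from per-reviewer averaging and rounding.

```python
cases = 35
total_reading_time = 48.3   # min; expert group, standard CT analysis (Table III)
accuracy = 0.886            # expert group, standard CT analysis (Table I)

# Per-fracture reading time, as reported in parentheses in Table III.
time_per_fracture = total_reading_time / cases                # 48.3/35 ~ 1.4 min

# ATAC divides by the number of accurately classified fractures instead.
atac = total_reading_time / round(accuracy * cases)           # 48.3/31 ~ 1.6 min

print(f"{time_per_fracture:.1f} min/fracture, ATAC {atac:.1f} min")
```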
Intraobserver Reliability
The expert group demonstrated almost perfect intraobserver agreement for both readings (ICC = 0.92 without the app and 0.95 with the app; p > 0.05). In contrast, the fellow group (ICC = 0.76 without the app and 0.96 with the app; p < 0.05), the resident group (ICC = 0.5 without the app and 0.93 with the app; p < 0.001), and the novice group (ICC = 0.42 without the app and 0.89 with the app; p < 0.001) all demonstrated significantly better intraobserver reliability when the app was used (Table IV). All groups had excellent intraobserver reliability (ICC ≥ 0.89) when the app was used.
TABLE IV.
Mean Intraobserver Reliability
| Group | Standard CT Analysis* | Application Analysis* | P Value |
| --- | --- | --- | --- |
| Expert group | 0.92 | 0.95 | >0.05 |
| Fellow group | 0.76 | 0.96 | <0.05 |
| Resident group | 0.50 | 0.93 | <0.001 |
| Novice group | 0.42 | 0.89 | <0.001 |
*The values are given as the intraclass correlation coefficient (ICC) for intraobserver agreement.
Interobserver Reliability
Interobserver agreement, used to evaluate the homogeneity within the individual groups, showed significant improvement in association with the use of the app in all groups except for the expert group, with the ICC improving from 0.86 to 0.93 in the expert group (p > 0.05), from 0.59 to 0.89 in the fellow group (p < 0.05), from 0.3 to 0.79 in the resident group (p < 0.01), and from 0.3 to 0.83 in the novice group (p < 0.01) (Table V).
TABLE V.
Mean Interobserver Reliability
| Group | Standard CT Analysis* | Application Analysis* | P Value |
| --- | --- | --- | --- |
| Expert group | 0.86 | 0.93 | >0.05 |
| Fellow group | 0.59 | 0.89 | <0.05 |
| Resident group | 0.30 | 0.79 | <0.01 |
| Novice group | 0.30 | 0.83 | <0.01 |
*The values are given as the intraclass correlation coefficient (ICC) for interobserver agreement.
Discussion
The use of smartphones and mobile apps is increasing in the medical community17-19. In fact, the use of smartphones by orthopaedic surgeons doubled over a period of 2 years and is especially prevalent among trainees17. Additionally, a recent survey found that smartphone apps are frequently used by surgeons with >15 years of experience18, even though the majority of available apps target a non-medical audience22. On the other hand, orthopaedic apps constitute only 16% of all surgical apps for professionals, and classification-related apps represent only 4% of orthopaedic apps19. The present study evaluated an app that was targeted to orthopaedic surgeons to help them classify difficult fractures; at the time of the study, this app was the only freely available app of its type on the App Store and Google Play markets. The accuracy of classification with use of this app varied from 72.5% to 97.2% depending on the experience level of the examiner. To our knowledge, these are the best accuracy rates for acetabular fracture classification that have been reported in the literature (Table VI). Our main objective for this app is to help untrained orthopaedic residents and/or emergency room doctors accurately classify fractures and communicate the correct fracture type to the expert.
TABLE VI.
Diagnostic Accuracy in the Literature
| Study and Experience Level | Radiographs | 2D Reconstruction | 3D Reconstruction |
| --- | --- | --- | --- |
| Hüfner et al.7 (1999) | | | |
| Junior | 11% | 30% | 60% |
| Senior | 32% | 55% | 64% |
| Kickuth et al.5 (2002) | | | |
| Expert (orthopaedists) | 60% | 68% | 87% |
| Expert (radiologists) | 78% | 81% | 89% |
| O’Toole et al.15 (2010) | | | |
| Senior | 48% | — | 68% |
| Garrett et al.8 (2012) | | | |
| Junior | — | 36.3% | 52.5% |
| Senior | — | 42.4% | 57.6% |
| Schäffler et al.16 (2013) | | | |
| Senior | — | 54% | — |
| Present study (with app) | | | |
| Expert | — | — | 97.2% |
| Senior | — | — | 89.7% |
| Junior | — | — | 75.5% |
Several systems are available for the classification of acetabular fractures (Arbeitsgemeinschaft für Osteosynthesefragen [AO], Harris, etc.)11,23, but the most common is the Letournel (or Judet-Letournel) system1,24,25. This system is difficult to understand and to learn but remains the only widely accepted system to guide the surgical approach for acetabular fractures25. Several studies have shown low accuracy in association with the use of the Letournel system26. For example, Hüfner et al., in a study involving the use of standard radiographs, reported a classification accuracy that was no better than chance7. Several authors have presented algorithms or modified systems to improve the accuracy of classification11-13. Some investigators have tried to classify the fracture patterns differently, but the benefit of doing so is doubtful as it introduces confusion with the 10 patterns described by Letournel11,13. Ly et al. found that accuracy improved from 50% to 59% when a group of residents used a classification algorithm that the authors themselves had designed12. The accuracy of classification clearly depends on the quality of the images and the method of classification used. In a recent study, Jouffroy et al. evaluated the impact of the use of a standardized method for the diagnosis of acetabular fractures14. That method involved not another classification system but the analysis of predefined anatomical landmarks on standardized 3D views. The authors showed a significant improvement in accuracy, from 60.5% to 77.1%, among inexperienced examiners (p = 0.001). However, a closer look at the published literature shows that classification accuracy depends largely on the experience level of the examiner. Garrett et al. and Hüfner et al. each compared results between junior and senior examiners7,8. In those 2 studies, the rates of correct classification among junior examiners were 52.5% and 60%, respectively. In comparison, the residents and medical students in the present study achieved correct classification rates of ≥72.5% when using the app. Among senior or expert examiners, the reported rates of accurate classification have varied from 54% to 89% in the literature5,7,8,15,16. In the present study, the rate of correct classification reached 97.2% for expert examiners when using the app, showing that this method provides value not just for inexperienced examiners. In fact, it should be underlined that all of the experience groups in the present study demonstrated a significant improvement in the accuracy of classification of complex fractures when the app was used.
It is interesting to consider the repeatability of the method. In several previous studies that have evaluated other methods, the intraobserver coefficient was never higher than 0.74 (Table VII)6,8,14,27-30. In the present study, the intraobserver coefficient was never lower than 0.89 when the app was used, regardless of the experience of the examiners. Furthermore, the interobserver coefficient was also analyzed in our study, and all experience groups showed significant improvement, with a coefficient of >0.8. This finding emphasizes that the use of this app increases homogeneity between different examiners in the same group, thus standardizing acetabular fracture classification.
TABLE VII.
Intraobserver Reliability in the Literature*
| Study and Experience Level | Radiograph | CT Scan | CT Scan + 3D |
| --- | --- | --- | --- |
| Visutipol et al.27 (2000) | 0.44 | | |
| Beaulé et al.6 (2003) | | | |
| Expert | 0.69 | 0.74 | |
| Senior | 0.67 | 0.69 | |
| Junior | 0.51 | 0.51 | |
| Ohashi et al.28 (2006) | | | |
| Expert | 0.42 | 0.70 | |
| O’Toole et al.15 (2010) | | | |
| Senior | 0.64 | 0.70 | |
| Garrett et al.8 (2012) | | | |
| Junior | 0.27 | 0.42 | |
| Senior | 0.29 | 0.44 | |
| Clarke-Jenssen et al.29 (2015) | | | |
| Senior | 0.46 | 0.60 | |
| Hutt et al.30 (2015) | | | |
| Expert | 0.42 | 0.51 | 0.80 |
| Present study (with app) | | | |
| Expert | — | — | 0.95 |
| Senior | — | — | 0.96 |
| Junior | — | — | 0.89 |
*The values are given as the intraclass correlation coefficient (ICC) for intraobserver agreement.
Finally, it is important to note that the use of the app greatly facilitates classification: the time needed for classification was cut in half compared with that needed when the standard method was used. To our knowledge, no other authors have reported the time required for acetabular fracture classification, so our data could not be compared with previous findings; it nonetheless seems logical that the use of more-complex decision trees would increase the time needed for classification.
The present study had some limitations. First, limiting the analysis to the 3D exopelvic view requires a perfect view, and, if the image is not perfect, it is not always possible to analyze the 8 necessary landmarks; however, with experience, a 3D exopelvic view is easy to obtain (at our institution, it takes <2 minutes). Second, this app provides the classification only, without analyzing other criteria that are important for the surgical indication (e.g., cartilage impaction, intra-articular fragments, etc.); the main objective of this app is to facilitate classification, and it therefore remains necessary to analyze the CT scans for accurate management. Finally, the present study was monocentric and may have been subject to selection bias in terms of the examiners, as our department is a tertiary referral center for acetabular fracture management and residents may have chosen our department because of an interest in acetabular fractures. The number of analyzed cases and the number of examiners mitigate this potential bias.
In conclusion, the most important finding of the present study is the high classification accuracy achieved by the inexperienced groups when using the app. Another important finding is the high reliability of this method for classifying complex acetabular fractures. This free app provides a very easy way to diagnose acetabular fractures, regardless of the level of experience of the examiner.
Footnotes
Investigation performed at Groupe Hospitalier Paris Saint Joseph, Paris, France
Disclosure: The smartphone application was developed exclusively with funds from our institution. There was no other source of funding. The Disclosure of Potential Conflicts of Interest forms are provided with the online version of the article (http://links.lww.com/JBJSOA/A31).
References
1. Judet R, Judet J, Letournel E. Fractures of the acetabulum: classification and surgical approaches for open reduction. Preliminary report. J Bone Joint Surg Am. 1964 Dec;46:1615-46.
2. Judet R, Letournel E. Les fractures du cotyle. Paris: Masson et Cie; 1974.
3. Letournel E. Acetabulum fractures: classification and management. Clin Orthop Relat Res. 1980 Sep;(151):81-106.
4. Jouffroy P. Diagnostic lésionnel des fractures du cotyle. In: Duparc J, editor. Cahiers d'enseignement de la SoFCOT: conférences d'enseignement. Paris: Elsevier Masson; 2001. p. 97-122.
5. Kickuth R, Laufer U, Hartung G, Gruening C, Stueckle C, Kirchner J. 3D CT versus axial helical CT versus conventional tomography in the classification of acetabular fractures: a ROC analysis. Clin Radiol. 2002 Feb;57(2):140-5.
6. Beaulé PE, Dorey FJ, Matta JM. Letournel classification for acetabular fractures. Assessment of interobserver and intraobserver reliability. J Bone Joint Surg Am. 2003 Sep;85(9):1704-9.
7. Hüfner T, Pohlemann T, Gänsslen A, Assassi P, Prokop M, Tscherne H. [The value of CT in classification and decision making in acetabulum fractures. A systematic analysis]. [German]. Unfallchirurg. 1999 Feb;102(2):124-31.
8. Garrett J, Halvorson J, Carroll E, Webb LX. Value of 3-D CT in classifying acetabular fractures during orthopedic residency training. Orthopedics. 2012 May;35(5):e615-20.
9. Prasartritha T, Chaivanichsiri P. The study of broken quadrilateral surface in fractures of the acetabulum. Int Orthop. 2013 Jun;37(6):1127-34. Epub 2013 Apr 24.
10. Harris JH Jr, Lee JS, Coupe KJ, Trotscher T. Acetabular fractures revisited: part 1, redefinition of the Letournel anterior column. AJR Am J Roentgenol. 2004 Jun;182(6):1363-6.
11. Harris JH Jr, Coupe KJ, Lee JS, Trotscher T. Acetabular fractures revisited: part 2, a new CT-based classification. AJR Am J Roentgenol. 2004 Jun;182(6):1367-75.
12. Ly TV, Stover MD, Sims SH, Reilly MC. The use of an algorithm for classifying acetabular fractures: a role for resident education? Clin Orthop Relat Res. 2011 Aug;469(8):2371-6. Epub 2011 Jun 4.
13. Lawrence DA, Menn K, Baumgaertner M, Haims AH. Acetabular fractures: anatomic and clinical considerations. AJR Am J Roentgenol. 2013 Sep;201(3):W425-36.
14. Jouffroy P, Sebaaly A, Aubert T, Riouallon G. Improved acetabular fracture diagnosis after training in a CT-based method. Orthop Traumatol Surg Res. 2017 May;103(3):325-9. Epub 2016 Dec 23.
15. O'Toole RV, Cox G, Shanmuganathan K, Castillo RC, Turen CH, Sciadini MF, Nascone JW. Evaluation of computed tomography for determining the diagnosis of acetabular fractures. J Orthop Trauma. 2010 May;24(5):284-90.
16. Schäffler A, Fensky F, Knöschke D, Haas NP, Becken AG 3rd, Stöckle U, König B. [CT-based classification aid for acetabular fractures: evaluation and clinical testing]. [German]. Unfallchirurg. 2013 Nov;116(11):1006-14.
17. Andrawis JP, Muzykewicz DA, Franko OI. Mobile device trends in orthopedic surgery: rapid change and future implications. Orthopedics. 2016 Jan-Feb;39(1):e51-6. Epub 2016 Jan 5.
18. Franko OI. Smartphone apps for orthopaedic surgeons. Clin Orthop Relat Res. 2011 Jul;469(7):2042-8. Epub 2011 May 6.
19. Kulendran M, Lim M, Laws G, Chow A, Nehme J, Darzi A, Purkayastha S. Surgical smartphone applications across different platforms: their evolution, uses, and users. Surg Innov. 2014 Aug;21(4):427-40. Epub 2014 Apr 7.
20. Matta JM. Operative treatment of acetabular fractures through the ilioinguinal approach: a 10-year perspective. J Orthop Trauma. 2006 Jan;20(1)(Suppl):S20-9.
21. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977 Mar;33(1):159-74.
22. Wiechmann W, Kwan D, Bokarius A, Toohey SL. There's an app for that? Highlighting the difficulty in finding clinically relevant smartphone applications. West J Emerg Med. 2016 Mar;17(2):191-4. Epub 2016 Mar 2.
23. Marsh JL, Slongo TF, Agel J, Broderick JS, Creevey W, DeCoster TA, Prokuski L, Sirkin MS, Ziran B, Henley B, Audigé L. Fracture and dislocation classification compendium - 2007: Orthopaedic Trauma Association classification, database and outcomes committee. J Orthop Trauma. 2007 Nov-Dec;21(10)(Suppl):S1-133.
24. Scheinfeld MH, Dym AA, Spektor M, Avery LL, Dym RJ, Amanatullah DF. Acetabular fractures: what radiologists should know and how 3D CT can aid classification. Radiographics. 2015 Mar-Apr;35(2):555-77.
25. Alton TB, Gee AO. Classifications in brief: Letournel classification for acetabular fractures. Clin Orthop Relat Res. 2014 Jan;472(1):35-8. Epub 2013 Nov 9.
26. Potok PS, Hopper KD, Umlauf MJ. Fractures of the acetabulum: imaging, classification, and understanding. Radiographics. 1995 Jan;15(1):7-23, discussion 23-4.
27. Visutipol B, Chobtangsin P, Ketmalasiri B, Pattarabanjird N, Varodompun N. Evaluation of Letournel and Judet classification of acetabular fracture with plain radiographs and three-dimensional computerized tomographic scan. J Orthop Surg (Hong Kong). 2000 Jun;8(1):33-7.
28. Ohashi K, El-Khoury GY, Abu-Zahra KW, Berbaum KS. Interobserver agreement for Letournel acetabular fracture classification with multidetector CT: are standard Judet radiographs necessary? Radiology. 2006 Nov;241(2):386-91. Epub 2006 Sep 27.
29. Clarke-Jenssen J, Øvre SA, Røise O, Madsen JE. Acetabular fracture assessment in four different pelvic trauma centers: have the Judet views become superfluous? Arch Orthop Trauma Surg. 2015 Jul;135(7):913-8. Epub 2015 May 1.
30. Hutt JRB, Ortega-Briones A, Daurka JS, Bircher MD, Rickman MS. The ongoing relevance of acetabular fracture classification. Bone Joint J. 2015 Aug;97-B(8):1139-43.