Supplemental Digital Content is available in the text
Keywords: acetabulum, artificial intelligence, developmental dysplasia of the hip, Sharp's angle
Abstract
Developmental dysplasia of the hip (DDH) is common and features a widened Sharp's angle on pelvic x-ray images. Determination of Sharp's angle is essential for clinical decisions but adds substantially to the workload of orthopedic surgeons. To aid the diagnosis of DDH and reduce false negative diagnoses, a simple and cost-effective tool is proposed. The model was designed using artificial intelligence (AI) and evaluated for its ability to screen anteroposterior pelvic radiographs automatically, accurately, and efficiently.
Anteroposterior pelvic x-ray images (n = 11,574) were retrospectively collected from the PACS (picture archiving and communication system) database at Second Hospital of Jilin University. The Mask regional convolutional neural network (R-CNN) model was adopted and modified to detect 4 key points that delineate Sharp's angle. Of these images, 11,473 were randomly selected, labeled, and used to train and validate the modified Mask R-CNN model; the remaining 101 images comprised a test dataset. Python-based utility software was used to draw and calculate Sharp's angle automatically. The DDH diagnoses obtained via the model and via the traditional manual drawings of 3 orthopedic surgeons, each based on the degree of Sharp's angle, were compared and then evaluated relative to the final clinical diagnoses (based on medical history, symptoms, signs, x-ray films, and computed tomography images).
Sharp's angles on the left and right measured via the AI model (40.07° ± 4.09° and 40.65° ± 4.21°) were statistically similar to those measured by the surgeons (39.35° ± 6.74° and 39.82° ± 6.99°). The measurement time required by the AI model (1.11 ± 0.00 s) was significantly less than that of the doctors (86.72 ± 1.10, 93.26 ± 1.12, and 87.34 ± 0.80 s). The diagnostic sensitivity, specificity, and accuracy of the AI method for diagnosis of DDH were similar to those of the orthopedic surgeons; the diagnoses of both were moderately consistent with the final clinical diagnosis.
The proposed AI model can automatically measure Sharp's angle with a performance similar to that of orthopedic surgeons, but requires far less time. The AI model may be a viable auxiliary to clinical diagnosis of DDH.
1. Introduction
Developmental dysplasia of the hip (DDH) is a common disease, with an incidence of about 1 per 1000 and a high disability rate.[1] Early diagnosis and treatment of DDH are important for a better prognosis, but the early stage often presents with mild or no symptoms. Screening and diagnostic methods do exist, but the shortage of professional orthopedic surgeons in grassroots areas still makes early diagnosis of DDH difficult. Thus, a simple and cost-effective tool to help quickly diagnose DDH from a large number of anteroposterior pelvic images, and to reduce the rate of misdiagnosis, is urgently needed.
The most common method for diagnosing DDH uses x-ray images. DDH is characterized by a shallow, steep, and straight acetabular roof. In addition, Sharp's angle is widened; this is the angle between the line connecting the lower edges of the two pelvic teardrops and the line connecting the lower edge of the teardrop to the outer edge of the acetabulum. The degree of Sharp's angle reflects acetabular development and the coverage of the femoral head by the acetabulum, and can be used to diagnose and predict DDH progression. Thus, for patients with DDH the determination of Sharp's angle is essential for making clinical decisions.[2,3] In adults and older children with closed Y-shaped triradiate cartilage, Sharp's angle normally ranges from 33° to 38°. Hip dysplasia is suggested when Sharp's angle is greater than 47°.[4]
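Given the pixel coordinates of a teardrop's lower edge, the ipsilateral acetabular outer edge, and the contralateral teardrop (which fixes the inter-teardrop reference line), Sharp's angle follows from elementary vector geometry. A minimal sketch (function and argument names are illustrative, not taken from the study's software):

```python
import math

def sharps_angle(teardrop, acetabular_edge, contralateral_teardrop):
    """Sharp's angle in degrees: the angle between the inter-teardrop
    line and the line from the teardrop to the acetabular outer edge."""
    # Reference vector along the inter-teardrop line
    rx = contralateral_teardrop[0] - teardrop[0]
    ry = contralateral_teardrop[1] - teardrop[1]
    # Vector from the teardrop to the outer acetabular edge
    ax = acetabular_edge[0] - teardrop[0]
    ay = acetabular_edge[1] - teardrop[1]
    dot = rx * ax + ry * ay
    cross = rx * ay - ry * ax
    return abs(math.degrees(math.atan2(cross, dot)))
```

For instance, with the teardrops at (0, 0) and (20, 0) and the acetabular edge at (10, 8), the function returns about 38.7°, within the normal range quoted above.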
Presently, Sharp's angles are manually drawn and measured on pelvic x-ray images by a professional orthopedic surgeon. However, so many pelvic x-ray images are generated daily in hospitals that it is difficult for orthopedic surgeons to keep up with the demand. Furthermore, doctors in some grassroots areas are insufficiently trained to diagnose DDH. Thus, a convenient and efficient method to measure Sharp's angle is urgently needed.
In recent years, the application of artificial intelligence (AI) has led to advancements in speech and image recognition and text understanding.[5] There have also been many achievements in medical imaging research,[6] notably in radiology,[7] dermatology,[8] and ophthalmology.[9,10,11,12] The mainstream models in computer vision have been screened and compared, including a regional convolutional neural network (R-CNN) series,[13,14] the YOLO (You Only Look Once) series,[15,16] and SSD (Single Shot Detection).[17]
In this current study, Mask R-CNN was employed[18] to locate 4 key points on x-ray images that correspond to Sharp's angle, and these points were utilized to calculate Sharp's angle automatically. The validity of this proposed model was evaluated by comparing the automated measurements of Sharp's angle with the manual measurements taken by well-trained orthopedic surgeons.
2. Methods
This study was approved by the Ethics Review Board of Second Hospital of Jilin University (No. 223 2018). A requirement for patients’ informed consent was waived. Patients’ data was retrospectively collected and personal information was removed to ensure anonymity. A deep learning artificial neural network to measure Sharp's angle automatically was proposed and developed step by step (Fig. 1).
Figure 1.

Research flow chart.
2.1. Datasets
Standardized anteroposterior pelvic radiographs were collected from patients who visited Second Hospital of Jilin University between August 2009 and May 2018. The images were downloaded from the hospital's picture archiving and communication system (PACS) and used for modeling and analysis only.
Among the downloaded images, well-exposed standardized anteroposterior pelvic radiographs of patients aged 12 to 100 years were selected by 2 orthopedic professors and 2 radiology professors. Images were excluded from this study if any of the following applied: pelvic rotation or tilt, according to the criteria of Tönnis[19]; prior hip replacement or acetabular fracture surgery; diseases such as severe hip osteoarthritis or tuberculosis; tumors causing severe changes in hip morphology; severe hyperplasia of the upper edge of the acetabulum; or blurred teardrops.
Eventually, 12,528 x-ray images of patients who underwent digital radiography exams were initially included in this study, based on the inclusion and exclusion criteria. Of these, 954 were removed during the labeling process due to poor image quality, and 101 were randomly selected as test data (not used for deep neural network training); the remaining 11,473 were divided into training and validation datasets at a ratio of ∼8:2 (9248 training and 2225 validation images).
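The hold-out and ∼8:2 split can be reproduced with a seeded shuffle. A sketch (the study does not specify its randomization procedure or seed, so the exact counts here differ slightly from the reported 9248/2225):

```python
import random

def split_dataset(image_ids, n_test=101, val_frac=0.2, seed=42):
    """Hold out a random test set, then split the rest ~8:2
    into training and validation subsets."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    ids = list(image_ids)
    rng.shuffle(ids)
    test = ids[:n_test]
    rest = ids[n_test:]
    n_val = round(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], test  # train, val, test
```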
2.2. Labeling the pelvic x-ray images
The labeling tool LabelMe was utilized to mark the 4 key points (the lower edge of the teardrop and the outer edge of the acetabulum on both sides) that comprise Sharp's angle on standardized pelvic x-ray images (Fig. 2). Each labeled x-ray image was examined twice before inclusion in the training dataset. Each annotated image with marked corresponding key point coordinates was stored in a separate ∗.json file.
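Each LabelMe annotation is an ordinary JSON file whose `shapes` array holds the labeled points, so the key point coordinates can be read back out with the standard library. A sketch (the label names follow the A–D convention of Figure 2; the exact labels used by the annotators are an assumption):

```python
import json

def load_keypoints(json_path):
    """Return {label: (x, y)} for every point shape in a LabelMe file."""
    with open(json_path) as f:
        data = json.load(f)
    points = {}
    for shape in data.get("shapes", []):
        if shape.get("shape_type") == "point":
            x, y = shape["points"][0]  # a point shape has one coordinate pair
            points[shape["label"]] = (x, y)
    return points
```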
Figure 2.

Sharp's angles (black lines) on a pelvic X-ray image. Key point A, lower edge of the teardrop on the left; key point B, outer edge of the acetabulum on the left; key point C, lower edge of the teardrop on the right; key point D, outer edge of the acetabulum on the right. These 4 key points were predicted on the X-ray images and the Sharp's angles were automatically drawn and calculated with the Python-based utility. Green lines are an example of an annotated image with the corresponding key point coordinates stored in a separate ∗.json file. The areas in the green box, centered on key points in the red boxes, and the background are the 6 categories of the AI model.
2.3. Preprocessing training images
The x-ray images included in this study were generated by different equipment, so each was first scaled to x × y pixels (x = 1024) while maintaining the original image's aspect ratio. For model training, the images were padded to fill a 1024 × 1024 pixel square. The pixel mean was subtracted from the RGB (red, green, blue) channels before training. For use with the modified Mask R-CNN model, the ∗.json file with the annotated data was converted into an MS COCO style annotation file by our own Python-based utility software. For each custom key-point category, a square generated around the key point provided the vertex coordinates of its bounding border.
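The scaling, padding, and mean subtraction can be sketched as follows (nearest-neighbor resizing and COCO-style channel means stand in for the unspecified resizer and dataset means):

```python
import numpy as np

def preprocess(image, target=1024, channel_means=(123.7, 116.8, 103.9)):
    """Scale so the longer side equals `target` (aspect ratio kept),
    center-pad to a target x target square, subtract channel means."""
    h, w = image.shape[:2]
    scale = target / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbor resize via index maps (a stand-in for a real resizer)
    rows = np.minimum((np.arange(new_h) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / scale).astype(int), w - 1)
    resized = image[rows][:, cols]
    canvas = np.zeros((target, target, 3), dtype=np.float32)
    top, left = (target - new_h) // 2, (target - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas - np.asarray(channel_means, dtype=np.float32)
```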
2.4. Modifications of the Mask R-CNN model
The standard Mask R-CNN model was selected[18] and fine-tuned. ResNet101 + FPN (feature pyramid network) was retained as the network backbone, consistent with the standard model in the literature. Following suggestions in the literature,[18] the head network was slightly modified: in addition to the existing classification/border prediction/mask prediction branches, a parallel branch was added for key point detection (Fig. 3). For key point detection, each key point was treated as a one-hot binary mask whose dimensions were tested and optimized to 56 × 56 pixels.
Figure 3.

Modification of the standard Mask R-CNN model. An additional branch for key point detection was adopted in the head network section of the Mask R-CNN model. Apart from the irregular quadrilateral shown in Figure 2, the area near each of the 4 key points was also regarded as a separate category. Each such category then has 4 new corresponding key points. With the background, there were 6 classifications (NUM_CLASSES = 6).
For model training, the irregular quadrilateral (see Fig. 3) was first treated as the only category (excluding the background category) to predict the positions of the corresponding 4 key points. However, performance on the test data was not ideal. Our Python-based utility was then modified to label the area near each of the 4 key points as a separate category, automatically generating a small square centered on each original key point. This gave 5 categories: the original irregular quadrilateral (see Fig. 3) and the 4 new categories described above.
Each of the 4 vertices of the small red squares in Fig. 3 was considered a new key point, so the 4 additional categories provided a corresponding 4 × 4 additional key points. The predictions on the test set were then taken as the coordinates of the center point calculated from the vertices of each predicted small square. With this new training strategy, performance on the test dataset was significantly better than before. We suspect that too few key points provided insufficient supervision relative to the model's capacity, leading to overfitting. The loss function in this study was essentially consistent with that described in previous research,[18] with an added loss term for key point detection: for each key point, the softmax cross-entropy loss over its one-hot mask was minimized during training.
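Recovering each key point from its predicted square is then a centroid computation over the 4 detected vertices (a trivial sketch of this post-processing step):

```python
def square_center(vertices):
    """Average the 4 predicted square vertices to recover the key point."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return (sum(xs) / len(vertices), sum(ys) / len(vertices))
```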
2.5. Training and inference
Four graphics processing units (GPUs) were used per mini-batch, with 2 images per GPU, for an effective batch size of 8. A region of interest was considered positive if its intersection-over-union with the ground truth was at least 0.7; otherwise it was negative. The positive-to-negative sample ratio was 1:2 for each image. Stochastic gradient descent was used with an initial learning rate of 0.002, a momentum of 0.9, and a weight decay of 0.0001.
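These hyperparameters map naturally onto the configuration-class idiom of the matterport/Mask_RCNN code base (see section 2.6). A sketch using that project's attribute names (shown standalone rather than as an actual `mrcnn.config.Config` subclass, and the attribute mapping is our assumption):

```python
# Attribute names follow the Config class of matterport/Mask_RCNN; in the
# real project this class would subclass mrcnn.config.Config.
class SharpAngleConfig:
    NAME = "sharp_angle"
    GPU_COUNT = 4
    IMAGES_PER_GPU = 2           # 4 GPUs x 2 images = effective batch of 8
    NUM_CLASSES = 6              # background + quadrilateral + 4 key-point squares
    IMAGE_MIN_DIM = 1024
    IMAGE_MAX_DIM = 1024
    LEARNING_RATE = 0.002
    LEARNING_MOMENTUM = 0.9
    WEIGHT_DECAY = 0.0001
    ROI_POSITIVE_RATIO = 0.33    # i.e., 1 positive : 2 negative ROIs
```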
First, the rest of the network was frozen and the head network was trained for 30 epochs. At this point, the validation loss began to flatten and then increase slightly, indicating possible overfitting (Fig. 4). The initial learning rate was then divided by 10 and all network layers were unfrozen and trained for another 50 epochs to obtain the final result. As shown in Fig. 4, although the training loss continued to decrease slightly, the validation loss began to increase after 80 epochs (30 + 50).
Figure 4.

Training loss and validation loss data obtained during model training and tuning. These 2 figures were taken from drawings in TensorBoard. The X axis is the number of steps/epochs and the Y axis is the loss value.
The X-ray images in the training dataset were utilized for training and validation at a ratio of ∼8:2. In the test phase, the image was scaled to x × y pixels (x = 1024) while maintaining the original image's aspect ratio.
2.6. Code
The open source project matterport/Mask_RCNN (https://github.com/matterport/Mask_RCNN) from GitHub was modified to fit the training objects and goals of the present study. A pre-trained weight file (mask_rcnn_coco.h5) based on the MS COCO dataset was used to start the actual training.
2.7. Measurement and diagnosis
The proposed AI model was applied to the 101 anteroposterior pelvic radiographs in the test dataset. The lower edge of the teardrop and the outer edge of the acetabulum were detected, the positions of the 4 key points were predicted, and Sharp's angle was drawn and calculated automatically via our Python-based utility (Fig. 2). According to the diagnostic criteria,[4] Sharp's angles between 33° and 38° were considered normal; angles less than 32° are uncommon and probably of no clinical significance, whereas angles from 39° to 42° are at the upper limit of normal. Sharp's angles over 47° are characteristic of DDH, and the prognosis for hip joints with Sharp's angles between 42° and 47° is under investigation and requires dynamic observation.
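The diagnostic bands above translate directly into a threshold function; a sketch (the band labels are paraphrased, and the handling of exact boundary values is our reading of the criteria):

```python
def interpret_sharps_angle(angle_deg):
    """Map a Sharp's angle (degrees) to the diagnostic band used here."""
    if angle_deg < 33:
        return "below normal range (rarely of clinical significance)"
    if angle_deg <= 38:
        return "normal"
    if angle_deg <= 42:
        return "upper limit of normal"
    if angle_deg <= 47:
        return "borderline; dynamic observation required"
    return "characteristic of DDH"
```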
The Sharp's angles of the 101 test anteroposterior pelvic x-ray images were also measured traditionally by 3 rigorously trained attending orthopedic surgeons using the PACS tools, and a DDH diagnosis was based on the measured angles. To evaluate efficacy, the measurements of Sharp's angles performed by the AI model and the surgeons were compared with the clinical DDH diagnosis (based, as standard, on medical history, symptoms, signs, x-ray films, and computed tomography [CT] images).
2.8. Statistical analysis
SPSS 21.0 statistical software was used to organize and analyze the data. Count data are represented by the number of cases and percentage, i.e., n (%); comparisons between groups were performed with the Chi-squared (χ2) test. Measurement data are described as mean ± standard deviation. According to the Kolmogorov-Smirnov normality test, P > .05 indicated that the data conformed to a normal distribution.
Comparisons among groups were analyzed by the F test for analysis of variance. Two groups were compared using the least significant difference test and the Student t test. The consistency of the diagnostic results was tested with the kappa statistic. Statistical significance was set at P < .05.
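The kappa statistic measures agreement beyond chance; for a square agreement table it can be computed directly. A minimal sketch of Cohen's kappa for two raters (the values are illustrative, not the study's tables):

```python
def cohens_kappa(table):
    """Cohen's kappa for a k x k agreement table (rows: rater 1,
    columns: rater 2); 1 = perfect agreement, 0 = chance level."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    p_expected = sum(
        sum(table[i]) * sum(table[j][i] for j in range(k))
        for i in range(k)
    ) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)
```

For example, an agreement table [[20, 5], [10, 15]] yields kappa = 0.4.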
3. Results
The Sharp's angles of the 101 test anteroposterior pelvic radiographs, measured by the different approaches, were compared and analyzed (Table 1; Supplementary Table 1). On the left, the Sharp's angles measured by the AI model and the surgeons were 40.07° ± 4.09° and 39.35° ± 6.74°, respectively; the Student t test indicated no significant difference between these 2 measurements (t = 1.422, P = .158). On the right, the corresponding Sharp's angles were 40.65° ± 4.21° and 39.82° ± 6.99°, which were also statistically similar (t = 1.587, P = .116).
Table 1.
Left and right Sharp's angles measured by the AI model and surgeons∗.

Another important performance parameter is the time required to measure Sharp's angle. In the test dataset of 101 x-ray images, the measurement time of the AI model (1.19 ± 0.00 s) was significantly less than that of surgeons B (86.72 ± 1.10 s), C (93.26 ± 1.12 s), and D (87.34 ± 0.80 s; Fig. 5; Supplementary Table 2).
Figure 5.

Bar graph of the time consumed by the AI method and surgeons for measuring Sharp's acetabular angles for each X-ray image.
The developmental status of the acetabulum was evaluated according to the measured Sharp's angles, and the performance of the AI model in measuring Sharp's angle was judged by its diagnostic efficacy for DDH. Thus, the DDH diagnostic sensitivity, specificity, and accuracy of the AI model and the orthopedic surgeons were compared (Table 2; Supplementary Table 3). On the right side, the diagnostic sensitivities for DDH of the AI model and surgeons B, C, and D were 62.2, 78.4, 67.6, and 64.9%, respectively, and all shared the same specificity of 78.1%. The corresponding diagnostic accuracies were 74.3, 72.3, 69.3, and 70.3%, showing no significant difference.
Table 2.
DDH diagnostic accuracy of the AI method and surgeons, n (%)∗.

On the left side, the diagnostic sensitivities for DDH of the AI model and surgeons B, C, and D were 83.3, 83.3, 72.2, and 77.8%, respectively, with specificities of 81.9, 86.7, 85.5, and 88.0%, and accuracies of 79.2, 82.2, 80.2, and 84.2%. Thus, according to these evaluation parameters, the diagnostic ability of the AI model was equivalent to that of the orthopedic surgeons in diagnosing DDH based on Sharp's angles.
The consistency between the final clinical diagnosis and the Sharp's angle-based DDH diagnoses of the AI method and the surgeons was assessed (Table 4; Supplementary Table 3). The kappa test indicated that the AI method and the surgeons all showed moderate diagnostic consistency with the final diagnosis, with no significant difference among them. Thus, the Sharp's angle-based evaluation of the acetabulum by the AI method could play an important role in assisting the diagnosis of DDH.
Table 4.
Diagnostic consistency for DDH between diagnosis from AI method or surgeons and the confirmed diagnosis results, n∗.

4. Discussion
Deep neural network models are currently used in image research for image classification, object detection, and semantic segmentation, and researchers should pragmatically select what is most appropriate for the clinical application. For the present study, we considered that DDH diagnosis requires a classification judgment. However, surgeons do not intuitively accept a classification judgment produced by the black-box approach of an artificial neural network.[20] Therefore, we identified 4 sites essential for measuring Sharp's angle. The present model accurately locates these 4 key points, on both sides at the outer edge of the acetabulum and the lower edge of the teardrop, and Sharp's angles were then drawn from these points. Thus, the AI model and the traditional manual method used by surgeons share the same measurement procedure for Sharp's angle. Mask R-CNN has mainly been used for semantic segmentation and for locating key points on the human body; here, key point positioning was applied to medical x-ray images for the first time to calculate meaningful diagnostic angles from the detected sites. In addition, there was no significant difference between the measurements of the AI model and the surgeons.
When fine-tuning the proposed AI model, we found that the initial performance was not ideal. This may be because too few key points provided insufficient supervision relative to the model's capacity, leading to overfitting. When we appropriately increased the number of key points and categories, the performance of the model improved significantly. We also found that adding a parallel key point detection branch alongside the classification/border prediction/mask prediction branches of the standard Mask R-CNN did not impose a significant additional burden on model training.
Doctors in China have very heavy workloads. Their stressful working conditions not only threaten their physical and mental health, but negatively influence the quality and efficiency of medical services and patient safety.[21] The AI model proposed in the present study was able to measure Sharp's angle 74 times faster than the physicians could (Fig. 5). This suggests that the AI method could relieve the heavy daily workloads of orthopedic surgeons by improving their work efficiency.
In addition, the diagnostic accuracy for DDH based on Sharp's angle as measured by the AI model was comparable to that of the surgeons (Tables 2 and 3). The AI model could be applied in primary care hospitals, which provide 80% of the clinical care services in most countries,[22] even while the number of doctors and the available technology remain insufficient. In China, for example, only 17.6% of medical undergraduates and above become doctors in township hospitals.[23] Application of the AI model in grassroots medical facilities will help identify patients with an abnormal Sharp's angle, and could thus improve the accuracy and efficiency of diagnosis and treatment. As application of the AI model improves the efficiency of doctors’ daily work, the equity of medical care will improve, and the shortage of doctors and technology may be compensated to some extent.
Table 3.
DDH diagnostic performance of the AI method and 3 surgeons.

In clinical practice, hip pain in young and middle-aged patients is caused by dysplasia in almost 40% of cases. Yet, dysplasia due to mildly deficient acetabular development is often not serious: on pelvic x-ray images, Sharp's angle is only slightly wider than normal,[24,25] and these patients are very susceptible to misdiagnosis. Our results showed that the diagnostic accuracy for DDH based on Sharp's angle as measured by the AI model was equivalent to that of the surgeons. In addition, the AI model performed the measurement in far less time, making batch screening possible. By aiding diagnosis of DDH at an early stage, this AI method can contribute to better clinical decisions and treatment strategies, and improve the prognosis before cartilage degeneration occurs.
The overall accuracy of the automated auxiliary detection method for DDH was 76.73%. A final clinical diagnosis requires multiple indicators; this method can contribute to an accurate diagnosis, but many challenges remain before its ultimate application in the clinic.[20] In the future, a CNN model that measures the center-edge angle and the acetabular index will be constructed with the existing data. CT and magnetic resonance imaging (MRI) data can also be used to train the CNN model: CT images better exhibit acetabular dysplasia,[26,27] and MRI depicts changes in the joint cartilage.[28] A CNN model trained on multi-modality data (x-ray, CT, and MRI) will more accurately assist the diagnosis of DDH.
5. Conclusion
This study proposed a new method for auxiliary diagnosis of DDH. The method utilized a deep neural network to detect key points automatically on pelvic x-ray images, at the lower edge of the teardrop and the outer edge of the acetabulum, which were then used for automated calculation of the acetabular Sharp's angle. We modified the Mask R-CNN model as needed for detection of key points on medical images. The sensitivity, specificity, and accuracy of this method for auxiliary diagnosis of DDH are similar to those of surgeons, while it is far more time efficient.
Acknowledgments
We thank the following contributors for their help and support: Prof. Changfu Zhao; Qian Wang (statistical work); Prof. Sa Huang; Prof. Chen Li; Prof. Xin Li; Prof. He Liu; Prof. Qing Han; Dr. Xiaonan Wang; Dr. Yang Song; Dr. Zhonghan Wang; Dr. Yuhao Zheng; Dr. Boyan Zhang; Dr. Guangyu Chu; Dr. Han Song; Dr. Zhuhao Li; Dr. Chenyu Shi; Dr. Chaohua Gao; Dr. Kerong Yang; Dr. Shutao Wang; and Dr. Ruifeng Zhang.
Author contributions
Conceptualization: Qiang Li, Lei Zhong, Hongnian Huang.
Data curation: Qiang Li, Lei Zhong, Hongnian Huang, He Liu, Yanguo Qin, Yiming Wang, Zhe Zhou, Heng Liu, Wenzhuo Yang, Meiting Qin, Jing Wang, Yanbo Wang, Meng Xu, Ye Huang.
Formal analysis: Qiang Li, Lei Zhong, Hongnian Huang, He Liu, Yanguo Qin.
Funding acquisition: Jincheng Wang, Meng Xu, Ye Huang.
Investigation: Qiang Li, Lei Zhong, Hongnian Huang, He Liu, Yanguo Qin, Yiming Wang.
Methodology: Qiang Li, Lei Zhong, Hongnian Huang.
Project administration: Jincheng Wang, Meng Xu, Ye Huang.
Resources: Qiang Li, Lei Zhong, Hongnian Huang, He Liu, Yanguo Qin.
Software: Qiang Li, Hongnian Huang, Teng Zhou, Ye Huang.
Supervision: Jincheng Wang, Meng Xu, Ye Huang.
Validation: Qiang Li, Lei Zhong, Meng Xu.
Visualization: Qiang Li, Lei Zhong, Meng Xu.
Writing – original draft: Qiang Li, Ye Huang.
Writing – review & editing: Qiang Li, Ye Huang.
Supplementary Material
Footnotes
How to cite this article: Li Q, Zhong L, Huang H, Liu H, Qin Y, Wang Y, Zhou Z, Liu H, Yang W, Qin M, Wang J, Wang Y, Zhou T, Wang D, Wang J, Xu M, Huang Y. Auxiliary diagnosis of developmental dysplasia of the hip by automated detection of Sharp's angle on standardized anteroposterior pelvic radiographs. Medicine. 2019;98:52(e18500).
Abbreviations: AI = artificial intelligence, CT = computed tomography, DDH = developmental dysplasia of the hip, GPU = graphics processing unit, MRI = magnetic resonance imaging, PACS = picture archiving and communication system, R-CNN = regional convolutional neural network.
This work was funded by the Scientific Development Program of Jilin Province (no. 3D5177743429, 3D516D733429, and 20170204004GX). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
The authors declare that there is no conflict of interest.
Supplemental Digital Content is available for this article.
References
- [1]. Keller MS, Nijs EL. The role of radiographs and US in developmental dysplasia of the hip: how good are they? Pediatr Radiol 2009;39: Suppl 2: S211–5. [DOI] [PubMed] [Google Scholar]
- [2]. Tannast M, Hanke MS, Zheng G, et al. What are the radiographic reference values for acetabular under- and overcoverage? Clin Orthop Relat Res 2015;473:1234–46. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [3]. Omeroglu H, Inan U. Inherited thrombophilia may be a causative factor for osteonecrosis of femoral head in male patients with developmental dysplasia of the hip: a case series. Arch Orthop Trauma Surg 2012;132:1281–5. [DOI] [PubMed] [Google Scholar]
- [4]. Sharp IK. Acetabular dysplasia: the acetabular angle. J Bone Joint Surg Br Vol 1961;43:268–72. [Google Scholar]
- [5]. Beam AL, Kohane IS. Translating artificial intelligence into clinical care. JAMA 2016;316:2368–9. [DOI] [PubMed] [Google Scholar]
- [6]. Wang D, Khosla A, Gargeya R, et al. Deep learning for identifying metastatic breast cancer. Quantitative Biology 2016. [Google Scholar]
- [7]. Nezhad MZ, Zhu D, Sadati N, et al. SUBIC: a supervised bi-clustering approach for precision medicine. Statistics 2017. [Google Scholar]
- [8]. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [9]. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018;172:1122–31. e1129. [DOI] [PubMed] [Google Scholar]
- [10]. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016;316:2402–10. [DOI] [PubMed] [Google Scholar]
- [11]. Erickson BJ, Korfiatis P, Akkus Z, et al. Machine learning for medical imaging. Radiographics 2017;37:505–15. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [12]. Wang F, Zhang P, Qian B. Clinical risk prediction with multilinear sparse logistic regression. Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM 2014. [Google Scholar]
- [13]. Girshick R. Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision 2015. https://arxiv.org/abs/1504.08083. [Google Scholar]
- [14]. Ren S, He K, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 2017;39:1137–49. [DOI] [PubMed] [Google Scholar]
- [15]. Redmon J, Farhadi A. YOLO9000: Better, Faster, Stronger. IEEE Conference on Computer Vision & Pattern Recognition; 2017. [Google Scholar]
- [16]. Redmon J, Farhadi A. YOLOv3: an incremental improvement. arXiv 2018. https://arxiv.org/abs/1804.02767. [Google Scholar]
- [17]. Liu W, Anguelov D, Erhan D, et al. SSD: Single Shot MultiBox Detector. Computer Science 2015. [Google Scholar]
- [18]. He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. Computer Vision (ICCV), 2017 IEEE International Conference on; 2017. [Google Scholar]
- [19]. Tönnis D. Normal values of the hip joint for the evaluation of X-rays in children and adults. Clin Orthop Relat Res 1976:39–47. [PubMed] [Google Scholar]
- [20]. Müller VC, Bostrom N. Future progress in artificial intelligence: a poll among experts. AI Matters 2014;1:9–11. [Google Scholar]
- [21]. Michtalik HJ, Yeh HC, Pronovost PJ, et al. Impact of attending physician workload on patient care: a survey of hospitalists. JAMA Intern Med 2013;173:375–7. [DOI] [PubMed] [Google Scholar]
- [22]. Britt H, Miller GC, Henderson J, et al. General Practice Activity in Australia 2015–16. Sydney: Sydney University Press; 2016. [Google Scholar]
- [23]. National Health and Family Planning Commission. China's Health and Family Planning Statistical Yearbook 2014. Beijing: Peking Union Medical College Press; 2014. [Google Scholar]
- [24]. Sewell MD, Eastwood DM. Screening and treatment in developmental dysplasia of the hip-where do we go from here? Int Orthop 2011;35:1359–67. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [25]. Sharpe P, Mulpuri K, Chan A, et al. Differences in risk factors between early and late diagnosed developmental dysplasia of the hip. Arch Dis Child Fetal Neonatal Ed 2006;91:F158–162. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [26]. Akiyama M, Nakashima Y, Fujii M, et al. Femoral anteversion is correlated with acetabular version and coverage in Asian women with anterior and global deficient subgroups of hip dysplasia: a CT study. Skeletal Radiol 2012;41:1411–8. [DOI] [PubMed] [Google Scholar]
- [27]. Akiyama K, Sakai T, Koyanagi J, et al. Three-dimensional distribution of articular cartilage thickness in the elderly cadaveric acetabulum: a new method using three-dimensional digitizer and CT. Osteoarthritis Cartilage 2010;18:795–802. [DOI] [PubMed] [Google Scholar]
- [28]. Kim YJ. Novel cartilage imaging techniques for hip disorders. Magn Reson Imaging Clin N Am 2013;21:35–44. [DOI] [PubMed] [Google Scholar]