BMC Musculoskeletal Disorders
. 2025 Sep 30;26:884. doi: 10.1186/s12891-025-09137-2

Early diagnosis of knee osteoarthritis severity using vision transformer

Punita Panwar 1, Sandeep Chaurasia 1,, Jayesh Gangrade 2,, Ashwani Bilandi 3
PMCID: PMC12487547  PMID: 41029374

Abstract

Knee osteoarthritis (K-OA) is a progressive joint condition with global prevalence, deteriorating over time and affecting a significant portion of the population. It develops as the joints gradually wear out: the cartilage that cushions the joint erodes, allowing the bones to rub together and causing stiffness, discomfort, and difficulty moving. People with osteoarthritis find it hard to perform simple activities such as walking, standing, or climbing stairs, and the ongoing pain and disability can also lead to sadness and anxiety. Knee osteoarthritis therefore exerts a sustained impact on both the economy and society. Typically, radiologists assess knee health from MRI or X-ray images and assign KL grades. MRI excels at visualizing soft tissues such as cartilage, menisci, and ligaments, directly revealing the cartilage degeneration and joint inflammation crucial for osteoarthritis (OA) diagnosis. In contrast, X-rays primarily show bone and can only infer cartilage loss through joint space narrowing, a late indicator of OA. This makes MRI superior for detecting early changes and subtle lesions often missed by X-rays. However, manual diagnosis of knee osteoarthritis is laborious and time-consuming. In response, deep learning methodologies such as the vision transformer (ViT) have been implemented to enhance efficiency and streamline workflows in clinical settings. This research leverages a ViT for knee osteoarthritis KL grading, achieving an accuracy of 88%. It illustrates that a simple transfer learning technique with this model yields superior performance compared to more intricate architectures.

Keywords: Knee osteoarthritis, Deep learning, MRI images, X-ray, Vision transformer (ViT)

Introduction

Mechanical strain causes knee osteoarthritis (K-OA), which worsens over time. This degenerative disease gradually erodes the protective articular cartilage of the knee joint. The WHO estimates that 9.6% of men and 18% of women over 60 have symptomatic osteoarthritis [1]. K-OA is a progressive condition that develops over a period of 10 to 15 years, affecting the three main compartments of the knee joint: the medial, lateral, and patellofemoral joints. This significantly interferes with daily activities. Collectively, these compartments form a modified hinge joint, allowing flexion, extension, and limited rotational motion [2, 3]. Primary risk factors contributing to the onset of K-OA include gender, age, obesity, traumatic injury, genetic predisposition, bone abnormalities, and lifestyle choices, as presented in Fig. 1 [4, 5].

Fig. 1 Causes of knee osteoarthritis

Plain radiographs (X-rays) are frequently employed in routine assessments of knee OA. Osteoarthritis erodes cartilage, narrowing the space between the bones of the knee joint. However, knee OA symptoms may emerge before changes appear on plain radiographs, and X-rays cannot detect the structural characteristics of OA or signs of progression that may accelerate disease development [6]. MRI, a major technological advance, can precisely evaluate joint structural deterioration, and hospital radiologists often choose the more sensitive MRI for early OA identification [7]. Relying solely on knee X-ray images to assess K-OA severity is therefore limiting.

Using Vision Transformers (ViT) on MRI images for Knee Osteoarthritis (K-OA) diagnosis offers significant advantages over X-rays. MRI provides detailed visualization of soft tissues like cartilage, allowing for earlier and more comprehensive detection of OA. It can identify subtle changes and various pathologies (e.g., bone marrow lesions) not visible on X-rays, and it does so without ionizing radiation. When combined with ViTs, MRI data enables the model to extract rich, complex features, leading to potentially higher accuracy in grading, reduced subjectivity in diagnosis, and improved potential for monitoring disease progression. This approach leverages the detailed information from MRI for a more advanced and efficient diagnosis of K-OA.

Radiologists assess and categorize the images using a 5-point ordinal scale known as the Kellgren and Lawrence (KL) scale, where grade 0 represents a "normal" stage and grade 4 signifies a "severe" stage of K-OA. Each grade on the KL scale corresponds to distinct pathological characteristics observed in knee X-ray images. The interpretation of each grade is as follows (an illustrative label mapping follows the list):

  • Grade 0: Pathological features are not present, indicating a normal knee joint.

  • Grade 1: There is doubtful narrowing of the joint space and possible osteophytic lipping (formation of bone outgrowths).

  • Grade 2: Definite osteophytes are observed, and there may be possible narrowing of the joint space.

  • Grade 3: Moderate multiple osteophytes are observed, accompanied by clear reduction in joint space, some indication of sclerosis (heightened bone density), and possible bone end deformity.

  • Grade 4: Significant osteophytes are noted, alongside pronounced reduction in joint space, severe sclerosis, and distinct bone end deformity.
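For implementation purposes, the grades above map naturally onto integer class labels used by a classifier. The following dictionary is a purely illustrative sketch; the names and helper are not taken from the paper:

```python
# Illustrative mapping of KL grades to short descriptions (hypothetical names).
KL_GRADES = {
    0: "normal joint, no pathological features",
    1: "doubtful joint-space narrowing, possible osteophytic lipping",
    2: "definite osteophytes, possible joint-space narrowing",
    3: "moderate multiple osteophytes, definite narrowing, some sclerosis",
    4: "large osteophytes, marked narrowing, severe sclerosis, bone deformity",
}

def describe(grade: int) -> str:
    """Return the textual description for a predicted KL grade."""
    return KL_GRADES.get(grade, "unknown grade")
```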

The KL classification score relies on radiological characteristics such as cartilage narrowing and the formation of osteophytes within the joint [8]. Three-dimensional (3D) MR images enable comprehensive visualization of the entire knee as a unified organ, illustrating all the tissues within the joint [9]. Between 2010 and 2012, approximately 52.5 million individuals received a diagnosis of arthritis, with 22.7 million experiencing arthritis-attributable limitations [10]. This figure rose by a further 1.9 million from 2013 to 2015, reaching 54.4 million individuals affected by arthritis, and projections suggest that by 2040 arthritis will affect approximately 78.4 million people [11]. This study aims to predict knee osteoarthritis (K-OA) severity, with the following objectives:

  • Assess the usefulness of Vision Transformer (ViT) for diagnosing knee osteoarthritis severity using MRI images.

  • X-ray imaging for K-OA has limitations in recognizing soft tissue and grading severity, which makes developing and assessing robust deep learning algorithms difficult. The dataset used here addresses these issues by offering a wide range of MRI scans from people at different stages of OA.

  • Assessing ViT efficacy in predicting osteoarthritis severity using criteria including accuracy, precision, recall, and f1-score.

Literature review

Recent research has used DL models, including vision transformers, to identify osteoarthritis accurately and early from MRI images. DL methods are increasingly used to detect and classify osteoarthritis in X-ray and MRI images. This section reviews relevant work on applying deep learning to knee osteoarthritis analysis from X-ray and MRI imaging.

Zhang et al. [12] first classified knee joints in radiographic images using a Residual Neural Network (ResNet). ResNet combined with a Convolutional Block Attention Module (CBAM) is then used to predict Kellgren-Lawrence (KL) grades. The model achieves a multi-class average accuracy of 74.81%.

Wang et al. [13] prioritized high-confidence samples and managed low-confidence instances. Experiments used five-fold cross-validation for five-class and early-stage OA assessment. The method outperforms others in five-class OA classification with a mean accuracy of 70.13%, and it excels at distinguishing KL0 from KL2 in early-stage OA identification, despite requiring human intervention.

Cueva et al. [14] present a semi-automated deep neural network based on a fine-tuned ResNet-34. The model detects OA lesions in both knees according to the KL scale. Training employs a publicly available dataset, while validation uses a private dataset, and transfer learning is applied to handle dataset imbalance. The model achieves an average multi-class accuracy of 61%.

Chen et al. [15] proposed a two-step technique. Knee joints are first localized using a one-stage YOLOv2 network, exploiting the uniform size of knee joints in X-ray images. CNN models are then fine-tuned to classify the detected knee joint images. A novel adjustable ordinal loss penalizes misclassifications with a larger predicted-to-actual KL grade distance, reflecting the ordinal nature of KL grading. The fine-tuned VGG19 model with the proposed ordinal loss achieves 69.70% classification accuracy in knee KL grading.

In the study by Dalia et al. [16], an object detection approach based on YOLOv5 and classification models using VGG-16 and ResNet are introduced. They attained 69.80% accuracy using Osteoarthritis Initiative (OAI) X-ray scans from 4,796 individuals.

Tiulpin et al. [17] used a deep convolutional neural network evaluated on a dataset of X-ray images from 3,000 individuals, totalling 5,960 knees. Its performance was not very satisfactory, achieving an average accuracy of 66.71%.

Pedoia et al. [18] investigated knee osteoarthritis identification by analysing T2-map MRI with a deep learning approach. They employed a densely connected convolutional neural network evaluated on data from 4,384 individuals, achieving an accuracy of 83.44%.

Guan et al. [19] focused on forecasting the progression of knee osteoarthritis pain using deep learning. They employed an artificial neural network with data from 4,674 individuals at risk of knee arthritis. The achieved accuracy of 80% was deemed inadequate, and further enhancements were considered necessary.

Tiulpin et al. [20] introduced a method to forecast knee osteoarthritis progression using a deep convolutional neural network. Their study incorporated X-ray images and clinical data from 2,129 individuals. The deep CNN achieved an accuracy of 81%, which fell short of the anticipated standard.

Alshareef et al. [21] employed a pre-trained Vision Transformer (ViT), fine-tuning its parameters. They used X-ray images from 4,130 patients sourced from the Osteoarthritis Initiative (OAI) and attained a 70% accuracy rate.

Numerous studies have thus used deep learning to analyze knee radiographs for osteoarthritis. A consistent research gap emerges from this literature: most work relies on X-ray images, whose limited soft-tissue detail constrains the achievable grading accuracy.

To tackle this issue, a dataset of MRI images was utilized in this study to improve upon the accuracy scores previously reported in studies detecting osteoarthritis from knee X-ray images.

Materials and methods

This section presents the methodology followed to achieve the objectives of the current study. Section 3.1 elaborates on the dataset, Sect. 3.2 outlines the pre-trained deep learning network utilized in the experiments, Sect. 3.3 describes the experimental settings and the training process, and Sect. 3.4 presents the performance metrics.

Dataset

This study included MRI data from Dr. Navneet Imaging & Path Lab and Kamal Diagnostic Center, examined by an experienced physician from Mahatma Gandhi Hospital, Jaipur. MRI scans from 1,530 people are available in DICOM format. Each MRI scan has 130 to 140 slices, showing the knee from numerous perspectives. The T1 coronal view, which shows the knee from the front, was used for analysis. Since osteoarthritis symptoms usually appear at age 45 and older, the dataset contains people in this age range. MRI scans of both the left and right knees of osteoarthritis patients are included.

To ensure consistency between left and right knee images, all left knee MRI images were flipped [22]. The images were then saved in JPEG format with a resolution of 512 × 512 pixels using MicroDicom viewer software. The final dataset comprises 750 MRI images covering the different severity grades: 250 images for Grade 0, 200 for Grade 1, 150 for Grade 2, 100 for Grade 3, and 50 for Grade 4. Table 1 provides the total count of patients and images.
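For illustration, a scripted equivalent of this preprocessing might look like the sketch below. The authors performed the export with MicroDicom viewer software; the code assumes the pydicom, NumPy, and Pillow packages, and the file paths are hypothetical.

```python
# Hypothetical scripted equivalent of the preprocessing step:
# read a DICOM slice, mirror left-knee images, and save as a 512x512 JPEG.
import numpy as np
import pydicom
from PIL import Image

def dicom_to_jpeg(dicom_path: str, jpeg_path: str, is_left_knee: bool) -> None:
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)

    # Scale pixel intensities to 0-255 for JPEG export.
    pixels -= pixels.min()
    pixels /= max(pixels.max(), 1e-6)
    image = Image.fromarray((pixels * 255).astype(np.uint8))

    # Mirror left-knee slices so left and right knees share one orientation.
    if is_left_knee:
        image = image.transpose(Image.Transpose.FLIP_LEFT_RIGHT)

    image.resize((512, 512)).save(jpeg_path, format="JPEG")

# Example call with placeholder paths.
dicom_to_jpeg("slice_0070.dcm", "slice_0070.jpg", is_left_knee=True)
```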

Table 1.

Total number of patients and images

Grade | Patients (Left Leg) | Patients (Right Leg) | Total Patients | Number of Images
KL0 | 31 | 29 | 60 | 250
KL1 | 22 | 19 | 41 | 200
KL2 | 32 | 29 | 61 | 150
KL3 | 21 | 8 | 29 | 100
KL4 | 5 | 2 | 7 | 50

The data pre-processing procedures are illustrated in Fig. 2.

Fig. 2 Data collection and pre-processing procedures

Vision transformer

The Vision Transformer (ViT) classifies images by applying a Transformer-based architecture to image patches. The image is split into uniform-sized patches, each patch is linearly embedded, position embeddings are added, and the resulting vector sequence is passed through a Transformer encoder. A common approach is to add a learnable "classification token" to the sequence for classification [24].

In the current study, the ViT has been implemented using the Keras Sequential API. The model's initial stage integrates a ViT, a transformer-centric architecture known for its effectiveness on visual data. After the ViT layer, a Flatten layer transforms the multidimensional output of the ViT into a one-dimensional vector, followed by Batch Normalization layers that normalize the inputs and stabilize the learning dynamics. A Rectified Adam optimizer is initialized with a learning rate of 1e-4, chosen for its adaptive learning rate behavior, which can enhance training stability and convergence speed. The model is compiled with this optimizer and a categorical cross-entropy loss with label smoothing set to 0.2 to address overfitting and bolster generalization. Accuracy serves as the evaluation metric monitored during training. The images are resized to a target dimension of 224 × 224 pixels and are required to be in RGB color mode.
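As a concrete illustration, the following is a minimal sketch of such a pipeline. It assumes the third-party vit-keras and TensorFlow Addons packages; the paper does not name the exact ViT implementation, so the ViT-B/16 backbone and the final Dense layer are assumptions.

```python
# Minimal sketch of the described ViT classifier (assumed libraries and backbone).
import tensorflow_addons as tfa
from tensorflow import keras
from vit_keras import vit

NUM_CLASSES = 5      # KL grades 0-4
IMAGE_SIZE = 224     # images resized to 224 x 224 RGB

# Pre-trained ViT backbone for transfer learning, without its original head.
vit_backbone = vit.vit_b16(
    image_size=IMAGE_SIZE,
    pretrained=True,
    include_top=False,
    pretrained_top=False,
)

model = keras.Sequential([
    vit_backbone,
    keras.layers.Flatten(),            # multidimensional ViT output -> 1-D vector
    keras.layers.BatchNormalization(), # stabilizes the learning dynamics
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tfa.optimizers.RectifiedAdam(learning_rate=1e-4),
    loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.2),
    metrics=["accuracy"],
)
```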

The architecture of the ViT is illustrated in Fig. 3.

Fig. 3 The architecture of the Vision Transformer (ViT)

Experimental setup & training process

The knee osteoarthritis (K-OA) experiments were run on a dedicated workstation: a 64-bit Ubuntu 22.04.2 LTS machine with an Intel i9-10850K CPU, 64 GiB of RAM, and an NVIDIA GeForce RTX 3080 Ti GPU.

This study split the dataset into training, testing, and validation subsets in a 7:1.5:1.5 ratio. The Vision Transformer model was trained for 20 epochs.
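A minimal sketch of such a 70/15/15 split is shown below, assuming scikit-learn and stratification by KL grade; the exact splitting procedure is not described in the paper, so the function and arguments are illustrative.

```python
# Illustrative 70/15/15 stratified split (assumed approach, not the paper's code).
from sklearn.model_selection import train_test_split

def split_dataset(paths, labels, seed=42):
    # First carve out 70% of the samples for training.
    train_p, rest_p, train_y, rest_y = train_test_split(
        paths, labels, train_size=0.70, stratify=labels, random_state=seed)
    # Split the remaining 30% evenly into validation and test (15% each).
    val_p, test_p, val_y, test_y = train_test_split(
        rest_p, rest_y, test_size=0.50, stratify=rest_y, random_state=seed)
    return (train_p, train_y), (val_p, val_y), (test_p, test_y)
```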

Performance metrics

The K-OA classifier models were evaluated using conventional performance metrics, including sensitivity (commonly known as recall), accuracy, specificity, precision, and F1-score [23, 24]. Accuracy represents the ratio of correctly classified cases to the total number of cases.

It is computed to evaluate the overall effectiveness of the given method on the dataset and is expressed mathematically in Eq. (1):

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{1}$$

TP (true positive) denotes instances accurately recognized as positive, FN (false negative) denotes positive cases mistakenly classified as negative, FP (false positive) signifies instances incorrectly labelled as positive, and TN (true negative) indicates negative cases correctly identified as negative.

Subsequently, sensitivity (recall) in Eq. (2) gauges the likelihood of correctly identifying all positive instances in the dataset. It is thus the ratio of accurately predicted patients to all relevant occurrences.

$$\text{Sensitivity (Recall)} = \frac{TP}{TP + FN} \tag{2}$$

Precision, as described in Eq. (3), is the ratio of correctly identified positive cases to all elements predicted to be positive.

$$\text{Precision} = \frac{TP}{TP + FP} \tag{3}$$

Lastly, the F1-score, depicted in Eq. (4), combines precision and recall by taking their harmonic mean.

$$F1\text{-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{4}$$
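For reference, these four metrics can be computed with scikit-learn as sketched below. This is an assumption about tooling (the paper does not state which library was used), and the label arrays are purely illustrative.

```python
# Illustrative metric computation with scikit-learn (assumed tooling).
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 2, 3, 4, 2, 1, 0]   # ground-truth KL grades (illustrative)
y_pred = [0, 1, 2, 3, 4, 1, 1, 0]   # model predictions (illustrative)

print("Accuracy :", accuracy_score(y_true, y_pred))
# Weighted averaging aggregates per-grade scores for the multi-class case.
print("Precision:", precision_score(y_true, y_pred, average="weighted"))
print("Recall   :", recall_score(y_true, y_pred, average="weighted"))
print("F1-score :", f1_score(y_true, y_pred, average="weighted"))
```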

Result analysis

The results of this study, covering all outcomes and classification models from diverse viewpoints, are detailed in this section. Table 2 presents the performance of the Vision Transformer model applied to MRI images. The Vision Transformer model achieves an 88% test accuracy on the MRI dataset, with precision, recall, and F1-score of 89%, 88%, and 88% respectively, after 20 epochs.

Table 2.

Performance of vision transformer model

Overall test accuracy | Precision | Recall | F1-score
88% | 89% | 88% | 88%

The findings can likewise be validated by analysing the confusion matrix, shown for the Vision Transformer model in Fig. 4. Of the 60 Grade 0 test samples, the model correctly classified 53 and misclassified the remaining 7. Grade 1 fared similarly, with 56 of 60 samples classified correctly. Grades 2, 3, and 4 showed different outcomes: Grade 2 yielded 38 correct predictions with the rest misclassified, while Grades 3 and 4 were highly accurate, each with 59 correct predictions out of 60, as illustrated in Fig. 4.

Fig. 4 Confusion matrix of the Vision Transformer (ViT)

The classification model demonstrated varying accuracy levels across the grades, as summarised in Table 3 (a short computation sketch follows the table). Grades 3 and 4 showed exceptional performance, each achieving 98.33% accuracy, indicating nearly perfect identification of instances in these grades. Grade 1 also displayed high accuracy, reaching 93.33%. Grade 0, while relatively accurate at 88.33%, was lower than Grades 1, 3, and 4, and Grade 2 had the lowest accuracy at 63.33%, indicating greater difficulty in classifying instances of this grade.

Table 3.

Grade wise accuracy

Grade 0 | Grade 1 | Grade 2 | Grade 3 | Grade 4
88.33% | 93.33% | 63.33% | 98.33% | 98.33%
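The grade-wise accuracies in Table 3 follow directly from the confusion-matrix counts described above; the following is a minimal sketch assuming NumPy, using the per-grade counts stated in the text accompanying Fig. 4.

```python
# Per-grade accuracy derived from the reported confusion-matrix counts.
import numpy as np

correct_per_grade = np.array([53, 56, 38, 59, 59])  # diagonal counts from Fig. 4
samples_per_grade = np.array([60, 60, 60, 60, 60])  # test samples per grade

per_grade_accuracy = 100 * correct_per_grade / samples_per_grade
for grade, acc in enumerate(per_grade_accuracy):
    print(f"Grade {grade}: {acc:.2f}%")   # 88.33, 93.33, 63.33, 98.33, 98.33
```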

Figure 5 shows the training and validation accuracy and loss curves for the Vision Transformer model. In Fig. 5(a), both training and validation accuracy improve noticeably as the number of epochs increases; in Fig. 5(b), both training and validation loss decrease with increasing epochs.

Fig. 5 Training and validation (a) accuracy and (b) loss for the Vision Transformer (ViT) model

Comparisons to current state of the art research

Table 4 compares our proposed study to leading studies. For a balanced comparison, we included studies published between 2019 and 2023.

Table 4.

The performance of our proposed study in detecting osteoarthritis from knee X-ray images is compared to other state-of-the-art studies

Ref. | Year | Classification Techniques | Dataset | Accuracy
Chen, Pingjun, et al. [15] | 2019 | YOLOv2, VGG, ResNet, and DenseNet | 4,130 X-ray images | 69.7%
Leung, Kevin, et al. [25] | 2020 | ResNet34 | X-ray images from 728 patients | 87%
Liu, et al. [26] | 2020 | Faster R-CNN | 2,770 X-ray images | 82.5%
Dalia, Yuvraj, et al. [16] | 2021 | YOLOv5, VGG16, and ResNet | X-ray images from 4,796 patients | 69.8%
Pedoia, Valentina, et al. [27] | 2019 | DenseNet | T2-sequence MRI images from 4,384 subjects | 83.4%
Swiecicki, Albert, et al. [28] | 2021 | Faster R-CNN and VGG-16 | 9,739 MRI images | 71.9%
Thomas, et al. [29] | 2020 | CNN | 40,280 X-ray images | 71%
Y. Wang, et al. [30] | 2021 | CNN + YOLO | 4,506 X-ray images | 95%
Yuniarno, et al. [31] | 2022 | Deep CNN | 390 X-ray images | 83%
B. C. Dharmani et al. [32] | 2023 | EfficientNet-B1 | 9,739 X-ray images | 89%
J. H. Cueva, et al. [33] | 2023 | Fine-tuned ResNet-34 | 4,796 X-ray images | 61%
Proposed method | 2024 | Vision Transformer (ViT) | 750 MRI images | 88%

Table 4 presents a comparative overview of our proposed Vision Transformer (ViT) model's performance against other state-of-the-art approaches for osteoarthritis detection. A critical analysis of this table reveals significant variations in reported accuracies, which can primarily be attributed to differences in imaging modalities, dataset characteristics, and algorithmic approaches. Notably, the majority of the cited studies utilized X-ray images for OA detection and grading [15, 16, 25, 29–33]. While X-rays are widely accessible and cost-effective, their inherent limitation in visualizing soft tissues like cartilage, as discussed previously, can constrain the achievable accuracy for subtle OA changes. Our proposed study, along with the works by Pedoia et al. [27] and Swiecicki et al. [28], stands out by employing MRI images. MRI, despite being more expensive and time-consuming, offers superior soft tissue contrast, allowing direct visualization of cartilage degeneration, meniscal tears, and bone marrow lesions, which can provide a richer feature set for deep learning models.

Dataset size and composition also play a crucial role: studies with larger and more diverse datasets generally provide more robust training environments for complex models. Furthermore, the classification techniques vary significantly, ranging from traditional CNNs (VGG, ResNet, DenseNet, EfficientNet) to object detection networks (YOLO, Faster R-CNN) and our Vision Transformer. While CNNs are excellent at local feature extraction, ViTs excel at capturing global contextual information, which, when combined with the rich data from MRI and effective transfer learning, can yield strong performance.

The higher accuracies observed in some X-ray-based studies (e.g., Wang et al. at 95% and Dharmani et al. at 89%) suggest that, for X-ray images, highly optimized CNN architectures and specific preprocessing or task definitions (e.g., binary classification vs. multi-class KL grading) can achieve excellent results. Our 88% accuracy on MRI images, given the inherent complexity and subtlety of features in multi-class KL grading from MRI, demonstrates competitive performance and highlights the potential of ViTs for comprehensive OA assessment.

Discussion

Although our study shows that Vision Transformers work well on MRI images for K-OA KL grading, these findings must be understood in the context of real-world clinical practice. Because of its broad availability, low cost, and quick acquisition time, radiography (X-ray) is still the gold standard for initial OA diagnosis and severity evaluation. The KL grading system, originally developed for X-rays, remains the most prevalent first-line assessment for OA globally and is deeply established in routine clinical procedures and large epidemiological studies.

However, MRI offers a distinct advantage in visualizing the soft tissue components of the joint, such as articular cartilage, menisci, and synovium, which are directly involved in the pathogenesis of OA. This superior soft tissue contrast allows detection of earlier changes in cartilage morphology, subchondral bone marrow lesions, and synovial inflammation that are not discernible on plain radiographs. Consequently, while MRI is not typically the first-line imaging modality for routine OA diagnosis due to its significantly higher cost and longer acquisition times, its use is becoming increasingly prevalent in specific clinical scenarios. These include cases with equivocal X-ray findings, disproportionate pain relative to radiographic severity, suspected meniscal or ligamentous injury, or monitoring of disease progression in clinical trials where subtle changes in cartilage volume or composition must be tracked.

The increasing availability of more affordable and faster MRI protocols, alongside the growing recognition of its ability to provide comprehensive structural information, suggests a gradual expansion of its role in OA management, particularly in specialized clinics and research settings. By demonstrating the power of AI in interpreting these rich MRI datasets, the proposed work aims to further enhance the clinical utility of MRI in these evolving contexts, providing a more detailed and potentially automated assessment that complements, rather than replaces, traditional X-ray methods.

ViTs are limited by their intensive computational and data requirements. They need large datasets for optimal performance, and their effectiveness can be reduced with the smaller, specialized datasets common in medical research, even when using transfer learning. Their high computational cost to train and run also poses a barrier for healthcare providers and researchers with limited resources.

Conclusion

In this study, we employed a highly efficient Vision Transformer method to automate the diagnosis of knee osteoarthritis (K-OA) from MRI image data, achieving 88% accuracy. The results demonstrate a significant improvement in performance with this method, yielding higher accuracy in diagnosing K-OA from MRI images compared to other existing methods. This approach holds promise for simplifying and accelerating the diagnosis of K-OA for radiologists and medical practitioners, potentially enabling earlier interventions. Such advancements are anticipated to benefit patients by facilitating timely and effective treatment, thereby reducing the impact of the disease's severity, which can worsen without prompt diagnosis.

Acknowledgments

Clinical trial number

Not applicable.

Authors’ contributions

Ms. Punita Panwar: Writing-original draft, Methodology, Formal analysis, Data collection, Conceptualization, Dr. Sandeep Chaurasia: Supervision, Dr. Jayesh Gangrade: Supervision, Manuscript Writing, Editing and Implementation, Dr. Ashwani Bilandi: Data Visualization, Validation.

Funding

Open access funding provided by Manipal University Jaipur.

Data availability

The data is not currently publicly available but can be requested by contacting the researcher at Punitapanwar7@gmail.com.

Declarations

Ethics approval and consent to participate

This study was reviewed and approved by the institutional review boards of Dr. Navneet Imaging & Path Lab and Kamal Diagnostic Center on November 2, 2022. Informed written consent was obtained from all participating patients. The study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. The approval documentation from these review boards is attached.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Sandeep Chaurasia, Email: sandeep.chaurasia@jaipur.manipal.edu.

Jayesh Gangrade, Email: jayesh.gangrade@jaipur.manipal.edu, Email: jgangrade@gmail.com.

References

1. Wittenauer R, Smith L, Aden K. Background paper 6.12: osteoarthritis. World Health Organization; 2013.
2. Sharma L. Osteoarthritis of the knee. N Engl J Med. 2021;384(1):51–9.
3. Kim DH, Kim SC, Yoon JS, Lee YS. Are there harmful effects of preoperative mild lateral or patellofemoral degeneration on the outcomes of open wedge high tibial osteotomy for medial compartmental osteoarthritis? Orthop J Sports Med. 2020;8:2325967120927481.
4. Hame SL, Alexander RA. Knee osteoarthritis in women. Curr Rev Musculoskelet Med. 2013;6:182–7.
5. Saini D, Chand T, Chouhan DK, Prakash M. A comparative analysis of automatic classification and grading methods for knee osteoarthritis focussing on X-ray images. Biocybern Biomed Eng. 2021;41(2):419–44.
6. Roemer FW, Kwoh CK, Hayashi D, Felson DT, Guermazi A. The role of radiography and MRI for eligibility assessment in DMOAD trials of knee OA. Nat Rev Rheumatol. 2018;14:372–80.
7. Juras V, Chang G, Regatte RR. Current status of functional MRI of osteoarthritis for diagnosis and prognosis. Curr Opin Rheumatol. 2020;32:102.
8. Kellgren J, Lawrence J. Radiological assessment of osteo-arthrosis. Ann Rheum Dis. 1957;16(4):494–502.
9. Hayashi D, Guermazi A, Roemer FW. MRI of osteoarthritis: the challenges of definition and quantification. Semin Musculoskelet Radiol. 2012;16:419–30.
10. Hootman JM, et al. Updated projected prevalence of self-reported doctor-diagnosed arthritis and arthritis-attributable activity limitation among US adults, 2015–2040. Arthritis Rheumatol. 2016;68(7):1582–7.
11. Barbour KE, et al. Vital signs: prevalence of doctor-diagnosed arthritis and arthritis-attributable activity limitation, United States, 2013–2015. MMWR Morb Mortal Wkly Rep. 2017;66(9):246.
12. Zhang B, et al. Attention-based CNN for KL grade classification: data from the Osteoarthritis Initiative. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA; 2020. p. 731–5. https://doi.org/10.1109/ISBI45749.2020.9098456.
13. Wang Y, et al. Learning from highly confident samples for automatic knee osteoarthritis severity assessment: data from the osteoarthritis initiative. IEEE J Biomed Health Inform. 2021;26(3):1239–50.
14. Cueva JH, et al. Detection and classification of knee osteoarthritis. Diagnostics. 2022;12(10):2362.
15. Chen P, et al. Fully automatic knee osteoarthritis severity grading using deep neural networks with a novel ordinal loss. Comput Med Imaging Graph. 2019;75:84–92.
16. Dalia Y, et al. DeepOA: clinical decision support system for early detection and severity grading of knee osteoarthritis. In: 2021 5th International Conference on Computer, Communication and Signal Processing (ICCCSP), Chennai, India; 2021. p. 250–5. https://doi.org/10.1109/ICCCSP52374.2021.9465522.
17. Tiulpin A, Thevenot J, Rahtu E, Lehenkari P, Saarakkala S. Automatic knee osteoarthritis diagnosis from plain radiographs: a deep learning-based approach. Sci Rep. 2018;8(1):1–10.
18. Pedoia V, Lee J, Norman B, Link TM, Majumdar S. Diagnosing osteoarthritis from T2 maps using deep learning: an analysis of the entire osteoarthritis initiative baseline cohort. Osteoarthritis Cartilage. 2019;27(7):1002–10.
19. Guan B, Liu F, Mizaian AH, Demehri S, Samsonov A, Guermazi A, Kijowski R. Deep learning approach to predict pain progression in knee osteoarthritis. Skelet Radiol. 2022;51:363–73.
20. Tiulpin A, Klein S, Bierma-Zeinstra SMA, Thevenot J, Rahtu E, Meurs JV, Oei EHG, Saarakkala S. Multimodal machine learning-based knee osteoarthritis progression prediction from plain radiographs and clinical data. Sci Rep. 2019;9(1):20038.
21. Alshareef EA, Ebrahim FO, Lamami Y, Milad MB, Eswani MS, Bashir SA, Elbahrit E. Knee osteoarthritis severity grading using vision transformer. J Intell Fuzzy Syst. 2022;43(6):8303–13.
22. Guida C, Zhang M, Shan J. Knee osteoarthritis classification using 3D CNN and MRI. Appl Sci. 2021;11(11):5196.
23. Grandini M, Bagli E, Visani G. Metrics for multi-class classification: an overview. arXiv:2008.05756; 2020.
24. Panwar P, Chaurasia S, Gangrade J. Classification of knee osteoarthritis using deep learning: a rigorous analysis. ICT Systems and Sustainability: Proceedings of ICT4SD 2023. 2023;1(765):489.
25. Leung K, et al. Prediction of total knee replacement and diagnosis of osteoarthritis by using deep learning on knee radiographs: data from the osteoarthritis initiative. Radiology. 2020;296(3):584–93.
26. Liu B, Luo J, Huang H. Toward automatic quantification of knee osteoarthritis severity using improved Faster R-CNN. Int J Comput Assist Radiol Surg. 2020;15:457–66.
27. Pedoia V, et al. Diagnosing osteoarthritis from T2 maps using deep learning: an analysis of the entire osteoarthritis initiative baseline cohort. Osteoarthritis Cartilage. 2019;27(7):1002–10.
28. Swiecicki A, et al. Deep learning-based algorithm for assessment of knee osteoarthritis severity in radiographs matches performance of radiologists. Comput Biol Med. 2021;133:104334.
29. Thomas KA, Kidziński Ł, Halilaj E, Fleming SL, Venkataraman GR, Oei EH, Delp SL. Automated classification of radiographic knee osteoarthritis severity using deep neural networks. Radiol Artif Intell. 2020;2(2):e190065.
30. Wang Y, Wang X, Gao T, Du L, Liu W. An automatic knee osteoarthritis diagnosis method based on deep learning: data from the osteoarthritis initiative. J Healthc Eng. 2021;2021:1–10.
31. Supatman, Yuniarno EM, Purnomo MH. Classification anterior and posterior of knee osteoarthritis X-ray images grade KL-2 using deep learning with random brightness augmentation. In: 2022 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), Surabaya, Indonesia; 2022. p. 1–5. https://doi.org/10.1109/CENIM56801.2022.10037483.
32. Dharmani BC, Khatri K. Deep learning for knee osteoarthritis severity stage detection using X-ray images. In: Proc. 15th International Conference on Communication Systems & Networks (COMSNETS); 2023. p. 78–83.
33. Cueva JH, Castillo D, Espinós-Morató H, Durán D, Díaz P, Lakshminarayanan V. Detection and classification of knee osteoarthritis. Diagnostics. 2022;12(10):2362.


