The Journal of International Medical Research
2025 Feb 4; 53(2): 03000605251318195. doi: 10.1177/03000605251318195

Pre-trained convolutional neural network with transfer learning by artificial illustrated images classify power Doppler ultrasound images of rheumatoid arthritis joints

Jun Fukae 1, Yoshiharu Amasaki 1, Yuichiro Fujieda 2, Yuki Sone 1, Ken Katagishi 1, Tatsunori Horie 3, Tamotsu Kamishima 4, Tatsuya Atsumi 2
PMCID: PMC11795604  PMID: 39904596

Abstract

Objective

To study the classification performance of a pre-trained convolutional neural network (CNN) with transfer learning by artificial joint ultrasonography images in rheumatoid arthritis (RA).

Methods

This retrospective study focused on abnormal synovial vascularity and created 870 artificial joint ultrasound images based on the European League Against Rheumatism/Outcome Measure in Rheumatology scoring system. One CNN, the Visual Geometry Group (VGG)-16, was trained with transfer learning using the 870 artificial images for the initial training, and the original images plus five additional images for the second training. The models were then tested for the ability to classify joints using real joint ultrasound images obtained from patients with RA. The study was registered in the UMIN Clinical Trials Registry (UMIN000054321).

Results

A total of 156 clinical joint ultrasound images from 74 patients with RA were included. The initial model showed moderate classification performance, but the area under the curve (AUC) for grade 1 synovitis was particularly low (0.59). The second model showed improved classification of grade 1 synovitis (AUC 0.73).

Conclusions

Artificial images may be useful for training VGG-16. The present novel approach of using artificial images as an alternative to actual images for training a CNN has the potential to be applied in medical imaging fields that face difficulties in collecting real clinical images.

Keywords: Artificial image, convolutional neural network, rheumatoid arthritis, synovial vascularity, VGG-16

Introduction

Artificial intelligence (AI) has made remarkable progress in image recognition and has also advanced in the medical imaging field, such that it can now classify clinical images and detect disease abnormalities.1,2 AI requires a large number of actual clinical images for training to acquire its classification abilities, and the construction of medical image datasets is currently underway all over the world. However, the collection of actual clinical images may be difficult due to issues regarding the personal privacy of patients.

Joint ultrasonography is essential for evaluating disease severity in patients with rheumatoid arthritis (RA). A semi-quantitative 4-grade scoring system was developed by the European League Against Rheumatism/Outcome Measure in Rheumatology (EULAR/OMERACT) working group to measure abnormal synovial vascularity. 3 The scoring system showed good inter-rater agreement among multiple human raters, but the raters needed clinical training and experience to become experts. 4 In contrast, AI does not require much time to learn. The present authors therefore devised the idea of using artificial images, which are not subject to personal privacy protection issues and are easy to create, instead of actual clinical images for training AI. Deep learning, one such AI technology, mimics brain neural networks via mathematical models, and convolutional neural networks (CNNs) are a representative deep-learning technology for image recognition.

The aim of the present study was to evaluate a CNN that was trained using artificial joint ultrasound images to classify actual joint ultrasound images from patients with RA. This study has been previously published as a preprint on medRxiv. 5

Materials and methods

Convolutional neural network

The Visual Geometry Group (VGG)-16 network, implemented in MATLAB® software R2024a (MathWorks, Inc., Natick, MA, USA), was used in this study. VGG-16 has 13 convolutional layers and is often used in medical imaging classification research. 6 VGG-16 was pre-trained on more than one million varied images from the ImageNet database (https://www.image-net.org/index.php) and could classify 1000 object categories. In the present study, the pre-trained VGG-16 was further adapted with transfer learning using artificial images created for this study. During model construction, the last three layers of VGG-16 were removed and replaced with a fully connected layer, a dropout layer, a softmax layer, and a classification layer. For the fully connected layer, the learning-rate coefficients for both weights and biases were set to ten times the global rate, and L2 regularization was applied. The dropout layer randomly deactivated 30% of the nodes, and the softmax and classification layers were adapted for the new four-level classification. To prevent overfitting, the training process was carefully tuned: regularization and data augmentation were applied, and early stopping was implemented when the validation performance no longer improved. As a result of this tuning, the following parameters were set for transfer learning: a maximum of 11 epochs, a batch size of 32, and a learning rate of 0.0001. Data were shuffled before each epoch, and validation was performed every 30 batches. For validation, the training data were randomly split in a 4:1 ratio, with the latter portion used as the validation dataset.

Artificial training data for the convolutional neural network

Seventeen basic ultrasound joint images were created by a rheumatologist (JF) using Clip Studio Paint® illustration software, version 3.02 (CELSYS, Tokyo, Japan), by hand-drawing the images with a Wacom Intuos S® pen tablet (Wacom, Saitama, Japan) as the digital input device. These basic images were drawn as dorsal scanning images of metacarpophalangeal (MCP) joints showing abnormal synovial vascularity according to the EULAR/OMERACT scoring system for synovitis in RA (grade 0, five images; grade 1, four images; grade 2, four images; and grade 3, four images). Synovial vascularity was drawn as an aggregation of red pixel dots localized in the joint capsule. Structural alterations, such as bone erosion, were not included in the images. Representative basic images are shown in Supplementary Figure 1. Each basic image contained several imaging layers, including joint tissue, bone surface, and vascular layers. At least one modification, such as object translation or scaling, was applied to the imaging layers of each basic image, and all layers were then combined into one image by a rheumatologist (JF) or modifier (YFuk) (Supplementary Figure 2). All images were checked by the rheumatologist (JF) to confirm that they followed the scoring rules. In total, 870 images were obtained (grade 0, 212 images; grade 1, 236 images; grade 2, 222 images; and grade 3, 200 images).
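The study's images were hand-drawn, layer by layer; the following toy sketch (our illustration only, not the authors' drawing process) conveys the general idea of grade-dependent synthetic images, in which red pixel aggregations inside a "capsule" band grow with the synovitis grade:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_joint_image(grade: int, size: int = 224) -> np.ndarray:
    """Toy RGB image: grey 'joint capsule' band on a dark background,
    with red dots (Doppler signal) whose number grows with the grade."""
    img = np.full((size, size, 3), 40, dtype=np.uint8)   # dark background
    img[size // 3 : 2 * size // 3] = 90                  # brighter capsule band
    n_dots = [0, 30, 120, 400][grade]                    # more flow per grade
    ys = rng.integers(size // 3, 2 * size // 3, n_dots)  # dots stay in the band
    xs = rng.integers(0, size, n_dots)
    img[ys, xs] = (200, 30, 30)                          # red vascular pixels
    return img

img = make_synthetic_joint_image(2)  # a toy 'grade 2' image
```

This only conveys the grade-to-vascularity mapping; the real training images were drawn and modified by hand as described above.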

Clinical images from patients with RA for testing performance of the convolutional neural network

Real joint ultrasonography images were retrospectively selected from a population of patients aged >18 years who were diagnosed with RA and had positive findings from joint ultrasonography examinations performed at Kuriyama Red Cross Hospital between July 2021 and April 2024. All of the included patients satisfied the 2010 American College of Rheumatology (ACR)/EULAR classification criteria for RA, 7 and no other study inclusion or exclusion criteria were applied.

This study was approved by the local ethics committees of Kuriyama Red Cross Hospital (Approval number: 2024-0502-02). The study outline was published on the hospital’s homepage at https://kuriyama.jrc.or.jp/outpatient/, and an opt-out management strategy was used to collect the patient information. All patient details were de-identified. The study was conducted in accordance with the 1975 Declaration of Helsinki, as revised in 2013, and has been reported according to STARD (Standard for Reporting of Diagnostic Accuracy Studies) guidelines. 8

Two expert raters (KK, TH) evaluated synovial vascularity according to the EULAR/OMERACT scoring system, and the weighted kappa value of inter-rater agreement between the two raters was calculated. A third expert rater (YFuj), blinded to the first two raters' scores, evaluated images that had received differing scores from them. All images with scores that matched between two raters were included in the real image dataset used to test the model.

Additional training data for the convolutional neural network

The artificially created images were used to train the VGG-16, and real clinical images were used to test the model. The initial model was unable to classify some clinical images correctly (Table 1). Therefore, misclassified real clinical images that were truly grade 1 but were classified as grade 2 were randomly selected and used as templates for additional artificial images. Five artificial images were modelled after each of the misclassified clinical images; these were created manually and were deliberately neither identical nor overly similar to the originals. Adding some of these artificial images to the training dataset worsened the classification power in some cases (data not shown). Finally, five artificial images based on one misclassified clinical image were selected, and the VGG-16 was newly trained with transfer learning as a second model using the original training dataset plus these five artificial images.

Table 1.

Confusion matrix of the initial Visual Geometry Group (VGG)-16 model in classification of synovitis in ultrasound images from patients with rheumatoid arthritis.

True classification    VGG-16 output classification
                       Grade 0    Grade 1    Grade 2    Grade 3
Grade 0                33         1          9          0
Grade 1                3          8          30         0
Grade 2                0          0          65         2
Grade 3                0          0          3          2

Study progression

In the present study, artificial images were used to train the VGG-16 to establish the initial model, which then classified real clinical images. Error analysis was performed on the misclassified real images, and additional artificial images were created. The original plus additional images were then used to train the VGG-16 to establish the second model, as summarized in the flow chart in Figure 1.

Figure 1.

Flow chart of the study process: artificial images were used to train the Visual Geometry Group (VGG)-16 to establish the initial model, which then classified real clinical images (black arrows). Errors were analyzed, and the original plus additional images were then used to train the second model, which then classified real clinical images (red arrows).

Cross-validation for the second model

To assess the performance and generalizability of the second model, five-fold cross-validation was performed. The training dataset was split into five parts, and the model was trained five times, each time using one part as the validation dataset and the remaining data as the training set (i.e. a 4:1 ratio). The means of accuracy, precision, recall, and F-score across the five cross-validation runs were calculated.
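The splitting scheme above can be sketched as follows (a minimal NumPy illustration of the 4:1 five-fold splits; the function and variable names are ours, not from the study):

```python
import numpy as np

def five_fold_indices(n: int, seed: int = 0):
    """Yield (train, validation) index arrays for five-fold cross-validation:
    each run holds out one fifth of the shuffled indices (a 4:1 split)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), 5)
    for k in range(5):
        validation = folds[k]
        training = np.concatenate([folds[j] for j in range(5) if j != k])
        yield training, validation

# With the study's 870 training images, each run trains on 696 images
# and validates on the held-out 174.
for train_idx, val_idx in five_fold_indices(870):
    assert len(train_idx) == 696 and len(val_idx) == 174
```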

Statistical analyses

Grade data are presented as counts. Accuracy, precision, recall, F-score, and specificity were calculated to evaluate the CNN's performance for each grade classification. The F-score is the harmonic mean of precision and recall, ranging from 0 to 1 (values approaching 1 indicate a good balance between precision and recall). Receiver operating characteristic (ROC) curves were used to show the trade-off between sensitivity (recall) and specificity, and the area under the curve (AUC) of each ROC curve was calculated. Confusion matrices for multiclass classification were used to show the whole picture of grade classification. Inter-rater agreement between the two raters was analyzed using the weighted kappa value, which approaches 1 as concordance becomes stronger (0.61–0.80 is considered good, and >0.80 very good). Statistical analyses were performed using Excel® for Microsoft 365, version MSO 2404 (Microsoft, Redmond, WA, USA) and MedCalc®, version 22.023 (MedCalc Software, Mariakerke, Belgium).
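As a concrete check of these definitions, the per-class metrics can be recomputed from the initial model's confusion matrix (Table 1); the resulting values match those reported in Table 2. The code is ours, for illustration only:

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class accuracy, precision, recall, F-score, and specificity from a
    multiclass confusion matrix (rows = true class, columns = predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    tp = np.diag(cm)          # correct predictions per class
    fp = cm.sum(axis=0) - tp  # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp  # missed members of the class
    tn = n - tp - fp - fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)   # = sensitivity
    f_score = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / n  # one-vs-rest accuracy per class
    return accuracy, precision, recall, f_score, specificity

# Table 1: initial model, true grades 0-3 (rows) vs predicted (columns)
cm = [[33, 1, 9, 0],
      [3, 8, 30, 0],
      [0, 0, 65, 2],
      [0, 0, 3, 2]]
acc, prec, rec, f, spec = per_class_metrics(cm)
# e.g. grade 1 row of Table 2: recall 8/41 ≈ 0.20, precision 8/9 ≈ 0.89
```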

Results

During the evaluation of synovial vascularity in real ultrasound images according to the EULAR/OMERACT scoring system, the weighted kappa value of inter-rater agreement between the two raters was 0.80 (95% confidence interval [CI] 0.73, 0.86). A total of 29 images with differing scores between the first two raters were evaluated by the third expert rater, who was blinded to the initial scores. Consequently, a total of 156 real MCP joint ultrasonography images from 74 patients with RA were included in the study test dataset: grade 0, 43 images; grade 1, 41 images; grade 2, 67 images; and grade 3, five images. The characteristics of the included patients are shown in Supplementary Table 1.

Using the present VGG-16 with transfer learning by artificial images, the initial model showed the following representative results from three independent training runs. The confusion matrix for multiclass classification of synovitis in RA for the initial model is shown in Table 1, with the respective values for accuracy, precision, recall, F-score, specificity, and AUC for the four grades shown in Table 2. The ROC curve of the initial model for each classification grade is shown in Figure 2. The weighted kappa value of inter-rater agreement between the true grades and the initial model's output was 0.62 (95% CI 0.52, 0.71).

Table 2.

Evaluation metrics for the four synovitis classification grades in the initial VGG-16 model.

Synovitis classification grade    Accuracy    Precision    Recall    F-score    Specificity    AUC
Grade 0                           0.92        0.92         0.77      0.84       0.97           0.87
Grade 1                           0.78        0.89         0.20      0.32       0.99           0.59
Grade 2                           0.72        0.61         0.97      0.75       0.53           0.75
Grade 3                           0.97        0.50         0.40      0.44       0.99           0.69

AUC, Area under the curve; VGG, Visual Geometry Group.

Figure 2.

Receiver Operating Characteristic (ROC) curves of the initial model to classify synovitis in ultrasound images of metacarpophalangeal joints, showing: (a) Grade 0; (b) Grade 1; (c) Grade 2; and (d) Grade 3. CI, confidence interval.

Further training of the VGG-16 CNN with transfer learning using the original plus additional training data to create a second model showed the following representative results from three independent training runs. The confusion matrix for multiclass classification of synovitis in RA for the second model is shown in Table 3, with the respective values for accuracy, precision, recall, F-score, specificity, and AUC shown in Table 4. The ROC curve of the second model for each classification grade is shown in Figure 3. The weighted kappa value of inter-rater agreement between the true grades and the second model's output was 0.69 (95% CI 0.60, 0.78). The second model was evaluated using five-fold cross-validation, achieving a mean ± SD accuracy of 0.995 ± 0.00626, mean precision of 0.996 ± 0.00581, mean recall of 0.996 ± 0.00615, and mean F-score of 0.996 ± 0.00606.

Table 3.

Confusion matrix of the second Visual Geometry Group (VGG)-16 model in classification of synovitis in ultrasound images from patients with rheumatoid arthritis.

True classification    VGG-16 output classification
                       Grade 0    Grade 1    Grade 2    Grade 3
Grade 0                37         2          4          0
Grade 1                6          21         14         0
Grade 2                3          5          58         1
Grade 3                0          0          4          1

Table 4.

Evaluation metrics for the four synovitis classification grades in the second VGG-16 model.

Synovitis classification grade    Accuracy    Precision    Recall    F-score    Specificity    AUC
Grade 0                           0.90        0.80         0.86      0.83       0.92           0.89
Grade 1                           0.83        0.75         0.51      0.61       0.94           0.73
Grade 2                           0.80        0.73         0.87      0.79       0.75           0.81
Grade 3                           0.97        0.50         0.20      0.29       0.99           0.60

AUC, Area under the curve; VGG, Visual Geometry Group.

Figure 3.

Receiver Operating Characteristic (ROC) curves of the second model to classify synovitis in ultrasound images of metacarpophalangeal joints, showing: (a) Grade 0; (b) Grade 1; (c) Grade 2; and (d) Grade 3. CI, confidence interval.

Discussion

The present study was a novel attempt at using artificial images that were created by digital illustration for training of a medical CNN in the field of rheumatology. The results showed that a pre-trained VGG-16 with transfer learning using artificial illustrated images was able to classify real clinical ultrasound joint images. The models showed moderate classification ability and good inter-rater agreement with human raters. The practical application of the present models might lead to more consistent and objective assessment in clinical trials and practice.

Traditionally, real-world clinical images have been used to train CNNs in the medical field. The rationale for using actual images is that medical images are precise depictions of normal and abnormal conditions, and that medical CNNs could not learn pathological features without them. In previous studies on joint ultrasonography in RA, several groups have studied the classification of joint images using CNNs,9–12 training the CNNs with real clinical images and reporting good results. One of the studies used 1678 real ultrasound images to train a CNN, and the model achieved an overall four-class accuracy of 0.839 in the EULAR/OMERACT synovitis score classification. 11 However, collecting a large number of clinical images was described as not easy, and various problems related to data transparency and privacy protection complicated the collection of training data. These issues depend strongly on local and international laws and regulations.

The basic artificial joint images used in the present study were illustrated by an experienced rheumatologist (JF) and were expanded with modifications (by JF and YFuk) to increase the number of training images. The process proceeded with ease, and the modifications required no specialized illustration skills or deep medical knowledge. In contrast to real clinical images, artificial images pose no collection difficulties and no problems regarding patient data protection and privacy. In the present study, the confusion matrix of the initial model (Table 1) revealed that the classification power for grade 1 was inferior to that of the other grades. The training data were analyzed, focusing on grade 1 images misclassified as grade 2, and additional training images were created as explained in the Methods section. A second model trained with the original plus additional training images showed improved classification power (Table 3). For these additional data, several real grade 1 images misclassified as grade 2 were randomly selected, and five artificial images were created from each of them. When these images were added to the training set, the classification results worsened in some cases. Therefore, through trial and error, the useful images were finally selected. The present method of using artificial images has the advantages of increasing the amount of available training data and of responding to misclassification problems.

There was a risk of data leakage regarding the second model, so the training data splitting and the independence of data preprocessing were checked, and cross-validation was then performed. The second model showed high cross-validation accuracy and moderate classification performance for the real images. Next, the original image that was the basis for the five additional images was removed from the test dataset and classification was performed again. Even with the image removed, the overall improvement in grade classification was comparable (data not shown). From this result, the risk of data leakage in the second model was concluded to be low. Further validation will be necessary to confirm the generalizability of the model.

The focus of the present study was abnormal synovial vascularity. Power Doppler ultrasound rendered the vascular image in red with a background of black and grey, so VGG-16 might easily focus on it. In the pathology of abnormal synovial vascularity of RA, it is well known that blood flow increases as the inflammation worsens, and the EULAR/OMERACT group defined the degree of synovial vascularity as a simple semi-quantitative 4-grade score.3,4 Thus, it was easy to reproduce abnormal synovial vascularity in the artificial images according to the four grades.

Generative adversarial networks (GANs) have been a focus of deep-learning research on medical images because of their precise image-generation ability. 13 Although the present artificial images were not finely detailed, they were useful as training data. The novel approach of using artificial images for training CNNs as an alternative to real clinical images has the potential to be applied in medical imaging fields that face difficulties in collecting real images. However, further research will be necessary to confirm this.

The results of the present study may be limited by several factors. The number of real grade 3 images was small, so the sample size of grade 3 images needs to be increased to establish reliability. The artificial images were more monotonous and less varied in detail than real images; therefore, the present model might respond to some cases but not to all. Nonetheless, the method of creating additional training images was easy and allowed rapid correction of misclassification errors. Feedback from the error analysis led to training data diversification and improved model performance. However, there is always a risk of data leakage or overfitting, so the feedback process needs to proceed with caution.

Medical professionals accumulate experience in examining real clinical images and also learn pathology with schematic images. We hypothesize that CNNs may increase the performance of medical image classification when they learn with both real and artificial images, and this is a topic for future research.

Regarding the potential of CNNs, they were originally developed as a technology for detecting imaging features by mathematical processing, but they occasionally discover new features or focus points that were previously unknown, which makes analysis by this method important. Feedback from new findings of AI might provide new knowledge to humans, and creating training data based on that knowledge and using them for training AI might expand its potential. Humans and AI might each have a positive influence on the other. In the medical field, progress in AI might be supportive, not suppressive, in expanding the potential of human clinicians.

Supplemental Material

sj-pdf-1-imr-10.1177_03000605251318195 - Supplemental material for this article.

sj-pdf-2-imr-10.1177_03000605251318195 - Supplemental material for this article.

sj-pdf-3-imr-10.1177_03000605251318195 - Supplemental material for this article.

Acknowledgement

We thank Yusuke Fukae for generating training data as an image modifier.

Author contributions: Study conception and design: JF, YA, YFuj, TK, TA.

Data preparation and acquisition: JF, YA, YS, KK.

Data analysis: JF, YA, KK, TH, YFuj.

Manuscript preparation: JF, YA, YFuj, TK, TA.

The authors declare that there is no conflict of interest.

Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Data availability

The training data for the initial and second models are available from the following link upon reasonable request: https://center6.umin.ac.jp/cgi-bin/ctr_e/ctr_view.cgi?recptno=R000061975.

Supplemental material

Supplementary files for this study are available online.

References

1. McMaster C, Bird A, Liew DFL, et al. Artificial intelligence and deep learning for rheumatologists. Arthritis Rheumatol 2022; 74: 1893–1905.
2. Momtazmanesh S, Nowroozi A, Rezaei N. Artificial intelligence in rheumatoid arthritis: current status and future perspectives: a state-of-the-art review. Rheumatol Ther 2022; 9: 1249–1304.
3. D’Agostino MA, Terslev L, Aegerter P, et al. Scoring ultrasound synovitis in rheumatoid arthritis: a EULAR-OMERACT ultrasound taskforce - Part 1: definition and development of a standardised, consensus-based scoring system. RMD Open 2017; 3: e000428.
4. Terslev L, Naredo E, Aegerter P, et al. Scoring ultrasound synovitis in rheumatoid arthritis: a EULAR-OMERACT ultrasound taskforce - Part 2: reliability and application to multiple joints of a standardised consensus-based scoring system. RMD Open 2017; 3: e000427.
5. Fukae J, Amasaki Y, Fujieda Y, et al. Pre-trained convolutional neural network with transfer learning by artificial illustrated images classify power Doppler ultrasound images of rheumatoid arthritis joints. medRxiv 2024; 2024.08.30.24312848. DOI: 10.1101/2024.08.30.24312848 (accessed 18 December 2024).
6. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv 2015; 1409.1556v6 [cs.CV]. DOI: 10.48550/arXiv.1409.1556 (accessed 13 May 2024).
7. Aletaha D, Neogi T, Silman AJ, et al. 2010 Rheumatoid arthritis classification criteria: an American College of Rheumatology/European League Against Rheumatism collaborative initiative. Arthritis Rheum 2010; 62: 2569–2581.
8. Bossuyt PM, Reitsma JB, Bruns DE, et al.; STARD Group. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ 2015; 351: h5527.
9. Andersen JKH, Pedersen JS, Laursen MS, et al. Neural networks for automatic scoring of arthritis disease activity on ultrasound images. RMD Open 2019; 5: e000891.
10. Cipolletta E, Fiorentino MC, Moccia S, et al. Artificial intelligence for ultrasound informative image selection of metacarpal head cartilage: a pilot study. Front Med (Lausanne) 2021; 8: 589197.
11. Christensen ABH, Just SA, Andersen JKH, et al. Applying cascaded convolutional neural network design further enhances automatic scoring of arthritis disease activity on ultrasound images from rheumatoid arthritis patients. Ann Rheum Dis 2020; 79: 1189–1193.
12. Wu M, Wu H, Wu L, et al. A deep learning classification of metacarpophalangeal joints synovial proliferation in rheumatoid arthritis by ultrasound images. J Clin Ultrasound 2022; 50: 296–301.
13. Chen Y, Yang XH, Wei Z, et al. Generative adversarial networks in medical image augmentation: a review. Comput Biol Med 2022; 144: 105382.


Articles from The Journal of International Medical Research are provided here courtesy of SAGE Publications
