Patterns. 2023 Aug 3;4(9):100806. doi: 10.1016/j.patter.2023.100806

AIDMAN: An AI-based object detection system for malaria diagnosis from smartphone thin-blood-smear images

Ruicun Liu 1,5, Tuoyu Liu 1,5, Tingting Dan 2,5, Shan Yang 1, Yanbing Li 1, Boyu Luo 1, Yingtan Zhuang 1, Xinyue Fan 1, Xianchao Zhang 3,4,, Hongmin Cai 2,∗∗, Yue Teng 1,6,∗∗∗
PMCID: PMC10499858  PMID: 37720337

Summary

Malaria is a significant public health concern, with ∼95% of cases occurring in Africa, but accurate and timely diagnosis is problematic in remote and low-income areas. Here, we developed an artificial intelligence-based object detection system for malaria diagnosis (AIDMAN). In this system, the YOLOv5 model is used to detect cells in a thin blood smear. An attentional aligner model (AAM), comprising a multi-scale feature extractor, a local context aligner, and multi-scale attention, is then applied for cellular classification. Finally, a convolutional neural network classifier is applied for diagnosis using blood-smear images, reducing interference caused by false positive cells. The results demonstrate that AIDMAN handles interference well, with a diagnostic accuracy of 98.62% for cells and 97% for blood-smear images. The prospective clinical validation accuracy of 98.44% is comparable to that of microscopists. AIDMAN shows clinically acceptable detection of malaria parasites and could aid malaria diagnosis, especially in areas lacking experienced parasitologists and equipment.

Keywords: artificial intelligence, malaria diagnosis, YOLOv5, transformer, multi-scale attention, West Africa

Graphical abstract


Highlights

  • An AI-based object detection system has been developed to detect malaria parasites

  • The YOLOv5 and Transformer models were combined for detection and classification

  • A CNN classifier is used to reduce interference from false positive cells

  • The clinical validation accuracy is comparable to that of microscopists

The bigger picture

Malaria is among the most severe threats to human health worldwide. There were 247 million cases worldwide in 2021, of which ∼95% occurred in Africa. Early and precise diagnosis can reduce transmission and prevent deaths. Accordingly, accurate and affordable systems for rapid detection of malaria parasites are urgently required. Several artificial intelligence systems that employ deep learning models to detect malaria parasites have been reported, but clinical diagnosis remains challenging. Here, we developed an artificial intelligence-based object detection system for malaria diagnosis. This system employs a deep learning algorithm for detection of Plasmodia in thin-blood-smear images with a prospective clinical validation accuracy of 98.44%, comparable to that of microscopists. Our model shows clinically acceptable malaria parasite detection and could aid in malaria diagnosis in resource-limited regions, especially areas lacking experienced parasitologists and equipment.


Accurate and timely diagnosis of malaria is problematic in remote and low-income areas. Liu et al. develop an artificial intelligence-based object detection system for malaria diagnosis (AIDMAN). In this system, the YOLOv5 model and the Transformer model were combined to perform an entire process from image analysis to malaria diagnosis. A heatmap of the most characteristic cells is generated to reduce interference caused by false positive cells. AIDMAN shows clinically acceptable detection of malaria parasites and could aid in malaria diagnosis.

Introduction

Malaria is among the most severe threats to human health worldwide. Approximately 200 million new cases of malaria and 400,000 malaria-related deaths are reported globally each year.1,2,3,4,5,6 In 2021 alone, there were 247 million cases of malaria and 619,000 deaths worldwide, ∼95% of which occurred in Africa.1,2 Early and precise diagnosis of malaria can reduce transmission and prevent deaths. Accordingly, accurate and affordable systems for rapid detection of malaria parasites in blood samples are urgently required.

Malaria is conventionally diagnosed by microscopy examination of blood smears, rapid diagnostic tests (RDTs), or polymerase chain reaction (PCR)-based methods, all of which are indispensable for current malaria control strategies.7,8,9,10,11,12,13,14,15 The cost and complexity of PCR-based methods have prevented their widespread use in areas where malaria is common.16,17,18,19 Malaria RDTs are effective diagnostic tools as they do not require trained experts or complex PCR equipment, and they can diagnose within 15–30 min. However, according to the World Health Organization (WHO) and others, malaria RDTs suffer from several shortcomings including low specificity, inability to quantify parasite density, sensitivity to heat and humidity, and higher costs compared with microscopy-based methods.15,20,21 In particular, histidine-rich protein 2 (HRP2) antigen-based RDTs are most effective for detection of Plasmodium falciparum, but the issue of false negative RDT results due to target gene deletions remains unresolved.22,23

The use of microscopy images of blood smears to detect malaria parasites is inexpensive, rapid, and universal. Although it can be limited by poor image resolution and misidentification of impurities, traditional microscopy examination of blood smears is currently the gold standard for malaria diagnosis worldwide. Nevertheless, the requirements for experienced and skilled microscopists as well as effective quality control and quality assurance systems remain a crucial challenge for malaria diagnosis, especially in Africa.24,25 Several artificial intelligence (AI) systems that employ deep learning models to detect malaria parasites have been reported recently.26,27,28,29,30,31,32,33,34,35,36,37 One group trained a model based on convolutional neural networks (CNNs) using 1,034 infected-cell images and 1,531 non-infected-cell images obtained from the University of Alabama.26 This model achieved a diagnostic accuracy of 95%. Liang et al. employed 27,000 blood-cell images at three different magnifications to train a model that detected Plasmodia with an accuracy of 98.08%.28 Khan et al. achieved 97.98% accuracy using a CNN that employs split-transform-merge and channel squeezing-boosting principles.34 However, these methods typically suffer from two limitations. Firstly, the datasets used are not universal, and they contain an insufficient number of blood cells and few morphological types of malaria parasites.26,29 The most commonly used dataset is the National Institute of Health (NIH) Malaria Dataset maintained by the National Library of Medicine. Unlike clinical images of blood cells typically collected in Africa, those in the NIH dataset have few overlapping cells and dye impurities. Secondly, some models focus on individual red blood cells (RBCs) for detection rather than complete thin-blood-smear images, which limits their clinical application.29,30,31 There have also been attempts to detect malaria parasites using blood-smear images.38,39 For example, Loh et al. used the Mask R-CNN deep learning model to segment, classify, and count Plasmodium falciparum-infected RBCs,38 while Koirala et al. developed a custom deep learning architecture (YOLO-mp) to detect malaria parasites in thick-blood-smear microscopy images that achieved an average accuracy of 94.07%.39

Herein, we developed an AI-based object detection system for malaria diagnosis (AIDMAN; Figure 1). This system employs a deep learning algorithm for detection of Plasmodia in thin-blood-smear images to facilitate malaria diagnosis. The YOLOv5 object detection model40 and the Transformer model41,42 were combined to perform an entire process from image analysis to malaria diagnosis. For each blood-smear image, a heatmap of the most characteristic cells is generated and used for diagnosis, reducing interference caused by false positive cells. Finally, the system was verified by application to clinically diagnosed patients.

Figure 1. Schematic of AIDMAN developed in this study

(A–D) The system consists of four modules: (A) data collection, (B) detection of image sections, (C) classification of malaria parasites in the patches, and (D) malaria diagnosis of blood-smear images.

(E) Network structure diagram of AAM for cellular classification.

(F) Module structure diagram for the local context aligner. P and N represent the positive and negative samples, respectively.

Results

Malaria detection of patches

We took 150 images from SmartMalariaNET and obtained 35,489 patches, performing data splitting at the image level. Table S1 shows distribution details for the datasets, and samples of infected and uninfected images are shown in Figure S1. Compared with the counting results for manual detection by trained microscopists, YOLOv5 performed well for cell detection on this dataset, with an average precision of 90.8% (Figure 2A illustrates the cell detection process by YOLOv5). The dataset for classification of cells contains 5,654 patches from 1,822 thin-blood-smear images. Three trained microscopists classified the patches: patches with malaria parasites were defined as positive samples, while those without malaria parasites were defined as negative samples. The dataset includes images containing impurities and aperture effects. Table S2 summarizes the cell classification datasets. The datasets were randomly divided into a training set of 3,393 patches, a verification set of 1,131 patches, and a testing set of 1,130 patches.
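To make the splitting strategy concrete, the sketch below shows one way to partition patches at the image level so that patches derived from the same blood-smear image never span the training, verification, and testing sets; the file-naming convention and split ratios are illustrative assumptions, not details taken from SmartMalariaNET.

```python
# Minimal sketch of an image-level split: patches from the same blood-smear
# image must stay in the same split to avoid leakage between sets.
from collections import defaultdict
import random

def split_by_image(patch_paths, ratios=(0.6, 0.2, 0.2), seed=42):
    """patch_paths: names like 'imageID_patchID.png' (assumed convention)."""
    by_image = defaultdict(list)
    for p in patch_paths:
        by_image[p.split("_")[0]].append(p)        # group patches by source image
    image_ids = sorted(by_image)
    random.Random(seed).shuffle(image_ids)         # reproducible shuffle of images
    n = len(image_ids)
    cut1, cut2 = int(ratios[0] * n), int((ratios[0] + ratios[1]) * n)
    groups = (image_ids[:cut1], image_ids[cut1:cut2], image_ids[cut2:])
    # expand each group of image IDs back into its patches
    return [[p for i in ids for p in by_image[i]] for ids in groups]
```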

Figure 2. Performance of AIDMAN

(A) Detection of cell patches in an image taken by a smartphone (top) and an enlarged view of the same image (bottom). The original image, the labeled image, and the result predicted by YOLOv5 are shown.

(B) ROC curve used to evaluate AAM performance for classifying the presence or absence of malaria parasites in cells.

(C) Generation of datasets for the classification of blood smears. After the AAM assigns a score to each patch, the 25 patches with the highest scores are sorted by score, a heatmap is obtained for each using the AAM, and the heatmaps are reassembled into a larger map.

(D) ROC curve used to evaluate CNN performance for classification of blood smears to diagnose malaria.

A local context aligner that uses the encoding process of Transformer was introduced into the attentional aligner model (AAM), and the multi-head attention mechanism allows the network to capture more features. We therefore optimized the model by varying the number of heads. With head number = 2, all indicators for the model were highest, as shown in Table 1. In the AAM feature extractor, features from different scales make different contributions to the identification of malaria. We explored the optimal feature extractor by varying the number of scales from 1 to 5; no more than five scales were tested because the 64 × 64 pixel input size limits the number of scales from which meaningful features can be extracted. When the feature extractor operated at five different scales (head number = 2), interference was reduced, and more parasites were correctly identified, with all indicators for the model at their highest values, as shown in Table 1. The accuracy, precision, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve (AUC) values were 98.62%, 98.62%, 98.62%, 98.62%, 98.62%, and 99.92%, respectively. The ROC curve for the AAM is shown in Figure 2B.

Table 1. Quantitative classification results

All values are percentages.

Classification of cells

Heads   ACC     PRE     SENS    SPE     F1      AUC
2       98.62   98.62   98.62   98.62   98.62   99.92
4       93.16   93.22   93.16   93.18   93.16   98.72
8       95.55   95.60   95.55   95.56   95.55   99.43

Scales  ACC     PRE     SENS    SPE     F1      AUC
1       97.77   97.37   98.22   97.32   97.79   99.85
2       98.31   98.31   98.31   98.31   98.31   99.91
3       98.20   98.20   98.20   98.20   98.20   99.89
4       98.59   98.59   98.59   98.59   98.59   99.87
5       98.62   98.62   98.62   98.62   98.62   99.92

Malaria diagnosis of thin-blood-smear images

Patches ACC     PRE     SENS    SPE     F1      AUC
1       89.00   89.38   89.00   88.63   89.10   94.70
4       92.00   91.96   92.00   88.75   92.94   97.64
9       94.00   94.00   94.00   92.63   94.00   97.28
16      94.00   93.99   94.00   91.20   93.95   98.31
25      97.00   97.13   97.00   94.18   96.96   98.84
36      91.00   91.34   91.00   91.09   91.09   97.77

Malaria diagnosis from thin-blood-smear images

Although the AAM performed well for malaria detection from single patches, it was still prone to false positives in overall diagnosis because a thin-blood-smear image contains far more negative cells than positive cells, which can lead to misdiagnosis. The accuracy of the AAM in malaria diagnosis of whole images from thin blood smears was 87.10%, with a false positive rate of 19.02%.

We then explored diagnosis of the overall thin-blood-smear image using a dichotomous CNN model based on the AAM. According to patch scores, a heatmap was generated from the top patches (1, 4, 9, 16, 25, or 36), and these heatmaps were stitched into a new image, as shown in Figure 2C. Table S3 shows the distribution of the datasets used to classify the blood-smear images. The dataset for diagnosis of thin blood smears contains 496 images comprising a training set of 297 images, a verification set of 99 images, and a testing set of 100 images. As shown in Table 1, weightings from the AAM were used to mosaic the heatmaps of the 25 patches with the highest scores into a single heatmap for each blood-smear image. For classification of blood-smear images, the accuracy, precision, sensitivity, specificity, F1 score, and AUC values were 97%, 97.13%, 97%, 94.18%, 96.96%, and 98.84%, respectively. The ROC curve of the CNN is shown in Figure 2D.

In addition, we compared the diagnostic performance of input images reconstructed from the 25 top-scoring patches of the original thin-blood-smear image with that of images reconstructed from the generated heatmaps. The indicator values obtained with the reconstructed heatmaps as input to the CNN model were higher than those obtained with mosaics of the original patches (for which accuracy, precision, sensitivity, specificity, F1 score, and AUC were 93%, 93.02%, 93%, 89.26%, 92.92%, and 92.47%, respectively).

Prospective clinical validation of AIDMAN

Clinical validation of AIDMAN was based on 64 patients with thin blood smears at the Sierra Leone-China Friendship Hospital whose images were not previously used to develop the model. Images of thin blood smears were acquired using different microscopes at multiple magnifications. Microscopy diagnosis of thin blood smears was performed by three experts, each with more than 5 years of experience. Detection accuracies of microscopy examination, malaria RDTs, and AIDMAN were compared, and the results are shown in Table 2 and Table S4. Microscopists accurately identified 34 infected blood smears (positive) and 30 non-infected blood smears (negative). Malaria RDTs returned 32 positive and 2 negative results for the 34 positive samples (Figures 3A and S2) and 29 negative and 1 positive results for the 30 negative samples (Figures 3B and S3). AIDMAN yielded 33 positive and 1 negative results for the 34 positive samples (Figure 3A) and correct results for all 30 negative samples (Figure 3B). Thus, the detection accuracy of AIDMAN (98.44%) was comparable with that of microscopists.

Table 2. Comparison of the accuracies of different malaria diagnosis methods

Detection method          Result     No. patients          Total   Accuracy, %
                                     Positive   Negative
Microscopic examination   positive   34         0          34      100
                          negative   0          30         30
RDTs                      positive   32         1          33      95.31
                          negative   2          29         31
AIDMAN                    positive   33         0          33      98.44
                          negative   1          30         31

The Positive and Negative columns give the numbers of truly infected and non-infected patients, respectively, receiving each result.

Figure 3. Three expert microscopists diagnosed images of infected and non-infected thin blood smears to validate the results of RDTs and AIDMAN

Discussion

In this study, we developed an AI-based object detection system capable of rapid and simple malaria diagnosis in remote and low-income areas. The framework was designed, from dataset selection to algorithm development, with clinical applicability in mind and to contribute to the field of AI-aided malaria diagnosis. Specifically, YOLOv5 was combined with Transformer to achieve an all-in-one operation from cell segmentation to malaria diagnosis. As a result, AIDMAN handles interference such as image background artifacts, cell overlap, dye impurities, and aperture effects extremely well, achieving accuracies of 98.62% for cells and 97% for blood-smear images. For malaria diagnosis using thin-blood-smear images, AIDMAN was evaluated in terms of accuracy, precision, sensitivity, specificity, F1 score, and AUC, achieving values of 97%, 97.13%, 97%, 94.18%, 96.96%, and 98.84%, respectively. Additionally, the performance of the AI algorithm was compared with that of malaria RDTs and microscopy examination in a clinical context. In prospective clinical validation, AIDMAN achieved a diagnostic accuracy of 98.44%, similar to that of microscopists. Thus, it may be extremely valuable for the diagnosis of malaria in Africa.

Previous research on the detection of malaria parasites by machine learning relied on expensive slide microscopes to scan blood smears and used images acquired from expensive digital cameras to train the deep learning model.43,44,45,46 Clearly, this is not a practical approach for low-resource and/or remote settings, such as those encountered in West Africa. However, integration of smartphones with AI models for malaria diagnosis presents a potential solution.45 AI-aided diagnosis solutions reduce reliance on microscopists and may thus be more suitable for successful deployment in the field. We are currently attempting to adapt AIDMAN for smartphones, either by training the model and running predictions directly on mobile devices or by training the model offline and then importing it to mobile devices for prediction. Several AI systems for smartphone-based malaria parasite detection have been reported.47,48,49 For instance, Yang et al. implemented a deep learning application for smartphones to detect malaria parasites in thick-smear images,47 and Yu et al. developed an Android application for detecting malaria parasites in blood smears, providing greater flexibility than traditional parasite detection methods.49 Automated malaria parasite detection on smartphones is a promising alternative to traditional malaria diagnosis, especially in resource-limited areas. Finally, malaria diagnosis might be just one element of a suite of diagnostic tests that could be run on this type of system. Several other applications could be performed simultaneously using the same images, for instance, cell counting or detection of other hemoparasite-caused conditions such as trypanosomiasis, babesiosis, and microfilariasis.50,51,52,53,54

The present study has some limitations. Our datasets were established using data from a limited number of patients; hence, the generalizability of the algorithm remains to be determined. In addition, AIDMAN only detects malaria parasites in thin-blood-smear images, in which malaria parasites maintain their original morphology, making it difficult to identify different species and life cycle stages. Thick blood smears involve a larger volume of blood and a larger number of malaria parasites per blood volume than thin blood smears; hence, malaria detection using thick blood smears must also be considered as a means for clinical diagnosis.55,56 In future studies, we will endeavor to obtain more blood smears from different countries and clinical settings to introduce variability into our dataset and thereby test the generalizability of AIDMAN. Regarding best practice, we recommend additional tuning and validation prior to deployment in any new setting, with the clinical workflow refined through consultation with local health workers and confirmed by pilot field validation. Furthermore, once deployed, the performance of the algorithm should be tracked and optimized by periodically testing it against results obtained by microscopists and then recalibrating the model.

Experimental procedures

Resource availability

Lead contact

Further information and requests for resources should be directed to the lead contact, Yue Teng (yueteng@sklpb.org).

Materials availability

This study did not generate new unique reagents.

Method details

Our proposed AIDMAN system for automatic diagnosis of malaria is schematized in Figures 1A–1F. The process comprises modules for dataset collection, detection of image sections, classification of cells, and diagnosis of blood-smear images. For the first module, a smartphone camera was placed on the eyepiece of a microscope and used to obtain blood-smear images, as shown in Figure 1A. For the second module, shown in Figure 1B, the inputs were these smartphone images, which showed either infected or non-infected RBCs. After pre-processing, sections of images clearly showing individual or overlapping RBCs were identified and saved as "patches." As shown in Figure 1C, these patches were inputs for the third module, which assigned each patch a score reflecting the probability that it contains parasites. Finally, in the fourth module, the input was a heatmap image constructed from the 25 patches with the highest scores, and the output was a decision on whether the entire blood-smear image contained malaria parasites, as shown in Figure 1D.
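As an illustration of how the four modules fit together, the following minimal sketch chains them into a single diagnosis function. The four callables stand in for the trained YOLOv5 detector, the AAM patch scorer, the heatmap mosaic builder, and the dichotomous CNN; none of their names come from the released AIDMAN code.

```python
# High-level sketch of the AIDMAN pipeline; the callables are placeholders
# for the trained models, not the published API.
def diagnose(smear_image, detect_cells, score_patch, build_mosaic, classify):
    patches = detect_cells(smear_image)                # module B: YOLOv5 cell detection
    ranked = sorted(patches, key=score_patch, reverse=True)
    top25 = ranked[:25]                                # module C: keep highest AAM scores
    mosaic = build_mosaic(top25)                       # 5x5 grid of 64x64 AAM heatmaps
    return classify(mosaic)                            # module D: smear-level CNN decision
```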

AAM

The AAM was developed on the Keras platform based on Transformer and U-Net architectures.41,57,58 It comprises a feature extractor, local context aligner, and multi-scale attention modules, as shown in Figure 1E. The custom AAM performed well at extracting information about malaria parasites from images and distinguishing parasites from dye impurity and aperture effects.

Feature extractors

Different feature layers handle different types of information: higher feature layers are more concerned with global information, while lower feature layers are more concerned with local features. The malaria parasite is identified by a purple ring inside the cell, with one or two dark purple nuclei inside the ring. In our system, the feature extractor operates at different scales: at lower scales, fine nuclei can be extracted; at intermediate scales, purple rings can be extracted; and at higher scales, whole-cell features can be extracted. We then utilized U-Net-style up-sampling to bring the feature maps from the different layers to the same size,57 giving the multi-scale feature set

$X = [X_1, X_2, X_3, X_4, X_5]$ (Equation 1)
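A minimal sketch of such a five-scale extractor is given below, assuming simple convolutional blocks with illustrative channel counts and U-Net-style up-sampling back to the 64 × 64 input resolution; the published architecture may differ in depth and width.

```python
# Sketch of a five-scale feature extractor (Equation 1): each scale is
# computed on a progressively pooled map and up-sampled back to 64x64.
# Channel counts and kernel sizes are assumptions, not the paper's config.
import tensorflow as tf
from tensorflow.keras import layers

def multi_scale_features(inp):
    feats, x = [], inp
    for k, ch in enumerate([16, 32, 64, 128, 256]):
        x = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
        up = 2 ** k                                    # factor back to full resolution
        feats.append(layers.UpSampling2D(up)(x) if up > 1 else x)
        x = layers.MaxPooling2D(2)(x)                  # move to the next, coarser scale
    return feats                                       # X = [X1, ..., X5], all 64x64

inp = tf.keras.Input((64, 64, 3))
X = multi_scale_features(inp)                          # five 64x64 feature maps
```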

Local context aligner

Using the encoding process of Transformer (Figure 1F), we used high-scale features to aggregate low-scale features and thereby distinguish malaria parasites and impurity artifacts. Before multi-head attention, position coding is required. Here, we improved the position-coding method according to the characteristics of malaria parasites (i.e., their ring structure). As well as horizontal position coding, vertical encoding (grid position encoding) was also performed. For each position (i, j), we calculated position encoding using the following formulae:

$P_{i,j} = \left[ P_i^{(H)}, P_j^{(V)} \right]$ (Equation 2)

$P_i^{(H)} = \begin{cases} \sin\left( 0.5\,i / 10{,}000^{\,i/d} \right) & \text{if } i \text{ is even} \\ \cos\left( 0.5\,(i-1) / 10{,}000^{\,(i-1)/d} \right) & \text{if } i \text{ is odd} \end{cases}$ (Equation 3)

$P_j^{(V)} = \begin{cases} \sin\left( 0.5\,j / 10{,}000^{\,j/d} \right) & \text{if } j \text{ is even} \\ \cos\left( 0.5\,(j-1) / 10{,}000^{\,(j-1)/d} \right) & \text{if } j \text{ is odd} \end{cases}$ (Equation 4)

where Pi(H) and Pj(V) represent the horizontal and vertical position encodings, respectively; d represents the dimension; and i and j denote the position in the image. A new variable is obtained by summing the features obtained from the feature extraction module and the grid position encoding as follows:

$\bar{X} = X + P.$ (Equation 5)
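The following sketch implements Equations 2, 3, 4, and 5 (grid position encoding) directly; the embedding dimension d = 64 is an assumed value chosen to match the per-head dimension reported below.

```python
# Sketch of the grid (horizontal + vertical) position encoding of
# Equations 2-5; d is the encoding dimension (value assumed).
import numpy as np

def axis_encoding(length, d):
    """sin(0.5*i / 10000^(i/d)) for even i; cos with (i-1) for odd i."""
    p = np.zeros(length)
    for i in range(length):
        if i % 2 == 0:
            p[i] = np.sin(0.5 * i / 10000 ** (i / d))
        else:
            p[i] = np.cos(0.5 * (i - 1) / 10000 ** ((i - 1) / d))
    return p

def grid_position_encoding(height, width, d=64):
    ph = axis_encoding(height, d)          # P^(H): one code per row i
    pv = axis_encoding(width, d)           # P^(V): one code per column j
    P = np.zeros((height, width, 2))
    P[..., 0] = ph[:, None]                # horizontal code at position (i, j)
    P[..., 1] = pv[None, :]                # vertical code at position (i, j)
    return P                               # added to the features: X_bar = X + P
```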

For the local context aligner, the input to the multi-head attention layer consisted of features at two different scales and included the grid position encoding. For multi-head attention, the input was divided into multiple heads to form multiple subspaces, which allowed the model to process information concerned with the differences between malaria parasites and impurities. The lower-dimensional receptive field feature was used to project keys (K) and values (V), and the higher-dimensional receptive field feature was used to project queries (Q). We applied $h$ different learned linear maps to the queries, keys, and values, projecting them to dimensions $d_q$, $d_k$, and $d_v$, respectively. We then performed the self-attention process in parallel on each of the projected versions of queries, keys, and values, yielding $d_{model}$-dimensional output values. These were concatenated and once again projected, resulting in the final values. The multi-head attention mechanism allows the model to consider information from different representation subspaces in different positions.

$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^O, \quad \text{where } \mathrm{head}_i = \mathrm{SelfAttention}(Q W_i^Q, K W_i^K, V W_i^V)$ (Equation 6)

$K = V = \bar{X}_i$ (Equation 7)

$Q = \begin{cases} \bar{X}_{i+1} & \text{if } i < 5 \\ \bar{X}_0 & \text{if } i = 5 \end{cases}$ (Equation 8)

where $W_i^Q \in \mathbb{R}^{d_{model} \times d_q}$, $W_i^K \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{model} \times d_v}$, and $W^O \in \mathbb{R}^{h d_v \times d_{model}}$.

In our model, h = 2; that is, we used two parallel attention heads, with $d_k = d_v = d_{model}/h = 64$ for each head. Because each head has reduced dimensionality, the total computational cost is similar to that of a single attention head with full dimensionality. For multi-head attention, self-attention was used, with input composed of query Q, key K, and value V. The weighting of values was obtained using the following formula42,59,60,61:

$\mathrm{SelfAttention}(Q, K, V) = \mathrm{softmax}\left( \dfrac{Q K^T}{\sqrt{d_k}} \right) V.$ (Equation 9)
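A compact numpy sketch of Equations 6, 7, 8, and 9 is shown below, with queries projected from the higher-scale feature and keys and values from the lower-scale feature; the randomly initialized weight matrices are stand-ins for the learned projections.

```python
# Minimal numpy sketch of the local context aligner's multi-head attention
# (Equations 6-9). Weight matrices are random stand-ins for trained ones.
import numpy as np

def self_attention(Q, K, V):
    dk = K.shape[-1]
    scores = Q @ K.T / np.sqrt(dk)                         # Equation 9
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                     # row-wise softmax
    return w @ V

def multi_head(Xq, Xkv, h=2, d_model=128):
    """Xq: higher-scale features (queries); Xkv: lower-scale (keys/values)."""
    dk = d_model // h                                      # d_k = d_v = d_model / h = 64
    rng = np.random.default_rng(0)
    heads = []
    for _ in range(h):
        Wq = rng.standard_normal((Xq.shape[-1], dk))
        Wk = rng.standard_normal((Xkv.shape[-1], dk))
        Wv = rng.standard_normal((Xkv.shape[-1], dk))
        heads.append(self_attention(Xq @ Wq, Xkv @ Wk, Xkv @ Wv))
    Wo = rng.standard_normal((h * dk, d_model))
    return np.concatenate(heads, axis=-1) @ Wo             # Equation 6
```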

Multi-scale attention

Multi-scale features play different roles in malaria diagnosis. Hence, we developed a multi-scale attention module to strengthen the discriminating features and aggregate the extracted features, formulated as follows61:

$Y = \mathrm{MultiHead}(Q, K, V),$ (Equation 10)

$f(Y_i) = \tanh(W \cdot Y_i + B) \cdot U, \text{ and}$ (Equation 11)

$\mathrm{Att}(Y_i) = \dfrac{\exp(f(Y_i))}{\sum_{j=1}^{L} \exp(f(Y_j))},$ (Equation 12)

where Y denotes the output of the alignment by the local context aligner; W, B, and U are trainable parameters; L is the number of features at different scales; and Att(Yi) serves as the feature map attention. Using the above formulae, we calculated the multi-scale attention weight values. Since cells have a variety of shapes and sizes, malaria parasites can occupy many different places within a cell. Therefore, we assigned different attention weights to the characteristics of different malaria parasites and impurities. Thus, when the inputs were different patches, different features at the same location would receive different attention weights according to Equation 12. Multi-scale attention mechanisms allow analysis to be performed from a global perspective alongside an in-depth and detailed analysis of certain smaller components. Thus, the final classification results are based on aggregated features from multiple scales.
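The following sketch implements Equations 10, 11, and 12 as a softmax-weighted aggregation over the L aligned scale features; W, B, and U are stand-ins for the trainable parameters.

```python
# Sketch of the multi-scale attention weighting of Equations 10-12: each
# scale's aligned feature Y_i gets a softmax weight, and the output is the
# attention-weighted sum over scales.
import numpy as np

def multi_scale_attention(Y, W, B, U):
    """Y: (L, d) aligned features, one row per scale; W: (d, d); B, U: (d,)."""
    f = np.tanh(Y @ W + B) @ U          # f(Y_i), Equation 11
    att = np.exp(f - f.max())
    att /= att.sum()                    # Att(Y_i), Equation 12 (softmax over L scales)
    return att @ Y                      # aggregated multi-scale feature vector
```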

Generation of patches

A patch is a sub-image of an individual RBC (or of overlapping cells that cannot be separated into individual cells) detected in a thin-blood-smear image by a target detection model, and a thin-blood-smear image usually contains multiple patches. These patches may show malaria parasites or impurities. Herein, we use the YOLOv5 target detection model to generate patches,40 the input for which is a blood-smear image taken using a smartphone, and the output is a set of patches, each showing a cell image. YOLOv5 can detect cells of different colors, shapes, and sizes. The images in Figure 2A are original views taken by a smartphone and enlarged views of cell regions, illustrating how YOLOv5 detects cell patches. The green circles in Figure 2A represent detected intact cells; even cells left unlabeled during the annotation process can still be detected by YOLOv5 at prediction time. The yellow circles in Figure 2A mark incomplete cells that were not detected.

Each blood smear can generate 250−350 patches showing cells, with fluctuations in the number of cells identified depending on how the blood smears were performed. Here, we segment patches from a blood smear to reduce the impact of differences relating to cells, image background, and dye impurities. From these sub-images, a smartphone-acquired patch dataset was created and manually annotated by a majority vote of three experts, each with >5 years of experience in microscopy diagnosis of malaria. Some negative samples did contain other objects, such as staining and impurity artifacts. Cells in a blood smear have different sizes, hence the pixel sizes of patches vary, and patches were subsequently resized to 64 × 64 pixels to meet the input requirements of the different classification algorithms employed.
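As a sketch of this step, the code below loads a custom-trained YOLOv5 model through the public ultralytics/yolov5 torch.hub interface, crops each detected cell, and resizes it to 64 × 64 pixels; the weight-file name is hypothetical.

```python
# Sketch of patch generation with YOLOv5; 'malaria_cells.pt' is a
# hypothetical custom weight file, not a released artifact.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="malaria_cells.pt")

def extract_patches(image_path, size=64):
    img = cv2.imread(image_path)
    detections = model(image_path).xyxy[0]      # rows: x1, y1, x2, y2, conf, class
    patches = []
    for x1, y1, x2, y2, conf, cls in detections.tolist():
        crop = img[int(y1):int(y2), int(x1):int(x2)]
        patches.append(cv2.resize(crop, (size, size)))  # 64x64 classifier input
    return patches
```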

Malaria diagnosis of thin-blood-smear images

Using the above steps, we could determine whether a malaria parasite was present in a particular patch. However, owing to its sensitivity to impurities such as dye and aperture effects, the AAM was prone to false positive diagnoses of thin-blood-smear images. Therefore, we added this step to reduce the false positive rate, creating a dataset for the classification of entire blood smears. Figure 2C shows part of the dataset-construction process for the classification of blood smears, using the top 25 patches as an example. First, to determine whether a blood smear contains malaria parasites, it was divided into multiple patches. Second, each patch was fed into the trained AAM and scored. Third, patches were sorted according to their scores, a heatmap was generated for each of the highest-scoring patches, and these heatmaps were stitched into a 320 × 320 × 3 image, as shown in Figure 2C. By repeating the above steps, multiple combined heatmaps were generated, and the categories to which these heatmaps belonged were identified. Finally, a dichotomous CNN model was trained with the data generated above, and the model was used to complete sample diagnosis.
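A minimal sketch of the mosaic-construction step is given below; aam_score and aam_heatmap are assumed wrappers around the trained AAM's patch score and attention heatmap outputs, which the released code may expose differently.

```python
# Sketch of assembling the 320x320x3 diagnostic input: score every patch,
# keep the 25 highest, and tile their 64x64 heatmaps into a 5x5 mosaic.
import numpy as np

def build_mosaic(patches, aam_score, aam_heatmap):
    ranked = sorted(patches, key=aam_score, reverse=True)[:25]
    mosaic = np.zeros((320, 320, 3), dtype=np.uint8)
    for idx, patch in enumerate(ranked):
        r, c = divmod(idx, 5)                   # fill the 5x5 grid row by row
        mosaic[r*64:(r+1)*64, c*64:(c+1)*64] = aam_heatmap(patch)  # 64x64x3 map
    return mosaic                               # input to the dichotomous CNN
```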

Performance optimization and evaluation

Once the model is trained, new test samples can be processed quickly. The average processing time for each blood smear was 1 s using a workstation comprising an NVIDIA RTX 3090 GPU (24 GB), 64 GB RAM, and an Intel Core i9-10900K CPU (3.70 GHz). Six widely used metrics, accuracy (ACC), sensitivity (SENS), precision (PRE), specificity (SPE), F1 score (F1), and the AUC, were employed to evaluate the performance of the system. We utilized average precision (AP) to determine image section metrics as follows:

$\mathrm{ACC} = \dfrac{TP + TN}{TP + TN + FP + FN}$ (Equation 13)

$\mathrm{SENS} = \dfrac{TP}{TP + FN}$ (Equation 14)

$\mathrm{PRE} = \dfrac{TP}{TP + FP}$ (Equation 15)

$\mathrm{SPE} = \dfrac{TN}{TN + FP}$ (Equation 16)

$\mathrm{AP} = \int_0^1 P(R)\, dR$ (Equation 17)

$\mathrm{F1} = \dfrac{2 \times \mathrm{PRE} \times \mathrm{SENS}}{\mathrm{PRE} + \mathrm{SENS}}$ (Equation 18)

$\mathrm{AUC} = \dfrac{\sum_{i \in \text{positive class}} \mathrm{rank}_i - \frac{M(1+M)}{2}}{M \times N}$ (Equation 19)

where $TP$, $FP$, $TN$, and $FN$ denote the numbers of true positives, false positives, true negatives, and false negatives, respectively; $P(R)$ is the precision at recall $R$; $\mathrm{rank}_i$ is the rank of positive sample $i$ when all samples are ordered by predicted score; and $M$ and $N$ are the numbers of positive and negative samples, respectively.
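For reference, the sketch below computes the metrics of Equations 13, 14, 15, 16, and 18 from a confusion matrix and delegates the AUC of Equation 19 to scikit-learn's rank-based implementation; the decision threshold of 0.5 is an assumption.

```python
# Sketch of the evaluation metrics from binary labels and predicted scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    acc = (tp + tn) / len(y_true)               # Equation 13
    sens = tp / (tp + fn)                       # Equation 14
    pre = tp / (tp + fp)                        # Equation 15
    spe = tn / (tn + fp)                        # Equation 16
    f1 = 2 * pre * sens / (pre + sens)          # Equation 18
    auc = roc_auc_score(y_true, y_score)        # rank-based, matches Equation 19
    return dict(ACC=acc, SENS=sens, PRE=pre, SPE=spe, F1=f1, AUC=auc)
```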

Dataset used for performance evaluation

We used SmartMalariaNET (Figshare: https://doi.org/10.6084/m9.figshare.22679839) to evaluate the performance of AIDMAN.62 SmartMalariaNET contains images of Giemsa-stained thin-blood-smear slides taken by smartphones at the Sierra Leone-China Friendship Hospital and the Rokupa Government Hospital. The overall dataset comprises three datasets used to train and test models for detection of patches showing cells, cell classification, and malaria diagnosis from thin-blood-smear images. In the present study, we used 1,822 thin-blood-smear images from 140 patients. Thin-blood-smear images were taken by smartphone cameras attached to the eyepiece of a microscope, which does not require directly obtaining relevant information from patients or subjects. Therefore, this work does not compromise personal privacy, and it satisfies the conditions for exemption from informed consent and ethical review.

Acknowledgments

We thank colleagues at the Sierra Leone-China Friendship Hospital and Sierra Leone-China Friendship Biological Safety Laboratory, Freetown, Sierra Leone, who helped us during the current study.

Author contributions

R.L., S.Y., T.D., T.L., H.C., X.Z., and Y.T. conceived the study, implemented models, and analyzed the results. Y.Z., X.F., and B.L. contributed to the writing. R.L., Y.L., Y.Z., X.F., and B.L. assisted with data collection and labeling. S.Y., T.D., and H.C. designed visualizations and metrics. H.C., X.Z., and Y.T. performed data analysis.

Declaration of interests

The authors declare no competing interests.

Published: August 3, 2023

Footnotes

Supplemental information can be found online at https://doi.org/10.1016/j.patter.2023.100806.

Contributor Information

Xianchao Zhang, Email: zhangxianchao@zjxu.edu.cn.

Hongmin Cai, Email: hmcai@scut.edu.cn.

Yue Teng, Email: yueteng@sklpb.org.

Supplemental information

Document S1. Figures S1–S3 and Tables S1–S4
mmc1.pdf (617.4KB, pdf)
Document S2. Article plus supplemental information
mmc2.pdf (4.1MB, pdf)

Data and code availability

The SmartMalariaNET dataset used in this study has been deposited at Figshare (https://doi.org/10.6084/m9.figshare.22679839).62

References

1. World Health Organization. 2020. World Malaria Report 2020: 20 Years of Global Progress and Challenges. 9240015795.
2. World Health Organization. 2022. World Malaria Report 2022. 9789240064898. https://www.who.int/publications/i/item/9789240064898
3. Feachem R.G.A., Chen I., Akbari O., Bertozzi-Villa A., Bhatt S., Binka F., Boni M.F., Buckee C., Dieleman J., Dondorp A., et al. Malaria eradication within a generation: ambitious, achievable, and necessary. Lancet. 2019;394:1056–1112. doi: 10.1016/S0140-6736(19)31139-0.
4. Ghebreyesus T.A. The malaria eradication challenge. Lancet. 2019;394:990–991. doi: 10.1016/S0140-6736(19)31951-8.
5. Amambua-Ngwa A., Amenga-Etego L., Kamau E., Amato R., Ghansah A., Golassa L., Randrianarivelojosia M., Ishengoma D., Apinjoh T., Maïga-Ascofaré O., et al. Major subpopulations of Plasmodium falciparum in sub-Saharan Africa. Science. 2019;365:813–816. doi: 10.1126/science.aav5427.
6. World Health Organization. 2022. World Malaria Report 2022.
7. Díaz G., González F.A., Romero E. A semi-automatic method for quantification and classification of erythrocytes infected with malaria parasites in microscopic images. J. Biomed. Inf. 2009;42:296–307. doi: 10.1016/j.jbi.2008.11.005.
8. Ohrt C., Sutamihardja M.A., Tang D., Kain K.C. Impact of microscopy error on estimates of protective efficacy in malaria-prevention trials. J. Infect. Dis. 2002;186:540–546. doi: 10.1086/341938.
9. Ruberto C.D., Dempster A., Khan S., Jarra B. Morphological image processing for evaluating malaria disease. Springer; 2001:739–748.
10. Alam M.S., Mohon A.N., Mustafa S., Khan W.A., Islam N., Karim M.J., Khanum H., Sullivan D.J., Haque R. Real-time PCR assay and rapid diagnostic tests for the diagnosis of clinically suspected malaria patients in Bangladesh. Malar. J. 2011;10:175–179. doi: 10.1186/1475-2875-10-175.
11. Masanja I.M., McMorrow M.L., Maganga M.B., Sumari D., Udhayakumar V., McElroy P.D., Kachur S.P., Lucchi N.W. Quality assurance of malaria rapid diagnostic tests used for routine patient care in rural Tanzania: microscopy versus real-time polymerase chain reaction. Malar. J. 2015;14:85–87. doi: 10.1186/s12936-015-0597-3.
12. Wongsrichanalai C., Barcus M.J., Muth S., Sutamihardja A., Wernsdorfer W.H. A review of malaria diagnostic tools: microscopy and rapid diagnostic test (RDT). Am. J. Trop. Med. Hyg. 2007;77(Suppl. 6).
13. Ranasinghe S., Ansumana R., Lamin J.M., Bockarie A.S., Bangura U., Buanie J.A.G., Stenger D.A., Jacobsen K.H. Attitudes toward home-based malaria testing in rural and urban Sierra Leone. Malar. J. 2015;14:80–89. doi: 10.1186/s12936-015-0582-x.
14. Mouatcho J.C., Goldring J.P.D. Malaria rapid diagnostic tests: challenges and prospects. J. Med. Microbiol. 2013;62:1491–1505. doi: 10.1099/jmm.0.052506-0.
15. Obeagu E.I., Chijioke U., Ekelozie I. Malaria rapid diagnostic test (RDTs). Ann. Clin. Lab. Res. 2018;6.
16. Valkiunas G., Iezhova T.A., Krizanauskiene A., Palinauskas V., Sehgal R.N.M., Bensch S. A comparative analysis of microscopy and PCR-based detection methods for blood parasites. J. Parasitol. 2008;94:1395–1401. doi: 10.1645/ge-1570.1.
17. Yin J., Li M., Yan H., Zhou S. Considerations on PCR-based methods for malaria diagnosis in China malaria diagnosis reference laboratory network. Biosci. Trends. 2018;12:510–514. doi: 10.5582/bst.2018.01198.
18. Ramakers C., Ruijter J.M., Deprez R.H.L., Moorman A.F.M. Assumption-free analysis of quantitative real-time polymerase chain reaction (PCR) data. Neurosci. Lett. 2003;339:62–66. doi: 10.1016/s0304-3940(02)01423-4.
19. Snounou G., Viriyakosol S., Jarra W., Thaithong S., Brown K.N. Identification of the four human malaria parasite species in field samples by the polymerase chain reaction and detection of a high prevalence of mixed infections. Mol. Biochem. Parasitol. 1993;58:283–292. doi: 10.1016/0166-6851(93)90050-8.
20. Boyce M.R., O'Meara W.P. Use of malaria RDTs in various health contexts across sub-Saharan Africa: a systematic review. BMC Publ. Health. 2017;17:470–515. doi: 10.1186/s12889-017-4398-1.
21. World Health Organization. 2018. Malaria Rapid Diagnostic Test Performance.
22. Poti K.E., Sullivan D.J., Dondorp A.M., Woodrow C.J. HRP2: transforming malaria diagnosis, but with caveats. Trends Parasitol. 2020;36:112–126. doi: 10.1016/j.pt.2019.12.004.
23. Bosco A.B., Nankabirwa J.I., Yeka A., Nsobya S., Gresty K., Anderson K., Mbaka P., Prosser C., Smith D., Opigo J., et al. Limitations of rapid diagnostic tests in malaria surveys in areas with varied transmission intensity in Uganda 2017-2019: implications for selection and use of HRP2 RDTs. PLoS One. 2020;15. doi: 10.1371/journal.pone.0244457.
24. Mathison B.A., Pritt B.S. Update on malaria diagnostics and test utilization. J. Clin. Microbiol. 2017;55:2009–2017. doi: 10.1128/JCM.02562-16.
25. Yitbarek T., Nega D., Tasew G., Taye B., Desta K. Performance evaluation of malaria microscopists at defense health facilities in Addis Ababa and its surrounding areas, Ethiopia. PLoS One. 2016;11. doi: 10.1371/journal.pone.0166170.
26. Olugboja A., Wang Z. Malaria parasite detection using different machine learning classifier. IEEE; 2017:246–250.
27. Pattanaik P., Mittal M., Khan M.Z., Panda S. Malaria detection using deep residual networks with mobile microscopy. Journal of King Saud University-Computer and Information Sciences. 2020.
28. Liang Z., Powell A., Ersoy I., Poostchi M., Silamut K., Palaniappan K., Guo P., Hossain M.A., Sameer A., Maude R.J. CNN-based image analysis for malaria diagnosis. IEEE; 2016:493–496.
29. Fatima T., Farid M.S. Automatic detection of Plasmodium parasites from microscopic blood images. J. Parasit. Dis. 2020;44:69–78. doi: 10.1007/s12639-019-01163-x.
30. Kashtriya V., Doegar A., Gupta V., Kashtriya P. Identifying malaria infection in red blood cells using optimized step-increase convolutional neural network model. Int. J. Innovative Technol. Explor. Eng. 2019;8:813–818.
31. Masud M., Alhumyani H., Alshamrani S.S., Cheikhrouhou O., Ibrahim S., Muhammad G., Hossain M.S., Shorfuzzaman M. Leveraging deep learning techniques for malaria parasite detection using mobile application. Wireless Commun. Mobile Comput. 2020;2020.
32. Rajaraman S., Antani S.K., Poostchi M., Silamut K., Hossain M.A., Maude R.J., Jaeger S., Thoma G.R. Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images. PeerJ. 2018;6. doi: 10.7717/peerj.4568.
33. Li D., Ma Z. Residual attention learning network and SVM for malaria parasite detection. Multimed. Tool. Appl. 2022;81:10935–10960.
34. Khan S.H., Shah N.S., Nuzhat R., Majid A., Alquhayz H., Khan A. Malaria parasite classification framework using a novel channel squeezed and boosted CNN. Microscopy. 2022;71:271–282. doi: 10.1093/jmicro/dfac027.
35. Huq A., Pervin M.T. Robust deep neural network model for identification of malaria parasites in cell images. 2020:1456–1459.
36. Kumar A., Sarkar S., Pradhan C. Malaria disease detection using CNN technique with SGD, RMSprop and ADAM optimizers. In: Dash S., Acharya B.R., Mittal M., Abraham A., Kelemen A., eds. Deep Learning Techniques for Biomedical and Health Informatics. Springer International Publishing; 2020:211–230.
37. Shewajo F.A., Fante K.A. Tile-based microscopic image processing for malaria screening using a deep learning approach. BMC Med. Imag. 2023;23:39. doi: 10.1186/s12880-023-00993-9.
38. Loh D.R., Yong W.X., Yapeter J., Subburaj K., Chandramohanadas R. A deep learning approach to the screening of malaria infection: automated and rapid cell counting, object detection and instance segmentation using Mask R-CNN. Comput. Med. Imag. Graph. 2021;88. doi: 10.1016/j.compmedimag.2020.101845.
39. Koirala A., Jha M., Bodapati S., Mishra A., Chetty G., Sahu P.K., Mohanty S., Padhan T.K., Mattoo J., Hukkoo A. Deep learning for real-time malaria parasite detection and counting using YOLO-mp. IEEE Access. 2022;10:102157–102172.
40. YOLOv5. 2020. https://github.com/ultralytics/yolov5
41. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser Ł., Polosukhin I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017;30.
42. Dosovitskiy A., Beyer L., Kolesnikov A., Weissenborn D., Zhai X., Unterthiner T., Dehghani M., Minderer M., Heigold G., Gelly S. An image is worth 16x16 words: transformers for image recognition at scale. Preprint at arXiv. 2020. doi: 10.48550/arXiv.2010.11929.
43. Poostchi M., Silamut K., Maude R.J., Jaeger S., Thoma G. Image analysis and machine learning for detecting malaria. Transl. Res. 2018;194:36–55. doi: 10.1016/j.trsl.2017.12.004.
44. Kuo P.-C., Cheng H.-Y., Chen P.-F., Liu Y.-L., Kang M., Kuo M.-C., Hsu S.-F., Lu H.-J., Hong S., Su C.-H., et al. Assessment of expert-level automated detection of Plasmodium falciparum in digitized thin blood smear images. JAMA Netw. Open. 2020;3. doi: 10.1001/jamanetworkopen.2020.0206.
45. Saeed M.A., Jabbar A. "Smart diagnosis" of parasitic diseases by use of smartphones. J. Clin. Microbiol. 2018;56:e01469-17. doi: 10.1128/JCM.01469-17.
46. Vijayalakshmi A., Rajesh Kanna B. Deep learning approach to detect malaria from microscopic images. Multimed. Tool. Appl. 2020;79:15297–15317.
47. Yang F., Poostchi M., Yu H., Zhou Z., Silamut K., Yu J., Maude R.J., Jaeger S., Antani S. Deep learning for smartphone-based malaria parasite detection in thick blood smears. IEEE J. Biomed. Health Inform. 2020;24:1427–1438. doi: 10.1109/JBHI.2019.2939121.
48. Pirnstill C.W., Coté G.L. Malaria diagnosis using a mobile phone polarized microscope. Sci. Rep. 2015;5:1–13. doi: 10.1038/srep13368.
49. Yu H., Yang F., Rajaraman S., Ersoy I., Moallem G., Poostchi M., Palaniappan K., Antani S., Maude R.J., Jaeger S. Malaria Screener: a smartphone application for automated malaria screening. BMC Infect. Dis. 2020;20:825. doi: 10.1186/s12879-020-05453-1.
50. Zhou Y., Cheung Y.-M. Probabilistic rank-one discriminant analysis via collective and individual variation modeling. IEEE Trans. Cybern. 2020;50:627–639. doi: 10.1109/TCYB.2018.2870440.
51. Zhou Y., Lu H., Cheung Y.-M. Probabilistic rank-one tensor analysis with concurrent regularizations. IEEE Trans. Cybern. 2021;51:3496–3509. doi: 10.1109/TCYB.2019.2914316.
52. Vinkeles Melchers N.V.S., Coffeng L.E., de Vlas S.J., Stolk W.A. Standardisation of lymphatic filariasis microfilaraemia prevalence estimates based on different diagnostic methods: a systematic review and meta-analysis. Parasites Vectors. 2020;13:302–309. doi: 10.1186/s13071-020-04144-9.
53. Büscher P., Gonzatti M.I., Hébert L., Inoue N., Pascucci I., Schnaufer A., Suganuma K., Touratier L., Van Reet N. Equine trypanosomosis: enigmas and diagnostic challenges. Parasites Vectors. 2019;12:1–8. doi: 10.1186/s13071-019-3484-x.
54. Sanchez E., Vannier E., Wormser G.P., Hu L.T. Diagnosis, treatment, and prevention of Lyme disease, human granulocytic anaplasmosis, and babesiosis: a review. JAMA. 2016;315:1767–1777. doi: 10.1001/jama.2016.2884.
55. Abdurahman F., Fante K.A., Aliy M. Malaria parasite detection in thick blood smear microscopic images using modified YOLOV3 and YOLOV4 models. BMC Bioinf. 2021;22:112. doi: 10.1186/s12859-021-04036-4.
56. Li M., Comba I.Y., Eberly A.R., Abu Saleh O.M. Lord of the "rings": a case of Plasmodium falciparum. IDCases. 2022;27. doi: 10.1016/j.idcr.2022.e01407.
57. Long J., Shelhamer E., Darrell T. Fully convolutional networks for semantic segmentation. Preprint at arXiv. 2015:3431–3440. doi: 10.48550/arXiv.1411.4038.
58. Fan Z., Dan T., Yu H., Liu B., Cai H. Single fundus image super-resolution via cascaded channel-wise attention network. IEEE; 2020:1984–1987.
59. Shen J., Tang X., Dong X., Shao L. Visual object tracking by hierarchical attention siamese network. IEEE Trans. Cybern. 2020;50:3068–3080. doi: 10.1109/TCYB.2019.2936503.
60. Zhao H., Jia J., Koltun V. Exploring self-attention for image recognition. Preprint at arXiv. 2020:10076–10085. doi: 10.48550/arXiv.2004.13621.
61. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. Preprint at arXiv. 2016:770–778. doi: 10.48550/arXiv.1512.03385.
62. Liu R., Liu T., Dan T., Yang S., Li Y., Luo B., Zhuang Y., Fan X., Zhang X., Cai H., Teng Y. 2023. Original images (1296 pics) of AIDMAN.
63. Liu R., Liu T., Dan T., Yang S., Li Y., Luo B., Zhuang Y., Fan X., Zhang X., Cai H., Teng Y. 2023. AIDMAN steps; pp. 1–3.
64. Teng Y., Liu R., Liu T., Dan T., Yang S., Li Y., Luo B., Zhuang Y., Fan X., Zhang X., Cai H. 2023. AIDMAN_CODE.
