IEEE - PMC COVID-19 Collection
. 2021 Feb 22;21(9):11084–11093. doi: 10.1109/JSEN.2021.3061178

Face Mask Assistant: Detection of Face Mask Service Stage Based on Mobile Phone

Yuzhen Chen 1, Menghan Hu 1,, Chunjun Hua 1, Guangtao Zhai 2, Jian Zhang 1, Qingli Li 1, Simon X Yang 3
PMCID: PMC8768979  PMID: 36820762

Abstract

Coronavirus Disease 2019 (COVID-19) has spread all over the world since its massive outbreak in December 2019 and has inflicted enormous losses: both confirmed cases and deaths have reached frightening numbers. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of COVID-19, can be transmitted by small respiratory droplets. To curb its spread at the source, wearing masks is a convenient and effective measure. In most cases, people use face masks in a high-frequency but short-duration way, and often do not know which service stage a mask is in. To solve this problem, we propose a detection system based on the mobile phone. We first extract four features from the gray level co-occurrence matrices (GLCMs) of the face mask’s micro-photos. Next, a three-outcome detection system is built using the K Nearest Neighbor (KNN) algorithm. Validation experiments show that our system reaches an accuracy of 82.87% (measured by macro-measures) on the testing dataset. The precision of Type I ‘normal use’ and the recall of Type III ‘not recommended’ reach 92.00% and 92.59%, respectively. In future work, we plan to extend detection to more mask types. This work demonstrates that the proposed mobile microscope system can serve as an assistant for face masks in use, which may play a positive role in fighting COVID-19.

Keywords: COVID-19 pandemic, use time of face mask, textural feature, SARS-CoV-2, machine learning, image processing

I. Introduction

COVID-19 has wreaked havoc around the world, and its spread had not been fully curbed by June 2020. According to the situation report of the World Health Organization (WHO), there had been 9,653,048 confirmed cases and 491,128 deaths by 27 June 2020 [1]. Recently, many researchers have developed a variety of detection methods for COVID-19, such as reverse transcription-polymerase chain reaction (RT-PCR) technology [2], [3], X-ray [4] and chest CT imaging [5]. In addition to breakthroughs in testing techniques such as respiratory monitoring [6], SARS-CoV-2 must be controlled at the source to prevent a sustained surge in confirmed cases and deaths. Small respiratory droplets can transmit SARS-CoV-2, which is known to be transmissible from presymptomatic and asymptomatic individuals [7]. One method to reduce the virus’s spread is to wear masks in public [7]. The WHO’s advice on using face masks [8], analysis [7] and research [9] all support that wearing masks can prevent the spread of potential viruses during a pandemic. One evidence review even regards wearing masks in public as a symbol of social solidarity in the response to the worldwide pandemic [10]. The WHO recommends that surgical face masks used by health workers be discarded once used, and that extended use of face masks can be considered during severe shortages [11], because face masks only work when worn within their period of validity. On the other hand, in daily life the majority of people wear surgical face masks in a short-duration but high-frequency way and in low- or middle-risk environments. A face mask may then be used two or three times, or even four or five times if the environment is relatively safe and the mask is maintained properly, which is reasonable and common, especially when face masks are in shortage.

In this case, the service life of a mask undoubtedly depends on the use method, use intensity, environmental conditions, mask material, etc. In daily life, people often forget the service stage of the current mask after having used it in a high-frequency but short-duration way. Meanwhile, the service life of a face mask decreases differently after use under different conditions, so the remaining time of effective protection varies. For face masks whose use time is unknown, continued use may lead to poor protection and other unexpected incidents, while discarding them outright wastes masks, especially when masks are in shortage during the epidemic. It is therefore necessary to detect the service stage of face masks, yet there is little research on this topic. We therefore aim to determine which service stage a face mask is in, which may play a positive role in protecting uninfected people and reducing the spread of the virus.

According to Javid et al., wearing a mask in public may become our unified action in the fight against COVID-19 [12]. One environmental effect of this pandemic is a sudden surge in the use of plastic products, such as gloves, plastic sheeting and respirators [13]–[15], to protect the general public, patients, and health and service workers. Some reports indicate that disposable masks cause enormous plastic waste around the world [16], [17]. It is estimated that thousands of millions of masks will be consumed every day all over the world, so it is important to make full use of each mask. Given the short service life and worldwide use of masks, centralized detection with special machines is impractical. Hence, an easy-to-operate, portable testing device that can detect anytime and anywhere is urgently needed. Therefore, in this research, we propose a portable mask service stage detection system based on the mobile phone. Its work procedure is as follows: 1) the user takes a micro-photo of the selected region of the mask being used via a mobile microscope (TIPSCOPE, Convergence (Wuhan) Technology Ltd., China) with a magnification of 400x; 2) the micro-photo of the mask is uploaded in the WeChat Applet; 3) the result of back-end detection is then returned through the WeChat Applet. This detection device is simple, easy to operate and effective.

In the common fight, ordinary people often wear surgical masks, while N95 masks are only recommended for healthcare workers or professionals at high risk of coming into contact with patients [18]. Radonovich et al. analyze data from 7 health care delivery systems across 4 seasons of peak viral respiratory illness, and find that in their trial, wearing N95 respirators and wearing medical masks result in no significant difference in the incidence of laboratory-confirmed influenza [19]. Additionally, research suggests that N95 respirators should not be recommended for the general public, namely, those not in close contact with influenza or suspected patients [18]. Using N95 respirators may cause discomfort, such as headaches [20]. A previous study [21] has even reported an inverse relationship between the level of N95 respirator wearing and the risk of clinical respiratory illness. Therefore, we choose the surgical mask, a disposable and the most widely used type, as the research object. This makes the experimental data more representative, and ultimately our detection system can benefit more people. In this study, after obtaining the micro-graphs of masks at different wearing periods, we analyze the texture features of the photos.

Methods of texture analysis include fractal analysis, Fourier transformation and the gray level co-occurrence matrix (GLCM) [22]. Haralick et al. report that the GLCM is a method to quantify the spatial relationship between adjacent pixels in an image [23]. The GLCM is widely used in disease detection [24]–[26], skin texture analysis [27], defect detection [28], fabric classification [29], egg fertility identification [30], etc. Safira et al. report a study to detect nail abnormalities: they extract textural characteristics with the GLCM and choose KNN as the classifier for abnormal nails [26]. Gustian et al. classify troso fabrics using a combination of two methods, Support Vector Machine (SVM) with One-Against-All and SVM with One-Against-One, based on the GLCM and Principal Component Analysis (PCA); the GLCM-based methods outperform the PCA-based ones [29]. The results of Saifullah et al. also show that combining the GLCM with backpropagation can provide a 93% accuracy rate in detecting egg fertility [30]. Given the above, the GLCM is feasible and efficient for image texture feature analysis. Therefore, we choose the GLCM to analyze the texture features of masks.

Due to the K Nearest Neighbor method’s simple implementation and distinguished performance in classification tasks [31], it is widely employed in text categorization [32], [33], the medical domain [34], eating pattern exploration [35], leaf disease identification [36], indoor localization [37], etc. Hamed et al. compare the performance of the KNN variant (KNNV) algorithm, which is derived from KNN, and three other algorithms in the classification of COVID-19 patients, and find that the KNNV algorithm classifies COVID-19 patients more efficiently and accurately [34]. On the basis of the texture features extracted from micro-photos taken at different time periods, this paper employs the KNN algorithm to establish classifiers for detecting the use time of a mask.

In this paper, we propose a portable face mask service stage detection system, which is easy to operate and can work anytime and anywhere. It is based on the texture features of face masks and ground-truth mask service stage data. To further quantify the differences between masks in different periods, texture analysis is introduced. After extracting texture features from the micro-photos of face masks in different use periods, the KNN algorithm is applied to detect which service period the face mask belongs to. Testing data are afterward used to validate the effectiveness of the proposed system.

The main contributions of this paper are threefold. First, we relate the texture features of face masks’ micro-photos to their corresponding service stages through texture analysis: based on the GLCM, texture features are extracted from micro-photos obtained by a high-magnification lens attached to the mobile phone. Second, we propose a detection method that judges the service stage of face masks with the KNN algorithm. Finally, building on the two contributions above, we have implemented a detection WeChat Applet for face masks using data collected from three persons wearing face masks in daily life, which may contribute to protecting uninfected people and reducing the spread of the virus.

II. Methods

A brief introduction to the proposed face mask service stage detection method follows. We first use the portable mobile microscope to obtain micro-photos of face masks. From each image, we extract texture feature data: using the GLCM, we capture the contrast, energy, correlation and homogeneity of the face mask under inspection. Subsequently, the KNN algorithm builds a classification model from the input texture features and the corresponding ground-truth use time. Finally, we package the system into a library function in Matlab and embed it in the WeChat Applet we develop, an approach proved effective by similar previous research [38], [39].

A. Overview of Data Acquisition

As for the detection device, Fig. 1 displays its hardware configuration: a mobile microscope (TIPSCOPE, Convergence (Wuhan) Technology Ltd., China), a phone with an available system camera, etc. The detachable mobile microscope, the key piece of hardware, is chosen to display the details of face masks, viz. ash particles, droplets or other debris, more explicitly. It should be noted that the maximum magnification can reach 400x when the system camera and mobile microscope work together. In this research, we use the system camera without additional magnification to cover a larger area of the face mask.

Fig. 1.

The overview of the portable detection device’s “Hardware”. The “Hardware” is composed of a mobile microscope, a phone with available system camera, etc. Specifically, a) and b) represent the mobile microscope and system camera, respectively; and c) shows the final removable device mainly assembled by a) and b).

Fig. 2 shows how the experimental data are collected. To ensure the validity of the collected data, we photograph three inside locations of the face mask each time: the left location (about one-third of the face mask, close to the left), the middle location (about two-thirds of the face mask) and the right location (about one-third of the face mask, close to the right), as shown in Fig. 3. These spots are very close to the nose and mouth, so we believe they can capture the details we want, such as droplets and dust. Under the magnification of 400x, these three regions may cover about 1% of the mask’s area.

Fig. 2.

Working scenario of the portable detection device. While the device is in use, the flash must always be “on” to provide enough light.

Fig. 3.

The three inside photographed locations of the face mask: the left location (about one-third of the face mask, close to the left), the middle location (about two-thirds of the face mask) and the right location (about one-third of the face mask, close to the right).

To cover more usage scenarios, the face masks are worn under different conditions. Meanwhile, the flash of the phone’s camera should always be “on” to provide enough light. The resolution of the phone’s system camera should meet a minimum requirement; the higher, the better. We collect images of a face mask used from day zero (new) to day five; that is, the mask starts in Type I and is then used until it turns into Type II, then Type III. Under the conditions above, three images are photographed immediately after the face mask has been used for an arbitrary hour of the daytime, and the face mask is then kept in a dry and clean environment for the following analysis. It is worth considering that ambient conditions such as temperature, humidity and air quality also have an impact on the testing. Owing to the relatively good preservation that we maintain, we hold that the influence of ambient conditions over a short period of time is relatively small, so we ignore it.

B. Gray Level Co-Occurrence Matrix (GLCM)

Texture can be used to characterize the tonal or gray-level variations in an image [40]. The gray level co-occurrence matrix (GLCM) is chosen to disclose the texture features hidden in the face masks. The GLCM is a second-order statistical method that counts how often pairs of pixels with given gray levels occur in an image under a specified spatial relation [41]. The co-occurrence matrix uses edge information to embed the distribution of gray-scale transitions [42]. Because much of the required information is embedded in the GLCM, it emerges as a simple but effective technique.

Fourteen measures of textural features are proposed by Haralick et al., some of which relate to specific textural characteristics of the image [23]. In our study, we choose contrast, correlation, energy and homogeneity as measures. Specifically, contrast refers to the drastic change in gray level between adjacent pixels; high-contrast images have high spatial frequencies. Correlation represents the linear correlation in an image: the higher the correlation value, the more linear the gray-scale relationship between adjacent pixel pairs. Energy stands for texture uniformity or pixel pair repetitions; high energy is produced when the distribution of gray-level values is constant or periodic. Homogeneity is sensitive to the presence of near-diagonal elements in a GLCM, indicating the similarity of adjacent pixels in gray scale.

As the number of uses increases, wear changes the micro-structure of some parts of the mask, which leads to differences between worn-out and unworn parts. Thus, images of the mask at different service stages may vary in correlation and energy. Meanwhile, due to human respiration, coughing and other respiratory activities, more and more small particulate impurities and droplets adhere to the inside surface of the face mask. These probably cause differences in the four measures mentioned above among images of different service stages.

Every element of the GLCM contains second-order statistics: the probability of a transition between gray levels $i$ and $j$ for a particular displacement $d$ and angle $\theta$, labeled as $p(i,j)$ (normalized) [43]. Let $N_g$ be the number of gray levels in the image; thus, the size of the GLCM is $N_g \times N_g$. In this research, we set $N_g = 8$. Subsequently, the measures are calculated as

$$\text{Contrast} = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} (i-j)^2\, p(i,j) \tag{1}$$

$$\text{Correlation} = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} \frac{(i-\mu)(j-\mu)\, p(i,j)}{\sigma^2} \tag{2}$$

$$\text{Energy} = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} p(i,j)^2 \tag{3}$$

$$\text{Homogeneity} = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} \frac{p(i,j)}{1+|i-j|} \tag{4}$$

where $\mu$ is the mean of the GLCM and $\sigma^2$ stands for the variance of the GLCM.
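The text above fixes 8 gray levels but leaves some pipeline details open, so the following NumPy sketch makes its own assumptions: a displacement of (dx, dy) = (1, 0), uniform quantization to 8 levels, and the common 1/(1+|i−j|) homogeneity kernel; the function names are ours.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray level co-occurrence matrix for one displacement."""
    # Quantize the image to `levels` gray levels (the paper uses 8).
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).round().astype(int)
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1  # count each pixel-pair transition
    return M / M.sum()

def haralick_features(p):
    """Contrast, correlation, energy and homogeneity of a normalized GLCM."""
    n = p.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    contrast = ((i - j) ** 2 * p).sum()
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12)
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, correlation, energy, homogeneity])
```

A constant image gives zero contrast and maximal energy/homogeneity, while a fine checkerboard gives maximal contrast, which matches the qualitative descriptions of the measures above.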

C. K Nearest Neighbor (KNN) Algorithm

Considering that this task is relatively simple, the model can be constructed with a relatively simple pattern recognition method. At the same time, in view of deployment to the mobile phone terminal, the established model should be easy to deploy and convenient to maintain. Compared with other algorithms, thanks to its simple implementation and significant classification performance, KNN is a very popular method in statistics and ranks among the top ten data mining algorithms [44]–[47].

The KNN algorithm differs from model-based methods, which first use training samples to build a model and then predict the test samples through the learned model [48]–[50]. No training phase is required for the model-free KNN method. Instead, it conducts classification tasks through the following procedure: first, training samples are attached with labels; next, the distance between the test sample and the training samples in each label is calculated; subsequently, after comparing the distances and obtaining the test sample’s nearest neighbors, the serial number is obtained; finally, the classification is produced, as shown in Fig. 4. Considering the application scenario in our work, we utilize the Euclidean distance, computed as:

$$d\left(\mathbf{x}, \mathbf{x}_i^{(l)}\right) = \sqrt{\sum_{m} \left(x_m - x_{i,m}^{(l)}\right)^2}, \qquad l = 1, \dots, n \tag{5}$$

where $\mathbf{x}$ is the test sample, $\mathbf{x}_i^{(l)}$ is the $i$-th training sample with label $l$, and $n$ stands for the number of labels.
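The procedure above amounts to a handful of lines; a minimal sketch under Euclidean distance (the function name, toy features in the usage example, and the first-seen tie-breaking of `Counter.most_common` are our choices, not the paper's):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=6):
    """Classify one test sample by majority vote among its k nearest
    training samples under the Euclidean distance."""
    d = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # distance to every training sample
    nearest = np.argsort(d)[:k]                    # indices of the k nearest neighbors
    votes = Counter(y_train[i] for i in nearest)   # count labels among the neighbors
    return votes.most_common(1)[0][0]              # majority label wins
```

With the 4-dimensional texture vectors (contrast, correlation, energy, homogeneity) as rows of `X_train`, this reproduces the classification step; k = 6 follows the value found optimal later in the paper.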

Fig. 4.

The flowchart of the KNN algorithm. First, labels are attached to the training samples. For every test sample, the distance $d$ between the test sample and the training samples in each label is calculated following Equation 5. By comparing the distances, the test sample’s nearest neighbors, viz. those with the minimum distance, are obtained, and the serial number $l$ is then produced. Eventually, the final classification is set as label $l$.

The main task in our research is to find the optimal $K$ value, according to classification performance, after carrying out the KNN classification.

D. Portable Device Based on Mobile Phone for Detecting Use Time of Mask

After optimization, the system is embedded in the WeChat Applet we have developed. To enrich our experimental data, we extend the data collection time from the first day to the fifth day (about one hour of use per day). It is generally recommended to replace a surgical mask every four hours: if a face mask is used for a long time, large particles are blocked on the mask surface or ultrafine particles are blocked in the pores of the filter material, resulting in decreased filtration efficiency and increased respiratory resistance [51]. According to the detection time, the detection results fall into three categories, viz. Type I: normal use (day 0 to 1, i.e., the face mask can be used safely), Type II: early warning (day 2 to 3, i.e., the face mask can still be used safely but is close to the end of its service life) and Type III: not recommended (day 4 to 5, i.e., the face mask can be used in a shortage, but it is better to change to a new one). We choose the ‘day’ as the unit because people use face masks in a high-frequency but short-duration way, and ordinary people hardly wear a face mask for 4 hours continuously. Additionally, since the same face mask is used several times, we classify the face masks of day 4 and day 5 as the type ‘not recommended’ to counterbalance the accumulated number of uses.
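The day-to-type rule above is a simple threshold lookup; a sketch (the function name and the 'discard' fallback for masks worn beyond day 5 are our assumptions, not stated in the text):

```python
def service_stage(days_used):
    """Map cumulative days of wear (about one hour per day) to the
    three detection categories used by the system."""
    if days_used <= 1:
        return "Type I: normal use"         # day 0 (new) to day 1
    if days_used <= 3:
        return "Type II: early warning"     # day 2 to day 3
    if days_used <= 5:
        return "Type III: not recommended"  # day 4 to day 5
    return "discard"                        # beyond the collected range (assumption)
```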

Fig. 5 shows the workflow of the detection. After we obtain the micro-photo of the face mask with the mobile microscope and register an account in the WeChat Applet, the micro-image is uploaded to the back-end of the system. The detection result is then sent back as a message from the back-end within five seconds.

Fig. 5.

The workflow chart of the face mask detection system. a) stands for collecting the micro-image of the face mask being used; b), with its branches b1), b2) and b3), represents registering an account used to receive the detection result and the subsequent uploading task; c) is the result sent by the WeChat Applet from the back-end. (The face mask used in the sample is in service stage Type I: normal use.)

E. Evaluation Metrics

To verify the efficiency of the detection system, the following evaluation indicators are considered: confusion matrix, accuracy, precision, recall, $F_1$ score, micro-measures and macro-measures.

  • 1)

    Confusion matrix: we assume that “Positive” means the positive samples and “Negative” means the negative samples, while “True” indicates the prediction is right and “False” that it is wrong. As a result, “TP” and “TN” mean that a positive sample is classified as “Positive” and that a negative sample is labeled as “Negative”, respectively; “FP” and “FN” mean that a negative sample is labeled as “Positive” and that a positive sample is classified as “Negative”. The four indicators make up the confusion matrix.

  • 2)
    Accuracy: a ratio used to estimate the classification ability of a model, ranging from 0 to 1. Generally speaking, the larger the accuracy, the better the classification. It is calculated by:
    $$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{6}$$
  • 3)
    Precision: precision evaluates the classification ability on the positive samples, ranging from 0 to 1. Obviously, the larger the precision, the more effective the system. It is computed by:
    $$\text{Precision} = \frac{TP}{TP + FP} \tag{7}$$
  • 4)
    Recall: a ratio from 0 to 1; the closer it is to 1, the better the system. The calculation equation is:
    $$\text{Recall} = \frac{TP}{TP + FN} \tag{8}$$
  • 5)
    $F_1$ score: the harmonic mean of recall and precision. In this study, we weight recall and precision equally, attaching a weight of 0.5 to each. It is calculated by:
    $$F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{9}$$
  • 6)
    Micro-measures: metrics calculated globally by counting the total numbers of true positives, false negatives and false positives. They are computed as:
    $$\text{Precision}_{micro} = \frac{\sum_{l=1}^{L} TP_l}{\sum_{l=1}^{L} (TP_l + FP_l)}, \qquad \text{Recall}_{micro} = \frac{\sum_{l=1}^{L} TP_l}{\sum_{l=1}^{L} (TP_l + FN_l)} \tag{10}$$
    where $L$ means the total number of labels.
  • 7)
    Macro-measures: metrics calculated for each label and then averaged with weights (normally the weights are equal, except when the number of samples per label varies greatly).
    $$\text{Precision}_{macro} = \frac{1}{L}\sum_{l=1}^{L} \text{Precision}_l, \qquad \text{Recall}_{macro} = \frac{1}{L}\sum_{l=1}^{L} \text{Recall}_l, \qquad F_{1,macro} = \frac{1}{L}\sum_{l=1}^{L} F_{1,l} \tag{11}$$
    where $L$ means the total number of labels, and $F_{1,l}$ is the $F_1$ score of each label calculated by Equation 9.
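All of these measures can be read off a single confusion matrix whose rows are true labels and columns are predicted labels. A sketch of the computation (for a single-label multi-class task, micro precision, recall and F1 coincide, which is why Table I later shows one value for all three; the toy matrix in the test is illustrative, not the paper's data):

```python
import numpy as np

def micro_macro(cm):
    """Micro and macro precision/recall/F1 from a confusion matrix
    (rows = true labels, columns = predicted labels)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                     # correct predictions per label
    prec = tp / cm.sum(axis=0)           # per-label precision (column sums = predicted counts)
    rec = tp / cm.sum(axis=1)            # per-label recall (row sums = true counts)
    f1 = 2 * prec * rec / (prec + rec)   # per-label F1
    micro = tp.sum() / cm.sum()          # micro precision = recall = F1 here
    return {"micro": micro,
            "macro_precision": prec.mean(),
            "macro_recall": rec.mean(),
            "macro_f1": f1.mean()}
```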

III. Experimental Results and Analysis

The life span of a face mask depends little on who wears it but much on the conditions of the use environment: what people do while wearing face masks influences the masks’ life span. Accordingly, during the experiments, the use conditions of the face masks cover speaking frequently, talking with others, running, shopping in supermarkets, shopping in vegetable markets, riding and wandering, etc. We collect the micro-images of selected regions of masks from day 0 (new) to day 5, and obtain the sample labels (Type I: day 0 (new) to day 1, Type II: day 2 to day 3 and Type III: day 4 to day 5) according to the use time. Each time the face mask has been used for an arbitrary hour of the daytime, three images are photographed immediately, and the face mask is then kept in a dry and clean environment for the following analysis. In total, we collect 87 micro-images of surgical masks via the mobile microscope.

Each time the three micro-photos (left, middle and right) are photographed, they are first averaged to eliminate disturbance, producing one final training sample. As a result, after extracting texture features, we obtain 29 sets of training samples. The original 87 micro-images are used to carry out the validation experiments. The evaluation indicators mentioned above are elaborated as follows.
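The fusion of the three locations into one sample can be sketched as follows; we assume the averaging is done on the extracted feature vectors (the text does not state whether photos or features are averaged), which turns the 87 images into the 29 training samples mentioned above:

```python
import numpy as np

def fuse_locations(left, middle, right):
    """Average the texture-feature vectors of the left, middle and right
    micro-photos into one training sample (assumption: feature-space mean)."""
    return np.mean([left, middle, right], axis=0)
```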

A. Experimental Results

To examine the detailed classification results over use time, we plot the confusion matrix of the three types, as demonstrated in Fig. 6. As can be seen from the results, 71 of the 87 samples, viz. 81.61%, are predicted correctly, which suggests the model performs relatively well on the whole.

Fig. 6.

The confusion matrix of the classification results. Each row corresponds to the true label and each column to the predicted label. Type I, Type II and Type III represent ‘normal use’, ‘early warning’ and ‘not recommended’, respectively.

To quantify the efficiency, the overall indicators of the system are shown in Table I, and the more detailed evaluation metrics of each type are shown in Table II. In terms of both micro-measures and macro-measures, each indicator exceeds 81%, suggesting that our system performs relatively well. Judging from the equations of micro- and macro-measures, macro-measures are the more convincing of the two, and the proposed system performs better under macro-measures than micro-measures, with the macro-precision reaching 82.87%, which supports the efficiency of the detection system. Furthermore, as revealed in Table II, the $F_1$ scores of all types are over 80%, and the precision of Type I and the recall of Type III even reach 92%, indicating the system’s ability to work efficiently. However, note that the precision of Type III is only 71.43%: because most false predictions of Types I and II are labeled Type III, the precision of Type III decreases.

TABLE I. The Overall Metrics of Types I, II and III.

Micro measures   Micro Precision   81.61%
                 Micro Recall      81.61%
                 Micro $F_1$       81.61%
Macro measures   Macro Precision   82.87% ± 8.50%
                 Macro Recall      81.98% ± 7.50%
                 Macro $F_1$       81.66% ± 1.40%

TABLE II. The Metrics of Types I, II and III, Respectively.

Metrics    Precision   Recall    $F_1$
Type I     92.00%      76.67%    83.64%
Type II    85.19%      76.67%    80.70%
Type III   71.43%      92.59%    80.65%

B. Influence of K Factor on the Efficiency of System

As mentioned above, we aim to obtain the optimal $K$ for the most effective detection system. In the KNN algorithm, K plays a key role in system performance. If the K factor is too small, the following problems may arise: 1) overfitting; 2) susceptibility to outliers; and 3) an overly complex system. If the K factor is too large, the opposite problems may appear: 1) underfitting; 2) sensitivity to sample imbalance; and 3) an overly simple system.

Hence, in this study, we analyze the number of correctly classified samples and the accuracy as K varies from 4 to 16, as demonstrated in Fig. 7. We find the optimal K value to be 6, with 71 correctly classified samples and an accuracy of 81.61%. As K goes from 4 to 6, the curve roughly rises. For K ranging from 6 to 16, the descent has two stages: the curve declines rapidly from K = 6 to 9, then oscillates downward from 9 to 15, and the performance drops sharply when K reaches 16. This behavior conforms to the theoretical prediction. Interestingly, it is consistent with the heuristic proposed by Lall and Sharma [52] that the fixed optimal K value for all test samples should be $k = \sqrt{n}$, where $n$ is the number of training samples.
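The sweep over K can be reproduced with a leave-one-out loop; the sketch below runs on synthetic two-cluster data because the paper's 87-image set is not available, and the helper names are ours:

```python
import numpy as np
from collections import Counter

def loo_accuracy(X, y, k):
    """Leave-one-out accuracy of a k-NN classifier with Euclidean distance."""
    correct = 0
    for i in range(len(X)):
        d = np.sqrt(((X - X[i]) ** 2).sum(axis=1))
        d[i] = np.inf                                   # exclude the sample itself
        nearest = np.argsort(d)[:k]
        vote = Counter(y[j] for j in nearest).most_common(1)[0][0]
        correct += (vote == y[i])
    return correct / len(X)

def best_k(X, y, k_range=range(4, 17)):
    """Sweep K over the same range as the paper (4 to 16) and keep the best."""
    return max(k_range, key=lambda k: loo_accuracy(X, y, k))
```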

Fig. 7.

The influence of the K factor on system performance. The Y axis on the right stands for the number of correctly predicted samples, and the Y axis on the left shows the corresponding accuracy. Intuitively, the optimal K is 6, with an accuracy of 81.61% and 71 correctly predicted samples (out of 87 in total).

C. Comparison of Measures in Three Types Categories

We analyze the measures of all samples to examine their trend over time. Fig. 8 displays the micro-photos and the corresponding GLCM measures of the three service stages, viz. the ‘normal use’ stage, the ‘early warning’ stage and the ‘not recommended’ stage. As can be seen from the micro-photos, as the use time increases, the mask becomes dirtier and dirtier, with many impurities and small droplets in the holes of the mask. This supports the earlier analysis that the protection becomes less effective over time. The curves reveal the variation tendency of the measures: specifically, contrast and correlation increase while energy and homogeneity decrease over time. It should be noted that this rule does not apply to every sample, since the detection result is determined by the composite effect of the four measures; hence some data do not obey the rule.

Fig. 8.

The micro-photos of the three types of face masks and their corresponding GLCM measures. a), b) and c) represent face masks in the ‘normal use’, ‘early warning’ and ‘not recommended’ stages, respectively. d), e), f) and g) are the curves of the four measures, viz. contrast, correlation, energy and homogeneity.

D. Comparison of Experimental Results Using Original Photos and Micro-Photos

To validate the necessity of the microscope, we compare the results of using the original photo and the micro-photo, as shown in Fig. 9. The two photos capture the same location of the same face mask, which belongs to the type ‘not recommended’. In the micro-photo we can see many details, such as droplets in the holes, whereas in the original photo we can see no details except the obvious stain. It can be inferred that little difference would be visible if there were no stain or other obvious dirt. The detection result of the micro-photo is ‘not recommended’ while that of the original photo is ‘normal use’, which verifies the indispensability of the microscope in this system.

Fig. 9.

The original photo and the micro-photo of the same face mask. a) displays the micro-photo obtained by microscope while b) shows the original photo gained only by mobile phone’s camera.

In other words, photos obtained without the microscope cannot serve as detection input.

E. Dichotomous Model of Experimental Results

Given that the system works well with three classes, we further explore whether it can be simplified. To this end, we propose a dichotomous model with only two types viz. ‘normal use’ and ‘not recommended’. Considering that both ‘normal use’ and ‘early warning’ face masks can still protect people effectively, we merge them into one type named ‘normal use’; the other type remains ‘not recommended’. In the test samples, the number of ‘normal use’ samples is 60 (54 detected correctly) and that of ‘not recommended’ is 27 (23 detected correctly). To counterbalance the uneven distribution of samples, we assign a weight to each type. As expected, the accuracy improves, increasing to 88.51%. With only two types, it is easier for the model to tell which type the detected face mask belongs to, so this scheme may be considered in simple cases.
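The headline number of the dichotomous model follows directly from the per-type counts reported above (54 of 60 ‘normal use’ and 23 of 27 ‘not recommended’ classified correctly), as this small check sketches:

```python
# Recover overall accuracy and per-type recall from the reported
# two-class counts (values taken from the text above).
correct = {"normal use": 54, "not recommended": 23}
totals = {"normal use": 60, "not recommended": 27}

accuracy = sum(correct.values()) / sum(totals.values())  # 77 / 87
recall = {t: correct[t] / totals[t] for t in totals}

print(round(accuracy * 100, 2))  # 88.51
```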

F. Cost Analysis and Limitation of Face Mask Condition

The system mainly consists of a microscope and a mobile phone. Nowadays, almost everyone has a mobile phone, and the microscope can be purchased for a few dollars. Therefore, our detection system is highly cost-effective.

Moreover, although the system described above can detect the service stage of a face mask, some conditions of the mask itself may also affect its usable life. For example, lint may appear inside the face mask after it has been used a few times. In that case, we can choose to discard the mask and use a new one to ease the discomfort. Likewise, if the face mask in use gets wet by accident, we must stop using it regardless of its service stage.

IV. Conclusion

In this paper, we propose a service stage detection method based on a mobile microscope, which is used to obtain micro-photos of the face mask being used.

In our detection method, we first obtain micro-photos of the face mask being used, which reveal details such as droplets or other dirt. Then, we extract texture features from the micro-photos using the GLCM method, choosing four measures viz. contrast, correlation, energy and homogeneity as the features. Subsequently, the KNN method is applied to detect the service stage. In validation experiments, the obtained system achieves a relatively good result with an accuracy of 82.87% (measured by macro-measures) on the testing dataset. In more detailed per-type metrics, the precision of Type I ‘normal use’ and the recall of Type III ‘not recommended’ reach 92.00% and 92.59%, respectively. The results of our experiments demonstrate the idea of using photos to detect the use time of face masks and thus to distinguish whether a face mask can continue to be used. Facing the current severe COVID-19 situation and the possible shortage of face masks, our research may work as an assistant to help the common people use face masks more correctly. Furthermore, it can exert a positive influence on protecting uninfected people and stopping possibly infected patients from spreading the virus.
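The classification step described above can be sketched as a plain KNN over the 4-D feature vectors (contrast, correlation, energy, homogeneity). The training values below are made up for illustration and are not measured data from the paper; the feature ranges merely follow the trends of Fig. 8.

```python
# Illustrative KNN over hypothetical 4-D GLCM feature vectors
# (contrast, correlation, energy, homogeneity).
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (features, label). Returns the majority label of
    the k training samples nearest to `query` in Euclidean distance."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [
    ((0.20, 0.91, 0.30, 0.85), "normal use"),
    ((0.22, 0.90, 0.29, 0.83), "normal use"),
    ((0.45, 0.94, 0.22, 0.70), "early warning"),
    ((0.48, 0.93, 0.20, 0.68), "early warning"),
    ((0.80, 0.97, 0.15, 0.55), "not recommended"),
    ((0.83, 0.96, 0.14, 0.53), "not recommended"),
]
print(knn_predict(train, (0.21, 0.905, 0.295, 0.84)))  # normal use
```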

In future work, different types of face masks should be taken into consideration to expand the range of detection objects. Meanwhile, deep learning has been shown to perform well in the computer vision field [53], [54]. Building on the current portability and practicability, we will try smarter solutions, such as deep learning, on larger datasets to achieve higher accuracy. As the amount of data and the number of mask types grow, we may also experiment with additional feature extraction methods, such as Gabor filters, to build better models.

Funding Statement

This work was supported in part by the National Natural Science Foundation of China under Grant 61901172, Grant 61831015, and Grant U1908210; in part by the Shanghai Sailing Program under Grant 19YF1414100; in part by the “Chenguang Program” through the Shanghai Education Development Foundation and Shanghai Municipal Education Commission under Grant 19CG27; in part by the Science and Technology Commission of Shanghai Municipality under Grant 19511120100, Grant 18DZ2270700, and Grant 18DZ2270800; in part by the Foundation of Key Laboratory of Artificial Intelligence, Ministry of Education under Grant AI2019002; and in part by the Fundamental Research Funds for the Central Universities.

References

  • [1].Coronavirus Disease (COVID-19) Situation Report-159, World Health Org., Geneva, Switzerland, 2020. [Google Scholar]
  • [2].Lan L.et al. , “Positive RT-PCR test results in patients recovered from COVID-19,” J. Amer. Med. Assoc., vol. 323, no. 15, pp. 1502–1503, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [3].Hussein H. A., Hassan R. Y. A., Chino M., and Febbraio F., “Point-of-care diagnostics of COVID-19: From current work to future perspectives,” Sensors, vol. 20, no. 15, p. 4289, Jul. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [4].Wang L. and Wong A., “COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images,” 2020, arXiv:2003.09871. [Online]. Available: http://arxiv.org/abs/2003.09871 [DOI] [PMC free article] [PubMed]
  • [5].Bernheim A.et al. , “Chest CT findings in coronavirus disease-19 (COVID-19): Relationship to duration of infection,” Radiology, vol. 295, no. 3, Jun. 2020, Art. no. 200463. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [6].Wang Y.et al. , “Unobtrusive and automatic classification of multiple people’s abnormal respiratory patterns in real time using deep neural network and depth camera,” IEEE Internet Things J., vol. 7, no. 9, pp. 8559–8571, Sep. 2020. [Google Scholar]
  • [7].Howard J.et al. , “Face masks against COVID-19: An evidence review,” Proc. Nat. Acad. Sci. USA, vol. 118, no. 4, Jan. 2021, Art. no. e2014564118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [8].Advice on the Use of Masks in the Context of COVID-19: Interim Guidance, World Health Org., Geneva, Switzerland, Jun. 2020. [Google Scholar]
  • [9].Greenhalgh T., Schmid M. B., Czypionka T., Bassler D., and Gruer L., “Face masks for the public during the COVID-19 crisis,” Brit. Med. J., vol. 369, p. m1435, Apr. 2020. [DOI] [PubMed] [Google Scholar]
  • [10].Cheng K. K., Lam T. H., and Leung C. C., “Wearing face masks in the community during the COVID-19 pandemic: Altruism and solidarity,” Lancet, Apr. 2020. [Online]. Available: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30918-1/fulltext [DOI] [PMC free article] [PubMed]
  • [11].Rational Use of Personal Protective Equipment for Coronavirus Disease (COVID-19) and Considerations During Severe Shortages: Interim Guidance, World Health Org., Geneva, Switzerland, Apr. 2020. [Google Scholar]
  • [12].Javid B., Weekes M., and Matheson N., “COVID-19: Should the public wear face masks?” Brit. Med. J., Clin. Res. Ed., vol. 369, p. m1442, Apr. 2020. [DOI] [PubMed] [Google Scholar]
  • [13].Klemeš J. J., Fan Y. V., Tan R. R., and Jiang P., “Minimising the present and future plastic waste, energy and environmental footprints related to COVID-19,” Renew. Sustain. Energy Rev., vol. 127, Jul. 2020, Art. no. 109883. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Mihai F.-C., “Assessment of COVID-19 waste flows during the emergency state in Romania and related public health and environmental concerns,” Int. J. Environ. Res. Public Health, vol. 17, no. 15, p. 5439, Jul. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [15].Saadat S., Rawtani D., and Hussain C. M., “Environmental perspective of COVID-19,” Sci. Total Environ., vol. 728, Aug. 2020, Art. no. 138870. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [16].Sangkham S., “Face mask and medical waste disposal during the novel COVID-19 pandemic in Asia,” Case Stud. Chem. Environ. Eng., vol. 2, Sep. 2020, Art. no. 100052. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [17].Parkinson J.. (2020). Coronavirus: Disposable Masks ‘Causing Enormous Plastic Waste.’ [Online]. Available: https://www.bbc.com/news/uk-politics-54057799 [Google Scholar]
  • [18].Long Y.et al. , “Effectiveness of N95 respirators versus surgical masks against influenza: A systematic review and meta-analysis,” J. Evidence-Based Med., vol. 13, no. 2, pp. 93–101, May 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Radonovich L. J.et al. , “N95 respirators vs medical masks for preventing influenza among health care personnel: A randomized clinical trial,” J. Amer. Med. Assoc., vol. 322, no. 9, pp. 824–833, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Cowling B. J., Zhou Y., Ip D. K. M., Leung G. M., and Aiello A. E., “Face masks to prevent transmission of influenza virus: A systematic review,” Epidemiol. Infection, vol. 138, no. 4, pp. 449–456, Apr. 2010. [DOI] [PubMed] [Google Scholar]
  • [21].Chen X., Chughtai A. A., and MacIntyre C. R., “Herd protection effect of N95 respirators in healthcare workers,” J. Int. Med. Res., vol. 45, no. 6, pp. 1760–1767, Dec. 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Kanai R.et al. , “Discriminant analysis and interpretation of nuclear chromatin distribution and coarseness using gray-level co-occurrence matrix features for lobular endocervical glandular hyperplasia,” Diagnostic Cytopathol., vol. 48, no. 8, pp. 724–735, Aug. 2020. [DOI] [PubMed] [Google Scholar]
  • [23].Haralick R. M., Shanmugam K., and Dinstein I., “Textural features for image classification,” IEEE Trans. Syst., Man, Cybern., vol. SMC-3, no. 6, pp. 610–621, Nov. 1973. [Google Scholar]
  • [24].Gibbs P. and Turnbull L. W., “Textural analysis of contrast-enhanced MR images of the breast,” Magn. Reson. Med., Off. J. Int. Soc. Magn. Reson. Med., vol. 50, no. 1, pp. 92–98, Jul. 2003. [DOI] [PubMed] [Google Scholar]
  • [25].Rathore S., Hussain M., Iftikhar M. A., and Jalil A., “Ensemble classification of colon biopsy images based on information rich hybrid features,” Comput. Biol. Med., vol. 47, pp. 76–92, Apr. 2014. [DOI] [PubMed] [Google Scholar]
  • [26].Safira L., Irawan B., and Setianingsih C., “K-nearest neighbour classification and feature extraction GLCM for identification of Terry’s nail,” in Proc. IEEE Int. Conf. Ind. 4.0, Artif. Intell., Commun. Technol. (IAICT), Jul. 2019, pp. 98–104. [Google Scholar]
  • [27].Ou X., Pan W., and Xiao P., “In vivo skin capacitive imaging analysis by using grey level co-occurrence matrix (GLCM),” Int. J. Pharmaceutics, vol. 460, nos. 1–2, pp. 28–32, Jan. 2014. [DOI] [PubMed] [Google Scholar]
  • [28].Shabir M. A., Hassan M. U., Yu X., and Li J., “Tyre defect detection based on GLCM and Gabor filter,” in Proc. 22nd Int. Multitopic Conf. (INMIC), Nov. 2019, pp. 1–6. [Google Scholar]
  • [29].Gustian D. A., Rohmah N. L., Shidik G. F., Fanani A. Z., Pramunendar R. A., and Pujiono, “Classification of troso fabric using SVM-RBF multi-class method with GLCM and PCA feature extraction,” in Proc. Int. Seminar Appl. Technol. Inf. Commun. (iSemantic), Sep. 2019, pp. 7–11. [Google Scholar]
  • [30].Saifullah S. and Permadi V. A., “Comparison of egg fertility identification based on GLCM feature extraction using backpropagation and K-means clustering algorithms,” in Proc. 5th Int. Conf. Sci. Inf. Technol. (ICSITech), Oct. 2019, pp. 140–145. [Google Scholar]
  • [31].Zhang S., Li X., Zong M., Zhu X., and Cheng D., “Learning k for kNN classification,” ACM Trans. Intell. Syst. Technol., vol. 8, no. 3, pp. 1–19, 2017. [Google Scholar]
  • [32].Du S. and Li J., “Parallel processing of improved KNN text classification algorithm based on Hadoop,” in Proc. 7th Int. Conf. Inf., Commun. Netw. (ICICN), Apr. 2019, pp. 167–170. [Google Scholar]
  • [33].Li Z., Jia L., and Su B., “Improved K-means algorithm for finding public opinion of Mount Emei tourism,” in Proc. 15th Int. Conf. Comput. Intell. Secur. (CIS), Dec. 2019, pp. 192–196. [Google Scholar]
  • [34].Hamed A., Sobhy A., and Nassar H., “Accurate classification of COVID-19 based on incomplete heterogeneous data using a KNN variant algorithm,” Res. Square, to be published. [Online]. Available: https://europepmc.org/article/ppr/ppr158832, doi: 10.21203/rs.3.rs-27186/v1. [DOI] [PMC free article] [PubMed]
  • [35].Newby P. K. and Tucker K. L., “Empirically derived eating patterns using factor or cluster analysis: A review,” Nutrition Rev., vol. 62, no. 5, pp. 177–203, May 2004. [DOI] [PubMed] [Google Scholar]
  • [36].Krithika N. and Selvarani A. G., “An individual grape leaf disease identification using leaf skeletons and KNN classification,” in Proc. Int. Conf. Innov. Inf., Embedded Commun. Syst. (ICIIECS), Mar. 2017, pp. 1–5. [Google Scholar]
  • [37].Hoang M. T.et al. , “A soft range limited K-nearest neighbors algorithm for indoor localization enhancement,” IEEE Sensors J., vol. 18, no. 24, pp. 10208–10216, Dec. 2018. [Google Scholar]
  • [38].Jiang Z.et al. , “Detection of respiratory infections using RGB-infrared sensors on portable device,” IEEE Sensors J., vol. 20, no. 22, pp. 13674–13681, Nov. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [39].Imran A.et al. , “AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app,” Informat. Med. Unlocked, vol. 20, Jan. 2020, Art. no. 100378. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Hall-Beyer M.. GLCM Texture: A Tutorial v. 3.0 March 2017. [Online]. Available: https://prism.ucalgary.ca/handle/1880/51900 [Google Scholar]
  • [41].Wu Q., Gan Y., Lin B., Zhang Q., and Chang H., “An active contour model based on fused texture features for image segmentation,” Neurocomputing, vol. 151, pp. 1133–1141, Mar. 2015. [Google Scholar]
  • [42].Xing Z. and Jia H., “Multilevel color image segmentation based on GLCM and improved salp swarm algorithm,” IEEE Access, vol. 7, pp. 37672–37690, 2019. [Google Scholar]
  • [43].Löfstedt T., Brynolfsson P., Asklund T., Nyholm T., and Garpebring A., “Gray-level invariant Haralick texture features,” PLoS ONE, vol. 14, no. 2, Feb. 2019, Art. no. e0212110. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [44].Zhang S., Li X., Zong M., Zhu X., and Wang R., “Efficient kNN classification with different numbers of nearest neighbors,” IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 5, pp. 1774–1785, May 2018. [DOI] [PubMed] [Google Scholar]
  • [45].Liu H., Li X., and Zhang S., “Learning instance correlation functions for multilabel classification,” IEEE Trans. Cybern., vol. 47, no. 2, pp. 499–510, Feb. 2017. [DOI] [PubMed] [Google Scholar]
  • [46].Wu X.et al. , “Top 10 algorithms in data mining,” Knowl. Inf. Syst., vol. 14, no. 1, pp. 1–37, 2007. [Google Scholar]
  • [47].Zhang S., “Nearest neighbor selection for iteratively kNN imputation,” J. Syst. Softw., vol. 85, no. 11, pp. 2541–2552, Nov. 2012. [Google Scholar]
  • [48].Yu J., Gao X., Tao D., Li X., and Zhang K., “A unified learning framework for single image super-resolution,” IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 4, pp. 780–792, Apr. 2014. [DOI] [PubMed] [Google Scholar]
  • [49].Zhu Q., Shao L., Li X., and Wang L., “Targeting accurate object extraction from an image: A comprehensive study of natural image matting,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 2, pp. 185–207, Feb. 2015. [DOI] [PubMed] [Google Scholar]
  • [50].Shao L., Liu L., and Li X., “Feature learning for image classification via multiobjective genetic programming,” IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1359–1371, Jul. 2014. [Google Scholar]
  • [51].Li J.. (2020). How Long Does the Surgical Mask Last? [Online]. Available: https://m.baidu.com/bh/m/detail/vc_7516980125353001641 [Google Scholar]
  • [52].Lall U. and Sharma A., “A nearest neighbor bootstrap for resampling hydrologic time series,” Water Resour. Res., vol. 32, no. 3, pp. 679–693, Mar. 1996. [Google Scholar]
  • [53].Voulodimos A., Doulamis N., Doulamis A., and Protopapadakis E., “Deep learning for computer vision: A brief review,” Comput. Intell. Neurosci., vol. 2018, pp. 1–13, Feb. 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [54].Guo Y., Liu Y., Oerlemans A., Lao S., Wu S., and Lew M. S., “Deep learning for visual understanding: A review,” Neurocomputing, vol. 187, pp. 27–48, Apr. 2016. [Google Scholar]

Articles from IEEE Sensors Journal are provided here courtesy of Institute of Electrical and Electronics Engineers