PLOS ONE. 2021 Feb 4;16(2):e0246061. doi: 10.1371/journal.pone.0246061

Mobile apps for detecting falsified and substandard drugs: A systematic review

Agustín Ciapponi 1,2,*,#, Manuel Donato 1,2,#, A Metin Gülmezoglu 3,#, Tomás Alconada 1,2,#, Ariel Bardach 1,2,#
Editor: Vijayaprakash Suppiah
PMCID: PMC7861418  PMID: 33539433

Abstract

The use of substandard and counterfeit medicines (SCM) leads to significant health and economic consequences, such as treatment failure, the rise of antimicrobial resistance, extra expenditure by individuals or households, and serious adverse drug reactions, including death. Our objective was to systematically search, identify and compare relevant available mobile applications (apps) for smartphones and tablets whose use could potentially affect clinical and public health outcomes. We carried out a systematic review of the literature in January 2020, including major medical databases and app stores. We used the validated Mobile App Rating Scale (MARS) to assess the quality of apps (1 = worst score, 3 = acceptable score, 5 = best score). We planned to evaluate the accuracy of the mobile apps in detecting SCM. We retrieved 335 references through medical databases and 42 from the Apple and Google stores and Google Scholar. We finally included two studies from the medical databases, 25 apps (eight from the App Store, eight from Google Play, eight from both stores, and one from Google Scholar), and 16 websites. We found only one report on the accuracy of a mobile app in detecting SCM. Most apps use the imprint, color or shape for pill identification, and only a few offer pill detection through photographs or bar codes. The MARS mean score for the apps was 3.17 (acceptable), with a maximum of 4.9 and a minimum of 1.1. The ‘functionality’ dimension had the highest mean score (3.4), while the ‘engagement’ and ‘information’ dimensions had the lowest (3.0). In conclusion, we found a remarkable evidence gap regarding the accuracy of mobile apps in detecting SCM. Nevertheless, mobile apps could potentially be useful to screen for SCM by assessing the physical characteristics of pills, although this should still be assessed in properly designed research studies.

Introduction

The World Health Organization (WHO) defines substandard or “out of specification” medicines as authorized medical products that fail to meet their quality standards, their specifications, or both [1]. Counterfeit or falsified medicines are products that deliberately or fraudulently misrepresent their identity, composition or source. Poor-quality medicines have important adverse health consequences, including the potential for treatment failure, the development of antimicrobial resistance, and serious adverse drug reactions, including death [2]. Apart from safety and effectiveness concerns, substandard and counterfeit medicines (SCMs) carry economic costs, such as the treatment of adverse events by the health system and wasted resources, leading to complications that are borne by consumers, facilities and third-party payers. There are also indirect costs to consider, such as lost productivity due to extra days of illness, and reduced sales and tax revenues from regular medicines [3,4]. SCMs not only lead to public health and economic consequences but also weaken efforts, for example, to attain the United Nations Sustainable Development Goal of achieving universal access to safe and effective care, including essential medicines [5].

Quality assurance of medicines represents a significant challenge for governments, regulators, and pharmaceutical companies at a global level [6,7]. That is why the only way to fight this problem is a multifaceted approach, with all the actors involved participating and targeting various levels of the pharmaceutical supply chain, from developers to consumers [8–10]. With this purpose, a variety of technologies, from mobile apps and handheld devices to sophisticated analytical chemistry methods, have been developed to detect SCMs [11]. Mobile health (mHealth) is a general term for the use of mobile phones and other wireless technology in medical care. Some software applications (apps) may be able to identify authorized medicines and distinguish them from SCMs by detecting differences in several aspects, such as shape and color. These apps generally maintain a database of the visual characteristics of currently authorized medicines and use the phone camera to compare the sample product against it. Some technologies, mostly desktop applications, rely on the same kind of database to detect drug inconsistencies [12,13].

The full extent of the SCM problem is largely unknown, and scientific research is variable and of poor methodological quality [14]. However, the increase in SCM is a growing global concern. If an inexpensive and widely available technology such as an app can help in the screening and possible detection of SCM, it could have significant potential. Our objective was to systematically search for, identify, synthesize and compare relevant mobile apps whose use could beneficially impact public health decision-making.

Methods

We conducted a systematic review of the literature following the Cochrane methods and the PRISMA guidelines for reporting (S1 File) [15,16]. Literature searches designed by a trained librarian were conducted in January 2020 in the Cochrane Database of Systematic Reviews (CDSR), Cochrane Central Register of Controlled Trials (CENTRAL), Database of Abstracts of Reviews of Effects (DARE), MEDLINE, EMBASE, Clinicaltrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP), MedRxiv and SciFinder. The basic search strategy designed for MEDLINE (PubMed) included the following terms: (Mobile Applications[Mesh] OR Mobile App*[tiab] OR Electronic App*[tiab] OR Portable App*[tiab] OR Software App*[tiab] OR Mobile Software[tiab] OR Portable Software[tiab] OR Mobile Based[tiab] OR Medical App*[tiab] OR Mobile Authentic*[tiab] OR Cell Phone[Mesh] OR Mobile Phone*[tiab] OR Cell Phone*[tiab] OR Mobile Telephon*[tiab] OR Cell Telephon*[tiab] OR Cellular Phone*[tiab] OR Cellular Telephon*[tiab] OR Smartphone*[tiab] OR QR[tiab] OR Computers, Handheld[Mesh] OR Handheld Device[tiab] OR iPad[tiab] OR Pill Identificat*[tiab]) AND (Counterfeit Drugs[Mesh] OR Counterfeit[tiab] OR Fake[tiab] OR Adulterated[tiab] OR Imitation*[tiab] OR Fraudulent[tiab] OR Spurious[tiab]). The search terms were modified to suit the requirements of particular databases, as detailed in S2 File. We also searched non-peer-reviewed technical reports and other online information, including Apple’s App Store, the Google Play Store, and a directed search in Google Scholar. We searched the references of included articles and relevant literature reviews.

The strategy for Google Scholar included the following terms: Pill Identifier Tool Medication OR Medicine OR Drug OR Falsified OR Counterfeit OR Substandard OR Surveillance OR Authentication OR Mobile OR Software.

The systematic review protocol was registered in the PROSPERO database (CRD42020163075) in January 2020. Regarding primary eligibility criteria, we included apps whose description claimed to analyze authorized medicines or identify SCMs, that aimed to directly detect and recognize solid oral medications, that were published in English, Spanish or Portuguese, and that were available to download from the mentioned app stores. Mobile apps were excluded if they targeted non-human medications, identified illicit drugs, or examined herbal or Chinese medicines. After screening the results for each search database, the names of the selected apps were recorded. If an app was listed in both stores, we reported it with a single value after consensus. We downloaded and installed the remaining apps as the first step in assessing their eligibility for this review. Apps failing to launch on the test devices were excluded. All Apple test devices ran iPhone operating system (iOS, Apple Inc) 13.3, and all Android test devices ran Android 10.0. We also extracted general information and relevant secondary features offered by the apps, such as information provided about authorized medicines, security- and privacy-related features, data sharing, social media, and technical support.

All unique articles were independently assessed by two reviewers (MD and TA) based on title and abstract. Those marked for inclusion, or whose title and abstract were not sufficient to determine inclusion, were then reviewed in full text. Data extraction and risk of bias (quality) assessment were also performed independently by these reviewers, with oversight from two senior reviewers (AC and AB). We planned to assess the risk of bias with the Cochrane tool for randomized controlled trial (RCT) designs [17]. For other designs, including cross-sectional or cohort studies, we used the NIH Study Quality Assessment Tools [18]. For content analysis, we used the Mobile App Rating Scale (MARS), a simple, objective, and reliable tool for classifying and assessing the quality of mobile apps [12]. MARS is a 19-item, expert-based rating scale; each item uses a 5-point scale (1 = inadequate, 2 = poor, 3 = acceptable, 4 = good, and 5 = excellent). The scale consists of multiple dimensions that assess different quality aspects of apps, including end-user engagement features, aesthetics and content quality. Comprehensiveness and accuracy of the content and information of the app are assessed in MARS questions #15 and #16. We completed all phases of study selection using COVIDENCE®, a web-based platform designed for processing systematic reviews [19]. Authors of articles were contacted to obtain missing or supplementary information when necessary.
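To illustrate how the dimension scores and the overall MARS app quality score are assembled from the 19 item ratings, the sketch below assumes the standard MARS grouping of items into dimensions (items 1–5 engagement, 6–9 functionality, 10–12 aesthetics, 13–19 information) and uses the Epocrates item ratings from Table 2 as an illustration; per-app overall scores may differ slightly from Table 1 depending on rounding conventions.

```python
# Sketch of MARS scoring, assuming the standard dimension grouping
# (Stoyanov et al., 2015): items 1-5 engagement, 6-9 functionality,
# 10-12 aesthetics, 13-19 information.
DIMENSIONS = {
    "engagement": range(1, 6),
    "functionality": range(6, 10),
    "aesthetics": range(10, 13),
    "information": range(13, 20),
}

def mars_scores(item_ratings):
    """item_ratings maps item number (1-19) to a 1-5 rating, or None
    when an item is not applicable (reported as NA in Table 2).
    Returns one-decimal dimension means plus an overall app quality
    score computed as the mean of the four dimension means."""
    scores = {}
    for dim, items in DIMENSIONS.items():
        rated = [item_ratings[i] for i in items if item_ratings.get(i) is not None]
        scores[dim] = round(sum(rated) / len(rated), 1)
    overall = round(sum(scores.values()) / len(scores), 1)
    scores["overall"] = overall
    return scores

# Item ratings for Epocrates, taken from the corresponding row of Table 2.
epocrates = dict(enumerate(
    [5, 5, 5, 4, 5, 5, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 2], start=1))
```

Running `mars_scores(epocrates)` yields engagement 4.8, aesthetics 5.0 and information 4.4, matching the Epocrates row of Table 1; not-applicable items are simply excluded from a dimension's mean, which is how the NA entries in Table 2 were handled.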

A pre-designed general data extraction form was used after pilot testing. We resolved disagreements during all phases by consensus between the two initial reviewers (MD and TA); when consensus was not reached, a third reviewer (AC or AB) decided. We extracted the following: general information about the study (publication type, year of publication, journal, authors’ names, and language), research location (geographical region, country, province, city, and setting), and study population (sample size, age at enrollment, rural or urban residence, and dates of initiation and end of data collection). We performed descriptive analyses of the extracted data and structured the data in tables to describe the mobile apps for detecting SCMs, including both general information and relevant secondary features of the apps available in the app stores.

Results

Our search strategy retrieved 335 references through the literature search in databases and 42 reports of technologies from other sources, such as Apple’s App Store, the Google Play Store and Google search engines. We finally included 25 mobile apps, 16 websites that compare a given medication against large internal databases, one preprint study evaluating one mobile app, and another study comparing four mobile apps (Fig 1), along with a detail of their main characteristics to facilitate comparisons. These reports referred to the usability of the apps and their ability to correctly identify pills. However, we did not find any direct assessment of SCM products in the real world.

Fig 1. Records flow diagram.


Considering the nature of the data, we could not perform a meta-analysis.

Table 1 and Fig 2 summarize the general characteristics of the reviewed apps and their mean MARS quality scores. Of the 25 apps, eight were from Apple’s App Store, eight from Google Play, eight were listed in both stores, and one (MedSnap) was available on neither platform but was identified through Google Scholar and MedRxiv. The 24 apps available on the platforms came from 22 developers, and 17 were available to detect pills at no cost. Seven were available after purchase or offered more options through a subscription. The last update dates for the Android apps ranged from February 2010 to March 2020, whereas the iOS apps were generally updated more frequently, with dates ranging from October 2015 to March 2020. The average user rating for the apps was between acceptable and good, with a mean score of 3.78 (range 1 to 5), although a rating was sometimes unavailable.

Table 1. Characteristics of included mobile apps for detecting SCMs.

| Name | Platform | Developer | Version | Last update | Cost | Average user rating (out of 5) | Engagement (out of 5) | Functionality (out of 5) | Aesthetics (out of 5) | Information (out of 5) | MARS mean score (out of 5) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Advanced Pill Identifier & Drug Info | Android | appmaniateam | 1.6 | 21/8/18 | Free | 3.7 | 3.4 | 4.3 | 3.3 | 3.5 | 3.6 |
| CheckFake | Android | dlt.sg Apps | 1.0 | 14/4/18 | Free | 4.4 | 1.8 | 1 | 1.3 | 1 | 1.3 |
| Drug Facts by PillSync.com | iOS | Scanidme inc | 5.2 | 26/6/19 | Free | 4.1 | 3.4 | 3.2 | 3 | 3.3 | 3.2 |
| Drug Facts Pill ID | iOS | Scanidme inc | 5.2 | 1/7/19 | Free | NA | 3.4 | 3.2 | 3 | 3.3 | 3.2 |
| Drug Interaction Checker | iOS/Android | HYDL | 13 | 1/2/19 | Free | NA | 3 | 3 | 3 | 3 | 3 |
| Drug Search App | Android | Drug Search App | 1.6 | 14/9/18 | Free | 3.6 | 1.4 | 1 | 1 | 1 | 1.1 |
| Drugs.com Medication Guide | iOS/Android | Drugs.com | 2.9.7 | 31/10/19 | Free | 4.9 | 4.2 | 4.5 | 4.3 | 4.3 | 4.3 |
| Epocrates | iOS/Android | Epocrates, Inc. | 20.1 | 13/2/20 | Free | 4.3 | 4.8 | 4.8 | 5 | 4.4 | 4.7 |
| IBM Micromedex Drug Reference | iOS/Android | Micromedex | 2.1b815 | 19/3/20 | $2.99/year | 4.3 | 5 | 5 | 5 | 4.6 | 4.9 |
| iNarc: Pill Finder and Identifier | iOS | Amit Barman | 4 | 1/10/15 | $0.99 | NA | 1.6 | 2 | 2.3 | 2 | 2 |
| Lexicomp | iOS/Android | Lexicomp | 5.3.2 | 18/3/20 | $799/year | 4.1 | 5 | 5 | 5 | 5 | 5 |
| Medscape | iOS/Android | WebMD, LLC | 7.3.1 | 10/2/20 | Free | 4.6 | 5 | 5 | 5 | 4.6 | 4.9 |
| MedSnap | Own platform | MedSnap | NA | NA | Not free | NA | 4 | 4.2 | 3.7 | 4.1 | 4 |
| Pepid | iOS/Android | Pepid, LLC | 6.2 | 20/3/20 | Free | NA | 2 | 2.5 | 2 | 2.3 | 2.2 |
| Pill identifier | Android | Giant Brains Software | 2.6 | 28/8/18 | $0.99/year | NA | 3.6 | 4 | 3.7 | 3.5 | 3.7 |
| Pill identifier | Android | Walhalla Dynamics | 7.1.1662.r | 1/2/10 | Free | 3.1 | 3.6 | 4 | 3.7 | 3.5 | 3.7 |
| Pill Identifier and Drug List | iOS | Mobixed LLC | 3.9 | 17/9/19 | Free | NA | 3 | 3.5 | 4 | 3.3 | 3.4 |
| Pill Identifier by Drugs.com | iOS | Drugs.com | 2.97 | 12/2/20 | $0.99 | 1 | 3.2 | 4.2 | 3.3 | 3.5 | 3.5 |
| Pill Identifier Mobile App | iOS | Eric Phung | 2.5 | 1/3/19 | $1.99 | NA | 1.6 | 1.7 | 1.7 | 1.5 | 1.6 |
| Pill Identifier Pro and Drug Info | Android | Mobilicks | 1.0.3 | 15/2/19 | Free | 3.7 | 3.2 | 4 | 3.7 | 3.2 | 3.5 |
| pill+: Prescription Pill Finder and Identifier | iOS | Amit Barman | 4 | 2/10/15 | $0.99 | 5 | 1.6 | 2 | 2 | 1.5 | 1.8 |
| PillFinder 2.0 | iOS | MedApp sp. Zo.o. | 2.0.1 | 1/3/16 | Free | NA | 2 | 2.2 | 2.3 | 2.2 | 2.1 |
| Pillid.com | iOS/Android | Douglas McKalip | 1.2.1 | 7/9/17 | Free | 3.8 | 2.6 | 4 | 2.3 | 2.7 | 2.9 |
| Prescription Pill Identifier | Android | Giant Brains Software | 2.5 | 10/9/18 | Free | 3.8 | 2.6 | 3.5 | 2.6 | 3.3 | 3 |
| Smart Pill Identifier | Android | iConiq Studios | 0.1.2 | 5/4/19 | Free | 2.2 | 1.6 | 3.5 | 3 | 2.1 | 2.6 |

Fig 2. Mean MARS score per mobile app by users’ ratings.


The size of the circles is proportional to the quintiles of MARS mean score, and their color identifies the platform source. The axes of the graph indicate the average user ratings and the number of raters.

Most apps claim to detect pills by evaluating the imprint, color or shape, and only a few offer identification through a photograph or bar code. The app quality mean MARS score (Table 1) of the 25 apps was acceptable (3.17), with a maximum of 4.9 (excellent) for Medscape and IBM Micromedex Drug Reference and a minimum of 1.1 (inadequate) for Drug Search App. The mean scores of the four MARS dimensions were examined to investigate the magnitude of the quality differences in each dimension (Table 1). The functionality dimension had the highest mean score (3.4), whereas the engagement and information dimensions had the lowest (3.0). Notably, of the 25 apps, only four were supported by evidence in the published scientific literature: MedSnap, Epocrates, Medscape and IBM Micromedex (Table 2) [20,21]. On the other hand, all 16 websites identified outside these platforms are intended for the general population (Table 3). All of the websites and available software use the imprint, color or shape to identify pills.

Table 2. Detailed MARS score for all included apps.

| App name | Platform | 1. Entertainment | 2. Interest | 3. Customization | 4. Interactivity | 5. Target group | 6. Performance | 7. Ease of use | 8. Navigation | 9. Gestural design | 10. Layout | 11. Graphics | 12. Visual appeal | 13. Accuracy | 14. Goals* | 15. Accuracy | 16. Comprehensiveness | 17. Visual* | 18. Credibility | 19. Evidence base |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Advanced Pill Identifier & Drug Info | Android | 3 | 4 | 3 | 3 | 4 | 4 | 5 | 4 | 4 | 4 | 3 | 3 | 4 | 3 | 3 | 3 | 4 | 4 | NA |
| CheckFake | Android | 1 | 1 | 2 | 2 | 3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | NA |
| Drug Facts by PillSync.com | iOS | 4 | 4 | 3 | 3 | 3 | 4 | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 3 | 3 | 3 | 3 | 4 | NA |
| Drug Facts Pill ID | iOS | 4 | 4 | 3 | 3 | 3 | 4 | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 3 | 3 | 3 | 3 | 4 | NA |
| Drug Interaction Checker | iOS/Android | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | NA | 3 | 3 | 3 | 3 | NA |
| Drug Search App | Android | 1 | 1 | 1 | 1 | 3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | NA | NA | NA | 1 | NA |
| Drugs.com Medication Guide | iOS/Android | 4 | 4 | 4 | 4 | 5 | 5 | 4 | 5 | 4 | 5 | 4 | 4 | 5 | 4 | 4 | 4 | 5 | 4 | NA |
| Epocrates | iOS/Android | 5 | 5 | 5 | 4 | 5 | 5 | 4 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 4 | 2 |
| IBM Micromedex Drug Reference | iOS/Android | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 2 |
| iNarc: Pill Finder and Identifier | iOS | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 2 | NA | NA | 2 | 2 | 2 | NA |
| Lexicomp | iOS/Android | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | NA |
| Medscape | iOS/Android | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 2 |
| MedSnap | Own platform | 4 | 4 | 4 | 4 | 4 | 5 | 4 | 4 | 4 | 4 | 3 | 4 | 5 | 5 | 4 | 4 | 5 | 3 | 3 |
| Pepid | iOS/Android | 2 | 2 | 2 | 2 | 2 | 3 | 2 | 2 | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 3 | 2 | NA |
| Pill Identifier and Drug List | iOS | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 3 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | 4 | 3 | 3 | NA |
| Pill Identifier by Drugs.com | iOS | 3 | 3 | 3 | 3 | 4 | 4 | 5 | 4 | 4 | 4 | 3 | 3 | 4 | 2 | 3 | 4 | 4 | 4 | NA |
| Pill identifier by Giant Brains Software | Android | 4 | 4 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | 4 | 3 | 3 | 4 | 4 | NA |
| Pill identifier by Walhalla Dynamics | Android | 4 | 4 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | 4 | 3 | 3 | 4 | 4 | NA |
| Pill Identifier Mobile App | iOS | 1 | 1 | 2 | 2 | 2 | 2 | 1 | 2 | 2 | 1 | 2 | 2 | 2 | NA | NA | NA | NA | 1 | NA |
| Pill Identifier Pro and Drug Info | Android | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | 3 | 3 | 3 | 4 | 3 | NA |
| pill+: Prescription Pill Finder and Identifier | iOS | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | NA | 1 | 1 | 2 | 1 | NA |
| PillFinder 2.0 | iOS | 2 | 3 | 1 | 2 | 2 | 2 | 2 | 2 | 3 | 2 | 3 | 2 | 1 | NA | NA | 2 | 2 | 4 | NA |
| Pillid.com | iOS/Android | 3 | 3 | 2 | 2 | 3 | 4 | 4 | 4 | 4 | 3 | 2 | 2 | 3 | 3 | 2 | 2 | 3 | 3 | NA |
| Prescription Pill Identifier | Android | 3 | 4 | 1 | 1 | 4 | 3 | 4 | 4 | 3 | 3 | 3 | 2 | 4 | 3 | 3 | 3 | 4 | 3 | NA |
| Smart Pill Identifier | Android | 2 | 2 | 1 | 1 | 2 | 2 | 4 | 4 | 4 | 3 | 3 | 3 | 2 | 2 | 2 | 2 | 3 | 2 | NA |

Table 3. Additional resources identified from the search strategy.

| Name | Features | Function | Recipient |
| --- | --- | --- | --- |
| AARP | Webpage | Pill identifier. Search by Imprint, Shape or Color | Consumers |
| CVS Pharmacy | Webpage | Pill identifier. Search by Imprint, Shape or Color | Consumers |
| Drugs.com | Webpage | Pill identifier. Search by Imprint, Shape or Color | Consumers |
| Epocrates | Webpage | Pill identifier. Search by Imprint, Shape, Color, Scoring, Clarity, Coating or Flavor | Consumers |
| IBM Micromedex Solutions | Webpage | Pill identifier. Search by Imprint, Shape, Color or Form | Consumers |
| Medscape | Webpage | Pill identifier. Search by Imprint, Shape, Color, Scoring or Form | Consumers |
| MedSnap | Mobile app | Pill scanner | Consumers/Regulator |
| Pepid | Webpage | Pill identifier. Search by Imprint, Shape or Color | Consumers |
| PillBox | Webpage | Pill identifier. Search by Imprint, Shape, Color, Size or DEA schedule | Consumers |
| RedCrossDrugstore | Webpage | Pill identifier. Search by Imprint, Shape, Color, Scoring, Clarity, Coating or Flavor | Consumers |
| RxID.ca | Webpage | Pill identifier. Search by Imprint, Shape, Color, Scoring, Coating, Surface or Logo | Consumers |
| RxList.com | Webpage | Pill identifier. Search by Imprint, Shape or Color | Consumers |
| RxResouse.org | Webpage | Pill identifier. Search by Imprint, Shape, Color or Scoring | Consumers |
| RxSaver | Webpage | Pill identifier. Search by Imprint, Shape, Color, Scoring or Size | Consumers |
| WebMD.com | Webpage | Pill identifier. Search by Imprint, Shape or Color | Consumers |
| WebPoisonControl | Webpage | Pill identifier. Search by Imprint, Shape or Color | Consumers |
| WellRx | Webpage | Pill identifier. Search by Imprint, Shape or Color | Consumers |

Relevant secondary features for each app available on the platforms were extracted and are shown in Fig 3. Twelve of the 25 apps offered drug database access, nine of them free of charge. Drug history tracking was available in only six apps (one paid); 16 apps allowed sharing (five paid); and seven apps required login and offered password protection (two paid).

Fig 3. Frequency of apps, available on platforms, by selective secondary features.


Our search found a cross-sectional study that compared Epocrates, Medscape, IBM Micromedex and Google for clinical management information aimed at healthcare professionals over a one-week period [20]. Medical students at an academic hospital in the United States scored satisfaction with the search and user interface on a 1-to-5 scale, with 1 representing the lowest quality and 5 the highest. The study concluded that Medscape (satisfaction 4.92) was the most preferred free mobile app evaluated, owing to its interactive and educational features, followed by IBM Micromedex and Epocrates (satisfaction 4.58 and 4.42, respectively). This study was used to complete MARS question #19 (evidence base) about whether the app had been trialed. Further, we found a preprint study whose objective was to evaluate the sensitivity and specificity of MedSnap, which is not available in the app stores. In that study [21], MedSnap models were created from trusted and authentic medications and tested against samples of authentic/trusted or falsified artesunate, artemether-lumefantrine, azithromycin, and ciprofloxacin. The results showed 100% sensitivity and specificity in detecting authentic and counterfeit drugs among the 48 samples tested. This study allowed us to complete MARS question #19 and other parts of the questionnaire. Notably, this is the only app whose accuracy in detecting counterfeit drugs has been formally evaluated in a study.
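The 100% figures reported in [21] follow directly from the standard definitions of the two measures. A minimal sketch (the 24/24 split of the 48 samples is hypothetical, since the exact counts of authentic versus falsified samples are not reproduced here; with zero false negatives and zero false positives, both measures are 100% regardless of the split):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP / (TP + FN): the share of counterfeit samples
    flagged as counterfeit. Specificity = TN / (TN + FP): the share of
    authentic samples recognized as authentic."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 24/24 split of the 48 samples, with no misclassifications,
# reproducing the reported 100% sensitivity and 100% specificity.
sens, spec = sensitivity_specificity(tp=24, fn=0, tn=24, fp=0)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```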

We used the NIH Study Quality Assessment Tools for the two included studies; in both cases a poor score was obtained, which indicates a high risk of bias (S3 File).

Discussion

We found 25 apps and 16 websites for pill identification that are potentially useful for detecting SCM. Despite the variety of available apps, there were only two scientific publications of observational studies, both at high risk of bias; one was a preprint report, and together they covered four apps. This highlights the lack of studies evaluating apps and the great need for rigorous studies that evaluate the functioning and usefulness of apps for pill identification and, eventually, for detecting SCMs. MedSnap was the only app whose sensitivity and specificity for detecting authentic and counterfeit drugs was assessed [21]. Most mobile apps were developed as pill identifiers, yet developers state that the apps can also detect SCMs (examples are MedSnap and CheckFake) [22,23]. In real life, these apps are mostly used to identify pills, for instance by elderly people or people who have difficulty recognizing their medications, not to identify SCMs. Although it cannot be considered a proxy for accuracy, we rated and compared the quality of the apps globally and by domain. The global app quality mean MARS score of the 25 apps was classified as acceptable (3.17), similar to reviews and patient-experience studies of mobile apps for other purposes [13,24–28]. Despite the aim of these apps, a low average score in the ‘information’ dimension (quantity and quality) was observed based on MARS. This is partly because some of these apps do not seem to use a reliable or verifiable information source for medicines’ attributes. For the evidence-based question within the ‘information’ dimension, we found only two low-quality studies, which evaluated the performance of four highly recognized apps (Epocrates, IBM Micromedex Drug Reference, Medscape and MedSnap) [20,21].
Regarding MARS questions #15 and #16, which assess the accuracy and comprehensiveness of the content and information, only 6 of the 25 apps had a MARS score higher than 4. This is a concerning issue, since 76% of the surveyed apps were not adequately backed by reliable information. Like Kim et al., we noted associations between the average user rating and the information dimension; this suggests that users may critically evaluate applications and that these ratings can potentially be important tools for selecting apps [25].

Most of the identified mobile apps were aimed at consumers or the general population, and just one was designed to be used by regulatory authorities, such as customs security personnel. Eight of the 25 apps (32%) were paid apps, and all the websites were open access. This proportion of paid apps is consistent with other studies that systematically reviewed the Google Play Store and Apple’s App Store [13,24,25]. Eight of the 24 apps available in the stores had no average user score and had low MARS mean scores in our study (from 1.6 to 3.7). Like Kim et al., we found a correlation between the worst scores from the public and low scores in our MARS aesthetics and engagement dimensions [25].

SCMs are a worldwide problem with direct consequences for human health. The problem is much more severe in low- and middle-income countries because of their relatively weak health systems and regulatory processes. In an analysis of 215 misoprostol samples, only 55% were within specifications, and 85 (40%) contained less than 90% of the labelled content [29]. Of those 85 samples, 14 contained no misoprostol at all. There is a tangible threat to global health security, for example from antiparasitic drugs (chiefly antimalarials) and antibiotics, where SCMs increase transmission, morbidity, mortality, and resistance. This is also the case for medications affecting maternal, neonatal and child health, such as misoprostol or mifepristone. In this regard, mobile apps could be a valuable and inexpensive tool for patient empowerment, with the potential to detect these drugs and save lives at low cost. However, the fight against falsified and substandard drugs needs a multi-pronged approach involving all stakeholders; although technology alone cannot solve the problem, it can be an important tool for consumers [11,14]. Countries with lower regulatory capacity are the most vulnerable to SCMs. Goals should include guaranteeing good manufacturing practices and imposing audit and control measures on the manufacturing and distribution of medications [30,31].

While the MARS score is helpful for indicating user-friendliness, there is a significant gap in the rigorous evaluation of these apps. The most important limitation is that we cannot extrapolate from the identification of a pill by a device to the categorization of a product as counterfeit or substandard. To find out whether a product is really of poor quality, additional confirmatory tests are necessary. Additionally, we do not know how consumers use these applications or how they perform in real life, which makes conclusions about their accuracy in detecting substandard-quality drugs difficult. Moreover, the search on app platforms was not standardized and may not be fully reproducible, although iterating with keywords and related apps reduced the probability of missing apps. Finally, mobile apps are updated frequently, and new apps are published on a daily basis.

Our systematic review highlighted an important evidence gap in the diagnostic accuracy of mobile apps for detecting SCMs, and there is a need for primary studies addressing this issue. A unified global effort to address the important problem of counterfeit and substandard drugs is necessary. Our findings suggest that, although no single technology can meet all the desired requirements for detecting SCMs, mobile apps could constitute a potentially valuable real-world tool, available to a large number of users, to counter the serious consequences of the SCM problem and help achieve the United Nations’ Sustainable Development Goals.

Supporting information

S1 File. PRISMA checklist.

(DOCX)

S2 File. Search strategy.

(DOCX)

S3 File. NIH study quality assessment tools for observational studies.

(XLSX)

Acknowledgments

We would like to express our sincere thanks to our librarian Daniel Comandé, for leading the search strategy. In addition, we would like to thank Antonella Francheska Lavelanet for providing feedback and edits to this paper.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

This work was funded by an independent grant from HRP (the UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction https://wwww.who.int/reproductivehealth/hrp/en/). The funder’s role was limited to developing the terms of references and reviewing the drafts. The funders had no role in the final content or decision to publish the manuscript.

References

  • 1.World Health Organization. Definitions of Substandard and Falsified (SF) Medical Products. 2017 [cited 15 Mar 2020]. Available: https://www.who.int/medicines/regulation/ssffc/definitions/en/.
  • 2.Kovacs S, Hawes SE, Maley SN, Mosites E, Wong L, et al. Technologies for Detecting Falsified and Substandard Drugs in Low and Middle-Income Countries. PLoS One. 2014;9: e90601. 10.1371/journal.pone.0090601 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.World Health Organization. A Study on the Public Health and Socioeconomic Impact of Substandard and Falsified Medical Products. Geneva, Switzerland: World Health Organization. 2017. Available: https://www.who.int/medicines/regulation/ssffc/publications/se-study-sf/en/.
  • 4.Fernandez FM, Hostetler D, Powell K, Kaur H, Green MD, Mildenhall DC, et al. Poor quality drugs: grand challenges in high throughput detection, countrywide sampling, and forensics in developing countries. Analyst. 2011;136: 3073–3082. 10.1039/c0an00627k [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.World Health Organization. Health in 2015: From MDGs, Millennium Development Goals to SDGs, Sustainable Development Goals. Geneva, Switzerland: World Health Organization. 2015.
  • 6.Mackey TK, Nayyar G. A review of existing and emerging digital technologies to combat the global trade in fake medicines. Expert Opin Drug Saf. 2017;16: 587–602. 10.1080/14740338.2017.1313227 [DOI] [PubMed] [Google Scholar]
  • 7.Roth L, Nalim A, Turesson B, Krech L. Global landscape assessment of screening technologies for medicine quality assurance: stakeholder perceptions and practices from ten countries. Global Health. 2018;14: 43 10.1186/s12992-018-0360-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Vickers S, Bernier M, Zambrzycki S, Fernandez FM, Newton PN, Caillet C. Field detection devices for screening the quality of medicines: a systematic review. BMJ Glob Heal. 2018;3: e000725 10.1136/bmjgh-2018-000725 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Cuomo RE, Mackey TK. An exploration of counterfeit medicine surveillance strategies guided by geospatial analysis: lessons learned from counterfeit Avastin detection in the US drug supply chain. BMJ Open. 2014;4: e006657 10.1136/bmjopen-2014-006657 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Rasheed H, Hollein L, Holzgrabe U. Future Information Technology Tools for Fighting Substandard and Falsified Medicines in Low- and Middle-Income Countries. Front Pharmacol. 2018;9: 995 10.3389/fphar.2018.00995 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Hamilton WL, Doyle C, Halliwell-Ewen M, Lambert G. Public health interventions to protect against falsified medicines: a systematic review of international, national and local policies. Health Policy Plan. 2016;31: 1448–1466. 10.1093/heapol/czw062 [DOI] [PubMed] [Google Scholar]
  • 12.Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile App Rating Scale: A New Tool for Assessing the Quality of Health Mobile Apps. JMIR mHealth uHealth. 2015;3: e27. 10.2196/mhealth.3422 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Santo K, Richtering SS, Chalmers J, Thiagalingam A, Chow CK, Redfern J. Mobile Phone Apps to Improve Medication Adherence: A Systematic Stepwise Process to Identify High-Quality Apps. JMIR Mhealth Uhealth. 2016;4: e132. 10.2196/mhealth.6742 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Ozawa S, Evans DR, Bessias S, Haynie DG, Yemeke TT, Laing SK, et al. Prevalence and Estimated Economic Burden of Substandard and Falsified Medicines in Low- and Middle-Income Countries: A Systematic Review and Meta-analysis. JAMA Netw Open. 2018;1: e181662. 10.1001/jamanetworkopen.2018.1662 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.0 (updated August 2019). Cochrane; 2019.
  • 16.Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339. 10.1136/bmj.b2700 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Higgins JPT, Savović J, Page MJ, Elbers RG, Sterne JAC. Chapter 8: Assessing risk of bias in a randomized trial. In: Higgins JPT, Thomas J, Chandler J, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.0 (updated July 2019). 2019 [cited 15 Mar 2020]. Available: www.training.cochrane.org/handbook.
  • 18.National Institutes of Health (NIH). “Study Quality Assessment Tools.” 2020 [cited 15 Mar 2020]. Available: www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools.
  • 19.Covidence systematic review software, Veritas Health Innovation, Melbourne, Australia. 2019.
  • 20.Khalifian S, Markman T, Sampognaro P, Mitchell S, Weeks S, Dattilo J. Medical student appraisal: searching on smartphones. Appl Clin Inform. 2013;4: 53–60. 10.4338/ACI-2012-10-CR-0047 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Hymel PA, Brossette SE, Wong DY, Zheng N. An Evaluation of the MedSnap Medication Authentication System. medRxiv. 2020. 10.1101/2020.05.06.20093427 [DOI] [Google Scholar]
  • 22.Medsnap Verify Services. 2020 [cited 17 Dec 2020]. Available: https://www.medsnap.com/verify/.
  • 23.Google play. CheckFake (dlt.sg App for drug counterfeit). 2020 [cited 17 Dec 2020]. Available: https://play.google.com/store/apps/details?id=sg.dlt.checkfake&hl=es_419.
  • 24.Bardus M, van Beurden SB, Smith JR, Abraham C. A review and content analysis of engagement, functionality, aesthetics, information quality, and change techniques in the most popular commercial apps for weight management. Int J Behav Nutr Phys Act. 2016;13: 35. 10.1186/s12966-016-0359-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Kim BY, Sharafoddini A, Tran N, Wen EY, Lee J. Consumer Mobile Apps for Potential Drug-Drug Interaction Check: Systematic Review and Content Analysis Using the Mobile App Rating Scale (MARS). JMIR mHealth uHealth. 2018;6: e74. 10.2196/mhealth.8613 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Guo Y, Yang F, Hu F, Li W, Ruggiano N, Lee HY. Existing Mobile Phone Apps for Self-Care Management of People With Alzheimer Disease and Related Dementias: Systematic Analysis. JMIR Aging. 2020;3: e15290. 10.2196/15290 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Woods LS, Duff J, Roehrer E, Walker K, Cummings E. Patients’ Experiences of Using a Consumer mHealth App for Self-Management of Heart Failure: Mixed-Methods Study. JMIR Hum Factors. 2019;6: e13009. 10.2196/13009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Adam A, Hellig JC, Perera M, Bolton D, Lawrentschuk N. “Prostate Cancer Risk Calculator” mobile applications (Apps): a systematic review and scoring using the validated user version of the Mobile Application Rating Scale (uMARS). World J Urol. 2018;36: 565–573. 10.1007/s00345-017-2150-1 [DOI] [PubMed] [Google Scholar]
  • 29.World Health Organization. WHO Drug Information. Quality of medicines. 2016;30.
  • 30.World Health Organization. WHO Global Surveillance and Monitoring System for Substandard and Falsified Medical Products. Geneva, Switzerland: World Health Organization; 2017.
  • 31.Fadlallah R, El-Jardali F, Annan F, Azzam H, Akl EA. Strategies and Systems-Level Interventions to Combat or Prevent Drug Counterfeiting: A Systematic Review of Evidence Beyond Effectiveness. Pharmaceut Med. 2016;30: 263–276. 10.1007/s40290-016-0156-4 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Vijayaprakash Suppiah

27 Oct 2020

PONE-D-20-22666

Mobile apps for Detecting Falsified and Substandard Drugs: A systematic review

PLOS ONE

Dear Dr. Ciapponi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Dec 11 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Vijayaprakash Suppiah, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please clarify the last date of your search for articles and applications.

3. You have provided a table of characteristics of the mobile applications; however, you have not summarized the characteristics (and quality) of the two studies you included.

4. Neither publication bias nor study heterogeneity have been assessed. Please clarify why this is so.

5. In your PRISMA checklist, you indicate that you did not present results of any risk assessment of bias across studies. However, in the Methods section, you indicate that risk of bias was assessed. Please present a summary of your findings if any.

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: My main concern with this paper is that it is not a systematic review of mobile apps for detecting falsified and substandard drugs (SF drugs). Rather, it is a review of 24 apps and 16 websites whose function is pill identification, and one app (MedSnap) whose function is to detect SF drugs. The authors used the MARS assessment tool to evaluate the usability of each app and website, but they didn’t actually assess any SF products. The authors point out that “we cannot extrapolate the identification of a pill by a device to the categorization of a product as counterfeit or substandard.” So, this manuscript reviewed the ease of use of “pill identification apps,” and that’s how it should be titled and pitched. I think the gaps in SF drug detection are serious and speak to a real problem in the world, but it’s not fair to evaluate apps designed for pill identification on this basis.

→change the title to remove the idea that the mss reviews mobile apps and websites whose function is to detect SF drugs

→add discussion of the pill identification task and its context

→add discussion of the technical challenges for detecting SF drugs via pill image analysis (this is why none of the apps/sites that rely on the user inputting pill color/shape/imprint data are capable of SF detection)

Most of the pill identification apps and websites, such as the highly rated Medscape website, ask the user to input pill imprint numbers, color, and shape, and match that information to a market-specific database. Only the most carelessly made fake products would fail to match at this level. The authors say (line 197) that “developers claim that the apps have the capacity to also detect SCMs”.

→I would like this statement to be backed up by data about which pill identifier products make this claim.

Tools such as MedSnap utilize pill photographs which can be compared to a library of known “good” pills via image analysis tools. This method detects subtle differences between the imprint or tablet size/shape/color of a genuine manufacturer and a counterfeit version. (That was the study they did in Laos on fake artesunate and coartem.)

→There is no evidence that this kind of image analysis can detect substandard products.

→The authors might consider discussing the CD3/CDx device, which is another image analysis tool that uses multispectral imaging of pharmaceuticals—there is a published field testing record for a range of SF products.

Ranieri N et al (2014) Evaluation of a new handheld instrument for the detection of counterfeit artesunate by visual fluorescence comparison. Am J Trop Med Hyg 91:920–924. https://doi.org/10.4269/ajtmh.13-0644

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Marya Lieberman

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Feb 4;16(2):e0246061. doi: 10.1371/journal.pone.0246061.r002

Author response to Decision Letter 0


19 Nov 2020

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

→ Done

2. Please clarify the last date of your search for articles and applications.

→ Done: January 2020

3. You have provided a table of characteristics of the mobile applications; however, you have not summarized the characteristics (and quality) of the two studies you included.

We found only two studies which evaluated the performance of four highly recognized apps (Epocrates, IBM Micromedex Drug Reference, Medscape and MedSnap).[20,21] All these apps (the focus of our systematic review) are described in Table 1.

However, we expanded the narrative description of these studies and their quality, as can be seen below:

“Our search found a cross-sectional study that intended to compare Epocrates, Medscape, IBM Micromedex and Google for clinical management information designed for healthcare professionals over a one-week period.[20] Medical students at an academic hospital in the United States used a score for satisfaction of search and user interface based on a 1 to 5 scale, with 1 representing the lowest quality and 5 the highest quality. The study concluded that Medscape (satisfaction 4.92) was the most preferred free mobile app evaluated due to its interactive and educational features, followed by IBM Micromedex and Epocrates (satisfaction 4.58 and 4.42, respectively). This study was used to complete MARS question #19 (evidence base) about whether the app had been trialed. Further, we found a preprint study whose objective was to evaluate the sensitivity and specificity of MedSnap, which is not available on the app stores. In the study,[21]”

“We used the NIH Study Quality Assessment Tools for the 2 included studies and in both cases a poor score was obtained, which means a high risk of bias. This demonstrates the lack of studies evaluating apps and the great need to carry out rigorous studies that evaluate the functioning and usefulness of apps for pill identification and eventually to detect SCMs. Likewise, the risk of bias assessment for RCTs could not be carried out since no study with this type of design was found.”

4. Neither publication bias nor study heterogeneity have been assessed. Please clarify why this is so.

We performed a tabular and narrative synthesis of each identified mobile app. Since a meta-analysis is not applicable and there is no positive or negative result in the apps’ descriptive reports, publication bias could not be assessed. However, our exhaustive search and inclusion criteria minimize the risk of omitting relevant studies.

We considered that a formal assessment of heterogeneity was not applicable. However, we describe and discuss the differences found among included apps.

5. In your PRISMA checklist, you indicate that you did not present results of any risk assessment of bias across studies. However, in the Methods section, you indicate that risk of bias was assessed. Please present a summary of your findings if any.

We summarized the general characteristics of the reviewed apps and their mean MARS quality scores, available on the platforms, in Tables 1 and 2 and Figure 2.

The MARS score is the instrument used to assess app quality.

We added the quality assessment of the only two included studies, using the NIH instrument for cross-sectional studies.

“We used the NIH Study Quality Assessment Tools for the 2 included studies and in both cases a poor score was obtained, which means a high risk of bias. This demonstrates the lack of studies evaluating apps and the great need to carry out rigorous studies that evaluate the functioning and usefulness of apps for pill identification and eventually to detect SCMs. Likewise, the risk of bias assessment for RCTs could not be carried out since no study with this type of design was found.”

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

→ Done

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

________________________________________

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

________________________________________

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

________________________________________

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

________________________________________

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: My main concern with this paper is that it is not a systematic review of mobile apps for detecting falsified and substandard drugs (SF drugs). Rather, it is a review of 24 apps and 16 websites whose function is pill identification, and one app (MedSnap) whose function is to detect SF drugs.

Considering that our study fulfills different definitions of systematic reviews, we consider that it is indeed a systematic review:

*DARE: ≥ 4 criteria out of the first 5 (1-3 are mandatory)

#Oxman and Guyatt 1991: 1-4 & 6-7

#Cochrane Collaboration, CRD, MOOSE, Potsdam Consultation, QUOROM, AHRQ: 1-4 & 6-8

1. Were inclusion/exclusion criteria reported? Yes

2. Was the search adequate? Yes

3. Were the included studies synthesised? Yes, a tabular and narrative synthesis

4. Was the validity of the included studies assessed?

Yes, we used the NIH Study Quality Assessment Tools for the only two studies identified. Additionally, the others were descriptions of apps, which were rated with MARS.

5. Are sufficient details about the individual included studies presented? Yes, in tables and figures

6. Was the data extraction process adequate? Yes

Data extraction and risk of bias (quality) assessment were also performed independently by these reviewers, with oversight from two senior reviewers (AC and AB).

7. Was the study selection process adequate? Yes

All unique articles were independently assessed by two reviewers (MD and TA) based on title and abstract. Those marked for inclusion, or whose title and abstract were not sufficient to determine inclusion, were then reviewed using the full text.

8. Was ‘PICO’ used to focus the question(s)?

Not completely applicable, but the relevant PICO components are derived from the stated objectives

*DARE. (Accessed 04/08/2011, 2011, at http://www.crd.york.ac.uk/cms2web/AboutDare.asp.)

# Sander L, Kitcher H. Systematic and Other Reviews: Terms and Definitions Used by UK Organizations and Selected Databases. Systematic Review and Delphi Survey. In: National Institute for Health and Clinical Excellence. London; 2006.

The authors used the MARS assessment tool to evaluate the usability of each app and website, but they didn’t actually assess any SF products. The authors point out that “we cannot extrapolate the identification of a pill by a device to the categorization of a product as counterfeit or substandard.” So, this manuscript reviewed the ease of use of “pill identification apps,” and that’s how it should be titled and pitched. I think the gaps in SF drug detection are serious and speak to a real problem in the world, but it’s not fair to evaluate apps designed for pill identification on this basis.

→change the title to remove the idea that the mss reviews mobile apps and websites whose function is to detect SF drugs

Thank you for such an appropriate comment. We searched for direct assessments of SF products using these apps, and this defined the title and aims of our study. Unfortunately, we did not find any reported experiments on real-world SFs using these technologies. We highlighted this evidence gap and recommended caution in interpreting the pill identification quality scores due to the indirectness of the findings. A sentence was added in the Results section to clarify this point.

“These reports referred to the usability and the ability of apps to correctly identify pills. However, we did not find any direct assessment of SF products in the real world.”

→add discussion of the pill identification task and its context

We added this sentence in the discussion section

“In real life, these apps are mostly used to identify pills by elderly people or people who have difficulty recognizing medications, not to identify SCMs.”

→add discussion of the technical challenges for detecting SF drugs via pill image analysis (this is why none of the apps/sites that rely on the user inputting pill color/shape/imprint data are capable of SF detection)

While detecting SF drugs via pill image analysis does not allow proper detection of substandard drugs, it might help as an initial screening test, particularly with negative results.

Most of the pill identification apps and websites, such as the highly rated Medscape website, ask the user to input pill imprint numbers, color, and shape, and match that information to a market-specific database. Only the most carelessly made fake products would fail to match at this level. The authors say (line 197) that “developers claim that the apps have the capacity to also detect SCMs”.

→I would like this statement to be backed up by data about which pill identifier products make this claim.

Two apps claim to have the capacity to detect SCMs: MedSnap and CheckFake.

Tools such as MedSnap utilize pill photographs which can be compared to a library of known “good” pills via image analysis tools. This method detects subtle differences between the imprint or tablet size/shape/color of a genuine manufacturer and a counterfeit version. (That was the study they did in Laos on fake artesunate and coartem.)

→There is no evidence that this kind of image analysis can detect substandard products.

MedSnap detects SF drugs via pill image analysis. Although this does not allow proper detection of substandard drugs, it still might help as an initial screening test, which could be particularly useful for ruling out substandard drugs.

→The authors might consider discussing the CD3/CDx device, which is another image analysis tool that uses multispectral imaging of pharmaceuticals—there is a published field testing record for a range of SF products.

Ranieri N et al (2014) Evaluation of a new handheld instrument for the detection of counterfeit artesunate by visual fluorescence comparison. Am J Trop Med Hyg 91:920–924. https://doi.org/10.4269/ajtmh.13-0644

________________________________________

Thank you, but our systematic review focused only on mobile apps.

Attachment

Submitted filename: Response to reviewers.docx

Decision Letter 1

Vijayaprakash Suppiah

17 Dec 2020

PONE-D-20-22666R1

Mobile apps for detecting falsified and substandard drugs: A systematic review

PLOS ONE

Dear Dr. Ciapponi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 31 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Vijayaprakash Suppiah, PhD

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: PONE-D-20-22666R1 Mobile apps for detecting falsified and substandard drugs: A systematic review

I am flabbergasted by the general lack of documentation and regulatory oversight of these products and think it’s worthwhile for the mss to be published to highlight this evidence gap.

I’m still having a problem with the title. The authors conducted a systematic review of apps and websites that are used for pill identification. Only two of them make any claims to detect falsified or substandard products, and only one of those has any evidence base to support the claims. Can you have a systematic review of one manuscript?

-->Maybe, call it “a systematic review of pill identification apps with potential utility for detecting SF drugs”?

-->Please add the references to the MedSnap and CheckFake distributor’s claims of SF detection to the mss.

From the app store entry, CheckFake is a bar code checker, not a pill checker, and does not seem to be supported by or connected with any company—I didn't see a website or published literature on it. Pill checkers need to be maintained and upgraded as new brands or drugs enter the market, or as the app is used in markets that offer different brands. In the future it would be interesting to evaluate whether each app is "really" available in different regions, eg, whether there is a company behind it to support users and upgrade the app and whether the product really works for the brands found in Europe, America, Africa, etc.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Marya Lieberman

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Feb 4;16(2):e0246061. doi: 10.1371/journal.pone.0246061.r004

Author response to Decision Letter 1


18 Dec 2020

Reviewer #1: PONE-D-20-22666R1 Mobile apps for detecting falsified and substandard drugs: A systematic review

Point by point answers

I am flabbergasted by the general lack of documentation and regulatory oversight of these products and think it’s worthwhile for the mss to be published to highlight this evidence gap.

Thank you for the comment; we have now reflected this key message in the conclusions of the abstract:

“In conclusion, we found a remarkable evidence gap about the accuracy of mobile apps in detecting SCMs.”

The conclusions of the manuscript already clearly reflect this concept:

“Our systematic review highlighted an important evidence gap in diagnostic accuracy of mobile apps detecting SCMs, and there is a need for primary studies addressing this issue.”

I’m still having a problem with the title. The authors conducted a systematic review of apps and websites that are used for pill identification. Only two of them make any claims to detect falsified or substandard products, and only one of those has any evidence base to support the claims. Can you have a systematic review of one manuscript?-->Maybe, call it “a systematic review of pill identification apps with potential utility for detecting SF drugs”?

Dear reviewer, our research question was about mobile apps for detecting falsified and substandard drugs, and our protocol was registered in PROSPERO (CRD42020163075) under this name. We consider it good research practice to remain consistent with the protocol, regardless of the findings ultimately obtained.

The title of a systematic review refers to the research question that guides its search strategy. Not only is it possible for a systematic review to include a single study, but there may even be "empty" systematic reviews with zero included studies (there are multiple examples in the Cochrane Library). Of course, as you suggested, we have reinforced the first key message highlighting the identified evidence gap.

Additionally, the pill identification functionality could indirectly help detect falsified and substandard drugs.

This limitation in the identified evidence is also clearly discussed in the manuscript.

-->Please add the references to the MedSnap and CheckFake distributor’s claims of SF detection to the mss.

Done. These are the references included in the manuscript now:

22. Medsnap Verify Services. 2020 [cited 17 Dec 2020]. Available: https://www.medsnap.com/verify/

23. Google play. CheckFake (dlt.sg App for drug counterfeit). 2020 [cited 17 Dec 2020]. Available: https://play.google.com/store/apps/details?id=sg.dlt.checkfake&hl=es_419

From the app store entry, CheckFake is a bar code checker, not a pill checker, and does not seem to be supported by or connected with any company—I didn't see a website or published literature on it. Pill checkers need to be maintained and upgraded as new brands or drugs enter the market, or as the app is used in markets that offer different brands. In the future it would be interesting to evaluate whether each app is "really" available in different regions, eg, whether there is a company behind it to support users and upgrade the app and whether the product really works for the brands found in Europe, America, Africa, etc.

Thank you for the comment. We have corrected the information about the CheckFake app. We had to include it because this reference met our inclusion criteria; however, the quality and reliability of this app's information received the lowest score on the MARS questionnaire.

Attachment

Submitted filename: Responses Reviewers comments 2.docx

Decision Letter 2

Vijayaprakash Suppiah

13 Jan 2021

Mobile apps for detecting falsified and substandard drugs: A systematic review

PONE-D-20-22666R2

Dear Dr. Ciapponi,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Vijayaprakash Suppiah, PhD

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Marya Lieberman https://orcid.org/0000-0003-3968-8044

Acceptance letter

Vijayaprakash Suppiah

26 Jan 2021

PONE-D-20-22666R2

Mobile apps for detecting falsified and substandard drugs: A systematic review

Dear Dr. Ciapponi:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Vijayaprakash Suppiah

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. PRISMA checklist.

    (DOCX)

    S2 File. Search strategy.

    (DOCX)

    S3 File. NIH study quality assessment tools for observational studies.

    (XLSX)

    Attachment

    Submitted filename: Response to reviewers.docx

    Attachment

    Submitted filename: Responses Reviewers comments 2.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.

