World Psychiatry
. 2019 Jan 2;18(1):97–98. doi: 10.1002/wps.20592

Towards a consensus around standards for smartphone apps and digital mental health

John Torous 1, Gerhard Andersson 2, Andrew Bertagnoli 3, Helen Christensen 4, Pim Cuijpers 5, Joseph Firth 6,7, Adam Haim 8, Honor Hsin 9, Chris Hollis 10, Shôn Lewis 7, David C Mohr 11, Abhishek Pratap 12, Spencer Roux 1, Joel Sherrill 8, Patricia A Arean 12
PMCID: PMC6313231  PMID: 30600619

Mental disorders impact one in four people worldwide, yet access to care is challenging for those who suffer from them1. Mental health apps offer the potential to overcome access barriers for the nearly three billion people projected to own a smartphone by 2020.

Although there are over 10,000 mental health apps commercially available, there are few resources to help end users (patients, clinicians and health care organizations) evaluate the quality and suitability of these products. Thus, there is an urgent need for agreement on appropriate standards, principles and practices in the research and evaluation of these tools.

We represent leaders in mHealth research, industry and health care systems from around the globe, and we seek here to promote consensus on implementing these standards and principles for the evaluation of mental health apps. At a minimum, standards should include consideration of: a) data safety and privacy, b) effectiveness, c) user experience/adherence, d) data integration. Our consensus on the challenges and recommendations in each of these areas is presented below.

Data safety and privacy. Given today's climate regarding the misuse of online data such as email and social media, mental health apps must ensure that data storage, use and sharing practices fulfill health care standards for handling patient health information2. As with all sensitive health data, the smartphone‐based sensor data that many mental health apps collect, such as global positioning system (GPS) location, voice, keyboard usage, photos, video and overall phone usage behavior, pose significant privacy challenges2, 3.

Our recommendations are: a) agreed upon standards for data storage, use and sharing are needed; b) data storage, use and sharing policies must be made transparent to users of the app; c) if data are shared with external partners (e.g., researchers), the partner's storage, use and sharing plans must be shared with the end user; d) the end user must have the option to “opt out” of sharing his/her information; e) any language regarding data storage, use and sharing must be written at a maximum of a 6th grade reading level; f) technical security reviews and data audits are necessary to guarantee that apps follow the standards they set out and ensure that new vulnerabilities are quickly identified.
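Recommendation e) above lends itself to automated checking. As a purely illustrative sketch (not part of the consensus, and using a deliberately rough syllable heuristic rather than a validated readability tool), the standard Flesch‐Kincaid formula, 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59, can flag privacy language that exceeds a 6th grade reading level:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels,
    # discounting a silent final "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text: str) -> float:
    """Estimate the Flesch-Kincaid grade level of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Hypothetical privacy-policy excerpt written in plain language
policy = ("We store your data safely. We never sell it. "
          "You can opt out at any time.")
print(f"Estimated grade level: {fk_grade(policy):.1f}")
```

A production check would use a validated readability library, but even a heuristic like this can gate app store submissions or internal review.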

App effectiveness. Most mental health apps that are sold as therapeutic tools have not undergone rigorous evaluation, but instead claim to be evidence based because they are informed by evidence based treatments4. Even when apps do have an evidence base, changes in technology may mean that app updates need to be re‐evaluated for their efficacy. Small cosmetic changes, platform changes and aspect changes are unlikely to require a retest of an intervention, as long as the therapeutic principle that has been evaluated remains intact. Particularly where the aim is to increase reach, engagement and adherence rather than efficacy, A/B testing may be most appropriate. However, significant changes, such as adding a new therapeutic principle or substantially modifying that principle, must demonstrate efficacy through the same evaluation pathways as novel therapeutics.
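Where A/B testing is the appropriate tool, for example comparing two onboarding flows on two‐week retention, the analysis can be as simple as a two‐proportion z‐test. The sketch below is illustrative only; all counts are hypothetical:

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int):
    """Two-sided z-test comparing retention rates of variants A and B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 180/500 users retained at two weeks on variant A
# versus 220/500 on variant B
z, p = two_proportion_z(180, 500, 220, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice such tests should be pre‐registered with a planned sample size, and, as the text notes, they speak only to engagement, not to therapeutic efficacy.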

Our recommendations are: a) newly adapted therapeutic principles, which should be identified and defined, must undergo controlled clinical trials to determine their efficacy and effectiveness; b) small changes to an app with an evidence base need not undergo another clinical trial, but any major change requires a re‐evaluation of app effectiveness; c) a nosology for mental health apps5 and guidelines to match the necessary level of evidence for each app's use cases and risks6 should be developed.

User experience/adherence. Many patient end users stop using a health app within two weeks of download7. Clinician end user adherence is influenced by familiarity with technology and by how well an app matches the clinician's therapeutic expertise. Lack of adherence is likely a function of app usability: the input of clinician and patient end users is often missing when a mental health app is designed, resulting in apps that do not align with the preferences and goals of the intended users6.

Our recommendations are: a) user‐centered/user experience (UX) design methods should be employed when creating an app; this includes involving the intended end user in the development, and conducting as‐is workflow analysis to ensure that the app is useful and usable, and that it fits into the fabric of the person's life, not producing unnecessary burden to the end user; b) when usability is evaluated, developers should report use statistics to all end users; c) standards concerning best practice in user design research for mental health apps should be articulated.

Data integration. Apps should allow appropriate electronic health record (EHR) integration and sharing of health information with clinicians. One challenge is that EHRs have non‐uniform data integration requirements, and not all support the use of application programming interfaces (APIs) for data exchange. In the US, there is a strong move towards allowing patients access to their electronic health record information via SMART Health IT (https://apps.smarthealthit.org/), an open, standards‐based technology platform that enables innovators to create apps that run across platforms. However, there are few agreed upon internal data standards to facilitate this level of interoperability.

Our recommendations are: a) mental health apps that are intended to be used in conjunction with health care systems should employ methods to ensure interoperability with electronic health records; b) mental health apps will need to document the processes they use to ensure the secure exchange of information between platforms; c) internal data standards for interoperability are needed, much like those outlined in http://www.openmhealth.org/.
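To make recommendation c) concrete, the sketch below shows a self‐report measurement packaged as a single data point loosely modeled on the Open mHealth pattern: a header carrying provenance and a schema identifier, and a body holding the measurement itself. The field names and the minimal structural check are illustrative assumptions, not a normative schema:

```python
import json
from datetime import datetime, timezone

# Illustrative data point: header (provenance + schema id) plus body
# (the measurement). Field names here are assumptions for illustration.
data_point = {
    "header": {
        "id": "example-0001",
        "creation_date_time": datetime(2019, 1, 2,
                                       tzinfo=timezone.utc).isoformat(),
        "schema_id": {"namespace": "omh",
                      "name": "mood-rating",
                      "version": "1.0"},
    },
    "body": {
        "mood_rating": {"value": 7, "unit": "0-10 scale"},
        "effective_time_frame": {
            "date_time": datetime(2019, 1, 2,
                                  tzinfo=timezone.utc).isoformat()
        },
    },
}

def validate(dp: dict) -> bool:
    """Minimal structural check before exchanging the point with an EHR."""
    header = dp.get("header", {})
    schema = header.get("schema_id", {})
    return (all(k in header for k in ("id", "creation_date_time", "schema_id"))
            and all(k in schema for k in ("namespace", "name", "version"))
            and "body" in dp)

print(validate(data_point))
print(json.dumps(data_point)[:60])
```

Separating the schema identifier from the payload is what lets a receiving system decide whether it can interpret a point before accepting it, which is the essence of the interoperability recommended above.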

As mHealth transitions towards medical care in the mental health field, now is the critical moment for researchers, clinicians, service‐users, policy makers and funders to guide that transition and ensure that these tools meet rigorous standards, as is required of any novel therapeutic.

Movement in this direction is taking place. In the US, the Food and Drug Administration has announced that it is moving away from evaluating individual apps, and focusing its regulatory efforts on the app makers. Additionally, US professional groups such as the American Psychiatric Association and the American Medical Association are creating app evaluation frameworks8. In the UK, the National Health Service has recently re‐opened its App Library in beta phase, providing recommendations for apps across a range of conditions including mental health; the British Standards Institution has published standards for health app development; and the National Institute for Health and Care Excellence (NICE) is actively developing standards for apps and other technology‐based behavioral change interventions.

We thus make a final recommendation that these organizations, and others, come together to set universal standards for mental health app quality control, and that those standards include at a minimum the review of data security, app effectiveness, usability, and data integration.

H. Hsin is an employee of Verily Life Sciences. The views expressed here are those of the authors and do not represent official views of Verily Life Sciences.

References

