PLOS One. 2021 Dec 30;16(12):e0261006. doi: 10.1371/journal.pone.0261006

Implementation of an electronic patient-reported measure of barriers to antiretroviral therapy adherence with the Opal patient portal: Protocol for a mixed method type 3 hybrid pilot study at a large Montreal HIV clinic

Kim Engler 1,*, Serge Vicente 2, Yuanchao Ma 1, Tarek Hijal 3, Joseph Cox 4,5, Sara Ahmed 6, Marina Klein 1,4,7, Sofiane Achiche 8, Nitika Pant Pai 7, Alexandra de Pokomandy 1,4,9, Karine Lacombe 10, Bertrand Lebouché 1,4,9
Editor: Ethan Moitra
PMCID: PMC8717992  PMID: 34969046

Abstract

Background

Adherence to antiretroviral therapy (ART) remains problematic. Regular monitoring of its barriers is clinically recommended; however, patient-provider communication around adherence is often inadequate. Our team thus decided to develop a new electronically administered patient-reported outcome measure (PROM) of barriers to ART adherence (the I-Score) to systematically capture these data for physician consideration in routine HIV care. To prepare for a definitive controlled trial of the I-Score intervention, a pilot study was designed. Its primary objectives are to evaluate patient and physician perceptions of the I-Score intervention and its implementation strategy.

Methods

This one-arm, 6-month study will adopt a mixed method type 3 implementation-effectiveness hybrid design and be conducted at the Chronic Viral Illness Service of the McGill University Health Centre (Montreal, Canada). Four HIV physicians and 32 of their HIV patients with known or suspected adherence problems will participate. The intervention will involve having patients complete the I-Score through a smartphone application (Opal) before meeting with their physician. Both patients and physicians will have access to the I-Score results for consideration during the clinic visits at Times 1, 2 (3 months), and 3 (6 months). The implementation strategy will focus on stakeholder involvement, education, and training; promoting the intervention’s adaptability; and hiring an Application Manager to facilitate implementation. Implementation, patient, and service outcomes will be collected at Times 1, 2, and 3. The primary outcome is the intervention’s acceptability to patients and physicians. Qualitative data obtained, in part, through physician focus groups (Times 2–3) and patient interviews (Times 2–3) will help evaluate the implementation strategy and inform any methodological adaptations.

Discussion

This study will help plan a definitive trial to test the efficacy of the I-Score intervention. It will generate needed data on electronic PROM interventions in routine HIV care that will help improve understanding of conditions for their successful implementation.

Clinical trial registration

ClinicalTrials.gov identifier: NCT04702412; https://clinicaltrials.gov/.

Introduction

Routinely collecting data on patient-reported outcome measures (PROMs) for individual patient care can benefit both people living with HIV and their providers, yet it is seldom done in HIV clinical practice [1]. For patients, it may help ensure that HIV care is person-centered and in line with their needs [1]. For providers, given the multidimensional and chronic nature of HIV clinical assessment and follow-up, the use of PROMs could facilitate efficient application of clinical guidelines in a context of time and resource constraints [2].

While past syntheses of effectiveness evidence for PROM use across specialties in routine care have typically found mixed results, with inconsistent impacts on patient outcomes [3,4], a more recent systematic review, published in 2019, found that the evidence supports PROM use in standard care, particularly to improve patient-provider communication and decision-making in clinical practice [5]. Furthermore, the international momentum building for PROM use [6] may increase with the current COVID-19 pandemic. Indeed, there are calls for a scale up of electronic PROM implementation in this crisis for the remote follow-up of chronic conditions, in part, to better screen patients and promptly manage their needs [7].

The management of antiretroviral therapy (ART) adherence for the treatment of HIV is among the areas that could profit from greater PROM use. Successful ART remains essential to a near-normal life expectancy; however, many on ART have suboptimal adherence [8,9], even on single-tablet regimens [10,11]. In a recent study based on prescription fill dates, only 23% of adults initiating a single-tablet regimen were considered adherent over a six-month period, versus 12% among those who initiated a multiple-tablet regimen [10]. Clinically recommended strategies to foster adherence include ongoing monitoring of barriers to adherence among people living with HIV [12]. Yet, several studies point to inadequate patient-provider communication around ART adherence and its impediments [13–17], and many HIV providers underestimate their patients’ adherence difficulties [18,19]. In addition, individuals with HIV collectively report a multitude of barriers to adherence, including a variety of cognitive, emotional, social, and material issues as well as health service-related barriers [20,21], the proper evaluation of which may be time-consuming for providers [22].

For these reasons, with patient [23] and provider [24] involvement, we are developing a PROM of barriers to ART adherence, the Interference-Score (or I-Score), for electronic administration. I-Score data will be collected from patients and shared with their providers via Opal, a patient portal and smartphone app. This award-winning app [25], which is currently in use at the Cedars Cancer Centre of the McGill University Health Centre (MUHC), will be configured to respond to the needs of patients with HIV. Opal can give patients access to appointment schedules, laboratory test results, educational material, waiting room management tools, and PROMs. Electronic administration of our PROM was crucial as it simplifies score integration within the clinical workflow, allows for longitudinal presentation of scores as well as remote monitoring, and through Opal, it provides access to several other useful and potentially empowering patient-centered functions.

Aim and objectives

With the present mixed method pilot study, drawing on implementation science, the aim is to develop the methods and tools necessary to undertake a more robust evaluation of the implementation and effectiveness of the I-Score PROM-within-Opal innovation (henceforth, the I-Score intervention) in routine HIV care with individuals on ART. This study’s primary objectives are to evaluate stakeholder perceptions of the I-Score innovation (Objective 1) and evaluate the implementation strategy (Objective 2) in terms of recommended implementation science metrics for PROMs in routine care [26]. Its secondary objective (Objective 3) is to determine if the intervention shows promise and the chosen outcomes are useful, by observing collected data on select effectiveness outcomes (patient and service outcomes).

Guiding frameworks

It is important that a credible causal explanation of a digital health innovation’s intended impacts be provided [27]. Indeed, in an electronic PROM-based intervention, conceptual or theoretical frameworks specify the mechanisms through which the intervention is expected to have its effects [28], facilitating appropriate outcome selection and the interpretation of results [29].

This pilot study will be guided, in part, by an intervention logic chain, depicted by the boxes in Fig 1, and adapted from the frameworks of Greenhalgh and colleagues [28,29]. The core of the intervention involves having patients complete the PROM prior to their HIV clinic visit and having both patients and providers receive and review the results. The left arrow in Fig 1 presents the key components of the implementation framework used, which will guide qualitative analysis. Specifically, these are the five broad domains of potential influence on implementation of Damschroder and colleagues’ [30] Consolidated Framework for Implementation Research (CFIR) within which are grouped 39 distinct constructs. Hence, it is assumed that flow through the logic chain can be affected by features of the intervention, settings, individuals, and implementation process involved. The CFIR is a flexible and widely used framework in implementation research, including for PROM-based initiatives [26].

Fig 1. Guiding implementation framework and I-Score intervention logic chain.

Fig 1

Asterisks indicate elements of the logic chain which will be examined as a part of this pilot study.

Another working framework (Fig 2) presents the broad hypothesized relationships between the implementation strategy used for the I-Score and the categories of study outcomes addressed. Borrowing from the frameworks of Stover and colleagues [26] and Santana and Feeny [31], it conceives of successful implementation of I-Score use in standard HIV care as potentially generating cascading effects on service and patient outcomes.

Fig 2. Relationship between the I-Score implementation strategy and study outcomes.

Fig 2

Asterisks indicate outcomes for which data will be collected as a part of this pilot study.

Materials and methods

This study received approval from the McGill University Health Centre Research Ethics Board on January 18, 2021 (Study ID CTNPT039/2021–7190). Specifically, the Cells, Tissues, Genetics & Qualitative Research panel approved the study.

Study design

This 6-month pilot study will adopt a one-arm mixed method type 3 implementation-effectiveness hybrid design and be conducted in a single clinical site (Fig 3). Type 3 hybrid designs emphasize testing the implementation strategy of an evidence-based intervention, and to a lesser extent, reporting on intervention effectiveness [32].

Fig 3. Pilot study design.

Fig 3

Mixed methods were adopted in this study as multiple methods are recommended for studying intervention implementation and related challenges in complex systems, like HIV clinics [33]. The integration of the quantitative and qualitative data collected will occur toward study end within a convergent parallel design [34]. Those directly involved in the analyses will decide upon the specifics of integration. Reporting of this study will seek to satisfy the standards of Good Reporting of a Mixed Methods Study [35] and the Standards for Reporting Implementation Studies [36].

Setting and participants

The study setting is a large hospital-based clinic in Montreal, Quebec, Canada. This clinic, the Chronic Viral Illness Service (CVIS) of the MUHC, offers multidisciplinary care to over 1600 adults living with HIV. The CVIS and several team members have experience with implementation science methods and related pilot studies [e.g., 37].

Among the 16 physicians actively treating individuals with HIV at the CVIS, four will be recruited to participate, along with 32 of their adult patients. This sample size amply meets rule-of-thumb recommendations for one-arm pilot studies [38]. To participate, patients must be confirmed HIV positive, aged at least 18 years, on combination ART (irrespective of duration), and literate in English or French. They must own a smartphone with an appropriate data plan and/or home Wi-Fi connection, since Opal is currently best suited to a smartphone interface. They must also be willing to download the smartphone app. Finally, patients must have had known or suspected adherence problems in the past 12 months, based on a detectable viral load test result per local standards [39] and/or a report by the patient or healthcare team (i.e., the physician, nurse, social worker, or pharmacist). At least ten of the 32 patients recruited will be women, to ensure sufficient representation of women living with HIV [40]. Patients may not participate if they are: concurrently enrolled in a clinical trial; affected by a cognitive impairment or medical instability that prevents them from participating in all aspects of the study; insufficiently able to use the app with the technical support provided; receiving treatment for hepatitis C or within 3 months of completing such treatment; or being treated for hepatitis B with a medication other than their combination ART.

Recruitment and consent process

Physicians and patients are expected to be enrolled from July to August 2021. Physicians treating patients with HIV at the CVIS will be asked individually to participate, by oral (e.g., phone) or email invitation. Patients of participating physicians will be recruited and consented in two ways: 1) when they visit the clinic; or 2) prior to an upcoming clinic visit. In the first case, when patients visit the clinic to meet with a health or social service provider (e.g., physician, nurse, social worker, psychologist), the provider will briefly describe the study to determine interest. Alternatively, suitable patients with upcoming visits will be identified by the physician and then contacted by a neutral clinic staff member to inform them of the study. If the individual is interested, the study coordinator will present the project to them in greater detail in person or over the phone, check the eligibility criteria, and obtain consent (see S1 Appendix for the patient consent form). An appointment for a teleconference (e.g., on Zoom) or in-person meeting at the clinic will then be made with the participant, prior to their next regular clinic appointment with their physician, to deliver training on using the app and completing the I-Score. The app’s installation and functionality on the patient’s smartphone will also be verified. Given efforts to limit in-hospital visits and risks for patients during the COVID-19 pandemic, in-clinic appointments with research staff will be avoided where possible.

The I-Score intervention

During the study, participating patients will visit with their HIV physician three times, at Time 1 (T1), month 3 (T2), and month 6 (T3), prior to which they will complete the I-Score, as instructed. This visit schedule was selected as it concords with guidelines for the clinical follow-up of HIV in Quebec [41], while maximizing the collection of repeated study measurements. Given COVID-19, one visit (T2) will be done remotely (by phone or teleconferencing), while the other visits will be held at the CVIS. The I-Score PROM contains 20 items, covering 6 domains of barriers to ART adherence: cognitive and emotional aspects; lifestyle factors; the social and material context; the health experience and state; characteristics of ART; and the healthcare system and its services. Respondents indicate how often each barrier made adherence difficult in the past 4 weeks, with an 11-point scale, from 0% (never made it difficult) to 100% (always made it difficult). Details on the measure’s development are published elsewhere [20,23,24,42]. Example items include “I was not motivated to take my medication,” “I felt isolated or alone,” “I had another health condition to deal with (for example, depression, diabetes, or heart disease),” and “My medication cost coverage was not sufficient.”
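To make the response format concrete, here is a minimal Python sketch, assuming hypothetical responses to items quoted above; it illustrates only the 11-point response scale, not the instrument’s published scoring rules [20,23,24,42].

```python
# Hypothetical sketch of the I-Score response format (not the instrument's
# published scoring algorithm): each of the 20 items is rated on an 11-point
# scale from 0% ("never made it difficult") to 100% ("always made it difficult").
VALID_LEVELS = set(range(0, 101, 10))  # 0, 10, 20, ..., 100

# A few items quoted in the protocol, with made-up responses.
responses = {
    "I was not motivated to take my medication": 30,
    "I felt isolated or alone": 0,
    "I had another health condition to deal with": 60,
    "My medication cost coverage was not sufficient": 10,
}

assert all(v in VALID_LEVELS for v in responses.values()), "off-scale response"

# A simple mean is shown only for illustration; the actual scoring rules are
# described in the measure's development papers [20,23,24,42].
print(f"Mean interference: {sum(responses.values()) / len(responses):.0f}%")
```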

The core intervention of this pilot consists of having individuals with HIV on ART register on the Opal app and complete the I-Score PROM prior to each of three consecutive visits with their HIV physician. The patients will receive a reminder to complete the I-Score one week before their visit and they will have immediate access to their results. The HIV physician will acquire the I-Score results before each visit via the ORMS dashboard, an appointment and questionnaire management tool integrated with Opal and designed specifically for healthcare providers. The option of graphically presenting scores over time will also be available, allowing the comparison of past and present scores. It is expected that patients and physicians will review the I-Score results so they can be considered during the clinic visit (Fig 1).

Opal cybersecurity

The technical cybersecurity aspects of the Opal app conform to the security and governance recommendations for the development of a patient portal, as identified by the MUHC’s Security and Governance team, to ensure the confidentiality of patient data. For details, see the multimedia S2 Appendix of Kildea et al. [25].

The implementation strategy

The multilevel implementation strategy, designed for this study, addresses known facilitators and barriers to implementing electronic PROMs in routine clinical practice [43] and draws on recognized implementation strategies [44,45]. S1 Table presents the correspondence between the facilitators and barriers targeted, the chosen implementation strategies to address them, and their relationship to components of the implementation framework used in this study, the CFIR. As a part of our approach, we will conduct an “Educational meeting” by teleconference with providers to formally teach them about the intervention and its rationale and respond to concerns. We will provide “Training and consultation” on the PROM and app by hiring an “Application Manager” (AM). The AM will train patients and providers and be available to them on an ongoing basis, as needed, preferably by phone or teleconference. They will also help monitor the quality of PROM data. Appointing such a coordinator (or Quality Assurance officer) is a recommended strategy to minimize the impact of missing PROM data [46]. The AM will thus oversee the completeness of PROM data collected and manage any system or software problems, which are potential disadvantages of computer or web-based PROM administration [47]. Hence, overall, the AM will participate in the “Facilitation” of the PROM’s implementation. Another strategy aims to meaningfully “Involve patients and providers” in the I-Score’s implementation. Notably, following the I-Score administrations at T2 and T3, physicians will participate in a focus group (by teleconference), while a short semi-structured interview will be conducted with each patient (by telephone or teleconference). Throughout the study, the AM will take field notes on the problems encountered by participants and this feedback will enable “Cyclical small tests of change” to improve implementation, using an evaluation approach guided by the Consolidated Framework for Implementation Research [48]. This way, we will “Promote adaptability” of the I-Score process, to enable adjustments to local considerations while maintaining the intervention’s core components, namely, I-Score completion by the patient via Opal prior to the clinic visit and review of scores by the physician in conjunction with the visit. Many peripheral components will be adaptable, such as the timing and number of reminders to complete the I-Score and how I-Score results are presented to providers on the ORMS dashboard.

Data collection

The data collection period is expected to extend from about September 2021 to February 2022.

Quantitative component

The quantitative component will have three sources of data: 1) participant self-report; 2) electronic medical records; and 3) passive data (e.g., on app use to assess fidelity). Patient self-report data will be collected via Opal. Physician self-report data will be obtained with paper questionnaires.

At T1, T2 (3 months) and T3 (6 months), a study questionnaire will be administered to participants, especially to assess implementation outcomes. At T1, the study questionnaire will comprise 31 questions for patients and 24 for physicians, while at T2 and T3, it will have 21 questions for patients and 19 for physicians. Based on the metadata of patients who have used Opal, these questionnaires should take less than 10 minutes to complete, given that patients take 10–15 seconds per question at first completion (e.g., 31 questions × 15 seconds ≈ 8 minutes). At T1, the questionnaire will ask about socio-demographics (e.g., year of birth, preferred language, sex, sexual orientation, ethnic group identity, immigration, education, income) and digital technology use, as well as pose general health questions for patients (year of diagnosis with HIV, treatment satisfaction) and clinical practice questions for physicians (years practicing in HIV, current number of HIV patients) (for the full content of the Time 1 study questionnaires, see S2 Appendix). The measures of digital technology use are as follows: frequency of mobile device use (adapted from Schnall et al. [49]), having a health app on one’s mobile device [50], extent of health app use (adapted from Balapour et al. [51]), confidence in reporting medical information using mobile technology [51], and intention to report personal health data with a mobile device app, if asked by a provider [51]. This information will help contextualize the findings and describe the sample. Clinical data, namely HIV viral load in copies/mL to determine viral suppression, will be extracted from patients’ medical health records at the clinic at T1 and T3.

Qualitative component

The qualitative component will have three sources of data: 1) 1-hour focus groups with all physicians (T2, T3); 2) 45-minute interviews with patients (T2, T3), until core theme saturation (an intermediate sample of 15 should be sufficient at each time point [52]); and 3) the Application Manager’s field notes, recorded on a standardized form (T1–T3). Focus groups and patient interviews will be conducted by an experienced interviewer and, if possible, through a teleconferencing platform such as Zoom. Participants will have the option of accessing the teleconference by telephone or the Internet. The patient’s name will not be shown. Audio recordings of the focus groups and interviews will be manually transcribed, with nominal (identifying) information removed. Each will be guided by a similar semi-structured interview schedule, in English or French, depending on preferred language. It will ask about the participants’ experience with I-Score use and its implementation, as well as about facilitating and impeding factors. The schedule of study procedures for patients and physicians can be found in Table 1.

Table 1. Schedule of study procedures for participants.

| Procedure | Prior to study start | Study start (baseline) | Month 3 | Month 6 |
| --- | --- | --- | --- | --- |
| Be screened and/or consented | Patients, Physicians | | | |
| Attend educational meeting | Physicians | | | |
| Receive training on the I-Score measure and Opal | Patients, Physicians | Patients, Physicians | Patients, Physicians | Patients, Physicians |
| Complete the I-Score measure via Opal | | Patients | Patients | Patients |
| Examine the I-Score measure results via the ORMS dashboard | | Physicians | Physicians | Physicians |
| Attend HIV patient-physician visit (online or in person) | | Patients, Physicians | Patients, Physicians | Patients, Physicians |
| Complete the post-visit checklist | | Physicians | Physicians | Physicians |
| Complete the online sociodemographic questionnaire | | Patients, Physicians | | |
| Complete the online study questionnaire (after the clinic visit(s)) | | Patients, Physicians | Patients, Physicians | Patients, Physicians |
| Possibly participate in an online qualitative interview (after the clinic visit) | | | Patients | Patients |
| Participate in an online focus group (after several clinic visits with participating patients) | | | Physicians | Physicians |
| Receive compensation | | Patients | Patients | Patients |

Study metrics and instruments

Details on the constructs assessed; the instruments and metrics used; the chosen thresholds for success, if applicable; the participant group contributing data; and the timing of data collection are presented in Table 2. Many of the chosen metrics are based on those recommended by Stover et al. [26]. Importantly, the authors emphasize the need to standardize evaluation metrics in patient-reported measure implementation and to distinguish between those used to assess perceptions of the innovation and those used to assess the implementation strategy. Not meeting the set thresholds for success, in this study, will signify that modifications are necessary before proceeding to a definitive trial [53].

Table 2. Implementation science metrics and effectiveness outcomes collected for the pilot study.

| Objective | Construct | Data collected | Threshold for success | Patients | Physicians | Timing |
| --- | --- | --- | --- | --- | --- | --- |
| Objective 1 – Evaluate perceptions of the I-Score innovation | Acceptability | Primary outcome: Acceptability E-Scale [54] | Score M ≥ 24 | ✓ | ✓ | T1, T2, T3 |
| | | % likely to recommend the I-Score [55] | ≥ 80% | ✓ | ✓ | T1, T2, T3 |
| | | Net Promoter Score [55] | > 0 | ✓ | ✓ | T1, T2, T3 |
| | Appropriateness | Perceived compatibility subscale [56] | Score M ≥ 5.5 | – | ✓ | T1, T2, T3 |
| | | Appropriateness of Intervention Measure [57] | Score M ≥ 4 | ✓ | ✓ | T1, T2, T3 |
| | Feasibility | Consent rate (and reasons for refusal) | ≥ 70% | ✓ | ✓ | T1 |
| | | Retention rate | ≥ 80% | ✓ | ✓ | T1, T2, T3 |
| | | Missing PROM (I-Score) data rate (e.g., due to non-completion, network failure) | ≤ 10% | ✓ | ✓ | T1–T3 |
| | | Feasibility of Intervention Measure [57] | Score M ≥ 4 | ✓ | ✓ | T1, T2, T3 |
| | Fidelity | % patients who complete the I-Score on time | ≥ 90% | ✓ | – | T1, T2, T3 |
| | | % physicians who review the I-Score results on time | ≥ 90% | – | ✓ | T1, T2, T3 |
| Objective 2 – Evaluate the implementation strategy | Acceptability | Barriers and facilitators to implementation, based on the qualitative data collected^a | – | ✓ | ✓ | T1–T3 |
| | Appropriateness | Perceived fit of the implementation strategy within the clinic, based on the qualitative data collected^a | – | ✓ | ✓ | T1–T3 |
| | Feasibility | % of included physicians participating in the implementation activities (educational meeting, focus groups) | ≥ 80% | – | ✓ | T1, T2, T3 |
| | | Rate of technical issues, based on the Application Manager’s notes | – | – | – | T1–T3 |
| | Fidelity | How and why the implementation strategy was adapted, based on the qualitative data collected^a | – | ✓ | ✓ | T1–T3 |
| Objective 3 – Determine preliminary intervention effectiveness | Patient management | Checklist of physician actions following review of the I-Score results | p ≤ 0.05 | – | ✓ | T1, T2, T3 |
| | Barriers to ART adherence | The I-Score PROM | p ≤ 0.05 | ✓ | – | T1, T2, T3 |
| | Adherence to ART | Self-Rating Scale Item [58] | p ≤ 0.05 | ✓ | – | T1, T2, T3 |
| | Viral load | HIV RNA viral load, as indicated in the patient’s medical file (> 50 copies/mL = detectable) | p ≤ 0.05 | ✓ | – | T1, T3 |

PROM: Patient-reported outcome measure; ART: Antiretroviral therapy.

^a Qualitative data include the Application Manager’s notes (T1–T3), the qualitative interviews with patients, and the focus groups with physicians (T2, T3).

Objective 1 – Evaluate perceptions of the I-Score intervention

The primary outcome of this pilot study will be acceptability, as measured by the Acceptability E-Scale (AES) for web-based PROMs (alpha coefficient: .76) [54]. Acceptability relates to how agreeable, palatable, or satisfactory an intervention is perceived to be by stakeholders [59]. It will be measured at T1, T2, and T3 with an adapted version of the AES, administered to both patient and physician participants. The scale has 6 items rated on a 5-point Likert scale whose anchors vary by item. Example items of the original measure include “How would you rate your overall satisfaction with this computer program?” and “How easy was this computer program […] for you to use?” A summary score is obtained by adding the item scores (range: 6–30). A score of at least 24 (80% of the maximum) indicates high acceptability and usability, as suggested by the scale developers.
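As a worked illustration of the AES scoring rule just described (six items, summed, range 6–30, success threshold 24), with hypothetical responses:

```python
# Acceptability E-Scale (AES): six items rated 1-5; the summary score is their
# sum (range 6-30). A score of at least 24 (80% of maximum) indicates high
# acceptability and usability. The responses below are hypothetical.
aes_items = [4, 5, 4, 3, 5, 4]
aes_score = sum(aes_items)  # 25
print(aes_score, "-> high acceptability" if aes_score >= 24 else "-> below threshold")
```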

Acceptability will also be measured at T1, T2 and T3 with a variant of the Net Promoter Score (NPS) used by England’s National Health Service (NHS) and labelled the Friends and Family Test [55]. The NPS is considered a measure of user satisfaction [60]. A single question will be asked (“How likely are you to recommend the I-Score?”) and rated on a 5-point Likert scale (1 = Extremely unlikely; 2 = Unlikely; 3 = Neither likely nor unlikely; 4 = Likely; 5 = Extremely likely). From this measure, the percentage recommending the I-Score will be calculated (score of 4 or 5), with a success threshold of 80% or more. An NPS-type score will also be calculated by creating three groups: promoters (score of 5), passives (score of 4), and detractors (score of 1–3). Subtracting the percentage of detractors from that of promoters provides the NPS. NPS scores range from -100 to 100. A positive score (> 0) will be considered good [61], and a score of ≥ 50, excellent.
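The two metrics derived from this single item reduce to simple proportions; a minimal sketch, with hypothetical ratings:

```python
from collections import Counter

def nps_metrics(ratings):
    """Metrics from the 5-point 'How likely are you to recommend the I-Score?'
    item: % recommending (score 4 or 5; success threshold >= 80%) and
    NPS = % promoters (5) - % detractors (1-3), range -100 to 100.
    Passives (score 4) count toward neither NPS group."""
    counts = Counter(ratings)
    n = len(ratings)
    pct_recommend = 100 * (counts[4] + counts[5]) / n
    nps = 100 * (counts[5] - (counts[1] + counts[2] + counts[3])) / n
    return pct_recommend, nps

ratings = [5, 4, 5, 3, 5, 4, 5, 2, 5, 4]  # hypothetical responses at one time point
pct, nps = nps_metrics(ratings)
print(f"% recommend = {pct:.0f}% (target >= 80), NPS = {nps:.0f} (> 0 good, >= 50 excellent)")
```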

Appropriateness concerns the perceived fit or relevance of the intervention for the particular users, setting, or problem at hand [59]. It will be measured, at T1, T2, and T3, with two instruments. One concerns the perceived compatibility of the I-Score with the physicians’ work. The perceived compatibility of an information technology innovation broadly relates to how consistent it is perceived to be with the potential users’ values, needs, and past experiences [56]. It will only be collected from physicians, with a compatibility subscale developed by Moore and Benbasat (alpha coefficient: .86) [56]. It contains four items (e.g., “Using [the IT innovation] is compatible with all aspects of my work”, “Using [the IT innovation] fits into my work style”). These are rated on a 7-point Likert scale, from Extremely disagree to Extremely agree, and averaged to produce the subscale score (range: 1–7). A minimum average score of 5.5 is the threshold set for compatibility.

In addition, a 4-item scale, the Appropriateness of Intervention Measure [57], will be completed by all participants (alpha coefficient: .91). Example items include “This [evidence-based practice] seems fitting” and “This [evidence-based practice] seems like a good match.” Items are scored on a five-point scale of agreement, from 1 = Completely disagree to 5 = Completely agree and averaged for a total score (range: 1–5). An average score of at least 4 will indicate adequate appropriateness with this instrument.

Feasibility relates to the extent to which our I-Score intervention is successfully used or carried out within the study site [59]. To determine feasibility, data will be collected on the consent rate, defined as the proportion of approached eligible patients and physicians who consent to participate. Individuals who choose not to participate will be asked to provide select sociodemographic information (sex, year of birth, preferred language) and their reason(s), with a checklist on a refusal form. If 70% or more agree to participate, the study will be judged feasible on this aspect. We will also examine the retention rate, indicated by the proportion of patients and physicians who complete the study; eighty percent will be considered the benchmark for success. Rates of missing I-Score data (e.g., due to network failure) and of patient and provider non-completion of self-reported questionnaire data will also be calculated; the criterion for success on this metric is at least 90% of items completed per participant. Furthermore, at T1, T2 and T3, participants will complete the Feasibility of Intervention Measure (alpha coefficient: .89) [57], a four-item self-report measure that is appropriate for different stakeholder groups (e.g., patients, providers). Example items include “This [evidence-based practice] seems possible” and “This [evidence-based practice] seems doable.” Average scores of at least 4 (range: 1–5), indicative of agreement on the 5-point response scale, will signify the I-Score intervention’s feasibility.
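The rate-based feasibility metrics are straightforward proportions; a short sketch, where all counts are illustrative assumptions rather than study data:

```python
def completion_rate(items_answered, items_total):
    """Per-participant item completion; the success criterion is >= 90%."""
    return 100 * items_answered / items_total

# Hypothetical recruitment and retention tallies, for illustration only.
consented, approached = 26, 34
retained, enrolled = 28, 32
print(f"Consent rate: {100 * consented / approached:.0f}% (threshold >= 70%)")
print(f"Retention rate: {100 * retained / enrolled:.0f}% (threshold >= 80%)")
print(f"Item completion: {completion_rate(19, 21):.0f}% (criterion >= 90%)")
```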

Fidelity is the degree to which the intervention was implemented as specified in the protocol [59]. It will be indicated by patient and provider adherence to core components of the intervention. Thresholds for success, from T1 to T3, are: 90% patient completion of the I-Score prior to meeting with the physician; 90% provider review of the patient’s I-Score results prior to or during the clinic visit.

Objective 2 – Evaluate the implementation strategy

Evaluation of the implementation strategy will be performed in relation to the same constructs as for the first objective. However, the assessment of acceptability, appropriateness, and fidelity will be solely based on analysis of qualitative data (see Table 2). As to feasibility, it will be assessed in terms of the rate of technical issues encountered and recorded in the Application Manager’s notes, and the percentage of providers who participated in the implementation activities (i.e., education meeting, focus groups), with a success threshold set at 80% or more.

Objective 3 – Determine preliminary intervention effectiveness

This pilot study will collect data on one service outcome, patient management. It will be assessed with a checklist submitted to participating physicians, allowing them to record, per patient encounter, whether they received the I-Score results on time, whether they reviewed them prior to or during the clinic visit, whether the results were discussed during the visit, and whether the I-Score identified concerning barriers. Physicians will then check off any clinical actions taken based on the I-Score results or any related patient-provider discussion (e.g., recording issues in medical notes, referring to another health professional, ordering a test, changing a medication or treatment, providing advice or education).

The patient outcomes assessed in this pilot are barriers to adherence, self-reported adherence, and viral load. Barriers to ART adherence will be assessed at T1, T2, and T3 with our previously described PROM. Adherence will be examined with the Self-Rating Scale Item (SRSI) [58] at T1, T2, and T3. It is a one-item measure of treatment adherence (“Rate your ability to take all your medications as prescribed” over the past 4 weeks), rated on a 6-point scale (Very poor, Poor, Fair, Good, Very good, and Excellent). Viral load, a clinical indicator of viral activity (e.g., infectiousness) and treatment response, will be treated as a dichotomous variable based on whether the HIV RNA viral load, as indicated in the patient’s medical file, is detectable (over 50 copies/mL) or not. Undetectability is a goal of HIV treatment. The most recent viral load test result at T1 and T3 will be collected.

Data analysis

The period of qualitative and quantitative data analysis is projected to extend from approximately July 2021 to April 2022.

Quantitative analysis

Time 1 questionnaires for people living with HIV and for HIV physicians will be summarized with descriptive statistics. For continuous variables, the minimum, the maximum, the mean, and the standard deviation will be reported. For ordinal and nominal qualitative variables, we will report absolute and relative frequencies (proportions).

As specified, for most quantitative metrics relating to Objectives 1 and 2, score targets were set to evaluate the ability to proceed to a definitive trial, as recommended for pilot studies [53]. For people living with HIV and HIV physicians, continuous outcomes expressed on a Likert scale will be summarized with the minimum, the maximum, the mean, and the standard deviation at T1, T2 and T3. Binary outcomes (yes or no) will be reported with absolute and relative frequencies (proportions) at T1, T2 and T3. The means and proportions at T1, T2 and T3 will be compared against their corresponding thresholds for success, presented in Table 2. To study the trend of means and proportions for people living with HIV over time, a Linear Mixed Model or a Generalized Linear Mixed Model will be used for continuous and binary outcomes, respectively. The response variable of each model will be the corresponding outcome and the independent variable will be time (T1, T2 and T3). The null hypothesis of no time effect on the corresponding outcome will be tested with a Student’s t-test on the regression coefficient. If the null hypothesis is rejected, we will perform post-hoc Student’s paired t-tests between all combinations of time points to determine between which time points means and proportions differed significantly. Additionally, to verify whether each threshold for success is met at the end of the study, we will test the null hypothesis that each mean or proportion at T3 is below its corresponding threshold, with a one-sided Student’s t-test. For all analyses, a significance level of 5% will be adopted. Finally, where appropriate, Cronbach’s alpha will be calculated to evaluate the internal consistency of subscales.
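A sketch of this longitudinal analysis in Python with statsmodels and SciPy, assuming a hypothetical long-format file (`outcomes_long.csv`) with columns `pid`, `time`, and `score`; for binary outcomes, a generalized linear mixed model (e.g., a logistic mixed model in R’s lme4 or an equivalent) would take the linear model’s place:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format data: one row per patient-visit.
df = pd.read_csv("outcomes_long.csv")  # columns: pid, time (T1/T2/T3), score

# Linear mixed model with a random intercept per patient; the time
# coefficients are tested against the null of no time effect.
lmm = smf.mixedlm("score ~ C(time)", df, groups=df["pid"]).fit()
print(lmm.summary())

# Post-hoc paired t-tests between time points (here, T1 vs T3), to be
# repeated for all pairs if the time effect is significant.
wide = df.pivot(index="pid", columns="time", values="score")
print(stats.ttest_rel(wide["T1"], wide["T3"], nan_policy="omit"))

# One-sided test of the T3 mean against its success threshold (e.g., the
# AES threshold of 24): rejecting H0 suggests the threshold is met.
print(stats.ttest_1samp(wide["T3"].dropna(), 24, alternative="greater"))
```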

Regarding the patient outcomes of Objective 3, barriers to adherence and adherence will be summarized with the minimum, the maximum, the mean, and the standard deviation. Viral load will be reported with absolute and relative frequencies, as it is considered a dichotomous variable. For the service outcome obtained from the physician checklist, we will report the proportion of clinic visits at which physicians took action based on the I-Score results, among the visits where an adherence barrier of concern was identified by physicians. Proportions will be reported for T1, T2 and T3 and globally, across time periods. To evaluate evidence of a statistically significant difference in our chosen effectiveness outcomes between T1 and T3, we will run a Student’s paired t-test for barriers to adherence and adherence, and a McNemar test for viral load. To complete the analysis of service outcomes, we will use a logistic regression model, considering only the visits where an adherence barrier of concern was identified by physicians. The dependent variable is whether or not an action was taken by physicians and the independent variable is time, treated as a factor with three levels (T1, T2 and T3). We will test the null hypothesis that time has no effect on the probability of taking action, with a t-test on the regression coefficient. We will conclude the analysis by testing the null hypothesis of equality of proportions between all pairwise combinations of time points, with a Student’s t-test between two proportions, applying a Bonferroni correction for multiple tests. For all analyses, a significance level of 5% will be adopted.
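A companion sketch for the Objective 3 tests, again with assumed column names and illustrative counts; the 2×2 table for McNemar’s test pairs each patient’s detectability status at T1 and T3:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.contingency_tables import mcnemar

# Paired viral load detectability, T1 vs T3 (illustrative counts only):
# rows = detectable at T1 (yes/no), columns = detectable at T3 (yes/no).
table = [[4, 10],
         [3, 15]]
print(mcnemar(table, exact=True))  # tests change in detectability T1 -> T3

# Service outcome: among visits where a barrier of concern was identified,
# model whether an action was taken as a function of time (assumed columns).
visits = pd.read_csv("checklist_visits.csv")  # columns: action (0/1), time
logit = smf.logit("action ~ C(time)", visits).fit()
print(logit.summary())
```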

Qualitative analysis

The study’s qualitative material (i.e., focus groups, interviews, Application Manager notes) will undergo content analysis [62], focusing on manifest content. Deductive content analysis will be favored, allowing the implementation barriers and facilitators identified to be categorized with an existing framework, while remaining open to emerging categories. Deductive content analysis also allows categories to be compared across periods, fitting with the study’s longitudinal design [63]. For this purpose, the Consolidated Framework for Implementation Research (CFIR) will be used [30]. Analysis will involve three phases [62]: 1) preparation, when the analyst attempts to get a sense of the entire dataset through immersion in the data; 2) organizing, during which an unconstrained categorization matrix will be devised with the CFIR’s constructs and the data will be coded accordingly (at this point, the qualitative data management software Atlas.ti version 8 will be used to code and categorize the material); and 3) reporting, which involves presenting the described contents (meanings) of the categories and addressing trustworthiness [62]. A product of these analyses will be matrices of the facilitators, barriers, and potential solutions raised by patients and physicians at each main qualitative data collection period, using the CFIR. These will allow for the tracking of categories over time [64], to help identify patterns. Two trained coders will be involved in the qualitative analyses, which will be discussed during periodic team meetings, including any discrepancies in coding or interpretation.

For the cyclical small tests of change of the implementation strategy, consistent with the approach by Keith et al. [48], the qualitative data will be coded and categorized with the CFIR. We will further structure and document our cyclical small tests of change by drawing on the iterative Plan-Do-Study-Act (PDSA) approach for quality improvement [65]. During the ‘plan’ stage, the stakeholder feedback collected will help to periodically identify and document factors that are affecting the intervention and associated changes to the implementation and/or peripheral components of the intervention that could lead to improvement. Related predictions will be explicitly articulated [65]. During the ‘do’ stage, changes will be tested. At the ‘study’ stage, the extent of the change’s success will be evaluated against the prediction(s) and documented with subsequent qualitative or quantitative data, per the study’s design, and the Application Manager’s field notes. The ‘act’ stage will see further adaptations, depending on the successfulness of the change, and/or the initiation of another cycle of change. For each PDSA cycle undertaken, all decisions and relevant information will be recorded, following the PDSA theoretical framework developed by Taylor et al. [65].
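To show how each cycle could be recorded, a minimal sketch of a PDSA documentation record follows; the field names are illustrative assumptions, not the Taylor et al. [65] template itself.

```python
from dataclasses import dataclass, field

@dataclass
class PDSACycle:
    """Minimal record for documenting one Plan-Do-Study-Act cycle; the
    field names are illustrative, not the published PDSA template."""
    plan: str        # factor identified and change proposed
    prediction: str  # explicitly articulated expected effect
    do: str          # how the change was tested
    study: str       # evaluation of the change against the prediction
    act: str         # adaptation kept, revised, or abandoned; next cycle?
    data_sources: list = field(default_factory=list)

cycle1 = PDSACycle(
    plan="Patients miss the I-Score reminder sent one week before the visit",
    prediction="Adding a second, later reminder will raise on-time completion",
    do="Enable the second reminder for the next round of visits",
    study="Compare on-time completion rates before and after the change",
    act="Keep the second reminder if completion improves; else revise",
    data_sources=["Application Manager field notes", "Opal completion data"],
)
```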

Mixed methods analysis

The quantitative and qualitative data will be analyzed separately and subsequently brought together for comparison, for a more complete interpretation of the results. Areas of convergence and divergence will be highlighted.

Data management

The Opal team (TH, YM) will oversee data management for this study. Patient-reported data will be electronically collected directly through Opal. This data will be stored in a local server protected by the MUHC. The Opal team will manage patient information through the Opal app, always respecting data security and confidentiality. For this study’s purpose, relevant deidentified data will be extracted by the team and stored on a password protected USB key or hard drive for subsequent analysis.

Discussion

Electronic PROM use for clinical practice with individuals living with HIV is limited. To our knowledge, this is the first pilot study of an intervention aimed at implementing systematic consideration of patient-identified ART adherence barriers with an electronic PROM in routine HIV care. This pilot study will help build needed knowledge on impediments to and strategies for implementing these tools in HIV care [1]. As such, it acts as a standalone study, providing useful and rich data to others considering similar interventions in similar contexts. It will generate data that will improve understanding of conditions for successful implementation as well as test and solidify the implementation strategy. Furthermore, it may shed light on the mechanisms of similar PROM interventions. Overall, it will produce useful data to design a definitive effectiveness trial of the I-Score intervention.

Anticipated problems

There are many potential barriers to implementing PROMs in care, such as provider reticence (e.g., due to concerns about increased workload). Our multi-pronged implementation strategy directly seeks to mitigate numerous common barriers to implementing PROMs in clinical practice [43]; for details, see S1 Table. Also in our favor is the study site (CVIS), a highly active center for HIV research, several of whose investigators are experienced in implementation science methods.

With the use of self-reported adherence measures, there are accuracy concerns. These measures are known to be prone to recall bias, if not misremembering, given, for instance, the mundane nature of medication-taking [66,67]. They are also deemed vulnerable to social desirability bias, if not intentional deception, given, for example, patient beliefs about the consequences of admitting adherence problems [66,67]. These processes could influence the types of adherence barriers people are able or willing to report when completing our PROM. Conversely, PROM administration in routine care is recognized to give patients permission to raise health problems with their providers [68]. Furthermore, our understanding is that barriers are multidimensional and interconnected, such that a common barrier like forgetting can be associated with numerous others (e.g., substance use, HIV stigma, life demands, co-morbidity) [20]. Hence, while our PROM is not expected to capture the full details of an individual’s barriers, a fuller portrait may emerge through the conversations the PROM instigates with providers. Our team has also strived to involve people living with HIV through a range of methods throughout the development of our instrument (via committee meetings, cognitive interviewing, etc.) to ensure its relevance, acceptability, and appropriateness for use in HIV care. This pilot study will allow us to further gauge the utility of the information provided by the PROM and to potentially make adjustments to improve accuracy.

Finally, an added concern is the continued spread of COVID-19. Physicians have been advised to use telemedicine and teleconsultations, whenever possible, to limit the spread of COVID-19 [69,70]. Given the uncertain evolution of the pandemic and associated public health response, methodological adjustments to this study may be required, for instance, to further limit in-person participant visits with physicians and research team members.

Conclusion

The PROM initiative at the heart of this study challenges traditional care paradigms with a more patient-centered approach. It aims to shift an HIV treatment paradigm that emphasizes biomedical markers (i.e., viral load) in adherence management. Systematic monitoring of patient-reported adherence barriers could allow for a more preventative approach and help ensure that adherence management addresses patients’ priorities. Indeed, the I-Score PROM includes only the most highly valued barriers in terms of relevance and importance to HIV care, as rated by people living with HIV and providers in our Delphi consultation [42]. As for the app through which the PROM is administered, its features may help redress the patient-provider knowledge imbalance and empower patients in their care [71].

Supporting information

S1 Checklist. SPIRIT 2013 checklist: Recommended items to address in a clinical trial protocol and related documents*.

(DOC)

S1 Table. Relationship between the identified facilitators/barriers of PROM implementation, the implementation framework (CFIR), and the pilot study’s implementation strategies.

PROM = patient-reported outcome measure; CFIR = Consolidated Framework for Implementation Research; a Reproduced or adapted from Foster et al. [43]; b Based on Damschroder et al. [30]; c Based on the taxonomies of Powell et al. [44,45].

(DOCX)

S1 Appendix. Patient consent form.

(DOCX)

S2 Appendix. Patient and physician Time 1 study questionnaires.

(DOCX)

S1 Dataset. World Health Organization Trial Registration Data Set for study CTNPT039.

(DOCX)

S1 File. The I-Score/Opal implementation pilot study.

(PDF)

Acknowledgments

We wish to thank the patients and health and social service providers who participated in the development of the I-Score PROM and in adapting Opal for HIV care. On these projects, we also thank the Quebec SPOR Support Unit – McGill Methodological Developments Platform for sharing their expertise and resources.

Data Availability

All relevant quantitative data from this study will be made available upon study completion.

Funding Statement

BL, KE, SA, and MK received pilot funding from the CIHR Canadian HIV Trials Network (https://www.hivnet.ubc.ca/) to conduct this study (Grant # CTNPT039). BL and KE received funding from Merck Canada Inc. to develop the I-Score PROM under their Investigator-Initiated Study Program (Grant # IISP-53538). BL received funding from the Quebec Strategy for Patient-Oriented Research (SPOR) Support Unit (Methodological Developments), also to develop the I-Score PROM (Grant # M006). BL received funding from Merck Canada Inc./MSD France to configure the Opal patient portal for HIV care at the Chronic Viral Illness Service of the McGill University Health Centre (Grant # 65364). KL and BL received funding from MSD Avenir to lead a Parisian trial modeled on this pilot study (Grant # DS-2018-0072). The funders did not and will not have a decisional role in study design, data collection, data analysis, or preparation of manuscripts for publication based on this study’s results.

References

1. Kall M, Marcellin F, Harding R, Lazarus JV, Carrieri P. Patient-reported outcomes to enhance person-centred HIV care. Lancet HIV. 2020;7(1):68. doi: 10.1016/S2352-3018(19)30345-5
2. Kjær ASHK, Rasmussen TA, Hjollund NH, Rodkjaer LO, Storgaard M. Patient-reported outcomes in daily clinical practise in HIV outpatient care. Int J Infect Dis. 2018;69:108–114. doi: 10.1016/j.ijid.2018.02.015
3. Boyce M, Browne J. Does providing feedback on patient-reported outcomes to healthcare professionals result in better outcomes for patients? A systematic review. Qual Life Res. 2013;22(9):2265–2278. doi: 10.1007/s11136-013-0390-0
4. Valderas JM, Kotzeva A, Espallargues M, Guyatt G, Ferrans CE, Halyard MY, et al. The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Qual Life Res. 2008;17(2):179–193. doi: 10.1007/s11136-007-9295-0
5. Ishaque S, Karnon J, Chen G, Nair R, Salter AB. A systematic review of randomised controlled trials evaluating the use of patient-reported outcome measures (PROMs). Qual Life Res. 2019;28(3):567–592. doi: 10.1007/s11136-018-2016-z
6. Rutherford C, Campbell R, King M, Speerin R, Soars L, Butcher A, et al. Implementing patient-reported outcome measures into clinical practice across NSW: mixed methods evaluation of the first year. Applied Research in Quality of Life. 2020.
7. Marandino L, Necchi A, Aglietta M, Di Maio M. COVID-19 emergency and the need to speed up the adoption of electronic patient-reported outcomes in cancer clinical practice. JCO Oncol Pract. 2020;16(6):295–298. doi: 10.1200/OP.20.00237
8. Bezabhe WM, Chalmers L, Bereznicki LR, Peterson GM. Adherence to antiretroviral therapy and virologic failure: a meta-analysis. Medicine (Baltimore). 2016;95(15):e3361. doi: 10.1097/MD.0000000000003361
9. Ortego C, Huedo-Medina TB, Llorca J, Sevilla L, Santos P, Rodriguez E, et al. Adherence to highly active antiretroviral therapy (HAART): a meta-analysis. AIDS Behav. 2011;15(7):1381–1396. doi: 10.1007/s10461-011-9942-x
10. Cohen J, Beaubrun A, Bashyal R, Huang A, Li J, Baser O. Real-world adherence and persistence for newly-prescribed HIV treatment: single versus multiple tablet regimen comparison among US Medicaid beneficiaries. AIDS Res Ther. 2020;17:12. doi: 10.1186/s12981-020-00268-1
11. Clay PG, Yuet WC, Moecklinghoff CH, Duchesne I, Tronczyński KL, Shah S, et al. A meta-analysis comparing 48-week treatment outcomes of single and multi-tablet antiretroviral regimens for the treatment of people living with HIV. AIDS Res Ther. 2018;15(1):17. doi: 10.1186/s12981-018-0204-0
12. Department of Health and Human Services (DHHS). Panel on Antiretroviral Guidelines for Adults and Adolescents. Guidelines for the use of antiretroviral agents in adults and adolescents living with HIV. http://www.aidsinfo.nih.gov/ContentFiles/AdultandAdolescentGL.pdf (accessed April 7, 2020).
13. Barfod TS, Hecht FM, Rubow C, Gerstoft J. Physicians’ communication with patients about adherence to HIV medication in San Francisco and Copenhagen: a qualitative study using Grounded Theory. BMC Health Serv Res. 2006;6:154. doi: 10.1186/1472-6963-6-154
14. Beach MC, Roter DL, Saha S, Korthuis PT, Eggly S, Cohn J, et al. Impact of a brief patient and provider intervention to improve the quality of communication about medication adherence among HIV patients. Patient Educ Couns. 2015;98(9):1078–1083. doi: 10.1016/j.pec.2015.05.011
15. Malta M, Petersen ML, Clair S, Freitas F, Bastos FI. Adherence to antiretroviral therapy: a qualitative study with physicians from Rio de Janeiro, Brazil. Cad Saúde Pública. 2005;21(5):1424–1432. doi: 10.1590/s0102-311x2005000500015
16. Laws MB, Beach MC, Lee Y, Rogers WH, Saha S, Korthuis PT, et al. Provider-patient adherence dialogue in HIV care: results of a multisite study. AIDS Behav. 2013;17(1):148–159. doi: 10.1007/s10461-012-0143-z
17. Wilson IB, Laws MB, Safren SA, Lee Y, Lu M, Coady W, et al. Provider-focused intervention increases adherence-related dialogue but does not improve antiretroviral therapy adherence in persons with HIV. J Acquir Immune Defic Syndr. 2010;53(3):338–347. doi: 10.1097/QAI.0b013e3181c7a245
18. Fredericksen R, Crane PK, Tufano J, Ralston J, Schmidt S, Brown T, et al. Integrating a web-based, patient-administered assessment into primary care for HIV-infected adults. J AIDS HIV Res. 2012;4(2):47–55. doi: 10.5897/jahr11.046
19. Miller LG, Liu H, Hays RD, Golin CE, Beck CK, Asch SM, et al. How well do clinicians estimate patients’ adherence to combination antiretroviral therapy? J Gen Intern Med. 2002;17(1):1–11. doi: 10.1046/j.1525-1497.2002.09004.x
20. Engler K, Lènàrt A, Lessard D, Toupin I, Lebouché B. Barriers to antiretroviral therapy adherence in developed countries: a qualitative synthesis to develop a conceptual framework for a new patient-reported outcome measure. AIDS Care. 2018;30(Sup1):17–28. doi: 10.1080/09540121.2018.1469725
21. Shubber Z, Mills EJ, Nachega JB, Vreeman R, Freitas M, Bock P, et al. Patient-reported barriers to adherence to antiretroviral therapy: a systematic review and meta-analysis. PLoS Med. 2016;13(11):e1002183. doi: 10.1371/journal.pmed.1002183
22. Genberg BL, Lee Y, Rogers WH, Wilson IB. Four types of barriers to adherence of antiretroviral therapy are associated with decreased adherence over time. AIDS Behav. 2015;19(1):85–92. doi: 10.1007/s10461-014-0775-2
23. Lessard D, Engler K, Toupin I, Routy J-P, Lebouché B. Evaluation of a project to engage patients in the development of a patient-reported measure for HIV care (the I-Score study). Health Expect. 2019;22(2):209–225. doi: 10.1111/hex.12845
24. Toupin I, Engler K, Lessard D, Wong L, Lènàrt A, Spire B, et al. Developing a patient-reported outcome measure for HIV care on perceived barriers to antiretroviral adherence: assessing the needs of HIV clinicians through qualitative analysis. Qual Life Res. 2018;27(2):379–388. doi: 10.1007/s11136-017-1711-5
25. Kildea J, Battista J, Cabral B, Hendren L, Herrera D, Hijal T, et al. Design and development of a person-centered patient portal using participatory stakeholder co-design. J Med Internet Res. 2019;21(2):e11371. doi: 10.2196/11371
26. Stover AM, Haverman L, van Oers HA, Greenhalgh J, Potter CM; ISOQOL. Using an implementation science approach to implement and evaluate patient-reported outcome measures (PROM) initiatives in routine care settings. Qual Life Res. 2020. Epub 2020 Jul 10. doi: 10.1007/s11136-020-02564-9
27. Murray E, Hekler EB, Andersson G, Collins LM, Doherty A, Hollis C, et al. Evaluating digital health interventions: key questions and approaches. Am J Prev Med. 2016;51(5):843–851. doi: 10.1016/j.amepre.2016.06.008
28. Greenhalgh J, Dalkin S, Gooding K, Gibbons E, Wright J, Meads D, et al. Functionality and feedback: a realist synthesis of the collation, interpretation and utilisation of patient-reported outcome measures data to improve patient care. Health Serv Deliv Res. 2017;5(2). doi: 10.3310/hsdr05020
29. Greenhalgh J, Long AF, Flynn R. The use of patient reported outcome measures in routine clinical practice: lack of impact or lack of theory? Soc Sci Med. 2005;60:833–843. doi: 10.1016/j.socscimed.2004.06.022
30. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi: 10.1186/1748-5908-4-50
31. Santana MJ, Feeny D. Framework to assess the effects of using patient-reported outcome measures in chronic care management. Qual Life Res. 2014;23(5):1505–1513. doi: 10.1007/s11136-013-0596-1
32. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–226. doi: 10.1097/MLR.0b013e3182408812
33. Peters DH, Tran NT, Adam T. Implementation research in health: a practical guide. Alliance for Health Policy and Systems Research. World Health Organization; 2013.
34. Creswell JW, Plano-Clark VL. Designing and conducting mixed methods research. 2nd edition. London, UK: Sage Publications; 2011.
35. O’Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. J Health Serv Res Policy. 2008;13(2):92–98. doi: 10.1258/jhsrp.2007.007074
36. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) statement. BMJ. 2017;356:i6795. doi: 10.1136/bmj.i6795
37. Cox J, Linthwaite B, Engler K, Lessard D, Lebouché B, Kronfli N. A type II implementation-effectiveness hybrid quasi-experimental pilot study of a clinical intervention to re-engage people living with HIV into care, ‘Lost & Found’: an implementation science protocol. Pilot Feasibility Stud. 2020;6:29. doi: 10.1186/s40814-020-0559-6
38. Sim J, Lewis M. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency. J Clin Epidemiol. 2011;65:301–308. doi: 10.1016/j.jclinepi.2011.07.011
39. Institut national de santé publique du Québec (INSPQ). Expert consensus: viral load and the risk of HIV transmission. Report. Direction des risques biologiques et de la santé au travail, 2014. Retrieved November 25, 2020 from: https://www.inspq.qc.ca/pdf/publications/1987_Viral_Load_HIV_Transmission.pdf
40. Public Health Agency of Canada. Summary: estimates of HIV incidence, prevalence and Canada’s progress on meeting the 90-90-90 HIV targets, 2016. Public Health Agency of Canada, 2018. Retrieved August 7, 2020 from: https://www.canada.ca/en/public-health/services/publications/diseases-conditions/summary-estimates-hiv-incidence-prevalence-canadas-progress-90-90-90.html
41. Ministère de la santé et des services sociaux. La thérapie antirétrovirale pour les adultes infectés par le VIH—Guide pour les professionnels de la santé du Québec [Antiretroviral therapy for HIV-infected adults—A guide for Quebec health professionals]. Gouvernement du Québec. Last updated February 2, 2019. Retrieved August 7, 2020 from: https://publications.msss.gouv.qc.ca/msss/document-000733/?&date=DESC&sujet=vih-sida&critere=sujet
42. Engler K, Ahmed S, Lessard D, Vicente S, Lebouché B. Assessing the content validity of a new patient-reported measure of barriers to antiretroviral therapy adherence for electronic administration in routine HIV care: proposal for a web-based Delphi study. JMIR Res Protoc. 2019;8(8):e12836. doi: 10.2196/12836
43. Foster A, Croot L, Brazier J, Harris J, O’Cathain A. The facilitators and barriers to implementing patient reported outcome measures in organisations delivering health related services: a systematic review of reviews. J Patient Rep Outcomes. 2018;2:46. doi: 10.1186/s41687-018-0072-3
  • 44.Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–157. doi: 10.1177/1077558711430690 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (eric) project. Implement Sci. 2015;10:21–21. doi: 10.1186/s13012-015-0209-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Mercieca-Bebber R, Palmer MJ, Brundage M, Calvert M, Stockler MR, King MT. Design, implementation and reporting strategies to reduce the instance and impact of missing patient-reported outcome (PRO) data: a systematic review. BMJ Open. 2016;6:e010938. doi: 10.1136/bmjopen-2015-010938 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.International Society for Quality of Life Research [ISOQOL] (prepared by Aaronson N, Choucair A, Elliott T, Greenhalgh J, Halyard M, Hess R, et al.). User’s Guide to Implementing Patient-Reported Outcomes Assessment in Clinical Practice, Version: November 11, 2011. [DOI] [PubMed]
  • 48.Keith RE, Crosson JC, O’Malley AS, Cromp D, Taylor EF. Using the Consolidated Framework for Implementation Research (CFIR) to produce actionable findings: a rapid-cycle evaluation approach to improving implementation. Implement Sci. 2017;12:15. doi: 10.1186/s13012-017-0550-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Schnall R, Cho H, Liu J. Health information technology usability evaluation scale (health-itues) for usability assessment of mobile health technology: validation study. JMIR Mhealth and Uhealth. 2018;6(1):4. doi: 10.2196/mhealth.8851 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Mahmood A, Kedia S, Wyant DK, Ahn S, Bhuyan SS. Use of mobile health applications for health-promoting behavior among individuals with chronic medical conditions. Digit Health. 2019;5:2055207619882181. doi: 10.1177/2055207619882181 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Balapour A, Reychav I, Sabherwal R, Azuri J. Mobile technology identity and self-efficacy: implications for the adoption of clinically supported mobile health apps. International Journal of Information Management. 2019;49:58–68. [Google Scholar]
  • 52.Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qualitative Health Research. 2017;27(4):591–608. doi: 10.1177/1049732316665344 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10:1. doi: 10.1186/1471-2288-10-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Tariman JD, Berry DL, Halpenny B, Wolpin S, Schepp K. Validation and testing of the Acceptability E-scale for web-based patient-reported outcomes in cancer care. Appl Nurs Res. 2011;24(1):53–58. doi: 10.1016/j.apnr.2009.04.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.National Health Service England. NHS England Review of the Friends and Family Test. 2014. Retrieved August 13, 2020 from https://www.england.nhs.uk/wp-content/uploads/2014/07/fft-rev1.pdf.
  • 56.Moore G, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research. 1991;2(3):192–222. [Google Scholar]
  • 57.Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):1–12. doi: 10.1186/s13012-016-0533-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Feldman BJ, Fredericksen RJ, Crane PK, Safren SA, Mugavero MJ, Willig JH, et al. Evaluation of the single-item self-rating adherence scale for use in routine clinical care of people living with HIV. AIDS Behav. 2013;17(1):307–318. doi: 10.1007/s10461-012-0326-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. doi: 10.1007/s10488-010-0319-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Hamilton DF, Lane JV, Gaston P, Patton JT, Macdonald DJ, Simpson AH, et al. Assessing treatment outcomes using a single question: the net promoter score. Bone Joint J. 2014;96-B(5):622–8. doi: 10.1302/0301-620X.96B5.32434 [DOI] [PubMed] [Google Scholar]
  • 61.Stirling P, Jenkins PJ, Clement ND, Duckworth AD, McEachan JE. The Net Promoter Scores with Friends and Family Test after four hand surgery procedures. J Hand Surg Eur. 2019;44(3):290–295. doi: 10.1177/1753193418819686 [DOI] [PubMed] [Google Scholar]
  • 62.Elo S, Kyngäs H. Original methodology: the qualitative content analysis process. J Adv Nurs. 2008;62(1):107–115. doi: 10.1111/j.1365-2648.2007.04569.x [DOI] [PubMed] [Google Scholar]
  • 63.Vaismoradi M, Turunen H, Bondas T. Content analysis and thematic analysis: implications for conducting a qualitative descriptive study. Nursing and Health Sciences. 2014;15(3):398–405. [DOI] [PubMed] [Google Scholar]
  • 64.Damschroder L, Hall C, Gillon L, Reardon C, Kelley C, Sparks J, et al. The Consolidated Framework for Implementation Research (CFIR): progress to date, tools and resources, and plans for the future. Implementation Sci. 2015;10(Suppl 1):A12. [Google Scholar]
  • 65.Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan–do–study–act method to improve quality in healthcare. BMJ Qual Saf. 2014;23:290–298. doi: 10.1136/bmjqs-2013-001862 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Williams AB, Amico KR, Bova C, Womack JA. A proposal for quality standards for measuring medication adherence in research. AIDS Behav. 2013;17(1):284–297. doi: 10.1007/s10461-012-0172-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Wilson IB, Carter AE, Berg KM. Improving the self-report of HIV antiretroviral medication adherence: is the glass half full or half empty? Curr HIV/AIDS Rep. 2009. Nov;6(4):177–86. doi: 10.1007/s11904-009-0024-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Greenhalgh J, Gooding K, Gibbons E, Dalkin S, Wright J, Valderas J, et al. How do patient reported outcome measures (PROMs) support clinician-patient communication and patient care? A realist synthesis. J Patient Rep Outcomes. 2018;2:42. doi: 10.1186/s41687-018-0061-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69.American Medical Association. AMA quick guide to telemedicine in practice. Updated June 22, 2020. Access July 3, 2020. https://www.ama-assn.org/practice-management/digital/ama-quick-guidetelemedicine-Practice.
  • 70.Royal College of Physicians and Surgeons of Canada. Telemedicine and virtual care guidelines (and other clinical resources for COVID-19). Updated May 21, 2020. Accessed July 3, 2020. http://www.royalcollege.ca/rcsite/documents/about/covid-19-resources-telemedicine-virtual-care-e#qc.
  • 71.Rigby M, Georgiou A, Hyppönen H, Ammenwerth E, de Keizer N, Magrabi F, et al. Patient portals as a means of information and communication technology support to patient-centric care coordination–the missing evidence and the challenges of evaluation. Yearbook of Medical Informatics. 2015;24(01):148–159. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Ethan Moitra

25 May 2021

PONE-D-20-37626

Implementation of an electronic patient-reported measure of barriers to antiretroviral therapy adherence with the Opal patient portal: a mixed method type 3 hybrid pilot study at a large Montreal HIV clinic

PLOS ONE

Dear Dr. Engler,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. As you will see, two reviewers were impressed with your study and recognized its potential impact on the field. Our third reviewer had some concerns about moving forward with this publication prior to obtaining more data. I do not think we should postpone this work, however they do raise important concerns that are worth addressing. Please attend to all of their comments as I think these will be readily addressable and should result in an improved manuscript.

Please submit your revised manuscript by Jul 03 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Ethan Moitra

Academic Editor

PLOS ONE

Brown University

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Financial Disclosure section:

"BL, KE, SA, and MK received pilot funding from the CIHR Canadian HIV Trials Network

(https://www.hivnet.ubc.ca/) to conduct this study (Grant # CTNPT039).

BL, KE received funding from Merck Canada Inc. to conduct the I-Score PROM development study under their Investigator-Initiated Study Program (Grant # IISP-53538). The funders had and will not have a decisional role in study design, data collection, data analysis, or preparation of manuscripts for publication based on its results. "

We note that you received funding from a commercial source: Merck Canada Inc.

Please provide an amended Competing Interests Statement that explicitly states this commercial funder, along with any other relevant declarations relating to employment, consultancy, patents, products in development, marketed products, etc.

Within this Competing Interests Statement, please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your amended Competing Interests Statement within your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This was a really interesting and detailed protocol of a pilot hybrid mixed methods study. I look forward to reading the results of the pilot and potentially the final trial when it is ready. It may be good to use the term protocol in your title since it isn't immediately clear. In your article, you showed knowledge of implementation research and mixed methods design and analysis of data. The pilot is complicated so even though you included sufficient amounts of detail, I did have to keep going back and forth between sections to clarify any questions I had.

Overall, I think this is a sound study. I do have a few minor questions/comments below:

1) It would be good to be consistent with the use of acronyms. Sometimes you use PROM and sometimes you use "patient-reported outcome measures" (e.g. on line 87 and on line 97).

2) Line 134 - You mention how many patients are in the HIV clinic, but it would also be good to mention how many physicians are in the clinic. It is interesting to know since you mention your sample size for the pilot will be 5 physicians.

3) Line 139 - Is there an inclusion criterion related to length of time on ART? You mention later that participants have to have signs of adherence problems within the last 12 months, but it wasn't clear if they had to have been taking ART for that long.

4) Line 156 - Many studies have physicians involved in recruiting their patients, but I just wondered if you had considered the possible ethical issues associated with this? Patients may feel undue pressure or obligation to participate.

5) Line 188 - Table S1 was not available, or I was not able to find it, so I was unable to review it.

6) Line 193 - Will the Application Manager be the one meeting with the patients/providers before T1 to train them?

7) Line 207 - With the cyclical tests of change, does this mean the process may be adapted for participants e.g. from T2 to T3? Would the change occur for all participants or be on an individual basis?

8) More detail on the researchers involved would be useful and interesting. Who will be running the focus groups? Which researchers, and how many, will be analysing the data? How will disagreements between researchers be handled? Particularly for qualitative data, having an overview of the background of the researchers is important.

9) Table 1 (Line 238) - At the very end it mentions compensation for the patients, but I was unable to find any detail on what this compensation was.

10) When reviewing Table 2 (Line 248), I wondered approximately how long the questionnaires will take at the different time points for both patients and physicians. It was also not entirely clear whether the questionnaires are completed before the patient meets their physician or afterwards at each time point.

11) Line 330 - change qualitative to quantitative

12) You do mention that you have published on the barriers measure elsewhere, but it would have been good to include a little more detail on this: how many barriers and the types of barriers it covers.

Well done again on a great paper! Good luck with the study!

Reviewer #2: The protocol is clearly written and robust and only minor comments are attached.

Minor Comments:

1. It is awkward to write (ln. 49) that PROMs are rarely used in clinical practice and then state (ln. 54) that the evidence for PROMs is mixed, then beneficial, in the following paragraph. It is unclear whether this refers to PROMs overall or to PROMs related to HIV.

2. The team is investigating barriers to ART adherence and cites some recent literature on the measurement of this topic. However, as others have shown, one major limitation is relying on participants to describe their most important barriers, which may be based on convenience and social desirability. For example, studies show that the #1 barrier to ART adherence is "simply forgetting," irrespective of viral load; thus, simply forgetting could be a consequence of neurocognitive functioning, depression, or social desirability in not wanting to report substance use. The promise of a PROM intervention must take into account why PLWH report the barriers they do, as you may otherwise miss the targets that truly drive non-adherence.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

Ethan Moitra

23 Nov 2021

Implementation of an electronic patient-reported measure of barriers to antiretroviral therapy adherence with the Opal patient portal: protocol for a mixed method type 3 hybrid pilot study at a large Montreal HIV clinic

PONE-D-20-37626R1

Dear Dr. Engler,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. A reviewer confirmed that you addressed all of their comments. Unfortunately, one of the other reviewers was not available for a second review. As such, I assessed your responsiveness to this person's comments and I think they were addressed. Lastly, our statistical reviewer (reviewer #3) continued to raise concerns about proceeding to publication now vs. when you enroll participants. I think these are helpful comments and I would encourage you to consider them as you move forward in this study. However, given your focus on the protocol of this study, rather than the findings, I believe this manuscript is suitable for publication.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Ethan Moitra

Academic Editor

PLOS ONE

Brown University

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #2: Yes

Reviewer #3: Partly

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors have adequately responded to all comments raised by the reviewers. I have no further comments.

Reviewer #3: This is a very well written report describing the protocol of a pilot study aiming to evaluate patients' and physicians' perceptions of the I-Score intervention and its implementation strategy.

From reading the text, there is a sense that COVID-19 is impacting the feasibility of this very well designed project, even in this pilot phase with a small sample size of n=30. Data were supposed to be collected by the end of February, then this was extended to April–August 2021. It is now mid-May and no data are yet available. Although the publication of the protocol might be useful in its own right, I suggest that submission be postponed until the first 30 patients are included and some data are shown.

Some comments on the implementation plan are below.

1. Unclear why the plan is to recruit only PLWH with known or suspected adherence problems. It might be useful to have a control group of PLWH with current VL≤50 to see whether they also might show fatigue between T1 and T3 and to compare patterns with those of the suspected non-adherent.

2. I think that there should be a plan for how to control for possible collider bias. The included population will be a selected sample of PLWH who own a smartphone with an appropriate data plan and/or home Wi-Fi connection, as the Opal app is suited to a smartphone interface. People with smartphones are typically different from those who do not own one, being younger, more literate, and having higher self-health awareness. These factors can also be causes of the outcomes of interest, thus introducing collider bias. There should be a plan for extracting a representative sample of the universe population at the outset, to be able to compare the characteristics of included and excluded patients and, if needed, perform a weighted analysis to control for collider bias (a sketch of one such approach follows).
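As a minimal illustration of the suggested weighted analysis, assuming a clinic-wide sampling frame with basic covariates is available (the file name and the variables age, education_years, and included are hypothetical, not taken from the protocol):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical extract of the full clinic population, with an indicator
    # of who enrolled in the pilot (1) versus who did not (0).
    frame = pd.read_csv("clinic_sampling_frame.csv")
    X = frame[["age", "education_years"]]  # illustrative covariates only
    y = frame["included"]

    # Model each patient's probability of selection into the study.
    selection_model = LogisticRegression().fit(X, y)
    frame["p_include"] = selection_model.predict_proba(X)[:, 1]

    # Weight enrolled patients by the inverse of their selection probability,
    # so that weighted analyses better reflect the whole clinic population.
    enrolled = frame[frame["included"] == 1].copy()
    enrolled["ipw"] = 1.0 / enrolled["p_include"]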

3. Unclear why paper questionnaires are needed at all. Could not everything be collected through the app?

4. Unclear how Zoom meetings will be kept anonymised and who will be able to access the recordings and through which platform.

5. Because the pilot study only has a sample size of 30, the threshold of p=0.05 for objective 3 seems too high. Assuming 0.05 is the type I error for the final analysis, this pilot could be seen as an interim analysis after 30 people are enrolled, so I would correct for type I error inflation (a sketch of one simple correction follows).
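As a minimal sketch of one such correction, assuming two analyses in total (this interim look plus the definitive analysis) and a simple Bonferroni split, neither of which the comment prescribes:

    # One interim look (at n = 30) plus the definitive analysis.
    K = 2
    overall_alpha = 0.05

    # Bonferroni split: the simplest correction for type I error inflation.
    bonferroni_alpha = overall_alpha / K
    print(f"Per-analysis threshold: {bonferroni_alpha:.4f}")  # 0.0250

A group-sequential boundary (e.g., Pocock's, at roughly 0.0294 per look for two looks) would be a less conservative alternative.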

6. I wonder whether adherence should be collected through more validated tools, such as a visual analogue scale (VAS; percentage of missed doses over the previous 4 weeks). I am not sure whether it is implementable in the Opal app, but it should not be too complicated (a bar going from 0–100% on which a position could be pointed at?). This would yield a more accurate value than a 5-point ordered category, potentially reducing misclassification of exposure and residual confounding.

7. Objective 3 seems to be structured such that every participant serves as their own control. Because of the relatively short follow-up, this might lead to low power for paired tests (most people will probably show flat trajectories, at least on the quantitative measures; the sketch below illustrates the concern). It might be useful to have a control group for whom the app was not available at all, to compare results across groups and better evaluate the effectiveness of the intervention. Although randomisation would be ideal, this could be done as an observational study by controlling for key confounders.
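To make the power concern concrete, a minimal sketch with statsmodels, assuming a standardized paired-difference effect size of 0.3 (an illustrative value, not taken from the protocol):

    from statsmodels.stats.power import TTestPower

    # Power of a paired (one-sample) t-test at n = 30 and alpha = 0.05,
    # two-sided, for a modest standardized change of d = 0.3.
    power = TTestPower().power(effect_size=0.3, nobs=30, alpha=0.05)
    print(f"Approximate power: {power:.2f}")  # roughly 0.35, well below 0.80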

Minor points

Lines 89 and 286: is ‘select’ a word? Should it be ‘selected’ instead?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: No

Acceptance letter

Ethan Moitra

20 Dec 2021

PONE-D-20-37626R1

Implementation of an electronic patient-reported measure of barriers to antiretroviral therapy adherence with the Opal patient portal: protocol for a mixed method type 3 hybrid pilot study at a large Montreal HIV clinic

Dear Dr. Engler:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Ethan Moitra

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Checklist. SPIRIT 2013 checklist: Recommended items to address in a clinical trial protocol and related documents*.

    (DOC)

    S1 Table. Relationship between the identified facilitators/barriers of PROM implementation, the implementation framework (CFIR), and the pilot study’s implementation strategies.

PROM = patient-reported outcome measure; CFIR = Consolidated Framework for Implementation Research. (a) Reproduced or adapted from Foster et al. [41]; (b) based on Damschroder et al. [25]; (c) based on the taxonomies of Powell et al. [42,43].

    (DOCX)

    S1 Appendix. Patient consent form.

    (DOCX)

    S2 Appendix. Patient and physician Time 1 study questionnaires.

    (DOCX)

    S1 Dataset. World Health Organization Trial Registration Data Set for study CTNPT039.

    (DOCX)

    S1 File. The I-Score/Opal implementation pilot study.

    (PDF)

    Attachment

    Submitted filename: Response to Reviewers.pdf

    Data Availability Statement

    All relevant quantitative data from this study will be made available upon study completion.

