PLOS One. 2020 May 8;15(5):e0232082. doi: 10.1371/journal.pone.0232082

Patient reported postoperative pain with a smartphone application: A proof of concept

Bram Thiel 1,2,*, Marc B Godfried 1, Elise C van Huizen 3, Bart C Mooijer 3, Bouke A de Boer 4,¤, Rover A A M van Mierlo 5, Johan van Os 6, Bart F Geerts 7, Cor J Kalkman 8
Editor: Peter M ten Klooster
PMCID: PMC7209286  PMID: 32384103

Abstract

Postoperative pain management and pain assessment still lack the perspective of the patient. We have developed and studied a prototype smartphone application for patients to self-record postoperative pain. The main objective was to collect patient and stakeholder critique and recommendations for improving usability in order to develop a definitive version. The secondary objective was to investigate whether patient self-recording, compared to nurse-led assessment, is a suitable method for postoperative pain management. Fifty patients and a stakeholder group consisting of ten healthcare and ICT professionals and two members of the patient council participated in this study.

Main outcome

Thirty patients (60%) found it satisfying or very satisfying to communicate their pain with the app. Pain experienced after surgery was scored by patients as ‘no’: 3 (6%), ‘little’: 5 (10%), ‘bearable’: 25 (50%), ‘considerable’: 13 (26%) and ‘severe’: 1 (2%). Forty-five patients (90%) were positive about the ease of recording. Forty-five patients (90%) could correctly record their pain with the app. Thirty-eight patients (76%) agreed that in-app notifications to record pain were useful. Two patients (4%) were too ill to use the application. Based on usability feedback, we will redesign the pain intensity wheel and the in-app pain chart to improve clarity for patients to understand the course of their pain.

Secondary outcomes

The median patient-recorded pain app score (4.0, range 0 to 10) and the median nurse-recorded numerical rating scale (NRS) pain score (4.0, range 0 to 9) were not statistically different (p = 0.06). Forty-two percent of the 307 patient pain app scores were ≥ 5 (on a scale from 0, no pain at all, to 10, worst imaginable pain). Of these, 83% were recorded as ‘bearable’, while patients asked for additional analgesia in only 18% of the recordings. The results suggest that self-recording of the severity of postoperative pain by patients with a smartphone application could be useful for postoperative pain management. The application was perceived as user-friendly and received high satisfaction ratings from both patients and stakeholders. Further research is needed to validate the 11-point numeric and faces pain scale against the current gold standards, the visual analogue scale (VAS) and the NRS for pain.

Introduction

Postoperative pain assessment is predominantly performed by nursing staff. Patients are asked to rate the severity of their pain on a visual analogue scale (VAS) or a numerical rating scale (NRS). Both VAS and NRS are currently the gold standards for pain assessment and are often used to compare rates of severe postoperative pain between hospitals [1, 2]. However, it is conceivable that what we now consider a valid outcome in reality ignores the patient’s perspective [3]. For example, nurses may sometimes decide to alter the patient’s reported score: the recorded NRS may be a ‘negotiated’ result, or an ‘average’ of the patient-reported pain score and the nurse’s own assessment of the observed pain [4]. Another problem of current postoperative pain management is outside pressure from government-mandated patient safety and healthcare improvement programs to reduce the number of patients with pain scores higher than NRS 7, without clarity on how patients themselves value these scores [5]. Postoperative pain management based only on the NRS, rather than on the patient’s own valuation, might lead to analgesic over-administration through strict adherence to a protocol-established NRS threshold for administering analgesics [6]. Lastly, nurses’ willingness to record pain scores may be low owing to their workload and perceived high administrative burden [7, 8].

Self-assessment and recording of postoperative pain by patients might be the solution to the aforementioned problems. The results of a study evaluating the effectiveness of a self-reporting pain board in 50 oncology patients suggested that self-reporting reduces under-assessment and provides a reliable and effective means of assessing pain [9]. In addition, hospitalized patients like to be in control and are willing to contribute to their treatment and to the recording of symptoms (e.g. pain) in their electronic medical record (EMR) [9, 10]. Yet most hospitals do not provide options or tools for patients to contribute to their EMR.

In the present study we have collected data of patient and stakeholder experiences on the usability of a newly developed smartphone application for self-recording postoperative pain by hospitalized patients. The main objective was to collect recommendations and improvements to adjust the smartphone application to its final version. Furthermore, we have collected patients’ self-recorded pain scores with the application and pain scores recorded by nurses to assess the agreement between scores, and to determine if patient self-recording of postoperative pain with a smartphone application might be a suitable tool for postoperative pain management.

Materials and methods

Study design

A prospective mixed-method cohort study was conducted in OLVG Hospital, a Dutch general teaching hospital situated at two locations in Amsterdam, the Netherlands. The study was conducted according to the principles of Good Clinical Practice and the Declaration of Helsinki [11]. The OLVG Hospital medical ethics committee considered this study as not being covered by the Medical Research Involving Human Subjects Act (WMO). The study was approved by the local institutional board of OLVG Hospital. Written informed consent was obtained from all participating patients. The study is registered with a summary of the study protocol in the Dutch Trial Register (Nederlands Trial Register, NTR) under number NL6565 [12].

The prototype application was built by Logicapps, Amsterdam, the Netherlands. The application was developed to be accessible free of charge for patients using smartphones with an Android operating system. During the study, the application was securely connected with the data server of Logicapps according to the standards of privacy protection mandated by Dutch law. A Conformité Européenne (CE) certificate, indicating conformity with health, safety and environmental protection standards for products within the European Economic Area (EEA), was not mandated during the conduct of this study [13].

Participants and recruitment

Patients aged 18 years or older, undergoing elective non-outpatient surgery and in possession of a smartphone with an Android operating system, were regarded as suitable participants. Patients not able to read or understand Dutch were excluded from participation in this study. Patients were asked to participate in this study during their visit to the outpatient pre-anesthesia clinic. Participants received an information letter providing details about the study. In addition, a researcher helped the patients download the application to their smartphone and provided instructions on how to use the application and on the course of the study. A patient sample size of 50, based on sample size calculation for qualitative studies, was estimated to be sufficient for collecting end-user feedback [14, 15]. Patients were recruited in consecutive order of hospital admission.

Furthermore, based on their interest and experience, we asked 4 anesthesiologists, 3 physician assistants, 2 medical students, 1 software engineer and 2 patients from the OLVG patient council to be part of a stakeholder group to comment on the application. We estimated that the opinions of 12 stakeholders should provide sufficient information [16, 17].

Study procedure

During preoperative assessment, patients willing to participate received verbal and hands-on instruction on how to use the application. After downloading the application from Google Play™, patients were required to fill in the following information upon opening the application: name, date of birth and gender. Patients were then asked to allow the application to send notifications to their mobile devices. The patient’s identity was verified and the application was connected to the research database after authorization by a researcher. Furthermore, patients received a pencil-and-paper questionnaire for the evaluation of the application (S1 and S2 Files).

During admission, patients received standard care from the ward nurses and were treated for their postoperative pain according to local standards based on the Guidelines for Treatment of Postoperative Pain of the Dutch Society of Anesthesiologists [18]. Moreover, postoperative pain intensity was regularly assessed verbally, on a scale from 0 (no pain) to 10 (worst imaginable pain), by ward nurses at least once every eight hours and thereafter recorded in the patient’s electronic medical record (EMR). This method of pain assessment is common practice in most Dutch hospitals and is commonly referred to as a ‘numerical rating scale’. The use of the application during the course of the study had no implications for the treatment of postoperative pain. Furthermore, the application did not provide any in-app advice on treatment or analgesic medication to patients or to medical staff.

After surgery, patients were discharged from the post-anesthesia care unit (PACU) under the condition that their pain was ‘bearable’, with an NRS of 4 or lower. Back on the nursing ward, patients could start using the application without restriction to self-record their postoperative pain. In addition, the application notified all patients three times daily, at fixed times (8:00, 14:00 and 22:00), to record their pain. When activated to record pain, the app opened on the home screen ‘My Pain score’ (Fig 1). In this screen patients could record their pain by answering the question ‘Are you in pain: at rest or during movement?’ They could then rate the intensity of their pain on a scale from 0 (no pain) to 10 (worst imaginable pain) by selecting the desired number on an on-screen numeric wheel. The default starting position of the wheel was 5, and the wheel carried text anchors at both ends of the scale, referring to no pain (0) and worst imaginable pain (10). The pain intensity wheel was also equipped with a custom-designed 11-face scale to clarify and indicate the direction of the scale. The faces scale used in the application was designed to fit the 11-point numerical scale and is based on the 6-face scale recommended by the 2015 pain guideline of the Dutch College of General Practitioners [19]. An 11-face pain scale appears appropriate for measuring acute postoperative pain in adult patients, as shown in a study among orthopedic surgical patients [20]. After recording their pain intensity, patients were asked to answer three additional questions (Fig 2A, 2B and 2C): ‘Is the pain bearable?’, ‘Is extra pain relief required?’ and ‘Contact the nurse?’ Once these questions were completed, the application confirmed that the pain score had been recorded and that the in-app pain chart had been updated accordingly (Fig 3).
The first day after surgery, a researcher visited the patients on the ward to collect the questionnaire and held a short debriefing on their participation.
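The recording flow described above can be sketched as a short question sequence. This is a hypothetical reconstruction for illustration only; the function and field names, and the validation logic, are our assumptions and do not reproduce the app's actual implementation.

```python
# Hypothetical sketch of the in-app pain-recording flow described above.
# All names and the flow itself are assumptions for illustration; they do
# not reproduce the application's real code.

NOTIFICATION_TIMES = ("08:00", "14:00", "22:00")  # fixed daily reminders

def record_pain(context, intensity, bearable, extra_analgesia, contact_nurse):
    """Build one pain recording as the app's question sequence collects it."""
    if context not in ("rest", "movement"):
        raise ValueError("pain context must be 'rest' or 'movement'")
    if not 0 <= intensity <= 10:  # wheel runs 0 (no pain) to 10 (worst imaginable)
        raise ValueError("intensity must be on the 0-10 scale")
    return {
        "context": context,
        "intensity": intensity,              # chosen on the numeric wheel (default 5)
        "bearable": bearable,                # 'Is the pain bearable?'
        "extra_analgesia": extra_analgesia,  # 'Is extra pain relief required?'
        "contact_nurse": contact_nurse,      # 'Contact the nurse?'
    }

score = record_pain("movement", 6, bearable=True,
                    extra_analgesia=False, contact_nurse=False)
```

Once such a record is confirmed, the app would update the in-app pain chart from the accumulated records.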

Fig 1. Home screen: My pain score.


Fig 2.


A. Additional pain question: Is the pain bearable? B. Additional pain question: Is extra pain relief required? C. Additional pain question: Contact the nurse?

Fig 3. In-app pain chart.


Outcomes

To compile an overview of patient characteristics and surgical procedures an information specialist extracted the following variables from the electronic medical record (EMR): age, type of surgery and hospital length of stay.

The main outcome was feedback from patients and stakeholders to improve the prototype application. We provided patients with a 12-item questionnaire (S1 and S2 Files). Questions were answered on a 5-point Likert scale. In the questionnaire we used the following categorization for patients to value the pain they experienced overall during admission: “no pain”, “little pain”, “bearable pain”, “considerable pain” and “severe pain” [1, 21, 22]. Stakeholders were questioned individually in a semi-structured interview conducted by a researcher after they had had the opportunity to examine the application (S3 and S4 Files). The patient questionnaire and stakeholder interviews were derived from a previously conducted survey that evaluated quality assessment criteria and usability testing for smartphone applications intended for self-management of pain [23]. Questions were translated into Dutch and assessed by our communication department for comprehensibility and ambiguity.

The secondary outcome was the difference in the number, intensity, and timing of the pain scores recorded by patients with the application compared to the pain scores recorded by nurses in the EMR.

Statistical analysis and reporting

Patient characteristics are described with categorical data presented as numbers and percentages and continuous data as mean with (interquartile) range, depending on their distribution.

We grouped feedback from patients and stakeholders into the following themes, obtained from previous research: design, usability, content, and workflow [24–26]. In line with the methodology of framework analysis, three researchers (EvH, BT and MG) independently rated the comments and recommendations from the patients and stakeholders in order to create a ranking of necessary modifications, ordered by importance and ease of adjustment [27].

Differences between patient-reported and nurse-reported pain scores were tested for significance using the Mann-Whitney U test. All statistical calculations were carried out using SPSS Statistics version 20.0 (SPSS Inc., Chicago, IL). P-values less than 0.05 were considered statistically significant. The study results are reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline and the Standards for Reporting Qualitative Research (SRQR) [28, 29].
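The Mann-Whitney U test used above compares the rank distributions of two independent samples. The following standard-library sketch, run on synthetic scores rather than the study data, illustrates the computation; it uses the normal approximation without a tie correction, so for heavily tied 0-10 pain scores it only approximates what SPSS would report.

```python
import math

# Illustrative Mann-Whitney U test on synthetic scores (not the study data).
# Normal approximation, no tie correction.

def mann_whitney_u(a, b):
    """Return U for sample `a` and a two-tailed p from the normal approximation."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):  # assign average ranks to runs of tied values
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])                 # rank sum of sample a
    u = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma if sigma else 0.0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p

u, p = mann_whitney_u([1, 2, 3], [4, 5, 6])  # fully separated samples: U = 0
```

In practice one would use a library routine with exact tie handling rather than this approximation.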

Results and discussion

Patient recruitment and demographics

The study was conducted between 3/10/2017 and 7/11/2017. We identified 223 patients who met the inclusion criteria (Fig 4).

Fig 4. Patient recruitment flowchart.


A total of fifty patients possessing a smartphone with Android operating system 4.0 or higher were included. Two patients who had signed informed consent were not connected to the study database for technical reasons. These patients were considered lost to follow-up and were replaced.

Two patients did not use the application and did not fill out the questionnaire; both stated that they felt too ill after a complicated laparoscopic gastric sleeve resection, which resulted in severe postoperative pain and severe nausea. One patient used the app but did not answer the questionnaire and could not be reached after hospital discharge. One patient used the application but only partially completed the questionnaire.

Eighteen participating patients were men and 32 were women. Mean age was 49 years (range 21 to 72 years). The severity of the surgery varied from minor (e.g. tonsillectomy) to major (e.g. hip arthroplasty). Mean postoperative hospitalization was 1.5 days (range 0.3 to 8.9) (Table 1). As an indication of the experience patients had gained with the app, the median number of pain app recordings before questionnaire completion was 3 (range 0 to 13).

Table 1. Patient characteristics and surgical procedures.

Total number of patients 50
Age 49 (21–72)
Male sex 18
Female sex 32
Postoperative hospitalization 1.5 (0.3–8.9)
Surgical procedures
    Spinal surgery 16
    Laparoscopic Gastric Bypass 16
    Hip Arthroplasty 5
    Knee Arthroplasty 2
    Mamma Reduction 2
    Tibia Osteosynthesis 2
    Laparoscopic Hernia Repair 2
    Parotidectomy 1
    Tonsillectomy 1
    Abdominoplasty 1
    Septoplasty 1
    Expansion Sphincter Pharyngoplasty 1

Data are expressed as mean with range or as numbers.

Main outcome

Thirty patients (60%) rated communicating the degree of pain with the application as satisfying or very satisfying (Table 2). The overall experienced postoperative pain was valued as no pain by 3 patients (6%) and as little pain by 5 patients (10%); 25 patients (50%) valued their pain as bearable and 13 (26%) as considerable. One patient (2%) experienced severe pain. When asked whether they could easily and correctly record their pain with the application, 45 patients (90%) agreed or totally agreed. Regarding the three times daily notifications to score pain, 38 patients (76%) agreed or totally agreed that these were useful. Regarding the overall appearance of the app, 40 patients (80%) found it attractive or very attractive. When asked whether it would be beneficial to contact a nurse with the application, 9 patients (18%) reported that it would not be beneficial. The in-app pain intensity chart was valued as useful or very useful by 40 patients (80%).

Table 2. Results patient questionnaires.

Did you like communicating the degree of pain with this app? 6 (12%) very satisfying 24 (48%) satisfying 17 (34%) ok 0 (0%) not satisfying 0 (0%) very unsatisfying
How much pain did you experience after surgery? 3 (6%) No 5 (10%) little 25 (50%) bearable 13 (26%) considerable 1 (2%) severe pain
The application is easy to use. 28 (56%) totally agree 17 (34%) agree 1 (2%) no opinion 0 (0%) disagree 0 (0%) totally disagree
With the application I can correctly record my pain. 21 (42%) totally agree 24 (48%) agree 1 (2%) no opinion 0 (0%) disagree 0 (0%) totally disagree
It is useful that the application reminds me to record my pain with notifications. 19 (38%) totally agree 19 (38%) agree 6 (12%) no opinion 1 (2%) disagree 1 (2%) totally disagree
How do you rate the appearance of the application? 7 (14%) very attractive 33 (66%) attractive 6 (12%) ok 0 (0%) unattractive 0 (0%) very unattractive
How do you rate the used colors? 12 (24%) very attractive 25 (50%) attractive 9 (18%) ok 0 (0%) unattractive 0 (0%) very unattractive
How do you rate the used fonts? 15 (30%) very attractive 27 (54%) attractive 4 (8%) ok 0 (0%) unattractive 0 (0%) very unattractive
How do you rate the layout of the application? 17 (36%) very attractive 23 (46%) attractive 5 (10%) ok 0 (0%) unattractive 0 (0%) very unattractive
How do you rate recording your pain with the application? 25 (50%) very useful 16 (32%) useful 4 (8%) ok 0 (0%) not useful 1 (2%) not useful at all
Would you like to be able to call a nurse with the application? 16 (32%) very useful 10 (20%) useful 9 (18%) ok 1 (2%) not useful 9 (18%) not useful at all
How do you rate the in-application pain chart? 17 (34%) very useful 23 (46%) useful 5 (10%) ok 1 (2%) not useful 0 (0%) not useful at all

Data are expressed in numbers and percentages; the difference in total numbers per question is explained by 3 patients who did not complete, or only partially completed, the questionnaire

All professionals agreed that the design of the application looked appealing and had neutral colors (Table 3). They shared the opinion that the in-app pain intensity chart was unclear: for example, there is no distinction between pain scores entered at rest and during movement, and no information on the time point at which the pain was recorded.

Table 3. Stakeholder comments and recommendations for improvement.

Design Recommendation Rating
Modern look No recommendation
Various font styles Standardize font style +
Composed and clear No recommendation
Screens not properly aligned Standardize alignment ++
Screen grouping and proportion look odd Standardize grouping and proportion +
Small navigation buttons at the bottom of the screens Adjust buttons to bigger size +++
Buttons ‘rest’ and ‘movement’ are not the same size Adjust buttons to equal size +++
Pain intensity wheel must be more prominent and bigger Pain wheel in separate screen +++
Scale of the pain intensity wheel is not clear at a glance Add information about the scale of the wheel ++
Too little distinction between the faces of the 11-face scale Use the frequently used 6-face scale ++
Use of color and fonts is appropriate No recommendation
Usability/Workflow Recommendation Rating
Easy No recommendation
User-friendly No recommendation
Simple and quick No recommendation
It is unclear that the pain intensity must be confirmed with separate button Add ‘how to use’ information ++
Progress authentication process is not clear Add a waiting table ++
Not clear how many screens the app contains Add screen bullets +++
Going backward in the app is difficult Add ‘how to use’ information +++
It is possible to accidently skip screens and questions Add ‘how to use’ information ++
Content Recommendation Rating
Pain intensity wheel is not clear Highlight the chosen pain intensity ++
Clear and relevant questions No recommendation
Pain assessment questions: ‘rest’ and ‘movement’ are separate questions Reformulate: are you bothered by pain during: rest, coughing, movement, not at all? +++
Pain assessment question: term ‘bearable pain’ Reformulate into acceptable pain ++
Pain assessment question: Do you need extra analgesia? Reformulate: Should something be done about your pain? +++
Pain assessment question: Contact nurse? Reformulate: Should the nurse visit you? +++
Patients have to go through all questions even when not in pain Start assessment with: Are you in pain? If not, no further questions are asked. +++
If the patient indicates that the pain is not bearable Automatically notify a nurse ++
Pain chart: no differentiation between pain at rest and during movement Add separate lines in the chart for different recordings +++
Pain chart: not informative enough Add date and time of the pain recordings and medication +++
Pain chart: axis is too small Add a day chart and a week chart +++
In-app feedback: not personalized e.g. add patient name +++
In-app feedback: not informative Tailor feedback to pain recording, medication, add normal references of other patients +++
In- app feedback: not clear what actions are to be expected Add information: what actions can a patient expect, e.g. a nurse is notified. +++
In-app feedback: disappears too quickly Adjust feedback time +++
Additional comments Stakeholder
“Is the nurse or acute pain service informed when the patient records severe pain in his EMR? This might be a good idea.” Anesthetist
“Aren’t you afraid that patients will manipulate or exaggerate their pain intensity scoring for quicker or better care?” Pain nurse / Patient council member
“It is important for nurses to recognize unbearable pain, but what to do when no extra analgesia is needed?” Pain nurse
“The usefulness of the Pain Chart for patients is questionable, too much focus on pain during admission.” Anesthetist
“Add mean scores of other patients to the pain chart.” Patient council member
“Adding mean scores from other patients could provide information for the patient about the course of pain ‘am I doing well or not’” Patient council member
“Adding mean scores from other patients to the pain chart is not appropriate and could cause a negative effect on the course of pain if the patients score is under the mean.” Student software engineering / Anesthetist
“It is questionable what the added value is of an in-app pain chart for patients, they might become too focused on their pain. Adding mean scores from other patients is a bad idea because: focus on their pain, pain is a unique and individual experience, placebo but also nocebo effects.” Anesthetist
“The app must be available in more languages.” Patient council member / Anesthetist

The items mentioned in the table were independently rated by three researchers to determine the importance and ease of customization: + indicates low importance and/or difficult to customize; ++ indicates moderate importance and/or not very difficult to customize; +++ highly important and/or easy to customize.

Secondary outcome

Patients recorded their pain score with the app 307 times, while nurses recorded an NRS for pain 396 times. The median patient-recorded pain app score was 4.0 (range 0 to 10) and the median nurse-recorded NRS for pain was 4.0 (range 0 to 9) (p = 0.06). The differences between patients and nurses in the proportions of pain recordings in the ranges ‘0 to 4’, ‘5 to 7’ and ‘8 to 10’ were respectively 11% (p = 0.0024), 9% (p = 0.0096) and 2% (p = 0.27) (Table 4). One hundred and ninety-seven patient pain app scores (64%) were rated as pain at rest. Patients asked 92 times (29%) for extra analgesia and 63 times (20%) for a nurse. One hundred and thirty-one patient pain app scores (42%) were ≥ 5; of these, 109 were still rated as bearable, and extra analgesia was requested only 55 times (Table 5).

Table 4. Differences in pain intensity recordings between patients and nurses.

Pain intensity Patient recordings (307) Nurse recordings (396) Δ, Δ % (95%CI) p (two-tailed)
0–4 176 (57%) 271 (68%) 95, 11% (3.8 to 18) 0.0024
5–7 105 (34%) 100 (25%) 5, 9% (2 to 16) 0.0096
8–10 26 (8%) 25 (6%) 1, 2% (-2 to 7) 0.27

Data are expressed in numbers and percentages. P-values less than 0.05 are considered statistically significant.
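The percentage differences and confidence intervals in Table 4 can be reproduced from the raw counts. The sketch below recomputes the 0-4 row with a Wald (normal-approximation) interval; the Wald method is our assumption, since the paper does not state how the intervals were computed.

```python
import math

# Recompute the 0-4 row of Table 4: proportion of nurse recordings (271/396)
# minus proportion of patient app recordings (176/307) in the 0-4 band,
# with a Wald 95% CI. The Wald method is an assumption; the paper does not
# state which CI method was used.

def proportion_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference p2 - p1 between two proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = proportion_diff_ci(176, 307, 271, 396)
# diff ≈ 0.111 (11%), CI roughly (0.039, 0.183) — close to the table's 11% (3.8 to 18)
```

The small discrepancy at the lower bound (3.9% here versus 3.8% in the table) would be explained by a different CI method, e.g. a continuity-corrected or score interval.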

Table 5. Results in-app pain assessment questions.

Patient pain intensity Bearable (yes) Extra analgesia (yes) Need for nurse (yes) Pain at Rest
0–4 (176) 165 (93%) 37 (21%) 23 (13%) 121 (69%)
5–7 (105) 95 (90%) 38 (36%) 30 (29%) 63 (60%)
8–10 (26) 14 (54%) 17 (65%) 10 (39%) 13 (50%)

Data are expressed in numbers and percentages.

Discussion

We have developed and tested a smartphone application for patients to self-record their postoperative pain as a first step of implementing patient self-recorded postoperative pain. Our main objective was to collect patient and stakeholder critique on the usability of the application in order to develop a definitive version of the application. Our secondary objective was to compare the self-recorded pain scores of the patients with the NRS pain scores recorded by nurses.

Both patients and stakeholders agreed that the application was easy to use and that its simplicity and design fitted the purpose of pain recording. Moreover, patients were willing and motivated to record their pain with the application. The difference in median pain intensity scores between those recorded by patients with the app and those recorded by nurses was not statistically significant; therefore, self-recording of postoperative pain with a smartphone application seems comparable to nurse-led pain assessment.

Our findings are in accordance with the results of several studies that have shown that pain applications are well used and appreciated by patients [24–26]. However, in the present study two patients stated that they were too ill to use the application because of severe postoperative pain and severe nausea. This is an important finding, because the postoperative pain outcome might be biased if only patients with minor to moderate pain who are not very ill are willing to use the application. It also emphasizes that even with upcoming eHealth developments the ward nurse still has a very important role in patient care. This is confirmed by the statement of some patients in this study, who answered negatively when asked whether it would be beneficial to call a nurse with the app. One patient stated: ‘When I need a nurse immediately I’ll use the button next to my bed’.

The results of our study comparing overall pain intensity scores between patients and nurses suggest that self-recording is a reliable method for pain assessment. However, there were some differences: the percentage of pain scores in the range ‘0 to 4’ recorded by patients with the app was lower than in the nurses’ NRS recordings (p = 0.0024), and the percentage of pain scores in the range ‘5 to 7’ recorded by patients with the app was higher than in the nurses’ NRS recordings (p = 0.0096), although these differences are probably not clinically relevant.

Although our findings are promising, patients and stakeholders suggested several important improvements to the application. One, redesign the pain intensity wheel by giving it a color or an arrow indicating the severity of pain. Both patients and stakeholders reported difficulties in selecting a pain score, and it was not always clear whether they should rate their pain at rest or during movement. More importantly, the pain intensity wheel has not yet been validated against the VAS or NRS, which are currently considered the gold standards for pain assessment. Two, present notifications to record pain as a red dot on the corner of the app icon. One patient recorded his pain only once, despite being in hospital for 5 days; the stated reason was that he had forgotten to record it. He said: ‘I only watch the red notifications on the corners of the app icon. The notifications appeared as text messages and not as a red dot’. One of the professionals suggested adding a notification an hour after a high pain score valued as unbearable, or after the administration of medication. Three, add more information to the in-app pain chart, such as pain at rest and during movement and a distinct time point for each pain score. The day-to-day pain intensity chart was added to provide patients an overview of their recorded pain intensity scores. Most of the comments on the chart came from the stakeholder interviews; the stakeholders suggested that the chart might put too much focus on a patient’s pain, which could result in aggravating behavior when the pain does not decrease over time. Yet the stakeholders also suggested that it would be useful to display analgesic medication in the pain chart, to show patients the relation between medication (intervention) and their pain score. Four, add more, and clearer, personalized feedback messages in the app. Last, reformulate some of the in-app pain questions.
For example, instead of ‘Are you in pain at rest or during movement?’, it might be better to start the pain assessment with the question ‘Are you in pain?’, followed by ‘How much pain do you have?’, and then to ask patients whether their pain is bearable and whether they are bothered by their pain at rest, during breathing, during coughing, during movement, or not at all.

This study is not without limitations. First, the study was performed at the two locations of OLVG Hospital in Amsterdam, which may limit the generalizability of the results. Second, only patients between 21 and 72 years old were included. However, a study among 47 children and adolescents aged 9 to 18 years with cancer pain showed a high compliance rate and high satisfaction ratings during a clinical feasibility test of a pain assessment smartphone application [26]. In addition, national surveys on public adoption of mobile phones indicate that even a majority of the elderly possess a smartphone [30, 31]. That, unfortunately, does not prove that these elderly patients are able to use one during their hospital stay. In addition, patients who are too ill or in severe acute pain will be reluctant to use the application, as we saw in our own data. Postoperative pain assessment performed by a nurse at regular intervals will therefore remain a prerequisite. Third, we also acknowledge that the present study only included patients with a relatively short hospital stay.

The validity of the application in non-surgical patient populations still needs to be determined. This study is a first step in the process of developing an evidence-based smartphone application for pain recording. Real-time patient data, recorded with a smartphone, seems a promising method for better understanding the course of pain and pain management [32]. Currently, an increasing number of medical smartphone applications are brought to market without proper testing and scientific evaluation [23, 33]. Only a few medical smartphone applications are designed with a value sensitive approach. As shown in other medical fields in which new technology is being developed, value sensitive design seems to provide a workable basis for eliciting patient and healthcare professional values and integrating them into the final design [34]. Furthermore, it is still unclear what changes to expect in postoperative pain outcomes, and how the daily routine of nurses and doctors will change, once patient self-recording of pain is implemented on a larger scale.

The process of designing and testing the application is still ongoing. Version 2.0 of the application is currently in development, incorporating the patients’ and stakeholders’ comments obtained in this study. Much work remains to be done to thoroughly adjust and examine the psychometric properties of the app’s 11-point numeric and faces pain scale and to validate it against the VAS and NRS. Furthermore, we consider it important to use the application to ‘close the loop’ in pain assessment and treatment. In the new version of the application we added the question: ‘Should something be done about your pain?’ If the patient answers ‘yes’, a comment for the nurses to discuss the pain appears in the patient’s electronic medical record. Truly closing the loop between patients and nurses, so that the patient is able to contact or alert the nurse with the application, requires a technical IT solution that has not yet been realized.
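The ‘close the loop’ step planned for version 2.0 is essentially a conditional hand-off from app to medical record. A minimal sketch follows; the function name, the note text, and the `post_to_emr` callback are our own assumptions, since the hospital's EMR interface is not publicly specified:

```python
def close_the_loop(answer, post_to_emr):
    """If the patient wants action taken on their pain, flag it for the nurses.

    `answer` is the patient's reply to 'Should something be done about your
    pain?'. `post_to_emr` is a hypothetical stand-in for the electronic
    medical record interface. Returns True if a note was posted.
    """
    if answer == "yes":
        note = ("Patient indicated via the pain app that something should be "
                "done about their pain; please discuss.")
        post_to_emr(note)
        return True
    return False


# Example with a stub EMR that collects posted notes in a list
emr_notes = []
flagged = close_the_loop("yes", emr_notes.append)
```

As the text notes, this one-way comment does not yet let the patient contact or alert a nurse directly; that would require the IT solution still to be realized.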

In addition, we established a consortium of Dutch hospitals to study patient-reported postoperative outcomes with a smartphone application for postoperative pain management and to compare anesthesia practices across hospitals nationwide.

Conclusions

The results of our study suggest that self-recording of postoperative pain with a smartphone application is a user-friendly method for hospitalized patients, with high satisfaction rates among the majority of patients and stakeholders, and that it could provide an outcome comparable to nurse-led pain assessment.

Supporting information

S1 File. Patient questionnaire, original.

(DOCX)

S2 File. Patient questionnaire, English translation.

(DOCX)

S3 File. Stakeholder interview, original.

(DOCX)

S4 File. Stakeholder interview, English translation.

(DOCX)

S1 Data

(XLSX)

S2 Data

(XLSX)

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

SIDN fund, public benefit organization for Dutch internet domain registration (Url: www.SIDNfonds.nl), partially funded the development of the smartphone application. The funders had no role in the study design, data collection and analysis, decision to publish or preparation of the manuscript.

References

  • 1.Breivik H, Borchgrevink PC, Allen SM, Rosseland LA, Romundstad L, Hals EK, et al. Assessment of pain. British journal of anaesthesia. 2008;101(1):17–24. 10.1093/bja/aen103 [DOI] [PubMed] [Google Scholar]
  • 2.Meissner W, Ullrich K, Zwacka S. Benchmarking as a tool of continuous quality improvement in postoperative pain management. European journal of anaesthesiology. 2006;23(2):142–8. 10.1017/S026502150500205X [DOI] [PubMed] [Google Scholar]
  • 3.van Dijk JF, van Wijck AJ, Kappen TH, Peelen LM, Kalkman CJ, Schuurmans MJ. Postoperative pain assessment based on numeric ratings is not the same for patients and professionals: a cross-sectional study. International journal of nursing studies. 2012;49(1):65–71. 10.1016/j.ijnurstu.2011.07.009 [DOI] [PubMed] [Google Scholar]
  • 4.van Dijk JF, Vervoort SC, van Wijck AJ, Kalkman CJ, Schuurmans MJ. Postoperative patients' perspectives on rating pain: A qualitative study. International journal of nursing studies. 2016;53:260–9. 10.1016/j.ijnurstu.2015.08.007 [DOI] [PubMed] [Google Scholar]
  • 5.van Boekel RL, Steegers MA, de Blok C, Schilp J. [Pain registration: for the benefit of the inspectorate or the patient?]. Nederlands tijdschrift voor geneeskunde. 2014;158:A7723 [PubMed] [Google Scholar]
  • 6.Shipton EA, Shipton EE, Shipton AJ. A Review of the Opioid Epidemic: What Do We Do About It? Pain and therapy. 2018;7(1):23–36. 10.1007/s40122-018-0096-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Carr DB, Goudas LC. Acute pain. Lancet (London, England). 1999;353(9169):2051–8. [DOI] [PubMed] [Google Scholar]
  • 8.Manias E, Botti M, Bucknall T. Observation of pain assessment and management, the complexities of clinical practice. Journal of clinical nursing. 2002;11(6):724–33. 10.1046/j.1365-2702.2002.00691.x [DOI] [PubMed] [Google Scholar]
  • 9.Kim EB, Han HS, Chung JH, Park BR, Lim SN, Yim KH, et al. The effectiveness of a self-reporting bedside pain assessment tool for oncology inpatients. Journal of palliative medicine. 2012;15(11):1222–33. 10.1089/jpm.2012.0183 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Pellino TA, Ward SE. Perceived control mediates the relationship between pain severity and patient satisfaction. Journal of pain and symptom management. 1998;15(2):110–6. [PubMed] [Google Scholar]
  • 11.World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. Jama. 2013;310(20):2191–4. 10.1001/jama.2013.281053 [DOI] [PubMed] [Google Scholar]
  • 12.Nederlands Trial Register [Available from: https://www.trialregister.nl].
  • 13.Ekker A. Medische Apps, is certificeren nodig?: 2013. [Google Scholar]
  • 14.Krejcie RV, Morgan DW. Determining sample size for research activities. Educational and Psychological Measurement. 1970;30(3):607–10. [Google Scholar]
  • 15.Hennink MM, Kaiser BN, Marconi VC. Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough? Qualitative health research. 2017;27(4):591–608. 10.1177/1049732316665344 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Guest G, Bunce A, Johnson L. How Many Interviews Are Enough? Field Methods. 2016;18(1):59–82. [Google Scholar]
  • 17.Malterud K, Siersma VD, Guassora AD. Sample Size in Qualitative Interview Studies: Guided by Information Power. Qualitative health research. 2016;26(13):1753–60. 10.1177/1049732315617444 [DOI] [PubMed] [Google Scholar]
  • 18.Richtlijn Postoperatieve Pijn 2012 [Available from: https://www.anesthesiologie.nl].
  • 19.De NHG-Standaard PIJN 2015 [Available from: https://www.nhg.org/standaarden/volledig/nhg-standaard-pijn].
  • 20.Van Giang N, Chiu HY, Thai DH, Kuo SY, Tsai PS. Validity, Sensitivity, and Responsiveness of the 11-Face Faces Pain Scale to Postoperative Pain in Adult Orthopedic Surgery Patients. Pain Manag Nurs. 2015;16(5):678–84. 10.1016/j.pmn.2015.02.002 [DOI] [PubMed] [Google Scholar]
  • 21.Bech RD, Lauritsen J, Ovesen O, Overgaard S. The Verbal Rating Scale Is Reliable for Assessment of Postoperative Pain in Hip Fracture Patients. Pain research and treatment. 2015;2015:676212 10.1155/2015/676212 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Hjermstad MJ, Fayers PM, Haugen DF, Caraceni A, Hanks GW, Loge JH, et al. Studies comparing Numerical Rating Scales, Verbal Rating Scales, and Visual Analogue Scales for assessment of pain intensity in adults: a systematic literature review. Journal of pain and symptom management. 2011;41(6):1073–93. 10.1016/j.jpainsymman.2010.08.016 [DOI] [PubMed] [Google Scholar]
  • 23.Reynoldson C, Stones C, Allsop M, Gardner P, Bennett MI, Closs SJ, et al. Assessing the quality and usability of smartphone apps for pain self-management. Pain medicine (Malden, Mass). 2014;15(6):898–909. [DOI] [PubMed] [Google Scholar]
  • 24.Santo K, Richtering SS, Chalmers J, Thiagalingam A, Chow CK, Redfern J. Mobile Phone Apps to Improve Medication Adherence: A Systematic Stepwise Process to Identify High-Quality Apps. JMIR mHealth and uHealth. 2016;4(4):e132 10.2196/mhealth.6742 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Lalloo C, Shah U, Birnie KA, Davies-Chalmers C, Rivera J, Stinson J, et al. Commercially Available Smartphone Apps to Support Postoperative Pain Self-Management: Scoping Review. JMIR mHealth and uHealth. 2017;5(10):e162 10.2196/mhealth.8230 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Stinson JN, Jibb LA, Nguyen C, Nathan PC, Maloney AM, Dupuis LL, et al. Development and testing of a multidimensional iPhone pain assessment application for adolescents with cancer. J Med Internet Res. 2013;15(3):e51 10.2196/jmir.2350 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC medical research methodology. 2013;13:117 10.1186/1471-2288-13-117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495–9. 10.1016/j.ijsu.2014.07.013 [DOI] [PubMed] [Google Scholar]
  • 29.O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51. 10.1097/ACM.0000000000000388 [DOI] [PubMed] [Google Scholar]
  • 30.Deloitte Global Mobile Consumer Survey 2018—the Netherlands [Available from: https://www2.deloitte.com/nl/nl/pages/technologie-media-telecom/articles/global-mobile-consumer-survey.html].
  • 31.Centraal Bureau voor Statistiek: Centraal Bureau voor Statistiek; [Available from: https://www.cbs.nl/en-gb].
  • 32.Stone AA, Broderick JE. Real-time data collection for pain: appraisal and current status. Pain medicine (Malden, Mass). 2007;8 Suppl 3:S85–93. [DOI] [PubMed] [Google Scholar]
  • 33.Portelli P, Eldred C. A quality review of smartphone applications for the management of pain. British journal of pain. 2016;10(3):135–40. 10.1177/2049463716638700 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.van Andel J, Leijten F, van Delden H, van Thiel G. What makes a good home-based nocturnal seizure detector? A value sensitive design. PloS one. 2015;10(4):e0121446 10.1371/journal.pone.0121446 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Erik Loeffen

22 Aug 2019

PONE-D-19-17106

Patient reported postoperative pain with a smartphone application: a proof of concept

PLOS ONE

Dear Thiel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Oct 06 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Erik Loeffen

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include copies, in both the original language and English, as Supporting Information.

3. Please note that PLOS ONE has specific guidelines on software sharing (https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-software) for manuscripts whose main purpose is the description of a new software or software package. In this case, new software must conform to the Open Source Definition (https://opensource.org/docs/osd) and be deposited in an open software archive. Please see https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-depositing-software for more information on depositing your software.

4. Thank you for stating the following in the Competing Interests section:

"I have read the journal's policy and the authors of this manuscript have the following competing interests: Rover A.A.M. van Mierlo is employee at Logicapps Inc., the company that built the application commissioned by OLVG. OLVG and Logicapps have no commercial interest in the application. The application is available free of charge for patients."

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

5. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

Additional Editor Comments (if provided):

Thank you for this revised version, incorporating the remarks I have made about conflict of interest (CoI). Although I appreciate you taking it seriously, in a next instance I would advise authors to just be transparent about the CoIs, not necessarily removing the author from the list (as all authors should have contributed in a way that justifies them being an author, removing someone who has done that amount of work is a bit harsh, and, if anything, might make the appearance around CoI more sketchy). It's not necessary to change this again now, but food for thought for a next time. For now, Van Mierlo is still listed as author in the manuscript itself, which should be removed in accordance with the article's metadata.

Regarding content, the manuscript does need quite some work to make it methodologically sound. Please see the reviewers' comments for guidance. I invite authors to send a revised version (with answers to reviewers). Good luck, and thank you.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a study of a smart phone application for inpatient acute pain management. The authors show reasonable concordance between the phone app and nurses' assessments.

In the future, including the question 'Do you need additional pain medication?' could close the loop in assessment and treatment.

Major point:

As the authors note, …record their pain by answering the following question ‘are you in pain: at rest or during movement?’ Furthermore, they could define the severity of pain on a scale from 0 (no pain) to 10 (worst imaginable pain)….

There is typically a large difference between pain at rest and pain with activities; depending upon the surgery, this difference can be 4 or more points on the VAS or NRS. "…It was not always clear whether they should rate their pain at rest or during movement…" The same may apply to the nurses' assessments.

This is a significant limitation of the study.

Other points:

Introduction: “the patient’s valuation possibly leads to analgesic over-administration and may even contribute to opioid addiction which is a major problem in health care.” We have not yet established short term inpatient opioid use causing opioid addiction. There are some associations but this is not yet established.

Reviewer #2: This manuscript describes a mixed methods study of the use and usability of a pain measurement application for cellular phones with Android operating systems. The manuscript is unclear on numerous occasions throughout and the methods have several serious flaws. The introduction needs to clearly identify the unique feature/s of the app or the study’s methods to support the importance of the work. The introduction needs to clearly identify the weaknesses of common post-operative pain measures and measurement methods that are likely to be specifically addressed by the app. What reasons for not participating were given by the 31 patients who declined to enroll? The methods describe that patients were not permitted to use their cell phones until their self-reported pain ratings were 4 or lower on a 0-10 pain scale. This methodology restricted the range of patients’ ratings using the app. The results need to provide descriptive statistics – including minimum and maximum - for the number of ratings provided per patient before questionnaire completion. Some patients did not stay for an entire day in the hospital so it is possible they had very little experience using the app before completing the survey about the app. Patients’ and nurses’ pain ratings were provided asynchronously so it does not make sense to test their agreement. In addition, why did the researchers choose to use nurses’ documented pain ratings as comparators to patients’ app ratings after they described how nurses alter patients’ self-reported pain ratings in the introduction? Regardless, nurse and patient agreement would be best compared by Bland Altman analyses with two groups of patients – one with the app and one with investigator recorded verbal pain ratings. The app uses an 11-level faces pain scale. The manuscript provides no reference for the reliability and validity of this faces pain scale. 
The methods provide little detail on the procedure for interviewing the healthcare professionals. For example, were they interviewed individually or in groups? It appears they were asked to examine the app at the start of the interview. Was this examination meant to be comparable to patients’ experiences actually using the app? Rationales for the results focus on the times of pain ratings and division of results by patient cohorts were not provided.

Minor Concerns

• Manuscript needs to clarify the terms “pain intensity slider” and “pain chart” early

• Abstract needs to clarify how patients’ and nurses’ pain ratings were “comparable”

• The terms “benchmark” and “benchmarking” appear throughout the manuscript without clear meaning

• The meanings of “back-office system of Logicapps,” “ease of customization,” “CE certificate,” and “validation questions” need to be provided

• Correct “a CE certificate for was not necessary…”

• The purpose of the references provided after the sentences describing sample size are unclear

• Throughout the manuscript, pain ratings are categorized (e.g., “considerable”) without any reference supporting the categorizations

• Are “bearable” and “acceptable” meant to be the same category of pain ratings?

• Are the “stakeholders” and “experts” also the “healthcare professionals”? Manuscript states “patients and stakeholders” and “patients and experts.” Participants, who were patient advisors, should not be described as “healthcare professionals” unless they were also licensed healthcare professionals.

• Correct “from the patients ands experts,” “chart does not,” “patients (green) record”

• The figures show the app asked if patients wanted additional medication, but they did not show the app asking if the patients wanted nursing assistance, as was stated in the manuscript

• Need to adhere to a guideline for reporting qualitative methods such as O’Brien et al. (2014) in addition to the STROBE guideline

• Manuscript refers to the reliability of the app, but the reliability of the app was not examined in the manuscript

• Correct “Although the healthcare professionals suggested displaying analgesic medication, in the in-app pain intensity chart as well”

• All the figures need to be in English

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 May 8;15(5):e0232082. doi: 10.1371/journal.pone.0232082.r002

Author response to Decision Letter 0


6 Oct 2019

Rebuttal letter: PONE-D-17106

Title: Patient reported postoperative pain with a smartphone application: a proof of concept

Date: October 6th 2019

Dear Editorial Board, Dear Erik Loeffen,

On behalf of all authors who contributed to this manuscript I thank you for the opportunity to submit a revised version. We thank the editor and both reviewers for their valuable remarks and comments. In this rebuttal letter we will address all comments separately. We would also like to mention that the tables and reference list are only correct in the clean version copy, as we made changes to it which could not adequately be processed in the track changes version.

Editorial comments:

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io.

A summary of the study protocol is published in the Dutch Trial Register (www.trialregister.nl). The Nederlands Trial Register (NTR) is a publicly accessible and freely searchable register of current prospective studies that are run in the Netherlands or carried out by Dutch researchers. The study registration number is NL6565. We added the following sentence to the manuscript: "The study is registered with a summary of the study protocol in the Dutch Trial Register (Nederlands Trial Register) with number NL6565", accompanied by the NTR website URL in the reference list. We think this provides enough information for researchers who are interested in the protocol and the study method.

When submitting your revision, we need you to address these additional requirements (point 1).

We corrected and renamed the files for figures, tables and supplemental materials.

Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include copies, in both the original language and English, as Supporting Information (point 2).

The survey is not under a copyright. We derived questions and answer options from the study of Reynoldson et al 2014 and translated them into Dutch 1. This has already been mentioned in the original manuscript (page 8). The survey is uploaded as a Supporting Information file in Dutch (original language) and English.

Please note that PLOS ONE has specific guidelines on software sharing for manuscripts whose main purpose is the description of a new software or software package. In this case, new software must conform to the Open Source Definition and be deposited in an open software archive (point 3).

We understand the importance of sharing software. The objective of our study was to collect patient and stakeholder recommendations to improve the prototype pain application and to evaluate whether patient self-reporting with an application is a workable method for postoperative pain management. We have no intention to study or specify the technical background of the application. The application is tailored to OLVG hospital. The source code of the application cannot be made public because of security concerns and the risk of abuse by third parties.

Thank you for stating the following in the Competing Interests section: "I have read the journal's policy and the authors of this manuscript have the following competing interests: Rover A.A.M. van Mierlo is employee at Logicapps Inc. the company that built the application commissioned by OLVG. OLVG and Logicapps have no commercial interest in the application. The application is available free of charge for patients. "

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf (point 4).

We confirm the following statement, 'This does not alter our adherence to PLOS ONE policies on sharing data and materials', and have added it to the new cover letter. We would be very grateful if you would change the online submission form on our behalf, as you suggested. Furthermore, it was never our intention not to be transparent about the CoI. Removing the author seemed the best option to make the submission possible and was done at the insistence of Van Mierlo himself. We have kept Van Mierlo listed as an author while clearly stating his competing interests.

In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available (point 5).

We uploaded the study’s minimal underlying data set as a Supporting Information file.

Reviewer comments:

Reviewer #1: In the future including the question, do you need additional pain medication? Could close the loop in assessment and treatments.

We agree with reviewer #1 that it is important to close the loop between pain assessment and treatment. In the version of the application that we are currently developing, we have added a similar question: ‘Should something be done about your pain?’ The formulation of this question is based on semantic arguments: using the term ‘additional pain medication’ could give patients the expectation that pain medication is always an option and always available. In reality, there are also other interventions, such as psychological support, posture and exercise advice, and applying heat and comfort. Closing the loop between patients and nurses, so that the patient can contact or alert the nurse through the application, requires a technical solution that is not yet provided.

Reviewer #1: As the authors note…record their pain by answering the following question ‘are you in pain; at rest or during movement? Furthermore, they could define the severity of pain on a scale from 0 (no pain) to 10 (worst imaginable pain)…There is typically a large difference between pain at rest and pain with activities. Depending upon the surgery, a VAS or NRS of 4 or more points…´It was not always clear whether they should rate their pain at rest or during movement…´The same may apply to the nurses’ assessments.

To reviewer #1 this appears to be a limitation of the study, but we see it as a result of the study, which we may not have clarified sufficiently. This study was performed to collect patient and stakeholder comments to improve the application. One of the results is that patients were confused about whether they should rate their pain at rest or during movement. One of the improvements in the new 2.0 version of the app is that it starts with the question ‘Are you in pain?’ This is mentioned on page 19 of the manuscript.

Reviewer #1: ‘’the patient’s valuation possibly leads to analgesic over-administration and may even contribute to opioid addiction which is a major problem in health care.’’ We have not yet established short term inpatient opioid use causing addiction. There are some associations but this is not yet established.

We agree with reviewer #1 on this point and have revised this section of the manuscript.

Reviewer#2: The introduction needs to clearly identify the weaknesses of common post-operative pain measures and measurement methods that are likely to be specifically addressed by the app.

We have revised the introduction on several points.

Reviewer#2: What reasons for not participating were given by the 31 patients who declined to enrol?

Under Dutch legislation (the Medical Research Involving Human Subjects Act), patients can refuse to participate in medical research without giving reasons. Furthermore, if a patient refuses to participate, it is not permitted to document and publish any stated reasons. We very much see the relevance of elucidating any potential (selection) bias for this manuscript, but we regret that we cannot, and may not, share these data.

Reviewer#2: The methods describe that patients were not permitted to use their cell phones until their self-reported ratings were 4 or lower on a 0-10 pain scale.

This is not correct, as already stated in the manuscript: ‘After surgery patients were discharged from the post-anesthesia care unit (PACU) under the condition that their pain was ‘acceptable’, with a numerical rating scale (NRS) lower than or equal to 4. Back on the nursing ward, patients could start using the application without restriction to self-report on their postoperative pain.’

Reviewer#2: The results need to provide descriptive statistics-including minimum and maximum – for the number of ratings provided per patient before questionnaire completion. Some patients did not stay for an entire day in the hospital so it is possible they had very little experience using the app before completing the survey about the app.

We agree with reviewer#2 that this would provide better insight into how long each patient actually used the application. Unfortunately, we did not ask patients to record the date and time on the questionnaire. The patient questionnaires were collected by one of the researchers on the day of discharge.

Reviewer#2: Patients’ and nurses’ pain ratings were provided asynchronously so it does not make sense to test their agreement. In addition, why did the researchers choose to use nurses’ documented pain ratings as comparators to patients’ app ratings after they described how nurses alter patients’ self-reported pain ratings in the introduction?

The statistical analysis comparing the differences in patient and nurse pain recordings was done with non-parametric testing, using the Mann-Whitney U test, as stated in the manuscript. As reviewer#2 already concluded, the pain ratings were provided asynchronously, so we did not test for agreement. The argument for comparing against nurse pain recordings is that there is no alternative; nurse pain recordings are still the gold standard for postoperative pain assessment.
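For readers less familiar with the test, the comparison can be sketched as follows. The scores below are hypothetical (not the study data), and the implementation uses a normal approximation without tie correction, so it is illustrative only:

```python
# Illustrative sketch of a Mann-Whitney U test on two independent samples
# of 0-10 pain scores (hypothetical data, not the study's recordings).
from statistics import NormalDist

def mann_whitney_u(a, b):
    """Return U for sample `a` and a two-sided p-value.

    Normal approximation without tie correction -- adequate for a sketch,
    not for publication-grade analysis.
    """
    n1, n2 = len(a), len(b)
    # Rank all observations jointly, averaging ranks within tie groups.
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2  # ranks are 1-based
        for k in range(i, j):
            ranks[combined[k][1]] = avg_rank
        i = j
    r1 = sum(ranks[:n1])               # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2        # U statistic for sample `a`
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u1 - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u1, p

patients = [4, 5, 3, 6, 4, 7, 2, 5, 4, 6]  # hypothetical app scores
nurses = [3, 4, 4, 5, 3, 6, 2, 4, 3, 5]    # hypothetical nurse NRS scores
u, p = mann_whitney_u(patients, nurses)
print(f"U = {u}, p = {p:.3f}")
```

In practice one would use a library routine such as `scipy.stats.mannwhitneyu` (with its exact-test and tie-correction options) rather than a hand-rolled version.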

Reviewer#2: regardless, nurse and patient agreement would be best compared by Bland Altman analysis with two groups of patients – one with the app and one with investigator recorded verbal pain ratings.

We are not sure we understand this comment. Bland-Altman analysis quantifies the agreement between two quantitative methods by studying the mean difference and constructing limits of agreement, under the condition that both methods are designed to measure the same parameter and that measurements are taken at the same time point [2]. Using separate patient groups, as suggested by reviewer#2, is in our opinion not possible, since the analysis requires paired measurements.
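To illustrate the method itself, a minimal Bland-Altman sketch on hypothetical paired data (mean difference with limits of agreement at ±1.96 SD of the differences) would be:

```python
# Illustrative Bland-Altman sketch on hypothetical paired measurements.
# Agreement is computed per pair, which is why synchronous, paired
# recordings are a precondition for this analysis.
from statistics import mean, stdev

method_a = [4, 5, 3, 6, 4, 7, 2, 5]  # e.g. hypothetical app scores
method_b = [3, 5, 4, 6, 3, 6, 2, 4]  # e.g. hypothetical nurse scores at the same moments

diffs = [a - b for a, b in zip(method_a, method_b)]
bias = mean(diffs)          # mean difference between the two methods
loa = 1.96 * stdev(diffs)   # half-width of the limits of agreement
print(f"bias = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
```

Because the differences are computed pair by pair, splitting patients into two separate groups (one per method) would leave nothing to subtract.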

Reviewer#2: The app uses an 11-level faces pain scale. The manuscript provides no reference for the reliability and validity of this faces pain scale.

The facial expressions were added to the numerical rating scale to clarify the meaning of the numbers and the direction of the scale from 0 (no pain at all) to 10 (worst pain imaginable); the two scales are thus presented jointly. As we often see when only a numerical rating scale is used, many patients use the scale in reverse to rate their pain, as if rating achievements and experiences on a scale from 0 to 10, with 10 as the best result. We have added the following to the manuscript, provided with a suitable reference: ‘The NRS was also equipped with an 11-face pain scale to clarify and indicate the direction of the NRS. The use of an 11-face pain scale seems appropriate for measuring acute postoperative pain in adult patients, as was shown in a study amongst orthopaedic surgical patients.’

Reviewer#2: The methods provide little detail on the procedure for interviewing the healthcare professionals. For example, were they interviewed individually or in groups? It appears they were asked to examine the app at the start of the interview. Was this examination meant to be comparable to patients’ experiences actually using the app?

The stakeholders were interviewed individually. Indeed, they were asked to examine the application before the start of the interview. This was not meant to be comparable to the experiences of patients, who could actually use the application for the duration of their hospital admission. We revised the methods section of the manuscript to provide more clarity.

Reviewer#2: Rationales for the result focus on the times of pain ratings and division of the results by patient were not provided.

We agree with reviewer#2 that the manuscript provides no clarity about the rationales for the secondary objectives. The secondary objective was to investigate whether patient self-recording, compared to nurse-led assessment, is a suitable method for postoperative pain management. We have adjusted the abstract, introduction, and materials and methods to explain our rationales better.

Reviewer#2: Manuscript needs to clarify the terms “pain intensity slider” and “pain chart” early.

We corrected ‘pain intensity slider’ to ‘Numerical Rating Scale’ throughout the manuscript to provide more clarity. The in-app pain chart was already explained in the methods section with an accompanying image; we have not adjusted this.

Reviewer#2: Abstract needs to clarify how patients’ and nurses’ pain ratings were “comparable”.

We adjusted the abstract with the following sentence: ‘The results suggest that the overall median pain intensity scores recorded by patients with a smartphone application show no statistically significant difference from pain scores recorded by nurses and therefore could be used for postoperative pain management.’

Reviewer#2: The terms “benchmark” and “benchmarking” appear throughout the manuscript without clear meaning.

We have corrected the terms “benchmark” and “benchmarking” throughout the manuscript with appropriate reformulation.

Reviewer#2: The meanings of “back-office system of Logicapps,” “ease of customization,” “CE certificate,” and “validation questions” need to be provided.

We corrected ‘back-office system’ to ‘data server’, ‘validation questions’ to ‘in-app pain assessment questions’, and ‘ease of customization’ to ‘ease of adjustment’.

We explained ‘CE certificate’ in the manuscript as a Conformité Européenne (CE) certificate. This certificate indicates conformity with health, safety, and environmental protection standards for products used and sold within the European Economic Area (EEA); such a certificate is not mandated during the development and research of a smartphone application.

Reviewer#2: Correct “a CE certificate for was not necessary…”

We added ‘Conformité Européenne certificate (CE certificate)’ and corrected the sentence.

Reviewer#2: The purpose of the references provided after the sentences describing sample size are unclear.

We thank reviewer#2 for this comment. In this reference, the theory behind the methods used to estimate the sample size is explained.

Reviewer#2: Throughout the manuscript, pain ratings are categorized (e.g., “considerable”) without any reference supporting the categorizations.

In the patient questionnaire we used the following categorization: “no pain”, “little pain”, “bearable pain”, “considerable pain”, “severe pain”. This was meant as a verbal rating scale for patients to rate their overall experienced pain during the postsurgical admission. The original Dutch terms used in the questionnaire are derived from the studies of Breivik et al. 2008 [3] and Jensen et al. 2011 [4]. Currently no validated verbal rating scale is available for the Dutch language. We have stated this in the outcomes section of the revised manuscript.

Reviewer#2: Are “bearable” and “acceptable” meant to be the same category of pain ratings?

We replaced ‘acceptable’ with ‘bearable’ throughout the manuscript, in accordance with the wording in the application and patient questionnaire.

Reviewer#2: Are the “stakeholders” and “experts” also the “healthcare professionals”? Manuscript states “patients and stakeholders” and “patients and experts.” Participants, who were patient advisors, should not be described as “healthcare professionals” unless they were also licensed healthcare professionals.

We corrected ‘healthcare professionals’ to ‘stakeholders’ throughout the manuscript and explained the composition of this group in the abstract and methods section.

Reviewer#2: correct “from the patients and experts,” “chart does not,” “patients (green) record”.

We corrected these sentences in the manuscript.

Reviewer#2: The figures show the app asked if patients wanted additional medication, but they did show the app asking if the patients wanted nursing assistance as was stated in the manuscript.

We are not sure we understand this comment; all screens of the application were uploaded as figures. The Dutch language used in the figures may have led to some misinterpretation; for the revised manuscript we have translated all figures into English.

Reviewer#2: Need to adhere to a guideline for reporting qualitative methods such as O’Brien et al. (2014) in addition to the STROBE guideline.

We agree with reviewer#2 that the suggested reference provides a useful checklist for reporting. We used it as a final check before submitting the revised version of the manuscript. Furthermore, we added the following to the manuscript: ‘The study results are reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline for observational studies and the Standards for Reporting Qualitative Research (SRQR)’. O’Brien et al. 2014 [5] has been added to the manuscript reference list.

Reviewer#2: Manuscript refers to the reliability of the app, but the reliability of the app was not examined in the manuscript.

We agree with reviewer#2; we did not test for reliability. We have changed the terms ‘reliable’ and ‘reliability’ to ‘comparable’ in several sections of the manuscript.

Reviewer#2: correct “Although the healthcare professionals suggested displaying analgesic medication, in the in-app pain intensity chart as well”

We corrected this sentence to: ‘Yet, the stakeholders suggested that it would be useful to display analgesic medication in the application pain chart to provide patients with more information about their treatment’.

Reviewer#2: All the figures need to be in English.

We translated all figures into English where this had not yet been done.

References

1. Reynoldson C, Stones C, Allsop M, et al. Assessing the quality and usability of smartphone apps for pain self-management. Pain Medicine. 2014;15(6):898-909.

2. Bland JM, Altman DG. Measuring agreement in method comparison studies. Statistical Methods in Medical Research. 1999;8(2):135-160. https://doi.org/10.1177/096228029900800204

3. Breivik H, Borchgrevink PC, Allen SM, et al. Assessment of pain. British Journal of Anaesthesia. 2008;101(1):17-24.

4. Jensen Hjermstad M, Fayers PM, Haugen DF, et al. Studies comparing numeric rating scales, verbal rating scales and visual analogue scales for assessment of pain intensity in adults: a systematic literature review. Journal of Pain and Symptom Management. 2011;41(6):1073-1093.

5. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Academic Medicine. 2014;89(9):1245-1251. doi:10.1097/ACM.0000000000000388

Sincerely yours, also on behalf of my co-authors

Bram Thiel, MSc

OLVG Hospital

Department of Anesthesiology

PO Box 95500; 1090 HM Amsterdam, the Netherlands

E: b.thiel@olvg.nl;

T: +31614584978

Attachment

Submitted filename: Rebuttal letter_6_10_2019.docx

Decision Letter 1

Peter M ten Klooster

18 Dec 2019

PONE-D-19-17106R1

Patient reported postoperative pain with a smartphone application: a proof of concept

PLOS ONE

Dear Thiel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Since the previous two reviewers were quite critical of the original manuscript, but unable to review your revisions, I’ve invited an additional reviewer to examine the manuscript. This reviewer noted several remaining major issues with the manuscript and with the revisions made in response to the previous review. I fully agree with the sensible and important comments from this reviewer. Based on the review, I’ve decided that the manuscript still needs to be thoroughly improved (major revision) to be considered for publication.

Should you decide to submit a revision, please carefully address all the issues raised by the reviewer in your response letter and make the required changes in the manuscript.

In addition, please pay particular attention to and thoroughly check the accuracy of all statistics throughout the entire manuscript. The reviewer already pointed out several errors or inconsistencies between the abstract and the manuscript body. I also noted a potential error with the numbers in the abstract. For instance, in the abstract you described that “…Two patients (1%) were …”, whereas this percentage should probably be 4% considering the sample of 50 patients. Finally, make sure that the track changes and clean version match exactly as there appear to be several discrepancies between both in the current versions, which hinders a careful review process.

==============================

We would appreciate receiving your revised manuscript by Feb 01 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Peter M ten Klooster, Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: No

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: Thank you for the opportunity to review this interesting paper on using a smartphone to assess pain post operatively. Unfortunately, even though it has been revised, I find a couple major problems and many minor issues with this manuscript, most of which were not mentioned by the previous reviewers.

1. The revisions to the manuscript have added the label of NRS or Numerical Rating Scale to describe the app pain scale used. This is just not accurate. The Numerical Rating Scale is not just using the numbers 0-10 to describe pain. It should include these numbers along a line, with the whole line visible at once. Often there are anchor words at either end. The app, as shown in Figure 1, does not do this. I assume a line with number is what is used by the nurse. If instead the nurse used some other form of the numbers or it was purely verbal, it would be good to add this to the manuscript. Your Kim et al 2012 citation from the Journal of Palliative Care clearly shows a line with numbers scale where you can see the whole line at once. Your first citation, for the NRS, Breivik et al. 2008 from BJA also shows a line with number for the NRS.

Please refrain from referring to the app scale as the NRS. Much work has been done to validate the proper NRS as a good pain scale. Is there any scientific evidence of this wheel scale (where you can only see a few numbers at once and the faces seem to be the focus) being consistent and valid as a pain scale? What was the starting position of the wheel? Was it 0, 5, or 10? This needs to be well explained or cited within the paper.

As seen though in Table 3, but not really touched on in the discussion, there is a moderately important comment from the stakeholders that the 11 faces look too similar and that the frequently used 6 face scale should be used. Yes, exactly, this is most likely referring to the Faces Pain Scale - Revised. Now, I understand you are using a new 11-faces pain scale to match the 11 numerical values. Amazingly, the paper that you cite for this, Van Giang et. al 2015 in Pain Management Nursing does not have an image of the scale. The original 11-face FPS paper, cited in that paper as Kim and Buschmann, 2016 in International Journal of Nursing Studies has 11 faces that appear to have more detail and look different than the very simple faces you used. Where did your faces come from? Were they designed by you for this app? That is not implied by your text. It would be best to use existing scales in the app to rely on some existing scientific evidence. Additionally, there is lots of good and critical feedback in Table 3 that can be used to improve your app. Perhaps once that is done, you can do a better test with it.

2. Leading from the above point, the second major problem is that the secondary objective is completely mixed up with the primary objective. That is, comparing the pain scores from this app with the nurse scores is not really meaningful, if you are now going to make large adjustments on the app - thus one of the two things you are comparing is disappearing (being greatly altered) in the future. If you are going to change the pain scale, which you should, then the comparison is not meaningful. I would suggest removing this secondary objective.

I also found the secondary objective results confusing, as you say that the patient and nurse scores are similar enough that one could substitute for the other (based on no significant difference in the medians) but then dive into the details of how certain ranges on the scales are significantly different.

Minor issues:

I'm not sure how relevant these below points will be given the major issues, but I have included them anyway. I understand that it can be quite an exercise to attempt to publish this work in English when it was all done in Dutch, but please take more care in the future to have the manuscript carefully looked through internally before you submit.

3. Please double check your numbers for the results. You have an error for the same number in the abstract and the manuscript body, assuming your table is correct. In the abstract

"‘severe’: 24 (28%)." should be "‘severe’: 14 (28%)."

In the body:

"and 30 (28%) valued their pain as considerable to severe" should be "and 14 (28%) valued their pain as considerable to severe"

4. Please be clearer about where (and how many places) the study took place. You currently have:

"study was conducted in a Dutch general hospital situated at two locations in Amsterdam, the Netherlands"

Ok...so this is two locations of the same hospital? Or is this two areas of one hospital? In the discussion (limitations paragraph) you have:

"the study was performed in a single center setting"

5. Some typos/grammar problems in the manuscript:

Participants and recruitment:

"...provided them instructions how to use the application..." should be:

"...provided them instructions on how to use the application..."

"...A patient sample size of 50 patients was, based on sample size calculation for qualitative studies, was estimated..." should be:

"...A patient sample size of 50 patients, based on sample size calculation for qualitative studies, was estimated..."

Figure 4: "adndroid" and "particoipate".

Discussion:

"under the implementation of patients self-recording pain on a lager scale."

should be:

"under the implementation of patients self-recording pain on a larger scale."

"If a patient answer is yes, a comment for the nurses to discus the pain appears in the patient electronic medical record."

Should be:

"If a patient answer is yes, a comment for the nurses to discuss the pain appears in the patient electronic medical record."

Conclusion:

"...high satisfaction rate for the majority of patients stakeholders and that it provides outcome comparable to nurses pain assessment"

Should be:

"...high satisfaction rate for the majority of patients stakeholders and that it provides an outcome comparable to nurses pain assessment."

6. Patients were notified three times daily to assess their pain. At what times? Was it different per patient based on when they started or at the same time for all patients?

7. This phrasing is confusing:

"We determined the following items from the literature: design, usability, content, and workflow indexing the feedback from the patients and stakeholders"

I think you mean to say that you grouped feedback into these themes or categories, as has been done before (by cited papers).

8. How come the percentages in the rows of Table 2 all add up to different amounts (below 100)? Do these represent patients who did not reply to these questions? You have the brackets wrong in the cell that's in the 10th row and 3rd column.

9. Part of the Discussion reads:

"the number of pain scores ranging from NRS ‘0 to 4’ and NRS ‘5 to 7’ recorded by patients were statistically significant higher compared with the recordings of nurses (p 0.0024, p 0.0096)"

This is not really written correctly. You should say "the percentage" not "the number" and the '0 to 4' range was lower while the '5 to 7' range was higher than when recorded by nurses, not both higher.

10. Reviewer #2 has a good point, that "The results need to provide descriptive statistics-including minimum and maximum – for the number of ratings provided per patient before questionnaire completion". I do not understand your response. The reviewer is referring to statistics summarizing the number of ratings per participant using the app. By "questionnaire" in your response do you mean the app ratings or the end questionnaire? Either you are saying that the data in the app does not have time stamp (which can't be true since you provide an in-app graph of ratings over time) or perhaps you are saying the questionnaire was completed before the actual end use of the app...ie. patients used the app even after the questionnaire. This second possibility would be quite odd, but regardless you could summarize the total number of ratings done by each patient.

11. Later Reviewer #2 also asks "For example, were they interviewed individually or in groups?". You should put the answer to this question into your actual manuscript as it is an important point.

12. Finally, I see that you said "We would also like to mention that the tables and reference list are only correct in the clean version copy, as we made changes to it which could not adequately be processed in the track changes version."

Unfortunately, in the regular body of the manuscript (not in tables and references) there are a number of differences between the tracked version and the clean version. It seems some things were adjusted in both separately. This makes it difficult to review. Some examples are on lines 170-171, 315, 352-353.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 May 8;15(5):e0232082. doi: 10.1371/journal.pone.0232082.r004

Author response to Decision Letter 1


3 Feb 2020

Rebuttal letter: PONE-D-19-17106R1

Title: Patient reported postoperative pain with a smartphone application: a proof of concept

Date: January 31st 2020

Dear Editorial Board, Dear Peter ten Klooster,

On behalf of all authors who contributed to this manuscript, I thank you for the opportunity to submit a revised version. We are grateful to the editor and the reviewer for their valuable comments and suggestions for improvement. Below we address each comment separately.

Editorial comments:

Please pay particular attention to and thoroughly check the accuracy of all statistics throughout the entire manuscript. The reviewer already pointed out several errors or inconsistencies between the abstract and the manuscript body. I also noted a potential error with the numbers in the abstract. For instance, in the abstract you described that “…Two patients (1%) were …”, whereas this percentage should probably be 4% considering the sample of 50 patients.

The editor and the reviewer identified several inconsistencies in the descriptive statistics. We performed a thorough check of all descriptives in the abstract and the body of the manuscript. We found inconsistencies in the abstract, the body of the manuscript and table 2 and corrected these. From the abstract we have removed the following: ‘Experienced pain after surgery was scored by patients as ‘no to mild’: 8 (16%), ‘bearable’: 25 (50%) and ‘considerable’ or ‘severe’: 24 (28%)’ (track changes lines 36-37).

In the body of the manuscript we corrected the following in line with table 2: ‘The overall experienced postoperative pain was valued as no pain by 3 patients (6%) and as little pain by 5 patients (10%); 25 patients (50%) valued their pain as bearable and 13 (26%) valued their pain as considerable. One patient (2%) experienced severe pain’ (track changes lines 239-242).

In table 2 we have corrected the cell in row 12, column 3 from 24 (48%) to 23 (46%). Thereby we also corrected this sentence from the results section from: ‘The in-app pain intensity chart was valued as (very) useful by 41 patients (82%)’ to: ‘The in-app pain intensity chart was valued as (very) useful by 40 patients (80%)’ (track changes lines 247-248).

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io

As already mentioned in the previous rebuttal letter, a summary of the study protocol is published in the Dutch trial register (www.trialregister.nl). The Dutch trial register is a publicly accessible and freely searchable register for prospective studies run in the Netherlands or carried out by Dutch researchers. The study registration number is: NL6565 (track changes lines 91-93).

Reviewer comments:

1. The revisions to the manuscript have added the label of NRS or Numerical Rating Scale to describe the app pain scale used. This is just not accurate. The Numerical Rating Scale is not just using the numbers 0-10 to describe pain. It should include these numbers along a line, with the whole line visible at once. Often there are anchor words at either end. The app, as shown in Figure 1, does not do this. I assume a line with numbers is what is used by the nurse. If instead the nurse used some other form of the numbers or it was purely verbal, it would be good to add this to the manuscript. Your Kim et al 2012 citation from the Journal of Palliative Care clearly shows a line-with-numbers scale where you can see the whole line at once. Your first citation, for the NRS, Breivik et al. 2008 from BJA also shows a line with numbers for the NRS.

Please refrain from referring to the app scale as the NRS. Much work has been done to validate the proper NRS as a good pain scale. Is there any scientific evidence of this wheel scale (where you can only see a few numbers at once and the faces seem to be the focus) being consistent and valid as a pain scale? What was the starting position of the wheel? Was it 0, 5, or 10? This needs to be well explained or cited within the paper.

We agree with the reviewer that referring to the pain app scale as a numerical rating scale (NRS) is not appropriate. Indeed, Breivik et al. (2008) show a line with numbers for the NRS, but they also mention in their paper: ‘An NRS with numbers from 0 to 10 (‘no pain’ to ‘worst pain imaginable’) is more practical than a VAS, easier to understand for most people, and does not need clear vision, dexterity, paper, and pen. One can even determine the intensity of pain accurately using telephone interview, a computerized telephone interview, and recording of NRS data by the patient directly into the database of a computer via the telephone keyboard’. This is why we chose to refer to the pain app scale as NRS. In the manuscript we now use the neutral term ‘pain app scale’ and have corrected this throughout the manuscript.

The nurses in OLVG hospital routinely assess postoperative pain verbally by asking patients to indicate the severity of their pain with a number from 0 to 10, which is common practice in most Dutch hospitals. This is now mentioned in the manuscript: ‘Moreover, postoperative pain intensity was regularly verbally assessed on a scale from 0 (no pain) to 10 (worst imaginable pain) by ward nurses at least once every eight hours and thereafter recorded in the patient’s electronic medical record (EMR). This method of pain assessment is common practice in most Dutch hospitals and commonly referred to as ‘numerical rating scale’ (track changes lines 127-132). The starting position of the wheel was set at 5 by default; this is now mentioned in the manuscript: ‘The default starting position of the wheel was set at 5 and the wheel was provided with text anchors at both ends of the scale referring to no pain (0) and worst imaginable pain (10)’ (track changes lines 145-147).

As seen though in Table 3, but not really touched on in the discussion, there is a moderately important comment from the stakeholders that the 11 faces look too similar and that the frequently used 6 face scale should be used. Yes, exactly, this is most likely referring to the Faces Pain Scale - Revised. Now, I understand you are using a new 11-faces pain scale to match the 11 numerical values. Amazingly, the paper that you cite for this, Van Giang et. al 2015 in Pain Management Nursing does not have an image of the scale. The original 11-face FPS paper, cited in that paper as Kim and Buschmann, 2016 in International Journal of Nursing Studies has 11 faces that appear to have more detail and look different than the very simple faces you used. Where did your faces come from? Were they designed by you for this app? That is not implied by your text. It would be best to use existing scales in the app to rely on some existing scientific evidence. Additionally, there is lots of good and critical feedback in Table 3 that can be used to improve your app. Perhaps once that is done, you can do a better test with it.

We agree with the comments from the reviewer on both the pain scale and the faces scale used in the smartphone application. Both the pain scale and the 11-faces scale were designed for this application. The faces scale used in the application was designed to fit the 11-point numerical scale and was based on the 6-faces scale recommended by the 2015 pain guideline from the Dutch College of General Practitioners. This is now mentioned in the manuscript with a reference (track changes lines 149-151). We disagree that it is not useful to test with both the pain scale and the faces scale in this phase of developing an application. It is very important early in the design of eHealth tools to use end-user feedback to further improve the application, which was one of the objectives of this study.

We agree that the pain scale and 11-faces scale used have not yet been validated. We now mention this in the abstract and manuscript body: ‘Further research is needed to validate the in-app pain scale and the 11-faces scale with the current gold standard VAS and NRS for pain’ (track changes lines 48-49); ‘More importantly, the pain intensity wheel has not yet been validated in relation to VAS or NRS, which are currently considered the gold standard for pain assessment’ (track changes lines 356-358); ‘Much work still needs to be done to thoroughly adjust and examine the psychometric features of the in-app pain intensity wheel and validate it against VAS and NRS’ (track changes lines 404-406).

2. Leading from the above point, the second major problem is that the secondary objective is completely mixed up with the primary objective. That is, comparing the pain scores from this app with the nurse scores is not really meaningful, if you are now going to make large adjustments on the app - thus one of the two things you are comparing is disappearing (being greatly altered) in the future. If you are going to change the pain scale, which you should, then the comparison is not meaningful. I would suggest removing this secondary objective.

We understand the reviewer's point, but we do not fully agree and believe we have valid arguments to underpin this: the objective of this study is to collect data for improvement of the application and to see if patient self-recording of postoperative pain with a smartphone application could be used for postoperative pain management. We believe it is appropriate, even necessary, at this early stage of developing a smartphone application to assess the agreement between patient-recorded and nurse-recorded pain scores, to better understand what is needed to improve and validate the pain scale and faces scale used.

I also found the secondary objective results confusing, as you say that the patient and nurse scores are similar enough that one could substitute for the other (based on no significant difference in the medians) but then dive into the details of how certain ranges on the scales are significantly different.

We agree with the reviewer that the results of the secondary objectives might confuse the reader. Therefore we have decided to remove the following paragraphs (including figure 5) from the manuscript to provide more clarity:

‘Patients record their pain more widely distributed throughout day and night (Figure 5). Nurses record pain mainly during their daily rounds, at 9:00, 12:00, and 18:00 and between 21:00 and 22:00. During the night, from 0:00 to 3:00, patients recorded pain 20 times and nurses recorded pain 4 times, respectively with median NRS 5 (range 1 to 9) and median NRS 6.5 (range 3 to 7) with a difference of 1.5 (p = 0.723). During the night, from 0:00 to 8:00, patients recorded their pain 54 times and nurses recorded pain 57 times, respectively with median NRS 5 (0 to 9) and median NRS 3 (0 to 8) with a difference of 2 (p = 0.03)’. Figure 5: Time distribution of pain score entries (track changes lines 303-310).

‘More importantly, the pain scores recorded by patients during the night were statistically significant higher compared with those recorded by nurses. Those patients with nightly high pain scores could potentially benefit from using the pain application, especially if we are able to establish a feedback loop from the app to the patients’ electronic medical record combined with an automated on-demand nurse warning system. However, in a recent study amongst 105 chronic pain patients, pilot testing a smartphone application with and without 2-way messaging between patient and healthcare provider, patients in the 2-way messaging group felt that their healthcare providers were less responsive (29). Therefore it seems very important that the nurse is able to respond to the patients call when introducing 2-way messaging or a feedback loop in daily hospital care’ (track changes lines 342-351).

Minor issues:

3. Please double check your numbers for the results. You have an error for the same number in the abstract and the manuscript body, assuming your table is correct. In the abstract:

"‘severe’: 24 (28%)." should be "‘severe’: 14 (28%)."

In the body:

"and 30 (28%) valued their pain as considerable to severe" should be "and 14 (28%) valued their pain as considerable to severe"

Thank you for pointing this out. We have corrected the descriptive statistics in the abstract (track changes lines 29-30) and the manuscript body (track changes lines 239-242), (track changes lines 247-248), according to the results from table 2.

4. Please be clearer about where (and how many places) the study took place. You currently have:

"study was conducted in a Dutch general hospital situated at two locations in Amsterdam, the Netherlands"

Ok...so this is two locations of the same hospital? Or is this two areas of one hospital? In the discussion (limitations paragraph) you have:

"the study was performed in a single center setting"

The study was conducted in OLVG Hospital which has two locations situated in different parts of Amsterdam. We adjusted the discussion section with the following sentences ‘This study is not without limitations. First, the study was performed on two locations of OLVG Hospital situated at different parts in Amsterdam which may not benefit the generalizability of the results’ (track changes lines 377-379).

5. Some typos/grammar problems in the manuscript:

"...provided them instructions how to use the application..." should be:

"...provided them instructions on how to use the application..."

This has been corrected (track changes line 108).

"...A patient sample size of 50 patients was, based on sample size calculation for qualitative studies, was estimated..." should be:

"...A patient sample size of 50 patients, based on sample size calculation for qualitative studies, was estimated..."

This has been corrected (track changes line 109).

Figure 4: "adndroid" and "particoipate".

This has been corrected, together with other adjustments of the figure.

"under the implementation of patients self-recording pain on a lager scale."

should be:

"under the implementation of patients self-recording pain on a larger scale."

This has been corrected (track changes line 401).

"If a patient answer is yes, a comment for the nurses to discus the pain appears in the patient electronic medical record."

Should be:

"If a patient answer is yes, a comment for the nurses to discuss the pain appears in the patient electronic medical record."

This has been corrected (track changes line 409).

"...high satisfaction rate for the majority of patients stakeholders and that it provides outcome comparable to nurses pain assessment"

Should be:

"...high satisfaction rate for the majority of patients stakeholders and that it provides an outcome comparable to nurses pain assessment."

This is corrected (track changes line 419).

6. Patients were notified three times daily to assess their pain. At what times? Was it different per patient based on when they started or at the same time for all patients?

All patients were notified at fixed times at 8:00 am, 14:00 pm and 22:00 pm. We adjusted the text into ‘In addition, all patients were notified three times daily fixed at 8:00 am, 14:00 pm and 22:00 pm by the application to record pain’ (track changes lines 139-141).

7. This phrasing is confusing: "We determined the following items from the literature: design, usability, content, and workflow indexing the feedback from the patients and stakeholders"

I think you mean to say that you grouped feedback into these themes or categories, as has been done before (by cited papers).

We adjusted the phrasing into ‘we grouped feedback from patients and stakeholders into the following themes: design, usability, content, and workflow, obtained from previous research (25-26)’ (track changes lines 186-188).

8. How come the percentages in the rows of Table 2 all add up to different amounts (below 100)? Do these represent patients who did not reply to these questions? You have the brackets wrong in the cell that's in the 10th row and 3rd column.

Indeed, the different percentages in the rows of table 2 are due to the fact that 3 patients did not complete or only partially completed the questionnaire. This was already mentioned in the text (track changes lines 207-210). We have now added: ‘one patient used the application, but only partially completed the questionnaire’ (track changes lines 210-211). Furthermore, we adjusted the table 2 legend with the following sentence: ‘the difference in total numbers per question is explained by 3 patients who did not complete or only partially completed the questionnaire’ (track changes lines 250-251). The brackets in cell row 10, column 3 have been corrected.

9. Part of the Discussion reads: "the number of pain scores ranging from NRS ‘0 to 4’ and NRS ‘5 to 7’ recorded by patients were statistically significant higher compared with the recordings of nurses (p 0.0024, p 0.0096)"

This is not really written correctly. You should say "the percentage" not "the number" and the '0 to 4' range was lower while the '5 to 7' range was higher than when recorded by nurses, not both higher.

Thank you. We have corrected the text to: ‘However, there are some differences: the percentage of pain scores ranging from ‘0 to 4’ recorded by patients with the app was lower compared with the NRS recordings of the nurses (p = 0.0024). Moreover, the percentage of pain scores ranging from ‘5 to 7’ recorded by patients with the app was higher compared with the NRS recordings of nurses (p = 0.0096), although these differences are probably not clinically relevant’ (track changes lines 334-340).

10. Reviewer #2 has a good point, that "The results need to provide descriptive statistics-including minimum and maximum – for the number of ratings provided per patient before questionnaire completion". I do not understand your response. The reviewer is referring to statistics summarizing the number of ratings per participant using the app. By "questionnaire" in your response do you mean the app ratings or the end questionnaire? Either you are saying that the data in the app does not have time stamp (which can't be true since you provide an in-app graph of ratings over time) or perhaps you are saying the questionnaire was completed before the actual end use of the app...ie. patients used the app even after the questionnaire. This second possibility would be quite odd, but regardless you could summarize the total number of ratings done by each patient.

We agree that our answer on this point was not clear. In this current revision we have now provided descriptive statistics for the number of ratings per patient before questionnaire completion in the Results section: ‘Indicating the experience patients have gained in using the app. The median number of pain recordings by patients before questionnaire completion was 3 with a range of 0 to 13’ (track changes lines 214-216).

11. Later Reviewer #2 also asks "For example, were they interviewed individually or in groups?” You should put the answer to this question into your actual manuscript as it is an important point.

We agree with the reviewer that this is important information. It was already mentioned in the manuscript. ‘Stakeholders were individually questioned during a semi-structured interview conducted by a researcher after they had the opportunity to examine the application (supporting information 3 and 4)’ (track changes lines 173-175).

12. Unfortunately, in the regular body of the manuscript (not in tables and references) there are a number of differences between the tracked version and the cleaned version. It seems some things were adjusted in both separately. This makes it difficult to review. Some examples are on lines 170-171, 315, 352-353.

We apologize for making the reviewing process more complicated. We have now corrected all identified errors and carefully addressed all editorial and reviewer comments in the current track changes version of the manuscript and made an exact copy of the changes in the ‘clean version’.

Attachment

Submitted filename: Rebuttal letter_31_1_2020.docx

Decision Letter 2

Peter M ten Klooster

18 Feb 2020

PONE-D-19-17106R2

Patient reported postoperative pain with a smartphone application: a proof of concept

PLOS ONE

Dear Thiel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Your manuscript was re-reviewed by the same reviewer. As you can see from the comments below, the reviewer was generally satisfied with the revisions made in response to the previous major issues. Some minor textual issues remain, which should be relatively easy to adjust.

We would appreciate receiving your revised manuscript by Apr 03 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Peter M ten Klooster, Ph.D.

Academic Editor

PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: Thank you to the authors for addressing my comments. Figure 4 has been much improved. The removal of some of the secondary outcome text strengthens the paper. There are only minor revisions to be made now.

Regarding the response to my previous point about the primary and secondary objectives being entangled, I agree with the authors that this study has clearly helped them understand what is needed to improve the app and the pain and faces scale, although I fail to see how comparing the resulting scores to nurse-recorded pain scores contributes to this. The goal of that original secondary objective was clearly to show the benefit of the app, rather than find its weaknesses. Nonetheless this can stay mentioned in the paper as I find the removal of the confusing parts of the secondary outcome adequate. Removing the paragraphs and Figure 5 narrows the focus of the paper, which is an improvement.

Since the results paragraph about when pain scores were done (during the night, etc.) has been removed, please remove the 2 sentences from the discussion, lines 340-342 as they do not make sense given the lack of associated data in the shortened results.

You have not actually removed all mentions of your in-app scale as an NRS from your paper. This should still be corrected in the Secondary outcome section, where for example on line 292 of the Tracked Changes version you refer to the "patient recorded NRS". Please adjust this section accordingly.

Additionally, the authors refer to "the in-app pain scale and 11-faces scale" in the abstract, but from what I see in Figure 1 the numbers and faces all go together on a single wheel (are moved as one), so the whole thing should be referred to as the "in-app pain scale" or "in-app pain wheel". At the end of the abstract, it could be referred to as the "in-app pain scale with 11 faces" or "the app's 11 point numeric and faces pain scale" or either option with "wheel" instead of "scale".

In the Study Procedures section, thank you for adding the notification times on lines 139 and 140 as this fits with the three dots per day in Figure 3. Please remove the "am" and "pm" from these lines, as you are using a 24-hour clock, so it is unnecessary.

In the Results and Discussion - Main Outcome section, I take issue with the way that you have labelled the lumped together responses for "satisfying and very satisfying" as well as "agree and totally agree" by using brackets around "very" and "totally". It is not immediately obvious that you are giving the sum of these two responses. I would much prefer you change "(very) satisfying" to "satisfying or very satisfying" and "(totally) agreed" to "agreed or totally agreed". This is how you listed it in the abstract.

Please fix your new reference 19, which is missing "http" in the URL.

Some small grammar issues to adjust:

On line 30 of the tracked changes version, thank you for fixing the counts and percents, but you have two "and"s in the sentence. The first one should just be a comma.

Line 131, in order to make this sentence grammatically correct please change it to "referred to as a 'numerical rating scale'" or "referred to as the 'numerical rating scale'". If using "the", perhaps you should capitalize it: 'Numerical Rating Scale'.

Line 214 onto 215, you have an odd sentence fragment: "Indicating the experience patients have gained in using the app." This is not a complete sentence because it does not have a subject. Please combine this with the next sentence in some way so that it is grammatically correct.

Please add a closing single quote on line 332 to end the quote from the patient.

In the discussion, your points are numbered, but there is no "Four". Please fix this by changing "Five," to "Four," on line 371.

Thanks for the clarification on line 377, but please change "on two locations" to "in two locations" or "at two locations" as these are proper ways to say it in English. Also please change "situated at different parts in Amsterdam" to "situated at different parts of Amsterdam" or simply "in Amsterdam".

Remove the extra period on line 388 and add a period at the end of your conclusion.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 May 8;15(5):e0232082. doi: 10.1371/journal.pone.0232082.r006

Author response to Decision Letter 2


6 Apr 2020

Editorial comments:

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io

As already mentioned in the previous rebuttal letter, a summary of the study protocol is published in the Dutch trial register (www.trialregister.nl). The Dutch trial register is a publicly accessible and freely searchable register for prospective studies run in the Netherlands or carried out by Dutch researchers. The study registration number is: NL6565. (Track changes lines 85-87)

Reviewers' comments:

1. Since the results paragraph about when pain scores were done (during the night, etc.) has been removed, please remove the 2 sentences from the discussion, lines 340-342 as they do not make sense given the lack of associated data in the shortened results.

Removed: Moreover, patients recorded their pain more divided through the day and more during the night compared to nurses. Nurses tend to record pain during specific moments as part of their daily routine and care rounds for patients. (Track changes lines 331-333)

2. You have not actually removed all mentions of your in-app scale as an NRS from your paper. This should still be corrected in the Secondary outcome section, where for example on line 292 of the Tracked Changes version you refer to the "patient recorded NRS". Please adjust this section accordingly.

Adjusted into: Patients recorded their pain score with the app 307 times, while nurses recorded a NRS for pain 396 times. The median patient recorded pain app score was 4.0 (range 0 to 10); the median nurse recorded NRS for pain was 4.0 (range 0 to 9), p = 0.06. The differences in pain recordings in the ranges ‘0 to 4’, ‘5 to 7’ and ‘8 to 10’ between patients and nurses were 11% (p = 0.0024), 9% (p = 0.0096) and 2% (p = 0.27), respectively (Table 4). One hundred ninety-seven patient pain app scores (64%) were rated as pain during rest. Patients asked 92 times (29%) for extra analgesia and 63 times (20%) for a nurse. One hundred thirty-one patient pain app scores (42%) were ≥ 5; of these, 109 were still rated as bearable and patients asked for extra analgesia only 55 times (Table 5). (Track changes lines 289-297)

3. Additionally, the authors refer to "the in-app pain scale and 11-faces scale" in the abstract, but from what I see in Figure 1 the numbers and faces all go together on a single wheel (are moved as one), so the whole thing should be referred to as the "in-app pain scale" or "in-app pain wheel". At the end of the abstract, it could be referred to as the "in-app pain scale with 11 faces" or "the app's 11 point numeric and faces pain scale" or either option with "wheel" instead of "scale".

Adjusted the following sentence in the abstract: Further research is needed to validate the 11-point numeric and faces pain scale with the current gold standards visual analogue scale (VAS) and NRS for pain. (Track changes lines 45-46)

Adjusted sentence in the discussion: Much work still needs to be done to thoroughly adjust and examine the psychometric features of the app’s 11-point numeric and faces pain scale and validate it against VAS and NRS. (Track changes lines 386-387)

4. In the Study Procedures section, thank you for adding the notification times on lines 139 and 140 as this fits with the three dots per day in Figure 3. Please remove the "am" and "pm" from these lines, as you are using a 24-hour clock, so it is unnecessary.

Removed: “am” and “pm” (Track changes lines 132-133)

5. In the Results and Discussion - Main Outcome section, I take issue with the way that you have labelled the lumped together responses for "satisfying and very satisfying" as well as "agree and totally agree" by using brackets around "very" and "totally". It is not immediately obvious that you are giving the sum of these two responses. I would much prefer you change "(very) satisfying" to "satisfying or very satisfying" and "(totally) agreed" to "agreed or totally agreed". This is how you listed it in the abstract.

We have changed it into the following: Thirty patients (60%) rated communicating the degree of pain with the application as satisfying or very satisfying (Table 2). The overall experienced postoperative pain was valued as no pain by 3 patients (6%) and little pain by 5 patients (10%); 25 patients (50%) valued their pain as bearable and 13 (26%) as considerable. One patient (2%) experienced severe pain. Asked whether they could easily and correctly record their pain with the application, 45 patients (90%) agreed or totally agreed. Asked about the three times daily notifications to score pain, 38 patients (76%) agreed or totally agreed that these were useful. Regarding the overall appearance of the app, 40 patients (80%) found it attractive or very attractive. Asked if it would be beneficial to contact a nurse with the application, 9 patients (18%) reported that it would not be beneficial. The in-app pain intensity chart was valued as useful or very useful by 40 patients (80%). (Track changes lines 232-242)

6. Please fix your new reference 19, which is missing "http" in the URL.

Fixed reference 18

18. Richtlijn Postoperatieve Pijn 2012 [Available from: http://www.anesthesiologie.nl].

Fixed reference 19

19. De NHG-Standaard PIJN 2015 [Available from: http://www.nhg.org/standaarden/volledig/nhg-standaard-pijn].

(Track changes lines 436 and 437)

7. On line 30 of the tracked changes version, thank you for fixing the counts and percents, but you have two "and"s in the sentence. The first one should just be a comma.

Corrected into: Pain experienced after surgery was scored by patients as ‘no’: 3 (6%), ‘little’: 5 (10%), ‘bearable’: 25 (50%), ‘considerable’: 13 (26%) and ‘severe’: 1 (2%). (Track changes lines 29-30)

8. Line 131, in order to make this sentence grammatically correct please change it to "referred to as a 'numerical rating scale'" or "referred to as the 'numerical rating scale'". If using "the", perhaps you should capitalize it: 'Numerical Rating Scale'.

Corrected into: This method of pain assessment is common practice in most Dutch hospitals and commonly referred to as a ‘numerical rating scale’. (Track changes lines 124-125)

9. Line 214 onto 215, you have an odd sentence fragment: "Indicating the experience patients have gained in using the app." This is not a complete sentence because it does not have a subject. Please combine this with the next sentence in some way so that it is grammatically correct.

Corrected into: Indicating the experience patients have gained in using the app, the median number of pain app recordings before questionnaire completion was 3 (range 0 to 13). (Track changes lines 209-211)

10. Please add a closing single quote on line 332 to end the quote from the patient.

Added: One patient stated that ‘When I need a nurse immediately I’ll use the button next to my bed’. (Track changes line 323)

Added: He said: ‘I only watch the red notifications on the corners of the app icon’. (Track changes line 342)

11. In the discussion, your points are numbered, but there is no "Four". Please fix this by changing "Five," to "Four," on line 371.

Corrected into: Four, to add more and clear personalized feedback messages in the app. (Track changes line 353)

12. Thanks for the clarification on line 377, but please change "on two locations" to "in two locations" or "at two locations" as these are proper ways to say it in English. Also please change "situated at different parts in Amsterdam" to "situated at different parts of Amsterdam" or simply "in Amsterdam".

Corrected into: First, the study was performed at two locations of OLVG Hospital in Amsterdam…(Track changes lines 359-360)

13. Remove the extra period on line 388 and add a period at the end of your conclusion.

Removed and added a period. (Track changes lines 369 and 401)

Decision Letter 3

Peter M ten Klooster

8 Apr 2020

Patient reported postoperative pain with a smartphone application: a proof of concept

PONE-D-19-17106R3

Dear Dr. Thiel,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Peter M ten Klooster, Ph.D.

Academic Editor

PLOS ONE

Acceptance letter

Peter M ten Klooster

27 Apr 2020

PONE-D-19-17106R3

Patient reported postoperative pain with a smartphone application: a proof of concept

Dear Dr. Thiel:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Peter M ten Klooster

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Patient questionnaire, original.

    (DOCX)

    S2 File. Patient questionnaire, English translation.

    (DOCX)

    S3 File. Stakeholder interview, original.

    (DOCX)

    S4 File. Stakeholder interview, English translation.

    (DOCX)

    S1 Data

    (XLSX)

    S2 Data

    (XLSX)

    Attachment

    Submitted filename: Rebuttal letter_6_10_2019.docx

    Attachment

    Submitted filename: Rebuttal letter_31_1_2020.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.

