AMIA Annual Symposium Proceedings. 2025 May 22;2024:162–171.

A Computationally-guided Qualitative Analysis to Understand User Experiences with Different Types of Mobile Personal Health Records

Zainab A Balogun 1, Pronob K Barman 1, Bianka K Onwumbiko 1, Tera L Reynolds 1
PMCID: PMC12099338  PMID: 40417496

Abstract

Mobile personal health records (mPHRs) are smartphone apps that grant patients portable and continuous access to their medical records, increasing the potential for patients to play an active role in managing their health. An extensive body of literature has focused on understanding user experiences with web-based tethered PHRs (i.e., patient portals) offered by healthcare organizations (HCOs). However, patients’ opinions of smartphone-based PHRs have received less attention. To address this gap, we used a computationally-guided qualitative analysis approach to analyze user reviews of six tethered and four interconnected mPHR apps available in both the Google Play and Apple App Stores. This approach identified dimensions of user experience related to usability, usefulness, and features important to users. Our findings reveal many similarities in user experiences between HCO-tethered and HCO-independent interconnected PHRs, along with some differences between the types of PHRs and across devices and platforms.

Introduction

For care to be person-centered, patients need to be active participants in their healthcare. Access to personal health information, such as laboratory test results, is a minimum requirement for patient engagement. Over the past few decades, technology and policy developments have worked to meet this minimum requirement. In terms of technology, personal health records (PHRs) enable patients to manage their health information electronically. In their seminal 2006 paper, Tang et al. defined three types of PHRs based on the level of integration with healthcare organization (HCO) technologies such as electronic health record (EHR) systems – from not integrated at all (i.e., a standalone PHR that requires patients to add their medical information manually) to interoperable (i.e., an integrated PHR which can be connected to multiple HCOs)1,3. All three types have also been available as smartphone apps, which we refer to as mobile personal health records (mPHRs). mPHRs provide patients with electronic health information (EHI) portability and accessibility4, enabling them to actively engage in their health anywhere and anytime. Some mPHRs allow peripheral connections to devices (e.g., smartwatches) as well5,6, enabling individuals to track patient-generated data such as heart rate, steps, and sleep duration.

The most recent U.S. health information technology policy, the 21st Century Cures Act (Cures Act), has had two major effects relevant to PHRs. First, it has made EHI more available to patients than ever. The information blocking rule of the Cures Act requires healthcare organizations (HCOs) to provide patients with their EHI immediately upon their request, with few exceptions. To comply with this legislation within the current technological and clinical context, most HCOs determined that the only feasible solution was to proactively make EHI immediately available through patient portals (also known as tethered PHRs in the literature) before patients make a request7. Patient portals are a web and smartphone-based technology offered by many HCOs to provide patients with read-only access to EHI; they often include features for virtual communication with providers (i.e., secure messaging and video visits), managing appointments and medications, and paying bills3. Second, the Cures Act has also led to HCOs providing standards-based application programming interface (API) access for easier EHI sharing with trusted third-party apps. This has created a technical foundation for interoperable mPHRs, making this option more feasible than ever before21.

Although there is a significant body of evidence on patient experiences with HCO-provided patient portals8–16 and their benefits17–20, this work has largely focused on web-based experiences; patient opinions of smartphone applications for accessing EHI have received less attention. In addition, there is a dearth of evidence on patient experiences with the different types of mPHRs, despite their inherent trade-offs: patient portal apps often provide features for two-way communication but offer an incomplete picture of the patient’s health, while standalone and interoperable PHRs allow patients to combine records from multiple HCOs but rarely offer opportunities for interaction with one’s healthcare provider. The limited existing literature on patient experiences with mPHRs has largely centered on the MyChart patient portal app (Epic, Verona, Wisconsin)23,24. Given these tools’ ubiquity and growing importance in the patient-centered healthcare paradigm, there is an urgent need to understand the functionality and user perceptions of the different types of PHR apps. Towards this end, we conducted an app review focused on the six most popular HCO-provided (i.e., tethered) and the four most popular HCO-independent (i.e., standalone or interoperable) PHR apps available in both the Apple App and Google Play Stores. We compared these apps based on the number of ratings, the user ratings, the features available, and the content of user reviews to better understand patients’ experiences with these different tools. The insights gained from this work contribute to the existing literature on PHRs by answering the following research questions: 1) What are the differences in user experiences between HCO-provided and HCO-independent mPHRs? 2) Are there differences in user experiences based on operating system (OS, i.e., iOS vs. Android)?

Methods

We used a three-phase approach to identify relevant mPHRs and understand user perceptions of these apps: (1) App Selection, (2) Topic Modeling, and (3) Content Analysis (summarized in Figure 1).

Figure 1.


The overall flow of the study includes three phases: (1) app selection adapted from the PRISMA standard, (2) topic modeling, and (3) content analysis.

Phase 1: App Selection

The app selection process was adapted from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standard and had three steps: identifying all potentially relevant apps, screening identified apps for relevance, and categorizing and sampling relevant apps for analysis. First, we identified apps by starting with a list of candidate apps created by Reynolds et al. to identify all patient-facing apps that could automatically import clinical data (e.g., prescriptions), particularly through standards-based APIs25. Briefly, Reynolds et al. searched “Top” health & fitness and medical iOS and Android app lists to identify popular apps and augmented these lists with a targeted search of the Apple and Google Play Stores using keywords such as “medical record.” This resulted in a list of apps that included iOS mPHRs (N=132) and Android mPHRs (N=148)25. We leveraged a Python web crawler script to download metadata (e.g., ratings) and user-generated content (i.e., app reviews) from each app store’s API.
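The metadata download described above can be sketched as follows. This is a minimal illustration, not the authors' actual crawler: it assumes the public iTunes Lookup API response format (the `trackName`, `averageUserRating`, and `userRatingCount` fields it returns), and the embedded sample payload stands in for a live HTTP call.

```python
import json

def parse_itunes_lookup(payload: str) -> dict:
    """Pull the metadata fields of interest out of an iTunes Lookup API
    JSON response (https://itunes.apple.com/lookup?id=<appId>)."""
    record = json.loads(payload)["results"][0]
    return {
        "name": record["trackName"],
        "avg_rating": record.get("averageUserRating"),
        "rating_count": record.get("userRatingCount"),
    }

# Trimmed sample response standing in for a live HTTP call.
sample = json.dumps({
    "resultCount": 1,
    "results": [{
        "trackName": "Example mPHR",
        "averageUserRating": 4.5,
        "userRatingCount": 1234,
    }],
})

meta = parse_itunes_lookup(sample)  # {'name': 'Example mPHR', ...}
```

The Google Play side has no comparable official JSON endpoint, which is presumably why a crawler script was needed there.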

Second, the first author (ZAB) screened the apps for relevance to our research goals. We excluded apps not represented in both app stores (N=210, iOS=97, Android=112, including one duplicate) because one of our project aims was to understand the similarities and differences in user experiences between Android and Apple apps. We then assessed the 35 remaining apps for popularity, excluding those with fewer than 250 ratings (n=11) and those whose app store pages could not be reached at the time of the analysis (n=2). Third, we categorized the remaining 22 apps by reading through the app names and descriptions, including examining the screenshots of the apps on the app stores, and assigning them a PHR type as defined by Tang and colleagues1. Categorizations were discussed amongst the research team, and disagreements were resolved. We purposively sampled 10 of the 22 apps for analysis to maximize variation, including multiple PHR categories and a variety of developers (e.g., large companies like Epic27 versus smaller ones like Meditech). The final sample of 10 apps included a total of 112,941 reviews (Android=109,105 and iOS=3,836; note: the Apple App Store restricts downloads to the 500 most recent reviews at a time).
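The cross-platform and popularity screening can be sketched as a simple filter. The helper name `screen_apps`, the example rating counts, and the choice to apply the 250-rating threshold to the combined count are illustrative assumptions; the paper does not specify the exact rule.

```python
def screen_apps(ios_apps: dict, android_apps: dict, min_ratings: int = 250) -> list:
    """Keep only apps present in BOTH stores whose combined number of
    ratings meets the popularity threshold (250 in the study).
    Both arguments map app name -> number of ratings in that store."""
    in_both = set(ios_apps) & set(android_apps)
    return sorted(
        name for name in in_both
        if ios_apps[name] + android_apps[name] >= min_ratings
    )

# Hypothetical rating counts for illustration.
ios_counts = {"MyChart": 697_000, "TinyApp": 40}
android_counts = {"MyChart": 87_788, "TinyApp": 12, "AndroidOnly": 9_000}
kept = screen_apps(ios_counts, android_counts)  # ['MyChart']
```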

Phase 2: Topic Modeling

In the next phase, we implemented a Latent Dirichlet Allocation (LDA) approach to identify topics of user experience expressed in reviews, leveraging the Gensim library in Python. We treated each type of app on each platform as a dataset (e.g., iOS tethered PHRs were one dataset), for a total of six datasets. We first pre-processed reviews by stripping the text of newline characters, punctuation, special characters, and numbers, then eliminated extraneous white space. This step was crucial to remove noise and irrelevant data, refining the text for analysis. Additionally, we standardized the text by converting all characters to lowercase. We then excluded words of two characters or fewer, on the rationale that such brief words typically hold minimal contextual value. We also identified and removed the 30 most frequent words that, despite their prevalence, were determined to be of limited analytical value (e.g., App). To further enhance the quality of our data, we curated a comprehensive list of stopwords, integrating common English stopwords with an additional set tailored to our domain (e.g., MyChart, The, is). This list filtered out ubiquitous terms that often obscure meaningful thematic patterns. Finally, each dataset was tokenized and converted to a bag-of-words format.
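The preprocessing pipeline above can be sketched in plain Python. This is a simplified stand-in for the study's actual code: the stopword lists are abbreviated placeholders, and the top-30 frequent-word removal is folded into the stopword set for brevity.

```python
import re
from collections import Counter

# Abbreviated stand-ins for the study's full stopword lists.
COMMON_STOPWORDS = {"the", "is", "a", "an", "and", "to", "it", "of", "i", "my"}
DOMAIN_STOPWORDS = {"mychart", "app"}  # domain-tailored additions

def preprocess(review: str, stopwords: set) -> list:
    """Lowercase; strip punctuation, special characters, and numbers;
    collapse whitespace; drop words of <=2 characters and stopwords."""
    text = review.replace("\n", " ").lower()
    text = re.sub(r"[^a-z\s]", " ", text)     # punctuation, digits, symbols
    text = re.sub(r"\s+", " ", text).strip()  # extraneous whitespace
    return [w for w in text.split() if len(w) > 2 and w not in stopwords]

def to_bag_of_words(token_lists):
    """Represent each document as a token -> count map."""
    return [Counter(tokens) for tokens in token_lists]

reviews = ["The app crashes at login!!", "MyChart is easy to navigate :)"]
stops = COMMON_STOPWORDS | DOMAIN_STOPWORDS
docs = [preprocess(r, stops) for r in reviews]
bows = to_bag_of_words(docs)
```

In practice Gensim's `corpora.Dictionary` and `doc2bow` would produce the bag-of-words corpus consumed by its LDA model; the `Counter`-based version here just makes the transformation explicit.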

We then applied LDA across a range of possible topic numbers, from five to 25 in increments of five. We calculated two metrics to evaluate the quality of the topics extracted by the LDA model: the coherence score, which measures the semantic similarity between high-scoring words in each topic28, and the perplexity score, indicative of the model’s predictive performance29. The coherence score was based on the ‘c_v’ measure, while the perplexity was obtained directly from the LDA model’s log-perplexity function. Higher coherence and lower perplexity scores typically indicate better topic models30. We chose the number of topics that optimized the coherence and perplexity scores.

We identified the optimal number of topics for each of our six datasets by balancing the coherence and perplexity scores. For each dataset, we plotted these metrics to visually aid in determining the optimal number of topics; the plots served as a diagnostic tool, revealing the trade-off between topic granularity and model fit. Once the optimal number of topics was established for each dataset, we trained a final LDA model with that number of topics. This process resulted in groupings of words representing the underlying topics of each dataset.
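One simple way to operationalize "balancing" the two metrics is to min-max normalize both and pick the topic count that maximizes normalized coherence minus normalized perplexity. This formula is an assumption for illustration — the authors inspected plots rather than specifying a rule — and the metric values below are made up; in practice they would come from Gensim's `CoherenceModel` and `LdaModel.log_perplexity`.

```python
def pick_num_topics(candidates, coherence, perplexity):
    """Choose the candidate topic count balancing coherence (higher is
    better) against perplexity (lower is better): min-max normalize both
    and maximize normalized coherence minus normalized perplexity."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    c, p = norm(coherence), norm(perplexity)
    scores = [ci - pi for ci, pi in zip(c, p)]
    return candidates[scores.index(max(scores))]

# Hypothetical sweep over 5..25 topics in steps of five.
ks = [5, 10, 15, 20, 25]
coh = [0.41, 0.48, 0.52, 0.50, 0.47]
per = [-7.2, -7.8, -8.6, -8.1, -7.5]
best_k = pick_num_topics(ks, coh, per)  # 15
```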

Phase 3: Content Analysis

In Phase 3, we adapted the computationally-guided qualitative analysis approach used by Gauthier et al. (2022)26. Our process involved three steps: (1) coding the word groupings generated by the topic models and validating the codes by searching for the words in the datasets to understand their context and meaning, (2) code consolidation, and (3) categorization. In the first step, and as is standard in topic modeling, we first needed to interpret the word groupings. ZAB coded each word grouping (110 topics) by logically interpreting latent word combinations and validated these codes with PKB. Then, to further validate latent words whose meanings were not immediately apparent, we randomly selected 10 user reviews from each app (total=200) in which a given latent word appeared, to understand the context in which the words were used to describe user experiences. This step enabled a deeper understanding of the topic coding and resulted in changes to the codebook.

In the second step, we combined similar codes into consolidated topics. For instance, communication-related codes such as sending messages to providers, contacting specialists, notification messages, and contacting doctor’s office were mapped to the category Communication with Care Team. Also, codes like the appointment feature, keeping track of appointments, appointment reminders, schedule care visits, and virtual appointment visits were mapped to the Appointment category.
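The consolidation step can be recorded as a simple lookup table. The codes below come from the examples in this section; the dictionary structure and the `consolidate` helper are just one illustrative way to implement the mapping.

```python
# Map fine-grained codes (from the topic interpretations) to
# consolidated topics. The codes are from the paper; the dict is
# one possible way to record the mapping.
CONSOLIDATION = {
    "sending messages to providers": "Communication with Care Team",
    "contacting specialists": "Communication with Care Team",
    "notification messages": "Communication with Care Team",
    "contacting doctor's office": "Communication with Care Team",
    "appointment feature": "Appointments",
    "keeping track of appointments": "Appointments",
    "appointment reminders": "Appointments",
    "schedule care visits": "Appointments",
    "virtual appointment visits": "Appointments",
}

def consolidate(code: str) -> str:
    """Return the consolidated topic for a code, or the raw code when
    no consolidated topic has been assigned yet."""
    return CONSOLIDATION.get(code.lower(), code)
```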

In the final step, we categorized the consolidated topics to further aid in understanding and presenting our results.

Results

Table 1 summarizes the ten apps selected for analysis. Based on the app store category, the ten apps were categorized as medical (n=9) and health and fitness (n=1). According to its description, the only health and fitness app (Healow) enabled manual self-tracking, goal setting, and clinical data access.

Table 1.

A list of included apps and their PHR category, App Store Category, and Average Ratings.

App Name | App Vendor | PHR Category | App Store Category | Avg. Rating (N), iOS | Avg. Rating (N), Android
Kaiser Permanente | Kaiser Permanente | Tethered-Comprehensive PHR | Medical | 4.0 (197,000) | 4.7 (143,583)
MyUPMC | UPMC | Tethered-Comprehensive PHR | Medical | 4.8 (531,000) | 4.5 (8,156)
MyChart | EPIC | Tethered-Comprehensive PHR | Medical | 4.5 (697,000) | 4.6 (87,788)
MyHealthONE | HCA Healthcare, Inc. | Tethered-Comprehensive PHR | Medical | 4.6 (125,000) | 4.2 (2,582)
MyQuest for Patients | Quest Diagnostics Incorporated | Tethered-Diagnostics PHR | Medical | 4.6 (293,000) | 4.6 (8,425)
NAVICA | Abbott | Tethered-Diagnostics PHR | Medical | 2.9 (170,00) | 1.8 (798)
FollowMyHealth | Allscripts Healthcare Solutions Inc (now called Veradigm) | Interconnected PHR | Medical | 4.8 (507,000) | 1.8 (143,583)
Healow | eClinicalWorks LLC | Interconnected PHR | Health and Fitness | 3.3 (750,00) | 3.8 (69,697)
HealtheLife | Oracle Cerner Corporation | Interconnected PHR | Medical | 2.2 (474) | 3.5 (1,542)
Meditech MHealth | Meditech | Interconnected PHR | Medical | 2.5 (251) | 3.1 (835)

Among the apps selected, six were categorized as tethered (HCO-dependent) and four as interconnected (HCO-independent interoperable) PHR apps. We further categorized the tethered PHR apps into tethered-comprehensive (n=4) and tethered-diagnostic (n=2) because of the distinct services and features available. Specifically, diagnostic laboratories exclusively offer testing services such as scheduling onsite laboratory tests, purchasing home diagnostic tests and test kits (e.g., COVID-19), and remote proctored testing. As a result, their app features were not as robust as those offered in the tethered-comprehensive and interconnected PHRs, which provided users with a similar range of features. Our findings also revealed that two apps were rated more than one point higher on iOS than on Android, while only one app was rated that much higher on Android.

Computationally-guided Content Analysis Results

Overall, 110 topics were extracted for analysis, as shown in Table 2. For each smartphone OS, we analyzed n=55 topics. However, the number of topics extracted for each PHR category varied by dataset because we prioritized topic quality by balancing the perplexity and coherence scores. The Tethered-Comprehensive PHRs had the fewest topics (n=25), followed by the Interconnected PHRs (n=40), while the Tethered-Diagnostic PHRs had the most topics analyzed (n=45).

Table 2.

Number of word clusters (topics) generated for each operating system (OS) and PHR category.

OS | Interconnected PHR | Tethered-Comprehensive PHR | Tethered-Diagnostic PHR
iOS | 25 | 10 | 20
Android | 15 | 15 | 25

Our analysis revealed that user feedback falls under ten consolidated topics in three categories (i.e., dimensions of the categories), as shown in the two right-most columns of Table 3. The three categories of user feedback are usability, usefulness, and features. We found five important dimensions of usability: app authentication, data entry, user-friendliness, technical issues, and OS & hardware compatibility. There was only one dimension of usefulness – integrated access, which covered multiple features and data access – although reviews discussing specific features often also spoke to their usefulness. However, feature-related consolidated topics sometimes discussed the usability of the feature instead of, or in conjunction with, its usefulness, meaning that these consolidated topics did not fit cleanly into either of the other two categories.

Table 3.

Examples of word clusters from the topic modeling and the associated manual codes developed to describe the word clusters, as well as the final ten consolidated topics.

Sample Word Cluster | Codes | Consolidated Topics | Descriptions | Categories
password, login, screen, covid, account, tests, site, enter, process, new | Enter password to login | App Authentication | Account verification to access electronic health information | Usability
date, enter, info, birth, information, insurance, type, scroll, option, iPhone | Entering personal information | Data Entry | Entering personal information into the application |
user, friendly, navigate, easily, enter, recommend, important, healthcare, downloaded, stuff | User friendly (easy to navigate healthcare) | User Friendly | Easy to use and navigate |
home, screen, page, buttons, click, iPhone, phone, access, crashes, button | Crashing at home page (at opening the app) | Technical Issues | Technical problems that make it difficult to complete tasks |
phone, device, hour, place, site, browser, useless, download, rooted | Rooted phone device | Smartphone OS and Hardware Compatibility | Device and operating system-related challenges |
medical, information, health, access, care, records, place, provider, doctors, team | Access to medical records and care team in one place | Integrated Access to Health Information & Services | Perceptions of multiple digital health services and personal health information access | Usefulness
mychart, office, download, video, waiting, phone, visit, online, amazing, appointment | Virtual appointment visits | Appointments | Making and managing appointments through PHRs | Features
nan, enjoy, recognize, probably, certain, lots, clearly, contacting, specialist, shot | Contacting specialist | Communication With Care Team | Experience with multiple types of communication through PHR |
meds, check, place, difficult, new, labs, test, simple, physician, info | Check lab test result | Medical Record Access | Viewing medical records such as laboratory results and vaccination records |
emed, account, timer, testing, result, phone, proctor, tests, log, kit | Supervised testing (with a proctor) | Self-testing (with a kit) & Proctored Testing | Purchasing and completing home tests |

Figure 2 shows the proportions of each consolidated topic in the six datasets. Technical issues were the most discussed feedback in the usability category. The technical issues discussed in reviews broadly centered on general app performance problems such as features not working, app crashing, screen freezing, and inability to set up accounts. As one user vented in their review, “(app) Won’t let me set up an account. I have tried 4+ times & each time, the app freezes & shuts down. I am supposed to test for work & am not sure how that will happen if the app doesn’t work.” (Tethered-diagnostic PHR). This user is clearly frustrated with the technical issues they encountered at just the first step of using the app. If they cannot set up the account, they likely cannot perform the testing mandated by their employer, which could affect their ability to do their job.

Figure 2.


A clustered heatmap of consolidated topic proportions.

User experience related to usefulness centered on the value patients placed on certain features, such as appointments, secure messaging, medical records, testing, and prescription refills, and on the comprehensiveness of interconnected and tethered-comprehensive PHR apps that include many key features supporting health management in one place. For example, one user wrote: “This app is INCREDIBLE. Such convenience. User-friendly, clear, [and] easy to use. [I] Love the colors! [It is so] smart to get prescriptions mailed to my house--3 months’ worth for only ONE copay! Thanks, Kaiser.” (Tethered-comprehensive PHR). This user associates the usefulness (convenience) of the app with being able to use a feature (prescription refill), attributing this to the usability of the app (user-friendly and easy to use).

Next, we will specifically present the results to answer our research questions by comparing the similarities and differences in dimensions of user experience based on (1) operating system platform and (2) type of PHR.

Similarities and Differences in Dimensions of User Experience Based on Operating System (OS)

We found that user experience appears to differ most based on the OS platform. Android users commented on usability more than iOS users across all mPHR types. Specifically, 84.9% of user reviews for the Android Tethered-Diagnostic PHRs (3,562/4,192) and 60.1% of the user reviews for the Android Interconnected PHRs (54,003/89,805) were about usability. Meanwhile, usability feedback was not as prominent for their iOS counterparts, with only 30.6% of reviews for the iOS Interconnected PHRs (417/1,363) and 17.8% (178/1,000) for the iOS Tethered-Diagnostic PHRs focusing on usability.

A notable OS-related difference involved smartphone security. We encountered topics relating to “rooted devices,” which were mapped to the OS and hardware compatibility consolidated topic. Rooting is specific to Android phones and involves disabling built-in security features, giving users more administrative control over their smartphones32. Some users did not know that their devices were rooted until their attempt to install the PHR app failed, while others were aware that their smartphones were rooted. This topic emerged among specific apps but was not limited to one PHR category. The Android apps preventing installation on rooted devices were MyQuest for Patients (Tethered-Diagnostic PHR) and Healow (Interconnected PHR). For instance, this MyQuest Android user said, “If you have full access to your phone, this app specifically tells you it won’t run on rooted devices. Sure, that’s the developer’s prerogative, but it results in a non-functional app for me.” Many other users expressed similar displeasure at this device restriction, which limits their access to and use of certain mPHRs. This security measure, however, helps keep patients’ EHI secure from potential attacks.

Furthermore, unique hardware differences impacted user experiences. For instance, word clusters included iPad and landscape, and user feedback revealed difficulty with using the apps in landscape mode. One iOS Kaiser Permanente user expressed in their feedback: “This app does not pan to landscape mode in iPad Pro! Why!?! Having to hold my iPad upright in portrait mode is excruciatingly painful, exhausting, and annoying. It makes typing a message to my doctor much more difficult when I have a perfectly good Magic keyboard that I could use and can’t because it cannot be viewed in the landscape.” This comment suggests that the user prefers to access the mPHR through a larger screen so that it is easier to view and to interact with using a keyboard rather than a smartphone touch screen. Another user states, “It’s very difficult to see all the data from a phone, so it should be designed as an iPad app. While it can be used as an iPad app, it does not re-layout the data for iPad viewing” (iOS diagnostic PHR user). Mobile apps are normally designed to switch between portrait and landscape layouts, but it appears that these mPHR apps force users to stick to one. The many reviews like this reveal that users expect flexibility in the viewing experience, and limiting users to the portrait layout mars their experience, as this user suggests: “[I] have to spend the entire [telehealth] visit holding my iPad upright because the app is only available in portrait view.” Hence, these users’ suggestion to app developers is to update the apps to support landscape view.

Finally, we found OS upgrades to be disruptive to patient experiences on both operating systems. For instance, several users reported that after an update, certain features stopped working on their phones. One app reviewer described the experience as “terrible” because “Since the latest update, I can’t download my appointments onto either of my Android devices. I tried emailing the developer, but my emails are ignored.” Other users reported the apps becoming unstable after app updates, like this user who narrated that they “updated the app last week, now, when I sign in, it says “unauthorized access,” and I can’t see my health record anymore. My Android version is 7.0. I can’t revert to the older version of the Cerner app.” This experience is not limited to Android users; iPhone users also experience disruptions due to iOS upgrades, like this user who expressed that “Previous functionality was lost with recent mobile platform upgrade. You [are] no longer able to ‘add to calendar’ your appointments.…” Sadly, in these scenarios, the patients could not fix these issues themselves, and technical support was not readily available; therefore, ready access to healthcare services and information was disrupted. To express their displeasure after an unsuccessful attempt to find technical support, the iOS reviewer stated, “This app jeopardizes my health!” While OS upgrades are inevitable and essential due to technological advancements, these user reviews express the disruption and concern users feel when updates break access to an app they have come to rely on.

Similarities and Differences in Dimensions of User Experience Based on mPHR Types

Interconnected and tethered-comprehensive PHRs provide a similar user experience, as they include many overlapping features and a similar level of comprehensiveness that users appreciate. However, there are differences, as highlighted in Figure 2. For example, for both Android and iOS Tethered-Comprehensive mPHRs, fewer reviews focused on usability issues than for the other types of mPHRs.

In contrast, the experience of using Tethered-Diagnostic mPHRs is quite different, as they are specialized for a single aspect of healthcare (i.e., laboratory testing). Unsurprisingly, Tethered-Diagnostic mPHR users’ comments focused on the key feature of this PHR type: at-home testing. In particular, we found feedback relating to self-testing (with kits, e.g., for COVID-19) and supervised home testing with a proctor using hardware features of the mobile device, such as cameras and microphones. As this user review describes: “…We were traveling back to the US from an international trip and need[ed] a negative test that was proctored within 1 business day of travel. I woke up at 6 am and had no wait time to connect with a proctor and was done 25 minutes later. Make sure to turn your auto lock off and have good service. My results were sent to me seconds after I got off the video call. 10/10 recommend if you need a negative test when traveling back to the US!” (iOS Tethered-Diagnostic). While this user’s experience was positive, enabling them to make an urgent trip, other users’ experiences were not. For example, we found several situations where the mobile device failed, forcing patients to fall back on the web versions of these PHRs on a computer. One user posted: “Navica is not an all-in-one app: you actually perform the test with a proctor through a browser on the Emed site. That site has serious technical problems; this is the fifth test I’ve performed, and each one had audio/visual difficulties. I used an HP laptop (camera issues) and Google Pixel 4a (mic input non-functional).” (Android Tethered-Diagnostic). Findings like this imply that there are still mobile hardware and OS challenges that diagnostic mPHR developers need to be aware of and overcome to make digital healthcare more accessible to patients.

Discussion and Conclusion

The findings from this work present practical implications for the design of mPHR apps that support patient access to medical records and health services. By applying a computationally-guided content analysis of our data set consisting of 112,941 user reviews, we could extract latent topics to identify similarities and differences in user experiences with respect to OS platform and PHR type. The broader categories derived from our analysis, such as app usability and usefulness, are constructs that align with the technology acceptance model (TAM)39,40 and demonstrate the factors impacting the adoption and use of consumer-facing health technologies. Several user reviews demonstrated that the ability to use the app features (e.g., book appointments and communicate with care team) to accomplish health-related tasks easily is a function of the perceived usefulness of PHR apps. Furthermore, our ten consolidated topics largely align with the dimensions of user experience derived by Alramahi and colleagues24,41, who employed a similarly methodical approach to mining electronic word-of-mouth (e-WOM) data derived from the MyChart patient portal on both Android and iOS App Stores.

Our work, however, extends previous literature by presenting notable similarities and differences in patients’ use of different types of mPHRs, including the similarities and differences across the device platforms from which patients install and use the apps. Among the ten apps analyzed, our findings reveal many similarities in user experiences with respect to the features provided by Tethered-Comprehensive and Interconnected mPHRs. In alignment with the literature33,34, our findings show that HCO-independent Interconnected and HCO-provided Tethered-Comprehensive mPHRs provide patients with basic functionality to view medical records and to interact with and access health services. Specifically, Harahap and colleagues34 itemized the basic (e.g., view medical record and view administrative record) and advanced functionality (e.g., medication management, communication, and appointment management) a PHR system should have. Our results showed that, for the most part, interconnected and tethered-comprehensive PHR apps have these functionalities, and, in terms of features, the two PHR types were almost indistinguishable. These similarities mostly surfaced in our analysis of the latent topics generated by the LDA model, signaling user experience with features provided by both PHR categories (e.g., appointments, communication with care team, and medical record access) as well as features not provided (e.g., self-testing and proctored supervised testing). Also, considering Epic MyChart’s market dominance among EHR systems in the United States35,36, our analysis of smaller Interconnected PHR vendors showed that these “smaller” players are also capable of supporting the health needs of patients at smaller HCOs and private practices that cannot afford the cost of larger commercial EHRs like MyChart.

Variations in Device Capabilities Impacting Patient Access to Medical Records

Regular updates and bug fixes are common practices in the software lifecycle, ensuring optimal functionality. However, these updates can disrupt users’ experiences and are often made without prior warning to consumers. Several reviews described this breakdown in accessing medical records, with individuals reporting having to fall back on web versions on their computers or mobile browsers. In fact, many seemed unaware of updates until they tried to use the mPHR apps, only to find that features had changed or malfunctioned, or that the app crashed or froze altogether. Our results show that patients cannot revert to prior versions of the apps that worked on their phones. As it stands, app maintainers appear to expect consumers to know what to do, as this statement on the UPMC app store page suggests: “For the best experience, please update your mobile device and tablet to the latest operating system. If you experience issues, completely close the app, restart your device, and open the application.”38 We found similar messaging for other apps in our study. Furthermore, these version upgrades appear more frequently than users upgrade their mobile phones, effectively rendering older devices obsolete; consequently, many app reviewers reported a downgrade in user experience. These challenges hamper patients’ ability to view medical records and access medical services on the go. They could potentially be mitigated if smartphone users were given the option to choose, in the app store, the app version most compatible with their phone, and to roll back a failed installation when access to health records is disrupted or an update impedes app performance.

Furthermore, certain security mechanisms are implemented to ensure a secure operating environment on smartphone devices, such as device integrity evaluation42. Such mechanisms validate the integrity of a smartphone by detecting rooting or jailbreaking. Our findings suggest that some users would like the autonomy to decide the trade-off between user control and security; at the same time, app developers and vendors appear increasingly serious about ensuring that consumer health information is secure in PHR apps, especially on Android devices, which vary widely in quality and security features. Several users reported being unable to install the PHR apps on their phones, or being unable to get through authentication, once device rooting was detected. We did not observe such reports among iOS device users, likely because Apple’s security architecture already handles device integrity evaluation. This finding is a welcome development, as it suggests that more app vendors are aligning with HIPAA requirements for user authentication43.

Acknowledgments

BKO is supported by the National Institute on Aging T32 Epidemiology of Aging Training Program (T32 AG000262). Thank you to Abhishek Patharkar for his contributions during the development phase of this research study.

References

1. Tang PC, Ash JS, Bates DW, Overhage JM, Sands DZ. Personal health records: Definitions, benefits, and strategies for overcoming barriers to adoption. Journal of the American Medical Informatics Association. 2006;13(2):121–126. doi:10.1197/jamia.M2025.
2. Spil T, Klein R. Personal Health Records Success: Why Google Health Failed and What Does that Mean for Microsoft HealthVault? In: 2014 47th Hawaii International Conference on System Sciences. IEEE; 2014. pp. 2818–2827. doi:10.1109/HICSS.2014.353.
3. Detmer D, Bloomrosen M, Raymond B, Tang P. Integrated personal health records: Transformative tools for consumer-centric care. BMC Medical Informatics and Decision Making. 2008;8. http://www.scopus.com/inward/record.url?eid=2-s2.0-57349188957&partnerID=40&md5=2e115ef9ab74ce4fe4be29843868c65e
4. Lee G, Park JY, Shin SY, et al. Which Users Should Be the Focus of Mobile Personal Health Records? Analysis of User Characteristics Influencing Usage of a Tethered Mobile Personal Health Record. Telemedicine and e-Health. 2016;22(5):419–428. doi:10.1089/tmj.2015.0137.
5. Kirwan M, Duncan MJ, Vandelanotte C, Mummery WK. Using Smartphone Technology to Monitor Physical Activity in the 10,000 Steps Program: A Matched Case–Control Trial. J Med Internet Res. 2012;14(2):e55. doi:10.2196/jmir.1950.
6. Roehrs A, Da Costa CA, Righi RDR, De Oliveira KSF. Personal Health Records: A Systematic Literature Review. J Med Internet Res. 2017;19(1):e13. doi:10.2196/jmir.5876.
7. Reynolds TL, Cobb JG, Steitz BD, Ancker JS, Rosenbloom ST. The State-of-the-Art of Patient Portals: Adapting to External Factors, Addressing Barriers, and Innovating. Appl Clin Inform. 2023;14(4):654–669. doi:10.1055/s-0043-1770901.
8. Haun JN, Lind JD, Shimada SL, et al. Evaluating user experiences of the secure messaging tool on the Veterans Affairs’ patient portal system. Journal of Medical Internet Research. 2014;16(3). doi:10.2196/jmir.2976.
9. Carryer J, Kooienga S. Patients’ experience and understanding of E-portals in rural general practice: an ethnographic exploration. J Prim Health Care. 2017;9(4):262–268. doi:10.1071/HC17016.
10. Chung S, Martinez MC, Frosch DL, Jones VG, Chan AS. Patient-Centric Scheduling With the Implementation of Health Information Technology to Improve the Patient Experience and Access to Care: Retrospective Case-Control Analysis. J Med Internet Res. 2020;22(6):e16451. doi:10.2196/16451.
11. Giardina TD, Modi V, Parrish DE, Singh H. The patient portal and abnormal test results: An exploratory study of patient experiences. Patient Exp J. 2015;2(1):148–154.
12. Henshaw D, Okawa G, Ching K, Garrido T, Qian H, Tsai J. Access to Radiology Reports via an Online Patient Portal: Experiences of Referring Physicians and Patients. Journal of the American College of Radiology. 2015;12(6):582–586.e1. doi:10.1016/j.jacr.2015.01.015.
13. Nazi KM, Turvey CL, Klein DM, Hogan TP. A Decade of Veteran Voices: Examining Patient Portal Enhancements Through the Lens of User-Centered Design. J Med Internet Res. 2018;20(7):e10413. doi:10.2196/10413.
14. Reynolds TL, Ali N, Zheng K. What Do Patients and Caregivers Want? A Systematic Review of User Suggestions to Improve Patient Portals. AMIA Annual Symposium Proceedings. 2020. PMID: 33936483.
15. Keselman A, Slaughter L, Smith CA, et al. Towards consumer-friendly PHRs: patients’ experience with reviewing their health records. AMIA Annual Symposium Proceedings. 2007:399–403.
16. Mohsen K, Kildea J, Lambert SD, Laizner AM. Exploring Cancer Patients’ Perceptions of Accessing and Experience with Using the Educational Material in the Opal Patient Portal. Support Care Cancer. 2021;29(8):4365–4374. doi:10.1007/s00520-020-05900-4.
17. Alturkistani A, Qavi A, Anyanwu PE, Greenfield G, Greaves F, Costelloe C. Patient Portal Functionalities and Patient Outcomes Among Patients With Diabetes: Systematic Review. J Med Internet Res. 2020;22(9):e18976. doi:10.2196/18976.
18. Ammenwerth E, Hoerbst A, Lannig S, Mueller G, Siebert U, Schnell-Inderst P. Effects of Adult Patient Portals on Patient Empowerment and Health-Related Outcomes: A Systematic Review. Stud Health Technol Inform. 2019;264:1106–1110. doi:10.3233/SHTI190397.
19. Apter AJ, Bryant-Stephens T, Perez L, et al. Patient Portal Usage and Outcomes Among Adult Patients with Uncontrolled Asthma. J Allergy Clin Immunol Pract. 2020;8(3):965–970.e4. doi:10.1016/j.jaip.2019.09.034.
20. Carini E, Villani L, Pezzullo AM, et al. The impact of digital patient portals on health outcomes, system efficiency, and patient attitudes: Updated systematic literature review. Journal of Medical Internet Research. 2021;23(9). doi:10.2196/26189.
21. Dameff C, Clay B, Longhurst CA. Personal Health Records: More Promising in the Smartphone Era? JAMA. 2019;321(4):339. doi:10.1001/jama.2018.20434.
22. Rolnick J, Ward R, Tait G, Patel N. Early Adopters of Apple Health Records at a Large Academic Medical Center: Cross-sectional Survey of Users. J Med Internet Res. 2022;24(1):e29367. doi:10.2196/29367.
23. Ahmed A, Nasralah T, Wahbeh A, Noteboom C. Patients’ Characteristics Effecting the Use of a MyChart Patient Portal. 2019. doi:10.24251/HICSS.2019.482.
24. Al-Ramahi M, Noteboom C. Mining User-generated Content of Mobile Patient Portal: Dimensions of User Experience. Trans Soc Comput. 2020;3(3):1–24. doi:10.1145/3394831.
25. Reynolds TL, Kaligotla M, Zheng K. Investigating the interoperable health app ecosystem at the start of the 21st Century Cures Act. In: AMIA Annual Symposium Proceedings. Vol. 2022. American Medical Informatics Association; 2022. p. 942.
26. Gauthier RP, Costello MJ, Wallace JR. “I Will Not Drink With You Today”: A Topic-Guided Thematic Analysis of Addiction Recovery on Reddit. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems; 2022 Apr 27. pp. 1–17.
27. Dey K. Patient Portal Market Research Report [Internet]. 2024 [updated 2024 Mar; cited 2024 Mar 17]. Available from: https://www.marketresearchfuture.com/reports/patient-portal-market-10748
28. Newman D, Bonilla EV, Buntine W. Improving topic coherence with regularized topic models. Advances in Neural Information Processing Systems. 2011;24.
29. Kim M, Lee H, Lee SH, Kim JH. What rather than how: A DMR topic modeling analysis of news coverage on the British Museum. In: 2023 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE; 2023. pp. 167–173. doi:10.1109/BigComp57234.2023.00036.
30. Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. Journal of Machine Learning Research. 2003;3:993–1022.
31. Erlingsson C, Brysiewicz P. A hands-on guide to doing content analysis. African Journal of Emergency Medicine. 2017;7(3):93–99. doi:10.1016/j.afjem.2017.08.001.
32. Vears DF, Gillam L. Inductive content analysis: A guide for beginning qualitative researchers. Focus on Health Professional Education: A Multi-Professional Journal. 2022;23(1):111–127.
33. Okta. Rooted Devices: Definition, Benefits & Security Risks [Internet]. www.okta.com [updated 2023 Nov; cited 2024 Mar 16]. Available from: https://www.okta.com/identity-101/rooted-device/#:~:text=A%20rooted%20device%20allows%20a
34. Harahap NC, Handayani PW, Hidayanto AN. Functionalities and issues in the implementation of personal health records: systematic review. Journal of Medical Internet Research. 2021;23(7):e26236. doi:10.2196/26236.
35. Health Level Seven International. HL7 Personal Health Record System Functional Model, Release 1. 2014. URL: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=88
36. Koppel R, Lehmann CU. Implications of an emerging EHR monoculture for hospitals and healthcare systems. Journal of the American Medical Informatics Association. 2015;22(2):465–471. doi:10.1136/amiajnl-2014-003023.
37. Colicchio TK, Cimino JJ, Del Fiol G. Unintended consequences of nationwide electronic health record adoption: challenges and opportunities in the post-meaningful use era. Journal of Medical Internet Research. 2019;21(6):e13313. doi:10.2196/13313.
38. https://apps.apple.com/us/app/myupmc/id1365606965?ls=1
39. Tavares J, Oliveira T. Electronic health record patient portal adoption by health care consumers: an acceptance model and survey. Journal of Medical Internet Research. 2016;18(3):e5069. doi:10.2196/jmir.5069.
40. Kim J, Park HA. Development of a health information technology acceptance model using consumers’ health behavior intention. Journal of Medical Internet Research. 2012;14(5):e2143. doi:10.2196/jmir.2143.
41. Al-Ramahi M, Noteboom C. A systematic analysis of patient portals adoption, acceptance and usage: The trajectory for triple aim?
42. Spychalski D, Rode O, Ritthaler M, Raptis G. Conceptual design and analysis of a mobile digital identity for eHealth applications. In: 2021 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI). IEEE; 2021. pp. 1–4.
43. Baldwin JL, Singh H, Sittig DF, Giardina TD. Patient portals and health apps: Pitfalls, promises, and what one might learn from the other. Healthcare. 2017;5(3):81–85.
