Abstract
Introduction:
Use of Integrated Health Information Systems (IHIS) for the provision of healthcare services benefits both healthcare professionals and patients, while requiring continuous evaluation and upgrading to fully support its role.
Aim:
The main purpose of the study was to develop an evaluation framework for hospitals utilizing IHIS, within the three main areas identified as Human factor, Technology and Organization.
Material and methods:
The questionnaire consisted of 43 questions: 17 related to the categories procedures, system quality and satisfaction, 25 related to the categories safety and collaboration, and 1 related to accessibility of the system (within the category system quality). Three open questions were added to evaluate users’ perceptions of what was needed to improve health services in their respective hospitals for all three variables evaluated; these were included to allow participants to express their opinions in more detail. A database was developed, and the data were processed and analyzed.
Results:
Factor analysis formed 5 categories for the evaluation framework. Cronbach’s alpha coefficient was above 0.85 in all categories.
Conclusion:
Evaluation frameworks can be designed, developed and implemented using different methodologies. For an evaluation framework to be effective, it should be designed and implemented based on the aims and purpose of the research and the specific needs of the particular healthcare setting or hospital. Considering the categories satisfaction, collaboration, safety, system quality and procedures, and by using Likert-scale and open questions, DIPSA can provide a holistic picture of the IHIS of any hospital.
Keywords: Health Information Systems, Information Technology, Hospital Information Systems, DIPSA evaluation framework
1. INTRODUCTION
An integrated health information system (IHIS) can improve health care service provision by organizing, collecting, processing and sharing information electronically within an organization (1-4). A well-implemented IHIS can be more efficient by reducing the time needed to gather important information and making it available to healthcare professionals (5, 6), reducing errors in the clinical setting, providing support to healthcare professionals, improving the management of information (2, 3), and improving patients’ access to healthcare, which results in both social and economic benefits (2). If not used correctly, an IHIS can negatively affect the provision of healthcare services; this is usually related to inherent problems of the system (errors, crashes, software or other limitations that affect the tasks of the users) (5, 7), or to inappropriate training and support of the personnel, which can lead to absent or wrong information in decision-making, thereby impacting the patient’s general health (8). The high demand for safe, high-quality and cost-effective provision of health care is now a challenge for every healthcare professional and institution (9, 10). Therefore, not only the implementation but also the continuous evaluation and upgrading of existing IHIS is crucial to meet these demands (4).
According to Ammenwerth et al. (11), “Evaluation is the act of measuring or exploring properties of a health information system (in planning, development, implementation, or operation), the result of which informs a decision to be made concerning that system in a specific context”. There is no single ideal way of evaluating healthcare systems. Evaluation methods can be complex, single or combined, and involve many variables (12). Evaluation frameworks mainly describe or measure features or categories of HIS that guide the improvement of the system (13); they can be applied at different time points during the development, implementation and post-implementation of the IHIS (14), taking into consideration the different stakeholders that will interact with or benefit from the system (15). Evaluations can be subjective, based on personal assessments, or objective, based on systematic assessments, and the implementation of any evaluation framework should be based on the needs of the organization (16). Other factors that can affect an evaluation are the context of the evaluation (use, communication, effectiveness or organization), the method of the evaluation (qualitative, quantitative or both), the different users/stakeholders, and the purpose of the evaluation (17), as well as the investment in evidence-based assessment (15). An evaluation can be formative, implemented while the system is being created or installed, or summative, focusing on the effectiveness of an already installed system (14). When an evaluation framework is implemented correctly, all stakeholders benefit from safer and more timely, effective and efficient provision of health care services.
2. AIM
Our main objective was to develop an evaluation framework for hospitals utilizing IHIS, within the three main areas identified as Human factor, Technology and Organization, which would help identify any existing deficiencies in the system.
3. MATERIAL AND METHODS
Sample: In two public hospitals (General Hospital of Nicosia and General Hospital of Amochostos) in the Republic of Cyprus, three hundred and nine (309) of the 1503 healthcare professionals (including doctors, nurses and other paramedical professionals) participated in the study. The selected sample is representative of the general population with a confidence level of 95% and a margin of error of 5% (18). For the selection of the sample, stratified random sampling was used based on the profession and the hospital of each participant.
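The reported sample size can be approximately reproduced with a standard Cochran-style calculation with finite-population correction, which is what calculators such as the one cited (18) typically implement. The following is a minimal illustrative sketch, not the authors’ procedure; rounding conventions differ between calculators, so it yields 307 rather than the 309 reported.

```python
import math

def sample_size(population: int, z: float = 1.96,
                margin: float = 0.05, p: float = 0.5) -> int:
    """Minimum sample size for estimating a proportion at a given
    confidence level, with finite-population correction (Cochran)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate (384.16)
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

print(sample_size(1503))  # -> 307 for N = 1503, 95% confidence, 5% margin
```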
Questionnaire: A cross-sectional study was conducted with a questionnaire measuring key aspects of the following variables: technology, human factor, and organization. The questionnaire consisted of 43 questions: 17 questions (related to the categories procedures, system quality and satisfaction) selected from Otieno et al. (19), 25 questions (related to the categories safety and collaboration) from Viitanen et al. (20), and 1 question related to accessibility of the system (within the category system quality) based on DeLone and McLean’s IS success model (21). Three open questions were added to evaluate users’ perceptions of what was needed to improve health services in their respective hospitals for all 3 variables evaluated, in case any important information had been missed by the Likert-scale questions. The English questionnaire was translated into Greek through bilingual translation in both directions. For better adaptation of the questionnaire, a pilot study was conducted with random sampling, in which 20 participants who use IHIS completed the questionnaire (22). Anonymity and confidentiality of the questionnaire data were strictly maintained. Permission for the distribution of the questionnaire was obtained from the National Bioethics Committee of Cyprus and the Ministry of Health of Cyprus, in accordance with the guidelines of the Personal Data Protection Bureau of Cyprus.
Statistical analysis
Data were processed and stored in SPSS version 24. Individual questions were assigned values according to the responses given by the participants on a 5-point Likert scale, where 1 was the lowest and 5 the highest value. The reliability of the questionnaire was then tested using Cronbach’s alpha coefficient. The final questionnaire consisted of 42 questions, as 1 question (related to the category procedures) deemed neutral was excluded. The 42 questions were then subjected to factor analysis, based on principal component analysis (PCA), to identify interpretable and meaningful factors within the questionnaire. Varimax rotation was applied with eigenvalues ≥ 1, and questions were assigned to groups if the loading factor was ≥ 0.41. A category was formed only if it contained at least 5 questions, to be more representative of the category. Each category was then tested with Cronbach’s alpha coefficient to measure its reliability. The usefulness and appropriateness of factor analysis for our data were tested using Bartlett’s test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. Bartlett’s test was conducted to evaluate the relationships between the questions, and therefore the suitability of the categories identified by the factor analysis (threshold p ≤ 0.05). The KMO measure evaluated the adequacy of applying factor analysis to our variables (threshold value ≥ 0.50).
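The authors ran this pipeline in SPSS; purely as an illustration, an equivalent analysis can be sketched in Python with the factor_analyzer package. The file name responses.csv and the column layout (one row per respondent, one Likert item per column) are assumptions for the example, not part of the study.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data layout: one row per respondent, one Likert item (1-5) per column.
df = pd.read_csv("responses.csv")

print("alpha, full questionnaire:", round(cronbach_alpha(df), 2))

chi2, p = calculate_bartlett_sphericity(df)   # suitable for factoring if p <= 0.05
kmo_items, kmo_overall = calculate_kmo(df)    # sampling adequate if KMO >= 0.50
print(f"Bartlett p = {p:.4f}, overall KMO = {kmo_overall:.3f}")

# Principal-component extraction with varimax rotation, as in the paper;
# 10 components mirrors the 10 potential categories reported in the Results.
fa = FactorAnalyzer(n_factors=10, rotation="varimax", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=range(1, 11))  # questions x categories, as in Table 1
```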
4. RESULTS
Questionnaire: The initial pilot study did not identify any issues with the questionnaire that required changes or remediation (22); therefore, the questionnaire with 43 questions was distributed to the healthcare professionals of the two public hospitals in Cyprus. In addition to the questionnaire, demographic data were also obtained (not shown). The reliability of the questionnaire was confirmed by a Cronbach’s alpha coefficient of 0.95, above the threshold of 0.80 considered acceptable for research purposes (23). As mentioned previously, 1 question deemed neutral was excluded from the analysis.
Factor analysis: The usefulness of factor analysis for evaluating our dataset was confirmed by both Bartlett’s test of sphericity (p < 0.01) and the Kaiser-Meyer-Olkin test (0.835).
Following factor analysis, the original 7 categories (22) initially defined by the 42 questions were reduced to 5 categories, represented by 27 questions. Factor analysis identified 10 potential categories, but only 5 of them fulfilled the defined criteria for a category (at least 5 questions, each with a loading factor ≥ 0.41). These categories referred to Satisfaction (Category 1), System Quality (Category 2), Collaboration (Category 3), Procedures (Category 4) and Safety (Category 5). Table 1 shows the 27 of the 42 questions included in categories 1-5, with the loading factor values of each question. When a question had a loading factor ≥ 0.41 in 2 or more categories, it was assigned to the category with the highest loading.
Table 1. Factor analysis: rotated component matrix (columns 1-10 are the categories identified).

Question | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---
1 | .271 | .164 | .804 | .183 | ||||||
2 | .183 | .175 | .831 | .210 | .188 | |||||
3 | .259 | .114 | .180 | .758 | .256 | .139 | -.105 | |||
4 | .261 | .350 | .691 | .103 | .106 | |||||
5 | .108 | .112 | .226 | .672 | .451 | -.173 | ||||
6 | .134 | .301 | .740 | .312 | .228 | .187 | ||||
7 | .170 | .261 | .834 | .108 | .141 | |||||
8 | .222 | .884 | .114 | .138 | .117 | |||||
9 | .203 | .108 | .781 | .227 | .205 | -.118 | .109 | |||
10 | .339 | -.118 | .724 | .120 | .239 | .329 | ||||
19 | .379 | .149 | .221 | .123 | .582 | .414 | ||||
20 | .477 | .101 | .336 | .455 | .292 | .222 | .161 | |||
21 | .257 | .209 | .710 | .139 | .208 | .153 | ||||
22 | .462 | .653 | .312 | .130 | -.111 | |||||
24 | .174 | .177 | .120 | .204 | .410 | .202 | .578 | .135 | -.245 | |
29 | .104 | .374 | .190 | .210 | .535 | .351 | -.109 | |||
30 | .679 | .258 | .258 | .136 | .114 | .324 | -.252 | |||
31 | .755 | .212 | .251 | .309 | .166 | .118 | -.141 | |||
32 | .782 | .280 | .138 | .198 | .138 | .118 | ||||
33 | .593 | .330 | .229 | .343 | .238 | .150 | ||||
34 | .510 | .433 | .313 | .372 | .175 | .168 | .131 | .109 | ||
35 | .380 | .622 | .181 | .270 | .349 | .123 | ||||
36 | .236 | .706 | .191 | .290 | .120 | .153 | .158 | |||
39 | .529 | .585 | .193 | .142 | .217 | .148 | ||||
40 | .724 | .146 | .172 | .306 | -.202 | |||||
41 | .208 | .743 | .184 | .231 | .129 | .158 | .121 | |||
42 | .259 | .588 | .156 | .420 | .214 | .172 | .139 | -.163 | ||
17 | .342 | .195 | .219 | .360 | .305 | .295 | .362 | .137 | -.264 | |
13 | .410 | .332 | .238 | .369 | .434 | .157 | .144 | |||
27 | .380 | .182 | .150 | .731 | ||||||
37 | .517 | .469 | .331 | .123 | .100 | .333 | .114 | |||
38 | .482 | .457 | .157 | .160 | .209 | .394 | -.181 | .223 | .240 | |
43 | .136 | .128 | .247 | .221 | .158 | .700 | .193 | |||
11 | .158 | .108 | .106 | .248 | .162 | .793 | .104 | |||
12 | .261 | .223 | .141 | .224 | .696 | .139 | .209 | |||
23 | .160 | .183 | .131 | .102 | .155 | .771 | .107 | |||
15 | .152 | .194 | .877 | |||||||
25 | -.532 | -.180 | -.310 | -.196 | .266 | .103 | ||||
26 | .139 | -.149 | -.247 | .218 | .149 | -.101 | .752 | -.191 | ||
28 | -.320 | .719 | .341 | |||||||
14 | .358 | .316 | .150 | .230 | .449 | -.344 | -.178 | |||
16 | .426 | .228 | .131 | .160 | .545 | .323 | .144 |
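As an illustration of the selection rules just described, the sketch below assigns each question to a category from a loadings table shaped like Table 1 (rows labelled by question, columns by component, blank cells read as NaN). It is a minimal reconstruction of the stated rules, not the authors’ SPSS procedure.

```python
import pandas as pd

def assign_items(loadings: pd.DataFrame, cutoff: float = 0.41,
                 min_items: int = 5) -> dict:
    """Apply the paper's grouping rules: each question joins the category
    where it loads highest, provided that loading is >= cutoff; categories
    retaining fewer than min_items questions are dropped."""
    best = loadings.idxmax(axis=1)               # highest-loading category per question
    kept = best[loadings.max(axis=1) >= cutoff]  # drop questions loading < 0.41 everywhere
    groups: dict = {}
    for question, category in kept.items():
        groups.setdefault(category, []).append(question)
    return {c: qs for c, qs in groups.items() if len(qs) >= min_items}
```

Note that using the raw (signed) maximum reproduces the treatment of question 25 in Table 1, whose largest positive loading (.266) falls below the 0.41 cutoff despite a larger negative loading, so it is excluded.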
The 5 categories shown in Table 2 measured different aspects of the IHIS. The category Satisfaction measured whether users of the IHIS were satisfied with the system, considering effort, quality and performance. System Quality was measured considering the availability, reliability, access and quality of information of the system. The category Collaboration measured whether the system supported collaboration among healthcare professionals. The category Procedures examined the daily procedures of healthcare professionals, and the last category related to the Safety of the system, which benefits patient safety and prevents errors (Table 2). All categories fell within 3 factors: System Quality and Safety referred to information systems and were therefore included in the factor Technology; Collaboration and Satisfaction concerned healthcare professionals and were thus grouped under Human Factor; finally, Procedures related to the factor Organization. Cronbach’s alpha coefficient for all categories was between 0.855 and 0.939, indicating that they were all reliable and acceptable for research purposes (23).
Table 2. Features of categories.
Factor | Name of the category | Number of questions | Cronbach’s alpha coefficient
---|---|---|---
Technology | System quality | 6 | 0.913
Technology | Safety | 6 | 0.855
Human Factor | Collaboration | 5 | 0.939
Human Factor | Satisfaction | 5 | 0.916
Organization | Procedures | 5 | 0.898
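Continuing the illustrative sketches above, the per-category reliabilities in Table 2 correspond to computing Cronbach’s alpha separately on each group of questions. The snippet reuses the hypothetical cronbach_alpha and assign_items helpers and the df and loadings objects defined earlier.

```python
# Reliability per category: run Cronbach's alpha on each question group.
groups = assign_items(loadings)                  # {category: [question labels]}
for category, questions in groups.items():
    alpha = cronbach_alpha(df[questions])        # alpha on that category's items only
    print(f"category {category}: alpha = {alpha:.3f}")  # paper reports 0.855-0.939
```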
Finally, these five categories together formed the evaluation framework DIPSA, where every letter matches one of the five categories in the Greek language: the letter D derives from the Greek word “Διαδικασίες”, which means procedures; the letter I from “Ικανοποίηση”, which means satisfaction; the letter P from “Ποιότητα”, which means quality (of the systems); the letter S from “Συνεργασία”, which means collaboration; and the letter A from “Ασφάλεια”, which means safety.
5. DISCUSSION
The main purpose of the study was to develop an evaluation framework for hospitals that utilize IHIS. The three broad factors, namely Human, Technology and Organization (22, 24), were further divided into categories to help identify any existing deficiencies in the system. The main factors were similar to those used in the HOT-fit framework of Yusof et al. (24), and two of the validated categories (system quality and user satisfaction) were the same as in the D&M IS Success Model of DeLone and McLean (21).
The evaluation framework took into consideration the questionnaires developed by Otieno et al. (19) and Viitanen et al. (20). It covers user satisfaction, which directly and proportionally affects the use of IHIS (25); collaboration (communication and support), which impacts the efficiency and effectiveness of healthcare provision; and system quality (access, ease of use, ease of learning, response time, reliability and flexibility) (21, 24), which positively affects the expectations and needs of healthcare professionals (14) and benefits the quality of the work environment and job performance (21). IHIS can also improve safety in healthcare by providing clear documentation and precise information that can be used in decision-making (26). Being able to rely on the systems (reliability) positively affects the quality of the systems (20). Alfarraj et al. (27) suggested that technical issues are important in an evaluation framework; DIPSA addresses these within the category Safety. Lastly, Viitanen et al. (20) assessed how compatible the system was with the daily tasks of healthcare professionals, which was included in the evaluation model as the category Procedures. Considering the uniqueness and specific needs of a hospital setting, the development of evaluation frameworks can be very complex. Based on the literature, the DIPSA evaluation framework covered the most important areas, although we cannot exclude that there might be additional factors that were not detected, or further important factors (within the additional 5 potential categories) that were not fully supported by our analysis, and that might require further updating and refining.
Comparison of the frameworks: The DIPSA evaluation framework measured the categories satisfaction, collaboration, safety, system quality and procedures. Several evaluation frameworks measured the same or similar categories (24, 28-31). Regarding the time frame of the evaluations, some frameworks were implemented during the formative phase of the system (28, 29, 31, 32) and some during the summative phase (31, 32). Several evaluation frameworks used questionnaires (29-33), but fewer were implemented using observation (24, 28, 32) or interviews (24, 28, 33).
Implementing an evaluation framework in IHIS can be a very complex process (12, 15, 17). Several evaluation frameworks from the literature were studied based on their purpose, methods and results, and were compared with the DIPSA evaluation framework. Westbrook et al. (32) developed and implemented the Multimethod Evaluation Model (MEM) on an electronic medication management system. The similarities with DIPSA were that both frameworks focused on the quality and safety of the HIS and both were addressed to doctors, nurses and allied health professionals, but they differed in the methodology used. MEM consisted of a questionnaire, a gadget and live observations, whereas DIPSA used only a questionnaire. DIPSA evaluated the whole system of information present in the IHIS, but only in the summative phase, whereas MEM was implemented in both the formative and the summative phase of the systems. A limitation of the DIPSA evaluation framework was that the IHIS within the hospitals were already implemented, preventing an evaluation during the formative phase.
Another evaluation framework was the Performance of Routine Information System Management (PRISM), measuring similar categories, namely technical, organizational and behavioral factors, with the purpose of strengthening the systems. The major difference observed in PRISM related to the methodology used, which consisted of interviews and observation of the participants. PRISM was used only in the formative phase of the systems, and no detailed information on the characteristics of the participants was provided. However, PRISM has been used in developing countries, aiming to increase transparency and accountability, and showed improvement in the quality of the systems and the use of information. PRISM has also been used to develop courses and training manuals, and has been taught in universities (28).
A two-phased mixed-methods evaluation framework developed by Boland et al. (33) included the categories ease of use, function integration, anxiety during use and effect on workflow. It was used only in the summative phase of the system and could meet the needs of the users in those specific areas. However, this framework was very restrictive, including only expert users. Very similar to DIPSA is the Health IT Usability Evaluation Model (Health-ITUEM), with similar categories including quality of the system, safety and procedures. Its categories were also analyzed using factor analysis but, unlike DIPSA, it was used in the formative phase of the systems and addressed only nurses (29). Another framework restricted to nurses, which included the categories satisfaction, communication, IT support and usability, was the realistic evaluation framework of Oroviogoicoechea et al. (30). The Clinical Information Systems Success Model (CISSM) evaluated the categories quality of the system and satisfaction of nurses, and was implemented only during the formative phase of the system (31).
The Human, Organization, and Technology-Fit (HOT-fit) evaluation framework included the same major factors as DIPSA (Technology, Human factor, and Organization) but not the same subcategories. This is an indication that different clinical settings can result in different evaluation frameworks, even though Yusof et al. (24) argued that this framework could potentially be comprehensive for any system in general. The tools used were observation, interviews and document analysis, applied only in the formative phase of the systems. Evaluation frameworks can be designed, developed and implemented using different methodologies. For an evaluation framework to be effective, it should be designed and implemented based on the aims and purpose of the research and the specific needs of the particular healthcare setting or hospital.
6. CONCLUSION
Considering the categories satisfaction, collaboration, safety, system quality and procedures, and by using Likert-scale and open questions, DIPSA can provide a holistic picture of the IHIS of any hospital. However, as with any evaluation framework, DIPSA should be continuously updated to keep improving the provision of healthcare services.
Financial support and sponsorship:
None.
Conflicts of interest:
There are no conflicts of interest.
Author’s Contribution:
A.S. was responsible for the acquisition and analysis of the data for the work. A.S., J.M., Z.R. and E.N.Y. gave substantial contributions to the conception or design of the work, the analysis and interpretation of the data, and revising it critically for important intellectual content.
REFERENCES
- 1. Gotham IJ, Le LH, Sottolano DL, Schmit KJ. An informatics framework for public health information systems: a case study on how an informatics structure for integrated information systems provides benefit in supporting a statewide response to a public health emergency. Information Systems and e-Business Management. 2015 Nov;13(4):713–749.
- 2. Sligo J, Gauld R, Roberts V, Villa L. A literature review for large-scale health information system project planning, implementation and evaluation. Int J Med Inform. 2017;97(1):86–97. doi: 10.1016/j.ijmedinf.2016.09.007.
- 3. Ahmadian L, Dorosti N, Khajouei R, Gohari SH. Challenges of using Hospital Information Systems by nurses: comparing academic and non-academic hospitals. Electronic Physician. 2017;9(6):4625–4630. doi: 10.19082/4625.
- 4. Mantas J, Hasman A. Studies in Health Technology and Informatics. Amsterdam: IOS Press; 2002.
- 5. Mason C, Leong T. Clinical information systems in the intensive care unit. Anaesthesia & Intensive Care Medicine. 2016 Jan;17(1):13–16.
- 6. Wyatt JC. Evidence-based Health Informatics and the Scientific Development of the Field. Stud Health Technol Inform. 2016;222:14–24.
- 7. Rigby M, Ammenwerth E. The Need for Evidence in Health Informatics. Stud Health Technol Inform. 2016;222:3–13.
- 8. Magrabi F, Ong M, Coiera E. Health IT for Patient Safety and Improving the Safety of Health IT. Stud Health Technol Inform. 2016;222:25–36.
- 9. Ali M, Cornford T, Klecun E. Exploring control in health information systems implementation. Stud Health Technol Inform. 2010;160(Pt 1):681–685.
- 10. Piscotty RJ, Kalisch B, Gracey-Thomas A. Impact of Healthcare Information Technology on Nursing Practice. Journal of Nursing Scholarship. 2015 Jul;47(4):287–293. doi: 10.1111/jnu.12138.
- 11. Ammenwerth E, Brender J, Nykänen P, Prokosch H, Rigby M, Talmon J. Visions and strategies to improve evaluation of health information systems: Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. Int J Med Inform. 2004 Jun;73(6):479–491. doi: 10.1016/j.ijmedinf.2004.04.004.
- 12. Georgiou A, Westbrook JI, Braithwaite J. An empirically-derived approach for investigating Health Information Technology: the Elementally Entangled Organisational Communication (EEOC) framework. BMC Med Inform Decis Mak. 2012;12:68. doi: 10.1186/1472-6947-12-68.
- 13. Craven CK, Doebbeling B, Furniss D, Holden RJ, Lau F, Novak LL. Evidence-based Health Informatics Frameworks for Applied Uses. Stud Health Technol Inform. 2016;222:77–89.
- 14. Yusof MM, Papazafeiropoulou A, Paul RJ, Stergioulas LK. Investigating evaluation frameworks for health information systems. Int J Med Inform. 2008 Jun;77(6):377–385. doi: 10.1016/j.ijmedinf.2007.08.004.
- 15. Ammenwerth E, Rigby M. Preface. In: Ammenwerth E, Rigby M, editors. Evidence-Based Health Informatics: Promoting Safety and Efficiency Through Scientific Methods and Ethical Policy. Vol. 222. Netherlands: IOS Press; 2016. pp. vii–xi.
- 16. Brender McNair J. Theoretical Basis of Health IT Evaluation. Stud Health Technol Inform. 2016;222:39–52.
- 17. Andargoli AE, Scheepers H, Rajendran D, Sohal A. Health information systems evaluation frameworks: A systematic review. Int J Med Inform. 2017 Jan;97:195–209. doi: 10.1016/j.ijmedinf.2016.10.008.
- 18. SurveyMonkey. Sample Size Calculator. 2017.
- 19. Otieno OG, Toyama H, Asonuma M, Kanai-Pak M, Naitoh K. Nurses’ views on the use, quality and user satisfaction with electronic medical records: questionnaire development. J Adv Nurs. 2007;60(2):209–219. doi: 10.1111/j.1365-2648.2007.04384.x.
- 20. Viitanen J, Hyppönen H, Lääveri T, Vänskä J, Reponen J, Winblad I. National questionnaire study on clinical ICT systems proofs: Physicians suffer from poor usability. Int J Med Inform. 2011;80(10):708–725. doi: 10.1016/j.ijmedinf.2011.06.010.
- 21. DeLone WH, McLean ER. The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. J Manage Inf Syst. 2003;19(4):9–30.
- 22. Stylianides A, Mantas J, Roupa Z, Yamasaki EN. Evaluation of an Integrated Health Information System (HIS) in a Public Hospital in Cyprus: A Pilot Study. Stud Health Technol Inform. 2017;238:44–47.
- 23. Viladrich C, Angulo-Brunet A, Doval E. A journey around alpha and omega to estimate internal consistency reliability. Anales de Psicología. 2017;33(3):755–782.
- 24. Yusof MM, Kuljis J, Papazafeiropoulou A, Stergioulas LK. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit). Int J Med Inform. 2008;77(6):386–398. doi: 10.1016/j.ijmedinf.2007.08.011.
- 25. Eugene Y, Fulham M, Feng DD. Electronic Medical Records. Technological Fundamentals. 2008: 29–49.
- 26. Seidling HM, Bates DW. Evaluating the Impact of Health IT on Medication Safety. In: Ammenwerth E, Rigby M, editors. Evidence-Based Health Informatics: Promoting Safety and Efficiency Through Scientific Methods and Ethical Policy. Netherlands: IOS Press; 2016. p. 195.
- 27. Alfarraj O, Abugabah A. Extending Information System Models to the Health Care Context: An Empirical Study and Experience from Developing Countries. International Arab Journal of Information Technology (IAJIT). 2017;14(2):159–167.
- 28. Aqil A, Lippeveld T, Hozumi D. PRISM framework: a paradigm shift for designing, strengthening and evaluating routine health information systems. Health Policy Plan. 2009;24(3):217–228. doi: 10.1093/heapol/czp010.
- 29. Yen PY. Health Information Technology Usability Evaluation: Methods, Models, and Measures [doctoral dissertation]. Columbia University; 2010.
- 30. Oroviogoicoechea C, Watson R. A quantitative analysis of the impact of a computerised information system on nurses’ clinical practice using a realistic evaluation framework. Int J Med Inform. 2009 Dec;78(12):839–849. doi: 10.1016/j.ijmedinf.2009.08.008.
- 31. Garcia-Smith D, Effken JA. Development and initial evaluation of the Clinical Information Systems Success Model (CISSM). Int J Med Inform. 2013 Jun;82(6):539–552. doi: 10.1016/j.ijmedinf.2013.01.011.
- 32. Westbrook JI, Braithwaite J, Georgiou A, Ampt A, Creswick N, Coiera E. Multimethod Evaluation of Information and Communication Technologies in Health in the Context of Wicked Problems and Sociotechnical Theory. J Am Med Inform Assoc. 2007;14(6):746–754. doi: 10.1197/jamia.M2462.
- 33. Boland MR, Rusanov A, So Y, Lopez-Jimenez C, Busacca L, Steinman RC. From expert-derived user needs to user-perceived ease of use and usefulness: A two-phase mixed-methods evaluation framework. J Biomed Inform. 2014 Dec;52:141–150. doi: 10.1016/j.jbi.2013.12.004.