Front Digit Health. 2024 Mar 8;6:1332707. doi: 10.3389/fdgth.2024.1332707

Table 4. Overview of the identified ethical barriers and facilitators.

Consent (n = 16 studies)

Barriers:
- Ethical concerns arise when patients are compelled to use a specific eHealth technology due to a lack of alternatives (47, 56)
- Deleting data from users who have withdrawn consent may be impossible, especially when the data have been anonymized (43, 63)
- Patient data may be shared with other providers or third parties without the patient's explicit consent (47)
- Ensuring the acceptance and consent of both patients and medical professionals is challenging (70)
- Patients may lack the ability to provide consent in emergency situations (45)
- Privacy concerns can arise when parents have access to their children's records (45)
- Concerns arise about obtaining patient consent for various treatment options (59)
- The process of obtaining consent can be resource-intensive (40)
- Users often overlook the fine print or simply click "agree" without reading it carefully (51)

Facilitators:
- Ensure transparent information disclosure to users (42, 43, 45, 49, 56, 58, 63, 65)
- Obtain explicit consent for data sharing (36, 45, 47, 49, 56, 63)
- Uphold users' right to withdraw consent (43, 47)
- Streamline the process of obtaining and managing consent (36, 53); see the sketch after this section
- Empower patients to control access to their data (47)
- Supply age-appropriate information to inform children, even if they lack legal capacity for consent (43)
- Present information to patients in an easily understandable format to aid in data interpretation (45)
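
Several of the consent facilitators above (explicit consent for data sharing, the right to withdraw, streamlined consent management) can be made concrete with a small data structure. The Python sketch below is a minimal illustration under assumed requirements, not a description of any system from the reviewed studies; ConsentRecord, ConsentLedger, and the purpose labels are all hypothetical names, and a production system would additionally need authentication, audit logging, and legal review.

```python
# Illustrative sketch only: a minimal, purpose-scoped consent ledger.
# All names are hypothetical; not from the reviewed studies or any real system.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional, Tuple


@dataclass
class ConsentRecord:
    patient_id: str
    purpose: str                             # e.g. "share_with_gp", "research_reuse"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None  # set when consent is withdrawn

    def is_active(self) -> bool:
        return self.withdrawn_at is None


class ConsentLedger:
    """One record per (patient, purpose); withdrawal is recorded, not deleted."""

    def __init__(self) -> None:
        self._records: Dict[Tuple[str, str], ConsentRecord] = {}

    def grant(self, patient_id: str, purpose: str) -> ConsentRecord:
        # Explicit, purpose-scoped consent: one grant covers one purpose only.
        record = ConsentRecord(patient_id, purpose, datetime.now(timezone.utc))
        self._records[(patient_id, purpose)] = record
        return record

    def withdraw(self, patient_id: str, purpose: str) -> None:
        # Uphold the right to withdraw: mark the record instead of erasing it,
        # so downstream systems can be notified and deletion can be triggered.
        record = self._records.get((patient_id, purpose))
        if record is not None and record.is_active():
            record.withdrawn_at = datetime.now(timezone.utc)

    def may_share(self, patient_id: str, purpose: str) -> bool:
        # Sharing is allowed only under an explicit, still-active consent.
        record = self._records.get((patient_id, purpose))
        return record is not None and record.is_active()


if __name__ == "__main__":
    ledger = ConsentLedger()
    ledger.grant("patient-001", "share_with_gp")
    assert ledger.may_share("patient-001", "share_with_gp")
    ledger.withdraw("patient-001", "share_with_gp")
    assert not ledger.may_share("patient-001", "share_with_gp")
```

Withdrawal is deliberately recorded rather than erased: as the barriers above note, deleting already-shared or anonymized data may be impossible, so an explicit withdrawal timestamp at least gives downstream systems a trigger for honoring deletion requests and blocking further sharing.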

Transparency of data (n = 14 studies)

Barriers:
- Opacity in the functioning of AI algorithms (39, 45, 56, 63)
- Lack of awareness among patients about the storage and sharing of their (sensitive) data (51, 66)
- Insufficient methodological transparency in deep learning models (70)
- The increasing complexity of algorithms reduces the decision-support precision of earlier (older) models (39)

Facilitators:
- Ensure transparency in data quality assessment (63, 65)
- Ensure transparency in decision-making based on AI data (39, 56)
- Promote the development of "open source" health technologies (58, 65)
- Engage all relevant stakeholders in decision-making, potential adoption, and discussions regarding data usage boundaries (53, 60)
- Enhance transparency in data infrastructure and data flow (67); see the sketch after this section
- Identify key stakeholders in the decision-making process for system- and data-related matters (45)
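
The facilitators on data-flow transparency above can be sketched in the same spirit as a patient-facing access log. Again, this is a hypothetical illustration rather than a technique reported in the reviewed studies; AccessLog and its field names are assumptions, and the hash chaining shown is just one simple way to make retroactive tampering with the log evident.

```python
# Illustrative sketch only: a patient-facing, append-only access log.
# All names are hypothetical; not from the reviewed studies or any real system.
import hashlib
import json
from datetime import datetime, timezone
from typing import Dict, List


class AccessLog:
    """Append-only log; entries are hash-chained so tampering becomes evident."""

    def __init__(self) -> None:
        self._entries: List[Dict[str, str]] = []

    def record(self, patient_id: str, actor: str, action: str) -> None:
        previous_hash = self._entries[-1]["hash"] if self._entries else ""
        entry = {
            "patient_id": patient_id,
            "actor": actor,    # who accessed or shared the data
            "action": action,  # e.g. "read", "shared_for_research"
            "at": datetime.now(timezone.utc).isoformat(),
            "prev": previous_hash,
        }
        # Hash each entry together with its predecessor's hash; editing any
        # earlier entry would break every hash that follows it.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)

    def history_for(self, patient_id: str) -> List[Dict[str, str]]:
        # The patient-facing view: every recorded access to this patient's data.
        return [e for e in self._entries if e["patient_id"] == patient_id]


if __name__ == "__main__":
    log = AccessLog()
    log.record("patient-001", "dr-jansen", "read")
    log.record("patient-001", "research-platform", "shared_for_research")
    for entry in log.history_for("patient-001"):
        print(entry["at"], entry["actor"], entry["action"])
```

Each entry records who accessed or shared the data and when, and history_for gives patients a direct view of that flow, addressing the barrier that patients are often unaware of how their (sensitive) data are stored and shared.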

Inclusiveness and diversity (n = 13 studies)

Barriers:
- AI may contain biases that can unintentionally exclude or harm individuals (39, 56, 70)
- Inequity in access to and use of healthcare technology (60, 69)
- eHealth technologies could be used as an excuse to reduce the provision of high-quality care by trained health professionals (43)
- Favoring users who willingly share health-related data over those who do not (51)
- Ongoing monitoring and privacy violations can increase the stigma surrounding patients (38)
- Algorithms may not consider patient preferences (39)
- Balancing individual responsibility with communal solidarity can be challenging (51)
- Striking a balance between societal benefits and potential harms is difficult (45)

Facilitators:
- Develop technologies that do not discriminate (43, 47, 49, 65)
- Create user-friendly software to enhance ease of use (47, 58, 60)
- Ensure that individuals with particular needs or risks are covered by the social security system's protection (51)

Responsibility (n = 12 studies)

Barriers:
- Ambiguity about which party is accountable for collected data (41, 52, 58, 62, 67, 70)
- Lack of regulatory and ethical clarity regarding accountability, moral responsibility, and legal liability (56)
- Role confusion among healthcare professionals using AI for decision-making, who must balance its output against their own judgment (45)
- Risk of excessive reliance or complacency induced by AI tools (56)

Facilitators:
- Clarify the party responsible for technology validation and outline potential consequences in case of harm (45, 52, 65, 66)
- Support patient autonomy and respect their decision-making (43)
- Incorporate human agency and oversight (49)
- Establish clear agreements with IT providers regarding update and security responsibilities (67)
- Inform patients that the data they generate at home will influence their physician's clinical decisions (62)
- Ensure patients are aware of the extent of access they have to the technology and the associated responsibilities (62)

Validation of eHealth (n = 10 studies)

Barriers:
- Lack of clear certification systems or transparent assessment guides (42, 53)
- Uncertainty regarding the type of clinical and socio-economic evidence required from manufacturers (53)
- Limited availability of high-quality evidence for eHealth (62)
- Difficulty in accessing complete and generalizable evidence of efficacy and effectiveness (62)
- Challenges faced by medical ethics committees in assessing novel eHealth solutions due to their unknown impact or burden (55)
- Significant withdrawal of patients from studies can diminish the value of data analysis (59)

Facilitators:
- Ensure legal clarity and ethical soundness in technology validation (42, 49)
- Establish certification of medical devices and align the required clinical evidence with the European Medical Device Regulation standards (53, 63)
- Continuously validate eHealth technologies through clinical assessments (58)
- Develop a comprehensive framework with balanced regulations and innovation-friendly criteria for health technology assessment (53)
- Mandate that manufacturers conduct clinical safety evaluations before market release or deployment (37)
- Base all technology components on evidence-based principles (58)
- Promote eHealth technologies with shared benefits and measurable outcomes (66)
- Utilize real-world datasets from clinical trials for evidence generation and impact assessment (53)

Monitoring and follow-up of data output (n = 7 studies)

Barriers:
- The abundance of health technology choices and rapid innovation pose challenges for healthcare professionals (62, 69)
- Technologies that upload and share data may give individuals a sense of being under surveillance (51)
- Technologies may leave healthcare professionals feeling obligated to be available or responsible at all times (52)
- False alerts generated by health technology (43)

Facilitators:
- Emphasize that AI should complement rather than replace healthcare professionals, shifting their role from processor to expert overseer (56)
- Ensure healthcare professionals have prompt access to information to enhance the speed and quality of their care decisions (60)
- Establish technology that issues warnings at the organizational level rather than targeting individual healthcare professionals (52)

Liability (n = 6 studies)

Barriers:
- Concerns about potential legal liability for harm to a patient's health (62)
- Lack of clarity in legislation and regulations concerning liability and accountability for both producers and healthcare providers (57)
- Manufacturers' concerns about potential liability for damages arising from vulnerabilities in external communication infrastructure (57)
- Absence of accountability for the accuracy and correctness of shared data (51)

Facilitators:
- Promote transparency regarding accountability (39, 65)
- Provide guidance on responsibilities and liabilities when different components interact with each other (48)

Implementation of and compliance with ethical policy, guidelines, and frameworks (n = 3 studies)

Barriers:
- The typical industry practice of rapid prototype development in iterative cycles may not align with ethical standards (40)

Facilitators:
- Enhance the adaptability of ethical frameworks (40)
- Develop regulatory and ethical frameworks for public-private partnerships (60)
- Supply specialized guidance specifying the role of digital health in clinical practice (62)