Abstract
The widespread adoption of virtual care technologies has quickly reshaped healthcare operations and delivery, particularly in the context of community medicine. In this paper, we use the virtual care landscape as a point of departure to envision the promises and challenges of artificial intelligence (AI) in healthcare. Our analysis is directed towards community care practitioners interested in learning more about how AI can change their practice, along with the critical considerations required to integrate it. We highlight examples of how AI can enable access to new sources of clinical data while augmenting clinical workflows and healthcare delivery. AI can help optimize how and when care is delivered by community practitioners while also improving practice efficiency, accessibility, and the overall quality of care. Unlike virtual care, however, AI is still missing many of the key enablers required to facilitate adoption into the community care landscape, and there are challenges we must consider and resolve for AI to successfully improve healthcare delivery. We discuss several critical considerations, including data governance in the clinic setting, healthcare practitioner education, regulation of AI in healthcare, clinician reimbursement, and access to both technology and the internet.
Keywords: Paediatrics, Artificial Intelligence, Community medicine, Regulation
Artificial Intelligence (AI) is on the verge of transforming healthcare delivery across Canada. AI is the field of computer science focused on enabling computers to make decisions and perform tasks that would normally require human intelligence. Plausible futures now exist where AI technology can contribute to health empowerment, increase access to care, and improve outpatient management of acute and chronic health conditions. Community healthcare practitioners in particular have an opportunity to revolutionize healthcare delivery by embracing emerging AI solutions while remaining advocates for their patients and communities.
The successful use of virtual care throughout the COVID-19 pandemic has demonstrated how new technologies can add agility and resilience to our healthcare systems. Virtual care has improved patient access to an array of specialists while shifting in-person clinical encounters to the comfort of patient homes. In Ontario, virtual primary care visits increased 56-fold from March to July of 2020 due to the COVID-19 pandemic (1). This rapid increase was partially mediated by new billing codes for physicians, which created the financial sustainability required for the adoption of virtual care while simultaneously incentivizing the private sector to innovate new solutions (2). In addition, guidance documents like the Virtual Care Playbook have taught community practitioners about the practical boundaries of virtual care implementation, alleviating many concerns associated with these new clinical workflows (3).
Clinical tools powered by AI can build upon the success of virtual care. Community practitioners can monitor the health status of patients in their home environments by gathering data directly from wearable health sensors and automating the processing of that data. The use of natural language processing algorithms can automate administrative aspects of care, increasing the number of healthcare touch points for patients and families. This can help optimize how and when care is delivered by improving practice efficiency, accessibility, and the overall quality of care. Unlike virtual care, however, many of the key enablers needed to facilitate AI’s adoption into the community care landscape are still missing and there are related challenges that we must overcome.
In this manuscript, we offer examples of how AI can enable access to new sources of clinical data while augmenting clinical workflows and healthcare delivery. We also discuss critical considerations that must be addressed in order to incorporate AI safely and effectively into community practice, namely:
Data governance challenges
Healthcare practitioner education
Regulation of healthcare AI
Clinician reimbursement
Access to technology and the internet
NEW SOURCES OF CLINICAL DATA
Although virtual care has opened new avenues to healthcare access in outpatient settings, it has been limited by our inability to gather clinical information about a patient’s physiological state or progress outside of history taking. New technologies, such as digital stethoscopes paired with algorithms capable of diagnosing paediatric heart murmurs and adventitious breath sounds, are poised to augment traditional examination techniques (4,5). Other sources of clinical data are emerging from the use of mobile smart devices, including smartphones, wearable sensors, and other outpatient home monitoring devices (e.g., digital glucose monitors). These technologies capture high-resolution physiological data such as heart rate, oxygen saturation, blood glucose levels, blood pressure, electrocardiograms, sleep quality, daily movements, and more (6).
Studies are using AI to identify important signals that map wearable data to clinical endpoints such as fitness levels, stress, clinical laboratory measurements, cardiac health, and mental health (7–10). Wearables can also be used to predict a patient’s shifting health status. For example, prolonged lack of sleep and decreased activity may indicate a depressive episode in a child with major depressive disorder (11). Another study describes real-world use of ECG-embedded wearable sensors for outpatient paediatric electrophysiology monitoring, potentially replacing the need for a traditional Holter monitor (12). By leveraging data passively collected by consumer devices patients already wear, improvements to care can be made with minimal extra effort from patients.
Given the large number of patients managed by community care practitioners, home monitoring with wearables may seem like a daunting task. AI can help by generating automated insights and notifications directed toward patients and practitioners. Remote assessment of a patient’s physiological state, both before they need to come in and after they have left the doctor’s office, can enable streamlined diagnostics and earlier interventions. This is particularly impactful for children because clinic visits can be anxiety-inducing, leading to poor cooperation and inaccurate clinical exams. Access to wearables providing AI-powered insights about activity, sleep, and physiology, combined with clinical context, may also enable chronic illness self-management, wellness, and a sense of ownership over one’s own health. A working group from the National Institutes of Health identified factors for successful implementation of wearables in healthcare (13), including clearly defined clinical use cases, integration into healthcare systems, and reimbursement models.
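As a loose illustration of how such automated notifications might be generated, the sketch below flags a patient for clinician review when recent sleep and activity both fall well below that patient’s own baseline, echoing the depressive-episode example above. This is a minimal sketch only: the data structure, window lengths, and thresholds are our illustrative assumptions, not validated clinical rules.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class DailySummary:
    date: str
    sleep_hours: float  # total sleep recorded by the wearable
    step_count: int     # daily step count as a coarse activity proxy

def flag_for_review(history: list[DailySummary],
                    baseline_days: int = 14,
                    sleep_ratio_cutoff: float = 0.75,
                    activity_ratio_cutoff: float = 0.60) -> bool:
    """Flag a patient for clinician review when the most recent week shows
    a sustained drop in both sleep and activity relative to the preceding
    baseline. All thresholds are illustrative placeholders, not validated
    clinical cut-offs."""
    if len(history) < baseline_days + 7:
        return False  # not enough data to establish a baseline
    baseline = history[-(baseline_days + 7):-7]
    recent = history[-7:]
    sleep_ratio = mean(d.sleep_hours for d in recent) / mean(d.sleep_hours for d in baseline)
    activity_ratio = mean(d.step_count for d in recent) / mean(d.step_count for d in baseline)
    return sleep_ratio < sleep_ratio_cutoff and activity_ratio < activity_ratio_cutoff
```

In practice, rules like this would be learned from data and validated clinically, and the output would feed a notification workflow directed at the patient or practitioner rather than return a bare flag.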
AUGMENTED CLINICAL WORKFLOWS
Physicians presently spend twice as much time performing administrative tasks as they do interacting with patients (14). The administrative burden can be even higher in community settings, where infrastructure is often limited. This contributes to decreased career satisfaction and higher rates of burnout, and has a negative impact on patient care (15). Automating administrative tasks with AI-powered technologies can free up physician time that can then be redirected toward patient care. One class of technologies focused on achieving this goal is intelligent documentation support systems (IDSS). Among the most promising forms of IDSS are digital scribes, which help document patient encounters by using speech recognition and natural language processing (NLP) to transcribe clinical conversations and extract, classify, and summarize the pertinent information (16). Prominent companies like Google and Amazon are developing these products (17–19), but a recent scoping review argues that more research into their clinical validity, usability, and utility is required prior to practice integration (20). Quiroz et al. identified the need to overcome high ambient noise, unstructured conversations, and the incorporation of nonverbal forms of communication into these systems (21).
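As a toy illustration of the extract-and-classify step that digital scribes perform, the sketch below groups sentences from an already-transcribed conversation under crude note headings. The keyword patterns are placeholders for the trained speech recognition and clinical NLP models that the systems cited above actually rely on.

```python
import re

# Toy keyword sets standing in for trained clinical NLP models.
SECTIONS = {
    "Symptoms": re.compile(r"\b(cough|fever|pain|rash|wheez\w*)\b", re.I),
    "Medications": re.compile(r"\b(amoxicillin|ventolin|ibuprofen|puffer)\b", re.I),
}

def summarize_transcript(transcript: str) -> dict[str, list[str]]:
    """Group sentences from a transcribed clinical conversation under crude
    note headings. A real digital scribe replaces these regexes with speech
    recognition plus trained extraction/classification models."""
    note: dict[str, list[str]] = {name: [] for name in SECTIONS}
    for sentence in re.split(r"(?<=[.?!])\s+", transcript):
        for name, pattern in SECTIONS.items():
            if pattern.search(sentence):
                note[name].append(sentence.strip())
    return note

print(summarize_transcript(
    "She has had a fever for two days. We gave ibuprofen last night. "
    "No rash. The cough is worse at night."))
```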
Similar NLP models are also capable of reviewing large volumes of electronic health record data and can predict common diagnoses with accuracy comparable to experienced paediatricians, potentially allowing for improved diagnostics and automation when integrated into clinical workflows (22).
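The underlying pattern in such work is text classification over clinical notes. Below is a minimal sketch of that pattern using scikit-learn; the notes and labels are fabricated toy examples, and published systems like the one in reference 22 are trained on large, de-identified EHR corpora with far richer features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Fabricated toy notes and labels; real systems train on large,
# de-identified EHR corpora under appropriate data governance.
notes = [
    "3yo with barky cough, stridor worse at night, low-grade fever",
    "7yo with wheeze, prolonged expiration, responds to salbutamol",
    "2yo with barky cough and hoarse voice after URTI",
    "9yo with recurrent wheeze triggered by exercise and cold air",
]
labels = ["croup", "asthma", "croup", "asthma"]

# Bag-of-words features plus a linear classifier: the simplest version
# of "reading" free-text notes to predict a diagnosis label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)
print(model.predict(["5yo with nocturnal barky cough and stridor"]))  # expected: ['croup']
```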
AUGMENTED HEALTHCARE DELIVERY
Automated conversational agents (often called chatbots) that use NLP to mimic human interactions (23) are currently being developed to provide more healthcare touch points for patients. These agents can be used for screening, monitoring, administration of treatment plans, health literacy enhancement, mental health supports, and diagnostics (23). In the paediatric population, conversational agents focused on providing mental health supports could have tremendous impact given the high disease burden and expected rise in cases post-pandemic (24,25). Although real-world validation of mental health chatbots is limited, one study reported a 40% reduction in self-reported symptoms of depression among high users of an AI chatbot employing cognitive behavioral therapy, relative to low/non-users (26). Conversational agents focused on prevention of, and adherence to treatment for, chronic conditions such as obesity, diabetes, and asthma are also showing promising results in paediatrics (27,28).
One major advantage of these agents is that they are embedded in the digital world, where children and adolescents are increasingly comfortable (29). Moreover, paediatric patients may be more comfortable sharing information with anonymous chatbots than with adult practitioners (30). Nonetheless, barriers exist to the successful integration of conversational agents into a hybrid healthcare delivery system. To begin, current chatbots cannot provide nuanced recommendations for complex life situations, limiting their utility to well-defined healthcare scenarios. Work is also needed to ensure consent can be properly obtained and that data privacy and confidentiality are respected (31).
Practitioners may benefit from guidelines about which patients are best suited to the use of chatbots, as well as how these tools should be monitored and maintained. Furthermore, practitioners will need clarity on how they will be compensated for this work and how to manage any associated medical liability. There must be clear standards specifying when practitioners are allowed or obligated to intervene during a chatbot conversation. This is paramount in paediatrics because of the duty to report when imminent harm to self or others and/or physical or sexual abuse is suspected.
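As a minimal sketch of what such an intervention standard could look like in code, the snippet below escalates to a human whenever a message suggests risk of harm. The keyword list is an illustrative placeholder; real deployments would require validated risk-detection models and clinically governed escalation protocols, not a bare keyword match.

```python
# Illustrative only: real systems use validated risk-detection models and
# clinically governed escalation protocols, not a crude keyword list.
ESCALATION_TERMS = {"hurt myself", "kill myself", "suicide", "abuse"}

def respond(user_message: str) -> str:
    """Route a chatbot turn: escalate to a human clinician whenever a
    message suggests risk of harm; otherwise continue the scripted flow."""
    lowered = user_message.lower()
    if any(term in lowered for term in ESCALATION_TERMS):
        return ("I'm connecting you with a person who can help right now. "
                "[conversation flagged for immediate clinician review]")
    return "Thanks for sharing. Can you tell me more about how today went?"
```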
CRITICAL CONSIDERATIONS FOR COMMUNITY CARE PRACTITIONERS
The uptake of AI into the community practice landscape is complicated by several barriers. Major concerns include uncertainty around the governance of patient data at the clinic level, liability for the misuse of those data, and uncertainty about how to evaluate the true safety and generalizability of AI tools.
To navigate issues at the clinic level, clinicians will need to understand best practices in data governance, particularly when engaging with third-party vendors. Unfortunately, the privacy law landscape in Canada is complicated and outdated, with federal legislation (i.e., the Personal Information Protection and Electronic Documents Act [PIPEDA]) governing how organizations collect, use, and disclose personal information in the course of commercial activities (32). The provinces and territories also have privacy laws, several of which are substantially similar to PIPEDA and therefore exempt health information custodians in those jurisdictions from having to comply with PIPEDA (33). Despite this already complicated landscape, regulations across Canada do not specifically address AI and fall short of providing adequate guidance (34). Expecting healthcare practitioners to navigate complex legal requirements in order to understand how to leverage electronic health record data may lead to unintended misuse of patient data and/or a reluctance to adopt AI technologies. Guidance from professional organizations such as the Royal College of Physicians and Surgeons of Canada (RCPSC) and the Canadian Medical Protective Association will likely be necessary to establish data governance best practices for AI at the community clinic level (i.e., similar to the Virtual Care Playbook mentioned above).
There is an emerging consensus that AI-related training must be incorporated into undergraduate, postgraduate, and continuing medical education. The RCPSC Task Force on AI and Emerging Digital Technologies recommends integrating digital health literacy into the existing CanMEDS framework (35), and the World Health Organization (WHO) recommends teaching both the underlying mathematical concepts and the ethical and legal issues, including the risk of bias in AI systems (36). Because AI can propagate the systemic biases reflected in its training datasets (37,38), critical data literacy is needed so that physicians are empowered to mitigate harms (39).

In the context of wearables, for example, education about the risks of over-surveillance is essential. Digital health surveillance may increase stress and anxiety, particularly when false positive findings require follow-up (40). The Privacy Commissioner of Canada has also raised concerns about the nature of consent when wearables are used: is implied consent sufficient to allow for 24/7 health monitoring, or should express consent be obtained for specific purposes (41)? Ubiquitous, routine data gathering from the home may also blur the boundaries between what is medicalized and what is not. Social scientists call this breakdown a “context collapse,” and it can lead to the medicalization of the lived experience of childhood (42,43). Surveillance may be warranted for conditions requiring consistent observation to prevent adverse outcomes, but it may be problematic otherwise.

In addition, patient-level access to technology (i.e., consumer wearables and high-speed internet) cannot be taken for granted. Canada’s existing digital divide will lead to equity challenges in healthcare access, utilization, and outcomes if policy makers are not attentive to these concerns.
Regulatory bodies also play important roles in enabling primary care practitioners to safely utilize AI. For example, Health Canada currently requires those who build and sell/license AI solutions to obtain regulatory approval for software when it is intended to be used for a medical purpose, including “supporting or providing recommendations to health care professionals, patients or non-healthcare professional caregivers about prevention, diagnosis, treatment, or mitigation of a disease or condition” (44). The level of regulatory scrutiny this software receives is based on the risk associated with its use. Although Health Canada reserves the right to re-assess, it is actually innovators themselves who decide how to categorize the level of risk inherent in their products and which regulatory pathway (and level of scrutiny), if any, they should be submitted to for testing and approval. Health Canada also expressly excludes software from regulation when it fulfills four conditions. These conditions are not rigid or determinative, but software that interacts with a medical device will generally not itself be regulated as a medical device if it is (a) “not intended to acquire, process, or analyze a medical image or signal”; (b) “intended to display, analyze, or print medical information”; (c) “only intended to support” provider decision-making; and (d) “not intended to replace … clinical judgment” (44). Community practitioners will need to understand these regulatory requirements so they can be attentive to whether and how a product has been tested and validated before adopting it in their practice. Table 1 provides further information practitioners will need to consider in order to confidently select AI models for deployment in community clinics.
Table 1.
Key considerations community care practitioners should be aware of when selecting and integrating AI in the clinic
| Theme | Key considerations |
|---|---|
| Generalizability | Information on the patient populations used for AI model training and validation, including age, gender, geographic location, and ethnicity, along with the distribution of relevant medical tests and diagnoses, is required. This is necessary to understand whether the model generalizes to different patient populations. Without this information, clinicians risk inadvertently utilizing models that are not applicable to the population of their specific clinic and therefore not useful for their patients. |
| Data Governance and Cybersecurity | AI systems deployed in the clinic will often require access to patient electronic health record data. The utilization of these data, in particular identifiable personal health information, is governed under various privacy laws across Canada (e.g., the Personal Health Information Protection Act in Ontario). It is therefore critical for an AI vendor to demonstrate how its utilization of health record data satisfies local data governance and privacy law standards before being granted access to patient data. It is also important to understand who specifically is accessing data, what data are being accessed, where those data are being stored, for how long, and for what purposes. If data are stored on a cloud server, is that server located in Canada? If not, are there data governance implications under local laws and/or best practices, and how has the vendor addressed any such issues? Although cybersecurity can be challenging for practitioners to understand, it is also important to ensure that best practices in this area are adhered to. A practitioner should make very direct inquiries of a vendor, such as: “If your server is breached (i.e., hacked), what type of information may be stolen?”; “Will data be encrypted and thus protected in this scenario?”; “Will only deidentified data be at risk, or could identifiable patient information be lost?”; “How are the AI models protected from being manipulated?” These questions provide a starting point for community clinicians seeking to understand the cybersecurity risks associated with a vendor’s AI solution. |
| Model Validation | Information on how models have been validated (e.g., whether a prospective clinical trial has been completed), along with the most recent model performance metrics, is required. The reported performance should contain clinically relevant outcome metrics such as sensitivity, specificity, and positive predictive value; the types of outcome metrics required will vary depending on the associated clinical use case. Note that model accuracy alone is NOT sufficient and may be highly misleading (a worked illustration follows this table). Understanding the degree of uncertainty present when a model makes a prediction is also important: predictions should ideally be paired with confidence intervals or an associated certainty metric so that clinicians can gauge how best to utilize AI-based predictions when making clinical decisions. Of note, if a clinic has data quality issues that are not represented in the training data used to develop the model, there may be a decline in model performance when deployed. |
| Model Maintenance and Online Learning Protocols | Insight into how ongoing model validation and subsequent model retraining/updating will occur is important, as machine learning models will vary in performance as environmental shifts in the data occur. Some AI use cases will require frequent model updating while others may not. For example, an AI model predicting influenza may require regular retraining to account for shifts in seasonal variation, whereas a model predicting the likelihood of appendicitis may not, given the consistency in disease presentation and prevalence. Clinicians must also be aware that machine learning models can decline in performance over time and should understand how this is being managed (i.e., how is the model vendor monitoring performance and ensuring it is maintained?). |
| Bias and Equity Assessments | When machine learning models are being validated, subgroup analysis should be conducted to determine whether model performance remains equitable across age, gender, ethnicity, race, location, etc. Given that machine learning models are trained on healthcare data that invariably capture some degree of bias, transparency around model bias assessments will be essential to ensuring community practitioners utilize AI responsibly. Failure to understand this aspect of model performance prior to deployment in the clinic may lead to harm in underrepresented populations. Model performance metrics can appear very impressive on high-level review, yet models may still significantly underperform in particular patient populations, leading to inequitable care delivery. Transparency around which models work best for which patients will be necessary to ensure model biases can be mitigated and managed appropriately. |
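To make the table’s points about outcome metrics and subgroup analysis concrete, below is a minimal sketch of the kind of check a practitioner could ask a vendor to demonstrate. The labels, predictions, and subgroup values are fabricated for illustration; real validation would use the clinic’s own population and clinically meaningful subgroups.

```python
import pandas as pd

def binary_metrics(df: pd.DataFrame) -> pd.Series:
    """Sensitivity, specificity, and PPV from binary labels/predictions."""
    tp = ((df.pred == 1) & (df.label == 1)).sum()
    tn = ((df.pred == 0) & (df.label == 0)).sum()
    fp = ((df.pred == 1) & (df.label == 0)).sum()
    fn = ((df.pred == 0) & (df.label == 1)).sum()
    return pd.Series({
        "accuracy": (tp + tn) / len(df),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    })

# Fabricated validation output: overall accuracy can look strong while a
# subgroup's sensitivity collapses, which is exactly what subgroup
# analysis is meant to surface.
df = pd.DataFrame({
    "label": [1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
    "pred":  [1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0],
    "group": ["A"] * 6 + ["B"] * 6,
})

print("overall:", binary_metrics(df).round(2).to_dict())
for name, sub in df.groupby("group"):
    print(name, binary_metrics(sub).round(2).to_dict())
```

In this fabricated example, overall accuracy is 0.83 while sensitivity in subgroup B is only 0.50; an accuracy-only report would hide exactly this kind of gap.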
It is not entirely clear where the burden will fall for ensuring that model performance information is communicated to healthcare practitioners. Sendak et al. argue for the development of standardized “Model Facts” labels that provide the information required to use AI safely (45). This proposal builds on the concept of standardized pharmaceutical labelling, with a focus on model efficacy, generalizability, and associated warnings. We propose that these labels also include detailed information about the demographics captured within the datasets used to develop and validate models, along with assessments of model bias. Knowing for whom a tool does and does not work is essential to mitigating harm. We must also, however, undertake initiatives to collect diverse, representative data and train models that limit bias; otherwise, AI tools will perform optimally only for patients whose characteristics resemble those represented in the original datasets.
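As a loose sketch of what a machine-readable Model Facts label could contain, the structure below combines the fields proposed by Sendak et al. with the demographic and bias information we argue for above. The field names and example values are hypothetical illustrations, not a published standard.

```python
from dataclasses import dataclass, field

@dataclass
class ModelFactsLabel:
    """Illustrative structure for a 'Model Facts' label (45); the fields
    extend the published proposal with demographic and bias information
    and are our assumption, not a formal standard."""
    model_name: str
    intended_use: str
    validation_summary: str                 # e.g., trial design and headline metrics
    training_demographics: dict[str, str]   # e.g., {"age": "0-18 y", ...}
    subgroup_performance: dict[str, float]  # e.g., sensitivity per subgroup
    warnings: list[str] = field(default_factory=list)

# Entirely hypothetical example values for illustration.
label = ModelFactsLabel(
    model_name="PaedsTriage-v1 (hypothetical)",
    intended_use="Decision support for paediatric emergency triage",
    validation_summary="Prospective validation; sensitivity 0.91 (95% CI 0.88-0.94)",
    training_demographics={"age": "0-18 y", "region": "Ontario, Canada"},
    subgroup_performance={"sensitivity_female": 0.92, "sensitivity_male": 0.90},
    warnings=["Not validated for neonates", "Performance unverified outside Ontario"],
)
```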
Regardless of the progress made with respect to education, regulation, data governance, and privacy, AI is unlikely to see widespread clinical adoption in Canada until it is made financially feasible for community care practitioners to use. This requires support both for the upfront financial investment needed to integrate AI into the clinic and for its ongoing use. In the USA, the Centers for Medicare and Medicaid Services (CMS) instituted reimbursement codes that enable physicians to bill for remote patient monitoring, including the use of software to facilitate such tasks (46). Germany also covers costs through its nationwide statutory health insurance program, allowing clinicians to prescribe preapproved technologies to patients (47). Canadian payers must likewise invest in health system innovation by adopting similar reimbursement strategies. Forcing patients to pay out-of-pocket for software subscriptions in order to access AI-enabled healthcare will only deepen the digital divide.
Finally, Canada suffers from a large rural-urban divide in access to digital infrastructure, with a disproportionate impact on Indigenous communities. The quality and affordability of internet access also vary significantly across age, region, and income (48). Community practitioners will need to advocate for improved internet infrastructure, access, and affordability so that Canada’s digital divide does not remain a social determinant that negatively impacts healthcare outcomes.
CONCLUSION
AI has the potential to revolutionize community care by freeing up practitioner time, enabling quicker and more responsive care for patients, improving access to remote and rural areas, and improving the quality and safety of the care delivered. Enabling practitioners to responsibly integrate AI into clinical practice will be essential. Working with regulators and educators, we must individually and collectively develop strong foundations for regulation and data governance, adapted for community practice. Combined with targeted education, reimbursement schedules, and reliable internet access, the above will help catalyze the widespread adoption of responsible AI into Canadian healthcare.
Contributor Information
Devin Singh, Hospital for Sick Children, Toronto, Ontario, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
Sujay Nagaraj, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada.
Ryan Daniel, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
Colleen Flood, Centre for Health Law, Policy and Ethics, University of Ottawa, Ottawa, Ontario, Canada.
Dina Kulik, Hospital for Sick Children, Toronto, Ontario, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
Robert Flook, Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada.
Anna Goldenberg, Hospital for Sick Children, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Canadian Institute for Advanced Research, Toronto, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada.
Michael Brudno, Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Canadian Institute for Advanced Research, Toronto, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; University Health Network, Toronto, Ontario, Canada.
Ian Stedman, School of Public Policy and Administration, York University, Toronto, Ontario, Canada.
FUNDING
D.S. was funded by The SickKids Foundation, R.D. by T-CAIREM Summer Research Studentship Award, M.B. by Canadian Institute for Health Research (CIHR) and Genome Canada, and A.G. by The SickKids Foundation.
POTENTIAL CONFLICTS OF INTEREST
D.S. is the co-founder and CEO of a Canadian healthcare technology start-up company called Hero AI. D.K. is a Board member of Resilient Kids Canada. There are no other disclosures or reported conflicts of interest. All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.
REFERENCES
- 1. Glazier RH, Green ME, Wu FC, Frymire E, Kopp A, Kiran T. Shifts in office and virtual primary care during the early COVID-19 pandemic in Ontario, Canada. CMAJ. 2021;193(6):E200–10.
- 2. Powers BW, Drzayich Antol D, Zhao Y, Haugh GS, Roman O, Shrank WH, et al. Association between primary care payment model and telemedicine use for Medicare Advantage enrollees during the COVID-19 pandemic. JAMA Health Forum. 2021;2(7):e211597.
- 3. Royal College of Physicians and Surgeons of Canada. Telemedicine and virtual care guidelines (and other clinical resources for COVID-19) [Internet]. 2020. Available from: https://www.royalcollege.ca/rcsite/documents/about/covid-19-resources-telemedicine-virtual-care-e
- 4. Thompson WR, Reinisch AJ, Unterberger MJ, Schriefl AJ. Artificial intelligence-assisted auscultation of heart murmurs: Validation by virtual clinical trial. Pediatr Cardiol. 2019;40(3):623–9.
- 5. Kevat A, Kalirajah A, Roseby R. Artificial intelligence accuracy in detecting pathological breath sounds in children using digital stethoscopes. Respir Res. 2020;21(253).
- 6. Iqbal SMA, Mahgoub I, Du E, Leavitt MA, Asghar W. Advances in healthcare wearable devices. npj Flexible Electron. 2021;5(9).
- 7. Goodday SM, Friend S. Unlocking stress and forecasting its consequences with digital technology. npj Digital Med. 2019;2(75).
- 8. Dunn J, Kidzinski L, Runge R, Witt D, Hicks JL, Schüssler-Fiorenza Rose SM, et al. Wearable sensors enable personalized predictions of clinical laboratory measurements. Nat Med. 2021;27(6):1105–12.
- 9. Bayoumy K, Gaber M, Elshafeey A, Mhaimeed O, Dineen EH, Marvel FA, et al. Smart wearable devices in cardiovascular care: Where we are and how to move forward. Nat Rev Cardiol. 2021;18(8):581–99.
- 10. Hickey BA, Chalmers T, Newton P, Lin CT, Sibbritt D, McLachlan CS, et al. Smart devices and wearable technologies to detect and monitor mental health conditions and stress: A systematic review. Sensors. 2021;21(10).
- 11. Zhang Y, Folarin AA, Sun S, Cummins N, Bendayan R, Ranjan Y, et al. Relationship between major depression symptom severity and sleep collected using a wristband wearable device: Multicenter longitudinal observational study. JMIR mHealth and uHealth. 2021;9(4).
- 12. Roelle L, Dalal AS, Miller N, Orr WB, van Hare G, Avari Silva JN. The impact of direct-to-consumer wearables in pediatric electrophysiology telehealth clinics: A real-world case series. Cardiovasc Digit Health J. 2020;1(3):169–71.
- 13. Smuck M, Odonkor CA, Wilt JK, Schmidt N, Swiernik MA. The emerging clinical role of wearables: Factors for successful implementation in healthcare. npj Digital Med. 2021;4(45).
- 14. Sinsky C, Colligan L, Li L, Prgomet M, Reynolds S, Goeders L, et al. Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Ann Intern Med. 2016;165(11):753–60.
- 15. Shanafelt TD, Dyrbye LN, Sinsky C, Hasan O, Satele D, Sloan J, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc. 2016;91(7):836–48.
- 16. Coiera E, Kocaballi B, Halamka J, Laranjo L. The digital scribe. npj Digital Med. 2018;1(58).
- 17. Robin Healthcare. Robin Healthcare | Automated clinic notes, coding and more [Internet]. 2019. Available from: https://www.robinhealthcare.com
- 18. Amazon Web Services. Amazon Comprehend Medical [Internet]. 2021. Available from: https://aws.amazon.com/comprehend/medical/
- 19. Nuance. Ambient clinical intelligence: The exam of the future has arrived [Internet]. 2019. Available from: https://www.nuance.com/healthcare/ambient-clinical-intelligence.html
- 20. van Buchem MM, Boosman H, Bauer MP, Kant IMJ, Cammel SA, Steyerberg EW. The digital scribe in clinical practice: A scoping review and research agenda. npj Digital Med. 2021;4(57).
- 21. Quiroz JC, Laranjo L, Kocaballi AB, Berkovsky S, Rezazadegan D, Coiera E. Challenges of developing a digital scribe to reduce clinical documentation burden. npj Digital Med. 2019;2(114).
- 22. Liang H, Tsui BY, Ni H, Valentim CCS, Baxter SL, Liu G, et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nat Med. 2019;25(3):433–8.
- 23. Milne-Ives M, de Cock C, Lim E, Shehadeh MH, de Pennington N, Mole G, et al. The effectiveness of artificial intelligence conversational agents in health care: Systematic review. J Med Internet Res. 2020;22(10).
- 24. Erskine HE, Moffitt TE, Copeland WE, Costello EJ, Ferrari AJ, Patton G, et al. A heavy burden on young minds: The global burden of mental and substance use disorders in children and youth. Psychol Med. 2015;45(7):1561–3.
- 25. Cost KT, Crosbie J, Anagnostou E, Birken CS, Charach A, Monga S, et al. Mostly worse, occasionally better: Impact of COVID-19 pandemic on the mental health of Canadian children and adolescents. Eur Child Adolesc Psychiatry. 2022;31(4):671–84.
- 26. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation mixed-methods study. JMIR mHealth and uHealth. 2018;6(11).
- 27. Stephens TN, Joerin A, Rauws M, Werk LN. Feasibility of pediatric obesity and prediabetes treatment support through Tess, the AI behavioral coaching chatbot. Transl Behav Med. 2019;9(3):440–7.
- 28. Kadariya D, Venkataramanan R, Yip HY, Kalra M, Thirunarayanan K, Sheth A. KBot: Knowledge-enabled personalized chatbot for asthma self-management. In: Proceedings - 2019 IEEE International Conference on Smart Computing (SMARTCOMP 2019). 2019:138–43.
- 29. Chassiakos YR, Radesky J, Christakis D, Moreno MA, Cross C, Hill D, et al. Children and adolescents and digital media. Pediatrics. 2016;138(5):e20162593.
- 30. Rideout V, Robb MB. Social media, social life: Teens reveal their experiences, 2018 [Internet]. Common Sense Media. 2018. Available from: https://www.commonsensemedia.org/research/social-media-social-life-teens-reveal-their-experiences-2018
- 31. Kretzschmar K, Tyroll H, Pavarini G, Manzini A, Singh I. Can your phone be your therapist? Young people's ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomed Inform Insights. 2019;11:117822261982908.
- 32. Office of the Privacy Commissioner of Canada. Summary of privacy laws in Canada [Internet]. 2018. Available from: https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/02_05_d_15/#heading-0-0-1
- 33. Minister of Justice (Canada). Personal Information Protection and Electronic Documents Act, s. 26(2)(b) [Internet]. 2019. Available from: https://laws-lois.justice.gc.ca/PDF/P-8.6.pdf
- 34. Scassa T. AI and data protection law. In: Martin-Bariteau F, Scassa T, eds. Artificial Intelligence and the Law in Canada [Internet]. Toronto: LexisNexis Canada; 2021. Available from: https://ssrn.com/abstract=3732969
- 35. Royal College of Physicians and Surgeons of Canada. Artificial intelligence (AI) and emerging digital technologies [Internet]. 2018. Available from: https://www.royalcollege.ca/rcsite/health-policy/initiatives/ai-task-force-e
- 36. World Health Organization. Ethics and governance of artificial intelligence for health. 2021. Available from: https://www.who.int/publications/i/item/9789240029200
- 37. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–53.
- 38. Chen IY, Pierson E, Rose S, Joshi S, Ferryman K, Ghassemi M. Ethical machine learning in healthcare. Annu Rev Biomed Data Sci. 2021;4(1):123–44. Available from: https://arxiv.org/abs/2009.10576
- 39. Sander I. What is critical big data literacy and how can it be implemented? Internet Policy Review. 2020;9(2):1–22.
- 40. Rosman L, Gehi A, Lampert R. When smartwatches contribute to health anxiety in patients with atrial fibrillation. Cardiovasc Digit Health J. 2020;1(1):9–10.
- 41. Office of the Privacy Commissioner of Canada. Wearable computing - Challenges and opportunities for privacy protection [Internet]. 2014. Available from: https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2014/wc_201401/
- 42. Marvin C. Your smart phones are hot pockets to us: Context collapse in a mobilized age. Mobile Media Commun. 2013;1(1):153–9.
- 43. Davies B. 'Personal health surveillance': The use of mHealth in healthcare responsibilisation. Public Health Ethics. 2021;14(3):268–80.
- 44. Health Canada. Guidance document: Software as a Medical Device (SaMD): Definition and classification. 2019. Available from: https://www.canada.ca/content/dam/hc-sc/documents/services/drugs-health-products/medical-devices/application-information/guidance-documents/software-medical-device-guidance-document/software-medical-device-guidance-document.pdf
- 45. Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. npj Digital Med. 2020;3(41).
- 46. Centers for Medicare & Medicaid Services. Overview of the Medicare physician fee schedule [Internet]. 2021. Available from: https://www.cms.gov/medicare/physician-fee-schedule/search/overview
- 47. Federal Institute for Drugs and Medical Devices. The fast-track process for Digital Health Applications (DiGA) according to Section 139e SGB V: A guide for manufacturers, service providers and users. 2020. Available from: https://www.bfarm.de/SharedDocs/Downloads/EN/MedicalDevices/DiGA_Guide.html
- 48. Andrey S, Masoodi MJ, Malli N, Dorkenoo S. Mapping Toronto's digital divide [Internet]. Ryerson Leadership Lab and Brookfield Institute for Innovation + Entrepreneurship. 2021. Available from: https://www.ryersonleadlab.com/digital-divide
