Australasian Psychiatry. 2024 Mar 28;32(3):214–219. doi: 10.1177/10398562241241473

ChatGPT in private practice: The opportunities and pitfalls of novel technology

Kirk Lehman 1, Emeil Aroney 2, Isabella Wu 2
PMCID: PMC11103917  PMID: 38545872

Abstract

Objective

This article explores the transformative impact of OpenAI and ChatGPT on Australian medical practitioners, particularly psychiatrists in the private practice setting. It delves into the extensive benefits and limitations associated with integrating ChatGPT into medical practice, summarising current policies and scrutinising medicolegal implications.

Conclusion

A careful assessment is imperative to determine whether the benefits of AI integration outweigh the associated risks. Practitioners are urged to review AI-generated content to ensure its accuracy, recognising that liability likely resides with them rather than with AI platforms, despite the current lack of Australian case law specific to negligence and AI. It is important to employ measures that ensure patient confidentiality is not breached, and practitioners are encouraged to seek counsel from their professional indemnity insurer. There is considerable potential for future development of specialised AI software tailored to the medical profession, making the use of AI more suitable for the medical field in the Australian legal landscape. Moving forward, it is essential to embrace technology and actively address its challenges rather than dismissing AI integration into medical practice. It is becoming increasingly essential that the psychiatric community, the medical community at large and policy makers develop comprehensive guidelines to fill existing policy gaps and adapt to the evolving landscape of AI technologies in healthcare.

Keywords: ChatGPT, artificial intelligence, psychiatry, medicolegal, confidentiality


Psychiatrists, mental health professionals and all Australian medical practitioners are currently discovering the unprecedented opportunities created by OpenAI and ChatGPT. If doctors are not already using it, they have probably heard of it being used, or seen it heralded or criticised in the media. 1 Some state-run hospital and health services have guidelines forbidding its use, such as Perth’s South Metropolitan Health Service, which includes five hospitals; 2 in private practice, however, the decision rests with practice directors and their doctors. What are practitioners to do? Forbidding its use outright provides the most medicolegally protected position, but it stifles early adoption of new technology and the ability to explore its benefits. Alternatively, using the service without guidelines or knowledge of the risks involved is a pathway likely to lead to patient harm and damages.

The opportunities and potential benefits of using ChatGPT or other forms of artificial intelligence (AI) are boundless – we currently do not know where they begin and end. They range from producing draft notes and reports in clinical scenarios through to more complicated tasks such as providing chat-bot psychotherapy. Nevertheless, using the service carries many risks, including information inaccuracy, 3 breaches of confidentiality 4 and inappropriate therapeutic advice, 5 all of which have the potential to harm patients.

Presently, there is a lack of a proactive approach to policy in Australia, in contrast to international jurisdictions such as Chile. 6 Recent studies have demonstrated that AI systems can accurately predict how individual patients will respond to particular psychotropic medications. 7 However, whilst AI may enhance efficiency and decision-making, it cannot trump the doctor-patient relationship or replace the essential human elements of empathy and emotional connection. 8

Currently, there are no RANZCP or AMA guidelines, policies, or protocols to govern use. To complicate the matter, some health services have opted to ban such tools outright. 9 This contrasts with international efforts where the concept of ‘neuro-rights’ is being explored to protect patients in the context of neurotechnological advances. 10 Such considerations are crucial for the Australian medical community to contemplate in developing guidelines that align with global best practices.

When would a medical practitioner use the ChatGPT platform in practice?

Psychiatrists, along with other practitioners, may utilise the ChatGPT platform in their regular practice for both clinical and administrative purposes. This includes documentation of patient notes, drafting referral letters and automating administrative tasks. More complicated processes, such as triaging patients, providing clinical analysis or even conducting psychotherapy sessions, are also becoming possible. 1 Additionally, it may play a role in developing more accurate screening tests for individuals suffering from symptoms of psychiatric disorders. 1

The integration of AI offers a range of potential benefits that can enhance the efficiency and productivity of systems in a variety of ways. ChatGPT’s natural language processing abilities can, albeit controversially, help streamline documentation processes and transform clinical notes into well-structured letters and forms. Other tasks, such as scheduling follow-up appointments or sharing test results, can be automated and personalised. Accordingly, this technology can be valuable in alleviating the administrative workload of healthcare workers, allowing them to prioritise patient care while simultaneously streamlining workflows, leading to significant time and cost savings.
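As a concrete illustration of the documentation use case above, the following is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording and the de-identified note are illustrative assumptions rather than a recommended workflow.

```python
# Minimal sketch of the drafting use case described above, assuming the OpenAI
# Python client (openai >= 1.0). The model name, prompt wording and the
# de-identified note are illustrative assumptions, not a recommended workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

deidentified_note = (
    "45yo patient, 6 months of low mood, anhedonia and initial insomnia. "
    "Commenced sertraline 50 mg mane, partial response at 8 weeks. "
    "No suicidal ideation. Requesting psychiatric review for treatment optimisation."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any chat-completion model could be substituted
    messages=[
        {
            "role": "system",
            "content": (
                "You draft formal referral letters from de-identified clinical notes. "
                "Do not add clinical facts that are not present in the notes."
            ),
        },
        {"role": "user", "content": deidentified_note},
    ],
)

draft_letter = response.choices[0].message.content
print(draft_letter)  # the practitioner must review and edit this before signing
```

Any such draft is a starting point only; as discussed below, the practitioner remains responsible for reviewing and correcting it before it enters the clinical record.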

However, practitioners should approach the use of AI with caution. It is important that they evaluate both the advantages and potential risks associated with using ChatGPT, while also ensuring that they adhere to ethical and medicolegal standards. 1

The known (and unknown) risks of using ChatGPT

The utilisation of ChatGPT in psychiatry poses risks that require consideration when incorporating it into practice. Confidentiality is a significant concern, as the platform processes sensitive patient information, potentially leading to unintended breaches of privacy. When used to draft medical notes, there is currently no assurance of patient confidentiality, and the security risks associated with the platform are still not entirely understood. 2 In Australia, health records are classified as sensitive information for which special privacy protections are in place under the Commonwealth Privacy Act 1988 and relevant state and territory legislation, such as the Health Records Act 2001 (Vic) and the Privacy and Personal Information Protection Act 1998 (NSW). 11 Healthcare providers should be warned that any intentional or unintentional disclosure of health information into AI platforms could be a notifiable data breach under the Privacy Act. 11

Additionally, there is a risk of errors, especially if AI is used for patient notes or referral letters, emphasising the importance of doctors reviewing the information generated by ChatGPT. This is significant because liability rests with the practitioner, who is the author of the notes and letters. According to Dr Owen Bradfield, a GP and Chief Medical Officer at indemnity insurance organisation MIPS, ‘From a medicolegal perspective, a doctor tasked with completing a discharge summary already has a legal, ethical, and professional obligation to ensure that it is accurate and correct and that its creation and disclosure does not result in a breach of the doctor’s duty of confidentiality’. 2 These duties persist whether or not AI is used.

There is also the potential for inaccuracies in outputs provided by the AI, referred to as ‘hallucinations’, highlighting the challenge of relying solely on AI-generated content without careful inspection. Current limitations of ChatGPT identified by OpenAI include social biases, hallucinations and susceptibility to adversarial prompts. 12 This emphasises the importance of verifying ChatGPT-generated information against reliable sources and encouraging vigilance among practitioners when using AI technology, or risk missing patient-threatening errors (Figures 1–3). 13

Figure 1. An example of a ChatGPT response to a clinical question. 12

Figure 2. An example of a ChatGPT ‘correction’ to an ill-conceived question. 12

Figure 3. A ChatGPT ‘hallucination’ – there were 706 survivors of the Titanic. 16

The WHO has raised concerns, stating that ‘the data used to train AI may be biased, generating misleading or inaccurate information that could pose risk to health, equity and inclusiveness’. 14 This programmable bias can manifest in various ways, such as disparities in diagnosis, treatment recommendations or patient outcomes, particularly affecting marginalised or underrepresented groups. For example, over- or under-prescription may occur on the basis of demographic characteristics such as socioeconomic status or race, rather than individual circumstances. 2 The lack of transparency in AI reasoning contributes to these risks, as it may leave practitioners with an unclear comprehension of the processes by which generative AI reaches its conclusions. 15

The position of Australian medical indemnity insurers

Australian medical indemnity insurers have adopted a conservative stance regarding the integration of ChatGPT into medical practice, emphasising concerns related to privacy, security and the inherent limitations of artificial intelligence.

Avant emphasises a doctor’s duty of confidentiality and the need to protect patient information from unauthorised use and disclosure. Avant has warned doctors that inserting patient details into ChatGPT has the potential to constitute a breach of confidentiality or privacy. 17 MIGA advocates a cautious approach, emphasising transparency, user education and wider AI literacy across the industry, aiming to address and mitigate potential risks. MIPS has highlighted that, medicolegally, a doctor’s obligations of accuracy and confidentiality persist whether or not AI is employed. 1

Overall, the Australian medical indemnity insurers collectively share a similar conservative and cautious stance towards AI technology, emphasising the importance of practitioners’ ethical and legal obligations when navigating the complexities of AI integration.

Practical considerations

Several practical considerations must be kept in mind when integrating AI platforms into healthcare practices. There is no assurance of confidentiality when uploading information to AI services, emphasising the need for practitioners to exercise caution and vigilance when using ChatGPT to facilitate tasks. Given doctors’ duty of care, it is crucial that patient privacy is not breached and that practitioners remain acutely aware of the limitations of AI: it does not replace their clinical knowledge.
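To make the confidentiality point concrete, the sketch below shows a crude redaction pass applied before any text leaves the practice. It is a minimal illustration under stated assumptions: pattern-based redaction is not, on its own, adequate de-identification, and the patterns and example note are hypothetical.

```python
# Illustrative sketch only: a crude, pattern-based redaction pass applied before any
# text leaves the practice. Regular expressions are NOT sufficient de-identification
# on their own; the patterns and example note below are assumptions used to make the
# principle concrete.
import re

REDACTION_PATTERNS = {
    "[NAME]": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b"),
    "[DOB]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[PHONE]": re.compile(r"\b(?:\+61|0)\d{9}\b"),
    "[MEDICARE]": re.compile(r"\b\d{10,11}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before external processing."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Dr Smith reviewed Mr Jones (DOB 03/07/1978, ph 0412345678) today."
print(redact(note))
# -> [NAME] reviewed [NAME] (DOB [DOB], ph [PHONE]) today.
```

A step of this kind reduces, but does not eliminate, the risk of disclosing identifiable health information to an external AI service; the practitioner's obligations under the Privacy Act remain unchanged.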

The state of AI regulation is evolving and complex. It has recently been reported that the Commonwealth Attorney-General’s department is seeking to regulate future uses of AI and data flows, and to mandate mechanisms ensuring that the public is made aware of how their data are accessed and used. 2

The risks of utilising ChatGPT in healthcare may be magnified because AI platforms disclaim responsibility for damages, 18 although this is yet to be tested in the Australian legal environment. Currently, practitioners appear to assume all accountability and risk when utilising the technology; however, there is a lack of case law specific to negligence and AI in the Australian context at present. 11 In the event of a negligence claim, plaintiffs would likely name both their treating practitioner and the relevant AI organisation as respondents in a civil suit; alternatively, defendant practitioners and their organisations may seek to join the AI service in a third-party claim should they believe fault rests with it.

Considering the duty of care owed to patients, it becomes crucial to combine AI-generated information with verification against medical knowledge. However convincingly robust the text ChatGPT generates, the practitioner who signs it off is the one responsible for it.

In addition to the above is the matter of budget; integrating a novel system into an existing healthcare practice, whether large or small, brings with it a significant cost burden. 19 This needs to be weighed against the potential financial benefits as well as the difficult-to-quantify intangible benefits of saved time, reduced administrative load, improved access to care, patient experience and clinician satisfaction. 20

Conclusion

Given the rapid advancement and novel nature of AI platforms and the absence of explicit legal prohibitions or guidelines for AI technology, 17 practitioners and practice owners must weigh the benefits of AI against its pitfalls. Incorporation into the healthcare system requires careful consideration, as liability will likely rest with practitioners or their respective organisations. The absence of Australian case law specific to negligence and AI in healthcare settings highlights a lack of legal precedent to guide practitioners in this evolving field.

Distinct protocols may govern AI use within individual institutions, providing employees with specified guidelines. However, in private practice, where practitioners hold decision-making authority, a careful assessment is imperative to determine whether the benefits of AI integration outweigh the associated risks. Practitioners are urged to review AI-generated content to ensure its accuracy, recognising that liability resides with them rather than with the AI platform once they sign, approve or act on it. It is important to employ measures that ensure patient confidentiality is not breached, and practitioners are encouraged to seek counsel from their medical indemnity insurer.

Despite these risks, there is potential for the adoption of specialised AI software tailored for the medical profession. Adding domain-specificity to the GPT model could serve as an enabling framework for the use of ChatGPT in healthcare. This refers to a sandboxed version of the AI model limited to datasets relevant to the field. Restricting the domain in which the model works reduces hallucinations (plausible-sounding but factually or contextually incorrect responses). The same model could also be instructed to refuse tasks considered the responsibility of a medical professional, or to require practitioner authorisation before completing them, reinforcing the need for review of any output. This could make AI systems more appropriate for the Australian clinical and legal landscape.
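To make the sandboxed, practitioner-gated pattern concrete, the following is a conceptual sketch assuming the OpenAI Python client; the system prompt, refusal rule and sign-off gate are illustrative assumptions about how such a framework might be composed, not a description of any existing product.

```python
# Conceptual sketch of a domain-restricted assistant with a practitioner sign-off gate.
# The system prompt, refusal rule and approval step are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a drafting assistant restricted to administrative psychiatric tasks "
    "(letters, summaries, appointment correspondence). Refuse any request for a "
    "diagnosis, medication change or treatment decision, and state that these remain "
    "the responsibility of the treating practitioner."
)

def draft_with_signoff(task: str) -> str:
    """Generate a draft, then withhold it until a practitioner explicitly approves it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    draft = response.choices[0].message.content
    print(draft)
    approved = input("Practitioner approval required. Release this draft? (y/n): ")
    if approved.strip().lower() != "y":
        raise RuntimeError("Draft rejected: nothing is released without practitioner sign-off.")
    return draft
```

In this pattern the domain restriction lives in the system prompt and the release decision lives with the practitioner, so the review obligations discussed above are built into the workflow rather than left to habit.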

It is essential to embrace technology and actively address its challenges rather than dismissing AI integration into medical practice. This article advocates for the RANZCP to establish guidelines that leverage the best available evidence from international contexts regarding the use and integration of AI in healthcare settings. It is becoming increasingly essential that the psychiatric community, the medical community at large and policy makers develop comprehensive guidelines to fill existing policy gaps and adapt to the evolving landscape of AI technologies in healthcare. Currently, regulation is undoubtedly lagging behind technological advancement. By embracing innovative technologies, practitioners can lead the way in the rapidly evolving landscape of healthcare.

Footnotes

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iDs

Kirk Lehman https://orcid.org/0000-0002-6622-7107

Emeil Aroney https://orcid.org/0009-0007-6433-2791

References

