Abstract
Generative artificial intelligence and Large Language Models are reshaping labor dynamics and occupational health practices. As AI continues to evolve, there is a critical need to tailor ethical considerations to its specific impacts on occupational health. Recognizing potential ethical challenges and dilemmas, stakeholders and physicians are urged to proactively adjust the practice of Occupational Medicine in response to shifting ethical paradigms. By advocating for a comprehensive review of the International Commission on Occupational Health (ICOH) Code of Ethics, we can ensure responsible medical AI deployment, safeguarding the well-being of workers amidst the transformative effects of automation in healthcare.
Keywords: Generative Artificial Intelligence, Large Language Models, Code of Ethics
1. Introduction
Artificial Intelligence (AI) is a field of computer science aimed at creating algorithms and systems capable of mimicking human cognitive functions [3, 4]. It picks up the legacy of Alan Turing, who in 1950 asked himself the question, “can machines think?”, proposing the test named after him. The term itself was first coined in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence, a seminal event for artificial intelligence as a field, where a group of scientists set out to teach machines to use language, form concepts, self-improve, and solve problems originally reserved for humans [1, 2].
After several drafts and revisions, the European Parliament approved the final text of the AI Act* on 13 March 2024, making it the first law in the world to attempt to set clear rules and bans on the development of one of the most disruptive and revolutionary technologies. All artificial intelligence applications and systems operating in the European Union will be classified into four risk levels (minimal, limited, high, and unacceptable) to protect EU citizens, based on the Treaty on European Union (TEU) and the Treaty on the Functioning of the European Union (TFEU).
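The tiered logic of the AI Act can be illustrated with a short, purely didactic sketch. The example systems and their tier assignments below are hypothetical simplifications chosen only to show how obligations escalate with risk; they are not legal classifications.

```python
# Illustrative sketch only: mapping hypothetical AI systems to the four
# AI Act risk tiers named in the text. Assignments are assumptions for
# demonstration, not legal determinations under the Act.
from enum import IntEnum

class RiskTier(IntEnum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

# Hypothetical example systems, for illustration purposes only.
EXAMPLE_SYSTEMS = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV-screening HR tool": RiskTier.HIGH,
    "social-scoring system": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of the regulatory consequence for each tier."""
    return {
        RiskTier.MINIMAL: "no specific obligations",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.HIGH: "strict requirements before and after market entry",
        RiskTier.UNACCEPTABLE: "banned in the EU",
    }[tier]

for name, tier in EXAMPLE_SYSTEMS.items():
    print(f"{name}: {tier.name} -> {obligations(tier)}")
```

An ordered enumeration is used so that comparisons between tiers (e.g., whether one system is higher-risk than another) follow directly from the classification.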
Integrating digital technology, including AI, is revolutionizing the occupational landscape, redefining the types of available jobs and how work is organized and managed. This change is unstoppable and involves all productive sectors. Some of the most recognizable and prevalent instances of artificial intelligence in occupational settings include human resource (HR) software tools, collaborative robots (cobots) utilized in industrial settings, virtual assistants in customer service centers, wearable devices utilized for real-time training and digital platforms facilitating freelance or “gig” employment opportunities [5, 6].
AI-based HR systems are used, for example, for task assignment, performance evaluations, and activity monitoring, such as the technique of people analytics, defined as the use by human resources of data on behavior, relationships, and human traits to make business decisions [7].
Smart personal protective equipment (PPE) combines traditional protection systems with electronic components (such as sensors that can be incorporated into helmets or safety glasses, and mobile or fixed camera systems) capable of continuously recording data on the worker, the work environment, and the use of the device itself. It allows, for example, the detection of a person on the ground, the mapping of hazardous areas, and immediate alerts on the release of substances hazardous to workers’ health, including allergens, noise, and chemicals; it can also monitor stress and physical fatigue or record vital parameters such as core body temperature and heart rate [8-11].
Chatbots represent a model of human-computer interaction and are designed to simulate human conversation, especially via text or voice, using artificial intelligence algorithms. They can be integrated into various platforms, such as websites, instant messaging apps, social media, and email, and can be designed to perform different roles, from customer support services to virtual tutors or conversation companions. In the world of work, they are increasingly used for various purposes. Some chatbots analyze workers’ communication patterns to assess the risk of mental health problems, such as burnout; they can also provide personalized support to workers, such as personalized mindfulness practices through ad hoc platforms like MindBot, an EU-funded project [12]. They are also used to provide information on safety procedures to workers at any time, or to manage customer assistance requests, freeing up the humans working in call centers. Virtual reality is used to train workers towards safer and healthier behaviors. Additionally, cobots, or collaborative robots, are employed to assist workers and improve efficiency and safety in the workplace across a variety of sectors and applications, such as assembly, logistics, healthcare, and agriculture [13, 14], allowing people to be removed from dangerous physical work and from environments with chemical and ergonomic hazards, even though cobots themselves can pose safety issues [15].
Another example of technological integration occurs in the Gig Economy, where digital platforms offer an ecosystem benefiting both workers, known as gig workers (e.g., couriers), and clients or companies. According to 2021 data from the EU Science Hub [16, 17], gig workers represent a rapidly growing workforce, with millions of people employed in this sector across Europe.
These examples of existing workplace applications confirm that emerging technologies such as artificial intelligence, automation, the Internet of Things (IoT), including healthcare IoT, robotics, and blockchain are radically transforming how organizations operate and manage their activities [18], meeting the definition of AI-based worker management (AIWM) technologies.
Data Mining (DM), based on the analysis of Big Data (a subset of Data Science) through Deep Learning (DL), a branch of Machine Learning (ML) that relies on artificial neural networks with multi-layer (deep) structures, can help researchers in the process of Knowledge Discovery in Databases (KDD) [19-21].
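The KDD steps mentioned above (selection, preprocessing, data mining, interpretation) can be sketched with a toy frequent-pattern search. The synthetic health-surveillance records, finding labels, and support threshold below are illustrative assumptions, not a real dataset or a recommended method.

```python
# Minimal KDD sketch on synthetic records: select, preprocess, mine
# co-occurring findings, then keep only the frequent patterns.
# All data and thresholds are hypothetical, for illustration only.
from collections import Counter
from itertools import combinations

# 1. Selection: one set of reported findings per worker (synthetic).
records = [
    {"noise exposure", "tinnitus", "fatigue"},
    {"noise exposure", "tinnitus"},
    {"vdt work", "eye strain", "fatigue"},
    {"noise exposure", "tinnitus", "headache"},
]

# 2. Preprocessing: normalize labels (here, trim and lowercase).
records = [{finding.strip().lower() for finding in rec} for rec in records]

# 3. Data mining: count co-occurring pairs of findings across records.
pair_counts = Counter(
    pair for rec in records for pair in combinations(sorted(rec), 2)
)

# 4. Interpretation: keep pairs present in at least half of the records.
min_support = len(records) / 2
frequent = {pair: n for pair, n in pair_counts.items() if n >= min_support}
print(frequent)  # only ('noise exposure', 'tinnitus') meets the threshold
```

Real KDD pipelines replace step 3 with far richer algorithms (association rules, clustering, deep models), but the staged structure is the same.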
The implementation of increasingly complex technological models, including AI-based ones, could facilitate the transition to a 5P Occupational Medicine (personalized, preventive, predictive, participatory precision medicine). This seems possible from the study of digital twins: digital replicas that mirror a physical entity in real time or near-real time, enabling simulations and predictive analytics and allowing users to forecast performance, behavior, and potential issues before they occur in the physical world. In Occupational Medicine, digital twins could be used to study and predict the pathogenetic mechanisms of technopathies, providing a deeper understanding of the physical world and enabling data-driven evidence [19, 22, 23].
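A highly simplified sketch can convey the digital-twin idea of syncing a virtual replica with sensor data and projecting a trend before it occurs physically. The heart-rate field, the naive linear extrapolation, and the simulated readings are illustrative assumptions only; real twins use far richer physiological models.

```python
# Didactic digital-twin sketch: a virtual replica is updated from a
# (simulated) wearable-sensor stream and used to forecast a trend.
# The model and data are assumptions for illustration, not clinical tools.
from dataclasses import dataclass, field

@dataclass
class WorkerTwin:
    """Virtual replica kept in (near-)real-time sync with sensor readings."""
    heart_rate_history: list = field(default_factory=list)

    def sync(self, heart_rate_bpm: float) -> None:
        """Ingest one new sensor reading."""
        self.heart_rate_history.append(heart_rate_bpm)

    def forecast(self, steps_ahead: int) -> float:
        """Naive linear extrapolation of the last observed trend."""
        h = self.heart_rate_history
        if len(h) < 2:
            return h[-1] if h else 0.0
        trend = h[-1] - h[-2]
        return h[-1] + trend * steps_ahead

twin = WorkerTwin()
for reading in (80, 84, 88):          # simulated sensor stream
    twin.sync(reading)

predicted = twin.forecast(steps_ahead=3)   # 88 + 4 * 3 = 100
print(f"predicted heart rate: {predicted} bpm")
```

The point of the sketch is the workflow, sync then simulate, which is what lets a twin flag a potential issue (here, a rising heart rate) before it materializes.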
In recent years, generative artificial intelligence (GAI), which evolved from Machine Learning (ML), has undergone rapid development. This branch of AI consists of algorithms that learn from large amounts of data to create new content in various formats, such as text, images, video, audio, and code. These models are known for their ability to perform a wide range of tasks, including writing, poetic composition, literature review, translation, and text adaptation to different contexts [24], simulating human cognitive processes, even if there is no specific correlation between the states of the computer and the cognitive states of the brain. Large language models (LLMs) fall within the realm of generative AI. These systems are based on complexes of artificial neural networks (ANNs) that mimic the brain’s structure and handle vast volumes of written information. They can be employed in various contexts, such as automatic translation, text creation, and question answering [25], generating human-like text. LLMs have spread rapidly around the globe in the last year and a half and have immediately demonstrated their potential to revolutionize the global medical sector [26]. The potential applications of LLMs in the medical field mainly concern medical education, scientific research, clinical practice, and the doctor-patient relationship [27]. The use of chatbots like ChatGPT, based on the Generative Pretrained Transformer (GPT), a specific type of LLM developed by OpenAI [28] and trained on vast amounts of text data, also represents an opportunity in Occupational Medicine because of their capacity to generate coherent, contextually relevant sentences. Unlike traditional chatbots, which are often designed to perform specific tasks or respond to predefined questions, models like ChatGPT are better suited to generating fluid and natural language responses across a wide range of conversational contexts, without the need to be programmed or trained on specific data [29].
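The core mechanism described above, learning from text and then predicting plausible continuations, can be shown with a deliberately tiny stand-in. Real LLMs use deep transformer networks with billions of parameters; the bigram counter below, with its made-up corpus, is only a didactic caricature of "train on text, then generate the next token".

```python
# Toy next-token predictor: count which token follows each token in a
# tiny synthetic corpus, then predict greedily. A vastly simplified
# stand-in for LLM behavior, for illustration only.
from collections import Counter, defaultdict

corpus = (
    "the worker wears protective equipment . "
    "the worker reports fatigue . "
    "the physician reviews the report ."
).split()

# "Training": tally observed continuations for every token.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_token(token: str) -> str:
    """Greedy prediction: the most frequent continuation seen in training."""
    return following[token].most_common(1)[0][0]

print(next_token("the"))  # "worker": it followed "the" most often
```

Even this caricature exhibits the key property, and the key risk, of generative models: output is driven by statistical patterns in the training data, not by an understanding of truth.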
GAI systems and subsets could be employed as virtual assistants for Occupational Medicine professionals, providing instant responses regarding current health and safety regulations in the workplace [30, 31]. They could also be used to draft informative documents and company communications and write and review risk assessment documentation to fulfill employer regulatory obligations. Finally, they could be employed to develop more efficient management systems for health surveillance, ensuring more comprehensive data collection.
However, it is essential to recognize that the integration of AI and machine learning in healthcare also presents challenges, including concerns about data privacy and security, the need for robust regulatory frameworks, and the need to ensure that these technologies are accessible and fair for all patients, as highlighted by the EU-OSHA Healthy Workplaces Campaign “Safe and Healthy Work in the Digital Age”, running from 2023 to 2025. Chief among these challenges are LLM hallucinations, a term born of the anthropomorphization of the AI lexicon and currently used when generative AI systems based on Large Language Models (LLMs) connect or misinterpret data and produce erroneous information that appears coherent and plausible but lacks factual accuracy or medical validity [32]. This phenomenon poses a significant problem for medical applications: healthcare professionals might encounter AI-generated content that looks accurate but could lead to incorrect diagnoses or treatments if relied upon without scrutiny, because of the so-called counterfactual bias, the tendency to consider an incorrect factual premise true. The causes of LLM hallucinations can be varied and complex. One of the main reasons is the sensitivity of LLMs to their training data, or the presence of patterns that can mislead the algorithm. Ambiguous or non-representative data can generate erroneous responses that reflect distorted interpretations of reality, with significant ethical and safety implications. These errors are sometimes so gross that they are immediately obvious. However, continued progress in LLM training (e.g., via human feedback) should steadily reduce gross hallucinations and yield more reliable generative models.
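The fact-checking discipline this implies can be sketched as a simple gate: no generated claim is accepted unless it is supported by a trusted source. The knowledge base, the example claims, and the exact-match rule below are hypothetical simplifications; real verification requires a clinician and authoritative literature, not string matching.

```python
# Sketch of a hallucination guard: route every AI-generated claim through
# a check against a trusted reference set before any clinical use.
# The facts and the matching rule are illustrative assumptions only.
TRUSTED_FACTS = {
    "occupational noise exposure can cause hearing loss",
    "asbestos exposure is associated with mesothelioma",
}

def review_claim(generated_claim: str) -> str:
    """Flag any claim not found in the trusted knowledge base."""
    claim = generated_claim.strip().lower()
    if claim in TRUSTED_FACTS:
        return "supported"
    return "unverified: requires human fact-checking"

print(review_claim("Asbestos exposure is associated with mesothelioma"))
print(review_claim("Vitamin X cures noise-induced hearing loss"))
```

The design choice worth noting is the default: anything not positively verified is escalated to a human, mirroring the "cautious use and careful fact-checking" urged in the text.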
However, regardless of whether issues of hallucinations are adequately addressed, healthcare providers should be aware of the spectrum of capabilities and limitations of generative AIs, ensuring that medical decisions are based on reliable evidence and professional knowledge rather than solely relying on AI-generated text that may not always be accurate or clinically appropriate. Cautious use and careful fact-checking are crucial, alongside transparency, surveillance, and regulation, as already warned [33-35].
Therefore, careful consideration and ongoing evaluation are necessary to harness AI’s full potential in improving healthcare services while effectively addressing these challenges [36].
2. Discussion
The risks arising from digitization in the workplace fall within the scope of Council Directive 89/391/EEC [37], the framework directive on occupational health and safety, and the national legislation that has implemented it. In addition to protecting workers from work-related risks, it also establishes the employer’s responsibility to ensure safety and health in the workplace. The main risks arising from technological integration include loss of awareness of events, excessive reliance on technology or the potential loss of specific job-related skills, demotion, loss of autonomy and employment, social isolation, privacy violations, and the inability to draw clear boundaries between social and private life due to 24/7 access to technologies [5].
Even gig workers, while enjoying a certain autonomy typical of freelance work, are subject to a high degree of control over their activities by the digital platforms they work for, exercised through management algorithms. During the COVID-19 health emergency, the European Commission recognized gig workers as essential workers, emphasizing the crucial role they play in ensuring the continuous functioning of vital services for public safety and health [38]. However, the ambiguous nature of their employment status has made it complex to classify them legally and protect their rights, particularly concerning health and safety at work.
In August 2023, the ILO published a working paper entitled ‘Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality’, which presents a comprehensive analysis of the potential exposure of occupations and tasks to Generative Artificial Intelligence, specifically LLMs based on Generative Pre-Trained Transformers (GPTs), and the possible implications of such exposure for job quantity and quality. Unlike previous waves of technological transformation that primarily affected low-skill and repetitive jobs with the highest potential for automation, machine learning systems can enhance performance in non-routine tasks. The proliferation of GPT-based LLMs further underscores this evolving trend, given their capacity to execute cognitive tasks such as analyzing text, drafting documents and messages, or scraping private repositories and the web for additional information. Consequently, this new wave of automation will primarily target a different group of workers, typically associated with ‘knowledge work,’ including clerical jobs. The anticipated impact appears to be not the obliteration of jobs but rather potential changes in the quality of jobs, particularly regarding work intensity and autonomy. The socio-economic impacts of Generative Artificial Intelligence are not predetermined; rather, they will largely depend on how its deployment is managed, necessitating policies that support an orderly, equitable, and consultative transition [39].
Psychosocial risks may worsen with the spread of generative AI and LLM in the workplace, or new risks may arise. At present, there are no official sources that deal with the use of AI in Occupational Medicine. Only recently, the World Health Organization (WHO) issued two documents related to AI [40] and LLMs and Generative AI [41]. At the same time, the International Code of Ethics for Occupational Health Professionals, last updated in 2014, contains neither recommendations nor guidelines on using AI.
The WHO documents offer safety recommendations for the use of AI in healthcare, covering six key areas: documentation and transparency; total product lifecycle and risk management; intended use and validation; data quality; privacy and data protection; and engagement and collaboration. They also complement the WHO’s previous 2021 publication [42], which did not consider the potential applications of LLMs because these were not yet as advanced, by providing more than 40 recommendations. Their goal is to ensure the appropriate use of large multi-modal models (LMMs) to protect public health, emphasizing the need to carefully consider the risks associated with developing and using generative AI technologies to improve healthcare.
The WHO identifies five areas of application of LMMs in healthcare: clinical diagnosis, patient-guided care, administrative tasks, medical training, and scientific research for drug development. The same guidelines outline ethical considerations and best practices for developing, implementing, and responsibly using these models. WHO highlights the need to consider crucial aspects such as privacy, data security, transparency, accountability, and fairness when using these technologies. Based on what has recently been published by the WHO, we believe that the rapid progress of artificial intelligence in the medical field must also be considered in Occupational Medicine to analyze its possible uses and ensure its correct application. Occupational physicians should consider the impact of AI and, in particular, of LLMs from two perspectives:
- The impact that generative AI may have in terms of employment and organizational well-being. Its introduction may create new occupational hazards due to possible worker demotion, burnout, and alienation caused by employers’ increasing adoption of AI. This is also underscored by the WHO guidelines, where the potential loss of jobs and the need for workers to reinvent themselves and adapt to AI-enabled jobs are listed among the risks for healthcare systems.
- The impact of generative AI on occupational physicians’ daily practice. For example, the use of LLMs on patients’ private data may pose ethical and privacy concerns about how user data is handled and stored when submitted to third-party systems. Also, LLM hallucinations may lead to wrong diagnoses if used unchecked by the health professional.
To raise awareness of Generative AI’s opportunities and potential downsides and to regulate its utilization in the medical field, we believe that the WHO guidelines should be incorporated into the ethical codes in force in Occupational Medicine, such as the International Code of Ethics published by the International Commission on Occupational Health (ICOH) [43, 44].
Several key considerations deserve integration into the ICOH Code of Ethics:
- Transparency and explainability. The code should underscore, reiterate, and enforce the necessity of transparency among workers concerning clinical decision-making supported by LLMs, with occupational health professionals being the first responsible for the transparent utilization of such technologies.
- Human oversight and accountability. In a landscape where LLMs wield substantial influence over clinical decisions, elucidating the responsibilities of occupational health practitioners and other professionals in generative AI usage is imperative.
- Data privacy and confidentiality. Privacy concerns must be reaffirmed within the code, emphasizing the confidentiality of health surveillance data processed through LLMs.
- Equity and bias. Given the potential for LLMs to reflect and amplify biases, the code should stress the importance of mitigating discrimination risks and promoting equitable access to and utilization of these technologies, particularly in health surveillance and clinical decision-making.
- Ethical use and learning. Continuous education and professional development are essential. Occupational health practitioners should undergo appropriate training in the ethical and responsible use of LLMs and engage in ongoing professional development to stay abreast of technological advancements and emerging ethical dilemmas.
- Continuous monitoring and updating. The code should advocate for continuous monitoring and evaluation of the ethical implications and effectiveness of LLM utilization in clinical practice and Occupational Medicine research, focusing on identifying and rectifying any issues or challenges that may arise.
Updating the ICOH code of ethics in Occupational Medicine in light of the WHO guidelines would help ensure that using LLMs in this field is ethical and respects workers’ rights.
As pointed out in the preface of the publication “The International Code of Ethics for Occupational Health Professionals” by the Italian National Institute for Insurance against Accidents at Work (INAIL) [45], the Code represents a starting point, not an end point, in a dynamic process involving the entire occupational health community. Therefore, the development and application of professional standards, following a multidisciplinary approach that keeps pace with the times, should address, along the lines of the WHO, the new ethical challenges and the governance of generative AI, given the possible applications of LLMs in Occupational Medicine and, more generally, in the international occupational health landscape.
The Italian Society of Occupational Medicine (SIML) could later adopt the updated code of ethics. This could pave the way for drafting national guidelines for using GAI and LLMs in Occupational Medicine.
3. Conclusion
Artificial Intelligence (AI) has the potential to revolutionize Occupational Medicine by improving efficiency, accuracy, and personalized healthcare for workers. However, addressing challenges such as data privacy, ethics, and integration is crucial for successful implementation.
Updating the ICOH Code of Ethics in Occupational Medicine, considering recent developments in Generative AI, could help maximize the benefits for workers’ health and safety while embracing this technological revolution. It would also enable the containment of the associated ethical and clinical risks arising from the use of LLMs by occupational physicians. Moreover, alongside updating the code of ethics, the definition of specific Occupational Medicine guidelines for using LLMs could be a starting point for the new generations of young occupational physicians entering the field. The discipline of Occupational Medicine cannot afford to remain stagnant in the face of the scientific world’s embrace of Generative AI. In this rapid technological evolution, it is crucial to set clear ethical and governance limits and to promote open-mindedness among all occupational health and safety professionals. Integrating these innovations into the workplace and into the clinical practice of occupational physicians, intelligently and responsibly, is a challenge we must all undertake. The future will not see artificial intelligence replace the occupational physician; rather, the occupational physician who is able to use artificial intelligence with scrutiny and critical judgment may take a further step forward in the health and safety of workers.
Footnotes
* Artificial Intelligence Act (https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html)
Declaration of Generative AI and AI-Assisted Technologies in the Writing Process:
None.
Funding:
This research received no external funding.
Institutional Review Board Statement:
Not applicable.
Informed Consent Statement:
Not applicable.
Declaration of Interest:
The authors declare no conflict of interest. The authors’ findings and conclusions do not represent the official position of national and international organizations mentioned in the paper.
Authors Contribution Statement:
The authors have contributed equally to this work.
References
- Turing AM. Computing Machinery and Intelligence. Mind. 1950;LIX(236):433–460. Doi: 10.1093/mind/LIX.236.433.
- McCarthy J, Minsky ML, Rochester N, Shannon CE. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine. 2006;27(4):12. Doi: 10.1609/aimag.v27i4.1904.
- Haug CJ, Drazen JM. Artificial Intelligence and Machine Learning in Clinical Medicine, 2023. N Engl J Med. 2023;388(13):1201–1208. Doi: 10.1056/NEJMra2302038.
- Bhattad PB, Jain V. Artificial Intelligence in Modern Medicine – The Evolving Necessity of the Present and Role in Transforming the Future of Medical Care. Cureus. 2020. Doi: 10.7759/cureus.8041.
- Leso V, Fontana L, Iavicoli I. The occupational health and safety dimension of Industry 4.0. Med Lav. 2018;110(5):327–338. Doi: 10.23749/mdl.v110i5.7282.
- EU-OSHA (European Agency for Safety and Health at Work). OSH and the Future of Work: benefits and risks of artificial intelligence tools in workplaces. 2019. Available online: https://osha.europa.eu/en/publications/osh-and-future-work-benefits-and-risks-artificial-intelligence-tools-workplaces (accessed 05 February 2024).
- Collins L, Fineman DR, Tsuchida A, Walsch L, Volini E. People Analytics: Recalculating the Route. In: Rewriting the rules for the digital age. Deloitte University Press; 2017:97–106. Available online: https://www2.deloitte.com/content/dam/Deloitte/global/Documents/About-Deloitte/central-europe/ce-global-human-capital-trends.pdf (accessed 15 February 2024).
- Mokhtari F, Cheng Z, Wang CH, Foroughi J. Advances in Wearable Piezoelectric Sensors for Hazardous Workplace Environments. Global Challenges. 2023;7(6). Doi: 10.1002/gch2.202300019.
- Baldassarre A, Mucci N, Padovan M, et al. The role of electrocardiography in Occupational Medicine, from Einthoven’s invention to the digital era of wearable devices. Int J Environ Res Public Health. 2020;17(14). Doi: 10.3390/ijerph17144975.
- Prince SA, Elliott CG, Scott K, Visintini S, Reed JL. Device-measured physical activity, sedentary behaviour and cardiometabolic health and fitness across occupational groups: A systematic review and meta-analysis. Int J Behav Nutr Phys Act. 2019;16(1). Doi: 10.1186/s12966-019-0790-9.
- Piwek L, Ellis DA, Andrews S, Joinson A. The Rise of Consumer Health Wearables: Promises and Barriers. PLoS Med. 2016;13(2). Doi: 10.1371/journal.pmed.1001953.
- Horizon 2020. Mental Health promotion of cobot Workers in Industry 4.0. Available online: https://cordis.europa.eu/project/id/847926. Doi: 10.3030/847926 (accessed 06 February 2024).
- Murashov V, Hearl F, Howard J. Working safely with robot workers: Recommendations for the new workplace. J Occup Environ Hyg. 2016;13(3). Doi: 10.1080/15459624.2015.1116700.
- Paliga M. The Relationships of Human-Cobot Interaction Fluency with Job Performance and Job Satisfaction among Cobot Operators—The Moderating Role of Workload. Int J Environ Res Public Health. 2023;20(6). Doi: 10.3390/ijerph20065111.
- EU-OSHA (European Agency for Safety and Health at Work). Foresight on new and emerging occupational safety and health risks associated with digitalisation by 2025. European Risk Observatory Report. 2018. Available online: https://osha.europa.eu/en/publications/foresight-new-and-emerging-occupational-safety-and-health-risks-associated (accessed 20 February 2024).
- Berg J, Hilal A, El S, Horne R. World employment and social outlook: Trends 2021. International Labour Organization; 2021.
- Wu D, Huang JL. Gig work and gig workers: An integrative review and agenda for future research. J Organ Behav. 2024;45(2):183–208. Doi: 10.1002/job.2775.
- Adel A. Future of industry 5.0 in society: human-centric solutions, challenges and prospective research areas. J Cloud Comp. 2022;11:40. Doi: 10.1186/s13677-022-00314-5.
- Topol Review. Preparing the healthcare workforce to deliver the digital future. Final Report, February 2019. Health Education England. Available online: https://topol.hee.nhs.uk.
- Boffetta P, Collatuzzo G. Application of P4 (Predictive, Preventive, Personalized, Participatory) Approach to Occupational Medicine. Med Lav. 2022;113(1):e2022009. Doi: 10.23749/mdl.v113i1.12622.
- Blobel B, Kalra D. Editorial: Managing healthcare transformation towards P5 medicine. Front Med (Lausanne). 2023;10:1244100. Doi: 10.3389/fmed.2023.1244100.
- Kamel Boulos MN, Zhang P. Digital Twins: From Personalised Medicine to Precision Public Health. J Pers Med. 2021;11(8):745. Doi: 10.3390/jpm11080745.
- Armeni P, Polat I, Maria De Rossi L. Digital Twins for Health: Opportunities, Barriers and a Path Forward. IntechOpen. 2023. Doi: 10.5772/intechopen.112490.
- Shoja MM, Van de Ridder JMM, Rajput V. The Emerging Role of Generative Artificial Intelligence in Medical Education, Research, and Practice. Cureus. 2023;15(6):e40883. Doi: 10.7759/cureus.40883.
- Zhao WX, et al. A Survey of Large Language Models. arXiv. 2023. Doi: 10.48550/arXiv.2303.18223.
- Lee P, et al. Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine. Reply. N Engl J Med. 2023;388(25):2400. Doi: 10.1056/NEJMc2305286.
- Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8). Doi: 10.1038/s41591-023-02448-8.
- Naveed H, Khan AU, Qiu S, et al. A Comprehensive Overview of Large Language Models. arXiv. 2023. arXiv:2307.06435.
- Chakraborty C, et al. Overview of Chatbots with special emphasis on artificial intelligence-enabled ChatGPT in medical science. Front Artif Intell. 2023;6:1237704. Doi: 10.3389/frai.2023.1237704.
- Sridi C, Brigui S. The use of ChatGPT in Occupational Medicine: opportunities and threats. Ann Occup Environ Med. 2023;35:e42. Doi: 10.35371/aoem.2023.35.e42.
- Padovan M, Cosci B, Petillo A, et al. ChatGPT in Occupational Medicine: A Comparative Study with Human Experts. Bioengineering. 2024;11(1):57. Doi: 10.3390/bioengineering11010057.
- Ji Z, Lee N, Frieske R, et al. Survey of hallucination in natural language generation. ACM Comput Surv. 2023;55:1–38.
- Mutti A. Hey James, Write an Editorial for “La Medicina del Lavoro”. Med Lav. 2023;114(2):e2023014. Doi: 10.23749/mdl.v114i2.14451.
- Menz BD, Modi ND, Sorich MJ, Hopkins AM. Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance: Weapons of Mass Disinformation. JAMA Intern Med. 2024;184(1):92–96. Doi: 10.1001/jamainternmed.2023.5947.
- EU-OSHA (European Agency for Safety and Health at Work). Worker management through AI: From technology development to the impacts on workers and their safety and health. Discussion paper. 2024. Available online: https://osha.europa.eu/en/highlights/ai-worker-management-worker-safety-and-health-considered (accessed 19 March 2024).
- Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1). Doi: 10.1038/s41746-023-00873-0.
- Council Directive 89/391/EEC of 12 June 1989 on the introduction of measures to encourage improvements in the safety and health of workers at work. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:31989L0391 (accessed 06 February 2024).
- Friedland J, Balkin DB. When gig workers become essential: Leveraging customer moral self-awareness beyond COVID-19. Bus Horiz. 2023;66(2):181–190. Doi: 10.1016/j.bushor.2022.05.003.
- Gmyrek P, Berg J, Bescond D. Generative AI and jobs: A global analysis of potential effects on job quantity and quality. ILO Working Paper 96. Geneva: ILO; 2023. Doi: 10.54394/FHEM8239.
- Regulatory considerations on artificial intelligence for health. Geneva: World Health Organization; 2023. Licence: CC BY-NC-SA 3.0 IGO.
- Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. Geneva: World Health Organization; 2024. Licence: CC BY-NC-SA 3.0 IGO.
- Ethics and Governance of Artificial Intelligence for Health. Geneva: World Health Organization; 2021. Available online: https://www.who.int/publications/i/item/9789240029200 (accessed 30 January 2024).
- International Commission on Occupational Health (ICOH). International code of ethics for occupational health professionals. Third edition. ICOH; 2014.
- The ICOH International Code of Ethics for Occupational Medicine Practitioners: Historical Fortunes and Future Perspectives in Italy. Med Lav. 2016;107(6):485–489.
- INAIL. Il codice internazionale di etica per gli operatori di medicina del lavoro. 2016. ISBN 978-88-7484-511-8.
