Table 2. Barriers and possible solutions to incorporating AI in clinical practice.
| Barriers | Possible Solutions |
|---|---|
| Challenges of validating AI-enabled algorithms | |
| Variability in reported information about AI-enabled algorithms | Use predefined dataset description forms that include detailed information about, and justification of, the training dataset |
| Lack of generalizability | |
| Overfitting AI-enabled algorithm | |
| Need for reproducibility and repeatability | Test the AI-enabled algorithm on multiple datasets built for a similar intended use and indication. Of note, certain algorithms, such as those using deep learning, should not be tested on their own training dataset; the training, validation, and test datasets always need to be clearly distinguished (see the data-splitting sketch below the table). |
| Researcher-clinician-industry collaboration | Involvement of all key stakeholders throughout the process of building and validating AI-enabled algorithms, with strategic meetings at defined points along the timeline |
| Challenges of implementing AI-enabled technology in practice | |
| Wide variety of AI-enabled products to choose from | Key points for consideration |
| Lack of knowledge about AI algorithms | Increased awareness and education regarding how AI algorithms learn and what biases are associated with their use |
| Lack of knowledge about when to use AI algorithms | Need for prospective studies assessing clinical noninferiority in a similar population and investigating how the AI–human team performs. |
| Lack of ease in operability | User-friendly interface, improved findability and accessibility |
| Physiological explanation of the working model | An explanation of why the sleep data indicate high risk (eg, in dementia, the AI attributes the risk to a decreased delta-to-theta power ratio during N3; in narcolepsy, to a SOREMP during nocturnal sleep and the presence of mixed REM-wake states); see the spectral-power sketch below the table |
| Difficulty in utilizing AI as a tool complementing human skills | The AI algorithm highlights epochs that were difficult to predict or shows the features used to make predictions, supporting clinician oversight, potentially explained in natural language with tools such as ChatGPT (OpenAI, San Francisco, California, USA)51 (see the uncertainty-flagging sketch below the table) |
| Patient privacy and workflow integration concerns | |
| Continuous updates of AI algorithms can create differences in clinical outcome | Postmarket surveillance by the manufacturer and the need to revalidate algorithms after significant updates |
| Patient access to AI algorithm-generated clinical data | Need for shared decision making, with the clinician taking the lead in integrating data from medical-grade and consumer-grade devices and formulating treatment plans with patient input |
AI = artificial intelligence, IT = information technology, REM = rapid eye movement, SOREMP = sleep-onset REM period.
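
Data-splitting sketch referenced in the reproducibility row: a minimal example, assuming a hypothetical per-epoch feature array `X` and label array `y`, of keeping training, validation, and test sets strictly separate using scikit-learn's `train_test_split`; the array shapes and split proportions are illustrative assumptions, not taken from the source.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))      # 1,000 epochs x 12 features (stand-in data)
y = rng.integers(0, 5, size=1000)    # 5 sleep stages (stand-in labels)

# Hold out a test set first, then split the remainder into training and validation.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=42, stratify=y_trainval)

# The model is fit on X_train, tuned on X_val; X_test is scored only once at the end.
```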
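
Spectral-power sketch referenced in the physiological-explanation row: a minimal example of estimating a delta-to-theta power ratio with Welch's method, assuming a hypothetical N3 EEG segment `eeg_n3` sampled at `fs` Hz; the band edges, window length, and stand-in signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 100                                                  # Hz (stand-in sampling rate)
eeg_n3 = np.random.default_rng(0).normal(size=30 * fs)    # 30 s of stand-in "EEG"

# Welch power spectral density with 4-second windows.
f, psd = welch(eeg_n3, fs=fs, nperseg=4 * fs)

def band_power(freqs, spectrum, lo, hi):
    """Integrate the PSD over the frequency band [lo, hi) in Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(spectrum[mask], freqs[mask])

delta = band_power(f, psd, 0.5, 4.0)   # delta band
theta = band_power(f, psd, 4.0, 8.0)   # theta band
print("delta-to-theta power ratio:", delta / theta)
```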
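
Uncertainty-flagging sketch referenced in the AI-as-complement row: a minimal example of flagging low-confidence epochs for clinician review, assuming a hypothetical per-epoch probability matrix `proba` from any trained sleep stager; the 0.6 threshold is an arbitrary illustrative choice.

```python
import numpy as np

# Per-epoch class probabilities from any trained sleep stager (stand-in values).
proba = np.array([
    [0.90, 0.05, 0.05],   # confident epoch
    [0.40, 0.35, 0.25],   # ambiguous epoch
    [0.50, 0.30, 0.20],   # ambiguous epoch
])

confidence = proba.max(axis=1)                    # highest predicted probability per epoch
review_queue = np.flatnonzero(confidence < 0.6)   # epochs flagged for manual review
print("Epochs to review manually:", review_queue)  # -> [1 2]
```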