Patterns. 2024 May 10;5(5):100988. doi: 10.1016/j.patter.2024.100988

Table 5.

Overview of possible solutions to the AI deception problem

Regulation: policymakers should robustly regulate AI systems capable of deception. Both LLMs and special-use AI systems capable of deception should be classified as high risk or unacceptable risk within risk-based frameworks for regulating AI.
Bot-or-not laws: policymakers should support bot-or-not laws that require AI systems and their outputs to be clearly distinguished from humans and human-generated outputs.
Detection: technical researchers should develop robust detection techniques to identify when AI systems are engaging in deception.
Making AI systems less deceptive: technical researchers should develop better tools for ensuring that AI systems behave less deceptively.