Table 5. Proposed solutions to AI deception

| Solution | Recommendation |
| --- | --- |
| Regulation | Policymakers should robustly regulate AI systems capable of deception. Both LLMs and special-use AI systems capable of deception should be treated as high risk or unacceptable risk within risk-based frameworks for regulating AI systems. |
| Bot-or-not laws | Policymakers should support bot-or-not laws that require AI systems and their outputs to be clearly distinguished from human employees and human-generated outputs. |
| Detection | Technical researchers should develop robust techniques for detecting when AI systems are engaging in deception (a toy illustration follows this table). |
| Making AI systems less deceptive | Technical researchers should develop better tools for ensuring that AI systems are less deceptive. |
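
One concrete direction for the detection research recommended above is consistency checking: probing whether a model gives stable answers across semantically equivalent prompts, on the hypothesis that deceptive or unreliable outputs may shift with framing. The sketch below is a minimal, hypothetical illustration under that assumption, not a method from this paper; `query_model` is a stand-in for an arbitrary LLM API call, and exact string matching is a deliberately crude way to compare answers.

```python
from typing import Callable, Iterable


def consistency_score(query_model: Callable[[str], str],
                      paraphrases: Iterable[str]) -> float:
    """Return the fraction of paraphrase pairs with matching answers.

    A low score means the model's answers are unstable across
    semantically equivalent framings, one possible (weak) signal of
    deceptive or unreliable behavior. This is a toy heuristic, not a
    validated deception detector.
    """
    answers = [query_model(p).strip().lower() for p in paraphrases]
    # Compare every unordered pair of answers.
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    if not pairs:  # fewer than two paraphrases, so nothing to compare
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    # Stubbed model for illustration only; a real detector would call an LLM.
    stub = lambda prompt: "yes" if "paris" in prompt.lower() else "no"
    score = consistency_score(stub, [
        "Is Paris the capital of France?",
        "Paris is the capital of France, correct?",
        "Does France have Paris as its capital?",
    ])
    print(f"consistency score: {score:.2f}")  # prints 1.00 for this stub
```

An actual research-grade detector would compare the semantic content of answers rather than raw strings and would be validated against labeled examples of deceptive behavior; the sketch only conveys the shape of a consistency-based probe.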