Table 4.

| Risk | Description |
|---|---|
| Persistent false beliefs | Human users of AI systems may become locked into persistent false beliefs, as imitative AI systems reinforce common misconceptions and sycophantic AI systems provide pleasing but inaccurate advice. |
| Political polarization | Human users may become more politically polarized through interacting with sycophantic AI systems. Sandbagging may lead to sharper disagreements between groups with different levels of education. |
| Enfeeblement | Human users may be lulled by sycophantic AI systems into gradually delegating more authority to AI. |
| Anti-social management decisions | AI systems with strategic deception abilities may be incorporated into management structures, leading to an increase in deceptive business practices. |