PNAS Nexus. 2024 Jan 23;3(1):pgae004. doi: 10.1093/pnasnexus/pgae004

Fig. 2.

This shows where bad-actor-AI activity will likely happen, i.e. across the bad-actor-vulnerable-mainstream ecosystem (left panel). It comprises interlinked bad-actor communities (colored nodes) and vulnerable mainstream communities (white nodes, i.e. communities to which bad-actor communities have formed a direct link). This empirical network is shown using the ForceAtlas2 layout algorithm (60), which positions nodes spontaneously, so sets of communities (nodes) appear closer together when they share more links. Different colors correspond to different platforms (see Fig. S1). The small ring marks the 2023 Texas shooter's YouTube community as an illustration. The ordered circles show successive sets of white nodes with 1, 2, 3, etc. links from 4Chan; these nodes hence experience a net spring force toward the core that is 1, 2, 3, etc. times as strong, so they will be roughly 1, 2, 3, etc. times more likely to receive future bad-actor-AI content and influence. The right panel shows a Venn diagram of the topics discussed within the distrust subset (see text and Section S4 for a fuller explanation). Each circle denotes a category of communities that discuss a specific set of topics, listed at the bottom. The medium-sized number is the number of communities discussing that specific set of topics, and the largest number is the corresponding number of individuals; e.g. the gray circle shows that 19.9M individuals (73 communities) discuss all 5 topics. A number is red if a majority of those communities are antivaccination, and green if the majority is neutral on vaccines. Only regions with >3% of total communities are labeled. Antivaccination dominates. Overall, this figure shows how bad-actor-AI could quickly achieve global reach and could also grow rapidly by drawing in communities with existing distrust.
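The spring-force intuition behind the ordered circles can be illustrated with a small, hypothetical sketch (not from the paper's pipeline): in a force-directed layout, a node with k links to a hub feels a net attraction roughly k times stronger and therefore settles closer to that hub. The example below uses networkx's spring_layout (Fruchterman-Reingold) as a stand-in for ForceAtlas2, and encodes link multiplicity as an edge weight; all node names and counts are illustrative assumptions.

```python
# Minimal sketch, assuming link multiplicity can be encoded as edge weight
# (heavier weight = stronger spring pull toward the hub).
import networkx as nx
import numpy as np

G = nx.Graph()
hub = "4chan_core"  # hypothetical hub standing in for the bad-actor core

# Hypothetical mainstream communities with 1, 2, or 3 links to the hub.
for i, k in enumerate([1, 1, 2, 2, 3, 3]):
    G.add_edge(hub, f"mainstream_{i}_links{k}", weight=k)

# Force-directed layout: attraction scales with edge weight.
pos = nx.spring_layout(G, weight="weight", seed=42)

# Communities with more links (higher weight) should, on average,
# end up closer to the hub, mirroring the ordered circles in Fig. 2.
for node in G.nodes:
    if node != hub:
        dist = np.linalg.norm(pos[node] - pos[hub])
        print(f"{node}: distance to hub = {dist:.2f}")
```

Running this prints smaller hub distances for the nodes with weight 3 than for those with weight 1, which is the qualitative behavior the caption attributes to the successive rings of white nodes.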