2020 Oct 7;5:47. doi: 10.1186/s41235-020-00252-3

Table 3.

Description of participants, methods, and measures for each experiment

Experiment 1
  Participants: 472 from Amazon Mechanical Turk (mean age = 35.12, 243 female)
  Conditions: Emotion induction, reason induction
  News headlines: 6 fake headlines (half Democrat-consistent, half Republican-consistent)
  Scale questions on use of reason/emotion (Likert: 1–5): Not included
  Participant inclusion criteria: Restricted to United States; 90% HIT approval rate

Experiment 2
  Participants: 1108 from Amazon Mechanical Turk (mean age = 35.19, 618 female)
  Conditions: Emotion induction, reason induction, control
  News headlines: 6 fake, 6 real headlines (half Democrat-consistent, half Republican-consistent)
  Scale questions on use of reason/emotion (Likert: 1–5): Included
  Participant inclusion criteria: Restricted to United States; 90% HIT approval rate

Experiment 3
  Participants: 1129 from Amazon Mechanical Turk (mean age = 34.40, 645 female)
  Conditions: Emotion induction, reason induction, control
  News headlines: 5 fake, 5 real headlines (all politically concordant, based on a forced-choice Trump versus Clinton question)
  Scale questions on use of reason/emotion (Likert: 1–5): Included
  Participant inclusion criteria: Restricted to United States; 90% HIT approval rate

Experiment 4
  Participants: 1175 from Lucid (mean age = 45.46, 606 female)
  Conditions: Emotion induction, reason induction, control
  News headlines: 6 fake, 6 real headlines (half Democrat-consistent, half Republican-consistent)
  Scale questions on use of reason/emotion (Likert: 1–5): Included
  Participant inclusion criteria: Typical Lucid representative sample

Lucid, an online convenience sampling platform comparable to Mechanical Turk, is purported to have a larger pool of subjects than MTurk, less professionalized subjects, and subjects more similar to US benchmarks in their demographic, political, and psychological profiles (see Coppock and McClellan 2019).