Behavior Research Methods. 2022 Jun 16;55(4):1641–1652. doi: 10.3758/s13428-022-01885-6

Can we measure individual differences in cognitive measures reliably via smartphones? A comparison of the flanker effect across device types and samples

Thomas Pronk 1,2,3, Rebecca J Hirst 3, Reinout W Wiers 1, Jaap M J Murre 1
PMCID: PMC10250264  PMID: 35710865

Abstract

Research deployed via the internet and administered via smartphones could have access to more diverse samples than lab-based research. Diverse samples could have relatively high variation in their traits and so yield relatively reliable measurements of individual differences in these traits. Several cognitive tasks that originated from the experimental research tradition have been reported to yield relatively low reliabilities (Hedge et al., 2018) in samples with restricted variance (students). This issue could potentially be addressed by smartphone-mediated administration in diverse samples. We formulate several criteria to determine whether a cognitive task is suitable for individual differences research on commodity smartphones: no very brief or precise stimulus timing, relative response times (RTs), a maximum of two response options, and a small number of graphical stimuli. The flanker task meets these criteria. We compared the reliability of individual differences in the flanker effect across samples and devices in a preregistered study. We found no evidence that a more diverse sample yields higher reliabilities. We also found no evidence that commodity smartphones yield lower reliabilities than commodity laptops. Hence, diverse samples might not improve reliability above student samples, but smartphones may well measure individual differences with cognitive tasks reliably. Exploratively, we examined different reliability coefficients, split-half reliabilities, and the development of reliability estimates as a function of task length.

Keywords: Flanker effect, Experimental effects, Individual differences, Reliability, Internet, Smartphones, Web applications

Introduction

Psychology has been called a science of two research traditions: the experimental and correlational traditions (Cronbach, 1957). The experimental tradition focuses on identifying cognitive processes that are universal across individuals (Cronbach, 1957), while the correlational tradition focuses on the assessment of individual differences (e.g., IQ). The presence of a cognitive process has traditionally been demonstrated by detecting the presence of an experimental effect. This effect is produced by manipulating a cognitive process, which in turn affects performance on an outcome measure such as the correctness or speed of a response (Goodhew & Edwards, 2019). Variation between individuals in the magnitude of an experimental effect decreases the power with which it can be detected. Hence, a relatively homogeneous sample might reveal an experimental effect more easily than a diverse sample. In contrast, the correlational tradition focuses on identifying the relationship between cognitive processes across individuals. Identifying such relationships requires variation between participants, i.e., individual differences in the magnitude of experimental effects. Hence, a relatively diverse sample might reveal individual differences more easily than a homogeneous sample.

Cognitive tasks originating from the correlational tradition, such as simple reaction time, have been demonstrated to measure individual differences reliably (Baker et al., 1986; Hamsher & Benton, 1977). However, it has been disputed whether cognitive tasks originating from the experimental tradition can measure individual differences reliably. A landmark paper in this controversy presented a replication of seven classical cognitive tasks across three studies (Hedge et al., 2018). Estimated via test-retest and split-half methods, all replications but one found reliability coefficients below 0.8. The finding that many cognitive tasks are perfectly suitable for detecting experimental effects, but apparently unsuitable for measuring individual differences, was coined by Hedge et al. (2018) as a “reliability paradox”. These replications were conducted in a lab setting in a relatively homogeneous sample, namely psychology undergraduates. Hence, one possible explanation for the low reliabilities could be a lack of diversity in the sample, and consequently, a lack of sufficient variation in the magnitude of experimental effects between participants.

Diverse samples can be relatively difficult to recruit for research in lab settings, but perhaps easier via the internet (Birnbaum, 2004; Reips, 2000; Woods et al., 2015). Web applications are a suitable technology for administering cognitive tasks deployed via the internet because web applications are based on open standards (such as HTML, CSS, and JavaScript) and are supported by web browsers across a wide range of devices. This allows web applications to be administered not only on keyboard devices (such as desktops and laptops) but also on touchscreen devices (such as tablets and smartphones). Ownership of keyboard devices can be limited to a relatively affluent stratum of the population, but smartphone ownership is increasingly ubiquitous (O’Dea, 2022; Pew Research Center, 2016). Hence, research deployed via the internet and administered via smartphones may have access to diverse samples (Dufau et al., 2011), and so provide more reliable measurements of individual differences.

Establishing whether this potential can be fulfilled requires systematic empirical verification, for which the current study aims to be the first step. Below, we first review extant studies that replicated effects with cognitive tasks implemented as web applications. Secondly, we review the timing accuracy of web applications, followed by a review of the unique features of smartphone user interfaces. We distill some design guidelines for cognitive tasks administered as web applications on smartphones. Finally, we justify our selection of the flanker effect as the focus of this study, arguing why data collected via the Eriksen flanker task may show higher reliabilities in more diverse samples and why this task may be suitable for smartphone administration. Based on these reviews, we designed a study that systematically compares samples and device types.

Replicating experimental effects with cognitive tasks as web applications

Studies that have administered cognitive tasks via web applications inside and outside of the lab have generally replicated experimental effects found with these tasks (Barnhoorn et al., 2015; Bazilinskyy & de Winter, 2018; Crump et al., 2013; de Leeuw & Motz, 2016; Frank et al., 2016; Germine et al., 2012; Hilbig, 2016; Semmelmann et al., 2016; Semmelmann & Weigelt, 2017). Whereas most tasks were administered on keyboard devices, some of these replications were administered on tablets (Frank et al., 2016; Semmelmann et al., 2016) and one on smartphones (Bazilinskyy & de Winter, 2018). Most studies were conducted on student samples, with some studies including non-student samples, such as convenience samples or samples recruited via Amazon Mechanical Turk (Bazilinskyy & de Winter, 2018; Crump et al., 2013; Germine et al., 2012; Semmelmann, 2017). One study examined the reliability of measuring individual differences with internal consistency methods such as Cronbach’s alpha and first-second split-half reliability, stratified by task conditions (Germine et al., 2012). Across four non-student samples and five cognitive tasks, Germine et al. (2012) found reliability estimates that were comparable to lab studies.

The studies above showed promising results in replicating experimental effects on keyboard devices. However, only a small number of studies administered cognitive tasks via smartphones or examined the reliability with which individual differences could be measured. None have performed a systematic comparison of samples (student versus non-student) and devices (laptops versus smartphones). We conducted such a systematic comparison, with a focus on individual differences instead of experimental effects.

Timing accuracy of commodity devices

When studies are deployed via the internet, they are administered on commodity devices (i.e., devices owned by participants). Doubts have been voiced about whether these devices are sufficiently accurate at timing stimuli and measuring response times (RTs) (Plant & Quinlan, 2013; van Steenbergen & Bocanegra, 2016). Studies that have measured stimulus duration via photometry on desktops and laptops, found that accuracy varied per combination of physical device, operating system, and browser, henceforth jointly denoted as device (Anwyl-Irvine et al., 2021; Barnhoorn et al., 2015; Bridges et al., 2020; Garaizar et al., 2014; Garaizar & Reips, 2019; Reimers & Stewart, 2015). In a recent study that included smartphones, duration accuracy ranged from near-perfect in best-case scenarios, to worst-case scenarios where timing could be off up to 66.6 ms (Pronk et al., 2020). This renders cognitive tasks that require particularly short or precise stimulus durations less suitable for administration on commodity devices.

RT measurements tend to be noisy overestimations of the true RT, with the mean and variance of this noise varying across devices (Anwyl-Irvine et al., 2021; Bridges et al., 2020; Neath et al., 2011; Pronk et al., 2020; Reimers & Stewart, 2015). Noisy RT measurements may have only a modest effect on the reliability with which experimental effects can be measured (Reimers & Stewart, 2015), as confirmed by online studies that consistently replicated experimental effects found with cognitive tasks (Bazilinskyy & de Winter, 2018; de Leeuw & Motz, 2016; Germine et al., 2012; Hilbig, 2016; Semmelmann et al., 2016; Semmelmann & Weigelt, 2017). However, noise introduced by devices into RT measurements can be an issue when scoring a task based on absolute RT (i.e., a score based on an aggregation of RTs across a single condition). For instance, in repeated-measures designs, participants using different devices between time points can introduce systematic errors (Reimers & Stewart, 2015). Additionally, device noise may have a relatively strong impact on the measurement of individual differences based on absolute RT (Pronk et al., 2020). In contrast, relative RT (i.e., a score based on the difference between aggregated RTs across two or more conditions) is less affected by the aforementioned issues (Bridges et al., 2020; Pronk et al., 2020; Reimers & Stewart, 2015). Hence, for repeated-measures designs or individual differences research, we recommend using relative RTs when possible, even though, all things considered, relative RTs may be less reliable than absolute RTs, especially when the RTs of the conditions that are subtracted from each other are correlated (for example, Peter et al., 1993).

Flanker task design for smartphones

Cognitive tasks requiring speeded responses commonly present participants with a two-alternative forced-choice task, in which a participant provides one of two responses in each trial. However, when a keyboard response device is used, it is easy to increase the number of response options available—for instance, the left and right index fingers for two response options, adding the left and right middle fingers if four response options are required. Smartphone operation is different from keyboard device operation in several ways. Smartphones are generally handheld, have comparatively small screens in a larger variety of aspect ratios, and are operated by pressing touch-sensitive areas on the screen. This limits cognitive tasks in terms of the amount of information that can be presented simultaneously and the number of response options that can be provided conveniently (Passell et al., 2021).

Based on the considerations above, we deem the flanker effect (also known as the incongruency cost effect; Ridderinkhof et al., 2021), as measured via the flanker task, or Eriksen flanker task, suitable for administration on commodity devices: generally speaking, the flanker task does not require particularly brief or precise stimulus timing, and the flanker effect is scored via relative RT by subtracting mean RT in congruent conditions from mean RT in incongruent conditions. Indeed, two studies have replicated the flanker effect with flanker tasks implemented as web applications on desktops and laptops (Crump et al., 2013; Semmelmann & Weigelt, 2017). Extending to smartphones, we expect the flanker task to be suitable as well, as it presents only a limited amount of information on the screen simultaneously (in our paradigm, five stimuli next to each other) and offers two response options. Finally, as it is a relatively popular cognitive task, findings with the flanker task can be informative for the field in general.

Individual differences in the flanker effect

Concerning individual differences, Hedge et al. (2018) found test-retest intra-class correlation (ICC) coefficients of .40 and .57 in their student sample. In contrast, as reviewed by Ridderinkhof et al. (2021), several studies found test-retest Spearman-Brown adjusted Pearson correlations and ICCs of .77 and higher (Fan et al., 2002; MacLeod et al., 2010; Wöstmann et al., 2013; Zelazo et al., 2014). This difference in reliabilities becomes more striking when taking into account that the number of trials used to score the flanker effect was 240 per condition in Hedge et al. (2018), but up to 96 in the studies reviewed by Ridderinkhof et al. (2021). The samples used in the latter studies were more diverse with regard to age, gender, and education level, lending credence to our hypothesis that a more diverse sample can yield more reliable individual differences in cognitive tasks such as the flanker task.

To examine whether a more diverse sample indeed yields more reliable individual differences in the flanker effect, we compared a relatively diverse sample—recruited from the general population of the United Kingdom—with a student sample, both taking part via their keyboard devices. We hypothesized (H1) that, when the flanker task was administered via keyboard devices, the diverse sample would yield more reliable individual differences than the student sample. To examine whether smartphones measured individual differences in the flanker effect with a reliability that was as high as keyboard devices, we also administered the flanker task to a relatively diverse sample via their smartphones. We hypothesized (H2) that smartphone administration would not yield less reliable measurements of individual differences than keyboard device administration. These hypotheses were preregistered (https://osf.io/zvr6c), with open access data and materials available at https://osf.io/fwx2n.

Reliability as a function of internal consistency and task length

Our main hypotheses were tested with a design that was similar to Hedge et al. (2018). Specifically, we calculated test-retest ICCs for absolute agreement (Koo & Li, 2016; Mcgraw & Wong, 1996; Shrout & Fleiss, 1979). This reliability estimate takes into account both the strength of the linear relation of the flanker effect between task administrations and the consistency with which the flanker effect ranks participants from low to high. Test and retest were about one week apart; we assume a flanker effect that is temporally stable within participants (Kopp et al., 2021). Exploratively, we also examined a reliability coefficient that does not require absolute agreement of scores: the Pearson correlation (Parsons et al., 2019).

Exploratively, we also examined a method for estimating reliability that makes less strong assumptions about temporal stability, namely split-half reliabilities (Parsons et al., 2019). As splitting methods, we included both Monte Carlo splitting (Williams & Kaufmann, 2012) and permutated splitting (Kopp et al., 2021; Parsons et al., 2019; Williams & Kaufmann, 2012). Note that the explorative analyses in our pre-registration mention a comparison of various other splitting methods. We have restricted ourselves to Monte Carlo and permutated splitting because both have been considered relatively robust (Pronk et al., 2022), so any differences found between them in this study could be informative in recommending splitting methods for future research. Secondly, we examined how split-half and test-retest reliability estimates develop as a function of the number of trials in the flanker task by repeating the analyses with shortened flanker tasks constructed by subsampling trials. This approach is similar to the supplementary analyses of Hedge et al. (2018) for examining whether reliability stabilizes at a certain number of trials. Similar to Williams and Kaufmann (2012), we examined whether reliability estimates of the flanker effect follow the Spearman-Brown prophecy formula for increasing test length.
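
To illustrate the logic of these splitting methods, below is a minimal base-R sketch of a permutated split-half estimate for the flanker effect. It is conceptual only, not the splithalfr implementation used in our analyses (see Data analysis), and the data layout and helper names are assumptions; Monte Carlo splitting differs in how trials are drawn into the two parts (Pronk et al., 2022).

```r
# Conceptual sketch of a permutated split-half estimate of the flanker effect.
# `session` is assumed to be a list with one data frame per participant, holding
# that participant's correct-trial RTs (columns: rt, congruence).
flanker_effect <- function(d) {
  mean(d$rt[d$congruence == "incongruent"]) - mean(d$rt[d$congruence == "congruent"])
}

permutated_split_half <- function(session, n_splits = 1000) {
  one_split <- function() {
    halves <- sapply(session, function(d) {
      # Randomly assign half of the trials of each condition to part 1 ...
      idx <- unlist(lapply(split(seq_len(nrow(d)), d$congruence),
                           function(i) sample(i, floor(length(i) / 2))))
      # ... and score the flanker effect in both parts
      c(flanker_effect(d[idx, ]), flanker_effect(d[-idx, ]))
    })
    r <- cor(halves[1, ], halves[2, ])
    2 * r / (1 + r)                          # Spearman-Brown correction to full length
  }
  mean(replicate(n_splits, one_split()))     # average over random splits
}
```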

To the best of our knowledge, this is the first study that systematically compares the reliability of individual differences in a cognitive task effect across samples and device types. The results of this study may yield insight into whether the “reliability paradox” (Hedge et al., 2018) can in part be addressed by employing more diverse samples, accessible through commodity devices (smartphones). Additionally, the results may inform whether tasks meeting the design guidelines we laid out for cognitive tasks on smartphones offer similarly reliable measurements of individual differences as keyboard devices do. Finally, our explorative analyses may give some insight into the temporal stability of the flanker effect and how reliability develops with increasing trial counts.

Methods

Participants

For each condition, we aimed to recruit 152 British participants via Prolific (www.prolific.co). For the condition with students, we filtered on participants who were psychology students. For the conditions with diverse samples, we stratified sampling into eight strata of 19 participants. These strata were formed by each combination of three demographic variables: sex (male versus female), age (younger than 45 versus 45 years to 70), and education level (non-academic versus academic). Non-academic was defined as not being a student as well as having either no formal qualifications, or having completed secondary education, high school, or technical/community college as the highest educational level. “Academic” was defined as having completed an undergraduate, graduate, or doctorate degree. For the keyboard conditions, we allowed participants to only take part with a desktop or laptop device, while for the smartphone condition we allowed participants to only take part with a smartphone device.

Design

The study consisted of three between-subjects conditions: (1) a student sample taking part via a keyboard device (a direct replication of the setup of Hedge et al., 2018, for a single task); (2) a diverse sample taking part via a keyboard device (an extension of sample diversity); and (3) a diverse sample taking part via a smartphone (an extension of both sample and device). Each condition consisted of two sessions, in which an identical flanker task was administered. The sessions were separated in time by one to two weeks.

Measures

Each trial of the flanker task consisted of one target, which was an arrow pointing left or right; see the task materials repository. The target was flanked by two distractors on the left and two on the right. In the congruent condition, the distractors were arrows pointing in the same direction as the target; in the incongruent condition, they pointed in the opposite direction. All arrows were scaled to 30% of the height of the screen and were black on a 50% grey background (i.e., of 50% luminance). Each trial started with a fixation cross at the location of the target arrow. The fixation cross was presented for one of ten durations, randomly selected from the whole multiples of 50 ms in the range of 500 to 950 ms. Next, the target and distractors were presented and remained onscreen until a response was given. On keyboard devices, participants were instructed to press the S key with their left index finger for left-pointing targets and the L key with their right index finger for right-pointing targets. On smartphone devices, participants were instructed to hold their devices with both hands in landscape orientation, pressing a touch-sensitive area in the bottom-left with their left thumb and a touch-sensitive area in the bottom-right with their right thumb, for left- and right-pointing target arrows, respectively.

At the start of the task, the application checked that the device screen had a sufficiently high aspect ratio (≥ 1.6), barring further participation if this was not the case, and that the screen was in landscape mode, instructing participants to turn their device to landscape if it was not. Next followed one practice block and two main blocks, with a break after each block. The practice block presented eight trials, balancing target arrow direction and condition. Each of the two main blocks presented each of the 40 combinations of (a) left- or right-pointing target arrow, (b) congruent or incongruent condition, and (c) 10 fixation durations, four times. Hence, a total of 160 congruent and 160 incongruent trials were presented. This number of trials per condition was based on the supplementary analyses of Hedge et al. (2018), which showed reliability stabilizing at about 160 trials per condition. Trials were presented in pseudorandom order.
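
The task itself was implemented in PsychoJS (as noted below), but the block structure just described can be illustrated with a short sketch in R; the column names are illustrative assumptions.

```r
# Sketch of the trial list for one main block: 2 target directions x 2 congruence
# conditions x 10 fixation durations, each combination repeated four times.
block <- expand.grid(
  direction   = c("left", "right"),
  congruence  = c("congruent", "incongruent"),
  fixation_ms = seq(500, 950, by = 50)
)
block <- block[rep(seq_len(nrow(block)), times = 4), ]  # 40 combinations x 4 = 160 trials
block <- block[sample(nrow(block)), ]                   # pseudorandom trial order
nrow(block)  # 160 per main block; two main blocks yield 320 trials in total
```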

During each block, feedback was presented when a response was too fast (i.e., a response during the fixation cross) or when a response was incorrect (e.g., the target pointed right but the response was left). Feedback was presented for at least one second and required a response to continue to the next trial. In the practice block, feedback was also presented following correct responses; this was intended to aid participants' comprehension of the task instructions.

Also following Hedge et al. (2018), participants were excluded if their accuracy was below 60% in either session (see Results – Participants). RTs below 100 ms and RTs greater than three times each individual’s median absolute deviation (3MAD) were excluded from the analysis. The flanker effect was calculated as the difference in mean RTs for correct responses between the incongruent and congruent task conditions.
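
For concreteness, a sketch of this scoring in R follows; the column names are assumptions, and the 3MAD rule is interpreted here as a deviation of more than three MADs from the individual's median RT (the exact implementation is in the data processing scripts at https://osf.io/fwx2n).

```r
# Sketch of the flanker scoring described above, for one participant and session.
# Assumed columns: rt (ms), correct (logical), congruence ("congruent"/"incongruent").
score_flanker <- function(trials) {
  trials <- subset(trials, rt >= 100)                               # drop anticipatory responses
  keep <- abs(trials$rt - median(trials$rt)) <= 3 * mad(trials$rt)  # assumed 3MAD rule
  trials <- trials[keep & trials$correct, ]                         # drop RT outliers and errors
  mean(trials$rt[trials$congruence == "incongruent"]) -
    mean(trials$rt[trials$congruence == "congruent"])               # flanker effect in ms
}
```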

The flanker task was implemented in PsychoJS (Bridges et al., 2020), which is the online counterpart of PsychoPy (Peirce, 2007; Peirce et al., 2019; Peirce & MacAskill, 2018). The source code of the task is available at https://osf.io/mhg5e.

Procedure

The first session consisted of a study briefing and informed consent, after which participants completed the first administration of the flanker task. Participants were requested to complete the second session about a week later. The second session consisted of another administration of the flanker task, followed by a debriefing. At the end of the first session and the beginning of the second session, participants were requested to use the same device for both sessions.

Data analysis

All data analyses were performed using R version 4.1.1 (R Core Team, 2021). Both hypotheses were tested via one-sided z-tests for the difference between Fisher z-transformed correlations across independent samples. These correlations were between the magnitude of the flanker effect in the first and second sessions. For both hypotheses, we assumed a medium (d = 0.3) effect, but with different levels of type 1 and type 2 errors. For H1 we tested whether the diverse sample on keyboard devices had a higher test-retest reliability than the student sample on keyboard devices with α = 0.05 and β = 0.2, considering a p-value < α as evidence for H1. For H2 we tested whether the test-retest reliability of the diverse sample on smartphone devices was as high as the diverse sample on keyboard devices. To this end, we tested whether the diverse sample on keyboard devices had a higher test-retest reliability than the diverse sample on smartphones, with α = 0.2 and β = 0.05, considering a p-value ≥ α as evidence for H2. The latter is comparable to a non-inferiority test with the smallest effect size of interest being d = 0.3 (Lakens et al., 2018, 2021). For both hypotheses, sufficient power and sensitivity could be obtained with 141 participants, as determined with G*Power 3.1 (Erdfelder et al., 1996). In practice, we oversampled to account for any drop-out and exclusion (see Results for details).
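
As an illustration, this test can be computed as follows; the correlations compared are the test-retest ICCs, and the group sizes and rounded coefficients from the Results are used as placeholders, so the p-values differ slightly from those reported.

```r
# One-sided z-test for the difference between two independent, Fisher
# z-transformed correlations.
compare_correlations <- function(r1, n1, r2, n2) {
  z <- (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
  pnorm(z, lower.tail = FALSE)  # p-value for the one-sided test r1 > r2
}
# H1: Diverse Keyboard (r1) more reliable than Student Keyboard (r2), alpha = .05
compare_correlations(r1 = 0.61, n1 = 153, r2 = 0.55, n2 = 153)  # reported: p = 0.21
# H2: Diverse Keyboard (r1) not more reliable than Diverse Smartphone (r2), alpha = .20
compare_correlations(r1 = 0.61, n1 = 153, r2 = 0.63, n2 = 141)  # reported: p = 0.62
```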

As a primary reliability coefficient, we calculated an ICC for absolute agreement, using two-way mixed effect models (Koo & Li, 2016; Mcgraw & Wong, 1996; Shrout & Fleiss, 1979). As an alternative reliability coefficient that only assesses the linear relation between test and retest scores or parts yielded by a split-half procedure, we calculated Pearson correlation coefficients.
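
One way to compute these coefficients in R is sketched below; the use of the irr package and the data frame `scores` (one row per participant, with columns session1 and session2 holding flanker effects) are assumptions for illustration, not necessarily the implementation in our analysis scripts.

```r
# Sketch: test-retest ICC for absolute agreement and the corresponding Pearson correlation.
library(irr)
icc(scores[, c("session1", "session2")],
    model = "twoway", type = "agreement", unit = "single")  # ICC, absolute agreement
cor(scores$session1, scores$session2)                       # linear relation only
```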

To examine how reliability develops as a function of flanker task length, we constructed flanker datasets with 40, 80, …, 320 trials by subsampling trials (i.e., random sampling without replacement) stratified by arrow direction and congruence. Constructed in this fashion, a 40-trial flanker represents a flanker task of one-eighth of the original length, while a 320-trial flanker is the original dataset in a randomized trial order. Next, we calculated a reliability coefficient via one of three methods: test-retest correlation, permutated split-half, and Monte Carlo split-half. This procedure was replicated 10,000 times, averaging the estimates over replications via a simple mean. Upon suggestion by a reviewer, we also calculated the mean of Fisher z-transformed coefficients, followed by back-transforming the mean z-transformed value to a correlation coefficient. In line with the findings of Feldt and Charter (2006), the latter approach yielded coefficients that differed from the simple mean approach by 0.01 at most.
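
A sketch of this procedure follows, reusing the hypothetical score_flanker() helper from the Measures section; the data frame `trials` and its columns (subject, session, direction, congruence, rt, correct) are assumed for illustration.

```r
# Draw a stratified subsample of n_trials trials (n_trials / 4 per direction x
# congruence cell) from one participant's trials in one session.
subsample_trials <- function(d, n_trials) {
  cells <- split(d, interaction(d$direction, d$congruence, drop = TRUE))
  do.call(rbind, lapply(cells, function(cell) cell[sample(nrow(cell), n_trials / 4), ]))
}

# Test-retest correlation of flanker effects computed on the shortened task.
subsampled_retest_r <- function(trials, n_trials) {
  score_session <- function(s) sapply(split(s, s$subject),
                                      function(d) score_flanker(subsample_trials(d, n_trials)))
  s1 <- score_session(trials[trials$session == 1, ])
  s2 <- score_session(trials[trials$session == 2, ])
  cor(s1, s2[names(s1)])
}

# Repeated 10,000 times per length (40, 80, ..., 320 trials); estimates were then averaged,
# either as a simple mean or as tanh(mean(atanh(r))) after Fisher z-transformation.
```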

We compared the subsampled coefficients with predictions from the Spearman-Brown prophecy formula as follows. First, we assumed 101 full-length reliability coefficients of 0.00, 0.01, …, 1.00. For each of these 101 coefficients, we calculated the reliability coefficients predicted by the Spearman-Brown prophecy formula for a test of 1/8, 2/8, 3/8, …, and full length, yielding 101 reliability curves. Equation 1 shows the Spearman-Brown formula, where $\rho_{xx}$ is the reliability of the full-length test, $n$ is the length of the shortened test as a proportion of the full-length test, and $\rho_{xx}^{*}$ is the predicted reliability of the shortened test. Per group, reliability coefficient (test-retest correlation, permutated split-half, and Monte Carlo split-half), Spearman-Brown curve, and test length, we calculated the squared difference between the subsampled coefficient for that test length and the corresponding Spearman-Brown coefficient for that test length. Per group and reliability coefficient, we selected the best-fitting Spearman-Brown curve as the curve whose sum of squared differences over test lengths was the smallest. All split-half analyses were conducted with the splithalfr R package (Pronk, 2021).

$$\rho_{xx}^{*} = \frac{n\,\rho_{xx}}{1 + (n - 1)\,\rho_{xx}} \tag{1}$$
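
The curve-fitting step can be summarized in a few lines; `obs` is assumed to hold the mean subsampled coefficients of one group and reliability method at 40, 80, …, 320 trials.

```r
# Spearman-Brown prediction (Eq. 1), with n the length of the shortened test as a
# proportion of the full-length test.
spearman_brown <- function(rho_xx, n) n * rho_xx / (1 + (n - 1) * rho_xx)

# Select the full-length reliability whose predicted curve minimizes the sum of
# squared differences from the observed (subsampled) coefficients.
fit_sb_curve <- function(obs, lengths = seq(40, 320, by = 40), full_length = 320) {
  candidates <- seq(0, 1, by = 0.01)
  sse <- sapply(candidates, function(rho)
    sum((obs - spearman_brown(rho, lengths / full_length))^2))
  candidates[which.min(sse)]
}
```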

Results

Participants

The sampling specification in our first round of data collection did not prevent a subset of participants from taking part in both sample conditions (n = 2) or both device conditions (n = 34). These participants were excluded; therefore, we recruited new participants to compensate for this data loss. A total of 467 participants started the experiment in a single sample or device condition, of whom 466 completed the first flanker task and 449 also completed the second. For 75% of participants, the number of days between the first and second administrations was between 6.5 and 7.4.

Our pre-registration did not mention dropping any participants with flanker score outliers. However, our preliminary analyses revealed two participants with flanker scores with absolute z-values above 15, both of which were in the Diverse Keyboard group. To assess the impact of these two participants on reliability estimates, relative to the exclusion of any other participants, we calculated 10,000 test-retest ICCs of the Diverse Keyboard group, each time excluding two random participants with the restriction that neither was one of the above two outliers. The highest ICC thus obtained was 0.24. Excluding both outliers yielded an ICC of 0.61, giving a reasonable indication that the two identified participants were outliers with high leverage. Hence, we excluded both outliers from succeeding analyses.
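
A sketch of this leverage check follows; the object names are ours, with `dk` assumed to hold the Diverse Keyboard flanker scores (columns session1 and session2) and `outliers` the row indices of the two flagged participants.

```r
# Compare the ICC after excluding the two flagged participants with the ICCs
# obtained after excluding two random other participants, 10,000 times.
library(irr)
icc_of <- function(d) icc(d[, c("session1", "session2")],
                          model = "twoway", type = "agreement", unit = "single")$value
others <- setdiff(seq_len(nrow(dk)), outliers)
random_iccs <- replicate(10000, icc_of(dk[-sample(others, 2), ]))
max(random_iccs)         # highest ICC without excluding the flagged pair (0.24 reported)
icc_of(dk[-outliers, ])  # ICC when the flagged pair is excluded (0.61 reported)
```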

All participants met the inclusion criterion of at least 60% correct responses in both sessions; the lowest observed accuracy was 85%. The final sample had 153 participants in the Student Keyboard group, 153 in Diverse Keyboard, and 141 in Diverse Smartphone. Table 1 shows demographics per sample and device condition. As is common for student samples, the Student Keyboard group was younger and less varied in age than the other groups. As tends to be common for psychology students, the Student Keyboard sample had more female than male participants, while the other samples had a more balanced sex distribution.

Table 1.

Demographics per sample and device condition

Demographic             Student Keyboard  Diverse Keyboard  Diverse Smartphone
Mean age (years)        21.6              36.8              37.8
SD of age (years)       3.6               15.3              14.1
# Male                  27                78                69
# Female                126               75                72
# Low education level   –                 76                71
# High education level  –                 77                70

–: education level was not used to stratify the student sample (see Participants)

Additionally, we examined which devices were used to take part by parsing the UserAgent string (Mozilla, 2022). We detected 23 unique combinations of OS and browser, while one UserAgent could not be parsed. In the first session, the five most frequent combinations were: Windows Chrome (n = 150), MacOS X Chrome (n = 64), Android Chrome (n = 62), iOS Safari (n = 51), and Mac OS X Safari (n = 30). In 20 cases, we detected a different OS or browser being used in the first and second sessions.

Flanker descriptives

Table 2 shows descriptives of flanker measures. Below, we report statistically significant differences between congruent/incongruent trials, sessions, and groups that may aid interpretation of our main hypothesis tests. As dependent variables, we used the flanker effect, mean RTs on congruent and incongruent trials (i.e., flanker score composite measures), trials with incorrect responses, and RT outliers (i.e., excluded trials that might differ between conditions). Count data were tested via Mann-Whitney tests (unpaired) and Wilcoxon tests (paired), mean RTs via t-tests, and variance of RTs via Levene's tests. The standard flanker effect was found, with scores being greater than zero in all groups and sessions (ps < 0.001), indicating slower response times on incongruent as opposed to congruent trials. Additionally, a congruency effect was found on responses that were removed before calculating a flanker effect, namely the number of RTs > 3MAD and incorrect responses (ps < 0.001).

Table 2.

Descriptives of flanker measures per group and session. Con: congruent trials. Inc: incongruent trials. As the median number of RTs < 100 ms was zero across conditions, these are not reported in the table

Measure                         Session  Student keyboard  Diverse keyboard  Diverse smartphone
                                         Con     Inc       Con     Inc       Con     Inc
Median of % RT > 3MAD           1        3.75    5.00      3.12    5.00      3.12    5.31
                                2        3.44    5.00      4.06    5.31      3.44    5.31
Median of % incorrect           1        0.31    1.25      0.31    0.94      0.00    0.62
                                2        0.31    1.25      0.31    0.94      0.00    0.62
Mean of mean correct RTs (ms)   1        455     492       498     528       550     592
                                2        442     477       485     513       537     578
SD of mean correct RTs (ms)     1        26      24        34      35        26      27
                                2        21      20        26      25        31      29
Correlation of mean correct     1        0.98              0.99              0.97
RTs, Con vs. Inc                2        0.98              0.99              0.98
Mean of flanker score (ms)      1        37                30                41
                                2        35                28                42
SD of flanker score (ms)        1        13                18                21
                                2        13                16                19

Comparing sessions, the mean RTs of correct responses were lower in session 2 than in session 1 for both congruent and incongruent conditions (ps ≤ 0.04). None of the other measures showed significant differences between sessions, except for an increase in the number of RTs > 3MAD from session 1 to session 2 in the congruent trials of the Diverse Keyboard group (p = 0.005). In none of the groups did flanker scores significantly differ between sessions (ps ≥ 0.11). Hence, while participants got faster at the flanker task overall, the flanker effect was relatively constant over sessions.

Finally, we compared the means and variances of flanker scores between groups. For both sessions, mean flanker scores were lower in the Diverse Keyboard group than in the Student Keyboard and Diverse Smartphone groups (ps < 0.001). For both sessions, the variance in flanker scores was higher in the Diverse Keyboard group than in the Student Keyboard group (ps ≤ 0.006), but they were not significantly different between the Diverse Keyboard and Diverse Smartphone groups (ps ≥ 0.12).
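
The comparisons reported in this subsection map onto standard R tests; the sketch below uses placeholder data, not study values.

```r
# Placeholder data, purely to make the calls below runnable.
x <- rnorm(20, 5, 1); y <- rnorm(20, 5.5, 1)
d <- data.frame(score = rnorm(60, 35, 15),
                group = factor(rep(c("StudentKeyboard", "DiverseKeyboard",
                                     "DiverseSmartphone"), each = 20)))
wilcox.test(x, y)                         # unpaired counts between groups: Mann-Whitney
wilcox.test(x, y, paired = TRUE)          # paired counts between conditions or sessions: Wilcoxon
t.test(x, y)                              # mean RTs between groups
t.test(x, y, paired = TRUE)               # mean RTs between sessions
car::leveneTest(score ~ group, data = d)  # variance of flanker scores between groups
```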

Test-retest reliabilities

Our main hypotheses were tested via z-tests on Fisher z-transformed ICCs between the flanker scores of sessions 1 and 2. We found no evidence for H1, as the Diverse Keyboard group (r = 0.61) did not have a higher test-retest reliability than the Student Keyboard group (r = 0.55), d = 0.09, p = 0.21. We did find evidence for H2, as the Diverse Keyboard group (r = 0.61) also did not have a higher test-retest reliability than the Diverse Smartphone group (r = 0.63), d = −0.03, p = 0.62. Pearson correlations were close to ICCs, being at most 0.01 higher, so for the explorative analyses below, we only report ICCs.

Split-half reliabilities

Exploratively, we analyzed split-half reliability estimates. Overall, these were higher than test-retest reliabilities, with estimates from permutated splits being 0.63, 0.75, and 0.82 for the Student Keyboard, Diverse Keyboard, and Diverse Smartphone groups, respectively. Estimates from Monte Carlo splits were higher still, being 0.71, 0.79, and 0.84, respectively. Testing our main hypotheses on split-half reliability coefficients, we found a higher reliability in the Diverse Keyboard group than in the Student Keyboard group with permutated splits (d = 0.24, p = 0.02), but not with Monte Carlo splits (d = 0.18, p = 0.057). The Diverse Smartphone group did not have a lower reliability than the Diverse Keyboard group, neither with permutated splits (d = −0.17, p = 0.92) nor with Monte Carlo splits (d = −0.14, p = 0.88). Across groups, the distributions of permutated and Monte Carlo split-half estimates were disjoint by at least 27%, suggesting a relatively large effect of splitting method on split-half reliability estimates.

Figure 1 shows the reliability estimates obtained via subsampling and the best-fitting Spearman-Brown predictions. For permutated split-half and test-retest reliability, the Spearman-Brown-predicted curves match the subsampled curves well. Assuming this would also apply to flanker tasks of increased length, we could use the Spearman-Brown formula to predict how long the flanker task would need to be to achieve a test-retest reliability of 0.8. This would require flanker tasks of 1054, 819, and 747 trials for the Student Keyboard, Diverse Keyboard, and Diverse Smartphone group, respectively. For Monte Carlo split-half reliability estimates, the Spearman-Brown-predicted curves did not match the subsampled curves well. Note that, as flanker length approaches zero, one would expect its reliability estimate to approach zero as well. However, the Monte Carlo estimates stay relatively high. This might indicate that Monte Carlo splits overestimate reliability, especially for relatively short tasks.
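
These trial counts follow from inverting Eq. 1; the sketch below checks them with the rounded ICCs reported above (the reported counts are based on unrounded coefficients).

```r
# Task length needed to reach a target reliability, by inverting the
# Spearman-Brown prophecy formula (Eq. 1).
trials_needed <- function(r_observed, target = 0.8, current_trials = 320) {
  n <- (target / (1 - target)) / (r_observed / (1 - r_observed))
  ceiling(n * current_trials)
}
trials_needed(0.55)  # Student Keyboard: 1048 here, 1054 reported
trials_needed(0.61)  # Diverse Keyboard: 819
trials_needed(0.63)  # Diverse Smartphone: 752 here, 747 reported
```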

Fig. 1. Reliability coefficients as a function of flanker length

Discussion

Research deployed via the internet and administered via smartphones may have access to more diverse samples than the student samples commonly recruited for lab research (Dufau et al., 2011). Diverse samples could have more variation in their traits. Since reliable measurements of individual differences require variation in the trait measured, more diverse samples could yield more reliable measurements of individual differences. Hence, research deployed via the internet and administered via smartphones could potentially address the issue of cognitive tasks having relatively low reliabilities (coined the “reliability paradox” by Hedge et al., 2018). In the Introduction, we formulated four criteria for determining whether a cognitive task is, in principle, suitable for commodity laptops and smartphones: no very brief or precise stimulus timing, relative response times (RTs), a maximum of two response options, and a small number of graphical stimuli. We identified the flanker task and associated flanker effect as meeting these criteria. Hence, the flanker effect was deemed suitable for testing whether the reliability of individual differences measured with cognitive tasks can be improved by diverse samples and smartphones.

We operationalized reliability as test-retest ICCs for absolute agreement. We hypothesized (H1) that a more diverse sample would yield higher test-retest ICCs for the flanker effect than a student sample. While the diverse sample indeed showed more variation in the flanker effect, we did not confirm that a more diverse sample yields higher test-retest ICCs. Additionally, we hypothesized (H2) that smartphones would not yield lower test-retest ICCs than laptops. In line with H2, smartphones did not show a lower variation in the flanker score nor lower ICCs. Hence, we can confirm that smartphones appeared to be just as suitable for reliably measuring individual differences in the flanker effect as laptops were. Exploratively, we examined an index of reliability that does not assess absolute agreement, but only the linear relationship between test and retest scores (Pearson correlations). Within groups, flanker scores did not differ significantly between test and retest sessions, and Pearson correlations were close to ICCs. Hence, we conclude that differences in reliability between the groups included in this study are mostly reflected in the linear relation between test and retest scores, and not by the absolute agreement between test and retest scores.

At first sight, it might appear counterintuitive that the diverse sample did have a significantly higher variation in the flanker effect than the student sample, but not a significantly higher test-retest reliability. We offer two possible explanations for this result. Firstly, note that, descriptively, the test-retest reliabilities were higher in the diverse sample, but the difference was not large enough to be statistically significant. Hence, our first explanation is that between-subjects tests of differences between correlations are simply not very sensitive, as also reflected by the relatively large samples we required for detecting a medium effect. Secondly, it may be that the higher variance in the flanker effect as measured by the task did not so much reflect a higher variance in the trait as a higher variance in error.

Test-retest reliabilities require multiple flanker administrations and assume a trait that is relatively stable over time. In contrast, split-half reliabilities can be estimated from a single administration and may be more strongly affected by a participant’s state instead of trait. Exploratively, we analyzed split-half reliability estimates. Overall, split-half reliabilities were slightly higher than test-retest reliabilities. In line with test-retest reliability estimates, we found no differences in split-half reliabilities between devices. In contrast to test-retest reliabilities, we did find a higher split-half reliability in the diverse sample than in the student sample, but only when splitting task data via the permutated method. Taken together, these results could suggest that split-half reliabilities are more sensitive to participant state than test-retest reliabilities, as indicated by the overall higher values of split-half reliability estimates. Split-half methods could yield slightly more sensitive measures of the reliability of a trait than test-retest methods, as we did not confirm our hypothesis on sample differences in test-retest reliabilities, but we did in a split-half reliability.

Finally, we examined how reliability estimates developed as a function of flanker task length by constructing subsampled datasets. For both test-retest and permutated split-half reliabilities, the relation between flanker length and reliability coefficient could be well-modeled by the Spearman-Brown prophecy formula. This quality could be useful for estimating the number of flanker trials required to reach a given reliability level. For instance, in our results, a test-retest reliability of 0.8 would require roughly 750 to 1000 trials. In contrast to the findings of Williams and Kaufmann (2012) and our findings on test-retest and permutated split-half reliabilities, Monte Carlo split-half reliabilities were not well modeled by the Spearman-Brown prophecy formula. In particular, reliability estimates at low numbers of trials were relatively high, which might indicate that Monte Carlo splitting overestimates reliability in short tasks. Hence, for tasks with relatively low numbers of trials we recommend estimating split-half reliabilities using permutated splitting instead.

Based on modeling work in Pronk et al. (2020), we recommended relative RTs because they are more robust against the inaccuracies in RT measurements introduced by commodity laptops, desktops, and smartphones. However, scores based on relative RT may be inherently less reliable than those based on absolute RT (e.g., Lord et al., 1968). Hence, mental chronometry via web applications might face a challenging impasse: either use measures based on absolute RT, which may be more reliable but whose reliability can be attenuated by device inaccuracies, or use less reliable measures based on relative RT, which remain robust against those inaccuracies. This challenge might be addressed, in part, by applying more sophisticated psychometrics to cognitive tasks. For instance, we found congruence effects not only in the flanker effect, but also in various other measures that are traditionally disregarded when scoring a flanker effect, such as RT outliers. Models that take this information into account, such as the diffusion model (Ratcliff, 1978), could offer richer, and perhaps more reliable, measures of the mental processes underlying the flanker effect.

A more explicit measurement model might not only be more reliable overall, but also offer a more nuanced interpretation of reliability estimates. For instance, in the context of cognitive tasks, our applications of permutated split-halves and the Spearman-Brown prophecy formula imply a parallel measurement model (Warrens, 2016). Hence, while we interpreted test-retest and split-half reliabilities differently, psychometrically we equated them (Warrens, 2015). Rouder and Haaf (2019) formulated a measurement model for RT data that explicitly distinguishes features of a test and of a construct. This model not only can be useful for obtaining measures of a construct that are relatively independent of task properties, such as the number of trials, but can also offer a more principled interpretation of different approaches to reliability estimation, such as split-half versus test-retest reliability.

In addition to psychometrics, future research could conduct a more comprehensive assessment of the potential of diverse samples and smartphones by including a wider variety of cognitive task paradigms. As a first assessment of this kind, we chose a between-subjects study design since this was relatively robust against learning effects across successive administrations. However, between-subjects comparisons of correlation coefficients require relatively large samples to have sufficient power for detecting moderate effects. Hence, we only examined a single cognitive task, the flanker task. The flanker effect was relatively constant across sessions, suggesting an absence of learning effects. Hence, more comprehensive studies could perhaps improve on power by varying devices and tasks within-subjects. A second avenue could be an examination of procedural differences between the flanker designs of Hedge et al. (2018) and the current study on the one hand, and designs by studies that found higher reliabilities on the other (Fan et al., 2002; MacLeod et al., 2010; Wöstmann et al., 2013; Zelazo et al., 2014). For instance, the majority of these studies feature a cue of varying validity. Such procedural variation might keep the participant more attentive, and so yield higher quality data.

In summary, based on our preregistered hypotheses, we found no evidence that the reliability paradox may be resolved via diverse samples. Hence, students may be just as suitable for individual differences research as a more diverse sample. We found reliability estimates ranging from 0.55 to 0.63 with numbers of trials suitable for online administration (300), from which we carefully draw optimism that a sufficiently reliable flanker task could be feasible. Additionally, for researching individual differences in cognitive tasks online, commodity smartphones may be just as capable as laptops.

We recommend that researchers consider using smartphones for cognitive task research if a paradigm so allows. Given their versatility and ubiquity, smartphones are cost-effective and could be valuable in reaching more diverse or specific samples. Of particular interest could be the increase in scale allowed by online methods. Continuing our careful optimism, we proposed a number of avenues for assessing and improving the reliability of cognitive tasks, as well as increasing the power of designs that compare their reliabilities. A comprehensive examination would require numbers of participants that may be prohibitive for a lab study. However, it may well be feasible online.

Acknowledgments

Financing for participant recruitment was provided by Open Science Tools and the Behavioural Science Lab, Faculty of Social and Behavioural Sciences, University of Amsterdam. We would like to thank Richard Ridderinkhof for his advice on the flanker task design. Additionally, we thank BrowserStack for providing free cross-browser testing infrastructure to open-source software projects, thus enabling us to conduct automated end-to-end tests of the flanker web application on a large number of platforms before deploying it.

Funding

This study was funded by Open Science Tools and the Behavioural Science Lab, Faculty of Social and Behavioural Sciences, University of Amsterdam.

Declarations

Ethics approval

Approval was obtained from the ethics committee of the University of Amsterdam. The procedures used in this study adhere to the tenets of the Declaration of Helsinki.

Consent to participate

All participants were properly informed about the study procedure and provided active consent to participate.

Consent for publication

All participants consented to having anonymized data be made publicly accessible.

Conflicts of interest

Not applicable.

Footnotes

Open practices statement

The flanker artwork, flanker web application, anonymized data, and data processing scripts that were used for this study, are available via https://osf.io/fwx2n. The study was preregistered at https://osf.io/zvr6c.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Anwyl-Irvine AL, Dalmaijer ES, Hodges N, Evershed JK. Realistic precision and accuracy of online experiment platforms, web browsers, and devices. Behavior Research Methods. 2021;53:1407–1425. doi: 10.3758/s13428-020-01501-5.
  2. Baker SJ, Maurissen JPJ, Chrzan GJ. Simple reaction time and movement time in normal human volunteers: A long-term reliability study. Perceptual and Motor Skills. 1986;63(2):767–774. doi: 10.2466/pms.1986.63.2.767.
  3. Barnhoorn JS, Haasnoot E, Bocanegra BR, van Steenbergen H. QRTEngine: An easy solution for running online reaction time experiments using Qualtrics. Behavior Research Methods. 2015;47:918–929. doi: 10.3758/s13428-014-0530-7.
  4. Bazilinskyy P, de Winter JCF. Crowdsourced measurement of reaction times to audiovisual stimuli with various degrees of asynchrony. Human Factors. 2018;60(8):1192–1206. doi: 10.1177/0018720818787126.
  5. Birnbaum MH. Human research and data collection via the internet. Annual Review of Psychology. 2004;55:803–832. doi: 10.1146/annurev.psych.55.090902.141601.
  6. Bridges D, Pitiot A, MacAskill MR, Peirce JW. The timing mega-study: Comparing a range of experiment generators, both lab-based and online. PeerJ. 2020;8:Article e9414. doi: 10.7717/peerj.9414.
  7. Cronbach LJ. The two disciplines of scientific psychology. American Psychologist. 1957;12(11):671–684. doi: 10.1037/h0043943.
  8. Crump MJC, Mcdonnell JV, Gureckis TM. Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PLoS ONE. 2013;8(3):Article e57410. doi: 10.1371/journal.pone.0057410.
  9. de Leeuw JR, Motz BA. Psychophysics in a web browser? Comparing response times collected with JavaScript and Psychophysics Toolbox in a visual search task. Behavior Research Methods. 2016;48:1–12. doi: 10.3758/s13428-015-0567-2.
  10. Dufau S, Duñabeitia JA, Moret-Tatay C, McGonigal A, Peeters D, Alario F-X, Balota DA, Brysbaert M, Carreiras M, Ferrand L, Ktori M, Perea M, Rastle K, Sasburg O, Yap MJ, Ziegler JC, Grainger J. Smart phone, smart science: How the use of smartphones can revolutionize research in cognitive science. PLoS ONE. 2011;6(9):Article e24974. doi: 10.1371/journal.pone.0024974.
  11. Erdfelder E, Faul F, Buchner A. GPOWER: A general power analysis program. Behavior Research Methods, Instruments, & Computers. 1996;28:1–11. doi: 10.3758/BF03203630.
  12. Fan J, McCandliss BD, Sommer T, Raz A, Posner MI. Testing the efficiency and independence of attentional networks. Journal of Cognitive Neuroscience. 2002;14(3):340–347. doi: 10.1162/089892902317361886.
  13. Feldt LS, Charter RA. Averaging internal consistency reliability coefficients. Educational and Psychological Measurement. 2006;66(2):215–227. doi: 10.1177/0013164404273947.
  14. Frank MC, Sugarman E, Horowitz AC, Lewis ML, Yurovsky D. Using tablets to collect data from young children. Journal of Cognition and Development. 2016;17(1):1–17. doi: 10.1080/15248372.2015.1061528.
  15. Garaizar P, Reips U-D. Best practices: Two web-browser-based methods for stimulus presentation in behavioral experiments with high-resolution timing requirements. Behavior Research Methods. 2019;51:1441–1453. doi: 10.3758/s13428-018-1126-4.
  16. Garaizar P, Vadillo MA, López-de-Ipiña D. Presentation accuracy of the web revisited: Animation methods in the HTML5 era. PLoS ONE. 2014;9(10):Article e109812. doi: 10.1371/journal.pone.0109812.
  17. Germine L, Nakayama K, Duchaine BC, Chabris CF, Chatterjee G, Wilmer JB. Is the web as good as the lab? Comparable performance from web and lab in cognitive/perceptual experiments. Psychonomic Bulletin & Review. 2012;19(5):847–857. doi: 10.3758/s13423-012-0296-9.
  18. Goodhew SC, Edwards M. Translating experimental paradigms into individual-differences research: Contributions, challenges, and practical recommendations. Consciousness and Cognition. 2019;69:14–25. doi: 10.1016/j.concog.2019.01.008.
  19. Hamsher KDS, Benton AL. The reliability of reaction time determinations. Cortex. 1977;13(3):306–310. doi: 10.1016/S0010-9452(77)80040-3.
  20. Hedge C, Powell G, Sumner P. The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods. 2018;50:1166–1186. doi: 10.3758/s13428-017-0935-1.
  21. Hilbig BE. Reaction time effects in lab- versus web-based research: Experimental evidence. Behavior Research Methods. 2016;48:1718–1724. doi: 10.3758/s13428-015-0678-9.
  22. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine. 2016;15(2):155–163. doi: 10.1016/j.jcm.2016.02.012.
  23. Kopp B, Lange F, Steinke A. The reliability of the Wisconsin Card Sorting Test in clinical practice. Assessment. 2021;28(1):248–263. doi: 10.1177/1073191119866257.
  24. Lakens D, Adolfi FG, Albers CJ, Anvari F, Apps MAJ, Argamon SE, Baguley T, Becker RB, Benning SD, Bradford DE, Buchanan EM, Caldwell AR, Van Calster B, Carlsson R, Chen S-C, Chung B, Colling LJ, Collins GS, Crook Z, … Zwaan RA. Justify your alpha. Nature Human Behaviour. 2018;2:168–171. doi: 10.1038/s41562-018-0311-x.
  25. Lakens D, Pahlke F, Wassmer G. Group sequential designs: A tutorial. PsyArXiv; 2021.
  26. Lord FM, Novick MR, Birnbaum A. Statistical theories of mental test scores. Addison-Wesley; 1968.
  27. MacLeod JW, Lawrence MA, McConnell MM, Eskes GA, Klein RM, Shore DI. Appraising the ANT: Psychometric and theoretical considerations of the attention network test. Neuropsychology. 2010;24(5):637–651. doi: 10.1037/a0019803.
  28. Mcgraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychological Methods. 1996;1(1):30–46. doi: 10.1037//1082-989X.1.1.30.
  29. Mozilla. (2022, February 18). User-Agent. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent
  30. Neath I, Earle A, Hallett D, Surprenant AM. Response time accuracy in Apple Macintosh computers. Behavior Research Methods. 2011;43:Article 353. doi: 10.3758/s13428-011-0069-9.
  31. O’Dea, S. (2022, February 23). Number of smartphone users worldwide from 2016 to 2021. https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/
  32. Parsons S, Kruijt A-W, Fox E. Psychological science needs a standard practice of reporting the reliability of cognitive-behavioral measurements. Advances in Methods and Practices in Psychological Science. 2019;2(4):378–395. doi: 10.1177/2515245919879695.
  33. Passell E, Strong RW, Rutter LA, Kim H, Scheuer L, Martini P, Grinspoon L, Germine L. Cognitive test scores vary with choice of personal digital device. Behavior Research Methods. 2021;53:2544–2557. doi: 10.3758/s13428-021-01597-3.
  34. Peirce JW. PsychoPy-Psychophysics software in Python. Journal of Neuroscience Methods. 2007;162(1–2):8–13. doi: 10.1016/j.jneumeth.2006.11.017.
  35. Peirce JW, MacAskill MR. Building Experiments in PsychoPy. Sage; 2018.
  36. Peirce JW, Gray JR, Simpson S, MacAskill M, Höchenberger R, Sogo H, Kastman E, Lindeløv JK. PsychoPy2: Experiments in behavior made easy. Behavior Research Methods. 2019;51:195–203. doi: 10.3758/s13428-018-01193-y.
  37. Peter JP, Churchill GA Jr, Brown TJ. Caution in the use of difference scores in consumer research. Journal of Consumer Research. 1993;19(4):655–662. doi: 10.1086/209329.
  38. Pew Research Center. (2016, February 22). Smartphone ownership and internet usage continues to climb in emerging economies. https://www.pewresearch.org/wp-content/uploads/sites/2/2016/02/pew_research_center_global_technology_report_final_february_22__2016.pdf
  39. Plant RR, Quinlan PT. Could millisecond timing errors in commonly used equipment be a cause of replication failure in some neuroscience studies? Cognitive, Affective, & Behavioral Neuroscience. 2013;13:598–614. doi: 10.3758/s13415-013-0166-6.
  40. Pronk, T. (2021, September 29). splithalfr: Estimates split-half reliabilities for scoring algorithms of cognitive tasks and questionnaires. https://github.com/tpronk/splithalfr
  41. Pronk T, Wiers RW, Molenkamp B, Murre JMJ. Mental chronometry in the pocket? Timing accuracy of web applications on touchscreen and keyboard devices. Behavior Research Methods. 2020;52:1371–1382. doi: 10.3758/s13428-019-01321-2.
  42. Pronk T, Molenaar D, Wiers RW, Murre JMJ. Methods to split cognitive task data for estimating split-half reliability: A comprehensive review and systematic assessment. Psychonomic Bulletin & Review. 2022;29:44–54. doi: 10.3758/s13423-021-01948-3.
  43. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing; 2021.
  44. Ratcliff R. A theory of memory retrieval. Psychological Review. 1978;85(2):59–108. doi: 10.1037/0033-295X.85.2.59.
  45. Reimers S, Stewart N. Presentation and response timing accuracy in Adobe Flash and HTML5/JavaScript Web experiments. Behavior Research Methods. 2015;47:309–327. doi: 10.3758/s13428-014-0471-1.
  46. Reips U-D. The web experiment: Advantages, disadvantages, and solutions. In: Birnbaum MH, editor. Psychology experiments on the Internet. Academic Press; 2000. pp. 89–117.
  47. Ridderinkhof KR, Wylie SA, van den Wildenberg WPM, Bashore TR, van der Molen MW. The arrow of time: Advancing insights into action control from the arrow version of the Eriksen flanker task. Attention, Perception, & Psychophysics. 2021;83:700–721. doi: 10.3758/s13414-020-02167-z.
  48. Rouder JN, Haaf JM. A psychometrics of individual differences in experimental tasks. Psychonomic Bulletin & Review. 2019;26:452–467. doi: 10.3758/s13423-018-1558-y.
  49. Semmelmann, K. (2017). Web technology and the Internet: The future of data acquisition in psychology? Doctoral dissertation, Ruhr-Universität Bochum.
  50. Semmelmann K, Weigelt S. Online psychophysics: Reaction time effects in cognitive experiments. Behavior Research Methods. 2017;49:1241–1260. doi: 10.3758/s13428-016-0783-4.
  51. Semmelmann K, Nordt M, Sommer K, Röhnke R, Mount L, Prüfer H, Terwiel S, Meissner TW, Koldewyn K, Weigelt S. U can touch this: How tablets can be used to study cognitive development. Frontiers in Psychology. 2016;7:Article 1021. doi: 10.3389/fpsyg.2016.01021.
  52. Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin. 1979;86(2):420–428. doi: 10.1037/0033-2909.86.2.420.
  53. van Steenbergen H, Bocanegra BR. Promises and pitfalls of web-based experimentation in the advance of replicable psychological science: A reply to Plant (2015). Behavior Research Methods. 2016;48:1713–1717. doi: 10.3758/s13428-015-0677-x.
  54. Warrens MJ. On Cronbach’s alpha as the mean of all split-half reliabilities. In: Millsap R, Bolt D, van der Ark L, Wang W-C, editors. Quantitative psychology research. Springer proceedings in mathematics & statistics. Springer International Publishing; 2015. pp. 293–300.
  55. Warrens MJ. A comparison of reliability coefficients for psychometric tests that consist of two parts. Advances in Data Analysis and Classification. 2016;10:71–84. doi: 10.1007/s11634-015-0198-6.
  56. Williams BJ, Kaufmann LM. Reliability of the Go/No Go Association Task. Journal of Experimental Social Psychology. 2012;48(4):879–891. doi: 10.1016/j.jesp.2012.03.001.
  57. Woods AT, Velasco C, Levitan CA, Wan X, Spence C. Conducting perception research over the Internet: A tutorial review. PeerJ. 2015;3:Article e1058. doi: 10.7717/peerj.1058.
  58. Wöstmann NM, Aichert DS, Costa A, Rubia K, Möller HJ, Ettinger U. Reliability and plasticity of response inhibition and interference control. Brain and Cognition. 2013;81(1):82–94. doi: 10.1016/j.bandc.2012.09.010.
  59. Zelazo PD, Anderson JE, Richler J, Wallner-Allen K, Beaumont JL, Conway KP, Gershon R, Weintraub S. NIH Toolbox Cognition Battery (CB): Validation of executive function measures in adults. Journal of the International Neuropsychological Society. 2014;20(6):620–629. doi: 10.1017/S1355617714000472.
