PLOS ONE. 2022 Dec 1;17(12):e0278409. doi: 10.1371/journal.pone.0278409

Information acquisition and cognitive processes during strategic decision-making: Combining a policy-capturing study with eye-tracking data

Alice Pizzo 1,*, Toke R Fosgaard 2, Beverly B Tyler 3, Karin Beukel 2
Editor: Iván Barreda-Tarrazona
PMCID: PMC9714927  PMID: 36454962

Abstract

Policy-capturing (PC) methodologies have been employed to study decision-making and to assess how decision-makers use available information when asked to evaluate hypothetical situations. An important assumption of PC techniques is that respondents develop cognitive models to help them efficiently process the many information cues provided while reviewing a large number of decision scenarios. With this study, we seek to analyze the process of answering a PC study. We do this by investigating the information acquisition and the cognitive processes behind policy-capturing, building on cognitive and attention research and exploiting the tools of eye-tracking. Additionally, we investigate the role of experience in mediating the relationship between the information processed and judgments in order to determine how the cognitive models of student samples differ from those of professionals. We find evidence of increasing efficiency as a function of practice when respondents undergo the PC experiment. We also detect a selective process in information acquisition; such selection is consistent with the respondents’ evaluation. While some differences are found in the information processing among the split sample of students and professionals, remarkable similarities are detected. Our study adds confidence to the assumption that respondents build cognitive models to handle the large amounts of information presented in PC experiments, and the detection of such models is not substantially affected by the applied sample.

Introduction

Policy-capturing represents a prominent approach to study decision-making and strategic choices. An advantage of the method is that it allows researchers to infer what information has the most influence on respondents’ assessments, judgments, and choices [1–3].

More specifically, policy-capturing is a methodology employed to assess how decision-makers use available information when asked to evaluate a hypothetical situation [4]. The purpose of policy-capturing (PC) is to capture individual decision-making policies, which reveal how decision-makers weigh, use, and select information [5]. A policy-capturing experiment consists of repeated judgments in which respondents are asked to judge a series of simulated scenarios that are characterized by various degrees of attributes (information cues). The PC technique presumes that decision-makers in practice must make decisions based on more decision cues than humans can cognitively process [6]; PC decision scenarios therefore include many information cues in order to be more realistic [7]. The method further assumes that respondents’ evaluations in the judgment exercise can be regressed on the variation in the attributes or information cues to determine which attributes of the decision scenario significantly impact the evaluation. The resulting coefficient estimates indicate the relative importance of the attributes and provide an overview of the patterns and weightings used by the decision-makers, while avoiding the social desirability bias often associated with self-reporting [3].
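As an illustration of this regression logic, the following sketch simulates a single hypothetical respondent whose ratings depend on only a few cues and then recovers the cue weights by ordinary least squares; the cue values, weights, and Python tooling are illustrative assumptions, not part of the original studies.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical design: 30 scenarios described by 5 information cues,
# each cue taking a degree from 1 to 5 (as on a five-point scale).
n_scenarios, n_cues = 30, 5
cues = rng.integers(1, 6, size=(n_scenarios, n_cues)).astype(float)

# Hypothetical decision policy: the respondent weighs only cues 0 and 2.
true_weights = np.array([0.8, 0.0, 0.5, 0.0, 0.0])
ratings = 1.0 + cues @ true_weights + rng.normal(0, 0.5, n_scenarios)

# Policy-capturing step: regress the ratings on the cue degrees.
model = sm.OLS(ratings, sm.add_constant(cues)).fit()
print(model.params)   # estimated weights approximate the decision policy
print(model.pvalues)  # significant cues are those the respondent relied on
```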

The widespread use of policy-capturing (PC) experiments, in general, is highlighted in several methodological reviews [3, 8, 9]. PC, with a range of different design specifications, has been applied to investigate the decision rules in dozens of high-ranking publications in organizational and management research [3], and employed in a wide range of topics: job search [10, 11], compensation [12], employee discipline [13], job analysis [14], sexual harassment [15], employment interviews [16, 17], contract arbitration [18], motivation [4], promotion evaluations [19], financial investment judgments [20], and executive decision-making [2, 6, 21]. Moreover, PC has also been widely used outside organizational and management research in fields such as medical decision-making [22, 23], psychology [24, 25], and sociology [26, 27]. The present paper targets the assumptions and the underlying cognitive mechanisms of the specific design of policy-capturing developed by Hitt et al. [6] in 1979, applied in several papers of the organizational literature [2, 28–32].

An important assumption behind the policy-capturing technique is that as respondents review several scenarios one at a time, and make judgments, they develop cognitive models to help them process, interpret, and integrate the complex set of information provided in the policy-capturing experiment [6]. Therefore, each evaluation or judgement is the result of both the information supplied in the scenarios and the subjective cognitive models that participants bring with them and develop while participating in the exercise [2, 33]. This assumption builds on behavioral decision theory, which argues that due to bounded rationality [34–36], when processing large amounts of information, humans seek to reduce their cognitive effort through the formation of heuristics and preferences in order to exclude some available information in uncertain and complex contexts and simplify the decision process [20, 37–40]. Indeed, policy-capturing studies that include more information than humans can cognitively process have found that decision-makers approach ill-structured decisions with complex mental models to integrate the information into a single judgment [2, 6, 41, 42]. A similar assumption is foundational to the attention-based view of the firm, which recognizes that human beings have limited cognitive capabilities when processing all the available information that potentially is relevant for making decisions and judgements [43, 44].

Although PC has been prominent and its use widespread, the literature lacks an experimental investigation of the cognitive processes underlying the methodology. To address that gap, this study builds on cognitive and attention research and exploits the potential of eye-tracking to study the process of answering a PC experiment. Additionally, we investigate the role of experience in mediating the relationship between the information processed and the judgments, to verify the reliability and appropriateness of the samples recruited. Indeed, Karren et al. [3] point out that despite reliability being an important criterion in most research studies, few of the reviewed PC papers have analyzed the reliability of their decision makers’ judgments. This level of analysis relates to an ongoing academic debate on the use of students as experimental subjects [45–47]. Working with students as subjects, when it leads to externally valid conclusions, can be a strategic decision in terms of availability, reachability, and costs. That is also why we consider not finding a substantial difference in the cognitive processes behind PC to be an encouraging finding from the perspective of running policy-capturing studies on strategic decision-making with students as subjects. In sum, our research questions are: What characterizes the cognitive processes of respondents undergoing a policy-capturing experiment, and are these processes affected by the applied sample composition?

Policy-capturing and choice modelling methods share many similarities, as emphasized by Aiman-Smith et al. [9], and we therefore build our analysis upon the experimental approaches previously applied to choice modelling studies, such as in Meissner et al. [48] and in Hoeffler et al. [49]. It is important to note that differences do exist between the methods, such as the number of attributes the respondents are exposed to, whether the extracted rating is continuous or not, and whether the attribute selection is theoretically or empirically driven. Aiman-Smith et al. [9] identify several analogies in various disciplines that use similar regression-based methods for investigating decision-making. For example, traditional conjoint analysis is the preferred method in marketing research, contingent preference is prominent in environmental and social policy research, and policy-capturing is more common in strategy and human resource management studies. Given the strong similarities, the methodologies employed in choice modelling can be considered useful guidelines for researchers applying policy-capturing [3, 50]. Moreover, research based on choice modelling has also used eye-tracking techniques to explore how participants process information in order to test the assumptions of choice experiments [51–61]. Prior work at the intersection of attention and choice modelling informs our four hypotheses about how participants complete the PC experiment. The hypotheses, further discussed below, are that respondents become more selective, more consistent, and more efficient over the course of the experiment, and, finally, that experience also affects these cognitive processes.

We have completed a PC experiment and measured the participants’ attention processes with an eye-tracking device. The topic of the PC experiment is inter-organizational collaboration, but in the present paper, we exclusively report on the processes of answering the experiment, not on the measured drivers of inter-organizational collaboration. We have dedicated a separate sister paper to the evaluation outcomes of the PC experiment [62]. Here, we merely connect the answers in the PC experiment to attention. Doing this, we find evidence that respondents of a policy-capturing study build cognitive models to cope with the large amount of information provided. More precisely, effort, measured as the time spent looking at the information that needs to be evaluated, decreases with practice, signaling an increase in efficiency. Furthermore, a selection of attention also emerges among the different attributes that characterize an evaluation scenario. The selection is associated with the respondents’ evaluation of the PC scenarios. The applied sample, and hence experience, is found to have little influence on the detected cognitive models, although a few differences do apply. The student sample reduces effort more quickly, but the two samples do not systematically diverge in their attention patterns. They spend a similar amount of time on scenarios, and they show similar patterns of attention, which undergo a selection process for both. Interestingly, both samples show consistency between what they weigh as important and what they state is important.

The paper is structured as follows. In the next section we build the theoretical framework upon which the empirical analysis is based; in the following section we develop the hypotheses. We then discuss the methods, present the results, and end with a discussion of our results, limitations, and conclusions.

Theoretical framework

The attention-based theory developed by Ocasio [43] defines attention as the “noticing, encoding, interpreting, and focusing of time and effort” on the information that is relevant for evaluating choices and making decisions. Attention itself represents a good indicator of the limited information processing capability of humans [63]. In situations with more information to be processed than decision makers can cognitively handle, selective attention is described by Ocasio [44] as a “process by which individuals focus information processing on a specific set of sensory stimuli at a moment in time”. Subjects choose which stimuli to attend to and which to screen out [64].

The close link between attention and decisions departs from the traditional economics assumption that every decision maker opts for a certain choice based on pre-existing preferences and all the available information [65]. In behavioral decision research, the constructive preferences approach argues that preferences are constructed by the decision maker within the specific task and context of a decision [66, 67]. The availability of information in a given context and the features that characterize the contextual environment are essential determinants of the constructive preferences.

Attention-based theory assumes a similar process of decision-making as the one described above [43]. While the decision context relates to the information that needs to be evaluated and processed by the decision maker, some subjective characteristics are also influential factors in the cognitive process. Age, gender, type of education, and nationality are some of them; experience is another. In the organizational domain, in accordance with the stream of research on long-term working memory [68], respondents with experience are found to encode and retrieve information more rapidly than inexperienced respondents and are later able to access the acquired knowledge efficiently. Therefore, the cognitive process that respondents undergo during a policy-capturing experiment, composed of simulated scenarios related to the workplace and strategic organizational decisions, might be moderated by their level of professional experience [69]. Scholars have also found that experience is related to the information-reduction hypothesis [70]: expertise allows respondents to direct more selective attention to stimuli that are relevant to their decision-making [71]. Thereby, attention is allocated more efficiently by experienced respondents.

Finally, relevant to the present study is the effort to combine eye-tracking with choice-based research to explore to what extent respondents make coherent decisions: they might be influenced by the survey context, information cues, ordering effects, and their own experience as reflected in their demographic characteristics. Several studies have combined choice modelling experiments with visual attention data to test the cognitive processes underlying choice experiments [40, 51–56, 72, 73]. Scholars have indeed applied eye-tracking as a measure of attention to quantify individuals’ information processing in choice-based exercises in different domains such as decision-making in economics [51, 56, 74, 75], consumer choice [48, 52, 75–81], medical decision-making [54, 82], and food choice in the sustainability field [83–85]. In a comprehensive review of eye-tracking, Orquin [86] explains how choices and attention are interrelated, while Ashby et al. [87] provide a review of the reasons why the use of eye-tracking methodologies has increased in the field of behavioral decision-making. The present study combines the measurement of eye movements and the lessons learned from choice-based models to investigate the cognitive processes underlying this policy-capturing technique.

Hypotheses

In our study we test four hypotheses related to the cognitive processes underlying the policy-capturing technique by combining the PC experiment with eye-tracking.

Eye-tracking has been identified as a tool that can inform about the process of answering top-down assigned tasks, such as PC studies [86–88]. The idea is that the tasks and goals of an experiment should motivate respondents to use deliberate reasoning to analyze opportunities, within their cognitive information-processing constraints, to form their judgements [37, 89–91]. Visual attention represents the psychological construct of focus in eye-tracking research. The notion behind the practice of quantifying attention is the so-called eye-mind assumption, which posits a tight link between what is seen and what is cognitively processed [92]. Subsequently, Huettig et al. [93] developed the relative eye-mind hypothesis, which specifies that the most active representation in working memory determines the likely direction of eye movement at any given moment.

The use of eye-tracking has provided a better understanding of how the visual design of a task shapes attention [94], how attention develops with the repetition of a certain task [86], and how accumulated attention affects the cognitive models that predict decisions [95]. More specifically, scholars have shown that the number of fixations on and the time spent looking at a specific area of interest, two of the basic eye-tracking metrics, indicate how much attention the individual directs to that area [96, 97].

Efficiency

To learn about the cognitive processes underlying a PC experiment, we investigate the development of efficiency, which can be measured as the reduction in attention found with repetitions of the same task [77, 98–100]. Especially in repeated tasks, respondents get better at extracting information and retaining it over time [87]. If the respondents are asked to repeat the same decision task multiple times, the amount of effort required to perform the evaluation is expected to decrease over time, since respondents base their decisions on previous choices, internal decision rules, or rules of thumb [49, 101]. Because eye-tracking research has found experimental respondents to spend progressively less time looking at the information provided before making a judgement in later repetitions than they do in early repetitions [102], a similar decreasing path of attention is expected to emerge. Hence, we hypothesize the following.

  • H1.1: respondents of a PC exercise become more efficient in directing attention to attributes with practice.

We test this hypothesis by using the eye-tracking measure of time spent as in Hoeffler et al. [49]. Time spent is the total amount of time devoted to the available information in each scenario.

Selectivity

Because in a policy-capturing exercise the material provided in each scenario exceeds the amount of information that respondents can process [29], we expect that respondents will assess the policy-capturing exercise based on only a selection of attributes and that those selected attributes are the more important ones for their answers in the PC experiment. This conjecture is consistent with the attention-based literature, the findings from eye-tracking research, and its applications to choice-based studies. Thus, the hypothesis is summarized as follows.

  • H1.2: respondents become more selective in the information acquisition with practice during a PC exercise.

To test this hypothesis, we focus on the eye-tracking measure of fixations as in Meissner et al. [48]. A fixation is defined as the dwelling of attention on a certain piece of information.

Consistency

Consistency is a key feature of the information acquisition process behind any repeated choice exercise, as it enables predictability [48, 49]. We are interested in investigating the relationship between what respondents do while answering a PC exercise (in other words, what mental schemes they apply or how they weigh attributes) and what they state they do (i.e., what they think and say is important for their decisions).

  • H1.3: respondents are consistent between what they weigh as important and what they say is important.

Although we include eye-tracking data in the analysis, we investigate this matter mainly by comparing the respondents’ evaluations extracted from the completion of the PC experiment with a self-reported evaluation in which the respondents rank the available information from most to least important [49].

Experience

We are also able to investigate the filtering effect of experience thanks to the sample composition of the experiment. Indeed, we recruited eighteen professionals working in the science industry and twenty-six MSc students with training in science. The impact of experience, understood as the number of years of professional experience after the most recent degree, is analyzed across the split sample of students and professionals by means of statistical estimations from the PC outcomes and the eye-tracking metrics. Because students on average have less professional experience than professionals working in the industry (1.84 years vs. 11.19 years of working experience), their ability to make judgments about inter-organizational collaboration is likely more limited [103]. Both the organizational and the experimental literature have suggested that there exists a difference between actual workers in the field and students in the lab, since experience plays a role in experiments [104]. On this premise, the second level of analysis of this research converges on the following hypothesis.

  • H2: experience affects the cognitive processes behind a PC exercise.

To test this, we replicate all previous evidence by focusing on the sample composition. On the one hand, experience is expected to have a boosting effect on efficiency because professionals are more familiar with the evaluation context of the PC exercise, so the effort required to complete the survey can be expected to be lower. On the other hand, the effort dedicated to the task could be higher for experienced respondents, since students have less sense of the collaborations they are asked to evaluate and might skip through the exercise faster.

Materials & methods

The policy-capturing decision models and procedure

In this study, the policy-capturing tool was used to investigate industrial scientists’ assessment of potential collaborative opportunities with academics [6, 30, 31]. We created a policy-capturing survey using the online survey software Qualtrics to determine the attributes that scientists focus on and weight more heavily when they evaluate research collaboration opportunities with university academics. The Qualtrics exercise included two instruments whose order was randomized across respondents: 1) the policy-capturing block, which consisted of an instruction page, 30 randomly ordered pages, each describing a collaboration decision scenario, and a final page where participants rank ordered the importance of the decision attributes used in the scenarios; and 2) a background survey collecting information on participants’ demographics and attitudes. While the background survey consists of individual direct questions answered on Likert scales, the policy-capturing block constitutes the actual experimental task.

1. The policy-capturing instrument

The first page of the policy-capturing instrument provided participants with instructions for the exercise and showed them the two questions they would be asked after they reviewed the information included in each scenario. The 30 scenarios that followed the instructions described potential collaborations; each consisted of 16 decision attributes that were randomly assigned different degrees, and two questions asking participants to rate the attractiveness of the collaborative opportunity. We formulated 16 attributes to describe each of the 30 scenarios of potential collaborations with university academics. The number and the design of the scenarios followed the methodology of Hitt et al. and Tyler et al. [6, 29]. More specifically, Hitt et al. [6] developed, in 1979, the specific design of the policy-capturing (PC) questionnaire applied in several papers thereafter. The specificity of this version of PC concerns the number of scenario repetitions, the number of attributes included and repeated in each scenario, and the randomization of the order in which the scenarios are administered to the respondents, while the order of the attributes within each scenario is kept fixed (randomized only once during the design phase). Applying the policy-capturing methodology as developed by Hitt et al. [6] provided a close reference to prior, well-established literature and led us to preserve the key features of the questionnaire design. The attributes of the PC exercise were formulated in accordance with the constructs of four relevant theories extracted from the organizational literature. Specifically, the four theories were identified among those considered important for collaborations between professional and academic scientists: transaction cost economics [105], the resource-based view [106], regulatory focus theory [107, 108], and information economics [109]. An overview of the 16 attributes can be found in Table 1 and the exact layout used and repeated verbatim over all 30 scenarios can be seen in S1 Fig, which shows a screenshot of one selected scenario.

Table 1. Theoretical constructs.

Overview of the theoretical constructs behind every attribute of the PC scenarios.

Transaction Cost Economics
  • Asset Specificity: Level of investments in equipment required for this project that cannot be used in other research projects (i.e., investments that cannot be transferred to other collaborations).
  • Small Numbers: Number of other partners currently interested in cooperating with you.
  • Formal Governance: Extent to which this collaboration will be coordinated by and controlled by detailed contracts.
  • Informal Governance: Favourability of the collaborative partner’s cooperative history.

Information Economics
  • Asymmetric Information: Disciplinary overlap between your technical knowledge and that of the other partner in this collaboration.
  • Asymmetric Information: Degree to which the other partner in this collaboration possesses intangible assets that are difficult for you to value.
  • Adverse Selection: Your familiarity and knowledge of the collaborative partner’s knowledge, skills and capabilities.
  • Adverse Selection: Extent to which the collaborative partner’s co-authors and colleagues are considered to be reputable.

Resource-Based View
  • Financial Resources: Financial resources the collaborative partner’s organisation is committing to support the collaboration.
  • Human Capital Resources: Extent to which the collaboration provides you with access to valuable, rare intellectual talents.
  • Physical Capital Resources: Extent to which the collaboration will give you access to valuable, rare equipment.
  • Imitability: Degree to which the intellectual capital created in this research collaboration will be openly and broadly shared.

Regulatory Focus Theory
  • Prevention & Emotional: Extent to which not meeting goals on this collaboration will undermine future collaborations.
  • Prevention & Vigilance: Need for partners to diligently/constantly search for problems and difficulties during the collaboration.
  • Promotion & Behaviour: Extent to which future activities in the collaboration will be decided by the partners based on intermediate outcomes.
  • Promotion & Relational: Degree to which this collaboration can be expected to establish a close, trust-based relationship between the researchers.

Prior to conducting the study, we organized a focus group with a number of junior and senior researchers in the organizational area at the local university to test the representativeness of the attributes. Moreover, feedback from senior experts in the use of policy-capturing exercises was sought. Although the policy-capturing questionnaire was designed to assess the influence of the sixteen attributes suggested by the four theories, the weighting of these attributes by the respondents is not the main focus of analysis; it rather serves as a tool to investigate how the process of decision-making unfolds.

The attributes were displayed in the same graphical order, while the degrees characterizing the 16 attributes, distributed on a five-point Likert scale (low, moderately low, average, moderately high, high), were randomly assigned for each of the scenarios. We therefore end up with 30 unique scenarios, identified as different opportunities with different degrees of the describing attributes. The mixture of elements in the 30 scenarios was determined from a fractional factorial design. To further verify the independence of the attributes, the correlation of the attribute degrees within and across theories was tested and found to range within ±0.45. For each scenario, the respondents were asked to evaluate the attractiveness of the collaborative opportunity by responding to two questions on a seven-point Likert scale ranging from 1 (“very unattractive/very low”) to 7 (“very attractive/very high”) [31]. The two evaluation questions were kept the same over the thirty scenarios and were formulated as follows:

“Based on the information provided above and your experience, please rate the attractiveness of this collaboration?”

“Based on the information provided above and your experience, what is the probability that you would further explore this collaboration?”

The Qualtrics software also randomly determined and recorded the order in which the 30 scenarios appeared to the respondents. Hence, using the degrees of the attributes that characterized each simulated scenario as the independent variables and the resulting evaluation rating completed by the respondents as the dependent variable, regression analysis could be performed to reveal the respondents’ decision models [6]. S1 Fig provides a graphic overview of what is meant by degrees of the attributes that characterized each simulated scenario. In short, respondents were instructed to provide an evaluation of 30 scenarios, each described by a list of 16 attributes (located in the same position in each scenario) that vary within a certain range of values (five degrees from “low” to “high”) in a random fashion. While the position of an attribute is the same for every scenario, its characterization (its degree) changes randomly across scenarios. All respondents eventually answer the same 30 scenarios, but each in a different random order.
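As noted above, the degrees were assigned from a fractional factorial design and their pairwise correlations were verified to stay within ±0.45. A minimal sketch of such an independence check, applied here to a purely hypothetical 30 x 16 design matrix, could look as follows.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical 30 x 16 design matrix of attribute degrees (1 = low ... 5 = high).
design = pd.DataFrame(
    rng.integers(1, 6, size=(30, 16)),
    columns=[f"attribute_{i+1}" for i in range(16)],
)

# Pairwise correlations of the attribute degrees across the 30 scenarios.
corr = design.corr()

# Largest absolute off-diagonal correlation; the study reports values within +/-0.45.
off_diagonal = corr.where(~np.eye(len(corr), dtype=bool))
print(off_diagonal.abs().max().max())
```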

The final page of the policy-capturing instrument asked respondents to rank order the sixteen attributes used to create the 30 scenarios according to the respondent’s perceived importance.

2. Additional survey

In addition to the policy-capturing instrument, each respondent completed a survey consisting of demographic data and a number of scales measuring attitudes and preferences related to their work environment and professional life. The material included in the online survey consisted of these two parts, which Qualtrics randomly ordered along with the policy-capturing instrument for each respondent to control for potential order effects.

Eye-tracking measures

The eye-tracking equipment enabled us to collect evidence on the cognitive process that respondents of the policy-capturing go through. Eye-tracking is a common technology to measure where someone is looking or how they visually scan a specific situation. Eye-tracking measures are a useful tool for both qualitative and quantitative research, as they allow researchers to tap into non-conscious processes including biases, heuristics, and preference formation [110].

Visual attention, also defined as “selectivity in perception”, is enacted by moving the eyes around the display (screen, text, graphic material, or choice scene). Since only 2% of the visual area is projected onto the fovea, the central part of the retina with a high density of sensory neurons, the eyes need to move in order to inspect stimuli and fully acquire information [111]. Technically speaking, the eye-tracking equipment calculates the location of the respondent’s fixations and gaze points, which are the basic output measures of interest and the most used terms. Gaze points show what the eyes are looking at. Our eye-tracker, with a sampling rate of 60 Hz, can collect 60 individual gaze points per second. If a series of gaze points is very close in time and space, the gaze cluster constitutes a fixation, denoting a period where the eyes are locked towards an object. As mentioned above, a fixation is defined as the instance in which the eyes are stably resting on a certain stimulus. The quick movement of the eyes between consecutive fixations is called a saccade [112]. Researchers, such as Rayner [111], have demonstrated that information acquisition happens only during fixations, and not during saccades. For this reason, a number of eye-tracking measures are based on fixations as the unit of analysis. The duration threshold used to identify a fixation is 100 milliseconds (ms). This threshold was not deliberately chosen by the authors but is part of the data segmentation of the eye-tracking software used for the study (iMotions A/S, version 7.1). In iMotions, a fixation candidate with a duration of less than 100 ms is discarded; only candidates longer than 100 ms count as fixations. Moreover, the eye-tracker collects high-frequency attention data for the entire period in which respondents participate in the study. To organize the massive amount of data collected, Areas of Interest (AOIs) were defined to summarize the attention data. More specifically, the areas were divided into three main groups: the attributes, the degrees, and the answers. Attention on each group can be analyzed separately. This structure makes it possible to discern the composition of each scenario and to gather eye-tracking evidence at the needed level of detail. S1 Fig shows the structure of the AOIs for one randomly selected scenario from the policy-capturing experiment. The eye-tracking measures, such as gazes, fixations, and revisits, recorded in the AOIs were used in our analysis. A brief explanation of these common eye-tracking measures follows below.
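Fixation identification itself was performed internally by iMotions; purely to illustrate the kind of segmentation described above, the sketch below groups 60 Hz gaze samples into fixations and discards candidates shorter than 100 ms. The spatial dispersion threshold and the sample data are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: float
    end_ms: float
    x: float
    y: float

def _close_cluster(cluster, fixations, min_duration_ms):
    """Turn a cluster of gaze samples into a fixation if it lasts long enough."""
    if not cluster:
        return
    if cluster[-1][0] - cluster[0][0] >= min_duration_ms:
        fixations.append(Fixation(
            start_ms=cluster[0][0],
            end_ms=cluster[-1][0],
            x=sum(p[1] for p in cluster) / len(cluster),
            y=sum(p[2] for p in cluster) / len(cluster),
        ))

def detect_fixations(samples, max_dispersion_px=40, min_duration_ms=100):
    """Group (t_ms, x, y) gaze samples into fixations.

    A sample joins the current cluster while it stays within
    max_dispersion_px of the cluster centroid; clusters shorter than
    min_duration_ms are discarded (cf. the 100 ms threshold above).
    """
    fixations, cluster = [], []
    for t, x, y in samples:
        if cluster:
            cx = sum(p[1] for p in cluster) / len(cluster)
            cy = sum(p[2] for p in cluster) / len(cluster)
            if abs(x - cx) > max_dispersion_px or abs(y - cy) > max_dispersion_px:
                _close_cluster(cluster, fixations, min_duration_ms)
                cluster = []
        cluster.append((t, x, y))
    _close_cluster(cluster, fixations, min_duration_ms)
    return fixations

# Example: samples roughly 16.7 ms apart, as produced by a 60 Hz eye-tracker.
samples = [(i * 16.7, 500 + (i % 3), 300) for i in range(30)]
print(detect_fixations(samples))
```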

The metric Time Spent (briefly introduced above) quantifies the amount of time in milliseconds (ms) that respondents spend looking at a particular item or AOI. According to prior literature, spending more total time looking at a specific piece of information is an indicator of preference for making a decision that is consistent with that information [58, 113, 114].

The variable Fixation Count counts the number of fixations registered in a specific AOI; its interpretation is similar to that of time spent, but its unit of measure (counts rather than time) and its order of magnitude provide a complementary way to quantify attention.
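A minimal sketch of how these two AOI-level metrics can be computed from a fixation-level export is given below; the column names and values are hypothetical stand-ins for the iMotions output.

```python
import pandas as pd

# Hypothetical fixation-level export: one row per fixation landing in an AOI.
fixations = pd.DataFrame({
    "respondent":  [1, 1, 1, 2, 2],
    "scenario":    [1, 1, 2, 1, 1],
    "aoi":         ["attribute_1", "degree_1", "attribute_1",
                    "attribute_2", "attribute_2"],
    "duration_ms": [220, 130, 310, 180, 250],
})

# Time Spent: total fixation duration per respondent, scenario, and AOI.
time_spent = (fixations
              .groupby(["respondent", "scenario", "aoi"])["duration_ms"]
              .sum()
              .rename("time_spent_ms"))

# Fixation Count: number of fixations per respondent, scenario, and AOI.
fixation_count = (fixations
                  .groupby(["respondent", "scenario", "aoi"])
                  .size()
                  .rename("fixation_count"))

print(pd.concat([time_spent, fixation_count], axis=1))
```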

Eye-tracking set-up

To collect eye-tracking data, we used a remote Tobii X2-60 eye-tracker magnetically attached to the bottom of a monitor, in an on-site room reserved for the experiment. In addition to the monitor, a mouse and a keyboard were made available to the respondents to indicate their decisions in the study. The 24-inch monitor had a resolution of 1920 x 1080 pixels, and it was connected to a computer behind a partition, where a researcher controlled the software (iMotions A/S, version 7.1). The survey was built on the Qualtrics platform and connected to iMotions as a plug-in. Before starting the experiment, respondents received information about the procedure of the study, the content of the survey, and further instructions regarding the use of the eye-tracker. Once the respondents had asked their questions, the eye-tracker was calibrated using a 9-point calibration to ensure sufficient precision: the calibration outcome was considered positive if the mean difference between the measured gaze data and the target points was at or under 40 pixels. Respondents completed the study one at a time, with a researcher present in the room operating the eye-tracker and no time limit.
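The 40-pixel acceptance criterion amounts to a simple check of the mean offset between the measured gaze positions and the nine calibration targets; the sketch below is illustrative only, and the target grid and measured offsets are hypothetical.

```python
import math

def calibration_ok(targets, measured, max_mean_error_px=40):
    """Accept the calibration if the mean gaze-to-target distance is <= 40 px."""
    errors = [math.dist(t, m) for t, m in zip(targets, measured)]
    return sum(errors) / len(errors) <= max_mean_error_px

# Hypothetical 9-point calibration grid on a 1920 x 1080 screen,
# with measured gaze points offset slightly from each target.
targets = [(x, y) for x in (160, 960, 1760) for y in (90, 540, 990)]
measured = [(x + 12, y - 8) for x, y in targets]
print(calibration_ok(targets, measured))  # True: mean error well under 40 px
```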

For the data collection, the software iMotions combined data from the survey and from the eye-tracker in synchrony. iMotions automatically segmented the eye-tracking variables (e.g., number of fixations, amount of time spent, number of gazes, time to first fixation, hit time, number of revisits, fixation duration). The eye-tracking data were stored in iMotions on the lab computer, while the policy-capturing decision data were stored online on Qualtrics and eventually downloaded. The two datasets were analyzed separately, then made compatible and merged for common analysis. Thanks to the online link plug-in function of iMotions, it was possible to temporally align what was shown on the computer screen with the data collected by the eye-tracker.
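Conceptually, combining the two data sources amounts to matching records on respondent and aligning timestamps so that each gaze record is attributed to the scenario shown on screen at that moment. The sketch below illustrates this idea with hypothetical column names; it does not reproduce the iMotions plug-in mechanics.

```python
import pandas as pd

# Hypothetical per-scenario decision data exported from Qualtrics.
decisions = pd.DataFrame({
    "respondent": [1, 1],
    "scenario": [1, 2],
    "scenario_onset_ms": [0, 42_000],
    "rating": [5, 3],
})

# Hypothetical eye-tracking records with timestamps relative to the recording.
gaze = pd.DataFrame({
    "respondent": [1, 1, 1],
    "timestamp_ms": [1_200, 43_500, 44_100],
    "aoi": ["attribute_1", "attribute_4", "degree_4"],
})

# Attribute each gaze record to the scenario that was on screen at that time.
merged = pd.merge_asof(
    gaze.sort_values("timestamp_ms"),
    decisions.sort_values("scenario_onset_ms"),
    by="respondent",
    left_on="timestamp_ms",
    right_on="scenario_onset_ms",
    direction="backward",
)
print(merged[["respondent", "scenario", "aoi", "rating"]])
```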

Pilot

A pilot study was conducted to verify the initial structure of the policy-capturing survey and to pre-test the technical set-up of the eye-tracking equipment (including both the respondents’ monitor and the researcher’s workstation of the eye-tracking setup). A researcher recruited twenty-eight students by presenting the project to two Master of Science classes and encouraging students to participate in a pilot of the experiment in October 2018. The students made individual appointments to participate in the pilot study in a departmental lab on campus. In the lab, the researcher provided each respondent with a simple overview of the study and requested that they sign a written consent form. No minors were included in the study. Participants signed the consent form in front of the researcher, who safely stored it. Moreover, respondents had to agree to the terms and conditions on the first page of the Qualtrics program in order to proceed to the survey. The results of the pilot study were used to improve the experimental materials.

Sample

To implement the study, we obtained a sample composed of students and professionals with training in science. To solicit science professionals, between January and April 2019 we contacted four companies in science-related sectors in the local area. Each manager identified and invited key employees to participate in the study, and then provided us with a list of names of those willing to participate. We received the names of forty-six professionals, who were e-mailed a request to participate in the study and were given two options: answering the policy-capturing study on their personal computer or on a computer in our mobile on-site eye-tracking lab. While twenty-two respondents selected the first, online option, eighteen respondents showed up in the on-site lab for individual appointments and participated in the policy-capturing exercise while being eye-tracked. Students in a Master of Science Entrepreneurship and Innovation class were also asked to participate in the study in November-December 2018. To reduce potential attrition and to balance the incentive that professionals might have because of their managers’ influence, students were told that one of them would be randomly drawn to receive a cash prize of approximately 60 euros. Thirty-two students agreed to participate; however, only twenty-six completed the experiment. Ten of the students were enrolled in a Food Innovation and Health program and sixteen were enrolled in a Food Science and Technology program. Both subsamples took the policy-capturing exercise in the eye-tracking setting. When the respondents came to the department eye-tracking lab, the same procedure as for the professionals was followed: they participated one at a time, were given a brief overview initially, signed the informed consent for the eye-tracking experiment, and clicked the required box on the terms and conditions for the policy-capturing study. No minors were included in the study. Participants signed the consent form in front of the researcher, who safely stored it. Moreover, respondents had to agree to the terms and conditions on the first page of the Qualtrics program in order to proceed to the survey. The twenty-six students provided complete data and one of the students was randomly selected to receive the cash prize. Thus, our results report data collected for twenty-six students and eighteen professionals. The sample size is in line with policy-capturing studies of reference [6, 29]. The average age of the combined sample was 36 years old (25.5 for the student sample and 42.9 for the professionals), and the percentage of males was 36% (19% for professionals and 61.5% for students).

The policy-capturing exercise was performed during the academic year 2018/2019 as part of a funded project at the local university. Several academic outcomes, with distinct scope, level of analysis, and research focus, have been developed by exploiting the data collected during that time; as a result, the methodological description is similar to, and shared with, the method section of our other current working paper. In accordance with Danish legislation, there was no need for institutional review board (IRB) approval for this study, since sensitive data, as defined by the Danish Data Protection Agency, was not retrieved from participants. The study did collect written consent from the participants in line with good ethical research practice. Written and digital consent was obtained. The complete dataset, without any identification of the participants, is posted on the Figshare platform [115].

Results

Our empirical strategy is based on previous work that investigates the cognitive processes behind decision-making. Specifically, we apply the constructs and the analytical framework from Hoeffler et al. [49], who study the process of constructing stable preferences, which is relevant for our hypotheses on efficiency and consistency. Furthermore, we also use the framework of Meissner et al. [48], who study the role of attention in choice tasks, which is relevant for our hypothesis on selectivity. We opted for this approach to seek legitimacy in applying eye-tracking to investigate the policy-capturing methodology, which, to the authors’ knowledge, has not been done before.

For clarity, we begin by defining the terminology we use in the rest of the paper. We provide a short definition of each construct used in the paper. We define an attribute as one of the features that characterize a certain alternative [48], which in our case are the attributes that characterize the 30 policy-capturing scenarios that respondents are asked to evaluate. We describe effort as the amount of time that respondents invest in delivering an answer, measured as their response time [49]. Efficiency is the process by which decision-making progressively requires less effort to reach a resolution of a task: it is closely related to effort, which is also defined as the amount of mental energy required to make up one’s mind [48]. Selectivity refers to the process of choosing to attend to progressively smaller amounts of information. We refer to preferences as the relative importance respondents place on attributes both during the PC evaluation and the final ranking exercise. Consistency in decision-making relates to a non-contradictory pattern of choices. It is different from what Hoeffler et al. [49] call violations, which can be defined as the magnitude of the mismatch between two rankings (in our case, between the PC attributes ranking and the rated attributes ranking). Experience is professional expertise, understood in this policy-capturing study as the number of years of professional experience after the most recent degree of education. Although efficiency and selectivity both represent measures of attention, we believe they play two distinct and important roles in our analysis. On the one hand, the two concepts are applied independently from each other and test different mechanisms, which we believe justifies reporting both. On the other hand, two different eye-tracking measures are employed to study efficiency and selectivity, as explained in detail in the sections below.

Our results on the respondents’ decision processing are based on three different sources of information:

  1. PC data: the evidence originated from the policy-capturing evaluation exercise. We use answers about collaboration in the 30 policy-capturing scenarios to assess what pieces of information respondents weigh in their evaluation.

  2. Ranked data: the evidence provided in the ranking task at the end of the PC assessment. The individual respondent ranks the elements of the collaboration scenarios from most to least important for their assessment. It was not possible to repeat a ranking position.

  3. Eye-tracked data: the evidence originating from the eye-tracking technology. We select the attention data that is relevant for our analysis, the AOIs. In particular, we select attention on the attributes and on the degrees in the collaboration scenarios.

Efficiency (H1.1)

Our starting point for investigating the information acquisition process behind the policy-capturing method is to check whether respondents become more efficient throughout the evaluation process. The idea is that, facing the massive amount of information in the PC, the decision maker aims to be as efficient as possible in evaluating the choice scenarios by using the most important information while limiting the overall time spent. During what Hoeffler et al. [49] call the constructive preferences building phase, cognitive effort decreases, and efficiency increases accordingly. Because every choice is a process through which preferences are consolidated before landing at a resolute assessment [49], we expect to find a decreasing pattern of attention allocated to the attributes and degrees over time. A proxy for effort, and therefore for efficiency, is the time spent, calculated as the average time spent on the attributes and on the degrees, respectively. On average, respondents of our PC exercise dedicate 15.58 seconds to looking at the attributes in each scenario, and 4.76 seconds to the degrees in each scenario. The development of attention over time is shown in Fig 1, which also includes the split between professionals and students.

Fig 1. Attention.


Pattern of attention measured in time spent (seconds) for attributes (Panel A) and degrees (Panel B) over the 30 repeated collaboration scenarios, by professionals and students. The logarithmic form of the variable of interest is depicted in the graph.

Fig 1 shows the logarithmic form of the average time, in seconds, per scenario spent on all the attributes and on all the degrees, respectively. A clearly descending pattern can be seen for the attributes, while attention on the degrees is at a stable and lower level. The difference between the two types of AOIs is naturally affected by the content and the size of the areas: the attributes describe in words the characteristics of the scenario, while the degrees describe, with a simple cross on a scale, the extent to which the attribute applies. See S1 Fig for reference. We find a significant reduction in the average time spent on the attributes in the first ten scenarios compared to the last ten (the difference is 12.16 seconds; paired t-test, p<0.001). For the degrees, the same comparison is also significant, but smaller in magnitude (the difference is 12.46 seconds, p = 0.015).
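The first-ten versus last-ten comparison is a paired test at the respondent level. A minimal sketch of that computation, run here on simulated (hypothetical) per-scenario time-spent data, is shown below.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical data: time spent on the attributes (seconds) per respondent and round.
rows = [(r, rnd, max(2.0, 25 - 0.5 * rnd + rng.normal(0, 3)))
        for r in range(44) for rnd in range(1, 31)]
df = pd.DataFrame(rows, columns=["respondent", "round_no", "time_spent_s"])

# Per-respondent average time spent in the first ten and the last ten scenarios.
first10 = df[df["round_no"] <= 10].groupby("respondent")["time_spent_s"].mean()
last10 = df[df["round_no"] >= 21].groupby("respondent")["time_spent_s"].mean()

# Paired t-test of the within-respondent reduction in effort.
t, p = stats.ttest_rel(first10, last10)
print(f"mean reduction = {(first10 - last10).mean():.2f} s, t = {t:.2f}, p = {p:.4f}")
```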

We also test the efficiency on the attributes and the degrees econometrically using a random-effects regression clustered at the individual level (Table 2). We use a random-effects model due to the panel-data nature of the attention data over scenarios.

Table 2. Regression table H1.

Random effects regression models for both attributes and degrees covering hypothesis 1.1 and hypothesis 1.2 (p-values within parentheses).

Hypothesis 1.1—efficiency (d.v. log(time spent per scenario))

  Attributes, model (1): Rounds -0.064*** (0.000); Constant 9.499*** (0.000); R-squared 0.044; N 1320
  Attributes, model (2): Rounds -0.064*** (0.000); Gender -1.022 (0.145); National 0.497 (0.446); Education 0.257 (0.380); Years of work experience -0.015 (0.730); Constant 8.584*** (0.000); R-squared 0.082; N 1320
  Degrees, model (3): Rounds -0.022*** (0.000); Constant 7.800*** (0.000); R-squared 0.006; N 1320
  Degrees, model (4): Rounds -0.022*** (0.000); Gender -1.339* (0.048); National 0.627 (0.320); Education 0.240 (0.397); Years of work experience -0.031 (0.481); Constant 7.116*** (0.000); R-squared 0.080; N 1320

Hypothesis 1.2—selectivity (d.v. sum of no-fixation attributes per scenario)

  Attributes, model (5): Rounds 0.122*** (0.000); Constant 3.804*** (0.000); R-squared 0.047; N 1320
  Attributes, model (6): Rounds 0.122*** (0.000); Gender 1.704 (0.192); National -1.184 (0.337); Education -0.517 (0.338); Years of work experience 0.068 (0.156); Constant 5.685* (0.014); R-squared 0.094; N 1320
  Degrees, model (7): Rounds 0.042* (0.030); Constant 6.443*** (0.000); R-squared 0.006; N 1320
  Degrees, model (8): Rounds 0.042* (0.030); Gender 1.685 (0.184); National -1.326 (0.268); Education -0.568 (0.282); Years of work experience 0.104* (0.041); Constant 8.437*** (0.000); R-squared 0.078; N 1320

* p<0.05, ** p<0.01, *** p<0.001

We treat the average time spent as the dependent variable and use a log-transformation to better model the nature of the data. The variable round (ranging from 1 to 30 according to the individual chronological order of the scenarios) is treated as an explanatory variable. We add several demographic controls (i.e., gender, nationality, educational level, working experience). We find that round is highly significant, regardless of whether the control variables are added (t = -0.064; p<0.001), suggesting a systematic reduction in effort (i.e., an increase in efficiency) over the repetition of the scenarios. To obtain a more relative measure of attention and as a sensitivity check, we replicated the same analysis as above (Fig 1 and Table 2) with an individual measure of time spent on attributes and degrees as the proportion of the total time spent on all 30 scenarios for each participant: the same results hold.
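Our estimations were run in a standard statistics package; purely as an illustration, a comparable random-intercept specification of log time spent on round and demographic controls can be sketched in Python with statsmodels as below. The data, column names, and model choice are assumptions for the example and approximate, rather than reproduce, the reported models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical panel: 44 respondents x 30 scenarios of time spent (seconds).
experience = {r: int(rng.integers(0, 21)) for r in range(44)}
df = pd.DataFrame([
    {"respondent": r, "round_no": rnd, "gender": r % 2,
     "years_experience": experience[r],
     "time_spent_s": max(1.0, 25 - 0.5 * rnd + rng.normal(0, 3))}
    for r in range(44) for rnd in range(1, 31)
])
df["log_time"] = np.log(df["time_spent_s"])

# Random-intercept model: log time spent regressed on scenario order and
# respondent-level controls, with random effects grouped by respondent.
model = smf.mixedlm("log_time ~ round_no + gender + years_experience",
                    data=df, groups=df["respondent"]).fit()
print(model.summary())
```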

The rich data structure allows us to link the decreasing effort to both the importance respondents seem to put on the attributes according to the PC data and the importance they reveal in the Ranked data. For the PC data, we regress the respondents’ evaluations of the collaboration opportunity across all scenarios on the degree (the value) of each of the attributes. Based on these individual regressions, we extract whether attributes are significant or not. We then compare the reduction in effort (the decrease in average time spent on the first ten compared to the last ten scenarios) with the significance dummy from the PC data in a t-test. We do not find significantly different reductions in efficiency between the attributes found to be significant and those found to be non-significant in the PC data, for either the attributes (t = 0.262; p = 0.396) or the degrees (t = 0.514; p = 0.303).

For the Ranked data, we take a similar approach. We label the highest ranked attributes as important. To make the PC data and the Ranked data comparisons parallel, we allow the same number of attributes to be marked as important, based on the number of significant attributes that each individual shows in the regression. Again, we find no significantly different reductions in efficiency between the attributes ranked highest and those ranked lowest, either for the attributes (t = -0.190; p = 0.424) or the degrees (t = 0.338; p = 0.367). The results indicate that the decrease in effort, the efficiency gain, does not differ across the attributes identified as more important in the PC data and Ranked data.

While our findings suggest that the respondents’ effort is decreasing, and that the respondents become more efficient, the process seems not to be associated with the attributes identified as most important for the individual respondent.

Selectivity (H1.2)

We also explore how our respondents decide to distribute their attention. Generally, selection occurs when attention is pulled to the attributes that represent the most important pieces of information for the individuals to make their assessment [48]. The layout of the thirty policy-capturing scenarios makes it easy for respondents to detect and focus on the most crucial attributes and the associated degrees. Indeed, the position of the attributes in the scenario does not change over the repetition of the task. We find that participants on average attend to the first half of the attributes graphically displayed on the screen faster than the second half displayed on the lower part of the screen (time to first fixation is on average 23.66 and 31.10, respectively; p<0.001), suggesting that a top-down process applies.

Moreover, because of the structure of the scenarios, it is an easy task for the researchers to identify the location of the most important attributes and degrees. We operationalize selection by coding, for each scenario, which of the attributes and the degrees respondents did not fixate on.
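A sketch of this coding step, starting from a fixation-level table and the list of the 16 attribute AOIs, is shown below; the AOI naming convention and the data are hypothetical.

```python
import pandas as pd

attribute_aois = [f"attribute_{i+1}" for i in range(16)]

# Hypothetical fixation-level data: which AOIs were fixated in each scenario.
fixations = pd.DataFrame({
    "respondent": [1, 1, 1, 1, 2, 2],
    "scenario":   [1, 1, 1, 2, 1, 1],
    "aoi": ["attribute_1", "attribute_3", "degree_3", "attribute_1",
            "attribute_2", "degree_2"],
})

def zero_fixation_count(group):
    """Number of attribute AOIs that received no fixation in a scenario."""
    seen = set(group["aoi"])
    return sum(aoi not in seen for aoi in attribute_aois)

zero_fix = (fixations
            .groupby(["respondent", "scenario"])
            .apply(zero_fixation_count)
            .rename("attributes_with_zero_fixations"))
print(zero_fix)
```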

Fig 2 displays the average number of attributes receiving no fixations per scenario over time. Each bar of Panel A is calculated as the sum of attributes receiving no fixations within a scenario, averaged among all respondents, while Panel B shows the same measure for the degrees. Panel A of Fig 2 describes an increasing pattern of zero-fixations on the attributes, starting at 2 in the first scenario, growing to 6, and then stabilizing at around 7 in the last scenarios; Panel B shows a more constant trend that starts at around 6 zero-fixations and only reaches 8 by the end of the PC exercise. A paired t-test comparing the average zero-fixations of the first ten scenarios with those of the last ten finds a significant difference for attributes (p<0.001), with an additional 2.37 attributes receiving no attention on average. For the degrees, the difference is also significant (p = 0.037), with 0.72 additional degrees not being fixated on, on average.

Fig 2. Zero-fixations attributes and degrees.


Distribution of zero-fixations over the succeeding scenarios, for attributes (Panel A) and degrees (Panel B). The error bars are depicted in dark red.

We add to this evidence two random-effects clustered regressions with the variable round as the explanatory variable and the sum of attributes with no fixation in each scenario as the dependent variable. Additionally, we control for demographic variables: gender, nationality, educational level, and working experience. The round variable is significant for both the attributes (t = 0.122; p < 0.001) and the degrees (t = 0.042; p = 0.044), suggesting that an attention selection process occurs during the policy-capturing exercise. As a contrast to the zero-fixations, we also analyze what is in fact fixated on. A paired t-test comparing the average fixations of the first ten scenarios with the average fixations in the last ten scenarios finds a significant difference for attributes (p<0.0001), with 28.8 fewer fixations on average. For the degrees, the difference is also significant (p = 0.0018), with 3.1 fewer fixations on average.

Another level of analysis concerns whether attributes and degrees are viewed in isolation, meaning that one is viewed while the corresponding counterpart is not. In other words, we calculate the percentage of attributes that are viewed without examining their degree, and the percentage of degrees that are viewed without examining their attribute. On average, 65.3% of the attributes are viewed (based on the fixation measure) without examining their degree, while 34.7% of the degrees are viewed without examining their attribute. When studying these measures over scenario order, we find that, while the average number of attributes seen without checking the respective degree decreases over order, the opposite trend is observed for the degrees viewed in isolation. This suggests that participants tend to focus less and less on reading the attributes over time, and more and more on noticing the level of the degrees.
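One possible operationalization of these isolation measures, shown on hypothetical fixation data and AOI names, is sketched below: for each scenario it computes the share of viewed attributes whose degree was not viewed, and the share of viewed degrees whose attribute was not viewed.

```python
import pandas as pd

n_attributes = 16

# Hypothetical fixation records for one respondent and one scenario.
fixations = pd.DataFrame({
    "respondent": [1, 1, 1, 1],
    "scenario":   [1, 1, 1, 1],
    "aoi": ["attribute_1", "attribute_3", "degree_2", "degree_3"],
})

def isolation_shares(group):
    seen = set(group["aoi"])
    attrs = [i for i in range(1, n_attributes + 1) if f"attribute_{i}" in seen]
    degs = [i for i in range(1, n_attributes + 1) if f"degree_{i}" in seen]
    attr_isolated = sum(f"degree_{i}" not in seen for i in attrs)
    deg_isolated = sum(f"attribute_{i}" not in seen for i in degs)
    return pd.Series({
        "attribute_without_degree": attr_isolated / len(attrs) if attrs else 0.0,
        "degree_without_attribute": deg_isolated / len(degs) if degs else 0.0,
    })

print(fixations.groupby(["respondent", "scenario"]).apply(isolation_shares))
```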

We again exploit the richness of our available data by pulling together the eye-tracking, PC, and Ranked data. We test the average number of zero-fixation attributes over attributes being significant or not in the individual regressions on the PC data, and over attributes being ranked as the most important or not in the Ranked data. For attributes, we find a significantly lower proportion of zero fixations on the attributes found to be significant in the PC data (t = 1.803; p = 0.036) and on the attributes found to be the highest ranked (t = 2.791; p = 0.002). For degrees, the association is even more pronounced (PC data: t = 2.125; p = 0.017; Ranked data: t = 2.660; p = 0.004). Our results suggest that a clear selection occurs. We find that respondents become increasingly selective with repetition of the PC experiment, and that the attributes receiving attention are associated with the attributes identified as the most important to the respondents.

To also provide evidence on what participants in fact attend to, and not only on what they do not attend to, we performed a parallel analysis of actual fixations on attributes and degrees (each attribute and the corresponding degree is treated as one unit) for each scenario. We find that the number of attributes and degrees receiving fixations is significantly lower for the last ten scenarios compared to the first ten (t-test: t = 5.34; p<0.0001). Furthermore, the middle ten scenarios are also significantly different from the first ten scenarios (t = 3.45; p = 0.0003), but not from the last ten scenarios (t = 0.83; p = 0.2022), suggesting that the selection process mainly takes place at the beginning. We repeated the analysis at the individual level by comparing the individual number of attributes and degrees fixated on in the first ten scenarios with the same person’s number in the last ten scenarios. At the individual level, we confirm that the number of attributes and degrees fixated on decreases significantly (paired t-test, t = 3.55; p = 0.0005). Together these results underline that participants go through a process of selecting what items to attend to over the course of the study.

Consistency (H1.3)

We operationalize consistency by obtaining a measure of violations, defined as the number of times respondents' actual decision basis departs from what they state their decisions are based on [49]. We calculate violations by matching the PC data with the Ranked data. More specifically, a violation occurs when an attribute found to be significant in the PC assessment regression is given a low rating in the ranking task. To quantify consistency, the significance level of each attribute is tested against its final ranking.
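
A minimal sketch of how such violations could be counted is shown below, in Python with pandas. The input format, the p < 0.05 significance cut-off, and the "bottom half of the ranking" rule are illustrative assumptions; the paper's exact operationalization may differ.

```python
import pandas as pd

# Hypothetical per-respondent, per-attribute data: 'pc_pvalue' from the
# individual PC regression and 'rank' from the ranking task
# (1 = most important, 16 = least important).
attrs = pd.read_csv("attribute_importance.csv")

# Illustrative rule: an attribute that is significant in the PC regression
# but placed in the bottom half of the stated ranking counts as a violation.
significant = attrs["pc_pvalue"] < 0.05
low_ranked = attrs["rank"] > 8
attrs["violation"] = significant & low_ranked

violations_per_respondent = attrs.groupby("respondent_id")["violation"].sum()
print(violations_per_respondent.describe())
```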

We find that the average difference between the importance derived from the significance levels in the policy-capturing data and the importance from the Ranked data is not different from zero (p = 1.000), and in a Wilcoxon signed rank test we do not find a significant difference between the two rankings (z = 0.035; p = 0.971), suggesting that the two preference measures are generally aligned. We do want to stress that substantial variation exists. The relationship between the two measures is also illustrated in Fig 3. For each of the sixteen positions in the Ranked data, we plot the average of the associated ranking derived from the PC data. We observe that the two importance measures tend to follow each other, particularly for the attributes rated as most important.
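
For illustration, the following sketch shows how the two rankings could be compared with a Wilcoxon signed-rank test in Python with SciPy; the file and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical aggregated rankings over the 16 attributes: one row per
# attribute with 'pc_rank' (derived from PC coefficient significance) and
# 'stated_rank' (from the ranking task).
ranks = pd.read_csv("attribute_rankings.csv")

# Paired non-parametric test of whether the two rankings differ systematically.
stat, p = stats.wilcoxon(ranks["pc_rank"], ranks["stated_rank"])
print(f"Wilcoxon signed-rank statistic = {stat:.3f}, p = {p:.3f}")
```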

Fig 3. Policy-capturing vs. ranked data.

The relationship between Policy-capturing data and Ranked data.

We also calculate the magnitude of violations between the two rankings [49]. In an ANOVA test, we find that the absolute magnitude of violations is not significantly different across the rankings (F = 1.510; p = 0.094).

Our data also allow us to compare consistency between the PC data and the Ranked data with the eye-tracked data (attributes and degrees are ranked according to the average time spent on them). We find that the ranking resulting from the PC data does not deviate significantly from the ranking resulting from the eye-tracked data (attributes: t = -0.085, p = 0.932; degrees: t = 0.103, p = 0.917). Furthermore, the ranking in the Ranked data does not deviate significantly from the ranking resulting from the eye-tracked data (attributes: t = 0.151, p = 0.880; degrees: t = -0.023, p = 0.981). Our results thus suggest that respondents show a great degree of consistency across the PC, Ranked, and eye-tracked data when ranking what is important to them in the collaboration opportunities.

Experience (H2)

Experience is expected to play a role in assessing the collaboration opportunities. We study experience (years of professional experience) by splitting the findings above across our subsamples of students and professionals. Overall, students spend 23.98 seconds looking at the average scenario, while professionals spend 22.88 seconds; the difference is not statistically significant (p = 0.218). Students and professionals focus their attention on different AOIs: professionals allocate more attention to the attribute degrees than students (318 vs. 283 milliseconds per degree; p = 0.069), whereas students spend more time reading the attribute texts (896 ms for professionals vs. 1027 ms for students per attribute; p = 0.048). The observed difference might result from working experience making it faster for professionals to understand the attributes.

Experience and efficiency

Efficiency, the decrease in effort over time, is illustrated for the two subsamples in Fig 4. The graph shows that the attention paid to the attributes drops sharply from the first to the last scenario, particularly for professionals. For the degrees, the efficiency is more stable and similar for the two samples. For reference, the corresponding aggregate patterns are shown in Fig 1.

Fig 4. Policy-capturing vs. ranked data by sample.

The relationship between Policy-capturing data and Ranked data by sample group.

We find that both types of respondents show a significant decrease in time spent on attributes between the first ten scenarios and the last ten scenarios (p(professionals) < 0.002; p(students) < 0.001). For degrees, the decrease in time spent is only significant for students (p(professionals) < 0.125; p(students) < 0.016).

As a robustness check, we re-run the random effects regressions with the logarithm of time spent as the dependent variable, for the attributes and the degrees respectively (Table 3).

Table 3. Regression table H2.

Random effects regression models for both attributes and degrees testing hypothesis 2 (p-values within brackets).

Hypothesis 2—efficiency Hypothesis 2—selectivity
for attributes for degrees for attributes for degrees
Rounds -0.064*** -0.050*** -0.050*** -0.022*** -0.004 -0.004 0.122*** 0.107* 0.107* 0.042* -0.002 -0.002
(0.000) (0.000) (0.000) (0.000) (0.562) (0.562) (0.000) (0.017) (0.017) (0.030) (0.951) (0.951)
Student dummy 0.362 0.255 0.490 0.505 -0.317 1.143 -1.513 0.812
(0.577) (0.841) (0.441) (0.680) (0.774) (0.510) (0.227) (0.648)
Student X Rounds -0.023* -0.023* -0.031*** -0.031*** 0.026 0.026 0.075 0.075
(0.029) (0.029) (0.000) (0.000) (0.607) (0.608) (0.063) (0.064)
Gender -1.093 -1.441* 1.551 1.559
(0.146) (0.048) (0.248) (0.245)
National 0.627 0.783 -1.216 -1.475
(0.374) (0.250) (0.345) (0.220)
Education 0.042 0.030 -0.027 -0.060
(0.697) (0.772) (0.874) (0.721)
Work experience 0.241 0.211 -0.623 -0.684
(0.448) (0.494) (0.293) (0.227)
Constant 9.499*** 9.285*** 9.918** 7.800*** 7.510*** 8.342* 3.804*** 3.991*** 2.023 6.443*** 7.337*** 3.064
(0.000) (0.000) (0.004) (0.000) (0.000) (0.012) (0.000) (0.000) (0.736) (0.000) (0.000) (0.623)
R-squared 0.044 0.045 0.089 0.006 0.010 0.091 0.047 0.048 0.102 0.006 0.013 0.101
N 1320 1320 1320 1320 1320 1320 1320 1320 1320 1320 1320 1320

* p<0.05,

** p<0.01,

*** p<0.001

Dependent variable for the Hypothesis 2—efficiency models: log(time spent per scenario).

Dependent variable for the Hypothesis 2—selectivity models: sum of attributes with no fixation per scenario.

The explanatory variable round is included (ranging from 1 to 30 according to the chronological order in which the scenarios were shown to each respondent), together with the student dummy and the interaction between the two (student X round). The interaction term is significant for both attributes and degrees. The outcomes do not change when controlling for demographics (gender, nationality, education, work experience). The results suggest that students develop efficiency for the attributes faster, and that students, unlike professionals, also show an increase in efficiency in their attention to the degrees.
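
To make the interaction specification concrete, here is a minimal sketch in Python with statsmodels; the data file, column names, and coding of the student dummy are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical scenario-level data: 'log_time' is the log of time spent on a
# scenario's attribute texts (or degrees), 'round' is the scenario order
# (1-30), and 'student' is 1 for students and 0 for professionals.
df = pd.read_csv("time_by_scenario.csv")

# Random-intercept model with the student dummy, the round effect, and their
# interaction, in the spirit of the efficiency columns of Table 3.
model = smf.mixedlm(
    "log_time ~ round * student + gender + nationality + education + work_exp",
    data=df,
    groups=df["respondent_id"],
).fit()
print(model.summary())
```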

An additional interesting feature to test in the context of efficiency is the length of the wording of each attribute description. We therefore created a variable indicating the number of words with which each attribute is described (e.g., 14 words for the first attribute, 18 for the second, and so on). To investigate the role of scenario order and attribute length simultaneously, we ran an additional random effects regression, which replicates and extends Table 2, in S1 Table. S1 Table shows that respondents spend systematically less time on the lengthy items over time. When splitting the sample into industry scientists and students, we notice that the effect is driven by students.
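
The construction of the word-count variable and its inclusion alongside scenario order could look roughly as follows; the file names, column names, and exact specification are assumptions for illustration rather than the regression reported in S1 Table.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical attribute texts: one row per attribute with its description.
attributes = pd.read_csv("attribute_texts.csv")
attributes["n_words"] = attributes["description"].str.split().str.len()

# Merge the word count into scenario x attribute dwell-time data and include
# it, together with scenario order and their interaction, in a
# random-intercept model.
dwell = pd.read_csv("time_by_attribute.csv").merge(attributes, on="attribute")
model = smf.mixedlm(
    "time_spent ~ round * n_words",
    data=dwell,
    groups=dwell["respondent_id"],
).fit()
print(model.summary())
```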

Finally, we bring the eye-tracked data together with the PC data. When testing the difference in time spent on attributes between the first ten scenarios and the last ten, no significant association is found with the significant attributes from the PC data (p = 0.147 for professionals; p = 0.265 for students). The same result applies to the comparison with the Ranked data (t = 1.172, p = 0.121 for professionals; t = -1.370, p = 0.085 for students). For the degrees, we find a significant difference in attention for the higher ranked attributes for both students (t = 1.764; p = 0.041) and professionals (t = -1.736; p = 0.039), although the direction of the effect is negative for professionals (i.e., in later scenarios their attention to the degrees belonging to highly ranked attributes decreases).

Experience and selectivity

An additional consequence of experience is that it boosts the encoding and retrieval of information [71]. We are therefore interested in checking whether professionals distribute their fixations across areas differently than students. We replicate the analysis of Hypothesis H1.2 with the sample split between professionals and students. We find that both samples show a significant increase in attributes not fixated on from the first ten to the last ten scenarios, although the increase is larger for students (professionals: difference 2.04, p = 0.014; students: difference 2.60, p < 0.001). For degrees, professionals show a small, non-significant reduction in non-fixated degrees, whereas students show a significant increase in zero-fixations (professionals: difference -0.22, p = 0.379; students: difference 1.38, p = 0.001).

We replicate the random effects regression of Hypothesis H1.2, but this time we include the student dummy and the interaction term (student X round), in addition to the explanatory variable round and the controls. See Table 3. Both models, with and without controls, yield a non-significant interaction effect for both attributes and degrees, suggesting that students and professionals undergo a similar selection process. We also detect an association between the attributes they evaluate as important in the PC exercise and a lower number of zero-fixation attributes, which is only significant for students (attributes: t = 2.108, p = 0.017; degrees: t = 2.943, p = 0.001).

Experience and consistency

Lastly, we look at the difference in consistency between the two sample groups by replicating the same tests on the difference between PC data and Ranked data and on violations. Fig 4 plots the relationship between the ranking provided in the Ranked data exercise and the ranking resulting from the regression of the policy-capturing evaluations. It is split by sample group and shows that no clear difference in the pattern of this consistency measure emerges between professionals and students.

Surprisingly, we find no significant differences in the Wilcoxon signed rank test between the Ranked data and the PC data for either sample group (professionals: p = 0.851; students: p = 0.917), and no significant difference in the distribution of violations across attributes in the ANOVA test (professionals: F = 1.02, p = 0.438; students: F = 1.13, p = 0.322). The same result applies when we test the attention evidence: no significant difference for attributes in the PC data ranking vs. the eye-tracked ranking (professionals: z = 0.022, p = 0.982; students: z = -0.133, p = 0.894) or in the Ranked data vs. the eye-tracked ranking (professionals: z = 0.136, p = 0.892; students: z = 0.078, p = 0.938); and no significant difference for degrees in the PC data ranking vs. the eye-tracked ranking (professionals: z = 0.191, p = 0.848; students: z = -0.043, p = 0.966) or in the Ranked data vs. the eye-tracked ranking (professionals: z = 0.082, p = 0.935; students: z = -0.130, p = 0.896). Both students and professionals seem to be consistent in their rankings of what is important to them in the collaboration opportunities.

Another internal measure of consistency is the extent to which each respondent's choices are consistent with their judgments, as captured by the R-squared of their individual regressions. To address this, we average the individual R-squared over the first half of the scenarios and over the second half to see how much it varies. The R-squared does not show an increasing trend. Moreover, when testing the R-squared values for the first ten scenarios against the last ten scenarios on the overall sample, no significant effect is detected (p = 0.507). When splitting the sample by students and industry scientists, no significant effect is found either, which suggests that experience does not seem to affect the levels of R-squared of the individual regressions. This result suggests that individual consistency does not develop over the time course of the experiment but remains stable throughout.
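
A rough sketch of this per-respondent R-squared comparison is given below, in Python with statsmodels and SciPy. The file and column names are hypothetical, and the reduced predictor set (deg1..deg5) is an illustrative assumption to keep the per-half regressions identified; the paper's exact specification may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical scenario-level evaluations: one row per respondent x scenario,
# with the attractiveness 'rating', the scenario order 'round', and attribute
# degree columns. A reduced predictor set (deg1..deg5) is used here only to
# keep the per-half regressions identified with 15 observations each.
df = pd.read_csv("pc_evaluations.csv")
formula = "rating ~ " + " + ".join(f"deg{i}" for i in range(1, 6))

def half_r2(sub: pd.DataFrame) -> float:
    """R-squared of the individual PC regression on a subset of scenarios."""
    return smf.ols(formula, data=sub).fit().rsquared

r2_first, r2_second = [], []
for _, person in df.groupby("respondent_id"):
    r2_first.append(half_r2(person[person["round"] <= 15]))
    r2_second.append(half_r2(person[person["round"] > 15]))

# Within-subject comparison of consistency between the two halves.
print(stats.ttest_rel(r2_first, r2_second))
```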

A final step of our analysis is to run two regressions with the time spent jointly on the attribute and the corresponding degree as the dependent variable. The regressions are listed in Table 4. In the first regression, model 1, we use attribute length, stated disliking (the ranking of attributes from most liked (= 1) to least liked (= 16)), and scenario order as explanatory variables. In the second regression, model 2, we furthermore add two-way interaction effects between the explanatory variables and a three-way interaction between all of them. Both regressions control for individual dummies and attribute dummies. All explanatory variables are normalized. The first regression shows that more time is generally spent on longer attributes (everything else kept equal), that less time is spent on less liked attributes, and that time spent decreases over the course of the policy-capturing experiment. In addition, the second regression shows that the extra time spent on lengthier attributes fades out over the repetition of the scenarios, suggesting that over time less time is spent reading the actual content of the attributes and more is spent observing the degrees at which the different attributes are scaled.

Table 4. Triple interaction regression table.

Regression models of the relative impact of attribute length, individual ranking, and scenarios order on the time spent on attributes and corresponding degrees (p-values within parentheses). The three explanatory variables are normalized.

Model 1 Model 2
Attribute length 105.137** 106.130**
(0.009) (0.009)
Stated disliking (Ranked Data) -69.628*** -69.710***
(0.000) (0.000)
Scenario order -387.903*** -388.844***
(0.000) (0.000)
Attribute length X Stated disliking (Ranked Data) -19.554
(0.130)
Attribute length X Order -42.331***
(0.000)
Stated disliking (Ranked Data) X Scenario Order 4.773
(0.693)
Triple interaction 23.511
(0.057)
Individual dummies Yes Yes
Attribute dummies Yes Yes
Constant 1348.483*** 1349.410***
(0.000) (0.000)
R-squared 0.26 0.26
N 21120 21120
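
As an illustration of the specification behind Table 4, the sketch below fits model 2 with normalized predictors, respondent dummies, and attribute dummies; the data file and column names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per respondent x scenario x attribute, with the
# time spent jointly on the attribute text and its degree ('dwell_ms'), the
# attribute word count ('length'), the stated disliking from the ranking task
# ('disliking', 1 = most liked .. 16 = least liked), and the scenario 'order'.
df = pd.read_csv("dwell_by_attribute.csv")

# Normalize the three explanatory variables, as in Table 4.
for col in ["length", "disliking", "order"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

# Model 2: full three-way interaction plus respondent and attribute dummies.
model = smf.ols(
    "dwell_ms ~ length_z * disliking_z * order_z"
    " + C(respondent_id) + C(attribute)",
    data=df,
).fit()
print(model.summary())
```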

Discussion & conclusion

In this study, we seek to investigate the cognitive processes behind the policy-capturing technique, as measured by eye-tracking. Our objective is to study the information acquisition and cognitive processes that respondents undergo as they review a series of scenarios while participating in a PC experiment. We do so by analyzing what characterizes the cognitive mechanisms that arise as our respondents assess the attractiveness of 30 randomly ordered scenarios.

Firstly, we find that respondents become more efficient with practice in the PC experiment. The effort required to acquire information about the collaboration scenarios decreases during the PC exercise.

Secondly, we find that respondents are selective in their information acquisition, as the amount of non-fixated information increases over the course of the experiment. Moreover, we find that they direct attention towards the attributes that appear to be more important to them. A clear selection pattern among the important attributes that characterize the scenarios therefore emerges over time.

Thirdly, we find a consistent link between what respondents evaluate as important attributes in the PC and what they state is important for their evaluation.

Fourthly, we find that the cognitive processing involved in answering the PC experiment is surprisingly similar among students and professionals. While we detect, for instance, that students are quicker to reduce their cognitive effort, we do not observe systematic differences in the amount of time allocated, the pattern of attention, or the selective information acquisition process between the split samples. Some tendencies point to small differences among subjects but, overall, our findings suggest that students may be a good proxy for more experienced decision makers.

Our results confirm that respondents in policy-capturing studies develop mental shortcuts for handling the massive amount of information intentionally provided. As such, our findings support a common assertion of many policy-capturing studies: that participants develop policies as they review a series of scenarios while participating in a PC experiment, and that these policies influence their information processing and the judgements they make. Our findings show little difference between experienced and less experienced respondents. It is tempting to conclude immediately that the convenience sample, students, can just as well be used as respondents in a future PC study, but one central point to note is that we find similar processes and consistency among students and professionals, which is not the same as them giving the same evaluations and placing the same emphasis on certain pieces of information. The outcome of experienced respondents' evaluations might still be different. All we conclude is that the process and the consistency are similar across students and professionals.

Policy-capturing experiments share many similarities with studies in choice modelling, and our study draws on insights from research at the intersection of choice modelling and eye-tracking when building its hypotheses [48, 49]. It is interesting to note that, despite the similarities with policy-capturing experiments, our study produces somewhat different outcomes. Where we find, through attention, that our participants develop cognitive models to make their decisions, evidence combining discrete-choice experiments and eye-tracking does not document similar cognitive models being developed, but rather that participants follow systematic search patterns in their information acquisition [52, 116]. A potential reason for this difference could be that policy-capturing studies include more extensive amounts of information in each scenario compared to choice modelling studies, resulting in a larger need to develop cognitive models to cope with the situation. Another noteworthy difference relates to areas or attributes of a choice situation that are not attended to: attribute non-attendance (ANA). Whereas the eye-tracking discrete-choice experiments find mixed results about how visual ANA, stated ANA, and inferred ANA relate [83, 117], we find that our participants are generally able to state and attend to the attributes they put weight on in their policy assessments. Despite the differences that do exist across policy-capturing and discrete choice experiments, we believe that not only does our study benefit from bridging evidence from eye-tracking discrete-choice modelling research, but that such an approach should serve as a general inspiration for future studies to integrate insights from related disciplines.

Our findings can be viewed as an application of the eye-mind hypothesis. Our participants tend to attend more to the elements that are both revealed to be important by their answers to the policy-capturing exercise and rated by themselves as important, suggesting that the eye-mind hypothesis also fits our applied setting.

Finally, a fundamental methodological contribution of our study is to showcase the potential gains of combining an existing empirical methodology with attention evidence. Such a combination allows researchers to go beyond the assumption of a particular cognitive process and to explicitly map the process instead. Methods such as vignette studies are often built on more or less explicit cognitive assumptions. We show that a promising way to conduct future research is to directly investigate the information acquisition and cognitive processes of such instruments through attention measures.

Our study is the first to combine eye-tracking and policy-capturing. Although it provides novel insights on the attention process during a policy-capturing study, a number of limitations naturally apply. First, following the standard choice design in policy-capturing methods, we keep the order of the displayed items fixed. It would be interesting to randomly vary the order to identify causal effects on micro-processes such as center bias and top-down processing. Second, we fix the amount of information in each scenario, following the standard method specification. Randomly varying the amount of information would make it possible to determine whether using more information indeed results in participants creating heuristics, as assumed in policy-capturing studies.

We hope future studies will build on our initial finding that participants do build cognitive shortcuts to complete the policy-capturing exercise, and will continue to explore how and when those shortcuts develop.

Supporting information

S1 Fig. Scenario layout.

An example of the structure of one of the 30 scenarios and the Areas of Interest distribution.

(TIF)

S1 Table. Student-driven effect.

Random Effects regression showing respondents’ systematic decrease in attention on the lengthy items over time.

(TIF)

Data Availability

The data have been anonymized and made available on Figshare at https://doi.org/10.6084/m9.figshare.19753417.v1.

Funding Statement

This work was supported by the Novo Nordisk Foundation [Grant #21630] - “Investigating the micro foundations of socioeconomic impact of university-industry relations”, but the foundation was not involved in the study design, the data collection/analysis or the writing of the manuscript.

References

  • 1.Barr SH, Hitt MA. A comparison of selection decision models in manager versus student samples. Pers Psychol. 1986;39: 599–617. doi: 10.1111/j.1744-6570.1986.tb00955.x [DOI] [Google Scholar]
  • 2.Hitt MA, Tyler BB. Strategic decision models: Integrating different perspectives. Strateg Manag J. 1991;12: 327–351. [Google Scholar]
  • 3.Karren RJ, Barringer MW. A Review and Analysis of the Policy-Capturing Methodology in Organizational Research: Guidelines for Research and Practice. Organ Res Methods. 2002;5: 337–361. doi: 10.1177/109442802237115 [DOI] [Google Scholar]
  • 4.Zedeck S. An information processing model and approach to the study of motivation. Organ Behav Hum Perform. 1977;18: 47–77. doi: 10.1016/0030-5073(77)90018-6 [DOI] [PubMed] [Google Scholar]
  • 5.Slovic P, Lichtenstein S. Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organ Behav Hum Perform. 1971;6: 649–744. doi: 10.1016/0030-5073(71)90033-X [DOI] [Google Scholar]
  • 6.Hitt MA, Middlemist RD. A Methodology to Develop the Criteria and Criteria Weightings for Assessing Subunit Effectiveness in Organizations. Acad Manag J. 1979;22: 356–374. doi: 10.5465/255595 [DOI] [Google Scholar]
  • 7.Miller GA. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev. 1956;63: 81–97. doi: 10.1037/h0043158 [DOI] [PubMed] [Google Scholar]
  • 8.Aguinis H, Pierce CA, Bosco FA, Muslin IS. First decade of organizational research methods: Trends in design, measurement, and data-analysis topics. Organ Res Methods. 2009;12: 69–112. doi: 10.1177/1094428108322641 [DOI] [Google Scholar]
  • 9.Aiman-Smith L, Scullen SE, Barr SH. Conducting Studies of Decision Making in Organizational Contexts: A Tutorial for Policy-Capturing and Other Regression-Based Techniques. Organ Res Methods. 2002;5: 388–414. doi: 10.1177/109442802237117 [DOI] [Google Scholar]
  • 10.Cable DM, Judge TA. Pay preferences and job search decisions: a person-organization fit perspective. Pers Psychol. 1994;47: 317–348. doi: 10.1111/j.1744-6570.1994.tb01727.x [DOI] [Google Scholar]
  • 11.Rynes SL, Schwab DP, Heneman HG. The role of pay and market pay variability in job application decisions. Organ Behav Hum Perform. 1983;31: 353–364. doi: 10.1016/0030-5073(83)90130-7 [DOI] [Google Scholar]
  • 12.Sherer PD, Schwab DP, Heneman HG. Managerial salary-raise decisions: A policy-capturing approach. Pers Psychol. 1987. doi: 10.1111/j.1744-6570.1987.tb02375.x [DOI] [Google Scholar]
  • 13.Klaas BS, Wheeler HN. Managerial decision making about employee discipline: a policy-capturing approach. Pers Psychol. 1990;43: 117–134. doi: 10.1111/j.1744-6570.1990.tb02009.x [DOI] [Google Scholar]
  • 14.Sanchez JI, Levine EL. The rise and fall of job analysis and the future of work analysis. Annual Review of Psychology. 2012. doi: 10.1146/annurev-psych-120710-100401 [DOI] [PubMed] [Google Scholar]
  • 15.York KM. Defining Sexual Harassment in Workplaces: A Policy-Capturing Approach. Acad Manag J. 1989;32: 839–850. doi: 10.5465/256570 [DOI] [Google Scholar]
  • 16.Dougherty TW, Ebert RJ, Callender JC. Policy Capturing in the Employment Interview. J Appl Psychol. 1986;71: 9. doi: 10.1037/0021-9010.71.1.9 [DOI] [Google Scholar]
  • 17.Graves LM, Karren RJ. Interviewer decision processes and effectiveness: an experimental policy‐capturing investigation. Pers Psychol. 1992;45: 313–340. doi: 10.1111/j.1744-6570.1992.tb00852.x [DOI] [Google Scholar]
  • 18.Olson CA, Dell’Omo GG, Jarley P. A Comparison of Interest Arbitrator Decision-Making in Experimental and Field Settings. Ind Labor Relations Rev. 1992;45: 711–723. doi: 10.2307/2524588 [DOI] [Google Scholar]
  • 19.Stumpf SA, London M. Management Promotions: Individual and Organizational Factors Influencing the Decision Process. Acad Manag Rev. 1981;24: 752–766. doi: 10.2307/257631 [DOI] [Google Scholar]
  • 20.Mulligan EJ, Hastie R. Explanations determine the impact of information on financial investment judgments. J Behav Decis Mak. 2005;18: 145–156. doi: 10.1002/bdm.491 [DOI] [Google Scholar]
  • 21.Bonaccio S, Dalal RS. Evaluating advisors: A policy-capturing study under conditions of complete and missing information. J Behav Decis Mak. 2010;23: 227–249. doi: 10.1002/bdm.649 [DOI] [Google Scholar]
  • 22.Raymark PH, Balzer WK, Doherty ME, Warren K, Meeske J, Tape TG, et al. Advance Directives: A Policy-capturing Approach. Med Decis Mak. 1995;15: 217–226. doi: 10.1177/0272989X9501500304 [DOI] [PubMed] [Google Scholar]
  • 23.Cline RR, Gupta K. Drug benefit decisions among older adults: A policy-capturing analysis. Med Decis Mak. 2006. doi: 10.1177/0272989X06288682 [DOI] [PubMed] [Google Scholar]
  • 24.Wryobeck JM, Rosenberg H. The association of client characteristics and acceptance of harm reduction: A policy-capturing study of psychologists. Addict Res Theory. 2005;13: 461–476. doi: 10.1080/16066350500168410 [DOI] [Google Scholar]
  • 25.Brehmer A, Brehmer B. What Have we Learned about Human Judgment from Thirty Years of Policy Capturing? Adv Psychol. 1988;54: 75–114. doi: 10.1016/S0166-4115(08)62171-8 [DOI] [Google Scholar]
  • 26.Wilson MG, Parker P. The Gap Between Immigration And Employment: A Policy-Capturing Analysis of Ethnicity-Driven Selection Biases. New Zeal J Employ Relations. 2007;32. [Google Scholar]
  • 27.Eisenbruch A, Roney J. Social Taste Buds: Evidence of Evolved Same-Sex Friend Preferences from a Policy-Capturing Study. Evol Psychol Sci. 2020; 1–12. doi: 10.1007/s40806-019-00218-9 [DOI] [Google Scholar]
  • 28.Hitt MA, Barr SH. Managerial Selection Decision Models: Examination of Configural Cue Processing. J Appl Psychol. 1989. doi: 10.1037/0021-9010.74.1.53 [DOI] [Google Scholar]
  • 29.Tyler BB, Kevin Steensma H. Evaluating technological collaborative opportunities: A cognitive modeling perspective. Strateg Manag J. 1995. doi: 10.1002/smj.4250160917 [DOI] [Google Scholar]
  • 30.Hitt MA, Ahlstrom D, Dacin MT, Levitas E, Svobodina L. The institutional effects on strategic alliance partner selection in transition economies: China vs. Russia. Organization Science. 2004. doi: 10.1287/orsc.1030.0045 [DOI] [Google Scholar]
  • 31.Hitt MA, Dacin MT, Levitas E, Arregle JL, Borza A. Partner selection in emerging and developed market contexts: Resource-based and organizational learning perspectives. Acad Manag J. 2000. doi: 10.2307/1556404 [DOI] [Google Scholar]
  • 32.Reuer JJ, Tong TW, Tyler BB, Ariño A. Executive preferences for governance modes and exchange partners: An information economics perspective. Strategic Management Journal. 2013. doi: 10.1002/smj.2064 [DOI] [Google Scholar]
  • 33.Calori R, Johnson G, Sarnin P. Ceos’ cognitive maps and the scope of the organization. Strateg Manag J. 1994;15: 437–457. doi: 10.1002/smj.4250150603 [DOI] [Google Scholar]
  • 34.Simon HA. Theories of bounded rationality. Decis Organ. 1972. [Google Scholar]
  • 35.Tversky A, Kahneman D. Judgement under uncertainty. Science (80-). 1974;185: 1124–1131. doi: 10.1126/science.185.4157.1124 [DOI] [PubMed] [Google Scholar]
  • 36.Taylor RN. Psychological determinants of bounded rationality: implications for decision‐making strategies. Decis Sci. 1975;6: 409–429. doi: 10.1111/j.1540-5915.1975.tb01031.x [DOI] [Google Scholar]
  • 37.Kahneman D. Mapping bounded rationality. Am Psychol. 2003;58: 697–720. [DOI] [PubMed] [Google Scholar]
  • 38.Dearborn DC, Simon HA. Selective Perception: A Note on the Departmental Identifications of Executives. Sociometry. 1958;21: 140. doi: 10.2307/2785898 [DOI] [Google Scholar]
  • 39.Gilovich T, Griffin D, Kahneman D. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge university press. Cambridge: Cambridge University Press; 2002. doi: 10.2307/20159081 [DOI] [Google Scholar]
  • 40.Yang L, Toubia O, De Jong MG. A bounded rationality model of information search and choice in preference measurement. J Mark Res. 2015;52. doi: 10.1509/jmr.13.0288 [DOI] [Google Scholar]
  • 41.Schwenk C. Information, Cognitive Biases, and Commitment to a Course of Action. Acad Manag Rev. 1986;11: 298–310. doi: 10.5465/amr.1986.4283106 [DOI] [Google Scholar]
  • 42.Schwenk C. Cognitive simplification processes in strategic decision‐making. Strateg Manag J. 1984;5: 111–128. doi: 10.1002/smj.4250050203 [DOI] [Google Scholar]
  • 43.Ocasio W. Towards an attention-based view of the firm. Strateg Manag J. 1997;18: 187–206. doi: [DOI] [Google Scholar]
  • 44.Ocasio W. Attention to Attention. Organ Sci. 2011. doi: 10.1287/orsc.1100.0602 [DOI] [Google Scholar]
  • 45.Henrich J, Heine SJ, Norenzayan A. The weirdest people in the world? Behav Brain Sci. 2010;33: 61–83. doi: 10.1017/S0140525X0999152X [DOI] [PubMed] [Google Scholar]
  • 46.Frechette GR. Laboratory Experiments: Professionals Versus Students. SSRN. 2011. doi: 10.2139/ssrn.1939219 [DOI] [Google Scholar]
  • 47.Fosgaard TR. Students Cheat more: Comparing Dishonesty of a Student and a Representative Sample in the Laboratory. Scand J Econ. 2020;122: 257–279. doi: 10.1111/sjoe.12326 [DOI] [Google Scholar]
  • 48.Meißner M, Musalem A, Huber J. Eye tracking reveals processes that enable conjoint choices to become increasingly efficient with practice. J Mark Res. 2016;53: 1–17. doi: 10.1509/jmr.13.0467 [DOI] [Google Scholar]
  • 49.Hoeffler S, Ariely D. Constructing stable preferences: A look into dimensions of experience and their impact on preference stability. J Consum Psychol. 1999;8: 113–139. doi: 10.1207/s15327663jcp0802_01 [DOI] [Google Scholar]
  • 50.Aguinis H, Gottfredson RK, Joo H. Best-Practice Recommendations for Defining, Identifying, and Handling Outliers. Organ Res Methods. 2014;17: 351–371. doi: 10.1177/1094428112470848 [DOI] [Google Scholar]
  • 51.Ryan M, Krucien N, Hermens F. The eyes have it: Using eye tracking to inform information processing strategies in multi-attributes choices. Health Econ. 2018;27: 709–721. doi: 10.1002/hec.3626 [DOI] [PubMed] [Google Scholar]
  • 52.Balcombe K, Fraser I, Williams L, McSorley E. Examining the relationship between visual attention and stated preferences: A discrete choice experiment using eye-tracking. J Econ Behav Organ. 2017;144: 238–257. doi: 10.1016/j.jebo.2017.09.023 [DOI] [Google Scholar]
  • 53.Krucien N, Ryan M, Hermens F. Visual attention in multi-attributes choices: What can eye-tracking tell us? J Econ Behav Organ. 2017;135: 251–267. doi: 10.1016/j.jebo.2017.01.018 [DOI] [Google Scholar]
  • 54.Spinks J, Mortimer D. Lost in the crowd? Using eye-tracking to investigate the effect of complexity on attribute non-attendance in discrete choice experiments. BMC Med Inform Decis Mak. 2016;16: 14. doi: 10.1186/s12911-016-0251-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Balcombe K, Fraser I, Mcsorley E. Visual Attention and Attribute Attendance in Multi-Attribute Choice Experiments. J Appl Econom. 2015;30: 447–467. doi: 10.1002/jae.2383 [DOI] [Google Scholar]
  • 56.Uggeldahl K, Jacobsen C, Lundhede TH, Olsen SB. Choice certainty in Discrete Choice Experiments: Will eye tracking provide useful measures? J Choice Model. 2016;20: 35–48. [Google Scholar]
  • 57.Konovalov A, Krajbich I. Over a Decade of Neuroeconomics: What Have We Learned? Organ Res Methods. 2019;22: 148–173. doi: 10.1177/1094428116644502 [DOI] [Google Scholar]
  • 58.Fiedler S, Glöckner A, Nicklisch A, Dickert S. Social Value Orientation and information search in social dilemmas: An eye-tracking analysis. Organ Behav Hum Decis Process. 2013;120: 272–284. [Google Scholar]
  • 59.Glöckner A, Herbold AK. An eye-tracking study on information processing in risky decisions: Evidence for compensatory strategies based on automatic processes. J Behav Decis Mak. 2011;24: 71–98. doi: 10.1002/bdm.684 [DOI] [Google Scholar]
  • 60.Jenke L, Bansak K, Hainmueller J, Hangartner D. Using Eye-Tracking to Understand Decision-Making in Conjoint Experiments. Polit Anal. 2021;29. doi: 10.1017/pan.2020.11 [DOI] [Google Scholar]
  • 61.Krejtz K, Duchowski AT, Niedzielska A, Biele C, Krejtz I. Eye tracking cognitive load using pupil diameter and microsaccades with fixed gaze. PLoS One. 2018;13. doi: 10.1371/journal.pone.0203629 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Pizzo A, Tyler B, Fosgaard T, Beukel K. Bridging the divide between industry and academic scientists. [Google Scholar]
  • 63.Styles EA. The psychology of attention. Psychol Atten Second Ed. 2006. doi: 10.4324/9780203968215 [DOI] [Google Scholar]
  • 64.Levie N. Perceptual load as a necessary condition for selective attention. J Exp Psych Hum Percept Perform. 1995;21: 451–468. [DOI] [PubMed] [Google Scholar]
  • 65.Stigler G. The Economics of Information. J Polit Econ. 1961;69. [Google Scholar]
  • 66.Payne JW, Bettman JR, Johnson EJ. Behavioral decision research: A constructive processing perspective. Annu Rev Psychol. 1992;43: 87–131. doi: 10.1146/annurev.ps.43.020192.000511 [DOI] [Google Scholar]
  • 67.Slovic P, Griffin D, Tversky A. Compatibility Effects in Judgment and Choice. Hogarth R. Inisghts in decision making: Theory and applications. Hogarth R. Chicago: University of Chicago Press; 1990. doi: 10.1017/cbo9780511808098.014 [DOI] [Google Scholar]
  • 68.Ericsson KA, Kintsch W. Long-term working memory. Psychol Rev. 1995;102: 211–245. doi: 10.1037/0033-295x.102.2.211 [DOI] [PubMed] [Google Scholar]
  • 69.Falk A, Heckman JJ. Lab experiments are a major source of knowledge in the social sciences. Science. 2009;326: 535–538. doi: 10.1126/science.1168244 [DOI] [PubMed] [Google Scholar]
  • 70.Haider H, Frensch PA. Eye Movement during Skill Acquisition: More Evidence for the Information-Reduction Hypothesis. J Exp Psychol Learn Mem Cogn. 1999;25: 172–190. doi: 10.1037/0278-7393.25.1.172 [DOI] [Google Scholar]
  • 71.Gegenfurtner A, Lehtinen E, Säljö R. Expertise Differences in the Comprehension of Visualizations: A Meta-Analysis of Eye-Tracking Research in Professional Domains. Educ Psychol Rev. 2011;23: 523–552. doi: 10.1007/s10648-011-9174-7 [DOI] [Google Scholar]
  • 72.van der Lans R, Wedel M. Eye movements during search and choice. International Series in Operations Research and Management Science. 2017. doi: 10.1007/978-3-319-56941-3_11 [DOI] [Google Scholar]
  • 73.Stüttgen P, Boatwright P, Monroe RT. A satisficing choice model. Mark Sci. 2012;31. doi: 10.1287/mksc.1120.0732 [DOI] [Google Scholar]
  • 74.Lewis KE, Grebitus C, Nayga RM. The Impact of Brand and Attention on Consumers’ Willingness to Pay: Evidence from an Eye Tracking Experiment. Can J Agric Econ. 2016;64: 753–777. doi: 10.1111/cjag.12118 [DOI] [Google Scholar]
  • 75.Rihn AL, Yue C. Visual Attention’s Influence on Consumers’ Willingness-to-Pay for Processed Food Products. Agribusiness. 2016;32: 314–328. doi: 10.1002/agr.21452 [DOI] [Google Scholar]
  • 76.Meyerding SGH, Merz N. Consumer preferences for organic labels in Germany using the example of apples–Combining choice-based conjoint analysis and eye-tracking measurements. J Clean Prod. 2018;181: 772–783. doi: 10.1016/j.jclepro.2018.01.235 [DOI] [Google Scholar]
  • 77.Meißner M, Decker R. Eye-tracking information processing in choice-based conjoint analysis. Int J Mark Res. 2010. doi: 10.2501/S147078531020151X [DOI] [Google Scholar]
  • 78.Pärnamets P, Johansson R, Gidlöf K, Wallin A. How Information Availability Interacts with Visual Attention during Judgment and Decision Tasks. J Behav Decis Mak. 2016;29: 218–231. doi: 10.1002/bdm.1902 [DOI] [Google Scholar]
  • 79.Pieters R, Warlop L. Visual attention during brand choice: The impact of time pressure and task motivation. Int J Res Mark. 1999;16. doi: 10.1016/s0167-8116(98)00022-6 [DOI] [Google Scholar]
  • 80.Reutskaja E, Nagel R, Camerer CF, Rangel A. Search dynamics in consumer choice under time pressure: An eye-tracking study. Am Econ Rev. 2011;101. doi: 10.1257/aer.101.2.900 [DOI] [Google Scholar]
  • 81.Russo JE, Leclerc F. An Eye-Fixation Analysis of Choice Processes for Consumer Nondurables. J Consum Res. 1994;21. doi: 10.1086/209397 [DOI] [Google Scholar]
  • 82.Vass C, Rigby D, Tate K, Stewart A, Payne K. An Exploratory Application of Eye-Tracking Methods in a Discrete Choice Experiment. Med Decis Mak. 2018;38: 658–672. doi: 10.1177/0272989X18782197 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Van Loo EJ, Nayga RM, Campbell D, Seo HS, Verbeke W. Using eye tracking to account for attribute non-attendance in choice experiments. Eur Rev Agric Econ. 2018;45: 333–365. doi: 10.1093/erae/jbx035 [DOI] [Google Scholar]
  • 84.Oviedo JL, Caparrós A. Information and visual attention in contingent valuation and choice modeling: Field and eye-tracking experiments applied to reforestations in Spain. J For Econ. 2015;21: 185–204. doi: 10.1016/j.jfe.2015.09.002 [DOI] [Google Scholar]
  • 85.Segovia MS, Palma MA, Nayga RM. The effect of food anticipation on cognitive function: An eye tracking study. PLoS One. 2019;14. doi: 10.1371/journal.pone.0223506 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Orquin JL, Mueller Loose S. Attention and choice: A review on eye movements in decision making. Acta Psychol (Amst). 2013. doi: 10.1016/j.actpsy.2013.06.003 [DOI] [PubMed] [Google Scholar]
  • 87.Ashby NJS, Johnson JG, Krajbich I, Wedel M. Applications and Innovations of Eye-movement Research in Judgment and Decision Making. J Behav Decis Mak. 2016. doi: 10.1002/bdm.1956 [DOI] [Google Scholar]
  • 88.Yarbus A. Eye movements and vision. New York: Plenum Press; 1967. [Google Scholar]
  • 89.Most SB, Scholl BJ, Clifford ER, Simons DJ. What you see is what you set: Sustained inattentional blindness and the capture of awareness. Psychol Rev. 2005;112: 217–242. doi: 10.1037/0033-295X.112.1.217 [DOI] [PubMed] [Google Scholar]
  • 90.Pashler H, Johnston JC, Ruthruff E. Attention and performance. Annu Rev Psychol. 2001;52: 629–651. doi: 10.1146/annurev.psych.52.1.629 [DOI] [PubMed] [Google Scholar]
  • 91.Pieters R, Wedel M. Goal Control of Attention to Advertising: The Yarbus Implication. J Consum Res Inc •. 2007. [Google Scholar]
  • 92.Just MA, Carpenter PA. A theory of reading: From eye fixations to comprehension. Psychol Rev. 1980. doi: 10.1037/0033-295X.87.4.329 [DOI] [PubMed] [Google Scholar]
  • 93.Huettig F, Olivers CNL, Hartsuiker RJ. Looking, language, and memory: Bridging research from the visual world and visual search paradigms. Acta Psychol (Amst). 2011;137. doi: 10.1016/j.actpsy.2010.07.013 [DOI] [PubMed] [Google Scholar]
  • 94.Tatler BW. The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. J Vis. 2007. doi: 10.1167/7.14.4 [DOI] [PubMed] [Google Scholar]
  • 95.Krajbich I, Rangel A. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proc Natl Acad Sci U S A. 2011;108: 13852–13857. doi: 10.1073/pnas.1101328108 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 96.Meißner M, Oll J. The Promise of Eye-Tracking Methodology in Organizational Research: A Taxonomy, Review, and Future Avenues. Organ Res Methods. 2019. doi: 10.1177/1094428117744882 [DOI] [Google Scholar]
  • 97.Williams DW, Wood MS, Mitchell JR, Urbig D. Applying experimental methods to advance entrepreneurship research: On the need for and publication of experiments. J Bus Ventur. 2019;34: 215–223. doi: 10.1016/j.jbusvent.2018.12.003 [DOI] [Google Scholar]
  • 98.Bialkova S, van Trijp HCM. An efficient methodology for assessing attention to and effect of nutrition information displayed front-of-pack. Food Qual Prefer. 2011. doi: 10.1016/j.foodqual.2011.03.010 [DOI] [Google Scholar]
  • 99.Fiedler S, Glöckner A. The dynamics of decision making in risky choice: An eye-tracking analysis. Front Psychol. 2012. doi: 10.3389/fpsyg.2012.00335 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100.Knoepfle DT, Tao-yi Wang J, Camerer CF. Studying learning in games using eye-tracking. J Eur Econ Assoc. 2009. doi: 10.1162/JEEA.2009.7.2-3.388 [DOI] [Google Scholar]
  • 101.Ariely D, Loewenstein G, Prelec D. “Coherent arbitrariness”: Stable demand curves without stable preferences. Q J Econ. 2003;118: 73–106. doi: 10.1162/00335530360535153 [DOI] [Google Scholar]
  • 102.Carlsson F, Mørkbak MR, Olsen SB. The first time is the hardest: a test of ordering effects in choice experiments. J Choice Model. 2012;5: 19–37. doi: 10.1016/S1755-5345(13)70051-4 [DOI] [Google Scholar]
  • 103.Hewlin PF, Dumas TL, Burnett MF. To thine own self be true? facades of conformity, values incongruence, and the moderating impact of leader integrity. Acad Manag J. 2017;60: 178–199. doi: 10.5465/amj.2013.0404 [DOI] [Google Scholar]
  • 104.Gordon ME, Slade LA, Schmitt N. The “Science of the Sophomore” Revisited: from Conjecture to Empiricism. Acad Manag Rev. 1986. doi: 10.5465/amr.1986.4282666 [DOI] [Google Scholar]
  • 105.Hill CWL. Cooperation, Opportunism, and the Invisible Hand: Implications for Transaction Cost Theory (in Theory Development Forum: Market Discipline and the Discipline of Management). Acad Manag Rev. 1990. [Google Scholar]
  • 106.Barney J. Firm Resources and Sustained Competitive Advantage. J Manage. 1991. doi: 10.1177/014920639101700108 [DOI] [Google Scholar]
  • 107.Brockner J, Higgins ET. Regulatory focus theory: Implications for the study of emotions at work. Organ Behav Hum Decis Process. 2001. doi: 10.1006/obhd.2001.2972 [DOI] [Google Scholar]
  • 108.Idson LC, Liberman N, Higgins ET. Distinguishing gains from nonlosses and losses from nongains: A regulatory focus perspective on hedonic intensity. J Exp Soc Psychol. 2000. doi: 10.1006/jesp.1999.1402 [DOI] [Google Scholar]
  • 109.Stiglitz JE. Information and the Change in the Paradigm in Economics. New Frontiers in Economics. 2004. doi: 10.1017/CBO9780511754357.004 [DOI] [Google Scholar]
  • 110.Lahey JN, Oxley D. The power of eye tracking in economics experiments. American Economic Review. 2016. doi: 10.1257/aer.p20161009 [DOI] [Google Scholar]
  • 111.Rayner K. Eye Movements in Reading and Information Processing: 20 Years of Research. Psychol Bull. 1998. doi: 10.1037/0033-2909.124.3.372 [DOI] [PubMed] [Google Scholar]
  • 112.Purves D., Augustine G. J., Fitzpatrick D., Katz L. C., LaMantia A. S., McNamara J. O., et al. Types of Eye Movements and Their Functions. Neuroscience. 2001. pp. 361–390.11166122 [Google Scholar]
  • 113.Glaholt MG, Reingold EM. Eye movement monitoring as a process tracing methodology in decision making research. J Neurosci Psychol Econ. 2011;4: 125–146. [Google Scholar]
  • 114.Duchowski A. Eye tracking methodology: Theory and practice. Eye Tracking Methodology: Theory and Practice. 2017. doi: 10.1007/978-1-84628-609-4 [DOI] [Google Scholar]
  • 115.Pizzo A, Fosgaard TR., Tyler B, Beukel K. Information acquisition and cognitive processes during strategic decision-making: combining a policy-capturing study with eye-tracking data. In: Figshare Dataset [Internet]. 2022. Available: 10.6084/m9.figshare.19753417.v1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 116.Lew DK, Whitehead JC. Attribute non-attendance as an information processing strategy in stated preference choice experiments: Origins, current practices, and future directions. Mar Resour Econ. 2020;35. doi: 10.1086/709440 [DOI] [Google Scholar]
  • 117.Chavez D, Palma M, Collart A. Using eye-tracking to model attribute non-attendance in choice experiments. Appl Econ Lett. 2018;25. doi: 10.1080/13504851.2017.1420879 [DOI] [Google Scholar]

Decision Letter 0

Iván Barreda-Tarrazona

8 Apr 2022

PONE-D-22-03769Information acquisition and cognitive processes during strategic decision-making: combining a policy-capturing study with eye-tracking data.PLOS ONE

Dear Dr. Pizzo,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I have received feedback on your manuscript from two expert reviewers. Both reviewers and myself see merit in your work. However, as the manuscript currently stands, it presents important shortcomings that must be carefully addressed before it could be published. The issues include data availability, data analysis, connection to the literature and to previous results, and many others. Hence, you should consider this revision opportunity a high risk endeavour. In case you decide to undertake the improvement task requested, you must know that I will ask the same two reviewers to consider again your paper for publication.

Please submit your revised manuscript by May 23 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Iván Barreda-Tarrazona, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

3. Thank you for stating the following in your Competing Interests section: 

“NO”

Please complete your Competing Interests on the online submission form to state any Competing Interests. If you have no competing interests, please state "The authors have declared that no competing interests exist.", as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now

 This information should be included in your cover letter; we will change the online submission form on your behalf.

4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

5. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper provides an interesting use of eye tracking to uncover the use of information in a policy capturing context. Unfortunately the results you found are relatively uninteresting and only marginally useful to improve policy capture. Here are a number of suggested changes that might strengthen the paper.

1. Relate judgment micro-processes to actual valuations. Understanding the attentional processes is important if it alters the results defined in terms of the derived policy evaluations of the attributes. Currently this seems like half a paper.

2. Clarify your design. It is hard to understand the task used or the design across respondents. The order of the scenarios is randomized across respondents. Is the order of the attributes within scenarios randomized, or their attendant valuations? You indicated that the correlation between the 16 attributes (defined linearly 1-5) is less than +/- .40 across the 30 tasks. It is surprising you could not find a design with lower correlations. There should be a design with 32 tasks that can generate 16 independent binary variables. Your example design in Figure 1A shows no evidence of any of the attributes having a low (1) rating. Is that true of all scenarios? Having 4 levels may help you find an orthogonal design. Is that what you did?

3. Do you need both efficiency and selectivity? Conceptually, these constructs are inversely related, as processes which are selective are likely to be less efficient. What is their correlation? The inverse similarity evident in their statistical patterns is shown in Table 2. It might be wise to focus only on selectivity, as that is normative, provided, as you show, that one attends the attributes that are less likely to be important.

4. Play down the student vs. employed effect. The least interesting findings of this paper stem from the small difference between the processing of students vs. employed respondents. Part of the problem may stem from the very small samples of respondents and from the inherent variability within categories. The only difference that matters is that it takes a little longer for the students to get used to the attributes, as one would expect. Do the two groups differ in their valuation of attributes?

5. Do a better job with accuracy. Accuracy is key. Currently you only measure it based on the extent to which attention focuses on attributes that are valued by a respondent. A second measure is the extent to which each respondent’s choice is consistent with their judgments, assessed by either the standard error or the R-squares of within-individual regressions. Typically, that internal measure of consistency increases with round as respondents adjust their responses to be consistent with each other. A third measure of consistency is whether a respondent’s direct measures differ from those of their peers. It would be exciting to show that peer consistency increases with round, implying that they converge towards each other’s judgments over time.

6. Separate aggregate from individual analysis. It would be helpful to display a regression pooling all respondents together. That enables tests of shared strategies across individuals. Then run regressions within individuals. Those individual tests can assess the extent to which individuals used different strategies. Below are four potentially valuable tests.

7. Test the impact of attributes near the top. Check to see whether most searches move from top to bottom. If that occurs, there may be enough information for a reasonable evaluation to be made for items early in the process, thus limiting time spent at a low accuracy cost.

8. Test the impact of the lengths of the attribute descriptions. With experience, the time spent focused on an attribute may drop because respondents can recognize its value without having to reread the sentence. This drop is more likely for long descriptions.

9. Test whether some respondents may be ignoring the attribute labels and simply averaging degree information. That shortcut is even easier if one evaluates the scenario by its number of 5’s. More complex but almost as easy is the number of 4’s or 5’s, or the number of 5’s minus the number of 1’s. It should be possible to measure saccades down the column to see if they are scanning down to look for other similar ratings.

Finally, test whether respondents are first exploring important attributes and then examining their ratings. A number of respondents initially examine the attributes top down and then examine the degree for each. If so, you will find attribute-to-degree moves but few in reverse. The selectivity then comes in by moving immediately to a few important attributes.

Reviewer #2: The authors present an analysis of eye movements and decisions during a policy capturing (PC) experiment in which participants rated the potential of 30 possible academic collaboration opportunities. Looking at the literature, this seems to be the first, or one of the first studies to use eye tracking to study decision strategies in a policy capturing paradigm. The paradigm, however, resembles the discrete choice experiment (DCE) to quite an extent and in the context of this paradigm, eye tracking has been used extensively. The authors refer to the DCE eye tracking papers, but I think better use could be made of the overlap, along with a clearer discussion of the differences. Because this is (one of) the first paper(s) studying eye movements in PC, I think the paper makes a significant contribution. That said, I would have several comments that I think need addressing before I can recommend publication.

(1) The authors indicate that they cannot share data due to sensitive content. This is somewhat surprising, because the combination of expert/novice, fixation information, and decisions for the various policies does not allow for tracing the data back to participants. I therefore think the data should be made available (using participant numbers). With these data, I think the exact contents of the presented policies should also be made available. Any demographics such as age and gender can be omitted from the shared data, as they are not used for the analysis and could lead to identification of the participants.

(2) Various plots are shown as bar plots. I think it would be better to use line plots here, using error bars to indicate the variability across participants.

(3) As indicated, the policy capturing (PC) paradigm seems to be quite similar to discrete choice experiments (DCEs), for which there is an extensive literature on whether or not eye movements may reveal cognitive processing. I think you could do more with this literature in your introduction, for example, to make predictions, and in your discussion, to explain why the data are the way that they are. For example, the conclusion from the DCE literature seems to be that participants follow a fairly standard trajectory through the information presented on the screen (very much top-to-bottom, left-to-right), but that eye movements provide relatively little information about cognitive processing (which would be in line with your observations that experts and novices show similar eye movement patterns). Likewise, an important topic in the eye tracking DCE literature is attribute non-attendance (ANA), where three ways of measuring ANA have been identified: stated ANA, inferred ANA and visual ANA (the latter on the basis of eye movements). Studies have suggested that these three may not always be aligned. This could be in contrast with your finding that participants are well able to indicate which attributes are important for their decisions.

(4) That said, the policy-capturing method seems to differ from the DCE method in that many more attributes are shown (typically DCEs have around 4 to 7 attributes), that ratings are collected (rather than choices between options), and that no systematic method seems to be applied to decide which attributes and which levels to present to participants (in DCEs, a method called d-efficient design is used to optimally select attribute levels across choice tasks). I think it would be important to contrast the two methods in your work, and explain what can be learned from the DCE literature for PC and what cannot.

(5) I think the discussion could be more like a discussion. I would very much like to see a comparison between the present results and past findings (there is quite a bit of literature on eye movements for discrete choice experiments), discussions of any differences with past findings, how results fit in existing theories (such as the eye-mind hypothesis mentioned in the introduction) and a discussion of possible limitations of the study and future directions.

(6) I found it difficult to understand how the eye movement data were recorded and analysed. The survey seems to be presented in Qualtrics, which seems to be an online survey tool. Eye movements seem to be collected with a Tobii T60 eye tracker. How was the onset of each question aligned with the eye movements?

(7) For the fixations, did you make use of the automatic segmentation method of the T60 eye tracker to separate samples into fixations and saccades?

(8) I’m a bit worried about line 208, where you indicate that you only use fixations of at least 200 ms. That is quite a high threshold. I have seen thresholds of 80 ms being used, but looking at the fixation distributions provided by Rayner in his articles and books, I think you may be missing quite a substantial number of fixations in your analysis by using a 200 ms cut-off duration.

(9) In line 255 you indicate that you follow the method of Hitt et al. and Tyler et al. I think it would be important to briefly explain what these methods involve.

(10) Part of the participants seem to have participated online (no eye tracking) and part took part with the eye tracker. What is unclear to me is how these data were analysed. Were eye tracking outcomes based on just the eye movement participants while the decision data were based on all participants? What was done for analyses that involved the combination of decisions and eye tracking data?

(11) The time spent (often called dwell time) seems to be reported in seconds. This may be problematic, because some participants take a long time to decide, while others take less (so those who spend more time on each task contribute more strongly to the average data). Time spent on tasks also decreases over time, and therefore tasks early in the sequence contribute more than later tasks. I would strongly recommend also considering time spent as a percentage of the total trial duration, to reduce such length effects.

(12) It is unclear to me what the purpose of the pilot was. How much of the study protocol was changed after the pilot?

(13) Please make sure that all statistics are reported with the same number of digits (sometimes there are too many).

(14) Please have the paper proof-read one further time before resubmitting.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Dec 1;17(12):e0278409. doi: 10.1371/journal.pone.0278409.r002

Author response to Decision Letter 0


13 Jun 2022

Thank you for your further consideration of our manuscript.

We have made a careful effort to address all issues raised by the reviewers and have incorporated most comments into the manuscript, which now stands as an improved and more complete version. We have clarified the text where needed and expanded the references and comparisons to the literature and previous results. We have made the data available, once anonymized. Finally, new results were extracted from the data as suggested by the reviewers or in order to address specific comments; the majority were also integrated into the manuscript.

All details of our revisions are included in the Response to Reviewers document.

We remain available for any inquiry you may have.

Attachment

Submitted filename: Response to reviewers.docx

Decision Letter 1

Iván Barreda-Tarrazona

25 Aug 2022

PONE-D-22-03769R1

Information acquisition and cognitive processes during strategic decision-making: combining a policy-capturing study with eye-tracking data.

PLOS ONE

Dear Dr. Pizzo,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Both reviewers see great progress in the way you have responded to their comments. In fact, reviewer 2 just wants you to provide an easier-to-interpret dataset, so I ask you to include a file with the description of each of the variables in the dataset. Please also make sure to carefully consider all suggestions that reviewer 1 is putting forward. After the next round of revision I will make a final decision about the publishability of the paper.

Please submit your revised manuscript by Oct 09 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Iván Barreda-Tarrazona, PhD

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: My apologies for the delay in my response. It is quite a long paper and it has been a few difficult months with everyone else "going back to normal". Thank you for offering the version that shows the changes and the detailed letter with the responses to the comments. This has helped a lot. I think all my comments have been addressed in the revision. When reading the revision I thought that the introduction could have had a bit more information on the similarities and differences between the policy capturing paradigm and discrete choice experiments, but I think the current layout with this information in the discussion also works. I am glad to see that the cut-off for fixations is 100ms (which is a common threshold), instead of 200ms. I also looked through the replies to the other reviewer, and very much appreciate that you back up your responses with relevant analyses.

The one remaining request I would still have, is to provide an overview of what each of the variables in your data-set mean (best shared along with the data-set). I had to look up the .dta format and found that it is from STATA, which I can read in R using the "haven" package, so this works out well (but you may want to add that information as well). For example, what is contained in the variables: qc1-qp1-qp2-qp3-qp4-qp5-qp6-qp7? What is in c1-c2-c3-c4-c5-c6-c7-c8-c9-c10-c11-c12-c13-c14-c15-c16-d1-d2-d3-d4-d5-d6-d7-d8-d9-d10-d11-d12-d13-d14-d15-d16-a1-a2? Some guidance on what is what would be helpful.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Joel Huber

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: PONE22 3792 tracking judgments.docx

PLoS One. 2022 Dec 1;17(12):e0278409. doi: 10.1371/journal.pone.0278409.r004

Author response to Decision Letter 1


2 Oct 2022

Note: We report below the response to the reviewers. It is a copy of the document attached to this revision. We suggest reading it from the doc file, as the figures are missing here.

-----------------------------

Dear Editor, Dear Reviewers,

Thanks for the opportunity to resubmit our paper. We are very thankful for your constructive comments and suggestions which are all well-taken. Below, we are responding to all the comments and are highlighting the changes made in the manuscript as a reaction to the comments (our answers are in blue). Overall, the comments and the suggested changes have, in our opinion, improved the manuscript substantially. We would like to mention that we found a few mistakes in the first “response to reviewers” document. For that reason, we are re-submitting the first reply document (with the corrections), along with this second reply document.

Reviewer 1:

Thank you for your careful responses to the questions in the first draft. I believe there are some important issues that need to be resolved for this to be an important paper that helps understand multi-attribute scenario judgments.

Initial Reactions:

1. Figures 1 and 4. Figure 1 shows time by task order. Perhaps convert both to log scale. Double-log learning curves tend to be quite linear. Figure 4 provides very little information not found in Figure 1. You might simply present Figure 4 first and then Table 4, showing that differences in time spent differ little between professionals and students.

Thank you for the suggestion. We have followed it by converting the average time spent in the following exhibit to logarithmic form. In the manuscript, we keep Figure 4, now renamed Figure 1, and we remove the old version of Figure 1.

The old version of Figure 4 is dropped, and Figures 5 and 6 are now Figure 4 and Figure 5 (see the additional change in answer 3). Figure 1 includes the distinction between the professional and the student samples, and the text was updated as follows:

“The development of attention over time is shown in Fig 1, which also includes the split between Professionals and Students.”

Fig 1. Attention. The pattern of attention; time spent (seconds), log transformed, for attributes (Panel A) and degrees (Panel B) over the 30 repeated collaboration scenarios, by professionals and students.

2. Figure 2. Attributes-not-viewed. Are these relevant differences? It might be more meaningful to provide the % of information viewed per attribute rather than a binary scale of the proportion of attributes viewed vs. not viewed.

Thanks for the comments, which are well taken. Unfortunately, we do not have access to the original raw data from the study due to a change of institutional affiliation. For this reason, we cannot reshape our Areas of Interest (AOIs) and consequently cannot retrieve data on the percentage of information viewed per attribute (i.e., measure how many words of each item have been viewed). We would also be concerned about data reliability, as the precision of the applied eye tracker is not high enough to create AOIs for each word.

We like the suggestion, though, and have created alternative measures, displayed below, which show the average number of fixations our participants make across the attributes and the degrees.

Figure 2-bis. Fixations. Additional panels on the average number of fixations per scenario, for both attributes and degrees.

The figure illustrates the average fixation intensity per scenario actually looked at. The trend in both panels resembles, as one would expect, the course of the time spent on each scenario, as addressed in the first hypothesis. We now comment on this in the manuscript as follows:

“As a contrast to the zero fixations, we also analyze what is in fact fixated on. A paired t-test comparing the average fixations of the first ten scenarios with the average fixations in the last ten scenarios finds a significant difference for attributes (p<0.0001), with 28.8 fewer fixations on average. For the degrees, the difference is also significant (p = 0.0018), with 3.1 fewer average fixations.”

3. Figure 5 shows zero-fixation attributes and degrees by sample. Consider dropping Figure 5 since there are visual differences that seem apparent but are not statistically reliable. You could simply include student vs. professional as an analysis table.

Thanks for the suggestion. We have dropped Figure 5 and kept the discussion about the subgroup’s tests in the manuscript. The exhibit related to this result is still Table 3. Because Figure 5 was dropped, the exhibit included in the following section Experience and Consistency is now named Figure 4.

4. Figure R2 does not make sense as the three graphs are virtually identical.

There was indeed a mistake in the exhibits we had uploaded for Figure R2, Panels A and B. We are grateful you spotted it, and we have now replaced them. We find that the updated exhibit is relevant and have decided to keep it in the text.

Fig R1.2. Accuracy. Plot of standard deviation of individual R-squared for the overall sample (Panel A) and for the split sample (Panel B).

Panel A – Overall sample Panel B – Split Sample

Suggested changes:

1. Play down the distinction between attributes and degrees. One possibility is that respondents may simply average the degrees for each scenario, but that would result in null differences between the attributes. A good normative rule specifies that the more important the attribute the more valuable it is to assess its degree. Generally, it is wasteful to spend relevant time on an attribute without considering its degree. If so, then there should be few examples where a person fixates on an important attribute without noting its scale. To test that calculate the percent of attributes that are viewed without examining its degree, and the number of degrees sampled without viewing its attribute. Those may be a small percent (say 20%) and should decrease with round or in cases where substantial time is spent on either (say > 500 ms). Below I suggest an analysis that merges attribute and degree time into one measure of attribute attention.

Thank you for the insightful suggestion. On average, the share of attributes that are viewed (based on the fixation measure) without their degree being examined is 65.3%, while the share of degrees sampled without their attribute being viewed is 34.7%. The overall picture is that for 45.7% of the attributes both the attribute and the degree are checked, for 25.7% neither of them is checked, and for 28.5% either the attribute or the degree is checked while the corresponding counterpart is not.

When plotting this measure over order, we find that, while the average sum of attributes being seen without checking the corresponding degree is decreasing over order, the opposite trend is observed for the degrees viewed in isolation.

Fig R2.2. Information in isolation. Plots of the share of attributes viewed without viewing the corresponding degree and vice-versa.

We added this additional analysis to the manuscript, which reads:

“Another level of analysis concerns whether attributes and degrees are viewed in isolation, meaning that one is viewed while the corresponding counterpart is not. In other words, we calculate the percentage of attributes that are viewed without examining their degree, and the percentage of degrees that are viewed without examining their attribute. On average, the attributes that are viewed (based on the fixation measure) without examining their degree amount to 65.3%, while the degrees sampled without viewing their attribute amount to 34.7%. When studying these measures over order, we find that, while the average share of attributes seen without checking the respective degree decreases over order, the opposite trend is observed for the degrees viewed in isolation. This suggests that participants tend to focus less and less on reading the attributes over time, and more and more on noticing the level of the degrees.”
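For transparency, a minimal sketch of how this isolation measure could be reproduced from the shared dataset is given below. It is an illustration only, not the authors’ code: the local filename and the reshaping via the c1-c16 and d1-d16 dummies are assumptions based on the variable description provided later in this letter.

    # Sketch: share of attributes fixated without their degree, and vice versa.
    # "dataset.dta" is an assumed local filename for the shared Stata file.
    import pandas as pd

    df = pd.read_stata("dataset.dta")

    pairs = []
    for i in range(1, 17):
        # One row per (id, order) for criterion i and for degree i.
        attr = df[df[f"c{i}"] == 1].set_index(["id", "order"])["fixations_count"] > 0
        deg = df[df[f"d{i}"] == 1].set_index(["id", "order"])["fixations_count"] > 0
        pairs.append(pd.concat({"attr_seen": attr, "deg_seen": deg}, axis=1))
    pairs = pd.concat(pairs)

    attr_only = (pairs["attr_seen"] & ~pairs["deg_seen"]).mean()
    deg_only = (~pairs["attr_seen"] & pairs["deg_seen"]).mean()
    both = (pairs["attr_seen"] & pairs["deg_seen"]).mean()
    print(f"attribute only: {attr_only:.1%}, degree only: {deg_only:.1%}, both: {both:.1%}")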

2. Play down the selectivity metric from zero fixations. The problem with eye tracking is that many fixations occur randomly or are briefly noted and then ignored. In Table 4, the standard errors of the selectivity measures are substantially larger than those of the efficiency measures. One way to fix that is to define a higher cutoff for what counts as a fixation, say 500 ms. The other way is to simply focus more on what people spend more time on rather than not. Put differently, the important measure is the total time of fixations rather than the distinction between attributes with no fixations vs. some.

Thanks a lot for the comment. We agree that it is interesting to look not only at what participants do not attend to, but also at what they in fact look at. We have therefore decided to add measures of the number of fixations and the time spent looking at the attributes. We cannot change the fixation cut-off because we do not have access to the raw data (as mentioned above).

Regarding the standard errors (we assume Table 3, not Table 4, is being referred to), we would like to highlight that the table lists the p-values in the parentheses (the explanation of the parentheses is also included in the table title).

After following the suggestion in your comment, we have added an additional analysis and updated the manuscript such that it now reads:

“To also provide evidence on what participants in fact attend to, and not only what they do not attend to, we have performed a parallel analysis of actual fixations on attributes and degrees (each attribute and the corresponding degree is treated as one unit) for each scenario. We find that the number of attributes and degrees receiving fixations is significantly lower for the last ten scenarios compared to the first ten (t-test: t = 5.34; p<0.0001). Furthermore, the middle ten scenarios are also significantly different from the first ten scenarios (t = 3.45; p = 0.0003), but not different from the last ten scenarios (t = 0.83; p = 0.2022), suggesting that the selection process mainly takes place at the beginning. We repeated the analysis at an individual level by comparing the individual number of attributes and degrees together fixated on in the first ten scenarios with the same person’s number of fixations in the last ten scenarios. At the individual level, we confirm that the number of attributes and degrees together fixated on is significantly decreasing (paired t-test: t = 3.55; p = 0.0005). Together these results underline that participants go through a process of selecting what items to attend to over the course of the study.”
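To illustrate the kind of paired comparison quoted above, a small sketch is shown below. It is not the authors’ exact analysis; the column names follow the dataset description given later in this letter, and the local filename is an assumption.

    # Sketch: per participant, the number of attribute/degree AOIs receiving
    # at least one fixation, averaged over the first ten vs the last ten
    # scenarios, compared with a paired t-test.
    import pandas as pd
    from scipy.stats import ttest_rel

    df = pd.read_stata("dataset.dta")          # assumed local filename
    items = df[df["answers_aoi"] == 0]         # keep attribute and degree AOIs

    fixated = (items.assign(fixated=items["fixations_count"] > 0)
                    .groupby(["id", "order"])["fixated"].sum()
                    .reset_index())

    first = fixated[fixated["order"] <= 10].groupby("id")["fixated"].mean()
    last = fixated[fixated["order"] >= 21].groupby("id")["fixated"].mean()

    t, p = ttest_rel(first, last)
    print(f"paired t = {t:.2f}, p = {p:.4f}")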

3. Make a run of 21,000 observations similar to what you did in R2. The initial large estimates and small error terms in R2 lack credibility. Perhaps the measure is in milliseconds? The small errors may come from assuming that the errors within persons and tasks are uncorrelated with each other. If so, better results may come from the following analysis:

Thank you for this precise comment. As for the previous comment, the digits in the parentheses of Table R2 are actually the p-values, not the standard errors. We forgot to mention this in the table note; it is now added. The measure is in milliseconds. We address the suggested analysis in the following point.

4. Predict attribute plus degree seconds with the following independent variables: fixed constants for each of the 16 attributes, fixed constants for each of the 43 respondents, continuous measures of attribute length, individual’s ranking for each attribute, and rounds. The main effects will show the relative impact of attribute length, individual ranking, and rounds. Then interactions between rankings, attribute lengths, and rounds (zero-centered) will indicate whether rankings become more predictive of decision time with rounds, whether attribute length becomes less predictive with rounds, and whether attribute length and ranking interact to predict attribute time. The idea is to identify the important factors that alter decision time.

Thanks for suggesting this analysis. It is a very useful way to understand the data. Below you may find the results of the regressions. We have also included the regressions in the manuscript as Table 4, and we have added the following text:

“A final step of our analysis is to run two regressions with the time spent jointly on the attribute and the corresponding degree as the dependent variable. The regressions are listed in Table 4. In the first regression, model 1, we apply attribute length, stated disliking (the ranked data of attributes from most liked (=1) to least liked (=16)), and order as explanatory variables. In the second regression, model 2, we furthermore add two-way interaction effects between the explanatory variables and a three-way interaction between all of them. Both regressions control for individual dummies and attribute dummies. All explanatory variables are normalized. The first regression highlights that more time is generally spent on longer attributes (everything else kept equal), and that less time is spent on less liked attributes, while time spent is reduced over the course of the policy-capturing experiment. In addition, the second regression shows that the longer time spent on lengthier attributes fades out over the repetition of the scenarios, suggesting that over time less time is spent reading the actual content of the attributes and more time is spent observing what degrees the different attributes are scaled at.”

Table 4. Triple interaction regression table. Regression models of the relative impact of attribute length, individual ranking, and scenarios order on the time spent on attributes and corresponding degrees (p-values within parentheses). The three explanatory variables are normalized.
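The specification described above can be written compactly. The sketch below shows one way the two models of Table 4 could be estimated, under assumed column names (time_ms, length, disliking, order, id, attribute) in a hypothetical long-format file; it illustrates the specification and is not the authors’ code.

    # Sketch of the Table 4 specification: time spent on an attribute and its
    # degree regressed on normalized attribute length, stated disliking rank,
    # and scenario order, with individual and attribute fixed effects.
    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.read_csv("attribute_time.csv")   # hypothetical long-format file

    # Normalize the three explanatory variables (z-scores).
    for col in ["length", "disliking", "order"]:
        panel[col + "_z"] = (panel[col] - panel[col].mean()) / panel[col].std()

    # Model 1: main effects plus individual and attribute dummies.
    m1 = smf.ols("time_ms ~ length_z + disliking_z + order_z + C(id) + C(attribute)",
                 data=panel).fit()

    # Model 2: adds all two-way interactions and the three-way interaction.
    m2 = smf.ols("time_ms ~ length_z * disliking_z * order_z + C(id) + C(attribute)",
                 data=panel).fit()

    print(m1.params.filter(like="_z"))
    print(m2.params.filter(like="_z"))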

Reviewer 2:

My apologies for the delay in my response. It is quite a long paper and it has been a few difficult months with everyone else "going back to normal". Thank you for offering the version that shows the changes and the detailed letter with the responses to the comments. This has helped a lot. I think all my comments have been addressed in the revision. When reading the revision I thought that the introduction could have had a bit more information on the similarities and differences between the policy capturing paradigm and discrete choice experiments, but I think the current layout with this information in the discussion also works. I am glad to see that the cut-off for fixations is 100ms (which is a common threshold), instead of 200ms. I also looked through the replies to the other reviewer, and very much appreciate that you back up your responses with relevant analyses.

Thanks very much for the kind words.

The one remaining request I would still have, is to provide an overview of what each of the variables in your data-set mean (best shared along with the data-set). I had to look up the .dta format and found that it is from STATA, which I can read in R using the "haven" package, so this works out well (but you may want to add that information as well). For example, what is contained in the variables: qc1-qp1-qp2-qp3-qp4-qp5-qp6-qp7? What is in c1-c2-c3-c4-c5-c6-c7-c8-c9-c10-c11-c12-c13-c14-c15-c16-d1-d2-d3-d4-d5-d6-d7-d8-d9-d10-d11-d12-d13-d14-d15-d16-a1-a2? Some guidance on what is what would be helpful.

Thanks for your comment. We have created this detailed description of the dataset. It is printed here below, but we also attach it as a separate document to the re-submission.

Dataset and variable description

Data Structure

For each of the 44 individuals, we have PC data for 30 scenarios.

For each scenario, we have eye-tracking data for 16 criteria, 16 degrees, 2 answers.

Therefore, the dataset is composed of 46,200 observations in total (44 subjects × 30 scenarios × the areas of interest within each scenario).

PC data

id = id number randomly assigned

order = ordinal variable indicating the order on which each scenario (out of 30) was shown to each subject

group = subsample dummy

scenario = descriptive name for each of the 30 scenarios

durationinseconds = how many seconds it takes subjects to answer the online survey

qc1 = consent page dummy (=1 if consent was provided)

age = sociodemographic numeric variable for age

gender = dummy variable for gender

gender_long = character variable for gender

nation = dummy variable for being a local citizen or a foreigner

nationality = character variable for citizenship

edu_level = categorical variable on education level

years_work_experience = numeric variable on years of working experience following most recent education title

years_research = numeric variable on years of working experience in research

partner_choice = dummy variable indicating whether the subject is able to choose their own collaborative partner

project_choice = dummy variable indicating whether the subject is able to choose their own collaborative project

current_collab = dummy variable indicating whether the subject is currently working on a cross-sector collaborative project

item_1a - item_16a = categorical variable on the ranking position of each item provided by each subject in the ranking exercise of the PC experiment (ranging from 1 to 16)

item_1 - item_16 = categorical variables for the degrees (ranging from 1 to 5) characterizing the level of each criterion

Answer = numeric variable: the average of Answer1 and Answer2

Answer1 = categorical variable describing the assessment Likert scale (ranging from 1 to 7) for the first evaluation question of each scenario

Answer2 = categorical variable describing the assessment Likert scale (ranging from 1 to 7) for the second evaluation question of each scenario

student = dummy variable

qp1 - qp7 = workplace perception questions, not used in current analysis

Eye-tracking data

aoiname = character variable for descriptive name of each area of interest

criteria_aoi = dummy variable indicating if area of interest is from a criterion

answers_aoi = dummy variable indicating if area of interest is from an answer

degree_aoi = dummy variable indicating if area of interest is from a degree

ttfff_ms = continuous variable indicating the time to first fixation in milliseconds for each area of interest (aoi)

timespent_fms = continuous variable indicating the time in milliseconds spent on each aoi

timespent_f = numeric variable indicating the number of times subjects look at each aoi

revisit_f_revisits = numeric variable indicating the number of times subjects revisit each aoi

fixations_count = numeric variable indicating the number of times subjects fixate each aoi

first_fixation_duration_ms = continuous variable indicating the length of time in ms of the first time subjects fixate each aoi

average_fixations_duration_ms = continuous variable indicating the average length of time in ms of fixations for each subject

c1 - c16 = dummy variables indicating which of the sixteen criteria the eye-tracking data refers to

d1 - d16 = dummy variables indicating which of the sixteen degrees the eye-tracking data refers to

a1 - a2 = dummy variables indicating which of the two answers the eye-tracking data refers to

aoi_c_d = numeric variable indicating the ordering position of the areas of interest for criteria and degrees together

aoi_c = numeric variable indicating the ordering position of the areas of interest for criteria

aoi_cda = descriptive variable indicating names of areas of interest
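As a convenience for readers exploring the shared .dta file, a minimal loading sketch is shown below. It assumes the Figshare file is saved locally as "dataset.dta" (the actual filename may differ); in Python, pandas reads Stata files directly, while in R the "haven" package mentioned by the reviewer serves the same purpose.

    # Minimal sketch for loading and inspecting the shared Stata dataset.
    # "dataset.dta" is an assumed local filename for the Figshare file.
    import pandas as pd

    df = pd.read_stata("dataset.dta")

    print(df.shape)                                        # rows, columns
    print(df[["id", "order", "scenario", "timespent_fms"]].head())

    # Example: mean dwell time (ms) by area-of-interest type.
    for flag in ["criteria_aoi", "degree_aoi", "answers_aoi"]:
        mean_ms = df.loc[df[flag] == 1, "timespent_fms"].mean()
        print(flag, round(mean_ms, 1))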

Attachment

Submitted filename: Response to reviewers_SECOND REVISION_01102022.docx

Decision Letter 2

Iván Barreda-Tarrazona

16 Nov 2022

Information acquisition and cognitive processes during strategic decision-making: combining a policy-capturing study with eye-tracking data.

PONE-D-22-03769R2

Dear Dr. Pizzo,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Iván Barreda-Tarrazona, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: I had only one comment left in the last round, which has now been addressed. I have no further comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Joel Huber

Reviewer #2: No

**********

Acceptance letter

Iván Barreda-Tarrazona

22 Nov 2022

PONE-D-22-03769R2

Information acquisition and cognitive processes during strategic decision-making: combining a policy-capturing study with eye-tracking data.

Dear Dr. Pizzo:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Iván Barreda-Tarrazona

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. Scenario layout.

    An example of the structure of one of the 30 scenarios and the Areas of Interest distribution.

    (TIF)

    S1 Table. Student-driven effect.

    Random Effects regression showing respondents’ systematic decrease in attention on the lengthy items over time.

    (TIF)

    Attachment

    Submitted filename: Response to reviewers.docx

    Attachment

    Submitted filename: PONE22 3792 tracking judgments.docx

    Attachment

    Submitted filename: Response to reviewers_SECOND REVISION_01102022.docx

    Data Availability Statement

    The data have been anonymized and made available on the Figshare platform at https://doi.org/10.6084/m9.figshare.19753417.v1.

