PLOS ONE. 2021 Aug 11;16(8):e0255346. doi: 10.1371/journal.pone.0255346

Trusting the experts: The domain-specificity of prestige-biased social learning

Charlotte O Brand*, Alex Mesoudi, Thomas J H Morgan
Editor: Miguel A Vadillo
PMCID: PMC8357104  PMID: 34379646

Abstract

Prestige-biased social learning (henceforth “prestige-bias”) occurs when individuals predominantly choose to learn from a prestigious member of their group, i.e. someone who has gained attention, respect and admiration for their success in some domain. Prestige-bias is proposed as an adaptive social-learning strategy as it provides a short-cut to identifying successful group members, without having to assess each person’s success individually. Previous work has documented prestige-bias and verified that it is used adaptively. However, the domain-specificity and generality of prestige-bias has not yet been explicitly addressed experimentally. By domain-specific prestige-bias we mean that individuals choose to learn from a prestigious model only within the domain of expertise in which the model acquired their prestige. By domain-general prestige-bias we mean that individuals choose to learn from prestigious models in general, regardless of the domain in which their prestige was earned. To distinguish between domain-specific and domain-general prestige, we ran an online experiment (n = 397) in which participants could copy each other to score points on a general-knowledge quiz with varying topics (domains). Prestige in our task was an emergent property of participants’ copying behaviour. We found participants overwhelmingly preferred domain-specific (same topic) prestige cues to domain-general (across topic) prestige cues. However, when only domain-general or cross-domain (different topic) cues were available, participants overwhelmingly favoured domain-general cues. Finally, when given the choice between cross-domain prestige cues and randomly generated Player IDs, participants favoured cross-domain prestige cues. These results suggest that participants were sensitive to the source of prestige, and that they preferred domain-specific cues even though these cues were based on fewer samples (being calculated from one topic) than the domain-general cues (being calculated from all topics). We suggest that the extent to which people employ a domain-specific or domain-general prestige-bias may depend on their experience and understanding of the relationships between domains.

Introduction

The field of cultural evolution seeks to explain broad patterns of cultural change and variation in terms of, amongst other factors, the means by which information is passed from person to person via ‘social learning biases’ [1–5]. Here, ‘bias’ is meant in the statistical sense of a systematic departure from randomly choosing another person from whom to learn (rather than the normative sense of ‘bias’, implying error or mistakes). Examples of such social learning biases include copying the majority (conformity), copying older individuals, copying when uncertain, or copying prestigious individuals [6]. In this study we focus on the latter: prestige-biased social learning (henceforth prestige-bias), which occurs when learners preferentially learn from individuals who are copied by others, attended to by others, or who generally receive freely conferred deference from others [7–14]. These individuals are said to have ‘prestige’.

Prestige-bias has been proposed as an adaptive social learning strategy as it provides an efficient short-cut to acquiring adaptive social information when more direct means (e.g. identifying and copying the most skillful/knowledgeable individuals) are unavailable or costly [11]. According to the cultural evolutionary theory of prestige, prestige-bias is only adaptive because the prestige was first acquired due to success, such that prestige-cues (e.g. being copied by, or receiving attention from others) are indirect proxies for success [11]. Consistent with this theory, our previous experimental work shows that participants use prestige information (e.g. who others are copying) when selecting a model from whom to socially learn, but only when a) the prestige information is to some extent related to success and b) direct success information (e.g. score) is unavailable [7]. Importantly, such conclusions were obtained when prestige was an emergent property of participants’ behaviour during the experiment; no deception or manipulation of prestige was employed at any time, thus providing a naturalistic test of the prestige bias theory [7].

Here we build on our previous study [7] by exploring the domain-specificity or domain-generality of prestige-bias, which remains poorly understood. By the terms ‘domain-specificity’ and ‘domain-generality’ we mean whether a prestigious individual’s influence is limited to the specific domain in which they are successful (domain-specific prestige) or whether prestigious individuals who are successful in one domain become influential even outside their domain of expertise (domain-general prestige). For example, extremely prestigious individuals who have gained fame for their expertise are often used as a source of advice outside the domain in which they gained expertise, such as famous sportspeople giving political or medical advice.

Consequently, we can ask whether social learners specifically copy individuals who are prestigious in the domain of interest (domain-specific prestige-bias), or if they copy individuals who are prestigious regardless of the source of this prestige (domain-general prestige-bias). For instance, is a successful and prestigious boat-builder sought out for advice on all kinds of general matters, or is their influence limited to their specific craft? In the original specification of prestige bias, Henrich & Gil-White, p.170 [11] suggest that “…prestige hierarchies can be domain-specific. For example, if I defer to you because of your superior computer skills and you defer to Bob because he is an excellent grass hockey player, I may not give Bob any special deference if grass hockey is not my thing,” [11]. However, they also go on to suggest that “prestigious individuals are influential even beyond their domain of expertise,” p.184 [11], and it is often claimed in the literature that prestigious celebrities who have acquired fame via a specific domain, such as sport or acting, have influence beyond their domain of expertise (see [15] for examples and critical discussion). As such, the domain-specificity of prestige-bias remains unclear.

We suggest that domain-general prestige-bias will be adaptive when success across different domains is correlated. As noted by Henrich & Gil-White, “much of the information that leads to success in one domain will often be transferable to others… acquiring skill in one domain (e.g. a martial art) is often touted as promoting success in many other areas. For example, problem-solving methods, goal-achieving strategies, eye-hand coordination, control over one’s emotions, etc. are useful across several domains,” [11]. Henrich & Gil-White go on to suggest that “because figuring out which combinations of ideas, beliefs, and behaviours make someone successful is costly and difficult, selection favoured a general copying bias, which also tends to make prestigious individuals generally influential (as people copy and internalise their opinions).” (p.184). We suggest that the domain-generality of prestige-bias is an open empirical question, and is dependent on not only how correlated the domains in question are, but also whether observers have an accurate perception of these correlations.

Previous research shows some evidence of the domain-specificity of prestige-bias. One experimental study found that 3–4 year old children preferentially copied the artefact choice of a prestigious demonstrator, defined as the demonstrator to whom bystanders had attended rather than ignored, when their prestige was acquired during an artefact task (i.e. in the same domain), but not when it was acquired on a food-preference task (i.e. in a different domain) [10]. The reverse preference was seen when the demonstrator acquired their prestige on the food-preference task rather than the artefact choice task. Other studies from naturally-occurring groups have provided indirect evidence that prestige-bias may not be as domain-general as suggested by Henrich & Gil-White. For example, one study found that prestige was unrelated to ethnobotanical medicinal knowledge in an indigenous Tsimane community, but instead was related to having a formal position in that society [16]. Their measure of prestige was the number of nominations received when others were asked to list all the important men in the village, suggesting that being highly skilled and knowledgeable in the domain of medicine does not translate to being an important man in the village generally, or gaining a formal position. Similarly, a study of naturally-occurring volunteer groups in Cornwall, such as chess or kayaking clubs, found that prestige ratings were not related to success in a quiz that the group completed together, but were related to formal positions that the individuals already had in their groups, such as teacher, team captain, or group organiser/secretary [17].

Here we present an experiment to directly assess the degree to which participants use domain-specific or domain-general prestige cues when choosing from whom to learn, building on our previous study [7]. As before, instead of manipulating prestige experimentally, we use participant-generated prestige cues and manipulate the choices participants face to directly compare the different types of cues, providing a suitable combination of naturalistic prestige-generation yet also experimental manipulation of our key variable of interest (domain-specificity). Participants answered quiz questions from four different topics: weight estimation, world geography, art history, and language identification. We treat each of these topics as a ‘domain’, and ask whether people use success-derived prestige in one domain/topic as a guide for who to copy in a different domain/topic.

On each of 100 questions, participants could answer for themselves or copy another individual (henceforth ‘demonstrator’) from their group. Depending on the experimental condition, participants could choose who to copy based on (1) the demonstrators’ scores on questions from the current topic (domain-specific success) or (2) the number of times the demonstrators were previously copied (our experimental proxy for prestige) on (a) questions from the current topic (domain-specific prestige), (b) all topics (domain-general prestige) or (c) a different topic (cross-domain prestige).

Our previous experiment using the same quiz and topics [7] found that participants’ scores across both rounds of questions from the same topic were more strongly associated (r between 0.55 and 0.78) than scores across both rounds of different topics (r between 0.24 and 0.6). The associations between scores on a particular topic and overall score across rounds were intermediate between these two ranges. These associations therefore indicate that domain-specific scores (scores on questions within the same topic) are the most reliable cue for choosing potential demonstrators from whom to copy when answering questions from the same topic/domain, compared with cross-domain or domain-general cues. Participants, however, were not informed of these correlations, nor did they have any way to calculate them during the experiment.

Given that domain-specific prestige (derived from the same topic) should be most highly correlated with score on the topic being answered, that cross-domain prestige (derived from a different topic) should be least correlated, and that domain-general prestige (derived from all topics) should be intermediate, we formulated the following hypotheses, preregistered at https://osf.io/93az8:

  1. Domain-specific prestige-bias will be employed more often than domain-general and cross-domain prestige-bias.

  2. Domain-general prestige-bias will be employed more often than cross-domain prestige-bias.

  3. Cross-domain prestige-bias will only be employed when domain-general or domain-specific prestige information is unavailable.

Methods

This research was granted ethical approval by the University of Exeter Biosciences ethics committee. Approval number: eCORN001806 v8.1.

Participants

Consistent with our previous study [7], we aimed to recruit ten groups of ten participants via Mechanical Turk for each of four conditions, totalling 400 participants. Participants were randomly assigned to one of ten groups within one of four conditions, giving a between-participant design. Due to difficulties in coordinating multiple groups of participants simultaneously, some participants dropped out at various stages, giving a starting sample size of 397, with 332 participants reaching the final demographics page. We ensured a minimum of five participants in each networked group throughout. All participants were above the age of 18 (age range 20–69, mean age = 39.1), with 113 men and 78 women (not every participant specified their gender). All were given a monetary reward of USD $10 for their time, and had the opportunity to win a bonus payment of $20 if they scored over 85% in the quiz; 33 participants received this bonus. All participants provided informed consent before being able to proceed to the task. The consent form stated that their participation was entirely voluntary, their data were entirely anonymous, and they could withdraw at any time by closing their browser.

Materials

The experimental automation platform Dallinger (dallinger.readthedocs.io) was used to create an online game in which groups of players can play and interact simultaneously. Participants answered 100 questions with two alternative answers each, one correct and one incorrect. The 100 questions were split into four topics of 25 questions each: geography, weight estimation, language, and art (see Fig 1 for an example).

Fig 1. Three example screenshots representing what participants saw at different stages of the experiment.

Fig 1

The top screenshot is an example question from the language topic. Participants could either select one of the two blue buttons showing the two possible answers (one correct, one incorrect), or select the red button labelled “Ask Someone Else”, which allowed participants to copy someone else within their group. The number ‘7’ at the bottom is a countdown timer that forces participants to answer within 15 seconds. The second image represents what a participant would see if they chose to “Ask Someone Else” in Round 2 of Condition C, where they could choose to view either Times Chosen Altogether (domain-general prestige) or Times Chosen On This Topic (domain-specific prestige). The bottom image represents what a participant would see if they chose ‘Times Chosen Altogether’ (domain-general prestige), in which case there were only two other players to choose from. Please note that for any given question, participants could have between one and nine other participants to choose from, depending on how many answered individually for that particular question. See Table 1 for the information combinations displayed in the other conditions. See S1 File for screenshots of all conditions.

Procedure

Participants were given 100 binary choice questions based on four different general-knowledge or ‘trivia’ style topics, 25 in each category. Participants had fifteen seconds to answer each question, and they scored one point for every question they answered correctly. On each question, instead of answering themselves, participants could choose to “Ask Someone Else” by clicking the corresponding button. This allowed them to see information about the other participants (‘demonstrators’) in their group who had answered that question themselves (if everyone chose to “Ask Someone Else” on a single question then participants were shown a message saying “sorry, everyone chose to ‘ask someone else’ so no one can score points for this question” because there would be no answers available to copy for that question). The information they saw depended on the condition, detailed in Table 1. They then chose a demonstrator whose answer they used for that question. If the chosen demonstrator answered the question correctly, the copying participant also scored a point for that question. If the demonstrator was incorrect, they did not. No one received feedback on whether their answer was right or wrong at any point.

Table 1. Information displayed when choosing to “Ask Someone Else” in Rounds 1 and 2 across conditions, with our predicted Round 2 choice listed first in each cell.

Condition | Round 1 information | Round 2 information
Specific v Cross (Condition A) | Domain-specific score | Domain-specific prestige or cross-domain prestige
General v Cross (Condition B) | Domain-specific score | Domain-general prestige or cross-domain prestige
Specific v General (Condition C) | Domain-specific score | Domain-specific prestige or domain-general prestige
Cross v Random (Condition D*) | Domain-specific score | Cross-domain prestige or random cue

Specific v Cross (Condition A) compares domain-specific and cross-domain prestige; General v Cross (Condition B) compares domain-general and cross-domain prestige; Specific v General (Condition C) compares domain-specific and domain-general prestige; and Cross v Random (Condition D*) compares cross-domain prestige with a random cue entirely unconnected to success in the task. Please note that Condition D was run separately, after confirmatory analysis of Conditions A–C, as specified on p.7 of our preregistration.

In all conditions, the 100 questions were split into two rounds, 60 questions in Round 1 (15 from each topic), and 40 in Round 2 (10 from each topic). In Round 1, when participants chose to “Ask Someone Else”, they always saw the current domain-specific score of all available demonstrators. However, in Round 2, participants could choose between two kinds of information, which varied according to the experimental condition (see Table 1). The information available included domain-specific prestige (the number of times each demonstrator was copied on Round 1 questions from this topic), domain-general prestige (the number of times copied on all Round 1 topics), cross-domain prestige (the number of times copied on Round 1 questions from a randomly selected, different topic) and a random cue (each demonstrator’s Player ID, which was a randomly generated 3 digit number). The four conditions pitted different pairs of information against each other, to establish participants’ preferences, and across the four conditions we constructed a hierarchy of information, from most to least preferred.
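
To make the construction of these cues concrete, the sketch below derives each Round 2 cue from a toy log of Round 1 copying events. The data frame, column names and values are illustrative assumptions rather than the authors’ actual data format (their raw data are in the repository linked in the Data Availability statement).

```r
## Toy Round 1 copy log: one row per copying event, recording which
## demonstrator was copied and the topic of the copied question.
## (Hypothetical column names and values, for illustration only.)
copy_log <- data.frame(
  demo_id = c("p1", "p1", "p2", "p3", "p1", "p2"),
  topic   = c("art", "art", "geography", "language", "weight", "art")
)
all_topics <- c("art", "geography", "language", "weight")

## Count how many times each demonstrator was copied on questions
## belonging to a given set of topics.
prestige_cue <- function(events, topics) {
  table(factor(events$demo_id[events$topic %in% topics],
               levels = sort(unique(events$demo_id))))
}

current_topic <- "art"
cross_topic   <- sample(setdiff(all_topics, current_topic), 1)

prestige_cue(copy_log, current_topic)  # domain-specific prestige
prestige_cue(copy_log, all_topics)     # domain-general prestige
prestige_cue(copy_log, cross_topic)    # cross-domain prestige

## The random cue is each demonstrator's randomly generated 3-digit Player ID,
## unconnected to any behaviour in the task:
player_ids <- sample(100:999, length(unique(copy_log$demo_id)))
```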

Pre-registered predictions and analyses

The following sets of predictions were preregistered at https://osf.io/93az8. Predictions 1–3 are assumption checks, to make sure that the correlations within and across domains were suitable for testing the subsequent predictions. Predictions 4–6 are our main theoretically derived, a priori predictions shown in Table 1. Predictions 7 and 8 are follow-up predictions looking at copying frequency and score across conditions. Finally, we have predictions related to our preregistered follow-up condition (Condition D), which was run after analysing results for Conditions A–C, as specified in the preregistration (p.7, heading “Follow-up Analyses”). Please note that Assumption Check 1 differs from our preregistration. This was an oversight: we wanted to check the assumption that participant scores, not participant prestige, were more tightly correlated within than between topics, in the same way as in our previous study [7].

To analyse our data we ran a series of Bayesian multi-level mixed models using the Rethinking package version 1.90 [18] in R version 3.6.0 [19]. Our raw data and analysis scripts can be found at https://github.com/lottybrand/Prestige_2_Analysis. Each model corresponding to each prediction is also included in the S1 File. Model parameters are interpreted as providing evidence of an effect on the outcome if their 89% credible interval (CI) did not cross zero, 89% being the default interval width in the Rethinking package. 89% credible intervals are used to prevent readers from confusing these with the widely used 95% confidence intervals and performing unconscious significance tests [18]. Priors were chosen to be weakly regularising, in order to guard against both under- and overfitting the model to the data. Effective sample sizes and Rhat values were checked throughout to confirm model convergence, and trace plots were inspected for signs of incomplete mixing when necessary.
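
As a concrete illustration of this modelling workflow (not the authors’ actual code, which was written for Rethinking 1.90 and is available in the S1 File and the repository above), the sketch below fits a binomial model with varying intercepts for participant and group using the package’s current ulam() interface, on simulated stand-in data with hypothetical variable names.

```r
library(rethinking)  # McElreath's package; compiles models via Stan

## Simulated stand-in data: one row per trial, with a 0/1 outcome and
## integer indices for participant and group (hypothetical structure).
set.seed(1)
d <- data.frame(
  chose_best = rbinom(200, 1, 0.8),
  ppt        = rep(1:20, each = 10),
  grp        = rep(1:4, each = 50)
)

m <- ulam(
  alist(
    chose_best ~ dbinom(1, p),
    logit(p) <- a + a_ppt[ppt] + a_grp[grp],
    a ~ dnorm(0, 1.5),                 # weakly regularising priors
    a_ppt[ppt] ~ dnorm(0, sigma_ppt),
    a_grp[grp] ~ dnorm(0, sigma_grp),
    sigma_ppt ~ dexp(1),
    sigma_grp ~ dexp(1)
  ),
  data = d, chains = 4, cores = 4
)

precis(m, prob = 0.89)  # posterior means, 89% CIs, effective sample sizes, Rhat
traceplot(m)            # visual check for incomplete mixing
```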

Predictions 1–3 (assumption checks)

  1. Domain-specific scores (i.e. the number of times a participant scored correctly in each topic) are more highly correlated between rounds than domain-general score or cross-domain score. This is an assumption check based on our previous findings and the experimental set-up. If this prediction is not upheld, it suggests the topics were more tightly correlated than anticipated, and our subsequent predictions may not hold. To test Prediction 1 we calculated Pearson’s correlation coefficients between participants’ asocial scores in Round 1 and Round 2 within each topic, and compared these with the correlation coefficients between different topics and with overall score (a code sketch of these checks follows this list).

  2. Across all conditions, when choosing to “Ask Someone Else” in Round 1, participants preferentially copy the highest-scoring demonstrator. This is an assumption-check, to make sure that subsequent copying frequency cues are genuine signals of performance, and a replication of our previous findings. To test this prediction we scored each trial for whether or not the participant chose to copy the highest scoring demonstrator available for each copying instance, and used this as the outcome variable in a binomial model with varying intercepts for group and participant. (Model 1 in S1 File).

  3. Across all conditions, when choosing to “Ask Someone Else” in Round 2, participants preferentially copy the most-copied demonstrator. This is an assumption-check to make sure that people are actually employing prestige-bias when it is potentially useful, and a replication of our previous findings. To test this prediction we scored each trial for whether or not the participant chose to copy the most copied demonstrator available for each copying instance, and modelled this as above (Model 2 in S1 File).
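
A minimal sketch of these checks, assuming a wide per-participant score table and a per-trial copying table with the hypothetical column names shown (the authors’ actual scripts are in their repository); the toy data exist only to make the snippet runnable.

```r
## Prediction 1: Pearson correlations of asocial scores, within vs across topics.
## `scores` stands in for a table with one row per participant and Round 1 /
## Round 2 asocial scores per topic (hypothetical columns, simulated values).
set.seed(2)
scores <- data.frame(
  art_r1 = rnorm(50), art_r2 = rnorm(50),
  geog_r1 = rnorm(50), overall_r1 = rnorm(50)
)
cor.test(scores$art_r1, scores$art_r2)      # within-topic (Art)
cor.test(scores$geog_r1, scores$art_r2)     # cross-topic (Geography vs Art)
cor.test(scores$overall_r1, scores$art_r2)  # overall Round 1 score vs Art Round 2

## Predictions 2-3: for each copying instance, did the participant pick the best
## available demonstrator (highest score in Round 1, most-copied in Round 2)?
copies <- data.frame(chosen_cue = c(3, 5, 2), max_available_cue = c(3, 5, 4))
copies$chose_best <- as.integer(copies$chosen_cue == copies$max_available_cue)
## `chose_best` is then the 0/1 outcome of a binomial varying-intercepts model
## of the kind sketched above (Models 1 and 2 in S1 File).
```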

Predictions 4–6 (main a priori predictions)

  4. In the Specific v Cross Condition (A), participants will predominantly choose to copy based on domain-specific prestige, rather than cross-domain prestige, as domain-specific prestige will be most correlated with score on the relevant domain.

  5. In the General v Cross Condition (B), participants will predominantly choose to copy based on domain-general prestige, rather than cross-domain prestige, as domain-general prestige will be more correlated with score on the relevant domain than cross-domain prestige (as it contains the topic-specific score within it).

  6. In the Specific v General Condition (C), participants will predominantly choose to copy based on domain-specific prestige rather than domain-general prestige, as domain-specific prestige will be more correlated with score on the relevant domain.

To test Predictions 4, 5 and 6 a generalised linear mixed model (Model 3 in S1 File) of all data from Round 2 was used with “chose predicted” (i.e. chose to view the information that we predict in each condition) as the binomial outcome variable (yes or no), with varying intercepts for participant, group and condition.
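
A sketch of what such a model could look like in the Rethinking syntax, with hypothetical variable names (the authors’ actual specification is Model 3 in S1 File); only the formula is shown, since fitting and diagnostics follow the earlier sketch.

```r
## "Chose predicted" as a 0/1 outcome, with varying intercepts for
## participant, group and condition (hypothetical index variables).
m3_formula <- alist(
  chose_predicted ~ dbinom(1, p),
  logit(p) <- a + a_ppt[ppt] + a_grp[grp] + a_cond[cond],
  a ~ dnorm(0, 1.5),
  a_ppt[ppt] ~ dnorm(0, sigma_ppt),
  a_grp[grp] ~ dnorm(0, sigma_grp),
  a_cond[cond] ~ dnorm(0, sigma_cond),
  sigma_ppt ~ dexp(1),
  sigma_grp ~ dexp(1),
  sigma_cond ~ dexp(1)
)
## After fitting with ulam(), precis(fit, depth = 2, prob = 0.89) reports the
## condition-level intercepts that underlie Fig 3.
```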

Predictions 7–8 (follow-up predictions)

  7. The overall frequency of copying (i.e. choosing to “Ask Someone Else”) is higher in the Specific v Cross Condition (A) and the Specific v General Condition (C) than in the General v Cross Condition (B). This is because Conditions A and C both provide domain-specific prestige information, which is a more direct cue to domain-specific success, whereas Condition B provides only indirect cues to domain-specific score. To test this prediction, a generalised linear mixed model of all data from Round 2 was used with ‘copied’ as the binomial outcome variable (i.e. chose “Ask Someone Else” or not); a sketch of such a model follows this list.

  8. Similarly, participants score higher in conditions in which domain-specific prestige is available, because these conditions provide cues that are more tightly correlated with success in the relevant domain. To test this prediction, a general linear mixed model of data from each condition was used, with each participant’s final score as the outcome variable and condition as the predictor variable, and varying intercepts for participant and group.
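
A sketch of the copying-frequency model, in the same hedged spirit as the earlier sketches (hypothetical variable names; the authors’ fitted models are in S1 File):

```r
## Each Round 2 trial scored 0/1 for whether the participant chose
## "Ask Someone Else", with condition-level intercepts plus varying
## intercepts for participant and group.
m_copy_formula <- alist(
  copied ~ dbinom(1, p),
  logit(p) <- a_cond[cond] + a_ppt[ppt] + a_grp[grp],
  a_cond[cond] ~ dnorm(0, 1.5),
  a_ppt[ppt] ~ dnorm(0, sigma_ppt),
  a_grp[grp] ~ dnorm(0, sigma_grp),
  sigma_ppt ~ dexp(1),
  sigma_grp ~ dexp(1)
)
## m_copy <- ulam(m_copy_formula, data = round2_trials, chains = 4, cores = 4)
## Differences between the a_cond entries give the between-condition contrasts
## reported in the Results.
```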

Unregistered predictions

As laid out in the preregistration (p.7 under heading “Follow-up Analyses”), after collecting and analysing data for Conditions A-C and confirming that participants showed a clear preference for domain-specific information over cross-domain and domain-general prestige, we then collected additional data for Condition D which compared cross-domain prestige (the least preferred information) with a random cue (Player ID number). Thus for Condition D, we predicted that participants would predominantly choose to copy based on cross-domain prestige rather than the random cue, as cross-domain prestige contains some correlation with score, whereas the random cue does not. Because Condition D contains the least favoured information, we also predicted lower copying rates and lower scores than the other conditions.

Results

Assumption check (Prediction 1)

As predicted, scores were more strongly associated between Rounds 1 and 2 of the same topic than they were between different topics (Table 2), except for the Geography topic in which the within-topic correlation (r = .47) was lower than two cross-topic correlations (Art (r = .55) and Language (r = .51)). Also as predicted, the overall Round 1 score was intermediate between the within-topic associations and the cross-topic associations for Art and Weight, but not for Geography and Language where the overall correlation was higher than the within-topic correlation. This latter unexpected finding may reflect the fact that overall score contains four times as much data as does the score for any specific topic.

Table 2. Pearson correlation coefficients (with 95% confidence intervals) between Round 1 and Round 2 asocial topic scores, within and across topics.

Round 1 asocial score | Round 2 asocial score | Correlation coefficient (95% confidence interval)
Art | Art | 0.71 (0.66, 0.76) *
Geography | Art | 0.43 (0.43, 0.52)
Language | Art | 0.50 (0.41, 0.58)
Weight | Art | 0.38 (0.29, 0.47)
Overall Round 1 Score | Art | 0.64 (0.57, 0.70)
Geography | Geography | 0.47 (0.38, 0.55)
Art | Geography | 0.55 (0.47, 0.62)
Language | Geography | 0.51 (0.42, 0.58)
Weight | Geography | 0.38 (0.29, 0.47)
Overall Round 1 Score | Geography | 0.59 (0.52, 0.66) *
Language | Language | 0.59 (0.51, 0.65)
Art | Language | 0.54 (0.46, 0.62)
Geography | Language | 0.42 (0.33, 0.51)
Weight | Language | 0.37 (0.28, 0.46)
Overall Round 1 Score | Language | 0.60 (0.53, 0.67) *
Weight | Weight | 0.57 (0.49, 0.63) *
Art | Weight | 0.47 (0.39, 0.55)
Geography | Weight | 0.29 (0.19, 0.39)
Language | Weight | 0.42 (0.33, 0.51)
Overall Round 1 Score | Weight | 0.55 (0.47, 0.62)

All scores are participants’ asocial scores, i.e. only points scored without copying others. Asterisks mark the highest association for each Round 2 topic.

Assumption checks (Predictions 2 and 3)

As predicted, when participants chose to copy others in Round 1 they preferentially copied the most successful (i.e. highest-scoring) demonstrator out of all those available (mean intercept estimate: 3.15, 89% CI: 2.77, 3.55; this corresponds to participants choosing the highest-scoring demonstrator 95.9% of the time on the probability scale). When participants chose to copy in Round 2, they preferentially copied the most prestigious (i.e. most-copied) demonstrator out of all those available (mean intercept estimate: 4.29, 89% CI: 3.59, 5.10; this corresponds to participants choosing the most-copied demonstrator 98.6% of the time on the probability scale).
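
These probability-scale figures are simply the inverse-logit of the reported intercept estimates, which can be verified directly in R:

```r
plogis(3.15)  # ~0.959: Round 1 probability of copying the highest-scoring demonstrator
plogis(4.29)  # ~0.986: Round 2 probability of copying the most-copied demonstrator
```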

A priori hypothesis tests (Predictions 4, 5 and 6)

As shown in the raw data presented in Fig 2, participants preferred the domain-specific prestige cue when available, followed by domain-general, then cross-domain, and finally random cues, as predicted. Our models confirmed that in each condition participants preferentially chose the predicted information over the alternative information, showing a strong preference in every case (see Fig 3; Condition A: mean 3.05, 89% CI: 2.42, 3.73; Condition B: mean 2.51, 89% CI: 1.90, 3.16; Condition C: mean 2.84, 89% CI: 2.24, 3.46; Condition D: mean 2.10, 89% CI: 1.34, 2.87).

Fig 2. Raw counts of the information chosen when participants chose to copy someone else’s answer in Round 2.

Fig 2

Total possible copying instances for each condition in Round 2 were: Condition A = 3240, Condition B = 3437, Condition C = 3377, Condition D = 3416.

Fig 3. Model predictions for participants choosing the predicted information compared to the alternative information in Round 2 of the four conditions, on the probability scale.

Fig 3

Follow-up analyses (Prediction 7)

Our model did not provide strong evidence of a difference in copying frequency between Conditions A (18% copying rate, n = 3240), B (14% copying rate, n = 3437) and C (20% copying rate, n = 3377); however, there was evidence that participants copied less in Condition D than in the other conditions (7% copying rate, n = 3416). This was confirmed by computing contrasts between the estimates of the four conditions (mean difference between Conditions A and D: 1.77, 89% CI: 0.72, 2.81; between Conditions B and D: 1.26, 89% CI: 0.29, 2.24; between Conditions C and D: 1.83, 89% CI: 0.76, 2.86).
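
Such contrasts can be computed directly from posterior samples. A minimal sketch, assuming a fitted Rethinking model m_copy (for example, the copying-frequency formula sketched in the Methods passed to ulam()) with a vector of condition intercepts a_cond ordered A to D:

```r
post <- extract.samples(m_copy)                 # posterior samples; a_cond has one column per condition
diff_AD <- post$a_cond[, 1] - post$a_cond[, 4]  # Condition A minus Condition D, on the log-odds scale
mean(diff_AD)                                   # posterior mean difference
PI(diff_AD, prob = 0.89)                        # 89% percentile interval
```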

Follow-up analysis (Prediction 8)

As with Prediction 7, there was not strong evidence that participants scored differently between Conditions A (mean score 71.1%), B (71.6%) and C (72.5%). However, there was strong evidence that participants scored lower in Condition D (mean score 63.5%) compared to Condition C only. This was confirmed by computing contrasts between the conditions (mean difference between Conditions D and C: -1.15, 89% CI: -2.23, 0.04). When looking at just the Round 2 scores (in which there was a difference in the information participants could choose), participants scored lower in Condition D (mean score 24.8) than in Condition B (mean score 27.4; mean difference -1.18, 89% CI: -2.02, -0.33) and Condition C (mean score 27.4; mean difference -1.19, 89% CI: -2.03, -0.32), but not Condition A (mean score 26.5).

The S1 File contains model specifications, a comparison with our previous prestige study that used a similar design [7] and further exploratory analyses, including results showing that copying rate predicts score, and plots showing individual variation in prestige score and copying rate.

Discussion

In order to acquire adaptive information, people often preferentially learn from prestigious individuals, where ‘prestige’ is acquired by being observed and copied by others. While this prestige-biased social learning has been demonstrated in previous research [7], uncertainty remains over the extent to which prestige-bias is domain-specific, where people preferentially copy others who have acquired prestige in the same domain as the behaviour being copied, or domain-general, where people preferentially copy others who are generally copied across a range of domains. In this study we experimentally tested whether participants adopted a domain-specific prestige-bias or a domain-general prestige-bias when both options were available to them during an online quiz. This quiz contained different topics that represented different ‘domains’ of knowledge. Participants were able to copy other players’ quiz answers based on their domain/topic-specific scores in Round 1 of the quiz, and were then able to copy other players based on how many times they had previously been copied by others (our measure of ‘prestige’) in Round 2 of the quiz. In a series of pair-wise comparisons between prestige cue types, we found that participants overwhelmingly chose to use domain-specific prestige cues (times copied in the same topic) over both domain-general (times copied overall) and cross-domain (times copied in a different, randomly chosen topic) prestige cues, that they preferred domain-general to cross-domain cues, and that they preferred cross-domain cues to a random cue entirely unconnected to success in the task. We therefore revealed a ‘hierarchy’ of prestige cues in which the most favoured cue, when available, was domain-specific prestige, followed by domain-general, then cross-domain, and lastly random cues. Importantly, as with our previous work [7], prestige cues were an emergent property of participants’ behaviour during the experiment; no deception or manipulation of prestige cues was employed at any time, thus increasing our confidence that such effects might be observed in the real world.

This study adds to the already extensive body of work which shows that people tend to use social information in an adaptive and flexible way, depending on the information that is available to them [5,6,20]. We also provide further evidence of success-biased social learning, in that participants copied the highest scoring player available to them when copying based on score in Round 1 [5,7,21]. Participants also copied the most-copied player available to them when copying based on prestige in Round 2. Our results also support previous evidence suggesting that participants are sensitive to the domain-specificity of prestige when copying or learning from others [10,16,17].

Our study used different topics of a quiz to represent different domains of knowledge, and participants’ scores in Round 1 of a given topic were generally better predictors of their score in Round 2 of the same topic than of any other topic. Thus, copying based on domain-specific prestige was most adaptive for increasing a player’s chances of selecting the correct answer on any given question, and ultimately in achieving a high score (and bonus payment) in the quiz. As the quiz was played live by groups of participants with no feedback on their scores, participants did not have access to how correlated scores were within or between topics. Participants did have experience of each topic in Round 1, presumably aiding their assessment of how related the topics were. Similarly, in real life people do not have direct access to data on how correlated different domains of skill or knowledge are, but presumably gain an intuitive understanding based on their experience or exposure to different domains during their lifetimes. Our predictions assume that people have an intuitive sense of which domains should be correlated, based on their experience. Thus, an individual’s tendency to prefer domain-specificity over generality relies on their own understanding or assessment of how correlated the domains might be. How accurate their perceptions of the correlations are depends on their own experience and expertise in those domains, and likely emerges either through unconscious associative learning, conscious deliberation, or a combination of the two. If direct experience of a domain is lacking, intuitions might be based on stereotypes or folk understanding, leading to inaccurate assessment of relationships between domains, and maladaptive use of prestige-bias. We feel our results are reassuring, in that they suggest people are sensitive to the source of people’s prestige, and that if access to a more relevant source is provided, they will preferentially use it over a less accurate source.

We found strong evidence that participants prefer domain-specific prestige cues to cross-domain prestige cues, i.e. they would rather copy the answer of someone who was copied extensively on the current topic than someone who was copied extensively on a different topic. Our topics were explicitly labelled as different to each other, and participants had experience of each topic in Round 1. This explicit labelling and direct experience may be responsible for participants’ strong domain-specific bias. However, if topics were more alike, participants may not show such a strong preference for domain-specific prestige. Importantly, if they had less experience of the topics, or if the topics were unfamiliar to them, they may be less able to make a judgement about their correlations. For example, an individual unfamiliar with science may see both a physicist and a biologist as belonging to the same domain of expertise, ‘scientist’, and perhaps incorrectly choose to learn about physics from a biologist. The biologist may be a better model than a historian or an artist, but another scientist might choose the more domain-specific model, i.e. the physicist. This may help to explain why prestigious individuals often influence others on topics outside of their domain of expertise, especially within celebrity culture and social media platforms such as Twitter. It may be more informative for someone to listen to a prestigious children’s author on a topic related to biology if they have no other access to biology experts. However, another biologist may choose not to listen to a children’s author on this topic, as they have access to plenty of other biologist models. Perhaps if people had wider and more direct access to experts on particular topics, they would preferentially weight the opinion of the expert in their domain of expertise rather than someone who had gained expertise in an unrelated topic. An anecdotal example of this occurred during the Covid-19 pandemic, in which many scientists started following epidemiologists on Twitter in order to gain the most domain-specific information relating to the pandemic. However, those outside of science may view any biologist as equally prestigious, despite that biologist having no epidemiological expertise.

As noted by Henrich & Gil-White [11], the emergence of domain-general prestige depends on whether success in multiple domains rests on a common underlying trait. In our quiz, the topics were chosen to be as dissimilar to each other as possible (within the constraints of an online quiz); however, we showed that performance across topics was to some extent correlated, either because the domains draw on similar cognitive abilities, or because success across domains is correlated with an underlying trait such as education, socioeconomic status, or ‘general intelligence’ (‘g’) [22–25]. The theoretical justifications and debates surrounding the construct of g are beyond the scope of this paper, but if theoretical and statistical models of g are reliable and replicable, then this would support the argument that domain-general prestige-bias can often be as adaptive as, if not more adaptive than, domain-specific prestige-bias. Just as a general intelligence factor may underpin general cognitive abilities, success across many broad areas of societal interest, such as politics, science or ethics, could plausibly be correlated with underlying traits such as reasoning, decision-making, or evidence-based judgement. For this reason it could be adaptive to use domain-general prestige-bias for large-scale decisions. For example, successful decision-making on climate change policy may require an understanding of data, evidence gathering, political landscapes and policy logistics, all of which may rely on common training, expertise or experience. In this way, a renowned “thinker” or “intellectual” (e.g. a broad-scope podcast host such as Sean Carroll) may be just as informative on climate policy as a climate scientist; the climate scientist may know the climate change data exceptionally well but may not have any experience of the political landscape or policy logistics.

Interestingly, domain-specific prestige was overwhelmingly preferred to domain-general prestige, even though in our experimental set-up the former actually contained a lower volume of information, being based on the 15 topic-specific questions from Round 1, than the latter, which was based on all 60 questions from Round 1 (and indeed overall score in Round 1 was a better predictor of topic-specific score in Round 2 in two out of four topics). Thus the domain-general prestige information included copying instances from four times as many questions as the domain-specific prestige information. This difference in information was also reflected in the prestige scores visible to participants, in that more copying instances would have occurred across all topics than within each topic. This suggests that people might be less sensitive to the amount of information than might be expected. Whether our experimental set-up reflects real life depends on a trade-off between depth and breadth of expertise. For example, someone may have a long history of prestige in biology generally, meaning that a lot of information has been gathered about their success as a biologist. Someone else may have recently gained expertise in a specific area of biology, such as molecular genetics. Our results suggest people would rather learn molecular genetics from the recently trained molecular genetics expert than from the long-standing general biology expert. Similarly, if someone is renowned for being generally successful or knowledgeable in a variety of domains, do we have more information about their expertise than about the expertise of someone who is a long-time expert in a narrower field? This trade-off between depth and breadth of expertise is an open question, and one worth investigating when trying to understand who people choose to learn from and why.

In summary, we find evidence that domain-specific prestige-bias is preferred to both cross-domain and domain-general prestige-bias, at least when the domains of interest are sufficiently dissimilar to each other and when individuals have had experience of each domain. Participants revealed a hierarchy of prestige in which domain-specific prestige was most preferred, followed by domain-general prestige, with cross-domain prestige least preferred. This preference was present despite the fact that the domain-general prestige scores contained more information (being generated from all topics) than the domain-specific prestige scores (generated from one topic). This apparent favouring of depth over breadth of expertise warrants further experimental investigation to understand who people learn from and why.

Supporting information

S1 File

(PDF)

Data Availability

The data underlying the results presented in the study are available from https://github.com/lottybrand/Prestige_2_Analysis.

Funding Statement

CB and AM were supported by The Leverhulme Trust (grant no. RPG-2016-122, awarded to AM; https://www.leverhulme.ac.uk/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  1. Henrich J, McElreath R. The evolution of cultural evolution. Evol Anthropol Issues News Rev. 2003;12: 123–135. doi: 10.1002/evan.10110
  2. Kendal RL, Boogert NJ, Rendell L, Laland KN, Webster M, Jones PL. Social Learning Strategies: Bridge-Building between Fields. Trends Cogn Sci. 2018;22: 651–665. doi: 10.1016/j.tics.2018.04.003
  3. Rendell L, Fogarty L, Hoppitt WJE, Morgan TJH, Webster MM, Laland KN. Cognitive culture: theoretical and empirical insights into social learning strategies. Trends Cogn Sci. 2011;15: 68–76. doi: 10.1016/j.tics.2010.12.002
  4. Mesoudi A. An experimental comparison of human social learning strategies: payoff-biased social learning is adaptive but underused. Evol Hum Behav. 2011;32: 334–342. doi: 10.1016/j.evolhumbehav.2010.12.001
  5. Morgan TJH, Rendell LE, Ehn M, Hoppitt W, Laland KN. The evolutionary basis of human social learning. Proc R Soc B. 2012;279: 653–662. doi: 10.1098/rspb.2011.1172
  6. Laland KN. Social learning strategies. Anim Learn Behav. 2004;32: 4–14. doi: 10.3758/bf03196002
  7. Brand CO, Heap S, Morgan TJH, Mesoudi A. The emergence and adaptive use of prestige in an online social learning task. Sci Rep. 2020;10: 12095. doi: 10.1038/s41598-020-68982-4
  8. Cheng JT, Tracy JL, Foulsham T, Kingstone A, Henrich J. Two ways to the top: Evidence that dominance and prestige are distinct yet viable avenues to social rank and influence. J Pers Soc Psychol. 2013;104: 103–125. doi: 10.1037/a0030398
  9. Cheng JT, Tracy JL, Ho S, Henrich J. Listen, follow me: Dynamic vocal signals of dominance predict emergent social rank in humans. J Exp Psychol Gen. 2016;145: 536–547. doi: 10.1037/xge0000166
  10. Chudek M, Heller S, Birch S, Henrich J. Prestige-biased cultural learning: bystander’s differential attention to potential models influences children’s learning. Evol Hum Behav. 2012;33: 46–56. doi: 10.1016/j.evolhumbehav.2011.05.005
  11. Henrich J, Gil-White FJ. The evolution of prestige: freely conferred deference as a mechanism for enhancing the benefits of cultural transmission. Evol Hum Behav. 2001;22: 165–196. doi: 10.1016/s1090-5138(00)00071-4
  12. Jiménez ÁV, Mesoudi A. Prestige-biased social learning: current evidence and outstanding questions. Palgrave Commun. 2019;5: 1–12. doi: 10.1057/s41599-019-0228-7
  13. Lenfesty HL, Morgan TJH. By Reverence, Not Fear: Prestige, Religion, and Autonomic Regulation in the Evolution of Cooperation. Front Psychol. 2019;10. doi: 10.3389/fpsyg.2019.02750
  14. Pornpitakpan C. The Persuasiveness of Source Credibility: A Critical Review of Five Decades’ Evidence. J Appl Soc Psychol. 2004;34: 243–281. doi: 10.1111/j.1559-1816.2004.tb02547.x
  15. Acerbi A. Cultural Evolution in the Digital Age. Oxford University Press; 2019.
  16. Reyes-Garcia V, Molina JL, Broesch J, Calvet L, Huanca T, Saus J, et al. Do the aged and knowledgeable men enjoy more prestige? A test of predictions from the prestige-bias model of cultural transmission. Evol Hum Behav. 2008;29: 275–281. doi: 10.1016/j.evolhumbehav.2008.02.002
  17. Brand CO, Mesoudi A. Prestige and dominance-based hierarchies exist in naturally occurring human groups, but are unrelated to task-specific knowledge. R Soc Open Sci. 2019;6: 181621. doi: 10.1098/rsos.181621
  18. McElreath R. Statistical Rethinking. CRC Press; 2016.
  19. R: The R Project for Statistical Computing. [cited 15 Dec 2020]. Available: https://www.r-project.org/
  20. Rendell L, Boyd R, Cownden D, Enquist M, Eriksson K, Feldman MW, et al. Why Copy Others? Insights from the Social Learning Strategies Tournament. Science. 2010;328: 208–213. doi: 10.1126/science.1184719
  21. Mesoudi A. An experimental simulation of the “copy-successful-individuals” cultural learning strategy: adaptive landscapes, producer–scrounger dynamics, and informational access costs. Evol Hum Behav. 2008;29: 350–363. doi: 10.1016/j.evolhumbehav.2008.04.005
  22. Conway ARA, Kane MJ, Engle RW. Working memory capacity and its relation to general intelligence. Trends Cogn Sci. 2003;7: 547–552. doi: 10.1016/j.tics.2003.10.005
  23. Kovacs K, Conway ARA. Process Overlap Theory: A Unified Account of the General Factor of Intelligence. Psychol Inq. 2016;27: 151–177. doi: 10.1080/1047840X.2016.1153946
  24. Kovacs K, Conway ARA. What Is IQ? Life Beyond “General Intelligence.” Curr Dir Psychol Sci. 2019;28: 189–194. doi: 10.1177/0963721419827275
  25. Savi AO, Marsman M, van der Maas HLJ, Maris GKJ. The Wiring of Intelligence. Perspect Psychol Sci. 2019;14: 1034–1061. doi: 10.1177/1745691619866447

Decision Letter 0

Miguel A Vadillo

10 Feb 2021

PONE-D-20-39514

Trusting the experts: the domain-specificity of prestige-biased social learning

PLOS ONE

Dear Dr. Brand,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you will see, all the reviewers see merit in your study and provide overly positive reviews. Reviewer 3 recommends accepting the present version as is. Reviewers 1 and 2 in contrast suggest a number of revisions. In particular, both reviewers think that the the readability of the paper can be improved, particularly in the Methods and Results sections. I share their concerns (it was also difficult for me to understand those sections of the paper on a first read) and encourage you to take their suggestions to the heart. In addition to the detailed analytical recommendations made by Reviewer 1, I'd suggest reporting raw correlations (instead of R2) as a measure of association between variables, because they are substantially easier to interpret for most readers (R2 are absolutely fine as a measure of model fit, but I think they can be misleading in this context).

Please submit your revised manuscript by Mar 27 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Miguel A. Vadillo, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.  We note that Figure 1 in your submission contain map images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

2.1.    You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license. 

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

2.2.    If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

The following resources for replacing copyrighted map figures may be helpful:

USGS National Map Viewer (public domain): http://viewer.nationalmap.gov/viewer/

The Gateway to Astronaut Photography of Earth (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/

Maps at the CIA (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.html

NASA Earth Observatory (public domain): http://earthobservatory.nasa.gov/

Landsat: http://landsat.visibleearth.nasa.gov/

USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://eros.usgs.gov/#

Natural Earth (public domain): http://www.naturalearthdata.com/


3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: N/A

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This manuscript reports one experiment in which the preferences for different prestige cues are compared, in the context of a quiz game. The authors conclude that prestige drives the tendency to choose who must be copied during the game. Additionally, domain-specific prestige is preferred to cross-domain and domain-general prestige, whereas domain-general prestige is preferred to cross-domain prestige, thus revealing a hierarchy of prestige sources.

In general, I think that the manuscript has merit. The research question is sound, and the methods seem adequate to address the question. The writing is overall clear, although I had some difficulties in understanding the analyses.

What follows is a list of more specific comments on the article.

-First of all, I do not fully understand why the phenomenon studied here (relying on prestige cues to choose who to learn from) is presented as a “bias”. It could certainly be understood as a simple rule, or heuristic. However, as far as I know, the term bias conveys (perhaps implicitly) the idea of systematic error or departure from a normative standard. But, how can we determine the standard (unbiased) behavior in a task like this? Is trusting individuals according to their prestige more biased than using their performance as a cue? In fact, it seems like a reasonable rule that should lead to good decisions most of the time (the Introduction describes the “bias” as “adaptive”). I think this needs further comment and clarification, perhaps in the Introduction.

-I appreciate the transparency in the reporting of data and results (sharing data and code, pre-registering hypotheses).

-Although the article is generally clear, I found it difficult to follow in the Analyses/Results section. I am not sure that the way the manuscript is organized actually helps. On the one hand, presenting the predictions and the analyses schematically, before the results, seemed useful the first time I read it. However, in this study, we have different analyses with different dependent variables (if I understood correctly: probability of choosing the predicted option, copying rate, and quiz score). When we read the Results section, it is easy to forget which variable we are talking about, or what “prediction 3” (explained before) means. I needed to go back and forth several times to understand what the analyses were actually doing. I suggest improving this by either organizing the text differently, or by adding explanations so that we do not need to go back and read the predictions/analysis plan. Perhaps organizing this section by dependent variables could help (i.e., first we analyze one variable, and when we finish we start with the next one, and we always include comments to remind the reader of the purpose and interpretation of the analysis), but I’m not sure.

-When interpreting Figure 3, I wonder about the usefulness of this analysis, which in addition is quite complex to understand. The dependent variable is the probability of choosing the predicted option. Thus, the higher the model estimates, the better the researchers’ predictions were. This is a transformed variable that is not easy to interpret (note that in one condition, choosing “Domain-general” is coded as 1, but in other conditions the same answer is coded as 0). For example, how should we interpret the lack of differences between conditions in this variable? If you had found a difference in one condition against the others, how would you interpret it? I am not sure that the statistical model that is used here is actually useful. At least, I would need the authors to interpret the results when they are presented so that I can fully understand what the analysis reveals. Otherwise I had the impression that the comparison between conditions is meaningless, but I could be wrong.

-I also found the design a little bit overcomplicated, and this may be due to the procedure used.

First, experiments need manipulation and control of variables. Here, the only manipulated variables are the two options that are presented to each participant. The rest of the parameters (the participant’s score, the participants’ prestige...) are uncontrolled, as they are generated spontaneously by participants (do I understand the procedure correctly?). If this is an experiment and we have clear predictions as to which cues should play a role, why not manipulate them directly? This could be achieved easily. For example, imagine that participants are presented with a list of potential players to copy. But instead of real players (i.e., other participants) whose scores and parameters are beyond the control of the experimenter, you show them a list of fictitious players whose parameters are fixed by the experimenter (e.g., Player 111 has been copied 3 times in this domain, 5 times in another domain, etc.). Thus, you could manipulate the variables while controlling for other factors. The consequence of surrendering this degree of control is that the data we obtain are weaker, as we do not know whether the performance of the participants is actually affecting their individual decisions to copy/not copy, etc. What happens if, by chance, your 10 participants are very good at Geography and nobody asks for help? This affects the prestige cues that are crucial to your conclusions. At the very least, a clarification of why the current procedure was preferred and what its benefits are is needed.
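
A minimal sketch of what such a fixed stimulus set could look like (purely hypothetical values and column names, in R):

# Hypothetical fixed demonstrators whose prestige cues are set by the experimenter
# rather than generated by other participants' behaviour; all values are invented.
fictitious_players <- data.frame(
  player_id           = c("Player 111", "Player 112", "Player 113"),
  copied_same_domain  = c(3, 1, 0),   # times copied within the current quiz topic
  copied_other_domain = c(5, 0, 2),   # times copied in a different topic
  copied_all_domains  = c(8, 1, 2)    # times copied across all topics
)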

Second, what is the justification for presenting only two options when copying the answers (e.g., “Domain-Specific vs. Cross-Domain”)? I understand that this simplifies the task of the participant. However, the design becomes more complicated (to carry out, to report and to interpret) as we need to compare between pairs of conditions. Why is it not possible to present a list of options that convey all possibilities (Domain-Specific, Domain-General, Cross-Domain, Random Cue)? If there are good reasons to present only two options, I would like to read them in the manuscript.

-The choice of this procedure/design also has consequences in terms of statistical power. Although the sample size is overall good, we have small cell sizes. This compromises the generalization of the results.

-Additionally, and related to the previous point, I would like to see a justification for the sample size: if it was not decided a priori, a sensitivity analysis would suffice, to have an idea about the smallest effect that your study could detect with reasonable power. Choosing a sample size similar to a previous study is generally not a good strategy to decide the N.

-The authors use a variety of domains to test the generalizability of their results. As the authors discuss (in the Discussion), the correlation that participants assume between domains can be important. Could this be a problem when interpreting the current data? For example, if participants believe that the skills necessary to succeed in two of the tasks are similar, or the same (e.g., both Geography and Language are based on general knowledge, therefore the ones succeeding in Geography should do fine in Language), then they might show an apparent preference for domain-general prestige. I wonder if the results could be different depending on the set of domains used in the experiment: whether they are closely related or they are completely independent (e.g., a physical task and an intellectual task). Perhaps this could be added to the Discussion.

More specific comments:

-I would like to see additional information about what participants see on the screen. For example, when they choose to “ask someone else”, what information is then presented? We have screenshots for the questions, but not for these informative screens.

-Is it possible to name the conditions in a more informative way than “condition A, condition B...”? I had to check the Table multiple times to remember which cues are being contrasted in each condition.

-I think that describing in the paper the function used to compute the Pearson coefficients (“cor.test()”) is unnecessary. Is there anything in the way this function works that is important or unusual so that it deserves comment?

-I am curious as to why the credible intervals have a width of 89%. I get that the popular option of 95% (at least it is more popular in frequentist analyses) means less stable estimations. But then, why not 90%? I don’t mean that the manuscript or the analyses are wrong, but the choice of 89% left me wondering, and there is no justification in the text.

Reviewer #2: Introduction

- It is necessary to provide a complete definition of prestige. In this vein, it would be interesting to clarify whether there is any connection between source prestige and source credibility, commonly identified to consist of expertise and trustworthiness (for a review, see Pornpitakpan, 2004), or other attributes, such as age or gender.

- Please, provide an explanation of the difference between prestige information and direct success information (line 67).

- Please justify why it is important to better understand the domain-specificity and -generality of prestige-bias. What kinds of implications might these two kinds of prestige-bias have in real life? It could be very helpful to add some specific examples.

Method_Procedure

In general, the information provided in this section is incomplete or unclear. In several cases, it is necessary to consult the authors’ previous studies to fully understand the content. For instance:

- Why, if everyone chose to “Ask Someone Else”, were participants shown a message saying “sorry, everyone chose to ‘ask someone else’ so no one can score points for this question”?

- The step “They then chose a demonstrator whose answer they used for that question” (lines 207-208) is confusing. Could you offer a more detailed explanation?

Discussion

- Please explain in more depth the implications that the detected ‘hierarchy’ of prestige might have in real life.

Minor comments

7. Line 470: There is a double comma.

Reviewer #3: This is a well-written manuscript on an interesting topic for evolutionary psychology with sound methods and clean results. The pre-registration is also sound and the protocol is followed carefully by the authors. It is true that the results are probably not smashing, but this is not a concern for PLOS ONE (and not a concern in general for me). Therefore, I can only congratulate the authors. If I am not mistaken, this is the first time I have recommended acceptance of a manuscript in the first round of revision.

I have just one very minor comment: it would be interesting to test whether participants choose to copy those who are most copied when they can choose to copy based on scores; that is, which of these two information cues is preferred when both are present? Unfortunately, as far as I understand, the current design does not allow for this analysis, so it should be left for future research. Maybe the authors want to discuss this point.

Antonio M. Espín

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: Yes: Antonio M. Espín

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Aug 11;16(8):e0255346. doi: 10.1371/journal.pone.0255346.r002

Author response to Decision Letter 0


31 Mar 2021

Thank you for taking the time to carefully read our manuscript. We hope our revisions have now made the methods and results sections clearer. We have also altered the reporting of the correlation coefficients and made the other substantial clarifications requested by the reviewers, as detailed in the attached "response to reviewers" file.

Attachment

Submitted filename: response_to_reviewers_.docx

Decision Letter 1

Miguel A Vadillo

18 May 2021

PONE-D-20-39514R1

Trusting the experts: the domain-specificity of prestige-biased social learning

PLOS ONE

Dear Dr. Brand,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Below you will find the comments provided by Reviewer 2 and a new reviewer (4). Reviewer 1 declined the invitation to read your manuscript again. As you can see, Reviewers 2 and 4 make radically different recommendations: while Reviewer 2 now recommends acceptance, Reviewer 4 recommends outright rejection. My own impression is that the concerns raised by Reviewer 4 (partially overlapping with those of Reviewer 1) do obscure the interpretation of the results and merit, at least, a detailed discussion in the main text. In the same vein, I think that some of the problems previously detected by Reviewer 1 should lead to further changes in the text.

Both Reviewers 1 and 4 note that the fact that you didn't manipulate prestige cues experimentally undermines your interpretation of the results. This is perhaps clearest in the new review provided by Reviewer 4. The main concern is that because during Round 1 participants only had information about the domain-specific score of the other participants, then all measures of prestige in Round 2 can be seen as proxies for the domain-specific score of the other participants. That is, the information that participants see in Round 2 about who was imitated most often in Round 1 must be determined by the domain-specific scores in Round 1. This obscures whether the results seen in Round 2 are actually driven by prestige cues or rather by inferred domain-specific accuracy. In your response to Reviewer 1 you argue that your procedure provides a more natural test of how prestige dynamics arise in the real world. This is a fair point, but I think it deserves an explicit discussion in the main text. It is absolutely fine to emphasize the advantages of using a naturalistic procedure, but the reader should also be alerted about the potential problems of this strategy.

If I understand the design properly, condition D was not part of the original preregistered protocol and was added after data were already collected for conditions A-C. An unfortunate consequence of this is that all the analyses that include condition D fall outside the scope of the pre-registered protocol. Please note this in the main text (i.e., that all models involving condition D depart from the protocol). Also, condition D is introduced in Table 1, but the fact that this condition was not part of the preregistered protocol is not explained until line 450. Please, explain as early as possible in the ms that condition D was not part of the preregistration.

I understand that conditions A-D were manipulated between participants, but I don't think this is clearly stated anywhere in the manuscript. Please, say so explicitly and consider referring to Groups A-D instead of conditions A-D. Note that this is also relevant to understand Reviewer 4's concerns about your mixed models (i.e., whether participants are nested within groups or crossed).

Reviewer 1 complained that the general procedure was difficult to follow and suggested adding a figure. I think that the new figure 1 does help the reader understand the procedure, but it would be even better to present not only a trial from one particular condition in Round 2, but examples also from Round 1 and the remaining conditions in Round 2.

Reviewer 1 also questioned the adequacy of Figure 3. I do share the feeling that the information conveyed by Figures 2 and 3 is somewhat redundant.

Reviewer 1 asked for some justification of sample size. Although your response is compelling, it is true that in the present version of the ms it is difficult to appreciate whether your sample is sufficiently sensitive given the effects you are studying. I don't think a power analysis would be appropriate here (because you are not using frequentist stats), but it would be nice to have some measure that allowed the reader to infer whether the number of observations is large enough to conclude with certainty that the effects you find are conclusively different from zero (or not). I think that Bayes factors against the null hypotheses would serve this purpose well.
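
One possible sketch of how such a quantity could be obtained, assuming a brms-style Bernoulli model (the formula, prior, and variable names below are placeholders rather than the authors' actual specification):

# Sketch only: formula, prior, and variable names are illustrative assumptions.
library(brms)
fit <- brm(
  chose_predicted ~ 1 + (1 | participant) + (1 | group) + (1 | topic),
  data = d, family = bernoulli(),
  prior = prior(normal(0, 1.5), class = Intercept),  # a proper prior is needed for the ratio below
  sample_prior = "yes"                               # store prior draws for a Savage-Dickey ratio
)
# For the point hypothesis "Intercept = 0", Evid.Ratio is a Savage-Dickey density
# ratio: evidence FOR the null; its reciprocal is the Bayes factor against the null.
hypothesis(fit, "Intercept = 0")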

Reviewer 1 complained that the analytic strategy was also difficult to follow. I do think that this version is clearer, but it would be even better to fully merge the paragraphs where you explain the predictions with the paragraphs where you explain how you will test them. In other words, explain on lines 380-385 that you will test Prediction 1 using correlation coefficients. Merge lines 395-401 with Prediction 2, and in Prediction 3 simply explain that the analysis will be as for Prediction 2; and follow the same logic with the remaining predictions and analysis plans. Otherwise the reader is constantly forced to go back to each prediction when reading the proposed analyses.

Minor comments:

line 60: "Here, 'bias' in meant..." -> "in" should be "is"

line 141: Double space after "four"

lines 460-472: on a first read, it is unclear whether this refers to all the analyses or just to the one immediately above. Please clarify this here.

table 2: please report confidence intervals (or credibility intervals) for these correlation coefficients, so that the reader can appreciate to what extent the numerical differences between them are meaningful or not.
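
For Pearson coefficients this is straightforward to extract; a minimal sketch with placeholder variable names (using an 89% level only to match the intervals used elsewhere in the manuscript) would be:

# cor.test() returns an interval for Pearson's r alongside the point estimate;
# x and y are placeholder variable names.
ct <- cor.test(x, y, conf.level = 0.89)
ct$estimate   # Pearson's r
ct$conf.int   # 89% confidence interval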

Starting on line 568 you report the intercept of each model and refer to it as the "mean coefficient estimate". The reader is forced to go back to the model description to understand that this is an intercept. Please refer to this as the mean intercept estimate instead, or otherwise alert the reader to what these numbers mean.

lines 695-697: The second part of the sentence (i.e., "... and emerge either through unconscious associative learning, conscious deliberation, or both") doesn't add information beyond what's already said in the first half. It simply says that the nature of this type of learning is unknown.

==============================

Please submit your revised manuscript by Jul 02 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Miguel A. Vadillo, Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #4: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #4: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: I Don't Know

Reviewer #4: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #4: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors of the manuscript "Trusting the experts: the domain-specificity of prestige-biased social learning" have successfully answered the comments I raised in the previous version.

I would still try to make clearer the relationship, if any, between prestige and expertise but this is only a suggestion that in no way modifies my position to accept the manuscript as it stands.

Reviewer #4: In the experiment reported in this manuscript, participants were given the opportunity to answer each of 100 quiz questions (across two rounds) by themselves or by copying from another participant. In the different conditions of the experiment, the sources available to copy from in the second round of questions were manipulated (using different pairs of sources in each condition) in such a way that participants could choose between two sources with different types of prestige (within-field, between-fields, general, or a random cue). The authors conclude that any type of prestige cue drives decisions (is preferred) relative to no prestige cue, that within-field prestige is preferred to general and between-fields prestige, and that general prestige is preferred to between-fields prestige.

As a general comment, I commend the authors for the transparency in performing and reporting the study. Preregistration and sharing data and code, as done here, are a guarantee that the authors are not using analysis flexibility to surreptitiously manipulate results in their favor. Moreover, the theoretically relevant effects are probably strong enough to provide substantial evidence in favor of all the main hypotheses, regardless of the statistical approach used.

Still, the preregistered plan describes the model to test the main hypothesis as a Bayesian GLME with the following structure: Chose_predicted ~ intercept + 1|condition + 1|Participant + 1|group + 1|topic

If this is the model that was finally run (I am sorry I am not familiar with the specific syntax of the package used for Bayesian analysis here), why is condition modelled as a random-effects factor (random intercept)? As far as I know, random-effects factors are those for which levels can be considered as randomly sampled from the set of possible ones, whereas here levels are actively manipulated. And also, this syntax suggests that group and participant factors were crossed, but in the design participants are actually nested in groups. Please clarify these issues, or correct me if I am misinterpreting something.
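
A minimal sketch of the two alternatives being contrasted here, in brms/lme4-style formula syntax (placeholder data and variable names, not the authors' preregistered code), might be:

# Sketch only: data and variable names are illustrative assumptions.
library(brms)
# (a) Condition as a population-level ("fixed") effect, with participants
#     explicitly nested within groups:
m_fixed <- brm(chose_predicted ~ condition + (1 | group/participant) + (1 | topic),
               data = d, family = bernoulli())
# (b) The structure quoted above, with condition as a random intercept and
#     participant and group entered as crossed grouping factors:
m_random <- brm(chose_predicted ~ 1 + (1 | condition) + (1 | participant) +
                  (1 | group) + (1 | topic),
                data = d, family = bernoulli())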

My most important concern, however, is not methodological but conceptual. The authors claim that prestige emerges from the task, but I am afraid the very concept of prestige is totally unnecessary to account for the results.

As detailed in the manuscript (and in the authors’ response to a reviewer from the previous round), participants had access to the in-field accuracy of answers from the other participants in the first round. It is true that participants do not have access to information on whether peers also tended to choose the responder with the highest number of correct answers, but assuming that “most people will do as I do” seems rather straightforward to me. In other words, the task is not letting prestige “emerge”; it is just providing an almost perfect proxy (the most frequently chosen responder) for in-field objective accuracy.

Consequently, if, as a responder, I have a perfect proxy for in-field accuracy, I also have a less perfect proxy for general accuracy (as in-field accuracy mathematically contributes to general accuracy), and an even less perfect proxy for between-fields accuracy. That is, the ordering of preferences in the second round does not require assuming the use of any prestige cue at all; plain reasoning based on estimated accuracy suffices.

At present, I see no need for prestige as an intermediate explanatory construct. As mentioned by one of the reviewers in the previous round, it would have been more convincing to actively manipulate prestige cues. Hence, unless I am wrong here (and I hope I am, given the care with which this study has been carried out), my recommendation is not to accept this manuscript for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 2

Miguel A Vadillo

9 Jul 2021

PONE-D-20-39514R2

Trusting the experts: the domain-specificity of prestige-biased social learning

PLOS ONE

Dear Dr. Brand,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

I think that the ms is essentially ready for publication. Before accepting it, I'd like the authors to consider the following changes:

lines 60-62 "Crucially, prestige-bias only evolved, and is only adaptive, because the prestige was first acquired due to success..." -> This sounds like a proven fact ("only evolved") but it is actually speculation based on cultural evolutionary theory of prestige. Consider changing the beginning of the sentence to "According to the cultural evolutionary theory of prestige..." and then remove "Consistent with this cultural evolutionary theory of prestige" in the following sentence.

lines 146-148 In "As before, we use participants..." consider rewriting as "As before, instead of manipulating prestige experimentally, we use participants..."

==============================

Please submit your revised manuscript by Aug 23 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Miguel A. Vadillo, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Aug 11;16(8):e0255346. doi: 10.1371/journal.pone.0255346.r006

Author response to Decision Letter 2


12 Jul 2021

Thank you for taking the time to constructively engage with our work. We have happily made the final changes suggested, as can be seen in our tracked changes document, and the "response to reviewers" document.

Attachment

Submitted filename: response_reviewers_120721.docx

Decision Letter 3

Miguel A Vadillo

15 Jul 2021

Trusting the experts: the domain-specificity of prestige-biased social learning

PONE-D-20-39514R3

Dear Dr. Brand,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Miguel A. Vadillo, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Miguel A Vadillo

29 Jul 2021

PONE-D-20-39514R3

Trusting the experts: the domain-specificity of prestige-biased social learning

Dear Dr. Brand:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Miguel A. Vadillo

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File

    (PDF)

    Attachment

    Submitted filename: response_to_reviewers_.docx

    Attachment

    Submitted filename: response_to_reviewers_rev3.pdf

    Attachment

    Submitted filename: response_reviewers_120721.docx

    Data Availability Statement

    The data underlying the results presented in the study are available from https://github.com/lottybrand/Prestige_2_Analysis.


    Articles from PLoS ONE are provided here courtesy of PLOS
