Science Advances. 2024 Oct 23;10(43):eadp8775. doi: 10.1126/sciadv.adp8775

Youths’ sensitivity to social media feedback: A computational account

Ana da Silva Pinho 1,*, Violeta Céspedes Izquierdo 1, Björn Lindström 2, Wouter van den Bos 1,3
PMCID: PMC11498218  PMID: 39441931

Abstract

While it is often argued that continuous exposure to social feedback is specifically challenging for the hypersensitive developing brain, empirical evidence is lacking. Across three studies, we reveal the developmental differences and computational mechanisms that underlie the social media engagement and feedback processing of adolescents and adults. First, using a reinforcement learning model on a large Instagram trace dataset (N = 16,613, 1.6+ million posts), we show that adolescents are more sensitive to social feedback than adults. Second, in an experimental study (N = 194), we show that adolescents’ mood is affected more strongly by a reduction in likes than that of adults. Last, in a neuroimaging study (N = 96), we show that social media feedback sensitivity is related to individual differences in subcortical-limbic brain volumes of emerging adults. Together, these findings highlight the need for digital competence programs to help youth manage the constant feedback they encounter on social media platforms.


Adolescents are more sensitive to social media feedback than adults, and their mood is affected by variations in likes.

INTRODUCTION

Present-day youth is growing up in a social media–saturated world where technology plays a central role in shaping most of their experiences. Access to social media has become indispensable in the lives of today’s youth, commonly defined as individuals aged 15 to 24 (1). Here, we focus on two developmentally distinct yet partly overlapping periods within this category: adolescence (studies 1 and 2; included ages: 13 to 20) and emerging adulthood (study 3; included ages: 18 to 24). These developmental stages integrate distinct biological, social, and psychological changes. Adolescence is marked by puberty, physical changes, identity exploration, increased independence, and the development of more complex cognitive abilities (2). Emerging adulthood not only involves further psychosocial development, including identity formation, but also transitions to higher education or the job market, as well as financial independence (3). Hence, social media may affect individuals differently depending on the developmental window. The rise of social media use has created parental and societal fears over youth’s social and psychological well-being (4, 5), with the suggestion that the impact of social media on the still-maturing brain increases the chances of developing addictive behaviors or depressive symptoms (6–8).

One of the main worries is that youths are repeatedly driven to engage in social media use by their increased sensitivity to social feedback and a strong need to belong (6, 9, 10). Receiving likes on social media is experienced as socially rewarding by recipients (11, 12), results in temporary increases in self-esteem (13), and is reported to provide youth with a sense of belonging (14). However, likes have also been shown to be strong reinforcers driving social media engagement in adults, which may lead to compulsive or addiction-like behaviors (11, 15, 16). Furthermore, not receiving feedback can be experienced as social rejection and can reduce self-esteem (17, 18). Adolescence is a developmental period during which both reward and rejection sensitivity are particularly strong (19, 20) and have, respectively, been linked to increased impulsive behavior (21, 22) and depressive symptoms (23, 24). Together, these results support the hypothesis that social media feedback may play a direct role in both increased social media engagement and mental health outcomes for youth. However, there are currently no studies that have directly investigated how youths respond to social feedback on social media platforms.

Over the past decade, research on the link between social media use and mental health outcomes has not yielded consistent results (25, 26). Recent meta-analyses and reviews have identified an overreliance on subjective and high-level measures of social media use, such as self-reported screen time, as a key limitation in the field (27, 28). Screen time reveals little about what youths experience or what they do online, and recently, it has been shown that self-reported screen time is not even a good predictor of objectively recorded screen time (29). Considering the widespread mental health crisis among young people (30, 31), alongside the potential influence of social media, and the current limitations in the literature, it is crucial to deepen our understanding of how youths interact with and respond to social media feedback and its effects on their mood. To directly address this issue, we rely on computational analysis of Instagram trace data (i.e., real app posting and feedback data) and on an experimental study.

We built our computational analysis on a validated computational reinforcement learning (RL) model (Fig. 1) (15), based on animal learning theory, which explains how foraging behavior is optimized through the balance between effort and opportunity costs, to ultimately maximize the average rate of rewards (32, 33). The model makes a specific prediction of online engagement as a function of social feedback—in this case, likes. That is, the more likes a person receives, the sooner this person will post again, and vice versa. In addition, the model assumes that there is an effort cost of posting (opening the app and creating content) that increases with the rate of posting (hence putting a limit on the posting rate).

Fig. 1. Computational approach.


(A and B) The RL model states that the agent selects the latency until it posts again, τPost(t), after receiving feedback on the current post. This posting latency is drawn from an exponential distribution determined by the current policy (Policy_t). The model posits that the agent’s posting latency is influenced by the number of likes received (e.g., 23 likes, represented by the heart). It states that the agent maximizes the reward rate by adjusting the policy (Policy update) after receiving a certain number of likes for a particular post. The policy is adjusted on the basis of the learning rate (α), the change in posting latency (ΔτPost(t)), and the net reward prediction error (δ). The learning rate parameter indicates the sensitivity to social media feedback and is our prime variable of interest. The δ is the difference between the reward received and the average net reward rate (R), which reflects both the effort cost associated with quick responses and the opportunity cost (or missed opportunities) of slow responding. The RL model was fit to individual Instagram trace data (see Fig. 2, A to C, for frequency distributions), where we estimated individual values for three free parameters: learning rate, α; effort cost sensitivity, C; and initial policy, ρ, based on maximum likelihood estimation. We focused on the first two parameters to test age differences in sensitivity to likes (α) and effort associated with posting (C; see Fig. 2, D and E).

As a result, this model provides two parameters that can explain individual or developmental differences in social media engagement: (i) the learning rate (α), capturing the sensitivity to social feedback; and (ii) the effort cost (C), which captures the effort associated with posting. By quantifying the sensitivity to social media feedback and the effort required to post, this framework provides a structured way to directly test age differences in the impact of social media feedback on social media engagement. On the basis of the reward sensitivity hypothesis (34), we expect adolescents to be more sensitive to social feedback and thus exhibit a higher learning rate compared to adults. In addition, given that adolescents are considered digital natives, we expect them to display a lower effort cost than adults.
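To make these dynamics concrete, the loop below sketches one plausible Python implementation. The exponential latency draw and the sign of the policy update follow the description above, but the exact functional forms of the effort term and the running net reward rate, and all parameter values, are illustrative assumptions rather than the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_posting(alpha=0.002, cost=50.0, rho=1.0, n_posts=200,
                     like_rate=100.0):
    """Hedged sketch of the posting-latency RL model: each latency is
    drawn from an exponential distribution whose mean is the current
    policy, and the policy is nudged by the learning rate alpha in
    response to the net reward prediction error."""
    policy = rho                       # mean posting latency (e.g., days)
    avg_net_reward = 0.0               # running estimate of net reward rate R
    latencies = []
    for _ in range(n_posts):
        tau = rng.exponential(policy)             # latency until next post
        likes = rng.poisson(like_rate)            # feedback on this post
        effort = cost / max(tau, 1e-3)            # faster posting costs more
        net_reward = likes - effort
        delta = net_reward - avg_net_reward * tau  # net reward prediction error
        # Positive delta (better than expected) shortens the latency policy.
        policy = max(policy - alpha * delta, 1e-3)
        avg_net_reward += 0.1 * (net_reward / max(tau, 1e-3) - avg_net_reward)
        latencies.append(tau)
    return np.array(latencies)

latencies = simulate_posting()
```

Fitting such a model to trace data would replace the simulated likes with a user's observed feedback and optimize α, C, and ρ by maximum likelihood.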

RESULTS

Social feedback sensitivity on Instagram

In our preregistered study, we tested the hypothesis that adolescents would be more sensitive to social feedback and show a higher learning rate compared to adults in a large Instagram dataset (35) consisting of Instagram posts of adolescents (study 1: n = 7718, estimated ages: 13 to 19 years) and adults (n = 8895, estimated ages: 30 to 39; total posts = 1,724,926; for details of the sample, see Materials and Methods). Instagram is a social media platform that is popular with both youths and adults and thus allows for a direct comparison of social feedback sensitivity. As expected, adolescents showed a significantly higher learning rate, that is, 44% greater [mean (M) α = 0.0009], compared to adults [Mα = 0.0006; Welch two-sample t test: t(13965) = 4.81, P < 0.001; with a small effect size d = 0.08; Fig. 2D]. However, adolescents did not show a significantly lower effort cost (MC = 81) than adults [MC = 78; Welch two-sample t test: t(10877) = 0.148, P = 0.88; Fig. 2E]. These differences survived numerous robustness checks and model specifications [see Supplementary Results (S1.1) and table S1 for details and model comparison]. Furthermore, given that posts can accumulate many likes, which may make users less sensitive to additional feedback, we also tested the utility function of likes [see Supplementary Results (S1.2) for more details]. Together, these data support the hypothesis that adolescents’ social media engagement is more strongly motivated by their response to likes compared to adults, and is not merely a reflection of their superior skills as digital natives (i.e., not the effort cost).
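As a sketch of the statistics behind this comparison, the snippet below runs a Welch two-sample t test and computes Cohen's d on simulated learning rates. The gamma-distributed spread is an assumption chosen only so that the group means echo the reported 0.0009 versus 0.0006; none of this reproduces the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated learning rates; gamma(shape=0.5, scale=s) has mean 0.5 * s,
# so the group means match the reported 0.0009 and 0.0006.
adolescents = rng.gamma(shape=0.5, scale=0.0018, size=7718)
adults = rng.gamma(shape=0.5, scale=0.0012, size=8895)

# Welch's t test does not assume equal variances (equal_var=False).
t, p = stats.ttest_ind(adolescents, adults, equal_var=False)

# Cohen's d with the pooled standard deviation of the two groups.
pooled_sd = np.sqrt((adolescents.var(ddof=1) + adults.var(ddof=1)) / 2)
cohens_d = (adolescents.mean() - adults.mean()) / pooled_sd
```

Note how a difference can be highly significant (large n) while the standardized effect size stays small, as in the reported d = 0.08.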

Fig. 2. Social media engagement of adolescents is driven by sensitivity to social feedback.


(A to C) Frequency distributions of Instagram trace data of adolescents (in purple) and adults (in orange). (A) Adolescents were on average slower, with a posting latency of 2.20 days, while adults were faster and posted on average every 1.26 days. (B) Overall, adolescents received more likes for their posts, with a mean of 207.5 likes, compared to adults with a mean of 97.28 likes. (C) Adolescents posted on average 71 posts, while adults posted on average 132 posts. (D and E) Mean comparison of the model parameters between adolescents and adults: (D) Adolescents showed a significantly higher learning rate than adults, (E) but adolescents’ effort cost was not significantly different from that of adults.

Social media and mood: Experimental evidence

Adolescence is a period of heightened sensitivity to both rewards and social rejection, and the absence of expected social feedback could significantly affect individuals’ moods (for better or worse). Converging evidence from neuroscience studies suggests that mood may be represented as a running average of prediction errors (36, 37); when individuals receive more likes than they expected, their mood will increase, and conversely, receiving fewer likes than expected will worsen it. This specific mechanism suggests that there is a direct link between the learning rate (sensitivity to likes) and mood changes. That is, higher sensitivity leads to greater mood variations in response to changes in likes. Hence, changes in social feedback could affect mental health by triggering constant shifts in mood (38). On the basis of simulations, we predicted that adolescents would show greater mood changes in response to a sudden change in likes received on social media (see Fig. 3B). More specifically, given that fewer likes, or their absence, may have a more profound effect on their mood, evoking negative feelings (39), we expected that adolescents would reduce their posting behavior and report a greater reduction in mood when confronted with a sudden decline in the number of likes compared to adults.

Fig. 3. Adolescents feel more negative after a reduction in likes than adults.


(A) Participants scrolled a meme feed and posted for 14 min. They posted by pressing “post” and selecting a meme from a set of 6. After posting, they received feedback (32 likes; fig. S3A) and returned to the main screen where they could scroll/post and see feedback for their last post. (B) Mood-RL simulations. Current mood (mt) is updated by mt+1 = mt + η(δt − mt), where the prediction error (δt) is generated on the basis of the RL model. Mood (m) increases with positive δ (more likes than expected) and decreases with negative δ. The mood learning rate (η) was held constant for both groups, which differed only in the core learning rate from the RL model (based on study 1 results). Adolescents show larger mood fluctuations (dark purple line; average mood change) than adults (dark orange line; average mood change) as likes change (dots). Faded lines show 100 simulations of mood change for both groups. (C) Mean mood in response to variations in likes. Jittered points represent single data points, truncated for easy visualization (see fig. S3C for frequency distributions). (D) Mean response latencies (in seconds; ±1 SEM) to high and low rewards. Jittered points represent single data points, truncated for visualization (see fig. S3B for frequency distribution).
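The running-average account of mood can be written in a few lines. This is a minimal sketch of the update m_{t+1} = m_t + η(δ_t − m_t); the mood learning rate η = 0.1 and the example prediction errors are illustrative values, not the simulation settings used in the paper.

```python
def update_mood(mood, delta, eta=0.1):
    """Mood as a running average of prediction errors:
    m_{t+1} = m_t + eta * (delta_t - m_t)."""
    return mood + eta * (delta - mood)

# Fewer likes than expected (negative delta) drags mood down; larger
# deltas (as produced by a higher core learning rate) yield bigger swings.
mood = 0.0
for delta in [5, 5, -10, -10, -10]:   # likes minus expectation
    mood = update_mood(mood, delta)
```

After the run of negative prediction errors, the mood variable ends below zero, mirroring the predicted dip after a sudden decline in likes.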

We used a preregistered online experiment mimicking features of social media platforms (Fig. 3A), such as Instagram, to investigate how social feedback would affect mood. In this experiment, adolescents (study 2: n = 92, ages: 16 to 20) and adult participants (n = 102, ages: 30 to 40) could scroll in a feed and post “memes” for which they received real feedback (number of likes). To test the impact of changes in likes on mood, we manipulated the number of likes participants could receive for their posts by providing them with different sets of pre-liked memes. In the high reward (HR) condition, participants received between 28 and 34 likes for their posts, and in the low reward (LR) condition, between 6 and 18 likes (see Materials and Methods for more details). Furthermore, participants reported their mood on three occasions: before the experiment started (T1), between HR and LR conditions (T2), and after the experiment (T3).

As expected, contrasting the HR and LR conditions, adolescents’ mood was more affected (T2: M = −5.96) by a decrease in the number of likes they obtained for their posts than adults’ (M = −2.71; U = 3875.5, r = 0.17, P = 0.036; Fig. 3C). Furthermore, while both age groups generally reported feeling more positive after the HR condition (T1; M = 0.70; M = 3.89, respectively; and did not differ at this initial point: U = 4115.5, r = 0.12, P = 0.14), adolescents concluded the experiment with a more negative mood (T3: M = −5.26) compared to adults, who reported a positive mood (M = 1.19; U = 3867.5, r = 0.18, P = 0.03; Fig. 3C). However, posting latencies did not differ between reward conditions and age groups (interaction effect: b = 0.01, SE = 0.07, z = 0.16, P = 0.87), but adolescents posted less often (main effect of age group: b = 0.36, SE = 0.13, z = 2.75, P = 0.01; Fig. 3D). Together, these findings provide evidence that adolescents’ mood is more strongly affected by variations in social media feedback; in particular, they experienced a more negative mood after receiving fewer likes compared to adults. This effect was independent of their self-reported problematic social media behavior or levels of social anxiety [see Supplementary Results (S1.3 to S1.4) and tables S3 and S4 for additional analyses and robustness checks; see also Supplementary Results (S1.5) for further analysis on sex differences].
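For illustration, the snippet below computes a Mann-Whitney U test and a rank-biserial effect size on simulated mood-change scores. Only the direction of the group means follows the reported results; the normal spreads and the specific rank-biserial convention (r = 1 − 2U/(n1·n2)) are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated T2 mood-change scores; adolescents more negative on average.
adolescents = rng.normal(-6.0, 10.0, size=92)
adults = rng.normal(-2.7, 10.0, size=102)

# Nonparametric comparison of the two groups.
u, p = stats.mannwhitneyu(adolescents, adults, alternative="two-sided")

# Rank-biserial effect size derived from U (one common convention).
r = 1 - 2 * u / (len(adolescents) * len(adults))
```

The rank-biserial r is bounded in [−1, 1] and gives a distribution-free counterpart to the effect sizes reported above.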

Social feedback sensitivity and individual differences in brain volume

Our next study added a set of exploratory neuroimaging analyses in a group of older youth (study 3; N = 96, emerging adults, ages: 18 to 24; Fig. 4, A to D). These participants provided their Instagram trace data (fig. S5, A to C, and table S7) consisting of historical social media data (with a mean of 5.74 years of use and a mean age at first post of 14.2; Fig. 4B). These data allow us to focus on the long-term effects of social media use and to identify which brain regions are associated with prolonged exposure to social media feedback. In addition, participants completed questionnaires on self-reported social anxiety and problematic social media behavior. Given our previous findings that adolescents were particularly sensitive to social media feedback, we were specifically interested in the brain regions that were associated with individual differences in learning rates. The sensitivity to social media feedback may be specifically related to the development of subcortical regions involved in feedback processing (34), regions that continue to develop during emerging adulthood (40). First, we fitted the computational RL model to the Instagram trace data to estimate learning rates [see fig. S6A for frequency distribution and Supplementary Results (S1.6) for model comparison]. Then, after extracting the volume of the 83 brain regions, we used random forest analyses with cross-validation (see Materials and Methods) to identify important brain regions associated with our main variable of interest (social feedback sensitivity: α). We used feature permutation importance, with a random variable as a benchmark, to determine the relevant brain regions. The advantage of the random forest method lies in its ability to identify regions that show both linear and nonlinear relationships, and the cross-validation enhances the robustness of our exploratory results.
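A minimal version of this analysis pipeline might look as follows, using scikit-learn. The simulated volumes, the single informative region, and the choice to evaluate importance on the training data (rather than with the cross-validation used in the study) are simplifying assumptions; what the sketch does preserve is the benchmark logic of comparing each region against an appended pure-noise feature.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n, n_regions = 96, 83
X = rng.normal(size=(n, n_regions))                 # region volumes (simulated)
y = 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n)   # learning rate alpha

# Benchmark: append a pure-noise feature; a region is deemed relevant
# only if it outranks this random variable in permutation importance.
X = np.column_stack([X, rng.normal(size=n)])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)

noise_importance = imp.importances_mean[-1]
relevant = np.where(imp.importances_mean[:-1] > noise_importance)[0]
```

Permutation importance shuffles one feature at a time and measures the drop in model score, which is why it can pick up nonlinear region-behavior relationships that a linear coefficient would miss.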

Fig. 4. Brain regions associated with social feedback sensitivity, social anxiety, and problematic social media use.


(A) List of both cortical and subcortical brain regions related to individuals’ sensitivity to social feedback on social media. The volumes of the 83 regions of the Desikan-Killiany [DK; (73)] atlas were extracted (74) from the T1-weighted scans using the FreeSurfer pipeline [see Supplementary Results (S1.9) for more details]. (B) Years of social media use per participant, spanning from the age of their first Instagram post (shown in purple) to the age at which brain data were collected (shown in green). (C) Cortical regions of the DK atlas (left hemisphere on the bottom and right hemisphere on the top) related to feedback processing. (D) Subcortical regions predicting social feedback sensitivity (α; light green) and overlapping regions between α and social anxiety (SA; right ventral DC), between α and problematic media use (PMU; right putamen), and across all (left amygdala). All these features performed better than the benchmark random feature.

As expected, we found that several subcortical regions involved in basic feedback processing are important in predicting social feedback sensitivity (α), such as the amygdala, ventral diencephalon (DC), pallidum, and the putamen, alongside additional cortical regions (see Fig. 4A). We further explored these findings by performing random forest analyses using social anxiety and problematic social media behavior as dependent variables to determine whether there were similar brain regions associated with social feedback sensitivity. Each construct showed its own unique associations (Fig. 4, C and D), but the amygdala stands out as the single brain region that is associated with all three constructs (Fig. 4D; see fig. S7, A to C, for a full list of brain regions and tables S8 to S10).

DISCUSSION

At a time of increased worries about the impact of social media on the youth, we focused on the most common affordance across all social media platforms, receiving likes. Across two studies using trace and experimental data, we found converging evidence suggesting that youths exhibit heightened sensitivity to social media feedback compared to adults. Replicating earlier work (15), we found that changes in likes may increase or decrease the motivation to engage with the platform. We extended these findings in two major ways: (i) we show that youths are more sensitive to likes and that (ii) changes in likes may result in greater changes in mood. Last, exploratory neuroimaging analyses suggest that the amygdala is a key region that is related to individual differences in sensitivity to likes, social anxiety, and problematic social media use. In the following sections, we unpack each of these results and discuss their broader implications.

Results from study 1 show that the social media engagement of adolescents is more strongly driven by their sensitivity to social media feedback and not just their dexterity in posting as digital natives. It is important to note that the higher learning rate indicates sensitivity not only to receiving more likes than expected but also to the unexpected absence of likes. Consequently, adolescents will engage more strongly with social media platforms if they receive many likes, but at the same time, they will also disengage more quickly when the likes are not forthcoming. Our findings are in line with theories that suggest that youths have a strong motivation to engage with social media to gain social validation (13, 41, 42) but are also affected more strongly by social rejection (43). Another potential mechanism underlying the observed behavioral differences between the age groups is the perceived size and representativeness of the audience. Adults may be less responsive to likes compared to adolescents if their social media audience is smaller and less reflective of their broader social group outside social media. However, this does not exclude the interpretation that adolescent sensitivity to social feedback, well established in previous work (44–48), contributes to their stronger responses to social feedback compared to adults in similar experimental settings. This pattern is also observed in our study 2, where both adolescents and adults were given the same experimental manipulations involving social media feedback from unknown sources, and adolescents still exhibited stronger responses to variations in the number of likes than adults. Future research could benefit from examining the relative weights of different sources of likes. For instance, a like from a friend may carry more weight than a like from a random stranger. Nevertheless, enhancing awareness of available features, such as the ability to conceal likes on one’s account, an option implemented by Instagram in 2021, could markedly affect how youths engage with social media.

Next, we experimentally demonstrate that adolescents’ mood is more strongly affected by a reduction in the number of likes they receive on a social media simulator. This finding aligns with existing literature highlighting adolescents’ increased sensitivity to social approval and validation (48): adolescents may therefore perceive the number of likes as a direct reflection of their social status and personal worth (13, 41). In particular, negative emotional responses to a lack of positive feedback may be related to heightened rejection sensitivity (49), which, if experienced often, could potentially lead to mental health consequences (50, 51). Adolescents’ sensitivity to social feedback is not unique to social media, as suggested by numerous studies in other contexts (44, 45, 47, 48). This sensitivity, and mood variability in general, accompanies the ongoing changes in multiple domains of their lives: biological, social, and psychological. Although our experimental study does not focus on or directly measure mood variability, our results indicate that specific patterns of social media feedback may contribute to increased mood variability in adolescence. During this period, individuals may thus be particularly sensitive to social media design features that provide immediate and quantifiable feedback. In contrast, adults may have a more established self-concept and be more resilient to such social media metrics.

Currently, adolescents spend a substantial amount of time on social media [e.g., with 45% of American adolescents saying they use the internet “almost constantly” (52)]. Given this extensive exposure to social media feedback, even small effects of likes on mood may significantly affect adolescents’ mood dynamics in daily life. The amount and frequency of social feedback are much higher than previously possible, creating a more intense environment compared to sporadic offline interactions. Given that even a small intervention can change mood and platform engagement, as our study suggests, it is crucial to further investigate the cumulative effects of feedback, especially since increased mood variability is a key predictor of future mood disorders (51). Our results underscore the urgency of acquiring more comprehensive data and methods, such as data donation and ecological momentary assessment, respectively, to fully understand the implications of social media feedback sensitivity on psychological well-being. Moreover, previous research has indicated sex differences in the relationship between social media use and well-being during adolescence (53, 54). Future empirical work could investigate sex differences in mood responses to social media feedback as well as in the context of reward learning in social media behavior.

Our exploratory computational neuroimaging analyses revealed that individual differences in the volume of key subcortical regions of the valuation network (55) are associated with the sensitivity to likes. This supports the hypothesis that sensitivity to social media feedback in youth is, to some extent, related to developmental differences in brain structure and function (34). Furthermore, studies on broader social interactions have shown that social feedback in the form of exclusion and rejection is related to neural responses in the anterior cingulate cortex, right ventral prefrontal cortex, and amygdala (56, 57), which may be similar in face-to-face interactions and virtual environments (34, 58). Social feedback in the form of acceptance is related to youths’ heightened activation in the ventral striatum and prefrontal cortex (59, 60). In addition, habitual social media checking behaviors among adolescents may be associated with changes in neural sensitivity to the anticipation of general social rewards and punishments, which could have implications for psychological adjustment (6). Our findings support the notion that there may be some overlapping neural mechanisms, including key regions of the limbic and valuation networks, underlying offline and online social feedback sensitivity.

Last, our analyses revealed that the amygdala is a key region involved in processing social media feedback, and it is related to individual differences in problematic social media use and social anxiety. While our results suggest that the amygdala is involved in these processes, it is important to note that this does not imply direct causation, and these processes were also associated with distinct networks of regions. The amygdala’s role in processing emotional responses can contribute to different outcomes depending on individual circumstances and contexts. In addition, we emphasize that we focused on individual differences in normative social anxiety rather than on clinical cases. Future research should include fine-grained developmental windows and longitudinal data to investigate how youths’ sensitivity to online social feedback is associated with the development of subcortical valuation regions.

In our analyses, we focused on one of the most salient, and seemingly innocent, affordances that is common across almost all social media platforms: receiving likes. This focus already allowed us to show meaningful developmental variance in an important cognitive mechanism, but of course, social media platforms provide more complex and qualitative opportunities for feedback. A logical next step would be to focus on comments, which are among the most common affordances of social media, offering a potentially more nuanced way of providing feedback. Previous research has already shown that the level of affect associated with messages and comments increases their impact (61). Incorporating automated sentiment analysis (62) could extend the computational analysis to integrate the impact of comments, including their valence and affective content. The current computational framework is one example of, and could be integrated into, the broader effort to use multimodal data sources. This would help develop precise computational phenotypes to identify the multitude of different types of social media use (28) and gain deeper insights into the types of users and use, in order to provide tailored interventions (63).

In sum, our results suggest that the current design of social media platforms, characterized by immediate quantified social feedback, may be more impactful for youths. Adolescence is a crucial period marked by heightened sensitivity to peer approval and rejection, and the current prevalence of likes as a proxy for online approval fosters a culture of comparison and validation-seeking behavior. It is also a period of rapid socio-emotional changes that can affect long-term mental health and well-being (64). Our findings highlight the importance of considering age-specific user policies and strategies in the design of social media platforms, suggesting two avenues for intervention. First and foremost, platforms should change incentive structures, for example, by shifting the emphasis away from likes toward more meaningful engagement; the possibility of hiding likes is an interesting step in this direction. Second, our results suggest that we should not only focus on strengthening the digital literacy of youth, a generation that may often be more digitally literate anyway, but also on developing skillful emotion regulation in online environments. Here, the constructs of digital competence (65) and digital maturity (66) are helpful tools. Social media–generated emotions can occur frequently, at any time, and even unnoticed by others. An approach addressing online emotion regulation skills may be crucial for youth to deal with the constant variation in feedback they are exposed to on social media.

MATERIALS AND METHODS

Social media data

The data for study 1 were based on two datasets published by a previous study (35), in which the data were originally collected over 6 months between October 2014 and March 2015 using the Instagram Application Programming Interface (API). These two datasets consist of Instagram data from two developmental groups: adolescence (estimated ages: 13 to 19) and adulthood (estimated ages: 30 to 39). Initially, data from 2 million random Instagram users were collected. These data were further reduced to 10,000 active users per age group based on several automated heuristics and manual verification by human judges [see (35) for a full description of the procedure]. The datasets were made available after an agreement had been signed for the use of the Instagram data. Given the purposes of our study, we focused on the datasets that originated from the verified profile-based and tag-based samples, including 10,000 adolescents and 10,000 adults. These datasets contained data pertaining to users’ activities, such as number of posts, timestamps (date of the post), and number of likes. Following the procedure used by (15), we excluded individuals with fewer than 10 posts and those cases in which the timestamp was not available. The final datasets consisted of 1,724,926 posts from 7718 adolescent and 8895 adult Instagram users.
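In pandas, the exclusion step might look like the sketch below; the toy DataFrame and the column names are assumptions for illustration.

```python
import pandas as pd

# Toy trace data; real columns per the text: user id, timestamp, likes.
posts = pd.DataFrame({
    "user_id": [1] * 12 + [2] * 5,
    "timestamp": pd.date_range("2014-10-01", periods=17, freq="D"),
    "likes": range(17),
})

# Drop posts without a timestamp, then drop users with fewer than
# 10 posts (i.e., keep users with at least 10).
posts = posts.dropna(subset=["timestamp"])
posts = posts.groupby("user_id").filter(lambda g: len(g) >= 10)
```

After filtering, only the user with 12 posts remains; the 5-post user is excluded, mirroring the "fewer than 10 posts" criterion.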

Study 2 was conducted online, and we recruited participants from English-speaking countries via Instagram to participate in a study about social media. A total of 211 participants (102 adolescents and 109 adults) completed the study, but after data exclusions, the final sample consisted of 92 adolescents and 102 adults (see data exclusion criteria). Participants were compensated with a $5 voucher for their participation in the study, and a participant from each age group was randomly selected to win a $50 voucher.

Study 3 was part of a larger project on social learning and social media use and consisted of a survey completed online and a magnetic resonance imaging (MRI) scan at the Spinoza Centre for Neuroimaging. A total of 106 participants provided their Instagram trace data (posts, timestamps, and number of likes) as well as self-reported social anxiety, problematic social media use, and structural brain data. We applied the same criterion as in study 1 and kept only individuals with more than 10 posts. This reduced the sample from 106 users with Instagram trace data to 96 users and 11,277 Instagram posts. Furthermore, in 2015, Instagram introduced the carousel post, which allows the user to post up to 10 pieces of content (e.g., images) simultaneously. Each piece of content in a carousel counts as an individual post, but all pieces share the same number of likes and the same timestamp (date of the post), so the posting latency between them is 0. To make these data comparable to the trace data in study 1, for each user with carousel posts, we kept only the first post and retained the corresponding number of likes and timestamp. Moreover, these data consist of users’ historical Instagram traces, covering all activity since participants created their accounts. This resulted in an average of 5.74 years (SD = 1.96) of social media use among participants and an average age at first post of 14.2 years (SD = 2.08). We therefore modeled data that overlap considerably with the ages in studies 1 and 2. Although structural brain data were collected when individuals were slightly older (age range: 18 to 24) than the groups in studies 1 and 2, the observed brain structure differences relate to sensitivity to social media feedback over the period of adolescence.
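Because all pieces of a carousel share the same user, timestamp, and like count, the deduplication step can be sketched as a drop-duplicates over user and timestamp (illustrative schema, not the actual dataset):

```python
import pandas as pd

# Toy trace: the first three rows form one carousel post (identical timestamp
# and like count); the last row is an ordinary post.
trace = pd.DataFrame({
    "user_id":   [7, 7, 7, 7],
    "timestamp": ["2020-01-05", "2020-01-05", "2020-01-05", "2020-02-01"],
    "likes":     [30, 30, 30, 12],
})

# Keep only the first post of each carousel, retaining its likes and timestamp.
trace = trace.drop_duplicates(subset=["user_id", "timestamp"], keep="first")
```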
Participants provided their informed consent for participation and received a monetary compensation of €25 to complete the study. The current work was approved by the Ethics Review Board Faculty of Social and Behavioral Sciences of the University of Amsterdam (study 1: 2021-DP-13838; study 2: FMG-2485_2023; study 3: 2019-DP-10814) and was performed in accordance with relevant guidelines and regulations.

Study 1

Model recovery and power analysis

To develop our analysis plan and test our hypothesis that youths are more sensitive to online social feedback than adults, we first simulated two datasets generated by the same process, differing only in the average learning rate (α), to generate predictions for reward sensitivity in the two developmental populations. To simulate the behavior of our adult group, we drew values for the learning rate from a truncated normal distribution with a mean of 0.002 and an SD of 0.002 (truncated by excluding values < 0). In addition, for each simulated agent, we drew a value for the cost parameter from a uniform distribution between 0 and 1, and the number of observations was drawn from a normal distribution with a mean of 70 and an SD of 20 (close to the mean number of posts per individual in the adult empirical dataset). On the basis of these settings, we generated 1000 simulated adult agents. To simulate the adolescent dataset, we changed the mean of the generative distribution for the learning rate to 0.003, keeping all other parameters constant. Our first step was to use these simulations to assess model recovery and to determine whether such a small increase in the learning rate would produce a detectable group difference (by performing a power analysis; see fig. S1). After fitting the model, we found that both the learning rate and cost parameters were recovered for the full dataset [r(1959) = 0.18, P < 0.001 and r(1959) = 0.36, P < 0.001, respectively; fig. S2]. The most important question, however, was whether the model could reliably recover the group difference in the learning rate and, if so, whether we could use that to establish the minimal sample size for our empirical data. As expected, we found differences in the learning rate [adult group: mean α = 0.00238; adolescent group: mean α = 0.00294; t(1959) = −7.47, P < 0.001], but no differences were found in the estimated value of the cost parameter [t(1957) = −0.98, P = 0.33].
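The generative settings for the simulated agents can be sketched as follows (a minimal NumPy sketch; the rejection-sampling scheme for the truncation and the fixed seed are our assumptions, not the authors’ exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (assumption)

def truncated_normal(mean, sd, n):
    """Draw n values from normal(mean, sd), resampling values < 0
    (a simple rejection scheme implementing the truncation at 0)."""
    x = rng.normal(mean, sd, n)
    while (x < 0).any():
        bad = x < 0
        x[bad] = rng.normal(mean, sd, bad.sum())
    return x

n_agents = 1000                                   # agents per group
adult_alpha = truncated_normal(0.002, 0.002, n_agents)
adolescent_alpha = truncated_normal(0.003, 0.002, n_agents)
cost = rng.uniform(0, 1, n_agents)                # cost parameter per agent
n_obs = rng.normal(70, 20, n_agents).round()      # observations per agent
```

Note that rejection sampling raises the realized group means slightly above 0.002 and 0.003, but the group difference is preserved.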

Study 2

Experiment

The experiment was conducted online, and participants were recruited via Instagram. Participants provided their informed consent before the start of the study. Overall, the study took between 20 and 25 min (14 min for the experiment and the remainder for instructions and postexperiment questions about demographics, social media behavior, and social anxiety). Participants were informed that the memes they could select from to post had previously been rated by 40 people with a like, a neutral rating, or a dislike in a Qualtrics survey. For the purposes of the current study, we focused on likes.

The experiment resembled key aspects of a social media platform such as Instagram. Participants were able to view and like or dislike humorous images (i.e., memes) in a continuous feed, scrolling down to see content as they wished. The feed contained 410 memes with no feedback (number of likes) visible to participants. These memes were different from those in the reward conditions but had also been rated previously, receiving between 18 and 28 likes. This sharpened the contrast between the meme sets of the high-reward (HR) and low-reward (LR) conditions. To post, participants could press the “post” button, which was always visible on the main screen (see Screenshots of the experiment). When participants decided to post, they selected a meme from a set of six different memes. This was done to prevent participants from creating inappropriate or unethical content while still giving a sense of self-expression. Participants were informed during the instructions that the memes they could select from in each step had been previously rated by 40 people in a prior experiment. Hence, they were aware that the feedback was real. The experiment consisted of two blocks in the same order for all participants: first, the HR condition, in which the pool of memes had between 28 and 34 likes, and second, the LR condition, with memes having between 6 and 18 likes. Participants saw the real feedback (number of likes) for a particular meme every time they posted. Each condition took 7 min to complete. Before the experiment started, participants had the opportunity to try the feed for a minute. In addition, we asked participants to post four times so that they would become familiar with posting and everyone would start from the same posting baseline. Posting response latencies were measured by tracking how much time (in milliseconds, converted to seconds) participants spent interacting with the feed before posting.

Furthermore, participants reported their mood and how they were feeling at that moment (1 = extremely negative to 100 = extremely positive) on three occasions: baseline (before the experiment), after the HR condition, and after the LR condition (end of the experiment). Last, we included postexperimental questions to measure participants’ self-reported social media behavior and well-being to test for group differences in mood change as an exploratory analysis. We focused on participants’ social anxiety (fig. S4, A and C) and problematic social media use (fig. S4, B and D).

Participants and power analysis for reward conditions

We simulated a dataset to determine the sample size required to detect whether the impact of reward condition (high versus low) on posting behavior depended on age group, using a multilevel linear mixed-effects model from the simr package (67). We defined a sample of 100 participants per age group (adolescents versus adults), with each participant completing both the HR and LR conditions. We set the following fixed-effect parameter values based on reasonable effect sizes: intercept = 0.5, reward condition = 0.4, age group = 0.2, and reward condition × age group = 0.3, with a random intercept for subjects of 0.1. On the basis of the power analysis, a total sample of around 200 participants (100 per age group) would provide sufficient power.
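The generative model behind this power simulation can be sketched in Python as follows (the original analysis used simr in R; the 0/1 effect coding, treating 0.1 as the random-intercept SD, and the residual SD of 1 are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed effects from the text; 0.1 is treated here as the random-intercept SD.
b0, b_reward, b_age, b_interaction = 0.5, 0.4, 0.2, 0.3

rows = []
for age in (0, 1):                     # 0 = adolescents, 1 = adults (assumed coding)
    for subj in range(100):            # 100 participants per age group
        u = rng.normal(0, 0.1)         # subject-level random intercept
        for reward in (0, 1):          # 0 = LR, 1 = HR (within subject)
            y = (b0 + b_reward * reward + b_age * age
                 + b_interaction * reward * age + u + rng.normal(0, 1))
            rows.append((age, subj, reward, y))
```

In a full power analysis, many such datasets would be simulated and the interaction term refit to each (as simr does), with power estimated as the proportion of significant fits.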

Data exclusion and quantities of interest

We excluded data from participants who reported being younger than 18 years and resided in countries in which they could not provide informed consent themselves. Furthermore, we excluded from the main analysis participants who exited the experiment window for a considerable amount of time (criterion: 10% of the experiment duration, i.e., 1.4 min). This resulted in a final sample of 92 adolescents and 102 adult participants.

We were particularly interested in participants’ mood and posting behavior. We quantified mood change as the difference score between measurement occasions in each age group (T1 = baseline − HR condition; T2 = HR condition − LR condition; and T3 = baseline − LR condition). Because of the limited number of trials per subject, our experimental data did not allow for a reinforcement learning model including mood changes (applying the rule of at least 10 posts per individual to allow for learning analyses would reduce the sample to 60 adolescents and 80 adults). Hence, we opted for model-free analyses. We quantified response latencies from the first time a participant posted; response latencies were computed as the time between two consecutive posts in the task. As with the real-world Instagram data (study 1), we characterized the first latency as undefined and performed a multilevel generalized linear mixed model specifying a Gamma distribution with a log link function, using the glmmTMB package (68).
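The difference scores and latencies can be illustrated with hypothetical values for a single participant:

```python
# Hypothetical mood ratings (1-100) at the three measurement occasions.
baseline, after_hr, after_lr = 62.0, 70.0, 51.0

# Difference scores as defined above; note that T3 = T1 + T2 by construction.
t1 = baseline - after_hr    # baseline - HR condition
t2 = after_hr - after_lr    # HR condition - LR condition
t3 = baseline - after_lr    # baseline - LR condition

# Response latencies: time between consecutive posts; the first is undefined.
post_times = [12.4, 40.1, 55.0, 90.3]   # hypothetical posting times (s)
latencies = [None] + [b - a for a, b in zip(post_times, post_times[1:])]
```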

Study 3

MRI data acquisition

Structural imaging data were acquired on a 3.0 T MRI scanner (Philips Achieva DS, 32-channel head coil) at the Spinoza Centre for Neuroimaging. The scan consisted of two high-resolution T1-weighted anatomical scans (0.70 mm by 0.81 mm by 0.70 mm, FOV = 256 mm by 256 mm by 180 mm, matrix size = 368 by 318 by 257 slices, TR = 11 ms, TE = 5.2 ms, parallel acquisition technique = SENSE), which were averaged.

MRI data preprocessing

Preprocessing and quality control of the MRI scans were performed with the fMRIPrep pipeline (version 1.5.4). This procedure included artifact removal, cortical surface generation, skull-stripping, cross-modal registration, and standard-space alignment. The T1-weighted images were corrected for intensity nonuniformity with N4BiasFieldCorrection (69), distributed with ANTs 2.2.0 (70) (RRID:SCR_004757), and used as the T1w reference throughout the workflow. Next, this reference was skull-stripped with a Nipype implementation of antsBrainExtraction.sh (from ANTs), using OASIS30ANTs as the target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w using fast [FSL 5.0.9, RRID:SCR_002823; (71)]. Volume-based spatial normalization was implemented, and the T1-weighted images were registered to one standard space (MNI152NLin2009cAsym) through nonlinear registration with antsRegistration (ANTs 2.2.0), using brain-extracted versions of both the T1w reference and the T1w template. For spatial normalization, the selected template was the ICBM 152 Nonlinear Asymmetrical template version 2009c (72) (RRID:SCR_008796; TemplateFlow ID: MNI152NLin2009cAsym; see the workflows in fMRIPrep’s documentation for further details of the pipeline).

Gray matter volume extraction

GM volumes of cortical and subcortical brain areas in both hemispheres, corresponding to the Desikan-Killiany atlas (73), were extracted (74) using FreeSurfer (http://surfer.nmr.mgh.harvard.edu/). GM volumes of 89 areas were scaled to account for brain size using the SupraTentorialVolNotVent parameter, which comprises the GM and WM volumes of the brain (excluding cerebellum, brain stem, ventricles, CSF, and choroid plexus). Given that ventricle size affects WM and GM volumes, the ventricles were subtracted from the total brain volume. In addition, to prevent collinearity between brain regions, the volumes of areas present in both hemispheres were averaged when the hemispheres were highly correlated (i.e., Pearson’s correlation of 0.7 or higher). This reduced the set by 6 regions, leaving a final total of 83 brain regions.
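The scaling and hemisphere-averaging steps can be sketched as follows (toy volumes and illustrative column names, not actual FreeSurfer output):

```python
import pandas as pd

# Toy volumes (mm^3) for three participants; names are illustrative.
vols = pd.DataFrame({
    "lh_amygdala": [1700.0, 1650.0, 1800.0],
    "rh_amygdala": [1720.0, 1660.0, 1790.0],
    "supratentorial_not_vent": [1.1e6, 1.0e6, 1.2e6],
})

# Scale each region by total brain size (SupraTentorialVolNotVent).
for col in ("lh_amygdala", "rh_amygdala"):
    vols[col] = vols[col] / vols["supratentorial_not_vent"]

# Average the hemispheres when they are highly correlated (Pearson r >= 0.7).
r = vols["lh_amygdala"].corr(vols["rh_amygdala"])
if r >= 0.7:
    vols["amygdala"] = vols[["lh_amygdala", "rh_amygdala"]].mean(axis=1)
```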

Feature extraction

We ran random forest regressions and used feature permutation importance to identify brain regions related to social feedback sensitivity, social anxiety, and problematic media use. The random forests were trained using sklearn version 0.24.2 in Python (75). Random forest regression fits several decision tree regressors on bootstrapped subsamples of the dataset, using a different subset of features for each tree; the predictions of the individual trees are averaged into a final prediction. Before training each model to predict learning rates, social anxiety, and problematic media use separately, we normalized all features by scaling them between 0 and 1, making them comparable. Furthermore, we applied a leave-one-out cross-validation (LOOCV) outer loop, in which one participant per loop was used to evaluate the model, which was trained on the remaining participants (n − 1). This yielded one model per participant for the learning rate (n = 76) and for social anxiety and problematic media use (n = 84). To increase reliability and robustness, we set the number of decision trees to 1000 and kept the other hyperparameters at their defaults. We used permutation importance (76), a reliable, model-agnostic method that randomly shuffles each predictor variable and measures how the shuffle affects model accuracy. We averaged permutation importance across five permutations to reduce the chance of a variable appearing meaningful merely by chance. Last, this procedure included a baseline “random” feature, consisting of values between 0 and 1 that are irrelevant for predicting learning rates, social anxiety, and problematic media use. The random feature was newly computed for each loop of the LOOCV, and only those features (i.e., brain regions) with higher importance coefficients than the random feature were considered.
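A condensed sketch of this pipeline on toy data (with fewer trees and a single fixed random baseline feature for brevity; the paper used 1000 trees and redrew the random feature in each loop):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Toy stand-in: 20 "participants", 3 "brain regions"; region 0 drives the target.
X = rng.normal(size=(20, 3))
y = 0.8 * X[:, 0] + rng.normal(scale=0.1, size=20)

X = np.column_stack([X, rng.uniform(0, 1, 20)])  # append random baseline feature
X = MinMaxScaler().fit_transform(X)              # scale all features to [0, 1]

importances = []
for train_idx, test_idx in LeaveOneOut().split(X):
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X[train_idx], y[train_idx])
    # Permutation importance: shuffle each feature 5 times, record accuracy drop.
    pi = permutation_importance(rf, X[train_idx], y[train_idx],
                                n_repeats=5, random_state=0)
    importances.append(pi.importances_mean)

mean_importance = np.mean(importances, axis=0)
# Keep regions whose importance exceeds the random baseline (last feature).
kept = np.where(mean_importance[:-1] > mean_importance[-1])[0]
```

In this sketch the informative region reliably exceeds the random baseline, mirroring the selection criterion described above.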

Self-reported measures: Study 2 and study 3

Problematic social media use

Participants completed an adapted version of the Compulsive Internet Use Scale (77). This version retained the original items but replaced “internet” with “social media.” An example item is “Do you find it difficult to stop using social media when you are online?”; items are rated on a five-point scale ranging from 1 = “never” to 5 = “very often.”

Social anxiety

Participants completed the Interaction Anxiousness Scale [IAS-3; (78)]. An example question is “I often feel nervous even in casual get-togethers,” and items are assessed on a five-point scale ranging from 1 = “not at all” to 5 = “extremely.”

Statistical analysis

Main analyses were conducted in R (RStudio v.1.3.1093) (79). We performed a power analysis for our main analysis in study 1 (independent-samples t test, one-sided; for preregistered hypotheses and analyses: https://osf.io/mt2nv/?view_only=6e232108b6754961b783a9e98c042f3a). We computed the effect size (Cohen’s d) as the difference between the mean generated learning rates of the two datasets divided by the pooled SD. The main analysis in study 1 used a Welch two-sample t test, which is robust to unequal variances between groups. As a robustness check, we additionally computed a nonparametric test (see S2.1 for more details). In study 2 (preregistration: https://osf.io/q2htd/?view_only=ce5582b6d9414f1db552bb83c8d69b66), we simulated a dataset to determine the sample size required to detect the reward condition by age group effect in a multilevel linear mixed-effects model. We computed mood difference scores between all measurement occasions in both age groups and were particularly interested in the mood responses between the HR and LR conditions (T2). To test mean differences in mood responses between age groups, we performed a Mann-Whitney U test at each time point, given the sample sizes and the non-normal distribution of the data. To test whether the effect of reward condition on post latencies depended on age group, we conducted a multilevel generalized linear mixed model specifying a Gamma distribution with a log link function. In study 3, we observed a few outliers, so we winsorized the learning rates before running the random forest to test associations with brain regions.
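The effect-size computation and winsorizing step can be sketched as follows (illustrative draws with the group means reported in study 1, not the fitted learning rates; the 5% winsorizing limits are an assumption, as the paper does not state them):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative learning-rate samples with the group means from study 1.
adult = rng.normal(0.00238, 0.002, 1000)
adolescent = rng.normal(0.00294, 0.002, 1000)

def cohens_d(x, y):
    """Mean difference divided by the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (y.mean() - x.mean()) / pooled_sd

d = cohens_d(adult, adolescent)

# Welch two-sample t test (does not assume equal variances).
t_stat, p_val = stats.ttest_ind(adult, adolescent, equal_var=False)

# Winsorize outliers before the random forest: clip to the 5th/95th percentiles.
clipped = stats.mstats.winsorize(adolescent, limits=(0.05, 0.05))
```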

Acknowledgments

Study 1: We thank the authors of the original data extraction for providing us with access to the datasets. Study 3: We thank S. van der Stappen and E. de Groot for assistance with project administration and pipeline and data collection as well as D. Fleur and S. van den Boogaard for assistance with data collection. B.L. acknowledges the support from a Wallenberg Academy Fellow grant from the Knut and Alice Wallenberg Foundation (KAW 2021.0148) and a Starting Grant (SOLAR ERC-2021-STG – 101042529) from the European Research Council.

Funding: This work was supported by the European Research Council (ERC-2018-StG-803338) and the Netherlands Organization for Scientific Research (NWO-VIDI 016. Vidi.185.068), both awarded to W.v.d.B.

Author contributions: Conceptualization: A.d.S.P., V.C.I., B.L., and W.v.d.B. Methodology: A.d.S.P., V.C.I., B.L., and W.v.d.B. Investigation: A.d.S.P. and V.C.I. Data Curation: A.d.S.P. and V.C.I. Validation: A.d.S.P., V.C.I., and W.v.d.B. Formal analysis: A.d.S.P. and W.v.d.B. Project administration: A.d.S.P., V.C.I., and W.v.d.B. Resources: A.d.S.P. Software: A.d.S.P., V.C.I., B.L., and W.v.d.B. Visualization: A.d.S.P. Supervision: A.d.S.P. and W.v.d.B. Funding acquisition: W.v.d.B. Writing—original draft: A.d.S.P. and W.v.d.B. Writing—review and editing: A.d.S.P., V.C.I., B.L., and W.v.d.B.

Competing interests: The authors declare that they have no competing interests.

Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Data and code can be found under the following link: https://osf.io/m7hw6/. The availability of the trace data in study 1 is described in (35). Preregistration for study 1 is provided at https://osf.io/mt2nv/?view_only=6e232108b6754961b783a9e98c042f3a and for study 2 at https://osf.io/q2htd/?view_only=ce5582b6d9414f1db552bb83c8d69b66.

Supplementary Materials

This PDF file includes:

Supplementary Results

Figs. S1 to S7

Tables S1 to S10

Screenshots of the experiment

References

sciadv.adp8775_sm.pdf (6.3MB, pdf)

REFERENCES AND NOTES

  • 1.UNESCO, Youth (UNESCO, 2024); www.unesco.org/en/youth.
  • 2.Dahl R. E., Allen N. B., Wilbrecht L., Suleiman A. B., Importance of investing in adolescence from a developmental science perspective. Nature 554, 441–450 (2018). [DOI] [PubMed] [Google Scholar]
  • 3.Arnett J. J., Emerging adulthood: A theory of development from the late teens through the twenties. Am. Psychol. 55, 469–480 (2000). [PubMed] [Google Scholar]
  • 4.Bell V., Bishop D. V. M., Przybylski A. K., The debate over digital technology and young people. BMJ 351, h3064 (2015). [DOI] [PubMed] [Google Scholar]
  • 5.Orben A., Przybylski A. K., The association between adolescent well-being and digital technology use. Nat. Hum. Behav. 3, 173–182 (2019). [DOI] [PubMed] [Google Scholar]
  • 6.Maza M. T., Fox K. A., Kwon S.-J., Flannery J. E., Lindquist K. A., Prinstein M. J., Telzer E. H., Association of habitual checking behaviors on social media with longitudinal functional brain development. JAMA Pediatr. 177, 160–167 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Keles B., McCrae N., Grealish A., A systematic review: The influence of social media on depression, anxiety and psychological distress in adolescents. Int. J. Adolesc. Youth 25, 79–93 (2020). [Google Scholar]
  • 8.McCrae N., Gettings S., Purssell E., Social media and depressive symptoms in childhood and adolescence: A systematic review. Adolesc. Res. Rev. 2, 315–330 (2017). [Google Scholar]
  • 9.Achterberg M., Becht A., van der Cruijsen R., van de Groep I. H., Spaans J. P., Klapwijk E., Crone E. A., Longitudinal associations between social media use, mental well-being and structural brain development across adolescence. Dev. Cogn. Neurosci. 54, 101088 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Garrett S. L., Burnell K., Armstrong-Carter E. L., Prinstein M. J., Telzer E. H., Linking video chatting, phone calling, text messaging, and social media with peers to adolescent connectedness. J. Res. Adolesc. 33, 1222–1234 (2023). [DOI] [PubMed] [Google Scholar]
  • 11.Rosenthal-von der Pütten A. M., Hastall M. R., Köcher S., Meske C., Heinrich T., Labrenz F., Ocklenburg S., “Likes” as social rewards: Their role in online social comparison and decisions to like other People’s selfies. Comput. Human Behav. 92, 76–86 (2019). [Google Scholar]
  • 12.Davey C. G., Allen N. B., Harrison B. J., Dwyer D. B., Yücel M., Being liked activates primary reward and midline self-related brain regions. Hum. Brain Mapp. 31, 660–668 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Burrow A. L., Rainone N., How many likes did I get?: Purpose moderates links between positive social media feedback and self-esteem. J. Exp. Soc. Psychol. 69, 232–236 (2017). [Google Scholar]
  • 14.Smith D., Leonis T., Anandavalli S., Belonging and loneliness in cyberspace: Impacts of social media on adolescents’ well-being. Aust. J. Psychol. 73, 12–23 (2021). [Google Scholar]
  • 15.Lindström B., Bellander M., Schultner D. T., Chang A., Tobler P. N., Amodio D. M., A computational reward learning account of social media engagement. Nat. Commun. 12, 1311 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Zell A. L., Moeller L., Are you happy for me … on Facebook? The potential importance of “likes” and comments. Comput. Human Behav. 78, 26–33 (2018). [Google Scholar]
  • 17.Jamieson J. P., Harkins S. G., Williams K. D., Need threat can motivate performance after ostracism. Pers. Soc. Psychol. Bull. 36, 690–702 (2010). [DOI] [PubMed] [Google Scholar]
  • 18.Irmer A., Schmiedek F., Associations between youth’s daily social media use and well-being are mediated by upward comparisons. Commun. Psychol. 1, 12 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Shulman E. P., Smith A. R., Silva K., Icenogle G., Duell N., Chein J., Steinberg L., The dual systems model: Review, reappraisal, and reaffirmation. Dev. Cogn. Neurosci. 17, 103–117 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Quarmley M. E., Nelson B. D., Clarkson T., White L. K., Jarcho J. M., I knew you weren’t going to like me! Neural response to accurately predicting rejection is associated with anxiety and depression. Front. Behav. Neurosci. 13, 219 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.van Duijvenvoorde A. C. K., Peters S., Braams B. R., Crone E. A., What motivates adolescents? Neural responses to rewards and their influence on adolescents’ risk taking, learning, and cognitive control. Neurosci. Biobehav. Rev. 70, 135–147 (2016). [DOI] [PubMed] [Google Scholar]
  • 22.Galván A., The teenage brain: Sensitivity to rewards. Curr. Dir. Psychol. Sci. 22, 88–93 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Silk J. S., Siegle G. J., Lee K. H., Nelson E. E., Stroud L. R., Dahl R. E., Increased neural response to peer rejection associated with adolescent depression and pubertal development. Soc. Cogn. Affect. Neurosci. 9, 1798–1807 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Prinstein M. J., Aikins J. W., Cognitive moderators of the longitudinal association between peer rejection and adolescent depressive symptoms. J. Abnorm. Child Psychol. 32, 147–158 (2004). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Odgers C. L., Jensen M. R., Annual research review: Adolescent mental health in the digital age: Facts, fears, and future directions. J. Child Psychol. Psychiatry 61, 336–348 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Panayiotou M., Black L., Carmichael-Murphy P., Qualter P., Humphrey N., Time spent on social media among the least influential factors in adolescent mental health: Preliminary results from a panel network analysis. Nat. Ment. Health 1, 316–326 (2023). [Google Scholar]
  • 27.Parry D. A., Davidson B. I., Sewall C. J. R., Fisher J. T., Mieczkowski H., Quintana D. S., A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nat. Hum. Behav. 5, 1535–1547 (2021). [DOI] [PubMed] [Google Scholar]
  • 28.Sultan M., Scholz C., van den Bos W., Leaving traces behind: Using social media digital trace data to study adolescent wellbeing. Comput. Hum. Behav. Rep. 10, 100281 (2023). [Google Scholar]
  • 29.Hodes L. N., Thomas K. G. F., Smartphone screen time: Inaccuracy of self-reports and influence of psychological and contextual factors. Comput. Human Behav. 115, 106616 (2021). [Google Scholar]
  • 30.Fuligni A. J., Galván A., Young people need experiences that boost their mental health. Nature 610, 253–256 (2022). [DOI] [PubMed] [Google Scholar]
  • 31.UNICEF, Adolescent Mental Health Statistics (UNICEF DATA, 2019); https://data.unicef.org/topic/child-health/mental-health/.
  • 32.Herrnstein R. J., On the law of effect. J. Exp. Anal. Behav. 13, 243–266 (1970). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Niv Y., Daw N. D., Joel D., Dayan P., Tonic dopamine: Opportunity costs and the control of response vigor. Psychopharmacology 191, 507–520 (2007). [DOI] [PubMed] [Google Scholar]
  • 34.Crone E. A., Konijn E. A., Media use and brain development during adolescence. Nat. Commun. 9, 588 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.K. Han, S. Lee, J. Y. Jang, Y. Jung, D. Lee, Teens are from mars, adults are from venus: Analyzing and predicting age groups with behavioral characteristics in instagram, in Proceedings of the 8th ACM Conference on Web Science, WebSci ‘16 (Association for Computing Machinery, 2016), pp. 35–44; 10.1145/2908131.2908160. [DOI] [Google Scholar]
  • 36.Rutledge R. B., Skandali N., Dayan P., Dolan R. J., A computational and neural model of momentary subjective well-being. Proc. Natl. Acad. Sci. U.S.A. 111, 12252–12257 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Eldar E., Niv Y., Interaction between emotional state and learning underlies mood instability. Nat. Commun. 6, 6149 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Bowen R. C., Wang Y., Balbuena L., Houmphan A., Baetz M., The relationship between mood instability and depression: Implications for studying and treating depression. Med. Hypotheses 81, 459–462 (2013). [DOI] [PubMed] [Google Scholar]
  • 39.Reich S., Schneider F. M., Heling L., Zero likes—Symbolic interactions and need satisfaction online. Comput. Human Behav. 80, 97–102 (2018). [Google Scholar]
  • 40.B. Taber-Thomas, K. Pérez-Edgar, Eds., in Emerging Adulthood Brain Development (Oxford Univ. Press, 2014), chap. 8. [Google Scholar]
  • 41.Sherman L. E., Payton A. A., Hernandez L. M., Greenfield P. M., Dapretto M., The power of the like in adolescence: Effects of peer influence on neural and behavioral responses to social media. Psychol. Sci. 27, 1027–1035 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Nesi J., Prinstein M. J., In search of likes: Longitudinal associations between adolescents’ digital status seeking and health-risk behaviors. J. Clin. Child Adolesc. Psychol. 48, 740–748 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Crone E. A., Achterberg M., Dobbelaar S., Euser S., van den Bulk B., van der Meulen M., van Drunen L., Wierenga L. M., Bakermans-Kranenburg M. J., Neural and behavioral signatures of social evaluation and adaptation in childhood and adolescence: The Leiden consortium on individual development (L-CID). Dev. Cogn. Neurosci. 45, 100805 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Towner E., Chierchia G., Blakemore S.-J., Sensitivity and specificity in affective and social learning in adolescence. Trends Cogn. Sci. 27, 642–655 (2023). [DOI] [PubMed] [Google Scholar]
  • 45.Sebastian C., Viding E., Williams K. D., Blakemore S.-J., Social brain development and the affective consequences of ostracism in adolescence. Brain Cogn. 72, 134–145 (2010). [DOI] [PubMed] [Google Scholar]
  • 46.Blakemore S. J., Mills K. L., Is adolescence a sensitive period for sociocultural processing? Annu. Rev. Psychol. 65, 187–207 (2014). [DOI] [PubMed] [Google Scholar]
  • 47.Smith A. R., Steinberg L., Strang N., Chein J., Age differences in the impact of peers on adolescents’ and adults’ neural response to reward. Dev. Cogn. Neurosci. 11, 75–82 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Somerville L. H., Special issue on the teenage brain: Sensitivity to social evaluation. Curr. Dir. Psychol. Sci. 22, 121–127 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Guyer A. E., Silk J. S., Nelson E. E., The neurobiology of the emotional adolescent: From the inside out. Neurosci. Biobehav. Rev. 70, 74–85 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Sperry S. H., Walsh M. A., Kwapil T. R., Emotion dynamics concurrently and prospectively predict mood psychopathology. J. Affect. Disord. 261, 67–75 (2020).
  51. Maciejewski D. F., Keijsers L., van Lier P. A. C., Branje S. J. T., Meeus W. H. J., Koot H. M., Most fare well—But some do not: Distinct profiles of mood variability development and their association with adjustment during adolescence. Dev. Psychol. 55, 434–448 (2019).
  52. M. Anderson, J. Jiang, Teens, Social Media & Technology 2018 (Pew Research Center, 2018); www.pewresearch.org/internet/2018/05/31/teens-social-media-technology-2018/.
  53. Orben A., Dienlin T., Przybylski A. K., Social media’s enduring effect on adolescent life satisfaction. Proc. Natl. Acad. Sci. U.S.A. 116, 10226–10228 (2019).
  54. Nesi J., Prinstein M. J., Using social media for social comparison and feedback-seeking: Gender and popularity moderate associations with depressive symptoms. J. Abnorm. Child Psychol. 43, 1427–1438 (2015).
  55. Bartra O., McGuire J. T., Kable J. W., The valuation system: A coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage 76, 412–427 (2013).
  56. Eisenberger N. I., Lieberman M. D., Williams K. D., Does rejection hurt? An fMRI study of social exclusion. Science 302, 290–292 (2003).
  57. Chester D. S., Lynam D. R., Milich R., DeWall C. N., Neural mechanisms of the rejection-aggression link. Soc. Cogn. Affect. Neurosci. 13, 501–512 (2018).
  58. Wikman P., Moisala M., Ylinen A., Lindblom J., Leikas S., Salmela-Aro K., Lonka K., Güroğlu B., Alho K., Brain responses to peer feedback in social media are modulated by valence in late adolescence. Front. Behav. Neurosci. 16, 790478 (2022).
  59. Guyer A. E., Choate V. R., Pine D. S., Nelson E. E., Neural circuitry underlying affective response to peer feedback in adolescence. Soc. Cogn. Affect. Neurosci. 7, 81–92 (2012).
  60. Gunther Moor B., van Leijenhorst L., Rombouts S. A. R. B., Crone E. A., Van der Molen M. W., Do you like me? Neural correlates of social evaluation and developmental trajectories. Soc. Neurosci. 5, 461–482 (2010).
  61. Rathje S., Van Bavel J. J., van der Linden S., Out-group animosity drives engagement on social media. Proc. Natl. Acad. Sci. U.S.A. 118, e2024292118 (2021).
  62. Pouwels J. L., Araujo T., van Atteveldt W., Bachl M., Valkenburg P. M., Integrating communication science and computational methods to study content-based social media effects. Commun. Methods Meas. 18, 115–123 (2023).
  63. Beyens I., Pouwels J. L., van Driel I. I., Keijsers L., Valkenburg P. M., Social media use and adolescents’ well-being: Developing a typology of person-specific effect patterns. Communic. Res. 51, 691–716 (2021).
  64. Crone E. A., Dahl R. E., Understanding adolescence as a period of social-affective engagement and goal flexibility. Nat. Rev. Neurosci. 13, 636–650 (2012).
  65. C. de Vreese, D. A. de Vries, J. Piotrowski, Digital Competence Across the Lifespan (Center for Open Science, 2021); https://osf.io/d5c7n/.
  66. Laaber F., Florack A., Koch T., Hubert M., Digital maturity: Development and validation of the Digital Maturity Inventory (DIMI). Comput. Human Behav. 143, 107709 (2023).
  67. Green P., MacLeod C. J., SIMR: An R package for power analysis of generalized linear mixed models by simulation. Methods Ecol. Evol. 7, 493–498 (2016).
  68. Brooks M., Kristensen K., van Benthem K., Magnusson A., Berg C., Nielsen A., Skaug H., Mächler M., Bolker B., glmmTMB balances speed and flexibility among packages for zero-inflated generalized linear mixed modeling. R J. 9, 378–400 (2017).
  69. Tustison N. J., Avants B. B., Cook P. A., Zheng Y., Egan A., Yushkevich P. A., Gee J. C., N4ITK: Improved N3 bias correction. IEEE Trans. Med. Imaging 29, 1310–1320 (2010).
  70. Avants B. B., Epstein C. L., Grossman M., Gee J. C., Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 12, 26–41 (2008).
  71. Zhang Y., Brady M., Smith S., Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging 20, 45–57 (2001).
  72. Fonov V., Evans A. C., Botteron K., Almli C. R., McKinstry R. C., Collins D. L., Brain Development Cooperative Group, Unbiased average age-appropriate atlases for pediatric studies. Neuroimage 54, 313–327 (2011).
  73. Desikan R. S., Ségonne F., Fischl B., Quinn B. T., Dickerson B. C., Blacker D., Buckner R. L., Dale A. M., Maguire R. P., Hyman B. T., Albert M. S., Killiany R. J., An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 31, 968–980 (2006).
  74. Fischl B., Salat D. H., Busa E., Albert M., Dieterich M., Haselgrove C., van der Kouwe A., Killiany R., Kennedy D., Klaveness S., Montillo A., Makris N., Rosen B., Dale A. M., Whole brain segmentation: Automated labeling of neuroanatomical structures in the human brain. Neuron 33, 341–355 (2002).
  75. Pedregosa F., Varoquaux G., Gramfort A., Michel V., Thirion B., Grisel O., Blondel M., Prettenhofer P., Weiss R., Dubourg V., Vanderplas J., Passos A., Cournapeau D., Brucher M., Perrot M., Duchesnay É., Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
  76. Altmann A., Toloşi L., Sander O., Lengauer T., Permutation importance: A corrected feature importance measure. Bioinformatics 26, 1340–1347 (2010).
  77. Meerkerk G.-J., Van Den Eijnden R. J. J. M., Vermulst A. A., Garretsen H. F. L., The Compulsive Internet Use Scale (CIUS): Some psychometric properties. Cyberpsychol. Behav. 12, 1–6 (2009).
  78. Nichols A. L., Webster G. D., Designing a brief measure of social anxiety: Psychometric support for a three-item version of the Interaction Anxiousness Scale (IAS-3). Pers. Individ. Dif. 79, 110–115 (2015).
  79. RStudio Team, RStudio: Integrated Development for R (RStudio, Inc., 2020).
  80. H. Akaike, Information theory and an extension of the maximum likelihood principle, in Selected Papers of Hirotugu Akaike, E. Parzen, K. Tanabe, G. Kitagawa, Eds. (Springer, 1998), pp. 199–213.
  81. Weil L. G., Fleming S. M., Dumontheil I., Kilford E. J., Weil R. S., Rees G., Dolan R. J., Blakemore S.-J., The development of metacognitive ability in adolescence. Conscious. Cogn. 22, 264–271 (2013).
  82. Paulus M., Tsalas N., Proust J., Sodian B., Metacognitive monitoring of oneself and others: Developmental changes during childhood and adolescence. J. Exp. Child Psychol. 122, 153–165 (2014).
  83. Sung S. C., Porter E., Robinaugh D. J., Marks E. H., Marques L. M., Otto M. W., Pollack M. H., Simon N. M., Mood regulation and quality of life in social anxiety disorder: An examination of generalized expectancies for negative mood regulation. J. Anxiety Disord. 26, 435–441 (2012).
  84. Peng P., Liao Y., Six addiction components of problematic social media use in relation to depression, anxiety, and stress symptoms: A latent profile analysis and network analysis. BMC Psychiatry 23, 321 (2023).
  85. Maciejewski D. F., van Lier P. A. C., Branje S. J. T., Meeus W. H. J., Koot H. M., A 5-year longitudinal study on mood variability across adolescence using daily diaries. Child Dev. 86, 1908–1921 (2015).
  86. Mahalingham T., McEvoy P. M., Clarke P. J. F., Assessing the validity of self-report social media use: Evidence of no relationship with objective smartphone use. Comput. Human Behav. 140, 107567 (2023).
  87. Ahmed S. P., Piera Pi-Sunyer B., Moses-Payne M. E., Goddings A.-L., Speyer L. G., Kuyken W., Dalgleish T., Blakemore S.-J., The role of self-referential and social processing in the relationship between pubertal status and difficulties in mental health and emotion regulation in adolescent girls in the UK. Dev. Sci. 27, e13503 (2024).
  88. Twenge J. M., Martin G. N., Gender differences in associations between digital media use and psychological well-being: Evidence from three large datasets. J. Adolesc. 79, 91–102 (2020).
  89. S. Champely, C. Ekstrom, P. Dalgaard, J. Gill, S. Weibelzahl, A. Anandkumar, C. Ford, R. Volcic, H. De Rosario, pwr: Basic Functions for Power Analysis (2017); R package version 1.2-2.

Supplementary Materials

Supplementary Results

Figs. S1 to S7

Tables S1 to S10

Screenshots of the experiment

References

sciadv.adp8775_sm.pdf (6.3 MB, PDF)