Significance
Using a dataset linking administrative government data to the online behavior of Danish Twitter users, this study estimates the associations between hostility in social media interactions and offline individual-level dispositions and childhood environments. The study shows that users with many more criminal verdicts, more time spent in foster care, better primary school grades, and higher childhood socioeconomic status are more hostile on social media, in part, because such factors predict online engagement in political discussions, which is a major correlate of hostility. This research not only broadens our understanding of the drivers behind social media aggression but also suggests that interventions to reduce online hostility must consider the complex interplay of online and offline lives.
Keywords: online hostility, social media, childhood dispositions
Abstract
Reducing hostility in social media interactions is a key public concern. Most extant research emphasizes how online contextual factors breed hostility. Here, we take a different perspective and focus on the offline roots of hostility, that is, offline experiences and stable individual-level dispositions. Using a unique dataset of Danish Twitter users (N = 4,931), we merge data from administrative government registries with a behavioral measure of online hostility. We demonstrate that individuals with more aggressive dispositions (as proxied by having many more criminal verdicts) are more hostile in social media conversations. We also find evidence that features of childhood environments predict online hostility. Time spent in foster care is a strong correlate, while other indicators of childhood instability (e.g., the number of moves and divorced parents) are not. Furthermore, people from more resourceful childhood environments—those with better grades in primary school and higher parental socioeconomic status—are more hostile on average, as such people are more politically engaged. These results offer an important reminder that much online hostility is rooted in offline experiences and stable dispositions. They also provide a nuanced view of the core group of online aggressors. While these individuals display general antisocial personality tendencies by having many more criminal verdicts, they also come from resourceful backgrounds more often than not.
Any hope that social media will “turn the lonely, antisocial world ... into a friendly world” (1) has long been shattered. Interactions on social media are uncivil, aggressive, and hostile (2). As a result, interacting with strangers on social media is a source of considerable anxiety (3), and this may lead to, first, polarized views of political outgroups (4) and, second, reduced motivation for online engagement, despite social media’s increasing importance in, for example, democratic politics (5). Accordingly, understanding the roots of hostility in social media interactions has emerged as a key public concern.
Significant progress has been made in this regard. Multiple studies have suggested that contextual features of the online environment, such as anonymity and algorithms tuned to maximize engagement, are contributing to more hostility (6, 7). One prominent study observed that these features may trigger people to become more hostile when they are tired and irritable and concluded that, as a result, “anyone can become a troll online” (8).
At the same time, emerging new research provides clear evidence that online hostility is not equally distributed among individuals who log on to social media platforms. In fact, hostility seems concentrated in a small and stable set of users (2, 9, 10). Understanding the root causes of the behavior of this small segment—disrupting the possibility of online engagement for the larger, nonhostile majority—is a critical research challenge if we are to reduce the hostility of online interactions.
The concentration of online hostility in a subset of users is consistent with decades of research in personality psychology and criminology, which demonstrates that while everyone can become angry, and perhaps even offensive, some people are much more likely to be aggressive than others and that aggressive dispositions are stable over time (11). Multiple studies have documented the correlations between different types of aggression, suggesting a common root in stable dispositions. Measures of verbal aggressiveness have, for example, been found to correlate with both physical aggression and arrests (12–14). Aggressive individuals’ hostile motivations affect their online behaviors, too. Personality psychologists have documented this most carefully in the context of cyberbullying, primarily among adolescents. An influential article by Buckels and colleagues demonstrated that “cyber-trolling appears to be an Internet manifestation of everyday sadism” (15). Students with dark personality traits self-reported more cyberbullying or trolling in the Netherlands (16), Great Britain (17), and New Zealand (18). A similar relationship was found in a small convenience sample of Icelandic adult Facebook users (19). Finally, aggressive dispositions and offline and online hostility also correlate among English-speaking and Chinese online gamers (20, 21).
In this study, we build on this rich literature and provide compelling new data demonstrating a strong influence of stable dispositions on online hostility in social media discussions. We advance beyond prior studies by utilizing behavioral, non-self-reported measures of both online hostility and aggressive predispositions recorded several years (sometimes decades) apart. Specifically, this study links behavioral data from a very large social media platform, Twitter (renamed X after our data collection), with government administrative records. Using an exploratory design (i.e., the present analyses were not preregistered), we study the correlates of online hostility in a diverse sample of Danish citizens who disclose their own (unique) names on Twitter (N = 4,931). In SI Appendix, we demonstrate one key advantage of using administrative records over opt-in surveys: people who respond to surveys about social media behavior (SI Appendix, Fig. S3), and who share their social media data with researchers (SI Appendix, Fig. S4), are systematically different from those who do not. In other words, prior efforts to link offline data to social media behavior were potentially biased by selection effects and nonresponse. In contrast, our analytical strategy is not vulnerable to these biases.
How can we identify offline aggressive predispositions using administrative records? We identify indicators related to three broad theories of aggression. First, personality researchers often argue that aggression is rooted in antisocial personality, which can be reliably tracked by a “failure to conform to social norms with respect to lawful behaviors as indicated by repeatedly performing acts that are grounds for arrest” (22, p. 328). Following this, we link social media behavior with records of users’ criminal verdicts. Second, we build on a large literature in developmental psychology documenting the childhood correlates of adult aggression (23). For example, Simpson et al. (24) conclude that “the strongest predictor of (...) risky behavior [at age 23] was an unpredictable environment between ages 0 and 5.” Accordingly, we link social media behavior with administrative records on the number of moves in childhood, parents’ divorce, and time spent in foster care. Another popular—although less reliable—predictor of adult aggression in the developmental literature is the harshness of childhood environments (24). Here, we include the most common measure, namely parents’ socioeconomic status (SES), as well as the users’ grades in primary school. Finally, we leverage theories from criminology showing that young men are much more likely than other demographic groups to engage in a wide range of risky behaviors, including crime and violence (25). Thus, we link users’ age and sex to a measure of their online behavior.
To examine the offline roots of online hostility, we assess the user-level relationships between the above list of variables and users’ tendency to engage in online hostility, as measured by an automated classifier applied to the text of each user’s Twitter posts (26).
Results
First, we test the joint predictive power of all variables within each cluster of variables using F-tests (SI Appendix, Table S1). These tests show that, for each of the three main clusters, including the independent variables improves model fit relative to a null model with no independent variables: Aggression (F = 4.467, df1 = 1, df2 = 4,929, P, R-squared = 0.001), Childhood Instability (F = 2.952, df1 = 3, df2 = 3,379, P, R-squared = 0.003), and Childhood Harshness (F = 7.893, df1 = 4, df2 = 1,542, P, R-squared = 0.018).
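To illustrate the model-comparison logic, the following is a minimal Python sketch (not the authors' analysis code) of a nested F-test with statsmodels; the data frame and column names are hypothetical stand-ins for the register variables described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data; the real analysis uses register variables.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hostility": rng.standard_normal(1000),      # z-scored hostility measure
    "foster_care": rng.uniform(0, 1, 1000),      # predictors scaled 0-1
    "n_moves": rng.uniform(0, 1, 1000),
    "parents_divorced": rng.integers(0, 2, 1000),
})

# Null model (intercept only) vs. full model adding the cluster's variables.
null_model = smf.ols("hostility ~ 1", data=df).fit()
full_model = smf.ols("hostility ~ foster_care + n_moves + parents_divorced",
                     data=df).fit()

# Nested F-test: does the cluster jointly improve model fit?
print(anova_lm(null_model, full_model))
print("R-squared of full model:", full_model.rsquared)
```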
Next, we turn our attention to the individual explanatory variables within each cluster, shown in panel A of Fig. 1. To guard against false positives due to multiple comparisons, we apply Bonferroni-adjusted significance thresholds whenever we have multiple variables within a cluster. For aggression, we find that our only independent variable, criminal verdicts, is a significant predictor. Users with the most criminal verdicts are almost half a SD more hostile on average than users with no criminal verdicts (, P).
Fig. 1.

Adult and childhood administrative records predict hostility (A) and posting about politics (B) on Twitter later in life. Point estimates denote OLS regression coefficients expressing the SD difference in average hostility or politics on Twitter between the minimum and maximum values of the independent variables. The error bars illustrate 95% CIs. To test whether the results are robust to parametric assumptions, the CIs in this figure are empirical bootstrap CIs based on 1,000 bootstrap samples. The SEs reported in the main text and appendices (SI Appendix, Table S4 for politics and Table S3 for hostility) are standard parametric SEs and are almost identical to those presented in this figure. Ns = 142 to 4,931.
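For readers interested in how such empirical bootstrap CIs can be obtained, a minimal Python sketch follows (illustrative only; the synthetic data frame and column names are assumptions, not the study's data).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_ci(df, formula, term, n_boot=1000, alpha=0.05, seed=0):
    """Percentile (empirical) bootstrap CI for one OLS coefficient,
    resampling users with replacement."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))
        coefs.append(smf.ols(formula, data=df.iloc[idx]).fit().params[term])
    return np.percentile(coefs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Usage with synthetic stand-in data (real analysis uses register variables):
rng = np.random.default_rng(1)
df = pd.DataFrame({"hostility": rng.standard_normal(500),
                   "criminal_verdicts": rng.uniform(0, 1, 500)})
print(bootstrap_ci(df, "hostility ~ criminal_verdicts", "criminal_verdicts"))
```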
Is half a SD difference in average hostility substantively interesting? To put this effect size into perspective, it is helpful to consider a prior study showing that more hostile users also tweet more (27). That study found that users half a SD above the mean in average hostility tweet approximately 60 percent more than the average user. Put differently, followers of those with the most criminal verdicts would not only be exposed to significantly more hostile tweets on average than followers of those with the fewest criminal verdicts; they would also receive about 60 percent more of such tweets. At the same time, it is key to note that the total variance explained is low. This reflects that many other factors shape hostility beyond an individual’s criminal dispositions, including algorithmic feedback, external political events, and so forth (6, 7, 28).
Turning to the role of an unpredictable childhood, we find mixed evidence across our three relevant variables that an unpredictable childhood predicts hostility on Twitter (Bonferroni-adjusted significance threshold = 0.017). Users who spent the most time (versus no time) in foster care during their childhood are more than half a SD more hostile as adults on social media (, P). Users whose parents divorced are no more hostile than users whose parents stayed together (, P), nor do we find a significant relationship between the number of moves during childhood and later hostility on Twitter (, P).
Against common intuitions, we find evidence that more hostile Twitter users had a less harsh childhood (Bonferroni-adjusted significance level across the four variables = 0.0125). Users who grew up in a high-SES family are about half a SD more hostile on Twitter (, P) than the average Twitter user (SI Appendix, Fig. S7). Meanwhile, users who grew up in medium- (, P) or low-SES families (, P) are not significantly different from the average user. We also find that users with the best grades in primary school are 0.66 SD more hostile on Twitter than users with the worst grades (, P).
Finally, our data are consistent with prior research showing that men (, P) and, less clearly and not statistically significantly, younger users (, P) are more hostile than female and older users, respectively.
Why are individuals with a less harsh childhood more likely to engage in hostility? A potential explanation is that online hostility occurs most often in the context of political discussions (29), and a consistent finding is that political engagement is strongly predicted by the availability rather than the lack of resources (30). Prior work in a US context has also found that more hostile social media users are more resourceful (27). Thus, we examine this potential explanation in the present data by utilizing an alternative classifier that measures the degree to which users tweet about politics (26). Panel B of Fig. 1 documents that many of the offline predictors of online hostility also predict the tendency to write more about politics. In particular, users who had less harsh childhoods, indicated by growing up in a high-SES family (, P) or having better primary school grades (, P), tweet more about politics. Conversely, while younger users are marginally more hostile, they are significantly less likely to tweet about politics (, P).
Furthermore, Fig. 2 documents a strong relationship between users’ tendency to write about politics and to be more hostile, both at the level of tweets (Spearman correlation = 0.57, P) and averaged at the level of users (Spearman correlation = 0.60, P).
Fig. 2.
More political tweets, and more political users, are also more hostile on average. In panel A, each dot represents an individual tweet. We subset the data to a random 10% of tweets (N = 139 K) and set a high transparency to highlight the trend. In panel B, each dot represents a user. The magenta lines represent smoothed spline curves from generalized additive models with 95% CIs.
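A minimal sketch of how the tweet-level and user-level Spearman correlations can be computed (illustrative only; the synthetic data and column names are assumptions):

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Synthetic stand-in for the tweet-level hostility and politics scores.
rng = np.random.default_rng(0)
tweets = pd.DataFrame({
    "user_id": rng.integers(0, 100, 5000),
    "politics_score": rng.uniform(0, 1, 5000),
})
tweets["hostility_score"] = 0.6 * tweets["politics_score"] + rng.normal(0, 0.2, 5000)

# Tweet-level correlation (as in Panel A)...
rho_tweet, p_tweet = spearmanr(tweets["politics_score"], tweets["hostility_score"])

# ...and user-level correlation of per-user averages (as in Panel B).
users = tweets.groupby("user_id")[["politics_score", "hostility_score"]].mean()
rho_user, p_user = spearmanr(users["politics_score"], users["hostility_score"])
print(rho_tweet, rho_user)
```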
To further understand the role of online political engagement, we conducted a hierarchical regression analysis in which we first included political tweets as a predictor of hostile tweets and, in a second regression, added our measures of an affluent family background. If grades or a high-SES family background did not significantly improve the model, this would suggest that the linkage between hostility and these explanatory variables reflects their shared association with the tendency to write political tweets. For grades, we found that they did not explain additional variance in hostile tweets once political tweets were accounted for ( 0.699, P = 0.403), and we found a similar result for a high-SES family background ( 4.425, P = 0.219).
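The hierarchical (nested) regression logic can be sketched as follows; the synthetic data and variable names are hypothetical stand-ins, not the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data; column names are hypothetical.
rng = np.random.default_rng(0)
df = pd.DataFrame({"politics": rng.uniform(0, 1, 1000),
                   "grades": rng.uniform(0, 1, 1000)})
df["hostility"] = 0.8 * df["politics"] + rng.standard_normal(1000)

# Step 1: political tweeting only; Step 2: add the childhood variable.
step1 = smf.ols("hostility ~ politics", data=df).fit()
step2 = smf.ols("hostility ~ politics + grades", data=df).fit()

# F-test for the increment in explained variance from adding grades.
print(anova_lm(step1, step2))
```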
Discussion
Hostility on social media is pervasive. Prior research has highlighted features of social media that affect all users, such as content algorithms and anonymity (6). Our findings complement this research by providing initial but important insights into how stable individual-level dispositions and experiences that temporally precede online engagement correlate with online hostility. Our results provide evidence that multiple variables in the Danish administrative records correlate with hostility on the social media platform Twitter (now X). Individuals with more aggressive dispositions (many more criminal verdicts) and more time spent in foster care post more hostile tweets on average. Furthermore, we found that users with childhoods characterized by academic success and affluent homes are also more hostile in online interactions.
Overall, the present research provides a nuanced view of the core group of online aggressors. While these individuals display general antisocial personality tendencies by having many more criminal verdicts, they are not characterized by a background of poor school performance or low SES, contrary to some popular stereotypes about online trolls (31). On the contrary, our results suggest that online aggressors more often than not come from resourceful backgrounds. Our analyses suggest that this finding can be explained by the well-known tendency of people with more resources to be more engaged and interested in politics (30). Indeed, our data demonstrate that users with more resources in their childhood talk more about politics on Twitter as adults; that political discussions are often hostile even in a high-trust, consolidated democracy like Denmark; and that childhood affluence does not contribute to explaining variance in online hostility once political online activity is taken into account. To understand who engages in online hostility, it is thus key to understand that most hostility—at least on this social media platform—occurs in the context of political discussions.
In evaluating the practical implications of these results, it is important to note that while the analyses identify sizable and statistically significant differences between groups defined by their childhood experiences and criminal records, none of these variables predicts much of the variance in adult online hostility. For example, the correlation between criminal verdicts and online hostility is influenced by individuals who have a high number of criminal verdicts (SI Appendix, Figs. S6 and S8). Those who are hostile in online interactions are individuals who are repeatedly unlawful (i.e., a reflection of such dispositions) rather than individuals with a single, potentially minor, violation. Consequently, the results should not be taken to imply that excluding people with, for example, criminal records from social media would solve the problem of online hostility. The empirical reason is straightforward: many more people are hostile online than have criminal records, so excluding the latter would not make a sizeable change in the hostile population.
The key contribution of the findings is instead the demonstration that individual differences in the propensity to engage in online hostility are connected to broader offline dispositions, including dispositions related to aggressiveness and, in particular, political engagement. In this study, our goal has not been to provide a comprehensive model of all occurrences of online hostility. Instead, we have striven to make this contribution by utilizing explanatory variables that are far removed from online behavior in terms of temporal and causal distance but nonetheless exert a lasting influence. On this basis, our findings raise two important avenues for future research. First, it is key to understand how features of social media platforms, such as algorithms and the possibility of anonymity, influence the activities and reach of the core group of online aggressors. Second, it is important to understand whether and how the behaviors of a small group of hostile, politically engaged individuals can shape the perceptions of the larger “silent majority” on social media (10). If ordinary users make inferences about how political outgroups reason and behave on the basis of the activities of this unrepresentative subset, the group of hostile individuals, even if numerically small, could substantially influence overall levels of polarization in society.
Materials and Methods
Our analysis is based on merged data from two sources: Danish national registries (SI Appendix, Table S6) and Twitter. We received from Statistics Denmark a random sample of 300,000 adults, including their social security numbers, full names, and addresses. We used the social security numbers to access their public records, distributed across several state registries. To match the 300,000 adults to Twitter profiles, we first subset the data to individuals with a unique full name in the sample (i.e., first name, middle name(s), and last name), then used each name to call the Twitter API; if only a single profile was returned, we took this to be a match. We ended up with a total of 24,356 unique users, of whom 4,931 had tweeted at least once. We report additional results validating our matching in SI Appendix; see SI Appendix, Fig. S2 for a comparison of our sample to a random sample of Twitter users in the same period and SI Appendix, Tables S7 and S8 for descriptive statistics. Our corpus consists of 1.3 million Danish-language tweets. We measure hostility using a validated word embedding algorithm (26). We provide detailed information on all independent variables, including the specific registries, available time periods, and variable-specific numbers of valid observations, below (Ns = 142 to 4,931).
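The matching logic can be summarized in the following sketch; `twitter_lookup` is a hypothetical stand-in for the Twitter API call, not an actual API client, and the toy names are invented for illustration.

```python
from collections import Counter

def match_unique_names(registry_names, twitter_lookup):
    """Match register persons to Twitter profiles by unique full name.

    registry_names: list of full names from the register sample.
    twitter_lookup: callable returning the profiles found for a name
                    (hypothetical stand-in for the Twitter API call).
    """
    # Keep only names that occur exactly once in the register sample.
    counts = Counter(registry_names)
    unique_names = [n for n in registry_names if counts[n] == 1]

    matches = {}
    for name in unique_names:
        profiles = twitter_lookup(name)
        # Accept only if exactly one profile carries this name.
        if len(profiles) == 1:
            matches[name] = profiles[0]
    return matches

# Toy example with an invented lookup table:
fake_directory = {"Anna B Jensen": ["@anna_b_jensen"],
                  "Peter Hansen": ["@peter1", "@peterh"]}
print(match_unique_names(["Anna B Jensen", "Peter Hansen"],
                         lambda n: fake_directory.get(n, [])))
```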
The project received ethical approval from Aarhus University’s Research Ethics Committee (Institutional Review Board) with Approval No.: 2021-66.
Measuring Online Hostility and Politics.
Our primary indicator of online hostility is based on a word embedding method (26). The basic idea is that we use participants’ own behavior to judge the hostility of tweets (26). Using the word2vec algorithm (32), we first build word vectors based on tweets from our panelists. Word vectors are numerical representations of a word in a multidimensional space. These representations are based on which words appear together in the same tweet. Words that are interchangeable in a given context have highly similar word vectors. Word vectors allow us to calculate the distance between words, which reflects how similarly they are used in the given corpus.
Second, we take the word vector for “hate” (“had” in Danish) and determine the distance between each tweet and the hate word vector. If a tweet contains many words that are close to the word vector for hate, it receives a higher hostility score. Each user’s tweet-level hostility scores are averaged and z-scored to generate a standardized measure of their overall level of hostility. This measure constitutes our key dependent variable.
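As a simplified illustration of this scoring procedure (the full method is described in ref. 26), the following Python sketch trains word vectors with gensim and scores toy tweets by their words' average similarity to the seed word "had"; the tiny corpus and the exact scoring rule are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np
from gensim.models import Word2Vec  # gensim >= 4

# Toy corpus of tokenized tweets; the real corpus is 1.3 million Danish tweets.
tweets = [["jeg", "hader", "dig"],
          ["god", "morgen", "alle", "sammen"],
          ["had", "og", "vrede", "overalt"]]

# Step 1: train word vectors on the users' own tweets.
model = Word2Vec(tweets, vector_size=50, window=5, min_count=1, sg=1, seed=0)

def tweet_hostility(tokens, seed_word="had"):
    """Mean cosine similarity between the tweet's words and the seed word."""
    sims = [model.wv.similarity(seed_word, w) for w in tokens if w in model.wv]
    return np.mean(sims) if sims else np.nan

# Step 2: score every tweet, then average per user and z-score across users
# (the per-user grouping is omitted here for brevity).
scores = np.array([tweet_hostility(t) for t in tweets])
z_scores = (scores - np.nanmean(scores)) / np.nanstd(scores)
print(z_scores)
```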
Expert raters were used to validate this procedure, and the method has an accuracy of 74 percent when compared to expert raters’ annotation of political hostility (26). In addition, it is also highly correlated with a self-reported measure of online hostility using a survey, as can be seen in SI Appendix, Fig. S1. At the same time, it is worth acknowledging that our measure captures most tweeting with hatred and may be less well equipped to tap into other facets of hostility (33).
We use an identical approach to measure tweeting about politics, except that we take the word vector for “politics” (“politik” in Danish).
Measuring Aggressive Dispositions.
One of the most important indicators of an antisocial personality is (22, p. 328): “Failure to conform to social norms with respect to lawful behaviors as indicated by repeatedly performing acts that are grounds for arrest.” We therefore measure this personality disposition using the number of criminal verdicts from the Danish registry on criminal verdicts, KRAF, available from 1980 to 2019. Criminal verdicts refer to criminal charges and rulings by the authorities related to offenses against the criminal code, including charges that resulted in acquittal. Verdicts resulting in minor fines—those below the equivalent of $150 to $350, depending on time and offense—do not enter the Danish Police’s criminal registry and are thus omitted from our data.
Measuring Childhood Instability.
We measure the instability of users’ childhood with three variables: 1) whether the Twitter user’s parents divorced in their childhood, 2) the number of times the user moved from one place to another during their childhood, and finally, 3) how much time a user spent in foster care during their childhood.
Divorce is based on whether the user has experienced a change from a family unit with two married parents to a family unit with a “single” status. We rely on the FAMILIE_TYPE variable from the BEF registry for these data, covering the period between 1985 and 2020. N = 3,431.
Number of moves is simply the number of unique addresses before the age of 18, based on the BEFBOP registry and data between 1971 and 2019. N = 3,559.
Time spent in foster care sums the overall time spent in foster care before the age of 18 (irrespective of the number of potentially nonoverlapping placements). We rely on the BUA registry with data from 1977 to 2015. N = 4,931.
Measuring Childhood Harshness.
We measure the harshness of childhood environments using the SES of parents, based on the FAMSOCIOGRUP_13 variable from the FAIK registry, which is available for 1993 to 2019. This variable taps into the SES of the Twitter user’s main provider during their childhood (i.e., before they turned 18). If more than one SES is experienced during those 18 y, we use the one that persisted for the longest time. The variable is defined by the main source of income for the person in question. Based on the source of income, it is determined whether the person is self-employed, a working spouse, a wage earner, unemployed, or outside the workforce. The social groups are further divided for the self-employed based on the number of employees in the company, and for wage earners based on occupational classification. Since there is not enough information on every category, we recode the variable into four categories:
High (N = 70): Owning own company, top executive, or wage earner in work that requires skills of the highest level
Medium (N = 23): Wage earner in work requiring intermediate skills
Low (N = 49): Wage earner in work that requires basic skills
Other (N = 4,789): All remaining users, including those for whom we have no information.
Note that the low Ns are due to the fact that we have access to registry data for “only” 300,000 people; hence, we can study only those few Twitter users whose main-provider parent was also included (by chance) in our sample and who were children during the period covered by the FAIK registry. Since we have data on the main provider in the family unit and the users are children at the time of measurement, the variable measures childhood SES, not parental SES. While the low sample size reduces the precision of our estimates considerably, it should not bias our results, because the data are missing essentially at random. Accordingly, as we demonstrate in SI Appendix, Fig. S9, the average Twitter hostility of those in the “Other” category on the parental SES variable is virtually identical to the mean of the full sample.
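The recoding of the detailed SES variable into the four categories above can be sketched as follows; the raw category labels are hypothetical placeholders, not the actual Statistics Denmark codes, which are documented in the register's codebook.

```python
import pandas as pd

# Hypothetical FAMSOCIOGRUP_13 labels; the real register codes differ.
ses_map = {
    "self_employed": "High",
    "top_executive": "High",
    "wage_earner_highest_skill": "High",
    "wage_earner_intermediate_skill": "Medium",
    "wage_earner_basic_skill": "Low",
}

def recode_ses(raw_values):
    """Collapse the detailed SES codes into High/Medium/Low/Other."""
    return pd.Series(raw_values).map(ses_map).fillna("Other")

print(recode_ses(["top_executive", "wage_earner_basic_skill", None]))
```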
As a final measure of childhood harshness, we take the average of grades received in primary school, using the UDFK registry available for the period 2002 to 2020. N = 1,547.
Measuring Age and Sex.
We calculate users’ age based on their social security numbers, which in Denmark encode the person’s date of birth. Danish registries do not contain information on citizens’ gender; hence, we resort to a binary sex variable, also derived from the social security number.
Statistical Modeling.
All results in the main text are simple OLS regressions, where users’ z-scored hostility score is the dependent variable and the explanatory variables for each of our offline roots are entered as an independent group, that is, separate regressions for 1) aggression, 2) childhood harshness, 3) childhood instability, and 4) demographics as a comparison. To test the importance of each of these independent factors, we use a two-pronged approach. First, we test the importance of each family of variables, that is, all explanatory variables within each family, using an F-test (SI Appendix, Table S1). Second, we use Bonferroni corrections within each family to investigate the significance of the individual factors (SI Appendix, Tables S2 and S3). For childhood harshness, we have four explanatory variables and thus consider a predictor significant if its P value is below 0.05/4 = 0.0125; for childhood instability, with three explanatory variables, a predictor is significant if its P value is below 0.05/3 ≈ 0.017.
When investigating the individual factors (SI Appendix, Table S3), we investigate each explanatory variable separately to avoid problems of multicollinearity. We report a full correlation table for all constructs as SI Appendix, Fig. S5.
Since some of the explanatory variables are not normally distributed (SI Appendix, Tables S7 and S8), we do not report fully standardized coefficients but semistandardized ones, where the dependent variable is z-score standardized and the explanatory variables are scaled from zero to one. The coefficients thus represent a comparison between users at the minimum versus the maximum of our independent variables, or turning them on or off in the case of dummy variables. This comparison is measured in terms of SD on our measure of hostility. For the most highly skewed explanatory variables, “Time in foster care” and “Criminal verdicts,” we report additional robustness tests in SI Appendix (Figs. S5 and S7 for time in foster care and Figs. S6 and S8 for criminal verdicts).
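A minimal sketch of this semistandardization (illustrative only, with synthetic data and hypothetical column names):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def semistandardize(df, dv, ivs):
    """Z-score the dependent variable and min-max scale the predictors to [0, 1],
    so OLS coefficients compare users at the minimum vs. maximum of each
    predictor in SD units of the outcome."""
    out = pd.DataFrame()
    out[dv] = (df[dv] - df[dv].mean()) / df[dv].std()
    for iv in ivs:
        span = df[iv].max() - df[iv].min()
        out[iv] = (df[iv] - df[iv].min()) / span if span > 0 else 0.0
    return out

# Usage with synthetic stand-in data:
rng = np.random.default_rng(0)
raw = pd.DataFrame({"hostility": rng.standard_normal(500),
                    "verdicts": rng.poisson(0.3, 500)})
scaled = semistandardize(raw, "hostility", ["verdicts"])
print(smf.ols("hostility ~ verdicts", data=scaled).fit().params)
```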
Supplementary Material
Appendix 01 (PDF)
Dataset S01 (XLSX)
Acknowledgments
We are grateful for constructive feedback from Lea Pradella, Steffen Borch Selmer, and Robert Klemmensen. We thank Anja Dalsgaard and Annette Bruun Andersen for language editing help. This study was funded by a Carlsberg Foundation Grant (CF18-1108) awarded to M.B.P.
Author contributions
S.H.R.R., A.B., and M.B.P. designed research; S.H.R.R., A.B., and M.B.P. performed research; S.H.R.R. and A.B. analyzed data; and S.H.R.R., A.B., and M.B.P. wrote the paper.
Competing interests
The authors declare no competing interest.
Footnotes
This article is a PNAS Direct Submission.
Data, Materials, and Software Availability
Computer code will be deposited in OSF. Individual-level data cannot be shared due to privacy concerns (Twitter data) and legal constraints (administrative records). Information on the requirements for accessing the data and how to apply can be found at the Centre for Integrated Register-based Research (https://cirrau.au.dk/contact) at Aarhus University.
References
- 1. L. Grossman, Person of the year 2010: Mark Zuckerberg. Time Magazine, 15 December 2010. https://content.time.com/time/specials/packages/article/0,28804,2036683_2037183_2037185,00.html. Accessed 13 October 2024.
- 2. Bor A., Petersen M. B., The psychology of online political hostility: A comprehensive, cross-national test of the mismatch hypothesis. Am. Polit. Sci. Rev. 116, 1–18 (2022).
- 3. Allcott H., Braghieri L., Eichmeyer S., Gentzkow M., The welfare effects of social media. Am. Econ. Rev. 110, 629–676 (2020).
- 4. Brady W. J., et al., Overperception of moral outrage in online social networks inflates beliefs about intergroup hostility. Nat. Hum. Behav. 7, 917–927 (2023).
- 5. Persily N., Tucker J. A., Social Media and Democracy: The State of the Field, Prospects for Reform (Cambridge University Press, 2020).
- 6. Baek Y. M., Wojcieszak M., Delli Carpini M. X., Online versus face-to-face deliberation: Who? Why? What? With what effects? New Media Soc. 14, 363–383 (2012).
- 7. Van Bavel J. J., Rathje S., Harris E., Robertson C., Sternisko A., How social media shapes polarization. Trends Cogn. Sci. 25, 913–916 (2021).
- 8. J. Cheng, M. Bernstein, C. Danescu-Niculescu-Mizil, J. Leskovec, “Anyone can become a troll: Causes of trolling behavior in online discussions” in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (2017), pp. 1217–1230.
- 9. Mamakos M., Finkel E. J., The social media discourse of engaged partisans is toxic even when politics are irrelevant. PNAS Nexus 2, pgad325 (2023).
- 10. C. Robertson, K. del Rosario, J. J. Van Bavel, Inside the funhouse mirror factory: How social media distorts perceptions of norms. Curr. Opin. Psychol. 60, 101918 (2024).
- 11. Piquero A. R., Carriaga M. L., Diamond B., Kazemian L., Farrington D. P., Stability in aggression revisited. Aggress. Violent Behav. 17, 365–372 (2012).
- 12. Atkin C., Smith S., Roberto A., Fediuk T., Wagner T., Correlates of verbally aggressive communication in adolescents. J. Appl. Commun. Res. 30, 251–268 (2002).
- 13. Swatt M. L., Demeanor and arrest revisited: Reconsidering the direct effect of demeanor. J. Crime Justice 25, 23–39 (2002).
- 14. Worden R. E., Shepard R. L., Demeanor, crime, and police behavior: A reexamination of the police services study data. Criminology 34, 83–105 (1996).
- 15. Buckels E. E., Trapnell P. D., Paulhus D. L., Trolls just want to have fun. Pers. Individ. Differ. 67, 97–102 (2014).
- 16. van Geel M., Goemans A., Toprak F., Vedder P., Which personality traits are related to traditional bullying and cyberbullying? A study with the big five, dark triad and sadism. Pers. Individ. Differ. 106, 231–235 (2017).
- 17. Pascual-Sánchez A., et al., Personality traits and self-esteem in traditional bullying and cyberbullying. Pers. Individ. Differ. 177, 110809 (2021).
- 18. Kurek A., Jose P. E., Stuart J., ‘I did it for the Lulz’: How the dark personality predicts online disinhibition and aggressive online behavior in adolescence. Comput. Hum. Behav. 98, 31–40 (2019).
- 19. Gylfason H. F., Sveinsdottir A. H., Vésteinsdóttir V., Sigurvinsdottir R., Haters gonna hate, trolls gonna troll: The personality profile of a Facebook troll. Int. J. Environ. Res. Public Health 18, 5722 (2021).
- 20. Smith C. M., Rauwolf P., Intriligator J., Rogers R. D., Hostility is associated with self-reported cognitive and social benefits across massively multiplayer online role-playing game player roles. Cyberpsychol. Behav. Soc. Netw. 23, 487–494 (2020).
- 21. Yen J. Y., Yen C. F., Wu H. Y., Huang C. J., Ko C. H., Hostility in the real world and online: The effect of internet addiction, depression, and online activity. Cyberpsychol. Behav. Soc. Netw. 14, 649–655 (2011).
- 22. G. Matthews, I. Deary, M. Whiteman, Personality Traits (Cambridge University Press, Cambridge, ed. 3, 2009).
- 23. Lansford J. E., Development of aggression. Curr. Opin. Psychol. 19, 17–21 (2018).
- 24. Simpson J. A., et al., Evolution, stress, and sensitive periods: The influence of unpredictability in early versus late childhood on sex and risky behavior. Dev. Psychol. 48, 674 (2012).
- 25. Wilson M., Daly M., Competitiveness, risk taking, and violence: The young male syndrome. Ethol. Sociobiol. 6, 59–73 (1985).
- 26. Rasmussen S. H. R., Bor A., Osmundsen M., Petersen M. B., ‘Super-Unsupervised’ classification for labelling text: Online political hostility as an illustration. Br. J. Polit. Sci. 54, 179–200 (2024).
- 27. S. H. R. Rasmussen, M. Osmundsen, M. B. Petersen, Political resources and online political hostility: How and why hostility is more prevalent among the resourceful. PsyArXiv [Preprint] (2022). 10.31234/osf.io/tp93r. Accessed 10 October 2024.
- 28. Hebbelstrup Rye Rasmussen S., Petersen M. B., The event-driven nature of online political hostility: How offline political events make online interactions more hostile. PNAS Nexus 2, pgad382 (2023).
- 29. M. J. Andresen et al., “Danskernes oplevelse af had på sociale medier” [The Danes’ experience of hate on social media] (ROPH Working Paper, 2022).
- 30. Brady H. E., Verba S., Schlozman K. L., Beyond SES: A resource model of political participation. Am. Polit. Sci. Rev. 89, 271–294 (1995).
- 31. Bail C., Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing (Princeton University Press, 2022).
- 32. T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space. arXiv [Preprint] (2013). https://arxiv.org/abs/1301.3781. Accessed 10 October 2024.
- 33. Eckhardt C., Norlander B., Deffenbacher J., The assessment of anger and hostility: A critical review. Aggress. Violent Behav. 9, 17–43 (2004).