Clinical Neuropsychiatry. 2025 Dec;22(6):447–453. doi: 10.36131/cnfioritieditore20250602

Algorithmic Entrapment: The Silent Erosion of User Autonomy and Mental Health

Leon Donadoni 1, Donatella Marazziti 2, Federico Mucci 3
PMCID: PMC12752921  PMID: 41477544

Abstract

Recent developments in the study of highly visual social media have highlighted that, far from being neutral conduits, adolescents’ online environments might be curated by recommender systems that learn from behaviour. On this view, risk and benefit may depend less on time spent online than on patterns of exposure ‒ what reaches young people, how they meet it, and when in development contact occurs. For instance, in large adolescent cohorts, only 10–15% of users account for most of the association between use and distress, underscoring the relevance of heterogeneity. The same feeds might support connection and care while, in susceptible users, amplifying appearance-centred comparison, compulsive use and contact with harmful material. A public-health perspective treats personalisation as an upstream determinant and invites proportionate responsibilities, namely transparency, independent audit and safety-by-design. Clinically, brief assessments that separate time from type of engagement, paired with low-burden, mechanism-focused interventions, could offer practical steps. Taken together, a precautionary and balanced stance should aim at keeping what is valuable online while reducing foreseeable harm through accountable design.

Keywords: social media, recommender systems, algorithmic exposure, mental health, public health, ethics, safety-by-design

Introduction

The online environments in which adolescents now spend substantial portions of their social lives are not neutral backdrops but curated, ever-learning feeds. Highly visual social platforms couple continuous content supply with recommendation systems that personalize relevance at scale. This architecture has altered young people’s online experiences in ways not well captured by coarse metrics of “time online.” At population level, the empirical literature remains mixed ‒ average associations between overall social media use and mental health outcomes are small or inconsistent ‒ yet beneath those means lies marked heterogeneity in who encounters what, and with what consequences (Valkenburg et al., 2022). Qualitative syntheses underscore the coexistence of benefits ‒ peer connection, identity exploration, and access to supportive communities ‒ and liabilities such as appearance-focused social comparison, pressure to perform, and compulsive use (Popat & Tarrant, 2023). Emerging reviews focused on short-form, interest-based platforms suggest that specific design features may intensify exposure–response loops in susceptible youth (Conte et al., 2025).

In public health, as with other determinants that structure risk and opportunity ‒ housing, schooling, neighborhood resources ‒ recommendation systems shape the informational milieu to which adolescents are differentially exposed. In Marmot’s terms, they may function as “causes of causes,” configuring upstream conditions that can tip trajectories toward benefit or harm (Marmot, 2005). Rather than forcing a choice between alarmism and denial, this framing neither pathologizes all social media use nor presumes neutrality of the underlying systems; it treats algorithmic personalization as an external design choice with systemic effects, contingent on user vulnerability, content typology, and developmental timing.

At the same time, scholarship on the ethics of algorithms reminds us that opacity, data biases, and outcome disparities are recurrent properties of recommender pipelines, with potential “transformative effects” on users’ preferences and behaviors (Mittelstadt et al., 2016). For clinicians and services, this suggests shifting the assessment focus from duration to pattern: late-night use versus daytime browsing, passive scrolling versus active creation, appearance-centric feeds versus peer-support communities. For researchers and policymakers, progress is likely to depend on transparency, independent auditability, and safety-by-design standards commensurate with the scale of these systems.

Our aim in this opinion is, therefore, to synthesize the evidence on benefits and risks, articulate an exposure-based framework grounded in social determinants of health, and outline pragmatic clinical and policy steps that prioritize vulnerable youth while maintaining a balanced communication of benefits and harms (Conte et al., 2025; Popat & Tarrant, 2023; Valkenburg et al., 2022).

Expected benefits and potential risks

Population-level associations between overall time online and mental health remain small or inconsistent, yet mask marked heterogeneity in what is encountered, how, and when (Valkenburg et al., 2022). Qualitative syntheses consistently report a dual picture: adolescents describe meaningful benefits ‒ peer connection, identity exploration and access to supportive communities ‒ alongside liabilities such as appearance-focused social comparison, pressure to perform and compulsive use (Popat & Tarrant, 2023). Recent reviews centred on short-form, highly visual, interest-based platforms further suggest that specific design features may intensify exposure–response loops in susceptible youth (Conte et al., 2025).

For a subset of young people, online environments can augment social support and engagement with care. In a qualitative evaluation of a long-term, social-media–based intervention for first-episode psychosis, participants described increased connectedness, normalisation of experience and reduced loneliness, highlighting the potential for moderated, purpose-built spaces to confer benefit when appropriately scaffolded (Valentine et al., 2020). Such findings align with adolescents’ own accounts of informational gains and community access (Popat & Tarrant, 2023), and underscore that harm is not inevitable nor evenly distributed.

On the other hand, evidence seems stronger when focusing on type of exposure and pattern of engagement rather than on duration alone. For example, a prospective comparison of TikTok feeds delivered to individuals with eating disorders versus healthy controls showed that algorithmic curation preferentially delivered appearance-oriented and restrictive-eating content to the clinical group, with a substantially larger share of such material than in controls and with the magnitude of this bias covarying with symptom severity (Griffiths et al., 2024). Complementing this, a longitudinal study with a quasi-experimental component reported that higher baseline disordered-eating symptoms prospectively predicted greater engagement with restrictive-eating content on TikTok at follow-up, while an institutional ban on the platform did not reduce such engagement, with both exposure and symptoms increasing over time (Strickland et al., 2025). These observations suggest self-selection and algorithmic reinforcement acting in tandem, and indicate that blanket prohibitions may be insufficient without in-feed mitigation and safety-by-design.
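To illustrate the kind of feed-composition analysis at stake ‒ with entirely hypothetical field names and data, not the measures used by Griffiths et al. (2024) or Strickland et al. (2025) ‒ a minimal Python sketch might compute each user’s share of risk-relevant material and correlate it with symptom severity:

```python
# Illustrative sketch only: categories, logs and scores are hypothetical.
from scipy.stats import spearmanr

# Each feed log is a list of content labels from some upstream classifier.
feed_logs = {
    "user_a": ["appearance", "sports", "appearance", "music", "appearance"],
    "user_b": ["music", "comedy", "sports", "news", "appearance"],
    "user_c": ["appearance", "appearance", "diet", "appearance", "music"],
    "user_d": ["comedy", "music", "sports", "news", "news"],
}
# Hypothetical symptom-severity scores on an arbitrary scale.
severity = {"user_a": 14, "user_b": 3, "user_c": 21, "user_d": 1}

RISKY = {"appearance", "diet"}  # assumed risk-relevant categories

def risky_share(log):
    """Fraction of feed items falling in risk-relevant categories."""
    return sum(item in RISKY for item in log) / len(log)

shares = {u: risky_share(log) for u, log in feed_logs.items()}
users = sorted(shares)
rho, p = spearmanr([shares[u] for u in users], [severity[u] for u in users])
print(f"risky-content shares: {shares}")
print(f"Spearman rho between share and severity: {rho:.2f} (p = {p:.3f})")
```

The point of such a computation is precisely that two users with identical hours of use can show very different feed compositions, which a time-only metric would conceal.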

Furthermore, adolescents’ narratives help make proximal processes visible: focus-group work with 14–15-year-olds on highly visual platforms describes a “vicious circle” in which likes, comments, and feed-scrolling may fuel jealousy, perceived inferiority, and pressure to be accepted; rapid alternation between viewer and contributor roles appears to produce emotional highs and lows (McCrory et al., 2022). Patterns of engagement also seem gendered, as studies of “fitspiration” on Instagram report differing modes of interaction and perceived effects among young men and women, suggesting that prevention and clinical messaging should be sensitive to gendered norms and motivations (Mayoh & Jones, 2021). Importantly, not all associations imply within-person risk escalation: in a longitudinal cohort of young adults, higher digital-media use showed cross-sectional between-person associations with psychotic experiences but no consistent within-person temporal effects, cautioning against simplistic, dose–response interpretations based on time alone (Paquin et al., 2024). At the family interface, cross-sectional evidence from Chinese adolescents links parental phubbing with short-form video addiction, with depressive and anxiety symptoms mediating part of the association and neuroticism moderating pathways ‒ an observation that complements algorithmic explanations by foregrounding household dynamics (Yang et al., 2024).

Early signals may support brief, low-burden interventions that target mechanisms suggested by the evidence. In a randomised pilot, a self-guided module addressing self-criticism/self-compassion reduced appearance-driven motives, social comparison, and body-image inflexibility, with short-term improvements in disordered-eating symptoms; a feed-curation module reduced appearance comparison (de Valle & Wade, 2022). In clinical intake, this means asking not “How long were you online?” but “What showed up in your feed just before you felt worse?”, “At what time of day does scrolling become hard to stop?”, and “Which topics or accounts leave you more keyed up versus steadier?”. While preliminary, these findings align with a pragmatic clinical stance that distinguishes time from content and context ‒ late-night use versus daytime browsing; passive scrolling versus active creation; appearance-centric feeds versus peer-support communities ‒ and pairs screening with brief, skill-focused interventions.

At this stage, the most persuasive evidence appears to converge on three points. First, social media can confer genuine benefits under moderated, purpose-designed conditions that promote connection and support (Popat & Tarrant, 2023; Valentine et al., 2020). Second, clinically relevant risks emerge where individual vulnerabilities intersect with algorithmically curated exposure to salient content ‒ especially appearance-focused and pro-restriction material ‒ often in ways not captured by aggregate time-use measures (Conte et al., 2025; Griffiths et al., 2024; McCrory et al., 2022; Mayoh & Jones, 2021). Third, interventions seem more promising when they target patterns of exposure and mechanisms (comparison, self-criticism, sleep disruption) and when system-level measures address curation and design rather than relying solely on bans or generic screen-time limits (de Valle & Wade, 2022; Strickland et al., 2025; Valkenburg et al., 2022).

Theoretical framework

A public-health lens may help reconcile apparently conflicting findings about social media and youth mental health. Rather than treating platforms as neutral conduits or, conversely, as uniformly hazardous, this perspective considers algorithmic exposure a structural determinant of mental health: a set of upstream conditions that shape differential risks and opportunities, much as housing, schooling, or neighbourhood resources do (Marmot, 2005; WHO Commission, 2008). On this view, personalised feeds configure the informational milieu to which adolescents are exposed; outcomes then depend on who is exposed, to which content, how (active vs. passive engagement), and when (developmental timing), consistent with synthesis work on heterogeneity of effects (Valkenburg et al., 2022; Popat & Tarrant, 2023; Conte et al., 2025).

As in environmental epidemiology, risk seldom follows a simple dose–response with a single agent; rather, it arises from mixtures of exposures, windows of susceptibility, and host factors. Recommender systems, by ranking and repeatedly surfacing salient stimuli, can create concentrated “mixtures” (e.g., appearance-centric and restrictive-diet content) and prolong effective dose through frictionless scroll and notification schedules. Crucially, this framework does not ascribe malice; it recognises that standard optimisation targets (click-through, dwell time) can have systemic effects when they intersect with vulnerability. Neuroethical concerns then track familiar categories ‒ opacity (limited intelligibility of ranking), data bias (training on skewed behaviours), unfair outcomes (disproportionate delivery of harmful content to high-risk subgroups), and transformative effects (preference shaping that feeds back into behaviour) (Mittelstadt et al., 2016; Marazziti, 2022).
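To make the notion of an “effective dose” concrete, the following minimal sketch ‒ a hypothetical event log and an illustrative weighting scheme, not an established measure ‒ aggregates dwell time per content category and up-weights late-night contact:

```python
# Minimal sketch of an exposure-based metric; the event log, categories and
# weighting scheme are illustrative assumptions, not a validated measure.
from collections import defaultdict

# Each event: (content_category, dwell_seconds, hour_of_day)
events = [
    ("appearance", 45, 23), ("appearance", 60, 0), ("peer_support", 120, 17),
    ("appearance", 30, 23), ("comedy", 90, 18),
]

def effective_dose(events, late_night_weight=1.5):
    """Aggregate dwell time per category, up-weighting late-night contact
    (a window the literature links with sleep disruption)."""
    dose = defaultdict(float)
    for category, dwell, hour in events:
        weight = late_night_weight if (hour >= 22 or hour < 6) else 1.0
        dose[category] += dwell * weight
    return dict(dose)

print(effective_dose(events))
# {'appearance': 202.5, 'peer_support': 120.0, 'comedy': 90.0}: equal minutes
# online can hide very unequal "mixtures" once pattern and timing are counted.
```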

If, however, algorithmic exposure is a determinant, the appropriate analogues are not blanket prohibitions but safety-by-design and accountability requirements proportionate to population reach. Analogous safeguards are being formalised in the EU’s Digital Services Act and the UK’s Online Safety Act, both of which mandate risk-assessment and transparency duties for recommender systems. Ethical frameworks for AI converge on five organising principles ‒ beneficence, non-maleficence, autonomy, justice, and explicability ‒ which together can guide the design of adolescent-facing systems and motivate safety-by-design (Floridi et al., 2018; Floridi & Cowls, 2019).

Translating principles into practice generally requires tools across the model lifecycle, so that methodological work urges a shift from abstract commitments to verifiable procedures in which risk assessment precedes deployment, data governance is documented, bias and safety are tested, human oversight is maintained for sensitive categories, and post-deployment monitoring ensures independent access for auditors (Morley et al., 2020). Such measures can be read as complementing ‒ rather than replacing ‒ clinical pragmatics, in which screening tends to prioritise patterns of engagement over duration and brief, mechanism-targeted interventions are delivered within environments where upstream risks are measurable, mitigated and open to review (Conte et al., 2025; Valkenburg et al., 2022).

Seen in this light, treating recommender systems as active environments, rather than neutral conduits or moral agents, may help to account for why population-level associations look small while individual effects can be substantial; why benefit and harm often coexist within the same ecology; and why progress is likely to rest on linking exposure-aware clinical practice with auditable, ethically governed design at platform scale (table 1).

Table 1.

Multi-level correspondence between algorithmic exposures, psychological mechanisms, and proportionate safeguards

| Level | Exposure / System Feature | Mechanism or Response Pathway | Proportionate Safeguard / Intervention |
|---|---|---|---|
| Individual | Appearance-centric or comparative feeds | Upward social comparison, self-criticism, disrupted sleep | Brief self-guided modules (self-compassion, feed curation), sleep hygiene, notification control |
| Interpersonal / Family | Messaging pressure, intrusive notifications, unsupervised late-night use | Fatigue, irritability, social withdrawal | Negotiated quiet hours, supervision for younger users, intelligible parental controls |
| Clinical / Service | Recurrent appearance-driven content, crisis mentions | Fluctuating mood, suicidal ideation spikes | Pattern-focused intake (“what came up before you felt worse?”), just-in-time response protocols |
| System / Design | Engagement-optimising recommender loops | Algorithmic reinforcement of vulnerable content | Safety-by-design: intelligible controls, conservative defaults, traceable recommendation pathways |
| Policy / Governance | Opaque curation and weak auditability | Unmeasurable risk distribution across populations | Independent algorithmic risk audits, exposure-level reporting, regulatory accountability (DSA / OSA) |

The model illustrates how risks arise not from time spent online but from patterns of exposure, and how mitigation can operate across individual, clinical, system, and policy levels—linking exposure-aware practice to auditable, ethically governed design.

Ethics and transparency

Rather than imputing intent or granting neutrality by default, it may be more informative ‒ when considering adolescent-facing recommender systems ‒ to describe what they do. The ethical questions then follow from properties of the pipeline itself: models that learn from skewed behavioural traces; ranking objectives that privilege engagement; and interfaces designed to lower friction and extend exposure. Within such a setting, familiar categories from the ethics-of-algorithms literature help to orient appraisal ‒ opacity in decision pathways; data bias with the attendant risk of unfair outcomes for vulnerable cohorts; and transformative effects in which preference shaping feeds back into behaviour over time (Mittelstadt et al., 2016; Marazziti, 2022). For clinicians and policymakers, the practical task is less to assign blame than to make these systems reviewable. Figure 1 summarizes the multilevel pathway that links individual vulnerabilities, algorithmically curated exposures, and mediating mechanisms to both risks and benefits, and locates proportionate mitigations at the level of system design.

Figure 1.


Algorithmic exposure pathways in adolescent mental health

The diagram illustrates how adolescent mental-health outcomes emerge from the interaction between individual vulnerabilities, algorithmically curated exposures, and the psychological mechanisms they activate. Vulnerabilities such as anxiety, body-image concerns, minority stress, or developmental stage shape the likelihood that particular content will be encountered. Algorithmic exposure—defined by what appears, how engagement occurs, and when in the developmental cycle it takes place—modulates downstream mechanisms including social comparison, compulsive use, and sleep disruption. These processes can yield both risks (e.g., dysmorphia, anxiety, disordered eating, suicidality) and benefits (e.g., peer support, identity exploration, moderated care spaces). The lower panel highlights system-level mitigations—safety-by-design defaults, independent audits, effective moderation, and family guidance—that act upstream to reduce foreseeable harm without negating the connective and supportive functions of social platforms.

A convergent body of work in AI ethics proposes a small number of organising principles ‒ beneficence, non-maleficence, autonomy, justice, and explicability, the latter understood as the union of intelligibility and accountability (Floridi et al., 2018; Floridi & Cowls, 2019). When users are minors, these principles appear to require measures that precede any individual-level counselling, so that privacy and sensitive content are protected by conservative defaults, controls over feed composition and notifications remain intelligible, equity checks can detect systematic over-delivery of risky material to high-risk groups, and decision pathways are traceable so that harmful recommendation sequences can be reconstructed ex post. None of these provisions presupposes intentional harm, but all reflect standard practice in other safety-relevant domains.
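A minimal sketch of such an equity check ‒ assuming invented audit counts and a simple one-sided two-proportion z-test, not any platform’s actual methodology ‒ might look as follows:

```python
# Hedged sketch of an "equity check": does the recommender deliver risky
# material to a flagged high-risk cohort at a higher rate than to other
# users? Counts are invented; a real audit would use risk-stratified logs.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """One-sided two-proportion z-test for rate 1 exceeding rate 2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper-tail normal probability
    return z, p_value

# Hypothetical audit counts: risky impressions out of total impressions.
z, p = two_proportion_z(x1=300, n1=2_000,   # high-risk cohort: 15%
                        x2=900, n2=8_000)   # remaining users: 11.25%
print(f"z = {z:.2f}, one-sided p = {p:.6f}")
# A significant excess would trigger review of the ranking objective, not
# any automated individual-level action.
```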

Once principles are on the table, the issue becomes how they are realised in practice. Methodological surveys suggest that progress is more likely when commitments are bound to procedures that run through the model’s life course ‒ beginning with pre-deployment risk appraisal and traceable data governance, continuing with age-appropriate bias and safety checks under named human oversight for sensitive categories, and extending after release to monitoring that allows qualified third-party access (Morley et al., 2020). In parallel, legal-policy analyses in the child-online space seem to point toward a cognate toolkit, with independent algorithmic risk audits and public-facing summaries, alignment with age-appropriate design codes, and clearer routes to product accountability where hazardous features can be shown to contribute to harm (Costello et al., 2023). Editorial debates within child and adolescent mental health have, in the same spirit, argued for strengthening rather than relaxing moderation and for attention to business-model incentives in which recommender systems may escalate towards more extreme material to sustain engagement (Russell, 2024; Graham, 2024).

Transparency, however, is not a single dial, since too much disclosure can enable gaming or compromise privacy, especially where health-related inferences are involved. Explicability is better understood as a matter of proportionality: it implies that users and families can rely on intelligible controls and clear rationales for sensitive recommendations, while independent experts have access to risk-stratified exposure data and safety evaluations, and platforms remain bound by documented decisions and available pathways of redress. Conceived in this manner, ethical governance becomes an upstream counterpart to clinical pragmatics, so that screening distinguishes time from type of exposure, interventions remain brief and mechanism-targeted, and defaults are calibrated to developmental stage within systems that are designed to be measurable, improvable, and open to review (Floridi et al., 2018; Floridi & Cowls, 2019; Morley et al., 2020).

Pragmatic perspective on clinical practice

In the consulting room, it has proved more useful to reconstruct how the digital day unfolds than to totalise minutes, since the same hour online can mean very different things depending on what enters the feed, whether the engagement is largely passive or involves production and exchange, and whether it occurs at times that collide with sleep or school. This is also where adolescents’ own accounts diverge, some emphasising connection and support, others reporting comparison, pressure and loss of control (Valkenburg et al., 2022; Popat & Tarrant, 2023; Conte et al., 2025). A brief intake that follows this line ‒ daytime versus late-night use, scrolling versus creating, appearance-centred versus peer-support spaces, direct messages that feel intrusive or difficult to refuse ‒ often clarifies triggers and consequences (fatigue, irritability, withdrawal from offline activities), while allowing the clinician to note the counterweights that already exist in a young person’s ecology, such as moderated communities, prosocial groups and the presence of an adult who scaffolds choices rather than polices them. Given intensive-monitoring evidence that week-to-week exposure to racism on social media tracks short-term fluctuations in suicidal ideation among adolescents of color, brief screening can also ask about both direct and vicarious online racism and its timing relative to mood shifts (Oshin et al., 2024).

At service level, documentation may be strengthened without adding friction. Experience from UK child and adolescent services suggests that references to online activity embedded in routine clinical notes can be surfaced with high precision and recall by a simple rule-based natural-language pipeline; once clinicians review the output, experiences can be classified ‒ coarsely but usefully ‒ as supportive, detrimental or neutral, and the same signals can populate dashboards that visualise patterns across caseloads while leaving interpretation in clinical hands (Sedgwick et al., 2023). The purpose is less to replace clinical judgement than to provide teams with a common vantage on the online elements that tend to recur in presentations and in supervision.
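In the spirit of the rule-based approach described by Sedgwick et al. (2023) ‒ though with hypothetical keyword lists and labels, not their lexicon ‒ a minimal sketch might surface and coarsely label online-activity mentions for clinician review:

```python
# Illustrative rule-based sketch; terms, cues and labels are assumptions,
# not the lexicon or pipeline used by Sedgwick et al. (2023).
import re

ONLINE_TERMS = re.compile(
    r"\b(instagram|tiktok|snapchat|social media|online|scrolling|group chat)\b",
    re.IGNORECASE,
)
SUPPORTIVE_CUES = ("support", "connected", "helpful", "friends")
DETRIMENTAL_CUES = ("bullied", "worse", "compar", "can't stop", "late at night")

def surface_mentions(note: str):
    """Return sentences mentioning online activity, with a coarse label
    for clinician review (the final call stays in clinical hands)."""
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", note):
        if not ONLINE_TERMS.search(sentence):
            continue
        lowered = sentence.lower()
        if any(cue in lowered for cue in DETRIMENTAL_CUES):
            label = "detrimental?"
        elif any(cue in lowered for cue in SUPPORTIVE_CUES):
            label = "supportive?"
        else:
            label = "neutral?"
        results.append((label, sentence.strip()))
    return results

note = ("Reports feeling worse after scrolling TikTok late at night. "
        "Says an online support group of friends has been helpful.")
for label, sentence in surface_mentions(note):
    print(f"[{label}] {sentence}")
```

The trailing question marks are deliberate: the output is a prompt for clinical review, not a classification to be acted on automatically.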

When the assessment points to appearance-driven motives and comparison processes, small, proximal interventions have shown encouraging signals. A randomised pilot of a brief self-guided module targeting self-criticism and cultivating self-compassion reduced appearance-driven motives, social comparison and body-image inflexibility, with short-term improvement in disordered-eating symptoms; a companion component that paired literacy with active feed curation reduced comparison as well (de Valle & Wade, 2022). Interventions of this weight can sit early in stepped-care pathways, be paired with sleep-hygiene and notification routines, and be adapted to developmental level and to the gendered engagement patterns described in “fitspiration” communities, where norms and motivations differ for young men and women (Mayoh & Jones, 2021).

There are cases in which risk does not unfold as a steady background but seems to spike around particular exposures or disclosures, so that the timing of support may matter as much as the nature of the content. Evidence from an external validation study suggests that indicators derived from social-media text can sometimes track short-term trajectories of suicidal ideation, and ‒ perhaps most notably ‒ that responses to suicidal mentions in the days that follow appear to associate with subsequent improvement, a pattern consistent with just-in-time contacts and with guidance on responding constructively to crisis posts without turning platforms into triage engines (Kaminsky et al., 2024). The safeguards ‒ privacy, governance, management of false positives ‒ remain indispensable, and such measures are best viewed as complementing rather than replacing clinical decision-making.

Across ages, family guidance and defaults can take different forms, with younger adolescents often better protected by conservative arrangements ‒ restricted messaging, filters for sensitive material, some degree of supervision ‒ while older adolescents may benefit more from controls that are intelligible and open to negotiation, so that quiet hours, recommendation settings, or visibility and reporting tools are adjusted in ways that preserve sleep, school, and relationships as central rather than residual parts of daily life. What seems to matter most is not an overall ceiling on time, but the texture of engagement itself: late-night scrolling that extends wakefulness, feeds dominated by appearance-related content, or compulsive cycles of checking, each of which has been more consistently associated with difficulty than sheer hours of use, in line with evidence that risk depends on what is encountered, how it is engaged with, and when these patterns occur (Conte et al., 2025; Valkenburg et al., 2022).
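As a purely hypothetical illustration of age-banded conservative defaults ‒ the specific settings and thresholds below are assumptions, not any platform’s actual policy ‒ such a configuration might be expressed as:

```python
# Hypothetical age-banded defaults; values are illustrative assumptions.
DEFAULTS_BY_AGE_BAND = {
    "13-15": {
        "direct_messages": "contacts_only",
        "sensitive_content_filter": "strict",
        "quiet_hours": ("21:30", "07:00"),   # negotiated with a parent/carer
        "recommendations": "reduced_personalisation",
    },
    "16-17": {
        "direct_messages": "requests_require_approval",
        "sensitive_content_filter": "moderate",
        "quiet_hours": ("22:30", "06:30"),   # adjustable, but visible
        "recommendations": "personalised_with_intelligible_controls",
    },
}

def defaults_for(age: int) -> dict:
    """Pick the conservative default bundle for a minor's age band."""
    return DEFAULTS_BY_AGE_BAND["13-15" if age < 16 else "16-17"]

print(defaults_for(14)["quiet_hours"])
```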

Research and policy agenda

It may be more informative, for both research and services, to follow the exposures themselves rather than to aggregate minutes, since what matters for adolescents often lies in the composition of the feed, the manner of engagement and the timing in which contact occurs. Describing what is delivered (for example, appearance-centred or self-harm–related material), how it is met (passively or through creation and exchange) and when in the day and in development the interaction takes place, using shared taxonomies for modality and valence, would sit better with the heterogeneity already documented (Valkenburg et al., 2022; Conte et al., 2025). Service experience suggests that routine documentation is tractable: mentions of online activity can be recovered with adequate fidelity from clinical text and, once reviewed by clinicians, can support coarse categorisation of experiences; harmonised fields in electronic records and small-scale dashboard work could therefore help teams to visualise patterns across caseloads while preserving clinical ownership of interpretation (Sedgwick et al., 2023).
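A harmonised exposure record of this kind ‒ with field names and category values that are illustrative assumptions, not an agreed taxonomy ‒ might capture the what, how and when in a single structure:

```python
# Sketch of a shared exposure record; fields and categories are hypothetical.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Engagement(Enum):
    PASSIVE_SCROLL = "passive_scroll"
    ACTIVE_CREATE = "active_create"
    EXCHANGE = "exchange"          # comments, direct messages

@dataclass
class ExposureEvent:
    content_category: str          # e.g. "appearance", "self_harm", "peer_support"
    modality: str                  # e.g. "short_video", "image", "text"
    valence: str                   # e.g. "supportive", "detrimental", "neutral"
    engagement: Engagement         # how the content was met
    timestamp: datetime            # when in the day contact occurred

event = ExposureEvent("appearance", "short_video", "detrimental",
                      Engagement.PASSIVE_SCROLL, datetime(2025, 3, 1, 23, 40))
print(event)
```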

Clarifying causal questions is likely to require designs that give priority to within-person change and to circumstances in which the environment shifts on its own. Intensive longitudinal and event-based sampling may illuminate short windows of susceptibility; natural experiments around design changes or policy shifts could be logged prospectively; and a routine of preregistration, transparent analysis plans and external validation would limit analytic flexibility and over-fit. Where predictive models touch crisis signals, a cautious posture seems warranted: reporting standards akin to TRIPOD, explicit discussion of transportability and privacy safeguards, and a presumption of human-in-the-loop use; recent work on short-term trajectories of suicidal ideation is suggestive of potential, but also of the need for careful governance (Kaminsky et al., 2024; Morley et al., 2020). A complementary synthesis of text-based digital media points to the dual prospect of earlier signal detection and the parallel need for robust oversight when data-analytic and machine-learning approaches are applied to mental-health and suicide-prevention contexts (Sweeney et al., 2024).
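The presumption of human-in-the-loop use can itself be sketched minimally ‒ thresholds and queue mechanics here are assumptions, not a validated protocol ‒ with the model doing no more than queueing cases for clinician review:

```python
# Hedged sketch of a human-in-the-loop gate for crisis-signal models:
# the model only queues items for review; it never acts autonomously.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def enqueue(self, case_id: str, score: float, rationale: str):
        """Route to a named clinician for review; no automated outreach."""
        self.items.append({"case": case_id, "score": score, "why": rationale})

def triage(case_id: str, risk_score: float, queue: ReviewQueue,
           threshold: float = 0.8):
    # Above threshold: flag for timely human review, with a logged rationale
    # so that decisions remain traceable (cf. explicability).
    if risk_score >= threshold:
        queue.enqueue(case_id, risk_score,
                      "model flag: short-term ideation trajectory")
    # Below threshold: no action; false-positive management stays with humans.

queue = ReviewQueue()
triage("case-017", 0.86, queue)
print(queue.items)
```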

At the level of systems, attention may fall less on blanket prohibitions and more on designs that can be examined and improved. Independent algorithmic risk audits with public-facing summaries, secure researcher access to risk-stratified exposure data, and age-appropriate defaults would be coherent with a determinants perspective in which upstream conditions form part of the problem definition (Marmot, 2005; WHO Commission, 2008; Costello et al., 2023). Ethical frameworks tend to converge on beneficence, non-maleficence, autonomy, justice and explicability; in practice this points to intelligible controls for families, equity checks that test for over-delivery of risky material to vulnerable cohorts, and traceable recommendation pathways when harm must be reconstructed (Floridi et al., 2018; Floridi & Cowls, 2019). A recent editorial debate in child and adolescent mental health has similarly argued for strengthening moderation in light of business-model incentives that can normalise or amplify extreme content; a measured programme of transparency, accountability and timely moderation may therefore be more proportionate than reliance on generic screen-time limits (Russell, 2024; Graham, 2024). Future work should prioritise longitudinal, exposure-aware designs and cross-sector collaboration so that safety-by-design becomes verifiable rather than aspirational.

Conclusions

If a single pattern recurs, it is the sense that choices made upstream ‒ quietly, in code and defaults ‒ can tilt trajectories downstream, sometimes imperceptibly and sometimes in ways that leave a trace. Transparency and ethically governed recommendation need not diminish what is valuable online; they can make the same places safer and healthier by ensuring that what reaches young people is intelligible, open to audit and capable of being adjusted, rather than a procession of opaque prompts that simply happen to them. Precaution herein is not panic but proportion: conservative defaults for minors, clear controls for families, and curation that admits independent review, so that benefits can be kept while foreseeable risks are pared back.

Without such guardrails, the direction of travel is easy enough to imagine: systems that learn only from engagement may, by degrees, normalise the extreme, narrow horizons and distribute harm unevenly ‒ especially where vulnerability and salience meet ‒ while leaving clinicians, researchers and parents with little to examine and even less to remedy. With them, the same learning machinery can be turned toward friction against harmful spirals, timely support when risk peaks, and equity checks that prevent systematic over-delivery of damaging material to those least able to bear it.

Seen this way, algorithms might be neither villains nor saviours but τὸ νέον φάρμακον ‒ remedy and poison together. Therefore, our future mission should aim at designing for the remedy and at bounding the poison, through transparency that permits scrutiny, through responsibility shared across platforms and services, and through a research programme that stays close to exposure and consequence. Whether the balance tilts toward poison or remedy will depend less on the code itself than on whether transparency and accountability become the rule rather than the exception.

References

1. Conte, G., Iorio, G. D., Esposito, D., Romano, S., Panvino, F., Maggi, S., Altomonte, B., Casini, M. P., Ferrara, M., & Terrinoni, A. (2025). Scrolling through adolescence: A systematic review of the impact of TikTok on adolescent mental health. European Child & Adolescent Psychiatry, 34(5), 1511–1527. doi: 10.1007/s00787-024-02581-w
2. Costello, C. R., Armstrong, S., Sacks, D., et al. (2023). Algorithms, addiction, and adolescent mental health: An interdisciplinary study to inform state-level policy action to protect youth from the dangers of social media. American Journal of Law & Medicine, 49(2–3), 135–170. doi: 10.1017/amj.2023.27
3. de Valle, M. K., & Wade, T. D. (2022). Targeting the link between social media and eating disorder risk: A randomized controlled pilot study. International Journal of Eating Disorders, 55, 1066–1078. doi: 10.1002/eat.23756
4. Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Philosophy & Technology, 32(4), 687–707. doi: 10.1007/s13347-019-00434-9
5. Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. doi: 10.1007/s11023-018-9482-5
6. Graham, R. (2024). Debate: How the business model of social media fuels the need for greater moderation. Child and Adolescent Mental Health, 29(3), 322–324. doi: 10.1111/camh.12724
7. Griffiths, S., Harris, E. A., Whitehead, G., Angelopoulos, F., Stone, B., Grey, W., & Dennis, S. (2024). Does TikTok contribute to eating disorders? A comparison of the TikTok algorithms belonging to individuals with eating disorders versus healthy controls. Body Image, 51, 101807. doi: 10.1016/j.bodyim.2024.101807
8. Kaminsky, Z., McQuaid, R. J., Hellemans, K. G. C., Patterson, Z. R., Saad, M., Gabrys, R. L., Kendzerska, T., Abizaid, A., & Robillard, R. (2024). Machine learning–based suicide risk prediction model for suicidal trajectory on social media following suicidal mentions: Independent algorithm validation. Journal of Medical Internet Research, 26, e49927. doi: 10.2196/49927
9. Marazziti, D. (2022). Brainwashing by social media: A threat to freedom, a risk for dictatorship. Clinical Neuropsychiatry, 19(5), 277–279. doi: 10.36131/cnfioritieditore20220501
10. Marmot, M. (2005). Social determinants of health inequalities. The Lancet, 365(9464), 1099–1104. doi: 10.1016/S0140-6736(05)71146-6
11. Mayoh, J., & Jones, I. (2021). Young people’s experiences of engaging with fitspiration on Instagram: Gendered perspective. Journal of Medical Internet Research, 23(10), e17811. doi: 10.2196/17811
12. McCrory, A., Best, P., & Maddock, A. (2022). ‘It’s just one big vicious circle’: Young people’s experiences of highly visual social media and their mental health. Health Education Research, 37(3), 167–184. doi: 10.1093/her/cyac010
13. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). doi: 10.1177/2053951716679679
14. Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168. doi: 10.1007/s11948-019-00165-5
15. Oshin, L. A., Boyd, S. I., Jorgensen, S. L., Kleiman, E. M., & Hamilton, J. L. (2024). Exposure to racism on social media and acute suicide risk in adolescents of color: Results from an intensive monitoring study. Journal of the American Academy of Child & Adolescent Psychiatry, 63(8), 757–760. doi: 10.1016/j.jaac.2024.03.009
16. Paquin, V., Philippe, F. L., Shannon, H., Guimond, S., Ouellet-Morin, I., & Geoffroy, M.-C. (2024). Associations between digital media use and psychotic experiences in young adults of Quebec, Canada: A longitudinal study. Social Psychiatry and Psychiatric Epidemiology, 59, 65–75. doi: 10.1007/s00127-023-02537-6
17. Popat, A., & Tarrant, C. (2023). Exploring adolescents’ perspectives on social media and mental health and well-being: A qualitative literature review. Clinical Child Psychology and Psychiatry, 28, 323–337. doi: 10.1177/13591045221092884
18. Russell, I. (2024). Debate: More, not less social media content moderation? How to better protect youth mental health online. Child and Adolescent Mental Health, 29(3), 319–321. doi: 10.1111/camh.12717
19. Sedgwick, R., Bittar, A., Kalsi, H., Barack, T., Downs, J., & Dutta, R. (2023). Investigating online activity in UK adolescent mental health patients: A feasibility study using a natural language processing approach for electronic health records. BMJ Open, 13, e061640. doi: 10.1136/bmjopen-2022-061640
20. Strickland, S. R., Medina Fernandez, A., & Keel, P. K. (2025). TikTok and disordered eating: Delineating temporal associations and effects of a ban. Eating Behaviors, 58, 102024. doi: 10.1016/j.eatbeh.2025.102024
21. Sweeney, C., Ennis, E., Mulvenna, M. D., Bond, R., & O’Neill, S. (2024). Insights derived from text-based digital media, in relation to mental health and suicide prevention, using data analysis and machine learning: Systematic review. JMIR Mental Health, 11, e55747. doi: 10.2196/55747
22. Valentine, L., McEnery, C., O’Sullivan, S., Gleeson, J., Bendall, S., & Alvarez-Jimenez, M. (2020). Young people’s experience of a long-term social media–based intervention for first-episode psychosis: Qualitative analysis. Journal of Medical Internet Research, 22(6), e17570. doi: 10.2196/17570
23. Valkenburg, P. M., Meier, A., & Beyens, I. (2022). Social media use and its impact on adolescent mental health: An umbrella review of the evidence. Current Opinion in Psychology, 44, 58–68. doi: 10.1016/j.copsyc.2021.08.017
24. World Health Organization, Commission on Social Determinants of Health. (2008). Closing the gap in a generation: Health equity through action on the social determinants of health. Final report. Geneva: WHO.
25. Yang, C., Du, J., Li, X., Li, W., Huang, C., Zhang, Y., & Zhao, Y. (2024). Association between parental phubbing and short-form video addiction: A moderated mediation analysis among Chinese adolescents. Journal of Affective Disorders. doi: 10.1016/j.jad.2024.10.023
