Abstract
Academic scholarship and public discourse about children’s digital media use often invoke concepts such as ‘screen time’ that place the locus of responsibility on individual users and families rather than on the designers creating digital environments. In this vision article, we argue that research, design, and policy frameworks that assume individual responsibility contribute to intensive parenting messaging about children’s media use, are less likely than systemic approaches to achieve population-level change, and produce inequities in children’s access to positive, child-centered media. Platforms (e.g., app marketplaces, video streaming services) act as entry points for children’s use of digital spaces, and thus are strong determinants of children’s experiences. As such, platforms are an ideal point of intervention for systemic change and have the potential to create equitable and child-centered digital environments at an ecosystem level. We contend that policies that encourage platforms to establish child-centered design as the default user interface will both create better experiences for children and relieve pressure on parents as gatekeepers. Finally, we review the types of research questions that could examine how to measure and optimize platforms for their impact on child wellbeing and outline steps researchers can take to provide evidence-based guidance to industry about designing ecosystems for children’s best interests.
Introduction
Discussions about children’s digital media use usually carry assumptions of individual responsibility. Parents are responsible for limiting children’s ‘screen time’ and removing devices from the bedroom. Users are responsible for avoiding violent or inappropriate content. Teens are responsible for not sending risqué photos or bullying others online. In short, the task of discerning which digital experiences lack meaning, waste users’ time, or pose wellbeing risks—and avoiding such experiences—is left to child users and their parents.
However, public health scholarship and practice have shown that changing environmental and structural determinants of child wellbeing is more effective for producing widespread and equitable changes in wellness than developing interventions that focus on individual responsibility alone. By changing the design of children’s environments (for example, by removing lead from gasoline, or removing soda machines from schools), advocates for child wellbeing have achieved large-scale population shifts in intelligence, body mass index, and other indicators of wellness. This systemic approach is not only more effective; it also requires less effort across the population and results in fewer inequities than approaches that rely on individual responsibility [1]. The environment determines children’s starting points and opportunities [2].
Similarly, we see design of a high-quality digital environment as a determinant of child wellbeing. We argue that an environment that systemically surfaces high-quality content will result in a better experience for children than an approach that tasks the individual with continually curating such an environment for themselves.
Systemically creating a high-quality environment is also not a task that individual content producers can take on alone. A small number of entry points dominate children’s digital access: children consume a large fraction of their digital media through a handful of popular platforms, such as app marketplaces, livestreaming environments, and video-on-demand services. Thus, examining children’s media use from a structural perspective requires examining these widely used containers that provide children with so much of the content they engage with.
In this vision article, we argue that platforms provide an ideal point of intervention for creating equitable and child-centered digital environments. For example, by rethinking the settings exposed to end users, the internal mechanisms for reviewing content, the metrics used to train recommender systems, the API options provided to third parties, and the platform-wide approaches to data collection and data privacy, developers of large platforms can create better starting points for children’s media use and effect change at a population level. We contend that policies that encourage platforms to establish child-centered design as the default user experience across the platform will both create better experiences for children and relieve some of the pressure on parents as gatekeepers. Finally, we review the types of research questions that could examine how to measure and optimize platform ecosystems for their impact on child wellbeing, and how researchers can provide evidence-based guidance to industry about designing for children’s best interests.
Rationale for the Paradigm Shift
Creating Environments with Child-Centered Defaults
Children are uniquely vulnerable to the design of their environments, due to their smaller size, dependence on adults, and the transactional nature of child development (i.e., bidirectional influences between child, their caregivers, and their context that shape children’s developmental trajectories [3]). For this reason, public health policy and interventions have focused on removing toxic substances from children’s environments (e.g., lead from gasoline or paint), or improving access to healthy foods, green spaces, physical activity, or community-based opportunities for positive health behaviors [2].
The concept of ‘optimal defaults’ was articulated by Dr. Tom Frieden, a public health expert and former director of the U.S. Centers for Disease Control and Prevention, through the Health Impact Pyramid [4]. Frieden contended that interventions that change the context to make default decisions healthier are the most likely to make a large-scale positive impact on human flourishing. In contrast, interventions that require individuals to each change their behavior are the least likely to make a meaningful impact. The best-known example of the ‘optimal defaults’ approach was the New York City Department of Health’s 2005 policy to ban trans fats from restaurant foods. This change in the food environment of a large urban area led to a city-wide 4% decrease in heart disease-related mortality [5]. It also symbolized a shift from blaming obesity and heart disease risk on individuals’ failings [6] to expecting more from the environment.
Similarly, designers have long realized the power of defaults. Users rarely change default settings, and designers frequently funnel users into desired behaviors through practices known as ‘choice architecture’ [7]. When applied to children’s digital environments, the ‘optimal defaults’ approach – which we term ‘child-centered defaults’ in this article – implies that content creators, platforms, developers, and publishers would create digital products that consider children’s wellbeing as a first principle.
Although this framework invokes public health as an organizing concept, we contend that such an analogy is appropriate as health – like the Internet – involves complex dynamic systems that require cross-disciplinary thought and action to make meaningful change [8]. This framework also actively questions assumptions inherent in many western cultures – particularly the U.S. – in which individual responsibility for behavior is expected over communal solutions, and where powerful industries have defended their actions by placing responsibility on individual consumers (e.g., of food, tobacco, alcohol) [9]. There are limitations to using health-related practices as a metaphor for media habits, but in both instances, environmental factors have great potential to impact measures of wellbeing.
Individual Responsibility over Child Wellbeing: Intensive Parenting Discourses
A ‘child-centered defaults’ approach that focuses on digital environment design would counter panicked public discourses about parents’ individual responsibility to manage a rapidly evolving and highly technical array of digital products. Since the turn of the 20th century, scholars have identified a change in the public discussion of parents, from being a ‘parent’ in a relationship with a child to the act of ‘parenting’ a child [10]. Within this intensive parenting view, raising a child requires actively cultivating the child [11], and the individual responsibility of parents – particularly mothers – is treated as paramount to children’s wellbeing and outcomes. Prior work has shown that the intensive parenting ideology is highly risk-averse [12], privileges middle-class ideas of family life, and has come to define unrealistic expectations for parents of all backgrounds [13]. Critics of intensive parenting point out that it not only asks parents to strive for impossible ideals but also defines ideals that are not necessarily predictive of better outcomes [14, 15].
The intensive parenting perspective has come to shape societal ideas about children’s use of technology. It has both produced collective anxiety about potential risks posed by children’s ‘screen time’ – without regard for the diverse range of meaningful child media experiences – and tasked individual parents with policing what children do online [16, 17]. Prior work concludes that this parental mediation approach does not sufficiently account for the pressures parents face or the larger social context in which families make decisions about technology [18]. Lower-income parents in particular report feeling overwhelmed by the rapid pace of technological change, having a low locus of control over their children’s relationship with technology, and wishing for more child-centered considerations to be built into the technologies children use [19].
As we argue below, a shift to child-centered defaults in the platforms children use will lead to a digital ecosystem that naturally supports child wellbeing. This, in turn, will remove pressure from parents who currently shoulder the burden of gatekeeping under intense societal pressure and without support from platforms.
The Foundation of Digital Environments: Industry Incentive Structures
A ‘child-centered defaults’ approach would also serve to course-correct industry incentives that prioritize engagement and advertising revenue at the expense of child-centered design. The influence of these incentive structures is widely reflected in the design of many of the platforms children use. For example, in keeping with its goal of 1 billion global viewing hours per day, YouTube created highly effective designs – such as predictive recommendation feeds [20] – that changed millions of users’ behavior.
Industry goals for engagement, however, do not always align with children’s needs. Broadly defined, children’s needs include autonomy to play and explore without exploitation or pressure from consumer interests; health-promoting and self-regulatory behaviors such as sleep, movement, secure relationships, and emotional awareness; and exposure to a variety of developmentally appropriate learning experiences and role models that foster knowledge and critical thinking. Child-centered design advocates – including Fred Rogers, Sesame Workshop, the Designing For Children’s Rights Coalition in Europe, and the 5Rights Foundation in the UK – largely focus on the following core concepts when considering children’s needs in digital spaces: 1) allowing space for safe exploration, play, autonomy, imagination, and failure; 2) consideration of the child’s relationships, whether with caregivers who co-play or characters on screen; 3) respect for the child’s need for varied experiences, other types of play, and transfer of concepts and experiences from the digital world to the physical one (i.e., making media a launching point for broader experiences); and 4) helping children self-regulate their media use and disengage without heavy-handed intervention from parents. These digital autonomy and wellbeing concepts, advocates argue, should be realized without exploitation by commercial interests or data surveillance [21-23].
Yet these principles of child-centered design do not inform most industry metrics of success, apart from those of nonprofit content creators (e.g., PBS KIDS in the U.S.). Platforms and content that depend on advertising revenue have every incentive to optimize for the amount of time users spend with technology, thereby increasing advertisement impressions. The goals of engagement-driven business models are evident in the design of the systems children use, leaving parents not only to police children’s technology use but also to work against engagement-oriented designs as they do so.
Operationalizing the Paradigm Shift
Opportunities for Platform Developers
The way in which platforms choose to surface content has an enormous impact on what users consume. For example, apps featured among a small set of curated options in the ‘Apple Weekly’ promotion on the Apple App Store see an average increase in downloads of 3600% [24]. Similarly, developers say they tailor their products in response to platform affordances and make choices that will make their content more visible and viral within the platform marketplace [25]; this is apparent, for instance, in the most popular child-directed YouTube channels [26]. And recommender systems respond to users’ interests by promoting increasingly extreme and radicalized versions of these interests [27, 28], including recommending disturbing content to young children [29]. All of these examples demonstrate powerful pathways by which platform design determines the quality and quantity of content users consume.
Thus, it is not enough for individual content producers to create developmentally sensitive and child-centric content; disseminating this content at scale requires holistic platform health. Across the landscape of children’s digital media, there are many successful examples of content producers like Sesame Workshop, LEGO, or PBS KIDS partnering with child development experts and using developmental principles to guide the design of a product from its inception. But these efforts are only as effective as their ability to reach consumers, leaving platform developers with the responsibility of detecting and promoting high-quality content.
There are a number of levers by which platform developers might promote fair and positive experiences by default. The first is to recognize child users, even if they are not the only or intended audience. Although platforms were initially designed with adult consumers in mind, and their developers may therefore have been surprised by the unintended ways children use these spaces, it is not enough for platforms to simply retrofit when regulation is threatened (e.g., the U.S. Federal Trade Commission fining TikTok [30] and YouTube [31] in 2019 for collecting data on children) or when high-risk problems are revealed (e.g., sexual exploitation via the comments sections of YouTube videos containing children). Privacy reviews suggest that platforms have generally followed reactive risk mitigation approaches, not proactive protective design strategies [32]. Similarly, advocacy and legislation have often focused on extreme cases of media harms, risks, and safety (e.g., suicide, exposure to child predators) without consideration of the everyday user interface (UI) design features that support or undermine children’s wellbeing in digital spaces.
Second, recommender systems drive a huge share of user engagement; for example, 70% of all YouTube viewing comes from algorithmic recommendations [33]. Thus, a child-centered defaults approach must include training recommender systems on data aligned with child wellbeing. Engagement-driven metrics like time-on-task and click-through rates are agnostic to the quality of content children engage with, and recommender systems trained with engagement data from A/B testing inevitably prioritize whatever children happen to pay attention to. This leads to algorithms that are not only capable of surfacing extreme and disturbing content but are actually highly likely to do so [29]. As we suggest below, collaborative science between child-computer interaction (CCI) and child health/development researchers (and, ideally, industry) would lead to new indicators of child user wellbeing that platforms could use for defining success metrics and training recommender systems. This has the power to change the incentives for third-party content creators, because platforms surface or bury individual content based on the metrics they use for producing recommendations.
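To make this concrete, the brief sketch below (written in Python, with entirely hypothetical field names, weights, and scores of our own invention) contrasts ranking candidate content by predicted watch time alone with ranking by a blend of engagement and wellbeing-aligned indicators. It is a conceptual illustration under these stated assumptions, not a description of any platform’s actual recommender.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A piece of content the recommender could surface (hypothetical fields)."""
    title: str
    predicted_watch_minutes: float   # engagement-style signal
    age_appropriateness: float       # 0-1, e.g., from expert content review
    educational_value: float         # 0-1, e.g., curriculum-alignment rating
    sleep_disruption_risk: float     # 0-1, e.g., likelihood of late-night binge patterns

def engagement_score(c: Candidate) -> float:
    # Status quo: rank purely by predicted time-on-platform.
    return c.predicted_watch_minutes

def wellbeing_aware_score(c: Candidate, w_engage: float = 0.3, w_child: float = 0.7) -> float:
    # Hypothetical alternative: engagement still matters, but wellbeing-aligned
    # indicators carry most of the weight and can actively demote content.
    child_signal = (c.age_appropriateness + c.educational_value
                    - c.sleep_disruption_risk) / 3
    return w_engage * (c.predicted_watch_minutes / 60) + w_child * child_signal

candidates = [
    Candidate("Unboxing marathon", 55, 0.4, 0.1, 0.8),
    Candidate("Counting with puppets", 12, 0.9, 0.9, 0.1),
]

print("Engagement-only ranking:",
      [c.title for c in sorted(candidates, key=engagement_score, reverse=True)])
print("Wellbeing-aware ranking:",
      [c.title for c in sorted(candidates, key=wellbeing_aware_score, reverse=True)])
```

Even in this toy form, changing the training objective changes which content rises to the top – the same lever by which platforms could reshape incentives for third-party creators.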
Third, developers could re-examine the platform UI itself, in addition to examining the platform’s indirect influence on children via content creators. Optimizing for engagement has led to platform-level choice architecture with a single-minded goal of keeping children connected, including features like autoplay, tailored feeds, and streaks that reward frequent play; these features foster externally motivated engagement that suppresses children’s internal cues for self-directed play. A ‘child-centered defaults’ approach would design for necessary cycles of engagement and disengagement and would promote children’s autonomy and engagement-related decision-making. This might mean designing for natural stopping points and cues, adding prompts to self-reflect throughout the platform, helping the child connect what they are seeing on screen to the world off the screen, or simply avoiding dark patterns that manipulate users’ attention (such as interaction-by-demand, where the user must return to an app or game on a schedule dictated by the app itself [34]). Prior work has shown that people find experiences with technology most meaningful when they are investing in something that can transcend the specific usage session, such as a relationship with a loved one or a learning experience they can transfer to the physical world [35]. In young children, the very meaning of learning from digital media involves a child transferring that knowledge to the world around them [36, 37].
In addition, to honor child-centered design principles – varying children’s play diets and respecting children’s needs for quiet and sleep – platforms would include platform-wide UI that streamlines both engaging with content and setting it aside. Using probabilistic models trained on both sensor data and evidence-based indicators of wellbeing, platforms could provide context-aware nudges to support cycles of engagement that reflect children’s needs, rather than endlessly autoplaying additional videos.
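As a deliberately simplified stand-in for such models, the sketch below shows a rule-based version of a context-aware stopping cue; the thresholds, field names, and bedtime assumption are illustrative placeholders of our own, not evidence-based values or any platform’s implemented logic.

```python
from datetime import datetime

def should_offer_stopping_cue(session_minutes: float,
                              local_hour: int,
                              videos_autoplayed: int,
                              child_bedtime_hour: int = 20) -> bool:
    """Decide whether to pause autoplay and offer a natural stopping point
    instead of queuing the next video by default. In practice, the thresholds
    would be derived from evidence-based indicators of wellbeing (and ideally
    from a probabilistic model), not the arbitrary values used here."""
    approaching_bedtime = local_hour >= child_bedtime_hour - 1
    long_session = session_minutes >= 30
    passive_viewing = videos_autoplayed >= 3
    return approaching_bedtime or long_session or passive_viewing

# Example: a 40-minute evening session with several autoplayed videos would
# surface a disengagement cue rather than another recommendation.
print(should_offer_stopping_cue(session_minutes=40,
                                local_hour=datetime.now().hour,
                                videos_autoplayed=4))
```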
Platform designers also have the opportunity to remove sloppy monetization, such as disruptive ads that use attention-grabbing design features and are difficult to minimize [38], or irrelevant ads that impose unnecessary cognitive load on young viewers [37]. They can set limits on what data can be collected and when, using a ‘child-centered defaults’ approach to create non-traceable identifiers rather than sending persistent ones to third parties for advertising auctions [39, 40]. And they can provide more developmentally appropriate transparency to children about how their data is processed and stored [41]. They might choose to add child-centric requirements to the review and approval process when new content is added to the platform, and they might add APIs for third-party developers that are explicitly aligned with child wellbeing.
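To illustrate what non-traceable identifiers could mean at the level of implementation, the sketch below contrasts a persistent device identifier with an ephemeral per-session token and a platform-held rotating pseudonym. The function names and rotation scheme are our own illustrative assumptions, not any platform’s documented API.

```python
import hashlib
import secrets
import uuid

def persistent_identifier(device_id: str) -> str:
    # Status quo: the same device ID accompanies every ad request, allowing
    # third parties to build a longitudinal profile of the child.
    return device_id

def ephemeral_identifier() -> str:
    # Child-centered default: a fresh random token per session, usable for
    # within-session purposes (e.g., frequency capping) but useless for
    # cross-session or cross-app tracking.
    return uuid.uuid4().hex

def rotating_pseudonym(device_id: str, rotation_window: str, platform_salt: bytes) -> str:
    # Middle ground: a pseudonym derived from a platform-held secret and a
    # rotation window (e.g., "2024-W07"), so it cannot be linked across
    # windows or reversed by third parties who receive it.
    digest = hashlib.sha256(platform_salt + device_id.encode() + rotation_window.encode())
    return digest.hexdigest()[:16]

salt = secrets.token_bytes(32)  # held by the platform, never shared
print(persistent_identifier("device-1234"))
print(ephemeral_identifier())
print(rotating_pseudonym("device-1234", "2024-W07", salt))
```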
Although many child-centered designers who have been hired by large technology companies may be highly motivated to pursue such lines of research or product innovation, their impact may be constrained by the larger platforms’ profit motives. Therefore, policy changes that seek to align platform design with child digital rights are necessary.
Opportunities for Policymakers
Platforms themselves have little incentive to provide child-centered design – which, as described above, would mean moving away from simplistic but highly profitable approaches – when child technology regulations are out of date or unevenly enforced. Particularly in the U.S., where the major platforms are headquartered, industry accountability for what content it promotes to children is limited by Section 230 of the Communications Decency Act. In addition, U.S. technology legislation has not kept pace with children’s media behaviors. For example, despite the rapid growth of marketing to children through online ads or sponsored content, no U.S. laws regulate the duration, frequency, or content of ads appearing in apps or on video streaming platforms, resulting in digital experiences that may be more advertisement than game [26, 38]. Despite several large-scale analyses showing persistent identifier collection and sharing with third-party marketing databases by children’s apps [40, 42], only a handful of lawsuits have attempted to address this phenomenon. We contend that outdated or piecemeal approaches to plugging the holes in a messy ecosystem will only distract from efforts to create an environment designed for children as a first principle.
In recent years, efforts by the United Kingdom (e.g., the Age-Appropriate Design Code) and Australia (the Safety by Design initiative) have laid out plans for child-centered design regulations. However, without a global commitment to child-centered defaults, as recommended by the United Nations Committee on the Rights of the Child in its General Comment No. 25 on children’s rights in relation to the digital environment [43], inequities in access to high-quality digital spaces will widen. Parents of high socioeconomic status are already more likely to seek out child-dedicated spaces for their family’s media use than parents from marginalized communities. Therefore, without widespread adoption of child-centered design by the platforms that children are known to access, we can expect that privileged children will reap the benefits of well-designed digital spaces, while more vulnerable children will continue to wade through marketing, misinformation, and surveillance.
Opportunities for Researchers
Child health research conceptual models are often constructed with assumptions of parent responsibility, assessing constructs such as screen time duration, coviewing, or quality content choices, rather than asking how design features may make these individual behaviors more or less effective. We propose a shift to research questions and conceptual frameworks that interrogate how technology design – whether through surface cues and enhancements [44], commercial and advertising motives [38], or engagement-promoting features such as autoplay [45] – supports or forecloses opportunities for positive, social, and self-determined media use behaviors.
By pushing research beyond silos and into community-informed, cross-disciplinary, longitudinal, and experimental work, new research questions can be asked with novel implications for intervention. When research studies only use constructs with individual responsibility assumptions, they yield findings with implications for what parents should do (or, when interpreted through the cultural lens of intensive parenting, what good parents should do). Clinical guidelines may then be written based on this narrow set of assumptions, overlooking digital ecosystem design as a primary force in shaping children’s media use.
In contrast, a ‘child-centered defaults’ paradigm shift (see Figure 1) would involve a cross-disciplinary examination of how child wellbeing and digital design intersect in the moment (i.e., lab-based design and evaluation research with diverse user communities, not just university research registries and faculty children) and over time (i.e., longitudinal and experimental research). Ideally, cross-disciplinary approaches would allow CCI researchers to understand how their designs are used by families over time in real-world settings, reassess the assumptions that informed the designs by examining the unexpected ways children play in digital spaces, and partner with industry to understand how user analytic data can shed light on child wellbeing.
Figure 1: Conceptual Framework for Cross-Disciplinary Child-Centered Design Research
Measuring engagement is easy, but how can we measure whether digital experiences are allowing children to explore, integrate their knowledge meaningfully, be more self-reflective, or sleep enough? Enabling platforms to optimize for better metrics will require innovative and interdisciplinary research that: 1) identifies potential indicators of wellbeing (or lack of wellbeing) based on developmental literature, 2) invents ethical and accurate ways of sensing or otherwise measuring these indicators, and 3) conducts proof-of-concept work to train and evaluate dynamic systems using these metrics.
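As a modest illustration of the second step, the sketch below converts hypothetical session logs into a handful of candidate indicators; the indicators, field names, and cutoffs are placeholders that would need to be derived from the developmental literature and validated before any real-world use.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    """One usage session from a hypothetical platform log."""
    minutes: float
    started_hour: int           # local clock hour, 0-23
    ended_by_child: bool        # did the child stop on their own?
    creative_interactions: int  # e.g., drawings made or stories recorded

def candidate_wellbeing_indicators(sessions: list[Session]) -> dict[str, float]:
    """Turn raw logs into interpretable, developmentally motivated indicators.
    These specific indicators are illustrative; step 1 of the agenda above
    would ground them in developmental evidence."""
    total_hours = max(sum(s.minutes for s in sessions) / 60, 1e-9)
    return {
        # Proportion of sessions the child ended without external intervention.
        "self_ended_fraction": mean(s.ended_by_child for s in sessions),
        # Proportion of sessions starting late in the evening (possible sleep displacement).
        "late_night_fraction": mean(s.started_hour >= 21 for s in sessions),
        # Rate of creative (rather than purely passive) interactions.
        "creative_actions_per_hour": sum(s.creative_interactions for s in sessions) / total_hours,
    }

logs = [Session(25, 17, True, 3), Session(50, 21, False, 0)]
print(candidate_wellbeing_indicators(logs))
```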
Child-centered design research would also be nimble enough to study natural experiments that occur within children’s digital environments, such as COVID-19-related shifts to remote learning, and to independently assess how well new platform designs are rolled out and used in real-world settings (e.g., YouTube’s Supervised Experiences feature) or how changes to digital advertising policies shape children’s experiences.
Finally, the research strategies we propose are proactive, not reactive. Collaboration between academics and industry would reduce the common dynamic of the former acting as ‘watchdogs’ to critique digital products once they come to market. We suggest that by establishing a continuous, sustainable, bidirectional conduit between academia and industry, with transparent communication to the public about how decisions are made before designs are tested and rolled out, trust in both media research and children’s tech products might increase.
Although it is still necessary to identify when children are not being recognized as users of certain digital spaces, or when adult design norms are being copied and pasted into children’s products (such as monetization [38] or data collection SDKs [42]), reactive criticisms of industry are insufficient to move the needle on children’s wellbeing with regard to technology. Government bodies and nonprofit organizations may need to fund such collaborative science through specific funding streams, ideally with fast-track review processes so that research can keep pace with technological innovation.
Conclusion
We have argued for tasking industry with the responsibility of designing and maintaining positive, equitable spaces for children, rather than expecting families to navigate a rapidly changing and engagement-driven digital environment. As the spaces that define what content is most visible and how it is consumed, platforms are the ideal point of intervention for driving change across the landscape of digital content for children. The research community can catalyze this change by investigating new metrics platforms might use to surface high-quality content, new mechanisms by which third parties can support child wellbeing, and new ways of evaluating a platform for optimal defaults.
Funding Source:
Dr. Radesky is funded by a K23 patient-oriented research career development award (1K23HD092626) from the National Institute of Child Health and Human Development.
Footnotes
Competing Interests: Dr. Radesky is a paid consultant for Melissa & Doug Toys, LLC and Noggin, and receives research funding from Common Sense Media. Dr. Hiniker has received past and current research funding from Sesame Workshop, Facebook, Mozilla, and the Jacobs Foundation.
References:
- 1. Lorenc T, et al., What types of interventions generate inequalities? Evidence from systematic reviews. J Epidemiol Community Health, 2013. 67(2): p. 190–193.
- 2. National Research Council, From neurons to neighborhoods: The science of early childhood development. 2000.
- 3. Sameroff A, Transactional models in early social relations. Human Development, 1975. 18(1–2): p. 65–79.
- 4. Frieden TR, A framework for public health action: the health impact pyramid. American Journal of Public Health, 2010. 100(4): p. 590–595.
- 5. Restrepo BJ and Rieger M, Trans fat and cardiovascular disease mortality: evidence from bans in restaurants in New York. Journal of Health Economics, 2016. 45: p. 176–196.
- 6. Brownell KD, et al., Personal responsibility and obesity: a constructive approach to a controversial issue. Health Affairs, 2010. 29(3): p. 379–387.
- 7. Sunstein CR and Thaler R, Nudge. 2009: Penguin Publishing Group.
- 8. Plsek PE and Greenhalgh T, The challenge of complexity in health care. BMJ, 2001. 323(7313): p. 625–628.
- 9. Dorfman L, Wallack L, and Woodruff K, More than a message: framing public health advocacy to change corporate practices. Health Education & Behavior, 2005. 32(3): p. 320–336.
- 10. Hays S, The Cultural Contradictions of Motherhood. 1998: Yale University Press.
- 11. Hoffman DM, How (not) to feel: culture and the politics of emotion in the American parenting advice literature. Discourse: Studies in the Cultural Politics of Education, 2009. 30(1): p. 15–31.
- 12. Lee E, Macvarish J, and Bristow J, Risk, health and parenting culture. 2010: Taylor & Francis.
- 13. Livingstone S, et al., How parents of young children manage digital devices at home: The role of income, education and parental style. 2015.
- 14. Schiffrin HH, et al., Intensive parenting: Does it have the desired impact on child outcomes? Journal of Child and Family Studies, 2015. 24(8): p. 2322–2331.
- 15. Harris JR, The Nurture Assumption: Why Children Turn Out the Way They Do. 2011: Simon and Schuster.
- 16. Livingstone S and Bober M, Regulating the internet at home: contrasting the perspectives of children and parents. Digital Generations: Children, Young People, and New Media, 2006: p. 93–113.
- 17. Mazmanian M and Lanette S, “Okay, One More Episode”: An Ethnography of Parenting in the Digital Age. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 2017.
- 18. Clark LS, Parental mediation theory for the digital age. Communication Theory, 2011. 21(4): p. 323–343.
- 19. Radesky JS, et al., Overstimulated consumers or next-generation learners? Parent tensions about child mobile technology use. The Annals of Family Medicine, 2016. 14(6): p. 503–508.
- 20. Zhou R, Khemmarat S, and Gao L, The impact of YouTube recommendation system on video views. In Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement. 2010.
- 21. Kidron B and Rudkin A, Digital Childhood: Addressing childhood development milestones in the digital environment. 2017.
- 22. Campbell AJ, Rethinking Children’s Advertising Policies for the Digital Age. Loyola Consumer Law Review, 2016. 29: p. 1.
- 23. Johnson AF, Inside the Kids’ Privacy Zone. 2017 [cited February 7, 2018]; Available from: https://www.commonsensemedia.org/kids-action/blog/new-report-inside-the-kids-privacy-zone.
- 24. Askalidis G, The impact of large scale promotions on the sales and ratings of mobile apps: Evidence from Apple’s App Store. arXiv preprint arXiv:1506.06857, 2015.
- 25. AlSubaihin A, et al., App store effects on software engineering practices. IEEE Transactions on Software Engineering, 2019.
- 26. Radesky JS, Weeks HM, Schaller A, Yeo S, and Robb M, Young Kids and YouTube: How Ads, Toys, and Games Dominate Viewing. 2020: Common Sense Media.
- 27. Ribeiro MH, et al., Auditing radicalization pathways on YouTube. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 2020.
- 28. O’Callaghan D, et al., Down the (white) rabbit hole: The extreme right and online recommender systems. Social Science Computer Review, 2015. 33(4): p. 459–478.
- 29. Papadamou K, et al., Disturbed YouTube for kids: Characterizing and detecting inappropriate videos targeting young children. In Proceedings of the International AAAI Conference on Web and Social Media. 2020.
- 30. U.S. Federal Trade Commission, Musical.ly Settlement Press Release. 2019.
- 31. U.S. Federal Trade Commission, YouTube Settlement Press Release. 2019.
- 32. 5Rights Foundation, Towards an Internet Safety Strategy. 2019.
- 33. Solsman JE, YouTube’s AI is the puppet master over most of what you watch. 2018: CNET.
- 34. Lewis C, Irresistible Apps: Motivational design patterns for apps, games, and web-based communities. 2014: Springer.
- 35. Tran JA, et al., Modeling the engagement-disengagement cycle of compulsive phone use. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019.
- 36. Barr R, Memory constraints on infant learning from picture books, television, and touchscreens. Child Development Perspectives, 2013. 7(4): p. 205–210.
- 37. Hirsh-Pasek K, et al., Putting education in “educational” apps: Lessons from the science of learning. Psychological Science in the Public Interest, 2015. 16(1): p. 3–34.
- 38. Meyer M, et al., Advertising in Young Children’s Apps: A Content Analysis. Journal of Developmental & Behavioral Pediatrics, 2019. 40(1): p. 32–39.
- 39. Zuboff S, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. 2019: Profile Books.
- 40. Zhao F, et al., Data collection practices of mobile applications played by preschool-aged children. JAMA Pediatrics, 2020: p. e203345.
- 41. Livingstone S, Stoilova M, and Nandagiri R, Children’s data and privacy online. Technology, 2018. 58(2): p. 157–65.
- 42. Reyes I, et al., “Won’t Somebody Think of the Children?” Examining COPPA Compliance at Scale. Proceedings on Privacy Enhancing Technologies, 2018. 2018(3): p. 63–83.
- 43. United Nations Committee on the Rights of the Child, General Comment No. 25 (2021) on children’s rights in relation to the digital environment. 2021.
- 44. Munzer TG, et al., Parent–toddler social reciprocity during reading from electronic tablets vs print books. JAMA Pediatrics, 2019. 173(11): p. 1076–1083.
- 45. Hiniker A, et al., Screen Time Tantrums: How Families Manage Screen Media Experiences for Toddlers and Preschoolers. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 2016: ACM.