Published in final edited form as: Am Behav Sci. 2009 Apr 1;52(8):1152–1176. doi: 10.1177/0002764209331539

Measuring Children’s Media Use in the Digital Age

Issues and Challenges

Elizabeth A Vandewater, Sook-Jung Lee

Abstract

In this new and rapidly changing era of digital technology, there is increasing consensus among media scholars that there is an urgent need to develop measurement approaches that more adequately capture media use. The overarching goal of this paper is to facilitate the development of measurement approaches appropriate for capturing children’s media use in the digital age. The paper outlines various approaches to measurement, focusing mainly on those that have figured prominently in major existing studies of children’s media use. We identify issues related to each technique, including advantages and disadvantages. We also include a review of existing empirical comparisons of various methodologies. The paper is intended to foster discussion of the best ways to further research and knowledge regarding the impact of media on children.

Children’s Media Landscape in the Millennium

Since television roared onto the American landscape in the early 1950s, its impact on American life, and especially its impact on children, has been the subject of great interest and great debate. Since that time, an increasing array of entertainment technologies has become readily available—cable and satellite television offer consumers 50 to 100 channels to choose from; video, DVD, and DVR players allow increased choice in what consumers will watch and when they will watch it; video game players have become inexpensive and easily available; home computers today are as powerful as some mainframes of the early 1980s; and by all accounts, iPods are de rigueur for American teenagers. The American consumer today has an almost dizzying assortment of entertainment technology to choose from.

In part, this plethora of choice has been driven by the fact that these technologies have been embraced by American families. Televisions, along with video and DVD players, are requisite components of American family life. According to surveys conducted by the Kaiser Family Foundation, 99% of families with children own televisions, 97% own video or DVD players, more than 80% own a video game system, and 86% own a computer (Kaiser Family Foundation, 2003, 2005). These statistics, however, do not adequately capture the extent of media access and availability in the home. It is telling that the Kaiser Family Foundation surveys found that the average number of household televisions in 2002 was 3.5, the average number of VCR or DVD players was almost 3 (2.9), the average number of computers was 1.5, 82% of families had cable or satellite television, 74% had Internet access, and 60% had instant-messaging software. Indeed, the foundation concluded, “Almost three-quarters of U.S. kids live in homes with three or more TV sets” (Kaiser Family Foundation, 2005, p. 10). Clearly, American children are growing up in increasingly media- and technology-saturated environments.

The Digital Era

In large part, the veritable tidal wave of electronics available to consumers in the past decade has been driven by the switch from analog to digital media delivery technologies. Digital technology offers users the capability to use more media simultaneously, a technological advance that has given rise to the phenomenon of “media multitasking.” A desktop or notebook computer can now run multiple applications simultaneously, everything from streaming video to resident application programs. Advances in digital technology, along with the increasing miniaturization of components, have changed the landscape of available consumer technology. Video game graphics are increasingly dense and complex, notebook computers double as both audio and video players, cell phones double as digital cameras, and new generations of MP3 players that can also play video clips and movies have recently been released.

Because of the widespread popularity of these products, and the increasing portability of technology of all kinds, concern about “the impact of television on children” has widened to include other forms of media and technology (computers, the Internet, cell phones). The fact that they are embraced by younger generations more quickly and incorporated more seamlessly into their daily routines has heightened concerns. Although television was always popular, the move from analog to digital delivery of media has meant that television and other electronic media have become part of the fabric of our daily lives in heretofore unimagined ways. They are the backdrop against which our lives are set. They are an endemic part of the “hum” of our daily lives, and in many ways, they provide the foundation for a large number of daily activities. We use them for information, for entertainment, as a means of socializing, as a tool for household management, for shopping, for work, for communication, for scheduling. We use them without thinking much about them. We turn on the television when we are alone—quite literally, for “company.” We turn to them when we are bored, when we are lonely, when we are tired, when we need solace, when we need information, when we need entertainment.

Measuring Children’s Media Use in the Digital Era

For researchers interested in examining the impact of media on children, the question, of course, has always been how to best, and most accurately, measure children’s media use. Researchers interested in the effects of media on children are generally interested in two separate (though overlapping) issues: (a) How much media do children use? (i.e., questions centered around the total amount of use) and (b) What kinds of media messages are children exposed to and how much are they exposed to them? (i.e., questions centered around the quality and quantity of media content). Answers to these seemingly simple questions have proven difficult to pin down. To date, measures of media use typically used by a variety of researchers have proven singularly unsatisfying in their ability to capture these two rather elemental aspects of media effects research.

The move from analog to digital technology has further complicated this already complex issue. There is general popular consensus that children today, weaned on “keyboarding” and computer and electronic technology from primary school on, have become media multitaskers in numbers and ways unknown in previous generations of users. The Kaiser Family Foundation (2005) reports that between one quarter and one third of 7th-to-12th graders report using multiple media “most of the time.” Adolescents report less media multitasking while watching television (24%) than while using the computer (33%), with 39% indicating that most of the time they use a computer, they engage in other media activities as well (Kaiser Family Foundation, 2005).

There is general popular and scholarly consensus that today’s adolescents, in particular, have widely adopted the use of digital media for daily life activities. An image of a typical American teenager is conjured up: This teenager is in his or her room, doing homework on the computer; perhaps he or she has a word-processing program open for text and is surfing the Internet for information related to the topic of the particular paper. While the teenager is both “writing and surfing,” he or she has an instant-message window open and is chatting with friends about events at school, who likes whom, who dissed whom, or what a pain the assignment is. All this is happening with the television on in the background and/or while listening to music with iPod headphones on.

It is often taken for granted that this scenario describes the typical media experience of the majority of American youth. If this is so, then how in the world are scholars to approach the measurement of exposure (either amount or content) in this veritable mountain of media use? This conundrum is the focus of this article. In it, we outline various approaches to measurement, focusing mainly on those that have figured prominently in major existing studies of children’s media use. We identify issues related to each technique, including advantages and disadvantages. We also include a review of existing empirical comparisons of various methodologies. The purpose of this article is not to argue for any one existing technique versus any other but to foster discussion of the best way to further research and knowledge regarding the impact of media on children. There is increasing consensus in the field that there is an urgent need to develop approaches to measurement that more adequately capture media use in this new era of digital technology. The overarching goal of this article is to facilitate the development of measurement approaches appropriate for capturing children’s media use in the digital age.

Measuring Children’s Media Use: Existing Methodologies

The question of the amount of time children spend with media is, in essence, a question of time use. There exist a variety of approaches to measuring time use. These include (a) global time estimates, (b) time diaries, (c) media diaries, (d) experience sampling methods (ESM), (e) video or direct observation, and (f) electronic monitoring systems (specifically, Nielsen People Meters and Arbitron Portable People Meters [PPMs]). We consider each in turn. For purposes of discussion, Table 1 presents a list of major studies of children’s media use by research method used.

Table 1.

List of Major Studies of Children’s Media Use by Research Method Used

| Research Method | Study | Year | Age Range | Design | Media Measured | Public Use |
| --- | --- | --- | --- | --- | --- | --- |
| Global time estimates | National Longitudinal Survey of Youth | 1997 | 12–16 years | Panel | TV | Yes |
| Global time estimates | Michigan Study of Adolescent and Adult Life Transitions | 1983–2000 | 5th and 6th grade | Panel | TV, computer | Yes |
| Global time estimates | Early Childhood Longitudinal Study | 1998–1999, 2001 | Kindergarten–12th grade; birth–1st grade | Panel | TV, computer | Yes |
| Global time estimates | The National Longitudinal Study of Adolescent Health | 1994–1995 (Wave 1), 1996 (Wave 2), 2001–2002 (Wave 3) | 7th–12th grade | Panel | TV, video games, computer | Yes |
| Global time estimates | The National Institute of Child Health and Human Development Study of Early Child Care | 1991–1994, 1995–2000, 2000–2005 | 0–3 years; 54 months–1st grade; 2nd–6th grade | Panel | TV, video games, computer | Yes |
| Global time estimates | National Health and Nutrition Examination Survey | 2002 (Wave 2) | 2+ years | Panel | TV, video games, computer | Yes |
| Global time estimates | Panel Study of Income Dynamics–Child Development Supplement | 2003–2004 | 8–18 years | Panel | Video games, computer, Internet | Yes |
| Global time estimates | Kaiser Family Foundation (2005; Generation M) | 2003–2004 | 8–18 years | Cross-sectional | TV, DVDs, videotapes, video games, movies, radio, MP3, CDs, tapes, computer, Internet | No |
| Global time estimates | Anderson et al. (2001; Early Childhood Television Viewing and Adolescent Behavior) | 1994 | 15–19 years | Panel | TV | No |
| Global time estimates | Anderson, Field, Collins, Lorch, & Nathan (1985) | 1980–1981 | 5 years | Cross-sectional | TV | No |
| Time diary | American Time Use Survey | 2004 | 15+ years | Cross-sectional | TV, games, computer, Internet | Yes |
| Time diary | Panel Study of Income Dynamics–Child Development Supplement | 1997 (Wave 1), 2002 (Wave 2) | 0–12 years; 5–18 years | Panel | TV, video games, computer | Yes |
| Media diary | Anderson et al. (1985) | 1980–1981 | 5 years | Cross-sectional | TV | No |
| Media diary | Huston, Wright, Rice, Kerkman, & St. Peters (1990) | 1981–1983 | 3 and 5 years | Panel | TV | No |
| Media diary | Kaiser Family Foundation (2005; Generation M) | 2003–2004 | 8–18 years | Cross-sectional | TV, DVDs, videotapes, video games, movies, radio, MP3, CDs, tapes, computer, Internet | No |
| Experience sampling method | Sloan Study of Youth and Social Development | 1992–1997 | 6th, 8th, 10th, 12th grades | Longitudinal | TV, video games | No |
| Observation | Anderson et al. (1985) | 1985 | 5 years | Cross-sectional | TV | No |
| People Meter and diaries | Nielsen | Since 1960s | 2+ years | Cross-sectional | TV | No |
| Portable People Meter | Arbitron | Test in 2005 | Children 6–17 years; adults 18+ years | Cross-sectional | Capturing audio/video signals | No |

Global Time Estimates

Global time estimates are always self-reported, in either written or interview form. Global estimates of media use take two general forms: (a) average amount of time spent (usually hours) using various media and (b) average number of days using media (usually within a month or a week). Global time-estimate questions typically take the form of “How many hours did you spend watching television yesterday?” “How many hours do you spend watching television (or playing video games or using the computer) in a typical day (or a typical week)?” or “On average, how many hours did you watch television per day in the past seven days?” Respondents are asked either to simply state the number of hours or to respond to a Likert-type scale based on hours (for example, 0 to 1 hr, 1 to 2 hr, 2 to 4 hr, more than 4 hr).

Global estimates of the frequency of media use gather information about the number of times a respondent used media, usually within a specific time frame, such as a week or a month. Typical global frequency questions include “How often did you use a video game system (or the Internet or a cell phone) in the last month?” Respondents answer on Likert-type scales, ranging, for example, from never, once a week, or three times a week to every day, or from never to very frequently.
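When such banded responses must be converted to hours for analysis, a common convention is midpoint coding, in which each response bin is assigned the midpoint of its range. The following minimal sketch illustrates the idea; the bins, midpoints, and the value assigned to the open-ended top bin are illustrative assumptions rather than the coding scheme of any particular survey.

```python
# Midpoint coding of a banded global time-estimate item.
# The response bins and midpoint values are illustrative assumptions,
# not the coding scheme of any specific survey.

BIN_MIDPOINTS = {
    "0 to 1 hr": 0.5,
    "1 to 2 hr": 1.5,
    "2 to 4 hr": 3.0,
    "more than 4 hr": 5.0,  # open-ended top bin; any value here is a judgment call
}

def hours_from_response(response: str) -> float:
    """Convert a banded survey response to an approximate daily-hours value."""
    return BIN_MIDPOINTS[response]

responses = ["1 to 2 hr", "more than 4 hr", "0 to 1 hr"]
estimates = [hours_from_response(r) for r in responses]
print(sum(estimates) / len(estimates))  # sample mean of coded hours
```

The arbitrariness of the top-bin value is one reason banded global estimates are inherently imprecise, over and above the recall problems discussed below.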

Global estimates are perhaps the most common form of measurement, in part because they are inexpensive and easy to administer. They are found in many public-use and large-scale surveys relied on by many to report the amount of time the American public in general, or children in particular, spends using digital and electronic media. Major, nationally representative surveys that rely on global estimates include the Kaiser Family Foundation Surveys (http://www.kff.org/entmedia/index.cfm), the National Longitudinal Study of Adolescent Health (http://www.cpc.unc.edu/addhealth), the National Health and Nutrition Examination Surveys (http://www.cdc.gov/nchs/nhanes.htm), the National Longitudinal Survey of Youth (http://www.bls.gov/nls/), and the National Institute of Child Health and Human Development Study of Early Child Care (http://secc.rti.org/). Although there are most certainly others, these surveys are particularly worth noting because they have been major contributors to the scholarly and public discourse regarding the amount of children’s media use and the impact of media on children.

Issues in global estimate measures

As previously noted, the main reason global time estimates are so popular is that they are inexpensive and easy to administer. Additionally, despite the fact that children spend an enormous amount of time with media, children’s media use is rarely the primary focus of large-scale surveys. Thus, an easy way to “throw it in” to the protocol is to simply ask children how much they watch on average.

Robinson and Godbey (1997) point out a variety of issues with global time estimates. Their main point is that this measurement technique is particularly problematic in the context of surveys, where respondents are asked to make a judgment in 10 to 20 seconds about something that is actually quite complex and requires several steps to answer accurately, even for respondents with a repetitive daily routine and regular, clear viewing patterns. Specifically, they note that trusting the answer to the question “How many hours do you watch TV?” assumes that each respondent (a) undertakes the work of searching memory for all episodes of viewing yesterday or over the past week, (b) separates the most important activity (the primary activity) from simultaneous but secondary activities, (c) is able to properly add up all the episode lengths across the day yesterday or across days in the last week, and (d) avoids reverting to social norms, stereotypes, or self-images about how much a “normal” person ought to watch TV.

In its recent survey of media use among youth ages 8 to 18, the Kaiser Family Foundation (2005) used several techniques to improve global estimates. For example, when asked to estimate the amount of time they spent watching television, respondents were provided with TV schedules for each of three times of day: 7:00 a.m. until noon, noon until 6:00 p.m., and 6:00 p.m. until midnight. The respondents were asked to check each program they had watched, then to report time spent viewing. This division of the day into three parts and the provision of TV schedules were intended to provide a heuristic template for respondents to improve the accuracy of their reports.

Time-Use Diaries

The use of time diaries for documenting time spent in various activities comes from a strong tradition in the field of economics. The federal government is tremendously interested in how Americans spend their time (particularly in paid labor) and has funded large-scale studies of time use, such as the American Time Use Survey (ATUS; U.S. Department of Labor, 2004; see http://www.bls.gov/tus/), conducted by the Bureau of Labor Statistics within the U.S. Department of Labor. The time diary is a technique for collecting self-reports of an individual’s daily behaviors in an open-ended fashion on an activity-by-activity basis (Robinson & Godbey, 1997). Individual respondents keep and report these activities for a short period of time, usually across the full 24 hrs of a single day. Typically, time diaries include columns for respondents to write (a) the name of the activity (What were you doing?), (b) duration (time began and time ended), (c) location (Where were you?), (d) social context (Who else was with you?), and (e) secondary activity (What else were you doing?). In time diaries, the diary accounts are by definition complete across the 1,440 min available in a day. Because the activities are provided by respondents, all daily activity is potentially recorded (including activities that occur in the early morning hours).
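To make the structure of these records concrete, the sketch below (a hypothetical encoding, not the instrument of any actual study) represents each diary entry as an episode and checks the defining property of a time diary: that the episodes tile the full 1,440 minutes of the day.

```python
# A minimal sketch of a time-diary episode record and a completeness check.
# Field names are illustrative; actual diary instruments vary.

from dataclasses import dataclass

@dataclass
class Episode:
    activity: str        # What were you doing?
    start_min: int       # minutes after midnight
    end_min: int
    location: str        # Where were you?
    with_whom: str       # Who else was with you?
    secondary: str = ""  # What else were you doing?

def check_complete(episodes: list[Episode]) -> bool:
    """A diary day is complete when episodes tile the full 1,440 minutes."""
    episodes = sorted(episodes, key=lambda e: e.start_min)
    clock = 0
    for e in episodes:
        if e.start_min != clock:
            return False  # gap or overlap in the account of the day
        clock = e.end_min
    return clock == 1440

day = [Episode("sleeping", 0, 420, "home", "alone"),
       Episode("watching TV", 420, 480, "home", "sibling", "eating breakfast"),
       Episode("school", 480, 900, "school", "classmates"),
       Episode("other", 900, 1440, "home", "family")]
print(check_complete(day))  # True
```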

A good example of a classic time diary that was modified to capture aspects of media use is provided by the Child Development Supplement (CDS) to the Panel Study of Income Dynamics (http://psidonline.isr.umich.edu/CDS/). The methodology for this study, typical of time diary procedures, was as follows: Participants were asked to fill out a 24-hr time diary for one randomly chosen weekday and one randomly chosen weekend day. For younger children, primary caregivers were contacted the day before they were to begin recording their child’s activities and were instructed to record activities as they occurred during the course of the day. Older children participated in the completion of their own diaries or, in the case of adolescents, completed them on their own. A primary activity and its duration were recorded to account for every minute of each 24-hr period, and if appropriate, a secondary activity was also noted. In addition to the time of the primary and secondary activities, participants were asked to record where and with whom the activities occurred. In the case of media activities, such as TV watching and playing video or computer games, they were asked to indicate the name of the program, movie, or game.

Twenty-four-hour time diaries do not figure prominently in existing studies of children’s media use. This is partly because they are considerably more expensive to collect than global time estimates (particularly in the context of large survey samples). In fact, the CDS is the only existing study using time diaries to assess children’s media use. The ATUS also collects time diaries, but the sample includes individuals 15 and older, so the viewing of younger children is not assessed.

Issues in time diary procedures

The task of keeping the time diary is fundamentally different from the task of making global time estimates. Rather than having to consider a long time period, the respondent need only focus attention on discrete periods within a single day. Respondents describe their day as they experience or recall it, rather than being limited to preordained categories devised by the researcher. Because the diary keeper’s task is to recall all of the day’s activities in sequence in his or her own terms, the procedure more closely approximates the chronological structure of the respondent’s day and is more similar to the way activities are recalled from memory. Moreover, the diary technique gives respondents minimal opportunity to distort activities to present themselves in a particular light. For example, respondents who wish to portray themselves as light media users must fabricate not only media-based activities but also the activities before and after them, making consistent distortion across the day considerably more difficult.

Because a respondent’s full day is accounted for, the time diary technique preserves the “zero-sum” nature of time (at least within a single 24-hr period), which allows examination of various “trade-offs” between activities (e.g., Vandewater, Bickham, & Lee, 2006). For media researchers, this is a noteworthy advantage. One of the charges most often leveled at electronic media is that—especially for children—time spent with them interferes with, impinges on, or otherwise displaces time spent in more developmentally appropriate activities. However, the only way to assess this is to have a full accounting of all activities children engage in during a 24-hr period. This is exactly what time-use diaries do well.
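Because a complete diary day sums to 1,440 minutes by construction, the trade-off computation is straightforward. The sketch below, using hypothetical activity labels, shows that every minute classified as media use is necessarily a minute unavailable to other activities.

```python
# Episodes as (activity, start_min, end_min) tuples; labels are illustrative.
MEDIA = {"watching TV", "playing video games", "using computer"}

def media_tradeoff(episodes):
    """Return (media_min, other_min); for a complete diary day these sum to 1,440."""
    media = sum(end - start for act, start, end in episodes if act in MEDIA)
    return media, 1440 - media

day = [("sleeping", 0, 420), ("watching TV", 420, 510), ("school", 510, 900),
       ("playing video games", 900, 960), ("other", 960, 1440)]
print(media_tradeoff(day))  # (150, 1290)
```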

There are important limitations, however. Because 24-hr time-use diaries are typically collected for a single day (or sometimes, as in the case of the CDS, for a single weekday and weekend day), they are quite good at capturing the things most people do on a daily basis but not very good at capturing the things people do infrequently. Thus, activities that may be an important part of one’s life but happen once or twice a week (for example, volunteering or Boy Scouts) can be lost, unless the randomly chosen day happened to be on the day that the infrequent activity occurred. Although this limitation seems easy to fix by asking respondents to keep the diary for a longer period of time, say, a full week, evidence suggests that this places a heavy burden on respondents, and response rates can drop by as much as 40% (Robinson & Godbey, 1997). For media researchers, this may present less of a problem than for researchers interested in other activities, because (for better or for worse) most people use electronic media on a daily basis. Still, there is a chance that the chosen day will be unusual and thus greatly inflate media use (such as when children are home sick). Another limitation of time diaries with respect to children is that for preliterate and young children, who cannot be expected to keep the time diary for themselves, activities and events occurring outside the home when the parent is not present (such as when they are at child care or school) are essentially lost. For example, although the parent may know the child was at child care, the parent probably does not know whether the child watched a movie while there.

There are two other important issues worth mentioning here. First, evidence shows that people tend to exclude from the time diaries regular daily activities that happen often but take very little time (“washing hands” or “going to the bathroom,” for example; Juster & Stafford, 1985, 1991; Robinson, 1985). Thus, to the extent that media use becomes regularized and/or quickly completed, it may not be recorded in the time diaries. For example, “checked weather online” or “checked bank balance online” may be lost in the stream of other very small activities. To a certain extent, this problem can be addressed with specific directions to respondents as to what “counts” in terms of time use. Respondents also tend to exclude activities they deem to be private, such as having sex (Juster & Stafford, 1991). Thus, “surfing the Internet for porn” will most likely always be underreported, no matter what kinds of directions participants are given.

Finally, because our concern here specifically relates to the problem of measuring media use in a digital age, it is worth noting that time diaries have never been very good at capturing simultaneous activities. In some absolute sense, time is very much zero sum. That is, there is only so much of it in a day, and time spent doing one thing ipso facto means that there is less time available to do something else. On the other hand, people have always multitasked, even before media were widely available. If I say to my daughter, “Come keep me company while I fold the clothes,” am I folding clothes or interacting with my daughter? From a time diary perspective, the essential question here is, What is the primary activity and what is the secondary activity? However, it seems possible that the answer to this question is, in fact, neither. The issue of a researcher’s ability to capture media multitasking, in particular, and what multitasking means with respect to both assessing total time spent and assessing exposure to particular media content bears further and concentrated discussion.

Media Diaries

As the name implies, media diaries (sometimes called “viewing logs” or “media logs”) are designed to capture the media use of respondents during a particular period. Media diaries are a modified form of a time diary, focused on a particular activity, namely, media use. Thus far, television-focused media diaries have been most common in media research, and the procedures vary widely from one study to the next. Some investigators provide participants with program grids that contain show names and channel and time information. Parents and/or children are then asked to mark the shows they recall watching. Other television diaries allow participants to write in program names on an empty grid. For instance, Anderson, Field, Collins, Lorch, and Nathan (1985) assigned one 10-day diary to each television set owned by the family. The diary divided each day into 15-min blocks between 6 a.m. and 2 a.m. the following morning. Parents were asked to record whether the TV was on, what channel was on, the program name, who was in the TV viewing room, and reasons for the focus child’s viewing and turning off TV.

In another well-known study of children’s media use, the Topeka Study (Huston, Wright, Rice, Kerkman, & St. Peters, 1990), viewing was measured with diaries maintained by the parents for 1 week in the spring and 1 week in the fall for 2 years (a total of five diaries). Viewing by all members of the household was recorded in 15-min intervals from 6:00 a.m. to 2:00 a.m. for each day. In addition, if children were in regular day care, their viewing was recorded by the caregiver. Spring and fall were sampled to avoid the extremes of heavy viewing in winter or light viewing in summer. Although each family kept a diary for only 1 week, each time of measurement lasted approximately 3 weeks, with families spread across them to reduce the effects of weather and idiosyncratic events (e.g., the assassination of President Sadat of Egypt) on the viewing measure. Parents were instructed to record as a “viewer” anyone who was present for more than half of a 15-min interval in which the television was turned on.

In recognition of the widespread use of digital technologies by youth, more recent studies have expanded this technique to include other forms of media. The Kaiser Family Foundation (1999, 2005) has used media diaries in both of its large-scale descriptive studies of media use among youth ages 8 to 18. In both surveys, a voluntary subsample of youth in the overall sample recorded media use that occurred in the 7-day diary period (23.5% of the 1999 and 34% of the 2005 samples, respectively). The 1999 data collection also included media diaries from a small sample of 134 parents of 2- to 7-year-olds. In both, media use was recorded in half-hour segments covering the time period from 6:00 a.m. to midnight. If a respondent indicated using media during any half-hour time slot, he or she was asked to indicate the main media activity from a set list that included listening to music, watching TV, watching videos or DVDs, watching movies in a theater, reading, playing video games, playing computer games, doing homework on the computer, instant messaging, e-mailing, visiting Web sites, and other computer activities. Children were then asked to indicate what else they were doing from a set list of activities (to assess multitasking), where they were, and whom they were with.

Issues in media diaries

For media researchers, media diaries have two advantages over time diaries. First, because respondents are asked to report only media use, it is possible to collect information about media use across a longer period of time (a full week or 10 days, for example) than with 24-hr time diaries, without increasing participant burden. This gives the researcher a better sense of the full extent of media use as well as how media use might wax and wane during the course of a week (or longer). Second, because of their focus, media diaries can capture more comprehensive information regarding the types and purposes of media use. For example, the CDS time diary asks respondents to indicate the name of the program title or video game the child was using but no more. Because a media diary is focused on media use alone, it is possible to ask additional questions regarding the content of media (including Web sites, etc.), the purposes of media use (“because I was bored” for TV viewing, or “because I had a paper due” for computer use), and the issue of media multitasking, again without significantly adding to participant burden.

Existing media diaries have typically asked respondents to indicate media use in 15- or 30-min blocks (Anderson et al., 1985; Kaiser Family Foundation, 2005). This approach makes it difficult to calculate total time spent using media with much precision. However, media diaries could easily be constructed to allow respondents to freely report total time. The question researchers must face is whether they care about small amounts of time. For example, using the Internet to check the weather report can certainly take less than 5 min. Because media diaries do not collect data about other activities, the researcher’s ability to examine children’s media use in the context of the other activities they engage in is hampered. For the same reason, media diaries do not allow examination of activity time trade-offs. This is not a trivial limitation, as one of the significant problems in much existing research on media is a failure to place children’s media use in the wider contexts in which children live their lives (school, friends, family, neighborhood, etc.; Vandewater, Lee, & Shim, 2004).
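The precision issue can be seen directly in how totals are computed from a block-based diary: each checked block contributes a full block of time, whether the underlying use lasted 5 minutes or 30. A minimal sketch, assuming a Kaiser-style half-hour grid:

```python
# Estimating weekly media time from a block-based media diary.
# Each checked block is assumed to represent a full half hour of use,
# which is exactly the precision problem discussed above: a 5-minute
# weather check and a 30-minute program are recorded identically.

BLOCK_MIN = 30  # Kaiser-style half-hour grid; Anderson et al. used 15-min blocks

def total_hours(checked_blocks_per_day: list[int]) -> float:
    """Total media hours across the diary period, at block-level precision."""
    return sum(checked_blocks_per_day) * BLOCK_MIN / 60

print(total_hours([6, 4, 8, 5, 5, 10, 9]))  # a hypothetical 7-day diary: 23.5 hr
```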

Experience Sampling Methods (ESM)

ESM was developed by Csikszentmihalyi and his colleagues (Csikszentmihalyi, 1975; Csikszentmihalyi & Kubey, 1981) to “study the subjective experience of persons interacting in natural environments” (Csikszentmihalyi & Larson, 1987). ESM involves signaling research participants at random times throughout the day (usually for a week but sometimes longer) and asking them to report on the nature and quality of their experience. In most studies, respondents are given electronic paging devices (beepers) and a small booklet of self-report forms. The pagers signal the research participants at random times each day—hence, ESM studies are sometimes referred to as “beeper studies.” Each time they are signaled, the respondents complete a page in the self-report booklet. When they are beeped, respondents typically report on what they are doing, where they are, and how they feel about what they are doing.
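The core of the protocol is the random signaling schedule. The sketch below generates such a schedule under illustrative assumptions (five signals per day within a 7:00 a.m. to 10:00 p.m. waking window, over a 1-week period); actual studies vary in all of these parameters.

```python
# A minimal sketch of an ESM signaling schedule: a few random signals per day
# within waking hours, over a one-week protocol. The signal counts and waking
# window are illustrative assumptions, not the design of any particular study.

import random

def beep_schedule(days: int = 7, signals_per_day: int = 5,
                  wake_min: int = 7 * 60, sleep_min: int = 22 * 60):
    """Return a list of (day, minutes-after-midnight) signal times."""
    schedule = []
    for day in range(days):
        times = sorted(random.sample(range(wake_min, sleep_min), signals_per_day))
        schedule.extend((day, t) for t in times)
    return schedule

for day, t in beep_schedule()[:5]:
    print(f"day {day}: signal at {t // 60:02d}:{t % 60:02d}")
```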

Experience sampling is so named because it measures internal (how people think and feel) as well as external (time, location, and social context) dimensions of experience (Kubey, Larson, & Csikszentmihalyi, 1996). The measurement of the internal dimensions of experience focuses on motivation, emotion, and cognition. Motivation is measured by reports of levels of intrinsic and extrinsic motivation during the primary activity (i.e., whether respondents wanted to be doing the activity and how important the activity was to them). Emotion is measured by asking respondents to indicate their mood and cognitive states on 7-point semantic differential mood scales, such as happy to sad, lonely to sociable, or passive to active. Cognition is measured by reports of respondents’ concentration during the primary activity, its level of challenge, and their skill at it (Kubey et al., 1996). A notable example of an ESM booklet comes from the Sloan Study of Youth and Social Development (Csikszentmihalyi & Schneider, 2000). When they were beeped, the participants were asked to report where they were, what was on their mind, what the main thing they were doing was, what else they were doing, and whom they were with. They were also asked to indicate their mood, their concentration, and the challenge of and their skill at their main activity. For some activities, such as TV watching, they were asked to indicate the duration of time spent in the activity (Csikszentmihalyi & Kubey, 1981). The Sloan study collected data from adolescents in 1992 and 1997, probably before many of them were using computers on a daily basis. No large-scale studies since then have employed experience sampling to examine children’s media use.

Issues in experience sampling

Similar to both time estimates and diaries, ESM is based on participant self-report. However, whereas time-estimate (and some diary) data are collected on the basis of recall of past activities, ESM data are collected in “real time” (at the moment when participants are doing the activity). As it applies to media research, the advantage of ESM is that it can be used to assess the internal experiences (motivations, moods, and cognitions) people experience while using different forms of media. ESM can provide ecologically valid information about how people use and experience media at home and with their families. For example, using the Sloan data, Kubey (1990) found that adolescent coviewing with family members is associated with more challenging, cheerful, and sociable experiences than viewing alone. However, he also found that compared to non-TV family activities, frequency of talking with family members is reduced by about 40% when watching television, and family viewing is less psychologically activating than non-TV family activities.

ESM is most similar to diary methods in asking participants to record activities during a certain time period. However, ESM may place a significantly smaller burden on participants than 24-hr time diaries, as participants report on only a small number of randomly sampled moments throughout the day. On the other hand, because respondents are typically asked to report a great deal about those moments, this advantage may be minimal. ESM also has advantages compared to direct observation in that it avoids the problems associated with intrusive observers, such as bias resulting from pressure to behave normally and concerns about privacy (Kubey et al., 1996; Larson, 1989).

ESM requires awareness of internal states and the ability to differentiate and report that awareness verbally and in written form. Thus, it may be difficult (to impossible) to use with children younger than the age of 10 (and is even beyond the reach of some 10-year-olds). Although ESM allows researchers to calculate how often certain events occur in different internal and external contexts, it does not allow assessment of the total time spent in a certain activity.
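This limitation follows from the logic of the method: ESM reports are momentary samples, so the natural estimand is the proportion of sampled moments spent in an activity, not total time. The sketch below makes the point with invented reports; multiplying the proportion by waking hours yields only a crude extrapolation.

```python
# ESM reports are momentary samples, so the natural estimand is the proportion
# of sampled moments spent in an activity, not total time. Multiplying that
# proportion by waking hours gives only a rough extrapolation, which is
# precisely the limitation noted above.

def proportion_watching(reports: list[str]) -> float:
    """Share of completed ESM reports in which the primary activity was TV."""
    return sum(r == "watching TV" for r in reports) / len(reports)

reports = ["homework", "watching TV", "eating", "watching TV", "talking"]
p = proportion_watching(reports)  # 0.4 of sampled moments
print(p * 15)                     # crude extrapolation to a 15-hr waking day
```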

Direct Observation

Direct or video-recorded observations have long been considered the “gold standard” for measuring media use. One technique is to enter homes and directly observe viewing behavior. For example, Durant, Baranowski, Johnson, and Thompson (1994) had observers arrive at children’s homes by 7:00 a.m. and observe the activities of each child, including viewing behavior for 6 to 12 hr. Each child was observed on a given day by two observers, who alternated shifts throughout the day. In this study, they were specifically interested in television viewing, and observers recorded viewing for every minute of observation time. Minutes during which the television was on but the child was not attending were not included in their index of television viewing, and they report a 96% level of agreement between observers regarding the question of whether the child was attending to the television or not. As this description implies, this method is extremely labor intensive and time-consuming.

A handful of media researchers have installed video equipment in participants’ homes to record viewing behavior. Anderson et al. (1985) installed video cameras in families’ homes, programmed to record only when the television set was turned on. In this study, they also installed a video camera directed at the viewing area to account for children’s attention to programming. They monitored viewing in families with young children (age 5) over the course of 10 days. Their description of the videotaping technique follows:

Families had a black and white time-lapse video cassette deck, control circuitry, a time/date generator, a screen splitter and battery backup equipment. One camera equipped with a zoom lens recorded programs on the families’ television while a second camera filmed the room area from which individuals were most likely to watch the TV. This camera was equipped with an 8.5 mm wide angle lens with autoservo iris, which enabled it to both maximally cover the viewing room and adjust to changing light conditions. The control circuitry activated the time lapse video deck only when the TV set was on. The video deck recorded at a ratio of 1:36 (one video frame every 1.2 sec). Every 18 sec a 6-sec image of the TV screen was inserted on a portion of the videotape. In addition, the time and date were continuously superimposed. (Anderson et al., 1985, pp. 1347–1348)

Borzekowski and Robinson (2001) used similar techniques, employing two wide-angle cameras to record both the TV set and any possible viewing areas for 10 days.

Issues in direct observation

Direct observation has the potential to provide more accurate and rich information than any other method reviewed thus far. Direct observation has its origins in ethnography, which investigates social phenomena in natural settings. Thus, direct observation can provide ecologically valid and contextualized data. Although it is clearly a method devoid of respondent reporting bias or error (and for some, the only valid technique for assessing media use), there are important limitations as well.

Perhaps most obvious is the enormous amount of time and effort required to use the technique. Because observers (sometimes more than one per participant) need to shadow participants as they go about their day, data collection is slow and laborious. This means one needs a veritable army of observers to collect data from even a small (100 or so) number of participants within a reasonable time frame. Time frame is, in fact, an important issue to consider, because media use changes with seasonal changes as well as with vacation time, holidays, and so on. Both direct and videotape observation techniques come at significant cost to the researcher. Videotape observation does not require the number of observers that direct observation does (as the videotape is acting as the observer). However, coding videotaped data is never a trivial matter and is enormously time-consuming and laborious. Videotape observation has the additional cost of videotape equipment as well as the requisite staff person on hand to fix the inevitable technical problems inherent in any electronic equipment. If financial resources available for data collection were not an issue, then this limitation would not exist. However, resources are never infinite, and researchers must always make decisions regarding how much data they can collect for the available budget.

Partly because of the labor and staffing demands of this technique, direct observation is generally not available to researchers interested in collecting data from a representative sample of the U.S. population. However, another issue that makes a representative sample less likely is that individuals have to consent to having their every move observed or recorded. This raises the question of individual characteristics that would make it more likely that a person would consent to this, which may also be related to their media use. Although social desirability of self-report is not an issue, a researcher’s presence can certainly influence participants’ behaviors—especially in the case of certain kinds of “illicit” media use. The presence of the observer may inhibit natural behaviors and the expression of internal emotion. Video-recording observation may minimize biases generated by an observer’s presence, although the fact that respondents are aware that they are being recorded may influence behavior as well. In their study, Borzekowski and Robinson (1999) report instances in which a teenage participant managed to disrupt the electrical current to the equipment installed in his room and children made faces at the video camera throughout the 10-day period of data collection.

Compared to direct observation, videotape observation is limited in that it can record only activities that occur within the range of view of the installed camera. Much of the media use by families and children tends to occur on relatively large pieces of equipment (televisions, desktop computers) that are rarely moved. In cases such as these, videotape observation of media use is not limited. However, with the increasing portability of media, and the tendency for children to use media in a variety of places (Game Boys in the car, cell phones at school, etc.), the accuracy of videotaped observation of media use will become increasingly problematic.

Validation Work and Comparison of Methodologies

A substantial amount of attention has been paid to measurement issues involving time use (Juster & Stafford, 1991), and a fairly extensive body of research now exists demonstrating the validity and reliability of such diaries as representations of the way both children and adults spend their time (Juster, 1985, 1986; Juster & Stafford, 1991; Robinson, 1977, 1985; Scheuch, 1972). However, this work has compared only the first five methods reviewed above—global time estimates, time and media diaries, experience sampling, and observational techniques—not electronic monitoring techniques. Because currently used electronic monitoring technologies are proprietary, they constitute a special case. Any validation work or methodological comparison seems to have been conducted by the companies themselves or their employees (e.g., Fitzgerald, 2004), which makes it difficult to assess their conclusions within the realm of peer-reviewed scientific literature. Thus, we review existing validation work here, before discussing electronic monitoring techniques.

A series of experiments (Juster, 1985, 1986; Juster & Stafford, 1991; Robinson, 1977, 1985; Scheuch, 1972) have compared activities recorded via global time estimates, time diaries, random interval pagers, and observations. These studies show that pager methods underreport activities taking place outside the home, and global estimates overreport virtually every activity (Juster & Stafford, 1991; Robinson, 1985). When direct observation is compared with time diaries, the mean values for time allocated to different activities are very close, and the correlations quite high, on the order of .70 to .80. Global time estimates, however, have moderate (at best) correlations with observational data, on the order of .40 (Juster & Stafford, 1991). Minor variations in the ways the diaries are obtained do not make a great deal of difference in the estimates. Telephone surveys yield estimates that are similar to personal interviews. Recall bias in time diaries (for up to a 7-day period) is negligible for estimates of time use on weekend days but tends to become noticeable for weekday estimates if the recall period is more than 24 hrs (Juster, 1986).

Larson (1989) evaluated the validity of ESM in terms of compliance, the sampling of experience, and experimental effects. First, with respect to compliance, he found no significant differences in participation rate related to gender, grade, school, or interactions of these variables; those who took part were not significantly different from the entire school population on a 6-item version of the Rosenberg Self-Esteem Scale, and self-reports were completed for 80% of the signals. Second, with respect to the sampling of experience, Larson found that the ESM data appear to underestimate church attendance and playing sports (i.e., activities outside the home) and to overestimate household activities, such as studying, eating, and watching TV, but these biases are small. Third, to identify experimental effects of the ESM, teachers were asked whether the study had affected specific students’ normal behaviors. The teachers reported that the great majority of students were not affected by the experiment.

Similar results have been found among studies focusing specifically on time spent using media. These studies show that diaries are more accurate than global estimates of average time spent using media (watching television, for example; Anderson et al., 1985; Anderson & Field, 1991). In general, as with other activities in time-use diaries, participants tend to overestimate television use when asked to report how much they watch on an average weekly (or daily) basis (Anderson & Field, 1991).

Anderson et al. (1985) specifically examined the accuracy of parent report of their young children’s (age 5 on average) media use. Using video cameras, they recorded all children’s viewing in approximately 100 families for a 10-day period while the children’s parents also completed a viewing diary. The correlation between the two methods was quite high (.84), indicating that time diaries filled out by parents provided fairly accurate representations of children’s weekly viewing. In contrast, the correlation between television watching and global estimates was much lower, on the order of .40. Similarly, Wright and Huston (1995) found moderate correlations (generally in the .50s) between parents’ reports of young children’s viewing on checklists versus diaries. In the 1997 wave of the Child Development Supplement (CDS-I), primary caregivers were asked to estimate the number of hours their children watch television during the week and on the weekend. Taken together, primary caregivers’ estimations of the hours their children spent watching television weekly were correlated .40 with weekly television use reported in the time diaries.
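Computationally, the comparisons reported above reduce to correlating two measurements of the same children. The sketch below shows the calculation with fabricated toy values, chosen purely to illustrate the computation, not to reproduce any study’s results.

```python
# Comparing two measurement methods for the same children, in the spirit of the
# validation studies cited above. The data here are fabricated toy numbers used
# only to show the computation.

import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

diary_hours  = [1.5, 2.0, 3.5, 0.5, 2.5]   # weekly-average hours from parent diaries
global_hours = [2.0, 3.0, 4.0, 2.0, 2.0]   # parents' global estimates
print(round(pearson_r(diary_hours, global_hours), 2))  # 0.75
```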

Time-use diaries appear to be less subject to distortions of social desirability than global time estimates of use or frequency. In their study, Wright and Huston (1995) also found that parents of young children reported that their children watched more educational programs and fewer cartoons on checklists than on diaries. In an analysis of time use in CDS-I, Hofferth (1999) found that parents overestimated the amount of time they read to their children in survey questions compared to diary reports.

Finally, with respect to new media, Greenberg et al. (2005) specifically compared global time estimates collected via online surveys to diary data for Internet, television, radio, music (CDs), music (computer), video games (offline), video games (online), and number of e-mails received and sent. They found similar results. Global estimates of use for each of these media were higher when reported on an online survey than when reported in a diary. Moreover, the correlations between global estimates and diary estimates are disturbingly low, ranging from a low of .19 (for music CDs) to a high of .58 (for number of e-mails sent). Internet and television correlations for the two methods were .39 and .35, respectively.

Electronic Monitoring Techniques

Though electronic monitoring, in the form of sensors, software, and meters, may hold the most promise for measuring media use in the digital age, to date these techniques have been the least used by scholars. For-profit firms have used them in far greater numbers. The two major electronic monitoring techniques for assessing media use available today are both proprietary technology: (a) Nielsen People Meters and (b) Arbitron PPMs. These are discussed below.

Nielsen People Meter

Nielsen Media Research pioneered electronic monitoring techniques for measuring media use. While the Nielsen company employs both telephone interviews and television viewing diaries, it relies primarily on the Nielsen People Meter to measure national television viewing as well as viewing in many local areas. Meters monitor both the station or channel viewed and who is watching. To assess viewer identity, each family member in a sample household is assigned a personal viewing button according to his or her age and gender. When people turn on the TV, a light flashes on the meter reminding them to press their assigned button (Nielsen Media Research, 2000). When the individual is finished watching, he or she presses the button again, signaling to the meter that he or she is done watching. Children as young as 2 are instructed and expected to enter viewing information into the People Meter.
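Conceptually, the meter’s person-level record is a stream of button-press events from which viewing durations are reconstructed. The sketch below illustrates that logic under an assumed event format; it is not a description of Nielsen’s actual processing. Note that an unmatched log-in (a viewer who forgets to press out) is exactly the “slippage” problem discussed later in this section.

```python
# Reconstructing per-person viewing time from People Meter button presses.
# This is a conceptual sketch of the logging logic described above, not
# Nielsen's actual processing; the event format is an assumption.

# Events: (minutes-after-midnight, viewer_id, "in" | "out")
events = [(1140, "child_a", "in"), (1185, "child_a", "out"),
          (1200, "parent", "in"), (1290, "parent", "out")]

def viewing_minutes(events):
    """Sum logged-in time per viewer; unmatched log-ins are the 'slippage' risk."""
    open_since, totals = {}, {}
    for t, who, action in sorted(events):
        if action == "in":
            open_since[who] = t
        elif who in open_since:
            totals[who] = totals.get(who, 0) + t - open_since.pop(who)
    return totals

print(viewing_minutes(events))  # {'child_a': 45, 'parent': 90}
```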

PPMs

Arbitron began development of PPMs in 1992 and is currently (as of June 2005) testing the device in the Houston market. Respondents carry the Arbitron PPM, a small device the size of a cell phone, during waking hours. The PPM “hears” an inaudible code embedded in the audio stream of audio and video programming. The meter is equipped with a motion sensor that allows Arbitron to monitor movement and ensure that respondents are carrying their meters. Respondents return the meter to a docking station to recharge it when they go to bed. The codes are transmitted daily to an Arbitron central processing system for tabulation (Fitzgerald, 2004; McConochie, Wood, Uyenco, & Heider, 2005).

Issues in People Meter measurement

Advantages claimed regarding meter measurement techniques include avoidance of distortion caused by social desirability or memory effects (Danaher & Lawrie, 1988; Soong, 1998; Van Meurs, 1998). Other advantages (claimed mainly by Nielsen) include unobtrusiveness, reliability, and precision. However, the Nielsen People Meter has some important limitations. First, it demands that panelists log in and out with a remote control when they start and stop watching television, and this is not always done correctly (Danaher & Beed, 1993). Furthermore, younger panelists have exhibited “fatigue” in their button-pushing behavior (Clancey, 1994). That is, they record less of their viewing by pushing their People Meter button as their time on the panel increases. In addition, because People Meters are installed in home TV sets, they cannot measure viewing that occurs outside the home. Finally, and perhaps most important in the context of this article, Nielsen People Meters are specifically designed to assess television viewing. It is unclear whether the technology could be adapted to capture other forms of media use.

Arbitron developed PPMs specifically to address some of the problems with the stationary People Meters used by Nielsen. According to Arbitron, the advantages of PPMs compared to Nielsen People Meters include the following: (a) PPMs provide true multimedia data from the same respondents—radio, broadcast TV, and cable TV can all be encoded and measured by the same device; (b) PPMs measure people passively—they do not require an individual to write something down or push a button to track media use; (c) PPMs measure people, not devices, and thus can measure media use wherever it occurs; and (d) PPMs include a sophisticated motion detection system enabling Arbitron to exclude panelists who, according to the motion detector in their PPM, are not currently wearing the device (Fitzgerald, 2004; Lapovsky, 2004; McConochie et al., 2005).

All of these characteristics do indeed seem to be advantages over the Nielsen People Meter. One of the biggest issues with the Nielsen meter is that its estimates of television viewing tend to be much larger than estimates from empirical studies, probably because there is no way of knowing how much television time is logged when no one is actually watching. An enormous number of Americans turn on the television and walk away, sometimes on purpose (for “company”), sometimes because they become distracted by something else (giving a child a bath, making dinner, etc.). Unless the individual is extremely conscientious about clicking his or her meter on and off, there seems to be an enormous potential for measurement “slippage” here. This is why Arbitron notes that its technology measures people rather than devices.

One issue to consider is that despite the promise of the PPM, especially with respect to tracking media multitasking, both of these techniques are, in essence, media diaries. As such, they cannot inform scholars about the relation of media use to other activities throughout the day, nor can they inform about the social context of media use (coviewing, coplaying, etc.). Finally, as proprietary data, the cost of using them, from a scholar’s perspective, can be prohibitive. Both Nielsen and Arbitron charge for data collection and do not sell their technology directly to others (selling PPMs to scholars for research use, for example). Though the cost of data will vary tremendously depending on the nature of what is collected, costs for such data can be expected to begin at $50,000. Nielsen and Arbitron have, in fact, entered into a partnership under the auspices of Scarborough Research. The extent to which data from this joint venture will be available to scholars, and at what cost, remains to be seen.

New Technologies

Additional technologies relevant here include software for tracking Internet and Web use. A variety of software systems are available to track Web site use. These have mainly been developed for business applications, especially for marketing and advertising purposes. It is possible, however, that such software could be installed on the home (or laptop) computers of research participants and could thus be used to track Web site and Internet use on an individual basis, much as television viewing has been videotaped (Sheehan & Hoy, 1999).
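As a hedged illustration of what such tracking might record, the sketch below logs per-visit records for a participant. All field names and the logging approach are hypothetical; the point is simply that per-visit logs of this kind would support both time-based and content-based measures of Internet use.

```python
# A minimal sketch of the kind of record a research-oriented browsing tracker
# might keep. Everything here (field names, the logging approach) is
# hypothetical, not the design of any existing tracking product.

import time

visit_log = []  # appended to by whatever instrumentation the researcher installs

def log_visit(participant_id: str, url: str, seconds: float) -> None:
    """Record one page visit with its duration."""
    visit_log.append({
        "participant": participant_id,
        "url": url,
        "seconds": seconds,
        "logged_at": time.time(),
    })

log_visit("p01", "http://example.com/weather", 45.0)
log_visit("p01", "http://example.com/news", 300.0)
print(sum(v["seconds"] for v in visit_log) / 60, "minutes online")
```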

There are also a number of other intriguing technologies in development worth mentioning. These include time–event recorders, wearable computers, and VideoTraq translation software for videotaped activities (Starner et al., 1997; Zartarian et al., 1997). As yet, none of these technologies has been used to track media use, nor have they been used very much in general. However, they may hold promise for tracking media use and activities in the digital age.

Measuring Exposure to Content

For those interested in the impact of media on children, measuring exposure by type and amount is only a small part of the story. There is a large body of evidence indicating that the content of what children view is at least as important as the types of media used and how much they use them (see, e.g., Bryant & Zillman, 2002). Thus researchers have examined the effect of educational content, sexual content, and televised food advertising on a variety of outcomes. Although the measurement techniques for assessing content ultimately depend on how well we can measure use, content is worth discussing here because, for media researchers, it is in some respects the “holy grail” of research on the impact of media on children.

The most commonly used techniques for measuring content include (a) asking children to list three to five of their favorite shows, (b) asking for program titles within diary data, and (c) coding the program content of shows, time slots, or video games popular with children of a particular age group. Given the seeming importance of content in assessing the impact of media on children, these approaches are singularly unsatisfying. None is particularly precise. “Favorite” questions and diary data can provide only a very broad-brush picture of the content of children’s media use. Although they can be seen as measures of actual use (diaries more so than favorite questions), these techniques will miss messages that are viewed by default—such as advertising, product placement, and trailers and previews (some of which can be quite frightening, despite the fact that they are aired during prime time). Content coding of popular shows, programs, Internet sites, and video games can capture these messages. For example, some have content-coded food advertised on children’s programming (Harrison & Marske, 2005). However, this approach does not capture the viewing of individual children but rather estimates that a certain number of children will view such content. Thus, connections between exposure and individual behavior are difficult to make.
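Computationally, the second and third approaches amount to joining use records (diary-reported titles, for instance) against a content codebook. A minimal sketch, with invented titles and codes: real content analysis requires a validated codebook and trained coders, and, as noted above, the join says nothing about uncoded titles or about content viewed by default.

```python
# Linking diary-reported program titles to content codes. Titles and codes are
# invented for illustration; a real study would use a validated codebook.

CONTENT_CODES = {  # hypothetical codebook: title -> set of coded attributes
    "Show A": {"educational"},
    "Show B": {"violence", "advertising-heavy"},
}

def exposure_profile(diary_titles: list[str]) -> dict[str, int]:
    """Count diary entries exposing the child to each coded content category."""
    counts: dict[str, int] = {}
    for title in diary_titles:
        for code in CONTENT_CODES.get(title, set()):  # uncoded titles contribute nothing
            counts[code] = counts.get(code, 0) + 1
    return counts

print(exposure_profile(["Show A", "Show B", "Show B", "Unknown Show"]))
# {'educational': 1, 'violence': 2, 'advertising-heavy': 2}
```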

Questions for Further Discussion

We would like to raise two additional, more general questions for further discussion.

1. How close is close enough?

This is a question of reliability and validity in measurement, and one that all researchers face, no matter what their area of interest. All researchers must grapple with how much detail they need to collect to obtain a reasonable estimate of the factor they wish to measure. With respect to media use, diary estimates appear to be highly correlated with observational measures (.80 to .85), whereas global estimates are, at best, only moderately correlated with both observational and diary data (.20 to .40).
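
A convergent-validity check of this kind amounts to correlating two sets of estimates for the same children. The minimal sketch below uses made-up numbers chosen to mimic diary-like agreement; the hours shown are not data from any of the studies discussed.

```python
# A minimal sketch of a convergent-validity check: correlating diary
# estimates of daily viewing hours with observational estimates for
# the same children. All numbers are hypothetical.
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

diary       = [2.0, 3.5, 1.0, 4.0, 2.5]  # hypothetical hours per day
observation = [2.2, 3.1, 0.8, 4.3, 2.4]
print(f"r = {pearson_r(diary, observation):.2f}")  # prints r = 0.98
```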

However, an examination of actual estimates of use across different techniques shows that they are not that far apart. Using People Meters, Nielsen reported 2.81 hr per day of television viewing for a sample of 12- to 17-year-olds collected in 2000 (Nielsen Media Research, 2000). Using global estimates, the Kaiser Family Foundation reported 3 hr per day for a sample of 8- to 18-year-olds collected in 2002 (Kaiser Family Foundation, 2005). Time diary estimates of viewing based on the Child Development Supplement (CDS-II) for the same age range and year as the Kaiser data indicate an average of 2.8 hr. The Sloan study of 9th to 12th graders, with data collected using ESM in 1997, indicates 2.5 hr of viewing. Thus, despite the very different correlations of each of these measurement approaches with observational techniques, one would conclude from all of them that, on average, adolescents watch about 2.5 to 3 hr of television per day.

Of course, despite similarities in mean levels, the very different correlations already noted indicate that where these techniques really differ is with respect to error variance. Thus, the answer to the question "How close is close enough?" (as with many questions) is that it depends on your purpose. If the purpose of the research is descriptive (say, documenting average levels of media use at the population level), then global estimates may well be close enough. However, if the purpose of the research is predictive, then error variance becomes much more problematic, as it plays havoc with predictive models. Thus, researchers interested in prediction may do well to turn to measurement techniques other than global estimates.
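
A small simulation can illustrate why error variance matters far more for prediction than for description. The sketch below is a hypothetical illustration, not an analysis from the literature; it shows the classic attenuation effect, in which the estimated regression slope shrinks toward zero as measurement noise grows, even though mean use is essentially unchanged.

```python
# A hypothetical simulation of attenuation bias: noisier measures of
# media use yield increasingly understated regression slopes.
import random

random.seed(1)
N, TRUE_SLOPE = 5000, 0.5

true_use = [random.gauss(3.0, 1.0) for _ in range(N)]            # true hours/day
outcome = [TRUE_SLOPE * x + random.gauss(0, 1.0) for x in true_use]

def slope(xs, ys):
    """Ordinary least squares slope of ys regressed on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# noise SD 0.0 mimics a near-perfect measure; larger SDs mimic
# progressively less reliable reports (labels are illustrative).
for noise_sd in (0.0, 0.5, 2.0):
    measured = [x + random.gauss(0, noise_sd) for x in true_use]
    print(f"measurement noise SD {noise_sd}: estimated slope {slope(measured, outcome):.2f}")
```

With no noise the estimated slope recovers roughly the true 0.5; at the largest noise level it collapses toward 0.1, even though the sample mean of measured use is nearly identical in every condition.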

2. What do we mean by “a user”?

This somewhat existential question is, in fact, at the heart of judgments about the validity or veracity of any particular measurement approach to media use. On its surface, the answer seems plain. For television, the obvious answer is that a user is someone who watches TV; for video games, a user is someone who plays; and so on. The real question, however, is what counts as watching TV. For example, if I turn on the TV and walk away to do the dishes, let the dog out, care for a child, and so on, am I still "watching TV"? This is essentially a question of attention, and it is particularly relevant to television, which is on so much in American homes that if we were indeed "watching" it all the time it was on, we would scarcely have time to do anything else. This is one of the criticisms of Nielsen People Meters. Nielsen defines a TV viewer as someone who pushes a button on the People Meter remote when the set is tuned to a program (Ephron, 2002). In practice, this means that TV viewing is measured from the time the set is turned on (which triggers the People Meter) until the individual remembers to turn the People Meter off. This is the most likely reason that Nielsen estimates are often higher than estimates based on global questions, diaries, or observational techniques.
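
The inflation built into button-press metering can be sketched in a few lines. The following is a hypothetical illustration of session-based crediting, not Nielsen's actual algorithm; the event format and function name are assumptions made for the sketch.

```python
# A hypothetical sketch of button-press metering: viewing is credited
# from the button-on event until the button-off event, regardless of
# whether anyone is actually attending to the set.
def credited_minutes(events):
    """events: chronological (minute, action) pairs, with action in
    {'button_on', 'button_off'}. Returns total credited minutes."""
    total, started = 0, None
    for minute, action in events:
        if action == "button_on" and started is None:
            started = minute
        elif action == "button_off" and started is not None:
            total += minute - started
            started = None
    return total

# A viewer watches for 30 min, leaves the room, and only remembers to
# log off 90 min later: the meter credits 120 min, not 30.
print(credited_minutes([(0, "button_on"), (120, "button_off")]))  # 120
```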

It seems worth noting here that Arbitron reports that estimates of use based on its PPM technology are actually higher still than Nielsen estimates. For example, PPM data from panels in the Philadelphia area show TV viewing levels averaging 46% higher than Nielsen meter levels (Ephron, 2002). This is most likely the result of Arbitron's definition of a viewer as "someone within earshot of a station signal coming from a TV set" (Ephron, 2002). This definition is quite far from what most media scholars would consider viewing.

Again, this raises the question of attention, which is not a trivial matter for those interested in connecting media exposure to other behaviors and outcomes. However, the choice is not simple. Diary estimates are most likely biased toward recording media use at times when individuals were actually paying attention (or at least paid attention during some portion of the time). If respondents do not notice a medium, or are in another room while someone else is using it, it seems unlikely that they would report it as an activity they were engaged in on a time or media diary. However, in this media-saturated environment, is that what we want to capture? Arbitron maintains that the PPM is free of human error because it captures exposure to every signal it hears, so that all sources receive equal treatment. Thus, although the PPM counts as viewing behavior that would not ordinarily be considered viewing, these behaviors are not filtered through respondents' memory or judgment (Ephron, 2002). This is essentially the difference between active measurement (on the part of the respondent) and passive measurement (electronic tracking). Active measurement requires participation from the respondent, with all that implies, good or bad, depending on the question at hand. Passive measurement means that the user makes no judgment about whether he or she is actually using the medium. Whether active or passive measurement is better suited to assessing media use will again depend on the overarching purpose of the particular study.

Conclusions

Rapid advances in digital technology and increased media multitasking have somewhat confounded researchers interested in examining the impact of media on children. The rapidity of technological advances, and the pace at which consumers have embraced them, has far outpaced readily available methods for assessing their use. In part, the impetus for this review was the image of youth conjured at the beginning of this article: an adolescent doing homework on the computer while instant messaging friends, with the television and/or music on in the background. In the context of tracking daily activities, media use among them, accurately measuring use and exposure is a tall order for any one technique, including newly developed and still-developing electronic monitoring techniques. Even with such advances, it appears that the most effective approach will be to triangulate measurement techniques, for example, diaries in combination with electronic monitoring (such as Internet tracking software). However, the measurement approaches chosen for use in combination with one another will depend on the particular research question at hand. No single approach, or even combination of approaches, should be viewed as a panacea for addressing the complicated issue of media use measurement.

Acknowledgments

The authors wish to expressly thank Hope Cummings, MA, and Xuan Huang, MA, for their help in preparation of this article.

Biographies

Elizabeth A. Vandewater is a Senior Research Scientist in RTI International’s Public Health and Environment Division. Her research focuses on the impact of media on children’s health outcomes and health behaviors.

Sook-Jung Lee is a full-time lecturer in the Department of Mass Communication at Chung-Ang University. Her research focuses on young people’s media use and its social impact.

References

  1. Anderson DR, Field DE. Online and offline assessment of the television audience. In: Bryant J, Zillmann D, editors. Responding to the screen: Reception and reaction processes. Hillsdale, NJ: Lawrence Erlbaum; 1991. pp. 199–216.
  2. Anderson DR, Field DE, Collins PA, Lorch EP, Nathan JG. Estimates of young children’s time with television: A methodological comparison of parent reports with time-lapse video home observation. Child Development. 1985;56:1345–1357. doi: 10.1111/j.1467-8624.1985.tb00202.x.
  3. Anderson DR, Huston AC, Schmitt K, Linebarger DL, Wright JC. Early childhood television viewing and adolescent behavior: The recontact study. Monographs of the Society for Research in Child Development. 2001;66 (Serial No. 264).
  4. Borzekowski DLG, Robinson TN. Viewing the viewers: Ten video cases of children’s television viewing behaviors. Journal of Broadcasting and Electronic Media. 1999;43(4):506–528.
  5. Bryant J, Zillmann D. Media effects: Advances in theory and research. New York: Lawrence Erlbaum; 2002.
  6. Clancey M. The television audience examined. Journal of Advertising Research. 1994;34:1–10.
  7. Csikszentmihalyi M. Play and intrinsic rewards. Journal of Humanistic Psychology. 1975;15(3):41–63.
  8. Csikszentmihalyi M, Kubey R. Television and the rest of life: A systematic comparison of subjective experience. Public Opinion Quarterly. 1981;45:317–328.
  9. Csikszentmihalyi M, Larson R. Validity and reliability of the experience-sampling method. Journal of Nervous and Mental Disease. 1987;175:526–536. doi: 10.1097/00005053-198709000-00004.
  10. Csikszentmihalyi M, Schneider B. Becoming adult: How teenagers prepare for the world of work. New York: Basic Books; 2000.
  11. Danaher PJ, Beed TW. A coincidental survey of people meter panelists: Comparing what people say with what they do. Journal of Advertising Research. 1993;33(1):86–92.
  12. Danaher PJ, Lawrie JM. Behavioral measures of television audience appreciation. Journal of Advertising Research. 1998;38(1):54–65.
  13. Durant R, Baranowski T, Johnson M, Thompson W. The relationship among television watching, physical activity, and body composition of young children. Pediatrics. 1994;94(4):449–455.
  14. Ephron E. The Arbitron PPM versus the Nielsen meter/diary. 2002. Retrieved February 20, 2006, from http://www.ephronmedia.com
  15. Fitzgerald J. Evaluating return on investment of multimedia advertising with a single-source panel: A retail case study. Journal of Advertising Research. 2004;44(3):262–270.
  16. Greenberg BS, Eastin MS, Skalski P, Cooper L, Levy M, Lachlan K. Comparing survey and diary measures of Internet and traditional media use. Communication Reports. 2005;18(1):1–8.
  17. Harrison K, Marske AL. Nutritional content of foods advertised during the television programs children watch most. American Journal of Public Health. 2005;95:1568–1574. doi: 10.2105/AJPH.2004.048058.
  18. Hofferth SL. Family reading to young children: Social desirability and cultural biases in reporting. Paper presented at the National Research Council Workshop on Measurement and Research on Time Use; Washington, DC: Committee on National Statistics; 1999, May.
  19. Huston AC, Wright JC, Rice ML, Kerkman D, St. Peters M. Development of television viewing patterns in early childhood: A longitudinal investigation. Developmental Psychology. 1990;26:409–420.
  20. Juster FT. Response errors in the measurement of time use. Journal of the American Statistical Association. 1986;81:390–402.
  21. Juster FT, Stafford FP, editors. Time, goods, and well-being. Ann Arbor: University of Michigan, Institute for Social Research; 1985.
  22. Juster FT, Stafford FP. The allocation of time: Empirical findings, behavioral models, and problems of measurement. Journal of Economic Literature. 1991;29:471–522.
  23. Kaiser Family Foundation. Kids and the media@the new millennium. Menlo Park, CA: Author; 1999.
  24. Kaiser Family Foundation. Zero to six: Media use in the lives of infants, toddlers, and preschoolers. Menlo Park, CA: Author; 2003.
  25. Kaiser Family Foundation. Generation M: Media in the lives of 8–18 year-olds. Menlo Park, CA: Author; 2005.
  26. Kubey R. Television and the quality of family life. Communication Quarterly. 1990;38(4):312–324.
  27. Kubey R, Larson R, Csikszentmihalyi M. Experience sampling method applications to communication research. Journal of Communication. 1996;46(2):99–120.
  28. Lapovsky D. New ways to measure media vehicles. 2004. Retrieved February 20, 2006, from http://www.arbitron.com/downloads/Lapovsky_AAAA.pdf
  29. Larson R. Beeping children and adolescents: A method for studying time use and daily experience. Journal of Youth and Adolescence. 1989;18(6):511–530. doi: 10.1007/BF02139071.
  30. McConochie RM, Wood L, Uyenco B, Heider C. Progress towards media mix accountability: Portable people meters’ (PPM) preview of commercial audience results. Paper presented at the ARF/ESOMAR 2005 Week of Worldwide Audience Measurement Conference; Montréal, Canada; 2005, June.
  31. Nielsen Media Research. 2000 report on television. 2000. Retrieved October 17, 2005, from http://www.nielsenmedia.com
  32. Robinson JP. How Americans use time: A social-psychological analysis of everyday behavior. New York: Praeger; 1977.
  33. Robinson JP. The validity and reliability of diaries versus alternative time use measures. In: Juster FT, Stafford FP, editors. Time, goods, and well-being. Ann Arbor: University of Michigan, Institute for Social Research; 1985. pp. 33–62.
  34. Robinson JP, Godbey G. Time for life: The surprising ways Americans use their time. University Park: Pennsylvania State University Press; 1997.
  35. Scheuch EK. The time-budget interview. In: Szalai A, editor. The use of time. The Hague, Netherlands: Mouton; 1972. pp. 69–87.
  36. Sheehan KB, Hoy MG. Using e-mail to survey Internet users in the United States: Methodology and assessment. Journal of Computer-Mediated Communication. 1999;4(3). Retrieved February 20, 2006, from http://jcmc.indiana.edu/v014/issue3/sheehan.html
  37. Soong R. The statistical reliability of people meter ratings. Journal of Advertising Research. 1988;28(1):50–56.
  38. Starner et al. Augmented reality through wearable computing. 1997. Retrieved February 20, 2006, from http://www.media.mit.edu/publications/
  39. U.S. Department of Labor, Bureau of Labor Statistics. American time use survey. 2004. Retrieved February 20, 2006, from http://www.bls.gov/tus/
  40. Vandewater EA, Bickham DS, Lee JH. Time well spent? The impact of media use on children’s free-time activities. Pediatrics. 2006;117:181–191. doi: 10.1542/peds.2005-0812.
  41. Vandewater EA, Lee JH, Shim M. Family conflict and violent electronic media use in school-age children. Media Psychology. 2004;7:73–86.
  42. Van Meurs L. Zapp! A study on switching behavior during commercial breaks. Journal of Advertising Research. 1998;38(1):43–53.
  43. Wright JC, Huston AC. Effects of educational TV viewing of lower income preschoolers on academic skills, school readiness, and school adjustment one to three years later. Lawrence: University of Kansas, Center for Research on the Influences of Television on Children; 1995.
  44. Zartarian et al. Quantifying videotaped activity patterns: Video translation software and training methodologies. Journal of Exposure Analysis and Environmental Epidemiology. 1997;7(4):535–542.
  45. Zartarian, Ferguson, Leckie. Quantified dermal-activity data from a four-child pilot field study. Journal of Exposure Analysis and Environmental Epidemiology. 1997;7(4):543–553.
