Abstract
Media literacy has the potential to alter outcomes in various fields, including education, communication, and public health. However, measurement of media literacy remains a critical challenge in advancing this field of inquiry. In this manuscript, we describe the development and testing of a pilot measure of media literacy. Items were formed based on a composite conceptual model and administered to college communication students (n = 34). Each of three media literacy subscales had good internal consistency reliability (α1 = 0.74, α2 = 0.79, α3 = 0.75). Principal components analysis revealed a five-factor structure that corresponded closely with the underlying conceptual model. As was expected, the media literacy scale was significantly correlated with a composite critical thinking measure (r = 0.32, P = .03). This scale may be valuable for the measurement of media literacy and the assessment of media literacy interventions.
Keywords: media literacy, measurement, critical thinking, assessment
Introduction
Educators in the United States lag behind those in Canada, Australia, Great Britain and New Zealand in teaching students about the media and its messages. This type of education can be referred to as “media literacy,” which is generally defined as “the ability to understand, analyze, evaluate and create media messages in a wide variety of forms” (Aufderheide & Firestone, 1993; Thoman, 2003). US Government officials such as Federal Communications Commissioner Deborah Taylor Tate believe that we must find ways to make our children more “media literate” in order to improve critical appraisal of media messages (Eggerton, 2006).
All 50 states now refer to media literacy in their curricular guidelines and recommendations for public basic education (Kubey, 1998). Similarly, the Goals 2000: Educate America Act calls for a substantial increase in “the proportion of college graduates who demonstrate an advanced ability to think critically, communicate effectively and solve problems” (Berko, Morreale, Cooper, & Perry, 1998, p. 175). Since media literacy is primarily a critical thinking skill applied to our main source of information, the media (Silverblatt, 2001), instruction in this area can contribute to overall improvements in reaching national educational goals.
However, while the field of media literacy is growing in interest and participation, research including empirical studies appears to be lacking (Singer & Singer, 1998). A recent systematic review of the literature reveals that relatively little quantitative research is available that examines the usefulness of this instruction (Bergsma & Carney, 2008). Hobbs and Frost (2003) concur that there is “little school-based empirical research … to demonstrate the impact of media-literacy curriculum on students' attitudes, behavior, knowledge and academic performance” (p. 332).
One important reason for this is the challenge of measuring media literacy (Bergsma, 2008; Hobbs & Frost, 2003; Primack, Gold, Land, & Fine, 2006a). As Scharrer (2002) states: “The results of participation in media literacy curricula are not often explicitly defined and measured, but there is a generalized notion about what these outcomes are” (p. 354). Such generalizations will not be sufficient as the field of media literacy develops. Accrediting bodies stress assessment to ensure that stated goals and objectives are being met. Thus, to demonstrate the value of the subject matter, media literacy advocates must develop tools that accurately measure and report the desired skill development and improvement.
Previous research on measurement of media literacy
Others have conducted research on the measurement of media literacy. Quin and McMahon (1995) studied two tests developed by a panel of teachers. The first analyzed print advertisements, while the second examined a 12-minute television clip from a situation comedy. Both tests assessed language, narrative, audience, and other areas of analysis. Although these instruments seemed to measure appropriately the aspects of media literacy emphasized in various conceptual models, extensive psychometric properties of the measures are not available (Quin & McMahon, 1995).
Hobbs and Frost (2003) developed similar methods to measure media literacy. They used an intensive qualitative analysis of student responses to assess student media literacy skills based upon the established definition of media literacy described above (Aufderheide & Firestone, 1993). Constructs assessed included students' ability to identify the purpose, target audience, point of view, and construction techniques used in media messages, as well as their ability to identify information omitted from a news media broadcast in written, audio, or visual formats.
Hobbs and Frost found that their measures of media literacy were internally consistent based on Cronbach's alpha values. Additionally, they found that students who engaged in a grade 11 media literacy course significantly improved on these measures, while students in a matched group who received no instruction did not improve (Hobbs & Frost, 2003). This would suggest that these measures had a certain amount of criterion validity, i.e., they demonstrated changes expected when currently accepted standards of media literacy education were applied. However, these researchers did not assess the underlying factor structure of student responses to assess whether it approximated the underlying conceptual model (content validity). Additionally, they did not assess the construct validity of the scale, i.e., whether values on that scale were significantly associated with other constructs related to media literacy.
Another group of researchers developed a scale specifically to assess media literacy with regard to pro-smoking media messages (Primack et al., 2006a). These authors used theoretical modeling, item refinement, and factor analysis to develop and validate a scale measuring adolescents' media literacy with regard to smoking (Primack et al., 2006a). In subsequent analyses, they found that adolescents' overall “smoking media literacy” as measured by this scale was strongly and independently associated with both reduced adolescent smoking and reduced susceptibility to future smoking (Primack et al., 2006b). Although these authors did use appropriate psychometric methods to assess content and construct validity of their scale, the scale was specifically designed to assess smoking-related media literacy, which may or may not be related to overall media literacy. Additionally, these authors used a Likert-type self-report measure of media literacy. It would be valuable to validate instead a measure of media literacy that is objectively assessed by an outside reviewer.
Media literacy and critical thinking
Critical thinking is one particular construct that may be valuable in the validation of any media literacy scale. In his book Media Literacy: Keys to Interpreting Media Messages, Silverblatt (2001) links media literacy and critical thinking skills when he identifies the primary element of media literacy as “a critical thinking skill that enables audiences to develop independent judgments about media content” (p. 2). He continues by stating that media literacy is first and foremost about applying critical thinking skills to the media. Since critical thinking is an educational objective frequently stated by administrators in both secondary and higher education, media literacy can be seen as a means of achieving widely shared academic goals.
At least one international study supported the link between media literacy and critical thinking. Feuerstein (1999) examined media literacy as a means to develop critical thinking in children ages 10–12 in six Northern Israeli primary schools. After conducting pre- and post-tests to measure the impact of media literacy course content, Feuerstein concluded that as pupils increased their experience with the media literacy program, they showed proportionally greater gains in media analysis and critical thinking skills (Feuerstein, 1999). Thus, it may be valuable to assess the relationship between critical thinking and any measure that purports to assess media literacy.
Purpose of the current study
There remains a need to rigorously develop, refine, and validate objective measures of media literacy. The purpose of this study was to begin this process by developing a pilot measure of media literacy and assessing its psychometric properties. Our process began with the development of an objective measurement scale based upon an established conceptual model of media literacy. Aim 1 was to determine the internal consistency of each of the three subscales (radio, TV, and print) as well as the overall scale. We hypothesized that each of these measures of media literacy would be internally consistent according to Cronbach's alpha values (Hypothesis 1). Second, we aimed to assess the content validity of the measure by comparing the underlying factor structure of the measurement data with the conceptual model (Aim 2). We hypothesized that the underlying factor structure would approximate the theoretical basis of the scale (Hypothesis 2). Finally, we aimed to assess the construct validity of the scale by comparing media literacy values with an assessment of critical thinking, a construct closely aligned with media literacy (Aim 3). We hypothesized that the composite media literacy score would be significantly correlated with the composite critical thinking score as measured by the California Critical Thinking Skills Test (CCTST) (Hypothesis 3).
Methods
Conceptual model of media literacy
We developed a conceptual model (Table 1) based on models specific to media literacy (Aufderheide, NAMLE) and broadly related to education (Bloom). In 1993, a group of media literacy scholars met to define media literacy and discuss implications for the field. Their meeting resulted in the following definition: “The ability to understand, analyze, evaluate and create media messages in a wide variety of forms” (Aufderheide & Firestone, 1993). For the purposes of this measurement, we will use this definition with one exception. Specifically, it is beyond the purview of this work to assess the ability of an individual to create a media message. Thus, our working definition of media literacy will be “The ability to understand, analyze, and evaluate media messages in a wide variety of forms.”
Table 1.
Theoretical framework of media literacy.
| Label | Domain | Item code | Item(s) | Aufderheide | NAMLE Key Questions | Bloom |
|---|---|---|---|---|---|---|
| A | Recall | Recall | Factual recall items | Access | Content | Knowledge |
| B | Purpose | Purpose | Explain the purpose of the message. | Access | Purpose | Comprehension |
| C | Viewpoint | Sender | Identify the sender of the message. | Analyze | Author/Audience | Analysis |
|  |  | Missing | What points of view may be missing? | Analyze | Author/Audience | Analysis |
| D | Technique | Technique | How does the sender attract and hold your attention? | Analyze | Techniques | Analysis |
| E | Evaluation | Evaluation | What attitudes or feelings are you left with afterwards? | Evaluate | Credibility | Evaluation |
|  |  | Inference | What does the information suggest? | Evaluate | Credibility | Synthesis |
The National Association for Media Literacy Education (NAMLE) has adopted Core Principles of Media Literacy and associated Key Questions of Media Literacy that further define the central domains of this construct. As have others (Hobbs & Frost, 2003; Primack et al., 2006a), we used these core principles to underpin our conceptual framework of media literacy (Table 1). While few curriculum theorists are familiar with the field of media literacy (Scrimshaw, 1992), the media literacy measure in this study was also linked to the general taxonomy of learning established by Bloom, Hastings, and Madaus (1971).
Our ultimate conceptual model consisted of five domains: recall, purpose, viewpoint, technique, and evaluation (Table 1). These domains correspond to similar constructs described by Aufderheide, NAMLE, and Bloom (Table 1). The “Recall” domain assesses basic understanding of and access to the media message. The “Purpose” domain assesses comprehension of the purpose of the message. The “Viewpoint” domain assesses whether the participant can identify both (1) the sender of the message and (2) what points of view may be left out of the message. The “Technique” domain assesses an individual's ability to analyze the techniques that were used to attract attention. Finally, the “Evaluation” domain assesses how an individual evaluates the message in comparison to his/her own perspective. Thus, this domain includes an individual's subjective assessment of his/her attitudinal reaction to the message as well as other implications of the message.
Setting and participants
The target population for this study was college students enrolled at a Christian liberal and applied arts and sciences college in Pennsylvania. A purposive sample of students in a communication-based course open to non-communication majors was used to provide a degree of diversity among the participants. Future testing could be expanded to other institutions, to classes focused on other disciplines, or to a more generally diverse population.
Participants for the study were recruited from an undergraduate class at the institution where the researcher is an associate professor of communication. Their willingness to participate was ascertained through the processes required by the IRBs at Duquesne University and Messiah College. Participation was not a condition for a grade in the course in which the testing was conducted. By measuring a diversely enrolled communication-based class, the researcher aimed to assemble a sample that was representative of the student body. The sample size was 34, which provided a reasonable amount of data for correlational analysis.
Measures
Media literacy
Our media literacy scale consisted of seven measures (Table 1) corresponding to the five domains in our conceptual framework. The “recall” score (range, 0–5) was based on responses to seven specific factual recall items. The “purpose” score (range, 0–5) was based on the response to the open-ended purpose item shown in Table 1. Each of the other scores (“sender,” “missing,” “technique,” “evaluation,” “inference”) was similarly scored from 0 to 5. Open-ended responses were evaluated and converted to objective scores based on the work of Worsnop. Specifically, in Assessing Media Work, Worsnop (1996) provides an “Assessment Scale for Response to Media Texts” that guided the conversion of students' open-ended responses to numerical scores. Each of these media literacy measures was assessed for each of three media examples: one radio, one television, and one print. This was done to acknowledge Aufderheide's suggestion that media literacy assess various abilities “in a variety of forms.” We selected these particular types of media following the lead of Hobbs and Frost (2003). Finally, the use of three different media analysis situations seemed a reasonable minimum on which to base a reliable assessment of mastery (Gagné & Briggs, 1979).
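To illustrate the scoring scheme, the sketch below shows how seven item scores (each rated 0–5 from open-ended responses using Worsnop's rubric) for each of the three media examples might be aggregated into subscale and composite scores. The data structure, function names, and example values are hypothetical; the actual scoring in this study was performed by a rater, not by software.

```python
# Illustrative (hypothetical) aggregation of item-level scores into subscale
# and composite media literacy scores. Each medium has seven items, each 0-5.

ITEMS = ["recall", "purpose", "sender", "missing",
         "technique", "evaluation", "inference"]

def subscale_score(item_scores: dict) -> int:
    """Sum the seven 0-5 item scores for one medium."""
    return sum(item_scores[item] for item in ITEMS)

def composite_score(responses: dict) -> int:
    """Sum the three subscale scores (radio, TV, print)."""
    return sum(subscale_score(scores) for scores in responses.values())

# Example participant (values are illustrative only)
participant = {
    "radio": {"recall": 4, "purpose": 3, "sender": 2, "missing": 2,
              "technique": 3, "evaluation": 4, "inference": 3},
    "tv":    {"recall": 3, "purpose": 4, "sender": 3, "missing": 2,
              "technique": 2, "evaluation": 3, "inference": 3},
    "print": {"recall": 4, "purpose": 2, "sender": 3, "missing": 1,
              "technique": 3, "evaluation": 3, "inference": 2},
}

print({medium: subscale_score(scores) for medium, scores in participant.items()})
print("composite:", composite_score(participant))
```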
Critical thinking
We used the California Critical Thinking Skills Test (CCTST) to assess critical thinking and reasoning skills. The CCTST is distributed by Insight Assessment and the California Academic Press and is also geared toward students at the higher education level. The test was updated in 2000 and is supported by its publishers. The test's website reports concurrent validity with the quantitative, analytical, and verbal portions of the Graduate Record Examination (GRE), the Watson–Glaser Critical Thinking Appraisal (which measures critical thinking in relation to reading comprehension), and college GPA. In reviewing the initial edition of the test, McMorris writes, “Pretest scores correlate with college GPA (.20), SAT-V (.55), SAT-M (.44), most Nelson-Denny reading scores (.40s) and posttest scores (.70)” (cited in Conoley & Impara, 1995). For reliability, he concludes that internal consistency is approximately .70 (Conoley & Impara, 1995, p. 144). Reviews of the test also support its content validity (Conoley & Impara, 1995). The CCTST benefited in its development from the contributions of the Delphi panel, a group of 46 nationally visible critical thinking scholars who spent two years defining critical thinking for general education at the lower-division college level (Conoley & Impara, 1995).
Analysis
We first used Cronbach's alpha to assess the internal consistency reliability of the composite media literacy measure as well as each of its subscales (Aim 1). We then performed principal components analysis to determine the underlying factor structure of the students' responses. Because the scree plot of the initial analysis suggested a five-factor solution, we extracted five factors and applied a Varimax rotation (Aim 2). We then used Pearson r values to determine the correlations between the media literacy measures and the critical thinking measures (Aim 3). We used one-sided tests for all of the correlation analyses since we hypothesized that these constructs would be correlated with each other. Analyses were conducted using SPSS Version 12.0 (2003) and Stata 9.2 (Statacorp, 2007). We defined statistical significance with an alpha of 0.05.
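The original analyses were run in SPSS and Stata; as a rough illustration of the same steps, the sketch below computes Cronbach's alpha, extracts five principal components with a Varimax rotation, and evaluates a one-sided Pearson correlation in Python with NumPy and SciPy. The data are synthetic and the variable names hypothetical; this is not the study's analysis code.

```python
# Illustrative re-creation of the analysis steps (synthetic data, hypothetical names).
# Shapes mirror the study: 34 participants, 21 media literacy items (7 per medium).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
items = rng.integers(0, 6, size=(34, 21)).astype(float)  # 21 items scored 0-5
critical_thinking = rng.normal(19, 6, size=34)            # synthetic CCTST composite

def cronbach_alpha(x):
    """Internal consistency of a set of items (rows = respondents, cols = items)."""
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print("alpha, total scale:", round(cronbach_alpha(items), 2))
print("alpha, first subscale:", round(cronbach_alpha(items[:, :7]), 2))

def varimax(loadings, max_iter=100, tol=1e-6):
    """Kaiser's Varimax rotation of a loading matrix."""
    p, k = loadings.shape
    rot, d = np.eye(k), 0.0
    for _ in range(max_iter):
        L = loadings @ rot
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p))
        rot = u @ vt
        if s.sum() < d * (1 + tol):
            break
        d = s.sum()
    return loadings @ rot

# Principal components of the item correlation matrix; retain five components.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
top5 = np.argsort(eigvals)[::-1][:5]
loadings = eigvecs[:, top5] * np.sqrt(eigvals[top5])
rotated = varimax(loadings)
print("rotated loading matrix shape:", rotated.shape)  # (21 items, 5 factors)

# One-sided Pearson correlation between composite scores.
ml_total = items.sum(axis=1)
r, p_two_sided = pearsonr(ml_total, critical_thinking)
p_one_sided = p_two_sided / 2 if r > 0 else 1 - p_two_sided / 2
print(f"r = {r:.2f}, one-sided P = {p_one_sided:.3f}")
```

The one-sided P value is obtained by halving the two-sided value when the observed correlation falls in the hypothesized (positive) direction, which matches the directional hypotheses stated above.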
Results
Of 35 eligible students, 34 participated (97%). The sample was 60% female, and 61% reported receiving their basic education in Pennsylvania. Most students were sophomores (58%), and only 44% were declared communication majors.
The composite media literacy scale, as well as each of the subscales, was internally consistent (Table 2). Thus, Hypothesis 1 was supported.
Table 2.
Internal consistency.
| Scale | Cronbach's α | Average Interitem Covariance | Items |
|---|---|---|---|
| Radio | 0.74 | 0.30 | 7 |
| TV | 0.79 | 0.41 | 7 |
| Print | 0.75 | 0.34 | 7 |
| Total | 0.90 | 0.35 | 21 |
Principal components analysis (Table 3) revealed a five-factor structure. These five factors corresponded closely with the predictions of the conceptual framework (Table 1). For example, the six items that loaded highest on Factor 1 were the “evaluation” and “inference” items (Domain E) for each of the three media types. The three items loading highest on Factor 2 were the three “purpose” items (Domain B). Although the relationships were not as consistent, many of the items loading highest on Factor 3 were associated with the “sender” or “missing” items (Domain C). Both of the items loading highest on Factor 4 were “objective” (recall) items (Domain A). Finally, all three items loading highest on Factor 5 were “technique” items (Domain D). Thus, Hypothesis 2 was supported.
Table 3.
Factor analysis results.*
| Medium | Construct | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 |
|---|---|---|---|---|---|---|
| Radio | Objective | −0.01 | 0.07 | −0.02 | 0.83 | 0.11 |
|  | Purpose | 0.13 | −0.96 | 0.13 | 0.04 | 0.10 |
|  | Sender | 0.50 | 0.36 | 0.41 | 0.15 | 0.36 |
|  | Missing | 0.51 | 0.00 | 0.35 | 0.37 | −0.04 |
|  | Techniques | 0.27 | 0.19 | 0.03 | 0.05 | 0.65 |
|  | Evaluation | 0.73 | 0.12 | 0.13 | 0.18 | 0.06 |
|  | Inference | 0.70 | 0.15 | 0.18 | 0.03 | 0.00 |
| TV | Objective | −0.09 | 0.00 | 0.58 | −0.12 | 0.25 |
|  | Purpose | 0.13 | 0.96 | 0.13 | 0.04 | 0.10 |
|  | Sender | 0.25 | 0.30 | 0.82 | 0.11 | 0.09 |
|  | Missing | 0.41 | 0.16 | 0.75 | −0.08 | −0.01 |
|  | Techniques | 0.02 | 0.27 | 0.22 | −0.16 | 0.70 |
|  | Evaluation | 0.67 | 0.00 | 0.40 | −0.07 | 0.12 |
|  | Inference | 0.62 | 0.34 | 0.31 | −0.13 | 0.29 |
| Print | Objective | 0.23 | 0.07 | 0.31 | 0.77 | −0.10 |
|  | Purpose | 0.13 | 0.96 | 0.13 | 0.04 | 0.10 |
|  | Sender | 0.15 | 0.21 | 0.84 | 0.28 | 0.08 |
|  | Missing | 0.44 | −0.41 | 0.40 | 0.21 | 0.33 |
|  | Techniques | 0.09 | 0.27 | 0.03 | 0.31 | 0.60 |
|  | Evaluation | 0.76 | 0.24 | 0.12 | 0.30 | 0.13 |
|  | Inference | 0.74 | 0.28 | 0.19 | −0.01 | −0.01 |
Note:
Shaded values are those with high factor loadings, which we defined as ≥0.40. Boxed values are those that would have been predicted to be high based on the theoretical model (Table 1).
Media literacy and critical thinking values exhibited normal distributions (Figure 1) with appropriate means and ranges (Table 4). The media literacy values (Radio, TV, print, and total) were all highly correlated with each other (Table 5). Similarly, the critical thinking values (Analysis, Evaluative, Inferential, and Total) were all highly correlated with each other (Table 5).
Figure 1.
Media literacy and critical thinking histograms: panel A, media literacy; panel B, critical thinking.
Table 4.
Media literacy and critical thinking values.
| Construct | Mean (SD) | Range |
|---|---|---|
| Media literacy | ||
| Radio | 16.0 (3.7) | 7–22 |
| Television | 16.2 (3.9) | 9–23 |
| Print | 15.7 (3.5) | 8–23 |
| Total | 47.9 (8.9) | 24–62 |
| Critical thinking | ||
| Analytic | 4.8 (1.7) | 1–7 |
| Evaluative | 5.4 (2.2) | 2–10 |
| Inferential | 8.5 (2.8) | 3–15 |
| Total | 18.8 (5.8) | 7–30 |
Table 5.
Correlation matrix.
|  | Radio | TV | Print | Total ML | Analysis | Evaluative | Inferential | Total CT |
|---|---|---|---|---|---|---|---|---|
| Radio | 1 |  |  |  |  |  |  |  |
| TV | 0.44* | 1 |  |  |  |  |  |  |
|  | .004 |  |  |  |  |  |  |  |
| Print | 0.58* | 0.35* | 1 |  |  |  |  |  |
|  | .0002 | .02 |  |  |  |  |  |  |
| Total ML | 0.84* | 0.77* | 0.79* | 1 |  |  |  |  |
|  | <.0001 | <.0001 | <.0001 |  |  |  |  |  |
| Analysis | 0.50* | 0.26 | 0.36* | 0.47* | 1 |  |  |  |
|  | .001 | .07 | .02 | .003 |  |  |  |  |
| Evaluative | 0.17 | −0.04 | 0.15 | 0.11 | 0.48* | 1 |  |  |
|  | .17 | .42 | .20 | .26 | .002 |  |  |  |
| Inferential | 0.27 | 0.18 | 0.26 | 0.29 | 0.65* | 0.67* | 1 |  |
|  | .06 | .15 | .07 | .05 | <.0001 | <.0001 |  |  |
| Total CT | 0.34* | 0.15 | 0.29* | 0.29* | 0.79* | 0.84* | 0.93* | 1 |
|  | .02 | .20 | .048 | .03 | <.0001 | <.0001 | <.0001 |  |
Note: CT, critical thinking; ML, media literacy.
* Indicates P < .05.
All P-values, below the Pearson r-values, represent values for one-sided tests.
The composite media literacy scale was significantly correlated with the composite critical thinking measure (r = 0.32, P = .03). Therefore, Hypothesis 3 was supported. However, not all components of each subscale were significantly correlated with each other (Table 5). In particular, the media literacy scales were generally significantly correlated with the “Analysis” subscale but less so with the “Evaluative” and “Inferential” subscales.
Discussion
In this study of 34 college-aged communication students, we found that media literacy could be reliably measured using a theory-based scale. We also found that the underlying factor structure of the students' responses matched the conceptual model, lending content validity to the scale. Finally, we found that, as expected, the media literacy scale was positively correlated with a commonly accepted critical thinking measure. These findings support the reliability and validity of this type of scale in the measurement of media literacy. Thus, this scale – and others similar to it – may be useful in measuring media literacy and/or evaluating media literacy programming.
One implication of these findings is that quantification of media literacy can be achieved. As Scharrer (2002) writes: “It is necessary to move beyond implicit assumptions about the benefits such efforts [media literacy education] can achieve and toward their explicit definition and measurement” (p. 354). Measures such as this one may help media literacy advocates to be more precise about the educational opportunities being offered in the field and the benefits such instruction provides.
Potter (2001) views media literacy as a continuum along which individuals are placed based on the strength of their overall perspective on the media. The test implemented for this study provides a means of gauging where an individual's current level of media literacy falls on that continuum of scores. Another implication of these findings is that teaching media literacy may help improve overall critical thinking skills. This finding is consistent with one of the conclusions reached by Feuerstein, who stated: “As pupils increase their experience with media literacy they will demonstrate greater capability in media analysis and critical thinking skills” (Feuerstein, 1999, p. 51). Providing instruction in media literacy should help further the goals and objectives of educational institutions striving to develop critical thinkers or critical thinking skills as part of their overall curriculum.
It is valuable that we were able to find these relationships among college students. Most studies and interventions related to media literacy involve younger adolescents (Bergsma, 2008), and thus there is a greater need for empirical studies among college-aged and adult participants. Another contribution of this scale is that it measures media literacy in general. The few measures uncovered in the existing literature were developed to assess a particular media education curriculum or a particular niche within media literacy (Bergsma & Carney, 2008; Primack et al., 2006a).
It is interesting that this measurement of media literacy correlated highly with the “Analysis” component of the CCTST but not as strongly with the other components. Analysis on the CCTST has a dual meaning: the first definition includes such subskills as decoding significance and clarifying meaning, while the term also represents such subskills as detecting arguments and analyzing arguments into their component parts (Insight, 2003). Evaluative thinking, or evaluation, also has a dual meaning when used in relation to the CCTST; subskills for this area include assessing claims and arguments, justifying procedures, and presenting arguments (Insight, 2003). Theoretically, evaluation is as much a part of media literacy as is analysis. It is possible that our measure captures more of the analytical than the evaluative elements. As we hone this measure, it may be valuable to improve its ability to capture evaluative and inferential elements.
This study focused on the cognitive aspects of media literacy. The National Association for Media Literacy Education (NAMLE) has developed a series of core principles for media literacy education in the United States. These principles focus on helping individuals to “develop the habits of inquiry and skills of expression that they need to be critical thinkers, effective communicators and active citizens in today's world” (NAMLE, 2007). Since the core principles and the critical thinking measure used in this study focus on cognitive aspects, the researchers applied this focus in order to further the existing research base. However, since media consumption can be an emotional experience, for instance regarding political and persuasive communication (Holbert, Hansen, Caplan, & Mortensen, 2007), it may be important to develop ways of assessing emotional aspects of media literacy in the future.
This pilot study was limited by its small sample size (n = 34). Additionally, the sample was sociodemographically homogeneous. Larger and more diverse samples should be used in future investigations. Furthermore, the cross-sectional nature of this study precludes our ability to determine the temporal nature of the relationship between media literacy and critical thinking. Longitudinal studies would be required to further explore these relationships.
Despite these limitations, this measurement device for media literacy represents a starting point for future efforts in bringing rigor to the measurement of media literacy and the careful evaluation of media literacy programming. Since much of the published material has been primarily qualitative in nature, aimed at middle school students, or somewhat limited in scope, this analysis makes unique contributions. The measure is based on sound and widely recognized educational principles. The test can be adapted to students at a number of levels and used within or outside of formal educational settings. While there has been great advancement of the field of media literacy in the United States in recent years, improved measurement of media literacy will help advocates learn more about what other constructs are associated with media literacy, what successes have been achieved, and what challenges still remain.
As the field of media literacy research evolves to include a digital literacy component (Australian Communications and Media Authority, 2008), it will become increasingly important to incorporate the measurement of literacy related to Web 2.0 content. This initial measurement tool was based on existing media literacy research and focuses on the more traditional forms of mass communication (radio, TV, and print). Since even the content of newer media is rooted in these primary forms of message development – audio, video, and print – examining these foundational media platforms represents a logical starting point in the development of media literacy measures.
However, as digital media literacy research develops alongside ongoing media literacy studies in the Web 2.0 world, we recognize that additional measurement tools and areas will need to be developed to keep measurement efforts current and applicable. Future research and subsequent development of reliable and accurate media literacy measures will need to include the construction of similar literacy questions applicable to newer media forms. The three areas of assessment for this measure may provide a valuable framework for these future efforts.
References
- Aufderheide P, Firestone C. Media literacy: A report of the national leadership conference on media literacy. Aspen Institute; Queenstown, MD: 1993.
- Australian Communications and Media Authority. What is digital media literacy and why is it important? 2008. Retrieved January 6, 2009, from www.acma.gov.au/WEB/STANDARD/pc=PC_311470.
- Bergsma LJ, Carney ME. Effectiveness of health-promoting media literacy education: A systematic review. Health Education Research. 2008;23:522–542. doi:10.1093/her/cym084.
- Berko R, Morreale S, Cooper P, Perry C. Communication standards and competencies for kindergarten through grade 12: The role of the National Communication Association. Communication Education. 1998;47(2):174–182.
- Bloom BS, Hastings JT, Madaus GF. Handbook on formative and summative evaluation of student learning. McGraw-Hill; New York: 1971.
- Conoley JC, Impara JC, editors. The twelfth mental measurements yearbook. Buros Institute of Mental Measurements; Lincoln, NE: 1995.
- Eggerton J. FCC indecency-fine boost passes. Broadcasting & Cable. 2006. Retrieved June 12, 2006, from source.
- Feuerstein M. Media literacy in support of critical thinking. Journal of Educational Media. 1999;24(1):43–54.
- Gagné RM, Briggs LJ. Principles of instructional design. 2nd ed. Holt, Rinehart and Winston; New York: 1979.
- Hobbs R, Frost R. Measuring the acquisition of media-literacy skills. Reading Research Quarterly. 2003;38:330–352.
- Holbert R, Hansen G, Caplan S, Mortensen S. Presidential debate viewing and Michael Moore's Fahrenheit 9/11: A study of affect-as-transfer and passionate reasoning. Media Psychology. 2007;9(3):673–694. doi:10.1080/15213260701283285.
- Insight Assessment. CCTST 2000 interpretation document (2002–2003 norms). The California Academic Press; Millbrae, CA: 2003.
- Kubey R. Obstacles to the development of media education in the United States. Journal of Communication. 1998;48:58–69.
- NAMLE (National Association for Media Literacy Education). Core principles of media literacy education in the United States. National Association for Media Literacy Education; 2007. Retrieved July 24, 2008, from http://www.namle.net.
- Potter WJ. Media literacy. 2nd ed. Sage Publications; Thousand Oaks, CA: 2001.
- Primack BA, Gold MA, Land SR, Fine MJ. Association of cigarette smoking and media literacy about smoking among adolescents. Journal of Adolescent Health. 2006;39:465–472. doi:10.1016/j.jadohealth.2006.05.011.
- Primack BA, Gold MA, Switzer GE, Hobbs R, Land SR, Fine MJ. Development and validation of a smoking media literacy scale. Archives of Pediatric & Adolescent Medicine. 2006;160:369–374. doi:10.1001/archpedi.160.4.369.
- Quin R, McMahon B. Evaluating standards in media education. Canadian Journal of Educational Communication. 1995;22(1):15–25.
- Scharrer E. Making a case for media literacy in the curriculum: Outcomes and assessment. Journal of Adolescent and Adult Literacy. 2002;46:354–357.
- Scrimshaw P. Evaluating media education through action research. In: Alverado M, Boyd-Barrett O, editors. Media education: An introduction. BFI Publishing, in partnership with The Open University; London: 1992. pp. 242–252.
- Silverblatt A. Media literacy: Keys to interpreting media messages. 2nd ed. Praeger Publishers; Westport, CT: 2001.
- Singer D, Singer J. Developing critical viewing skills and media literacy in children. Annals of the American Academy of Political and Social Science. 1998;557:164–179.
- StataCorp. Stata statistical software, version 9.2. StataCorp; College Station, TX: 2007.
- Thoman E. Skills and strategies for media education. The Center for Media Literacy; Los Angeles, CA: 2003.
- Worsnop CM. Assessing media work. Wright Communications; Mississauga, Ont.: 1996.

