Author manuscript; available in PMC 2015 Aug 13.
Published in final edited form as: Read Res Q. 2013 October-December;48(4):415–435. doi: 10.1002/rrq.57

Defining Success in Adult Basic Education Settings: Multiple Stakeholders, Multiple Perspectives

Elizabeth L Tighe 1, Adrienne E Barnes 2, Carol M Connor 3, Sharilyn C Steadman 4
PMCID: PMC4535723  NIHMSID: NIHMS713007  PMID: 26279590

Abstract

This study employed quantitative and qualitative research approaches to investigate what constitutes “success” in Adult Basic Education (ABE) programs from the perspectives of multiple educational stakeholders: the state funding agency, the teachers, and the students. Success was defined in multiple ways. In the quantitative section of the study, we computed classroom value-added scores (used as a metric of the state’s definition of success) to identify more and less effective ABE classrooms in two Florida counties. In the qualitative section of the study, we observed and conducted interviews with teachers and students in the selected classrooms to investigate how these stakeholders defined success in ABE. Iterative consideration of the qualitative data revealed three principal markers of success: (a) instructional strategies and teacher-student interactions; (b) views on standardized testing; and (c) student motivational factors. In general, classrooms with higher value-added scores were characterized by multiple instructional approaches, positive and collaborative teacher-student interactions, and students engaging in goal setting and citing motivational factors such as family and personal fulfillment. The implications for ABE programs are discussed.

Keywords: Adult Basic Education, Literacy, Qualitative research


Achieving proficient literacy competence is imperative if individuals are to function in an information-driven and highly technological society. Yet the 2003 National Assessment of Adult Literacy (NAAL) reported that approximately 90 million adults in the United States read at or below the basic literacy level (Kutner et al., 2007). Deficient literacy skills are perpetuated through generations; adults with low literacy are less likely to read to their children and to have access to educational resources. Consequently, children of adults with low literacy are often disadvantaged upon school entrance (Kirsch, Jungeblut, Jenkins, & Kolstad, 1993). Furthermore, poor literacy skills are detrimental to society, as adults with low literacy have lower participation rates in the workforce and lower earnings (Kutner et al., 2007). Adult Basic Education (ABE) programs are designed to curb the problem of adult low literacy; however, these programs face a paucity of rigorous research on instructional practices, scant societal attention, and significantly curtailed funding (Bennett, 2007). Moreover, a variety of educational stakeholders (i.e., state funding agencies, administrators, practitioners, and learners) hold different definitions of what constitutes success in ABE programs. The current study takes quantitative and qualitative approaches to exploring what constitutes “success” in ABE programs and investigates how three different stakeholders (the state, teachers, and students) define success. Additionally, the study identifies some characteristics of ABE classrooms (context and approaches to learning) that contribute to strong student outcomes as defined by state accountability standards.

ABE programs provide adults (ages 16 and older) who are not concurrently enrolled in K-12 education with instruction and coursework to complete high school and earn a General Educational Development (GED) certificate. These programs serve roughly 2.6 million adults annually and are primarily state and federally funded (National Research Council (NRC), 2012). ABE programs vary in class size and are situated in a variety of contexts, such as schools, community centers, workplaces, correctional institutions, and development centers (Belzer, 2007; Tamassia, Lennon, Yamamoto, & Kirsch, 2007). Adult learners who attend these programs represent a heterogeneous population in terms of age, race, educational background, language experience, and prevalence of learning disabilities (NRC, 2012). Given this heterogeneity of adult learners, the wide variety of contexts, and the lack of standardized curricula and materials, it is difficult to deliver consistent, high-quality instruction across ABE programs. Moreover, the ways in which various educational stakeholders (state and local agencies, teachers, and adult learners) define success most likely influence important characteristics of ABE. Belzer (2007) asserts that:

Although we can safely assume that all are united in their desire to provide effective and efficient learning opportunities for adults who seek them, specifically what and how we are trying to accomplish this is contested. For example, policymakers and funders often have special goals for ABE that may be quite different from the personal goals of learners. Meanwhile, practitioners may have a third set of goals that are aimed at broadening students’ perspectives on the potential of learning (p. ix).

The first factor mentioned by Belzer (2007) is derived from the need for accountability on the part of top-down systems: funding agencies and policymakers. Legislatively mandated, quantitative data from the accountability systems in place establish specific elements that are seen as important, worthy of emphasis, and constituting officially sanctioned, outcome-based school performance that is tied to funding (Condelli, 2007). Beginning in 1998 with the passage of Title II, Section 112(a) of the Workforce Investment Act, funding for ABE programs was tied to demonstrable outcomes for the first time (Belzer, 2007). In July 2000, the National Reporting System (NRS) was established by the U.S. Department of Education to put the accountability requirements of the Workforce Investment Act into effect (Condelli, 2007). Consequently, outcome assessment measurement issues have stood at the center of funding agencies’ (primarily the states’) efforts to define success. The Test of Adult Basic Education (TABE) is the most widely used outcome-based measure for ABE students. The TABE assesses reading, math, language, mechanics, vocabulary, and spelling skills at five difficulty levels (CTB/McGraw-Hill, 2008). The TABE, which does not break reading skills into subcomponents, is used to determine students’ initial literacy levels and to monitor progress over time (Greenberg, 2007). In the state of Florida, where the current study was conducted, Literacy Completion Points (LCPs) serve as an indicator of general adult student achievement and are another example of an outcome-based measure for ABE programs. LCPs are earned when a student moves from one functioning level to the next. Historically, ABE programs in Florida were funded based on the number of LCPs students earned in a given period of time; thus, LCPs were seen by the state as a quantifiable way to define success.

Top-down systems represent only one approach to accountability and only one definition of success in an ABE context. Practitioners and/or service providers also engage in a form of accountability through a field driven, or bottom-up, approach. Condelli (2007) reports that in contrast to the outcome-based emphasis of top-down systems, ways of examining ABE that initiate at the program or classroom level generally have less emphasis on performance-based funding and have a “more explicit and direct focus on program-improvement [and] often include qualitative measures” (p. 13). Within the state of Florida, the design and implementation of ABE programs vary across counties, and therefore, teachers are provided with differing amounts of structure and autonomy. For example, of the two Florida counties included in this study, the administrators at one county’s central ABE facility developed a specific interactive instructional approach that was implemented by all teachers. In contrast, the other county intentionally afforded its teachers significant autonomy regarding virtually all aspects of program design, instructional approaches, and materials. As a result, the ABE classrooms across and within the two counties engaged in different instantiations of literacy instruction for adults, and thereby constituted a rich venue for exploring the variation in ABE learning environments. Moreover, at a national level, ABE programs suffer from a lack of standardized curricula and materials, a dearth of funding for teacher professional development, and instructors with varying amounts of educational experience and expertise (Tamassia et al., 2007). Therefore, a central goal of this study was to observe and describe how the variability in ABE program structure and teachers’ and students’ definitions of success relate to state-defined student outcomes.

Finally, the students enrolled in ABE classes bring another set of achievement goals, perspectives, and definitions of success to the classroom, and the accomplishment of those goals may not be related to the accountability goals of other stakeholders. As Venezky (1992) suggested, “Students who enroll in literacy courses to learn how to read to their children may not care at the end of instruction what their reading levels are relative to national standards. However, policymakers do” (p. 8). Venezky’s (1992) assertion that adults have diverse motivating factors for attending school aligns with the findings of international adult literacy research: literacy exists “within a myriad of social practices” (Street, 2012a, p. 2). A social practice approach to literacy instruction has, in many countries, allowed learners to negotiate their educational contexts to match individual, cultural, and social literacy and language needs without focusing exclusively on standardized testing (Hamilton, Hillier, & Tett, 2006). Although a social practice approach to literacy is not currently part of the formal ABE paradigm in the United States, this perspective toward adult literacy may be beneficial because adult learners may be motivated by factors other than passing standardized assessments (e.g., gaining functional literacy, increasing access to social and community resources, and supporting their children’s success). In order to illustrate how members of ABE classes define not only success but also the motivating factors and the teaching and learning practices that foster that success, we took a qualitative approach to observing classrooms and interviewing students and teachers in ABE classrooms across two Florida counties.

It is important to note that although significant research exists on K-12 classrooms and their participants, and while it is tempting to view ABE programs as extensions of the elementary-secondary setting, substantial differences exist. Unlike school-age children, who are required by state and federal laws and expected by societal norms to attend school, adult learners voluntarily enroll and participate in educational courses (Comings & Cuban, 2007). Adults have different experiences with schooling and motivational reasons for pursuing literacy instruction (NRC, 2012). Further, adult students often must satisfy a range of complex obligations, such as family demands, transportation barriers, and financial costs, in order to attend class (Comings & Soricone, 2007). In light of all of this, we recognized the need to be mindful of the differences between school-age children and adult learners and of the fact that as an understudied field, ABE presents particular research challenges. Most notably, ABE programs suffer from a lack of theory-based models for instruction and service delivery and limited research resources (Comings & Soricone, 2007). Thus, through observation and interviews, we endeavored to understand the adult students’ personal and learning aims and what they perceived as helpful to them in achieving their goals.

Current Study

The current study employed quantitative and qualitative approaches to examine definitions of success in two Florida counties from the perspectives of multiple stakeholders: the state, teachers, and students. The quantitative part of the study evaluated success by utilizing outcome measures (LCPs), which represent the state’s definition of success, to calculate value-added scores for individual ABE classrooms. Value-added scores allowed us to quantify what was considered a more or less effective classroom. The qualitative part of the study utilized observations and semi-structured interviews with teachers and students to determine classroom characteristics (context and approach to learning) and definitions of success.

This study addressed three primary research questions:

  1. How can the effectiveness of ABE classrooms within two Florida counties be compared utilizing the state’s definition of success (LCPs)?

  2. What are the observed characteristics of more and less effective ABE classrooms as identified by the state?

  3. How do local stakeholders (teachers and students) define success in ABE classrooms?

In addressing these questions, we contribute empirical evidence of the complexity inherent in defining and claiming ‘success’ in ABE settings. We conclude by comparing and contrasting classroom characteristics and local stakeholders’ definitions of success with the state’s definition of success. Finally, we discuss the implications of our findings for practitioners and program designers striving to enhance literacy gains for adult learners.

Method

Research Design

The study was conducted in two phases. In phase one, we used historical student data provided by each county to identify classrooms that were either more or less effective in promoting adult students’ literacy achievement. We utilized students’ initial TABE reading scores and the number of LCPs earned to calculate value-added scores for ABE classrooms. These scores were used as a proxy for the Florida state definition of success in ABE programs. We discuss the TABE, LCPs, and how value-added scores were computed in detail below and in Appendix A.

In phase two, which provides the principal data for this study, we observed classrooms and interviewed teachers and students to begin to understand their definitions of success and to identify classroom characteristics of different ABE programs. These teacher and student definitions and classroom characteristics helped us to formulate hypotheses and assess convergence and divergence of observed classroom practices and definitions with classroom effectiveness, as measured by the state’s definition.

Phase 1

Participants and ABE sites

We used a separate dataset for each county. The Community and Adult Education program is the primary provider of ABE in County 1. Classes are located in over 25 sites across the county, and course offerings include ABE, GED preparation, English for speakers of other languages (ESOL), courses for senior adult learners, education for adults with disabilities, and family literacy. During the 2005–2006 academic year, 179 adult students received state high school diplomas via the GED. In addition, ABE, GED, and ESOL students earned 1,487 LCPs. For this county, student data were available from 2001–2005 and included information for 3,708 students in 53 classrooms. Participants ranged in age from 16 to 82. The sample included 52% females and represented a range of ethnicities: 63% African American, 32% Caucasian, 3% Hispanic, and 2% Asian. These demographics are consistent with national ABE populations (NRC, 2012).

In County 2, approximately 225,000 residents were eligible for ABE services; however, the district serves roughly 25,000 people annually. ABE sites are scattered across the county, which covers a large geographic area, and different services are offered at different sites. For example, a public high school and a former elementary school were transformed into ABE centers. For this county, student data were available from 2003–2004 and included information for 7,078 students in 84 classrooms within 24 sites. Information on gender and ethnicity was not available for this county; however, our observed classroom demographics were consistent across both counties and similar to national ABE programs (NRC, 2012).

Quantitative measures

Test of Adult Basic Education (TABE) – reading subtest

The TABE is a nationally used ABE measure consisting of five levels: L (literacy, GLE = 0–1.9), E (easy, GLE = 1.6–3.9), M (medium, GLE = 3.6–6.9), D (difficult, GLE = 6.6–8.9), and A (advanced, GLE = 8.6–12.9) (CTB/McGraw-Hill, 2008). The test is normed on a diverse population of ABE participants ranging in age from 14 to adult. The reading subtest contains 50 items in which students read brief passages and answer multiple-choice comprehension questions. Passages include narrative and expository text as well as functional text (e.g., reading a newspaper). At level L, the lowest level, students answer questions pertaining to letter and sound recognition, simple vocabulary words, matching letters, and simple cloze tasks. Higher levels require students to infer, interpret graphic information, recall information, and construct meaning. Internal consistency reliability is reported at .88 to .95 across all levels.

Literacy Completion Points (LCPs)

LCPs are a metric developed in the state of Florida and were historically used as part of a performance-based method of funding: ABE county programs were funded based on the number of LCPs students earned in a given period of time. At the time of this study, LCPs were used as a metric of student progress; however, the number earned had no impact on funding. Instead, program funding was based on student enrollment. Although LCPs are a rather blunt instrument, they offered an advantage for this study because they are recognized as a state indicator of general student achievement and were readily available in the existing state database.

Table 1 presents the functioning and grade level equivalencies (GLEs) needed for students to earn an LCP in the reading content area. LCPs are earned when a student moves from one functioning level to the next. For example, a student might achieve a GLE of 2.0 on the TABE (level F) and then achieve a GLE of 3.4 (level F) on the next test; that student would not earn an LCP. If the student later tests and earns a GLE of 4.2 (level H), he or she earns an LCP.

Table 1.

Description of TABE Subtest Functioning Levels and Grade Equivalencies to Earn LCPs

TABE Subtest   Functioning Level   LCPs   GLEs
Reading        B                   E      0.0–1.9
               F                   F      2.0–3.9
               H                   G      4.0–5.9
               J                   H      6.0–8.9

Note: TABE = Test of Adult Basic Education. LCPs = Literacy Completion Points. GLEs = Grade Level Equivalencies.
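To make the LCP rule concrete, the short sketch below encodes the reading functioning levels from Table 1 and awards an LCP only when a new TABE score crosses into a higher functioning level than the previous score. This is our own illustrative reconstruction of the rule described in the text, not the state’s actual scoring procedure, and the function and variable names are hypothetical.

# Illustrative reconstruction of the LCP rule (not the state's scoring software).
# Reading functioning-level GLE ranges follow Table 1.
READING_LEVELS = [
    ("B", 0.0, 1.9),
    ("F", 2.0, 3.9),
    ("H", 4.0, 5.9),
    ("J", 6.0, 8.9),
]

def functioning_level(gle):
    """Return the index of the functioning level that contains a given GLE."""
    for index, (label, low, high) in enumerate(READING_LEVELS):
        if low <= gle <= high:
            return index
    raise ValueError("GLE outside the levels listed in Table 1")

def earns_lcp(previous_gle, new_gle):
    """An LCP is earned only when the new score moves up at least one functioning level."""
    return functioning_level(new_gle) > functioning_level(previous_gle)

# Example from the text: 2.0 -> 3.4 stays within level F (no LCP);
# a later score of 4.2 reaches level H, so an LCP is earned.
assert earns_lcp(2.0, 3.4) is False
assert earns_lcp(3.4, 4.2) is True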

Quantitative analytic strategy

Because we had student data over time nested in classrooms, we used cross-classified random effects latent growth models to account for the structure of these data when computing value-added scores (Raudenbush, 2004b; Raudenbush & Willms, 1995). Cross-classified models were preferred because students may have attended different classrooms with different teachers as time progressed, and these models can accommodate such movement (Raudenbush, Bryk, Cheong, Congdon, & du Toit, 2004; Raudenbush & Bryk, 2002). Latent growth models were used because value-added scores become more reliable when multiple scores over time are used (Raudenbush, 2004a). All students were included in the models, which also improved reliability (Bryk, Deabster, Easton, Luppescu, & Thum, 1994; Raudenbush, 2004a).

Value-added scores (Raudenbush, Hong, & Rowan, 2002; Raudenbush et al., 2004; Sanders & Horn, 1998) were computed with the number of LCPs earned as the outcome. The value-added score is the empirical Bayes residual (see Appendix A) and represents the mean number of LCPs that any given student i, in any given teacher j’s course at time t (in days), would be predicted to earn (LCP/student/course), controlling for the students’ initial literacy levels (as measured by the TABE). The mean LCP/student/course is centered at 0; thus, value-added scores above the mean for the sample are positive numbers and value-added scores below the mean are negative numbers. When comparing students over the same time period, students who have teachers with high value-added scores are likely to earn more LCPs, whereas students who have teachers with low value-added scores are likely to earn fewer LCPs. Therefore, we consider classrooms with high value-added scores to be more successful than classrooms with low value-added scores.
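The full cross-classified latent growth specification appears in Appendix A. As a deliberately simplified illustration (our own notation, not the authors’ exact model), the value-added score can be read as the empirical Bayes estimate of a classroom random effect in a mixed model of roughly the following form:

\[
\begin{aligned}
\mathrm{LCP}_{tij} &= \beta_0 + \beta_1\,\mathrm{TABE}_{i} + \beta_2\,t + u_j + r_i + e_{tij},\\
u_j &\sim N(0,\tau_u^2), \qquad r_i \sim N(0,\tau_r^2), \qquad e_{tij} \sim N(0,\sigma^2),
\end{aligned}
\]

where LCP_tij is the number of LCPs earned by student i in classroom j at time t (in days), TABE_i is the student’s initial reading score, u_j is the classroom effect, and the classroom’s value-added score is the empirical Bayes estimate of u_j, centered at 0 for the sample.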

We acknowledge that using scaled TABE scores as the outcome measure would have provided a more precise estimate of student growth. TABE scores are measured on a continuous scale and provide GLEs by year and month. In contrast, LCPs are measured on an ordinal scale and indicate movement from one functional level to another, typically 2 GLEs (see Table 1). However, we were only given access to initial TABE scores and subsequent LCPs earned. Despite this limitation, we believe that using LCPs provides a valid, general estimate of student growth and aligns with the state’s conceptualization of successful ABE programs.

Phase 2

Participants and ABE sites

A key part of this study was to directly observe instructional practices within classrooms. We identified which classrooms would be observed using two criteria: (a) classroom value-added scores and (b) availability of sites for observation. Because we wanted to observe classrooms that varied in effectiveness with regard to students’ earned LCPs, we attempted to select classrooms that earned either high or low value-added scores. For the sake of confidentiality and to minimize potential observer bias, the value-added scores were not shared with district administration, teachers, students, or research assistants.

Observation and interview data from County 1 were collected in Spring 2006 in four classrooms. The number of visits to each classroom ranged from three to six (median = four), with the lowest number prompted by late entry into one classroom. Demographic characteristics of students in all classrooms were similar: a mix of male and female students ranging in age from 16 to the mid-50s.

Due to time, travel, and accessibility constraints, observation and interview data from County 2 were collected from two district-assigned sites over three days in Spring 2006. One site was a high school, which comprised a single classroom. The second site was a large building with multiple classrooms; we observed and conducted interviews in six of these classrooms. Classes at this site operated on a common schedule and employed a similar teaching approach, which was designed and endorsed by the county’s central administration for ABE. The data from County 2 should be interpreted cautiously because of potential district administrator site selection bias. However, these data provide additional information on the heterogeneity of the ABE population and demonstrate that different counties employ diverse approaches to teaching ABE.

Qualitative procedure

After classrooms were identified through the quantitative analyses, we contacted the selected teachers for permission to observe classrooms and to conduct teacher and student interviews. We were granted limited access to low value-added score classrooms (1 SD below county mean) because many of these County 1 teachers declined observations and County 2 administrators assigned us to specific sites. Therefore, the majority of our low value-added score classrooms were low to average as identified by our calculated classroom value-added scores (see more specific details in the quantitative results section). Once permission was granted, we obtained informed consent from students for participation in the study.

Researchers observed interactions in four County 1 classrooms for nearly 70 hours and in seven County 2 classrooms for 29 hours. During classroom visits, we interviewed 14 teachers and 28 students using a semi-structured, open-ended interview instrument designed specifically for this study. This interview instrument had four primary questions for students and three primary questions for teachers (see Appendix B). Beyond the principal questions, there was no standard interview protocol. Rather, interviewers allowed the interviewees to do most of the talking and used additional probing questions to refocus the conversation as needed. The interviewers included the third and fourth authors, who have extensive experience in qualitative research methods, and trained graduate research assistants. Interviews were audio-taped and transcribed. To determine transcription reliability, 10% of the interviews were randomly selected and transcribed by a second researcher. Transcription agreement (words transcribed the same/total words) exceeded 90% for all transcripts. We assigned pseudonyms to all participants and classrooms, which we use throughout the paper.

Qualitative analytic approach

The context in which this study takes place, ABE classes, is important because it allows us to view the ways in which students and teachers interact, discuss definitions of success, and describe how success is characterized within that setting. The interactions, or discourses, occurring within these classrooms provide insight into the participants’ conceptualizations of their roles as adult learners or teachers (Gee, 1999). Tannen and Wallat (1999) assert that “interaction can only be understood in context: a specific context” (p. 347); therefore, we utilized an interactional ethnographic methodology.

The orienting principles that underlie ethnography made it particularly well suited for this study; ethnography is culturally driven, involves a comparative perspective, and assumes an interactive-reactive approach (Zarharlick & Green, 1991). Because “ethnography is concerned with understanding everyday activities, events, and processes from the view of the participants” (Green, 1983, p. 188), it is “useful for developing a valid understanding of local situations in all their complexity” (Anderson-Levitt, 2006, p. 282). An ethnographic approach provides a means for viewing the distinct culture of ABE classrooms and gives voice to the adult learners (Coates, 1996; Anderson-Levitt, 2006). We sought balance and increased complexity by also including the teachers, who are viewed as being in positions of power within the ABE setting.

We employed Miles, Huberman, and Saldaña’s (2014) three-part approach that divides the process into reducing data, displaying data, and drawing conclusions and addressing validity. Initially, we viewed our data through a large-grain discourse analysis lens, comparing individual participants’ interactions in class and their responses during the semi-structured interviews in order to identify levels of consistency between responses on these instruments. We judged the responses to be acceptably consistent for all participants. Next, we moved to a constant comparison analysis (Glaser & Strauss, 1967) and viewed the entire data corpus (consisting of observations, field notes, and interviews) at three stages of coding: open, axial, and selective (Merriam, 2009).

In the open stage, we coded data and identified categories and subcategories. Then, we generated questions, which prompted further data collection. In County 1, accessibility to the sites and participants allowed further observation and interview time. In County 2, because of the limited visitation schedule, we met in the hours between classes to identify any issues seen in the earlier classes that warranted particular attention during the later classes. In the axial stage, we labeled three categories that consistently surfaced as topics that students and teachers talked about and that seemed to have relevance to defining success. These categories included teacher-student interactions, views on testing, and student motivational factors. We evaluated inter-rater reliability by randomly selecting 10% of coded transcripts, which were re-coded by a second researcher. Inter-rater reliability (coding agreements/total number of codes) was greater than 85% for all transcripts. We used discourse analysis to analyze verbatim interview transcripts and looked at sentence structure, word choice, points of emphasis, and hedging strategies to build an understanding of how the participants defined their goals and roles within the ABE community.
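Both reliability indices reported above (transcription agreement and coding agreement) are simple percent-agreement calculations; the sketch below, using hypothetical counts, illustrates the computation.

# Percent agreement as described above (illustrative sketch; counts are hypothetical).
def percent_agreement(agreements, total):
    """Number of agreements divided by total opportunities, expressed as a percentage."""
    return 100.0 * agreements / total

print(percent_agreement(1840, 2000))  # transcription: 92.0%, above the 90% floor reported
print(percent_agreement(45, 50))      # coding: 90.0%, above the 85% floor reported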

Results

Quantitative Findings: State Definition of Successful Classrooms

To address our first research question, quantifying the state’s definition of success, we computed value-added scores for each classroom. Table 2 presents descriptive information by county: the mean number of LCPs, mean student initial GLEs, and each county’s range of classroom value-added scores. The value-added scores of County 1 classrooms varied from .981 LCPs above the county mean to .393 LCPs below the county mean (M = .39, SD = .25). Six classrooms met the criterion for an effective classroom (at least one SD above the county mean), and nine classrooms met the criterion for a less effective classroom (at least one SD below the county mean). The value-added scores of County 2 classrooms varied from .636 above the county mean to .250 below the county mean (M = .17, SD = .17). Thirteen classrooms met the criterion for an effective classroom, and twelve met the criterion for a less effective classroom.

Table 2.

Demographic Information for Adult Basic Education Programs in Two Florida Counties

                                                      County 1              County 2
Number of Students                                    3,708                 7,078
Number of Classrooms                                  53                    84
Number of Sites                                       NA                    24
Years included in analyses                            2001–2005             2003–2004
Mean LCPs (HLM descriptives)                          .41                   .25
Student Initial Literacy Level – GLEs 1.9–8.9 (SD)    6.24 (2.33)           5.04 (2.26)
Value-added score range (SD)                          −.393 to .981 (.25)   −.250 to .636 (.17)

Note: NA = Not Available.
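To make the classification criterion concrete, the sketch below (our own illustration, not part of the original analysis) flags a classroom as more or less effective when its value-added score, which is already expressed as a deviation from the county mean, falls at least one county-level standard deviation above or below zero.

# Classification by the +/- 1 SD criterion described above (illustrative only).
# Value-added scores are deviations from the county mean, so the cutoff is
# simply one county-level standard deviation in either direction.
def classify_classroom(value_added, county_sd):
    if value_added >= county_sd:
        return "more effective"
    if value_added <= -county_sd:
        return "less effective"
    return "did not meet either criterion"

# County 1 (SD = .25): Oakmont and College Center meet the criteria;
# Alpha falls short and is labeled high-average in the text.
print(classify_classroom(0.314, 0.25))   # more effective (Oakmont)
print(classify_classroom(-0.279, 0.25))  # less effective (College Center)
print(classify_classroom(0.124, 0.25))   # did not meet either criterion (Alpha)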

Table 3 presents the classroom value-added scores of our observed sites. These scores are easier to interpret when one remembers that they represent the number of LCPs a student attending that class is likely to earn in one semester over and above the mean for the county (County 1 M = .39; County 2 M = .17). Thus, on average, students in County 1 must take about two and a half courses, and students in County 2 about six courses, before earning one LCP. From Table 3, a classroom value-added score of .314 indicates that, on average, a student in County 1 needs to take only about one and a half courses before earning one LCP (total expected LCPs per course for the classroom: .39 + .314 = .704). In contrast, a student in the same county but in a classroom with a value-added score of −.279 would have to take about nine courses on average (total expected LCPs per course for the classroom: .39 − .279 = .111) to earn one LCP.

Table 3.

Overall Site/Classroom Value-Added Scores in County 1 and County 2

County     Site/Classroom                  Value-Added Score (LCP/course/semester)
County 1                                   .39 (district mean)
           College Center                  −.279
           Oakmont                         .314
           Alpha                           .124
           Beta                            −.102
County 2                                   .17 (district mean)
           Sandy Beaches High School       .029 (mean of all classrooms)
             Observed classrooms           .059*
           Pacific Central                 .150 (mean of all classrooms)
                                           .190 (mean of observed classrooms)
             Observed Classroom 1          .289
             Observed Classroom 2          .248
             Observed Classroom 3          .243
             Observed Classroom 4          .139
             Observed Classroom 5          .015
             Observed Classroom 6          .208

Note: *There is only a single value-added score reported for the three observed classrooms at Sandy Beaches High School because all classes had the same teacher. Alpha, Beta, and Sandy Beaches High School did not meet our a priori criterion of one full standard deviation above or below the county mean.
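The course counts cited above follow from taking the reciprocal of a classroom’s total expected LCP rate (the county mean plus the classroom’s value-added score). The short sketch below reproduces those figures; it is our own illustration rather than part of the original analysis.

# Expected number of courses needed to earn one LCP, given the county mean and
# a classroom's value-added score (both in LCPs per student per course).
def courses_to_one_lcp(county_mean, value_added):
    return 1.0 / (county_mean + value_added)

print(round(courses_to_one_lcp(0.39, 0.0), 1))     # 2.6 -> about two and a half courses (County 1 average)
print(round(courses_to_one_lcp(0.17, 0.0), 1))     # 5.9 -> about six courses (County 2 average)
print(round(courses_to_one_lcp(0.39, 0.314), 1))   # 1.4 -> about one and a half courses (Oakmont, .314)
print(round(courses_to_one_lcp(0.39, -0.279), 1))  # 9.0 -> about nine courses (College Center, -.279)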

Although we defined high and low value-added score classrooms a priori as one SD above or below the county mean, respectively, it is important to note that not all of our observed classrooms met these criteria (see Table 3). In County 1, two of our sites met the criteria (College Center, low; Oakmont, high). Because teachers at the other a priori identified high and low value-added score classrooms declined observations, we attempted to observe classrooms as close to our criteria as possible. The Alpha classroom is approximately a half SD above the County 1 mean, and the Beta classroom is approximately a half SD below the County 1 mean. Because these sites did not meet our a priori criteria, we labeled them high-average and low-average, respectively. In County 2, we did not observe any classrooms with negative value-added scores because the district selected our sites. Instead, Sandy Beaches High School can be considered an average value-added score site because it is approximately at the County 2 mean. The mean of the six observed classrooms at Pacific Central (M = .190) met our a priori criterion for a high value-added score. Therefore, with the exception of Alpha, we have a representative sample of high value-added score classrooms. In contrast, we have only a single low value-added score classroom; the remainder fall in the low-average to average range.

Qualitative Findings: Teacher and Student Definitions of Success

We describe each of our six individual sites in detail below. To address research question two, we begin each site description by reporting the observed context and approach to teaching and learning. Next, we address research question three by focusing on the three categories identified through our constant comparative analysis as central to our participants’ views on their success: teacher-student interactions, views on testing, and student motivational factors. Finally, each site’s section concludes with a brief summary of the ways in which local stakeholders define success and how those definitions shape the classroom culture and instructional design.

County 1, site 1: College Center (low value-added score)

Context and Approach to Learning

This program is dedicated to a local university’s employees, who are seeking to pass all portions of the GED. Those who are scheduled to work during the class hours are given release time to attend class and return to their jobs immediately after class. Those who are scheduled to work when class is not in session do not receive compensatory release time. Participants’ reports, sign-in records, and observations revealed that attendance was far more consistent among students who were given release time than among those who worked later in the day.

The Tuesday/Thursday morning class averaged eight students functioning on various academic levels. According to the teacher, students, and researcher observations, the dominant learning approach was independent work, supplemented by one-on-one instruction. The teacher circulated, provided support, and asked questions to assess understanding as students worked individually on a mixture of pre-GED and GED level books or on computer programs. The teacher and adult learners consistently mentioned that class members focused on selected content areas that were identified as weaknesses or that would appear on upcoming GED subtests. For example, if an individual were scheduled to take the science GED portion, he or she would focus on the science sections of the workbook and computer program. Several class members mentioned that concentrated study on targeted areas seemed helpful.

Class members at this site consistently displayed positive attitudes toward the attainment of their diplomas and their academic progress. Most students agreed that their time spent in the class was beneficial and more enjoyable than when they were younger and were required to attend school. One student said, “It was a great feeling to be back in class,” and another commented, “It’s hard sometimes, but it’s worth it.”

Teacher-Student Interactions

Observations suggest that the teacher’s relationship with the class members was encouraging and respectful. In describing the teacher, her supervisor volunteered that she gave perfect attendance certificates, hosted appreciation luncheons for tutors, and celebrated birthdays and attainment of diplomas. The supervisor also noted that the teacher was working to build a classroom community and was sensitive to the need to “make people confident enough to take the GED.” Learners’ statements supported the supervisor’s views of the teacher. One student commented, “She treats me like a person and an adult and a human being. She pushes you to the limit… (s)he helps me not to get frustrated.”

Views on Testing

Various class members stated their concern over the entire testing process. Some welcomed the TABE, as the results affirmed that they were making progress, but the thought of taking the actual GED worried many. One student’s comments illustrated the intense emotion attached to the test:

I mean when I go out there to take the test, I am probably going to be as scared as a cat on a hot tin roof, but I will sit down and I will take it and if I don’t pass it, then I will go back and do it again. I will come back, do some more, and I will go do it again.

Other students expressed feelings of nervousness about the time constraints and costs associated with the testing. One participant had just taken the test when we began our observations at this site. She explained that she was required to continue coming to class until she knew her results. When she later learned that she missed passing the test by a very small margin, everyone consoled her. In contrast, she was extremely positive, explaining that she had gotten so close that she knew that if she kept working, she would pass on her next attempt.

Student Motivational Factors

The importance of family permeated the participants’ comments regarding motivation. Multiple students recounted stories about how their families kept them focused, encouraged them to keep attending class, and made them proud of their accomplishments. One woman discussed her daughter’s progress in sixth grade math:

It helps me to help my kids out these days and as a matter of fact, my daughter just got her report card yesterday, and she was happy to come home to tell me that she made an F the first six weeks, a D the second six weeks, and this six weeks, she made a C….And she says oohh Mom, thank you for all the help in math. I made a C this time and it really brought tears to my eyes cause I was really happy because I didn’t know much about math at first…We’re a good little family.

She explained that even after she passed her exam and earned her diploma, she would continue to come to the class “…to see all what I can learn so I can be a help to her.” The desire to keep coming after earning a diploma was mirrored by a classmate who insisted that he had to be focused for himself and also for his family, remaining in the program to keep up with his reading and writing. “I have two kids, and you know, I help them with their homework, and my wife goes to school also. So we are just one happy family at the table doing homework.”

Summative Findings

The teacher at this site strove to create an environment in which the students felt supported and respected by their instructor as they engaged primarily in independent computer or workbook activities. The result was a safe and comfortable environment in which students viewed their teacher as integral in their progress. While she endorsed the importance of LCPs, she simultaneously expressed her concern that the gains needed to earn LCPs were dependent upon the level at which the person entered (i.e., some levels require less movement than other levels). Her concern reflected her belief that LCPs are not the most efficient and equitable factor in determining a program’s level of success. Student conversations about success were dominated by the ways in which their learning gains were allowing them to help their children with their homework and improve their family life. In fact, various learners within this study identified themselves as successful even though they did not earn high numbers of LCPs or passing scores on the GED.

County 1, site 2: Oakmont (high value-added score)

Context and Approach to Learning

This ABE program resembled a traditional classroom in many ways. Colorful, inspirational posters and a list of 22 students’ names, 17 accompanied by the dates on which they were scheduled to take the GED, covered the bulletin boards. The classroom was divided into three content area sections: reading/language arts, math and science, and a computer station. The class met for three hours every weekday morning, and this time was broken into 50-minute segments. Reflecting a traditional high school schedule, the students rotated in groups through the three content area sections.

The two permanent teachers at this site possessed differing amounts of experience in ABE. The first teacher had 18 years of experience working with adults while the second teacher was completing her second year in ABE. The first teacher said that she found her job rewarding because:

I feel I’m touching not only the adults and the parents, but their children as well. When adults are educated, that literacy will be passed on to their children as well. So I feel like I’m getting double value for my time.

The teachers stated that they believed in using multiple approaches to teaching adult learners. As one teacher noted, “I’ve heard so many adults say that they have gone into a classroom, been given a book, and told to teach themselves. That failed. If they could do that, they would sit at home and do it. They need teacher instruction.”

The students were observed engaging in independent computer work, one-on-one teacher instruction, and small- and large-group work. For example, students in the reading and language arts portion of the class were seen working independently on a grammar exercise and then discussing their answers with the group. Throughout the sessions, the students actively participated and interacted with each other. In order to provide in-class support for students with similar goals, individuals who were working on earning their GEDs were grouped together and those working to raise their TABE scores formed another group.

Teacher-Student Interactions

Teaching students about the formation and attainment of goals was a priority at this site. One teacher explained:

Many of our students come in not knowing what a goal is, how to set a goal, or even how after they have that goal, how to establish a plan to attain that goal. What we found in the past is that after students have received their diploma, they don’t know where to go from there. What is the next step? It has been our goal as teachers to involve the host site counselors, the testing staff, to come in and talk to our students several times a year. Tell them about the trades, the financial aid, the testing. We have also asked the host site staff to take the students on a field trip to the different trades because it’s one thing to hear about it or read about it, but it is another thing to go into a classroom and see the teachers asking questions, see the set up of the classroom, and that motivates them to pursue those goals rather than just thinking about them.

She further explained that they teach the students “not to say, ‘I’m getting my GED,’ but ‘I’m getting my high school diploma.’ They are taking the GED to get their high school diploma, and they are surprised to hear that.”

Learners reported that they viewed their instructors as important to their success. As one student stated, the teachers:

…have been a tremendous help, and they have been very supportive and always try to get me to a higher level because they know I can do it….It is good they push me because I need a push or I’m just going to sit in one place and not go anywhere.

Another student described her interaction with one of the teachers:

When I came here, my writing and language and the way I speak was not good. Ms. [teacher] was always correcting me, and she would say this is not right, but now when I write, she is like, ‘perfect, perfect,’ and when you take the test they always say, ‘good job.’ It is never nothing negative like you are wasting time. It is always positive.

In summing up her view of the teachers, the student added, “I like that they are here not just punching in hours for the money. They are here to teach. That is their job, and they want to see you be successful here.”

Views on Testing

The instructors were clearly aware of the importance of earning LCPs. One teacher reported that in the past they were lucky to earn 70 LCPs in a year, but in the last two years their students had earned over 300 LCPs. She attributed some of the increase in LCPs to the teachers’ efforts to construct classroom management routines and to establish high expectations of student personal responsibility. While the students rarely mentioned practice tests, they did discuss taking the GED. Their conversations rarely stopped with the test as they then volunteered what their plans were after they had achieved that milestone. Rather than viewing the diploma as a culminating goal, they mentioned what the diploma would allow them to do; those plans frequently included more schooling.

Student Motivational Factors

Well-defined goals beyond attaining a high school diploma played a large role in the lives of these students. One was motivated to become a nurse, another aspired to open her own salon, and a third explained that she wanted to set an example for her children by helping them with their homework. One student commented:

In tenth grade, I thought I was all grown and knew everything and all I cared about was my hair, nails, and clothes. That all changed when I realized I did not want to be flipping burgers the rest of my life.

Another student stated that she came to school “every single day, unless I have a doctor’s appointment because you get to learn something different everyday.”

Participants’ inspirations for earning their diplomas included a conscious choice to avoid their parents’ paths, a desire to serve as role models for their children, and a need to validate their lives. One student said:

I think what got me to where I am now is that I have seen what my parents have done…I had to come to a decision: do I want to end up like them or do I want to do something different? And this is where it brought me. Nobody else is going to do it for me, so I might as well go and enroll. I might as well get started somewhere so I can get this over with and start my own life and not have to end up like them or depend on them.

Another student spoke of her determination to support her children’s education, saying, “I want to see that high school diploma. I want them to walk with their classmates.” A third student talked about her reasons for coming to the program and said, “I have a son who is graduating high school, and I thought if he can do it, then I can do it, too. So I am doing this for me.”

Summative Findings

While the teachers were aware of the need to earn LCPs, they also stressed the importance of appropriate attitudes toward school, the value of establishing life goals, and the need to take responsibility for personal actions. Teaching and learning opportunities relied upon multiple approaches and content-specific domains of instruction. The classroom culture reflected an interactive, engaged community of learners who defined success not as ending with a diploma or in terms of LCPs, but in relation to family and self-achievement of goals. Students frequently cited their families and their teachers as positive influences, suggesting that they viewed them as contributory factors to their success.

County 1, site 3: Alpha (high-average value-added score)

Context and Approach to Learning

This class was composed of both ESOL students with significant amounts of education achieved in other countries and native English speakers. Attendance varied from five to nine students, but there were five learners who exhibited consistent attendance. The first teacher had 18 months of experience teaching at this site, while the second teacher had worked in the program for seven years.

Instructional methods focused almost exclusively on independent work. Students initiated requests for individual assistance from one of the teachers when necessary, although these interactions occurred infrequently. Upon arrival in class, students signed in, gathered their materials, and began work at a desk or a computer. Whole or small group instruction did not occur in this class during any of the observations. The students resisted whole and small group instruction. A teacher explained, “We try to do lessons, and they seem like ‘Why are you bothering us? We are trying to get our work done, and we will let you know if we need help.’” The teacher expressed surprise, commenting that in prior classes, she had often used whole group instruction.

The learners’ responses to the instructional approach and materials varied. One student stated that she perceived that she was learning new information, especially through a computer program and that she enjoyed attending. She reported that she had benefited from the pre-testing when she entered the program because it pinpointed her strengths and weaknesses and helped her to better plan her time. Another classmate disagreed, indicating that she had not learned anything new from the class. She insisted, “I’m just reviewing. I don’t really learn lessons… basically what I’m doing here is like common knowledge.” She readily admitted to attending class in order to comply with the state policy that students who are under 18 and drop out of high school must enroll in ABE to retain their driver’s license.

Teacher-Student Interactions

In interviews, students cited workbooks, computer programs, and practice tests as beneficial to their learning. Although the students seemed comfortable with their teachers and willing to ask for help, none of the students mentioned the instructors as helpful and teacher-student interactions rarely occurred.

Views on Testing

Discussion about testing did not permeate the culture of this classroom. Students rarely mentioned testing and teachers referenced tests only obliquely, citing them in relation to preparedness to take the GED. Several students expressed a desire to attend a local community college after earning their high school diplomas.

Student Motivational Factors

The under-18 students attending to retain their driver’s licenses cited family and career goals as motivational factors. For example, one student wanted to follow in her mother’s path of attending law school. An older ESOL student attended class to establish her career in the United States. Despite having a degree from a foreign country, she needed a diploma and college courses to have her education recognized in the United States.

Summative Findings

Though the small number of students inhibited the amount of data collected at this site, some statements can be made about the learning environment. The primary mode of instruction was independent work and interactions between students and teachers were friendly, but minimal. In focusing instruction on independent work, the teachers honored the request of their students rather than asserting the instructional mode that they saw as traditionally more valuable from past classroom experience. The teachers rarely mentioned taking the GED or earning a diploma. The five regularly attending students identified career-related goals that were not confined to earning a diploma.

County 1, site 4: Beta (low-average value-added score)

Context and Approach to Learning

This class had two teachers and met four times a week, Monday through Thursday, for two-and-a-half hours in adjoining classrooms. Students worked independently on computer programs or at desks with GED preparation and academic remediation textbooks. The educational background of the students included home schooling and early exits from high school. At the time of the observations and interviews, attendance had waned. The observations typically involved four to seven students, with four regular attendees. One of the teachers addressed the transience of the classroom community, stating:

So many of them come and go and it’s important that we are empowering them to understand to develop metacognition or whatever you want to call it, because most of them are coming to us and their concept of education is that they sit down and have facts poured in to their heads. They have to know that they have to take charge. They have to know what their diagnostic test1 means, how to read it (and) what they need for their education.

Activities observed in the class varied: some days the entire time was dedicated to independent work, whereas, on other days the time was divided between independent work and whole group lessons in language arts and math. A teacher stated that students “need that transition because if you have a large group and you are all working individually for three hours, it’s a strain.” She noted that an in-class survey revealed that the students valued one-on-one teacher-student instruction.

Teacher-Student Interactions

Class members spent the majority of their time in individual activity, predominantly without teacher interaction. The relationships between the teachers and the students appeared to be business-like, although the teachers routinely circulated around the room, asking students if they needed help or assisting them in choosing appropriate learning tasks. Teachers stressed that they felt a sense of community with the class, regularly contacting students who had missed several classes and encouraging them to return. Further, the teachers stated that they discussed career plans and goals with students.

Views on Testing

Students entered the program having already taken the TABE, and both teachers felt it was imperative that class members know their individual skill levels. Each learner received a notebook to keep track of individual progress and monitor learning gains from test/retest results of the TABE and GED. Additionally, both teachers believed that their role was to encourage their students, but one teacher indicated:

We don’t really let that test drive what we’re doing. When a period of time has passed, it’s really encourag[ing] for them to see that their scores have gone up, and we might get ideas from them on something we need to focus on more, but it doesn’t drive us.

The other teacher added that we “don’t do weekly or quarterly tests like other settings do. We’re performance-based. You demonstrate that you know the material. A test is just a measure of where you are.”

Student Motivational Factors

An apparent conflict existed between the teachers’ and the students’ views on testing. One student indicated that testing provided her with higher motivation to achieve. She described her approach to learning: “I work where my weaknesses are. At first that’s math, so I try to work on that and take a test every Thursday to see if I’ve improved any.” This student’s assertion that she took a test every Thursday stands in direct contradiction with the teachers’ insistence that they did not administer weekly tests; however, the student’s attendance was erratic, possibly contributing to the different interpretations of test-taking regularity. Another student credited her motivation and testing gains to her mother’s involvement and learned classroom strategies.

Summative Findings

Teachers offered two approaches to learning: independent work with access to teacher support and whole group instruction. Both instructors endorsed the importance of students taking responsibility for their future success. Important differences existed in the ways in which teachers, students, and researchers perceived elements of the class. First, the teachers reported that they did not view testing as essential to their learners’ achievements; it was only a tool. However, the students stated that they valued testing and took many practice tests to gauge their progress. This nonalignment of views may suggest that the students identified testing as an integral part of the learning process whereas the teachers considered testing a high-stakes predictor of their students’ readiness to take the GED. Second, the teachers viewed their interactions with the students differently than the researchers did, with the teachers claiming stronger and more frequent student interaction and engagement than was observed. When identifying factors that contributed to their success, no students cited dedication to school, clearly defined goals beyond passing the GED, or the influence of teachers on their progress.

County 2, site 1: Sandy Beaches High School (average value-added score)

Context and Approach to Learning

At this site, desks were arranged in a checkerboard pattern in the center of the room and computers lined the classroom walls. Although 27 students were working, the classroom was silent except for the clicking of keyboards. Staffed by a teacher and a lab manager, the class, which offered high school credit, ABE, and GED courses, met four days a week for five-and-a-half hours. The teacher had more than 20 years of experience in ABE and stressed the correlation she saw between attendance and achievement. Recording student attendance played a significant role in this classroom. All attendance forms were color-coded so that students in different programs could quickly identify their sign-in sheets. The teacher firmly believed that students were “…not going to get anything accomplished if [they] didn’t make at least a certain amount of hours in a meeting.” Further, the teacher and lab manager stated that they kept extensive student records and supplied each student with a handbook that defined the rules and guidelines of the program.

The teacher viewed independent, self-directed work with teacher support as the most effective approach to learning. She occasionally offered small pull-out groups when she had a sufficient number of students on the same level. Further, she emphasized the need for teachers to believe in their students. She stated, “If you get someone who thinks that [the students] are dummies, then [the teachers] are going to show that attitude and it affects [the students].” A student concurred that he could ask for help any time he needed it, but that he worked well independently.

Teacher-Student Interactions

The teacher believed that adults learn differently than children because adults feel more self-sufficient and are able to direct their own studies. She stated, “Adult learners want to be very independent because of the fact that they want to know things; however, they want to try to learn as much as they can on their own.” This belief was the primary factor in how she organized the class and in her decision to stress individual work.

Views on Testing

One student stated that he wanted to take all of the GED sections at one time in order to accelerate the testing process. The upcoming practice tests and the GED served as significant markers of the student’s progress.

Student Motivational Factors

The teacher recognized the importance of motivation and considered it an integral aspect of her classroom. She supported and praised students’ progress:

You have to make them feel good about themselves. If you can try to build their self-esteem up, that seems to make them come more. You have to let them know that they can do it. Don’t accept ‘I can’t do this!’ Tell them ‘can’t’ is not in their vocabulary at all. You work with them and just keep encouraging them.

Though the teacher stressed her support of the students, none of the students interviewed mentioned the teacher or her encouragement when they spoke of motivating factors. One learner was motivated by her mother’s support as well as her own desire to start on the next phase of her education and career. Echoing his classmate’s reference to self-motivation, another student pointed out, “I want to be successful in life… I don’t want to have to worry if I’ll be able to pay the next bill or whatever.” He explained that his mother did not have her high school diploma, and as a result, had trouble finding work. He regularly attended class because he felt that he was making progress toward passing the GED, which would enable him to enroll in an auto repair program.

Summative Findings

This site was complex because several programs operated simultaneously in the same classroom. The dominant teaching and learning approach was independent, student-managed instruction in an environment with limited interaction among students and between students and teachers. When students did have questions, it appeared that those inquiries were addressed efficiently and respectfully. A significant emphasis was placed on record keeping because the teacher viewed regular attendance as integral to success. Students appeared to be self-motivated and motivated by family and future goals.

County 2, site 2: Pacific Central (high value-added score)

Context and Approach to Learning

This is the largest ABE site in County 2, serving roughly 1,800 adults, of whom a majority (85–90%) are ESOL students. Classes are categorized based on student ability, which is assessed by initial TABE scores. This site offers three class schedules: morning, afternoon, and evening. Six classes with seven teachers representing a variety of student ability levels were observed. Collectively, the teachers had diverse educational backgrounds, with teaching experience ranging from two to 32 years.

Typically, data collected from multiple classrooms, even those purported to be addressing similar curricular materials and goals, reflect dissimilarities as well as commonalities. These variations often relate to teaching styles, curricular emphases, different approaches to classroom management, and individual student responses and levels of engagement. Frequently, classroom differences outweigh the likenesses. It is noteworthy that the data gathered from this site reflect considerable consistency across individual classrooms regardless of the teacher and of student ability level. Triangulation of observed methods of instruction and interactions, student responses to interviews, and teacher reports of classroom instruction and activities confirmed that these teachers and students shared common educational beliefs. These beliefs included parallel approaches to instruction and shared classroom cultures, similar views on teacher-student interactions and testing, and comparable student motivating factors. Due to these consistencies, aggregation of data collected at this site was possible and beneficial, providing a view of this site’s program as a whole.

At this site, there was an identified and administration-sanctioned move toward “stand-up teaching,” informally defined as teacher-led whole group lessons on specific skills or topics. This approach included curricula developed by district personnel and other instructional materials that individual teachers identified as beneficial for their students. As a result, stand-up teaching supplemented independent computer work and booklet exercises. One teacher stated, “Different strategies work for different learners. What we try to do is try to reach them through a bunch of different modalities.” According to the teachers, group work did not replace independent work, but rather augmented and reinforced it. Teachers supported alternating the class structure between individual prescribed curricula and whole group instruction. One teacher reported that this style had a positive effect on students and their attendance:

I would say that my attendance has been… consistently pretty good, and I’ve noticed a correlation between when I first came to this school [and] was told we worked with [individually prescribed curricula]. I found that that did not work because there was only one of me and 30 [curricular] prescriptions floating around, and students weren’t getting help. They’re here because they didn’t do well working independently. I made the decision that we were going to work as a group with subgroups sometimes splitting off into groups. When I started working and doing lessons and working with groups, the students came to class, and they stayed, and they want to hear the teacher. They want to hear what I have to say. They do not want to read it out of a book or always see it on the computer.

This teacher went on to illustrate the benefits of peer-interaction. “Some of those students that are not quite there, it’s amazing that when they work cooperatively how they can pull the other ones up to their level.” In addition, she viewed student-student interactions as a tool for learning “people skills” and as a way for students to feel that they have valuable information to offer.

Students in various classes also endorsed the benefits of whole and small group instruction, in addition to voicing their views on the value of employing multiple learning strategies. Data from across the classes suggested that some students preferred the computer, others favored print material, and others endorsed group work. These choices did not appear to be dependent on native or non-native English speaker status or the ability level of the classes.

Although teachers emphasized different content areas in their classes, all educators observed at this site included life skills in their instructional focus. One teacher explained, “We do extra things as well. We do anger management skills, life skills…and I end my class everyday with the newspaper. Each student will have to get a newspaper and read an article of their choice.” Students shared with the class what they learned from the article, and thus had an opportunity to practice oral language skills and demonstrate reading comprehension skills.

Teacher-Student Interactions

Students consistently praised their teachers. One student enthusiastically shared, “Thank God for the teachers because being a cancer patient, they let me come in when I can come and [the teachers] all have been fabulous.” Another expressed similar and more expansive thoughts:

I’ve learned a lot in the program especially with these two ladies that teach us. They take their time, they have a lot of patience, and with someone like me, I have a hard time taking tests, and I struggle to get my diploma, and they let me come back. They don’t look at you as stupid if you happen to say that you can’t do it. They say, ‘Don’t say that. Of course you can.’

Another learner concurred:

The teacher believes in me so much… she inspires us so much. For instance, if we don’t understand something, she will take time out of her schedule and explain it to you. And she’s so happy when you understand it. She is a great inspiration.

When asked about her relationship with her students, one teacher responded that her students know exactly what she expects from them: consistent attendance and effort. Teachers also emphasized their awareness of current research in order to help student learning. A teacher explained that by attending workshops and browsing teacher catalogs, she strove to remain on top of effective new strategies to support her students.

It appeared that a respectful and understanding classroom culture was consistent across the site. One teacher noted:

When the students come into the classroom, they feel like a part of a family. When they come here, we are a team. There aren’t any put-downs, there isn’t any talk about ‘I’m smarter than you are’ or ‘I know this.’ Instead, it’s ‘How can I help you?’ not ‘How can I hurt you?’ I think people feel safe, they feel comfortable, they feel loved…cared about.

Learners from a variety of classes also identified respect and support as important elements of the classroom communities.

Views on Testing

A majority of the teachers expressed some doubt as to the consistency and value of the GED and the TABE. One teacher explained that she had seen the requirements fluctuate and that the GED was historically more difficult to pass. Further, she questioned the predictive validity of the TABE for the GED. She stated, “The TABE test is totally unlike the GED in every way, shape, and form.” Her colleague agreed, citing times when she had “seen students that score low on the TABE and pass their GED. Some students come back and say what I took on the GED was not what I studied in class, and the GED was much easier than the TABE.” While this teacher did not see a clear correlation between the two assessments, she did believe that the TABE was valuable: “A lot of the skills that are on the TABE are not only needed to pass but to survive.” Her statements suggest that although she didn’t think the TABE directly corresponded to the GED, she saw value in the TABE’s functional literacy component.

Student Motivational Factors

Students expressed positive attitudes toward school in relation to learning gains and goal achievements, both immediate (earning a diploma) and long-term (establishing a career). Their statements reflected their overwhelmingly affirmative stance on their decision to attend class, and some linked attendance with achievement. “I don’t intend on missing any. I am coming for my GED, and I want to test as quick as possible.” Some described the process as difficult, “It’s hard. I feel like a baby learning to walk again,” but they took solace in their achievement.

Several of the learners expressed their desire to pass the GED for career purposes and personal fulfillment. One student summed up his goals: “I needed to learn English. I think English is very important to me for my future, my job, for college for everything I need it is in English.” Another tied her diploma to multiple desires: “I wanted to better myself, pronounce my words correctly, brush up on my math skills, learn some multiplication, and maybe later in life, I want to do something different.” For another student, a diploma was a job-specific pathway: “With it, I really want to get into the security in the airport.” An elderly learner detailed her reason for returning to school: “The last five years I had nothing to do, and I was thinking what in your life is undone? And I thought well I never finished my education.” Others were motivated by family members: “My daughter is the one who inspired me;” “My mom supports me and my decision;” and “My children were born with the concept of being educated.”

As an additional student motivational factor, a graduation ceremony is held each May. One teacher insisted that this ceremony “is well worth waiting for.” Graduates don caps and gowns and are given carnations. Members of the Reserve Officers’ Training Corps (ROTC), in dress uniforms, raise their sabers to form an archway, and graduates walk under the swords. A guest speaker and celebratory songs are integral parts of this tradition.

Summative Findings

This was a large, busy site with three different time frames for classes and programs that served ESOL, ABE, and GED students aged 16 to over 80. Despite this diversity, remarkable consistency existed in classroom cultures, instructional approaches, and definitions of goals for student achievement. Teachers recognized the ultimate goal of their classes as earning a high school diploma, but they also viewed success in broader terms: they embedded life skills, which went beyond testing, into their curricula and supported a blend of instructional methods. The combination of stand-up teaching with independent and small group work allowed teachers to tailor instruction to student needs. These teachers stressed the importance of establishing consistent expectations for students, such as regular attendance and effort, and of developing a caring and supportive environment. This consistency of attitude, echoed by teachers and students alike, suggests a common belief that all students would achieve learning gains, succeed in earning their diplomas, and accomplish their goals.

Discussion

The multiple stakeholders who comprise the world of ABE view success through different lenses, frame their definitions in different terms, and even disagree with others within the same groups about what constitutes meaningful achievement. Despite the multiplicity of definitions of success, the state’s definition of more and less effective ABE classrooms (value-added scores) typically corresponds with particular observed classroom characteristics (context and approaches to learning) and definitions by local stakeholders (as assessed by views on teacher-student interactions, testing, and motivational factors). In order to explore the ways in which these definitions of success intersect, we first summarize the ways in which the various stakeholders characterize achievement. Next, we compare and contrast important characteristics of effective (high value-added score) and less effective (low to average value-added score) classrooms. Finally, we discuss how our findings relate to the body of research on effective elementary school classrooms and international adult literacy research. We conclude with the implications of our findings and our recommendations for ABE programs.

State-Defined Successful Classrooms

In the quantitative piece of this study, we computed value-added scores as a proxy for the state of Florida’s definition of success. We identified six effective and nine less effective classrooms in County 1 and 13 effective and 12 less effective classrooms in County 2. Our value-added scores allowed us to select classrooms for our qualitative analyses and to compare and contrast state and local stakeholders’ definitions of success (discussed in detail in the subsection on inter-related definitions of success below), keeping in mind that all research assistants were unaware of classroom value-added status.

Teacher-Defined Successful Classrooms

Some ABE teachers defined learner success strictly in terms of LCPs earned; however, other teachers viewed success in broader and more complex terms. For instance, they might define success in terms of teaching students to (a) develop goals beyond earning a diploma; (b) master life skills; and (c) appreciate the importance of supporting others and contributing to the development of a classroom community. Our study suggests that teachers act in accordance with their definitions of success when designing their instructional approaches to learning and the types of classroom communities they strive to foster. As observed in the Alpha, Beta, and Sandy Beaches classrooms, teachers who believed that their students learned better through independent work designed learning environments that focused almost exclusively on individual work with minimal teacher-student interactions. In contrast, teachers in the Oakmont and Pacific Central classrooms employed a variety of instructional approaches to foster student learning, such as whole group instruction, small group work, and independent work. Generally, these teachers had broader definitions of success, which they tried to instill in their students. Therefore, the teachers constructed classroom communities that were interactive, engaging, and mutually supportive.

Student-Defined Successful Classrooms

Some of the learners measured success strictly in terms of earning a diploma (passing the GED). However, some of the students had more expansive goals such as obtaining a new job, enrolling in higher education, or supporting their children’s learning. What students bring to the classroom (i.e., long term goals, consistent attendance, and motivational reasons for pursuing education) may directly and indirectly influence the learning environment of the classroom. For example, some of the older students indicated that younger students had not “figured it out yet” because these students did not understand that consistent attendance, completing homework, studying outside of class, and supporting classmates increased learning opportunities for all students. Learners in higher value-added classrooms consistently identified their teachers as an integral part of their success. It may be that these teachers were better able to discern, articulate, and support individual students’ goals and motivations in ways that were not evident in low to average value-added classrooms. However, to some extent, this may have been beyond the control of some teachers. For example, the ability of teachers to re-direct student goals and foster learning may be more difficult in classrooms where the majority of students are required to attend class in order to retain their drivers’ licenses.

Inter-related Definitions of Success in High Value-Added Score Classrooms

An examination of high value-added score classrooms provided an opportunity to illustrate the commonalities in how the teachers and students defined success and the observed classroom characteristics. In nearly every case, more effective classrooms incorporated multiple approaches to learning, frequent and engaging teacher-student interactions, and motivating factors that went beyond earning a diploma. These elements, coupled with high value-added scores, suggest a complex, multi-faceted definition of success for ABE learners.

Multiple approaches to learning

More effective classrooms tended to be highly organized and diverse in their approaches to learning. These classrooms offered a range of instructional opportunities that allowed students to meet as a whole class, work in small groups, work one-on-one with a teacher or a peer, and work independently. Diverse materials were used including textbooks, workbooks, computer programs, teacher-generated lessons, and authentic texts (e.g., newspapers). The teachers emphatically eschewed independent learning when used to the exclusion of other modes of instruction. They insisted that if individual learning “worked,” their students could sit at home with the materials and learn just as effectively. Instead, they saw value in the integration of multiple approaches to learning and in building strong teacher-student bonds to promote learners’ success.

Teacher-student interactions

One characteristic that consistently differentiated effective from less effective classrooms was whether or not students stated that their teachers were vital to their success. Whereas most students indicated that their teachers were available to help them pass the tests, a deeper sense of gratitude and allegiance was echoed in students’ statements about more effective teachers than in their statements about less effective teachers. Effective teachers were described as being knowledgeable, encouraging, supportive, approachable, and accessible.

Some teachers fully embraced the notion that adult learners wished to be “left alone” to learn; however, there was a striking difference in the levels of student enthusiasm in classrooms with greater teacher involvement (higher value-added scores). Across these classrooms, teachers strove to develop supportive and collaborative learning communities for their students. Teachers articulated the importance of setting long-term goals, developing life skills, recognizing and increasing students’ self-worth, and valuing and supporting classmates (Rex, 2001). It is reasonable to conclude that if teachers consistently interact with adult learners in ways that the students perceive as supportive and encouraging, then teachers also serve as an important source of motivation for some learners.

Student motivational factors

Across ABE sites, the students served as important sources of influence on the overall efficacy of individual classrooms. We observed successful students in every classroom, but there tended to be a higher percentage of successful students in more effective classrooms. In speaking with these students, we found that most of them had clearly articulated goals regarding what they wanted to achieve and how they were going to achieve their objectives. Many had goals that extended beyond the attainment of a diploma, such as attending college, learning new skills, and getting new jobs. They typically exhibited positive attitudes in regard to their ability to successfully complete class work and pass the TABE and/or GED to earn their diplomas. Finally, these students referenced several motivating factors that helped them pursue and persist in ABE, including a sense of personal fulfillment, the support of family members, and potential career advancement.

Characteristics of Low to Average Value-Added Score Classrooms

Less effective classrooms possessed some, but not enough, of the crucial elements that were observed in more effective classrooms. For example, at the Sandy Beaches site, the teachers’ primary definition of success was regular attendance. Teachers in more effective classrooms also noted the link between consistent attendance and learning gains; however, more effective teachers also emphasized additional factors (i.e., goal setting, multiple modes of instruction). Thus, in the absence of other defining characteristics, the stress placed solely on attendance was not enough to promote large learning gains, as evidenced by the average value-added score of the Sandy Beaches site.

College Center provides another example of a classroom possessing some of the elements in common with the more effective classrooms. The teacher had developed personal, caring relationships with the students. Student comments reflected that they felt supported and respected by their instructor. Moreover, students felt encouraged by their teacher to take the GED and earn their diplomas. Independent work was the primary mode of instruction at College Center (similar to our other observed less effective classrooms). In contrast, more effective classrooms tended to have greater teacher engagement, varied modes of instruction, and a more diverse array of supplemental class materials. Additionally, the data from interviews and observations suggest that students in the College Center class were significantly more reluctant to take the GED and to commit to leaving the class upon earning a diploma than students in any other classroom. In part, this reluctance may have emanated from the fact that the students in this program did not need diplomas to secure jobs, as they were already employed and received release time for attending class. Further, there was no guarantee of a pay raise or promotion if they were successful in achieving their diploma. Thus, unlike students in more effective classrooms, these students did not appear to have clear goals, motivations, or external incentives for obtaining their diplomas.

Relation of Findings to Children’s Research and International Adult Literacy Research

Consistent with qualitative research on effective elementary school classrooms, the current study found that defining educational success is complex and that more effective classrooms tend to share multiple classroom characteristics (Bohn, Roehrig, & Pressley, 2004; Pressley et al., 2001; Taylor, Pearson, Clark, & Walpole, 2000; Wharton-McDonald, Pressley, & Hampston, 1998), which less effective classrooms may include but to a lesser extent. For example, Pressley et al. (2001) compared characteristics of more and less effective first grade classrooms across five states. The researchers reported that more effective classrooms were characterized by: (a) teachers who fostered positive, supportive classroom environments; (b) multiple approaches/materials and more explicit teaching of component reading skills; and (c) teachers who promoted independent, self-motivated learning. Thus, similar to research findings from children’s classrooms, the current study found that more effective ABE classrooms were also characterized by multiple markers of success, such as frequent and engaging teacher-student interactions, multiple modes of instruction, and students with broad goals and motivations.

Our finding that effective classrooms tended to have more students engaging in goal setting and citing several motivational factors beyond standardized testing is consistent with international adult literacy research and a social practice approach to literacy education. Barton (2013) summarized successful learning approaches by adults with low literacy in several countries. These learners work to gain the literacy knowledge required for specific purposes (e.g., to find work or to increase community involvement) that stretch beyond traditional educational paradigms. Similarly, a social practice approach to literacy, which has been implemented successfully in Scotland, acknowledges the students’ perspectives and their motivations for learning. In this approach, learners tailor their educational experiences and assessments to their personal goals and needs (Hamilton, Hillier, & Tett, 2006). This approach also extends the literacy context to include conventional learning in a school setting as well as functional literacy (i.e., the literacy skills necessary for individuals to function socially in day-to-day life) (Street, 2012b). One prominent example of a social practice approach to adult literacy is the Learning for Empowerment Through Training in Ethnographic Research (LETTER) program, which has been implemented in India, Ethiopia, and Uganda (Gebre, Rogers, Street, & Openjuru, 2009; Street, 2012b; Street, Rogers, & Baker, 2006). The LETTER program augments traditional literacy practices by providing adult literacy teachers and tutors with information to modify curricular objectives to fit specific literacy communities and to take into account the goals, motivations, and aspirations of learners within these communities (Street, 2012b). Researchers with the LETTER program have employed ethnographic research approaches and have consistently found a substantial gap between formal literacy instruction taught in adult literacy programs and the literacy skills needed by learners in informal, social contexts (Street, 2001). Consistent with the current study, international adult literacy research supports a multi-faceted view of success in adult literacy programs that goes beyond traditional standardized testing and incorporates the learners’ needs and motivations. Based on these findings, it may be important to consider a more student-centered approach to adult literacy learning in the United States. Students in our study reported diverse motivating factors, and thus success may not be adequately measured for all learners by standardized assessments.

Implications of Findings and Recommendations for ABE Programs

Consistent with ABE programs nationally, the sites observed in this study were offered in multiple contexts, utilized a range of instructional approaches and materials, and represented a heterogeneous group of students and teachers (Belzer, 2007; NRC, 2012; Tamassia et al., 2007). Consequently, it is not surprising that the current study illustrated that there were several ways to define success and that a multitude of characteristics comprised the differing definitions of success. Indeed, there was not a single defining characteristic associated with success but many definitions that appeared to be cumulative. Below, we propose several implications and recommendations for research and practice in U.S. ABE programs, based on our findings.

First, this study illuminated the pivotal role played by the teachers in ABE classrooms. These educators made choices about how they defined success for their students, how they designed their curricula, what teaching strategies and materials they employed, and how they constructed their classroom cultures. Consistently, students in more effective classrooms mentioned teachers as a vital component of their success. This suggests that ABE programs are more likely to be effective when they actively recruit and retain highly qualified, enthusiastic, and dedicated teachers. Moreover, ABE programs are more likely to be effective when teachers are provided professional development to help build their skills and knowledge. In contrast, on a national level, there is currently very little systematic professional development for ABE teachers (NRC, 2012). In terms of funding, ABE programs report that instructional staff is the largest expenditure whereas professional development is the smallest expenditure (Tamassia et al., 2007). Additionally, there is substantial variability in the educational backgrounds of instructors in ABE programs. Instructors tend to be part-time staff (40%) and volunteers (43%); a high school diploma is the most commonly reported credential for volunteers (NRC, 2012; Tamassia et al., 2007). Future research might focus on the kinds of specialized knowledge and skills that are associated with more effective (from both the state and student perspective) ABE programs and how best to provide professional development to such a diverse teaching force.

More effective classrooms employed a variety of approaches to learning and used a multitude of materials. Further, these classrooms had a strong sense of classroom community and frequent teacher-student interactions. Thus, promoting a positive, goal-focused, and supportive learning environment appears to be associated with successful ABE programs. These results suggest that when teachers provide specific classroom guidelines that delineate appropriate classroom behavior and offer constructive feedback that explicitly supports high expectations for student achievement and regular attendance, they are more likely to be effective and support students’ goals. Additionally, more effective teachers appear to avoid setting up classroom structures in which students are expected to engage in independent work for the entire class period. For example, opportunities for whole class, small group, and individual work were more frequently observed in more effective classrooms than in less effective settings.

The students also played an essential role in defining and fostering success in ABE classrooms. Students brought goals, life experiences, and background knowledge to the classroom. Thus, further consideration of peer effects and classroom composition appears warranted. For instance, it may be beneficial to pair younger and/or less motivated students with older and/or more motivated students who have clearly articulated goals. Also, it may be beneficial to avoid creating classrooms where all students are forced to attend (e.g., to keep their driver’s license). Acknowledging that students themselves are contributing to the culture (either constructive or less constructive) and developing a better understanding of the role of peer influence may support the effectiveness of ABE overall and the impact of teachers.

Finally, testing appears to represent a necessary, but not sufficient, approach to defining success in ABE programs. According to the Adult Education Program Survey (AEPS), only one-third of students in ABE programs advance one educational functioning level (as measured by the NRS levels) each academic year. On average, adult learners receive roughly 80–100 instructional hours per school year, whereas mastery of most academic concepts requires at least 3,000 instructional hours. Thus, 80–100 instructional hours represents merely a fraction of the hours necessary to do well on standardized assessments and to gain functioning levels (NRC, 2012; Tamassia et al., 2007). The state accountability marker of success in Florida is the number of LCPs earned, and although we observed some congruence between observational results and value-added status, this metric does not entirely reflect the progress and success of many students. Our results suggest that it might be important to consider and incorporate the other teacher- and student-defined markers of success, specifically the approaches to learning used, teacher-student interactions, and motivational considerations.
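As a rough illustration of the scale of this gap, using the upper end of the reported range of instructional hours:

$\dfrac{100 \text{ hours received per year}}{3{,}000 \text{ hours needed}} \approx 3\%$

That is, a typical year of ABE instruction covers only a few percent of the estimated time required for mastery.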

Limitations and Future Directions

There are several study limitations that should be noted. First, although we did not share the results of our value-added scores with the counties, the administrators from County 2 appeared to already have some sense of which classrooms were more effective. Due to time constraints and district preferences, County 2 administrators selected the classrooms we were allowed to observe, and they consistently identified the highest value-added score teachers. Unlike in County 1, where we selected the classrooms to observe, County 2 had no teachers with negative value-added scores. Thus, particularly for our less effective classrooms (low to average value-added scores), the results should be interpreted cautiously because these may not be representative of truly low-performing ABE classrooms. Second, we utilized the state of Florida’s metric of success, LCPs earned, and initial TABE scores to compute value-added scores. We did not have access to later TABE scores, although gains in TABE scores would have been a better outcome metric for our value-added models. Because we relied on Florida’s accountability metric, the results of our study may not generalize to ABE programs outside of Florida. Finally, it is important to keep in mind that this is a descriptive and correlational study, and therefore, no strong causal claims can be made.

Future research might address the complexities of defining success in ABE programs by investigating several states and counties located in diverse geographic regions. Moreover, future research might also include district administrators’ definitions of success. Additionally, random field trials with students and classrooms assigned to treatment and control conditions are necessary to investigate more and less effective instructional approaches, materials utilized, and standardized assessments given before stronger claims can be made. Finally, it would be helpful to examine individual learner characteristics (i.e. English language learner status, age, gender, educational background, and prevalence of learning disabilities) to see if certain demographics are affiliated with differing definitions of success.

Conclusion

This study suggests that the state’s definition of success, primarily LCPs and GED pass rates, represents only a single perspective on what constitutes success in ABE settings. ABE teachers and students characterize success using additional factors; we identified three in particular: teacher-student interactions, views on testing, and student motivational factors. We observed the inherent complexity in defining success in ABE settings, with implications for teacher training and retention, as well as for students’ and peers’ roles in defining successful classrooms. Future efforts will focus on developing a fine-grained set of objectives that takes into account how multiple educational stakeholders define success.

Appendix A. Cross-classified Random Effects Latent Growth Curve Models

Because students were nested in classrooms and also changed courses from semester to semester, cross-classified random effects latent growth models using HLM6.2 (Raudenbush & Bryk, 2002; Raudenbush et al., 2002) were used to compute value-added scores. It is easier to understand how these models work if one envisions a matrix in which students are enrolled in different courses with different teachers over time. Consider an imaginary Student A who has Teacher 1 for a fall semester course and Teacher 2 for a spring semester course, and then returns to Teacher 1 for the next semester. Student B has Teacher 1 in the fall semester but Teacher 3 in the spring semester and drops out before beginning another course. Students C and D have Teacher 3 in the fall semester and Teacher 1 in the spring. Then Student C has Teacher 3 again the following semester and Student D has Teacher 2. The matrix would look like this:

               Fall Semester 1     Spring Semester 1     Fall Semester 2
Teacher 1      Students A, B       Students C, D         Student A
Teacher 2      -                   Student A             Student D
Teacher 3      Students C, D       Student B             Student C

Thus, two different teachers at different times are contributing to the number of LCPs Student A earns, whereas a different set of teachers and courses are contributing to the number of LCPs Students B, C, and D earn. Cross-classified random effects models can accommodate data that are structured in this manner and partial out the residuals (i.e., value-added scores) for each teacher. In this example, Student B attended fewer courses, which might contribute to fewer LCPs earned, and the models, and thus the value-added scores, take this into account as well.
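To make the data layout concrete, the sketch below (not the authors' code; all names are illustrative) encodes the toy enrollment matrix above in the long format that cross-classified models operate on: one row per student per course, with student and teacher as crossing factors.

# Illustrative sketch of the toy enrollment matrix in long format:
# one row per student-course, with student and teacher as crossed factors.
import pandas as pd

rows = [
    # (student, teacher, semester)
    ("A", "Teacher 1", "Fall 1"), ("A", "Teacher 2", "Spring 1"), ("A", "Teacher 1", "Fall 2"),
    ("B", "Teacher 1", "Fall 1"), ("B", "Teacher 3", "Spring 1"),   # Student B drops out
    ("C", "Teacher 3", "Fall 1"), ("C", "Teacher 1", "Spring 1"), ("C", "Teacher 3", "Fall 2"),
    ("D", "Teacher 3", "Fall 1"), ("D", "Teacher 1", "Spring 1"), ("D", "Teacher 2", "Fall 2"),
]
enrollments = pd.DataFrame(rows, columns=["student", "teacher", "semester"])

# Students reappear under different teachers across semesters, so student and
# teacher effects are crossed rather than strictly nested.
matrix = pd.crosstab(enrollments["teacher"], enrollments["semester"],
                     values=enrollments["student"],
                     aggfunc=lambda s: ", ".join(sorted(s)))
print(matrix.reindex(columns=["Fall 1", "Spring 1", "Fall 2"]))

Printing the cross-tabulation reproduces the teacher-by-semester matrix shown above.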

Models were built systematically starting with a fully unconditional model (i.e., outcome [number of LCPs] with no predictor variables), and then the linear growth trend (days) was tested. The unconditional model (including days as a student-level predictor) is provided below (see Equation 1):

Level 1: $Y_{ijt} = \pi_{0jt} + \pi_{1jt}(T_{ijt}) + e_{ijt}$
Level 2: $\pi_{0jt} = \theta_{0} + b_{00j} + c_{00t}$
                 $\pi_{1jt} = \theta_{1} + b_{10j} + c_{10t}$     (1)

where $Y_{ijt}$ is the number of LCPs earned for student $i$ who attended classroom $j$ at a certain time $t$, and $T$ represents time (in days). $\theta_{0}$ represents the grand mean number of LCPs earned for all students, $\theta_{1}$ is the mean rate of growth (LCPs per day), $b_{00j}$ is the random main effect of classroom (row), and $c_{00t}$ is the random main effect for student characteristic (column) [$\theta_{0} = 0.381$, $\theta_{1} = 0.004$, $\tau_{b_{00j}} = 0.076$, $\tau_{c_{00t}} = 0.141$, $\sigma^{2} = 0.585$]. Overall, most of the variance lies within classrooms, with very little of the variance lying between classrooms.

We then tested a conditional model by adding students’ initial literacy level to the model (see Equation 2).

Level 1: $Y_{ijt} = \pi_{0jt} + \pi_{1jt}(T_{ijt}) + e_{ijt}$
Level 2: $\pi_{0jt} = \theta_{0} + b_{00j} + c_{00t} + \gamma_{01}(\text{initial literacy level}_{i})$
                 $\pi_{1jt} = \theta_{1} + b_{10j} + c_{10t} + \gamma_{11}(\text{initial literacy level}_{i})$     (2)

where $\gamma_{01}$ is the effect of students’ initial score on the fitted mean number of LCPs earned and $\gamma_{11}$ is the effect of students’ initial score on the fitted mean growth in LCPs per day. These coefficients are interpreted in the same way that regression coefficients are interpreted. In this model, again, $b_{00j}$ is the random main effect of classrooms (row). Teachers’ residual values above or below this mean (i.e., $b_{00j}$) represent their value-added scores and are computed by the HLM6.2 program.

Because we use growth models over multiple courses and years and Empirical Bayes residuals while controlling for initial status, other important student characteristics, such as SES, race, and age, are not as critical to include (Ballou, Sanders, & Wright, 2004). By following students over time, we do not mask the contribution of previous coursework completed (Raudenbush, 2004a). Additionally, by including as many students as possible, we increase the reliability of our value-added scores (Raudenbush, 2004a; Raudenbush & Bryk, 2002).
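For readers who want to reproduce the general logic outside of HLM6.2, the following is a simplified sketch (not the authors' analysis) using simulated data in Python with statsmodels: it fits crossed random intercepts for teachers and students via variance components, omits the random slopes in Equations 1 and 2 for brevity, and treats the Empirical Bayes estimates of the teacher components as rough value-added proxies. All variable names (lcps, days, initial_level) and simulation parameters are hypothetical.

# Simplified, illustrative cross-classified value-added sketch on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_students, n_teachers, n_obs = 60, 8, 400

df = pd.DataFrame({
    "student": rng.integers(0, n_students, n_obs),
    "teacher": rng.integers(0, n_teachers, n_obs),
    "days": rng.integers(30, 180, n_obs),            # days enrolled in the course
    "initial_level": rng.normal(0.0, 1.0, n_obs),    # stand-in for initial TABE-based level
})
true_teacher = rng.normal(0.0, 0.3, n_teachers)      # simulated "true" teacher effects
true_student = rng.normal(0.0, 0.4, n_students)
df["lcps"] = (0.4 + 0.004 * df["days"] + 0.2 * df["initial_level"]
              + true_teacher[df["teacher"]] + true_student[df["student"]]
              + rng.normal(0.0, 0.7, n_obs))

# Crossed (non-nested) random intercepts via variance components under one dummy group.
df["all"] = 1
vc = {"teacher": "0 + C(teacher)", "student": "0 + C(student)"}
model = smf.mixedlm("lcps ~ days + initial_level", df,
                    groups="all", vc_formula=vc, re_formula="0")
fit = model.fit()

# Conditional means (Empirical Bayes estimates) of the teacher components:
# teachers above zero earn more LCPs than expected given student intake.
re_all = list(fit.random_effects.values())[0]
teacher_va = re_all[[name for name in re_all.index if name.startswith("teacher")]]
print(teacher_va.sort_values(ascending=False))

Ranking the estimated teacher components, as in the final line, mirrors the way classroom residuals were used in this study to distinguish higher from lower value-added classrooms, although the authors' actual models also included random slopes and were fit in HLM6.2.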

Appendix B. Teacher and Student Semi-Structured Interview Questions

Teacher Questions

  1. What instructional practices do you use and which are most/least effective?

  2. What are the most common reasons that students successfully complete their ABE coursework?

  3. What are the most common reasons that students fail to complete their ABE coursework?

Student Questions

  1. Why are you attending ABE classes?

  2. How will you know when you have reached your educational goals?

  3. What is challenging about attending ABE classes?

  4. What will you do when you finish your ABE courses?

Footnotes

1. The results of the initial TABE testing done when students first enter an ABE class.

Contributor Information

Elizabeth L. Tighe, Florida State University, Tallahassee, Florida, USA

Adrienne E. Barnes, Florida State University, Tallahassee, Florida, USA

Carol M. Connor, Arizona State University, Phoenix, Arizona, USA

Sharilyn C. Steadman, East Carolina University, Greenville, North Carolina, USA

References

  1. Anderson-Levitt KM. Ethnography. In: Green JL, Camilli G, Elmore PB, editors. Handbook of complementary methods in education research. Mahwah, NJ: Lawrence Erlbaum Associates; 2006. pp. 279–296.
  2. Ballou D, Sanders W, Wright P. Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics. 2004;29(1):37–65.
  3. Barton D. Afterword: The threat of a good example: How ethnographic case studies challenge dominant discourses. In: Kalman J, Street B, editors. Literacy and numeracy in Latin America. New York, NY: Routledge; 2013. pp. 214–219.
  4. Bennett P. Foreword: Impact of research on an adult education practitioner. In: Comings J, Garner B, Smith C, editors. Review of adult learning and literacy. Vol. 7. Mahwah, NJ: Lawrence Erlbaum; 2007. pp. vii–x.
  5. Belzer A. Why quality? Why now? In: Belzer A, editor. Toward defining and improving quality in Adult Basic Education. Mahwah, NJ: Lawrence Erlbaum Associates; 2007. pp. 1–7; Preface, pp. ix–xii.
  6. Bohn CM, Roehrig AD, Pressley M. The first days of school in the classrooms of two more effective and four less effective primary-grades teachers. The Elementary School Journal. 2004;104:271–287.
  7. Bryk AS, Deabster PE, Easton JQ, Luppescu S, Thum YM. Measuring achievement gains in the Chicago public schools. Education and Urban Society. 1994;26(3):306–319.
  8. Coates J. Women talk. Oxford, England: Blackwell; 1996.
  9. Comings JP, Cuban S. Supporting the persistence of Adult Basic Education students. In: Belzer A, editor. Toward defining and improving quality in Adult Basic Education. Mahwah, NJ: Lawrence Erlbaum Associates; 2007. pp. 125–140.
  10. Comings J, Soricone L. Adult literacy research: Opportunities and challenges. Cambridge, MA: Harvard Graduate School of Education; 2007. Occasional Paper of the National Center for the Study of Adult Learning and Literacy (NCSALL).
  11. Condelli L. Accountability and program quality: The third wave. In: Belzer A, editor. Toward defining and improving quality in Adult Basic Education. Mahwah, NJ: Lawrence Erlbaum Associates; 2007. pp. 11–32.
  12. CTB/McGraw-Hill. Test of Adult Basic Education (TABE). Monterey, CA: CTB/McGraw-Hill; 2008.
  13. Gebre AH, Rogers A, Street B, Openjuru G. Everyday literacies in Africa: Ethnographic studies of literacy and numeracy practices in Ethiopia. Kampala: Fountain Publishers Ltd; 2009.
  14. Gee JP. An introduction to discourse analysis. London: Routledge; 1999.
  15. Glaser BG, Strauss AL. The discovery of grounded theory: Strategies for qualitative research. New York: Aldine; 1967.
  16. Green J. Exploring classroom discourse: Linguistic perspectives on teaching-learning processes. Educational Psychologist. 1983;18(3):180–198.
  17. Greenberg D. Tales from the field: The struggles and challenges of conducting ethical and quality research in the field of adult literacy. In: Belzer A, editor. Toward defining and improving quality in Adult Basic Education. Mahwah, NJ: Lawrence Erlbaum Associates; 2007. pp. 53–67.
  18. Hamilton M, Hillier Y, Tett L. Introduction: Social practice of adult literacy, numeracy and language. In: Tett L, Hamilton M, Hillier Y, editors. Adult literacy, numeracy & language: Policy, practice & research. Open University Press; 2006. pp. 1–18.
  19. Kirsch IS, Jungeblut A, Jenkins L, Kolstad A. Adult literacy in America: A first look at the results of the National Adult Literacy Survey. Washington, DC: National Center for Education Statistics, U.S. Department of Education; 1993. NCES 93-275.
  20. Kutner M, Greenberg E, Jin Y, Boyle B, Hsu Y, Dunleavy E. Literacy in everyday life: Results from the 2003 National Assessment of Adult Literacy. Washington, DC: National Center for Education Statistics, U.S. Department of Education; 2007. NCES 2007-480.
  21. Merriam SB. Qualitative research: A guide to design and implementation. San Francisco, CA: John Wiley & Sons, Inc; 2009.
  22. Miles MB, Huberman AM, Saldaña J. Qualitative data analysis: A methods sourcebook. 3rd ed. Thousand Oaks, CA: Sage Publications; 2014.
  23. Lesgold AM, Welsh-Ross M, editors; National Research Council, Committee on Learning Sciences: Foundations and Applications to Adolescent and Adult Literacy. Improving adult literacy instruction: Options for practice and research. Washington, DC: The National Academies Press, Division of Behavioral and Social Sciences and Education; 2012.
  24. Pressley M, Wharton-McDonald R, Allington R, Block CC, Morrow L, Tracey D, Woo D. A study of effective first-grade literacy instruction. Scientific Studies of Reading. 2001;5(1):35–58.
  25. Raudenbush SW. Schooling, statistics, and poverty: Can we measure school improvement? Princeton, NJ: Educational Testing Service; 2004a.
  26. Raudenbush SW. What are value-added models estimating and what does this imply for statistical practice? Journal of Educational and Behavioral Statistics. 2004b;29(1):121–129.
  27. Raudenbush SW, Bryk A, Cheong YF, Congdon R, du Toit M. HLM6: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International; 2004.
  28. Raudenbush SW, Bryk AS. Hierarchical linear models: Applications and data analysis methods. 2nd ed. Thousand Oaks, CA: Sage Publications; 2002.
  29. Raudenbush SW, Hong G, Rowen B. Studying the causal effects of instruction with application to primary-school mathematics. Washington, DC: Office of Educational Research and Improvement; 2002.
  30. Raudenbush SW, Willms JD. The estimation of school effects. Journal of Educational and Behavioral Statistics. 1995;20(4):307–335.
  31. Rex LA. The remaking of a high school reader. Reading Research Quarterly. 2001;36(3):288–314.
  32. Sanders WL, Horn SP. Research findings from the Tennessee value-added assessment system (TVAAS) database: Implications for educational evaluation and research. Journal of Personnel Evaluation in Education. 1998;12(3):247–256.
  33. Street BV, editor. Literacy and development: Ethnographic perspectives. London: Routledge; 2001.
  34. Street BV. Literacy and multimodality. 2012a. Retrieved May 6, 2013, from http://arquivos.lingtec.org/stis/STIS-LectureLitandMMMarch2012.pdf
  35. Street BV. Society reschooling. Reading Research Quarterly. 2012b;47(2):216–227.
  36. Street BV, Rogers A, Baker D. Adult teachers as researchers: Ethnographic approaches to numeracy and literacy as social practices in South Asia. Convergence. 2006;39(1):31–44.
  37. Tamassia C, Lennon M, Yamamoto K, Kirsch I. Adult education in America: A first look at results from the adult education program and learner surveys. Educational Testing Service (ETS); 2007. Retrieved October 10, 2012, from http://www.ets.org/Media/Research/pdf/ETSLITERACY_AEPS_Report.pdf
  38. Tannen D, Wallat C. Interactive frames and knowledge schemas in interaction: Examples from a medical examination review. In: Jaworski A, Coupland N, editors. The discourse reader. London and New York: Routledge; 1999. pp. 346–366.
  39. Taylor BM, Pearson DP, Clark K, Walpole S. Effective schools and accomplished teachers: Lessons about primary-grade reading instruction in low-income schools. Elementary School Journal. 2000;101(2):121–165.
  40. Venezky RL. Matching literacy testing with social policy: What are the alternatives? (Policy brief, document No. PB92-1). Philadelphia, PA: National Center on Adult Literacy; 1992. Retrieved October 10, 2012, from http://www.nald.ca/fulltext/report4/rep36-40/REP39-04.HTM
  41. Wharton-McDonald R, Pressley M, Hampston JM. Literacy instruction in nine first-grade classrooms: Teacher characteristics and student achievement. The Elementary School Journal. 1998;99(2):101–128.
  42. Zarharlick A, Green J. Ethnographic research. In: Flood J, Jensen J, Lapp D, Squire J, editors. Handbook of research on teaching the English language arts. Mahwah, NJ: Lawrence Erlbaum Associates; 1991. pp. 205–225.
