Microbiology Education. 2000 May;1:14–19. doi: 10.1128/me.1.1.14-19.2000

An Evaluation of Computer-Based Instruction in Microbiology

SUSAN M. MERKEL,* LAURA B. WALMAN, and JEREMY S. LEVENTHAL
PMCID: PMC3633096  PMID: 23653534

Abstract

There has been a tremendous increase in the availability of computer-based instructional (CBI) materials. Some studies have shown an improvement in learning when CBI is used. However, many researchers believe the current studies are inadequate. While CBI software should be thoroughly tested by developers, as educators, we should be concerned about whether or not the CBI materials we use are improving learning in our classrooms, with our students. We present an evaluation of a computer-based hypermedia tutorial that was delivered over our General Microbiology website. We found that CBI was at least as effective as text-based material. However, only those students who explored most of the site benefited from using it. Tracking each student’s use of the CBI was critical for understanding who was learning and why.


In the past decade, there has been an explosion in the availability and use of computer-based instruction (CBI) in education (see references 9 and 22). CBI comes in many different types, each with its own educational strengths and weaknesses. For example, computers can be used as tutoring systems that present material and then give users an immediate opportunity to evaluate their knowledge (sometimes referred to as “drill and grill”). These often keep track of the user’s progress and adjust responses appropriately (11). Computer-aided instruction usually refers to autotutorial software that delivers information in a step-by-step linear mode (20). Hypermedia describes software that presents information in a nonlinear, user-centered manner, with information organized in nodes and links (7). This allows users to browse the software in a nonsequential manner driven by personal interest (12). Hypermedia supports the constructivist theory of learning, i.e., that learning is constructed from each user’s previous knowledge base, experience, and attitudes (8). Computer-based simulations allow users to interact with models of the natural or physical world (24). These are particularly useful for developing problem-solving skills or promoting conceptual changes in the way users understand principles.

There have been many attempts to evaluate the efficacy of these different forms of CBI. The evaluation of software has traditionally been characterized as being formative, which refers to the evaluation of student-computer interactions while the software is being developed, or summative, the evaluation of the learning outcomes of users after the software is “finished” (3). Formative evaluations are typically iterative, as developers incorporate user feedback into the product and then retest with another group of users, and often include interviews and observations as part of the pilot testing. Summative evaluations often consist of a comparison of different media types (e.g., CBI versus text), using scores on a pretest activity as the independent variable and scores on a posttest activity as a dependent variable (21).

A recent review of summative evaluations by Kulik (11) found that, in general, (i) students in classes that use CBI learn more than students in classes without CBI, (ii) students learn in less time with CBI, (iii) students like their classes more when they work with computers, and (iv) students who use computers have a more positive attitude toward using them. Of the different types of CBI, there are mixed results about whether or not any one kind enhances learning. For computer-aided instruction, with or without such features as graphics or animation, studies show both a positive effect and no effect on learning (for reviews, see references 11, 22, and 24). The often complex, user-centered designs presented in hypermedia make this type of CBI particularly difficult to evaluate. Many researchers have found computer-tracking tools (6) and split-screen video (7) useful for monitoring user interaction with hypermedia software. Simulations seem to hold the most promise for enhancing user learning, as many studies have found that their use improves both understanding of basic principles and the ability to apply that understanding (11, 24).

However, many educational researchers also lament the quality of studies conducted to assess the usefulness of CBI (10). In comparing studies of the 1970s with studies of the late 1980s, Clark (4) found that the more recent studies had increased in duration and statistical complexity but noted that these improvements did not necessarily lead to more understanding. Weller (24) cites many examples of studies that have inappropriate controls and compare different instructional methods instead of different media. He also cites short-term studies that analyze only a few different variables, with mixed results. He believes the results of some positive studies could be explained by the amount of effort put into developing the CBI (compared to text) or the effects of the novelty of CBI.

How should educational software be evaluated? Clark (4) argues that developers need to incorporate current research in cognitive psychology into the “up-front” design of educational software and that evaluations need to be more directly applied to specific learning goals. Weller (24) suggests that the strictly quantitative analyses miss the “richness of the human experience of learning and doing science.” Instead, studies should be longer term, should include more variables, and should investigate how users are learning with these new tools of technology. Regian and Shute (15) propose a framework for evaluating CBI that involves first conducting tests in a controlled environment, such as a computer lab with volunteers, where users can be observed and given an immediate assessment to evaluate learning. From there, testing should move into a more uncontrolled but realistic situation, such as assigning the CBI as a homework activity or using it in a classroom setting. One model that we have found useful recommends the following general steps: (i) clearly delineate the goals and methods of CBI, (ii) clearly define the goals of the evaluation study, (iii) select an appropriate design for the defined goals, and (iv) include an appropriate number of subjects and controls (15).

In this study, we compared the performances of students who used a computer-based hypermedia tutorial to those who used comparable text or who had no support material at all. The CBI was delivered over our General Microbiology Course website. Our hypothesis was that the CBI would be at least as effective as text-based information, but only if students explored the entire website. To determine the amount of exploration, we monitored each student’s use of the website and analyzed their use and learning patterns. In addition, we tried to explore some of the barriers to exploration by gathering information about each student’s computer experience and feelings toward the website.

METHODS

Description of the website. The “Bulging Cannes Biofilm Festival” is a separate module within the Cornell University General Microbiology Course website (2). This site was designed by undergraduate student Jeremy Leventhal as a user-centered, hypermedia-based tutorial to introduce students to some basic concepts about how biofilms form and why they are both detrimental and beneficial to humans and their environment. We used humor (23), creative graphics and animation (14), and images from the American Society for Microbiology Biofilm Collection (1) to motivate users to explore the site and learn as they explore (21).

The formative evaluation of this site was conducted throughout the development stage by observing groups of users (both graduate students and undergraduate students from the General Microbiology courses at Cornell University) and giving them a series of 10 Likert-type questions to assess whether students found the site easy to use and enjoyable (similar to the questions presented in Table 3). Students who used the prototype version reported that they enjoyed using the site and had no trouble navigating through the site but thought that there was too much text. Their comments led us to reduce the amount of text and include more images.

TABLE 3.

Responses to biofilm website evaluation

Statement                                                                        Avg score (n = 48)a
The site was accessible and easy to get into…………………………………… 4.5
The site loaded quickly and readily onto my computer………………………… 4.0
I had no trouble navigating throughout the site and finding particular pages… 4.2
Using the site was intuitive…………………………………………………… 4.0
The site does more than could be done with printed text…………………….. 3.7
The language of the site was appropriate for my level……………………….. 4.3
I was motivated to explore most of the site…………………………………… 3.8
I read through most of the text………………………………………………… 4.0
I enjoyed using the site……………………………………………………….. 3.7
I believe I learned the material presented on the site…………………………. 4.0
a Scores ranged from 1 to 5, with 1 being “no” and 5 being “definitely.”

The biofilm module has two parts: a tutorial and an exploratory activity. In the tutorial, we used pictures, text, and a short graphics interchange format (GIF) animation to present a basic overview of what biofilms are and how they form. The exploratory activity is based on a play on the word “biofilm.” In this section, called the “Bulging Cannes Biofilm Festival,” users can view three short (30-s) video clips that are related to biofilms but are parodies of the horror, romance, and tragedy film genres. The horror video clip illustrates how biofilms are detrimental to humans, the romance video clip shows how biofilms are beneficial to humans, and the tragedy video clip demonstrates how bacteria are “enslaved” in industrial processes. At the end of each video clip, viewers can explore a clickable image map that reveals new images and text that explain both the ubiquity and the utility of biofilms. Most of the information is found by exploring these clickable images (which we refer to as “content screens”).

Assessment methods. (i) Control variable (pretest). Students in the General Microbiology Lecture at Cornell University were given a pretest to assess their prior knowledge of biofilms. The pretest consisted of six short, open-answer questions. The pretest was not announced ahead of time, and the material on the pretest was not discussed in class. Students were told that the pretest was to help us evaluate part of our website and were asked to answer the questions as best they could. Every student who took the pretest received 2 extra points (out of 225 total points for the semester).

In addition, these students filled out a questionnaire about their gender, biology experience, computer background, and learning preferences. The student profile questionnaire consisted of a series of open-answer questions about each student’s biology background (previous courses, current major), computer experience (including age when they began using computers, whether they had a computer at home growing up and currently, and how they used a computer), and learning preferences (students were given a list from which to choose).

One week after the biofilm assignments were due, students in the General Microbiology Lecture were given a posttest to assess their knowledge of biofilms. Of all the students who participated, only those for whom we had both pretest and posttest scores were included in the analysis (n = 96).

(ii) Focus activity. Following the pretest, students in the General Microbiology Laboratory (approximately two-thirds of the students in Lecture) were given a required assignment to develop a plan for teaching biofilms to high school students. This assignment was designed to be open enough to allow students to explore the concepts they felt were most interesting. Approximately half of these students were told to use the website as a resource and half were told to use comparable text, which we provided. As students worked on this assignment, we monitored the website log files to determine who viewed which parts of the site, to gather information about how much each student explored. Of all the students who participated, we had both pretest and posttest scores for 35 students who did the activity and used the website and 20 students who used text.

Students who used the website were asked to fill out a questionnaire with 10 Likert-type items designed to assess their attitudes towards the Biofilm site. Using a scale from 1 to 5, students were asked to respond to statements describing their experiences with loading the site, navigating the site, understanding the language of the site, and whether they enjoyed and were motivated to use the site. (These statements are listed in Table 3.) In addition, the students were asked what they thought were the best and worst features of the site.

(iii) Dependent variable. The posttest consisted of eight short, open-answer questions that related directly to the content of the site. Students were told that the posttest was to help evaluate part of our website and were asked to answer the questions as best they could. Students did not know ahead of time that their knowledge of biofilms would be assessed. Every student who took the posttest received 3 extra points (out of 225 total points for the semester).

RESULTS

Student characteristics. Of the 96 students for whom we had pretest and posttest scores, 76% were female; 4% were freshmen, 49% were sophomores, 28% were juniors, 15% were seniors, and 4% were graduate or extramural students. There were no significant differences in how much males and females liked the website (based on their answers to the questionnaire) or in how they performed on the tests (P > 0.05 using a two-tailed unpaired t test), nor were there significant differences in how much students of different years (freshmen, sophomores, juniors, or seniors) liked the website or performed on the pre- and posttests (P > 0.05 using a two-tailed unpaired t test).

Both males and females began using computers at an average age of 11 (range, 5 to 20 years). Sixty-six percent of all students grew up with computers, and 93% now own a computer. Students used computers an average of 10 h/week (range, 0.5 to 55 h/week), with no differences between males and females (P > 0.05 using a two-tailed unpaired t test). Statistical analyses found no significant relationship between computer history (the age at which students began using computers, whether they grew up with a computer, and whether they own one now) and either how well students liked the website or their test scores (P > 0.05 using a two-tailed unpaired t test). A survey of how students used their computer time is presented in Table 1. While females did more word processing and males played more video games, none of the differences were significant (P > 0.05 using a two-tailed unpaired t test).

TABLE 1.

Student computer use

Activity              Avg h/wk (n = 96)
                      All students   Females   Males
Word processing            4.4          4.6      3.6
Programming                0.2          0.2      0.2
Design work                0.15         0.1      0.2
Library research           1.0          1.1      0.7
Web “surfing”              1.5          1.3      1.9
E-mail                     2.4          2.5      2.4
Video games                0.4          0.3      0.8
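
As an aside for instructors who wish to run similar comparisons on their own course data, the following is a minimal sketch, in Python, of the kind of two-tailed, unpaired t test used throughout this section. The numbers and variable names are hypothetical and are not taken from our data set.

    # A minimal sketch (hypothetical data, not our data set) of a two-tailed,
    # unpaired t test comparing weekly computer-use hours between two groups.
    from scipy import stats

    female_hours = [4.6, 12.0, 8.5, 10.2, 55.0, 0.5, 9.0]  # hypothetical values
    male_hours = [3.6, 15.0, 7.0, 11.5, 20.0, 1.0]         # hypothetical values

    # ttest_ind performs an independent-samples (unpaired) t test;
    # the returned P value is two-tailed by default.
    t_stat, p_value = stats.ttest_ind(female_hours, male_hours)
    print(f"t = {t_stat:.2f}, P = {p_value:.3f}")  # P > 0.05 -> no significant difference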

We asked the students how they preferred to learn. None of the correlations between learning preferences and how well students liked the website or their test scores were significant (P > 0.05 using a two-tailed unpaired t test). However, more females preferred fieldwork (P = 0.047, two-tailed unpaired t test) and more males preferred independent learning (P = 0.023, two-tailed unpaired t test). The results are presented in Table 2.

TABLE 2.

Student learning preferences

Learning preference     % of students (n = 96)
                        Total   Females   Males
Fieldwork                 60       64       42
Discussion                64       66       58
Independent learning      38       30       58
Lecture                   63       59       69
Autotutorial              35       33       42

Did students enjoy the site? Because we believed that students who enjoyed the site would be motivated to explore it, we felt it was important to know how they felt about using it. The results of the website questionnaire are listed in Table 3. Most students did not have technical difficulty using the site, and they found it appropriate and enjoyable.

Did students explore the site? Due to the nature of the assignment, students did not have to explore the whole website to successfully complete the assignment. For each student who looked at the website, we recorded which pages he or she visited and gave each student a point for visiting pages with substantial topical content (the “content screens”; 11 in all). We called this their Web access score. We divided these students into three groups based on how much of the site they viewed. Of the 58 students who looked at the website, 55% had a high Web access score (meaning they looked at 10 or 11 content pages), 17% had a medium Web access score (3 to 9 content pages), and 26% had a low Web access score (only 1 or 2 content pages). All of the students who had a low access score looked at the introductory tutorial page. This page alone contained enough information to complete the assignment.
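
The tally itself is simple to automate. The sketch below is illustrative only; it assumes a hypothetical log format of one tab-separated “user, URL” pair per request and made-up content-screen URLs, and shows how a Web access score could be computed from server log files and binned into the high, medium, and low groups described above.

    # Illustrative only; assumes a hypothetical log of "user_id<TAB>url" per request
    # and a fixed set of 11 content-screen URLs (the names here are made up).
    from collections import defaultdict

    CONTENT_SCREENS = {f"/biofilm/content{i}.html" for i in range(1, 12)}  # 11 hypothetical pages

    def web_access_scores(log_lines):
        """Return {user_id: number of distinct content screens visited}."""
        pages_seen = defaultdict(set)
        for line in log_lines:
            user_id, url = line.rstrip("\n").split("\t")
            if url in CONTENT_SCREENS:
                pages_seen[user_id].add(url)
        return {user: len(pages) for user, pages in pages_seen.items()}

    def access_group(score):
        """Bin a Web access score into the groups used in this study."""
        if score >= 10:
            return "high"    # 10 or 11 content pages viewed
        if score >= 3:
            return "medium"  # 3 to 9 content pages
        return "low"         # 1 or 2 content pages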

Did students learn from the site? We first looked at whether students who used the website learned the material. We compared the pretest and posttest scores among students who (i) did the activity and viewed the website, (ii) did the activity and used text, and (iii) did no activity. Table 4 shows the means, standard deviations, and high and low scores for the three groups of students.

TABLE 4.

Pretest and posttest means, standard deviations, and high and low test scores for all students

Student group and parameter     Score (%)
                                Pretest   Posttest
No activity (n = 41)
    Mean                           33         25
    SD                             28         22
    High score                     78         78
    Low score                       0          0
Activity & text (n = 20)
    Mean                           38         71
    SD                             23         20
    High score                     78        100
    Low score                       0         22
Activity & Web (n = 35)
    Mean                           41         67
    SD                             30         23
    High score                     89        100
    Low score                       0         22

An analysis of variance (ANOVA), controlling for scores on the pretest, showed significant learning when students did an activity using either the website or text (F(2,94) = 50.0; P = 0.0001). However, the score differences between those who viewed the website and those who used text were not significant (P > 0.05).
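
For instructors who want to perform the same kind of analysis on their own classes, the following is a minimal sketch (hypothetical data, not our results or our original analysis code) of an ANOVA of posttest scores with the pretest as a covariate, using the statsmodels library.

    # Hypothetical data; illustrates an ANOVA of posttest scores controlling for
    # pretest scores (an analysis of covariance) across three treatment groups.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        "pretest":  [33, 38, 41, 20, 55, 10, 70, 45, 30, 25],
        "posttest": [25, 71, 67, 30, 80, 22, 95, 70, 28, 35],
        "group":    ["none", "text", "web", "none", "web",
                     "none", "web", "text", "none", "text"],
    })

    # Model posttest as a function of group, with pretest as a covariate; the
    # Type II ANOVA table reports an F statistic and P value for the group effect.
    model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
    print(anova_lm(model, typ=2))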

Does more exploration lead to more learning? We compared pretest and posttest scores between students who viewed most of the website (high Web access score) and those who looked at very little of the site (low Web access score). The means, standard deviations, and high and low scores are presented in Table 5.

TABLE 5.

Pretest and posttest means, standard deviations, and high and low test scores for students with different Web access scores

Student group and parameter     Score (%)
                                Pretest   Posttest
Low Web access (n = 10)
    Mean                           43         54
    SD                             29         23
    High score                     78         89
    Low score                       0         22
High Web access (n = 18)
    Mean                           39         76
    SD                             32         20
    High score                     89        100
    Low score                       0         33

An ANOVA of the posttest scores (using the pretest as a covariate) as a function of the Web access score showed a significant difference between those who viewed most of the website and those who viewed little of the site (F(1,29) = 11.06; P = 0.002).

The mean posttest score of students who looked at most of the website (Table 5) was higher than the mean posttest score of students who used text (Table 4), but the differences were not significant (P > 0.05).

DISCUSSION

We gathered information on a number of factors to see if they had any influence on whether students learned about biofilms using the website. Despite the diversity of our classroom in terms of year, gender, and computer use, the vast majority of students in our class had experience with computers and were comfortable using the Web as a source of information. While males and females used computers in different ways (females did more word processing and e-mail while males did more programming and video games), we found no differences between how much males and females enjoyed the site or its ease of use. This suggests that, in this case, the computer as a mechanism of instructional delivery was not a barrier to learning.

In addition, information that we gathered both during our formative evaluation and during this study indicated that students found the website easy to navigate and enjoyable. This suggests that the website itself was also not a barrier to learning.

Because students used the CBI independently, we asked students how they preferred to learn. There were distinct differences between genders, with females preferring interactive group activities like fieldwork and discussion groups and males preferring independent learning and autotutorials. However, these differences did not seem to influence whether students were able to learn from the website.

What factors did influence learning in this study? Clearly, students learned about biofilms by doing the activity, whether they got their information from text or from the website. We found no significant difference in learning between those who used the website and those who used comparable text. This result can be explained by the nature of the material and the nature of the text. Most of the information in this site dealt with facts about biofilms: what they are, how they form, and some consequences of their formation. The text was prepared specifically for this experiment by copying images and text from the website. The content of the text was identical to that of the website; what differed were the context (the Bulging Cannes Biofilm Festival) and the delivery mechanism (computer versus paper). This study shows that a well-developed website is at least as good as comparable text in delivering factual information.

However, we knew from the log files that students were using the website to various degrees. We were concerned that students would not explore the entire site, so we incorporated humor, creative graphics, and images to motivate users. Still, some just perused the introductory tutorial page while others explored every nook and cranny. Our analysis of the log files allowed us to compare the pretest and posttest scores of those who looked at most of the website with the scores of those who looked at only a few pages. The results showed a significant difference in learning for those students who viewed most of the website. It is important to note that the posttest was given a week after the assignments were due and that students did not know they were going to be tested on the biofilm material and therefore did not study. What we measured was the information retained after 1 week. Because this site was designed to introduce students to new factual information (a relatively low-order thinking skill), we feel our learning objectives were met.

The ranges and standard deviations in this study are relatively high. We believe this reflects the academic diversity typically found in this course. Students range from freshmen to seniors and graduate students. The course attracts biology majors (some with relatively strong backgrounds) as well as arts and engineering majors (many with relatively weak biology backgrounds). Some students are very focused on the course because it is required; others don’t take it quite so seriously.

The mean posttest score of students who looked at most of the website (Table 5) was higher than the mean posttest score of students who used text (Table 4), but the differences were not significant. Therefore, we cannot conclude that the website was better than text in helping students learn the material. A larger sample size is needed to substantiate these findings.

An obvious question from these results is, why use computers? For us, the website offered an opportunity to develop unique material that was suited for the purposes of our course. We were able to incorporate color and intriguing images in a way that was not possible for us using text. Distribution of this material was free for our course, as all students on our campus have access to computers. This information is easier for us to update than text, and students enjoyed viewing it.

CBI can also provide advantages over text for learning different kinds of information. A previous study by one of us (13) showed that students who viewed animated autotutorials better understood complex, dynamic processes (in this case, how respiration generates ATP) than students who used text-based information. Others have found that animation appears most useful in explaining dynamic processes (5, 16) and in improving recall ability (17).

In addition, computer-based simulations show promise in helping students understand interactive relationships between elements. Simulations appear useful for teaching content that involves change (18). Students who are inclined to explore software freely learn more in inductive, experimentally based learning environments (15).

CONCLUSION

The goal of this study was to compare the performances of students who used a computer-based hypermedia tutorial to those who used comparable text or who had no support material at all. Our hypothesis was that the CBI would be at least as effective as text-based information, but only if students explored the entire website. We found this to be true.

In addition, we examined some of the barriers to exploration by gathering information about each student’s computer experience and feelings toward the website. We found that students in our classes were comfortable using computers and our particular site. However, this may not be true in all classes and on all campuses, and instructors who are using CBI should be cognizant of how the computer interface can influence student learning, particularly when dealing with older populations.

This study focused on a simple CBI that was relatively easy to develop. Similar college-level course materials are being posted on websites all over the world, most of them developed by educators, not professional Web designers. We need to be aware of the educational research that can suggest how to use different kinds of CBI to best meet the needs of our very different students (19). When using CBI in our classrooms, we should prove to ourselves, through observation, monitoring, and assessments, that our learning objectives are being met. When proper evaluation provides assurance that neither the computer nor the CBI itself is interfering with learning, CBI can be a flexible, intriguing, and useful teaching tool.

Acknowledgments

This work was supported in part by a grant from the Biofilm Curriculum Development Program of the American Society for Microbiology and by a grant from the AGET Small Grants Initiative Program from the College of Agricultural and Life Sciences at Cornell University.

We thank Joseph B. Yavitt for constructive comments on the manuscript and assistance with the statistical analysis. In addition, comments by anonymous reviewers were immensely helpful. Thanks also to undergraduate students Michael Fietz, Philip Chu, Scott Kachlany, and Adam Chanzan for their creative contributions to the Cornell University General Microbiology website.

REFERENCES

1. ASM Biofilm Image Project. 1998. [Online.] American Society for Microbiology. http://www.asmusa.org/edusrc/edu34.htm. [3 January 2000, last date accessed.]
2. Bio-Films: The Movies. 1998. [Online.] General Microbiology Course Site, Department of Microbiology, Cornell University. http://instruct1.cit.cornell.edu/courses/biomi290/Horror/Biofilmmenu.html. [3 January 2000, last date accessed.]
3. Bloom BS, Hastings JT, Madaus GF. Handbook of formative and summative evaluation of student learning. McGraw-Hill, New York, N.Y.; 1971.
4. Clark RE. Current progress and future directions for research in instructional technology. Educ Technol Res Dev. 1989;37(1):57–66. doi: 10.1007/BF02299046.
5. Crosby ME, Stelovsky J. From multimedia instruction to multimedia evaluation. J Educ Multimedia Hypermedia. 1995;4(2/3):147–167.
6. Gay G, Mazur J. The utility of computer tracking tools for user-centered design. Educ Technol. 1993;33(4):45–59.
7. Gibbs WJ. 1995. [Online.] Multi-media and computer-based instructional software: evaluation methods. http://www.gettysburg.edu/ir/ascue/Proceedings/gibbs2.html. [24 February 2000, last date accessed.]
8. Hutchings GA, Hall W, Briggs J, Hammond NV, Kibby MR, McKnight C, Riley D. Authoring and evaluation of hypermedia for education. Comput Educ. 1992;18(1–3):171–177. doi: 10.1016/0360-1315(92)90051-6.
9. Jones TH, Paolucci R. Research framework and dimensions for evaluating the effectiveness of educational technology systems on learning outcomes. J Res Comput Educ. 1999;32(1):17–27.
10. Kearsley G. Educational technology: a critique. Educ Technol. 1998;38(2):47–51.
11. Kulik JA. Meta-analytic studies of findings on computer-based instruction. In: Baker EL, O’Neil HF Jr, editors. Technology assessment in education and training. L. Erlbaum Associates, Hillsdale, N.J.; 1994. pp. 9–27.
12. Kumar DD, Hegelson SL, White AL. Computer technology-cognitive psychology interface and science performance assessment. Educ Technol Res Dev. 1994;42(4):6–16. doi: 10.1007/BF02298052.
13. Nicholls C, Merkel SM, Cordts ML. The effect of computer animation on students’ understanding of microbiology. J Res Comput Educ. 1996;28(3):359–371.
14. Park O. Visual displays and contextual presentations in computer-based instruction. Educ Technol Res Dev. 1998;46(3):37–50. doi: 10.1007/BF02299760.
15. Regian JW, Shute VJ. Evaluating intelligent tutoring systems. In: Baker EL, O’Neil HF Jr, editors. Technology assessment in education and training. L. Erlbaum Associates, Hillsdale, N.J.; 1994. pp. 79–94.
16. Reiber LP. Using computer graphics in science instruction with children. J Educ Psychol. 1990;82(1):135–140. doi: 10.1037/0022-0663.82.1.135.
17. Reiber LP, Boyce MJ, Assad C. The effects of computer animation on adult learning and retrieval tasks. J Computer-Based Instruct. 1990;17(2):46–52.
18. Reigeluth CM, Schwartz E. An instructional theory for the design of computer-based simulation. J Computer-Based Instruct. 1989;16(1):1–10.
19. Rowland P, Stuessy CL. Matching mode of CAI to cognitive style: an exploratory study. J Comput Math Sci Teaching. 1988;7(4):36–40.
20. Simonson MR, Thompson A. Educational computing foundations, 2nd ed. Merrill, New York, N.Y.; 1994.
21. Steinberg E. Computer assisted instruction: a synthesis of theory, practice and technology. L. Erlbaum Associates, Hillsdale, N.J.; 1991.
22. Szabo M, Poohkay B. An experimental study of animation, mathematics achievement and attitude toward computer-assisted instruction. J Res Comput Educ. 1996;28(3):390–402.
23. Teslow JL. Humor me: a call for research. Educ Technol Res Dev. 1995;43(3):6–28. doi: 10.1007/BF02300453.
24. Weller HG. Assessing the impact of computer-based learning in science. J Res Comput Educ. 1996;28(4):461–487.

