American Journal of Pharmaceutical Education. 2018 Sep;82(7):6321. doi: 10.5688/ajpe6321

Factors Affecting Student Time to Examination Completion

Adam M Persky a,b, Hannah Mierzwa a
PMCID: PMC6181161  PMID: 30323386

Abstract

Objective. To investigate factors (prior or current knowledge, metacognitive accuracy, and personality) that might impact the time it takes students to complete an examination.

Methods. On the final examination, the time to complete the examination was recorded. Prior to the course, students completed the five-factor personality assessment. During the semester, students completed four cumulative assessments that included prospective judgments of performance to improve their metacognitive accuracy. Measures of metacognitive accuracy were calculated from the difference between the students’ prospective judgments of performance and their actual assessment performance for the final examination. Two weeks prior to the final examination, students completed a cumulative assessment, which served as prior knowledge; this was similar in content to the final examination.

Results. The time to complete the final examination was significantly negatively correlated with examination score and positively correlated with agreeableness and degree of metacognitive bias. However, only current knowledge (β=-.35) and agreeableness (β=.12) predicted the time to complete the final examination. These two factors explained about 14% of the variability in completion times. Examining the scale for the time to complete the examination, there were some regional differences between the slowest, intermediate, and fastest completers.

Conclusion. Current knowledge and, to a lesser extent, pro-social behavior (agreeableness) influenced examination completion time. Metacognitive accuracy had limited predictability in time to complete the examination.

Keywords: pharmacokinetics, personality, metacognition, time on task

INTRODUCTION

Self-regulation is the ability to act in one’s long-term best interest.1 Self-regulation is a critical influencer of time-to-complete a given task (eg, an examination) because self-regulation directs motivation and attention (eg, what material needs attention).2,3 One’s ability to self-regulate is an interaction of prior experiences, personality, environment, and ability to monitor one’s own thinking (ie, metacognition). There is very little research on factors that determine the self-regulation of students during an examination. This exploratory analysis examines potential factors that might influence how long it takes students to complete a high stakes, cumulative examination with time to completion being a measure of self-regulation.

Learners perform best when they approach a learning situation with prior knowledge related to the situation.4,5 With more prior knowledge, a learner can detect and recognize features, generate a solution and perform more quickly and accurately; this is the nature of expertise.4,5 As such, students entering an examination with high prior knowledge should complete examinations more quickly. In this study, prior knowledge was assessed with a cumulative, low stakes assessment that occurred two weeks prior to the final examination.

As learners progress toward expertise, they gain more knowledge and develop better metacognitive monitoring, allowing them to judge how well they know a given fact or can complete a given skill. Metacognition is a central practice in self-regulated learning. The accuracy of metacognitive monitoring can be affected by various factors, including problem or test difficulty, background or prior knowledge, and desired performance level.6-8 More metacognitively accurate individuals may have extended testing times due to increased processing demands; that is, they engage in more thinking when faced with a problem.9,10 For example, students may use how quickly they could answer a question as an anchor in judging how well they did on an examination. However, the speed with which answers come to mind does not relate to performance, and some evidence suggests that slower answers were correct more often.11,12 Even if students had some level of metacognitive monitoring during the learning process, they may not apply it in a testing environment.7,13 These students could be faster or slower in completing an examination depending on their metacognitive ability. In this study, metacognitive accuracy was assessed by bias and absolute bias. Bias is the difference between the predicted score and actual performance and includes both directionality (ie, under- or over-confident) and the magnitude of that directionality; absolute bias captures the magnitude, or accuracy, only. It is expected that learners with higher degrees of overconfidence (predicted score > actual score) may proceed more quickly through an examination compared with students with stronger degrees of underconfidence (predicted score < actual score).

Time to complete an examination requires time management skills, and these skills may be the driver for the link between personality traits and academic achievement.14 Personality research has established a five-factor structure with the dimensions of openness to experience, conscientiousness, extraversion, agreeableness, and emotional stability (or neuroticism).15 Numerous studies have examined the impact of the five-factor personality assessment on academic achievement, and of these studies, conscientiousness correlates the most with academic achievement.16-20 It has been hypothesized that this relationship is due to the use of better time management strategies, and there is a positive relationship between higher levels of time management skills and higher levels of conscientiousness.21,22 Openness to experience appears to be positively correlated with academic achievement.20 Emotional stability (or neuroticism) and achievement have shown a negative correlation, suggesting that elevated emotional instability places learners at risk of reduced academic achievement. Agreeableness and extraversion have shown mixed and inconclusive results in relation to academic achievement.20,23 As such, there may be a relationship between the five-factor personality traits and time management during examination completion. In this study, the five-factor personality model was used because of its reliability, validity, and prior research investigating its relationship to academic achievement.24

Taken together, the authors examined the impact of prior knowledge, current knowledge, metacognitive accuracy and personality on time to complete an examination. The findings from this study can be used to help develop better study or test taking strategies for students or allow for better time allocation for examination taking.

METHODS

Participants were student pharmacists enrolled in the Doctor of Pharmacy (PharmD) program at a large, public university in the southeastern United States. In that program, pharmacokinetics was a required 3-credit course in which students met once a week for three hours; the course format has been described previously.25 During a two-year period, 295 students were enrolled in the course, spread across two campuses that were synchronously video-conferenced. Upon admission to the program, the average age of students was 23 years (range 19-50), and approximately 65% were female. The mean grade point average upon admission was 3.5 (out of 4.0), the mean Pharmacy College Admission Test (PCAT) score was approximately 87%, and approximately 80% of students had a prior degree.

The course consisted of two parts. Part one used team-based learning (TBL) and occurred over the first 10 weeks of the semester. There were five main sections (pharmacodynamics, single dose kinetics, approach to steady-state, violations of the one-compartment model, and physiologic influences on clearance) that accounted for nine different topics. Each section started with the readiness assurance process (ie, individual assessment followed by team assessment), with the remaining time dedicated to cases. Part two of the course occurred during the last five weeks of the semester and involved students completing one out-of-class case and one in-class case, both of which integrated the course material.

Prior Knowledge. Four 36-question cumulative assessments were administered during the course. These examinations were used to improve learning through the “testing effect” and to allow students to improve their metacognitive monitoring. Each of these examinations sampled from the nine major topics covered in the course. The first three assessments were unique, based on a random drawing of four questions from each of nine question pools (corresponding to the nine topics), for a total of 36 questions on each test. Each pool contained 12 to 20 questions that had been used in previous years of the course. Over 90% of the questions were at the level of application or higher according to Bloom’s Cognitive Taxonomy.26 These first three examinations were administered online and were low stakes. The first examination (week 1) provided a baseline of performance; the second examination (week 9) occurred at the completion of new content; the third examination (week 14) occurred midway through the review and integration activities in the second part of the course. This third assessment was used as the measure of prior knowledge.
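The pool-sampling rule described above (four questions drawn at random from each of nine topic pools) can be sketched in a few lines. This is an illustrative reconstruction, not the course's actual software; all function and variable names are assumptions.

```python
import random

def build_assessment(question_pools, per_pool=4, seed=None):
    """Assemble one cumulative assessment by sampling `per_pool`
    questions (without replacement) from each topic pool."""
    rng = random.Random(seed)
    exam = []
    for topic in sorted(question_pools):
        exam.extend(rng.sample(question_pools[topic], per_pool))
    return exam

# Illustrative pools with placeholder question IDs (12 questions each,
# matching the stated 12-to-20 pool sizes).
pools = {f"topic_{i}": [f"t{i}_q{j}" for j in range(12)] for i in range(1, 10)}
exam = build_assessment(pools, seed=42)
print(len(exam))  # 9 pools x 4 questions = 36
```

Because a fresh seed (or none) is used per student, each generated assessment is unique while still covering all nine topics evenly.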

Current Knowledge Assessment. The fourth examination was the final examination (week 16), which consisted of newly created questions rather than questions drawn from the existing question pools. This examination was administered in class and employed an answer-until-correct format (IF-AT, Epstein Educational Enterprise) in which students received immediate feedback on the accuracy of each response.27 Students used the same answer-until-correct format for the readiness assurance process during the TBL section of the course (weeks 1 through 10). The final examination was used as the measure of current knowledge. Scores were examined with and without partial credit for two reasons. First, the first three examinations offered no partial credit, so scores without partial credit are easier to compare across examinations. Second, learners would have extended testing times when correcting errors.
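The two scoring views (with and without partial credit) can be sketched as follows. The article does not state the partial-credit weights used, so the weights below (full credit on the first attempt, half on the second, none afterward) are purely an illustrative assumption, not the course's actual rubric.

```python
def ifat_score(attempts_to_correct, weights=(1.0, 0.5, 0.0, 0.0)):
    """Credit for a question answered correctly on the Nth attempt.
    The weight schedule is a hypothetical example, not the study's rubric."""
    idx = min(attempts_to_correct - 1, len(weights) - 1)
    return weights[idx]

def exam_score(attempt_counts, partial_credit=True):
    """Fraction of total credit earned across all questions."""
    if partial_credit:
        return sum(ifat_score(n) for n in attempt_counts) / len(attempt_counts)
    # Without partial credit, only first-attempt correct answers count.
    return sum(n == 1 for n in attempt_counts) / len(attempt_counts)

counts = [1, 1, 2, 3, 1]           # attempts needed on five questions
print(exam_score(counts))          # 0.7 under these example weights
print(exam_score(counts, False))   # 0.6 (three of five first-attempt)
```

Comparing the two numbers for the same student illustrates why the authors examined both: the gap between them reflects time spent correcting errors.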

Self-regulation Time to Completion. During the final examination, either the students or the instructor recorded the clock time the examination was submitted. Duration to complete the examination was based on this recorded clock time.

Metacognitive Monitoring. At the start of each of the four examinations, students made a global prediction of their performance: “On this assessment, I expect to receive a __%”. A bias score was calculated (predicted score – actual score) to determine degree of overconfidence (positive values of bias) and underconfidence (negative values of bias).28 An absolute bias score (the absolute value of the difference between predicted and actual performance) was calculated to determine the students’ metacognitive accuracy.28 Students were asked to make these judgments on each examination to help develop their metacognitive monitoring skills. For this study, only the measures of bias and absolute bias on the final examination were used.
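The two calibration measures follow directly from the definitions above (predicted minus actual for bias; its absolute value for absolute bias). A minimal sketch with illustrative scores:

```python
def bias(predicted, actual):
    """Signed bias: positive = overconfident, negative = underconfident."""
    return predicted - actual

def absolute_bias(predicted, actual):
    """Magnitude of miscalibration, ignoring direction."""
    return abs(predicted - actual)

# Example scores (percent); values are illustrative only.
print(bias(85, 78))           # 7  -> overconfident by 7 points
print(bias(70, 78))           # -8 -> underconfident by 8 points
print(absolute_bias(70, 78))  # 8  -> miscalibrated by 8 points either way
```

Note that two students with opposite bias (+8 and -8) have identical absolute bias, which is why the study tracked both measures.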

Personality Assessment. Prior to the semester, students completed the five-factor personality (eg, Big Five, NEO-I) test that was freely available (http://www.outofservice.com/bigfive/) to assist in team assignments. Scores were recorded on a 0% to 100% scale with higher numbers indicating a stronger trait.

All variables are summarized in Table 1. Data were graphed to examine the relationship between time to complete the examination and each factor of interest. Based on visual inspection, data were analyzed both as an entire cohort and according to completion-time tertiles to discern regional effects. Spearman correlations were performed on all variables using SPSS software (IBM Corporation, Armonk, NY). A one-way ANOVA with Tukey post-hoc test was used to compare tertiles. Finally, multiple linear regression was performed using entry criteria on the entire cohort and each tertile. This study was deemed exempt from review by the Institutional Review Board. Statistical significance was set at p<.05.
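Two steps of this pipeline, the tertile split and the Spearman correlation, can be reproduced outside SPSS with only the standard library. The sketch below uses synthetic, illustrative data; the real study's values are in Tables 2 through 4.

```python
from statistics import mean

def ranks(xs):
    """Rank values 1..n, averaging ranks across ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def tertiles(times):
    """Return index lists for the fastest, middle, and slowest thirds."""
    idx = sorted(range(len(times)), key=lambda i: times[i])
    n = len(idx)
    return idx[: n // 3], idx[n // 3 : 2 * n // 3], idx[2 * n // 3 :]

# Synthetic completion times (minutes) and exam scores (percent).
times = [55, 90, 75, 120, 60, 110, 80, 95, 100]
scores = [92, 70, 85, 65, 90, 68, 80, 74, 72]
print(round(spearman(times, scores), 2))  # strongly negative, as in the study
fast, middle, slow = tertiles(times)
```

The negative rho on this toy data mirrors the study's headline finding that longer completion times accompany lower scores.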

Table 1.

Summary of Study Variables


RESULTS

There were 266 participants with complete data sets. The full correlation matrix can be found in Appendix 1. Overall, time to complete the final examination was negatively correlated with final examination performance with and without partial credit (Table 2). Time to complete the final examination also was positively correlated with agreeableness and metacognitive bias. These findings suggest students who completed the examination more quickly performed better, had a lower degree of the agreeableness trait, and had higher degrees of underconfidence (or lower degrees of overconfidence). However, there were some regional differences. In the fastest tertile, there was a significant negative relationship between time to complete the final examination and final examination score with and without partial credit; there was no relationship with personality or metacognitive monitoring. In the middle tertile, agreeableness was associated positively with time to complete the final examination. For the slowest tertile, there was a significant negative association between final examination completion time and performance with and without partial credit.

Table 2.

Correlations of Various Factors with Time to Complete the Final Examination


Each time tertile was compared for significant differences on each variable of interest (Table 3). The fastest tertile completed the examination on average an hour earlier than the slowest third and approximately a half-hour earlier than the middle third. The fastest students scored less than half a letter grade higher than the slowest students. There were no differences in personality or metacognitive monitoring between any of the tertiles.

Table 3.

Summary of ANOVA Results


Finally, a regression analysis was performed on each time tertile and the entire cohort (Table 4). For the entire cohort, agreeableness and current knowledge significantly predicted time to complete the final examination, explaining 14% of the variability; these results parallel the correlation analysis. Metacognitive accuracy, as measured by absolute bias, predicted time to complete the examination for the early completers only. However, agreeableness predicted time to complete the examination for the middle tertile. Finally, current knowledge and metacognitive bias predicted time to complete the examination for the slowest tertile.

Table 4.

Summary of Multiple Linear Regression Results. Data shown as standardized beta (p value)


DISCUSSION

This is one of the first studies to examine self-regulation as a function of the time to complete an authentic, in-class assessment. Factors including current and prior knowledge, metacognitive accuracy, and personality traits were examined. Each of these factors appears to play some role in the self-regulation of completing an examination.

The first hypothesis was that students with a higher level of prior knowledge, based on in-semester assessments, would complete the task more quickly. Prior knowledge did not affect time to complete the examination; current knowledge seemed more influential. Overall, the authors found that current knowledge predicted time to complete the examination. The discrepancy between the effects of prior and current knowledge could be caused by students studying during the two weeks leading up to the final examination. This acute study and practice may lead to additional learning or a higher level of retrieval strength and, thus, higher overall performance. These learners may move through examination questions with more confidence because answers come to mind more quickly, reducing time per question and overall review time.

The authors hypothesized that metacognitive monitoring would play a role in examination completion time, but its effects were limited. The limited results suggest that students who tend toward overconfidence (or less underconfidence) completed the task more slowly. This may relate to their use of study strategies prior to the examination. Typically, the lowest-performing students tend to show the greatest overconfidence (ie, the “unskilled-and-unaware” effect), although this was not true in this study; there was no statistical difference in metacognitive bias or accuracy.29 However, this overconfidence can have negative effects on the efficacy of students’ short-term study behaviors, such as underpreparing for examinations or massing their practice instead of spacing it appropriately.29,30 If students massed their practice, they would have high retrieval strength for the knowledge and skills but low storage strength, and thus lower long-term retention.31 Their ability to cram can still lead to high short-term performance. However, the metacognitive measures had limited predictive utility. This may be due to the answer-until-correct format of the final assessment. Students were aware of how they were scoring as they went through each question, eliminating time that would have been spent second-guessing or re-reading questions after providing an original answer. Judgment of their understanding was provided externally as feedback immediately after answering a question and thus was not dependent on internal metacognitive monitoring. In addition, students practiced their metacognitive judgments and became more accurate despite the presence of inter-student variability (data not shown). Metacognitive monitoring may have been a more important factor if students had not practiced their judgments over the course of the semester and had not received external feedback from the examination format.

The final factor investigated was personality. Within this study, only agreeableness seemed to play a role. Individuals high in agreeableness are characterized as kind, appreciative, generous, cooperative, and friendly; those who score low are characterized as fault-finding, quarrelsome, and thankless.24 This trait may explain time to complete the examination based on its relationship to social desirability, self-regulation, and boredom.32-37 On one hand, the “social do” may dictate when students should turn in their examination rather than when they have actually finished it; that is, “I should turn it in after a certain time, but I should not be the first to turn it in.” Students may feel social pressure regarding the appropriate time to submit an examination. This might be modulated by the inverse relationship between agreeableness and boredom, with highly agreeable students experiencing less boredom and therefore persisting through a task.33 In addition, agreeableness is related to self-control and effortful control; individuals higher in agreeableness can modulate their emotions and persist through tasks like completing an examination.36 Thus, in this study, students with higher levels of agreeableness may have completed their examination more slowly because of social desirability and their ability to modulate emotions and persist.

The factors explored explained only a moderate amount of the variability, a reminder that human behavior is often quite complex. There are three potential confounders within the study: problem difficulty, working memory capacity, and metacognitive practice. The first issue is problem difficulty, or how easy or hard a problem is. Hoffman and colleagues found that metacognitive prediction depends on problem difficulty, with metacognitive prompting and self-efficacy interacting more strongly on more complex math problems.38 This may or may not be a factor here, given the high practice levels of students prior to the final examination, which would lead to more automaticity and high overall performance. Another potential factor is working memory capacity. In one study, higher working memory capacity was correlated with faster improvements in response time.39 Working memory capacity was not investigated within this study but may explain another portion of the variance. Finally, students practiced their metacognitive judgments, so all students became more accurate over time. It is unclear whether the results would change if students did not practice these judgments, which could lead to larger differences in accuracy and larger degrees of underconfidence and overconfidence.40,41

CONCLUSION

Current knowledge and, to a lesser extent, pro-social behavior (ie, agreeableness) influenced examination completion time. Metacognitive monitoring had limited ability to predict time to complete the examination. Regions of the performance-versus-time relationship (eg, slow vs fast) may be governed by different drivers of self-regulation, and these relationships may be complex. In general, instructors may have to help students develop behaviors to overcome natural tendencies in their personality to enhance performance. More importantly, time-limited examinations may negatively affect students with certain personality traits.

ACKNOWLEDGMENTS

The authors would like to thank Anita Scottie, PharmD candidate, and Kathryn Fuller, PharmD, for their assistance in preparing this manuscript.

Appendix 1. Full Correlation Matrix for All Included Variables


REFERENCES

1. Bjork RA, Dunlosky J, Kornell N. Self-regulated learning: beliefs, techniques, and illusions. Ann Rev Psychol. 2013;64(1):417–444. doi: 10.1146/annurev-psych-113011-143823.
2. Butler DL, Winne PH. Feedback and self-regulated learning: a theoretical synthesis. Rev Educ Res. 1995;65(3):245–281.
3. Stone NJ. Exploring the relationship between calibration and self-regulated learning. Educ Psych Rev. 2000;12(4):437–475.
4. Ericsson KA, Charness N, Hoffman RR, Feltovich PJ, editors. The Cambridge Handbook of Expertise and Expert Performance. Cambridge, UK: Cambridge University Press; 2006.
5. Persky AM, Robinson J. Expert novice difference. Am J Pharm Educ. In press.
6. Pressley M, Ghatala ES. Delusions about performance on multiple-choice comprehension tests. Int Read Assoc. 1988;23(4):454–464.
7. Schraw G, Roedel TD. Test difficulty and judgment bias. Mem Cognition. 1994;22(1):63–69. doi: 10.3758/bf03202762.
8. Nietfeld JL, Schraw G. The effect of knowledge and strategy training on monitoring accuracy. J Educ Res. 2002;95(3):131–142.
9. Weaver CA. Constraining factors in calibration of comprehension. J Exp Psychol Learn Mem Cognit. 1990;16(2):214–222.
10. Maki RH, Foley JM, Kajer WK, Thompson RC, Willert MG. Increased processing enhances calibration of comprehension. J Exp Psychol Learn Mem Cognit. 1990;16(4):609–616.
11. Serra MJ, Dunlosky J. Does retrieval fluency contribute to the underconfidence-with-practice effect? J Exp Psychol Learn Mem Cognit. 2005;31(6):1258–1266. doi: 10.1037/0278-7393.31.6.1258.
12. Benjamin AS, Bjork RA, Schwartz BL. The mismeasure of memory: when retrieval fluency is misleading as a metamnemonic index. J Exp Psychol Gen. 1998;127(1):55–68. doi: 10.1037//0096-3445.127.1.55.
13. Winne PH, Jamieson-Noel D. Exploring students’ calibration of self reports about study tactics and achievement. Contemp Educ Psych. 2002;27(4):551–572.
14. Credé M, Kuncel NR. Study habits, skills, and attitudes: the third pillar supporting collegiate academic performance. Perspect Psychol Sci. 2008;3(6):425–453. doi: 10.1111/j.1745-6924.2008.00089.x.
15. McCrae RR, Costa PT. Validation of the five-factor model of personality across instruments and observers. J Pers Soc Psychol. 1987;52(1):81–90. doi: 10.1037//0022-3514.52.1.81.
16. Chamorro-Premuzic T, Furnham A. Personality traits and academic examination performance. Eur J Pers. 2003;17(3):237–250.
17. Wolfe RN, Johnson SD. Personality as a predictor of college performance. Educ Psychol Meas. 1995;55(2):177–185.
18. O’Connor MC, Paunonen SV. Big five personality predictors of post-secondary academic performance. Pers Individ Diff. 2007;43(5):971–990.
19. Poropat AE. A meta-analysis of the five-factor model of personality and academic performance. Psychol Bull. 2009;135(2):322–338. doi: 10.1037/a0014996.
20. Duff A, Boyle E, Dunleavy K, Ferguson J. The relationship between personality, approach to learning and academic performance. Pers Individ Diff. 2004;36(8):1907–1920.
21. Liu OL, Rijmen F, MacCann C, Roberts R. The assessment of time management in middle-school students. Pers Individ Diff. 2009;47(3):174–179.
22. Kelly WE, Johnson JL. Time use efficiency and the five-factor model of personality. Educ. 2005;125(3):511–515.
23. Eysenck HJ, Cookson D. Personality in primary school children. 1. Ability and achievement. Brit J Educ Psychol. 1969;39(2):109–122. doi: 10.1111/j.2044-8279.1969.tb02054.x.
24. John OP, Naumann LP, Soto CJ. Paradigm shift to the integrative big-five trait taxonomy: history, measurement, and conceptual issues. In: John OP, Robins RW, Pervin LA, editors. Handbook of Personality: Theory and Research. New York, NY: Guilford Press; 2008. pp. 114–158.
25. Persky AM, Henry T, Campbell A. An exploratory analysis of personality, attitudes, and study skills on the learning curve within a team-based learning environment. Am J Pharm Educ. 2015;79(2):Article 1. doi: 10.5688/ajpe79220.
26. Anderson LW, Krathwohl DR. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Complete Edition. New York, NY: Longman; 2001.
27. Persky AM, Pollack GM. Using answer-until-correct examinations to provide immediate feedback to students in a pharmacokinetics course. Am J Pharm Educ. 2008;72(4):Article 83. doi: 10.5688/aj720483.
28. Dunlosky J, Thiede KW. Four cornerstones of calibration research: why understanding students’ judgments can improve their achievement. Learn Instruct. 2013;24(1):58–61.
29. Serra MJ, DeMarree KG. Unskilled and unaware in the classroom: college students’ desired grades predict their biased grade predictions. Mem Cognit. 2016;44(7):1127–1137. doi: 10.3758/s13421-016-0624-9.
30. Son LK, Simon DA. Distributed learning: data, metacognition, and educational implications. Educ Psychol Rev. 2012;24(3):379–399.
31. Bjork RA, Bjork EL. A new theory of disuse and an old theory of stimulus fluctuation. In: Estes WK, Healy AF, Kosslyn SM, Shiffrin RM, eds. From Learning Processes to Cognitive Processes: Essays in Honor of William K. Estes. Vol 2. Hillsdale, NJ: L. Erlbaum Associates; 1992.
32. Buratti S, Allwood CM, Kleitman S, et al. First- and second-order metacognitive judgments of semantic memory reports: the influence of personality traits and cognitive styles. Metacognit Learn. 2013;8(1):79–102.
33. Sulea C, van Beek I, Sarbescu P, Virga D, Schaufeli WB. Engagement, boredom, and burnout among students: basic need satisfaction matters more than personality traits. Learn Individ Diff. 2015;42:132–138.
34. Washburn DA, Smith JD, Taglialatela LA. Individual differences in metacognitive responsiveness: cognitive and personality correlates. J Gen Psychol. 2005;132(4):446–461. doi: 10.3200/GENP.132.4.446-461.
35. Cortes K, Kammrath LK, Scholer AA, Peetz J. Self-regulating the effortful “social dos”. J Pers Soc Psychol. 2014;106(3):380–397. doi: 10.1037/a0035188.
36. Jensen-Campbell LA, Rosselli M, Workman KA, Santisi M, Rios JD, Bojan D. Agreeableness, conscientiousness, and effortful control processes. J Res Personal. 2002;36(5):476–489.
37. Peterson CH, Casillas A, Robbins SB. The student readiness inventory and the big five: examining social desirability and college academic performance. Personal Individ Diff. 2006;41(4):663–673.
38. Hoffman B, Spatariu A. The influence of self-efficacy and metacognitive prompting on math problem-solving efficiency. Contemp Educ Psychol. 2008;33(4):875–893.
39. Bo J, Jennett S, Seidler RD. Working memory capacity correlates with implicit serial reaction time task performance. Exp Brain Res. 2011;214(1):73–81. doi: 10.1007/s00221-011-2807-8.
40. Hacker DJ, Bol L, Horgan DD, Rakow EA. Test prediction and performance in a classroom context. J Educ Psychol. 2000;92(1):160–710.
41. Hartwig M, Persky AM. Repeated cumulative testing in a classroom: underconfidence-with-practice and diminishing absolute bias for authentic course materials. In development.
