2025 Apr 4;68(5):450–463. doi: 10.1002/ajim.23719

Differences in the Effectiveness of Three OHS Training Delivery Methods

Lynda S Robson 1,2, Cynthia Chen 1, Cameron A Mustard 1,3, Faraz Vahid Shahidi 1,3, Victoria Landsman 1,3, Peter M Smith 1,3, Aviroop Biswas 1,3
PMCID: PMC11982432  PMID: 40183197

ABSTRACT

Background

Methods of delivering occupational safety and health (OSH) training have shifted from in‐person to online. Widespread delivery of a standardized OSH training course in three modalities in the province of Ontario, Canada allowed measurement of differences in their effectiveness.

Methods

Learners (N = 899) self‐selected into face‐to‐face (F2F) instructor‐led learning, online instructor‐led synchronous distance learning, or online self‐paced e‐learning. Pre‐ and post‐training surveys collected information on knowledge and other measures. Multiple regression analyses compared modalities on knowledge achievement (0%–100% scale; the primary outcome), engagement, perceived utility, perceived applicability, self‐efficacy, and intention‐to‐use.

Results

F2F learners achieved a statistically significant 2.5% (95% CI: 0.3%, 4.7%) higher post‐training knowledge score than distance learners (Cohen's d = 0.23, which is considered small). A statistically insignificant difference of 0.4% (95% CI: −1.4%, 2.3%) was seen between e‐learners and distance learners. Collaborating training providers regarded these differences as not meaningful in practice. Statistically significant differences between modalities were seen for engagement, perceived utility, and self‐efficacy. Scores of F2F learners were more favorable than scores of distance learners, which were, in turn, more favorable than scores of e‐learners.

Conclusions

This study provides evidence that there are small to no differences among F2F, distance, and e‐learning in their ability to ensure knowledge achievement among learners. This finding is likely generalizable to other well‐designed short‐term OSH training aimed at acquiring new knowledge. More research is needed to understand whether there are important differences across these modalities in basic OSH skill acquisition and transfer of learning to the workplace.

Keywords: effectiveness, evaluation, knowledge, online, training

1. Introduction

Over the past three decades, methods of delivering work‐related training, including occupational safety and health (OSH) training, have shifted from in‐person to online, especially so during the COVID‐19 pandemic. However, in the aftermath of the pandemic, there is uncertainty about whether changes in training delivery precipitated by the COVID‐19 emergency should be institutionalized or reversed. The question of whether there are differences in the effectiveness of in‐person and online modalities thus arises.

There are no reviews available based on studies that specifically compare online and face‐to‐face (F2F) methods in the delivery of OSH training [1]. There are, however, several meta‐analytic studies outside of the OSH field to draw upon for relevant evidence about training modality differences in effectiveness. These reviews, some based on randomized controlled trials, synthesize evidence on the effectiveness of online learning (asynchronous and/or synchronous) in comparison to F2F among adults already in or preparing for employment. The pooled effect size estimates for achieved knowledge or skills for online versus F2F have been found to be either small and not statistically significant [2, 3, 4], or small, statistically significant, and favoring online delivery [5, 6]. The exception is a study by Pei and Wu [7], which showed a large advantage of online learning in the education of medical students. Notably, high statistical heterogeneity was evident, a common finding in these reviews. A further limitation of the reviews is that asynchronous and synchronous online instruction methods have usually been examined together rather than separately.

The extent to which evidence from reviews of non‐OSH training can be generalized to OSH training is uncertain for at least two reasons: different contexts and different study populations. In non‐OSH studies, the training is delivered as part of one's primary activity in a course or job, whereas in OSH studies, the training is often delivered as something secondary or auxiliary to one's course or job. As for study populations, non‐OSH reviews are based almost entirely on studies of postsecondary students, nurses, and doctors, who are more highly educated, younger, and more often from the health care sector than the diverse populations who receive OSH training. Moreover, the authors are aware of few OSH‐focused primary studies that compare online or other e‐learning methods to F2F methods [8, 9, 10, 11, 12]. In sum, there is a need for more OSH‐focused research that compares the effectiveness of different training modalities among diverse groups of workers.

The following study helps address these research gaps. It assesses differences in the effectiveness of three ways of delivering an OSH course to workers who are diverse in their age, education, occupation, and industrial sector. The Joint Health and Safety Committee (JHSC) Certification Part 1 course is the first of two courses required by the Ministry of Labour, Immigration, Training & Skills Development (MLITSD) to become a certified JHSC member in the province of Ontario, Canada. Most workplaces in Ontario with twenty or more employees are required to have a JHSC, and at least one worker representative and one employer representative of the JHSC need to be certified. The course's content, delivery, assessment, and provider quality have been standardized through provincial requirements and accreditation processes for courses and training providers [13, 14].

There were three research questions addressed by this study. The primary one was:

  • How do F2F instructor‐led learning, online instructor‐led synchronous distance learning, and online self‐paced e‐learning training delivery methods differ in the post‐training knowledge achievement of Ontario workers?

Two secondary research questions were:

  • Which other factors are associated with post‐training knowledge achievement (after accounting for the training delivery method)?

  • How do the three training delivery methods differ in other immediate post‐training outcomes (learner engagement, perceived utility, perceived applicability, self‐efficacy, intention‐to‐use) among these workers?

2. Methods

The study design was quasi‐experimental [15]. Recruitment to the study took place after learners were self‐selected into one of three training delivery options under usual real‐world conditions. Measurement with a survey took place before and after the training. Statistical control of potential confounding was applied in the analysis. All study methods were approved by the Health Sciences Research Ethics Board of the University of Toronto.

2.1. OSH Training Course With Three Delivery Methods

The learning objectives of the JHSC Certification Part 1 training include knowledge and skill acquisition in the following areas: fundamental OSH concepts; OSH legislation and its requirements of employers, workers, and JHSCs; best practices for JHSC effectiveness; hazard identification and risk control; workplace inspections; reporting of incidents; and accessing OSH information. The course standard also stipulates that the course must use adult learning principles, instructional writing principles, good graphic design, a variety of teaching aids, and a high degree of instructor‐learner or e‐learning‐learner interactivity. Final learner assessment in the course uses standardized multiple‐choice/true‐false tests with a requirement of 75% of items answered correctly to pass.

Three ways of delivering JHSC Certification Part 1 training were studied: F2F instructor‐led learning, online instructor‐led synchronous distance learning, and online self‐paced e‐learning training. They will be referred to as “F2F,” “distance,” and “e‐learning,” respectively. F2F and distance learning involved three successive days of instruction, 6.5 h per day, for a total of 19.5 h; self‐paced e‐learning was designed to last 13 h in total, though learning could be spread over as much as a month. A more detailed description of the delivery methods can be found elsewhere [16].

For each combination of training provider and training modality, one to two observations were made by a member of the research team using a standardized form. For the F2F and distance learning conditions, the team member participated in classes as a silent observer; for the e‐learning condition, they engaged in the course as a learner. The observations confirmed that training was being implemented as described in the standard.

2.2. Selection of Training Providers

Researchers collaborated with three not‐for‐profit provincial training organizations that receive core funding from the MLITSD. Each organization has the mandate to deliver training, consulting, and information services to specific industrial sectors: Infrastructure Health & Safety Association (IHSA) to construction, transportation, warehousing, and utilities; Public Services Health & Safety Association (PSHSA) to public services, including health care, community, public safety, municipal, and government; and Workplace Safety & Prevention Services (WSPS) to manufacturing, agriculture, and private sector services. These three providers were selected from among the five providers similarly situated in the broader public sector, because they met the criterion of delivering JHSC Certification Part 1 training in at least two modalities in sufficient volume (100 or more participants for each modality) during the period of the study. This minimum was imposed to ensure robustness in the statistical analysis.

2.3. Recruitment of Learners

Recruitment took place between January and September 2022. For one provider, recruitment took place across three modalities, and for the other two providers, it took place across two modalities. Recruitment was either at the point of registration or at the start of training, depending on which fit best with usual provider practice. Potential participants were informed they would receive an honorarium of 60 Canadian dollars (around 46 USD at the time) if pre‐ and post‐training questionnaires were completed.

2.4. Survey Procedure

Surveys were usually administered outside of training sessions using the Qualtrics online survey platform. Learners accessed the pre‐training questionnaire using an Internet link provided either in a training registration form or in an e‐mail invitation from researchers, depending on the provider‐modality combination. Informed consent was obtained before data collection. On the morning following the completion of the course and its final assessment, F2F and distance learners were sent an email with a link to the post‐training survey. They were given until the end of the third day following training for form completion (after which the survey link expired). For e‐learners, who were invited to complete their pre‐training questionnaire at the outset of the course, an invitation to the post‐training survey was e‐mailed the day after they completed the pre‐training questionnaire. This second invitation instructed learners to retain the email and complete the post‐training questionnaire during one of the three days following the completion of all e‐learning modules and the final course assessment.

There was an exception to the pre‐training survey procedure described above for F2F and distance learning with one of the training providers. Questionnaires were completed in class at the beginning of the first day of training, which was already the practice of this provider. Recruitment to the study took place later in the first morning of the training following an introduction to it by the instructor. Learners who chose to participate consented to the release of their pre‐training questionnaire responses to researchers and to later contact by researchers for the online post‐training survey.

2.5. Survey Measurement

2.5.1. Knowledge

Surveys measured knowledge, the primary outcome, before and after training with multiple‐choice and true‐false questions. The pre‐ and post‐training questionnaires had 12 and 24 items, respectively. Survey questions were derived from items in three versions of instruments developed by MLITSD for providers to use in routine training assessment of learner achievement at the end of the course (with one question common to all versions and 30 questions unique to each). Questions selected for the research study were used either verbatim or with minor modifications. An example item is, “Which of the following is/are examples of physical hazards?” and possible responses were the following: noise, chemicals, both noise and chemicals, none of the above. Instructions at the start of the knowledge question sets told learners to rely only on themselves as a source of answers and to not look up information. The questions used post‐training (selected from versions 2 and 3) were different from those used pre‐training (selected from version 1). This means the questions are valid for between‐group comparisons, as done in this study, but not for within‐group pre‐post estimates of knowledge gain. (Different pre‐ and post‐training question sets were created both to fit with current provider practice and to minimize differential prior exposure of learners to survey questions through usual course testing practices.) A knowledge score was calculated as the percentage of responses to knowledge questions that were correct, with any missing responses counted as incorrect.
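As a concrete illustration of the scoring rule described above (percentage of items correct, with missing responses counted as incorrect), the following is a minimal sketch; the function name, answer key, and responses are hypothetical, not the study's actual scoring code.

```python
def knowledge_score(responses, answer_key):
    """Percentage of items answered correctly.
    A missing response (None) counts as incorrect."""
    correct = sum(
        1 for given, key in zip(responses, answer_key)
        if given is not None and given == key
    )
    return 100.0 * correct / len(answer_key)

# Hypothetical 12-item pre-training questionnaire:
# 9 correct, 2 incorrect, 1 missing -> 9/12 = 75.0%
answer_key = ["c", "a", "b", "d", "a", "c", "b", "a", "d", "c", "b", "a"]
responses  = ["c", "a", "b", "d", "a", "c", "b", "a", "d", "a", "d", None]
print(knowledge_score(responses, answer_key))  # 75.0
```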

2.5.2. Secondary Study Outcomes

The selection of secondary training outcome measures was informed by a review of training evaluation frameworks [17, 18, 19, 20, 21, 22] and of training research. Of priority for inclusion were outcomes that were measurable immediately post‐intervention and which had previously been associated with subsequent training transfer to the workplace or health behavior change. Outcomes included were engagement [23], perceived utility [24], self‐efficacy [6, 25], and behavioral intention [26]. Perceived applicability, a measure suggested by a collaborating training provider, was also included to allow its comparison with perceived utility. These outcomes were measured after training with the following questions:

  • How engaging was the training?

  • How useful is what you learned in the training?

  • How applicable to your workplace is what you learned in the training?

  • How confident do you feel using what you learned at the training?

  • How likely are you to use what you learned in the training?

Responses to these questions were measured on a 6‐point scale, for example, 1 = not at all engaged to 6 = extremely engaged. In analysis, answers to secondary training outcome questions were treated as continuous variables.

2.5.3. Other Study Variables

Also included in the questionnaire, for purposes of sample description and statistical control, were questions about the individual learner (age, gender, education, English as a first language, race/ethnicity, first letter of home postal code), their job (work role, manual (i.e., requiring physical effort) versus nonmanual job, union membership), and their workplace (number of employees, industry sector). General pre‐training attitudes towards each of the three modalities were also assessed. Finally, the following aspects of their role on the JHSC were also asked about: tenure on the JHSC, whether employer/worker representative on the JHSC, reason for taking training, and whether planning to take Certification Part 2 training. The questionnaires are available elsewhere [16].

2.6. Sample Size

A sample size of 900 learners in total (300 per provider; 400–500 per comparison) was planned, based on a formula by Green [27] for computing sample size in regressions with a continuous outcome. The formula is based on the planned number of predictor variables and desired effect size [28], assuming α = .05 and power = 80%. Calculations estimated that sample sizes of 400–500 would yield effect sizes between small and medium (i.e., squared multiple correlation coefficient R² between .02 and .13) in regressions with 30–40 predictor variables.
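Green's article is often reduced to the rule of thumb N ≥ 50 + 8m for testing the overall R² of a regression with m predictors. The sketch below uses that simplified rule only as an illustration; the authors applied the fuller, effect-size-based version of Green's procedure, which this does not reproduce.

```python
def green_min_n(m):
    """Green's (1991) rule of thumb for the minimum sample size
    to test the overall R^2 of a regression with m predictors."""
    return 50 + 8 * m

# With 30-40 predictors, the rule of thumb suggests 290-370 learners,
# which the planned 400-500 per comparison comfortably exceeds.
for m in (30, 40):
    print(m, green_min_n(m))
```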

2.7. Statistical Analysis

Analyses were conducted separately on the following comparisons, each involving learners from two providers:

  • Comparison 1: F2F and distance‐1 (providers A and B).

  • Comparison 2: E‐learning and distance‐2 (providers B and C).

One provider, called provider B here, was represented in both comparisons (because they had learners available for recruitment from all of F2F, distance, and e‐learning), whereas the other two providers, providers A and C, were represented in only one comparison each (because they had learners available for recruitment from only two modalities). The distance learning groups used in the two comparisons thus differ in provider composition accordingly (described here as distance‐1 and distance‐2, respectively). This approach of two separate analytical comparisons kept providers balanced across each pair of modalities being compared, which helped ensure workforce characteristics were balanced across modalities too (since each provider specialized in a different industrial sector), and thus helped to minimize any confounding of the relationship between modality and outcome by learner variables.

Observations were included in the final analytical data set (n = 899) if the learner answered at least 11 of 12 knowledge questions in the T1 survey plus the secondary outcome questions (first five questions) in the T2 survey. For inclusion in the regression with knowledge score as the outcome, respondents needed to answer, in addition, at least 22 of 24 knowledge items in the T2 survey (n = 887).

Statistical analyses were carried out using SAS 9.4 software, unless otherwise indicated. Descriptive statistics (means, standard deviations, frequencies, correlations) were computed. The statistical significance of between‐group differences in survey measures was determined with independent t‐tests, using an online calculator [29]. Multivariable linear regression modeling [30] was used with each of the knowledge and the secondary outcomes, to estimate the difference associated with training modality, while adjusting for covariates.

2.7.1. Regressions With Post‐Training Knowledge as Dependent Variable

For each of the two main comparisons, an unadjusted baseline model regressed the post‐training knowledge score on training modality, as well as on training provider and pre‐training knowledge score. A second model was then constructed, which added a modality‐provider interaction term to the baseline model. For both main comparisons, the coefficient of the interaction term was found to be small and nonsignificant and so the interaction variable was not included in subsequent models. A third model was then constructed for each comparison, adding a set of categorical variables, called here “Tier 1 variables,” selected with an a priori rationale, based on the knowledge of the research team, of how the variables could potentially confound the relationship between modality and the primary outcome. The Tier 1 variables were: age, gender, education, English as a first language, JHSC tenure, and manual/nonmanual job. All Tier 1 variables were retained in the model, whether statistically significant or not. Industrial sector was not included as a Tier 1 variable because of high collinearity with the training provider (Cramer's V = 0.45). A fourth model for each comparison added two Tier 2 variables, which had shown significant bivariate relationships with post‐training knowledge and for which a post‐hoc rationale could be constructed—occupational group and workplace size (i.e., number of employees). Workplace size was found to be statistically significant in the model of the F2F versus distance comparison and was retained in the final models for both comparisons. A post‐hoc rationale was that training would have been less relevant to learners from organizations with less than 20 employees because most workplaces in Ontario of that size are not required to have JHSCs. Occupational group was not found to be significant in the fourth models for either modality comparison and was not included in the final model.

A set of Tier 3 variables available in the questionnaire were added to the models with Tier 1 and 2 variables for exploratory purposes: reason for taking training, whether worker/employer representative, White/non‐White, pre‐training attitude toward F2F learning, pre‐training attitude toward distance learning, pre‐training attitude toward e‐learning, union/non‐union, postal code (first letter). They had little impact on the regression coefficients for the training modality variables and were thus not included in the final models for reasons of parsimony and collinearity with other variables.

Interactions of each covariate in the final model with modality were explored by adding them to the final model described above. Only one of the interactions was statistically significant—the modality‐manual/nonmanual interaction in the F2F versus distance‐1 comparison. For reasons of parsimony, the interaction was not added to the final model, but the finding is reported upon in the text below.

For purposes of sensitivity testing, two models were developed for each of the two comparisons, to better account for outliers in the data. In the first, outliers and influential points were removed. In the second, a robust regression model was used to provide stable estimates in the presence of outliers and influential points [30, 31, 32]. For each of the four models, there was very little change in the regression coefficient of the modality variable.

2.7.2. Regressions With Secondary Training Outcome as Dependent Variable

Less exploration was undertaken with the regressions involving the secondary training outcomes. Specifications analogous to the “third” and “final” models described above were carried out with similar results. The final models are presented here.

2.7.3. Effect Size

Regression coefficients were transformed to Cohen's d effect sizes using an online calculator [33] based on Lipsey and Wilson [34]. Conventional criteria [28] were used to classify the resulting effect sizes: 0.2, small; 0.5, medium; 0.8, large.
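The Lipsey and Wilson style conversion divides the unstandardized coefficient of a binary group indicator by the pooled within-group standard deviation of the outcome. In the sketch below, the pooled SD of roughly 10.9 points is back-computed from the reported β = 2.5 and d = 0.23 purely for illustration; it is not a value reported by the study.

```python
def cohens_d_from_beta(beta, sd_pooled):
    """Convert an unstandardized regression coefficient for a binary
    group indicator into Cohen's d (Lipsey & Wilson style)."""
    return beta / sd_pooled

def classify_d(d):
    """Conventional labels: 0.2 small, 0.5 medium, 0.8 large."""
    d = abs(d)
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# beta = 2.5 points with an assumed pooled SD of ~10.9 points
# (back-computed) reproduces the reported d of about 0.23.
d = cohens_d_from_beta(2.5, 10.9)
print(round(d, 2), classify_d(d))  # 0.23 small
```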

2.7.4. Meaningful Difference to Training Providers

To assess the relevance of the findings [35] to the training providers, we consulted with two representatives from each of the three collaborating training provider organizations, with separate meetings held for each organization. Before viewing the study results, representatives were asked what magnitude of between‐modality absolute difference in post‐training knowledge score would be meaningful to them in practice. A presentation slide gave example responses of 1%, 2%, 5%, 10%, 20%. The pairs of representatives were observed to consider values from 5% to 20% in their discussions, with the consensus response for each of the three organizations being 10%.

3. Results

3.1. Survey Response

Eligible pre‐training survey data (defined as answering at least 11 of 12 knowledge questions) were collected from 1289 learners, representing a participation rate of 38%. Of these, 899 (70%) continued to the second survey and provided valid responses therein to at least the secondary training outcome questions (the first five questions), representing a participation rate of 26% overall. Rates varied across providers and across modalities: F2F, 14%–45%; distance, 11%–44%; and e‐learning, 25%–32%.

3.2. Characteristics of Learners

The learner participants included in the analyses (n = 899) were highly varied in their individual and workplace characteristics. Some characteristics varied across modalities, including gender, education, manual/nonmanual job, occupational group, workplace size (no. of employees), industrial sector, and unionization (Table 1). Notably, there was a marked difference in the pre‐training attitudes that learners in each modality held toward their respective modality. Most favored was F2F, with 52% of F2F learners saying they usually liked learning in‐person with an instructor “a lot.” For those in e‐learning, 40% said they usually liked learning “online with self‐paced modules and no instructor” a lot. In contrast, 14%–17% of learners in the distance learning groups reported usually liking “learning online with a live instructor” a lot. Other characteristics did not vary across modalities, including having English as a first language (83%), being White (73%), as well as age, JHSC tenure, and who the participant represented on their JHSC (Table 2).

Table 1.

Self‐reported individual and workplace characteristics across training modalities (n = 899).

Modality Comparison 1 (F2F vs. Distance‐1) and Modality Comparison 2 (e‐learning vs. Distance‐2)
Characteristic F2F n % Distance‐1 n % e‐learning n % Distance‐2 n %
(p values for Comparisons 1 and 2 appear on each characteristic row)
Gender <0.0001 0.03
Male 187 74.8 99 50.3 138 39.3 77 37.2
Female 61 24.4 94 47.7 205 58.4 124 59.9
Other 0 0.0 2 1.0 0 0.0 4 1.9
NR 2 0.8 2 1.0 8 2.3 2 1.0
Education <0.0001 0.13
≤High school and trades qual. a 119 47.6 45 22.8 67 19.1 42 20.3
College and equivalent 72 28.8 72 36.6 109 31.1 81 39.1
University 57 22.8 79 40.1 167 47.6 83 40.1
NR 2 0.8 1 0.5 8 2.3 1 0.5
Manualb/Non‐manual job < 0.0001 0.14
Manualb 141 56.4 62 31.5 121 34.5 60 29.0
Non‐manual 107 42.8 134 68.0 222 63.3 146 70.5
NR 2 0.8 1 0.5 8 2.3 1 0.5
Occupational group 0.21 0.05
Senior manager/executive 37 14.8 34 17.3 90 25.6 34 16.4
Technical specialist 24 9.6 29 14.7 42 12.0 23 11.1
Supervisor 43 17.2 37 18.8 45 12.8 40 19.3
Front‐line worker 93 37.2 55 27.9 107 30.5 74 35.8
Other 51 20.4 41 20.8 59 16.8 35 16.9
NR 2 0.8 1 0.5 8 2.3 1 0.5
Workplace size (number of employees) 0.003 0.18
< 20 58 23.2 20 10.2 27 7.7 22 10.6
20–49 69 27.6 65 33.0 111 31.6 52 25.1
50–250 96 38.4 82 41.6 150 42.7 90 43.5
> 250 25 10.0 29 14.7 54 15.4 42 20.3
NR 2 0.8 1 0.5 9 2.6 1 0.5
Industry sector < 0.0001 0.0002
Construction 149 59.6 58 29.4 19 5.4 4 1.9
Manufacturing 33 13.2 29 14.7 72 20.5 31 15.0
Other goods‐producingc 13 5.2 11 5.6 11 3.1 2 1.0
Wholesale and retail trade 10 4.0 26 13.2 20 5.7 27 13.0
Transportation and warehousing 18 7.2 18 9.1 20 5.7 10 4.8
Education 1 0.4 2 1.0 31 8.8 6 2.9
Health care and social assistance 2 0.8 7 3.6 74 21.1 64 30.9
Public admin./government 7 2.8 11 5.6 19 5.4 20 9.7
Other services 10 4.0 30 15.2 66 18.8 37 17.9
Other (please specify) 2 0.8 3 1.5 7 2.0 4 1.9
NR 5 2.0 2 1.0 12 3.4 2 1.0
Belong to union 0.002 0.12
Yes 80 32.0 37 18.8 67 19.1 52 25.1
No 168 67.2 159 80.7 276 78.6 154 74.4
NR 2 0.8 1 0.5 8 2.3 1 0.5
Pre‐training attitude to modality (“Usually like <Modality>…”)d < 0.0001 < 0.0001
A lot 129 51.6 33 16.8 139 39.6 30 14.5
Quite a bit 82 32.8 47 23.9 96 27.4 71 34.3
Somewhat 25 10.0 58 29.4 76 21.7 60 29.0
A little bit 10 4.0 23 11.7 24 6.8 15 7.2
Not at all 0 0.0 15 7.6 9 2.6 8 3.9
I have no experience 2 0.8 20 10.2 7 2.0 23 11.1
NR 2 0.8 1 0.5 0 0.0 0 0.0
Reason for taking training 0.03 0.46
JHSC needed new cert. member 105 42.0 100 50.8 181 51.6 118 57.0
All JHSC members do training 37 14.8 41 20.8 75 21.4 32 15.5
HS representative (no JHSC) 27 10.8 12 6.1 19 5.4 13 6.3
Keeping JHSC training up to date 22 8.8 13 6.6 21 6.0 13 6.3
Ntl. Construction Officer req't 27 10.8 10 5.1 2 0.6 0 0.0
Other 30 12.0 20 10.2 45 12.8 30 14.5
NR 2 0.8 1 0.5 8 2.3 1 0.5

Abbreviations: n, number of respondents; NR, no response.

a ≤High school and trades qual. includes high school education or less, or trades certificate, trades diploma, or apprenticeship.

b Manual defined in the questionnaire as requiring physical effort.

c Other goods‐producing are agriculture, forestry, fishing, hunting, and utilities.

d Although all participants were asked about their attitudes towards each of the three modalities in three separate questions, reported here are only the attitudes towards their respective modality, for example, Attitude toward F2F learning is reported for F2F learners.

Table 2.

Other characteristics of study participants (n = 899).

Characteristic n % of total
Age (years)
< 35 365 40.6
35–44 265 29.5
45–54 180 20.0
55+ 76 8.5
NR 13 1.5
JHSC tenure
< 6 months 581 64.6
6 months to 2 years 159 17.7
> 2 years 148 16.5
NR 11 1.2
Representing workers/employer
Worker 530 59.0
Employer 287 31.9
Not applicable 71 7.9
NR 11 1.2

Abbreviations: n, number of respondents; NR, no response.

3.3. Between‐Modality Differences in Post‐Training JHSC‐Related Knowledge Scores—Descriptive

Figure 1 shows the pre‐ and post‐training knowledge scores for comparison 1 (F2F vs. distance‐1) and comparison 2 (e‐learning vs. distance‐2). There were small pre‐training differences in both comparisons, with the 3.2% difference (on a 0%–100% scale) in comparison 2 reaching statistical significance (p = 0.008). Post‐training differences of 1.3% and 0.9% in the two comparisons, respectively, were not statistically significant.

Figure 1.

Mean (SD) pre‐ and post‐training scores on knowledge items (% correct) in comparison 1 (F2F vs. distance‐1) and comparison 2 (e‐learning vs. distance‐2).

3.4. Between‐Modality Differences in Post‐Training JHSC‐Related Knowledge Scores—Multiple Regression

Multiple regression was undertaken to statistically correct for any effects of the differences in the characteristics of the learners in the modalities being compared. Table 3 shows that learners taking F2F instruction achieved a 2.5% (95% confidence interval [CI]: 0.3, 4.7) higher post‐training knowledge score than those taking distance learning, after accounting for learner differences in training provider, age, gender, education, English as a first language, JHSC tenure, manual/nonmanual job, workplace size, and pre‐training knowledge score. When expressed as an effect size, Cohen's d, this difference is 0.23 (95% CI: 0.04, 0.42) and classified as small. Table 4 shows that those in e‐learning instruction were an estimated 0.4% (95% CI: −1.4%, 2.3%) higher in post‐training knowledge score than those in distance learning, after accounting for learner differences, but this difference is not statistically significant. The difference as an effect size is 0.04 (95% CI: −0.14, 0.21), which is also small. Both values are markedly smaller than the “meaningful difference to training providers” criterion of 10%, explained in Section 2.7.4.

Table 3.

Final regression model for post‐training knowledge score (%), F2F versus distance‐1.

β Lower CI Upper CI p Value
F2F (ref: distance‐1) 2.5 0.3 4.7 0.02
Provider, non‐reference, pair 1 (ref: Reference provider, pair 1) −4.5 −6.7 −2.3 < 0.0001
Age (ref: < 35)
35–44 −1.1 −3.5 1.3 0.35
45–54 0.8 −2.0 3.6 0.58
55+ 1.4 −2.8 5.5 0.53
Female (ref: male) 1.0 −1.2 3.3 0.37
Education (ref: ≤high school and trades)
College and equivalent 3.6 1.2 6.1 0.004
University 4.7 1.9 7.5 0.001
English 1st language—No (ref. Yes) −2.0 −4.7 0.7 0.15
JHSC tenure (ref: < 6 months)
6 months to 2 years −1.4 −4.1 1.3 0.32
> 2 years 0.8 −2.0 3.7 0.56
Nonmanual job (ref: manual job) 2.5 0.2 4.9 0.04
No. of employees (ref: > 250)
< 20 −4.5 −8.4 −0.7 0.02
20–49 −2.0 −5.5 1.4 0.24
50–250 −0.7 −3.9 2.6 0.68
Pre‐training knowledge score (%) 0.03 0.0 0.1 0.37
Intercept 70.3 64.8 75.8 < 0.0001

Note: n = 439, R² = 0.17.

Abbreviations: CI, 95% confidence interval; ref, reference.

Table 4.

Final regression model for post‐training knowledge score (%), e‐learning versus distance‐2.

β Lower CI Upper CI p Value
E‐learning (ref: distance‐2) 0.4 −1.4 2.3 0.67
Provider, non‐reference, pair 2 (ref: Reference provider, pair 2) 1.2 −0.6 3.1 0.18
Age (ref: < 35)
35–44 1.0 −1.1 3.2 0.35
45–54 0.0 −2.5 2.4 0.98
55+ −1.0 −4.2 2.2 0.55
Female (ref: male) 0.2 −1.7 2.2 0.83
Education (ref: ≤high school and trades)
College and equivalent 0.9 −1.7 3.4 0.52
University 2.3 −0.3 5.0 0.08
English 1st language—No (ref. yes) −1.6 −4.1 0.9 0.21
JHSC tenure (ref: < 6 months)
6 months to 2 years 1.2 −1.2 3.6 0.34
> 2 years 3.8 1.4 6.2 0.002
Nonmanual job (ref: manual job) 3.5 1.5 5.5 0.0008
No. of employees (ref: > 250)
< 20 −3.2 −6.9 0.4 0.08
20–49 −1.2 −3.9 1.5 0.38
50–250 −0.7 −3.2 1.8 0.60
Pre‐training knowledge score 0.2 0.1 0.3 < 0.0001
Intercept 62.0 56.8 67.1 < 0.0001

Note: n = 544, R² = 0.13.

Abbreviations: CI, 95% confidence interval; ref, reference.

Tables 3 and 4 also show that, after adjusting for modality, associations between some covariates and post‐training knowledge score were seen in one or both regressions. These covariates included training provider, education, tenure on the JHSC, manual/nonmanual job, number of employees, and pre‐training knowledge score. Independent associations with post‐training knowledge score were not found for age, gender or English as a first language in either regression model.

Interactions between modality and each of the covariates in Tables 3 and 4 were explored (not shown). All were statistically insignificant except the modality‐by‐manual/nonmanual job interaction in the F2F versus distance‐1 comparison. In the interaction model, the main effect of F2F instruction relative to distance learning was −0.2% (95% CI: −3.5%, 3.1%), and that of nonmanual relative to manual work was −0.1% (95% CI: −3.5%, 3.3%), both statistically insignificant, while the effect associated with the combination of F2F instruction and a nonmanual job was 4.5% (95% CI: 0.3%, 8.6%), relative to the other modality‐job combinations. In other words, the positive effect associated with the F2F modality (and with nonmanual vs. manual work) observed in Table 3 appears to be due to nonmanual workers achieving higher post‐training knowledge scores in F2F learning than in distance learning; for manual workers, modality did not appear to make a difference.
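How the coefficients of the interaction model combine into job‐specific modality effects is simple arithmetic, sketched below with the coefficients reported in the text (illustrative only, not the fitted model object).

```python
# Combining main effects and the interaction term from the F2F vs.
# distance-1 interaction model (coefficients as reported in the text).

b_f2f = -0.2         # main effect of F2F (vs. distance), i.e., for manual workers
b_nonmanual = -0.1   # main effect of nonmanual (vs. manual), i.e., in distance learning
b_interaction = 4.5  # additional effect when F2F and nonmanual co-occur

# Modality (F2F vs. distance) effect, by job type:
effect_manual = b_f2f                     # -0.2: essentially no modality effect
effect_nonmanual = b_f2f + b_interaction  # ~4.3: F2F advantage for nonmanual workers
print(effect_manual, round(effect_nonmanual, 1))
```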

3.5. Between‐Modality Differences in Secondary Outcomes—Descriptive

Figure 2 summarizes the secondary outcome results for both modality comparisons. Across modalities, a positive assessment was given to the training. That is, learners on average in each modality perceived the training to be engaging to very engaging, very useful, and very applicable to the workplace; they also reported being confident to very confident to use what they had learned; and they said they were very likely to use what they learned.

Figure 2.

Mean post‐training scores for secondary outcomes by modality. Survey questions assessed the degree to which the training was perceived as (i) engaging, (ii) useful, and (iii) applicable to the workplace, and the degree to which the respondent was (iv) confident to use or (v) likely to use what was learned. Results of statistical tests of the differences in scores between F2F and Distance‐1 (Comparison 1) and between E‐learning and Distance‐2 (Comparison 2) are shown at the base of the graph: *p < 0.05; ***p < 0.001.

However, some between‐modality differences were seen: F2F learners consistently rated their training higher than distance learners, who in turn rated their training higher than e‐learners. The largest differences were in the engagement measure: F2F learning was rated, on average, one half‐unit of the response scale higher than distance learning, and distance learning one half‐unit higher than e‐learning. These differences were statistically significant. For perceived utility and self‐efficacy, differences were more modest in magnitude but also statistically significant. Differences in perceived applicability and intention‐to‐use were smaller still and statistically insignificant.

3.6. Between‐Modality Differences in Secondary Study Outcomes—Multiple Regression

Table 5 summarizes the findings from 10 separate regressions, involving the 5 secondary study outcomes as dependent variables in the two comparisons. The pattern of findings is similar to that seen in the descriptive analyses. After statistical adjustment, F2F learning was associated with higher scores on the secondary outcomes relative to distance learning, whereas e‐learning was associated with lower scores relative to distance learning. Regression results were statistically significant for engagement, perceived utility, and self‐efficacy. The largest modality effect was seen with the engagement measure, with a difference of 0.59 (95% CI: 0.40, 0.78) in engagement score associated with F2F relative to distance learning, and a difference of −0.52 (95% CI: −0.70, −0.35) associated with e‐learning relative to distance learning. These values are equivalent, respectively, to 12% and −10% of the five‐unit span of the 1–6 response scale. As effect sizes, the modality effects are 0.66 (95% CI: 0.47, 0.85) and −0.51 (95% CI: −0.69, −0.33), respectively. For perceived utility, modality effect estimates of 0.26 (95% CI: 0.10, 0.42) and −0.18 (95% CI: −0.34, −0.02) were derived from the F2F‐to‐distance and the e‐learning‐to‐distance analyses, respectively, corresponding to effects of 5% and −4% of the response scales and effect sizes of 0.34 (95% CI: 0.15, 0.53) and −0.20 (95% CI: −0.38, −0.03). Similarly, for self‐efficacy, modality effect estimates of 0.19 (95% CI: 0.02, 0.36) and −0.20 (95% CI: −0.36, −0.04) were derived, corresponding to 4% and −4% of the response scales and effect sizes of 0.24 (95% CI: 0.05, 0.43) and −0.22 (95% CI: −0.39, −0.04).
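The conversion of a score difference on the 1–6 response scale into a percentage of the scale's five‐unit span, as reported above, can be sketched as follows (illustrative arithmetic using the engagement coefficients from Table 5):

```python
# Sketch: expressing a between-modality difference on a 1-6 response
# scale as a percentage of the scale's 5-unit span.

def pct_of_scale(diff: float, span: float = 5.0) -> float:
    return 100.0 * diff / span

print(round(pct_of_scale(0.59)))   # F2F vs. distance engagement: ~12%
print(round(pct_of_scale(-0.52)))  # e-learning vs. distance: ~-10%
```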

Table 5.

Summary of regression coefficients for modality variable in regression models with secondary outcomes.

Outcome F2F versus distance‐1 e‐learning versus distance‐2
β Lower CI Upper CI p Value β Lower CI Upper CI p Value
Engagement 0.59 0.40 0.78 < 0.001 −0.52 −0.70 −0.35 < 0.001
Perceived utility 0.26 0.10 0.42 0.002 −0.18 −0.34 −0.02 0.03
Perceived applicability 0.16 −0.02 0.34 0.08 −0.13 −0.29 0.04 0.13
Self‐efficacy 0.19 0.02 0.36 0.03 −0.20 −0.36 −0.04 0.01
Intention‐to‐use 0.15 −0.02 0.33 0.09 −0.05 −0.20 0.11 0.57

Note: Values are the regression coefficients for the training modality variable from 10 separate regression models adjusted for age, gender, education, English as a first language, JHSC tenure, manual/nonmanual job, and workplace size. Secondary outcomes are measured on a scale from 1 to 6. R²: engagement, 0.10–0.12; perceived utility, 0.05–0.06; perceived applicability, 0.02–0.07; self‐efficacy, 0.05–0.07; intention‐to‐use, 0.02–0.05.

Abbreviation: CI, 95% confidence interval.

4. Discussion

4.1. Principal Findings and Their Interpretation

The study examined how the effectiveness of JHSC Certification Part 1 training differed by modality. For the primary outcome of post‐training knowledge score, those in F2F training scored a statistically significant 2.5% higher than those in distance learning, after accounting for other factors through multivariable regression methods. The difference in scores between learners in e‐learning and those in distance learning was a statistically insignificant 0.4%. It follows that the expected difference in score between e‐learning and F2F learning, had they been compared directly, would have been about 2.1% (2.5% minus 0.4%). The observed difference of 2.5% is equivalent to an effect size of 0.23, considered small by conventional criteria in research [28]. It is also much smaller than the criterion of meaningful difference of 10% established with the study's collaborating training providers.

The nature of the interaction observed between modality (F2F vs. distance) and manual/nonmanual work was unexpected. Researchers presumed a priori that people in manual jobs would be less accustomed to virtual instruction and, as a result, might have poorer post‐training knowledge achievement with distance learning than with F2F learning. Instead, it was those in nonmanual jobs who showed this type of between‐modality difference, whereas no such difference was seen among manual workers. We offer one potential explanation. In distance learning, many of those in nonmanual jobs would have been taking the training with the same device on which they do their usual jobs, either at home (when working remotely) or at their place of employment. Distraction by their work tasks would therefore have been more likely (e.g., responding to work‐related e‐mail messages arriving during the training) than for people with manual jobs. No other significant interactions between modality and covariates were seen.

For the five secondary outcomes, statistically significant between‐modality differences were seen for three: engagement during training, perceived utility of the learning, and self‐efficacy in using the learning. Across these measures, scores obtained from F2F learners (on a scale of 1 through 6) were more favorable than scores from distance learners, which were, in turn, more favorable than scores from e‐learners, after accounting for other factors. It follows that, for a given secondary outcome, the largest between‐modality effect is the imputed one between F2F learners and e‐learners. Some of the between‐modality effects seen with the secondary outcomes were of greater magnitude than those found with knowledge, when expressed as a percentage of the theoretical ranges of the measures or as effect sizes. The ranges of between‐modality effects observed with engagement, perceived utility, and self‐efficacy were 10%–22%, 4%–9%, and 4%–8% of their respective response scales. When expressed as effect sizes, the ranges of between‐modality effects were, respectively: engagement, 0.5–1.2 (medium‐to‐large); perceived utility, 0.2–0.5 (small‐to‐medium); self‐efficacy, 0.2–0.5 (small‐to‐medium). This pattern of magnitudes is as expected: a larger effect is seen with engagement, given its placement antecedent to the other outcomes in a causal model and the tendency for the magnitude of outcomes to attenuate as one moves distally in a model. The small and nonsignificant inter‐modality differences seen in intention‐to‐use, the most distal outcome, also fit this pattern. That said, variation in intention‐to‐use might also have been restricted, given that 75% of respondents held the role of an active JHSC member or health and safety representative. That is, it was inherent in their roles to be planning to use what was taught, irrespective of the quality of instruction.

4.2. Methodological Strengths and Limitations

The study had several methodological strengths. First, the training intervention was standardized across the three modalities with regard to learning objectives, adult learning principles, and interactivity, helping to ensure the appropriateness of comparing different training delivery methods. In many studies intended to contrast modality only, instructional design has differed too, confounding the interpretation of results [6]. Second, learners were recruited from F2F and distance learning classes with various instructors, reducing the likelihood that the effect of modality was the effect of one or more individual instructors. Third, a large sample size ensured adequate statistical power, allowing the detection of small effects of modality. Fourth, although random assignment of learners to modality was not used in the study design, potential confounding by learner characteristics was mitigated by balancing providers across the modalities being compared and through multiple regression modeling. Such statistical control has often been missing in research from the education field when students self‐select into modality. Fifth, knowledge achievement was measured objectively rather than subjectively. As well, the cognitive learning outcome of knowledge was complemented by the measurement of the affective outcomes of self‐efficacy and intention‐to‐use [20]. Self‐efficacy has been shown in meta‐analyses to be just as important as knowledge in predicting the actual transfer of learning to the workplace [6, 25, 36]. Finally, learners were diverse in their characteristics, which has been lacking to date in studies comparing modalities.

The study had some limitations, too, in addition to nonrandom assignment. First, in the e‐learning versus distance learning comparison, there was differential pre‐exposure among individuals to the study's 24 post‐training knowledge questions, arising from differing routine end‐of‐course assessment methods. All e‐learners, and distance learners with one of the two providers in the study, were pre‐exposed on average to one‐third of the questions in the study's post‐training knowledge assessment. For distance learners with the other training provider, there was no prior exposure. The effect of the differential exposure on the study results is likely minor, since a provider‐modality interaction was not significant in the regression modeling (see the Methods section).

A second limitation arises from the differing duration of training delivery across modalities. Whereas F2F and distance trainings were completed over three successive days, e‐learning was variable in length and could last up to 30 days, depending on the learner's self‐pacing. That difference in duration could prompt concern about the validity of the study's post‐training knowledge assessment, since there was more opportunity in e‐learning for knowledge learned at the beginning of the course to decay. This concern is partially mitigated by the final comprehensive test that follows completion of all modules in the usual course delivery, which would have prompted a review of the course material and thereby limited differential decay.

Third, researchers had little control over when e‐learners completed the post‐training questionnaire relative to the training, unlike the situation in the other two modalities. Researchers did instruct e‐learners to complete the questionnaire during one of the 3 days following completion of the training; compliance with this request would ensure comparability with the timing of measurement among F2F and distance learners. Although there was some noncompliance, a sensitivity analysis excluding these responses found little effect on the results, and including time of measurement post‐training in the analyses also had little effect.

Fourth, we were not able to include industry in the regression models, because it was highly collinear with training provider. Training provider as a variable was of practical and theoretical interest to retain in the models. However, we note that industry is a covariate of interest in training effectiveness studies because Burke et al. [37] found that industry influenced the relationship between knowledge acquisition and how engaging training delivery was.

A fifth limitation was the lack of a direct comparison between e‐learning and in‐class learning; instead, an indirect comparison was made. A direct comparison was possible in an analysis of all learners with provider B (n = 334), which offered training in all three modalities. Regression analysis with the final model specification estimated that, relative to F2F learners, distance learners and e‐learners differed in post‐training knowledge score by −0.8 (95% CI: −4.1, 2.6) and 0.3 (95% CI: −2.8, 3.4), respectively, after adjustment for covariates. As in the main analysis, inter‐modality differences in knowledge achievement were no greater than small.

A final limitation is the scope of the outcome measures. It would have been a more comprehensive study to measure skills for which the training was intended to prepare the learner, such as finding information from the Occupational Health and Safety Act, using a hazard management tool, or conducting an incident investigation. As well, outcomes were measured shortly after the training, so we do not know whether there would have been modality differences in longer‐term knowledge retention or the transfer of knowledge to the workplace.

4.3. Results in Relation to Other Research

This study adds to the existing literature comparing post‐training knowledge achievement in F2F and online learners in work‐relevant instruction. It especially adds to the scant research of this type involving OSH instruction. Meta‐analytic comparisons of online learning to F2F learning in adults, in occupational but non‐OHS contexts, have usually estimated no or small differences in knowledge achievement between the two [2, 3, 4, 5, 6]. Our findings of small or no significant effects in knowledge achievement when, respectively, F2F and e‐learning are compared with distance learning, are in line with these meta‐analyses.

In addition, the study contributes knowledge about modality differences in relation to other training outcomes: self‐assessed engagement, perceived utility and applicability, self‐efficacy in using learning, and intention to use learning. Of these, only self‐efficacy has been examined in meta‐analyses of occupationally‐related training [4]. Only two primary studies were included for this outcome, both estimating small post‐training differences in self‐efficacy when e‐learning and F2F conditions were compared, with a meta‐analytic result of 0.02 in effect size (95% CI: −0.35, 0.38).

This study extends the research literature too because it is based on a diverse sample of learners, varied in their education, jobs, and workplaces. In contrast, the existing bodies of knowledge are based on studies of medical professionals and students of those professions [1] or other university‐based students [38]. It is notable that this study found education level had a strong relationship with post‐training knowledge achievement, with greater knowledge achievement corresponding to greater education, as expected, but this relationship did not depend on modality. As well, the learning outcome for a learner with a manual job did not depend on modality. (Instead, learners in nonmanual jobs appeared to have been disadvantaged in the distance environment relative to F2F.)

4.4. Future Research

Future research could address some of the limitations identified above, especially by including longer‐term follow‐up to measure knowledge retention and the transfer of learning to the workplace. Additional measures could be included immediately post‐training too. Measures of skill would be ideal. Also informative would be more specific measures of self‐efficacy related to specific skills targeted by the training, such as looking up legislative information or carrying out a hazard assessment, inspection, or investigation. Reviews of training studies that have attempted to synthesize evidence on self‐efficacy [4, 39] reveal a lack of its measurement in primary studies despite its association with the transfer of learning [6, 25, 36]. Including self‐efficacy as an outcome in training studies would address a longstanding critique in the literature that affective learning outcomes have too often been overlooked [20, 40], relative to cognitive and behavioral outcomes.

4.5. Practical Implications

This study provides evidence that F2F, distance, and e‐learning are similar in their ability to ensure knowledge achievement, measured soon after training, among learners in short‐term OHS training. This finding of modality similarity in Ontario's JHSC certification part 1 training is likely generalizable to knowledge achievement in other short‐term OSH trainings developed using sound instructional design principles. These results suggest there continues to be a role for all three modalities in OSH training in the wake of the pandemic.

The finding of modality similarity should not be generalized to outcomes not measured in this study, including skill acquisition or the transfer of learning to the workplace, especially given the differences seen in post‐training self‐efficacy. Although all modalities achieved, on average, high levels of self‐efficacy in the learner (which research has shown is just as important as knowledge with regard to the eventual transfer of knowledge to the workplace), there were statistically significant and nontrivial differences observed, especially between F2F and e‐learning. The extent to which these differences would lead to sizeable or meaningful differences in OSH practice in the workplace is unknown. More research is needed to understand whether there are important differences across these modalities in the acquisition of OSH skills and the transfer of learning to the workplace.

Author Contributions

Lynda S. Robson and Cameron A. Mustard conceived of the work. Lynda S. Robson, Cameron A. Mustard, Victoria Landsman, Faraz Vahid Shahidi, Peter M. Smith, and Aviroop Biswas designed the study and data collection instruments. All authors were involved in the analysis and interpretation of the results, led by Cynthia Chen and Lynda S. Robson. Lynda S. Robson led the writing of the manuscript. All authors critically reviewed the manuscript and approved the final version. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Disclosure

The views expressed are those of the Institute and do not necessarily reflect those of the Province of Ontario.

Ethics Statement

Work was completed at the Institute for Work & Health. Methods were approved by the University of Toronto Health Sciences Research Ethics Board (protocol 41717). Before completing the survey questionnaire, subjects were presented with information letters.

Consent

Subjects confirmed their review of the letter and their consent.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

We thank the IWH employees who helped plan for and implement the project: Lyudmila Mansurova with survey platform and honoraria management; Sabrina Imam with observations of training and qualitative data analysis; Victoria Nadalin with early analyses of administrative and archival knowledge test data for planning purposes; Pam Cardwell and Sara Macdonald with knowledge transfer; and Fran Copelli and Jocely Dollack with administrative support. We are very appreciative of the training provider collaborators from Infrastructure Health & Safety Association, Public Services Health & Safety Association, and Workplace Safety & Prevention Services, who discussed research plans, helped establish criteria of meaningful difference, modified their usual organizational practices to accommodate recruitment of study participants, and reviewed a report on the project. Special thanks to these individuals from those organizations: for the planning and operational work of Laura Shier, Amy Dicks, Linda Lorenzetti, Trevor Beauchamp, Lina Della Mora, and Adriana Villegas; the consultations on survey content with Christy Conte and Edith Pajek; and the high‐level support from Henrietta Van Hulle, Enzo Garritano, Kiran Kapoor, and Paul Casey. The Institute operates with the support of the Province of Ontario, stewarded by the Ministry of Labour, Immigration, Training and Skills Development, which oversees the JHSC training standards. This project was funded through this support. The Ministry played no role in the design or conduct of the research; nor in the collection, analysis, or interpretation of the data; nor in the preparation, review, or approval of the publication. The views expressed are those of the Institute and do not necessarily reflect those of the Province of Ontario.

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

References

  • 1. Robson L. S., Irvin E., Padkapayeva K., Begum M., and Zukowski M., “A Rapid Review of Systematic Reviews on the Effectiveness of Synchronous Online Learning in an Occupational Context,” American Journal of Industrial Medicine 65, no. 7 (2022): 613–619, 10.1002/ajim.23365. [DOI] [PubMed] [Google Scholar]
  • 2. Gegenfurtner A. and Ebner C., “Webinars in Higher Education and Professional Training: A Meta‐Analysis and Systematic Review of Randomized Controlled Trials,” Educational Research Review 28 (2019): 100293, 10.1016/j.edurev.2019.100293. [DOI] [Google Scholar]
  • 3. He L., Yang N., Xu L., et al., “Synchronous Distance Education Vs Traditional Education for Health Science Students: A Systematic Review and Meta‐Analysis,” Medical Education 55 (2020): 293–308, 10.1111/medu.14364. [DOI] [PubMed] [Google Scholar]
  • 4. Richmond H., Copsey B., Hall A. M., Davies D., and Lamb S. E., “A Systematic Review and Meta‐Analysis of Online Versus Alternative Methods for Training Licensed Health Care Professionals to Deliver Clinical Interventions,” BMC Medical Education 17, no. 1 (2017): 227, 10.1186/s12909-017-1047-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Cook D. A., Levinson A. J., Garside S., Dupras D. M., Erwin P. J., and Montori V. M., “Internet‐Based Learning in the Health Professions: A Meta‐Analysis,” Journal of the American Medical Association 300, no. 10 (2008): 1181–1196, 10.1001/jama.300.10.1181. [DOI] [PubMed] [Google Scholar]
  • 6. Sitzmann T., Brown K. G., Casper W. J., Ely K., and Zimmerman R. D., “A Review and Meta‐Analysis of the Nomological Network of Trainee Reactions,” Journal of Applied Psychology 93, no. 2 (2008): 280–295. [DOI] [PubMed] [Google Scholar]
  • 7. Pei L. and Wu H., “Does Online Learning Work Better Than Offline Learning in Undergraduate Medical Education? A Systematic Review and Meta‐Analysis,” Medical Education Online 24 (2019): 1666538, 10.1080/10872981.2019.1666538. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Sarpy S. A. C., Stachowski A., Sulzie C., et al., Evaluating Effectiveness and Impact of Occupational Safety and Health Training Delivered in Distance Learning Format: Determining Critical Factors for Success (The Centre for Construction Research and Training [CPWR], 2022), https://www.cpwr.com/wp-content/uploads/Evaluating_Effectiveness_Impact_Distance_Learning_Safety.pdf. [Google Scholar]
  • 9. Cerecero J. A. and Charlton M. A., “Designing, Implementing, and Conducting a Web‐Based Radiation Safety Training Program to Meet Texas Standards for Radiation Protection,” Health Physics 103, no. 5S (2012): S188–S193, 10.1097/HP.0b013e31825f7d12. [DOI] [PubMed] [Google Scholar]
  • 10. Nykänen M., Puro V., Tiikkaja M., et al., “Implementing and Evaluating Novel Safety Training Methods for Construction Sector Workers: Results of a Randomized Controlled Trial,” Journal of Safety Research 75 (2020): 205–221, 10.1016/j.jsr.2020.09.015. [DOI] [PubMed] [Google Scholar]
  • 11. Sacks R., Perlman A., and Barak R., “Construction Safety Training Using Immersive Virtual Reality,” Construction Management and Economics 31, no. 9 (2013): 1005–1017, 10.1080/01446193.2013.828844. [DOI] [Google Scholar]
  • 12. Shendell D. G., Milich L. J., Apostolico A. A., Patti A. A., and Kelly S., “Comparing Online and In‐Person Delivery Formats of the OSHA 10‐Hour General Industry Health and Safety Training for Young Workers,” NEW SOLUTIONS: A Journal of Environmental and Occupational Health Policy 27, no. 1 (May 2017): 92–106, 10.1177/1048291117697109. [DOI] [PubMed] [Google Scholar]
  • 13. Ministry of Labour, Immigration, Training and Skills Development (MLITSD). Program standard for joint health and safety committee training (Government of Ontario, 2024), published January 3, 2019a, updated December 1, 2022, https://www.ontario.ca/page/program-standard-joint-health-and-safety-committee-training. [Google Scholar]
  • 14. Ministry of Labour, Immigration, Training and Skills Development (MLITSD). Provider Standard for Joint Health and Safety Committee Training (Government of Ontario, 2024), published January 3, 2019b, updated December 1, 2022, https://www.ontario.ca/page/provider-standard-joint-health-and-safety-committee-training. [Google Scholar]
  • 15. Shadish W. R., Cook T. D., and Campbell D. T., Experimental and Quasi‐Experimental Designs for Generalized Causal Inference (Boston: Houghton Mifflin, 2002). [Google Scholar]
  • 16. Robson L. S., Chen C., Imam S., et al., Differing Effects of In‐Person and Online Methods of Delivering JHSC Certification Part 1 Training (Toronto: Institute for Work & Health, 2023), https://www.iwh.on.ca/scientific-reports/differing-effects-of-in-person-and-online-methods-of-delivering-jhsc-certification-part-1-training. [Google Scholar]
  • 17. Alvarez K., Salas E., and Garofano C. M., “An Integrated Model of Training Evaluation and Effectiveness,” Human Resource Development Review 3, no. 4 (2004): 385–416, 10.1177/1534484304270820. [DOI] [Google Scholar]
  • 18. E. F. Holton, III , “Holton's Evaluation Model: New Evidence and Construct Elaborations,” Advances in Developing Human Resources 7, no. 1 (2005): 37–54. [Google Scholar]
  • 19. Kirkpatrick D., “Great Ideas Revisited,” Training and Development 54 (1996): 59. [Google Scholar]
  • 20. Kraiger K., Ford J. K., and Salas E., “Application of Cognitive, Skill‐Based, and Affective Theories of Learning Outcomes to New Methods of Training Evaluation,” Journal of Applied Psychology 78, no. 2 (1993): 311–328. [Google Scholar]
  • 21. Passmore J. and Velez M. J., “Training Evaluation.” in The Wiley Blackwell Handbook of the Psychology of Training, Development, and Performance Improvement, eds. Kraiger K., Passmore J., Rebelo dos Santos N. and Malvezzi S. (John Wiley & Sons, 2015), 136–153. [Google Scholar]
  • 22. Phillips J. J. and Phillips P. P., Handbook of Training Evaluation and Measurement Methods, 4th ed. (Routledge, 2016). [Google Scholar]
  • 23. Burke M. J., Sarpy S. A., Smith‐Crowe K., Chan‐Serafin S., Salvador R. O., and Islam G., “Relative Effectiveness of Worker Safety and Health Training Methods,” American Journal of Public Health 96 (2006): 315–324. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Alliger G. M., Tannenbaum S. I., W. Bennett, Jr. , Traver H., and Shotland A., “A Meta‐Analysis of the Relations Among Training Criteria,” Personnel Psychology 50 (1997): 341–358. [Google Scholar]
  • 25. Blume B. D., Ford J. K., Baldwin T. T., and Huang J. L., “Transfer of Training: A Meta‐Analytic Review,” Journal of Management 36, no. 4 (2010): 1065–1105. [Google Scholar]
  • 26. Armitage C. J. and Conner M., “Efficacy of the Theory of Planned Behaviour: A Meta‐Analytic Review,” British Journal of Social Psychology 40 (2001): 471–499, 10.1348/014466601164939. [DOI] [PubMed] [Google Scholar]
  • 27. Green S. B., “How Many Subjects Does It Take to Do a Regression Analysis?,” Multivariate Behavioral Research 26, no. 3 (1991): 499–510. [DOI] [PubMed] [Google Scholar]
  • 28. Cohen J., Statistical Power Analysis for the Behavioral Sciences, 2nd ed. (Lawrence Erlbaum Associates, 1988). [Google Scholar]
  • 29. Statistics Kingdom, “Test Statistic Calculators,” accessed June 17, 2024, https://www.statskingdom.com/index.html.
  • 30. Fox J., Applied Regression Analysis, Linear Models, and Related Models (Sage Publications, 1997). [Google Scholar]
  • 31. Chen C., “Robust Regression and Outlier Detection With the ROBUSTREG Procedure” in Proceedings of the 27th SAS Users Group International Conference; April 14–17 (SAS Institute, 2002), 265–27, https://support.sas.com/resources/papers/proceedings/proceedings/sugi27/p265-27.pdf. [Google Scholar]
  • 32. Li G., “Robust Regression,” in Exploring Data Tables, Trends, and Shapes, eds. Hoaglin D. C., Mosteller F., and Tukey J. W. (Hoboken, NJ: Wiley, 1985), 281–344. [Google Scholar]
  • 33. Wilson D. B., Practical Meta‐Analysis Effect Size Calculator (Version 2023.11.27), Campbell Collaboration, 2023, https://www.campbellcollaboration.org/escalc/.
  • 34. Lipsey M. W. and Wilson D. B., Practical Meta‐Analysis (Sage Publications, 2000). [Google Scholar]
  • 35. Mohajeri K., Mesgari M., and Lee A. S., “When Statistical Significance is Not Enough: Investigating Relevance, Practical Significance, and Statistical Significance,” MIS Quarterly 44, no. 2 (2020): 525–559, 10.25300/MISQ/2020/13932. [DOI] [Google Scholar]
  • 36. Colquitt J. A., LePine J. A., and Noe R. A., “Toward an Integrative Theory of Training Motivation: A Meta‐Analytic Path Analysis of 20 Years of Research,” Journal of Applied Psychology 85 (2000): 678–707. [DOI] [PubMed] [Google Scholar]
  • 37. Burke M. J., Salvador R. O., Smith‐Crowe K., Chan‐Serafin S., Smith A., and Sonesh S., “The Dread Factor: How Hazards and Safety Training Influence Learning and Performance,” Journal of Applied Psychology 96, no. 1 (2011): 46–70. [DOI] [PubMed] [Google Scholar]
  • 38. Robson L. S., Irvin E., Padkapayeva K., Begum M., and Zukowski M., Effectiveness of Synchronous Online Learning in an Occupational Context: Two Rapid Reviews (Institute for Work & Health, 2021), https://www.iwh.on.ca/scientific-reports/effectiveness-of-synchronous-online-learning-in-occupational-context-two-rapid-reviews. [DOI] [PubMed] [Google Scholar]
  • 39. Rouleau G., Gagnon M.‐P., Côté J., et al., “Effects of e‐Learning in a Continuing Education Context on Nursing Care: Systematic Review of Systematic Qualitative, Quantitative, and Mixed‐Studies Reviews,” Journal of Medical Internet Research 21, no. 10 (2019): e15118, 10.2196/15118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40. Ford J. K., Kraiger K., and Merritt S. M., “An Updated Review of the Multidimensionality of Training Outcomes: New Directions for Training Evaluation Research.” in Learning, Training, and Development in Organizations, eds. Kozlowski S. W. J. and Salas E. (Routledge, 2009), 135–165. [Google Scholar]

Associated Data


Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.


Articles from American Journal of Industrial Medicine are provided here courtesy of Wiley