Abstract
Objective:
Bayesian meta-analyses offer several advantages over traditional approaches, including improved accuracy when using a small number of studies and enhanced estimation of heterogeneity. However, psychological trauma research has yet to see widespread adoption of these statistical methods, potentially due to researchers’ unfamiliarity with the processes involved. The purpose of this paper is to provide a practical tutorial for conducting random-effects Bayesian meta-analyses.
Method:
Explanations and recommendations are provided for completing the primary steps of a Bayesian meta-analysis, ranging from model specification to interpretation of results. Furthermore, an illustrative example is used to demonstrate the application of each step. In the example, results are synthesized from six studies included in a previously published systematic review (Holliday et al., 2020), with a combined sample size of 21,244,109, examining the association between posttraumatic stress disorder (PTSD) and risk of suicide in veterans and military personnel.
Results:
The posterior distributions for each model estimate, such as the pooled effect size and the heterogeneity parameter, are discussed and interpreted with regard to the probability of increased suicide risk.
Conclusions:
Our hope is that this tutorial, along with the provided data and code, facilitates the use of Bayesian meta-analyses in the study of psychological trauma.
Keywords: Bayesian meta-analysis, posttraumatic stress disorder, random-effects, suicide, tutorial
The past few decades have witnessed ever-rising adoption of Bayesian approaches to statistical analysis within the psychological sciences; however, growth in the number of Bayesian meta-analyses has been less prolific (van de Schoot et al., 2017). One argument for the use of Bayesian meta-analysis in trauma research is that it is not uncommon for trauma-focused meta-analyses to include few studies (e.g., Hailes et al., 2019; Tortella-Feliu et al., 2019). Traditional meta-analytic approaches perform poorly when only a small number of studies are available and have been shown to be inferior to Bayesian estimation in these situations (Seide et al., 2019). Another argument in favor of utilizing Bayesian meta-analysis in trauma research is that trauma studies themselves often employ small sample sizes (Yalch, 2016), which can be problematic in meta-analysis due to their association with greater between-study heterogeneity (IntHout et al., 2015). When using Bayesian meta-analysis, researchers have an enhanced ability to estimate between-study heterogeneity compared to traditional approaches, which in turn improves estimation of the pooled effect size and helps researchers determine the generalizability of their findings (Seide et al., 2019; Williams et al., 2018). Finally, Bayesian meta-analysis can directly assess the probability that the magnitude of an effect size exceeds zero (or any other value), providing researchers with an intuitive and meaningful summary of past studies (Howard et al., 2000). As such, Bayesian meta-analysis is a natural fit for trauma research.
One reason for the relative lack of Bayesian meta-analyses may be researchers’ unfamiliarity with this class of statistical methods. For example, Ord et al. (2016) found that, on average, accredited doctoral programs in clinical and counseling psychology spend less than a week covering Bayesian statistics. While helpful guides exist (e.g., Harrer et al., 2021; Thompson & Semma, 2020), such work may not be accessible to those without an advanced background in statistics. Therefore, the purpose of this paper is to provide a practical tutorial for completing a random-effects meta-analysis of psychological trauma and suicide literature using Bayesian estimation. The target audience is those who are familiar with completing meta-analyses but have had little-to-no experience doing so within a Bayesian framework. In the subsequent sections, each step of the Bayesian estimation process is described, from model specification to results interpretation. To further elucidate the process and provide an opportunity for hands-on practice, example data are presented regarding the risk of suicide in veterans and military personnel with posttraumatic stress disorder (PTSD). For the convenience of the reader, the analysis code and data used in this tutorial are provided as public supplementary files (DOI: https://doi.org/10.6084/m9.figshare.c.5800052.v3).
Source of Example Data
Example data were obtained from studies included in a previously published systematic review (Holliday et al., 2020). Briefly, Holliday et al. (2020) systematically searched for all studies examining the association between PTSD and suicide-related outcomes in veterans and military personnel (2010 to 2018), finding support that a diagnosis of PTSD was associated with suicidal ideation, suicide attempts, and suicide at a bivariate level. Additional detail concerning review methodology, study characteristics, and risk of bias can be viewed in the original paper (Holliday et al., 2020). As Holliday et al. (2020) did not conduct a meta-analysis, the present paper extends this work by quantitatively synthesizing the subset of their articles that included suicide as an outcome. To collect example data, two reviewers (DR and AK) screened all studies included in Holliday et al. (2020) and found that, out of a possible 16 studies, six articles provided unique information for suicide across PTSD and non-PTSD groups and were included in the final example dataset (Brenner et al., 2011; Conner et al., 2014; Dobscha et al., 2014; Ilgen et al., 2010; Louzon et al., 2016; Shen et al., 2016). One study (Conner et al., 2014) was rated as moderate risk of bias by Holliday et al. (2020) and the rest were rated as low risk of bias. Together, the six studies had a combined sample size of 21,244,109.
For each study, we extracted the following counts: individuals with PTSD who died by suicide; individuals with PTSD who did not die by suicide; individuals without PTSD who died by suicide; and individuals without PTSD who did not die by suicide. Of note, Shen et al. (2016) presented total sample size and number of suicides as raw counts but presented non-deaths only in person-quarter-years (i.e., the number of quarter-years for which data were available for the participants over the 10-year study). PTSD-positive cases (including those who died by suicide) made up approximately 2% of the total person-quarter-years. We estimated the number of non-death PTSD-positive cases as ~2% of the total sample size minus the number of individuals with PTSD who died by suicide. The number of non-death PTSD-negative cases was estimated as the total sample size minus the counts of the other three groups.
Data from each study were then used to calculate unadjusted log-relative risks and relative risks that reflected the risk of dying by suicide for individuals with PTSD compared to the risk of dying by suicide for those without PTSD (Table 1). Although adjusted effect sizes are recommended for meta-analyses of non-randomized studies (Higgins et al., 2021), only one study (Conner et al., 2014) reported an adjusted relative risk that could be used for the present study; unadjusted relative risks were therefore used to maintain consistency across included studies.
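As a sketch of this step, the unadjusted log-relative risks and their sampling variances can be derived from the counts in Table 1 with the "metafor" package (Viechtbauer, 2010). The column names below are illustrative rather than those used in the supplementary files, and small discrepancies from the tabled values may arise from rounding and the extraction decisions described above.

```r
library(metafor)

# Extracted 2x2 counts from Table 1 (column names are illustrative).
counts <- data.frame(
  study        = c("Brenner", "Conner", "Dobscha", "Ilgen", "Louzon", "Shen"),
  died_ptsd    = c(849, 423, 34, 907, 43, 234),
  alive_ptsd   = c(282607, 531586, 63, 205516, 51817, 65617),
  died_noptsd  = c(10535, 3197, 227, 6777, 267, 4258),
  alive_noptsd = c(7556481, 5378442, 459, 3078691, 339365, 3725714)
)

# measure = "RR" returns the log-relative risk (yi) and its sampling
# variance (vi) for each study.
dat <- escalc(measure = "RR",
              ai = died_ptsd, bi = alive_ptsd,
              ci = died_noptsd, di = alive_noptsd,
              data = counts)
```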
Table 1.
Suicide Data from Included Studies
| Author | Year | PTSD: Died by Suicide | PTSD: Did Not Die by Suicide | No PTSD: Died by Suicide | No PTSD: Did Not Die by Suicide | Log-RR | Variance | RR |
|---|---|---|---|---|---|---|---|---|
| Brenner | 2011 | 849 | 282,607 | 10,535 | 7,556,481 | 0.73 | 0.001 | 2.069 |
| Conner | 2014 | 423 | 531,586 | 3,197 | 5,378,442 | 0.26 | 0.002 | 1.299 |
| Dobscha | 2014 | 34 | 63 | 227 | 459 | 0.08 | 0.04 | 1.079 |
| Ilgen | 2010 | 907 | 205,516 | 6,777 | 3,078,691 | 0.63 | 0.001 | 1.886 |
| Louzon | 2016 | 43 | 51,817 | 267 | 339,365 | 0.05 | 0.02 | 1.047 |
| Shen | 2016 | 234 | 65,617 | 4,258 | 3,725,714 | 1.1 | 0.004 | 3.001 |
Note. Data are presented as counts. RR = Relative risk.
Meta-Analysis with Bayesian Estimation
In brief, Bayesian statistical approaches estimate a posterior distribution for each parameter of interest (e.g., the pooled effect size): the range of possible parameter values and how probable each value is given the data at hand. This is typically done through an iterative sampling process (e.g., Markov chain Monte Carlo) in which candidate parameter values are repeatedly drawn and evaluated against the data. For a more thorough explanation, Gelman et al. (2013) and Kruschke (2015) are helpful starting points. There are four main steps for conducting a Bayesian meta-analysis: 1) model specification; 2) prior selection; 3) model estimation and convergence; and 4) interpretation. In the following sections, we will first explain how each step is completed in the context of Bayesian meta-analysis and then provide a demonstration with a running example.
Various open-source software options exist for conducting general-purpose Bayesian analyses and can be used for meta-analysis. Examples include OpenBUGS (Lunn et al., 2009) and JAGS (Plummer, 2003), which can be used within different programming languages, although these software programs tend to require specific coding of models that can make meta-analysis less accessible to newcomers. We recommend Stan, which is a popular and well-supported platform for Bayesian analysis that is also accessible through multiple analytic programming languages (Stan Development Team, 2022). One advantage of Stan is that it has a robust development and support community. For our running example, we will make use of Stan via the “brms” package (Bürkner, 2017) in R (R Core Team, 2021) because this package allows models to be written in a standard, user-friendly formula notation with which many R users are likely already familiar and allows for flexibility in model specification. Many Bayesian-oriented R packages, like “RStan” (Stan Development Team, 2021) and “rstanarm” (Goodrich et al., 2020), are built on Stan and offer similar functionality: “rstanarm” offers the same formula notation as “brms”, as well as faster estimation for certain models (albeit less flexibility overall), while “RStan” offers the most flexibility but requires hand-coding of models, which may be inaccessible for new users. Other options for Bayesian meta-analysis (also built on Stan) include the R package “MetaStan” (Günhan & Röver, 2022), which was designed to run complicated Bayesian meta-analyses, and the program “JASP”, which provides an intuitive graphical interface built on R (JASP Team, 2021). Note that, at present, the primary and most commonly used R packages for Bayesian meta-analysis rely upon Stan, meaning they use the same tools for estimation. Therefore, choice of package tends to come down to personal preference and familiarity. Finally, there are a few R packages designed specifically for advanced Bayesian meta-analytic techniques, like model averaging, which are beyond the scope of this introductory tutorial. Examples include “RoBMA” (Bartoš & Maier, 2020) and “metaBMA” (Heck et al., 2019). The RoBMA package will be briefly discussed in the “Publication Bias” section later in the tutorial.
Step 1: Model Specification
Model specification, or the selection of parameters to estimate, is not unique to Bayesian meta-analysis, as this step is also required for traditional frequentist approaches. Pertinently, Bayesian meta-analysis can be treated as a form of hierarchical, or multilevel, modeling, in which data are approached as belonging to groups (Williams et al., 2018). Effect sizes from each individual study are assumed to cluster around a group mean, with this group mean, which accounts for study weight, reflecting the parameter of interest (i.e., the pooled effect size). As with frequentist meta-analysis, the primary decisions when choosing parameters for a Bayesian meta-analysis are whether covariates should be included (i.e., meta-regression) and whether to employ a fixed-effect or random-effects analysis. The pooled effect size is estimated by the model intercept, with each meta-regression covariate, if included, estimated by an additional model parameter. Using a random-effects analysis will add a heterogeneity parameter (represented by τ [tau] throughout the paper), which, at a basic level, increases variability in the values the group mean can take, helping to account for heterogeneity between studies. As a side note, random effects are generally the recommended approach as they rely upon less strict assumptions (Borenstein et al., 2010). Finally, within a hierarchical modeling framework, different types of effect sizes can be readily accommodated based on the type of the data at hand (Kruschke, 2015, p. 443).
Running Example: Model Specification
For the present meta-analytic example, we were interested in obtaining a pooled relative risk representing the association between PTSD and suicide in veterans and military personnel. We started by calculating the log-relative risk for each study, which is a standard approach for meta-analysis of binary outcomes (Higgins et al., 2021). We then chose our parameters. We opted to use a random-effects approach because we expected each study to measure a similar, yet distinct, underlying true effect size given each included population. We also did not have any a priori covariates of interest that were common across studies, so we did not add the additional parameters used in a meta-regression. Thus, our model included two primary parameters to be estimated: the model intercept (a fixed effect) and τ (a random effect). We also modeled the effect size of each of the six included studies as a random effect, allowing these study-specific effects to be re-estimated by the model. An alternative approach, rather than re-estimating the six study effect sizes, would have been to estimate another variance parameter in addition to τ, leaving us with only three parameters—the intercept, τ, and a new variance parameter (Nicenboim et al., 2021). However, re-estimating study effect sizes helps us better account for individual study uncertainty and allows us to create a Bayesian-style forest plot (Nicenboim et al., 2021). After estimation, τ provided a measure of the variability of effect sizes across studies, and the model intercept represented our pooled effect size on the log-relative risk scale, or the estimate of the association between PTSD and suicide. These parameters are summarized in Table 2.
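As a concrete sketch of this specification in brms formula notation, assuming the hypothetical data frame `dat` from the earlier sketch (with `yi` the log-relative risk, `vi` its sampling variance, and `study` a study identifier):

```r
library(brms)

# Each observed log-RR is treated as measured with a known sampling SD
# via se(); the overall intercept is the pooled effect size, and the
# (1 | study) term adds the study-level random effects, whose standard
# deviation is tau (reported by brms as sd(Intercept) for study).
dat$sei <- sqrt(dat$vi)
meta_formula <- bf(yi | se(sei) ~ 1 + (1 | study))
```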
Table 2.
List of Model Parameters and their Descriptions
| Parameter | Symbol | Description | Prior Distribution | Prior Notation |
|---|---|---|---|---|
| Intercept | Intercept | Pooled log-relative risk across studies | Model 1 (Non-informative): Normal with a mean of zero and an SD of 10,000 | N(0, 10000) |
| | | | Model 2 (Informative): Normal with a mean of 1.59 and an SD of 0.25 | N(1.59, 0.25) |
| | | | Model 3 (Weakly informative): Normal with a mean of zero and an SD of one | N(0, 1) |
| Tau | τ | Estimate of between-study heterogeneity | Model 1 (Non-informative): Uniform between zero and 10,000 | Uniform(0, 10000) |
| | | | Model 2 (Informative): Half-Cauchy with a location of zero and a scale of 0.5 | HC(0, 0.5) |
| | | | Model 3 (Weakly informative): Half-Cauchy with a location of zero and a scale of 0.5 | HC(0, 0.5) |
Step 2: Prior Selection
Prior selection involves a priori specification of probability distributions (e.g., normal, uniform, Cauchy; more on this below) for all of the estimated parameters included in the model. The purpose of this is to provide the Bayesian estimator with sets of values it can draw from and then test against the data to establish fit, as the prior determines the probability that a given value will be drawn for a particular model parameter. Because a prior is a distribution, it requires its own parameters—often called “hyperparameters”—that further shape these probabilities. A key difference here is that, unlike the parameters we specified in step one, these hyperparameters are not estimated, but rather are supplied by the researcher as prior information. Hyperparameters can often be formatted as a mean and a variance term, but each statistical distribution has its own requirements.
Choice of priors tends to be one of the most controversial steps of Bayesian approaches, with a common criticism being that prior selection is subjective and amounts to the researcher influencing the outcome in a desired direction (Gelman, 2008). This is because the researcher must decide how probable they believe a given outcome is for each parameter; the stronger the beliefs expressed in a prior, the stronger the data must be to overcome them if said beliefs are not accurate (Kruschke, 2015, p. 113). Indeed, poor choice of priors can bias model results (Holtmann et al., 2016; Van Dongen, 2006). However, when used correctly, priors can support and strengthen data, rather than interfere with them (Morris et al., 2015; Polson & Scott, 2012). For example, Bayesian meta-analysis with appropriate priors can provide valid inference even when the number of included studies is low and traditional approaches fail (Seide et al., 2019). It should also be noted that non-Bayesian frequentist statistics make their own assumptions, meaning there is no truly objective method to analyze data. The use of priors is one of the defining advantages of Bayesian statistics: namely, the ability to formally update pre-existing information with new data (Gelman et al., 2013). Priors typically fall into three categories, which are described below: non-informative, informative, and weakly informative.
Non-Informative Priors
A common strategy to avoid the selection of a prior inadvertently biasing results is to use a non-informative or “flat” prior. The idea behind non-informative priors is that they do not incorporate any pre-existing information and assign equal probability to every value. For example, a non-informative prior for a pooled standardized mean difference in a meta-analysis could simply specify that any real number was equally possible. While non-informative priors tend to make few assumptions and are easy to use, they are not foolproof and can still bias results, particularly when sample sizes are small (Gelman et al., 2013, p. 128; Van Dongen, 2006).
Informative Priors
An alternative to a non-informative prior is an informative prior, which states a stronger pre-existing belief and generally describes a specific range of values for the parameter. If a researcher had a well-supported reason to believe a novel treatment should reduce scores on a symptom measure by five points, they could use an informative prior for the associated parameter that was centered around this value with minimal variance, such as a normal distribution with a mean of five and a standard deviation of 0.5. Informative priors can be derived from previous empirical research into the topic, such as large studies or meta-analyses, or from experts in the area of study. Compared to non-informative priors, well-supported informative priors can enhance estimation precision without compromising accuracy (Morris et al., 2015). However, inaccurate strong priors can overshadow the data, biasing estimates and giving inaccurate results even with large sample sizes (Holtmann et al., 2016). Furthermore, strong prior information regarding a parameter is not always available, further limiting this approach.
Weakly Informative Priors
A third approach to prior selection seeks to balance the previous two by incorporating “weak” information about the parameter that covers all plausible “real-world” values, without giving any single value too much probability. This is known as the weakly informative prior (Van Dongen, 2006; Williams et al., 2018). As suggested in Gelman et al. (2013, p. 56), one approach to creating a weakly informative prior is to start with a non-informative prior and then restrict it enough to exclude unreasonable values, or those that are impossible given the data. Such restriction can often be done by reducing the variance hyperparameter of the prior, thereby shifting probability away from extreme values and towards the center of the distribution. Weakly informative priors can improve estimation over non-informative priors, which is helpful in situations where strong prior knowledge is not available (Polson & Scott, 2012). Although the boundaries separating weakly informative and informative priors can be unclear at times, in general a weakly informative prior will allow for a broader range of values than an informative prior. In meta-analyses, data are typically on a standardized scale, which can simplify the specification of a weakly informative prior. For example, when pooling standardized mean differences for psychotherapy outcomes, such as by calculating Hedges’ g, it is safe to assume that estimates more than several units away from zero would be unlikely, as such values reflect extreme magnitudes in treatment differences (Williams et al., 2018). A weakly informative prior for the pooled effect size in this situation could therefore be a normal distribution with a mean of zero and a standard deviation of 5, corresponding to an ~95% probability that the pooled effect is between positive and negative 10, with more extreme values still possible but unlikely. Additionally, a mean of zero assumes that the most likely pooled effect size is zero, placing the burden on the data to show otherwise.
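The coverage claim above can be verified with a one-line check in R:

```r
# Probability that a N(0, 5) draw falls between -10 and +10 (i.e.,
# within two standard deviations of the mean).
pnorm(10, mean = 0, sd = 5) - pnorm(-10, mean = 0, sd = 5)
#> [1] 0.9544997
```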
Regardless of the approach used, sensitivity analyses can be conducted to assess the assumptions implicit in prior selection (Depaoli et al., 2020; Gelman et al., 2013, p. 161). Such analyses will be further discussed below (see Step 4: Interpretation).
Running Example: Prior Selection
Following model specification, we selected priors for our meta-analysis (see Table 2). For educational purposes, we created three separate meta-analyses, each using one of the three approaches to prior selection (i.e., non-informative, informative, and weakly informative). Two parameters required priors in each model: the intercept and τ (Röver et al., 2021). If we had conducted a meta-regression, or added additional random effects, then we would have needed to select priors for those parameters as well.
Our first model used non-informative priors for the model intercept and τ. For the model intercept, we chose a normally distributed prior with a mean of zero and a standard deviation of 10,000, or N(0, 10000). Given that variance (i.e., τ) needs to be greater than zero, we gave τ a uniform prior between zero and 10,000, or Uniform(0, 10000). Although these priors do not technically include all possible values for the intercept and τ, they provided adequate coverage of potential estimates (with essentially equal probabilities in the case of the intercept) to suffice as non-informative priors in this example.
For our second model, we selected an informative prior for the intercept. We based our choice of informative intercept prior on the results from a recent meta-analysis evaluating the association between risk of suicide and mental disorders (Moitra et al., 2021). Moitra et al. (2021) found that the relative risk of suicide ranged from 4.11 to 7.64 across mood disorders, anxiety disorders, and schizophrenia, based on results from 20 studies. We assumed that the association between PTSD and suicide in veterans and military personnel would be of similar magnitude and chose a prior of N(1.59, 0.25), which corresponded with a mean relative risk of 4.9, equivalent to the overall relative risk for anxiety disorders reported by Moitra et al. (2021), and an ~95% prior probability that the relative risk would fall between 3 and 8. Because we had no strong a priori beliefs regarding heterogeneity, we opted to give τ a weakly informative prior. Gelman et al. (2013, p. 130) recommended the half-Cauchy distribution (with a location of zero) for weakly informative hierarchical variance parameters (e.g., τ). The half-Cauchy distribution restricts values to positive numbers while placing most of the distribution probability near zero (i.e., little between-group variance in effect sizes) but allowing for large values as well (i.e., large between-group variance in effect sizes). Röver et al. (2021) compared a half-normal prior distribution with a mean of zero and a standard deviation of 0.5 to empirically derived heterogeneity estimates from previously published meta-analyses of log-transformed effect sizes and found that this prior was well-matched to the empirical estimates. Given that the half-Cauchy distribution gives more probability to higher levels of between-group variance than a half-normal distribution (Gelman et al., 2013, p. 130), or in other words is less informative, we opted to use a half-Cauchy prior for τ with a location of zero and a scale of 0.5: i.e., HC(0, 0.5).
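The implications of this informative prior on the relative-risk scale can be checked directly:

```r
# Back-transform the informative N(1.59, 0.25) intercept prior to the
# relative-risk scale: the prior mean and its central 95% interval.
exp(1.59)                                # ~4.90
exp(qnorm(c(0.025, 0.975), 1.59, 0.25))  # ~3.0 to ~8.0
```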
The third and final model employed weakly informative priors on the intercept and τ. Williams et al. (2018) recommend using a weakly informative prior for the model intercept distributed as N(0, 1) in meta-analyses of standardized effect sizes. Because our data were on the log-relative risk scale, this prior corresponded to a mean relative risk of one (i.e., e0 = 1) with ~95% of the prior probability falling between a relative risk of 0.14 (i.e., e−2 = 0.14) and 7.39 (i.e., e2 = 7.39). It was still possible for larger or smaller relative risk values to be estimated, but substantial evidence from our data would have been needed for such values to inform the posterior. For τ, we selected the weakly informative HC(0, 0.5) prior as we did in the model with an informative intercept prior. Prior predictive checks showed that generated data covered the full range of observed values for all three models (Figure S1).
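A sketch of how the weakly informative configuration (Model 3) can be passed to brms follows, continuing with the hypothetical objects from the earlier sketches. Note that for parameters of class `sd`, brms truncates the stated prior at zero, which turns `cauchy(0, 0.5)` into the half-Cauchy described above.

```r
# Model 3: weakly informative priors for the intercept and tau.
priors_weak <- c(
  prior(normal(0, 1), class = Intercept),  # pooled log-RR
  prior(cauchy(0, 0.5), class = sd)        # tau; truncated at zero (half-Cauchy)
)

fit_weak <- brm(
  yi | se(sei) ~ 1 + (1 | study),
  data  = dat,
  prior = priors_weak,
  seed  = 1  # arbitrary seed for reproducibility
)

# Models 1 and 2 differ only in the prior() calls, e.g.,
# prior(normal(1.59, 0.25), class = Intercept) for the informative
# intercept of Model 2. Prior predictive checks can be obtained by
# re-fitting with sample_prior = "only" and calling pp_check() on the
# result.
```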
Step 3: Model Estimation and Convergence
Model estimation and convergence are common aspects of Bayesian statistics and are not restricted to meta-analyses; accordingly, we will only touch upon them briefly here. Model estimation involves the technical aspects of running the analytical software, whereas convergence refers to ensuring that the estimated results are reliable. Diagnostic checks that help assess convergence include trace plots, the R̂ (“Rhat”) statistic, effective sample size (ESS), and posterior predictive checks (Gelman et al., 2013; Kruschke, 2015; Vehtari et al., 2021). The textbooks by Gelman et al. (2013) and Kruschke (2015) provide greater detail on these processes.
Running Example: Model Estimation and Convergence
For estimation of our three models, we used the default options for the “brm” function of the “brms” package, which uses a no-U-turn Hamiltonian Monte Carlo sampler for estimation (Bürkner, 2017). These defaults consisted of four chains with 2,000 iterations each, 1,000 of which were discarded as warm-up per chain. In total, each model parameter ended up with 4,000 posterior estimates.
Trace plots for our eight estimated parameters had good mixing across all three models, as the lines for each of the four chains demonstrated considerable overlap for every parameter (Figures S2–S4). As shown in Table 3, all estimated parameters had R̂ values below 1.01, and the bulk- and tail-effective sample size values well exceeded 400, suggesting we had sufficient iterations to achieve stable estimates (Vehtari et al., 2021). Additionally, the plots for our posterior predictive check showed that our generated effect sizes approximated our observed effect sizes, particularly at the tails of the data distribution (Figure S5). Generated effect sizes for some of the iterations did deviate at certain points for each model, such as the middle of the distribution, but most iterations appeared to match the observed data and no systematic differences were apparent. Overall, all three models achieved convergence and can be interpreted.
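In brms, the diagnostics described above can be obtained with a few standard calls; a sketch, assuming the `fit_weak` object from the earlier sketches:

```r
summary(fit_weak)   # estimates plus Rhat, Bulk_ESS, and Tail_ESS columns
plot(fit_weak)      # posterior density and trace plots for each parameter
pp_check(fit_weak)  # posterior predictive check against the observed data
```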
Table 3.
Model Results and Convergence Diagnostics
| Parameter | Log RR | SE | Lower CI | Upper CI | R̂ | Bulk-ESS | Tail-ESS |
|---|---|---|---|---|---|---|---|
| Model 1: Non-informative priors | | | | | | | |
| Intercept | 0.482 | 0.268 | −0.115 | 0.981 | 1.006 | 700 | 575 |
| Tau | 0.573 | 0.330 | 0.251 | 1.357 | 1.003 | 501 | 658 |
| RE: Brenner | 0.243 | 0.270 | −0.258 | 0.831 | 1.006 | 705 | 573 |
| RE: Conner | −0.219 | 0.270 | −0.723 | 0.383 | 1.005 | 712 | 566 |
| RE: Dobscha | −0.347 | 0.312 | −0.990 | 0.242 | 1.003 | 859 | 881 |
| RE: Ilgen | 0.152 | 0.269 | −0.346 | 0.741 | 1.005 | 705 | 581 |
| RE: Louzon | −0.395 | 0.291 | −0.970 | 0.205 | 1.006 | 791 | 717 |
| RE: Shen | 0.608 | 0.273 | 0.110 | 1.207 | 1.006 | 730 | 628 |
| Model 2: Informative priors | | | | | | | |
| Intercept | 1.145 | 0.268 | 0.668 | 1.690 | 1.003 | 607 | 723 |
| Tau | 0.819 | 0.355 | 0.329 | 1.735 | 1.009 | 442 | 804 |
| RE: Brenner | −0.417 | 0.269 | −0.955 | 0.062 | 1.003 | 614 | 758 |
| RE: Conner | −0.878 | 0.272 | −1.435 | −0.395 | 1.003 | 623 | 868 |
| RE: Dobscha | −0.986 | 0.341 | −1.655 | −0.345 | 1.002 | 774 | 1,014 |
| RE: Ilgen | −0.509 | 0.269 | −1.047 | −0.025 | 1.003 | 610 | 756 |
| RE: Louzon | −1.054 | 0.311 | −1.674 | −0.476 | 1.002 | 705 | 1,048 |
| RE: Shen | −0.045 | 0.271 | −0.593 | 0.438 | 1.003 | 614 | 838 |
| Model 3: Weakly informative priors | | | | | | | |
| Intercept | 0.467 | 0.201 | 0.024 | 0.860 | 1.002 | 645 | 817 |
| Tau | 0.466 | 0.188 | 0.239 | 0.930 | 1.008 | 705 | 908 |
| RE: Brenner | 0.259 | 0.203 | −0.138 | 0.714 | 1.002 | 667 | 805 |
| RE: Conner | −0.203 | 0.206 | −0.618 | 0.248 | 1.003 | 638 | 836 |
| RE: Dobscha | −0.311 | 0.253 | −0.824 | 0.199 | 1.001 | 938 | 1,436 |
| RE: Ilgen | 0.167 | 0.203 | −0.232 | 0.618 | 1.002 | 661 | 834 |
| RE: Louzon | −0.370 | 0.225 | −0.816 | 0.071 | 1.003 | 945 | 1,381 |
| RE: Shen | 0.620 | 0.209 | 0.210 | 1.090 | 1.002 | 702 | 900 |
Note. Mean estimates for the random effects represent deflections from the model intercept. CI = 95% credible interval; ESS = effective sample size; R̂ = convergence diagnostic (Rhat); RE = random effect; RR = relative risk; SE = standard error.
Step 4: Interpretation
The primary goals for a random-effects meta-analysis are to interpret the pooled effect size and the between-study heterogeneity. Using a Bayesian approach, this is done by exploring the estimated posterior distributions of these model parameters (i.e., the model intercept and τ). The re-estimated study effect sizes will also have posteriors that can be examined. If doing a meta-regression, the posteriors of any additional parameters, such as the regression coefficients, will also be interpreted. Typically, the mean estimate of a parameter is reported, with uncertainty quantified by a credible interval (CI; see Hespanhol et al., 2019). Sensitivity analyses, in which models with alternative priors are estimated, are also recommended to assess robustness of results (Depaoli et al., 2020).
If analyzing odds ratios or relative risks, heterogeneity can be assessed by directly interpreting τ using the recommendations of Spiegelhalter (2004, p. 170), who suggests that τ values from 0.1 to 0.5 represent “reasonable heterogeneity,” values from 0.5 to 1.0 represent “fairly high heterogeneity,” and values over 1.0 represent “fairly extreme heterogeneity.” Such guidelines could be used to judge the generalizability of the pooled effect size. Unfortunately, there do not appear to be equivalent guidelines for other types of effect size. While the Spiegelhalter (2004, p. 170) recommendations seem broadly reasonable for other standardized effect sizes, like Hedges’ g, it is certainly an area that requires additional research and validation. The I2 statistic can also be calculated using the estimated mean of τ and the variances of the individual effect sizes (Seide et al., 2019), but it should be noted that this is not a Bayesian-specific approach and does not appear to have been validated within a Bayesian framework.
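For readers who want to compute I2 this way, the sketch below uses one common formulation based on a "typical" within-study sampling variance derived from the study-level variances; object names are illustrative and carried over from the earlier sketches.

```r
# I^2 from a point estimate of tau (e.g., its posterior mean) and the
# study-level sampling variances vi.
i2_from_tau <- function(tau, vi) {
  w  <- 1 / vi                                             # inverse-variance weights
  s2 <- (length(vi) - 1) * sum(w) / (sum(w)^2 - sum(w^2))  # "typical" within-study variance
  100 * tau^2 / (tau^2 + s2)                               # I^2 as a percentage
}

i2_from_tau(tau = 0.466, vi = dat$vi)  # ~99% for the running example
```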
Running Example: Interpretation of Results
Results for the three models are presented in Table 3; all reported CIs are equal-tailed, which is the default approach used by the “brms” package (Bürkner, 2017). Estimates for the models using non-informative and weakly informative priors were similar, although there was greater uncertainty in the intercept parameter of the non-informative model, resulting in its 95% CI crossing zero. In contrast, results from the informative prior model differed considerably from the other two, with much higher estimates for both the intercept and τ. In this model, each individual study re-estimate was lower than the intercept; in other words, the pooled effect size was much higher than any observed effect size. Together, these findings suggest that the informative intercept prior overwhelmed the data and demonstrate the need for caution when using strong priors.
For the remainder of this example, we chose to focus our interpretation on the results from the weakly informative model. This would be our preferred approach in an actual analysis—unless we had strong, pre-existing data drawn from similar research (e.g., a meta-analysis of the association between PTSD and suicide in civilians) to support the use of informative priors—as weakly informative priors tend to outperform non-informative priors (Polson & Scott, 2012). Model results were transformed to the relative risk scale by exponentiating each iteration of the estimated intercepts, which provided a full posterior distribution of relative risks. As shown in Figure 1, the 95% CI of our pooled relative risk was 1.02 to 2.36 with a mean estimate of 1.63. This corresponded to a 95% probability that veterans and military personnel with PTSD are 2% to 136% more likely to die by suicide relative to those without PTSD, with the expected increase in risk associated with PTSD being 63%. We also examined the probabilities associated with exceeding specific cutoffs for relative risk. There was a 97.9% probability that the relative risk exceeded 1.00, an 89.9% probability that it exceeded 1.25, and a 10.8% probability that it exceeded 2.00. Taken together, these results suggest that it is highly likely that PTSD is associated with an increased risk of suicide in veterans and military personnel. It is also highly likely the relative risk well exceeds 1.00, although considerably less likely that it exceeds 2.00.
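The probabilities reported above can be computed directly from the posterior draws; a sketch assuming the `fit_weak` object from the earlier sketches:

```r
# Exponentiate the intercept draws to move to the relative-risk scale.
draws <- as_draws_df(fit_weak)
rr    <- exp(draws$b_Intercept)

mean(rr)                       # posterior mean relative risk
quantile(rr, c(0.025, 0.975))  # 95% equal-tailed credible interval
mean(rr > 1.00)                # P(RR > 1.00)
mean(rr > 1.25)                # P(RR > 1.25)
mean(rr > 2.00)                # P(RR > 2.00)
```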
Figure 1. Forest Plot of Posterior Distributions by Study and Pooled Effects Using Weakly Informative Priors.

The gray line indicates a relative risk of one. The solid black line indicates the pooled relative risk. The dotted black lines indicate the 95% credible interval of the pooled relative risk. CI = Credible Interval.
The mean estimate for τ in the weakly informative model was 0.47 (95% CI: 0.24 – 0.93). Compared to the recommendations provided by Spiegelhalter (2004, p. 170), the mean fell within the “reasonable” heterogeneity range, but the CI extended into the “fairly high” range. I2 was calculated as 98.8%, which suggested a large amount of heterogeneity. However, I2 can be biased when the number of included studies is small (von Hippel, 2015), which is one reason Bayesian meta-analyses can be advantageous (Williams et al., 2018). These results indicate that there is a possibility of elevated heterogeneity between studies and some caution should be taken when interpreting the generalizability of the present meta-analytic findings.
Sensitivity Analyses.
The robustness of the results obtained from the model with weakly informative priors was further assessed using sensitivity analyses (Table S1). In addition to the model with non-informative priors estimated above, two models with alternative intercept priors and two models with alternative τ priors were estimated. For the alternative intercept priors, we chose to test a less informative prior distributed as N(0, 5) and a prior with a shifted mean distributed as N(3, 1). For the alternative τ priors, we chose to test a prior with greater variability distributed as HC(0, 1) and one with less variability distributed as HC(0, 0.1). In addition to examining CIs, posterior means for each parameter were compared against the means from the weakly informative model and the percentage deviation was calculated (Depaoli et al., 2020). Because the random effect study re-estimates are dependent upon the intercept, we focused our attention on the intercept and τ. For the most part, estimates were similar across models, with the exception of a larger τ estimate in the non-informative model and a larger intercept estimate in the model with the N(3, 1) intercept prior. Importantly, the 95% CI for the intercept of the non-informative model included zero, whereas the intercept CI for the original weakly informative model did not. The finding that different intercept prior selections led to different interpretations of the model intercept indicates that the obtained results may not be stable, although the similarity in estimates across models reduces this concern somewhat. While formal model comparisons like leave-one-out cross-validation (Vehtari et al., 2017) can be made during sensitivity analyses, Depaoli et al. (2020) advise against this approach, instead recommending that the original model results are interpreted in light of the findings from the sensitivity analysis. Overall, the sensitivity analyses revealed that the model results are at least somewhat dependent upon selection of priors, particularly regarding the certainty of parameter estimates.
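A sketch of one such sensitivity re-fit, using the hypothetical objects from the earlier sketches:

```r
# Re-fit with the less informative N(0, 5) intercept prior and compare
# the pooled estimate against the original weakly informative model.
fit_alt <- brm(
  yi | se(sei) ~ 1 + (1 | study),
  data  = dat,
  prior = c(prior(normal(0, 5), class = Intercept),
            prior(cauchy(0, 0.5), class = sd)),
  seed  = 1
)

orig <- fixef(fit_weak)["Intercept", "Estimate"]
alt  <- fixef(fit_alt)["Intercept", "Estimate"]
100 * (alt - orig) / orig  # percentage deviation (Depaoli et al., 2020)
```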
The estimates of the random effects for our individual studies are also provided in Figure 1. Compared to the original relative risks extracted from the studies (Table 1), the study re-estimates were pulled closer to the pooled effect size, especially for studies with the highest variance (i.e., the most uncertainty; Dobscha et al., 2014; Louzon et al., 2016; Shen et al., 2016). Additionally, the lower bound of the 95% CIs exceeded a relative risk of one for four of our six studies.
Comparison to a Frequentist Approach.
As part of this example, results from the weakly informative model were compared against those obtained using traditional frequentist methods. Corresponding code is included in the public supplementary files (see Introduction for DOI link). A random-effects meta-analysis using restricted maximum likelihood estimation was conducted via the “metafor” package (Viechtbauer, 2010). The estimated log relative risk was 0.494 (standard error = 0.168; 95% confidence interval = 0.165 – 0.824; p = .003; τ = 0.399). Estimates for the pooled effect size were nearly identical, although overall interpretation of results differs across the frequentist and Bayesian approaches. Interpretation of Bayesian results is intuitive—the 95% credible interval for the pooled effect size means that there is a 95% probability that the “true” effect size falls between 0.024 and 0.860. In contrast, the frequentist 95% confidence interval is harder to interpret. It reflects the idea that if you were to repeat the frequentist meta-analysis a large number of times (with unique studies), 95% of the calculated confidence intervals would contain the true effect size. However, the actual estimated confidence interval does not provide information regarding the probability of it containing the true parameter value.
The frequentist model also detected the presence of heterogeneity (Q = 153, df = 5, p < .001). However, the frequentist model underestimated τ compared to the Bayesian model (τ = 0.399 vs. 0.466), which is a known limitation of frequentist meta-analytic approaches when there is elevated between-study heterogeneity and few included studies (Seide et al., 2019). While not an issue in this example, these findings demonstrate how frequentist approaches to meta-analysis are at greater risk of overlooking the potential impact of between-study heterogeneity.
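The corresponding frequentist model can be fit in a single call to metafor; a sketch using the `yi`/`vi` columns from the first sketch:

```r
library(metafor)

# Random-effects meta-analysis estimated with REML.
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)  # pooled log-RR, confidence interval, Q test, and tau^2
```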
Publication Bias.
Finally, although largely beyond the scope of this paper, advanced Bayesian methods, such as robust Bayesian meta-analysis (Bartoš & Maier, 2020; Maier et al., 2020) and the Bayesian “fill-in” method (Du et al., 2017), have recently been proposed as alternatives to traditional approaches to publication bias, such as the funnel plot and the Egger test. To provide a quick demonstration, tests of publication bias using a funnel plot, Egger test, and robust Bayesian meta-analysis via the RoBMA package in R (Bartoš & Maier, 2020) were compared. Note that the RoBMA package combines Bayesian meta-analytic models into a composite ensemble, which can be used to weigh the evidence for and against publication bias. The funnel plot (Figure S6) showed a wide dispersion of study estimates, likely due in part to the small number of included studies, as well as possible asymmetry in the lower right-hand side, raising concern for the possibility of publication bias. However, the Egger test did not detect publication bias (t = −0.57, df = 4, p = .597). Similarly, the RoBMA results did not find evidence for publication bias, with a Bayes factor of 0.97 indicating that the likelihood of publication bias is 0.97 times the likelihood of no publication bias (i.e., essentially equivalent). In the event that there is evidence of publication bias, the RoBMA package can be used further to obtain adjusted estimates that correct for it (Maier et al., 2020). As development into these approaches and their corresponding software progresses, their reach and accessibility will likely continue to improve.
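A sketch of these checks: the metafor calls are standard, while the RoBMA call assumes a recent (>= 2.0) version of the package and should be checked against its documentation.

```r
# Funnel plot and Egger-type regression test, using the frequentist
# model "res" from the previous sketch.
funnel(res)
regtest(res)  # Egger-type test for funnel-plot asymmetry

# Robust Bayesian meta-analysis ensemble; the y/se input form is an
# assumption here, so verify against the RoBMA documentation.
library(RoBMA)
fit_robma <- RoBMA(y = dat$yi, se = sqrt(dat$vi), seed = 1)
summary(fit_robma)  # includes a Bayes factor for publication bias
```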
Conclusion
Bayesian meta-analysis offers a powerful tool for evidence synthesis of psychological trauma literature, with advantages over traditional meta-analytic approaches including improved estimation of heterogeneity and better performance with a small number of studies (Seide et al., 2019; Williams et al., 2018). In this paper, the steps of Bayesian random-effects meta-analysis, ranging from model specification to interpretation of results, were introduced and discussed. Each step was further highlighted through the use of a running example that meta-analyzed data from a small handful of studies examining the association between PTSD and suicide in veterans and military personnel. We encourage researchers to consider Bayesian methods when synthesizing literature, as the frequent use of standardized outcomes in meta-analyses, as well as the need for few parameters overall, make these approaches a natural fit in light of the offered advantages.
Clinical Impact Statement.
This paper presents a tutorial for random-effects Bayesian meta-analyses. Each step of a Bayesian meta-analysis is explained, along with recommendations from current research. An illustrative example, with included data and code, combines findings from six studies looking at the association between posttraumatic stress disorder and suicide in veterans and military personnel. Our hope is that this guide helps make Bayesian meta-analysis more accessible to researchers in psychology.
Acknowledgments
This research was supported by the Department of Veterans Affairs Office of Academic Affiliations Advanced Fellowship Program in Mental Illness Research and Treatment, as well as the Rocky Mountain Mental Illness Research, Education, and Clinical Center (MIRECC) for Suicide Prevention. Dr. Kaizer’s efforts were supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under National Heart, Lung, and Blood Institute Award K01HL151754. Since multiple authors are employees of the U.S. government and contributed to this article as part of their official duties, the work is not subject to U.S. copyright. The views expressed in this article are those of the authors and do not necessarily represent those of the U.S. Department of Veterans Affairs, the Rocky Mountain MIRECC, the National Institutes of Health, or the U.S. government. The authors have no known conflicts of interest to disclose.
References
- Bartoš F, & Maier M (2020). RoBMA: An R package for robust Bayesian meta-analyses (Version 2.1.1) [Computer software]. https://CRAN.R-project.org/package=RoBMA
- Borenstein M, Hedges LV, Higgins JP, & Rothstein HR (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1(2), 97–111. 10.1002/jrsm.12
- Brenner LA, Ignacio RV, & Blow FC (2011). Suicide and traumatic brain injury among individuals seeking Veterans Health Administration services. The Journal of Head Trauma Rehabilitation, 26(4), 257–264. 10.1097/HTR.0b013e31821fdb6e
- Bürkner P-C (2017). brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80(1), 1–28. 10.18637/jss.v080.i01
- Conner KR, Bossarte RM, He H, Arora J, Lu N, Tu XM, & Katz IR (2014). Posttraumatic stress disorder and suicide in 5.9 million individuals receiving care in the Veterans Health Administration health system. Journal of Affective Disorders, 166, 1–5. 10.1016/j.jad.2014.04.067
- Depaoli S, Winter SD, & Visser M (2020). The importance of prior sensitivity analysis in Bayesian statistics: Demonstrations using an interactive Shiny app. Frontiers in Psychology, 11, 608045. 10.3389/fpsyg.2020.608045
- Dobscha SK, Denneson LM, Kovas AE, Teo A, Forsberg CW, Kaplan MS, Bossarte R, & McFarland BH (2014). Correlates of suicide among veterans treated in primary care: Case-control study of a nationally representative sample. Journal of General Internal Medicine, 29(Suppl 4), 853–860. 10.1007/s11606-014-3028-1
- Du H, Liu F, & Wang L (2017). A Bayesian “fill-in” method for correcting for publication bias in meta-analysis. Psychological Methods, 22(4), 799–817. 10.1037/met0000164
- Gelman A (2008). Objections to Bayesian statistics. Bayesian Analysis, 3(3), 445–449.
- Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, & Rubin DB (2013). Bayesian data analysis (3rd ed.). CRC Press.
- Goodrich B, Gabry J, Ali I, & Brilleman S (2020). rstanarm: Bayesian applied regression modeling via Stan (Version 2.21.1) [Computer software]. https://mc-stan.org/rstanarm
- Günhan BK, & Röver C (2022). MetaStan: Bayesian meta-analysis via ‘Stan’ (Version 1.0.0) [Computer software]. https://CRAN.R-project.org/package=MetaStan
- Hailes HP, Yu R, Danese A, & Fazel S (2019). Long-term outcomes of childhood sexual abuse: An umbrella review. Lancet Psychiatry, 6(10), 830–839. 10.1016/s2215-0366(19)30286-x
- Harrer M, Cuijpers P, Furukawa TA, & Ebert DD (2021). Doing meta-analysis with R: A hands-on guide. Chapman & Hall/CRC Press.
- Heck DW, Gronau QF, & Wagenmakers E-J (2019). metaBMA: Bayesian model averaging for random and fixed effects meta-analysis [Computer software]. https://CRAN.R-project.org/package=metaBMA
- Hespanhol L, Vallio CS, Costa LM, & Saragiotto BT (2019). Understanding and interpreting confidence and credible intervals around effect estimates. Brazilian Journal of Physical Therapy, 23(4), 290–301. 10.1016/j.bjpt.2018.12.006
- Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, & Welch VA (2021). Cochrane handbook for systematic reviews of interventions (Version 6.2, updated February 2021). Cochrane.
- Holliday R, Borges LM, Stearns-Yoder KA, Hoffberg AS, Brenner LA, & Monteith LL (2020). Posttraumatic stress disorder, suicidal ideation, and suicidal self-directed violence among U.S. military personnel and veterans: A systematic review of the literature from 2010 to 2018. Frontiers in Psychology, 11, 1998. 10.3389/fpsyg.2020.01998
- Holtmann J, Koch T, Lochner K, & Eid M (2016). A comparison of ML, WLSMV, and Bayesian methods for multilevel structural equation models in small samples: A simulation study. Multivariate Behavioral Research, 51(5), 661–680. 10.1080/00273171.2016.1208074
- Howard GS, Maxwell SE, & Fleming KJ (2000). The proof of the pudding: An illustration of the relative strengths of null hypothesis, meta-analysis, and Bayesian analysis. Psychological Methods, 5(3), 315–332. 10.1037/1082-989x.5.3.315
- Ilgen MA, Bohnert AS, Ignacio RV, McCarthy JF, Valenstein MM, Kim HM, & Blow FC (2010). Psychiatric diagnoses and risk of suicide in veterans. Archives of General Psychiatry, 67(11), 1152–1158. 10.1001/archgenpsychiatry.2010.129
- IntHout J, Ioannidis JP, Borm GF, & Goeman JJ (2015). Small studies are more heterogeneous than large ones: A meta-meta-analysis. Journal of Clinical Epidemiology, 68(8), 860–869. 10.1016/j.jclinepi.2015.03.017
- JASP Team. (2021). JASP (Version 0.16) [Computer software]. https://jasp-stats.org/
- Kruschke JK (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan (2nd ed.). Academic Press.
- Louzon SA, Bossarte R, McCarthy JF, & Katz IR (2016). Does suicidal ideation as measured by the PHQ-9 predict suicide among VA patients? Psychiatric Services, 67(5), 517–522. 10.1176/appi.ps.201500149
- Lunn D, Spiegelhalter DJ, Thomas A, & Best N (2009). The BUGS project: Evolution, critique and future directions. Statistics in Medicine, 28, 3049–3067. 10.1002/sim.3680
- Maier M, Bartoš F, & Wagenmakers E (2020). Robust Bayesian meta-analysis: Addressing publication bias with model-averaging. PsyArXiv.
- Moitra M, Santomauro D, Degenhardt L, Collins PY, Whiteford H, Vos T, & Ferrari A (2021). Estimating the risk of suicide associated with mental disorders: A systematic review and meta-regression analysis. Journal of Psychiatric Research, 137, 242–249. 10.1016/j.jpsychires.2021.02.053
- Morris WK, Vesk PA, McCarthy MA, Bunyavejchewin S, & Baker PJ (2015). The neglected tool in the Bayesian ecologist’s shed: A case study testing informative priors’ effect on model accuracy. Ecology and Evolution, 5(1), 102–108. 10.1002/ece3.1346
- Nicenboim B, Schad D, & Vasishth S (2021). An introduction to Bayesian data analysis for cognitive science. Retrieved August 23, 2021, from https://vasishth.github.io/bayescogsci/book/index.html
- Ord AS, Ripley JS, Hook J, & Erspamer T (2016). Teaching statistics in APA-accredited doctoral programs in clinical and counseling psychology: A syllabi review. Teaching of Psychology, 43(3), 221–226. 10.1177/0098628316649478
- Plummer M (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. 3rd International Workshop on Distributed Statistical Computing (DSC 2003), Vienna, Austria, 124.
- Polson NG, & Scott JG (2012). On the half-Cauchy prior for a global scale parameter. Bayesian Analysis, 7(4), 887–902.
- R Core Team. (2021). R: A language and environment for statistical computing [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org/
- Röver C, Bender R, Dias S, Schmid CH, Schmidli H, Sturtz S, Weber S, & Friede T (2021). On weakly informative prior distributions for the heterogeneity parameter in Bayesian random-effects meta-analysis. Research Synthesis Methods, 12(4), 448–474. 10.1002/jrsm.1475
- Seide SE, Röver C, & Friede T (2019). Likelihood-based random-effects meta-analysis with few studies: Empirical and simulation studies. BMC Medical Research Methodology, 19(1), 16. 10.1186/s12874-018-0618-3
- Shen YC, Cunha JM, & Williams TV (2016). Time-varying associations of suicide with deployments, mental health conditions, and stressful life events among current and former US military personnel: A retrospective multivariate analysis. Lancet Psychiatry, 3(11), 1039–1048. 10.1016/s2215-0366(16)30304-2
- Spiegelhalter DJ (2004). Bayesian approaches to clinical trials and health care evaluation. Wiley.
- Stan Development Team. (2021). RStan: The R interface to Stan (Version 2.21.3) [Computer software]. https://mc-stan.org/
- Stan Development Team. (2022). Stan modeling language users guide and reference manual (Version 2.28) [Computer software]. https://mc-stan.org
- Thompson CG, & Semma B (2020). An alternative approach to frequentist meta-analysis: A demonstration of Bayesian meta-analysis in adolescent development research. Journal of Adolescence, 82, 86–102. 10.1016/j.adolescence.2020.05.001
- Tortella-Feliu M, Fullana MA, Pérez-Vigil A, Torres X, Chamorro J, Littarelli SA, Solanes A, Ramella-Cravaro V, Vilar A, González-Parra JA, Andero R, Reichenberg A, Mataix-Cols D, Vieta E, Fusar-Poli P, Ioannidis JPA, Stein MB, Radua J, & Fernández de la Cruz L (2019). Risk factors for posttraumatic stress disorder: An umbrella review of systematic reviews and meta-analyses. Neuroscience and Biobehavioral Reviews, 107, 154–165. 10.1016/j.neubiorev.2019.09.013
- van de Schoot R, Winter SD, Ryan O, Zondervan-Zwijnenburg M, & Depaoli S (2017). A systematic review of Bayesian articles in psychology: The last 25 years. Psychological Methods, 22(2), 217–239. 10.1037/met0000100
- Van Dongen S (2006). Prior specification in Bayesian statistics: Three cautionary tales. Journal of Theoretical Biology, 242(1), 90–100. 10.1016/j.jtbi.2006.02.002
- Vehtari A, Gelman A, & Gabry J (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432. 10.1007/s11222-016-9696-4
- Vehtari A, Gelman A, Simpson D, Carpenter B, & Bürkner P (2021). Rank-normalization, folding, and localization: An improved R̂ for assessing convergence of MCMC. arXiv.
- Viechtbauer W (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. 10.18637/jss.v036.i03
- von Hippel PT (2015). The heterogeneity statistic I² can be biased in small meta-analyses. BMC Medical Research Methodology, 15, 35. 10.1186/s12874-015-0024-z
- Williams DR, Rast P, & Bürkner P (2018). Bayesian meta-analysis with weakly informative prior distributions. PsyArXiv.
- Yalch MM (2016). Applying Bayesian statistics to the study of psychological trauma: A suggestion for future research. Psychological Trauma: Theory, Research, Practice, and Policy, 8(2), 249–257. 10.1037/tra0000096