Table 1. Glossary of terms
| Term | Definition |
|---|---|
| Acceptance rate | The fraction of proposed samples from the sampler that are accepted |
| Autocorrelation | Correlation between sequential samples from the Markov chain; high autocorrelation means the chain is likely traversing the sampling space slowly, often because model parameters are highly correlated with one another |
| Autocorrelation plot | A plot showing pairwise correlation between MCMC iterations (y-axis) for different lags between iterations (x-axis); can be an indicator of poor sampling efficiency |
| Bayes’ theorem | A formal statistical rule based on conditional probability, used to combine prior beliefs with the observed data to quantify uncertainty about parameters of interest |
| Burn-in | A defined number of initial MCMC iterations discarded before creating diagnostic plots and summaries of the posterior distributions, in order to minimize the effect of the starting values on posterior inference |
| Chain | A series of random values from the range of the parameter’s distribution drawn by the MCMC sampler; in MCMC, it is common to call the simulation, or the sampling, a “chain” as shorthand because it is theoretically from a Markov chain |
| Convergence | The Markov chain has reached the stationary (i.e., target) distribution |
| Credible interval (CrI) | The interval estimate for the parameter of interest with a measurable probability (e.g., equal-tailed or highest posterior density (HPD)); because parameters are treated as random in the Bayesian framework, the credible interval can be interpreted as a probability range for the parameter |
| Density plot | A plot of the parameter’s posterior distribution, typically displayed as a smoothed histogram |
| Error variance | Variance of the normally distributed error term in a linear regression model (also called residual error, residual variance) |
| Frequentist | Classical approach to statistical inference in which the unknown parameters are treated as fixed quantities |
| Informative prior | A prior distribution that may impact the posterior distribution relative to the likelihood; a prior that is not easily dominated by the likelihood function, e.g., optimistic or skeptical priors determined by a subject-matter expert or previous literature |
| Likelihood | A statistical model that describes the distribution of the observed data and is then used to update beliefs about the parameters when combined with the prior distributions |
| Markov Chain Monte Carlo (MCMC) algorithm | A set of algorithms for simulating, or randomly sampling, from a distribution even when the actual distribution cannot be mathematically derived |
| Mixing | In relation to MCMC, describes how the series, or chain, of random moves explores the parameter’s range of values; relates to convergence of the MCMC |
| Non-informative prior | See vague prior |
| Posterior distribution | The distribution of the parameters conditioned on the trial data (i.e., observed data), which expresses the uncertainty in the parameters after observing the data; the updated beliefs about the parameters after the prior and the likelihood are combined using Bayes’ Theorem |
| Prior distribution | The current beliefs about a parameter summarized as a probability distribution |
| “Pseudo” vague prior | A prior that was initially thought to be non-informative but subsequently determined to substantially impact the posterior distribution and is, therefore, not truly vague |
| Sampler (or sampling algorithm) | An algorithm or sampling method employed to obtain random samples from the target distribution; see Markov Chain Monte Carlo (MCMC) algorithm |
| Starting value | The initial value at which the MCMC sampler begins the series of sampling draws; the value can be, for example, a mean and a variance |
| Trace plot | A plot showing the value of the parameter (y-axis) at each MCMC iteration (x-axis); an ideal plot will show convergence, with the parameter oscillating around the mode of the distribution |
| Vague prior | A prior distribution that will have minimal impact on the posterior distribution relative to the likelihood function, e.g., a flat distribution relative to the likelihood |
| Wandering | See mixing |
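To make several of the terms above concrete, the minimal sketch below runs a random-walk Metropolis sampler (one simple MCMC algorithm) against an assumed standard normal target and reports the acceptance rate, a post-burn-in posterior summary, an equal-tailed credible interval, and the lag-1 autocorrelation. The choice of target distribution, starting value, and all settings are illustrative assumptions for this sketch only and are not drawn from any particular analysis.

```python
# Illustrative sketch only: a random-walk Metropolis sampler targeting an
# assumed standard normal posterior; all settings below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(2024)

def log_target(theta):
    """Log density (up to a constant) of the assumed target posterior, N(0, 1)."""
    return -0.5 * theta**2

n_iter, burn_in, step = 10_000, 1_000, 1.0
chain = np.empty(n_iter)      # the "chain" of sampled values
chain[0] = 5.0                # starting value, deliberately far from the mode
accepted = 0

for i in range(1, n_iter):
    proposal = chain[i - 1] + rng.normal(0.0, step)    # propose a random move
    log_ratio = log_target(proposal) - log_target(chain[i - 1])
    if np.log(rng.uniform()) < log_ratio:              # Metropolis accept/reject step
        chain[i] = proposal
        accepted += 1
    else:
        chain[i] = chain[i - 1]

post = chain[burn_in:]        # discard burn-in before summarizing the posterior

print(f"acceptance rate:       {accepted / (n_iter - 1):.2f}")
print(f"posterior mean:        {post.mean():.3f}")
lo, hi = np.percentile(post, [2.5, 97.5])              # equal-tailed 95% credible interval
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
lag1 = np.corrcoef(post[:-1], post[1:])[0, 1]          # lag-1 autocorrelation of the chain
print(f"lag-1 autocorrelation: {lag1:.2f}")
```

In a trace plot of this chain, the starting value far from the mode would appear as an initial wandering period before the sampled values settle into oscillating around the mode, which is one reason the burn-in iterations are discarded before summarizing the posterior.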