Table 2.
Scores from round 1 of the Delphi study
| Domain | Item | Median score | % scoring 6–7 | % scoring 1–2 | % in top 10 |
|---|---|---|---|---|---|
| Items relating to purpose and rationale | State the decision-maker perspective/s examined in the study | 6 | 56% | 0% | 27% |
| | Provide a rationale for using a DCE in the study | 6 | 67% | 2% | 38% |
| Attributes and levels | Describe how attributes and levels were derived (e.g. systematic review, interviews, focus groups, expert input) | 7 | 84% | 0% | 49% |
| | Describe the process of iterative testing and refining of attributes and levels, including language | 5 | 38% | 0% | 4% |
| | Report attributes that were considered and excluded | 5 | 33% | 7% | 4% |
| | Final list of attributes and levels | 7 | 100% | 0% | 91% |
| | State the payment vehicle, if price included as an attribute | 6 | 67% | 2% | 13% |
| | Describe how presentation of risk attribute/s was decided, if included | 6 | 51% | 4% | 2% |
| Experimental design | Indicate the number of alternatives per choice set | 7 | 93% | 0% | 27% |
| | Describe response options (e.g. forced choice, opt-out, status quo), with justification | 7 | 99% | 0% | 42% |
| | Report whether alternatives were labelled or unlabelled | 7 | 91% | 0% | 11% |
| | Report whether full or partial profile design | 6 | 76% | 2% | 2% |
| | Describe the type of experimental design (e.g. full factorial, orthogonal, D-efficient, Bayesian efficient) | 7 | 87% | 0% | 42% |
| | Describe which effects are identified (main effects, higher order interactions, functional form) | 7 | 82% | 2% | 16% |
| | Report the design properties, for example, D-efficiency, level balance, orthogonality | 6 | 64% | 4% | 11% |
| | Report whether identification was checked (e.g. whether variance–covariance matrix block diagonal) | 5 | 31% | 9% | 0% |
| | Report whether design was blocked, and if so, how choice sets were allocated to blocks and whether properties of blocks were checked | 6 | 71% | 0% | 2% |
| | Describe the number of choice sets, number of blocks, number of choice sets per block | 7 | 91% | 0% | 36% |
| | Report whether some potential profiles were implausible and how this was addressed | 6 | 56% | 0% | 2% |
| | Report whether and how any a priori knowledge of signs and/or true parameters was used in the design | 5 | 49% | 2% | 0% |
| | Indicate how the design was obtained (software, catalogue, other) | 6 | 78% | 0% | 16% |
| Survey design and piloting | Report how respondents were allocated to blocks, if applicable | 6 | 62% | 2% | 2% |
| | Report other randomisation if used (e.g. choice task order, attribute order, alternative order, framing effects) | 6 | 80% | 0% | 4% |
| | Provide the information, instructions, and questions seen in the survey (e.g. survey as an appendix) | 7 | 80% | 0% | 47% |
| | Describe the medium used to communicate attribute/level information (e.g. words, pictures, multimedia) | 6 | 71% | 0% | 2% |
| | Describe visual implementation (colours, animation, text entry, drop-down menus, unique answering, scrolling design, etc.) | 5 | 44% | 4% | 4% |
| | Report pilot sample description and sample size | 5 | 47% | 7% | 11% |
| | Describe what was checked in piloting (e.g. understanding, respondent burden, timing, wording) | 6 | 69% | 0% | 11% |
| | Report whether information from the pilot was used to update the experimental design (e.g. priors) and/or survey design | 6 | 69% | 0% | 7% |
| Sample and data collection | Report inclusion/exclusion criteria | 7 | 89% | 2% | 40% |
| | Describe any use of quotas to ensure representativeness | 6 | 71% | 0% | 2% |
| | Indicate the recruitment method (e.g. advertisement, invitation format, reminders) | 6 | 71% | 0% | 9% |
| | Describe how the target sample size was determined | 6 | 58% | 2% | 9% |
| | Describe how data were collected (e.g. mail, personal interview, web survey) | 7 | 93% | 0% | 36% |
| | Report the response rate | 6 | 62% | 9% | 20% |
| | Describe any incentives or remuneration for respondents | 6 | 56% | 0% | 2% |
| | If online, describe any methods used to avoid fraudulent responses (e.g. bots) | 6 | 67% | 0% | 0% |
| | Report the final sample size | 7 | 96% | 0% | 49% |
| | Describe respondent characteristics and representativeness of target population | 7 | 84% | 2% | 40% |
| Econometric analysis | Indicate coding of data (effects/dummy/continuous) | 7 | 87% | 0% | 16% |
| | Describe handling of missing data in choice tasks and/or other variables | 7 | 78% | 0% | 7% |
| | Report whether any responses were removed and why | 7 | 93% | 0% | 27% |
| | Provide the rationale for model choice (e.g. conditional logit, mixed logit, GMNL, latent class, etc.) and assumptions (e.g. error variance) | 7 | 87% | 0% | 27% |
| | Report model specification | 7 | 89% | 0% | 42% |
| Reporting of results | Report the model performance, goodness of fit | 6 | 71% | 0% | 20% |
| | Describe methods used for analysis of model results (e.g. calculation of marginal willingness to pay, attribute relative importance, welfare gain) | 7 | 91% | 0% | 51% |
| | Report the output/s of interest compared across a range of model specifications | 5 | 44% | 7% | 13% |
| | Report measures of precision for the output/s of interest (e.g. confidence intervals) and how these were derived | 7 | 76% | 0% | 22% |
Bolded cells indicate that the item met the criteria for inclusion in round 2. Criterion 1: the item was scored 6–7 by 50% or more of participants and 1–2 by less than 15% of participants; criterion 2: the item was included in the top ten priority items by 15% or more of participants. There was no criterion relating to the median score.
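The inclusion rule in the footnote can be sketched in code. This illustrative Python helper (not part of the study) evaluates each criterion against a row of the table; because the footnote does not state how the two criteria combine, they are returned separately:

```python
def round2_criteria(pct_6_7: float, pct_1_2: float, pct_top10: float):
    """Evaluate the two round-2 inclusion criteria for one item.

    Criterion 1: scored 6-7 by 50% or more of participants AND
                 scored 1-2 by less than 15% of participants.
    Criterion 2: placed in the top ten priority items by 15% or
                 more of participants.
    Returns (criterion_1, criterion_2); the footnote does not
    specify whether meeting either or both is required.
    """
    criterion_1 = pct_6_7 >= 50 and pct_1_2 < 15
    criterion_2 = pct_top10 >= 15
    return criterion_1, criterion_2

# Example rows from the table:
print(round2_criteria(100, 0, 91))  # "Final list of attributes and levels" -> (True, True)
print(round2_criteria(31, 9, 0))    # "Report whether identification was checked" -> (False, False)
```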