AEM Education and Training. 2024 Feb 8;8(1):e10929. doi: 10.1002/aet2.10929

Best practices for reporting survey‐based research

Jeffery Hill 1, Jonathan Chuko 1, Kathleen Ogle 2, Michael Gottlieb 3, Sally A Santen 4,5, Anthony R Artino Jr 2
PMCID: PMC10950005  PMID: 38504803

In our previous papers, we reviewed the best practices for the development and distribution of surveys and summarized the process for collecting validity evidence to support the use of one's survey results. 1 , 2 The final challenge for authors of survey‐based research is to succinctly, yet completely, document their approach to survey design and administration and to report their results. In Figure 1, we seek to highlight the best practices for reporting survey‐based research.

FIGURE 1 Best practices for reporting survey‐based research.

Several published guidelines offer authors a roadmap to inform their approach to reporting. It is important to consider that these guidelines pertain to research using surveys to assess outcome measures and not solely the reporting of the de novo development of a survey instrument. Sharma and colleagues 3 recently published a consensus‐based checklist for reporting of survey studies (CROSS). The CROSS checklist contains 40 items, derived from a modified Delphi process, that offer specific, granular, and comprehensive recommendations; it is a good starting point for authors reporting survey‐based research. Artino and colleagues 4 also published a reporting checklist for survey‐based research articles submitted to Academic Medicine. Their checklist differs from the CROSS guidelines in that it emphasizes that authors should integrate a framework‐based validity argument throughout their manuscript. The Standards for Educational and Psychological Testing (i.e., the Standards) have adopted and adapted Messick's unified validity framework, defining five sources of validity evidence: content, response process, internal structure, relations to other variables, and consequences of testing. 5 , 6 Content validity refers to the appropriateness of survey content in light of the construct being measured. Response process refers to the cognitive and psychological processes survey takers engage in while completing the survey. The internal structure of the survey is evaluated through statistical analysis (e.g., reliability and factor analyses) of the relationships between the survey items and the underlying constructs. Evaluating expected relationships between survey variables and other externally measured variables provides additional evidence for the validity of one's survey results and inferences. Consequences of testing, as outlined by Messick and the Standards, refer to the positive or negative, intended or unintended effects of survey use.
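The internal‐structure evidence described above is most often summarized with a reliability coefficient such as Cronbach's alpha. As a purely illustrative sketch (the function name and the respondent data below are hypothetical, not drawn from this article), alpha can be computed directly from a respondents‐by‐items score matrix using its standard formula:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents answering 3 Likert-type items
scores = np.array([
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 4],
    [2, 2, 3],
    [4, 4, 5],
])
print(round(cronbach_alpha(scores), 3))  # → 0.897
```

A value this high would suggest the three items cohere as a single scale; in a real study, this statistic would be reported alongside factor‐analytic results in the results section, as the article recommends.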

Readers should also note two caveats for the recommendations outlined in Figure 1. First, it is important to tailor your reporting to the stakes of the study being reported, to the requirements of the journal, and to the rigor required to publish that work. In other words, not all of the recommendations listed will be applicable to every survey study. For example, a survey assessing the implementation of a new, local educational curriculum on a small group of learners would not necessarily employ focus groups or cognitive interviews but would still benefit from a clear description of survey constructs and adherence to best practices for survey item writing. Second, readers should note that some of the reporting recommendations may be reasonably placed in different sections of the paper than those outlined here. It is more important that reporting be complete than that any one reporting recommendation be placed in a particular location within the manuscript.

In the introduction, it is recommended that authors delineate why a survey was used and define the constructs that the survey aims to measure. The methods section should fully describe the development of the survey, including any adaptation from previously published instruments. Authors should also summarize the process for writing and revising survey items, including any expert reviews, pretesting (including cognitive interviewing), and/or pilot testing. The methods section should also describe the means of survey administration, the target population, and follow‐up practices, and it should outline any planned statistical analyses. In addition, a copy of the full survey instrument should be provided in the final manuscript or, if space is limited, in an online supplement. Furthermore, authors should use the results section as an opportunity to summarize the collected validity evidence by reporting measures of internal structure (e.g., Cronbach's alpha, factor analysis) and the relationships of survey variables to conceptually related constructs. Authors may also refer back to the development processes outlined in the introduction and methods sections to support the survey's content and respondents' response processes. Finally, in the discussion section, authors should provide an overall interpretation of the study, describe any limitations, and consider the potential positive or negative consequences of survey administration.


Supervising Editor: Margaret Wolff

REFERENCES

  • 1. Hill J, Ogle K, Santen SA, Gottlieb M, Artino AR. Educator's blueprint: a how‐to guide for survey design. AEM Educ Train. 2022;6(4):e10796. doi: 10.1002/aet2.10796
  • 2. Hill J, Ogle K, Gottlieb M, Santen SA, Artino AR. Educator's blueprint: a how‐to guide for collecting validity evidence in survey‐based research. AEM Educ Train. 2022;6(6):e10835. doi: 10.1002/aet2.10835
  • 3. Sharma A, Duc NTM, Thang TLL, et al. A consensus‐based checklist for reporting of survey studies (CROSS). J Gen Intern Med. 2021;36(10):3179‐3187. doi: 10.1007/s11606-021-06737-1
  • 4. Artino AR, Durning SJ, Sklar DP. Guidelines for reporting survey‐based research submitted to Academic Medicine. Acad Med. 2018;93(3):337‐340. doi: 10.1097/acm.0000000000002094
  • 5. Messick S. Validity. In: Linn RL, ed. Educational Measurement. 3rd ed. American Council on Education and Macmillan; 1989.
  • 6. American Educational Research Association (AERA), American Psychological Association (APA), National Council on Measurement in Education (NCME). Standards for Educational and Psychological Testing. American Educational Research Association; 2014.

