American Journal of Speech-Language Pathology
. 2024 Aug 16;33(5):2130–2156. doi: 10.1044/2024_AJSLP-23-00368

A Proposed Framework for Rigor and Transparency in Dysphagia Research: Prologue

Nicole Rogus-Pulia,a,b Rebecca Affoo,c Ashwini Namasivayam-MacDonald,d Brandon Noad,e Catriona M. Steele,f,g,h on behalf of the FRONTIERS Collaborative
PMCID: PMC11427740  PMID: 39151061

Abstract

Purpose:

Scientific transparency and rigor are essential for the successful translation of knowledge in clinical research. However, the field of oropharyngeal dysphagia research lacks guidelines for methodological design and reporting, hindering accurate interpretation and replication. This article introduces the Framework for RigOr aNd Transparency In REseaRch on Swallowing (FRONTIERS), a new critical appraisal tool intended to support optimal study design and results reporting. The purpose of introducing FRONTIERS at this early phase is to invite pilot use of the tool and open commentary.

Methods:

FRONTIERS was developed by collaborating researchers and trainees from six international dysphagia research labs. Eight domains were identified, related to study design, swallowing assessment methods, and oropharyngeal dysphagia intervention reporting. Small groups generated questions capturing rigor and transparency for each domain, based on examples from the literature. An iterative consensus process informed the refinement and organization of primary and subquestions, culminating in the current initial version of FRONTIERS.

Results:

FRONTIERS is a novel tool, intended for use by oropharyngeal dysphagia researchers and research consumers across disciplines. A web application enables provisional use of the tool, and an accompanying survey solicits feedback regarding the framework.

Conclusion:

FRONTIERS seeks to foster rigor and transparency in the design and reporting of oropharyngeal dysphagia research. We encourage provisional use and invite user feedback. A future expert consensus review is planned to incorporate feedback. By promoting scientific rigor and transparency, we hope that FRONTIERS will support evidence-based practice and contribute to improved health outcomes for individuals with oropharyngeal dysphagia.


According to the National Institutes of Health (NIH), scientific transparency and rigor in conducting clinical research are key to the successful translation of knowledge for the improvement of health outcomes (National Institutes of Health Central Resource for Grants and Funding Information, n.d.). In addition to significant research questions and robust study design, investigators must communicate methods, procedures, and outcomes with adequate clarity and detail to support reproducibility and external validation (transparency). This is also essential to enable assessment of study quality and potential biases. Inability to replicate research findings may arise from errors in experimental design and statistical approaches and/or incomplete descriptions of methods (Prager et al., 2019). In the United States alone, the costs of these nonreplicable studies are substantial, estimated at $28 billion annually. Accurate and complete reporting of all study aspects (design, procedures, and approaches to analysis) facilitates reproducibility and sustainability of research findings (Prager et al., 2019). The importance of scientific rigor and reproducibility is apparent in published NIH guidelines that outline four key areas of rigor (rigor of prior research, rigor of proposed research, biological variables, and authentication of key biological and/or chemical resources) that must be addressed in all applications and will be considered by grant reviewers.

The field of oropharyngeal dysphagia research requires detailed and specific reporting on methodologic design to ensure accurate interpretation, appropriate clinical application, and feasibility of replication. While other fields have established guidelines for best practices in research design and reporting (Prager et al., 2019), no such resource exists for clinicians and researchers across multiple disciplines with a focus on oropharyngeal dysphagia. Frameworks like the Consolidated Standards of Reporting Trials (CONSORT; Schulz et al., 2010) provide generally applicable guidance on clinical trial design and reporting but are not tailored to individual fields of study or to other study designs (Altman & Simera, 2016). The purpose of this American Journal of Speech-Language Pathology (AJSLP) forum is to introduce the Framework for RigOr aNd Transparency In REseaRch on Swallowing (FRONTIERS), a critical appraisal tool in the form of a checklist, intended to support oropharyngeal dysphagia researchers in optimizing study design and results reporting. The FRONTIERS tool can also be used by research consumers to appraise the rigor and transparency of research and consider its applicability for guiding clinical practice decisions. This Framework was originally presented at several national meetings under the title STARTED (STAndards for Rigor and TransparEncy in Dysphagia Research). However, given that the intention of this work was to develop an initial version of a critical appraisal tool, and not to create a specific set of standards to be adopted by the field, the name has since been updated to FRONTIERS.

FRONTIERS grew out of a virtual journal club, formed at the beginning of the COVID-19 pandemic across six international dysphagia research labs, three located in Canada (Toronto, Ontario; Hamilton, Ontario; and Halifax, Nova Scotia) and three in the United States (in New York City, New York; Madison, Wisconsin; and Gainesville, Florida). During the process of discussing several recent systematic reviews in the oropharyngeal dysphagia literature, the lack of established tools suitable for appraising the quality of methods and standardized reporting in studies of oropharyngeal dysphagia quickly became apparent. The group noted that existing tools for evaluating study quality or risk of bias did not address issues specific to swallowing and that several authors were developing their own bespoke, novel tools or checklists for this purpose (for example, Bahia & Lowell, 2020; Gandhi & Steele, 2022; Mancopes et al., 2020).

Examples of inadequate reporting specific to oropharyngeal dysphagia–focused studies include, but are not limited to, inconsistent operational definitions of “dysphagia” and specific parameters regarding swallowing (e.g., as discussed in Molfenter & Steele, 2012; Namasivayam-MacDonald et al., 2018), a lack of detail in descriptions of swallowing evaluation methods such as videofluoroscopy (e.g., as discussed in Gandhi & Steele, 2022), and the application of inappropriate statistical methods when analyzing swallowing measures (e.g., as discussed in Borders & Brates, 2020; Steele & Grace-Martin, 2017). Each of these issues has the potential to lead to erroneous conclusions. Without adequate transparency and rigor in reporting of studies focused on oropharyngeal dysphagia, the highest level of evidence in the form of meta-analyses cannot be achieved. This also creates challenges for journal editors and reviewers attempting to assess the quality of manuscripts reporting research findings related to oropharyngeal dysphagia. Therefore, a comprehensive framework is needed, outlining desirable elements for rigor and transparency, both in the conduct and reporting of oropharyngeal dysphagia research. FRONTIERS aims to be a first step toward addressing this gap. Our intention is for FRONTIERS to provide guidance at various stages of research, including study design, manuscript or grant reviews, or appraisal of existing evidence by clinicians prior to implementation into practice. In this prologue, we describe the steps involved in developing the initial version of FRONTIERS. This prologue introduces this AJSLP forum, a collection of articles including: (a) a review of guidelines housed at the EQUATOR Network (https://www.equator-network.org) and their applicability to oropharyngeal dysphagia research, (b) detailed descriptions of each section of FRONTIERS, and (c) an illustration of the use of FRONTIERS to review a dysphagia research article. 
The purpose of introducing FRONTIERS at this early phase of development is to invite pilot use of the tool by the AJSLP readership and to solicit open commentary/feedback. This forum is intended to be read as a whole; reading an isolated article may raise questions that are addressed elsewhere in the forum.

Method

The FRONTIERS Collaborative was initiated in 2020, including four principal investigators (PIs), two close collaborators, and 11 trainees across six labs. As mentioned above, the process of discussing several systematic review articles from the oropharyngeal dysphagia literature led to the idea to develop a critical appraisal tool in the form of a checklist for optimizing rigor and transparency in oropharyngeal dysphagia research.

Our process began with the identification of eight main domains related to oropharyngeal dysphagia study design, swallowing assessment methods, and the reporting of interventions for oropharyngeal dysphagia (see Figure 1). Small groups (henceforth “teams”), each comprising two to three trainees from the FRONTIERS Collaborative, were assigned one of these eight domains. The authors of this prologue—that is, the principal investigators from four of the collaborating laboratories—formed a steering committee and provided mentorship to the trainee teams. Each team was charged with developing an initial set of questions intended to capture the rigor and transparency of reporting on that domain in research papers on oropharyngeal dysphagia (see Appendix A). Each team simultaneously searched for articles relevant to their domain from the literature to guide question development. Search terms were proposed by each team and vetted by the steering group mentors. The resulting searches were not systematic reviews; rather, they were intended to find examples illustrating both adequate and less-than-adequate reporting for a given study element, to provide the rationale for including that element in the framework.

Figure 1.

A flow diagram of the FRONTIERS tool structure. The steps are as follows. 1. Setup assistant: up to 7 study configuration prompts to determine which domains to include. 2. Up to 8 domains. 3. The domains are: Universally Applicable Questions; Participant Characteristics; Screening and Clinical or Non-Instrumental Assessments of Swallowing; Videofluoroscopic Examinations of Swallowing (VFSS); Flexible Endoscopic Examinations of Swallowing (FEES); Other Instrumental Assessments of Swallowing; Treatment of Dysphagia; and Patient or Care-Provider Reported Outcome Measures (PROMs). The Other Instrumental Assessments of Swallowing consist of: Tongue Pressure, High Resolution Pharyngeal Manometry (HRPM), Peak Cough Flow or Spirometry, Nasal Cannula Airflow or Respiratory Inductance Plethysmography, Surface Electromyography (sEMG), Cough Reflex Testing, Ultrasound, Neuroimaging (CT, MRI, fMRI, MEG), and Other. 4. The 8 domains lead to over 250 questions across 8 domains and 9 subdomains.

Eight main domains related to oropharyngeal dysphagia study design, swallowing assessment methods, and the reporting of interventions for oropharyngeal dysphagia.

As the set of questions for each of the eight main sections was built, items were also organized into a hierarchy of primary questions and subquestions. For example, if an item asked whether instrumental assessment was used in a study and the answer was “yes,” this would be followed by a subquestion regarding the specific type of instrumental assessment used. That response would then be followed by additional prompts related to the specific technique in question. After the initial brainstorming phase, the steering committee mentors worked with each team to review and refine the initial set of questions. These candidate question sets were subsequently circulated to the entire FRONTIERS Collaborative for consideration. A consensus survey method with voting was used to determine which questions should be carried forward into the FRONTIERS tool and to collect suggestions regarding wording. Through this process, opportunities for item reduction and question clustering were identified. Each question within a domain was reviewed by the whole group for domain alignment and redundancy within and across domains.
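The branching organization described above — a primary question whose answer determines which subquestions follow — is essentially a small decision tree. The sketch below is purely illustrative and is not the FRONTIERS implementation; the class, function, and question names are assumptions introduced for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One checklist item; follow_ups maps a trigger answer to its subquestions."""
    text: str
    follow_ups: dict = field(default_factory=dict)

def ask(question, answers):
    """Walk the question tree using a preset dict of answers.

    Returns the (question, answer) pairs actually reached, so a
    subquestion only appears when its trigger answer was given.
    """
    answer = answers.get(question.text, "No")
    asked = [(question.text, answer)]
    for follow_up in question.follow_ups.get(answer, []):
        asked.extend(ask(follow_up, answers))
    return asked

# Illustrative items modeled on the example in the text above.
instrument_type = Question("Which instrumental assessment was used?")
instrumental = Question(
    "Was instrumental assessment used in the study?",
    follow_ups={"Yes": [instrument_type]},
)

# Answering "Yes" reaches the subquestion; answering "No" skips it.
print(ask(instrumental, {"Was instrumental assessment used in the study?": "Yes"}))
```

Representing each item this way keeps the hierarchy data-driven: adding a technique-specific prompt means attaching another `Question` to a `follow_ups` entry rather than changing the traversal logic.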

In the next stage of development, each team incorporated feedback from the survey and piloted use of the questions in their section to evaluate several articles. Questions were further iteratively reviewed and revised until consensus regarding inclusion and wording was reached across the FRONTIERS Collaborative. Questions that overlapped across sections were pulled into a “universally applicable questions” section to avoid redundancy (see Appendix B).

Software Application

Alongside the development of FRONTIERS, the team worked closely with a software developer to build an interactive web application to guide and appraise the design and reporting of oropharyngeal dysphagia research. Our goal was to ensure that clinicians and researchers could easily access and interact with the FRONTIERS tool. The software developer designed an initial prototype that went through a series of iterations based on team feedback, primarily focused on the logic and user flow. When the team was satisfied with the prototype, the web application was released (https://www.FrontiersFramework.com/).

Conclusion

The development and release of the initial version of FRONTIERS marks an important milestone in the field of oropharyngeal dysphagia research. This critical appraisal tool, in the form of a checklist, has been designed to address the lack of established tools for evaluating the quality of methods and reporting in oropharyngeal dysphagia studies. The current version of FRONTIERS and the associated web application are considered Version 1.0 of the tool. We acknowledge that this initial version is based on consensus across a small group of labs and recognize the need for commentary and further refinement based on feedback from multidisciplinary experts across the field of oropharyngeal dysphagia research. We plan for this version to be provisionally used over the next two years, with an invitation to provide feedback through a survey tool at https://uwmadison.co1.qualtrics.com/jfe/form/SV_9SH5Q0b7ZpfTH6u. Beginning in January 2026, we plan to conduct a full review of feedback received to determine the need for edits or updates. Additionally, we plan on enlisting feedback from a multidisciplinary international expert panel of researchers through a Delphi process. We anticipate that this process will lead both to refinement of the tool and to the potential development of a consensus-based oropharyngeal dysphagia research reporting guideline, according to the procedures outlined for guideline development at the EQUATOR Network (http://www.equator-network.org). The implementation and further refinement of FRONTIERS has the potential to improve the accuracy, interpretation, and clinical application of oropharyngeal dysphagia research findings. By promoting transparency and rigor in reporting, FRONTIERS will support the achievement of the highest level of evidence, enabling meta-analyses and evidence-based practice in the field.

In conclusion, FRONTIERS represents a valuable resource for dysphagia researchers and research consumers, serving as a first step toward filling the gap in recommendations regarding methodological design and reporting specific to the field. Its development highlights the importance of scientific rigor and transparency in research focused on oropharyngeal dysphagia, ultimately contributing to improved health outcomes for individuals with swallowing difficulties.

Data Availability Statement

Data sharing is not applicable to this article as no data sets were generated or analyzed related to this prologue.

Author Contributions

Nicole Rogus-Pulia: Conceptualization (Equal), Methodology (Equal), Writing – original draft (Lead), Writing – review & editing (Equal). Rebecca Affoo: Conceptualization (Equal), Methodology (Equal), Writing – review & editing (Equal). Ashwini Namasivayam-MacDonald: Conceptualization (Equal), Methodology (Equal), Writing – review & editing (Equal). Brandon Noad: Conceptualization (Supporting), Software (Lead), Writing – review & editing (Equal). Catriona M. Steele: Conceptualization (Equal), Methodology (Equal), Project administration (Lead), Visualization (Lead), Writing – review & editing (Equal).

Acknowledgments

Development of the Framework for RigOr aNd Transparency In REseaRch on Swallowing (FRONTIERS) and the preparation of this article were not directly supported by funding to any of the authors. The authors acknowledge salary support as follows: Nicole Rogus-Pulia: University of Wisconsin–Madison; William S. Middleton Veterans Hospital, Madison, WI; and National Institutes of Health (NIH) grant numbers: K76AG068590 and R01AG082170. Rebecca Affoo: Dalhousie University, Halifax, Nova Scotia, Canada. Ashwini Namasivayam-MacDonald: McMaster University, Hamilton, Ontario, Canada; Alzheimer's Society and Canadian Institutes of Health Research grant ASR-176674; NIH grant R01AG077481; and Canadian Frailty Network grant ECR2022-003. Brandon Noad: No salary source to disclose. Catriona M. Steele: KITE Research Institute – Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada; Canada Research Chairs Secretariat, Ottawa, Ontario, Canada; and NIH grant numbers: R01DC011020 and R01AG077481. The views and content expressed in this article are solely the responsibility of the authors and do not necessarily reflect the position, policy, or official views of the Department of Veterans Affairs, the U.S. government, or the NIH.

Appendix A

Original List of Generated Questions

Section Question
Enrollment Were the participant eligibility criteria reported?
Were details of participant recruitment reported?
Were participant screening procedures reported?
Participants Were the number of participants reported?
Was the participant attrition rate reported?
Are details regarding the demographic information of participants/patients reported, including age (mean, range, SD), sex, race, socioeconomic status, language(s) spoken?
Are details regarding patient/participant participation status reported, including ADL level, living status (inpatient, outpatient, independent, system living, etc.)?
For patient participants, are details of diagnosis reported, including diagnosis, time since diagnosis, medications, other medical history?
For studies with healthy participants, were the criteria for health reported and clearly defined?
Were participant and patient cognitive status reported, and if so, was the assessment method or screener tool reported (e.g., MMSE, MOCA)?
Was participant intake ability and diet tolerance reported, including IDDSI range/IDDSI FDS score; time on modified diet; diet modified determined by clinician, self, other?
Was the presence/absence and type of non-oral feeding reported?
Baseline Status For participants with dysphagia, were the criteria for determining presence of dysphagia reported?
Was the severity of dysphagia reported?
Was the method and reliability of determining dysphagia severity reported?
Was the dysphagia diagnosis determined by an SLP or physician?
Who assessed the dysphagia eligibility criteria?
Were baseline VFSS results reported?
For healthy participants, were the criteria for determining absence of dysphagia reported?
Was healthy participant medical history thoroughly reviewed in order to rule out any confounding conditions (e.g., history of difficulty swallowing, sensation deficits, radiation, chemotherapy, etc.)?
Methods Were the setting(s) and location(s) of data collection reported?
Were validated tools utilized?
Were tools applied to the appropriate population and question of interest based on prior validation studies?
Protocol Were standard tool(s)/method(s)/standard protocol(s) used (i.e., specific protocol, MBSImp, DIGEST, ASPEKT)?
Was more than one bolus tested?
Was more than one consistency tested?
Were the consistencies used sufficiently well described to be reproducible (any tests to confirm the viscosity or consistency, e.g., IDDSI)?
Were details regarding bolus volume reported?
Were details regarding method of bolus administration reported? (Spoon delivered? Straw? Cup-sized? Self-delivered? Cued/noncued?)
Were details regarding recording settings reported (specifically signal acquisition rate/frame rate)?
Was the equipment utilized and manufacturer specifications adequately described?
Was adequate information reported regarding the portion of the exam that was rated?
Were ratings made post-hoc from recorded signals? (as opposed to real-time)
VFSS Protocol If used, were details regarding name/brand/type of barium (or other contrast) reported?
If used, was the same concentration of barium used across trials?
If used, were details regarding barium (or other contrast) concentration reported?
FEES Were validated outcome scales or tools for baseline oropharyngeal secretions, safety, and efficiency status utilized?
Was bolus protocol outlined in sufficient detail?
Were deviations or enhancements (e.g., swallow maneuvers) from the protocol described?
Were other important factors for replication, such as environment, patient positioning, and use of lubricants/anesthetics, described?
Protocol Other Were any other instrumental assessments administered and reported?
Were validated self-report questionnaires administered and reported (e.g., EAT-10, FOIS, etc.)?
Were informal self-report questionnaires administered and reported?
Data Processing Were raters blinded to participant ID/group assignment?
Were standard definitions used (i.e., well-defined measures/parameters)?
Were raters blinded to timepoint/condition?
Was rater experience and competency for performance and rating adequately described?
Was the rating process including number of raters, blinding to timepoint/condition reported?
Were consensus and discrepancy strategies reported?
Was the rating process, including portion of exam rated and time of rating (i.e., real-time versus post hoc), reported?
Were interrater reliability statistics reported?
Were intrarater reliability statistics reported?
If PAS was used, were intra- and interrater reliability reported?
If PAS was used, were PAS results reported by bolus condition?
If PAS was used and summarized across a series of swallows, was worst score and/or mode score reported?
Analysis Were the statistics used appropriate for the type of measure (scale, ordinal, categorical)?
Was the handling of missing data or outliers adequately described?
Was a sample size calculation reported?
Were power calculations or effect size reported?
PROM Questionnaires Was a validated questionnaire used for the study?
Was the competency of the study population to complete the questionnaire described and addressed? (e.g., cognitive decline, illiteracy, language or culture barriers, etc.)
Was the method of completing the questionnaire described in the study? (e.g., telephone/mail/in person, self-administered vs. interviewer-administered, identity of interviewer [medical personnel or by proxy], blinding [of subject and/or assistant])
Was the questionnaire validated on the same population to which it was applied in the study?
Was the questionnaire applied under the same conditions as those in which it was validated? (e.g., method and timing of completing the questionnaire)
For nonvalidated questionnaires (nonvalidation study), is the full questionnaire provided in the article?
For nonvalidated questionnaires (nonvalidation study), is reliability data provided? (test–retest, internal homogeneity)
For nonvalidated questionnaires (nonvalidation study), is validation data provided? (content, criterion, construct validity)
For nonvalidated questionnaires (validation study), is the questionnaire development described in detail?
For nonvalidated questionnaires (validation study), is reliability data provided? (test–retest, internal consistency)
For nonvalidated questionnaires (validation study), is validation data provided? (content, criterion, construct validity)
For nonvalidated questionnaires (validation study), is responsiveness data provided?
For nonvalidated questionnaires (validation study), are floor/ceiling effects addressed?
For nonvalidated questionnaires (validation study), is clinical interpretability addressed? (e.g., minimal clinical difference)
For translation and cross-cultural adaptation, is the translation process described in detail?
For translation and cross-cultural adaptation, was the translation process performed according to accepted translation method? (e.g., International Society for Pharmacoeconomics and Outcome Research [ISPOR] Task Force for translation and cultural adaptation)
Outcomes Were adverse events reported?
Discussion Are study limitations acknowledged?
Ethics Is IRB/REB approval reported?
Are conflicts of interest appropriately disclosed?
Treatment Was the rationale for providing treatment reported?
Were primary and secondary outcomes identified prior to treatment?
Were physiologic characteristics and treatment targets for the participant group(s) adequately described?
Did treatment exercises target improvements to strength, endurance, and/or skill?
In what setting was the training provided (e.g., participant's home, inpatient rehabilitation facility, skilled nursing facility)?
In what setting was treatment delivered (e.g., participant's home, inpatient rehabilitation facility, skilled nursing facility)?
Were therapy sessions one-on-one or in groups? If in groups, were the number of participants per group described? If in groups, what was the ratio of clinicians and/or aides to participants?
Was the treatment regimen adequately described, including repetitions (# of repetitions per set), frequency (# of sets per session), intensity (# of sessions per day/week), duration (# of days/weeks that participant was enrolled in treatment), resistance load (if applicable), % of effort?
Were the methods for determining resistance load adequately described (i.e., average, maximum values, or duration across a specified number of trials)?
Was participant adherence to the clinician schedule and/or treatment plan measured?
Was an adequate description provided defining adherence to clinician schedule and/or treatment plan (e.g., % of visits completed, # of phone calls, etc.)?
Was a device utilized to facilitate treatment?
If a device was utilized, was it utilized per the manufacturer's guidelines?
If a device was not utilized per the manufacturer's guidelines, were adaptations adequately described?
If adaptations were made in use of a device, were these adaptations made across the entire cohort?
Did the authors include a description of instructions provided to the participant?
Did the authors describe the modality in which instructions were provided (i.e., verbal, written, both)?
What was the timing of any biofeedback offered (i.e., concurrent, following trial, following session)?
What type of biofeedback was offered (i.e., visual, auditory, tactile, combination)?
Did the participant receive additional/concurrent therapies during the course of treatment? If yes, were these therapies adequately described?
Was there a home exercise component prescribed as part of treatment?
When were the home exercises performed?
What sort of instructions and/or instructional tools were provided to support home exercise?
Was participant adherence to their home program measured?
Was adherence measured objectively (i.e., recorded through a device, application, or other program)?
Was adherence measured subjectively (i.e., recorded with a patient log)?
What level of expertise or experience did clinicians or the person administering treatment have?
Did the same trainer/clinician administer the treatment each session?
Were outcomes appropriately selected and described?
Treatment Study Design Were trainers blinded to treatment condition?
Was a comparison group utilized? If yes, was the comparison group healthy or disordered? Was there an expectation of change in the comparison group?
Were outcomes comprised of clinical assessment, patient report (subjective or through patient-reported outcomes tool), or via an outcome scale (e.g., IDDSI-FDS, FOIS, etc.)?
Were outcomes appropriately selected for the population studied?
Was an instrumental swallowing evaluation utilized to verify physiologic impairments prior to and following treatment?
Was a comparison of the same bolus size and consistency used at each time point?
Was the definition of successful treatment determined prior to or posttreatment?
Was the drop-out rate or number of dropped patients noted?

Note. ADL = activities of daily living; MMSE = Mini-Mental State Examination (Folstein et al., 1975; Cockrell & Folstein, 1988); MOCA = Montreal Cognitive Assessment (https://mocacognition.com/); IDDSI = International Dysphagia Diet Standardisation Initiative (https://www.iddsi.org); FDS = Functional Dysphagia Scale (Han et al., 2001); MBSImp = Modified Barium Swallow Impairment Profile (Martin-Harris et al., 2008; https://www.mbsimp.com); DIGEST = Dynamic Imaging Grade of Swallowing Toxicity (Hutcheson et al., 2017; Hutcheson et al., 2022); ASPEKT = Analysis of Swallowing Physiology: Events, Kinematics and Timing (Steele et al., 2023; https://steeleswallowinglab.ca/srrl/the-aspekt-method/); EAT-10 = Eating Assessment Tool (Belafsky et al., 2008); FOIS = Functional Oral Intake Scale (Crary et al., 2005); PAS = Penetration-Aspiration Scale (Rosenbek et al., 1996); IRB = Institutional Review Board; REB = Research Ethics Board.

Appendix B

Final List of FRONTIERS Questions

Framework for RigOr aNd Transparency In REseaRch on Swallowing

Study Configuration

  • Is the study performed on human subjects investigating normal or disordered swallowing?

    • Response Options: Yes, No

    • If “Yes,” then:

      • Did the study utilize a non-instrumental swallowing assessment or screening tool?

        • Response Options: Yes, No

      • Did the study utilize instrumentation?

        • Response Options: Yes, No

        • If “Yes,” then:

          • Select the types of instrumentation used in the study:

            • Response Options: Videofluoroscopy (VFSS), Endoscopy (FEES), Other (not listed)

            • If “Other (not listed),” then:

              • Select the “Other” types of instrumentation used in the study:

                • Response Options: Tongue Pressure, High Resolution Pharyngeal Manometry (HRPM), Peak Cough Flow Meter/Spirometry, Nasal Cannula Airflow/Respiratory Inductance Plethysmography, Surface Electromyography (sEMG), Cough Reflex Testing, Ultrasound, Neuroimaging (CT, MRI, fMRI, MEG), Other Instrumentation (not listed)

      • Did the study investigate a type of swallowing treatment?

        • Response Options: Yes, No

      • Was a patient or care-provider reported outcome measure utilized as part of the study?

        • Response Options: Yes, No

    • If “No,” then:

      • This tool is not appropriate for this study.

Universally Applicable Questions

  • Were the study location(s) and environmental settings described?

    • Response Options: Yes, No, N/A

  • Was the data collection protocol(s) described in detail?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were the bolus consistencies described (i.e., rheology, IDDSI level, other validated measure)?

        • Response Options: Yes, No, N/A

      • Were bolus volumes described?

        • Response Options: Yes, No, N/A

      • Were the number of trials per consistency and volume described?

        • Response Options: Yes, No, N/A

      • Was the order of bolus administration described?

        • Response Options: Yes, No, N/A

      • Were stimulus brand and manufacturer details reported?

        • Response Options: Yes, No, N/A

      • Were methods of bolus administration described (e.g., cup sip, spoon-delivered, straw, tube-placed, self- vs. clinician-administered)?

        • Response Options: Yes, No, N/A

  • Were participant instructions described (e.g., cueing)?

    • Response Options: Yes, No, N/A

  • Was the positioning of the participant described?

    • Response Options: Yes, No, N/A

  • Were participants blinded to task or treatment condition?

    • Response Options: Yes, No, N/A

  • Were raters blinded to participant ID/group assignment?

    • Response Options: Yes, No, N/A

  • Were raters blinded to timepoint/condition?

    • Response Options: Yes, No, N/A

  • Were the training and/or credentials of all individuals involved in data collection and/or analysis reported?

    • Response Options: Yes, No, N/A

  • Were the statistical tests/methods used appropriate for the type of data collected (e.g., categorical, ordinal, continuous)?

    • Response Options: Yes, No, N/A

  • Were data completeness and the handling of missing data described?

    • Response Options: Yes, No, N/A

Participant Characteristics

  • Which of the following participant characteristics were reported?

    • Response Options: Sex, Gender, Age, Race, Ethnicity, Socioeconomic status, Place of residence (community, nursing home, in hospital, etc.), Primary language spoken, N/A

  • Was there a control group?

    • Response Options: Yes, No, N/A

  • Was the number of participants reported?

    • Response Options: Yes, No, N/A

  • Were inclusion criteria defined?

    • Response Options: Yes, No, N/A

  • Were individuals with a diagnosis of dysphagia recruited into and/or identified in this study?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were the criteria for the term dysphagia defined?

        • ▪ Response Options: Yes, No, N/A

      • Was the method of confirming dysphagia described?

        • ▪ Response Options: Yes, No, N/A

      • Were citations regarding validity of the assessment method provided?

        • ▪ Response Options: Yes, No, N/A

      • Were citations regarding reliability of the assessment method provided?

        • ▪ Response Options: Yes, No, N/A

      • Were dysphagia severity and duration described?

        • ▪ Response Options: Yes, No, N/A

      • Was baseline diet or method of nutritional intake reported (e.g., IDDSI range including food level and drink level)?

        • ▪ Response Options: Yes, No, N/A

  • Were exclusion criteria defined?

    • Response Options: Yes, No, N/A

  • Did the study involve healthy participants?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were the criteria for health reported and defined?

        • ▪ Response Options: Yes, No, N/A

      • If there were subgroups within the healthy participants, were these described?

        • ▪ Response Options: Yes, No, N/A

  • Did the study involve individuals with other medical diagnoses besides dysphagia (as opposed to individuals who were considered healthy)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were any of the following aspects of medical diagnosis reported in the study (select all that apply)?

        • ▪ Response Options: Primary medical diagnosis, Time since medical diagnosis, Stage or severity of disease, Criteria for disease characterization, Medications taken, Other comorbid medical diagnoses

      • Was there more than one group of patient participants?

        • ▪ Response Options: Yes, No, N/A

        • ▪ If “Yes,” then:

          • ▪ Were the characteristics distinguishing these groups described?

            • ▪ Response Options: Yes, No, N/A

Screening and Clinical/Non-Instrumental Assessments of Swallowing

  • Was the non-instrumental swallowing assessment or screening tool validated?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the validation study referenced in the text?

        • ▪ Response Options: Yes, No, N/A

    • If “No,” then:

      • Were all outcomes of the tool listed and described?

        • ▪ Response Options: Yes, No, N/A

      • Was the scale, metric, or criteria for each outcome described?

        • ▪ Response Options: Yes, No, N/A

      • Were outcomes validated with instrumentation?

        • ▪ Response Options: Yes, No, N/A

  • Was interrater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

  • Was the assessment or screening tool designed to detect a binary outcome (e.g., presence/absence of a diagnosis, pass/fail test)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were sensitivity/specificity data reported?

        • ▪ Response Options: Yes, No, N/A

Videofluoroscopic Swallow Study (VFSS)

  • Were the following aspects of instrumentation-related positioning reported (select all that apply)?

    • Response Options: Structures of interest (e.g., lips, tongue, larynx, cervical esophagus, etc.), Angles/Views (e.g., lateral, sagittal, etc.), Method/accessories to optimize positioning (e.g., wedge, pillow, etc.) or measures (e.g., nose plugs), None of the above options were reported

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were details regarding recording settings reported (specifically signal acquisition rate/frame rate)?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Was barium or contrast material used?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were details regarding name/brand/type of barium (or other contrast) reported?

        • ▪ Response Options: Yes, No, N/A

      • Were details regarding barium (or other contrast) concentration reported?

        • ▪ Response Options: Yes, No, N/A

      • Was the same concentration of barium used across trials?

        • ▪ Response Options: Yes, No, N/A

  • Were standard rating methods used and identified (e.g., MBSImP, DIGEST, ASPEKT)?

    • Response Options: Yes, No, N/A

  • Were operational definitions for measurements/outcomes reported?

    • Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Was a validated penetration-aspiration scale used for VFSS?

    • Response Options: Yes, No, N/A

  • If a nonvalidated scale was utilized, were procedures described for reproducibility?

    • Response Options: Yes, No, N/A

  • Was application of the safety rating scale described in a reproducible manner (i.e., bolus level, swallow level, worst versus mean/median, etc.)?

    • Response Options: Yes, No, N/A

  • Was the frequency of safety impairment during VFSS acknowledged?

    • Response Options: Yes, No, N/A

  • Was timing of safety impairment (i.e., before, during, or after the swallow) acknowledged?

    • Response Options: Yes, No, N/A

  • Was a validated residue scale used for VFSS?

    • Response Options: Yes, No, N/A

  • If a nonvalidated scale was utilized, were procedures described for reproducibility?

    • Response Options: Yes, No, N/A

  • Was application of the residue rating scale described in a reproducible manner (i.e., bolus level, swallow level, region, etc.)?

    • Response Options: Yes, No, N/A

Flexible Endoscopic Evaluation of Swallowing (FEES)

  • Were the details of the equipment reported including scope model and recording system?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Was dye used in the study?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the coloring method for bolus trials described for reproducible preparation (i.e., color type, brand, mixture method, amount, etc.)?

        • ▪ Response Options: Yes, No, N/A

  • Were the following aspects of lubrication and/or nasal decongestant described (select all that apply)?

    • Response Options: Type (e.g., water-based, petroleum-based, etc.), Brand or Manufacturer, Concentration, Quantity, Application process, No lubrication or nasal decongestant was utilized.

  • Were the following aspects of topical anesthetic described (select all that apply)?

    • Response Options: Type (e.g., water-based, petroleum-based, etc.), Brand or Manufacturer, Concentration, Quantity, Application process, No topical anesthetic was utilized.

  • Were operational definitions for measurements/outcomes reported?

    • Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Were secretions scored in the study?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was a validated secretion scale used?

        • ▪ Response Options: Yes, No, N/A

      • If “No,” then:

        • ▪ Was application of the nonvalidated secretion scale described in a reproducible manner?

          • ▪ Response Options: Yes, No, N/A

  • Was the protocol for describing anatomical abnormalities reported?

    • Response Options: Yes, No, N/A

  • Was safety of swallowing (i.e., penetration-aspiration) evaluated in the study?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was a validated penetration-aspiration scale used for FEES?

        • ▪ Response Options: Yes, No, N/A

      • If a nonvalidated scale was utilized, were procedures described for reproducibility?

        • ▪ Response Options: Yes, No, N/A

      • Was application of the safety rating scale described in a reproducible manner (i.e., bolus level, swallow level, worst versus mean, etc.)?

        • ▪ Response Options: Yes, No, N/A

      • Was timing of safety impairment (i.e., before, during, or after the swallow) acknowledged?

        • ▪ Response Options: Yes, No, N/A

  • Was efficiency (i.e., residue) evaluated in the study?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was a validated residue scale used for FEES?

        • ▪ Response Options: Yes, No, N/A

      • If a nonvalidated scale was utilized, were procedures described for reproducibility?

        • ▪ Response Options: Yes, No, N/A

      • Was application of the residue rating scale described in a reproducible manner (i.e., bolus level, swallow level, region, etc.)?

        • ▪ Response Options: Yes, No, N/A

Tongue Pressure

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Were the placement and orientation of the device, sensor(s), and/or catheter described?

    • Response Options: Yes, No, N/A

  • Was the presence or absence of any adverse events reported?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Audio/visual recording during task, Placement of device, sensor and/or catheter

  • Was the method for processing data for postcollection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

High Resolution Pharyngeal Manometry (HRPM)

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Were the following aspects of lubrication and/or nasal decongestant described (select all that apply)?

    • Response Options: Type (e.g., water-based, petroleum-based, etc.), Brand or Manufacturer, Concentration, Quantity, Application process, No lubrication or nasal decongestant was utilized.

  • Were the following aspects of topical anesthetic described (select all that apply)?

    • Response Options: Type (e.g., water-based, petroleum-based, etc.), Brand or Manufacturer, Concentration, Quantity, Application process, No topical anesthetic was utilized.

  • Were the placement and orientation of the device, sensor(s), and/or catheter described?

    • Response Options: Yes, No, N/A

  • Was a method for confirming and fixing catheter positioning reported?

    • Response Options: Yes, No, N/A

  • Were procedures for acclimatizing participants to the presence of the catheter reported?

    • Response Options: Yes, No, N/A

  • Was the presence or absence of any adverse events reported?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Audio/visual recording during task, Placement of device, sensor and/or catheter

  • Was the method for processing data for postcollection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

Peak Cough Flow Meter/Spirometry

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Were the placement and orientation of the device, sensor(s), and/or catheter described?

    • Response Options: Yes, No, N/A

  • Was the presence or absence of any adverse events reported?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Audio/visual recording during task, Placement of device, sensor and/or catheter

  • Was the method for processing data for postcollection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

Nasal Cannula Airflow/Respiratory Inductance Plethysmography

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Was the presence or absence of any adverse events reported?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Audio/visual recording during task, Placement of device, sensor and/or catheter

  • Were the following aspects of measurement reported (select all that apply)?

    • Response Options: Signal frequencies, Sampling rate

  • Was the method for processing data for postcollection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

Surface Electromyography (sEMG)

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Audio/visual recording during task, Placement of device, sensor and/or catheter

  • Were the following aspects of measurement reported (select all that apply)?

    • Response Options: Signal frequencies, Sampling rate

  • Was the method for processing data for postcollection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

Cough Reflex Testing

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Were the following aspects of tussive (cough-inducing) stimuli described (select all that apply)?

    • Response Options: Type (e.g., citric acid, capsaicin, etc.), Concentration, Quantity per dosage, Dosage duration, Timing of dosage administrations within inspiratory/expiratory cycle, No cough stimuli were utilized

  • Was the presence or absence of any adverse events reported?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Audio/visual recording during task, Placement of device, sensor and/or catheter

  • Was a procedure for determining reflexive cough threshold utilized?

    • Response Options: Yes, No, N/A

  • Was the method for processing data for postcollection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

Ultrasound

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Was the ultrasound mode (B-mode or T-mode) reported?

    • Response Options: Yes, No, N/A

  • Was the resolution of the ultrasound/ultrasonography reported?

    • Response Options: Yes, No, N/A

  • Was the type of the transducer (probe) reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were details regarding placement and calibration described?

        • ▪ Response Options: Yes, No, N/A

  • Was the presence or absence of any adverse events reported?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Participant positioning, Transducer orientation (sagittal or coronal), Use of a water or gel pad between the transducer and skin, Use of gel, Head stabilization, Audio/visual recording during task

  • Were the following aspects of measurement reported (select all that apply)?

    • Response Options: Signal frequencies, Sampling rate

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Was the method for processing data for post-collection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A
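Several items above ask whether interrater reliability, and the method used to compute it, were reported. One common chance-corrected agreement statistic for categorical ratings is Cohen's kappa; below is a minimal Python sketch (the binary "safe"/"unsafe" labels and the example data are hypothetical):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels
    to the same items (chance-corrected agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)

a = ["safe", "safe", "unsafe", "safe", "unsafe", "safe"]
b = ["safe", "unsafe", "unsafe", "safe", "unsafe", "safe"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

Reporting the statistic used (kappa, ICC, percent agreement, etc.) alongside its value is what the "method for determining interrater reliability" item asks for.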

Neuroimaging (CT, MRI, fMRI, MEG)

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were the following aspects of the scanning protocol described (select all that apply)?

    • Response Options: Intervals of images (in seconds), Structures measured/Regions of Interest, Scanning parameters, Tube voltages/current, Dose absorbed at the skin surface, Effective dose, Timing and duration of scanning

  • Was the use of any contrast media reported?

    • Response Options: Yes, No, N/A

  • Were the following details regarding administration of contrast media described (select all that apply)?

    • Response Options: Type (e.g., barium, diatrizoate), Brand, Concentration, Method of administration (e.g., swallowed, injected), Dosage, Timing of administration, No contrast agents were utilized

  • Was the presence or absence of any adverse events reported?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Participant positioning, Method/accessories to optimize positioning (e.g., wedge, pillow, etc.) or measures (e.g., earplugs)

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Was the method for processing data for post-collection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

Other Instrumentation

  • Were the details of the equipment reported including model and recording system?

    • Response Options: Yes, No, N/A

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Were the methods for calibration of all instrumentation described?

    • Response Options: Yes, No, N/A

  • Were the following aspects of lubrication and/or nasal decongestant described (select all that apply)?

    • Response Options: Type (e.g., water-based, petroleum-based, etc.), Brand or Manufacturer, Concentration, Quantity, Application process, No lubrication or nasal decongestant was utilized.

  • Were the following aspects of topical anesthetic described (select all that apply)?

    • Response Options: Type (e.g., water-based, petroleum-based, etc.), Brand or Manufacturer, Concentration, Quantity, Application process, No topical anesthetic was utilized.

  • Were the placement and orientation of the device, sensor(s), and/or catheter described?

    • Response Options: Yes, No, N/A

  • Was a method for confirming and fixing catheter positioning reported?

    • Response Options: Yes, No, N/A

  • Were procedures for acclimatizing participants to the presence of the catheter reported?

    • Response Options: Yes, No, N/A

  • Were the following details regarding the data collection/scanning protocol reported (select all that apply)?

    • Response Options: Intervals of images (in seconds), Structures measured/Regions of Interest, Scanning parameters, Tube voltages/current, Signal frequency, Sampling rate, Dose absorbed at the skin surface, Effective dose, Timing and duration of scanning

  • Were the following details regarding administration of contrast media described (select all that apply)?

    • Response Options: Type (e.g., barium, diatrizoate), Brand, Concentration, Method of administration (e.g., swallowed, injected), Dosage, Timing of administration, No contrast agents were utilized

  • Was the presence or absence of any adverse events reported?

    • Response Options: Yes, No, N/A

  • Was the following task-specific information reported?

    • Response Options: Type of task(s) performed (e.g., dry swallows, effortful swallows, tongue-to-palate presses, etc.), Number of trials, Rest between trials, Duration of session, Number of sessions, Participant positioning, Method/accessories to optimize positioning (e.g., wedge, pillow, etc.) or measures (e.g., earplugs), Transducer orientation (sagittal or coronal), Use of a water or gel pad between the transducer and skin, Use of gel, Head stabilization

  • Were the names and system requirements of any analysis software described?

    • Response Options: Yes, No, N/A

  • Was the method for processing data for post-collection analysis described (e.g., normalizing, standardizing, signal processing, converting, modifying data, etc.)?

    • Response Options: Yes, No, N/A

  • Was there reporting of measurement artifact (e.g., non–task-related movement/data)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for handling artifact described (e.g., inspection and removal prior to data analysis)?

        • ▪ Response Options: Yes, No, N/A

  • Was the process of rating described relative to time of exam (i.e., real-time and/or post-hoc)?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were exams recorded and reviewed post-hoc?

        • ▪ Response Options: Yes, No, N/A

  • Were operational definitions stated for every measurement that was recorded?

    • Response Options: Yes, No, N/A

  • Were all units of measurement reported?

    • Response Options: Yes, No, N/A

  • Was more than one rater included?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was interrater reliability reported?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the method for determining interrater reliability reported?

          • ▪ Response Options: Yes, No, N/A

  • Was intrarater reliability reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method for determining intrarater reliability reported?

        • ▪ Response Options: Yes, No, N/A

  • Were discrepancy resolution processes described?

    • Response Options: Yes, No, N/A

Treatment of Dysphagia

  • Was the rationale for providing treatment reported?

    • Response Options: Yes, No, N/A

  • Were primary outcomes identified prior to treatment?

    • Response Options: Yes, No, N/A

  • Were secondary outcomes identified prior to treatment?

    • Response Options: Yes, No, N/A

  • Were characteristics of swallowing physiology for the participant group(s) described?

    • Response Options: Yes, No, N/A

  • Were swallow or swallowing-related treatment targets for the participant group(s) described?

    • Response Options: Yes, No, N/A

  • Was a device/tool utilized to facilitate treatment?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the specific type of device reported (make/model)?

        • ▪ Response Options: Yes, No, N/A

      • Was the resistance load setting on the device described (as appropriate)?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Were the methods for determining the resistance load described (i.e., average, maximum values, or duration across a specified number of trials)?

          • ▪ Response Options: Yes, No, N/A

      • Was biofeedback offered as part of treatment?

        • ▪ Response Options: Yes, No, N/A

      • If “Yes,” then:

        • ▪ Was the type of biofeedback reported?

          • ▪ Response Options: Yes, No, N/A

  • Were therapy sessions conducted in groups?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the number of participants per group described?

        • ▪ Response Options: Yes, No, N/A

      • Was the ratio of clinicians and/or aides to participants reported?

        • ▪ Response Options: Yes, No, N/A

  • Did the study report any of the following items related to the treatment regimen (select all that apply)?

    • Response Options: Repetitions, Frequency, Intensity, Duration

  • Was participant adherence to the clinician schedule and/or treatment plan reported?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the method of recording and tracking participant adherence to the clinician schedule and/or treatment plan reported (i.e., device-recorded vs. patient-reported)?

        • ▪ Response Options: Yes, No, N/A

  • Were the instructions that were provided to the participant(s) for completing the treatment described?

    • Response Options: Yes, No, N/A

  • Did participants receive additional/concurrent therapies beyond the treatment being studied?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Were the additional/concurrent therapies described?

        • ▪ Response Options: Yes, No, N/A

  • Was there a home exercise component prescribed as part of treatment?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was participant adherence to their home program measured/reported?

        • ▪ Response Options: Yes, No, N/A

  • Was the number of clinicians administering the treatment reported?

    • Response Options: Yes, No, N/A

  • If there was more than one clinician administering treatment, was the training protocol to ensure consistency reported?

    • Response Options: Yes, No, N/A
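Adherence to the treatment plan (asked about above) is often summarized as the percentage of prescribed sessions actually completed, whether counts are device-recorded or patient-reported. A minimal sketch, with a hypothetical function name:

```python
def adherence_rate(prescribed_sessions, completed_sessions):
    """Percentage of prescribed treatment sessions completed."""
    if prescribed_sessions <= 0:
        raise ValueError("prescribed_sessions must be > 0")
    return 100.0 * completed_sessions / prescribed_sessions

# e.g., a participant completes 18 of 24 prescribed sessions
print(adherence_rate(24, 18))  # 75.0
```

Reporting both the numerator and denominator (not just the percentage) makes the adherence claim verifiable by readers.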

Patient- or Care-Provider-Reported Outcome Measures (PROMs)

  • Was the cognitive competency of the population required to complete the PROM described?

    • Response Options: Yes, No, N/A

  • Were literacy/language barriers related to completing the PROM described?

    • Response Options: Yes, No, N/A

  • Were cultural barriers related to completing the PROM described?

    • Response Options: Yes, No, N/A

  • Was the method of PROM completion described (e.g., telephone, mail, in person)?

    • Response Options: Yes, No, N/A

  • Was the method of PROM administration described (e.g., self, interviewer, by proxy, medical staff administered, etc.)?

    • Response Options: Yes, No, N/A

  • Was the timing of PROM completion relative to the study intervention described?

    • Response Options: Yes, No, N/A

  • Has the PROM used in the study been validated?

    • Response Options: Yes, No, N/A

    • If “Yes,” then:

      • Was the questionnaire validated on the same population to which it was applied in the study?

        • ▪ Response Options: Yes, No, N/A

      • Was the validation study referenced in the text?

        • ▪ Response Options: Yes, No, N/A

      • Was the PROM used in the study administered in the language in which it was validated?

        • ▪ Response Options: Yes, No, N/A

      • If “No,” then:

        • Was the translation process described in detail?

          • ▪ Response Options: Yes, No, N/A

        • Was the translation process performed according to an accepted translation method (e.g., the International Society for Pharmacoeconomics and Outcomes Research [ISPOR] Task Force guidelines for translation and cultural adaptation)?

          • ▪ Response Options: Yes, No, N/A

    • If “No,” then:

      • Was the full questionnaire provided in the manuscript?

        • ▪ Response Options: Yes, No, N/A

      • Was internal consistency (reliability) reported?

        • ▪ Response Options: Yes, No, N/A

      • Was test-retest reproducibility (reliability) reported?

        • ▪ Response Options: Yes, No, N/A

      • Were any of the following types of validity reported (select all that apply)?

        • ▪ Response Options: Content validity, Criterion validity, Construct validity

      • Was PROM development described in detail (e.g., question development, pilot testing, etc.)?

        • ▪ Response Options: Yes, No, N/A

      • Were responsiveness data provided?

        • ▪ Response Options: Yes, No, N/A

      • Were floor/ceiling effects addressed?

        • ▪ Response Options: Yes, No, N/A

      • Was clinical interpretability addressed (i.e., minimal clinically important difference)?

        • ▪ Response Options: Yes, No, N/A
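For PROMs lacking prior validation, the items above ask about internal consistency. Cronbach's alpha is the statistic most often reported for this; the following is a minimal pure-Python sketch, assuming a complete participant-by-item score matrix with no missing responses (the function name and example scores are hypothetical):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from a list of per-participant item-score lists.
    item_scores[i][j] = participant i's score on item j."""
    k = len(item_scores[0])                     # number of items
    items = list(zip(*item_scores))             # per-item score columns
    totals = [sum(row) for row in item_scores]  # per-participant totals
    item_var = sum(variance(col) for col in items)
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

scores = [
    [3, 4, 3],
    [1, 1, 2],
    [4, 5, 4],
    [2, 2, 2],
    [5, 5, 5],
]
print(round(cronbach_alpha(scores), 2))  # 0.97
```

Alpha depends on the number of items as well as their intercorrelation, so reporting the item count and sample alongside the coefficient aids interpretation.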

Note. CT = computed tomography; MRI = magnetic resonance imaging; fMRI = functional magnetic resonance imaging; MEG = magnetoencephalography; N/A = not applicable; IDDSI = International Dysphagia Diet Standardisation Initiative (https://www.iddsi.org).

Funding Statement

Development of the Framework for RigOr aNd Transparency In REseaRch on Swallowing (FRONTIERS) and the preparation of this article were not directly supported by funding to any of the authors. The authors acknowledge salary support as follows: Nicole Rogus-Pulia: University of Wisconsin–Madison; William S. Middleton Memorial Veterans Hospital, Madison, WI; and National Institutes of Health (NIH) grant numbers K76AG068590 and R01AG082170. Rebecca Affoo: Dalhousie University, Halifax, Nova Scotia, Canada. Ashwini Namasivayam-MacDonald: McMaster University, Hamilton, Ontario, Canada; Alzheimer's Society and Canadian Institutes of Health Research grant ASR-176674; NIH grant R01AG077481; and Canadian Frailty Network grant ECR2022-003. Brandon Noad: No salary source to disclose. Catriona M. Steele: KITE Research Institute – Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada; Canada Research Chairs Secretariat, Ottawa, Ontario, Canada; and NIH grant numbers R01DC011020 and R01AG077481. The views and content expressed in this article are solely the responsibility of the authors and do not necessarily reflect the position, policy, or official views of the Department of Veterans Affairs, the U.S. government, or the NIH.

Data Availability Statement

Data sharing is not applicable to this article as no data sets were generated or analyzed related to this prologue.


Articles from American Journal of Speech-Language Pathology are provided here courtesy of American Speech-Language-Hearing Association