PLOS ONE. 2020 Nov 23;15(11):e0241839. doi: 10.1371/journal.pone.0241839

Community partners’ responses to items assessing stakeholder engagement: Cognitive response testing in measure development

Vetta L Sanders Thompson 1,*,#, Nora Leahy 2,#, Nicole Ackermann 2,#, Deborah J Bowen 3, Melody S Goodman 4
Editor: Roxanna Morote Rios
PMCID: PMC7682898  PMID: 33227007

Abstract

Background

Despite recognition of the importance of stakeholder input into research, there is a lack of validated measures to assess how well constituencies are engaged and their input integrated into research design. Measurement theory suggests that a community engagement measure should use clear and simple language and capture important components of underlying constructs, resulting in a valid measure that is accessible to a broad audience.

Objective

The primary objective of this study was to evaluate how community members understood and responded to a measure of community engagement developed to be reliable, valid, easily administered, and broadly usable.

Method

Cognitive response interviews were completed, during which participants described their reactions to items and how they processed them. Participants were asked to interpret item meaning, paraphrase items, and identify difficult or problematic terms and phrases, as well as raise any concerns about response options, while responding to 16 of the 32 survey items.

Results

The results of the cognitive response interviews of participants (N = 16) suggest concerns about plain language and literacy, clarity of question focus, and the lack of context clues to facilitate processing in response to items querying research experience. Minimal concerns were related to response options. Participants suggested changes in words and terms, as well as item structure.

Conclusion

Qualitative research can improve the validity and accessibility of measures that assess stakeholder experience of community-engaged research. The findings suggest wording and sentence structure changes that improve the ability to assess the implementation of community engagement and its impact on research outcomes.

Introduction

Policy makers, funders, patients, and providers are increasingly interested in community/patient-engaged research, which is broadly understood as stakeholder-engaged research. Interest in this work is based on the belief that this strategy contributes to more acceptable research designs and methods and culturally sensitive and ethical proposals that reduce participant burden and enhance recruitment and retention of participants [1–3]. In addition, stakeholder-engaged research may facilitate implementation research. These studies require planning for sustainability after funding ends, building trust among collaborating stakeholders, and ensuring the research capacity of stakeholder partners in order to leverage stakeholder resources and develop reciprocal relationships between researchers and stakeholders [1]. To assure that engagement meets the goals articulated, there is a need to rigorously evaluate the impact of stakeholder engagement on the development, implementation, and outcomes of research studies, in order to move from lessons learned to evidence-based practices for stakeholder engagement [2].

It is important to understand how the level of engagement in a partnership is developing, and to what extent the level of engagement is a predictor of outcomes in the study. Rigorous measurement of engagement requires the development and validation of tools to assess stakeholder engagement, using items that respondents understand and that measure what they are intended to measure. A systematic review of measures of stakeholder engagement in research showed that this area of research is not very strong methodologically [3]. In response, this paper presents the results of one step in the process of developing a clearly defined, reliable, and valid measure of stakeholder engagement. Cognitive response interviews were employed to explore participants’ reactions to item wording and response options, as well as their ability to comprehend items and determine a response. A grounded theory approach guided data analysis and interpretation.

In addition to item development, item refinement must be considered in measure development and validation. Measurement items that work well are defined as those that use clear and simple language, are not rated as difficult by interviewers or research participants, do not result in significant missing data, do not yield unexpected frequencies or patterns of association, and capture an important component of the underlying construct [4]. Studies addressing item and measurement quality note variations in item interpretation based on participation in the activity under study and variations in recall based on the frequency of participation in the activity/activities. Community engagement studies are subject to the types of variations in participation that may impact item interpretation and recall [5, 6]. Researchers recognize the importance of the wording of survey questions. One approach to identifying and resolving variations in item interpretation related to experience, language, and construction is to use cognitive response interviewing.

Cognitive interview methods have become an important strategy to ensure the quality and accuracy of survey instruments and are used to identify and analyze sources of response error in survey questionnaires [7, 8]. Because it helps to explain how people come up with answers to survey questions, cognitive response interviewing is used to evaluate and improve survey items and design [8]. Willis and Artino note that cognitive response interviewing is a part of a systematic and rigorous survey design process [9]. They explain that the process allows researchers to answer the question: “Will my respondents interpret my items in the manner that I intended?” (p. 353) [9]. Cognitive response interviewing is an evidence-based option for examining participant comprehension and interpretation of survey items, as well as respondents’ understanding of intended differences in survey items [10, 11].

Two common cognitive interviewing techniques are verbal probing and “think aloud” [8–10]. In verbal probing, participants answer questions about their interpretations of a survey item, paraphrase the survey item, and identify words, phrases, or item components that are problematic. In “think aloud,” participants verbalize ideas that come to mind as they answer a question and thereby shed light on reactions, inferences, and beliefs that helped them arrive at their answer. Both techniques are useful in identifying problems with survey item wording and design, but verbal probing is more likely to address issues with “literacy and plain language” (i.e., jargon-free and carefully worded language consistent with the Federal Plain Language Guidelines [12]).

This article adds to the literature on the measurement of stakeholder engagement by describing literacy concerns, attitudes about the information needed to judge engagement, as well as response preferences for items used in the public health literature to assess the level, type, and impact of community-engaged interventions and research. In addition, findings may improve the way that researchers communicate with stakeholders about community-engaged research and the assessment of this type of research.

Methods

Participant sample

A purposive sample of 16 participants was recruited to complete one-on-one cognitive response interviews. Eligibility criteria for the cognitive response interviews included being an adult (18 years or older) with experience partnering with researchers on patient- or community-engaged research. Participants were recruited by email from a database of Community Research Fellows Training (CRFT) alumni who completed the CRFT program in St. Louis, MO [13], and through referrals from CRFT alumni. CRFT was established in 2013 and maintains a voluntary database of graduates of four cohorts (n = 125), 94 (75%) of whom are active alumni and have updated contact information.

Item selection

In 2011, some of the authors of this paper [14] participated in a review of the community-based participatory research (CBPR) and community engagement literature to determine best practices in evaluating adherence to, effectiveness of, and implementation of CBPR, as well as to identify relevant items. Based on this review, the evaluation team for the Program for the Elimination of Cancer Disparities (PECAD) (including some authors) identified and adapted questions for the survey from published measures on group dynamics, characteristics of effective partnerships, intermediate measures of partnership effectiveness, facilitation of partner involvement, and member satisfaction [15–17]. The original survey (initiated in 2011) contained 60 items and included both closed- and open-ended questions [14].

In 2013, a PECAD evaluation committee was formed and initiated a revision of the original 2011 survey. The goal was to create a measure that was comprehensive and adequately addressed CBPR principles. The developers of the new measure (including some authors) [2] created and pilot tested a 96-item measure of community engagement in research. The new measure included some items from the original, was fully quantitative, and focused on 11 engagement principles. The 96-item measure comprised two scales—48 questions measuring quality (how well) and 48 questions measuring quantity (how often)—with three to five quality and three to five quantity items corresponding to each engagement principle [2]. The measure used 5-point Likert scale response options. Further details on the original measure are published elsewhere [2].
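
To make the paired-scale design concrete, the sketch below represents a few hypothetical items the way the measure pairs them: each engagement activity is rated once for quality and once for quantity on the 5-point scales reported in Table 5. The item wording and principle label here are illustrative placeholders, not the measure’s actual items.

```python
from dataclasses import dataclass

# Response options for the two 5-point scales (wording taken from Table 5).
QUALITY_OPTIONS = ["Poor", "Fair", "Good", "Very Good", "Excellent"]     # "how well"
QUANTITY_OPTIONS = ["Never", "Rarely", "Sometimes", "Often", "Always"]   # "how often"

@dataclass
class EngagementItem:
    """One engagement activity, rated on both the quality and quantity scales."""
    principle: str  # engagement principle the item belongs to (placeholder label)
    activity: str   # the activity/strategy being rated (placeholder wording)

# Hypothetical items grouped under one engagement principle (three to five per principle).
items = [
    EngagementItem("Partner input is vital", "Ask all partners for input"),
    EngagementItem("Partner input is vital", "Enable all partners to voice disagreements"),
    EngagementItem("Partner input is vital", "Use partner input when making decisions"),
]

# Each item generates two survey questions, one per scale.
for item in items:
    print(f"How well:  {item.activity}  {QUALITY_OPTIONS}")
    print(f"How often: {item.activity}  {QUANTITY_OPTIONS}")
```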

Content validation and item reduction of the quantitative measure of stakeholder engagement were completed in 2017–2018 [18], using a Delphi Process. A Delphi Process involves administration of multiple rounds of individual online and/or in-person surveys, with participant feedback on aggregated group responses for each round until reaching majority agreement on issues [18]. A five-round, modified Delphi Process was used to reach consensus on engagement principles and items for inclusion, elimination, and revision [19, 20]. The number of survey items on each scale (quantity and quality) was reduced from 48 to 32. There were three to five quality and three to five quantity items corresponding with each engagement principle, each assessed for quality (how well the engagement activity/strategy was implemented or completed) and quantity (how often the engagement activity/strategy was implemented) (Tables 1 and 2). The items that emerged after content validation (Delphi process) were subjected to cognitive response interviews.
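
For readers unfamiliar with the mechanics of a Delphi round, the minimal sketch below shows one way votes on candidate items could be tallied against a majority-agreement threshold. This is an illustration only, not the panel’s actual procedure or thresholds; the item wording is loosely adapted from Table 1, and the vote data are invented.

```python
# Hypothetical panelist votes on two candidate items (1 = keep, 0 = drop or revise).
votes = {
    "Ask community members for input":       [1, 1, 0, 1, 1, 1, 0, 1],
    "Highlight the community's involvement": [0, 1, 0, 0, 1, 0, 0, 1],
}

def tally_round(votes, threshold=0.5):
    """Split items into those retained by majority agreement and those carried forward."""
    retained, next_round = [], []
    for item, ballots in votes.items():
        agreement = sum(ballots) / len(ballots)
        target = retained if agreement > threshold else next_round
        target.append((item, round(agreement, 2)))
    return retained, next_round

retained, next_round = tally_round(votes)
print("Retained:", retained)
print("Carried to next round:", next_round)
```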

Table 1. Summary of items removed throughout measure development & validation process–Delphi rounds 1 & 2.

Orig. EP1 New EP2 Delphi Round 1 Delphi Round 2
# of Items Items # of Items Items
1 1 0 - 0 -
2 - 4 1.) Show appreciation for community time and effort - -
2.) Highlight the community's involvement
3.) Give credit to community members and others for work
4.) Value community perspectives
3 - 3 1.) Help community members with problems of their own - -
2.) Get findings and information to community members
3.) Help community members disseminate information using community publications
4 2 1 1.) Ask community members for input 0 -
5 3 0 - 2 1.) Seek input from all partners at every stage of the process
2.) Seek help from all partners at every stage of the process
6 4 1 1.) Help community members achieve social, educational, or economic goals 1 1.) Help all partners gain important skills from involvement
7 5 0 - 1 Combined 2 Items into 1 Item:
1.) Build on strengths within the community/ target population
2.) Build on resources within the community/ target population
8 6 2 1.) Demonstrate that community members are really needed to do a good job 2 1.) Demonstrate how all partners’ ideas improve the work
2.) Enable community members to voice disagreements 2.) Make final decisions that reflect the ideas of all partners involved
9 - 1 1.) Demonstrate that community members' ideas are just as important as academics' ideas - -
10 7 1 1.) Include community members in plans for sharing findings 1 1.) Share the results of how things turned out with all partners
11 - 3 1.) Make plans for community-engaged activities to continue for many years - -
2.) Make commitments in communities that are long-term
3.) Want to work with community members for many years
- 8 - - 0 -
Total 16 - 7 -

1 EP1: Focus on local relevance and social determinants of health; EP2: Acknowledge the community; EP3: Disseminate findings and knowledge gained to all partners; EP4: Seek and use the input of community partners; EP5: Involve a cyclical and iterative process in pursuit of objectives; EP6: Foster co-learning, capacity building, and co-benefit for all partners; EP7: Build on strengths and resources within the community; EP8: Facilitate collaborative, equitable partnerships; EP9: Integrate and achieve a balance of all partners; EP10: Involve all partners in the dissemination process; EP11: Plan for a long-term process and commitment.

2 EP1: Focus on community perspectives and determinants of health; EP2: Partner input is vital; EP3: Partnership sustainability to meet goals and objectives; EP4: Foster co-learning, capacity building, and co-benefit for all partners; EP5: Build on strengths and resources within the community or patient population; EP6: Facilitate collaborative, equitable partnerships; EP7: Involve all partners in the dissemination process; EP8: Build and maintain trust in the partnership.

Table 2. Summary of items removed throughout measure development & validation process–Delphi rounds 3–4.

EP1 Delphi Round 3 Delphi Round 4
# of Items Items # of Items Items
1 0 - 0 -
2 0 - 1 1.) Create a shared decision making structure
3 0 - 0 -
4 0 - 0 -
5 1 1.) Help to fill gaps in community/patient population’s strengths and resources 0 -
6 1 1.) Foster collaborations in which all partners have input 1 1.) Enable all people involved to voice their views
7 2 1.) Interested partners are involved with sharing findings 0 -
2.) The partners meet to communicate about the project
8 0 - 0 -
Total 4 - 2 -

1 EP1: Focus on community perspectives and determinants of health; EP2: Partner input is vital; EP3: Partnership sustainability to meet goals and objectives; EP4: Foster co-learning, capacity building, and co-benefit for all partners; EP5: Build on strengths and resources within the community or patient population; EP6: Facilitate collaborative, equitable partnerships; EP7: Involve all partners in the dissemination process; EP8: Build and maintain trust in the partnership.

Procedures

The institutional review boards at Washington University in St. Louis and at New York University approved this study and the consent procedures used. Interviewers (n = 4) attended a training to ensure consistent interview and data collection procedures. All interviewers were female; the team was led by a PhD psychologist (VLST) and included an MPH-trained project manager (NA) and two MPH student research assistants (including NL). The training provided in-depth instruction in cognitive interviewing, as well as an orientation to the interview guide and protocol. The interviewers received instruction on the use of tablets to administer the cognitive response interview to assure consistency and ease of administration. Although tablets were used during the interview to capture survey item and quantitative question responses, computer-assisted personal interview software was not used, and participant qualitative responses were captured using a digital recorder.

Sixteen eligible participants completed the in-person, one-on-one interviews in study interview rooms. Verbal consent for participation was obtained from all participants after an information sheet was provided. In order to assure that respondents understood their role in cognitive interviews, we explained that the purpose of the interview was to identify problems with item wording and to help us modify the items to improve their use in community-engaged research. We emphasized their role in helping clarify the questions before administering the final survey to 500 participants.

The first author (VLST) and three research assistants (including NA, NL) conducted interviews. Interviewees were greeted by a project staff member, directed to the interview room, and introduced to the interviewer, if different. At the beginning of the session, the interviewer introduced herself, explained the study and its procedures, briefly described the use of the tablet, and answered any of the participant’s questions. The interviewer read each question aloud and highlighted the availability of a paper version of the survey to ease the participant’s review and consideration. After the participant answered an item, the interviewer completed the verbal probing.

We first administered the draft of the survey items in a standard fashion, followed by scripted open-ended probes and spontaneous probes as needed for clarification. The interview probes were systematically developed before the interview, in order to search for potential problems (i.e., proactive verbal probes) [9] with survey items, as well as response options. The interview probes included questions and statements addressing the following:

  • What do you think the statement is discussing/ describing?

  • How would you rephrase the statement in your own words?

  • Identify all the words in the statement, if any, that you do not understand.

  • Rate how difficult it was to choose a response option for this statement.

  • What, if anything, made this item difficult to answer?

  • Rate the importance of this item for measuring community engagement.

  • Describe, in your own words, what, if anything, makes this item important for measuring community engagement.

  • Rate how satisfied you are with the response options, and describe how you would revise them.

To minimize the impact that the order of questions had on the overall results, we used 4 different versions of the questionnaire, each containing 16 of the 32 items from each scale. Participants were assigned an identification number and interview version by NA before the interview was conducted. All 90- to 120-minute interviews were digitally recorded, and each session’s recording was professionally transcribed. Each individual received a $50 gift card for participation.
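
As an illustration of the counterbalancing described above, the sketch below assigns 16 participants evenly across the 4 questionnaire versions in shuffled order. The assignment procedure actually used is not detailed in the text, so this is an assumed, minimal implementation.

```python
import random

N_PARTICIPANTS = 16
N_VERSIONS = 4

# Balanced rotation: each of the four questionnaire versions is assigned to
# 16 / 4 = 4 participants, shuffled so version is not confounded with recruitment order.
assignments = [v for v in range(1, N_VERSIONS + 1)
               for _ in range(N_PARTICIPANTS // N_VERSIONS)]
random.seed(7)  # fixed seed only so the example output is reproducible
random.shuffle(assignments)

for participant_id, version in enumerate(assignments, start=1):
    print(f"Participant {participant_id:02d} -> questionnaire version {version}")
```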

Data coding and analysis were completed based on both the digitally recorded and professionally transcribed interviews and the field notes from interviewers compiled by NA. Transcripts were reviewed by the lead author (VLST) and project manager (NA) but were not returned to participants for review. After reviewing the project goals, the content of the interviews, and the existing literature, the first author (VLST) developed a defined coding guide that prescribed rules and categories for identifying and recording content.

Because the quality (how well) and quantity (how often) items repeat the same content, these probes were asked only for the quality items. The remainder of the interview focused on participants’ reactions to the change in response options as the survey moved from assessing the quality of community-engaged research activities to assessing their frequency (quantity). For example, the interviewers asked:

  • How did the change of scale, from a quality to quantity scale, affect your understanding of the item?

  • Did it make the statement more difficult or easier to understand?

Two queries were repeated to specifically address the quantity response option.

  • “Tell me about your thought process when choosing a response option.”

  • “Tell me what you thought about to come up with your responses to the statements.”

During the analysis phase, codes and themes were developed based on the elements deemed important in the cognitive response literature on item and questionnaire development [7–10]. The segments of text containing codes were identified and the codes were extracted, categorized and classified. All transcripts were coded. Once saturation was achieved by two coders, the senior investigator (VLST) and a research assistant (NL) read and coded the interview transcripts independently, identifying text units that addressed item clarity, literacy concerns, contextual issues, difficulty of the item, difficulty of item response, relevance and necessity for measurement of stakeholder engagement, in addition to clarity and appropriateness of response options. Coders met to reach a consensus on the definitions and examples used to code interview text from each transcript as the process proceeded. In cases of disagreement, the coders discussed discrepancies to reach consensus. On completion of coding, the coders reconvened to formulate core ideas and general themes that emerged from each interview. An interview summary, with examples, was developed for each theme. The full research team reviewed and discussed the themes identified in an effort to develop connections among themes and to clarify the relevance and importance of the findings for the measure and the field. Participants did not provide feedback on the findings.

Results

The majority of the cognitive interview participants were female (n = 13; 81%), were African American (n = 11; 69%), and had a college degree or higher level of education (n = 9; 56%). Participants ranged in age from 24 to 73 years with a mean age of 47.3 years (Table 3). All of the participants had previous experience with community-engaged research.

Table 3. Demographic characteristics of interview participants.

Demographic Characteristic N
MEAN AGE (N = 16): 47.3 (Range: 24 to 73)
EDUCATION (N = 16)
High School 2
Some college or associate degree 4
College degree 2
Graduate Degree 7
Missing 1
RACE Male Female Missing Total
African American/ Black 1 10 0 11
White 1 3 0 4
Missing 0 0 1 1
Total 2 13 1 16

Item comprehension

Most participants did not readily report difficulties with the comprehension or definition of words or phrases; however, there were a few exceptions. Although not a concern among most participants, several reported that the wording of items addressing publication of research products was difficult, including wording related to dissemination, dissemination activities, and intellectual property. One participant stated, “Okay, now you put a big word in there. Okay, involve interested partners in dissemination activities.” Terms that addressed procedures to assure adherence to CBPR principles, such as memorandum of understanding, governance, management responsibility, and mutually agreed upon, were among those identified as literacy concerns, as one participant stated: “I’ve not heard that one. Memorandum of understanding.” Stakeholder was identified as jargon that presented a literacy issue. Participants’ recommendations resulted in the replacement of the term stakeholder with partners. Several other words were identified that may also relate to the use of disciplinary jargon. For example, participants noted that food access could be simplified to “places to get food.” Other words, such as fosters, equitable, and inclusiveness, were unfamiliar and presented problems (see Table 4). Participants suggested plain language alternatives, such as sharing results or sharing data versus dissemination, roles and responsibilities versus memorandum of understanding, and articles and presentations versus intellectual property.

Table 4. Summary of cognitive interview analyses: factors related to comprehension, response and suggestions for change.

Meaning
Literacy: self & others Comments/Suggestions
Dissemination, dissemination activities, disseminate Sharing results or sharing data
Memorandum of Understanding Roles and responsibilities
Intellectual property Articles and presentations
Management responsibility
Accountable
Inclusion, inclusiveness, inclusive quality
Representation
Collaboration, collaborative
Equitable
Coalition
Fosters Encourages, supports
Incorporate factors Delete and use examples
Capacity
Governance
Mutually agreed upon, agreed-upon
Food access Places to buy or get food
Stakeholder (termed jargon)
Vague
Culture, cultural factors (examples, context)
Issues (examples, context)
Plan (context)
Problem Solving
Resources (what resources, context)
Capacity (context)
Environment (context)
Partner, partners, academic partners (there seems to be a desire to specify “all”) All partners, less confusion; who is included
Leadership responsibility (also listed as preferred)
RESPONSES
Quality Stem Confusing, only specifies academic researchers; too wordy Take word quality out of stem; specify all partners.
Recommended Changes Additions “Unsure, undecided,” numbers to ground the quantity scale
Complex Questions All partners assist in establishing roles and responsibilities for the collaboration Reference single issue
All partners have the opportunity to share ideas, input, leadership responsibilities, and governance (for example—memorandum of understanding, bylaws, organizational structure) as appropriate for the project. It is always appropriate to share information in the partnership; too wordy
Incorporate factors (for example—housing, transportation, food access, education, employment) that influence health status, as appropriate. Delete “as appropriate.” Always appropriate.
Examine data together to determine the health problems that most people in the community think are important. All partners look at the data to determine the health problems the community thinks are important.
Partners agree on ownership and management responsibility of data and intellectual property. Partners agree on ownership of data for publications and presentations.
Importance
Not important Community has confidence they will receive credit for their contributions. Researcher focused
All partners have the opportunity to be coauthors when the work is published.
Factors cited for importance Trust, benefit, respect, power, control, decision making (mutual), value community

Definitional issues also emerged. The terms cultural factors, problem solving, and leadership responsibilities were viewed as too general or ambiguous.

“That can mean a lot of different things to a lot of different people. You can call a lot of different things a culture. I guess, to make it more clear, explaining what they mean by cultural.” (Participant 1: Female, 54)

“I don’t know what you mean by problem solving. I don’t know what you mean by ongoing. I don’t know if there’s a time limit on that or boundaries.” (Participant 15: Female, 62)

Even when words such as resources, environment, and partners were understood, community participants wanted specifics and context, including what resources and which partners. Participants recommended the use of all partners to assure that both academic and community partners were considered, while noting that alternative wording was sometimes difficult to generate, particularly without context.

“I would say I’m not completely clear what they mean by the environment.” (Participant 1: Female, 54)

Item response

Participants were generally satisfied with the response options (81.25% for quality and 87.5% for quantity) and used the full range of response options for both scales (Table 5). Most participants indicated that it was “extremely easy” or “somewhat easy” (average 74.2% per item) to respond to the items tested. Participants noted that it was easy to transition between quality and quantity items and easier to respond to the quantity items than to the quality items.

Table 5. Average of participants (N = 16) choosing response options over all items.

Quantity Responses Quality Responses Difficulty of Choosing Response*
Response Option Average, % Response Option Average, % Response Option Average, %
Never 5.1 Poor 9.4 Extremely Easy 48.1
Rarely 9.0 Fair 8.6 Somewhat Easy 26.1
Sometimes 27.0 Good 19.2 Neither Easy nor Difficult 9.8
Often 36.5 Very Good 26.8 Somewhat Difficult 14.9
Always 22.4 Excellent 35.9 Extremely Difficult 1.2

*Asked after responding to item using quality scale.
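
The percentages in Table 5 are averages, across items, of the share of participants choosing each response option. The sketch below reproduces that calculation on a hypothetical, randomly generated response matrix; it is not the authors’ analysis code, and the data are placeholders.

```python
import numpy as np

# Hypothetical response matrix: 16 participants x 16 quantity items,
# values are the chosen option index (0 = Never ... 4 = Always).
rng = np.random.default_rng(0)
responses = rng.integers(0, 5, size=(16, 16))

options = ["Never", "Rarely", "Sometimes", "Often", "Always"]

# Per item: percentage of participants choosing each option; then average across items.
per_item_pct = np.array([
    [(responses[:, j] == k).mean() * 100 for k in range(len(options))]
    for j in range(responses.shape[1])
])
average_pct = per_item_pct.mean(axis=0)

for label, pct in zip(options, average_pct):
    print(f"{label}: {pct:.1f}%")
```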

Some participants requested the inclusion of “unsure, undecided” as a response option and the inclusion of numbers to ground the quantity scale. The larger concerns raised by participants related to the following: 1) the question stem that preceded items and 2) the ability to respond to items viewed as complex. The stem, “Please rate the quality/quantity or how well/how often academic partners do each of the following,” created confusion about who subsequent items referenced when terms such as partners, partnership, or stakeholders were used. The stem is, therefore, implicated in the comprehension concerns noted. Participants recommended that the terms quality and quantity be removed from the stem and that the term all partners be used instead of the term academic partners.

Two items were identified as creating difficulty in responding because they were compound, or “double-barreled,” questions: “All partners assist in establishing roles and responsibilities for the collaboration” and “Partners agree on ownership and management responsibility of data and intellectual property.” These items were changed to “All partners assist in establishing roles and related responsibilities for the partnership” and “All partners agree on ownership of data for publications and presentations,” respectively. Other items were identified as problematic because of strong beliefs about how health research should be conducted and partnerships managed.

Relevance for community engagement

Some participants identified two items as unimportant to the assessment of community engagement. Both items were associated with the CBPR principle that addresses dissemination; thus, they were characterized as research focused. A participant’s thoughts are illustrated below:

“I don’t think it’s a significant indicator of how engaged investigators are if they give authorship to the community partners. I think as long as they give credit to the community partners, that’s what is important.” (Participant 12: Male, 43)

The items seen as having the greatest relevance for the assessment of community engagement addressed trust, community benefit, respect, power/control, mutual decision making, and valuing the community. Sample participant explanations of principles and items appear below:

“Well, potentially the most important thing is identifying the issues that matter because if the issue itself doesn’t matter then why would the community want to be engaged in the research if that’s not important to them. Also, the result is going to be unimportant.” (Participant 2: Female, 31)

“—that should be equal, the responsibilities. I’m thinking in terms of the community—well, as an equal relationship, so it’s important that both are empowered to do what is necessary to better their circumstances.” (Participant 5: Male, 73)

“So, yes, basically everything that both sides bring are being considered important because it gives mutual respect.” (Participant 11: Female, 38)

“I think having trust among community levels—or amongst community members is important, because then you’re going to get the most accurate answers and you’re going to get—you’re going to get even more than what you asked for.” (Participant 7: Female, 24)

Discussion

In order to understand the role that stakeholder-engaged research plays in the development, implementation, and outcomes of research studies, development and validation of measurement tools that can reliably and validly assess stakeholder engagement are required. This paper presents the results of one component in the measurement development process that also has implications for the way that we communicate with community partners about community-engaged research and the assessment of this work.

Results of cognitive response interviewing were consistent with concerns raised by Willis and Artino [9], who suggest that abstract terms are most problematic for participants. In this study, several terms commonly encountered in community engagement literature and measures were perceived as barriers and affected how community members responded to the item. Academic partners and researchers should likely guard against the assumption of common understanding, as participants considered some terms to be vague and in need of examples or context. Although it is appropriate to discuss culture, problem solving, plans, and environment, we must clarify what is referenced at specific times and with specific stakeholders. Even academics involved in community-engaged research may fail to realize when a common vocabulary has ceased to exist. In addition, plain language should be used to assure comprehension of discussions of publication and shared findings, the role of social determinants (such as the ability to get food), and efforts to assure that all partners are treated fairly and included in decisions and access to resources.

The findings suggest that item construction and comprehension issues were of greater concern in this measure development effort than response options. Most participants were satisfied with response options, found it easy to respond, and used the range of response options. Items that were excessively wordy and appeared to ask questions that required a response to two issues were identified as obstacles to participant response. It is important to note that the effort to develop consensus on items during the Delphi process described previously resulted in the development of some of the items identified as complex. Efforts to address diverse community input during item development may result in the need for additional review and editing to avoid item construction errors. In addition, the findings suggest that strong opinions and attitudes about an issue generated some concerns about the language used in survey items. This does not mean that an item should be reworded, but it does suggest that communication in partnerships should consider how messaging may affect dialogue and responses.

Few tested items were perceived as inappropriate or unimportant to the assessment of community-engaged research, although some participants questioned the importance of dissemination issues for community members versus academics. It is possible that the engagement principle guiding dissemination and the relevant items are sensitive to the research phase, i.e. more relevant to participants who are engaged in projects that are in or near the dissemination of the collaborative effort. The general acceptability of items suggests that the principles used to guide item selection are acceptable to the community members likely to be encountered or to participate in stakeholder-engaged research and assessment [19], although participants suggested changes in words and terms, as well as item structure. Minimal concerns were related to response options. These findings should be interpreted cautiously because of the small sample size. However, cognitive response interviewing [7,10] provides in-depth insight into how participants are thinking about and interpreting items, the factors that affect their interpretation and responses, and how comfortable they feel with the language, options, and coverage of topics important to an issue.

Conclusions

Understanding how the level of engagement in a partnership is developing and to what extent the level of engagement is a predictor of outcomes in stakeholder-engaged research is important to making progress in community-engaged research. Because researchers have suggested that research on measures of stakeholder engagement is not very strong [3] and rigorous measurement of engagement is required, the results of the current study contribute to an effort to develop and validate a broadly applicable measure of stakeholder engagement. In the results of the cognitive response interviews, which were used to refine the questionnaire being developed, participants raised concerns about plain language and literacy, clarity of question focus, and the lack of context clues to facilitate responses to items that query research experience. Given that the presented findings are consistent with the literature on stakeholder engagement [2, 16]—although communication concerns were highlighted in the current study—these findings should be of use to both those assessing community-engaged research and those engaging the community in the research process. Researchers should remain cognizant of the use of plain language, literacy levels, and contextual cues as partnerships are discussed and as agreements are developed.

Acknowledgments

The authors thank Sharese Willis for her help editing the manuscript.

Data Availability

Data are available in ICPSR: (https://www.openicpsr.org/openicpsr/workspace?goToPath=/openicpsr/126361&goToLevel=project).

Funding Statement

MG, VLST: ME-1511-33027. This research was funded by the Patient-Centered Outcomes Research Institute (PCORI), https://www.pcori.org/. The funder had no role in the study design, data collection, analysis, interpretation, or drafting of this article.

References

  • 1. Patient-Centered Outcomes Research Institute. Patient-Centered Outcomes Research. 2012. Available from: https://www.pcori.org/research-results/patient-centered-outcomes-research.
  • 2. Goodman MS, Thompson VLS, Arroyo Johnson C, Gennarelli R, Drake BF, Bajwa P, et al. Evaluating community engagement in research: quantitative measure development. J Community Psych. 2017;45:17–32. doi: 10.1002/jcop.21828
  • 3. Bowen D, Hyams T, Goodman M, West KM, Harris-Wai J, Yu J-H. Systematic review of quantitative measures of stakeholder engagement. Clinical and Translational Sci. 2017. doi: 10.1111/cts.12474
  • 4. Pasick RJ, Stewart SL, Bird JA, D’Onofrio SN. Quality of data in multiethnic health surveys. Public Health Reports. 2001;116 Suppl 1:223. doi: 10.1093/phr/116.S1.223
  • 5. Warnecke RB, Johnson TP, Chavez N, Sudman S, O'Rourke D, Lacey L, et al. Improving question wording in surveys of culturally diverse populations. Annals of Epi. 1997;7:334–342. doi: 10.1016/s1047-2797(97)00030-6
  • 6. Warnecke RB, Sudman S, Johnson TP, O'Rourke D, Davis AM, Jobe JB. Cognitive aspects of recalling and reporting health-related events: Papanicolaou smears, clinical breast examinations, and mammograms. Am J of Epi. 1997;146:982–992.
  • 7. Alaimo K, Olson CM, Frongillo EA. Importance of cognitive testing for survey items: an example from food security questionnaires. J of Nutrition Educ. 1999;31:269–75.
  • 8. Willis GB. Cognitive interviewing: a tool for improving questionnaire design. Thousand Oaks (CA): Sage Publications; 2005.
  • 9. Willis GB, Artino AR. What do our respondents think we're asking? Using cognitive interviewing to improve medical education surveys. J of Grad Medical Educ. 2013;5:353–356. doi: 10.4300/JGME-D-13-00154.1
  • 10. Drennan J. Cognitive interviewing: verbal data in the design and pretesting of questionnaires. J of Advanced Nurs. 2003;42:57–63. doi: 10.1046/j.1365-2648.2003.02579.x
  • 11. Knafl K, Deatrick J, Gallo A, Holcombe G, Bakitas M, Dixon J, et al. The analysis and interpretation of cognitive interviews for instrument development. Research Nurs & Health. 2007;30:224–234. doi: 10.1002/nur.20195
  • 12. Federal Plain Language Guidelines (Revised May, 2011). Available from: https://www.plainlanguage.gov/media/FederalPLGuidelines.pdf
  • 13. Coats JV, Stafford JD, Sanders Thompson V, Johnson Javois B, Goodman MS. Increasing research literacy: the community research fellows training program. J Empirical Res on Human Res Ethics. 2015;10:3–12.
  • 14. Arroyo-Johnson C, Allen ML, Colditz GA, Hurtado GA, Davey CS, Thompson VLS, et al. A tale of two community networks program centers: operationalizing and assessing CBPR principles and evaluating partnership outcomes. Prog Community Health Partnerships: Res, Educ & Action. 2015;9 Suppl:61–69. doi: 10.1353/cpr.2015.0026
  • 15. Mainous AG, Smith DW, Geesey ME, Tilley BC. Development of a measure to assess patient trust in medical researchers. Annals of Family Med. 2006;4:247–252. doi: 10.1370/afm.541
  • 16. Schulz AJ, Israel BA, Lantz P. Instrument for evaluating dimensions of group dynamics within community-based participatory research partnerships. Eval & Prog Planning. 2003;26:249–62.
  • 17. Weiss ES, Taber SK, Breslau ES, Lillie SE, Li Y. The role of leadership and management in six southern public health partnerships: a study of member involvement and satisfaction. Health Educ & Behav. 2010;37:737–752. doi: 10.1177/1090198110364613
  • 18. Brady SR. Utilizing and adapting the Delphi method for use in qualitative research. International J Qualitative Methods. 2015;14(5):1–6.
  • 19. Goodman MS, Ackermann N, Bowen DJ, Thompson V. Content validation of a quantitative stakeholder engagement measure. J Community Psych. 2019;47:1937–1951. doi: 10.1002/jcop.22239
  • 20. Goodman MS, Ackermann N, Bowen D, members of the Delphi Panel, Sanders Thompson V. Reaching consensus on principles of stakeholder engagement in research. Progress in Community Health Partnerships. Forthcoming 2020. doi: 10.1353/cpr.2020.0014

Decision Letter 0

Roxanna Morote Rios

2 Sep 2020

PONE-D-20-04537

Community partners’ responses to items assessing stakeholder engagement:  Cognitive response testing in measure development

PLOS ONE

Dear Dr. Thompson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please note that reviewers 2 and 3 are the same person. The review request was sent twice, and they responded both times. I have kept both sets of comments because there are a few different comments at the end of each review that might be useful for you.

I recommend careful consideration of the thoughtful reviews provided by both reviewers, especially reviewer 1. They are asking for minor changes, but these changes might clarify the text and make it more accessible to the readership.

Please submit your revised manuscript by Oct 17 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Roxanna Morote Rios, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified how verbal consent was documented and witnessed.

3. Please refer to any sample size calculations performed prior to participant recruitment. If these were not performed please justify the reasons or cite similar literature. Please refer to our statistical reporting guidelines for assistance (https://journals.plos.org/plosone/s/submission-guidelines.#loc-statistical-reporting).

4. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Overall the paper makes a meaningful contribution, especially considering the need for qualitative evaluations of quantitative measures. The experience of the survey-taker, participant, or stakeholder (whatever the nomenclature) should be an important part of the review process. However, it is not a completed product. There are some issues in the paper that should be resolved before moving forward with the manuscript. For example, there are several relevant areas lacking clarity, especially in the consistent use of terminology, and this lack of clarity undercuts the goals of the paper.

The clarity issue is exemplified by the discussion under the Item Selection section of the paper (ll. 124-147). An original survey is discussed that has 60 items, then in the next paragraph the authors test a 96-item version. Are these two versions of the same survey? If not, please state in a straightforward fashion that they are two different surveys. If so, how and why did the survey grow to 96 items? The last paragraph of that section states that the number of survey items was narrowed from 48 to 32. Which 48? Is this the 48 quantitative or the 48 qualitative items? Or is it another subset of the 96 items? Or are these 48 items unrelated to the 96 items mentioned earlier? Also in this discussion, a Delphi process is introduced. The process either should be defined briefly right after first mention or covered in an appendix or footnote. Given the issues with the number of items and how items were chosen, the discussion of "16 items from each scale total" on line 180 is likely to cause further confusion.

As noted, the paper has potential to make a contribution to the literature on measurement, cognitive interviewing, and community engagement. The paper is at its strongest when the authors describe that contribution in the second paragraph of the paper (ll. 65-75). This description is clear and to the point; it should serve as a model for how to convey your other important points in the paper. The other areas that require revision for clarity, typos, or other reasons are listed below by page and line number.

-Page 3, line 57: Citations demonstrating the increased interest in community-engaged research would strengthen your opening argument.

-I recommend moving the first sentence on line 87 to the previous paragraph before "One approach to identifying..." Then the following paragraph would begin with "Cognitive interview methods..." The point of the "Researchers recognize..." sentence sets up the last sentence of the preceding paragraph better than it does the following paragraph.

-Page 5, line 106: missing quotation marks at the end of the sentence.

-Page 6: item number issues already noted above

-Page 8, line 184: This would be a good place to insert the actual probes, which are discussed in the preceding and following paragraphs. Even if only a subset are listed or described, this would help the reader have a better understanding of the nature of the interviews.

-Page 9, line 190-191: The sentence starting with "Behavioral coding..." is confusing. Are the following items (ll. 192-201) the probes (see point above)? Coding is the researchers' process, not an interaction with a participant. But this sentence implies that behavioral coding involved the participant.

-Page 10, line 211: The first sentence discusses relevant codes. What is the threshold for relevance? How did you arrive at that? Did you follow best practices, prior research, etc.? It is important to be clear what overall guiding principle was employed because the codes and themes covered here are the keystone of the whole paper.

-Page 10, lines 212-222: Very good section. This description of the process was clear and helpful.

-Page 10, line 225: Extra space not needed before Results section.

-Table 2, page 12: Helpful, informative table.

-Page 14, line 274: Can get rid of this quote. It's already covered in the preceding sentences.

-Page 17, line 332: The authors say that the items were difficult to comprehend. I would be careful here. You didn't demonstrate that (or maybe you did but it isn't demonstrated in the paper); what you showed was that the wording was a hurdle that affected how community members responded to the item. Those are two different things.

Reviewer #2: The primary objective of this study is clearly defined and highly relevant to studies requiring stakeholder engagement. The authors identify that measures of stakeholder engagement are not very strong methodologically. They add to the literature by addressing measurement of stakeholder engagement in terms of literacy concerns, attitudes about information needed to judge engagement, and response preferences for items used in public health community-engaged research. The methods clearly describe their approach, which draws on 16 individuals for one-on-one cognitive response interviews. They clearly describe methods for refining items from an initial survey containing 60 items to 48 and then to 32, using a modified Delphi process. Could a figure be added to the manuscript to show the research and decision path from the 60 items down to the 32 items?

The cognitive interview participants were diverse in age and education and were predominantly African American women. The results described the item responses and the steps taken to remove items and to clarify questions that had raised issues for participants. Importantly, the results support the recommendation that academic partners and researchers should guard against the assumption of common understanding, as participants found some items vague and in need of more context. Comprehension issues were of greater concern in measure development than response options.

Accordingly, the details described in this paper demonstrate the rigor and refinement the authors bring to the issue of measuring community engagement. Their conclusions are well justified and should help advance the field.

The tables add to the manuscript.

The title for Table 2 might be expanded to give more context.

Reviewer #3: The primary objective of this study is clearly defined and highly relevant to studies requiring stakeholder engagement. The authors identify that measures of stakeholder engagement are not very strong methodologically. They add to the literature by addressing measurement of stakeholder engagement in terms of literacy concerns, attitudes about information needed to judge engagement, and response preferences for items used in public health community-engaged research. The methods clearly describe their approach, which draws on 16 individuals for one-on-one cognitive response interviews. They clearly describe methods for refining items from an initial survey containing 60 items to 48 and then to 32, using a modified Delphi process. Could a figure be added to the manuscript to show the research and decision path from the 60 items down to the 32 items?

The cognitive interview participants were diverse in age and education and were predominantly African American women. The results described the item responses and the steps taken to remove items and to clarify questions that had raised issues for participants. Importantly, the results support the recommendation that academic partners and researchers should guard against the assumption of common understanding, as participants found some items vague and in need of more context. Comprehension issues were of greater concern in measure development than response options.

Accordingly, the details described in this paper demonstrate the rigor and refinement the authors bring to the issue of measuring community engagement. Their conclusions are well justified and should help advance the field.

The tables add to the manuscript.

The title for Table 2 might be expanded to give more context.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Nov 23;15(11):e0241839. doi: 10.1371/journal.pone.0241839.r002

Author response to Decision Letter 0


23 Sep 2020

Dear Reviewers:

Thank you for your review of this manuscript and your insightful questions, comments, and suggestions. We have addressed the concerns and issues discussed and believe that the reviewers’ comments and suggestions strengthen the manuscript. We have outlined our responses below. We used track changes, and all changes are highlighted in yellow.

Reviewer 1

1. We have addressed the clarity issue noted in the discussion under the Item Selection section of the paper (ll. 124-147). We have clarified that the original survey contained 60 items and that a new survey was created in 2013, using some of these items and focusing on coverage of CBPR principles; all of its items are quantitative. This new survey contained 96 items.

2. We go on to explain the effort to reduce the 96-item set in greater detail to provide clarity. We discuss the process of moving from 48 items to 32 items.

3. As noted, the Delphi process was introduced without a definition. We have added a definition and the appropriate citation.

4. Given the issues with the number of items and how items were chosen, we have edited the section for clarity to ensure that the phrase "16 items from each scale total" on line 180 is easier to understand.

5. Page 3, line 57: Citations demonstrating the increased interest in community-engaged research have been added.

6. We have changed the sentence positions to improve the set-up of the paragraphs as suggested (p. 4, lines 85-89).

7. The missing quotation marks have been added on page 5, line 107.

8. The item number issues discussed above have been edited on page 6 as well.

9. We have deleted the probes from page 9 and inserted them on what is now page 10, lines 196-208.

10. We have rewritten the section on page 11, lines 220-247, to delete references to "behavioral coding." We have also included the probes used to understand participant reactions to questions about response options.

11. Page 11, lines 244-247: we have deleted the reference to relevant codes and have instead explained how we developed the coding strategy, drawing on prior research and on issues deemed important to explore in the cognitive response literature.

12. We have deleted the extra space before the Results section (page 13, line 262).

13. We have deleted the superfluous quote as suggested (page 17, line 309).

14. Your point is well taken; it is likely that we overstated our findings related to participants’ abilities to comprehend particular words. We have reworded this sentence (page 20, lines 367-368).

Reviewer 2

1. In addition to the text that clarifies the process of reducing the number of items in each scale, we have included a table (now Table 1) to show the research and decision path down to the 32 items. This has resulted in a renumbering of all tables (pp. 7, 13, 15, 18).

2. We have expanded the title for Table 2 to give more context (p. 13).

We appreciate your feedback and the opportunity to improve this manuscript.

Decision Letter 1

Roxanna Morote Rios

22 Oct 2020

Community partners’ responses to items assessing stakeholder engagement: Cognitive response testing in measure development

PONE-D-20-04537R1

Dear Dr. Sanders Thompson,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Roxanna Morote Rios, Ph.D

Academic Editor

PLOS ONE

Acceptance letter

Roxanna Morote Rios

13 Nov 2020

PONE-D-20-04537R1

Community Partners’ Responses to Items Assessing Stakeholder Engagement: Cognitive Response Testing in Measure Development

Dear Dr. Thompson:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Roxanna Morote Rios

Academic Editor

PLOS ONE

