Abstract
The goals of this study are: (a) to share reflections from multiple stakeholders involved in a foundation-funded community-partnered evaluation project, (b) to share information that might be useful to researchers, practitioners, and funders considering the merits of researcher/practitioner evaluation projects, and (c) to make specific suggestions for funders and researcher/practitioner teams starting an evaluation project. Three stakeholders in a small-scale research-practice partnership (RPP) reflected on the evaluation project by responding to three prompts. A researcher, a community organization leader, and a funder at a small foundation share specific tips for those considering a small-scale RPP. Engaging in a small-scale RPP can be a very meaningful experience for individual researchers and for smaller organizations and funders. The benefits and challenges both align with and differ from those encountered in larger projects.
1 |. INTRODUCTION
Research-practice partnerships (RPPs) are “long-term, mutualistic collaborations between practitioners and researchers that are intentionally organized to investigate problems of practice and solutions for improving outcomes” (Coburn, Penuel, & Geil, 2013) that follow the principles of a community-based participatory research (CBPR) approach (Minkler & Wallerstein, 2003). RPPs have expanded considerably in recent years; many large-scale models have been implemented and lessons learned distilled in the name of improving RPPs (Booker, Conaway, & Schwartz, 2019; Israel et al., 2006; Tseng, Easton, & Supplee, 2017). While examples and knowledge-sharing from large-scale RPPs are available, there has been less attention to the benefits and challenges of engaging in smaller scale RPPs. Smaller academic teams or individual researchers, small nonprofit organizations (NPOs), and funders are often in positions of considering whether, and how best, to enter into and support smaller RPPs. This paper will offer reflections and specific tips for small-scale RPPs.
One type of RPP, the evaluation partnership between researchers and NPOs, can be both exciting and fraught. For NPOs, it can be valuable to gather data about the impacts of programming on desired outcomes. At the same time, it can feel vulnerable to be evaluated and can be difficult to make time for data-collection endeavors when day-to-day operations are consuming and programming is the priority. From the perspective of NPOs, what makes evaluation worthwhile?
For researchers, evaluation work can present a rewarding opportunity to use skills to help NPOs understand their impact and tell their stories. In addition, such work can be a valuable source of data for answering research questions of interest and furthering one’s program of research. At the same time, it is time consuming to be a good research partner, as true research/practitioner partnerships take a significant investment of time and energy to build trusting relationships and such partnerships require working toward goals that are not often rewarded in academia (Ahmed, Beck, Maurana, & Newton, 2004; Marrero et al., 2013). From the perspective of a junior academic researcher, what makes a community-partnered evaluation project work well?
The incentives of researchers and practitioners are not always well-aligned (Ferman & Hill, 2004; Wallerstein, 1999). Practitioners often need to tell their stories using data and seek to show how their organizations’ programming impacts the communities they work with. Researchers in academia often need to share findings with academic audiences, through journal publications and conference presentations. Successful research/practitioner partnerships juggle multiple incentives and have to grapple with meeting everyone’s needs in a project.
In RPPs, including evaluation work, there are often many more stakeholders than the researcher and the practitioner. For one thing, the practitioner involved in an evaluation project comes from a multilevel organization and the “leadership” may, or may not, reflect the views of all organizational staff, or of the families involved with the organizational mission. This is especially true in community-partnered projects that involve youth, which brings an additional layer of complexity and requires attention to developmental concerns relevant to youth (Jacquez, Vaughn, & Wagner, 2013; Suleiman, Ballard, Hoyt, & Ozer, 2019). In addition, with most RPPs, including evaluation work, there is often a funder involved who represents a unique organization with its own set of priorities and needs. From the perspective of a funder, what does a successful evaluation project look like and how can funders discern if an evaluation grant will make the most beneficial use of funds?
We were inspired to write this reflection paper by the curiosity of our program officer (third author) who posed the question to us (first and second authors): “What sets the conditions for success in community-partnered evaluation work?” The three manuscript authors share the sentiment that we developed and implemented a successful research/practitioner evaluation project and that each of us has learned valuable lessons about this type of work. We base this sentiment on several pieces of evidence that our partnership has been successful: we have continued our personal and professional relationships over more than 3 years; we have extended our work to further projects and grant applications; and the organization has used some of the knowledge learned in the evaluation. The goals of this manuscript are: (a) to share reflections from multiple stakeholders involved in a foundation-funded community-partnered evaluation project, (b) to share information that might be useful to researchers, practitioners, and funders considering the merits of researcher/practitioner evaluation projects, and (c) to make specific suggestions for funders and researcher/practitioner teams starting an evaluation project.
1.1 |. The community organization: Authoring Action
Authoring Action (A2) is a youth-focused art- and education-based organization in Winston-Salem, NC, with a mission to transform the lives of youth and the world through the power of creative writing, spoken word, visual and media arts, film-making, and leadership education that promotes positive systemic change. It was founded in 2002 to provide tools to youth for their personal success in any career, calling, and life path. Through the program—which offers a 5-week Summer Intensive (SI) Program as well as year-round programming—youth are taught to think analytically about issues in their lives and communities through the use of self-reflection, critical thinking, and self-expression skills and to use analytical skills to foster positive systems change in their communities. In the Authoring Action Creative Writing Process (Freeman, 2019), teens learn to use grammatical rudiments, vocabulary, and comprehension to create stories, monologues, raps, and lyrics. For example, through the SI Program, youth write spoken word pieces about issues in their own lives structured by a theme such as “trauma transformation,” and then collaborate with local artists to turn their writing into a performance for community audiences.
1.2 |. The funder: Kate B. Reynolds Charitable Trust
Funding for the evaluation was provided by the Kate B. Reynolds Charitable Trust, Winston-Salem. Through their grantmaking, they “support promising programs, systems change work, and innovative ideas that help residents and communities thrive, increase equitable access to healthcare, and achieve equitable health outcomes” (Kate B. Reynolds Charitable Trust, 2019). They provided funding to A2 to complete this evaluation work to support and improve the service A2 provides to youth and the broader community in Winston-Salem.
1.3 |. The evaluation project
The evaluation was completed by Dr. Parissa J. Ballard in partnership with A2 Co-Founders Lynn Rhoades (Executive Director) and Nathan Ross Freeman (Artistic Director) with funding from the Kate B. Reynolds Charitable Trust. The purpose of the evaluation was to collect data to understand the impact of A2 on former participants and their families, as well as the larger community. Ultimately, the data collected were meant to help the organization to tell their story to stakeholders, and to help to continue to improve the program. The year-long evaluation consisted of five components. Part I consisted of interviews with 36 program participants from the last 17 years. Part II consisted of surveys capturing change in attitudes and skills before and after the SI Program across four summers. Part III consisted of surveys capturing parents’ perceptions of changes in their children after participating in the SI program. Part IV consisted of audience evaluations at various community engagements. Part V consisted of a community survey to be conducted with community partners.
The data collected over the course of this evaluation project are summarized in the final evaluation report and an academic publication, which is currently under review. This reflection shares three perspectives of the evaluation process with the goal of providing specific tips to researchers, community members, and funders considering participation in small-scale RPP evaluation projects. Three stakeholders in this evaluation project wrote a reflection in response to the discussion prompts below.
Prompt 1) Describe your experience with the evaluation project. What was your goal? What specific activities were you involved in? Describe your expectations and how the project compared to what you expected.
Prompt 2) What, specifically, made this evaluation project a worthwhile experience? What did you learn? Were there any challenges?
Prompt 3) From your perspective and experience with this project (and drawing on other experiences with similar evaluation projects, if relevant), what specific tips do you have for funders, researchers, and practitioners?
2 |. REFLECTIONS
In this section, we provide three reflections from the stakeholders involved in this project.
2.1 |. Reflection 1
Lori Fuller (Founder and Principal at Fuller Impact, LLC and former Director of Evaluation and Learning at the Kate B. Reynolds Charitable Trust).
2.1.1 |. What was your experience and role with this project?
As the Director of Evaluation and Learning at the Kate B. Reynolds Charitable Trust, I have often wrestled with how best to support organizations with their own evaluation capacity. If an organization applying to the Trust for funding is interested in evaluation, I am often pulled in to serve the role of program officer on the project. In the case of A2, the Trust had funded the organization before but often struggled with whether A2 was an arts organization or a human services organization. In truth, it is both of those and more. In addition, we sensed the power of A2 but didn’t know the results of the program beyond experiencing their powerful performances and anecdotal stories of youth who had participated in the program.
The most recent time that A2 approached the Trust for funding, we debated all of those same questions and dilemmas, torn between our own organizational strategy to focus on fewer issues in the community to achieve more substantial results and the desire to support organizational development of community nonprofits. The opportunity here was found in the overlap of a nonprofit that was seeking to build their evaluation capacity and the Trust’s desire to explore how we might support nonprofit evaluation more broadly.
The evaluation project had three goals—evaluation for A2; evaluation capacity for A2; and testing what our support of local nonprofits might look like around evaluation. Once the decision was made by the Trust to fund the project, I became the primary liaison with A2. The funding stream that we selected for the project offered us flexibility in timing and reporting. That allowed me to focus on this more as a learning project and mutual capacity-building opportunity. I met several times with the community organization leader and the evaluator to understand what they were doing for the project and the results they were seeing. I also communicated this back to the director of the Local Impact Program Area and other staff at my organization.
Because we were not controlling how the evaluation would be designed, I really didn’t know what to expect from the project. The community organization leader and the evaluator were the leads and brought me in for updates when the timing was ripe. What was most surprising was the incredible fit that the community organization found in the evaluator. The interplay between a researcher from a large institution and a small nonprofit was intriguing, and one I haven’t often seen. Our conversations were interesting and felt synergistic, each of us coming from different perspectives and all curious about how this might work best.
2.1.2 |. What made this project worthwhile? What was challenging about the project?
What made this evaluation project a worthwhile experience was the opportunity to be in the community. Because I was not a program officer at the Trust, this was atypical work for me, and it was very engaging. Specifically, I valued the chance to think creatively in the moment, to share ideas, and to create a stronger project together. The spontaneity of our conversations was energizing for me. Because the evaluator and the community organization leader were managing the project and the Trust had imposed little in the way of requirements for this funding, I was able to engage fully as a thought partner.
Another worthwhile part of this project was experiencing the interplay between three people from different organizations with a common line of sight. Evaluation can mean different things to researchers, practitioners, and funders and the purposes of evaluation for each stakeholder can also be quite different. Those different approaches and purposes were present in this project, and yet we all wanted this project to be of benefit to A2.
Perhaps the most worthwhile part of this experience was the clear benefits of the project to A2. The results of the project were not only the findings from the evaluation, but also the strengthened thinking that came from the process of designing, carrying out, and reflecting on the work. The evaluator brought a new set of skills and a different mindset, asked A2 powerful questions, and provided them with new insights. The evaluator also held the focus needed to do the project, and do it well, within a nonprofit juggling multiple priorities with a very small staff. I was gratified by the potential for ongoing benefits to A2 through the relationship with the evaluator.
2.1.3 |. What tips do you have for funders?
From my perspective and experience with this project, the specific tips I have for funders are:
Fit between the practitioner and researcher is key. This one needed no “matchmaking” from the foundation. The relationships and benefits to the organization, and thus the results of the foundation’s investment, were much stronger because of that. If a nonprofit is dedicated to building their capacity and seeks out a partnership with a consultant or researcher, then listen. As a funder, stay out of the way of strong partnerships.
Flexibility in funding where possible. Exploring and experimenting without rigid timelines or standardized reporting formats opened the space for learning.
Look for small investments with the potential for an outsized payoff.
Look for the readiness of the organization. A2 had heard multiple times from multiple voices that they needed evaluation. They believed they needed evaluation. They were ready and willing. They took on this project as a priority. As a funder, we had some influence over time, but we couldn’t by ourselves make this happen—and arguably shouldn’t, even if we could.
Weigh the benefits of customized support versus one-size-fits-all. Rather than an evaluation class or a planned initiative where organizations are asked to apply for a specific type of evaluation funding, or even choose among a funder-vetted list of consultants, this project was tailor-made not just for, but by, A2.
2.2 |. Reflection 2
Lynn Rhoades (Co-Founder and Executive Director of Authoring Action, Winston-Salem).
2.2.1 |. What was your experience and role with this project?
A2 began as a grassroots arts and education summer program in response to a community-wide youth violence prevention and intervention effort. I felt it was crucial for the community at large, most especially for leaders and organizations who most affect the lives of these young people, to hear their stories. On the ground, creating and evolving a program from scratch with a variety of professional artists, we gave no thought to creating an initial evaluation process. We were just doing the work, bringing teens together with artists to give them the tools to dynamically tell their stories on stage and in film. Four or five years into this summer process, we began to think about the importance of measuring outcomes. In an effort to maintain the philosophical framework which values the strength and brilliance of each teen and avoids seeing them as victims, we created a survey based on the strengths and assets we had seen in our teens. In 2014, a board member helped to develop additional survey materials, and over time we also created a few different versions of an audience survey. Only one year did we have the staff capacity to input and interpret the data we collected. So what remained were files and a box of surveys, never evaluated.
Because A2 moved toward a year-round program in 2007, and has continued to grow, strengthen, and evolve, it has become important that we institute the evaluation process—with the administration of surveys, data entry, and analysis. Additionally, we determined that it would be helpful to take a long look back over the years to evaluate the data collected and to find and interview former participants.
Through my work as an evaluation intern with the Wake Forest School of Medicine (WFSM) Clinical and Translational Science Institute’s Program in Community Engagement, under the mentorship of Dr. Stephanie Daniel, we applied for a grant to do this evaluation project. Coincidentally, our evaluator, Dr. Parissa J. Ballard, came to work with the WFSM as a researcher.
The evaluator engaged with A2 in a deep way, not only interviewing the co-directors but also attending the SI sessions, the Commencement engagement, and fundraising events, as well as training alumni to conduct interviews of other alumni. She met with our Board during their winter Strategic Planning retreat to share preliminary results. I supported the research by locating program alumni, co-supervising the interviewers, providing paperwork for the evaluator, and meeting with the data input folks as needed. Along with Nathan Ross Freeman, Co-Founder and Artistic Director of A2, I also supported the evaluator’s work by providing organizational history.
2.2.2 |. What made this project worthwhile? What was challenging about the project?
One of the things most helpful to me is how the evaluator, as somebody approaching this in a fresh way, could validate what we are doing from her expertise and training. It was very affirming. It was helpful to think through the details of evaluation with Parissa, for example, thinking about when the best time is to collect the “post” surveys. We collected them at the end-of-year party but, upon reflection, maybe that is not the best time. It was also helpful to be able to build our internal capacity through this work, for example, by involving our staff in parts of the project. Our staff member who was involved in transcribing the interviews reports that it was a very helpful experience for her grant-writing role at A2. Finally, it was helpful to have guidance in deciding what to do with all the information we had collected. We had such a disparate bundle of information and I didn’t know if we could make something of it. And the evaluator did something with it in terms of telling our story. I also just learned about the nuts and bolts of research, the logistics of how you do it. For example, the importance of consistency in collecting “pre” and “post” information with every program, as well as maintaining alumni records.
The biggest challenge with completing this project was getting the A2 alum interviewers to stay committed to the work. We were asking alums to stay connected to other alums. They each had their own personal challenges. It was hard to keep them consistent and committed. At the same time, this experience taught us something about our program. We wonder, how much can we teach? The more time with us, the longer the alums stay involved, the better it is for them. It’s a constant learning. I think participating in this project as interviewers was good for our program alums; it allowed them to learn about research and gain important skills.
Another challenge is understanding how to go from what we learned to what is actionable. We have a very thorough summary report from the evaluator, which is great. At the same time, I’m still reading the report and I haven’t gleaned all of it. It feels like we need to learn what is actionable going forward.
Our main challenge going forward with evaluation work is still the lack of capacity. I would love to have a half-time evaluator/research person on our staff who could really stay on top of this, keeping track of the data about the kids, like where they are now, and staying in that relationship and getting feedback from them that we can use, including testimonials. That’s really helpful for the long term. In addition, we wanted to think about the effect of A2 on the community. In this evaluation, we were able to do that to some extent, but for us, an important next step is to ask: how is A2 impacting the community?
2.2.3 |. What tips do you have for practitioners?
From my perspective and experience with this project, the specific tips I have for practitioners are:
Find a researcher who understands the population and also has a heart and passion for the work. That’s important because Nathan and I felt supported. And A2 felt supported.
An evaluator can help organizations learn that prior weaknesses in data collection or evaluation are not a condemnation.
Be open to learning about the areas you need to strengthen.
Be clear from the beginning about what deliverables will be most helpful at the end of the evaluation. For example, can a report be supplemented by a brief set of action items presented to the organization’s board?
2.3 |. Reflection 3
Parissa J. Ballard (Evaluator, Assistant Professor of Family and Community Medicine, Wake Forest School of Medicine).
I was hired by A2 to be their evaluator. The co-founders and directors of A2 had written a grant to hire an evaluator with the goal of using data to tell their story better. They had a sense of the impact A2 was having on teens but needed help using data to tell the story. My primary goal was to partner with them to collect and tell stories. My secondary goal, based on my research program, was to understand the role of A2—from my perspective, a very unique program simultaneously aimed at youth development and community development—for youth empowerment and positive development. It is critical to note that it was not my goal to be an “outside” evaluator providing an objective evaluation of the program. This allowed us to fully develop a partnership and to align goals.
We took a slow and participatory approach to this evaluation work. I met several times with the community organization leaders and with some program alums to get their views about how best to evaluate the program. We agreed early on that a mixed-method approach would be critical. We also agreed that the design of the evaluation would keep multiple goals in mind. We wanted to collect and tell stories about how A2 impacts youth. We wanted to collect and tell stories about how A2 impacts the community. We wanted to build capacity for A2 in the future. We wanted to create opportunities for A2 alums and staff.
Through a series of steps, I developed and implemented a mixed-method data collection effort to gather data and tell the story of how A2 impacts youth. I built on previous survey work A2 had done. I developed an interview protocol, trained five interviewers, oversaw data collection and management, and conducted analysis.
To prepare for conducting interviews, we went through a deliberate process to ensure that this project would meet the goals of A2 and that interviews would yield good information. One critical piece of this, early on, was observing many SI sessions in the summer of 2017. I sat with the teens, sometimes eating lunch there. This was invaluable in getting a feel for how the program unfolds and what the participants are like. This was time-intensive preparatory work. The informal observation was an initial step in thinking about what questions to ask in the interview. It also showed the organization leaders, Lynn and Nathan, my commitment to understanding what they do.
Per the goals of A2, we built in many opportunities for capacity building. Specifically, I engaged, met with, and trained five program alums in qualitative research and conducting interviews. I trained one A2 alum and one staff member in data entry, transcription, and data management, and oversaw their work. I met with the organization leaders several times to discuss the evaluation, steps for future evaluation, and future grants. I also trained and supported A2 staff in collecting and inputting additional survey data (Summer 2018).
I presented interim findings to the A2 board and to our funder in a series of conversations. I presented a poster at an academic conference and gave two talks to academic and public audiences. Throughout the course of the year, I summarized results for three or more grant applications at the request of A2 (two of which have since been funded). I wrote a comprehensive final report for the organization and submitted an analysis of the evaluation data for academic publication.
2.3.1 |. What made this project worthwhile? What was challenging about the project?
As an academic who studies youth and communities with content and methodological training in civic engagement and community participatory work, I spend an outsized amount of time alone at my computer. This work with A2 was often a highlight of my week. Getting to meet with the organization leaders to discuss the program, stopping by their office and chatting with the teens, meeting with alums who served as interviewers, and observing the program sometimes felt like “extra” activities that I shouldn’t prioritize, as a first-year assistant professor. However, nearly every interaction made me feel grounded and gave me new insights for how I think about youth development and civic engagement. It felt not only like the right thing to do for the project but provided academic learning, professional development, and a reminder of the reasons I do research on youth development and civic engagement.
This was an intensive, rewarding, time-consuming, fun project, as expected. It was also more relational and intimate than expected. Going into the project, I expected that the evaluation would not be the top priority of A2. The organization leaders’ top priority was their teens, always and in every instance. It was not a surprise that the evaluation was not always top-of-mind for the leaders and that I had to be very involved in the data collection and management. However, at the outset I saw my job as helping them see how the evaluation work would serve their teens. I thought I could make a case for the value of our shared work and align our goals, and I did somewhat. But more so, I learned to accept that even though the evaluation work does serve their teens, the teen in front of them with an issue or a need would take priority. It was a practice in patience and a needle I am still learning to thread. As a researcher, especially at the assistant professor level, I did need to keep in mind my obligations to contribute to the academic literature. In addition to working on goal alignment with A2, I also just had to relax a little, especially about my desired timelines. I had to learn to be creative to meet my academic goals within the community organization’s operational reality. For example, in writing this reflection paper, I politely asked the organization leader for months to finish her reflection. She was interested and committed, but academic publications are not her priority compared to the important issues in front of her daily. I finally set up a meeting to gather the remainder of her reflections through a conversation that I recorded. It was a simple solution that worked within her schedule and priorities, and it took me a long time to come to. No matter how much NPO leaders value research/evaluation (as these co-directors do) and agree with the importance of broadly sharing knowledge learned from evaluation work (as these co-directors do), they are busy and simply have to prioritize the people they work with. This doesn’t need to frustrate our work, but it does need to be addressed creatively.
Like the leaders of many NPOs, the A2 co-leaders completely understood the value of data for telling their story and at the same time had mixed feelings about evaluations. They expressed many different thoughts about evaluation work: displeasure that grant applications often require measurable impact, skepticism about the value of evaluation, uncertainty about their internal efforts to collect data, and hope for learning about their impact. In an initial meeting, Nathan expressed skepticism that their impact could be quantified in a way that would satisfy the requirements for “evidence-based practice” that many grant agencies require. Leading an organization based in story-telling, he was not convinced that impact would be best captured in numerical data. He also expressed some frustration with past experiences of evaluators coming in from the outside and telling them what they should be doing. This was incredibly helpful context for my understanding of A2 and completely shaped the nature of the evaluation work and how I came to see my role.
Lynn had recent experiences that oriented her toward the evaluation. Just before we met, she had spent one year as a fellow with the Program in Community Engagement, which is part of the WFSM Clinical and Translational Science Institute. The Community-engaged Research Fellowship supports representatives from community organizations and Wake Forest researchers to build a community-engaged research partnership (Community-Engaged Research Fellowship, 2019). The goal of the Fellowship is to nurture existing, and/or facilitate the establishment of, a community-engaged research partnership through trust building, skill development, and co-learning. The community partners learn about research design, proposal development, and skills that might enhance operations and build organizational capacity (e.g., program development and evaluation). The WFSM research partners learn how community organizations prioritize, function, and overcome obstacles and how community-engaged research can be implemented in community settings. The organization leader had partnered with Dr. Stephanie Daniel from the Department of Family and Community Medicine and they co-wrote the grant that would eventually fund my project with A2. This was an incredible foundation and asset to our work together. When I met her, Lynn had developed research skills, completed research ethics training, had exposure to CBPR, and had experience working on a photovoice research project (Irby et al., 2018). More intangibly, she had a mindset about the value of research and evaluation. This was critical for setting the stage for our evaluation to be successful. Thus, our successful partnership was built on a solid foundation of relationships and training that Lynn and A2 had already begun. This is what created, and indicated, the organization’s “readiness” to meaningfully participate in an evaluation.
An ongoing challenge is figuring out how to stay involved with A2 now that the project is officially over. I have come to deeply respect and appreciate the work of A2 and we have identified several ideas for further projects. Given the reality of constraints on time, it is challenging to create a path to continue our work together and to use what we learned in our project to improve A2, share the model, and contribute learnings to other programs. One opportunity we identified is to work toward packaging the Authoring Action Creative Writing Process to share with other organizations. I am excited that Authoring Action obtained a grant from the Winston-Salem Foundation (The Winston-Salem Foundation, 2019) to pursue this work and I am collaborating with their team to do so.
2.3.2 |. What tips do you have for researchers?
From my perspective and experience with this project, the specific tips I have for researchers considering community-partnered evaluation projects are (Table 1):
TABLE 1.
Practical tips for researchers, community organizations, and funders considering small-scale research-practice partnership evaluations
| For researchers considering a community-partnered evaluation project | For community organizations considering an evaluation project | For funders considering capacity building and evaluation grants for community organizations |
| --- | --- | --- |
| • Invest time in research training for community partners before you begin evaluation work. | • Hire an evaluator who has an understanding of and “heart for” the mission of your organization. | • Funding amount and structure are critical. The amount must be enough to achieve the scope of work, and a flexible funding structure will encourage creativity and experimentation. |
| • Learn—and use—the most effective methods and times to check in with community partners. | • Hire an evaluator you like personally. | • Meet with grantees as thought partners when possible. |
| • Check in with your community partner frequently and through multiple methods. | • Don’t let a lack of previous data or a lack of experience with evaluation/research stop you from getting started with an evaluation. | • Pay attention to readiness. The organization has to be ready to benefit from evaluation. |
| • Know what you want/need to get out of the project from the start for your own work. | | |
| • Practice open communication, humility, respect, flexibility, willingness to invest time, and relationship building. | | |
Invest as much time as possible in research training for community partners before you begin evaluation work.
Learn (and use) the most effective method and times to check in with community partners. Their priorities and constraints are different than ours. Although both are important, I found my schedule to be generally more flexible.
Check in with your community partner frequently and through multiple methods.
3 |. DISCUSSION
We each learned a great deal from this study and are happy with the benefits we continue to accrue. In the process of writing this reflection, four themes have become clear that summarize why we feel this was a successful partnership project. Below we describe these themes: incorporating elements of participatory evaluation; readiness to participate; defining joint and individual goals; and maintaining flexibility in our work. We conclude by presenting some benefits and challenges of small-scale RPPs.
3.1 |. “Accidental” participatory program evaluation
First, we have come to see the project we designed as an “accidental,” partially participatory program evaluation. While we all approached this evaluation using principles of a CBPR approach (Minkler & Wallerstein, 2003; Rhodes et al., 2018), we only came across the term “participatory evaluation” after the project. Participatory evaluations are “evaluations that involve all the stakeholders in a project—those directly affected by it or by carrying it out—in every phase of evaluating it, and in applying the results of that evaluation to the improvement of the work” (Community Tool Box, 2019). Because A2 requested this program evaluation, with the goal of telling their own story and improving their program, we were in a unique position through our partnership to take a very participatory approach. Drawing on the framework of Cousins and Whitmore (1998), our project can be described as a “practical participatory evaluation,” with the main goal being organizational understanding and improvement for A2. This approach contrasts with traditional evaluation projects, which often have “outside” evaluators designing evaluations independent of the goals of the program they are evaluating. Evaluating ourselves against some standards for participatory evaluations (Community Tool Box, 2019), we believe we were strong in incorporating input from A2 stakeholders about their own goals for the project. We were also fortunate that our funding partner was supportive of the approach we took to this evaluation. By training A2 alumni to conduct the interviews, we capitalized on “insider” knowledge and networks and were able to provide important professional opportunities for A2 youth. In retrospect, we could have done a stronger job of including input from an even wider range of A2 stakeholders (in addition to the leadership) about what to evaluate in this project. For example, we did not talk with parents of the A2 teens, who would have provided a valuable perspective on what their teens learned from the program. We are also jointly working toward applying the findings of the evaluation and embarking on the next steps in our work together.
3.2 |. Readiness
Second, each party was ready to partner fully on this project. Readiness is discussed in both the psychological literature on behavioral change (Prochaska, Diclemente, & Norcross, 1997) and the literature on implementation science (Attieh et al., 2013) as a multifaceted construct that is necessary for an intervention to work. Applied to the case of a participatory evaluation such as this, it was important that each party was ready for the collaboration. Specifically, each partner had the necessary training, and there was a pre-existing relationship structure, to make this project a success. Most tangibly, the research training Lynn had received under the purview of the Clinical and Translational Science Institute at the WFSM, through her joint training with Dr. Stephanie Daniel and the Program in Community Engagement, had afforded her the skills and positive attitude about research that would help sustain our evaluation project when it was challenging. Having gone through ethics training before this evaluation project was especially important, as it allowed Lynn to learn about important parts of the research process such as participants’ rights and the consent process. It was helpful for the organization leaders to understand the safeguards the evaluator would need to put in place to make sure participants knew their opinions would be kept confidential and would not be shared with the leadership of the organization, so that their opinions were not affected by wanting to make the leaders happy. We highly recommend that the organizational partner receive some research training before the project begins. For organizations that might not have the opportunity for such a comprehensive research training experience, we recommend community partners complete online research ethics training (e.g., the Collaborative Institutional Training Initiative training; CITI Program, 2019) where possible. Evaluators and community partners could make use of online resources such as the Community Tool Box (https://ctb.ku.edu/en) to scaffold discussions about different aspects of research projects.
At the same time, the training and experience the evaluator brought in community-engaged work prepared her to listen effectively and understand how A2 would need to juggle multiple priorities. We recommend that the research partner receive training in CBPR and that they spend time observing the organization’s operations. These will increase the evaluator’s readiness to participate in such a research project and to begin the project with a real understanding of the organization’s goals, programming, and operational reality. Finally, the training and experience Lori Fuller had throughout her career put her in the position to be a thought partner in this project. It is important that program officers and funders are appropriately assigned to the projects they manage and know when to “stay out of the way of strong partnerships,” as the funding partner described it.
3.3 |. Clear goals
A third theme that arose was the need, and the ability, of all partners involved to define clear goals (both joint and individual). Joint goals were critical in this study, as has been written about elsewhere (Penuel, Allen, Coburn, & Farrell, 2015). It was relatively easy for us to define and maintain clarity about our joint goal for this project, even as it was challenging in execution. The project was developed to understand how A2 shapes development among the teens who participate. We also discovered that in RPPs in general, and participatory evaluations in particular, it is helpful for all partners to have clear individual goals for their participation (Baker, Homan, Schonhoff, & Kreuter, 1999). In our experience with this study, having a clear goal for our own participation in the project was an important anchor we could each draw on throughout the project. For example, the community organization leaders could remind themselves of the learning they would derive from the evaluation and the long-term value of having data to better tell their story. The evaluator could remind herself of the interesting research questions the data could address regarding the experience and value of an empowering art-based program for youth civic and well-being outcomes. In our view, having individual goals was an important supplement to our joint goal and helped us sustain energy toward the project. This aligns with one of the key definitional points—mutualistic collaboration—that defines what it means to engage in an RPP (Coburn et al., 2013). This is important because community partnership work is at its best when joint goals are pursued in a way that can also further the individual goals of project partners. Since incentive structures are rarely aligned for academic researchers and community partners, this is an important consideration for researchers and community partners considering a small-scale RPP. We recommend that individuals considering participation in a small-scale RPP reflect on their individual goals, and have an open discussion with other project partners, as an important piece of initial conversations about the shared goals of the project. This aligns with Baker et al.’s (1999) recommendation for partners to acknowledge and honor each other’s “agendas” and may be especially important in a small-scale RPP, where individuals may be the only ones available to advocate for their own interests as they pursue joint projects.
3.4 |. Flexibility
However, the goals each partner defined also had to be balanced against the need to remain flexible to get the work done. We found that being flexible was a critical part of our successful work. Because there are multiple priorities with RPPs and multiple competing demands on the time of all partners involved, we found that we had to exercise more flexibility than with traditional research projects (in the case of the evaluator) and traditional organization projects (in the case of the organization leader). For example, the evaluator had to be flexible with meeting times and with rescheduling project activities when A2 staff had issues come up; this flexibility showed an understanding of A2’s work and priorities. A2 had to show flexibility in programming to accommodate survey data collection during program time, such as collecting “pre” surveys during the important first day of their SI Program. Thus, remaining flexible in the execution of the project was important to balance against progress toward our joint and individual goals. Taken alongside the recommendation above for clarifying individual and joint goals, we recommend each partner spend some time in advance to think about aspects of the project where challenges may arise that require flexibility, as well as parts of the project that are “non-negotiable” from their perspective.
4 |. CONCLUSION
Engaging in small-scale RPPs can be a very meaningful experience for individual researchers and smaller organizations and funders. The benefits and challenges align in many ways with those encountered in larger CBPR projects, including larger scale RPPs and participatory evaluations (Israel et al., 2006; Rice & Franceschini, 2007). As with larger projects, some benefits of the participatory and partnered approach we took in our project were increased “buy in” and empowerment of stakeholders (Rice & Franceschini, 2007). As with larger projects, we faced challenges such as figuring out how to remain flexible in project execution and how to sustain the partnership once funding ended (Israel et al., 2006).
Other benefits and challenges that emerged in this project are specific to smaller scale RPPs. For example, it may be more important, and also more possible, to find and sustain a strong interpersonal match between research and community partners in a smaller scale RPP compared to larger RPPs (Rice & Franceschini, 2007). Another benefit of small-scale RPPs is that it may be easier to retain flexible schedules and timelines when team sizes are smaller. There are also challenges specific to smaller RPPs like our evaluation project. For example, the downside of it being easier to find a strong interpersonal fit between partners is that any turnover in personnel would present a significant risk to the project. In addition, the project workload falls to a smaller number of people in small-scale RPPs. Smaller organizations are also less likely to have in-house capacity for research and evaluation. Larger organizations might more easily be able to support the research training that we found so critical to the organizational “readiness” that set us up for success in this project. Smaller organizations have to carefully consider who might have the capacity to pursue research training in advance of an evaluation project, at what level, and with what resources. Funders might consider putting financial resources toward such training in advance of an evaluation project. As individual researchers and smaller organizations and funders consider participating in RPP evaluation work, we hope these reflections and tips provide helpful insight. In addition, we hope that the practices documented here (incorporating elements of participatory evaluation, readiness to participate, defining joint and individual goals, and maintaining flexibility) can inform a research agenda as scholarship on RPPs grows. We suggest that a future direction for this scholarship is to expand our understanding of small-scale RPPs and to further develop measures of important dimensions of RPPs that might set the conditions for success in small-scale RPPs.
ACKNOWLEDGMENTS
The authors would like to thank Nathan Ross Freeman for his participation in this project as well as the Authoring Action alumni who participated as study staff and those who were involved as research participants. This study was supported by the Kate B. Reynolds Charitable Trust. We would also like to acknowledge the Program in Community Engagement of the Wake Forest Clinical and Translational Science Institute (WF CTSI), which is supported by the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health, through Grant Award Number UL1TR001420.
Funding information
Kate B. Reynolds Charitable Trust, Grant/Award Number: UL1TR001420
CONFLICT OF INTERESTS
This manuscript presents three perspectives of stakeholders involved in a partnership evaluation project. Each of their roles and affiliations with regard to the project is clearly explained in the manuscript.
REFERENCES
- Ahmed S, Beck B, Maurana C, & Newton G (2004). Overcoming barriers to effective community-based participatory research in US medical schools. Education for Health, 17(2), 141–151. 10.1080/13576280410001710969
- Attieh R, Gagnon M-P, Estabrooks CA, Légaré F, Ouimet M, Roch G, & Grimshaw J (2013). Organizational readiness for knowledge translation in chronic care: A review of theoretical components. Implementation Science, 8(1), 138. 10.1186/1748-5908-8-138
- Baker EA, Homan S, Schonhoff SR, & Kreuter M (1999). Principles of practice for academic/practice/community research partnerships. American Journal of Preventive Medicine, 16(3), 86–93. 10.1016/S0749-3797(98)00149-4
- Booker L, Conaway C, & Schwartz N (2019). Five ways RPPs can fail and how to avoid them: Applying conceptual frameworks to improve RPPs. New York, NY: William T. Grant Foundation.
- CITI Program. (2019). Retrieved from citiprogram.org
- Coburn CE, Penuel WR, & Geil KE (2013). Research-practice partnerships: A strategy for leveraging research for educational improvement in school districts. New York, NY: William T. Grant Foundation.
- Community Tool Box. (2019). Evaluating community programs and initiatives. Chapter 36, Section 6. Retrieved from https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/participatory-evaluation/checklist
- Community-Engaged Research Fellowship. (2019). Retrieved from https://ctsi.wakehealth.edu/Community-Resources/Financial-Resources
- Cousins JB, & Whitmore E (1998). Framing participatory evaluation. New Directions for Evaluation, 1998(80), 5–23.
- Ferman B, & Hill T (2004). The challenges of agenda conflict in higher-education-community research partnerships: Views from the community side. Journal of Urban Affairs, 26(2), 241–257. 10.1111/j.0735-2166.2004.00199.x
- Freeman NR (2019). Authoring action creative writing process: A companion guide. Winston-Salem, NC: Authoring Action.
- Irby MB, Hamlin D, Rhoades L, Freeman NR, Summers P, Rhodes SD, & Daniel S (2018). Violence as a health disparity: Adolescents’ perceptions of violence depicted through photovoice. Journal of Community Psychology, 46(8), 1026–1044.
- Israel BA, Krieger J, Vlahov D, Ciske S, Foley M, Fortin P, … Palermo A-G (2006). Challenges and facilitating factors in sustaining community-based participatory research partnerships: Lessons learned from the Detroit, New York City and Seattle Urban Research Centers. Journal of Urban Health, 83(6), 1022–1040. 10.1007/s11524-006-9110-1
- Jacquez F, Vaughn LM, & Wagner E (2013). Youth as partners, participants or passive recipients: A review of children and adolescents in community-based participatory research (CBPR). American Journal of Community Psychology, 51(1–2), 176–189. 10.1007/s10464-012-9533-7
- Kate B. Reynolds Charitable Trust. (2019). Retrieved from https://kbr.org/
- Marrero DG, Hardwick EJ, Staten LK, Savaiano DA, Odell JD, Comer KF, & Saha C (2013). Promotion and tenure for community-engaged research: An examination of promotion and tenure support for community-engaged research at three universities collaborating through a Clinical and Translational Science Award. Clinical and Translational Science, 6(3), 204–208. 10.1111/cts.12061
- Minkler M, & Wallerstein N (2003). Part one: Introduction to community-based participatory research. Community-based participatory research for health (pp. 5–24). San Francisco, CA: Jossey-Bass.
- Penuel WR, Allen A, Coburn CE, & Farrell C (2015). Conceptualizing partnerships as joint work at boundaries. Journal of Education for Students Placed At Risk, 20(1–2), 182–197. 10.1080/10824669.2014.988334
- Prochaska JO, Diclemente CC, & Norcross JC (1997). In search of how people change: Applications to addictive behaviors. In G. A. Marlatt & G. R. VandenBos (Eds.), Addictive behaviors: Readings on etiology, prevention, and treatment (pp. 671–696). Washington, DC: American Psychological Association. 10.1037/10248-026
- Rhodes SD, Tanner AE, Mann-Jackson L, Alonzo J, Siman F, Song EY, … Aronson RE (2018). Promoting community and population health in public health and medicine: A stepwise guide to initiating and conducting community-engaged research. Journal of Health Disparities Research and Practice, 11(3), 16–31.
- Rice M, & Franceschini MC (2007). Lessons learned from the application of a participatory evaluation methodology to healthy municipalities, cities and communities initiatives in selected countries of the Americas. Promotion & Education, 14(2), 68–73. 10.1177/10253823070140021501
- Suleiman AB, Ballard PJ, Hoyt LT, & Ozer EJ (2019). Applying a developmental lens to youth-led participatory action research: A critical examination and integration of existing evidence. Youth & Society. 10.1177/0044118X19837871
- The Winston-Salem Foundation. (2019). Retrieved from https://www.wsfoundation.org/
- Tseng V, Easton JQ, & Supplee LH (2017). Research-practice partnerships: Building two-way streets of engagement. Social Policy Report, 30(4), 1–17.
- Wallerstein N (1999). Power between evaluator and community: Research relationships within New Mexico’s healthier communities. Social Science & Medicine, 49(1), 39–53. 10.1016/S0277-9536(99)00073-8
