Journal of Epidemiology and Community Health. 2006 Oct;60(10):902–907. doi: 10.1136/jech.2005.040881

A model for collaborative evaluation of university‐community partnerships

Sarah Bowen 1,2, Patricia J Martens 1,2
PMCID: PMC2566062  PMID: 16973540

Abstract

Introduction

Manitoba's The Need to Know project was presented with a unique opportunity to develop a collaborative approach to evaluation, and to explore the effectiveness of a variety of evaluation methods for assessment of university‐community collaborative health research partnerships.

Objectives

The evaluation was designed to incorporate participation of community partners in planning, developing, and evaluating all aspects of the project. Objectives included: (a) assessment of the extent to which the project met its initial objectives; (b) assessment of the extent to which participants' needs and expectations were met; (c) refinement of evaluation questions; (d) identification of unanticipated impacts; (e) assessment of participant confidence as research team members; (f) development of knowledge translation theory; and (g) component analysis.

Methods

A “utilisation focused” approach was used. Primary stakeholders identified evaluation questions of concern, and how findings would be used. The multimethod time series design incorporated key informant interviews, a pre/post‐test survey, written workshop evaluations, and participant and unobtrusive observation. All aspects of the evaluation were made transparent to participants, and formal feedback processes were instituted.

Results

There was a high level of participation in evaluation activities. Identifying evaluation questions of concern to community partners helped shape project development. While all methods provided useful information, only key informant interviews, participant observation and feedback processes provided insights into all evaluation objectives.

Conclusion

Collaborative evaluation can make an important contribution to the development of university‐community partnerships. Qualitative methods (particularly key informant interviews, participant observation, and feedback processes) provided the richest source of data and made an important contribution to team development.

Keywords: evaluation, knowledge translation, collaborative research, university‐community partnerships, qualitative methods


While there is much interest in the potential of university‐community partnerships for collaborative health research, evaluation of such initiatives is in its infancy. This article describes the development of the collaborative "utilisation focused" approach1 to evaluation incorporated within The Need to Know project in Manitoba, Canada, and discusses the potential of evaluation to promote and support collaborative research initiatives. Analysis of the extent to which the project has met its original objectives2,3 and a discussion of the contribution of the project evaluation to knowledge translation theory4 have been reported elsewhere.

The Need to Know project is a CIHR (Canadian Institutes of Health Research) funded project designed to address the critical need for research to support the decision making of rural/northern regional health authorities (RHAs), and further promote and develop models of collaborative research. Started in 2001, the five year project includes as partners the Manitoba Centre for Health Policy (MCHP), the 10 rural/northern RHAs of Manitoba, and Manitoba Health. MCHP is an academic research unit of the Department of Community Health Sciences in the University of Manitoba's Faculty of Medicine. It uses the universal health care system's administrative billing claims to undertake secondary data analysis to examine health status and health care use patterns at a population level.

It was recognised that there was little research of relevance to decision makers in rural and northern health authorities, and that, in Manitoba, there were weak relationships between academic health service researchers and these decision makers. The project was designed to address three themes identified in the research literature at the time of project development: (a) the importance of researchers communicating findings in a manner that influences decision making5,6; (b) the need to establish effective working relationships among the various partners7,8,9,10; and (c) the need to undertake research of relevance to intended users.9,10,11,12 Project goals are to:

  1. create new knowledge directly relevant to rural and northern RHAs. Team members collaboratively selected and developed three research projects cofunded by Manitoba Health (comparisons of regional indicators of health and health care use, mental health, and sex differences in health and health care use).13,14,15

  2. develop RHA relevant capacity. Through two‐day workshops held three times a year, team members participate in a variety of sessions, many of them "101" introductions to topics such as statistics, epidemiology, library searches, and research design. Through development and interpretation of the collaborative research reports and evaluation activities, they also gain practical experience in applying newly acquired research concepts. At the same time, researchers learn from the experience of these RHA planners and decision makers.

  3. disseminate and apply health related research so as to increase the effectiveness of health services, and ultimately the health of RHA populations. As the ultimate goal of the project is to increase the use of research in decision making, strategies for research dissemination and application receive the same emphasis as research creation. Team meetings include specific sessions on dissemination and other knowledge translation strategies, and "homework" activities help transfer learning from the workshops to RHA planning.

Team meetings are the focus of team activities. Other funded project components include: the evaluation discussed here; provision of laptop computers to team members; a project web site; opportunities for team members to attend relevant conferences and present research findings; and site visits to support regional activities.

Figure 1 summarises the conceptual model (revised, 2002) on which the project is based.


Figure 1 Conceptual model of MCHP/RHA/Manitoba Health collaboration: The Need to Know knowledge translation model. This model is iterative in nature. For example, once new knowledge is effectively communicated and disseminated, additional questions often arise from the findings. These questions then lead to more new knowledge creation.

Evaluation methods

An external evaluator was hired before the project began. Initially, the evaluation focused on providing project investigators with ongoing information to facilitate project improvement, and developing a framework for summative evaluation. Ethical approval was obtained from the research ethics board of the Faculty of Medicine, University of Manitoba.

Five core evaluation activities have been in place since the first workshop: (a) confidential key informant interviews, (b) a pre‐test and post‐test survey, (c) process documentation, (d) participant observation, and (e) written evaluations of team meetings. Additional unobtrusive methods (for example, monitoring of the project web site) were added in the first year, as were formalised feedback processes.

The evaluator is considered a team member and participates in team activities. Her stance with the project is not “objective” (the evaluator's role is intended to contribute to the success of the project); rather it is “neutral” in that she does not take sides among the partner groups.1

Over the first year of project implementation the evaluation expanded from its original role of helping the principal investigators monitor and improve the project to become "utilisation focused."1 This approach, which identifies evaluation questions of concern to stakeholders and focuses on how the evaluation will be used, is consistent with the principles of collaborative research as it includes team members as full evaluation partners. All aspects of the multimethod evaluation are made transparent to participants to explicitly model research principles. The evaluation now has seven purposes: (a) assessing how well the project is meeting its initial objectives; (b) assessing the extent to which it is meeting participants' needs and expectations; (c) determining evaluation questions and methodology; (d) identifying unanticipated outcomes; (e) assessing collaborative team development; (f) contributing to knowledge translation theory; and (g) component analysis.

Participants

Initially, each RHA CEO was invited to select one representative for The Need to Know team, and Manitoba Health to select three representatives. In response to RHA requests in 2003 for greater organisational involvement, the maximum representation from each RHA was increased to two, and four additional staff from Manitoba Health and a representative from the Winnipeg Regional Health Authority (the "capital region" authority) were added. By March 2004, therefore, there were a total of 31 team members, compared with 19 at the first meeting. All team members were included in interviews and workshop evaluation activities; however, only RHA team members participated in the pre/post‐test survey.

Methods

Initial key informant interviews were held with team participants in summer/autumn 2001. Additional interviews were conducted in the autumn of 2002, and in autumn/winter 2003–2004. Interviews were also undertaken with project advisory committee members and CEOs of the participating RHAs (winter 2002), and with selected MCHP staff (2001, 2003).

Initial interview questions focused on participants' research knowledge and confidence, current research use, and objectives for project participation. Subsequent interviews explored project accomplishments and challenges and suggestions for project development. In 2002, determination of evaluation questions of interest to stakeholders was also a focus.

Semistructured, open‐ended questions were developed in interview guide format.16 While similar themes were explored with all stakeholders, the wording and focus of questions varied depending on the respondent's role and project experience. Most of the 101 interviews took 45–60 minutes. All first interviews were conducted in person; however, some subsequent interviews and all interviews with CEOs were conducted by telephone.

Notes were taken and transcribed immediately after each interview. As interviews were not audiotaped, no long narratives were captured, although short quotations were recorded verbatim. Analysis first focused on responses to direct questions to determine the range of perspectives on different issues (for example, "What do you think have been the greatest accomplishments of the project to date?"). Transcripts were then compared with data from other methods. Using an open coding approach, interview data were also reviewed for unexpected or previously unidentified themes.17 Particular attention was given to similarities and differences between stakeholder groups.
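
The coding and cross‐group comparison described above can be illustrated with a minimal sketch (the group names, theme labels, and data are hypothetical, and this is not the project's actual analysis procedure), in which coded interview excerpts are tallied by stakeholder group so that similarities and differences between groups become visible:

    # Hypothetical sketch: organise coded interview excerpts (open coding themes)
    # by stakeholder group and tally them, so similarities and differences
    # between groups can be compared. All names, themes, and data are illustrative.
    from collections import Counter, defaultdict

    coded_excerpts = [                      # (stakeholder group, theme) pairs
        ("RHA team member", "organisational barriers"),
        ("RHA team member", "increased confidence"),
        ("MCHP researcher", "satisfaction with team meetings"),
        ("Manitoba Health", "organisational barriers"),
        ("RHA CEO", "relevance of deliverables"),
    ]

    # Tally themes within each stakeholder group.
    themes_by_group = defaultdict(Counter)
    for group, theme in coded_excerpts:
        themes_by_group[group][theme] += 1

    for group, counts in sorted(themes_by_group.items()):
        print(group, dict(counts.most_common()))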

The pre‐test survey was administered at the first team workshop (June 2001), and the first post‐test was conducted in October 2002. All current members participated; however, paired analysis was restricted to those who had completed both the pre‐test and the post‐test. Quantifiable responses were analysed using appropriate parametric and non‐parametric statistical tests, depending on the type of survey response. Open‐ended questions were analysed using techniques similar to those used for the interviews. New team members complete the pre‐test on joining the project, and future post‐tests will be undertaken.
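
As a rough illustration of the kinds of tests named above (this is not the authors' code, and all numbers are hypothetical), a paired comparison of summed rating scales and a comparison of yes/no proportions for 11 respondents might be run as follows:

    # Illustrative sketch only (not the authors' code): a paired t test and its
    # non-parametric Wilcoxon equivalent for a summed rating scale, and Fisher's
    # exact test for a yes/no proportion. All numbers below are hypothetical.
    from scipy import stats

    # Hypothetical summed confidence scores for the same 11 members, pre and post.
    pre  = [18, 22, 19, 21, 20, 23, 17, 22, 20, 21, 19]
    post = [24, 27, 25, 26, 24, 28, 23, 27, 26, 25, 24]

    t_stat, p_t = stats.ttest_rel(pre, post)    # parametric, paired
    w_stat, p_w = stats.wilcoxon(pre, post)     # non-parametric equivalent
    print(f"paired t test p = {p_t:.4f}, Wilcoxon p = {p_w:.4f}")

    # Hypothetical yes/no counts: 2 of 11 "yes" at pre-test, 8 of 11 at post-test.
    contingency = [[2, 9],     # pre-test:  yes, no
                   [8, 3]]     # post-test: yes, no
    odds_ratio, p_fisher = stats.fisher_exact(contingency, alternative="less")
    print(f"Fisher's exact (one tailed) p = {p_fisher:.4f}")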

Anonymous written evaluations were completed after each team workshop. Questions focused on usefulness/interest of topics and sessions at both the personal and professional levels. These evaluations were collated and analysed to determine most and least successful aspects of the workshops and to identify suggestions for change.

The evaluator was a participant observer at all team workshops, team planning/debriefing meetings, advisory committee meetings, orientation sessions for new members, and many presentations and conferences. Personal field notes were recorded, project correspondence was monitored, and activities and the development of relationships were documented. A number of unobtrusive measures (for example, monitoring of web site hits) were added after the first year.

After each series of interviews, data from all methods were compared and preliminary reports developed. In 2002 and 2003, separate reports were developed for each stakeholder group and circulated privately for feedback. These feedback processes were important for developing trust in the evaluation process, providing additional opportunities for participation, and modelling research principles.2,3

Results

Participation in evaluation activities

Although voluntary, participation in interviews and the survey has remained high. Only three interviews of a potential 104 were declined. There was 100% participation in the pre‐test and post‐test surveys (n = 11 original team members available for paired pre‐test and post‐test analysis). Participants regularly use workshop evaluations to suggest changes or additional topics.

Effectiveness of evaluation design and methods

The various evaluation methods provided insights on different evaluation objectives. Table 1 summarises, with examples, the contribution of each of these methods to the seven evaluation purposes. Only three methods provided data in all seven areas—key informant interviews, participant observation, and the feedback processes—highlighting the importance of qualitative methodology in evaluation of collaborative initiatives.

Table 1 Examples of the contribution of each evaluation method to the seven evaluation purposes.

Key informant interviews: meets proposal objectives (direct input on three objectives; for example, satisfaction with and use of deliverables, RHA capacity, dissemination); addresses participant expectations (increasing level of satisfaction, how learning was used); identifies evaluation questions (need for focus on organisational barriers); identifies unanticipated impacts (for example, initial lack of trust, importance of team suppers); assesses participant confidence and participation as partners (self report over time); contributes to theory development (for example, levels of project impact, types of learning); component analysis (why certain components were more or less important/useful).
Pre/post‐test survey: meets proposal objectives (before and after measurements of resource use, research confidence); addresses participant expectations (identifies needs of participants over time); assesses participant confidence and participation as partners (self rating on research concepts, computer skills); component analysis (rating of components in the post‐test).
Participant observation: meets proposal objectives (evidence of increased participant capacity); addresses participant expectations (engagement in activities, researcher response to identified needs); identifies evaluation questions (issues identified in team discussions); identifies unanticipated impacts (critical points in project development; for example, choice of second deliverable); assesses participant confidence and participation as partners (participation in agenda setting, ability to apply research concepts, contacts made between meetings); contributes to theory development (importance of the "personality factor", limitations to individual capacity building); component analysis (participant response to various activities).
Workshop evaluations (contributing to a subset of the purposes): usefulness of specific sessions to the needs of RHAs; whether topics were personally interesting/useful; responses to the comments section; which topics/sessions of team meetings were most important/useful.
Unobtrusive methods (contributing to a subset of the purposes): extent of uptake of deliverables; attendance and attrition; patterns of web site access; participation in various activities.
Feedback processes: confirmed, revised, or refined findings from all methods, across all seven purposes.

In the original table, cell shading distinguished important contributions, some contribution, and cells where a method was not useful for a given purpose.

Effectiveness of the project

Data from all methods show that the project has achieved its objectives. It continues to be highly positively evaluated by all partners, who emphasise the importance of the project in producing research that is relevant to, and being used by, the RHAs; the benefits of the supportive intersectoral networks developed; and the increased research capacity and confidence of team members.2 The project has received national attention: it was asked to participate in a CIHR Knowledge Translation Casebook, was awarded the 2005 CIHR Knowledge Translation Award for Regional Impact, and was highlighted as a Promising Practice by the Canadian Health Services Research Foundation.18

Evaluation results from the qualitative research methods are discussed extensively in a published paper4; important themes include the role of personal factors in knowledge translation, the necessity of developing quality relationships and trust between partners, and the time commitment required of both researchers and community partners. The project was designed to increase the capacity of individuals within organisations and, through them, to develop effective networks with participating organisations. RHA team members, however, identified the potential of the project to increase organisational capacity in evidence informed decision making as the evaluation question of greatest interest. MCHP staff were more interested in the satisfaction of team members with current project activities (for example, the team meetings).

Data from the quantitative survey were limited (n = 11), as only those who began as team members in 2001 had completed both pre‐test and post‐test. There were still some statistically significant differences (see table 2), with results showing a significant increase in interaction with, and use of, MCHP researchers and resources. Interestingly, there was no significant change in the general rating of the usefulness of MCHP's web site, only in frequency of use. Team members also reported greater use of MCHP information for reports or presentations to CEOs and boards of directors, whereas in the pre‐test survey many members only accessed reports for their own general knowledge or awareness. Team members' confidence in using key epidemiological terms increased significantly, as did their self rated computer skills. However, at both time periods in the survey, there was comparatively little use of MCHP information by RHA staff beyond the team members themselves.

Table 2 Results of selected pre‐test and post‐test survey questions (n = 11).

Each entry gives the survey question (and response measure), followed by the pre‐test value, post‐test value, and significance.*
In general, how would you rate your knowledge of the role of MCHP? (% reporting "good" or "excellent"): pre‐test 36%; post‐test 82%; p<0.04
Have you ever contacted the MCHP staff regarding a question, suggestion or problem related to research for your RHA? (% yes): pre‐test 18%; post‐test 73%; p<0.015
If you had a specific research question or problem, who would you be most likely to contact first? (% naming MCHP): pre‐test 0%; post‐test 55%; p<0.006. (% naming colleagues): pre‐test 36%; post‐test 0%; p<0.05
How often are you required to locate health information for RHA planning or reporting purposes? (% stating once a month or more): pre‐test 55%; post‐test 82%; NS
Over the past six months, where have you sought sources of information for health planning purposes? (% identifying MCHP): pre‐test 55%; post‐test 91%; NS (p<0.07)
If you had a question, suggestion, or problem related to research for your RHA, how likely would you be to contact staff of MCHP? (% reporting "I would definitely"): pre‐test 27%; post‐test 91%; p<0.004
Approximately how many times have you used MCHP reports or individuals over the past two years? (% reporting six times or more): pre‐test 18%; post‐test 72%; p<0.015
Approximately how many MCHP reports have you read? (median number): pre‐test 3; post‐test 7; p<0.025
In what format did you access these reports? (% reporting full report, hard copy): pre‐test 64%; post‐test 100%; p<0.05
Have you ever accessed the general MCHP web site? (% yes): pre‐test 45%; post‐test 91%; p<0.03
To your knowledge, do staff of your RHA access the MCHP website? (% reporting occasionally/regularly): pre‐test 18%; post‐test 18%; NS
How have you used the information you have accessed from MCHP? (% reporting each use)
 reports or presentations to the RHA Board or CEO: pre‐test 27%; post‐test 81%; p<0.015
 strategic planning: pre‐test 45%; post‐test 72%; NS
 staff education: pre‐test 18%; post‐test 45%; NS
 my own general knowledge and awareness: pre‐test 72%; post‐test 81%; NS
How would you rate your current level of computer related skills: general computer operation, spreadsheets, web navigation? (each rated out of 3 and summed, maximum 9; mean value): pre‐test 6.4; post‐test 7.4; p<0.02
Rate your confidence in using the following terms: incidence/prevalence, crude rates, standardized rates, statistical comparison of rates, confidence intervals, premature mortality rate, population based versus facility based, measures of socioeconomic status, potential of administrative data for planning, limitations of administrative data. (each rated out of 3 and summed, maximum 30; mean value): pre‐test 20.5; post‐test 25.8; p<0.0001

*One tailed testing, with p<0.05 used as the statistical cut off. NS, not statistically significant (that is, p>0.05). Fisher's exact test for proportions, paired t tests, and the non‐parametric equivalent Wilcoxon test were used, depending on the type of response and whether assumptions of normality were met.

Discussion

To illustrate how the various evaluation methods provided different and/or complementary data for analysis, the contribution of the methods to two of the evaluation purposes is explored in some detail.

Addressing participant expectations

Although project goals were clearly articulated before the project began, the expectations of team members were unknown. The pre‐test survey explored current sources of research support and information; perceptions of MCHP; use of MCHP resources and web site; familiarity with research related terms and concepts; current needs for research information; and project expectations. Results showed that most participants had never had any prior contact with MCHP, and few had used its resources. However, analysis of answers to open‐ended questions (for example, "What are you hoping will be accomplished through this 'Need to Know' process?" and "On what topics related to health research and RHA planning do you require information at this point in time?") identified congruence between participant expectations and project goals, and guided the development of initial workshops.

The initial key informant interviews permitted further investigation of participant expectations. It was discovered, for example, that not only were participants unfamiliar with MCHP, they were also confused about their role in the project, sceptical about the authenticity of the proposed partnership, and largely unconvinced that research (or researchers) could be useful to their work. These findings helped explain much of the discomfort the evaluator had observed during the first team meeting, and allowed the project directors to initiate specific action to respond to these unanticipated findings. Development of a specific “job description” for team members, and attention to “walking the talk” of partnership were two such actions. Regular follow up interviews explored the extent to which previously identified issues had been addressed, as well as project accomplishments and challenges. Workshop evaluations monitored team meetings, and provided an ongoing mechanism for input into future workshops.

The consistently high levels of satisfaction shown through the workshop evaluations were confirmed through interviews and participant observation methods. Unobtrusive methods (for example, monitoring of attendance and membership) and the post‐test survey also provided limited confirmatory data.

The concern of RHA team members that "capacity building" should expand beyond individual team members to include their organisations was first identified as a theme in the pre‐test survey, was reconfirmed in the post‐test survey, workshop evaluations, interviews, and participant observation, and remained a dominant theme. This finding led to an increased emphasis on site visits in 2004–2005, and to the successful submission of a funding proposal to explore the barriers to evidence based decision making within RHAs.

Evaluation methods differed in their overall contribution to knowledge translation theory development, as shown in table 1. Interviews provided the richest source of data. Analysis of the interviews identified four distinct levels of project impact (individual learning, "how I do my job", how RHAs make decisions, and provincial/national networks), and three different kinds of learning experienced through the project (factual learning, how to access needed information, and attitudes to research).4 Participant observation was able to record and analyse key issues that may not have been captured through any other method. For example, it was only as a participant in the workshops that the importance of the "personality factor" (characteristics of key project leaders such as their facilitation skills, humour, flexibility, and respect for the contributions of others) to project success could be truly appreciated. Finally, the theory emerging from analysis of all data sources was critiqued and refined through the feedback processes.

What this paper adds

While there is increasing recognition of the benefit of university‐community research partnerships, little is known about appropriate strategies for evaluating them. The Need to Know project, which incorporated a unique approach to evaluation from its inception, shows the potential of collaborative evaluation to further improve effective partnerships. Use of multiple methods not only strengthens evaluation design, but also facilitates modelling of a variety of research methods. Team member participation in determining research questions, and a structured feedback phase are effective strategies in building trust and facilitating participation.

While the survey provided confirmatory data (and, despite small numbers, permitted some quantitative measurement of change over time), it was not able to contribute to the key evaluation objectives of theory development or the identification of unanticipated outcomes. However, each of the methods provided some unique insights that would not have been captured in another way. For example, unobtrusive web site monitoring showed that the number of "hits" for the tables and graphs of the collaborative reports peaked about three months after the peak of report distribution, providing evidence that in‐depth exploration of the data was taking place several months after the reports were first read.
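
A minimal sketch of how such unobtrusive web‐log monitoring might be implemented is shown below; the log file name, URL patterns, and file extensions are assumptions for illustration rather than details from the project:

    # Hypothetical sketch of unobtrusive web-log monitoring: count monthly hits on
    # report tables/graphs from a standard access log and report the peak month.
    # The file name, URL pattern, and log format are assumptions for illustration.
    import re
    from collections import Counter

    LOG_LINE = re.compile(r'\[(\d{2})/(\w{3})/(\d{4}).*?\] "GET (\S+)')

    hits_by_month = Counter()
    with open("access.log") as log:              # hypothetical server log file
        for line in log:
            match = LOG_LINE.search(line)
            if not match:
                continue
            _day, month, year, url = match.groups()
            if "/reports/" in url and url.endswith((".gif", ".pdf", ".xls")):
                hits_by_month[f"{year}-{month}"] += 1

    if hits_by_month:
        peak_month, peak_hits = hits_by_month.most_common(1)[0]
        print(f"Peak interest in report tables/graphs: {peak_hits} hits in {peak_month}")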

Strengths and limitations of design

A time series design is associated with threats to validity related to history and maturation, especially where there is no quasi‐experimental comparison group.19 Increasing awareness of the importance of evidence based decision making, and greater appreciation of the benefits of partnerships in research nationally and provincially, could have resulted in observed changes beyond the impact of The Need to Know project.

Measures taken to address these inherent limitations include the use of multiple methods at regular intervals, and triangulation of sources (RHA team members, RHA CEOs, Manitoba Health staff, advisory committee members, MCHP staff involved with the project and staff not involved in the project, community members, and researchers across Canada). The evaluation also incorporates elements of investigator triangulation (use of several different researchers/evaluators) through the participation of team members in data analysis and theory development. Other strengths included: (a) an external evaluator and provisions for confidentiality or anonymity of responses; (b) open‐ended questions designed to identify unanticipated impacts; (c) input of team members into evaluation questions; and (d) formal feedback processes.

Policy implications

Collaborative evaluation of joint activities is an essential component in the development of university‐community partnerships. Qualitative methods provided data—essential to promoting trust and team development—that were not available through other methods.

A unique characteristic of the participant observer role in this evaluation was the long term (3.5 years) and intensive relationship of the evaluator with the project. This presented the opportunity to develop personal relationships with team members, and to gain insights that may not otherwise have been available. It highlights the importance, to evaluation as well as to collaborative research, of sustained and ongoing relationships between researchers and community partners. However, this relationship was not without challenges. The evaluator was often identified with MCHP, rather than as an external evaluator. This may well be because she was hired by the principal investigators, not by a coalition of stakeholders. Other challenges relate to role conflict. Because the evaluator is a participant in planning sessions, additional effort is required to ensure, and to communicate, that her input is based on analysis of confidential input from all stakeholders, not on personal perspective. In addition, the evaluator was sometimes called on to participate as an "expert" in certain areas, a role that makes "objectivity" unrealistic.

A challenge to implementing such a comprehensive approach to evaluation is the potential cost. Time for evaluation activities (including the writing of reports and articles) averaged almost two days a week over the first three years. As The Need to Know project was a large, nationally funded project with an objective of better understanding collaborative research and knowledge translation approaches, we believe that this expense is reasonable given the benefits gained.

Conclusion

This “utilisation focused” evaluation design shows the potential contribution a collaborative evaluation can make to development of university‐community partnerships. While several methods provided useful information, qualitative methods (key informant interviews, participant observation, and feedback processes) provided the richest data and addressed all evaluation purposes.

Participants state that the evaluation has made a contribution to the overall goal of "capacity building" by providing direct experience with the principles and methods of evaluation research, as well as with the potential of qualitative methodology. As MCHP expertise lies in quantitative methods, specifically the use of large databases, the evaluation permitted experience with a broader range of methods than would otherwise have been possible.

This approach to evaluation can, however, only be effective to the extent that the project directors are responsive to its findings, and demonstrate respect for community partners as equals in the research collaboration.12 While the evaluation shows that The Need to Know project has achieved exceptional progress to date, it is possible that the evaluation could have decreased participant confidence and satisfaction if there had been little (or ineffective) response to issues raised by participants.

Acknowledgements

The authors acknowledge the Canadian Institutes of Health Research (CIHR) for their support of the five year (2001–2006) Community Alliances for Health Research project entitled, “The Need to Know: Collaborative research by the Manitoba Centre for Health Policy, the Rural and Northern Regional Health Authorities, and Manitoba Health”. PJM is the PI on this project, and SB is the evaluation researcher. PJM would also like to thank CIHR for support through her New Investigator's Award (2003–2008).

Footnotes

Conflicts of interest: none.

References

1. Patton MQ. Utilization‐focused evaluation: the new century text. 3rd ed. Thousand Oaks: Sage Publications, 1997:20–22.
2. Bowen S. The Need to Know Project Evaluation 2002–2004. Manitoba Centre for Health Policy, 2004. http://www.rha.cpe.umanitoba.ca
3. Bowen S. The Need to Know Project Evaluation 2001–2002 Report. Manitoba Centre for Health Policy, 2002. http://www.rha.cpe.umanitoba.ca
4. Bowen S, Martens PJ, The Need to Know Team. Demystifying knowledge translation: learning from the community. Journal of Health Services Research and Policy 2005;10:203–212.
5. Casebeer A, Johnson D. Potholes in the information highway: the use of health service utilization data by Alberta healthcare managers. Healthcare Management Forum 2000;13:58–64.
6. Canadian Health Services Research Foundation. Key lessons on how to communicate research to decision makers. Ottawa: Canadian Health Services Research Foundation, 2002.
7. Roos NL, Shapiro E. From research to policy: what have we learned? Med Care 1999;37(suppl):JS291–JS305.
8. Lomas J. Diffusion, dissemination, and implementation: who should do what? Ann N Y Acad Sci 1993;703:226–235.
9. Lomas J. Using 'linkage and exchange' to move research into policy at a Canadian foundation. Health Affairs (Millwood) 2000;19:236–240.
10. Davis P, Howden‐Chapman P. Translating research findings into health policy. Soc Sci Med 1996;43:865–872.
11. Gibson CB, Mohrman AM, Mohrman SA. Doing research that is useful to practice: a model and empirical exploration. Academy of Management Journal 2001;44:357–375.
12. Golden‐Biddle K, Reay T, Petz S, et al. Towards a communicative perspective of collaborating in research: the case of the researcher‐decision‐maker partnership. Journal of Health Services Research and Policy 2003;8(suppl 2):20–25.
13. Martens PJ, Fransoo R, The Need to Know Team, et al. The Manitoba RHA indicators atlas: population‐based comparisons of health and health care use. Winnipeg: Manitoba Centre for Health Policy, 2003. http://www.umanitoba.ca/centres/mchp/reports/reports_03/rha2.htm
14. Martens PJ, Fransoo R, McKeen N, et al. Patterns of regional mental illness disorder diagnoses and service use in Manitoba: a population‐based study. Winnipeg: Manitoba Centre for Health Policy, 2004. http://www.umanitoba.ca/centres/mchp/reports/reports_04/mental.health.htm
15. Fransoo R, Martens PJ, The Need to Know Team, et al. Sex differences in health status, health care use, and quality of care: a population‐based analysis for Manitoba's regional health authorities. Winnipeg: Manitoba Centre for Health Policy, 2005. http://www.umanitoba.ca/centres/mchp/reports/reports_05/sexdiff.htm
16. Patton MQ. Qualitative research and evaluation methods. 3rd ed. Thousand Oaks: Sage Publications, 2002:343–344.
17. Strauss A, Corbin J. Basics of qualitative research: grounded theory procedures and techniques. Newbury Park: Sage Publications, 1990:61–74.
18. Canadian Health Services Research Foundation. How Manitoba regional health authorities support the use of research‐based evidence in planning. Promising Practices in Research Use 2005;(3). http://www.chsrf.ca/promising/pdf/ppractices_3_e.pdf
19. Campbell DT, Stanley JC. Experimental and quasi‐experimental designs for research. Boston: Houghton Mifflin, 1966:5–13, 37–43.
