Healthcare Policy. 2007 Aug;3(1):32–37.

Où Sont les Chercheurs? Speaking at Cross-Purposes or Across Boundaries?

Où sont les chercheurs? Parlent-ils à contre-courant ou transcendent-ils les frontières?

Craig Mitton 1, Angela Bate 2
PMCID: PMC2645124  PMID: 19305752

Abstract

Knowledge transfer and exchange (KTE) relates to both the translation and transfer of information and the exchange of information between researchers and decision-makers. Despite recent advances, KTE efforts may be compromised on two fronts: first, the existing reward structure for university-based researchers may not be compatible with applied research; and second, there appears to be a lack of research capacity in healthcare organizations. In this short paper, we contest the first of these points, suggesting that applied research can and should be published in high-index journals, and thus the tenure and promotions process does not need reform. Regarding the second point, we suggest that partnerships be formed across healthcare organizations, universities, government agencies and research funders to support the positioning of PhD-trained researchers directly in healthcare delivery organizations. In our view, it is here, once organizational boundaries are crossed, that significant progress will be made in completing the health policy and health research cycles.


Knowledge Transfer and Exchange (KTE) and its application to health services research have been gaining prominence internationally. This trend has been exemplified through the creation of national organizations whose role is to promote the principles of KTE, both by facilitating knowledge transfer and information exchange between researchers and decision-makers and by providing training for such activity, in order to improve the evidence base upon which decisions are made.1

The growing focus on the importance and relevance of KTE was highlighted in presentations and subsequent discussion at the 2005 International Conference on the Scientific Basis of Health Services in Montreal, which had as its aim to “promote the practical application of health research in health systems and settings.” The question transcending the program was how to ensure successful KTE and bridge the perceived divide that currently exists between research undertaken in universities and the research that decision-makers and healthcare organizations need in order to work to better effect. While presentations addressed the “push” and “pull” agendas for KTE, discussed the barriers and facilitators for attaining success in knowledge transfer and provided theoretical frameworks describing such processes, the solution remained elusive.

In attempting to address this question, we acknowledge that our collective experience is confined to the countries where we respectively reside: Canada and the United Kingdom. In both contexts, the perceived divide that exists between the high-quality research produced by university researchers and the practically applicable, locally relevant research required by decision-makers has given rise to the view that researchers and decision-makers speak incompatible languages and often talk at cross-purposes. While KTE serves to provide translation and foster exchange between the academic and applied worlds, successful KTE may be compromised in two ways. First, the existing reward structure for university-based researchers may give rise to incentives that are incompatible with producing good-quality applied research. Second, there is a lack of research capacity and skills in healthcare organizations to conduct primary research and direct the research agenda to fulfill local decision-making needs.

The first of these points is contestable. Of course, some researchers have no interest in conducting applied research or, once it is conducted, in spending time to ensure that the research is used in practice. This is their prerogative. However, for those interested in completing the research cycle, there are indeed forces at play that militate against involvement in applied research. For example, applied research can take longer and may differ in terms of the rate at which it can be published; thus, applied researchers may encounter bias in the tenure/promotions process. Further, applied research may not be compatible with peer-reviewed journals (e.g., because it is too context-specific). Even if such research is published, peer-reviewed journals may not foster KTE, as the research may be neither accessible nor relevant in its application to decision-makers (owing to long time lags between submission and publication, publication bias and an unfamiliar scientific reporting format).

Nonetheless, in contrast to the groundswell of opinion at some universities, we would argue that applied researchers do not need a parallel incentive system for promotion and tenure within the current university system. Good research is good research, whether applied or not. Thus, there should be little excuse for not achieving high-index, peer-reviewed publications from applied research activity. We would contend that all researchers have a myriad of responsibilities and masters. While applied researchers may indeed have at least one additional master, does this mean that peer-reviewed publications cannot be attained? An applied researcher interested in KTE will likely be interested in publishing, both as an academic pursuit in and of itself and as a career-promoting activity. Moreover, in achieving successful KTE, peer-reviewed publications are but one way through which applied research can be promoted. With the ever-growing heterogeneity among researchers at universities, we believe that there should be at least one common metric across all disciplines against which researchers are measured: peer-reviewed publications. Thus, despite the counter-arguments, we would still contend that peer-reviewed publication is a relevant metric for applied researchers and, as such, that the university tenure/promotions process does not need fundamental reform.

Turning now to the second point, we suggested that successful KTE may also be compromised because healthcare organizations lack research capacity. Because this proposition is more widely accepted, it may be easier to resolve. While many healthcare organizations currently have a research mandate, the lack of in-house research capacity often means contracting out to university-based researchers, expressing a “willingness” to partner on research projects and funding university-based research centres. This situation gives rise to a view of KTE as a one-way flow of information from the academic world into the applied, with the onus for the translation and uptake of new knowledge resting with the researchers. In our opinion, these actions have widened the perceived divide between researchers and decision-makers, creating a “them-and-us” culture, and are the primary reason that so much research today is published and then shelved. KTE should be about mutual engagement in a two-way, dynamic process.

So, where do we go from here if we are to see greater use of research in practice over the next decade? The approach we discuss counters the view outlined at the start, which attributes the divide between researchers and decision-makers to the notion that they speak incompatible languages and are consequently at cross-purposes. Instead, we would argue that this divide is both reinforced and maintained by existing organizational boundaries that cannot be bridged by KTE alone.

Our model, therefore, attempts to bring those conducting research and those applying it together in the same organization. In this model, healthcare organizations would develop and sustain research and development (R&D) departments that, rather than focusing solely on coordinating and validating external research, would initiate and drive their own research agendas, housing PhD-trained researchers who want to conduct and implement applied research. Instigating this shift requires at least four key stakeholders to come on board: the healthcare organization, the researcher/university, the government and the funding agencies.

First, the healthcare organizations themselves would have to give the R&D department sufficient profile to attract applied researchers. Salaries would have to be competitive with, or exceed, those in university research centres; researcher time would have to be protected to enable academic freedom and pursuits alongside investigation of important organizational issues; and roles would have to be clearly differentiated from those of healthcare analysts, who respond to daily issues such as utilization and capacity. Organizationally, such activity would need high-level support, as on its own a given R&D unit (and, indeed, the individual scientists who comprise it) may be unable to navigate the decision-making environment and influence change. Second, the universities would have to provide some level of institutional support for these researchers. Currently in Canada and the United Kingdom, the majority of junior university-based health research positions are funded through salary awards or other external funding sources. That is, the university often does not pay the salary, yet expects service in the form of teaching and student supervision in return for a departmental position. The model we propose would provide greater security for junior researchers by offering a salaried position (perhaps cross-funded), unparalleled opportunities to carry out research at the local level, and ready-made partnerships. Applying for provincial and national grants, and submitting work for peer-reviewed publication, would of course be part of the position. Having a joint appointment, or being cross-appointed with a university department (much as academic physicians are), would also ensure peer interaction and additional academic pursuits.

Third, the government would need to provide protected funding to support R&D. Healthcare organizations should not have to choose between R&D and patient care initiatives. These resources could be redirected from current health innovation pools, and could be viewed as a strategic investment in the application of research. Governments could also provide coordination and education functions, so that healthcare organizations do not excessively duplicate research projects and researchers have the opportunity to attend annual workshops to share ideas.2

Finally, health research funding agencies (e.g., the Canadian Health Services Research Foundation, the Canadian Institutes of Health Research, the Economic and Social Research Council and the UK Department of Health National Coordinating Centre for Research Capacity and Development) currently fund KTE programs. Some of these resources could be reallocated to healthcare organizations to recruit and retain applied researchers. These agencies could also put their considerable weight behind the proposed model and provide practical advice to healthcare organizations.

By proposing this model, we aim to stimulate debate in this area, noting that questions do remain. For example: Will a broad group of applied researchers “jump ship” from existing university-based positions? Will junior researchers look at this opportunity merely as an initial stepping stone to advance their ultimate career path at university-based research centres? Will healthcare organizations be able to resist the temptation to use the researchers in putting out fires instead of protecting time and providing a stimulating academic environment? And will the key stakeholders have the foresight to invest in a vision for applied research that would take us from the current situation, of trying to bridge a divide that may be too wide, to a place where researchers and decision-makers are genuinely working together for the betterment of health policy and practice?

Ultimately, any innovation in this area would need to be evaluated on predefined, mutually agreeable measures of success. Any takers?

Acknowledgments

The authors would like to thank Professor Cam Donaldson, Newcastle University, for his helpful comments on this paper. We also appreciate the insightful comments of two anonymous reviewers.

1. Examples of such national bodies include:
  • The UK NHS Service Delivery and Organisation (SDO) R&D Programme. This is a UK-based national research program that has been established to consolidate and develop the evidence base on the organization, management and delivery of healthcare services.
  • The Centre for Knowledge Transfer. This is a Canadian national training centre in the area of knowledge utilization and policy implementation relating to health services research.
2. It should be noted that since this paper was written, there have been some reforms to R&D funding in England. Most notably, the National Institute for Health Research has been established to deliver the new R&D strategy for England – “Best Research for Best Health” – which aims to establish the NHS as an international centre of research excellence.

Footnotes

Conflict of interest: The authors are both applied health researchers without university tenure.

Contributor Information

Craig Mitton, University of British Columbia Okanagan, Michael Smith Foundation for Health Research Scholar, Kelowna, BC.

Angela Bate, Newcastle University, Newcastle, UK.

