CMAJ : Canadian Medical Association Journal. 2010 Feb 9;182(2):E73–E77. doi: 10.1503/cmaj.081231

The knowledge-to-action cycle: identifying the gaps

Alison Kitson, Sharon E Straus
PMCID: PMC2817340  PMID: 19948812

In a large study in the United States, 20% of people with type 2 diabetes mellitus had poor control of blood glucose (i.e., a hemoglobin A1c concentration greater than 9%), only one-third achieved the target blood pressure (i.e., 130/80 mm Hg) and half had low-density lipoprotein cholesterol levels above the target level.1 Less than 50% of people with a fragility fracture received a diagnostic test for osteoporosis or a diagnosis from a clinician.2 Among elderly patients with hip, wrist or vertebral fractures, only 10%–20% received therapy for osteoporosis in the year after the fracture.3 Researchers have found that evidence frequently isn’t used by local,4 national5 or international6 policy-makers.

What is a “gap”?

All of the above facts are examples of gaps. Measuring the “gap” between evidence and actual practice or policy-making is one of the first steps in knowledge translation.7 By evidence, we mean the best available research-based evidence.8 Ideally, this evidence should come from high-quality practice guidelines or systematic reviews.

We’ll use a recent example from New Zealand to illustrate how to use data to address gaps — the difference between what is desired and what is actually done. For many years, vascular guidelines in New Zealand have recommended that management of cardiovascular risk be informed by the absolute risk of a cardiovascular event.9 Moreover, the guidelines targeted treatment to those with an absolute cardiovascular risk of 15% or higher over 5 years. Researchers found that, in primary care, less than one-third of people with vascular disease were receiving the therapy recommended by the guidelines.10

Before anything can be done to improve the quality of care, we need to be able to assess current care in a simple, reliable way. Quality indicators can be used as a basis for assessing gaps. These indicators are measures used to monitor, assess and improve the quality of care and organizational functions that affect patient outcomes. Examples include appropriate control of blood pressure in patients with diabetes and previous stroke, and prophylaxis against deep vein thrombosis in critically ill patients admitted to the intensive care unit.

Donabedian11 proposed a framework for considering quality of care that separates quality into structure (i.e., the setting), process (i.e., the activity) and outcome (i.e., the status of the patient after the intervention). This framework can be used to categorize quality indicators. Considering our example of vascular risk, the availability of a computerized system for support of decision-making in a clinician’s office is a structural indicator. Completion of a vascular risk assessment by a patient or physician is a process indicator. Outcomes would include stroke, myocardial infarction and death. For each of these items, ideally we would have a descriptive statement, a list of data-based elements or criteria to measure the indicator, and information about the relevant population, how the data-based elements are collected, the timing of data collection and reporting, the analytic models used to construct the measure, the format in which the results will be presented and the evidence in support of its use (Box 1).12

Box 1. Examples of quality indicators13

Quality indicators for the assessment of absolute cardiovascular risk

  • Assessment of absolute cardiovascular risk (including tobacco use, weight/BMI, blood pressure, blood glucose and lipids) is recommended at age 45 for asymptomatic men and at age 55 for asymptomatic women

Supporting evidence

  • There are no randomized trials supporting universal screening for cardiovascular risk in population groups. However, substantial evidence exists that supports identifying people at risk of cardiovascular disease and treating them accordingly.

Many countries have instituted national strategies to collect quality indicators.12 For example, the National Institute of Clinical Studies in Australia has documented gaps between evidence and practice across a range of issues, including influenza vaccination.14 The Agency for Healthcare Research and Quality in the United States has prepared indicators to measure aspects of quality in prevention, in-hospital care, patient safety and pediatrics.15 However, little agreement exists on quality indicators across countries.

Quality indicators should be developed through consideration of the best available evidence. Investigators at RAND Health modified the Delphi method to achieve consensus on this process.16 The method involves rounds of anonymous ratings on a risk–benefit scale, with in-person discussion between rounds.17 The goal is to include all relevant stakeholders, including the public, health care professionals and managers. This process should be followed by a test of the indicator in practice-based settings to determine whether it can be measured accurately and reliably.17 For example, for our vascular risk strategy, can we measure outcomes such as death and stroke accurately? We would need to determine whether this information is collected in clinical or administrative databases and whether we can accurately extract it.
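The rating-and-consensus logic described above can be sketched in code. The sketch below is an illustration only, not the actual RAND/UCLA procedure: the 9-point appropriateness scale is standard, but the disagreement rule used here is a simplified stand-in for the panel-size-specific definitions real panels apply, and the function name is hypothetical.

```python
from statistics import median

def classify_indicator(ratings):
    """Classify a candidate indicator from panel ratings on a 1-9
    appropriateness scale. Simplified illustration of a RAND-style
    rule; real panels use size-specific disagreement definitions."""
    med = median(ratings)
    # Illustrative disagreement rule: at least one-third of panelists
    # rated in each extreme band (1-3 and 7-9).
    low = sum(1 for r in ratings if r <= 3)
    high = sum(1 for r in ratings if r >= 7)
    third = len(ratings) / 3
    if low >= third and high >= third:
        return "uncertain (disagreement)"
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"

# Round 1 ratings from a nine-member panel for one candidate indicator
print(classify_indicator([7, 8, 8, 9, 7, 6, 8, 7, 9]))  # appropriate
```

In the actual method, indicators rated uncertain or showing disagreement are discussed in person and re-rated in a subsequent round.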

Which gaps should we target?

Although many gaps in practice and policy-making could be identified in various settings, a process needs to be established for selecting which ones to target.17 Realistically, given constraints in resources, it isn’t possible to target every gap from evidence to practice. Strategies include consideration of the burden of disease, including morbidity, mortality, quality of life and cost. These discussions should be transparent and involve relevant stakeholders, including patients or the public, health care professionals and managers. The vascular risk strategy of New Zealand was developed by a collaborative that included the Ministry of Health and the New Zealand Guidelines Group. Given the burden of disease and the existence of effective therapies, vascular risk was identified as a national priority, with input from health care professionals and patient-based groups. In particular, the review by stakeholders of the evidence highlighted the need to reduce cardiovascular risk in Māori, who have the poorest health status of any group in New Zealand.

How can we measure the gap?

Needs assessment is a process for determining the size and nature of the gap between current and more desirable knowledge, skills, attitudes, behaviours and outcomes. The strategy used for assessment depends on the purpose of the assessment, the type of data and the resources available. The classification of needs includes felt needs (i.e., what people say they need), expressed needs (i.e., what people do), normative needs (i.e., what experts say), and comparative needs (i.e., group comparisons).18 We can consider this issue from the perspective of the population, the provider organization or the health care provider. As well, needs can be measured objectively or subjectively (Table 1).19

Table 1.

Strategies for needs assessments to measure gaps in assessment and management of cardiovascular risk

Measuring the gap at the population level

  • Administrative database
    Example: Claims database for prescription drugs provided by a health authority for patients over age 65; could be used to assess prescriptions for statins, antiplatelet agents and antihypertensive medications.
    Advantages: Objective measures; large, population-based database.
    Disadvantages: May not contain all relevant clinical information; coding can be incomplete; can only find events with codes; database may not include the entire population.

  • Clinical database
    Example: Database of all patients under a health authority who underwent coronary artery bypass grafting; could include information on the procedure, when it was done, follow-up visits, assessment of vascular risk and use of statins, antiplatelet agents and antihypertensive medications.
    Advantages: Objective measures; can be used in combination with administrative databases.
    Disadvantages: Information may not be accurate because it relies on reports from various sources, including hospitals and clinics; may not contain all relevant information (e.g., prescribed medications).

Measuring the gap at the organizational level

  • Chart audit
    Example: Paper-based clinic record or electronic health record used to identify documentation of cardiovascular risk, lifestyle-related advice and other recommended management for those at elevated risk.
    Advantages: Can provide information on diagnosis, comorbidities, some process measures such as blood pressure and blood glucose, and some information on medications; electronic health records facilitate data capture, especially for medications and diagnostic tests.
    Disadvantages: Information may not be complete; in a paper-based chart, contents may not be legible; completing a paper-based chart audit can be time-intensive.

Measuring the gap at the care-provider level

  • Chart audit
    Example: Paper-based or electronic health record used to identify documentation of cardiovascular risk, lifestyle-related advice and other recommended management for those at elevated risk.
    Advantages: See above.
    Disadvantages: See above.

  • Direct observation
    Example: In-clinic video recording of discussions with standardized patients about assessment and management of vascular risk.
    Advantages: Objective assessment.
    Disadvantages: Resource-intensive; may not capture the full range of actions and practices.

  • Competency assessment
    Example: Multiple-choice examination for a clinician as part of recertification, with questions focused on knowledge of assessment and reduction of cardiovascular risk.
    Advantages: Can be objective (e.g., multiple-choice examination) or more subjective (e.g., oral examination).
    Disadvantages: May not reflect actual practice; resource-intensive.

  • Reflective practice
    Example: Learning-based diary used by a clinician to record clinical questions arising during care of patients at risk of cardiovascular disease.
    Advantages: Learners identify their own needs.
    Disadvantages: May not reflect needs accurately.

At the population level

At the population level, we can consider population-based needs using epidemiological data, which provide objective measures for assessment. Administrative databases, or claims databases, are created through the administration and reimbursement of health care services.20 Typically, these databases include information on diagnosis (e.g., International Classification of Diseases, 10th Revision, Clinical Modification), procedures, laboratory investigations, billing information and some demographic information. Many administrative databases exist, ranging from regional databases, such as those provided by the Ontario Ministry of Health and Long-Term Care,21 to national databases, such as the Medicare Provider Analysis and Review files.22 Databases like these have been used to identify undertreatment of cardiovascular risk factors in patients with diabetes23 and overuse of benzodiazepines in elderly patients.24

These databases have some limitations. First, they were not developed for research-related use and may not contain all of the information that would be useful for analyzing gaps, including data on severity of illness.25 Second, coding may be incomplete, and we can only find events for which codes are available.20 Third, the databases may not include the entire population. For example, the Medicare files include only people aged 65 and older, some people under 65 with disabilities and all people with end-stage renal disease requiring renal replacement therapy.

Clinical databases can also be used to perform analyses of gaps. Clinical databases include registries of patients who have undergone specific procedures (e.g., colonoscopy) or who have certain diagnoses (e.g., colon cancer). Examples in the United Kingdom include the National Cardiac Surgical Database, which contains data on patients who have cardiac surgery, and the National Vascular Database, which contains data from surgeons who repair abdominal aortic aneurysms and perform carotid endarterectomy and infrainguinal bypass.20 These registries may hold data complementary to those in administrative databases, including more information on secondary diagnoses and comorbidities. Clinical databases can sometimes be used in combination with administrative databases to provide additional detail on gaps in practice.26 However, some studies have shown a lack of agreement between administrative and clinical databases,27 and the information these databases contain may be inaccurate.

In our New Zealand example, data were available from primary care practices that used an electronic health record. Using this information, researchers were able to identify the proportion of patient records that included documentation of cardiovascular risk factors28 and the proportion of patients who received prescriptions for statins, antiplatelet agents and antihypertensive medications.10 However, this database did not include all patients at risk of vascular disease.
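Once record-level data have been extracted from such a database, the gap itself is a simple proportion: of the patients the guideline says should be treated, how many are? The sketch below illustrates the calculation on invented records; the field names, the 15% risk threshold as the eligibility cut-off and all values are illustrative assumptions, not data from the New Zealand study.

```python
# Hypothetical patient records extracted from an electronic health record.
patients = [
    {"id": 1, "five_year_cvd_risk": 0.18, "on_statin": True},
    {"id": 2, "five_year_cvd_risk": 0.22, "on_statin": False},
    {"id": 3, "five_year_cvd_risk": 0.09, "on_statin": False},
    {"id": 4, "five_year_cvd_risk": 0.16, "on_statin": False},
]

# Guideline target group: absolute 5-year cardiovascular risk of 15% or higher.
eligible = [p for p in patients if p["five_year_cvd_risk"] >= 0.15]
treated = [p for p in eligible if p["on_statin"]]

# The gap is the untreated share of the eligible population.
gap = 1 - len(treated) / len(eligible)
print(f"{len(treated)}/{len(eligible)} eligible patients treated; gap = {gap:.0%}")
# prints: 1/3 eligible patients treated; gap = 67%
```

The same proportion can be computed per practice or per prescriber to compare performance across units, subject to the database limitations noted above.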

At the organizational level

Needs assessments at the organizational level may be done at the level of the hospital or the clinic. Hospitals in many countries are required by accreditation bodies (e.g., the Joint Commission on Accreditation of Healthcare Organizations) to collect information on, for example, control of infection, mortality and use of restraints.29 This source could be used to collect information on gaps. With the growing use of computerized health care records in hospitals and community settings, these tools can be used to extract data for assessment of gaps.30 For example, chart audits can be done to review and assess health records against preset standardized criteria for outcomes such as diagnostic tests or use of appropriate therapies. Ideally, criteria for review should be based on valid evidence for the quality indicator and include objective measures, such as whether target levels of blood pressure and blood glucose were achieved in patients with increased cardiovascular risk. An approach to consider when completing a baseline measurement is shown in Box 2.

Box 2. Questions to consider when beginning a chart audit

Questions about comparing actual and desired clinical practice (Yes / No / Not sure):

Before you measure

  • Have you secured sufficient stakeholder interest and involvement? Have you selected an appropriate topic?

  • Have you identified the right sort of people, skills and resources?

  • Have you considered ethical issues?

What to measure

  • Should your criteria be explicit or implicit?

  • Should your criteria relate to the structure, process or outcomes of care?

  • Do your criteria have sufficient impact to lead to improvements in care?

  • What level of performance is appropriate to aim for?

How to measure

  • Is the information you need available?

  • How are you identifying an appropriate sample of patients?

  • How big should your sample be?

  • How will you choose a representative sample?

  • How will you collect the information?

  • How will you interpret the information?

Reproduced with permission from Northstar. Research Based Education and Quality Improvement (ReBEQI). Oslo (Norway): Norwegian Health Services Research Centre; 2009. Available: www.rebeqi.org/?pageID=34&ItemID=35 (accessed 2009 Sept. 24).
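For the "How big should your sample be?" question in Box 2, a standard starting point is the number of charts needed to estimate a compliance proportion within a given margin of error, using the normal-approximation formula n = z^2 p(1 - p) / d^2. The sketch below applies this formula; the defaults (worst-case planning value p = 0.5, 95% confidence) are conventional assumptions rather than recommendations from this article, and the function name is ours.

```python
import math

def audit_sample_size(p=0.5, margin=0.05, z=1.96):
    """Charts needed to estimate a compliance proportion p within
    +/- margin at roughly 95% confidence (z = 1.96), before any
    finite-population correction for small clinics."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Worst-case planning value p = 0.5, margin of +/-5 percentage points
print(audit_sample_size())             # 385
# A wider margin of +/-10 points needs far fewer charts
print(audit_sample_size(margin=0.10))  # 97
```

For a small clinic whose total eligible population is not much larger than n, a finite-population correction would reduce the required sample further.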

At the care-provider level

Several strategies can be used for assessment of needs at the provider level, including chart audits, observation, assessment of competency and reflective practice. Direct observation of the performance of providers can be completed through the use of standardized patients31 or video recording of clinicians interacting with patients.32 Similarly, assessments of competency, including questionnaires about knowledge, can be completed (e.g., those done as part of the requirements for certification by the American Board of Internal Medicine or through completion of clinical vignettes).33 Finally, reflective practice, whereby clinicians use their own clinical experiences to highlight learning opportunities, or learning-based portfolios that support the identification and recording of needs from clinical experiences, can be considered.34 However, these subjective forms of assessment may be less accurate in determining needs than more objective measures such as actual practice (e.g., prescribing a particular medication). Clinicians tend to pursue education around topics that they already know while avoiding areas in which they are deficient.35 For this reason, although surveys, interviews and focus groups can inform assessments of needs, they are more subjective and may not accurately reflect gaps in practice.

Why do gaps exist?

Performing audits is one method of obtaining information about gaps in practice. However, although it is easy to use such gaps to blame clinicians, gaps from evidence to action usually reflect systems-related issues and not solely the performance of providers. For this reason, we need to look beyond the evidence of a practice gap to determine why it exists. Van de Ven36 argues that we underestimate what we already know about human behaviour; namely, that human beings have trouble paying attention to nonroutine tasks. Also, most individuals find it challenging to deal with complexity and to remember complex information,37 but they are efficient processors of routine tasks. We do not concentrate on repetitive tasks once they are mastered. Skills for performing repetitive tasks (e.g., writing admission orders) are relegated to subconscious memory, permitting us to pay attention to things other than the performance of the repetitive task. The consequence is that what most individuals do most frequently is what they think about the least. If we do not have ways of evaluating the impact of these tasks, gaps between evidence and practice can occur.

March and Simon38 state that dissatisfaction with existing conditions stimulates us to search for improved conditions and that we stop searching when a satisfactory result is found. Therefore, in any discussions about potential gaps, data need to be presented along with descriptions of individuals’ experiences and preferences for the change in practice. We feel happy and satisfied when the changes we have made correspond to our own set of beliefs about our job and we have successfully achieved the change.39

Gaps between evidence and decision-making occur for many reasons. A review of barriers to implementation of guidelines by physicians has identified more than 250 barriers.40 Barriers can range from systems-related issues, such as lack of facilities to perform assessment of vascular risk, to individual factors, such as lack of awareness of the evidence in support of assessment of vascular risk. Assessment of barriers to uptake of knowledge will be discussed in a subsequent article in this series.

What are the gaps in gap identification?

An area for further research is the testing of how data can be used to stimulate the identification of gaps in care, in monitoring changes to practice and in the introduction of new practices in a reliable and valid way. We need further understanding of ways to support greater autonomy and self-direction of local teams so they can keep vigilant over routine matters. Being clearer about how we identify the gaps from knowledge to action in the health care system is also important.41

Identifying the gaps in care is a starting point for implementation of knowledge. The next articles in this series will address how to adapt the knowledge to local context and how to understand barriers and facilitators to implementation of knowledge.

The book Knowledge Translation in Health Care: Moving from Evidence to Practice, edited by Sharon Straus, Jacqueline Tetroe and Ian D. Graham and published by Wiley-Blackwell in 2009, includes the topics addressed in this series.

Key points

  • Identifying the gaps from knowledge to practice is the starting point of implementing knowledge. Analyses of gaps should involve use of rigorous methods and engage relevant stakeholders.

  • Strategies for completing needs assessments depend on the purpose of the assessment, the type of data and the resources that are available.

  • Needs can be assessed from the perspective of a population, an organization or a health care provider.

Articles to date in this series

  • Straus SE, Tetroe J, Graham I. Defining knowledge translation. CMAJ 2009;181:165-8.

  • Brouwers M, Stacey D, O’Connor A. Knowledge creation: synthesis, tools and products. CMAJ 2009. DOI:10.1503/cmaj.081230

Footnotes

Competing interests: None declared.

Sharon Straus is section editor of Reviews at CMAJ and was not involved in the editorial decision-making process for this article.

Contributors: Both of the authors were involved in the development of the concepts in the manuscript and in the drafting of the manuscript, and both of them approved the final version submitted for publication.

This article has been peer reviewed.

REFERENCES

  • 1.Saydah SH, Fradkin J, Cowie CC. Poor control of risk factors for vascular disease among adults with previously diagnosed diabetes. JAMA. 2004;291:335–42. doi: 10.1001/jama.291.3.335.
  • 2.Papaioannou A, Giangregorio L, Kvern B, et al. The osteoporosis care gap in Canada. BMC Musculoskelet Disord. 2004;5:11. doi: 10.1186/1471-2474-5-11.
  • 3.Feldstein AC, Nichols G, Elmer P, et al. Older women with fractures: patients falling through the cracks of guideline-recommended osteoporosis screening and treatment. J Bone Joint Surg Am. 2003;85:2294–302.
  • 4.Dobbins M, Thomas H, O’Brien MA, et al. Use of systematic reviews in the development of new provincial public health policies in Ontario. Int J Technol Assess Health Care. 2004;20:399–404. doi: 10.1017/s0266462304001278.
  • 5.Lavis JN, Ross SE, Hurley JE, et al. Examining the role of health services research in public policy making. Milbank Q. 2002;80:125–54. doi: 10.1111/1468-0009.00005.
  • 6.Oxman AD, Lavis JN, Fretheim A. Use of evidence in WHO recommendations. Lancet. 2007;369:1883–9. doi: 10.1016/S0140-6736(07)60675-8.
  • 7.Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26:13–24. doi: 10.1002/chp.47.
  • 8.Straus SE, Richardson WS, Glasziou P, et al. Evidence-based medicine: how to practice and teach it. Edinburgh (UK): Elsevier; 2005.
  • 9.New Zealand Guidelines Group. Assessment and management of cardiovascular risk. Wellington (NZ): The Group; 2003. Available: www.nzgg.org.nz/guidelines/dsp_guideline_popup.cfm?guidelineID=35 (accessed 2009 Sept. 18).
  • 10.Rafter N, Connor J, Hall J, et al. Cardiovascular medications in primary care: treatment gaps and targeting by absolute risk. N Z Med J. 2005;118:U1676.
  • 11.Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260:1743–8. doi: 10.1001/jama.260.12.1743.
  • 12.Lambie L, Mattke S; Members of the OECD Cardiac Care Panel. Selecting indicators for the quality of cardiac care at the health systems level in OECD countries. Organisation for Economic Co-operation and Development; 2004. Available: www.oecd.org/dataoecd/28/35/33865450.pdf (accessed 2009 Aug. 11).
  • 13.New Zealand Guidelines Group. Assessment and management of cardiovascular risk: summary. Wellington (NZ): The Group; 2003. Available: www.nzgg.org.nz/guidelines/0035/CVD_Risk_Summary.pdf (accessed 2009 Sept. 18).
  • 14.National Institute of Clinical Studies. Evidence-practice gaps report volume two. Melbourne (Australia): The Institute; 2005. Available: www.nhmrc.gov.au/nics/material_resources/resources/evidence_volume_two.htm (accessed 2009 Sept. 18).
  • 15.AHRQ quality indicators. Rockville (MD): Agency for Healthcare Research and Quality; 2006. Available: www.qualityindicators.ahrq.gov/downloads.htm (accessed 2009 Sept. 18).
  • 16.Shekelle P. The appropriateness method. Med Decis Making. 2004;24:228–31. doi: 10.1177/0272989X04264212.
  • 17.Rosengart MR, Nathens AB, Schiff MA. The identification of criteria to evaluate prehospital trauma care using the Delphi technique. J Trauma. 2007;62:708–13. doi: 10.1097/01.ta.0000197150.07714.c2.
  • 18.Gilliam SJ, Murray SA. Needs assessment in general practice. London (UK): Royal College of General Practitioners; 1996. Occasional paper 73.
  • 19.Lockyer J. Needs assessment: lessons learned. J Contin Educ Health Prof. 1998;18:190–2.
  • 20.Zhan C, Miller MR. Administrative data-based patient safety research: a critical review. Qual Saf Health Care. 2003;12(suppl II):ii58–63. doi: 10.1136/qhc.12.suppl_2.ii58.
  • 21.Atlases. Toronto (ON): Institute for Clinical Evaluative Sciences; 2009. Available: www.ices.on.ca/webpage.cfm?site_id=1&org_id=67&hp=1 (accessed 2009 Sept. 24).
  • 22.Medicare coverage database: overview. Baltimore (MD): Centers for Medicare & Medicaid Services, United States Department of Health and Human Services; 2009. Available: www.cms.hhs.gov/MCD/overview.asp (accessed 2009 Sept. 24).
  • 23.Shah BR, Mamdani M, Jaakkimainen L, et al. Risk modification for diabetic patients. Are other risk factors treated as diligently as glycemia? Can J Clin Pharmacol. 2004;11:239–44.
  • 24.Pimlott NJ, Hux JE, Wilson LM, et al. Educating physicians to reduce benzodiazepine use by elderly patients. CMAJ. 2003;168:835–9.
  • 25.Feinstein AR. ICD, POR, and DRG: unsolved scientific problems in the nosology of clinical medicine. Arch Intern Med. 1988;148:2269–74. doi: 10.1001/archinte.148.10.2269.
  • 26.Aylin P, Bottle A, Majeed A. Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ. 2007;334:1044–8. doi: 10.1136/bmj.39168.496366.55.
  • 27.Gorelick MH, Knight S, Alessandrini EA, et al. Lack of agreement in pediatric emergency department discharge diagnoses from clinical and administrative data sources. Acad Emerg Med. 2007;14:646–52. doi: 10.1197/j.aem.2007.03.1357.
  • 28.Rafter N, Wells S, Stewart A, et al. Gaps in primary care documentation of cardiovascular risk factors. N Z Med J. 2008;121:24–33.
  • 29.The Joint Commission. Performance measurement. Oakbrook Terrace (IL): The Commission; 2009. Available: www.jointcommission.org/performancemeasurement (accessed 2009 Sept. 24).
  • 30.Rubenfeld GD. Using computerized medical databases to measure and to improve the quality of intensive care. J Crit Care. 2004;19:248–56. doi: 10.1016/j.jcrc.2004.08.004.
  • 31.Peabody JW, Luck J, Glassman P, et al. Comparison of vignettes, standardized patients and chart abstraction. JAMA. 2000;283:1715–22. doi: 10.1001/jama.283.13.1715.
  • 32.Shah SG, Thomas-Gibson S, Brooker JC, et al. Use of video and magnetic endoscopic imaging for rating competence at colonoscopy: validation of a measurement tool. Gastrointest Endosc. 2002;56:568–73. doi: 10.1067/mge.2002.128133.
  • 33.Dresselhaus TR, Peabody JW, Luck J, et al. An evaluation of vignettes for predicting variation in the quality of preventive care. J Gen Intern Med. 2004;19:1013–8. doi: 10.1007/s11606-004-0003-2.
  • 34.Dornan T, Carroll C, Parboosingh J. An electronic learning portfolio for reflective continuing professional development. Med Educ. 2002;36:767–9. doi: 10.1046/j.1365-2923.2002.01278.x.
  • 35.Davis DA, Mazmanian PE, Fordis M, et al. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296:1094–102. doi: 10.1001/jama.296.9.1094.
  • 36.Van de Ven A. Central problems in the management of innovation. Manage Sci. 1985;32:590–607.
  • 37.Johnson PE. The expert mind: a new challenge for the information scientist. In: Bemmelmans MA, editor. Beyond productivity: information systems development for organisational effectiveness. Amsterdam (Netherlands): North Holland Publishing; 1983.
  • 38.March JG, Simon H. Organizations. New York (NY): Wiley; 1958.
  • 39.Kitson AL. The need for systems change: reflections on knowledge translation and organizational culture. J Adv Nurs. 2009;65:217–28. doi: 10.1111/j.1365-2648.2008.04864.x.
  • 40.Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–65. doi: 10.1001/jama.282.15.1458.
  • 41.Grol R, Berwick DM, Wensing M. On the trail of quality and safety in health care. BMJ. 2008;336:74–6. doi: 10.1136/bmj.39413.486944.AD.
