
A Strategy to Identify Critical Appraisal Criteria for Primary Mixed-Method Studies

Joanna E. M. Sale, Kevin Brazil

Abstract

The practice of mixed-methods research has increased considerably over the last 10 years. While these studies have been criticized for violating quantitative and qualitative paradigmatic assumptions, their methodological quality has not been addressed. The purpose of this paper is to identify criteria for critically appraising the quality of mixed-method studies in the health literature. Criteria for critically appraising quantitative and qualitative studies were generated from a review of the literature and organized according to a cross-paradigm framework. We recommend that these criteria be applied to a sample of mixed-method studies judged to be exemplary. Further efforts are required, in consultation with critical appraisal experts and experienced qualitative, quantitative, and mixed-method researchers, to revise the criteria and prioritize them according to importance.

Keywords: mixed methods, multiple methods, qualitative methods, quantitative methods, critical appraisal

1. Introduction

The practice of mixed-methods research has increased considerably over the last 10 years, as evidenced by the number of articles, chapters, and books published on the subject1,2,3,4,5,6,7,8,9. Journal series have also been devoted to this topic (see Volume 19 of Health Education Quarterly10; Number 74 of New Directions for Evaluation11; Volume 34 of Health Services Research12). Although these studies have been criticized for violating quantitative and qualitative paradigmatic assumptions, their methodological quality has not been examined. While one could argue that the quality of mixed-method studies cannot be assessed until the quantitative-qualitative philosophical debate is resolved, we will demonstrate that, under the assumption that methods are linked to paradigms, it is possible to identify criteria to critically appraise mixed-method studies.

There are difficulties in developing criteria to critically appraise mixed-method studies. The definitions of qualitative and quantitative methods, as well as the paradigms linked with these methods, vary considerably. The critical appraisal criteria proposed for the methods often overlap, depending on the dominant paradigm view taken. Finally, the language of criteria is often vague, requiring the inexperienced reader to make judgement calls13. This paper addresses these difficulties by clarifying the language applied to critical appraisal and by stating the authors' assumptions explicitly.

The purpose of this paper is to identify criteria to critically appraise the methods of primary studies in the health literature that employ mixed methods. A mixed-methods study is defined as one in which quantitative and qualitative methods are combined in a single study. A primary study is defined as one that contains original data on a topic14. The methods refer to the procedures of the study. Before describing the literature review, the authors first outline their assumptions concerning methods linked to paradigms, criteria linked to paradigms, and the quantitative and qualitative methods. Second, the concept of critical appraisal as it applies to quantitative and qualitative methods is reviewed. Third, the theoretical framework upon which the authors rely is described.

1.1 Assumptions

1.1.1 Methods linked to paradigms

A paradigm is a patterned set of assumptions concerning reality (ontology), knowledge of that reality (epistemology), and the particular ways of knowing that reality (methodology)15. The assumption that methods are linked to a paradigm or paradigms is not a novel idea. It has been argued that different paradigms should be applied to different research questions within the same study and that the paradigms should remain clearly identified and separated from each other16. In qualitative studies, the methods are most often linked with the paradigms of interpretivism16,17,18 and constructivism32. However, other paradigms for qualitative methods have been proposed, such as positivism19, post-positivism13,19, postmodernism20, and critical theory20. Historically, quantitative methods have been linked with the paradigm of positivism. In recent years, quantitative methods have been viewed as evolving into the practice of post-positivism21,22. In relation to critical appraisal, we view positivism and post-positivism as compatible; for the remainder of the text, the term positivism will refer to both.

This paper assumes that quantitative methods are linked with the paradigm of positivism and that qualitative methods are linked with the paradigms of constructivism and interpretivism. This is an initial proposal for paradigms. We realize that critical appraisal criteria may change based on other paradigmatic assumptions.

1.1.2 Criteria linked to paradigms

Under the assumption that methods are linked to paradigms, it follows that the criteria to critically appraise these methods should also be linked to paradigms. The application of separate criteria for each method and paradigm is supported by many researchers23,24,25,26,27,28,29,30. Method-specific criteria provide an alternative to blending methodological assumptions27. Method-specific criteria also imply that each method is valued equally. While it is assumed here that criteria are linked to paradigms, it is acknowledged that not all researchers17,31 maintain this perspective.

1.1.3 The quantitative method

Under the assumption that quantitative methods are based on the paradigm of positivism, the underlying view is that all phenomena can be reduced to empirical indicators which represent the truth. The ontological position of this paradigm is that there is only one truth, an objective reality that exists independent of human perception. Epistemologically, the investigator and the investigated are independent entities. Therefore, an investigator is capable of studying a phenomenon without influencing it or being influenced by it; “inquiry takes place as through a one-way mirror”32.

1.1.4 The qualitative method

Under the assumption that qualitative methods are based on the paradigms of interpretivism and constructivism, multiple realities or multiple truths exist based on one’s construction of reality. Reality is socially constructed33 and is therefore constantly changing. On an epistemological level, there is no access to reality independent of our minds, no external referent by which to compare claims of truth34. The investigator and the object of study are interactively linked so that findings are mutually created within the context of the situation that shapes the inquiry35,36. This suggests that reality has no existence prior to the activity of investigation, and that reality ceases to exist when we lose interest in it34. The emphasis of qualitative research is on process and meanings36.

1.2 The Concept of Critical Appraisal for Quantitative and Qualitative Methods

Guidelines for critically appraising quantitative methods exist in many peer-reviewed journals. For example, BMJ has published criteria for the critical appraisal of manuscripts submitted to it. More recently, journals have begun publishing criteria for qualitative research; the Canadian Journal of Public Health, for example, has published guidelines for qualitative research papers. However, it has been argued that the idea of static criteria may violate the assumptions of the paradigms to which qualitative methods belong37,38,39. In this paper, we posit that there are static elements of qualitative methods which should be subject to criticism.

1.3 Theoretical Framework – Trustworthiness and Rigor proposed by Lincoln and Guba23,24

Our paper relies on Lincoln and Guba’s23,24 framework of trustworthiness and rigor. This framework, selected for its cross-paradigm appeal, guided the organization of the criteria for critically appraising mixed-method studies generated by the literature review. Although Lincoln and Guba intended the framework to contrast conventional (positivistic) and naturalistic paradigms, we feel the framework encompasses constructivism and interpretivism under that which is “naturalistic”. (Guba and Lincoln themselves have used the term constructivism to refer to naturalistic inquiry40.) The term “trustworthiness”, also referred to as “goodness criteria”35, parallels the term “rigor” used in quantitative methods24. Trustworthiness and rigor encompass four goals which apply to the respective methods and paradigms:

  1. Truth value – internal validity for quantitative methods versus credibility for qualitative methods;

  2. Applicability – external validity for quantitative methods versus transferability or fittingness for qualitative methods;

  3. Consistency – reliability for quantitative methods versus dependability for qualitative methods;

  4. Neutrality – objectivity for quantitative methods versus confirmability for qualitative methods.

Consistent with these goals, we propose to extend Lincoln and Guba’s theoretical framework to include parallel quantitative and qualitative critical appraisal criteria that can be applied to mixed-method studies.
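For readers who prefer a concrete representation, the parallel structure of this extended framework can be expressed as a simple lookup table. The Python sketch below is purely illustrative: the goal names and paired terms come from the list above, while the FRAMEWORK dictionary and parallel_term function are hypothetical constructs of ours, not part of Lincoln and Guba's work.

```python
# Illustrative only: the four goals of the framework and the parallel
# quantitative and qualitative terms under which appraisal criteria
# are grouped (see Table 3). Names beyond the goals/terms are hypothetical.
FRAMEWORK = {
    "truth value":   {"quantitative": "internal validity", "qualitative": "credibility"},
    "applicability": {"quantitative": "external validity", "qualitative": "transferability/fittingness"},
    "consistency":   {"quantitative": "reliability",       "qualitative": "dependability"},
    "neutrality":    {"quantitative": "objectivity",       "qualitative": "confirmability"},
}

def parallel_term(goal: str, method: str) -> str:
    """Return the appraisal concept for a given goal under a given method."""
    return FRAMEWORK[goal][method]

# Example: the qualitative counterpart to internal validity is credibility.
assert parallel_term("truth value", "qualitative") == "credibility"
```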

2. Method

The purpose of the literature review was to generate the criteria which have been proposed for critically appraising the methods of quantitative, qualitative, and mixed-method studies. Although the purpose of this review was not to develop a measure of critical appraisal, the initial search for criteria was based on the principles of item generation41 used in measurement theory. Following these principles, inclusion and exclusion rules were applied to the criteria considered.

2.1 Search Strategy

Articles judged by the authors to be exemplary were extracted from personal files and the files of colleagues, and were reviewed for MeSH headings and keywords. The Subject Headings Indices for Medline, CINAHL, and PsycINFO were searched for terms referring to quantitative methods, qualitative methods, mixed-methods research, and critical appraisal. Textwords (words that appear in titles or abstracts) were specified where there were no subject headings for quantitative, qualitative, and mixed-method studies. Keywords/textwords were assigned to a particular block; blocks were constructed using the Boolean ‘OR’ operator. Critical appraisal terms were then combined with the blocks for qualitative, quantitative, or mixed methods using the Boolean ‘AND’ operator. Table 1 outlines the search strategy, which included all available years of the selected databases; a sketch of the block logic follows.
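To make the block logic concrete, the following sketch shows how such a Boolean query might be assembled programmatically. It is illustrative only: the or_block helper and the two term lists are hypothetical and abbreviated; the full strategy is reported in Table 1.

```python
# Illustrative sketch of the block logic: keywords/textwords within a block
# are joined with Boolean OR; the critical appraisal block is then combined
# with a methods block using Boolean AND. Term lists here are abbreviated
# examples, not the full strategy.
def or_block(terms):
    """Join a list of search terms into a single parenthesized OR block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

appraisal_terms = ["critical appraisal", "standards"]
method_terms = ["quantitative", "qualitative", "mixed method",
                "mixed methods", "multimethod", "multiple methods"]

query = or_block(appraisal_terms) + " AND " + or_block(method_terms)
print(query)
# ("critical appraisal" OR "standards") AND ("quantitative" OR "qualitative" OR ...)
```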

Table 1.

Search strategy for critically appraising mixed-method studies

Medline, HealthStar (limited to non-Medline records) — subject headings (MeSH) and textwords:
* Nursing Research
* Research Design/st [Standards]
* Health Services Research/mt [Methods]
* Program Development/mt [Methods]
* Program Evaluation/mt [Methods]
* Research/mt [Methods]
* Research Design/mt [Methods]
* Quality Assessment/mt [Methods]
* Quality Assessment/st [Standards]
* Research/ev [Evaluation]
* Research/st [Standards]
* Study Design
critical appraisal
standards
AND
quantitative.tw
qualitative.tw
quantitative qualitative.tw
mixed method.tw
mixed methods.tw
mixed methodologies.tw
multimethod.tw
multiple methods.tw

CINAHL:
* Qualitative Studies
quantitative.tw
* Multimethod Studies
mixed method.tw

WebSPIRS (includes PsycINFO, Sociological Abstracts, Social Sciences Index) — words anywhere:
quantitative
qualitative
quantitative qualitative
mixed method
mixed methods
mixed methodologies
multimethod
multiple methods

ERIC:
quantitative [&] qualitative
mixed [&] method
multimethod
multiple methods
mixed methodologies

Current Contents:
quantitative/qualitative
qualitative/quantitative
quantitative-qualitative
qualitative-quantitative
quantitative methods
qualitative methods
mixed method
multimethod
multiple methods

* Limit to focus.

The databases Medline (1966–2000), HealthStar (1975–1999), CINAHL (1982–1999), PsycINFO (1967–2000/2001), Sociological Abstracts (1963–2000), Social Sciences Index (1983–1999), ERIC (1966–1999/09), and Current Contents (Jan 3, 1999–Feb 14, 2000) were searched. A “words anywhere” search was conducted in WebSPIRS because there are limited subject headings in the included databases. The terms for the critical appraisal block were limited in WebSPIRS because other terms such as “guidelines”, “criteria”, “assessment”, and “quality” produced a high volume of irrelevant hits. Articles were limited to those in the English language. To complement the search strategy, reference lists of retrieved articles were perused for relevant articles, and the journal Qualitative Health Research was hand-searched for the years 1997–2001. A total of four articles could not be located and therefore were not retrieved.

2.2 Inclusion and Exclusion Criteria

The articles retrieved were reviewed and the criteria for critical appraisal were subjected to inclusion and exclusion rules. Criteria appearing in a list or scale format were extracted from this format and screened independently for inclusion and exclusion. The criteria had to refer to the methods or methods sections of the research. Finally, the criteria had to be operational, in that a reader would be able to review an article and determine whether or not each criterion was met. (The authors operationalized the language typically applied in the critical appraisal literature; this document is available upon request.) Criteria requiring judgement calls on the part of the reader, such as “is the purpose of the study one of discovery and description, conceptualization, illustration, or sensitization?”42 and “was the design of the study sensible?”43, were excluded. Other exclusion criteria are noted in Table 2. Where appropriate, examples of the exclusions are given.

Table 2.

Exclusion Criteria

  1. criteria which do not fit the respective paradigm assumptions as stated – in cases where assumptions were not stated, judgement calls were made based on the criteria themselves. For example, Morse44 suggests selecting a large enough random sample in qualitative research to compensate for the exclusion of those who are not good informants. Under the assumption of constructivism and interpretivism, random sampling is not employed.

  2. criteria which are disease-specific

  3. criteria specific to design or tradition within each method and not applicable to other designs or traditions within that method – for example, criteria specific to randomized controlled trials (RCTs), case studies, or ethnographies

  4. criteria specific to review articles

  5. criteria specific to statistical tests

  6. criteria concerning the credibility of the researcher

  7. criteria specific to the quality and reporting of the literature review

  8. criteria for the elements of a proposal

  9. criteria specific to the following topics in medicine: etiology, improving clinical performance, diagnosis, prognosis, therapy, causation, economics of health care programs or interventions, practice guidelines

  10. criteria specific to the importance, relevance, or significance of the research

  11. criteria specific to measurement scales

3. Results

As we suspected, no criteria for critically appraising mixed-method studies were found. Table 3 presents the criteria for critically appraising quantitative and qualitative methods separately. Because this paper assumes that methods and criteria are linked to paradigms, the criteria appear separately. As proposed above, the organization of the table is based on the framework of trustworthiness and rigor of Lincoln and Guba23,24. The criteria have been grouped accordingly, but it is acknowledged that there may be some overlap of criteria across the four goals.

Table 3.

Review of Critical Appraisal Criteria for Quantitative and Qualitative Methods

Truth value (Credibility vs. Internal Validity)

Qualitative methods:
  • triangulation of sources 23,45,46,47
  • triangulation of methods 23,45,47,48
  • triangulation of investigators 23,45,47
  • triangulation of theory/perspective 45,47
  • peer debriefing 23,49
  • negative case analysis or searching for disconfirming evidence 13,23,45,47,50
  • member checks 23,44,48,49,50
  • use of quotations 18
  • informed consent stated 24,25,48
  • ethical review or human subject review undertaken 44,48
  • statement that confidentiality protected 26,48
  • consent procedures described 26,44

Quantitative methods:
  • extraneous 51 or confounding 52 variables identified
  • extraneous or confounding variable(s) or baseline differences controlled for in the analysis 43,51,52,53,54
  • statement about comparability of control group to intervention group at baseline 55,56
  • statement that comparison group was treated equally aside from the intervention 55
  • informed consent stated 48,57
  • ethical review undertaken 48,57
  • statement that confidentiality protected 48,57

Applicability (Transferability/Fittingness vs. External Validity/Generalizability)

Qualitative methods:
  • statement of purpose 25,42,48
  • statement of research question(s) 13,26,28,42,44,49,50,66
  • phenomenon of study stated 25,58
  • rationale for the use of qualitative methods 13
  • rationale for the tradition within qualitative methods 26
  • description of study context 26,50 or setting 23
  • statement of how setting was selected 66
  • sampling procedure described 25,59,66
  • justification or rationale for sampling strategy 49,50
  • description of participants or informants 25,26,48
  • data gathering procedures described 23,26,48
  • audiotaping procedures described 50
  • transcription procedures described 50
  • field note procedures described 50
  • data analysis described 23,26
  • coding techniques described 23,25,28,50
  • data collection to saturation specified 28
  • statement that reflexive journals or logbooks kept 49,60
  • description of raw data 23

Quantitative methods:
  • statement of purpose 48
  • objective of study explicitly stated or described 52,61
  • description of intervention if appropriate 51
  • outcome measure(s) defined 43,53,54
  • assessment of outcome blinded 43,53
  • description of setting or conditions under which data collected 51,57,67
  • design stated explicitly, i.e., case study, cross-sectional study, cohort study, RCT 51,52
  • subject recruitment or sampling selection described 43,54,67
  • sample randomly selected 62,63
  • inclusion and exclusion criteria for subject selection stated explicitly 43,51,52,64
  • study population defined or described 43,51,54,64
  • source of subjects stated, i.e., sampling frame identified 28,61
  • source of controls stated 52
  • selection of controls described 65
  • control or comparison group described 52
  • statement about nonrespondents 52 or dropouts or deaths 43,52
  • missing data addressed 52
  • power calculation to assess adequacy of sample size 52,61 or sample size calculated for adequate power 43
  • statistical procedures referenced or described 61
  • p values stated 66
  • confidence intervals given for main results 54,55,56,61,66
  • data gathering procedures described 48
  • data collection instruments or source of data described 43,51
  • at least one hypothesis stated 64
  • both statistical and clinical significance acknowledged 67

Consistency (Dependability vs. Reliability)

Qualitative methods:
  • external audit of process 23

Quantitative methods:
  • standardization of observers described 64

Neutrality (Confirmability vs. Objectivity)

Qualitative methods:
  • external audit of data and reconstructions of the data 23
  • bracketing 18,25,47
  • statement of researcher’s assumptions 13,48 or statement of researcher’s perspective 66

Quantitative methods:
  • (no operational criteria identified in the literature reviewed)

4. Discussion

This paper has attempted to propose criteria for critically appraising primary mixed-method studies. It is acknowledged that this is not a simple task; rather than a final product, this paper is an initial step in the process of identifying such criteria. The “final product” must be manageable and realistic to apply within the length of a peer-reviewed journal article. We therefore envision the final product as a reduced version of Table 3 combined with criteria specific to mixed methods, such as acknowledgement of paradigm assumptions/differences and identification of the mixed-methods design.

An alternative to this recommendation might be to rely only on the criteria shared by both methods, such as the description of participants/study population and the description of the study context or setting. However, it is the authors’ opinion that the methodological evaluation of mixed-method studies should not be limited to shared criteria. Paradigmatic differences between quantitative and qualitative methods imply that criteria shared by both methods may not be equally important to each method.

It is interesting to note that most of the criteria generated by the literature review fit into the first two categories of Lincoln and Guba’s framework. That is, most of these criteria address the goals of truth value (credibility vs. internal validity) and applicability (transferability vs. external validity). It is possible that the remaining two goals are not as relevant for critical appraisal, or that it is more difficult to operationalize criteria which fall under these goals. It is also possible that the criteria overlap and that those which are suitable for the latter goals fit better under one of the first two goals.

While this paper has attempted to identify criteria which are generic to mixed-method studies, future guidelines for criteria might be more specific depending upon the mixed-methods design. As mixed-methods research becomes more sophisticated, there may be varying combinations of criteria for the specific designs. For example, the criteria to critically appraise an ethnographic-randomized controlled trial design may differ from those for a phenomenological-case-control design.

The overall goal of proposing criteria for critically appraising mixed-method studies is to promote standards for guiding and assessing the methodological quality of studies of this nature. It is anticipated that, as with quantitative and qualitative research, there will eventually be guidelines for reporting and writing about mixed-method studies. It is hoped that this paper will stimulate future dialogue on the critical appraisal of such studies.

5. Future Considerations in the Pursuit of Critical Appraisal for Mixed-method Studies

The application of criteria to mixed-method studies will rely upon reported criteria. It is possible that criteria will have been met but not reported, owing to editorial revisions or journal guidelines concerning the length of articles. However, one purpose of this paper is to improve the reporting of methods in mixed-method studies; it can be argued, therefore, that it is appropriate to make recommendations based on criteria which have been reported.

This paper is limited by the judgements made by the authors concerning the exclusion and inclusion of criteria. We believed that the proposed criteria for each method should be generic and applicable to other studies within that method. We also believed that criteria had to be operationally defined. Having said this, “description of study population” does not specify what elements of a study population should be described; for the purpose of this paper, any description of the sample allowed this criterion to be met. All exclusions have been identified in Table 2; it is possible that these exclusions may be challenged during the revision of criteria. The authors welcome such debate. In the spirit of furthering this debate, we recommend the following considerations:

The criteria in Table 3 should be further refined and then applied to a sample of mixed-method studies in the health literature. It is also recommended that experts in mixed-methods research be asked to identify mixed-method studies which they judge to be exemplary so that the criteria met by these studies can be assessed as well.

It is noted that there are few subject headings in the selected databases which reflect the key concepts of this paper. An attempt was made to devise a comprehensive search strategy which often relied on the use of textwords. The inclusiveness of this strategy is unknown. The search strategy also focused on the peer-reviewed literature; relevant books, conference proceedings, and unpublished manuscripts may have been missed.

This paper assumed that all criteria generated by the literature were equally important. However, it is possible that the criteria met by mixed-method studies might be those which are easiest to meet rather than those which are important. The revision of criteria should involve consultation with critical appraisal experts and experienced qualitative, quantitative, and mixed-method researchers. It is proposed that this panel of experts and experienced researchers rate the criteria according to importance and that this rating be taken into account during the revision process.

References

1. Caracelli VJ, Greene JC. Data analysis strategies for mixed-method evaluation designs. Educ Eval Policy An. 1993;15:195–207.
2. Caracelli VJ, Riggin LJC. Mixed method evaluation: Developing quality criteria through concept mapping. Eval Pract. 1994;15:139–152.
3. Casebeer AL, Verhoef MJ. Combining qualitative and quantitative research methods: Considering the possibilities for enhancing the study of chronic diseases. Chronic Dis Can. 1997;18:130–135.
4. Datta L. Multimethod evaluations: Using case studies together with other methods. In: Chelimsky E, Shadish WR, editors. Evaluation for the 21st Century: A Handbook. Thousand Oaks, CA: Sage Publications; 1997. pp. 344–359.
5. Droitcour JA. Cross design synthesis: Concept and application. In: Chelimsky E, Shadish WR, editors. Evaluation for the 21st Century: A Handbook. Thousand Oaks, CA: Sage Publications; 1997. pp. 360–372.
6. Greene JC, Caracelli VJ, editors. Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms. New Directions for Program Evaluation. San Francisco: Jossey-Bass Publishers; 1997.
7. House ER. Integrating the qualitative and quantitative. In: Reichardt CS, Rallis SF, editors. The Qualitative-Quantitative Debate: New Perspectives. San Francisco: Jossey-Bass Publishers; 1994.
8. Morgan DL. Practical strategies for combining qualitative and quantitative methods: Applications to health research. Qual Health Res. 1998;8:362–376. doi: 10.1177/104973239800800307.
9. Tashakkori A, Teddlie C. Mixed Methodology: Combining Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage Publications; 1998.
10. Health Educ Q. 1992;19(1).
11. New Directions for Evaluation: Advances in Mixed-Method Evaluation: The Challenges and Benefits of Integrating Diverse Paradigms. 1997;(74).
12. Health Serv Res. 1999;34(5).
13. Marshall C. Goodness criteria: Are they objective or judgement calls? In: Guba EG, editor. The Paradigm Dialog. Newbury Park, CA: Sage Publications; 1990. pp. 188–197.
14. Oxman AD, Sackett DL, Guyatt GH, for the Evidence-Based Medicine Working Group. Users’ guides to the medical literature: I. How to get started. JAMA. 1993;270(17):2093–2095.
15. Guba EG. The alternative paradigm dialog. In: Guba EG, editor. The Paradigm Dialog. Newbury Park, CA: Sage Publications; 1990. pp. 17–30.
16. Kuzel AJ, Like RC. Standards of trustworthiness for qualitative studies in primary care. In: Norton PG, Stewart M, Tudiver F, Bass MJ, Dunn EV, editors. Primary Care Research. Newbury Park, CA: Sage Publications; 1991. pp. 138–158.
17. Altheide DL, Johnson JM. Criteria for assessing interpretive validity in qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications; 1994. pp. 485–499.
18. Secker J, Wimbush E, Watson J, Milburn K. Qualitative methods in health promotion research: Some criteria for quality. Health Educ J. 1995;54:74–87.
19. Devers KJ. How will we know ‘good’ qualitative research when we see it? Beginning the dialogue in health services research. Health Serv Res. 1999;34(5 Part II):1153–1188.
20. Creswell J. Qualitative Inquiry and Research Design: Choosing Among Five Traditions. Thousand Oaks, CA: Sage Publications; 1998.
21. Popper KR. The Logic of Scientific Discovery. New York: Basic Books; 1959.
22. Reichardt CS, Rallis SF. Qualitative and quantitative inquiries are not incompatible: A call for a new partnership. In: Reichardt CS, Rallis SF, editors. The Qualitative-Quantitative Debate: New Perspectives. San Francisco: Jossey-Bass Publishers; 1994. pp. 85–92.
23. Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications; 1985.
24. Lincoln YS, Guba EG. But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. In: Williams DD, editor. Naturalistic Evaluation. New Directions for Program Evaluation, No. 90. San Francisco: Jossey-Bass; 1986. pp. 78–84.
25. Burns N. Standards for qualitative research. Nurs Sci Q. 1989;2(1):44–52. doi: 10.1177/089431848900200112.
26. Forchuk C, Roberts J. How to critique qualitative research articles. Can J Nurs Res. 1993;25(4):47–56.
27. Foster RL. Addressing epistemological and practical issues in multimethod research: A procedure for conceptual triangulation. Adv Nurs Sci. 1997;20(2):1–12. doi: 10.1097/00012272-199712000-00002.
28. Morse JM. Evaluating qualitative research. Qual Health Res. 1991;1(3):283–286.
29. Wilson HS. Qualitative studies: From observations to explanations. J Nurs Adm. 1985:8–10.
30. Yonge O, Slewin L. Reliability and validity: Misnomers for qualitative research. Can J Nurs Res. 1988;20(2):61–67.
31. Ford-Gilboe M, Campbell J, Berman H. Stories and numbers: Coexistence without compromise. Adv Nurs Sci. 1995;13(1):14–26. doi: 10.1097/00012272-199509000-00003.
32. Guba EG, Lincoln YS. Competing paradigms in qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications; 1994. p. 110.
33. Berger PL, Luckmann T. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Garden City, NY: Doubleday; 1966.
34. Smith JK. Quantitative versus qualitative research: An attempt to clarify the issue. Educational Researcher. 1983;12:6–13.
35. Guba EG, Lincoln YS. Competing paradigms in qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications; 1994. pp. 105–117.
36. Denzin NK, Lincoln YS. Introduction: Entering the field of qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications; 1994. pp. 1–17.
37. Smith JK. Alternative research paradigms and the problem of criteria. In: Guba EG, editor. The Paradigm Dialog. Newbury Park, CA: Sage Publications; 1990. pp. 167–187.
38. Garratt D, Hodkinson P. Can there be criteria for selecting research criteria? A hermeneutical analysis of an inescapable dilemma. Qualitative Inquiry. 1998;4(4):515–539.
39. Sparkes AC. Myth 94: Qualitative health researchers will agree about validity. Qual Health Res. 2001;11(4):538–552. doi: 10.1177/104973230101100409.
40. Guba EG, Lincoln YS. Competing paradigms in qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications; 1994.
41. Berk RA. The construction of rating instruments for faculty evaluation: A review of the methodological issues. J High Educ. 1979;50(5):650–670.
42. Cobb AK, Hagemaster JN. Ten criteria for evaluating qualitative research proposals. J Nurs Educ. 1987;26(4):138–143. doi: 10.3928/0148-4834-19870401-04.
43. Greenhalgh T. Assessing the methodological quality of published papers. BMJ. 1997;315(7103):305–308. doi: 10.1136/bmj.315.7103.305.
44. Morse JM. Evaluating qualitative research. Qual Health Res. 1991;1(3):283–286.
45. Patton MQ. Enhancing the quality and credibility of qualitative analysis. In: Qualitative Evaluation and Research Methods. 2nd ed. Newbury Park, CA: Sage Publications; 1990.
46. Yin RK. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34(5 Part II):1209–1224.
47. Patton MQ. Enhancing the quality and credibility of qualitative analysis. Health Serv Res. 1999;34(5 Part II):1189–1208.
48. Elliott R, Fischer CT, Rennie DL. Evolving guidelines for publication of qualitative research studies in psychology and related fields. Br J Clin Psychol. 1999;38(3):215–229. doi: 10.1348/014466599162782.
49. Inui TS, Frankel RM. Evaluating the quality of qualitative research. J Gen Intern Med. 1991;6:485–486. doi: 10.1007/BF02598180.
50. Reid A. What we want: Qualitative research. Can Fam Physician. 1996;42:387–389.
51. Haughey BP. Evaluating quantitative research designs: Part 1. Crit Care Nurse. 1994;14(5):100–102.
52. Fowkes FGR, Fulton PM. Critical appraisal of published research: Introductory guidelines. BMJ. 1991;302:1136–1140. doi: 10.1136/bmj.302.6785.1136.
53. Department of Clinical Epidemiology and Biostatistics, McMaster University Health Sciences Centre. How to read clinical journals: III. To learn the clinical course and prognosis of disease. CMAJ. 1981;124:869–872.
54. Laupacis A, Wells G, Richardson S, Tugwell P, for the Evidence-Based Medicine Working Group. Users’ guides to the medical literature: V. How to use an article about prognosis. JAMA. 1994;272(3):234–237. doi: 10.1001/jama.272.3.234.
55. Guyatt GH, Sackett DL, Cook DJ, for the Evidence-Based Medicine Working Group. Users’ guides to the medical literature: II. How to use an article about therapy or prevention. B. What were the results and will they help me in caring for my patients? JAMA. 1994;271(1):59–63. doi: 10.1001/jama.271.1.59.
56. Levine M, Walter S, Lee H, Haines T, Holbrook A, Moyer V, for the Evidence-Based Medicine Working Group. Users’ guides to the medical literature: IV. How to use an article about harm. JAMA. 1994;271(20):1615–1619. doi: 10.1001/jama.271.20.1615.
57. Haughey BP. Evaluating quantitative research designs: Part 2. Crit Care Nurse. 1994;14(6):69–72.
58. Sale JEM, Lohfeld L, Brazil K. Revisiting the quantitative-qualitative debate: Implications for mixed-methods research. Qual Quant. 2002;36:43–53. doi: 10.1023/A:1014301607592.
59. Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res. 1998;8(3):341–351. doi: 10.1177/104973239800800305.
60. Koch T. Establishing rigour in qualitative research: The decision trail. J Adv Nurs. 1994;19:976–986. doi: 10.1111/j.1365-2648.1994.tb01177.x.
61. Gardner MJ, Machin D, Campbell MJ. Use of check lists in assessing the statistical content of medical studies. BMJ. 1986;292:810–812. doi: 10.1136/bmj.292.6523.810.
62. Greenhalgh T, Taylor R. Papers that go beyond numbers (qualitative research). BMJ. 1997;315:740–743. doi: 10.1136/bmj.315.7110.740.
63. Morse JM. Quantitative and qualitative research: Issues in sampling. In: Chinn PL, editor. Nursing Research Methodology: Issues and Implementation. Rockville, MD: Aspen Publishers; 1986.
64. Dunn EV. Basic standards for analytic studies in primary care research. In: Norton PG, Stewart M, Tudiver F, Bass MJ, Dunn EV, editors. Primary Care Research. Newbury Park, CA: Sage Publications; 1991. pp. 78–96.
65. Department of Clinical Epidemiology and Biostatistics, McMaster University Health Sciences Centre. How to read clinical journals: II. To learn about a diagnostic test. CMAJ. 1981;124:703–710.
66. Greenhalgh T. Statistics for the non-statistician. II: ‘Significant’ relations and their pitfalls. BMJ. 1997;315(7105):422–425. doi: 10.1136/bmj.315.7105.422.
67. Department of Clinical Epidemiology and Biostatistics, McMaster University Health Sciences Centre. How to read clinical journals: V. To distinguish useful from useless or even harmful therapy. CMAJ. 1981;124:1156–1162.
