Abstract
The movement for research transparency has gained irresistible momentum over the past decade. Although qualitative research is rarely published in the high-impact journals that have adopted, or are most likely to adopt, data sharing policies, qualitative researchers who publish work in these and similar venues will likely encounter questions about data sharing within the next few years. The fundamental ways in which qualitative and quantitative data differ should be considered when assessing the extent to which qualitative and mixed methods researchers should be expected to adhere to data sharing policies developed with quantitative studies in mind. We outline several of the most critical concerns below, while also suggesting possible modifications that may help to reduce the probability of unintended adverse consequences and to ensure that the sharing of qualitative data is consistent with ethical standards in research.
Keywords: Confidentiality, Data sharing, Ethnography, Mixed methods, Qualitative research, Reproducibility, Transparency
1. Introduction
In 2014, the Public Library of Science (PLOS) journals unveiled a policy stipulating that authors must make available all data underlying the findings described in their published manuscript (Bloom et al., 2014). The implementation of this new policy was something of a watershed moment; although PLOS Medicine was not the first high-impact medical journal to require data sharing as a matter of policy, it is the only one that routinely publishes findings from qualitative studies and qualitative meta-syntheses. While the new guidance permits authors some latitude in circumventing data sharing, in some ways it does resemble the obligatory and much more rigorous conditions of publication already in place at leading journals in biostatistics (Peng, 2009), economics (Ashenfelter et al., 1986; Bernanke, 2004), and political science (Meier, 1995). At the American Economic Review, for example, authors make publicly available the raw data and statistical programming code needed to reproduce all of the findings in the published manuscript, and these materials are uploaded to the journal web site prior to publication (Bernanke, 2004). The experiences in these fields suggest that leading journals can implement unilateral changes that eventually contribute to building a culture in which data sharing becomes the norm.
The movement to promote reproducible research in the medical and public health literature has lagged, perhaps for myriad reasons. First, concerns are frequently voiced about intellectual property protections and/or the potential hazard of disclosing protected health information (Hrynaszkiewicz et al., 2010; Mello et al., 2013; Tudur Smith et al., 2015). Second, because medical and public health research can often carry enormous financial implications for specific products (Rennie, 1997; Shuchman, 2005) or entire industries (Kaiser, 1997; Michaels and Monforton, 2005; Muggli et al., 2001) that are implicated in the findings, requests for data may be driven by financial motivations that extend well beyond any disinterested concerns about science for science’s sake. A researcher might be appropriately wary, for example, of responding to an industry representative’s seemingly benign request for data. Finally, there are also structural barriers to data sharing, because faculty members at schools of medicine and public health are incentivized to publish secondary findings from a given data collection effort. For example, it is not uncommon for investigators to publish secondary analyses of data from randomized trials (Rotheram-Borus et al., 2015; Tsai et al., 2016) or multiple analyses of data from the same cohort (Colditz and Hankinson, 2005; Colditz et al., 1997). These concerns apply less strongly in the social sciences. Yet because this type of research often has direct relevance for patient care, data sharing should (in general) be regarded as an imperative for ensuring transparent analysis of data and reproducibility of research findings (Doshi et al., 2012; Le Noury et al., 2015).
The movement for research transparency has gained irresistible momentum over the past decade (Groves, 2010; Hanson et al., 2011; Laine et al., 2007; Miguel et al., 2014; Nosek et al., 2015; Peng et al., 2006; PLOS Medicine Editors, 2014; Stodden et al., 2013; Tsai, 2011). Although qualitative research is rarely published in the high-impact journals (Greenhalgh et al., 2016; Shuval et al., 2011) that have adopted, or are most likely to adopt, data sharing policies, qualitative and mixed methods researchers who publish work in these and similar venues will likely encounter questions about data sharing in the years ahead, especially as mixed methods studies integrating qualitative and quantitative data become increasingly prominent (Creswell et al., 2011). The substantive ways in which qualitative and quantitative data differ should be considered when assessing the extent to which qualitative and mixed methods researchers should be expected to adhere to data sharing policies developed with purely quantitative studies in mind. We outline several of the most critical concerns below, while also suggesting possible modifications that may help to reduce the probability of unintended adverse consequences and to ensure that the sharing of qualitative data is consistent with ethical standards in research.
2. Reliability, validity, and reproducibility in qualitative research
2.1. Unique features of qualitative data production and analysis
Qualitative studies are based on data that are fundamentally different from the data collected in other observational study designs. The standardized measures employed in quantitative studies constrict the diverse perspectives of study participants along predetermined continua (e.g. categorical or continuous) so that they can be statistically aggregated. Quantitative data analysis plans (Olken, 2015) and study protocols (Horton, 1997) can be pre-specified and disseminated. The data can be anonymized and uploaded to secure data repositories. The statistical code used to process the data, and the process through which the output is translated into the manuscript text and tables, can just as easily be shared and replicated (Gandrud, 2013; Peng, 2009; Stodden et al., 2014; Vickers, 2006). External investigators can then use the electronic paper trail to verify the published findings (Dewald et al., 1986; Jefferson and Doshi, 2014; Le Noury et al., 2015; McCullough and Vinod, 2003). Data sharing, in effect, is “a threat that might keep potential cheaters honest” (p.722) (Hamermesh, 2007).
In contrast, the data collected in qualitative studies are typically obtained through in-depth interviews, focus groups, direct observation, document review, and audio recording review. These data, while typically not aimed at establishing generalizability, lend themselves to generating new theoretical insights about certain phenomena in greater depth and detail than is possible through quantitative designs (Patton, 2002). While complementary to other forms of social measurement, these data are also neither collected nor analyzed in as linear a manner, and it has been argued that the concept of reliability does not directly translate from the quantitative (rationalistic) to the qualitative (naturalistic) paradigm (Guba and Lincoln, 1981). In her influential essay, Stenbacka (2001) goes so far as to argue, “It is obvious that reliability has no relevance in qualitative research … If a qualitative study is discussed with reliability as a criterion, the consequence is rather that the study is no good” (p.552). The extremity of her viewpoint notwithstanding, more recent work in the field has sought to address questions about the validity and reliability of qualitative research findings, through the use of descriptive approaches (e.g., verification strategies (Morse et al., 2002)), quantitative approaches (e.g., calculating inter-rater reliability for comparing the assessments of multiple coders (Cohen, 1960) or proportional reduction in loss (Rust and Cooil, 1994)), and reporting checklists (Clark, 2003; O’Brien et al., 2014; Tong et al., 2007).
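To make the quantitative approaches mentioned above concrete, inter-rater agreement between two coders is often summarized with Cohen's (1960) kappa, which corrects raw percent agreement for agreement expected by chance. The sketch below is a minimal illustration in Python; the coder labels and transcript segments are hypothetical, and applied projects would typically rely on established statistical software rather than hand-rolled code.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's (1960) kappa for two raters assigning categorical codes."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed proportion of segments on which the two coders agree
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement, from each coder's marginal code frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders assign one of two codes
# to ten transcript segments, agreeing on eight of them.
coder_a = ["stigma", "stigma", "coping", "coping", "stigma",
           "coping", "stigma", "coping", "stigma", "coping"]
coder_b = ["stigma", "stigma", "coping", "stigma", "stigma",
           "coping", "stigma", "coping", "coping", "coping"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # → 0.6
```

Here raw agreement is 0.8, but because each coder uses the two codes equally often, chance agreement is 0.5, yielding kappa = (0.8 − 0.5)/(1 − 0.5) = 0.6.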
2.2. Reproducible research and qualitative data
For most readers of high-impact medical and public health journals, the term “reproducibility” will evoke the idea that external investigators ought to be able to arrive at the same published findings when given the data and analysis code (Claerbout and Karrenbach, 1992; King, 1995). In Clemens’ (in press) typology of replication and robustness, this particular type of check is described as but one form of “replication” and given the label “verification”: “ensuring that the exact statistical analysis reported in the original paper gives materially the same results reported in the paper, either using the original dataset or remeasuring with identical methods the same traits of the same sample of subjects.” This definition corresponds closely to the concept of “methods reproducibility” suggested by Goodman et al. (2016). Notably, other researchers have ignored the distinction between “replication” and “reproducibility.” For example, the Open Science Collaboration (2012) have written: “Some distinguish between ‘reproducibility’ and ‘replicability’ by treating the former as a narrower case of the latter (e.g., computational sciences) or vice versa (e.g., biological sciences). We ignore the distinction” (p.659).
Verification does not translate well to a data sharing policy for qualitative studies. Given the inherently intersubjective nature of qualitative data collection, the iterative nature of qualitative data analysis, and the unique importance of interpretation as part of the core contribution of qualitative work, verification is likely to be impossible in the setting of qualitative research. We discuss two principal reasons below.
First, some scholars have argued that interview transcripts, even when accompanied by detailed field notes, cannot represent with sufficient fidelity the actual interview that took place. Even audio and video recordings, which are generally considered the most complete observational data that can be captured, cannot convey valuable tactile and/or olfactory data obtained in the field (Bernard and Ryan, 2009). Drawing on focus groups conducted with qualitative researchers, Broom et al. (2009) showed that many of them were of the immoderate opinion that their transcript data were “an encoded account only decipherable to the individual who collected it” (p.1170). According to this understanding, we should question the extent to which interview transcripts may be considered “raw data” for external investigators to use in the same manner as a dataset taken from a randomized controlled trial of the latest me-too antidepressant medication.
Second, the interview transcripts disseminated to external investigators are unlikely to be the data they would have collected had they conducted the study themselves. A qualitative study guided by the method of grounded theory, for example, follows an inductive process with concurrent review of the data being collected, filtering of the data for relevance and meaningfulness, and grouping and naming of patterns observed in the data (Glaser and Strauss, 1967). Investigators may also choose to collect additional data, if necessary, to deepen understanding into emerging phenomena via “theoretical sampling” (Glaser, 1978). Even if the authors of a particular study uploaded the entire set of field notes or interview transcripts to a secure data repository, what do these data mean to an external investigator who might not have the same kinds of embedded cultural experiences (that would help contextualize the interview and field observation data) and who would have collected the data differently? An external investigator conducting a secondary analysis of a grounded theory dataset must be aware that, even if the same research questions are considered at the outset, s/he likely would have made very different decisions during the course of the study that would have led to an entirely different dataset being constructed. If external investigators perceive there to be gaps in the dataset they are provided by the study authors, the potential explanations for the missing data are legion: are data missing because the concepts of interest occurred too infrequently to be meaningful to the initial guiding propositions, because the phenomena were simply not present in the sample, or because the study authors’ interview probes were driven by a different conceptual lens?
Some researchers might view these unique features of qualitative modes of inquiry as befitting their position in the conventional “hierarchy” of evidence (Atkins et al., 2004; Guyatt et al., 1995). The economist Amitabh Chandra has quipped, for example, “If ethnography is a legitimate way to learn things … why aren’t [pharmaceutical] manufacturers allowed to do it?” (Chandra, 2015) Yet even quantitative data are subject to what Goodman et al. (2016) have labeled as “inferential reproducibility”: “… scientists might draw the same conclusions from different sets of studies and data or could draw different conclusions from the same original data, sometimes even if they agree on the analytical results” (p.4). Furthermore, it is important to note that secondary analyses of qualitative data would likely be able to reproduce at least some, if not all, of the major themes identified in the primary published article. However, that is not the aim of a verification test -- which is, rather, to reproduce “materially the same results reported in the paper” (Clemens, in press). Given these difficulties, it is likely that external qualitative investigators would not seek “verification” but rather “reproduction,” defined by Clemens as being another form of replication similar to verification except that reproduction studies are conducted with a different sample of study participants from the same population. This definition corresponds to the concept of “results reproducibility” suggested by Goodman et al. (2016). For example, Lewis’ (1951) re-study of the Mexican village Tepoztlán 20 years after Redfield (1930) might be considered, had it been conducted somewhat earlier, a reproduction test of a qualitative study. In theory, reproduction of a qualitative study does not require a data sharing policy.
The authors’ description of the study’s methods, especially if guided by a reporting checklist (Clark, 2003; O’Brien et al., 2014; Tong et al., 2007), should be sufficient to enable another team of investigators to conduct a reproduction test. But if reproduction, rather than verification, is the goal, then of what relevance is a data sharing policy?
3. Data sharing in qualitative research
Beyond attempts to increase transparency in the production of qualitative data, it is likely that qualitative and mixed methods researchers will need to address qualitative data sharing in some fashion. Uncritically applying standards developed for quantitative research, one might presume that qualitative data sharing involves providing the following in an online supplementary appendix: interview guides and interview transcripts, in the original language and in the translated language of the investigators (if different from the original); field notes; data used, if any, to establish inter-coder reliability; full code books; and documents, if any, describing the process of open coding, selection of codes for inclusion in the final codebook, and category construction. The “audit trail” supports reliability and validity, so even if it is recognized that no two groups would conduct identical qualitative studies, the information available to external investigators would enable them to understand how the study authors arrived at the published conclusions. Most computer-assisted qualitative data analysis software packages offer export functions that enable users to save an entire “project” (e.g., raw data, codebook, coding links, and memos), which could facilitate dissemination. While these types of maneuvers might be consistent with a data sharing policy, there are a number of challenges that could hamper their implementation in practice. Below we highlight the most significant challenges facing data sharing in qualitative research.
3.1. Preserving the anonymity or pseudonymity of study participants
Data sharing policies should carefully consider the potential effects of data sharing on study participants. Most qualitative researchers use respondent validation (e.g., reviewing emerging themes and analyses with study participants or key informants) to ensure rigor, and the practice is highlighted as a key process component of qualitative research in most reporting checklists (Clark, 2003; O’Brien et al., 2014; Tong et al., 2007). This method of data sharing through member-checking of interim findings is carefully supervised. In contrast, data sharing policies that make interview transcripts available to study participants in a completely unstructured fashion may have negative effects. Chief among these are the potential psychosocial consequences of compromising study participant anonymity.
Because qualitative study designs often lend themselves to the in-depth study of highly sensitive subject material (Kelly et al., 2011; King et al., 2013; Parkinson, 2013; Wade et al., 2005), field notes and interview transcripts would need to be anonymized prior to dissemination in order to conform with prevailing legal and ethical guidelines. Institutional Review Board concerns about participant anonymity, discussed in the PLOS policy (Bloom et al., 2014), have been identified as a leading barrier to data sharing. Consequently, investigators lacking proper guidance on how to comply with data sharing guidelines in a way that provides adequate anonymity protections may simply default to data withholding. For example, in the Data Availability Statement for their qualitative study recently published in PLOS Medicine, Christopoulos et al. (2015) stated, “Public availability of data could potentially compromise participant privacy. Participants did not consent to have their full transcripts or excerpts of transcripts made publically [sic] available.” Qualitative studies published in PLOS One subsequent to the PLOS policy adoption have made similar claims (Natoli et al., 2015; Tang et al., 2015) (although there have also been notable, and welcome, exceptions (Lo et al., 2016)).
While Institutional Review Board restrictions are commonly cited to justify withholding of quantitative data (Campbell et al., 2002), in fact it may be possible to release de-identified versions of transcripts that preserve the anonymity of qualitative study participants. The nature of any anonymization procedures would depend on the nature of the data collected and the extent to which the data can be linked with publicly available information to reveal specific identities. At a minimum, the anonymization procedures would entail redaction or alteration of protected health information and any specific encounter details that reveal, however indirectly, the identity of any of the parties to the encounter, with obfuscated information shown in brackets. The investigator might keep a detailed record of these procedures in a secure location should it become necessary to revisit the data after publication (Table 1), similar to the recommendations made in the Privacy Certificate Guidance of the U.S. National Institute of Justice (2007). As a cautionary note, depending on the size of the dataset, the redaction or anonymization process could require tremendous time and effort of the investigators and could also potentially introduce errors and inconsistencies (Goffman, 2014; Lewis-Kraus, 2016). Additionally, for some studies, the nature of the research (Parkinson, 2013) may be such that any suitably redacted or anonymized transcripts might be so unserviceably thin that they would be devoid of meaningful content. Wolcott (1973) discusses this possibility in the introduction of his classic ethnography: “To present the material in such a way that even the people central to the study are ‘fooled’ by it is to risk removing those very aspects that make it vital, unique, believable, and at times painfully personal” (p. 4).
Table 1. Example record of anonymization procedures applied to interview excerpts.
Study participant | Line number | Original | Anonymized |
---|---|---|---|
Clinic patient 2 | 79 | “My husband has been beating me regularly since I was married to him at age 18” | “My husband has been beating me regularly since I was married to him at [a young age]” |
Clinic patient 2 | 85 | “Just the other day he got angry with me because there was no water and our eldest went to school in a soiled uniform. He threw the empty jerricans at me and you now see the bruise on my left eye” | “[ ] He got angry with me because there was no water [ ]. He [attacked me] and you now see [my face]” |
Community member 8 | 243 | “I am the headmaster of the Buhingo Boarding School. What would the parents say if they knew I was HIV positive?” | “I am the headmaster of [a school]. What would the parents say if they knew I was HIV positive?” |
Clinic patient 53 | 164 | “I was in the hospital for a week after injuring my left leg in a boda boda accident. The nurse at the Mbarara Hospital chastised me when she found out my HIV status.” | “I was in the hospital [after a transportation accident]. The nurse [ ] chastised me when she found out my HIV status.” |
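The bracket-style redactions shown in Table 1 lend themselves to partial automation. The sketch below is a hypothetical illustration in Python: the patterns, replacement labels, and `redact` helper are invented for this example and are not a validated de-identification tool; as the text notes, automated rules will miss context-dependent identifiers and require careful human review.

```python
import re

# Hypothetical redaction rules: pattern -> bracketed replacement.
# A real workflow would be developed and reviewed by the research
# team, since rule-based matching misses context-dependent identifiers.
REDACTION_RULES = [
    (re.compile(r"\bat age \d+\b"), "at [a young age]"),
    (re.compile(r"\bthe (Mbarara|Buhingo)\s+\w+"), "[a named institution]"),
]

def redact(text, rules=REDACTION_RULES, log=None):
    """Apply bracket-style redactions, optionally logging each change."""
    for pattern, replacement in rules:
        for match in pattern.finditer(text):
            if log is not None:
                # Record original-to-anonymized mappings, analogous to
                # the secure record of alterations described in Table 1.
                log.append((match.group(0), replacement))
        text = pattern.sub(replacement, text)
    return text

audit_log = []
print(redact("The nurse at the Mbarara Hospital chastised me.", log=audit_log))
# → The nurse at [a named institution] chastised me.
```

The audit log would be stored in a secure location, separate from the disseminated transcripts, so that the original text can be revisited after publication if necessary.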
Because interview transcripts contain verbatim quotations, it is likely that some transcripts cannot be sufficiently anonymized to prevent deductive disclosure, or what Tolich (2004) has called violations of “internal confidentiality.” That is, study participants could recognize themselves, their communities, or other study participants (if they belong to the same community) (Larossa et al., 1981). van den Hoonaard (2003) holds that anonymity is “a virtual impossibility in ethnographic research” (p.141). Depending on the sensitivity of the subject matter, deductive disclosure could result in harm to study participants and their relationships with others in the community. Ellis (1995), Scheper-Hughes (2000), and Stein (2010) have famously written about being angrily received by study participants over deductive disclosures following the publication of their celebrated books (Ellis, 1986; Scheper-Hughes, 1977; Stein, 2001). If such harm could result from the publication of books and journal articles in which verbatim quotations are carefully curated, one can imagine the harm resulting from a data sharing policy requiring entire interview transcripts to be shared.
Certain types of studies may carry even greater risks of deductive disclosure. These include studies of small-scale societies; studies that rely on respondent-driven sampling and other variations of snowball sampling to identify hard-to-reach populations; and studies in which permission to access a small community must be first secured from highly networked research gatekeepers, such as village leaders or community advisory boards. In these settings, a minor, idiosyncratic detail -- such as a manner of speaking or a specific phrase -- that is of unknown significance to the investigator (and therefore likely to go unredacted) could result in deductive disclosure and potential harm. In addition to the risk of harm to study participants, deductive disclosure also raises important questions about potential risks to third-party non-participants when study participants disclose sensitive information about social network ties that arises from their shared history with others (Larossa et al., 1981; Lounsbury et al., 2007; McLellan et al., 2003).
Related to the above, data sharing potentially further limits qualitative research done through “studying-up” (Nader, 1969) or “studying over” (Markowitz, 2001) -- approaches in which persons in positions of power (e.g., hospital administrators, pharmaceutical company executives, heads of governmental or multilateral organizations) become the subject of ethnographic study (Abramowitz and Panter-Brick, 2015; Closser, 2010). Because elites are more empowered to articulate concerns about confidentiality and disclosure, data sharing could unintentionally perpetuate power differentials in which health program beneficiaries endure as research subjects while health program funders and implementers remain understudied (Schneider and Aguiar, 2012).
Given the greater risks of deductive disclosure through unregulated data sharing (as contrasted with the carefully curated release of specific quotations through publication of study findings), consent documents for qualitative studies would need to properly inform prospective study participants that the interview transcripts could potentially be uploaded to a shared data repository for public consumption. Even researchers who have no intentions to share the data might be advised to seek informed consent from study participants at the outset simply to preserve the option in the future (Groves, 2010). Although study participants’ exposure to such risk would ultimately be contingent on the researchers’ decision to publish their findings in a journal where a data sharing policy is enforced, it is likely that such a caveat -- however conditional -- would result in selection on unobserved heterogeneity. These selective pressures could shape the types of persons who agree to participate in qualitative and mixed methods studies; alternatively, these selective pressures could have no impact on the types of persons who agree to participate but could shape the nature of the data they are willing to share with investigators. Either of these selective pressures would likely compromise the quality of the research, thereby upending one of the distinctive advantages of qualitative research, which is the ability to conduct in-depth examinations of sensitive subject material (Kelly et al., 2011; King et al., 2013; Parkinson, 2013; Wade et al., 2005).
To minimize the risk of deductive disclosure, a data sharing policy might, in lieu of obliging the release of interview transcripts, require investigators to implement procedures to enhance transparency. Many aspects of the qualitative analysis (e.g., transcription rules, data segmentation, coding units, process for code development, finalized codes) could be shared with minimal risk to study participants. Taking transparency a step further, investigators could export coding queries and make these available to external investigators. Because coding queries consist of excerpted and possibly disembodied interview text, they may offer greater anonymity compared with full transcripts. Depending on the interview content, investigators may still need to redact some of the text to preserve anonymity -- which would entail added burden -- but the risk of deductive disclosures would be reduced. The release of coding queries has not been suggested in the ongoing conversation on data sharing in qualitative research but should be regarded as a viable and potentially more ethical way to promote transparency than the release of full transcripts.
An example of a coding query, applied to data from Kohrt et al. (2010) and Morley and Kohrt (2013), is provided in the Electronic Supplementary Appendix. Coding queries would provide external investigators with comprehensive information that could be used to qualitatively assess the internal coherence of the coding scheme (Box 1). In qualitative research, study participants often present conflicting or contradictory views on the same topic based on varying influences such as the nature of the question and the time elapsed during the interview (LeCompte and Schensul, 1999). Discrepant data may be especially important in longitudinal studies where study participants provide serial interviews during the course of an illness or throughout their lifetimes, thereby gaining increasing familiarity with a particular interviewer. With some exceptions (Groleau et al., 2006), these processes are rarely captured in academic publications, which tend to present views as static and internally coherent. Ultimately, much like the sharing of data from quantitative studies can provide opportunities to conduct detailed interrogations of the scientific record (Le Noury et al., 2015), coding queries can help reviewers and external investigators assess whether the quotes provided in manuscripts and journal articles capture the overall content of the data or whether they represent selective reporting of study participants’ perspectives in a way that suits the authors’ theses.
Box 1. Using coding queries to evaluate the internal coherence of the coding scheme.
Do the quotes represent similar concepts to a sufficient degree to justify a coherent theme?
Is the concept shared among study participants throughout the sample, or is it limited to a specific subset? If limited to a specific subset, is the circumscribed nature of the concept adequately described in the manuscript?
Does the description or valence of the concept change during the course of the interview or during the course of multiple interviews with the same study participant? If so, are these changes adequately described in the manuscript?
Does the choice of quotes, and their accompanying descriptions, presented in the manuscript adequately capture the content and diversity of the coding query?
3.2. Other unintended consequences of qualitative data sharing
In addition to the risk of deductive disclosures, a number of other unintended consequences could result from data sharing policies if they are not properly tailored to the unique aspects of qualitative and mixed methods research. First, the burden of organizing qualitative data for inspection or use by external investigators could easily exceed the work of writing the manuscript itself. How should the interests of research transparency be weighed against the potential costs of documentation burden? Redacting the hundreds of pages of transcripts collected during the course of a small qualitative study would require months of work. Moreover, there are no standards in the field for systematically documenting the hours of conversations, conference calls, and e-mail exchanges required for code selection and category construction. Guidelines would need to be developed so that documentation of these procedures is uniform across studies. Larger qualitative and mixed methods studies would entail an even greater documentation burden. For example, the longitudinal qualitative study by Maman et al. (2014) involved 657 study participants and 1059 in-depth interviews, with each interview lasting 30–60 minutes. Even redacting just the 175-page summary reports for each of the 48 sites -- much less the primary interview transcripts -- would have required the review of more than 8000 pages of data. In what format should such data be made available to meet the conditions of a reasonable data sharing policy?
Second, and related to the above, journals should consider the possibility that, in response to data sharing policies, study participants and qualitative researchers may alter their behavior in undesirable ways. Will qualitative researchers, whose work is already de facto excluded from most high-impact journals (Greenhalgh et al., 2016; Shuval et al., 2011), shy away from submitting their work to these journals, where data sharing policies are increasingly enforced? Will they be discouraged from conducting large-sample qualitative studies, knowing the documentation burden that will be involved? Furthermore, it is one thing to make available several hundred pages of interview transcripts from a two- to three-year qualitative study conducted by paid research assistants. It is another thing to make available thousands of pages of field notes and journal entries -- some of which may be intensely personal in content -- accumulated during the course of a five-year ethnography. Ethnographic note-taking guidelines that separate field notes according to observation, interpretation, and personal reflection (Bernard, 2006) could potentially facilitate data sharing by restricting dissemination to material related to observation. Unless qualitative researchers have a secure understanding that certain types of material can be shielded from dissemination, they may be motivated to alter the underlying data, i.e., by withholding this material from the written or transcribed record (Baez, 2002; Goodwin et al., 2003; McLellan et al., 2003; Scheper-Hughes, 2000) or by maintaining a set of private “shadow files” separate from the official research record (similar to the detailed “psychotherapy notes” that therapists store apart from the medical record).
Box 2 summarizes our recommendations for journal policies that would promote transparency and, in some cases, facilitate sharing of qualitative data, while remaining sensitive to the unique attributes that require qualitative data to be handled somewhat differently than quantitative data.
Box 2. Summary of recommendations for journal editors.
Require a statement from authors about whether the consent process included a description of any public availability of data. Prior to public dissemination of data, authors should provide a statement to journal editors about whether or not study participants were informed about future plans for public availability of data and the manner in which this issue was addressed, if at all, during the informed consent process.
Require adherence to minimum standards for de-identification of publicly shared data. Under the 1996 U.S. Health Insurance Portability and Accountability Act, protected health information includes 18 identifiers (e.g., names, addresses, serial numbers) that must be treated with special care in quantitative datasets. These same identifiers should be removed from qualitative data prior to dissemination. The geographic subdivision requirement, which stipulates that geographic units contain 20,000 or fewer people, requires special attention. If qualitative researchers are working in a village or community with fewer than 20,000 people, then site pseudonyms or larger geographic divisions should be used in published reports (e.g., providing the sub-county name rather than the parish or village name).
Encourage authors to use, and publish, data from multiple informants and/or institutions per selection category. Whenever possible, authors should be encouraged to recruit more than one informant and more than one institution per category. For example, interviewing only one surgeon at a hospital or only one official at a ministry of health increases the probability that comments can be traced back to that study participant (or that participant’s institution). If two or more informants are recruited per selection category and a range of institutions are included, the probability of identification may be reduced. Journal policies related to this provision should be cognizant of the smaller amounts of funding typically awarded for qualitative research and the correspondingly smaller scale of qualitative studies.
Permit coding queries to be shared as an alternative to full transcripts. Coding queries may offer greater anonymity compared with full transcripts because statements are grouped by theme rather than by study participant. Furthermore, coding queries allow a form of verification of the findings reported in the results and conclusions. For the purposes of promoting transparency in qualitative research, these should be considered acceptable, or possibly even preferable, alternatives to full transcripts.
Encourage anonymization of field notes. Ethnographers frequently rely on field notes as a source of data. These could be anonymized in the same fashion as interview transcripts before being made publicly available. Because field notes include a range of objective, subjective, and interpretative documentation, requests for field notes should be limited to objective excerpts. Field notes, as with other forms of qualitative data, could also be submitted in the form of coding queries, with the same advantages as discussed above.
Encourage authors to document social audits or other stakeholder dissemination at the time of manuscript submission. A major source of dispute between participants and researchers arises when participants feel that their responses have been selectively represented in the reported results or in recommendations drawn from the data. Public availability of qualitative data may therefore be especially contentious if study participants, or their representatives (e.g., local leaders), have not signed off on the researchers’ interpretations. Social audits or other stakeholder dissemination of results and conclusions prior to public availability of data will foster participants’ perceptions of inclusiveness and accurate representation.
Encourage manuscript reviewers with requisite expertise in qualitative and mixed methods research to comment on the adequacy of anonymization. Study authors are ultimately responsible for anonymization. However, to promote good scientific practice, journal editors should encourage manuscript reviewers with requisite expertise in qualitative and mixed methods research to comment on the adequacy of anonymization and to raise any concerns they may have regarding potential maleficence resulting from data sharing.
Establish a petitioning process for non-disclosure of data. Authors should have the option of petitioning for non-disclosure of qualitative data in select instances. These include scenarios in which the study could not have yielded important results if participants had been required to consent to public disclosure of data, or in which anonymization could not be made adequate given the uniqueness of the study population or the data.
4. Conclusion
Data sharing in medical and public health research is increasingly becoming the norm, but medical and public health journals have yet to grapple with how to promote data sharing feasibly and ethically for qualitative and mixed methods research. Recent advances in the field have begun to enhance the reliability and validity of qualitative data. Data sharing may help to increase confidence in qualitative research findings, but the concept of reproducible research does not translate as straightforwardly from quantitative data to qualitative data. Data sharing policies may be feasible for qualitative studies, but leading medical and public health journals should consider modifying their policies to be more relevant to the unique aspects of qualitative and mixed methods study designs; they must also address concerns about potential violations of participant anonymity and other unintended adverse consequences. Such policies, if appropriately implemented, can build a culture of data sharing that also facilitates critical, patient-oriented qualitative and mixed methods research.
Acknowledgments
We thank Norma C. Ware, PhD, for her comments on an earlier draft of the manuscript.
Funding
No specific funding was received for the preparation of this manuscript. The authors acknowledge salary support through K23MH096620, K01MH104310, and K23MH095655. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Appendix A. Supplementary data
The following is the supplementary data related to this article:
Footnotes
Competing interests
ACT is an Editorial Associate for Social Science and Medicine, Associate Editor for SSM - Population Health, and a Specialty Consulting Editor for Public Library of Science Medicine. SLD is Associate Editor of the Archives of Sexual Behavior.
References
- Abramowitz SA, Panter-Brick C, editors. Medical Humanitarianism: Ethnographies of Practice. University of Pennsylvania Press; Philadelphia: 2015.
- Ashenfelter O, Haveman RH, Riley JG, Taylor JT. Editorial statement. Am Econ Rev. 1986;76(4).
- Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ. 2004;328(7454):1490. doi: 10.1136/bmj.328.7454.1490.
- Baez B. Confidentiality in qualitative research: reflections on secrets, power and agency. Qual Res. 2002;2(1):35–58.
- Bernanke BS. Editorial statement. Am Econ Rev. 2004;94(1):404.
- Bernard HR. Research Methods in Anthropology: Qualitative and Quantitative Approaches. 4th ed. AltaMira Press; Lanham: 2006.
- Bernard HR, Ryan GW. Analyzing Qualitative Data: Systematic Approaches. Sage Publications, Inc; Los Angeles: 2009.
- Bloom T, Ganley E, Winker M. Data access for the open access literature: PLOS’s data policy. Public Libr Sci Med. 2014;11(2):e1001607.
- Broom A, Cheshire L, Emmison M. Qualitative researchers’ understandings of their practice and the implications for data archiving and sharing. Sociology. 2009;43(6):1163–1180.
- Campbell EG, Clarridge BR, Gokhale M, Birenbaum L, Hilgartner S, Holtzman NA, et al. Data withholding in academic genetics: evidence from a national survey. J Am Med Assoc. 2002;287(4):473–480. doi: 10.1001/jama.287.4.473.
- Chandra A (@amitabhchandra2). If ethnography is a legitimate way to learn things… why aren’t Rx manufacturers allowed to do it? June 15, 2015, 5:33 AM. Tweet.
- Christopoulos KA, Olender S, Lopez AM, Lekas HM, Jaiswal J, Mellman W, et al. Retained in HIV care but not on antiretroviral treatment: a qualitative patient-provider dyadic study. Public Libr Sci Med. 2015;12(8):e1001863. doi: 10.1371/journal.pmed.1001863.
- Claerbout JF, Karrenbach M. Electronic documents give reproducible research a new meaning. In: SEG Technical Program Expanded Abstracts. Society of Exploration Geophysicists; Tulsa: 1992. pp. 601–604.
- Clark JP. How to peer review a qualitative manuscript. In: Godlee F, Jefferson T, editors. Peer Review in Health Sciences. 2nd ed. BMJ Books; London: 2003. pp. 219–235.
- Clemens MA. The meaning of failed replications: a review and proposal. J Econ Surv. 2016. Epub ahead of print 26 Dec 2015 (in press).
- Closser S. Chasing Polio in Pakistan: Why the World’s Largest Public Health Initiative May Fail. Vanderbilt University Press; Nashville: 2010.
- Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20(1):37–46.
- Colditz GA, Hankinson SE. The Nurses’ Health Study: lifestyle and health among women. Nat Rev Cancer. 2005;5(5):388–396. doi: 10.1038/nrc1608.
- Colditz GA, Manson JE, Hankinson SE. The Nurses’ Health Study: 20-year contribution to the understanding of health among women. J Women’s Health. 1997;6(1):49–62. doi: 10.1089/jwh.1997.6.49.
- Creswell JW, Klassen AC, Plano Clark VL, Smith KC for the Office of Behavioral and Social Sciences Research. Best Practices for Mixed Methods Research in the Health Sciences. U.S. National Institutes of Health; Washington, D.C: 2011.
- Dewald WG, Thursby JG, Anderson RG. Replication in empirical economics: the Journal of Money, Credit and Banking project. Am Econ Rev. 1986;76(4):587–603.
- Doshi P, Jefferson T, Del Mar C. The imperative to share clinical study reports: recommendations from the Tamiflu experience. Public Libr Sci Med. 2012;9(4):e1001201. doi: 10.1371/journal.pmed.1001201.
- Ellis C. Fisher Folk: Two Communities on Chesapeake Bay. University Press of Kentucky; Lexington: 1986.
- Ellis C. Emotional and ethical quagmires in returning to the field. J Contemp Ethnogr. 1995;24(1):68–98.
- Gandrud C. Reproducible Research with R and R Studio. Chapman and Hall/CRC; London: 2013.
- Glaser BG. Theoretical Sensitivity. Sociology Press; Mill Valley: 1978.
- Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Transaction; Chicago: 1967.
- Goffman A. On the Run: Fugitive Life in an American City. University of Chicago Press; Chicago: 2014.
- Goodman SN, Fanelli D, Ioannidis JP. What does research reproducibility mean? Sci Transl Med. 2016;8(341):341ps12. doi: 10.1126/scitranslmed.aaf5027.
- Goodwin D, Pope C, Mort M, Smith A. Ethics and ethnography: an experiential account. Qual Health Res. 2003;13(4):567–577. doi: 10.1177/1049732302250723.
- Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to The BMJ editors on qualitative research. BMJ. 2016;352:i563. doi: 10.1136/bmj.i563.
- Groleau D, Young A, Kirmayer LJ. The McGill Illness Narrative Interview (MINI): an interview schedule to elicit meanings and modes of reasoning related to illness experience. Transcult Psychiatry. 2006;43(4):671–691. doi: 10.1177/1363461506070796.
- Groves T. BMJ policy on data sharing. BMJ. 2010;340:c564. doi: 10.1136/bmj.c564.
- Guba EG, Lincoln YS. Effective Evaluation: Improving the Usefulness of Evaluation Results through Responsive and Naturalistic Approaches. Jossey-Bass; San Francisco: 1981.
- Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ. Users’ guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group. J Am Med Assoc. 1995;274(22):1800–1804. doi: 10.1001/jama.274.22.1800.
- Hamermesh DS. Viewpoint: replication in economics. Can J Econ. 2007;40(3):715–733.
- Hanson B, Sugden A, Alberts B. Making data maximally available. Science. 2011;331(6018):649. doi: 10.1126/science.1203354.
- Horton R. Pardonable revisions and protocol reviews. Lancet. 1997;349(9044):6. doi: 10.1016/S0140-6736(05)62158-7.
- Hrynaszkiewicz I, Norton ML, Vickers AJ, Altman DG. Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers. Trials. 2010;11:9. doi: 10.1186/1745-6215-11-9.
- Jefferson T, Doshi P. Multisystem failure: the story of anti-influenza drugs. BMJ. 2014;348:g2263. doi: 10.1136/bmj.g2263.
- Kaiser J. Showdown over clean air science. Science. 1997;277(5325):466–469. doi: 10.1126/science.277.5325.466.
- Kelly JT, Betancourt TS, Mukwege D, Lipton R, Vanrooyen MJ. Experiences of female survivors of sexual violence in eastern Democratic Republic of the Congo: a mixed-methods study. Confl Health. 2011;5:25. doi: 10.1186/1752-1505-5-25.
- King G. Replication, replication. PS Political Sci Polit. 1995;28(3):444–452.
- King R, Barker J, Nakayiwa S, Katuntu D, Lubwama G, Bagenda D, et al. Men at risk: a qualitative study on HIV risk, gender identity and violence among men who have sex with men who report high risk behavior in Kampala, Uganda. Public Libr Sci One. 2013;8(12):e82937. doi: 10.1371/journal.pone.0082937.
- Kohrt BA, Tol WA, Pettigrew J, Karki R. Children and revolution: the mental health and psychosocial wellbeing of child soldiers in Nepal’s Maoist Army. In: Singer M, Hodge GD, editors. The War Machine and Global Health. AltaMira Press; Lanham: 2010. pp. 89–116.
- Laine C, Goodman SN, Griswold ME, Sox HC. Reproducible research: moving toward research the public can really trust. Ann Intern Med. 2007;146(6):450–453. doi: 10.7326/0003-4819-146-6-200703200-00154.
- Larossa R, Bennett LA, Gelles RJ. Ethical dilemmas in qualitative family research. J Marriage Fam. 1981;43(2):303–313.
- Le Noury J, Nardo JM, Healy D, Jureidini J, Raven M, Tufanaru C, et al. Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence. BMJ. 2015;351:h4320. doi: 10.1136/bmj.h4320.
- LeCompte MD, Schensul JJ. Designing and Conducting Ethnographic Research. AltaMira Press; Walnut Creek: 1999.
- Lewis O. Life in a Mexican Village: Tepoztlan Restudied. University of Illinois Press; Urbana: 1951.
- Lewis-Kraus G. The Changeling. NYT Sunday Magazine. 2016:31–37, 56–60.
- Lo C, Ilic D, Teede H, Cass A, Fulcher G, Gallagher M, et al. The perspectives of patients on health-care for co-morbid diabetes and chronic kidney disease: a qualitative study. Public Libr Sci One. 2016;11(1):e0146615. doi: 10.1371/journal.pone.0146615.
- Lounsbury DW, Reynolds TC, Rapkin BD, Robson ME, Ostroff J. Protecting the privacy of third-party information: recommendations for social and behavioral health researchers. Soc Sci Med. 2007;64(1):213–222. doi: 10.1016/j.socscimed.2006.08.035.
- Maman S, van Rooyen H, Stankard P, Chingono A, Muravha T, Ntogwisangu J, et al. NIMH Project Accept (HPTN 043): results from in-depth interviews with a longitudinal cohort of community members. Public Libr Sci One. 2014;9(1):e87091. doi: 10.1371/journal.pone.0087091.
- Markowitz L. Finding the field: notes on the ethnography of NGOs. Hum Organ. 2001;60(1):40–46.
- McCullough BD, Vinod HD. Verifying the solution from a nonlinear solver: a case study. Am Econ Rev. 2003;93(3):873–892.
- McLellan E, MacQueen KM, Neidig JL. Beyond the qualitative interview: data preparation and transcription. Field Methods. 2003;15(1):63–84.
- Meier KJ. Replication: a view from the streets. PS Political Sci Polit. 1995;28(3):456–459.
- Mello MM, Francer JK, Wilenzick M, Teden P, Bierer BE, Barnes M. Preparing for responsible sharing of clinical trial data. N Engl J Med. 2013;369(17):1651–1658. doi: 10.1056/NEJMhle1309073.
- Michaels D, Monforton C. Manufacturing uncertainty: contested science and the protection of the public’s health and environment. Am J Public Health. 2005;95(Suppl 1):S39–S48. doi: 10.2105/AJPH.2004.043059.
- Miguel E, Camerer C, Casey K, Cohen J, Esterling KM, Gerber A, et al. Promoting transparency in social science research. Science. 2014;343(6166):30–31. doi: 10.1126/science.1245317.
- Morley CA, Kohrt BA. Impact of peer support on PTSD, hope, and functional impairment: a mixed-methods study of child soldiers in Nepal. J Aggress Maltreatment Trauma. 2013;22(7):714–734.
- Morse JM, Barrett M, Mayan M, Olson K, Spiers J. Verification strategies for establishing reliability and validity in qualitative research. Int J Qual Methods. 2002;1(2):13–22.
- Muggli ME, Forster JL, Hurt RD, Repace JL. The smoke you don’t see: uncovering tobacco industry scientific strategies aimed against environmental tobacco smoke policies. Am J Public Health. 2001;91(9):1419–1423. doi: 10.2105/ajph.91.9.1419.
- Nader L. Up the anthropologist – perspectives gained from studying up. In: Hymes D, editor. Reinventing Anthropology. Pantheon; New York: 1969. pp. 284–311.
- Natoli L, Guy RJ, Shephard M, Causer L, Badman SG, Hengel B, et al. “I do feel like a scientist at times”: a qualitative study of the acceptability of molecular point-of-care testing for chlamydia and gonorrhoea to primary care professionals in a remote high STI burden setting. Public Libr Sci One. 2015;10(12):e0145993. doi: 10.1371/journal.pone.0145993.
- Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, et al. Promoting an open research culture. Science. 2015;348(6242):1422–1425. doi: 10.1126/science.aab2374.
- O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–1251. doi: 10.1097/ACM.0000000000000388.
- Olken BA. Promises and perils of pre-analysis plans. J Econ Perspect. 2015;29(3):61–80.
- Open Science Collaboration. An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspect Psychol Sci. 2012;7(6):657–660. doi: 10.1177/1745691612462588.
- Parkinson SE. Organizing rebellion: rethinking high-risk mobilization and social networks in war. Am Political Sci Rev. 2013;107(3):418–432.
- Patton MQ. Qualitative Research and Evaluation Methods. 3rd ed. Sage Publications; Thousand Oaks: 2002.
- Peng RD. Reproducible research and biostatistics. Biostatistics. 2009;10(3):405–408. doi: 10.1093/biostatistics/kxp014.
- Peng RD, Dominici F, Zeger SL. Reproducible epidemiologic research. Am J Epidemiol. 2006;163(9):783–789. doi: 10.1093/aje/kwj093.
- PLOS Medicine Editors. Observational studies: getting clear about transparency. Public Libr Sci Med. 2014;11(8):e1001711. doi: 10.1371/journal.pmed.1001711.
- Redfield R. Tepoztlan: a Mexican Village. University of Chicago Press; Chicago: 1930.
- Rennie D. Thyroid storm. J Am Med Assoc. 1997;277(15):1238–1243.
- Rotheram-Borus MJ, Tomlinson M, Roux IL, Stein JA. Alcohol use, partner violence, and depression: a cluster randomized controlled trial among urban South African mothers over 3 years. Am J Prev Med. 2015;49(5):715–725. doi: 10.1016/j.amepre.2015.05.004.
- Rust RT, Cooil B. Reliability measures for qualitative data: theory and implications. J Mark Res. 1994;31(1):1–14.
- Scheper-Hughes N. Saints, Scholars and Schizophrenics: Mental Illness in Rural Ireland. University of California Press; Berkeley: 1977.
- Scheper-Hughes N. Ire in Ireland. Ethnography. 2000;1(1):117–140.
- Schneider CJ, Aguiar LM. Researching Amongst Elites: Challenges and Opportunities in Studying up. Ashgate Publishing, Ltd; Farnham: 2012.
- Shuchman M. The Drug Trial: Nancy Olivieri and the Science Scandal that Rocked the Hospital for Sick Children. Random House Canada; Toronto: 2005.
- Shuval K, Harker K, Roudsari B, Groce NE, Mills B, Siddiqi Z, et al. Is qualitative research second class science? A quantitative longitudinal examination of qualitative research in medical journals. Public Libr Sci One. 2011;6(2):e16937. doi: 10.1371/journal.pone.0016937.
- Stein A. The Stranger Next Door: the Story of a Small Community’s Battle over Sex, Faith, and Civil Rights. Beacon Press; Boston: 2001.
- Stein A. Sex, truths, and audiotape: anonymity and the ethics of exposure in public ethnography. J Contemp Ethnogr. 2010;39(5):554–568.
- Stenbacka C. Qualitative research requires quality concepts of its own. Manag Decis. 2001;39(7):551–556.
- Stodden V, Guo P, Ma Z. Toward reproducible computational research: an empirical analysis of data and code policy adoption by journals. Public Libr Sci One. 2013;8(6):e67111. doi: 10.1371/journal.pone.0067111.
- Stodden V, Leisch F, Peng RD. Implementing Reproducible Research. Chapman and Hall/CRC; London: 2014.
- Tang X, Yang F, Tang T, Yang X, Zhang W, Wang X, et al. Advantages and challenges of a village doctor-based cognitive behavioral therapy for late-life depression in rural China: a qualitative study. Public Libr Sci One. 2015;10(9):e0137555. doi: 10.1371/journal.pone.0137555.
- Tolich M. Internal confidentiality: when confidentiality assurances fail relational informants. Qual Sociol. 2004;27(1):101–106.
- Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–357. doi: 10.1093/intqhc/mzm042.
- Tsai AC. Managing nonfinancial conflict of interest: how the “New McCarthyism” could work. Am J Bioeth. 2011;11(1):42–44. doi: 10.1080/15265161.2011.563151.
- Tsai AC, Tomlinson M, Comulada WS, Rotheram-Borus MJ. Food insufficiency, depression, and the modifying role of social support: evidence from a population-based, prospective cohort of pregnant women in peri-urban South Africa. Soc Sci Med. 2016;151:69–77. doi: 10.1016/j.socscimed.2015.12.042.
- Tudur Smith C, Hopkins C, Sydes MR, Woolfall K, Clarke M, Murray G, et al. How should individual participant data (IPD) from publicly funded clinical trials be shared? BMC Med. 2015;13:298. doi: 10.1186/s12916-015-0532-z.
- U.S. National Institute of Justice. Privacy Certificate Guidance. U.S. National Institute of Justice; Washington, D.C: 2007.
- van den Hoonaard WC. Is anonymity an artifact in ethnographic research? J Acad Ethics. 2003;1(2):141–151.
- Vickers AJ. Whose data set is it anyway? Sharing raw data from randomized trials. Trials. 2006;7:15. doi: 10.1186/1745-6215-7-15.
- Wade AS, Kane CT, Diallo PA, Diop AK, Gueye K, Mboup S, et al. HIV infection and sexually transmitted infections among men who have sex with men in Senegal. AIDS. 2005;19(18):2133–2140. doi: 10.1097/01.aids.0000194128.97640.07.
- Wolcott HF. The Man in the Principal’s Office: an Ethnography. Holt, Rinehart, and Winston; Toronto: 1973.