Author manuscript; available in PMC 2014 Oct 3. Published in final edited form as: Field Methods. 2014 Jan 27;26(3):252–268. doi: 10.1177/1525822X13518168

A Mixed Methods Approach to Network Data Collection

Eric Rice 1, Ian W Holloway 2, Anamika Barman-Adhikari 3, Dahlia Fuentes 4, C Hendricks Brown 5, Lawrence A Palinkas 6
PMCID: PMC4184035  NIHMSID: NIHMS478287  PMID: 25285047

Abstract

There is growing interest in examining network processes with a mix of qualitative and quantitative network data. Research has consistently shown that free recall name generators are subject to recall bias and produce missing data that degrade the quality of social network data. This study describes a mixed methods approach for collecting social network data, combining a free recall name generator administered in an online survey with network relations coded from transcripts of semi-structured qualitative interviews. The combined network provides substantially more information about the network space, both quantitatively and qualitatively. While network density was relatively stable across networks generated from the different data collection methodologies, there were noticeable differences in centrality and component structure. The approach presented here involved limited participant burden and generated more complete data than either technique alone could provide. We make suggestions for further development of this method.

Introduction

The quality of social network research depends upon the quality of the social network data collected. Social network researchers have identified three primary sources of error that can lead to missing network data: boundary specification (Laumann, Marsden, & Prensky, 1983; Marsden, 2005), non-response (Kossinets, 2006; Rumsey, 1993; Stork & Richards, 1992), and recall bias (e.g., Marin, 2004; see Brewer, 2000 for review). It is this latter source of error which is the topic of the present study. We present a description of how researchers can combine data from a more traditional free recall name generator with coded relations from semi-structured interviews. We provide a comparative structural analysis of the three resulting data sets: 1) free recall name generator only, 2) data coded from semi-structured interviews only, and 3) combined data. The approach presented here involved limited participant burden and generated more complete data than either technique alone could provide.

There are many methods for collecting social network data. One of the most common field techniques is to use name generators: a question or series of questions designed to elicit the naming of relevant alters along some specified criterion (Campbell & Lee, 1991; Marsden, 2005). Typically, name generators are free recall name generators. Respondents are given a prompt that defines some criterion, for instance, a category of persons such as “family” or “friends” or types of social exchange relationships (e.g., “who do you turn to for advice or support?”). Respondents are then asked to list as many people as they can. In some cases, an upper limit is placed on the number of names that can be elicited.

Name generators implicitly assume that respondents list every alter with whom they share a particular relation. For decades, researchers have observed that this is a problematic assumption (e.g., see Brewer, 2000; Marsden, 2005), and research on recall has clearly established that persons remember at best a “sample” of relevant alters (Brewer, 2000; Marin, 2004). The resulting “sample” is a biased one, subject to respondent fatigue and recall bias (e.g., Bahrick, Bahrick, & Wittlinger, 1975; Brewer, Garrett, & Rinaldi, 2002; Hammer, 1984; Marin, 2004; Sudman, 1988). Moreover, this approach may result in large amounts of missing data.

The effects of missing data on the resulting social network structures can be quite dramatic (Kossinets, 2006; Robins, Pattison, & Woolcock, 2004). Even when nodes (i.e., actors) or relations (i.e., edges) are absent at random, overall network properties such as mean vertex degree, clustering, density, number of component sub-graphs, and average path length can all be affected (Kossinets, 2006). Most sources of error in network data collection, however, are not random and may create additional problems of bias. While problems of censored and truncated data abound (Dempster et al., 1977), truncation is generally more problematic because we know less about missing actors and can therefore more easily apply inaccurate corrections (Laumann et al., 1983; Marsden, 2005).

Errors from recall bias, a type of truncation (Brewer, 2000; Marin, 2004), have been well documented; people regularly forget a substantial portion of their network contacts when asked to recall them with standard name generators (Brewer, 2000). Marin (2004) found that strong affective ties were most likely to be recalled, as were relations of longer duration. Moreover, she found structural bias insofar as relations with more shared ties in the network were more likely to be recalled. While in most cases recall error tends to be biased toward strong ties, the evidence is somewhat mixed on this point (e.g., Brewer, 2000; Brewer & Webster, 2000; Sudman, 1988; Hammer, 1984). Forgetting or failing to enumerate particular ties has serious implications for the structural properties of the resulting networks (Kossinets, 2006). Most solutions to recall bias begin with the understanding that a single-item “free recall” name generator will be most subject to it. Brewer (e.g., 2000; Brewer, Garrett, & Rinaldi, 2002) has extensively reviewed the topic and has suggested and tested several viable solutions to the problem, including non-specific prompting, reading back lists, semantic cues, multiple elicitation questions, and re-interviewing.

A mixed methods approach: A proposal

As there has been growing interest in using mixed methods approaches to examine social network processes (Bernardi, 2011; Bidart & Charbonneau, 2011; Coviello, 2005; Edwards, 2010), we seek to address how such methods can be refined. The method described here is not intended for the collection of ego-centric network data in the context of standard survey research, as such endeavors rarely include qualitative semi-structured interviews. Rather, we wish to refine how one may use information from multiple sources in a mixed methods study to create the most comprehensive picture of the social context under investigation. The current study assesses the quality of data that can be collected by combining multiple data collection methods. As suggested by others, qualitative interviews may be useful in collecting network data (Bernardi, 2011; Bidart & Charbonneau, 2011; Coviello, 2005; Edwards, 2010). We collected data from an online single name generator, coded network data from qualitative interviews, and then combined these data sets. We present the structures of the resulting networks and discuss how such data collection could be improved in the future.

The study from which these data were derived was part of a randomized controlled trial (RCT) known as the Cal-40 Study, which was designed to test the scaling-up of evidence-based practice (EBP) implementation in California county child welfare, mental health, and juvenile probation departments (Chamberlain et al., 2008). A supplemental mixed methods study, using both semi-structured interviews (qualitative) and a network survey with a free recall name generator (quantitative), examined the processes by which county agency leaders obtain EBP information (Palinkas et al., 2011).

Qualitative semi-structured interviews: results

Agency directors from child welfare, mental health, and probation departments in 13 California counties in the first wave of the Cal-40 Study were asked to participate in a 60–90 minute interview, in person or by phone, to determine how EBPs were implemented and to whom directors went for information about EBP implementation.

Of the 45 agency directors invited to participate, 38 agreed to be interviewed or designated another professional from their county (e.g., assistant director, deputy director, or manager) to participate (response rate = 84%). In most cases, a researcher interviewed these participants in person at a location convenient to the agency director (n=28, 74%). When this was not possible, agency directors were interviewed by phone (n=10, 26%).

Semi-structured interview topics included how individuals heard about EBPs, what factors facilitated or impeded EBP participation, with whom individuals communicated about EBPs, and the nature of those communications (e.g., advice-seeking, decisions to collaborate, etc.). For additional information on the parent study, see Palinkas et al. (2011). Participants in the qualitative study were offered a $20 online gift certificate for their participation. The Institutional Review Board at the University of Southern California approved the study.

In addition to network data collected through the web-based survey, semi-structured interview transcripts from the qualitative phase of the study were examined for instances in which interviewees described communications with other professionals regarding EBPs. These descriptions could involve communication specifically about advice-seeking regarding EBPs or more general discussion of EBPs. For example, in one interview a chief probation officer (CPO) refers to a specific person with whom he communicated and sought advice about an EBP (see quote 1 in Table 1). This type of direct reference to advice-seeking can be contrasted with another interview segment in which the respondent, another CPO, spoke more generally about his communications regarding EBPs with professionals from the mental health department in his county (see quote 2 in Table 1). Although this CPO was not specifically referring to EBP advice-seeking, he spoke about close collaboration with professionals from other county agencies; therefore, we considered the individuals mentioned in this interview to be members of the CPO’s social network.

Table 1.

Examples from the Qualitative, Semi-structured Interviews

Specific Communication related to EBP Advice Seeking: If we have some ideas… usually I go to the person who is at the same level as I am in their department, which is [NAME], ‘Let me talk about some ideas.’ And…as a matter of fact, we had a meeting with them last week to talk about how we’re going to continue some of the MIOCR [Mentally Ill Offender Crime Reduction] services that uh, you know, MIOCR funding was cut completely, or at least it looks like it’s going to be dead after September 30th. And we’re looking at, or we’re coming together to decide how we can continue services without having that funding source.
Other Type of EBP Communication: We collaborate information trying to figure out what’s going on. We have budget, you know, areas that we try to talk and work through, like with programming, you know. And you know, both [NAME] and [NAME], course we have some grants that we use their folks with; substance abuse things like that…. So, we are collaborating with programming, trying to um, do evidence-based practices… In fact, we just had a meeting with [NAME’S] folks yesterday and saying, ‘Hey, we need to make sure what works. We need to get the families involved in this process. You know, and start doing what works, instead of just doing the old programs to feel good, or whatever.’
Full Name: Respondent: Oh, in the last two or three years I became real familiar with it. Actually, went with the uh, Social Service Director and one of my colleagues, up to Davis for an all-day training by [FIRST NAME], what’s her name? It starts with ‘M’. Interviewer: [LAST NAME]? Respondent: Yeah, [FULL NAME], from um, and [NEW NAME] from [NAME OF ORGANIZATION]
Partial Name with Additional Confirmable Information: She and her Deputy Director, I talk to a lot on different things that are going on. I go to the Children’s Steering Committee, which is kind of [a] mental health based children’s services committee, and that provides a forum for discussion. And um, bring things forward about Child Welfare Services, in that context. I also go to the Adult’s Steering Committee, where it’s not directly related to Child Welfare, it does affect the adults that are associated with our case…. And I talk to uh, [FIRST NAME] and [FIRST NAME]...the Deputy and the Assistant Director of Mental Health…I talk to the two [FIRST NAMES] a lot.
Partial Name with No Confirmable Information: [FULL NAME] from Santa Cruz and [PARTIAL NAME] from San Francisco, and [FULL NAME] from San Mateo, we get together and we, you know, we talk, and with the Family Partners actually, they come to…. Um, and talk about how things are going, and what every, each county is doing. And that’s been very helpful…. And I do get ideas from there.

Two project staff members, who were involved in recruitment and data collection, reviewed all transcripts for instances where names of individuals were mentioned. Of the 38 qualitative interview transcripts examined, all but one contained names of individuals with whom the participant communicated about EBPs. A priori decision rules for classifying individuals named in the qualitative interviews were established. Specifically, three types of name mention were distinguished: (1) full name, (2) partial name with additional confirmable information, and (3) partial name with no additional confirmable information. For an individual to be included in the full name category, both first and last name were required in the same text segment. When the interviewee mentioned only the first name of an individual and the interviewer prompted to elicit the full name, that individual was also considered completely identified. These criteria allowed for 100% matching between individuals nominated in the web-based survey and individuals mentioned in qualitative interviews. For example, an exchange in an interview with a mental health director helped to identify an alter (see quote 3 in Table 1). In total, 30 participants named 122 individuals by full name in the qualitative interviews. Among these participants, on average, about 4 such nominations were identified per interview (mean = 4.20, SD = 2.69, range = 1–10).

If the respondent mentioned only a partial name and was not prompted by the interviewer to give the last name of the individual mentioned, project staff examined the context of the discussion in which the partial name appeared. If additional information was provided (e.g., the title of the individual mentioned), project staff used the partial name and that information to determine whether the individual was a previously identified member of the network. When this could not be done with information already available from the web-based survey and interviews, project staff conducted internet searches that included the name, county, agency, and whatever additional details were provided in the context of the interview. In one instance, project staff had the county and titles of each of the individuals to whom the respondent referred (see quote 4 in Table 1). Through examination of the project database and subsequent internet searches, project staff were able to confirm the identities of the individuals mentioned by the participant during that interview. In total, 25 participants named 39 individuals by partial name in the qualitative interviews. Twenty-five of these 39 individuals’ identities were confirmed (64%). Among these participants, on average, 1.5 such nominations were identified per interview (mean = 1.56, SD = 0.77, range = 1–3).

The final category included individuals for whom partial names were given without any additional confirmable information. In one example, a respondent gave two full names and one partial name with only a broad location (i.e., San Francisco; see quote 5 in Table 1). To find the full name of the individual mentioned by partial name, a number of internet searches were conducted, including the name of the interviewee paired with the partial name, the names of the other fully named individuals paired with the partial name, and combinations of full and partial names with the location mentioned. When no match could be found, the individual was classified in this third group. When no first name was mentioned and the title “Judge” or “Doctor” preceded a last name without confirmable additional information, these individuals were also placed in the third group. In total, 10 participants named 10 individuals by partial name without confirmable information in the qualitative interviews; each of these participants had only one nomination of this type.
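To make these decision rules concrete, the brief sketch below (in Python) shows one way the three-category classification could be operationalized. The data structure and helper function are hypothetical illustrations, not part of the study protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NameMention:
    first_name: Optional[str]  # None when only a title and surname were given
    last_name: Optional[str]   # None when only a first name was given
    confirmed: bool  # identity confirmed via the project database or internet search

def classify(mention: NameMention) -> str:
    """Apply the three a priori categories described above (hypothetical sketch)."""
    if mention.first_name and mention.last_name:
        return "1: full name"
    if mention.confirmed:
        return "2: partial name, confirmable"
    return "3: partial name, not confirmable"

# Example: a first-name-only mention whose identity was later confirmed
print(classify(NameMention("Jane", None, confirmed=True)))
# -> 2: partial name, confirmable
```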

Network survey with free recall name generator: results

Of the 38 county officials who were interviewed, 30 (79%) agreed to participate in the subsequent web-based social network survey. These individuals were asked to list up to 10 individuals with whom they communicated about EBP implementation. Specifically, the name generator asked participants to: “Name [up to 10] individuals for whom you have relied for advice on whether and how to use evidence-based practices for meeting the mental health needs of youth served by your agency.” Data were also gathered on participant gender, county agency, position in the agency, and number of years in the current position. On average, participants nominated approximately 3 social network members (mean = 2.70, SD = 3.02, range = 0–10). Respondent time spent on the survey ranged from 3 minutes 10 seconds to 27 minutes 31 seconds.

Analyses for the present study were conducted using UCINet 6 (Borgatti, Everett, & Freeman, 2002). Nodelists for all three datasets (i.e., survey data, qualitative interview data, and combined data) were entered into UCINet to create data matrices, which were then transformed into social network diagrams using NetDraw 2.090 (Borgatti, 2002). To compare properties of the three networks, a number of standard network measures were calculated, including number of nodes, number of directed ties, density, number of unique components, size of the largest component, directed degree centrality, and betweenness centrality (see Wasserman & Faust, 1994, for descriptions of these measures).
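For readers who wish to reproduce these measures outside UCINet, the sketch below uses the open-source networkx library (our choice for illustration; the study itself used UCINet) on a hypothetical edge list of directed nominations:

```python
import networkx as nx

# Hypothetical directed nominations: (ego, alter) pairs from one data source
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "E")]
G = nx.DiGraph(edges)

n, ties = G.number_of_nodes(), G.number_of_edges()
density = nx.density(G)  # for a digraph: ties / (n * (n - 1))

# Components are assessed on the undirected skeleton of the nomination network
components = list(nx.connected_components(G.to_undirected()))
largest = max(components, key=len)

avg_degree = ties / n  # average in-degree equals average out-degree
betweenness = nx.betweenness_centrality(G)  # directed betweenness, normalized

print(n, ties, round(density, 4), len(components), len(largest))
```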

Figure 1 presents visualizations of the three networks generated from the two data sources: (1) the network constructed only from the social network survey, (2) the network constructed from the qualitative interview data (including the 10 names that were unconfirmed), and (3) the combined network created by merging both data sources into one adjacency matrix. We used the spring embedding algorithm in NetDraw (Borgatti, 2002) to place the nodes based on connections in the combined data.
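Merging the two sources into the combined network amounts to taking the union of their edge lists, with duplicate nominations counted once. A minimal sketch, again using networkx with hypothetical data:

```python
import networkx as nx

survey_edges = [("A", "B"), ("C", "D")]     # survey nominations
interview_edges = [("B", "E"), ("C", "D")]  # coded interview nominations

combined = nx.DiGraph()
combined.add_edges_from(survey_edges)
combined.add_edges_from(interview_edges)    # the repeated (C, D) tie is stored once

# Spring-embedded layout, analogous to NetDraw's placement of nodes
positions = nx.spring_layout(combined, seed=42)
```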

Figure 1. Network diagrams based on the web-based survey, qualitative interviews, and combined data.

Several differences across the three network visualizations are readily apparent (see Figure 1). The number of nodes and ties increased steadily moving from the survey data to the qualitative data to the combined data. The survey-generated network contained only 89 nodes with 81 directed ties, whereas the qualitative interview-generated network contained 136 nodes with 171 directed ties, and the combined network contained 176 nodes with 227 directed ties.

The number of discrete network components and the size of those components were also affected by the data source. The survey network had 18 components, and its largest component contained only 36 nodes, constituting 40% of that network. The qualitative interview data yielded 7 components, the largest of which contained 112 nodes (82% of that network). Finally, the combined network also had 7 components, the largest of which contained 149 nodes (85% of the resulting network).

Centrality measures were also affected by the data source used, particularly the variance in those metrics. As illustrated in Table 2, the in-degree, out-degree, and betweenness centrality scores increase when the two data sources are combined, and the standard deviations of these metrics likewise increase. In contrast, network density was relatively stable across data sources: 1.0% of possible relations were present in the survey data, 0.9% in the qualitative interview data, and 0.7% in the combined data. It is important to remember that density and network size are linked, such that as network size increases, network density tends to decrease (Friedkin, 1992), which may explain the slight decline in density in the larger network specifications.
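To make this concrete: the density of a directed network with n nodes and L ties is d = L / (n(n − 1)), the fraction of possible directed ties that are present. Applying this to the counts in Table 2 gives 81/(89 × 88) ≈ 0.0103 for the survey network, 171/(136 × 135) ≈ 0.0093 for the qualitative network, and 227/(176 × 175) ≈ 0.0074 for the combined network. Because the denominator grows as n(n − 1) while the number of observed ties grew far more slowly, the larger networks show slightly lower densities.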

Table 2.

Network Level Measures Across Three Data Sources

Network Data Source

Network Metric Qualitative Survey Combined
Number of Nodes/ Network Size 136 89 176
Number of Ties 171 81 227
Density 0.0093 0.0103 0.0074
Number of Components 7 18 7
Size of Largest Component 112 36 149
Proportion of Largest Component 0.824 0.404 0.847
Path Length 1.75 1.38 1.88
Avg. In-Degree Centrality (SD) 1.26 (0.93) 0.91 (0.65) 1.29 (0.93)
Avg. Out-Degree Centrality (SD) 1.26 (2.62) 0.91 (2.16) 1.29 (3.05)
Avg. Betweenness (SD) 2.13 (8.27) 0.53 (2.34) 2.79 (10.38)

Importantly, there is relatively little overlap between the two data sources. Figure 2 provides a visualization of the network, color-coded by data source. Black nodes and ties are present in both data sources, red nodes and ties are present only in the survey data, and blue nodes and ties are present only in the qualitative interview data. The number of black nodes and ties is relatively small. Indeed, only 49 nodes appeared in both data sets (27.8% of the total nodes), and only 25 ties appeared in both data sets (11.0% of the total ties). The survey data provided 40 unique nodes (22.7% of nodes) and 56 unique ties (24.7% of ties). The qualitative data provided 87 unique nodes (49.4% of nodes) and 146 unique ties (64.3% of ties).
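The overlap figures above are simple set operations on the node and tie sets from the two sources. The following sketch shows the computation with hypothetical identifiers (not the study data):

```python
# Hypothetical node identifiers from each data source
survey_nodes = {"A", "B", "C", "D"}
interview_nodes = {"C", "D", "E", "F", "G"}

total = survey_nodes | interview_nodes
shared = survey_nodes & interview_nodes         # nodes present in both sources
survey_only = survey_nodes - interview_nodes    # unique to the survey
interview_only = interview_nodes - survey_nodes # unique to the interviews

for label, subset in [("both sources", shared),
                      ("survey only", survey_only),
                      ("interviews only", interview_only)]:
    print(f"{label}: {len(subset)} nodes ({len(subset) / len(total):.1%} of total)")

# The same set arithmetic on (ego, alter) tuples yields the tie overlaps.
```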

Figure 2. Total network, comprised of the maximal set of nodes and ties, with data source depicted (n=176).

Notes: Circles = Nodes, Arrows = Ties, Red = Survey Only, Blue = Qualitative Interview Only, Black = Both Data Sources.

Further examination of the network visualization illustrates that some components depend on the intersection of the two data sources. In particular, the component at the bottom left of the diagram in Figure 2 is incomplete in the absence of either data source. The small triad toward the bottom left of the diagram depicts a respondent who nominated two different alters in the two different data sources.

The data collected in the qualitative interviews provide critical connections that increase the size of the largest component. Without the qualitative data, the black node at the bottom right of Figure 2 would appear to belong to a dyad, connected only to the one node nominated in the survey. In the qualitative interview, this person discussed several other network connections, including a node mutually nominated by another participant, who in turn nominated yet another participant, creating a two-step bridge to the largest component.

Discussion

The present study adds to a growing body of literature advocating mixed methods approaches to social network data collection (Bernardi, 2011; Bidart & Charbonneau, 2011; Coviello, 2005; Edwards, 2010). Our technique combined data collected from a web-based survey with a free recall name generator and data collected from qualitative, semi-structured interviews. It is evident from both the network visualizations and the structural metrics that neither data source by itself provided a complete picture of the network space. These results are consistent with the body of evidence suggesting that free recall name generators result in recall error (e.g., Brewer, 2000; Marin, 2004). Some indices of the overall social network, such as density, may be only moderately affected by the method of collecting social network data, but the local topology of links involving individual nodes can differ dramatically, as we saw with the triad in the lower left-hand corner of Figure 2. Such differences may be crucial in the adoption of EBPs or the formation of partnerships that support full implementation of these practices (Brown et al., 2012).

Combining data from a survey that uses a single name generator with data from qualitative interviews takes advantage of three proposed solutions to recall bias: semantic cues, multiple elicitation, and re-interviewing (Brewer, 2000). Like other techniques employing semantic cues, the qualitative interview allows the researcher to ask the respondent about relations similar to those already mentioned, in the context of systematic follow-up questions about social network processes. The semi-structured nature of qualitative interviewing makes extensive use of a variety of domains and prompts to elicit information about social network processes, a basic multiple elicitation technique. Finally, this approach benefits from re-interviewing by having a distinct survey and qualitative interview. The two interviewing techniques are quite different, which we believe may relieve some participant burden. The time involved in qualitative interviews is not trivial, but the conversational flow of such techniques tends to lessen the participant fatigue generated by tedium.

Like many innovations in social network research, this technique arose from empirical observations (Freeman, 2004). Initially, the research team believed that the web-based social network survey would provide the structural information for all subsequent analyses and that the qualitative interview data would be used to elucidate the social context of these structures. Because the qualitative interviews were conducted first, the research team had an intuitive sense of the breadth of ties and the interconnectivity of the network space. As the network data from the free recall name generator were analyzed, they appeared to be an inadequate depiction of the network structures that had been described in the qualitative interviews, especially since these same respondents completed the survey after the interview. This observation prompted the team to return to the qualitative interviews and “mine” them by coding the network data contained within the text of the transcripts. As the preponderance of blue nodes in Figure 2 makes clear, substantial additional data were collected.

Moving forward, we recommend first collecting survey data on social network structures using name generators. Next, qualitative interviews regarding social network processes should be conducted. This ordering would allow interviewers to prepare for the qualitative interviews by reviewing the list of nominated alters from the network survey and preparing follow-up questions based on this information. When new alters are discussed (i.e., persons not on the list), the interviewer may probe with additional questions about the attributes of these newly named alters. Conversely, if during the course of the qualitative interview a participant does not discuss people on the nomination list, the interviewer can use the unrecalled names as additional probes. By this method, one collects more “complete” network and interview data, in which the structural data resulting from the two sources would be identical, and one would have a depth of information about both structure and social process.

An additional benefit of the process-oriented name generator approach we describe in this paper is the ability to triangulate social network data with qualitative data. Triangulation refers to using multiple data collection techniques to study the same phenomenon (Denzin, 1978). Triangulating data helps to establish measurement validity and can be very useful to social network researchers. With our proposed variation on this data collection strategy, social network researchers would be able to verify nominations from a single name generator while giving participants the opportunity to expand upon them, allowing for greater certainty in the results of the single name generator.

Limitations

As with any study of a novel method, there are limitations to our work. Use of online surveys to elicit social network data has been shown to produce lower quality data than offline (Matzat & Snijders, 2010) and telephone (Kogovšek, 2006) data collection strategies. It is our hope that the addition of qualitative interview data improved the overall quality of our data; however, we are unable to verify this from the present study. We acknowledge that the average number of alters nominated by participants in the web-based survey was low, indicating that a web-based survey may not be the optimal name generator methodology to complement qualitative interviewing. Future work should seek to elucidate differences in network structure based on the context in which name generators are administered when combined with qualitative data collection techniques.

Respondent burden in network data collection is a major concern and has been addressed previously by randomly sampling the alters nominated by an individual and eliciting additional information only on those randomly selected alters, in order to maximize data quality while minimizing respondent burden (McCarty, Killworth, & Rennell, 2007; Golinelli et al., 2010). Although we suspect that the process-oriented name generator approach would impose less participant burden than a repeated single name generator, we did not compare these two approaches or ask participants about the experience of completing a web-based survey in addition to a qualitative interview.

Conclusions

The method described here may be of utility not so much to survey researchers as to researchers working in the area of intervention and implementation science. There is a growing desire among implementation science and translational science researchers to understand the community-level network processes affecting these often large-scale, expensive programs (Palinkas et al., 2011; Brown et al., 2012; Chamberlain et al., 2008). We suggest that research teams who have access to extant qualitative data sets that routinely probe for social relations consider coding these data into network data as we did here. In many cases, participant burden and field staff time are more costly than post hoc data coding. Many qualitative research projects in the social sciences focus on key relations among people (e.g., social support), and these qualitative data sets present unique opportunities for “mining” with the coding techniques we outline here. There may be a wealth of un-coded network data languishing in the offices of our readers.

Acknowledgments

The project described was supported by grants from the W.T. Grant Foundation (No. 9493), National Institute of Mental Health (R01MH076158), and National Institute on Drug Abuse (P30 DA027828). The content is solely the responsibility of the authors and does not necessarily represent the official views of the W.T. Grant Foundation or National Institutes of Health.

Contributor Information

Eric Rice, School of Social Work at the University of Southern California.

Ian W. Holloway, Department of Social Welfare at the University of California, Los Angeles

Anamika Barman-Adhikari, School of Social Work at the University of Southern California.

Dahlia Fuentes, School of Social Work at the University of Southern California.

C. Hendricks Brown, Departments of Epidemiology and Biostatistics, Miller School of Medicine, University of Miami.

Lawrence A. Palinkas, School of Social Work at the University of Southern California

References

  1. Bahrick HP, Bahrick PO, Wittlinger RP. Fifty years of memory for names and faces: A cross-sectional approach. Journal of Experimental Psychology: General. 1975;104(1):54–75.
  2. Bernardi L. A mixed-methods social networks study design for research on transnational families. Journal of Marriage and Family. 2011;73:788–803.
  3. Bidart C, Charbonneau J. How to generate personal networks: Issues and tools for a sociological perspective. Field Methods. 2011;23(3):266–286.
  4. Borgatti SP. NetDraw: Graph visualization software. Harvard: Analytic Technologies; 2002.
  5. Borgatti SP, Everett MG, Freeman LC. Ucinet for Windows: Software for social network analysis. Harvard: Analytic Technologies; 2002.
  6. Brewer DD. Forgetting in the recall-based elicitation of personal and social networks. Social Networks. 2000;22(1):29–44.
  7. Brewer DD, Garrett SB, Rinaldi G. Free-listed items are effective cues for eliciting additional items in semantic domains. Applied Cognitive Psychology. 2002;16(3):343–358.
  8. Brown CH. Protecting against nonrandomly missing data in longitudinal studies. Biometrics. 1990;46:143–155.
  9. Brown CH, Kellam SG, Kaupert S, Muthén BO, Wang W, Muthén LK, Chamberlain P, PoVey C, Cady R, Valente TW, Ogihara M, Prado GJ, Pantin HM, Szapocznik J, Czaja SJ, McManus JW. Partnerships for the design, conduct, and analysis of effectiveness and implementation research: Experiences of the Prevention Science and Methodology Group. Administration and Policy in Mental Health and Mental Health Services Research. 2012;39:301–316. doi: 10.1007/s10488-011-0387-3.
  10. Campbell KE, Lee BA. Name generators in surveys of personal networks. Social Networks. 1991;13(3):203–221. doi: 10.1016/0378-8733(91)90006-F.
  11. Chamberlain P, Brown CH, Saldana L, Reid J, Wang W, Marsenich L, Sosna T, Padgett C. Engaging and recruiting counties in an experiment on implementing evidence-based practice in California. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(4):250–260. doi: 10.1007/s10488-008-0167-x.
  12. Coviello NE. Integrating qualitative and quantitative techniques in network analysis. Qualitative Market Research. 2005;8(1):39–60.
  13. Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B. 1977;39:1–38.
  14. Denzin NK. The research act: A theoretical introduction to sociological methods. New York: McGraw-Hill; 1978.
  15. Edwards G. Mixed-method approaches to social network analysis. 2010.
  16. Freeman LC. The development of social network analysis: A study in the sociology of science. Vancouver: Empirical Press; 2004.
  17. Friedkin NE. An expected value model of social power: Predictions for selected exchange networks. Social Networks. 1992;14(3–4):213–229.
  18. Golinelli D, Ryan G, Green HD, Kennedy DP, Tucker JS, Wenzel SL. Sampling to reduce respondent burden in personal network studies and its effect on estimates of structural measures. Field Methods. 2010;22(3):217–230. doi: 10.1177/1525822X10370796.
  19. Hammer M. Explorations into the meaning of social network interview data. Social Networks. 1984;6(4):341–371.
  20. Kogovšek T. Reliability and validity of measuring social support networks by web and telephone. Metodološki Zvezki. 2006;3(2):239–252.
  21. Kossinets G. Effects of missing data in social networks. Social Networks. 2006;28(3):247–268.
  22. Laumann E, Marsden P, Prensky D. The boundary specification problem in network analysis. In: Applied Network Analysis: A Methodological Introduction. 1983:18–34.
  23. Lazer D, Pentland A, Adamic L, Aral S, Barabási AL, Brewer D, Christakis N, Contractor N, Fowler J, Gutmann M, Jebara T, King G, Macy M, Roy D, Van Alstyne M. Computational social science. Science. 2009;323:721–723. doi: 10.1126/science.1167742.
  24. Marin A. Are respondents more likely to list alters with certain characteristics? Implications for name generator data. Social Networks. 2004;26(4):289–307.
  25. Marsden PV. Recent developments in network measurement. In: Models and Methods in Social Network Analysis. 2005:8–30.
  26. Matzat U, Snijders C. Does the online collection of ego-centered network data reduce data quality? An experimental comparison. Social Networks. 2010;32:105–111.
  27. McCarty C, Killworth PD, Rennell J. Impact of methods for reducing respondent burden on personal network structural measures. Social Networks. 2007;29(2):300–315.
  28. Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):44–53. doi: 10.1007/s10488-010-0314-z.
  29. Robins G, Pattison P, Woolcock J. Missing data in networks: Exponential random graph (p*) models for networks with non-respondents. Social Networks. 2004;26(3):257–283.
  30. Rumsey DJ. Nonresponse models for social network stochastic processes (Markov chains). Doctoral dissertation, The Ohio State University; 1993.
  31. Stork D, Richards WD. Nonrespondents in communication network studies. Group & Organization Management. 1992;17(2):193–209.
  32. Sudman S. Experiments in measuring neighbor and relative social networks. Social Networks. 1988;10(1):93–108.
  33. Wasserman S, Faust K. Social network analysis: Methods and applications. Cambridge: Cambridge University Press; 1994.
