Author manuscript; available in PMC: 2016 Jan 19.
Published in final edited form as: Am J Manag Care. 2015 Dec 1;21(12):e640–e647.

E-Consult Implementation: Lessons Learned Using Consolidated Framework for Implementation Research

Leah M Haverhals 1, George Sayre 1, Christian D Helfrich 1, Catherine Battaglia 1, David Aron 1, Lauren D Stevenson 1, Susan Kirsh 1, P Michael Ho 1, Julie Lowery 1
PMCID: PMC4717483  NIHMSID: NIHMS749100  PMID: 26760426

Abstract

Objectives

In 2011, the Veterans Health Administration (VHA) implemented electronic consults (e-consults) as an alternative to in-person specialty visits to improve access and reduce travel for veterans. We conducted an evaluation to understand variation in the use of the new e-consult mechanism and the causes of variable implementation, guided by the Consolidated Framework for Implementation Research (CFIR).

Study Design

Qualitative case studies of 3 high- and 5 low-implementation e-consult pilot sites. Participants included e-consult site leaders, primary care providers, specialists, and support staff identified using a modified snowball sample.

Methods

We used a 3-step approach, with a structured survey of e-consult site leaders to identify key constructs, based on the CFIR. We then conducted open-ended interviews, focused on key constructs, with all participants. Finally, we produced structured, site-level ratings of CFIR constructs and compared them between high- and low-implementation sites.

Results

Site leaders identified 14 initial constructs. We conducted 37 interviews, from which 4 CFIR constructs distinguished high-implementation e-consult sites: compatibility, networks and communications, training, and access to knowledge and information. For example, illustrating compatibility, a specialist at a high-implementation site reported that the site changed the order of consult options so that all specialties listed e-consults first to maintain consistency. High-implementation sites also exhibited greater agreement on constructs.

Conclusions

By using the CFIR to analyze results, we facilitate future synthesis with other findings and better identify patterns of implementation determinants common across settings.


In 2010, the Secretary for the Department of Veterans Affairs (VA) identified improving access to care as a top priority.1 The Veterans Health Administration (VHA) had been collecting and analyzing data on wait times for more than a decade, and observational studies found associations between wait times and poorer short- and long-term quality indicators.2 Research also highlighted challenges faced by veterans in rural communities and by female veterans, with travel demands and transportation difficulties sometimes exacerbated by veterans’ functional status, resulting in delayed or forgone care.3,4

Technology was seen as part of the solution by offering alternate ways to access care.5 Research suggested telehealth interventions could improve access, including speeding time to treatment while achieving results similar to in-person visits in terms of patient satisfaction and experience of care.6 Simultaneously, there were concerns about implementation of new technologies introducing problems such as privacy and confidentiality vulnerabilities and disruption to clinic work flow.7

In 2011, the VHA implemented specialty care electronic consults (e-consults) at 15 pilot sites. E-consults offer primary care providers (PCPs) the option to obtain specialty care expertise by submitting patient consults via the VHA’s electronic health record (EHR)8,9; e-consults have been implemented in other healthcare systems as well.10-13 Specialists then respond with advice and/or recommendations on whether veterans should be seen in person. If implemented effectively, e-consults should improve specialty care access and reduce travel for veterans.

The VHA’s Office of Specialty Care Transformation (OSCT), which was responsible for overseeing the dissemination of e-consults, requested assistance in identifying the challenges associated with implementation to facilitate further dissemination. Thus, the Specialty Care Evaluation Center was created to evaluate e-consult implementation. We used the Consolidated Framework for Implementation Research (CFIR) to identify those factors that facilitated or hindered e-consult implementation among pilot sites. The CFIR consolidates and standardizes definitions of implementation factors, thereby providing a pragmatic structure for identifying potential influences on implementation and comparing findings across sites and studies.14,15 The CFIR is composed of 5 domains: intervention characteristics, outer setting, inner setting, characteristics of individuals involved in implementation, and the process of implementation.14 Thirty-seven constructs characterize these domains. The objective of this study is to use the CFIR for identification and comparison of implementation factors across sites in an effort to learn from their experiences.

METHODS

A post-implementation interpretive evaluation16 was conducted using semi-structured, key informant interviews with structured ratings of CFIR constructs. The unit of analysis was the site and included 8 of 15 pilot sites (geographic-site/specialty combinations), selected for variation on overall e-consult implementation rates, measured as a ratio of e-consults to all consults for specialties of interest. Three e-consult sites were randomly selected from the 7 sites in the top half of e-consult implementation rates, and 5 were selected from the 8 sites in the bottom half (53% of sites interviewed). E-consult volume data were assessed from the beginning of the pilot period to initial site selection, May 2011 to February 2012.
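The site-selection step described above can be sketched in code. This is a hypothetical illustration only; the function name and the rates are ours, not drawn from the study data.

```python
import random

def select_sites(rates, n_top=3, n_bottom=5, seed=0):
    """Rank sites by e-consult implementation rate (e-consults / all
    consults), split the ranking into a top and bottom half, and randomly
    sample n_top and n_bottom sites from the respective halves."""
    ranked = sorted(rates, key=rates.get, reverse=True)
    half = len(ranked) // 2          # 15 sites -> top 7, bottom 8
    top, bottom = ranked[:half], ranked[half:]
    rng = random.Random(seed)
    return rng.sample(top, n_top), rng.sample(bottom, n_bottom)

# Hypothetical implementation rates for 15 pilot sites
rates = {f"site{i:02d}": i / 100 for i in range(1, 16)}
high, low = select_sites(rates)
```

Sampling within halves, rather than across the full ranking, preserves contrast between high- and low-implementation sites while still allowing random selection.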

A modified snowball sample was used to recruit participants, beginning with local site leaders and directors from both primary care and specialty care; e-consult programs straddle multiple clinical divisions, so some sites had multiple leaders. Interview participants were asked to identify specialists, PCPs, and support staff (nurse practitioners, pharmacists, and medical support assistants) engaged in initiatives. The rationale for conducting interviews at a small, but purposefully selected, sample of sites was to focus on obtaining an in-depth understanding of the differences in context in which implementation occurred, and how these differences might be related to implementation success.17-19

Data and Analysis

To identify a subset of high-probability CFIR constructs, we first conducted a Web-based survey (available in eAppendix A [eAppendices available at www.ajmc.com]) in which e-consult pilot site leaders rated the relevance of CFIR constructs to e-consult implementation. The initial CFIR survey was returned by all 21 e-consult site leaders. Of the 37 CFIR constructs, 14 were rated as important or very important by at least 90% of participants (Table 1). An interview guide (eAppendix B) was developed around those constructs and updated iteratively, a standard accepted practice in qualitative evaluations.20,21 Prior to conducting the interviews, analysts participated in 2 in-person, 2-day CFIR qualitative analysis training meetings, which included conducting CFIR ratings, group debriefings, and discussions of ratings. Interviews were conducted by telephone by an interviewer and note-taker and were digitally recorded. Interview pairs reviewed and clarified interview notes post interview, referring to recordings as needed. The pairs then independently coded interview notes from each participant according to CFIR constructs, ensuring that notes were consistent with the definitions of the CFIR constructs.

Table 1.

Consolidated Framework for Implementation Research Constructs Identified From Web-Based Survey With E-Consult Site Leaders Prior to Qualitative Interviews

Compatibility: The degree of tangible fit between meaning and values attached to the intervention by involved individuals; how those align with individuals’ own norms, values, and perceived risks and needs; and how the intervention fits with existing work flows and systems.

Relative priority: Individuals’ shared perception of the importance of the implementation within the organization.

Goals and feedback: The degree to which goals are clearly communicated, acted on, and fed back to staff, and alignment of that feedback with goals.

Leadership engagement: Commitment to, involvement with, and accountability of leaders and managers for the implementation.

Available resources: The level of resources dedicated for implementation and ongoing operations, including money, training, education, physical space, and time.

Champions: Individuals who dedicate themselves to supporting, marketing, and “driving through” an implementation, overcoming indifference or resistance that the intervention may provoke in an organization.

Executing: Carrying out or accomplishing the implementation according to plan.

Reflecting and evaluating: Quantitative and qualitative feedback about the progress and quality of implementation, accompanied by regular personal and team debriefing about progress and experience.

Design, quality, & packaging: Perceived excellence in how the intervention is bundled, presented, and assembled.

Readiness for implementation: Tangible and immediate indicators that the organization is committed to implementing the innovation.

Knowledge and beliefs: Individuals’ attitudes toward and value placed on the intervention, as well as familiarity with facts, truths, and principles related to the intervention.

Adaptability: The degree to which an intervention can be adapted, tailored, refined, or reinvented to meet local needs.

Networks and communications: The nature and quality of webs of social networks, and the nature and quality of formal and informal communications within an organization.

Engaging: Involvement of and relationships with leaders and all key stakeholders in the change process.

Following coding of the interview responses, the interview pairs rated the influence of each construct in the organization (positive or negative) and the magnitude or strength of its influence22 (−2, −1, 0, +1, +2) using established criteria (Table 2). Pairs distinguished constructs that were not specifically mentioned (missing) from those with ambiguous or neutral effects (rated 0). Following independent coding, pairs convened via phone or in person to resolve discrepancies and reach consensus, based on consensual qualitative research methods.20,21 Using ratings across participants and participants’ roles (some participants’ responses were weighted more heavily than others), pairs derived an overall rating for each construct for each site and noted whether there was significant variability for constructs (a difference of at least 2 points across 2 or more participants). Assigning ratings to the qualitative interview data in this way allows for a systematic, rapid comparison of findings across sites.23 A matrix of ratings for all constructs across sites was developed and used to examine the extent to which constructs were more likely to be rated as negative or zero/mixed among sites with low volume and more likely to be rated as positive at sites with a high volume of e-consults.14
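The rating-aggregation logic above can be illustrated with a small sketch. All data and names here are hypothetical; the study’s role-based weighting was a qualitative consensus judgment, approximated below by a weighted mean rounded and clipped to the −2..+2 scale.

```python
def site_construct_rating(ratings, weights=None):
    """Derive an overall site-level rating for one CFIR construct from
    per-participant ratings on the -2..+2 scale, and flag significant
    variability (a spread of at least 2 points across participants)."""
    if weights is None:
        weights = {p: 1.0 for p in ratings}          # equal weighting
    total = sum(weights[p] for p in ratings)
    mean = sum(r * weights[p] for p, r in ratings.items()) / total
    overall = max(-2, min(2, round(mean)))           # clip to the scale
    variable = max(ratings.values()) - min(ratings.values()) >= 2
    return overall, variable

# Hypothetical site-by-construct matrix for cross-site comparison
sites = {
    "high_vol_site": {"compatibility": {"leader": 2, "pcp": 2, "specialist": 1}},
    "low_vol_site":  {"compatibility": {"leader": -1, "pcp": 1}},
}
matrix = {site: {construct: site_construct_rating(by_role)[0]
                 for construct, by_role in constructs.items()}
          for site, constructs in sites.items()}
```

In this sketch, the low-volume site’s mixed ratings (−1 and +1) yield an overall 0 with the variability flag set, mirroring how the pairs distinguished neutral consensus from genuine disagreement.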

Table 2.

Criteria Used to Assign Ratings to Constructs

−2: The construct is a negative influence in the organization, an impeding influence in work processes, and/or an impeding influence in implementation efforts. The majority of interviewees (at least 2) describe explicit examples of how the key aspect (or all aspects, or the absence) of a construct manifests itself in a negative way.

−1: The construct is a negative influence in the organization, an impeding influence in work processes, and/or an impeding influence in implementation efforts. Interviewees make general statements about the construct manifesting in a negative way but without concrete examples:

  • The construct is mentioned only in passing or at a high level without examples or evidence of actual, concrete descriptions of how that construct manifests;

  • There is a mixed effect of different aspects of the construct but with a general overall negative effect;

  • There is sufficient information to make an indirect inference about the generally negative influence; and/or

  • Judged as weakly negative by the absence of the construct.

0: A construct has neutral influence if:

  • It appears to have neutral effect (purely descriptive) or is only mentioned generically without valence;

  • There is no evidence of positive or negative influence;

  • Credible or reliable interviewees contradict each other; and/or

  • There are positive and negative influences at different levels in the organization that balance each other out, or different aspects of the construct have positive influence while others have negative influence, and overall the effect is neutral.

+1: The construct is a positive influence in the organization, a facilitating influence in work processes, and/or a facilitating influence in implementation efforts. Interviewees make general statements about the construct manifesting in a positive way but without concrete examples:

  • The construct is mentioned only in passing or at a high level without examples or evidence of actual, concrete descriptions of how that construct manifests;

  • There is a mixed effect of different aspects of the construct but with a general overall positive effect; and/or

  • There is sufficient information to make an indirect inference about the generally positive influence.

+2: The construct is a positive influence in the organization, a facilitating influence in work processes, and/or a facilitating influence in implementation efforts. The majority of interviewees (at least 2) describe explicit examples of how the key or all aspects of a construct manifest in a positive way.

Responses were coded as missing when interviewee(s) were not asked about the presence or influence of the construct, or when, if asked about a construct, their responses did not correspond to the intended construct and were instead coded to another construct. The lack of knowledge on the part of an interviewee(s) about a construct does not necessarily indicate missing data and may instead indicate the absence of the construct.

RESULTS

Thirty-seven interviews were completed with participants across 8 sites (Table 3). At all sites, a minimum of 3 people were interviewed, including e-consult site leader(s). In site-level CFIR ratings, 3 CFIR constructs had negative ratings in both low- and high-volume sites: design quality and packaging (perceptions of how the intervention is bundled and presented), leadership engagement, and goals and feedback, suggesting that these might be areas of concern for VHA. Nevertheless, the high-volume sites were able to overcome these challenges. Specifically, 4 CFIR constructs had more positive ratings at high-volume sites and more negative, neutral, or mixed ratings at low-volume sites, suggesting they might be critical implementation determinants: 1) compatibility, 2) networks and communications, 3) available resources (specifically training), and 4) access to knowledge and information. Differences between the low- and high-volume sites for each of these constructs are described below, and more examples are provided in Table 4.

Table 3.

Number of Semi-Structured Interviews by Site and Role

Site | E-Consult Site Leader | PCP | Other Support Staff | Specialist | Other Provider | Total
1 2 1 3
2 1 1 2 2 6
3 2 1 3
4 2 1 1 4
5 5 1 6
6 3 1 1 5
7 2 1 1 1 5
8 2 2 1 5
Total 19 6 5 4 3 37
Sites 1 to 5 are low-volume e-consult sites; sites 6 to 8 are high-volume e-consult sites.

Site leaders breakdown: associate chiefs of staff, chiefs of staff, or directors of primary or specialty care (n = 12); primary care providers (PCPs; n = 3); specialists/pharmacists (n = 4).

Table 4.

Illustrative Quotes by Construct and by High- and Low-Volume E-Consult Sites

Compatibility

High-volume: “[E-consults are] very compatible. Before, I’d walk down and talk to the cardiologist, but it’s better for the flow of the day and location…it used to take 5 minutes to walk down there [to the specialist’s office], but if you’re with a patient and need an answer that day, it takes patient care time away. I think [e-consults are] very good for the flow”

Low-volume: “If this is going to help the patients get taken care of, they are going to have to show it improving utilization and access. On the other hand if they start rejecting more consults and just [are] seeing them [patients] electronically, they are not necessarily going to make the patients happier. We should define the criteria and expectations and see how this is going to impact the work flow”

Networks and communications

High-volume: “I think there’s a better spirit of collegiality from e-consults, too. There could be a little bit of friction if there were too many consults, overworked, don’t have access. I find myself in a situation [now with e-consults] where the doc is actually looking for consults…I just find it better [for] communication and collegiality, I guess”

Low-volume: “Sometimes it seems when we send an e-consult it seems it goes out into space and no one looks at it; not sure how the process is on the other end. We have had to call and check sometimes because the specialist might have been gone. Would be nice to have some feedback if people are gone”

Training

High-volume: “[We] provide training with CBOCs, other facilities. [At a] Regular scheduled time, [and] explain [the e-consult] process, and answer questions”

Low-volume: “There should be more training in terms of informational conference calls. There has to be a rollout process if you want to do it widely. This hasn’t happened so far. Secure messaging has had a formal rollout and we have a coordinator to manage, same thing for telehealth. Those are the ones funded that we have really been pushing this year, which hasn’t really happened with e-consult so far”

Access to knowledge and information

High-volume: “I think a new person in the front office is a point person and was hired to keep it [e-consults] in the formal process” and “I’m the point person for provider education, and we received approval for CME [continuing medical education] credits and a once-monthly educational program that will run through the year…[it’s] a phone-based conference providers can call into”

Low-volume: “Not sure who e-consult coordinator is now…Week or two ago, found out surgeons had no idea how to fill out e-consult to get credit. They are doing more than anyone…and getting no credit”

CBOCs indicates VA community-based outpatient clinics.

High-volume signifies Consolidated Framework for Implementation Research ratings closer to +2; low-volume, closer to −2.

Compatibility

Compatibility refers to the degree of tangible fit between meaning and values attached to the intervention, as well as those of the individuals’ own norms, values, perceived risks, and needs in the context of how the intervention fits with existing work flows and systems.14 Participants’ opinions on compatibility varied at low-volume sites. Some PCPs perceived e-consults as adding to their workload and were not happy with the transfer of responsibility for certain tasks: “I feel like they’ve tried to transfer a lot of the work and basically [are] making the PCP a clerical person to collect and collate and put all this data together.” Others at low-volume sites saw the potential for e-consults to make a difference, but were frustrated with the need to account for numbers of e-consults: “Here’s what drives me nuts. We have always done e-consults here. We just didn’t call them e-consults…Then suddenly someone gave them a name—e-consults—and someone decided we could measure them. Had to change the process to deliver advice so they could get counted…[the] number of e-consults isn’t the be-all, end-all. [We] do [e-consults] to decrease visits. Appropriate to measure prompt access, not number [of e-consults].”

For high-volume sites, e-consults were more consistently described as good for work flow by streamlining existing consult processes. One e-consult site leader from a high-volume site thought the process was very efficient: “I love it; I think it’s fantastic. There are many times things come up and I would like opinions on and get notes in [the] chart but I don’t think the provider needs to see the patient. I can do it when I have time to organize my time and thoughts…Most of the time this is faster than [a] face-to-face appointment.”

Examining further the differences in context between the low- and high-volume sites that might account for differences in perception of the compatibility of e-consults with existing processes, the high-volume sites incorporated e-consults in ways that improved efficiency of operations, whereas the low-volume sites did not. Specifically, high-volume sites spent considerable time and effort tailoring the EHR templates to be completed easily and quickly. One site hired a pharmacist to handle the additional workload needed to generate and follow up on e-consults. In contrast, low-volume sites did not take extra steps to facilitate implementation.

Networks and communications

The networks and communications construct refers to the nature and quality of webs of social networks and of formal and informal communications within an organization.14 With e-consults, specialists must reach out to PCPs to engage them, so it is important that good networks and communications exist to facilitate this engagement. Most low-volume sites noted that there was little to no communication around implementation of e-consults. One specialist at a low-volume site said variability in communication within their Patient-Aligned Care Team (PACT; the VHA’s version of patient-centered medical home) created a barrier to implementation of e-consults: “…meaning, there’s many, many different ways these PACT teams communicate with each other, and because they’re ultimately responsible for implementing the e-consults, I would say the greatest barrier is the way they communicate. By sticky notes, phone calls, CPRS [EHR], I would say the variability [among] those methods is our largest barrier to successfully implement these, because [the] system needs to take into account that variability and there’s a significant amount.”

In contrast, PCPs at high-volume sites noted the existence of good communication and relationships between PCPs and specialists: “There’s been a lot of consultation between the e-consult team and us, so we are happy with the product we have… I think there’s a better spirit of collegiality from e-consults too.” The same PCP added,“… specialists are accommodating, easily approachable, stop by and talk to us… We are at an advantage, even though we are [a] large [medical center, because] there is a good relationship between PCP and specialist… Everyone is all on the same floor, so [there is] definitely good communication between PCPs and specialists.” Another high-volume site participant noted that communication with e-consults was vital to successful implementation: “Everyone’s been very helpful especially in neurosurgery, especially in communicating and I think that’s the biggest key.”

Available resources: training

Training is a sub-construct of available resources, referring here to whether the training provided was helpful. Results showed that training must include one-on-one, hands-on demonstrations to be most effective. At low-volume sites, training was less common. One participant wished there was “an education component, educating providers about how to follow this new pathway,” and noted that, “I didn’t get any training whatsoever. Should have been initial standard training, not just [an] implementation guide; that would have been very beneficial.” In contrast, a high-volume site participant noted that training was crucial, “the key thing to getting this [e-consults] implemented.” Another high-volume site participant said it was instrumental that a specialist was hired specifically to implement e-consults, and under the specialist’s leadership, several trainings were conducted with PCPs to familiarize them with the program.

Access to knowledge and information

Access to knowledge and information refers to the ease of access to knowledge and information about e-consults in relation to work tasks.14 At low-volume sites, concerns were expressed with acquiring access to information at local and national levels. Some participants felt they were provided with very little assistance with implementation, and that processes were confusing. One PCP said, “[I’m] not sure who [the] e-consult coordinator is now. [I] felt like I was left on my own.” Another noted, “It took 6 months from wanting to start [e-consults] to launching because of misinformation we were getting from both what Central Office wants on forms and what we could link and how to do this and getting time and attention from CAC [Information Technology Services].” At high-volume sites, participants felt there was a clear point of contact for obtaining information. One specialist noted their point of contact was “a really good source of information, and constructive in putting me in contact with other diabetes specialists in the country, and supporting the efforts to learn from our colleagues…[I’ve] been able to grow in unique ways because of his guidance.”

DISCUSSION

Interviews and structured site-level ratings were used to identify a subset of 4 CFIR constructs that may be critical factors for implementing an e-consult initiative. A closer review of the interview responses offers suggestions for why the low- and high-volume sites implemented the initiative differently, which helps to explain why they differed with respect to their perceptions of compatibility, networks and communications, training, and access to knowledge and information. Basically, the high-volume sites expended more time, energy, and resources into implementing the program than did the low-volume sites. This is not a surprising or very informative conclusion by itself, but the benefit of our approach lies in the identification of the specific areas (specific CFIR constructs) in which time, energy, and resources should be expended to achieve the best results. Specifically, the high-volume sites devoted greater effort to developing mechanisms to make e-consults more efficient for staff (ie, developing easy-to-use templates and designating staff to help with the additional workload generated by the consults); investing in training; and designating a point person to answer questions and provide information about the program. These efforts helped achieve positive results despite challenges with leadership engagement, limited materials received for implementation, and poor feedback on the status of goals or implementation.

Although the quality of networks and communications was also different between the low- and high-volume sites, it is more difficult to understand the reason for this difference and, in turn, to make specific recommendations for how sites interested in implementing e-consults might address this. Instead, sites with good networks and communications between PCPs and specialists have a better chance of succeeding with implementing a program such as e-consults that requires coordination across departments. Sites at which this is a problem should consider this a red flag before implementing similar programs.

While we cannot conclude that implementation success was determined by these factors, or that these same factors would be important in other sites or initiatives, the findings about the potential role of specific contextual factors were helpful to the VHA OSCT and were used to generate recommendations in subsequent implementation guides for e-consults and for other initiatives focused on improving access to specialty care.

Qualitative data collection and analysis is a common approach in implementation studies, because of the wealth of information that can be obtained from in-depth interviews, compared with closed-ended survey questions. However, there is a need to use a common terminology and to code qualitative data in a way that can facilitate analysis of data not only across cases within a single study, but also across studies. We hope this paper serves to illustrate the utility of the study’s methods for these purposes, encompassing both quality improvement (QI) and research. Although attention was paid to ensuring qualitative rigor by means of a clear and transparent data collection/analysis protocol, reflexivity, and peer debriefing, this protocol could be further improved with the use of a true consensual approach and/or with triangulation via multiple methods to validate the analyses.24,25 Regardless of the specific methods used for increasing reliability of the coding process, by applying the widely accepted terminology of the CFIR to analyze differences between low- and high-implementation sites, findings from other small-sample, qualitative QI studies can begin to be combined. Meta-analyses of these data can then be conducted to improve generalizability and add to our understanding of the most important factors affecting implementation success.

Limitations

One of this study’s limitations is limited generalizability of the findings beyond the participating sites and beyond e-consults. However, the purpose of this study was not to contribute to general, context-independent conclusions. Instead, the purpose was to obtain context-dependent knowledge, to help program leaders better understand the important role of context in implementation. In addition, while the research team took many steps to ensure rigorous qualitative analysis when applying the CFIR, it cannot eliminate the risk of researcher subjectivity, which is inherent to all qualitative analysis. However, the steps applied in the analysis give us confidence in the reliability of the ordinal CFIR ratings and that the rating process could be replicated and generate similar results. Nevertheless, because of the rapid analytic approach used, we did not have the time to follow a true consensual research approach,15,26 which we recommend be used whenever possible, to help reduce bias.

Another limitation is that not all CFIR constructs were considered in the coding process, as analysts focused on the shortened list of constructs, informed by the survey completed by clinical leads prior to the interviews. In addition, not all sites had sufficient data to assign a code or rating to each construct from the shortened list; thus, data on some constructs were missing for some sites.

CONCLUSIONS

The veteran population the VA serves may benefit from VHA healthcare providers utilizing e-consults to improve access to, and quality of, patient care. Our study identified 4 critical implementation factors: compatibility, networks and communications, available resources (specifically training), and access to knowledge and information, on which future VHA medical centers can focus to successfully implement e-consults and similar telehealth initiatives. Furthermore, we observed that sites that devoted effort to making e-consults more efficient for staff, invested in training, and designated a point person to answer questions and provide information about the program were more successful in implementing e-consults than were other sites. These results have important policy implications, in that successfully implemented initiatives like e-consults may lead to improved patient care and shorter patient wait times, and can spare patients time and travel for specialty care visits that can instead be addressed through e-consults. Further, our study demonstrates the importance of rigorous evaluation frameworks such as the CFIR for fully understanding the implementation processes of such initiatives both within and outside of the VA.

Supplementary Material

Take-Away Points.

Our research identified implementation factors that distinguished medical centers that were more successful at implementing a health information technology initiative, electronic consults (e-consults), from those that were less successful. These factors, and their implications for implementing new health information technology programs, include:

  • Compatibility: design the initiative to fit existing work processes.

  • Networks and communications: assess the degree of communication among participants and attend to indications of poor communication.

  • Training resources: expend effort on training.

  • Access to knowledge and information: establish key contacts who are easily accessible to program participants.

Acknowledgments

The authors represent the VHA Specialty Care Transformation Initiative Evaluation Center, and greatly appreciate and thank many other evaluation center members who contributed to data collection and analysis. The authors also wish to thank Tabitha Metreger for scheduling and coordination; Jeffrey Todd-Stemberg for obtaining data from the Corporate Data Warehouse; Omar Cardenas at VA Central Office for providing a variety of information on the e-consult initiative; and Rachel Orlando for assistance in submitting the manuscript. The authors are extremely grateful to all of the VHA clinicians and staff at the e-consult sites who generously shared their experiences and insights.

Source of Funding: This work was supported by the US Department of Veterans Affairs, Office of Specialty Care Transformation and Office of Health Services Research, and undertaken by the Specialty Care Transformation Initiative Evaluation Center. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

Footnotes

Author Disclosures: Drs Helfrich, Stevenson, Kirsh, and Ho, and Ms Haverhals are employees of VA (the evaluation in this study was of VA specialty-care initiatives). Dr Helfrich also has received VA grants. Dr Battaglia works for VHA in nursing/research. The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article. Findings from this paper were presented in a poster session at the Society for General Internal Medicine Conference in Denver, Colorado, on April 25, 2013.

Authorship Information: Concept and design (LMH, CDH, CB, DA, SK, JL); acquisition of data (LMH, GS, CDH, LDS, SK); analysis and interpretation of data (LMH, GS, CDH, DA, SK, PMH, JL); drafting of the manuscript (LMH, GS, CDH, CB, LDS, SK, JL); critical revision of the manuscript for important intellectual content (LMH, CDH, CB, DA, LDS, SK, PMH, JL); statistical analysis (LMH); provision of patients or study materials (LMH); obtaining funding (CDH, DA, SK, PMH); administrative, technical, or logistic support (LMH); and supervision (JL).

REFERENCES

  • 1. Fortney J, Kaboli P, Eisen S. Improving access to VA care. J Gen Intern Med. 2011;26(suppl 2):621–622. doi: 10.1007/s11606-011-1850-2.
  • 2. Pizer SD, Prentice JC. What are the consequences of waiting for health care in the veteran population? J Gen Intern Med. 2011;26(suppl 2):676–682. doi: 10.1007/s11606-011-1819-1.
  • 3. Buzza C, Ono SS, Turvey C, et al. Distance is relative: unpacking a principal barrier in rural healthcare. J Gen Intern Med. 2011;26(suppl 2):648–654. doi: 10.1007/s11606-011-1762-1.
  • 4. Washington DL, Bean-Mayberry B, Hamilton AB, Cordasco KM, Yano EM. Women veterans’ healthcare delivery preferences and use by military era: findings from the National Survey of Women Veterans. J Gen Intern Med. 2013;28(2):571–576. doi: 10.1007/s11606-012-2323-y.
  • 5. Fortney JC, Burgess JF Jr, Bosworth HB, Booth BM, Kaboli PJ. A re-conceptualization of access for 21st century healthcare. J Gen Intern Med. 2011;26(suppl 2):639–647. doi: 10.1007/s11606-011-1806-6.
  • 6. Kehle S, Greer N, Rutks I, Wilt T. Interventions to improve veterans’ access to care: a systematic review of the literature. J Gen Intern Med. 2011;26(suppl 2):689–696. doi: 10.1007/s11606-011-1849-8.
  • 7. Kvedar JC, Nesbitt T, Kvedar JG, Darkins A. E-patient connectivity and the near term future. J Gen Intern Med. 2011;26(suppl 2):636–638. doi: 10.1007/s11606-011-1763-0.
  • 8. Rosland AM, Nelson K, Sun H, et al. The patient-centered medical home in the Veterans Health Administration. Am J Manag Care. 2013;19(7):e263–e272.
  • 9. Wild K, Tanner C, Kaye J, et al. Electronic consults to facilitate specialty dementia assessment and care. Alzheimers Dement. 2012;8(suppl 4):231.
  • 10. Pap SA, Lach E, Upton J. Telemedicine in plastic surgery: e-consult the attending surgeon. Plast Reconstr Surg. 2002;110(2):452–456. doi: 10.1097/00006534-200208000-00012.
  • 11. Salvo M, Nigro SC, Ward D. Pharmacist-generated electronic consults to improve hypertension management in a multisite health centre: pilot study. Inform Prim Care. 2012;20(3):181–184. doi: 10.14236/jhi.v20i3.23.
  • 12. Angstman KB, Rohrer JE, Adamson SC, Chaudhry R. Impact of e-consults on return visits of primary care patients. Health Care Manag (Frederick). 2009;28(3):253–257. doi: 10.1097/HCM.0b013e3181b3efa3.
  • 13. Ackerman S, Intinarelli G, Gleason N, et al. “Have you thought about sending that as an e-consult?”: primary care providers’ experiences with electronic consultations at an academic medical center. J Gen Intern Med. 2014;29(suppl 1):15.
  • 14. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi: 10.1186/1748-5908-4-50.
  • 15. Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8:51. doi: 10.1186/1748-5908-8-51.
  • 16. Stetler CB, Legro MW, Wallace CM, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21(suppl 2):S1–S8. doi: 10.1111/j.1525-1497.2006.00355.x.
  • 17. Flyvbjerg B. Five misunderstandings about case-study research. Qualitative Inquiry. 2006;12(2):219–245.
  • 18. Yin RK. Applications of Case Study Research. 2nd ed. Applied Social Research Methods Series, vol 34. Thousand Oaks, CA: Sage; 1993.
  • 19. Yin RK. Case Study Research: Design and Methods. 2nd ed. Applied Social Research Methods Series, vol 5. Thousand Oaks, CA: Sage; 1994.
  • 20. Hill CE, Knox S, Thompson BJ, Williams EN, Hess SA, Ladany N. Consensual qualitative research: an update. J Couns Psychol. 2005;52(2):196–205.
  • 21. Hill CE, Thompson BJ, Williams EN. A guide to conducting consensual qualitative research. The Counseling Psychologist. 1997;25(4):517–572.
  • 22. Averill JB. Matrix analysis as a complementary analytic strategy in qualitative inquiry. Qual Health Res. 2002;12(6):855–866. doi: 10.1177/104973230201200611.
  • 23. Rihoux B, Ragin CC. Why compare? why configurational comparative methods? In: Rihoux B, Ragin CC, eds. Configurational Comparative Methods. Thousand Oaks, CA: Sage; 2009.
  • 24. Gilson L, Hanson K, Sheikh K, Agyepong IA, Ssengooba F, Bennett S. Building the field of health policy and systems research: social science matters. PLoS Med. 2011;8(8):e1001079. doi: 10.1371/journal.pmed.1001079.
  • 25. Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ. 2000;320(7226):50–52. doi: 10.1136/bmj.320.7226.50.
  • 26. Damschroder LJ, Goodrich DE, Robinson CH, Fletcher CE, Lowery JC. A systematic exploration of differences in contextual factors related to implementing the MOVE! weight management program in VA: a mixed methods study. BMC Health Serv Res. 2011;11:248. doi: 10.1186/1472-6963-11-248.
