Abstract
Objective
Oversight of clinical quality is only one of physical therapy managers’ multiple responsibilities. With the move to value-based care, organizations need sound management to navigate this evolving reimbursement landscape. Previous research has not explored how competing priorities affect physical therapy managers’ oversight of clinical quality. The purpose of this study was to create a preliminary model of the competing priorities, motivations, and responsibilities of managers while overseeing clinical quality.
Methods
This qualitative study used the Rapid Qualitative Inquiry method. A purposive sample of 40 physical therapy managers and corporate leaders was recruited. A research team performed semi-structured interviews and observations in outpatient practices. The team used a grounded theory-based immersion/crystallization analysis approach. Identified themes delineated the competing priorities and workflows these managers use in their administrative duties.
Results
Six primary themes were identified that illustrate how managers: (1) balance managerial and professional priorities; (2) are susceptible to stakeholder influences; (3) experience internal conflict; (4) struggle to measure and define quality objectively; (5) are influenced by the culture and structure of their respective organizations; and (6) have professional needs apart from the needs of their clinics.
Conclusion
Generally, managers’ focus on clinical quality is notably less comprehensive than their focus on clinical operations. Additionally, the complex role of hybrid clinician-manager leaves limited time beyond direct patient care for administrative duties. Managers in organizations that hold them accountable to quality-based metrics have more systematic clinical quality oversight processes.
Impact
This study gives physical therapy organizations a framework of factors that can be influenced to better facilitate managers’ effective oversight of clinical quality. Organizations offering support for those managerial responsibilities will be well positioned to thrive in the evolving value-based care structure.
Keywords: Quality, Leadership, Managers, Priorities, Value, Organizational Behavior
Introduction
Competing priorities within physical therapy organizations often leave them ineffective at creating measurable change in standardized outcome measures, even with therapists’ advanced training and focus on patient-centered care.1 Although quality measures and evidence-supported examination and treatment procedures are cited as major components of health care quality, the field of physical therapy has historically shown limited uptake of both.2 Less than 50% of therapists participate in quality reporting or perform examinations that conform to evidence-supported guidelines.3–5 While individual therapists are partially responsible for this lack of uptake, organizations and their managers are responsible for ensuring quality of care through the support of clinic workflows.6
Managers’ Role in Quality of Care
Midlevel managers are tasked with implementing corporate policy and mission while also managing the expectations and needs of frontline workers.7 While health care providers prioritize delivering quality care, executives and managers have their own priorities as they oversee the operations of an organization.8 Effective managers synthesize organizational vision, policies, and procedures and diffuse this information to frontline workers, which facilitates practice optimization.9 While hybrid clinician-managers are uniquely positioned to link providers to their organization’s vision, these dual roles compete for managers’ time and attention.8,10,11
Health care managers are pulled in different directions based on organizational priorities, various stakeholder perspectives, and competing tasks.12 Their role identities are complicated in that most physical therapy managers provide direct patient care for the majority of their time, limiting the focus they invest in management duties.13 Serving multiple, simultaneous roles distracts managers from exploring quality-based organizational metrics and proactively managing staff’s clinical quality.10
Value-Based Care Requires Organizational Strategy and Management
Health care value is a ratio of an outcome to the dollars spent.14 This outcome numerator is often used to quantify quality. However, even from the early work of Donabedian, we understand the measurement of quality is complex, and thus challenging to manage.15 Adding to the challenge, the “Triple Aim,” and more recently the “Quadruple Aim,” define quality from competing perspectives (patient, population, payer, and provider) that must be balanced.16,17 Porter and Lee outlined a 6-stage strategy to create high-value health care systems that also address these different perspectives.18 One of their recommendations is to measure outcomes at every visit. However, most physical therapists inconsistently measure quality indicators, primarily because of the burden of data collection and limited perceived value in the information.3
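The value ratio described above can be written compactly as follows (this rendering is illustrative; the labels paraphrase the ratio described in the text rather than quoting a formal definition):

```latex
% Health care value as a ratio of outcomes to cost
\[
\text{Value} \,=\, \frac{\text{health outcomes achieved}}{\text{dollars spent to achieve them}}
\]
```

As the surrounding text notes, the numerator is the harder term to operationalize: quality is multidimensional, so the same denominator can be paired with competing definitions of the outcome depending on the perspective (patient, population, payer, or provider) being prioritized.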
In response to these barriers to clinicians’ use of quality measures, authors have developed frameworks to assist therapists in using quality indicators in practice.2 However, no studies have specifically explored the impact of managers on therapists’ performance on outcome measures. Health care studies outside physical therapy have shown managerial focus influences patient outcomes.19 Several studies have shown that executive leaders of top-performing health care organizations emphasize quality measurement.20–22 Thus, organizational strategic focus on quality, and by extension managers’ prioritization of managing quality, is necessary to drive value-based care.
To understand how managers can facilitate providers’ use of quality measures, organizations must start by understanding what competes for managers’ focus on quality measures.23,24 The aim of this study was to describe how the varying roles, stakeholder influences, and competing priorities combine into an overarching framework that represents how physical therapy managers oversee clinical quality among their competing priorities.
Methods
Rapid Qualitative Inquiry
This study used Rapid Qualitative Inquiry (RQI) based on the work of Beebe and extended by McMullen and colleagues.25–28 RQI is an iterative approach combining semi-structured interviews, observations, and team meetings for analysis and study process modifications. This particular study was part of a larger mixed-methods study exploring physical therapy managers’ use of clinical information systems to manage clinical quality.29 The sampling strategy, interviews, and observations served both the aim of this manuscript as well as the greater mixed-methods study. This study was granted exemption by the institutional review board at Oregon Health & Science University (OHSU).
The research team consisted of 7 individuals chosen for variety in perspective and expertise. Five had domain experience spanning qualitative methods, physical therapy, and medicine, with informatics training ranging from 1 to 5 years. Two additional team members were faculty in the OHSU Department of Medical Informatics and Clinical Epidemiology and Division of Management with expertise in qualitative methods, health care leadership, and research design. Additionally, the team consulted with navigators from each of the study sites to test inferences and member-check information and themes. The team approach allowed for much broader and richer perspectives than could be achieved by a single researcher.
Sampling Strategy
Based on our overarching mixed-methods study’s focus on managers’ information system use, organizations were purposively invited to participate to represent a variety of sociotechnical organizational factors.30 Seven companies were invited, with the goal of diversity in organizational size and structure, clinical information system use, and focus on clinical quality. Five organizations agreed to participate, each operating from 4 to more than 500 outpatient clinics. Organizations with fewer than 4 clinics were excluded because they often do not employ managers. The study organizations operate primarily in the northwestern United States and are described in Table 1.
Table 1.
Study Site Characteristicsa
| Organization | Accepted? | Setting | Navigator’s Role | No. of Clinics (% Observed) | Geographic Coverage | Quality-Based Information System | 
|---|---|---|---|---|---|---|
| A | Yes | Private—urban and suburban | Clinic director and corporate quality improvement | 25 (24%) | Northwest | Purchased, dedicated system | 
| B | Yes | Private—urban and suburban | Clinical research/quality | 800 (0.5%) | National | Internally developed EHR | 
| C | Yes | Hospital—urban and suburban | Lead therapist, quality council member | 6 (50%) | Northwest | EHR-based reports | 
| D | Yes | Private—urban | Owner and clinic director | 4 (100%) | Northwest | Purchased, dedicated system | 
| E | Yes | Private—urban and suburban | Owner and clinic director | 4 (50%) | Northwest | Purchased, dedicated system | 
| F | Declined | Hospital—urban and suburban | NA | 15 | Mountain | Internally developed dedicated system | 
| G | Declined | Private—rural and suburban | NA | 24 | Mountain | EHR-based reports | 
a EHR = electronic health records; NA = not available.
The navigator at each organization served as the primary point of contact for the site visits. This organizational leader nominated individuals directly involved in the management of clinic staff. The primary emphasis was to investigate their oversight of clinical quality within clinics. We sought individuals with a range of management experience, performance, and contrasting perspectives, as assessed by our on-site navigators, organizational leadership, and nominations from other interviewees, to represent various types of leadership and viewpoints.31 Each study clinic had a single manager. The percentage of clinics we visited for each organization is listed in Table 1. Additionally, we interviewed organizational executives who were responsible for defining corporate vision around quality improvement and prioritizing corporate initiatives.
Interviews and Observations
Pairs of team members completed site visits according to uniform interview and observation guides (Suppl. Appendixes A and B). These guides were pilot-tested with 4 managers of a single organization prior to the first formal site visits. The interview guide was modified slightly between site visits to refine themes according to the RQI approach.25 Semi-structured interviews were audio-recorded and professionally transcribed. Observations for this phase of the overall study emphasized managers reviewing metrics for which they were responsible. Researchers used this time to more deeply question managers regarding where they spent most of their administrative time and how they were held accountable to the metrics. The interview, observation, and analysis methods have also been described previously.29
Analysis
Researchers identified key themes32–34 through a grounded theory-based immersion/crystallization approach. To address researcher bias, the team added rigor through reflexivity: team members logged their own preconceptions and beliefs before and after site visits. Transcripts were coded according to a standardized codebook. The team used NVivo qualitative analysis software to organize transcripts, manage quotes, and aid in identifying relationships between codes (NVivo 12, QSR International Pty Ltd, Doncaster, Victoria, Australia).
Interviews, observations, and analysis proceeded iteratively until saturation was achieved.35 Through team meetings before and after site visits, the codebook was refined and preliminary themes were identified. Dyads coded a single transcript until adequate consensus was reached; this consensus was achieved after the second organizational site visits. The remaining transcripts were coded by a single team member. At team meetings, emerging themes were refined through consensus-building exercises. The team refined themes after subsequent organizational site visits. Themes were triangulated via member-checking with managers and organizational leaders, between research team members, and between organizations.32
Results
The team visited 26 separate clinics, interviewing and observing 40 individuals over 8 months. The roles of the study participants are outlined in Table 2. Based on multiple team discussions and confirmed through triangulation, the team identified 6 themes. Representative quotes for each major theme are presented in Table 3.
Table 2.
Study Participant Characteristics
| Role | Count | Average y of Physical Therapy Experience (SD) | Average y of Experience in Current Role (SD) | Sex | 
|---|---|---|---|---|
| Manager | 18 | 14.78 (9.40) | 7.33 (6.86) | 33% Female | 
| Lead therapist | 3 | 8.6 (3.49) | 2.33 (0.47) | 67% Female | 
| Regional director (oversees managers) | 6 | 9.17 (3.44) | 2.17 (2.99) | 20% Female | 
| Owner | 4 | 15.0 (10.02) | 8.0 (6.4) | 25% Female | 
| Executive | 9 | 22.0 (8.27) | 3.72 (3.58) | 56% Female | 
Table 3.
Themes and Representative Quotesa
| Theme 1: Managers’ deliverables | Theme 2: Stakeholder influence on managers | 
|---|---|
| “I’m typically doing physical therapy, but in any time that I have remaining…I’m looking into things like, we have lost patient logs, so I’m making sure that people are getting scheduled appropriately, I’m checking in on if somebody has a light schedule, why is that schedule light? How can we like work on filling that?” | “If I’m really pressed for time, I will sub out an in-person clinic meeting with a phone meeting, which definitely saves lots of time because you do not have the driving, suck of time, but then you miss out on the human interaction piece of it. If you do that too much then I’ve noticed stuff just starts to fall apart.” | 
| “Top priority is making sure that the clinics are meeting their budget expectations on all the main metrics we look at. Visits per day is a big one that we watch and cancellation, no show rate, and [marketing]. Then on an ongoing monthly basis we look at financial performance”—Regional director | “Yeah, certainly there is pushback … there’s concern that we expect our clinic directors to be full-time treating clinicians but then also to manage their staff, then they have to try to get their documentation done.”—Chief operating officer | 
| Theme 3: Managers’ focus | Theme 4: Issues with measures of quality | 
| “We’ve been drinking through a fire hose. Let us just do what needs to be done and we’ll figure out the other stuff later.” | “I think there’s a bias issue and I think that people, in general, want to feel like they are doing a good job and they usually care. It’s a very emotional profession and people do not like hearing that all the emotion they are putting into something is not helping the person.” | 
| “I think that I would be easily tempted to have someone call in and say I really need to get in and I would be like, okay, fine come in. So, if those aren’t blocked off and that’s just not a time that we are available, I think I would end up treating patients for those hours.” | |
| “If you start with a foundation of I do not trust the numbers, does anything else matter?” | |
| “If [quality] is going well, everything’s going well, right? I mean, if your quality sucks, your numbers are gonna suck, if your numbers suck, you should probably look at your quality.” | |
| Theme 5: Organizational culture/structure | Theme 6: Manager as a professional | 
| “I need to know now because I’m going to get a phone call at 10:00. I need to do a little bit of digging to understand what happened there.” | “I think like not just the clinic but what do you need? Like, what are your goals professionally? You know, where do you want to go?” | 
| “She does a great job. We have a meeting once a week and we review these numbers so if I have questions I can ask her or if I want to find more data she can help facilitate that for me.” | “Anytime you disconnect that and say that patient care is different than leadership management, I think that you now run the risk of really losing what really keeps the lights on, what keeps your doors open.”—Regional director | 
a All quotes are from clinic managers unless otherwise noted.
Theme 1: Balance Between Managerial and Professional Priorities
All the organizations held managers accountable for several business deliverables deemed essential for a thriving business unit. These deliverables were divided into those the manager could directly influence and those outside the manager’s direct control but integral to the health of the organization (termed direct and indirect deliverables, respectively). Described in more detail through the subsequent themes, these deliverables combine into a cohesive framework of balancing competing priorities (Fig. 1). Circles denote the direct deliverables and ovals the indirect deliverables. The size of each object indicates the level of focus and the amount of time managers estimated spending overseeing that domain.
Figure 1.

Proposed framework of the balance between managers' competing priorities.
Direct Deliverables
Managers described focusing primarily on tasks related to clinical operations, corporate compliance, human resources (HR), and direct patient care. To achieve these deliverables, managers typically oversaw a comprehensive set of metrics and processes. Governing these deliverables required the majority of managers’ time.
Providing direct patient care dominated the direct deliverables of these hybrid clinician-managers. Managers estimated they spent the next highest amount of time overseeing clinical operations. They used uniform operational metrics (described in theme 3 later) for these duties. Managers often deemed the remaining direct deliverables “putting out fires.” These typically involved working with HR and compliance issues, often with a reactive vs a proactive approach. The demand of these direct deliverables was described well by one manager: “I treat probably 14 to 17 people a day on average, sometimes 20. I have 7 employees that are providers. I have 16 employees total. So just managing all of that and ha, ha, ha! It’s busy!”
Indirect Deliverables
While managers focused primarily on the management of direct deliverables, they were ultimately concerned with indirect deliverables that were largely outside their control. Indicated by large white ovals in Figure 1, these included financial results, quality, and experiences both of patients and employees. As described in theme 5, all managers received incentive salary based on the financial performance of their business units. While they regularly reviewed the profit margin of their clinics, they spent more time managing the direct deliverables as lead indicators of the eventual profits.
Managers reported a sense of personal responsibility and commitment to ensuring their business units were delivering exceptional experiences for their patients and employees, yet very few managed these deliverables with the focus they applied to oversight of the financial deliverables. As described in theme 3, some organizations followed metrics that touched on patient and employee experience, but their focus on and interaction with these metrics was much less systematic than for the financial deliverables.
Theme 2: Stakeholders’ Influence on Managers’ Priorities
None of the managers had complete autonomy. Instead, their work was heavily influenced by other stakeholders. These included patients, payers, corporate executives, direct supervisors, and subordinate employees, each having varying levels of influence over the managers’ daily actions and focus.
Corporate Influence
All managers were required to devote the majority of their time to direct patient care. As a result, all managers stated they did not have adequate time to accomplish their administrative duties, often working outside regular work hours on managerial tasks. The influence of corporate stakeholders is described in more detail in theme 3.
Patient Influence
While managers were primarily focused on the needs of their patients, they also directed attention to their staffs’ patients. All managers defined clinical quality with a strong emphasis on patient-centered care and patient-perceived value. However, managers’ focus on the perspective of their staffs’ patients typically emphasized addressing negative online reviews. Many managers reported spending strategic planning time both individually and with their staffs, developing systems, processes, and programs to better their service and ensure a positive experience for their patients. However, managers rarely reported measuring whether those efforts were effective for patients.
Payer Influence
All managers reported that payer mandates, such as documentation, billing/coding, and authorization requirements, took an inordinate amount of their time compared to the impact of these tasks on patient outcomes. However, because these tasks were required for reimbursement, managers were compelled to attend to them, often decreasing the time available to focus on other areas managers felt would be more impactful.
Subordinate Employee Influence
The majority of managers acknowledged their role in ensuring their staffs were supported and thriving. This included tasks like clinical mentorship, goal-setting, performance reviews, engagement activities, and oversight of training and professional development. While satisfying all the previously mentioned stakeholders was a priority for managers, systematically attending to the needs of subordinate employees often was not. While nearly all managers expressed a strong desire to support their staff, the majority stated that time focused on their employees was neglected when they were busy with other tasks.
Theme 3: Managers’ Internal Conflict
The managers’ focus was heavily impacted by the stakeholder influences cited above. Serving all of these various roles often left managers feeling conflicted, as in this quote: “The right thing to do is get that patient taken care of and do it quickly, but how do other responsibilities and things kind of fall into place in and around those work hours?”
Direct Patient Care
Emphasized by the size of its circle in Figure 1, all managers spent the vast majority of their time providing direct patient care. One manager voiced a common frustration with the expectation to see patients nearly 40 hours per week and manage on the side: “It’s like how am I supposed to do that? I feel like that’s 2 jobs you want me to do simultaneously.”
When managers were pressed for time, developing relationships with their staff, marketing, and exploring their business metrics received little proactive attention. Interestingly, most managers cited these factors as high-priority items for the success of their business units. Some managers were afforded dedicated administrative hours, but most did not block out this time and thus often found it overbooked with patient appointments. Conversely, managers who maintained focus on administrative tasks regularly blocked out time in their schedules.
Other Distractions From Administrative Priorities
In addition to managers’ draw to direct patient care, they reported spending a great deal of time ensuring their staffs were meeting documentation and reporting compliance for insurance payers. Despite the time they spent focusing on these items, managers saw little value in them. None of the managers felt payer-mandated reporting helped them to understand the quality of patient care as either a clinician or manager.
Managers spent a considerable amount of time “putting out fires.” These included a myriad of HR and payroll tasks in addition to clinic scheduling and coverage for employee illness. Managers felt these items were unavoidable and typically could not be prepared for, but rather had to be handled in the moment. Nevertheless, addressing these issues consumed much of their administrative time.
Administrative Priorities
With their small amount of non–patient-care time, managers typically focused on marketing, strategic planning, attending meetings, completing performance reviews, clinical mentoring, and reviewing metrics. As stated earlier, all managers primarily emphasized reviewing operational metrics. Depending on the organization and individual facility, these metric reviews happened anywhere from daily to quarterly. Consistently, managers reviewed the number of patient visits and new patients, units billed per visit, provider productivity, cancellation and no-show rates, and visits per new patient. Most organizations had formal reporting structures that made it easy for managers to review these metrics.
Concerning quality-based metrics, most managers focused on measures of patient satisfaction via the Net Promoter Score, which has been studied for use in health care.36,37 Rather than emphasizing aggregate scores as recommended, most organizations focused on individual patient responses in addition to online reviews. Some of the organizations reviewed patient-reported outcome (PRO) scores. Even when these were consistently collected, most managers did not feel PRO scores were a good representation of the quality of patient care and often distrusted the numbers. These issues with measuring quality are described in more detail in theme 4.
Rather than focusing on outcome scores like PROs to understand the quality of care in their clinics, most managers used operational metrics as a proxy for quality. For instance, many used cancellation and no-show rates or the count of new patients as an indication that their therapists were delivering quality care. Most managers described a somewhat circular relationship between quality and operations: either quality outcomes flow from sound operations, or profit is a result of quality care. Either way, they saw little added value in measuring quality directly, and many rationalized their focus on operational numbers as their primary quality metric.
Theme 4: Issues With Defining and Measuring Quality
In addition to the influences cited earlier, several additional factors dissuaded managers from systematically measuring quality. Managers across the study organizations felt quality is a complex combination of constructs and as such, cannot be measured with a simple PRO. In spite of this sentiment, all organizations collected PROs from patients, most often in response to insurance payer mandates or from a feeling of professional responsibility to collect them.
Patient-Reported Outcome Measurement Concerns
Organizations typically measured whether patients were completing an initial and a discharge measure, but very few monitored PRO score change throughout the episode of care. Even fewer organizations aggregated these scores to show provider performance across populations of patients. In addition to a lack of faith in the ability of a PRO to represent clinical quality, managers generally feared how therapists would receive critiques of the measurable quality of the care they deliver. One organization implemented a provider quality scorecard shortly before our site visits. This organization spent a great deal of time preparing managers to have conversations with therapists so as not to suggest managers were judging them. Managers were fearful that focusing on clinicians’ PRO scores could be deflating for clinicians who cared deeply for their patients but did not perform well on PRO change scores. This hindered managers from using metrics to define clinical quality.
Alternative Means of Evaluating Quality
Because of these measurement concerns, most organizations assessed the quality of their staff through direct observation and eavesdropping. Nearly every manager stated that they listen in on patient encounters in their clinics while treating their own patients. Managers also performed random chart audits to check documentation quality as well as to assess the clinician’s clinical reasoning process. Additionally, managers typically performed clinical mentoring with their staff, during which they either cotreated patients or performed verbal case reviews. Most managers felt this process gave them a meaningful assessment of clinical quality but acknowledged the approach would not be feasible for managers with large staffs because of the time these tasks require.
Theme 5: Impact of Organizational Culture and Structure
Organizational structure had little impact on the way managers performed their duties. Managers in hierarchical, matrixed, and flat organizations, as well as in large versus small organizations, all behaved similarly. More notably, managers in organizations that emphasized “managing to the metrics” paid much more attention to metrics. Similarly, managers in organizations that emphasized the value of collecting PROs generally focused on these measures more deliberately.
The greatest influence on a manager’s metric-checking behavior was interaction with a direct supervisor who met with the manager regularly to review metrics. Managers in some of the organizations had no regular meetings with supervisors, whereas other organizations mandated these meetings weekly. Managers with direct supervisor interactions gave very consistent descriptions of the operational and quality metrics they regularly reviewed.
Role of Executive Focus
The executives in some organizations placed quality measures as a primary aim. Executives in all of the organizations cited clinical excellence, clinical outcomes, and positive patient experience as elements of their corporate missions. However, only a few measured these aims. Managers in organizations with less focus on formal quality measures had notably more variability in their metric-checking processes. Some of the managers in these organizations focused on quality metrics primarily out of an intrinsic drive. Other managers in the same organizations did not focus on quality metrics at all, even though often their staffs were collecting these measures on nearly all patients. Thus, executives’ attention to measuring quality had a strong influence on how managers prioritized quality metrics. Whether an organization purchased dedicated quality-based information systems had less apparent impact on the focus of the managers than did their executives’ priorities.
Role of Incentives
All managers in the study received some form of incentive salary. Most managers claimed that this influenced their focus on metrics. However, one organization provided managers with incentives based on organization-wide performance, and those managers did not feel incentives had any impact on their daily focus. In the remaining organizations, managers were incentivized as a percentage of their business units’ profits. However, some managers in these organizations still felt the profit of their business unit was not within their control and thus did not significantly motivate their daily focus. In all organizations, managers reported corporate culture and their intrinsic motivation had a greater impact on their behavior than did financial incentives.
Theme 6: Manager as a Professional
Managers in the study organizations ranged from 1 to more than 20 years of experience in their respective roles. As such, they exhibited varying levels of competence, confidence, and focus. Many of the more experienced managers also took on additional roles in the organizations, including ownership. While most organizations gave managers a great deal of latitude in how they chose to manage their respective clinics, even the most experienced managers were held to very firm productivity standards. As cited earlier in theme 3, most managers were required to treat patients the majority of the time. Executives in many organizations felt that if managers were to block out treatment slots to perform administrative duties, this would decrease managers’ productive time, resulting in less revenue.
The newer therapists, especially, reported feeling supported in their growth as clinicians through activities such as clinical mentorship, continuing education, and clinical in-services. However, few managers felt their organization took an active role in understanding their individual goals apart from the goals of the clinic. They felt the organization assumed their primary goals were related to clinic success rather than personal development. Some managers desired more clinical training, whereas other managers expressed interest in more structured business training to help them with both interpersonal management skills and attending to their various metrics. Organizations often attended proactively to the professional development of newer therapists, but less often to that of more tenured managers. Managers themselves regularly placed their time and focus on the goals of the clinic above their own professional development.
Discussion
Consistent with previous research, managers were pulled in numerous competing directions.7,12,38 Their foci were heavily influenced by the perspectives of the various stakeholders they served. Managers directly attended to certain aspects of their practices and indirectly attended to others while trying to maintain equilibrium. As illustrated in Figure 1, it is as if they were attempting to balance aspects of administrative duties on a seesaw of competing priorities. Supporting previous research, direct patient care monopolized managers’ time.8,38–40
With the remainder of their time, managers balanced a second seesaw of administrative priorities, one dominated by operations, compliance duties, and human resources issues. The indirect deliverables were upheld and shaped by the direct deliverables on which managers focused; consistent with past research, because these deliverables were governed only indirectly, managers spent little time proactively attending to them.41 As is true of health care providers generally, competing priorities overshadowed managers' personal growth, which both they and their organizations often neglected.42 Financial results and patient experience were clear priorities for the organizations and were supported by nearly everything the organizations did. However, these outcomes were the most removed from proactive management and were typically measured only in retrospect.
Owing to competing priorities and commitments, managers resorted largely to impromptu management strategies. Direct patient care, which monopolized managers’ time, often hindered them from strategic planning and proactive management. Managers would benefit from more structure in their schedule and additional support for their management efforts. This was most needed in their oversight of clinical quality.
As shown in Figure 1, clinical quality comprised a minority of managers' focus and was generally governed through emphasis on other factors believed to support quality. Rather than focusing on metrics, managers relied on eavesdropping to appreciate the clinical quality of their staffs, a behavior seen in past research.43 With the corporate emphasis on financial results and its primary underpinnings (operations and compliance), managers had little time and attention to devote to quality management. This was compounded by the challenges of defining and measuring quality, which have been shown in previous research.5,44,45
Consequently, managers focused on operational and compliance metrics as proxy measures of quality. Consistent with past research, this gave managers a false sense of managing quality when in fact they were not attending specifically to quality measures.3,46 This minimal focus on quality measures contributed heavily to the findings of our related mixed-methods research that showed managers regularly used information systems to manage clinic operations but rarely used information systems to manage quality.29 Porter and Lee’s value strategy suggests this lack of clinical information system use for quality, coupled with inconsistent outcomes measurement, impedes organizations’ ability to govern value effectively.18
If managers are to oversee value-based care effectively, they must be skilled in managing both quality outcomes and cost. Payers have taken the lead, mandating both cost containment and quality reporting.47 Managers will need to gain efficiency and effectiveness in managing quality outcomes if they hope to prove the value of the care they deliver. However, our results show they lack the bandwidth to take on these additional responsibilities, and most organizations still do not prioritize proactive management of quality.
Limitations
Because of the exploratory design of this study, these inferences should be systematically tested in future research. Only 5 organizations (mostly in the northwestern United States) participated in this study, limiting generalizability to other regions. Additionally, participants were interviewed during their work time and were nominated by their organizations. It is possible that they withheld candor or that we unknowingly excluded individuals with conflicting viewpoints from the sample. This research is part of a larger study that includes an anonymous survey, which should facilitate more candid responses from a wider sample and allow the inferences drawn here to be tested.
Conclusion
This study is the first published inquiry into the competing priorities of physical therapy managers. The results suggest managers would benefit from accountability to quality metrics, time and resources for quality-based administrative tasks, and improved balance between organizational focus on financial results and the experiences of patients and employees. Understanding the contenders for managers’ attention is the first step organizations must take in helping them to refocus on balanced corporate priorities. This balance is necessary to thrive in emerging value-based care models.
Supplementary Material
Acknowledgments
The authors acknowledge the contributions of Lily A. Cook to data collection and analysis.
Author Contributions
Concept/idea/research design: C.J. Hoekstra, J.S. Ash, N.A. Steckler, M. Mishra, P.N. Gorman
Writing: C.J. Hoekstra, J.S. Ash, N.A. Steckler, M. Mishra
Data collection: C.J. Hoekstra, N.A. Steckler, J.R. Becton, B.W. Sanders, M. Mishra
Data analysis: C.J. Hoekstra, J.S. Ash, N.A. Steckler, J.R. Becton, B.W. Sanders, M. Mishra
Project management: C.J. Hoekstra
Fund procurement: C.J. Hoekstra
Providing participants: C.J. Hoekstra
Clerical/secretarial support: C.J. Hoekstra
Consultation (including review of manuscript before submitting): C.J. Hoekstra, J.S. Ash, J.R. Becton, B.W. Sanders, M. Mishra, P.N. Gorman
Funding
This work was supported by the Foundation for Physical Therapy under a Promotion of Doctoral Studies (PODS) scholarship—Level I and by the National Library of Medicine under a clinical informatics training grant (No. T15-LM007088). The funders played no role in the design, conduct, or reporting of this study.
Ethics Approval
This study was granted exemption by the institutional review board at Oregon Health & Science University.
Disclosures
The authors completed the ICMJE Form for Disclosure of Potential Conflicts of Interest and reported no conflicts of interest.
References
- 1. Beissner KL, Bach E, Murtaugh CM, et al. Translating evidence-based protocols into the home healthcare setting. Home Healthc Now. 2017;35:105–112.
- 2. Westby MD, Klemm A, Li LC, Jones CA. Emerging role of quality indicators in physical therapist practice and health service delivery. Phys Ther. 2016;96:90–100.
- 3. Jette DU, Jewell DV. Use of quality indicators in physical therapist practice: an observational study. Phys Ther. 2012;92:507–524.
- 4. Jette DU, Halbert J, Iverson C, Miceli E, Shah P. Use of standardized outcome measures in physical therapist practice: perceptions and applications. Phys Ther. 2009;89:125–135.
- 5. Swinkels RAHM, van Peppen RPS, Wittink H, Custers JWH, Beurskens AJHM. Current use and barriers and facilitators for implementation of standardised measures in physical therapy in the Netherlands. BMC Musculoskelet Disord. 2011;12:106.
- 6. Wedge FM, Braswell-Christy J, Brown CJ, Foley KT, Graham C, Shaw S. Factors influencing the use of outcome measures in physical therapy practice. Physiother Theory Pract. 2012;28:119–133.
- 7. Wallick WG. Healthcare managers' roles, competencies, and outputs in organizational performance improvement. J Healthc Manag. 2002;47:390–401; discussion 401–402.
- 8. Graber DR, Kilpatrick AO. Establishing values-based leadership and value systems in healthcare organizations. J Health Hum Serv Adm. 2008;31:179–197.
- 9. Engle RL, Lopez ER, Gormley KE, Chan JA, Charns MP, Lukas CV. What roles do middle managers play in implementation of innovative practices? Health Care Manage Rev. 2017;42:14–27.
- 10. Kippist L, Fitzgerald A. Organisational professional conflict and hybrid clinician managers: the effects of dual roles in Australian health care organisations. J Health Organ Manag. 2009;23:642–655.
- 11. McGivern G, Currie G, Ferlie E, Fitzgerald L, Waring J. Hybrid manager–professionals' identity work: the maintenance and hybridization of medical professionalism in managerial contexts. Public Adm. 2015;93:412–432.
- 12. Budrevičiūtė A, Kalėdienė R, Petrauskienė J. Priorities in effective management of primary health care institutions in Lithuania: perspectives of managers of public and private primary health care institutions. PLoS One. 2018;13:e0209816.
- 13. Glendinning ME. Physiotherapists as managers: an analysis of tasks performed by head physiotherapists. Aust J Physiother. 1987;33:19–32.
- 14. Porter ME. What is value in health care? N Engl J Med. 2010;363:2477–2481.
- 15. Donabedian A. Evaluating the quality of medical care. Milbank Q. 1966;44:166–206.
- 16. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27:759–769.
- 17. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12:573–576.
- 18. Porter ME, Lee TH. The strategy that will fix health care. Harv Bus Rev. 2013;91:50–70.
- 19. Parand A, Dopson S, Renz A, Vincent C. The role of hospital managers in quality and patient safety: a systematic review. BMJ Open. 2014;4:e005055.
- 20. Jiang HJ, Lockee C, Bass K, Fraser I. Board oversight of quality: any differences in process of care and mortality? J Healthc Manag. 2009;54:15–29.
- 21. Jha A, Epstein A. Hospital governance and the quality of care. Health Aff (Millwood). 2010;29:182–187.
- 22. Venkatesh V, Zhang X, Sykes TA. Doctors do too little technology: a longitudinal field study of an electronic healthcare system implementation. Inf Syst Res. 2011;22:523–546.
- 23. Roberts JP, Fisher TR, Trowbridge MJ, Bent C. A design thinking framework for healthcare management and innovation. Healthc (Amst). 2016;4:11–14.
- 24. Carroll N, Richardson I. Aligning healthcare innovation and software requirements through design thinking. In: Proceedings of the International Workshop on Software Engineering in Healthcare Systems. Austin, TX, USA: ACM; 2016:1–7.
- 25. Beebe J. Rapid Qualitative Inquiry: A Field Guide to Team-Based Assessment. 2nd ed. Lanham, MD, USA: Rowman & Littlefield; 2014.
- 26. McMullen CK, Ash JS, Sittig DF, et al. Rapid assessment of clinical information systems in the healthcare setting: an efficient method for time-pressed evaluation. Methods Inf Med. 2011;50:299–307.
- 27. Ash JS, Sittig DF, McMullen CK, Guappone K, Dykstra R, Carpenter J. A rapid assessment process for clinical informatics interventions. AMIA Annu Symp Proc. 2008;2008:26–30.
- 28. Trotter RT, Needle RH, Goosby E, Bates C, Singer M. A methodological model for rapid assessment, response, and evaluation: the RARE program in public health. Field Methods. 2001;13:137–159.
- 29. Hoekstra C. The Impact of Context on the Use of Information Systems to Manage Clinical Quality [dissertation]. Portland, OR, USA: Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University; 2020.
- 30. Sittig DF, Singh H. A new socio-technical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care. 2010;19:i68–i74.
- 31. McGowan E, Stokes EK. Leadership in the profession of physical therapy. Phys Ther Rev. 2015;20:122–131.
- 32. Crabtree B, Miller W. Doing Qualitative Research. 2nd ed. Thousand Oaks, CA, USA: Sage Publications; 1999.
- 33. Lune H, Berg BL. Qualitative Research Methods for the Social Sciences. 9th ed. Harlow, Essex, UK: Pearson; 2018.
- 34. Glaser BG. Basics of Grounded Theory Analysis: Emergence vs Forcing. Mill Valley, CA, USA: Sociology Press; 1992.
- 35. Miles MB, Huberman AM, Saldaña J. Qualitative Data Analysis: A Methods Sourcebook. 3rd ed. Thousand Oaks, CA, USA: Sage Publications; 2014.
- 36. Hamilton DF, Lane JV, Gaston P, et al. Assessing treatment outcomes using a single question. Bone Joint J. 2014;96-B:622–628.
- 37. Krol MW, de Boer D, Delnoij DM, Rademakers JJDJM. The Net Promoter Score—an asset to patient experience surveys? Health Expect. 2015;18:3099–3109.
- 38. Embertson MK. The importance of middle managers in healthcare organizations. J Healthc Manag. 2006;51:223–232.
- 39. Spehar I, Frich JC, Kjekshus LE. Clinicians in management: a qualitative study of managers' use of influence strategies in hospitals. BMC Health Serv Res. 2014;14:251.
- 40. Limb M. Clinician managers are being left to struggle without management support and training, research finds. BMJ. 2015;350:h377.
- 41. Catasús B, Ersson S, Gröjer JE, Yang Wallentin F. What gets measured gets … on indicating, mobilizing and acting. Accounting, Auditing & Accountability Journal. 2007;20:505–521.
- 42. Hall LH, Johnson J, Watt I, Tsipa A, O'Connor DB. Healthcare staff wellbeing, burnout, and patient safety: a systematic review. PLoS One. 2016;11:e0159015.
- 43. Vuckovic N, Lavelle M, Gorman P. Eavesdropping as normative behavior in a cardiac intensive care unit. J Healthc Qual. 2004;W5-1–W5-6.
- 44. Dewhirst RC, Ellis DP, Mandara EA, Jette DU. Therapists' perceptions of application and implementation of AM-PAC "6-clicks" functional measures in acute care: qualitative study. Phys Ther. 2016;96:1085–1092.
- 45. Skeat J, Perry A. Exploring the implementation and use of outcome measurement in practice: a qualitative study. Int J Lang Commun Disord. 2008;43:110–125.
- 46. Scholte M, Neeleman-van der Steen CWM, van der Wees PJ, Nijhuis-van der Sanden MWG, Braspenning J. The reasons behind the (non)use of feedback reports for quality improvement in physical therapy: a mixed-method study. PLoS One. 2016;11:e0161056.
- 47. Jette AM. Moving from volume-based to value-based rehabilitation care. Phys Ther. 2018;98:1–2.