Author manuscript; available in PMC: 2020 Apr 1.
Published in final edited form as: Cleft Palate Craniofac J. 2018 May 21;56(4):508–513. doi: 10.1177/1055665618776069

Survey of North American Multidisciplinary Cleft Palate and Craniofacial Team Clinic Administration

Kristin D Pfeifauf 1,A, Kamlesh B Patel 1,B, Alison Snyder-Warwick 1,C, Gary B Skolnick 1,D, Sibyl Scheve 2,E, Sybill D Naidoo 1,F
PMCID: PMC6488511  NIHMSID: NIHMS1014216  PMID: 29781722

Abstract

Objective

This study aims to provide an understanding of the ways cleft palate and craniofacial teams address billing, administration, communication of clinical recommendations, appointment scheduling and diagnosis-specific protocols.

Design

An online clinic administration survey was developed using data from an open-ended telephone questionnaire. The online survey was distributed by email to the American Cleft Palate-Craniofacial Association (ACPA) nurse coordinator electronic mailing list, used regularly by the ACPA and its members to communicate with teams. The response rate was 34.1% (42/123). Two incomplete records were excluded, as were the conflicting responses of the three teams that submitted duplicate records.

Results

Six of 38 teams (15.8%) do not charge for clinic visits. For all other teams, some or all providers bill individually for services (68.4%) or a single lump sum applies (10.5%). Patients of 34/38 (89.5%) teams occasionally or often neglect to schedule or attend follow-up appointments. Twenty-six (66.7%) of 39 team directors were plastic surgeons. Phone is a common method of contacting families for scheduling (60.0%) and appointment reminders (82.5%). Most teams’ providers (90.0%) routinely communicate findings to each other during post-clinical conference. Most teams saw patients with isolated cleft lip (43.6%), cleft lip and palate (64.1%) or isolated cleft palate (59.0%) annually.

Conclusions

The breadth of team clinic administration strategies warrants further exploration of these variations and their effects on patient-centered outcomes, including quality of life, satisfaction, cost and resource utilization.

Keywords: cleft palate, craniofacial, multidisciplinary treatment, clinic organization

Introduction

A multidisciplinary, team-based approach has been standard of care for treatment of patients with orofacial clefts and craniofacial anomalies since 1938, when the first such team was established (Long, 2009; Capone et al., 2014). The nascent team included an orthodontist, surgeon and speech-language pathologist (SLP), and served as a model for early cleft palate and craniofacial teams (Capone et al., 2014). Over the course of nearly 80 years, the multidisciplinary team model has evolved to include a broader range of specialties and responsibilities, enabling practitioners to better meet patients’ complex needs (Strauss, 1999; Capone and Sykes, 2007).

The American Cleft Palate-Craniofacial Association (ACPA), recognizing a need for uniform guidance, established minimum standards of care for cleft palate and craniofacial teams in the 1990s and adopted its Standards for Cleft and Craniofacial Teams in 2008 (the Standards) (ACPA CAT Approval, 2017; ACPA Standards, 2017). The Standards provide a nationally accepted framework for the organization and delivery of contemporary cleft palate and craniofacial care. Only teams who adhere to these Standards are eligible to be listed by the ACPA as approved teams (ACPA CAT Approval, 2017; ACPA Standards, 2017). There are currently 183 ACPA-approved teams in the United States and Canada (Cleft Palate Foundation, 2017).

The Standards comprise the minimum requirements for multidisciplinary cleft palate and craniofacial team care. These criteria also afford teams flexibility to organize and function in a manner best suited to the needs of their individual patient populations, geographic locations, resources and treatment environments (Capone and Sykes, 2007). As a result, teams report diverse approaches to organization, leadership, examination format, assessment, team clinic frequency, treatment method, treatment setting, family outreach and non-essential team composition (Pannbacker et al., 1992; Strauss, 1998; Strauss, 1999; Capone and Sykes, 2007; Laub and Ajar, 2012; Ascha et al., 2016).

While previous research has examined variation in organization and practices of American cleft palate and craniofacial teams (Strauss, 1999; Capone and Sykes, 2007), there exist few published surveys of the teams themselves, and many of those that have been published are now outdated. Pannbacker et al. (1992) surveyed U.S. cleft palate and craniofacial teams about patient populations, sponsorship of parent-patient support groups and procedures for assessing velopharyngeal function. Strauss (1998) conducted a survey of North American cleft palate and craniofacial teams regarding number of patients, number of team clinics per year, variations in surgical treatments and procedures, and areas of non-compliance with an earlier iteration of ACPA standards, which served as a precursor to the 2008 Standards. Laub and Ajar (2012), in one of only two surveys published after the ACPA adopted the 2008 Standards, studied team examination format, including frequency of team clinics, presence of family in planning sessions, number of patients actively enrolled in clinic and funding sources. The other post-Standards survey addressed provision of social and support services (Ascha et al., 2016).

This survey focuses on the methods and processes contemporary cleft palate and craniofacial teams use to administer the team clinic, from appointment scheduling through post-examination follow-up. The purpose of this paper is to provide a better understanding of the ways in which ACPA-approved teams address administrative protocols such as billing, communication of clinical recommendations, appointment scheduling and diagnosis-specific protocols for team follow-up visits.

Methods

A preliminary open-ended questionnaire on cleft palate and craniofacial team clinic administration was developed [Appendix 1]. Responses were solicited and completed via telephone calls to the team coordinator or other primary contact for the ACPA-approved cleft palate and craniofacial teams at eleven hospitals identified as Best Children’s Hospitals by the 2015–2016 U.S. News & World Report Honor Roll, and one additional hospital with a strong reputation for cleft palate and craniofacial care (U.S. News, 2016). Seven (58.3%) of the twelve teams responded to this preliminary questionnaire.

Qualitative analysis of those responses was conducted to identify themes in team clinic administration. The results informed development of an online survey using the Research Electronic Data Capture (REDCap) version 6.16.5 electronic data capture tools. The online survey contained seventeen initial questions, plus one field allowing free-text input of additional information deemed relevant by the survey participant [Appendix 2]. Branching logic was employed to produce additional questions and subparts when applicable to participant responses to preceding questions. Discrete response formats, including dropdown list, checkbox and multiple-choice, were used in all but two questions. One of those two fields required participants to enter a numerical value; the other required participants to enter the name of their team or institution in free-text format. Whenever participants selected ‘other’ as a question response, they were provided a field for free-text explanation of that response.

The survey was distributed by emailing a link and invitation to participate to the ACPA nurse coordinator electronic mailing list, used regularly by the ACPA and its members to communicate with teams. The electronic mailing list included 123 recipients. Of those, one generated an automatic message indicating the recipient would be unable to reply to emails sent to that address, two generated automatic messages indicating the recipient had retired and four generated out-of-office replies. The survey was emailed on October 17, 2016. Follow-up reminder emails requesting participation were sent two and four weeks later. Personal follow-up emails encouraging participation were sent to individual team coordinators of any team contacted during the preliminary survey who had not responded to the online survey as of November 3, 2016.

Results were collated and all statistical analysis was performed using Microsoft Excel 2013.

Results and Discussion

There were 45 records submitted, with three teams submitting duplicate records. Adjusting for this, the actual response was 42 (34.1%) of 123 teams. Where possible, we averaged responses of duplicate records. When duplicate responses were in direct conflict and therefore could not be reconciled, we omitted those responses from the analysis. We also omitted from our analysis all data from two incomplete records. Major findings of the survey follow.

Billing and Costs to Families

In the majority of teams, some but not all providers bill individually for services, and costs for other providers are either not billed (34.2% = 13/38) or bundled together in a lump sum (23.7% = 9/38). When only some providers bill individually for services, the specialties billing for services are shown in Table 1. All providers bill individually for services at four teams (10.5%), and four teams (10.5%) charge a single lump sum for the entire visit. Six teams (15.8%) reported no charges for the visit.

Table 1.

Demonstrates which specialties regularly attend participants’ team clinics, which specialties bill individually for services and which specialties were reported to take the longest amount of time to evaluate patients on team day.

| Provider | Attends Clinic (n/40) | Percent Attending | Bills Individually for Services | Percent Billing Individually | Long Eval.* | Percent Long Eval. |
| --- | --- | --- | --- | --- | --- | --- |
| SLPˇ | 40 | 100.0% | 13 | 32.5% | 14 | 35.0% |
| Orthodontics | 39 | 97.5% | 6 | 15.4% | 1 | 2.6% |
| Plastic Surgery | 38 | 95.0% | 18 | 47.4% | 2 | 5.3% |
| Dentistry | 32 | 80.0% | 6 | 18.8% | - | 0.0% |
| Nursing | 32 | 80.0% | 1 | 3.1% | - | 0.0% |
| ENTˇ | 30 | 75.0% | 15 | 50.0% | - | 0.0% |
| Audiology | 30 | 75.0% | 16 | 53.3% | - | 0.0% |
| Genetics | 25 | 62.5% | 13 | 52.0% | 10 | 40.0% |
| OMFSˇ | 21 | 52.5% | 3 | 14.3% | 4 | 19.0% |
| Psychology | 20 | 50.0% | 8 | 40.0% | 6 | 30.0% |
| Pediatrics | 17 | 42.5% | 5 | 29.4% | - | 0.0% |
| Social Work | 10 | 25.0% | - | 0.0% | 1 | 10.0% |
| Neurosurgery | 6 | 15.0% | 9 | 150.0% | - | 0.0% |
| Prosthodontics | 6 | 15.0% | 2 | 33.3% | 1 | 16.7% |
| Nutrition/Lactation | 3 | 7.5% | - | 0.0% | - | 0.0% |
| Sleep Medicine | 3 | 7.5% | - | 0.0% | - | 0.0% |
| Ophthalmology | 1 | 2.5% | 4 | 400.0% | - | 0.0% |
ˇ Specialty abbreviations follow the conventions used throughout this paper: SLP is Speech-Language Pathology, ENT is Otolaryngology and OMFS is Oral and Maxillofacial Surgery.

* Indicates the number of teams reporting this specialty takes the longest amount of time to evaluate patients during team clinic.

Some teams reporting that a specialty bills individually for services did not report that the specialty regularly attends team clinic, which is why the billing percentages for Neurosurgery and Ophthalmology exceed 100%. Specifically, 3 of the 4 Ophthalmology billing reports and 3 of the 9 Neurosurgery billing reports refer to providers who do not regularly attend team clinic.

Patient Follow-up

Twenty-eight (73.7%) of 38 respondents report patients occasionally neglect to schedule and/or attend follow-up appointments, and six (15.8%) report patients often neglect to do so. Four teams (10.5%) report patient follow-up is rarely or never an issue. Among the teams reporting occasional or frequent poor follow-up, the most commonly cited reasons are cost (91.2% = 31/34), parent or guardian belief that follow-up is unnecessary (88.2% = 30/34), personal or family issues (85.3% = 29/34), parent or guardian inability to take time off work (76.5% = 26/34) and transportation issues (70.6% = 24/34). Other reasons cited include forgetting to schedule or attend an appointment, patient or family illness, unfavorable weather, patient inability to take time away from school, mental health-related issues and lost contact due to changed contact information.

Team Leadership

Twenty-six (66.7%) of 39 respondents report their team director is a plastic surgeon. Other team director specialties reported include Genetics (12.8% = 5/39), Pediatrics (7.7% = 3/39), Oral and Maxillofacial Surgery (OMFS) (7.7% = 3/39) and Otolaryngology (ENT) (5.1% = 2/39).

Communication with Families

Mailed reminders (62.5% = 25/40) and phone call reminders (60.0% = 24/40) are the most common methods of notifying families they are due to schedule an appointment. Other methods include email reminders (15.0% = 6/40), reminders in the written team report (10.0% = 4/40) and text messaged reminders (5.0% = 2/40). Two teams (5.0% = 2/40) do not send reminders but do schedule the patient’s next team visit before the family leaves on clinic day. Four teams (10.0%) do not routinely remind families to schedule an appointment.

Once an appointment has been scheduled, most teams call families to remind them of their upcoming appointment (82.5% = 33/40). Teams also remind the families of their upcoming appointment by mail (40.0% = 16/40), email (20.0% = 8/40), text message (15.0% = 6/40) or in the mailed team report (5.0% = 2/40). Three teams (7.5%) do not remind families of their upcoming appointment. We did not find any relationship between the number of reported patient reminder approaches and reported poor follow-up (p>0.10).

Teams also use a variety of methods to communicate clinical findings to families on or after the team clinic visit. Most teams deliver written findings, whether given to families on the clinic day (7.5% = 3/40), mailed after the clinic day (92.5% = 37/40) or posted to the hospital system’s patient portal (12.5% = 5/40). Two teams (5.0%) do not routinely send written reports to families; however, not providing written reports is an area of non-compliance with the Standards (ACPA Standards, 2017).

The primary software systems used to generate letters to patients are Epic (55.0% = 22/40), Cerner (10.0% = 4/40), Access Healthcare (7.5% = 3/40) and Microsoft Office (7.5% = 3/40). Three teams (7.5%) used other programs to generate letters. Six teams (15.0%) report they do not use a software program to generate letters to patients.

Some teams also orally report findings to families during the clinic day (52.5% = 21/40) or after the clinic day (25.0% = 10/40). One team (2.5%) indicated it has no method of routinely communicating findings to families.

Intra-Team Communication

Team providers routinely communicate findings with each other in post-clinical conference (90.0% = 36/40), orally during clinic (75.0% = 30/40), in writing after clinic (30.0% = 12/40) and/or in writing during clinic (25.0% = 10/40). Of the teams with a post-clinical conference, most hold the conference immediately after clinic (86.1% = 31/36) or the week after clinic (8.3% = 3/36). Two (5.6%) teams wait until more than a week after clinic to hold the conference.

Clinic Frequency and Duration

Teams report holding clinic either more than once per week (7.7% = 3/39), weekly (35.9% = 14/39), biweekly (25.6% = 10/39), monthly (17.9% = 7/39), twice per quarter (5.1% = 2/39) or quarterly (5.1% = 2/39). The number of teams meeting at least weekly (43.6%) is substantially higher than the 10% who reported at least weekly meetings in the Strauss survey, as well as the 28% who reported meeting more often than 26 times per year in the Laub and Ajar survey (Strauss, 1998; Laub and Ajar, 2012).

Clinic durations were reported as one to three hours (64.1% = 25/39) or three to five hours (30.8% = 12/39). There was a positive, but not statistically significant, correlation between poor follow-up and longer clinic visit times (R=0.268, p=0.109).

Clinic Providers and Clinic Volume

Table 1 demonstrates which specialties regularly attend participants’ team clinics. Table 1 also demonstrates which specialties were reported to take the longest amount of time to evaluate patients on team day.

The estimated number of patients with cleft lip and/or palate seen per clinic ranged from four to seventy-five, with a median of 14.5.

Mental Health Support

As Table 1 demonstrates, 50.0% of teams report a provider from Psychology regularly attends clinic, and another 10.0% of teams report a provider from Social Work regularly attends clinic. Accounting for an overlap between these two categories, 45.0% (18/40) of teams report there is no mental health professional regularly present at clinic to evaluate patients.

This figure is higher than the numbers reported in the Strauss (1998) survey. In that survey, 22.2% of respondents (36 of 105 cleft teams and 10 of 102 craniofacial teams) reported they did not have a mental health professional who evaluates all patients on a regular basis. This decline in regular mental health professional presence could be attributed to the change in ACPA Standards from the version in place when the Strauss survey was conducted to the version currently in place. The earlier iteration of the Standards, in place during the 1998 survey, required teams to have a mental health professional evaluate all patients on a regular basis (Strauss, 1998). The current Standards require initial and periodic mental health assessment and treatment, as appropriate, and referral for further treatment, as necessary (ACPA Standards, 2017). The new language embeds flexibility for teams to use their judgment in structuring the provision of mental health support to clinic patients.

Speech-Language Support

Seventeen (43.6%) of 39 team SLPs conduct a full speech evaluation when seeing patients in team clinic; the other 22 (56.4%) conduct a brief screening. When a brief screening does not allow enough time to obtain a reliable speech sample or to provide adequate parental counseling in a particular case, most teams schedule the patient to return on another day for further evaluation (61.9% = 13/21). Other teams report the SLP will spend more time with the patient during the clinic appointment (28.6% = 6/21) or will ask the patient to return later that same day for further evaluation (9.5% = 2/21). It is interesting to note that 6 (42.9%) of the 14 teams reporting the SLP takes the longest to evaluate patients on clinic day (as compared to other providers) also report the SLP conducts only a brief screening on clinic day, rather than a full evaluation.

When a patient is regularly treated by an outside SLP who is not part of the team, the methods of communication between the team SLP and the outside SLP include written report (82.5% = 33/40), phone (75.0% = 30/40), email (65.0% = 26/40) and in-person meetings (12.5% = 5/40). Two teams (5.0%) report no routine collaboration between the team SLP and the outside SLP.

Diagnosis-Specific Protocols

For some teams, the list of providers a patient sees at clinic depends on the patient’s age. Fifteen (37.5%) of 40 teams report having this type of age-specific protocol for patients with cleft lip and palate (CLP). Thirteen teams (32.5%) have an age-specific protocol for patients with isolated cleft palate (CP) and twelve (30.0%) have an age-specific protocol for patients with isolated cleft lip (CL). Eleven teams (27.5%) have an age-specific protocol for patients with craniosynostosis, and eight teams (20.0%) reported an age-specific protocol for patients with other craniofacial anomalies. Twenty-one teams (52.5%) reported not having any age-specific protocols, regardless of patient diagnosis.

Most teams saw patients with a diagnosis of CLP (64.1% = 25/39), CP (59.0% = 23/39) or CL (43.6% = 17/39) annually. The remainder indicated they saw these patients at other regular intervals. Common themes for frequency of seeing patients with CL were biennially, as needed, and annually at the beginning of treatment, then less frequently. The common theme for patients with CLP and CP was frequent visits in the first two to three years of life, then less frequently. Patients with CL tended to graduate from teams at slightly younger ages than patients with CLP or CP [Figure 1].

Figure 1.

Demonstrates the reported age at which teams graduate patients with orofacial cleft diagnoses from team care.

Conclusion

This survey highlights the diversity of clinical protocols and approaches to clinic administration that exist across teams. Combined with the complex, multidisciplinary nature of cleft palate and craniofacial care, this diversity produces substantial variation in cost of care and resource utilization (Abbott and Meara, 2010; Albino et al., 2010; Razzaghi et al., 2015). While this makes cost of care difficult to measure, annual mean incremental costs for children with orofacial clefts are estimated to exceed those for unaffected children by $13,405 and, as of 1992, the total cost for each new case of CLP was $101,000 (roughly $175,000 adjusted for inflation) (Abbott and Meara, 2010; Albino et al., 2010; Razzaghi et al., 2015; US Inflation Calculator, 2017). Children with orofacial clefts and/or craniofacial anomalies consistently use more healthcare services and resources, are burdened by greater costs, and experience more barriers to care than children without special healthcare needs (Strauss and Cassell, 2009). Despite these cost burdens and care variation, cleft palate and craniofacial care lags behind other fields in healthcare’s movement toward a value-driven, family-centered model of care, and clinical protocols are often designed solely around patient diagnosis and provider expertise (Abbott and Meara, 2010). A shift toward a value-driven, family-centered model would maximize outcomes important to patients and parents while minimizing unnecessary resource utilization (Abbott and Meara, 2010).

An important early step in moving toward a value-driven, family-centered approach to clinic design is to understand the current state of cleft palate and craniofacial clinic protocol and structure, as well as the options teams favor in designing their clinics. Measuring the impact of any implemented changes must then follow such assessment of the current state of care.

The broad understanding of team clinic design presented in this paper will be immediately useful to teams by listing alternative options to their team clinic protocols. For example, we noticed some teams rely on either text messaging or email to notify families they are due to schedule an appointment (20.0% = 8/40) and/or to remind families of an upcoming appointment (35.0% = 14/40). This motivated a discussion amongst providers on our team about the steps to coordinating email and text message reminders, with the goal of improving patient follow-up in our clinic. This is an improvement we plan to implement, though we believe further investigation of the impact of different communications is warranted.

This study is limited by its response rate of 34.1%. While this leaves open the possibility of selection bias, there is no obvious reason to suggest these results are not generalizable. As emphasized in Ascha et al. (2016), other surveys have had similar response rates, including 47% in the Ascha et al. (2016) survey and 50% in the Laub and Ajar (2012) survey. The Strauss (1998) survey had a particularly high response rate of 83.4%, which may reflect its affiliation with the ACPA Standards Committee. Nevertheless, this study remains useful for generating ideas for improving clinic administration. Moreover, it presents systematic results from 42 teams, a relatively large sample, and provides valuable information that has not been previously published.

Longer term, the data from this study should guide inquiry into the ways various clinical protocol options influence cost of care, patient and provider resource utilization, medical outcomes, patient/family-reported quality of life and patient/family-reported satisfaction with outcomes and care. A better understanding of these relationships is needed to guide the transition of cleft palate and craniofacial care to a value-driven, family-centered model of care.

Supplementary Material

Appendix 1

Table 1 Supplementary: Table demonstrating which specialties regularly attend participants’ team clinics, which specialties bill individually for services and which specialties were reported to take the longest amount of time to evaluate patients on team day.

Appendix 1 Supplementary: A copy of the preliminary open-ended questionnaire of cleft palate and craniofacial team clinic administration. Responses were solicited by telephone call to the team coordinator or other primary contact at each of the ACPA-approved cleft palate and craniofacial teams at the ten hospitals identified as Best Children’s Hospitals by the 2015-2016 U.S. News & World Report Honor Roll, as well as Seattle Children’s Hospital and Children’s National Medical Center (Washington, D.C.) because of their strong reputations for cleft palate and craniofacial care.

Appendix 2

Figure 1 Supplementary: Box-and-whisker plot demonstrating the reported age at which teams graduate patients with orofacial cleft diagnoses from team care.

Appendix 2 Supplementary: A copy of the online survey administered through the Research Electronic Data Capture System (REDCap) version 6.16.5 electronic data capture tools. The online survey contained seventeen initial questions, plus one field allowing for free-text input of additional information deemed relevant by the survey participant. Branching logic was employed to produce additional questions and subparts when applicable to participant responses to preceding questions. The survey was distributed by emailing a link and invitation to participate to the ACPA nurse coordinator electronic mailing list, used regularly by the ACPA and its members to communicate with teams.

Acknowledgments

Research reported in this publication was supported in part by the Children’s Discovery Institute and by NIH/NCRR Colorado CTSI Grant Number UL1 RR025780. Its contents are the authors’ sole responsibility and do not necessarily represent official NIH views.

Footnotes

1. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap): a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–381.

References

  1. Abbott MM, Meara JG. Value-based cleft lip-cleft palate care: a progress report. Plast Reconstr Surg. 2010;126(3):1020–1025.
  2. Albino FP, Koltz PF, Girotto JA. Predicting out-of-pocket costs in the surgical management of orofacial clefts. Plast Reconstr Surg. 2010;126(4):188e–189e.
  3. American Cleft Palate-Craniofacial Association. CAT Approval Procedures Manual (ACPA CAT Approval). Available at: http://www.acpa-cpf.org/team_care/commission_on_approval_of_teams/cat_approval_procedures_manual/. Accessed May 25, 2017.
  4. American Cleft Palate-Craniofacial Association. Standards of Team Care for Cleft Palate and Craniofacial Teams (ACPA Standards). Available at: http://www.acpa-cpf.org/team_care/standards/. Accessed May 23, 2017.
  5. Ascha M, McDaniel J, Link I, Rowe D, Soltanian H, Sattar A, Becker D, Lakin GE. Social and support services offered by cleft and craniofacial teams: a national survey and institutional experience. J Craniofac Surg. 2016;27(2):356–360.
  6. Capone RB, Butts SC, Jones LR, Cleft and Craniofacial Subcommittee of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) Specialty Surgery Committee. Starting a cleft team: a primer. Facial Plast Surg Clin North Am. 2014;22(4):587–591.
  7. Capone RB, Sykes JM. The cleft and craniofacial team: the whole is greater than the sum of its parts. Facial Plast Surg. 2007;23(2):83–86.
  8. Cleft Palate Foundation. Cleft Lip/Palate & Craniofacial Specialists in Your Area. Available at: http://www.cleftline.org/parents-individuals/team-care/. Accessed May 25, 2017.
  9. Laub DR, Ajar AH. A survey of multidisciplinary cleft palate and craniofacial team examination formats. J Craniofac Surg. 2012;23(4):1002–1004.
  10. Long RE. Improving outcomes for the patient with cleft lip and palate: the team concept and 70 years of experience in cleft care. J Lancaster General Hospital. 2009;4(2):52–56.
  11. Pannbacker M, Lass NJ, Scheuerle JF, English PJ. Survey of services and practices of cleft palate-craniofacial teams. Cleft Palate Craniofac J. 1992;29(2):164–167.
  12. Razzaghi H, Dawson A, Grosse SD, Allori AC, Kirby RS, Olney RS, Correia J, Cassell CH. Factors associated with high hospital resource use in a population-based study of children with orofacial clefts. Birth Defects Res A Clin Mol Teratol. 2015;103(2):127–143.
  13. Strauss RP. Cleft palate and craniofacial teams in the United States and Canada: a national survey of team organization and standards of care. The American Cleft Palate-Craniofacial Association (ACPA) Team Standards Committee. Cleft Palate Craniofac J. 1998;35(6):473–480.
  14. Strauss RP. The organization and delivery of craniofacial health services: the state of the art. Cleft Palate Craniofac J. 1999;36(3):189–195.
  15. Strauss RP, Cassell CH. Critical issues in craniofacial care: quality of life, costs of care, and implications of prenatal diagnosis. Acad Pediatr. 2009;9(6):427–432.
  16. US Inflation Calculator. Available at: http://www.usinflationcalculator.com. Accessed May 23, 2017.
  17. U.S. News & World Report. Best Children’s Hospitals Honor Roll. Available at: https://www.usnews.com/info/blogs/press-room/2015/06/09/us-news-announces-the-2015-2016-best-childrens-hospitals. Accessed May 23, 2017.
