Abstract
The need to evaluate the performance of clinical ethics services is widely acknowledged, although work in this area is more developed in the United States than in the UK. In the USA many studies that assess clinical ethics services have utilized empirical methods and assessment criteria. The value of these approaches is thought to rest on their ability to measure the value of services in a demonstrable fashion. However, empirical measures tend to lack ethical content, making their contribution to developments in ethical governance unclear. The steady increase in the number of clinical ethics committees in the UK must be accompanied by efforts to evaluate their performance. As part of this evaluative work it is important to examine how the practice of clinical ethics committees can be informed by empirical measures.
Introduction
Efforts to determine the value of clinical ethics services are often based on empirical methods and assessment criteria.1 These approaches can lack ethical content because of their focus on empirically measured outcomes.2,3 As the number of clinical ethics committees (CECs) in the UK steadily increases, it is important that they are able to draw on an ethically informed account of the implications of these studies for their practice. This paper assesses a number of studies that focus on different empirical measures and identifies their ramifications for clinical ethics services in the UK. The studies were selected from a literature review as significant representatives of efforts to employ quantitative measures to assess the work of ethics committees and consultants. The paper begins by articulating why it is important to assess ethics committees and identifies a number of obstacles to conducting such evaluative work.
Importance of evaluation
The need to assess the performance and contribution of CECs is supported by a number of factors. Firstly, evaluation is deemed important to help ensure that ethics services are transparent and accountable – as far as confidentiality requirements permit. As Craig and May have recently argued, ethics services ‘…should be held accountable for their recommendations, and standards are needed to judge the merit of the work any particular consultant [or committee] does’.2 Accountability is crucial to allow committees to win user confidence, without which services may be under-used because they are not afforded sufficient importance or respect. UNESCO has stated that if ‘…committees fail to address evaluation, the danger is that they will become institutionally isolated, lose credibility and forfeit long-term viability’.4
Secondly, the importance of instituting some form of assessment of CECs arises from the need to justify affording them limited health resources – in terms of staff time and financial investment. To warrant such investment committees must demonstrate that they make a positive contribution to clinical care. As the Royal College of Physicians has emphasized: ‘…[j]ustifying adequate resourcing for ethics support nationally and locally requires evidence of effectiveness’.3,5 Indeed, it would be ethically suspect to support a service that does not benefit clinical decision-making and patient care, or a service which proved harmful to patients.
Thirdly, the evaluation of clinical ethics services may also be important in helping to improve their performance.6 As Van Allen et al. note, the information gained from assessments ‘…can then be used to plan future activities so as to achieve greater overall impact’.7 However, to date, information on the benefits and disbenefits of clinical ethics committees is so scant that it should not be assumed that the best course would be to reform the way a particular committee works. Committees must instead allow for the possibility that the assessment process could recommend their temporary or permanent dissolution, rather than ways to hone their performance.
Obstacles to the measurement of performance
Once committees accept the importance of testing their effectiveness, they will have to consider how best to conduct such work. UNESCO has noted that the assessment of ethics services tends to involve either ‘self’ or ‘external’ evaluation and can range from informal discussions to internally or externally issued questionnaires and formal interviews.4 UNESCO suggests that self-evaluation is ‘rarely sufficient’ given the lack of objectivity it provides. External assessment is proposed as being more beneficial because it can help to ‘…identify strengths that can be maintained, weakness that should be corrected, policy considerations that have been overlooked…’.4 While external assessment is preferable, it will be more expensive and more difficult to conduct than internal evaluation. Thus, without some form of support – whether national or local – committees will struggle to balance the need to initiate evaluation with the limited resources that they have at their disposal.
Indeed, the development and evaluation of CECs in the UK risk being compromised by a lack of resources. This is partly because of the finite resources within the NHS. But restrictions will also result from the failure to regard ethical governance – of which CECs are currently a central part – as a key feature of health-care infrastructure. As Vetter has highlighted, clinical governance arrangements frequently do ‘…not include another important factor, ethics’.8 Similarly, literature on CECs has pointed to the need to incorporate ethics within clinical governance arrangements and to the lack of clarity that currently surrounds the relationship between CECs and clinical governance.9,10 The failure to regard clinical ethics services as potentially important tools in the pursuit of clinical excellence will have a detrimental impact on the development of quality ethics services. This is because the services are unlikely to be standardized or assessed with any rigour if they are not regarded as important. Individuals and bodies interested in clinical excellence and ethical governance must work to overcome the inertia that prevents the formalized assessment of these committees.
Efforts to gain the resources and commitment needed to make the assessment of CECs viable must emphasize that clinical governance initiatives that seek to build an evidence base, or develop clinical guidelines to further the pursuit of excellence in clinical practice,11 require ethical support. For example, the drive to identify optimal treatments will frequently be accompanied by questions over how such novel – and often expensive – therapies should, ethically, be distributed. Similarly, efforts within clinical governance to reduce risks to patients and the wider community will repeatedly encounter ethical questions over how best to balance individual and public interests. In some cases the development of national ethics policy will help to resolve such tensions.12 However, it will often be necessary to make ethical treatment decisions by assessing the details of particular cases at a local level. In order to support such decision-making, sufficient resources must be made available. If CECs are to provide this support they will require assistance to prevent them from creating greater ethical calamities than they resolve.
Another problem that confronts all endeavours to evaluate CECs is the need to find a yardstick or criterion against which to measure their work. Given the need to justify and support the use of ethics services within health-care environments it is perhaps not surprising that empirical outcome measures tend to dominate this work. As Fox and Arnold contend:
‘…outcomes research is essential for proving the worth of ethics consultation to the larger medical and public policy community. In this era of escalating health-care costs, health planners, policy-makers, and administrators are increasingly demanding that providers justify the resources they expend by demonstrating measurable results.’13
Studies conducted in the USA – a jurisdiction more advanced than the UK in its development and assessment of clinical ethics services – have employed a variety of empirical markers to assess the value of ethics committees and ethics consultation services that operate in clinical care. Namely, whether: service users are happy (satisfied) with the service they receive;14-17 the service has a positive impact on the withdrawal of unnecessary treatment;18-20 and the volume (quantity) of work the service performs is sufficient.21 The development of CECs in the UK is following a similar pattern to the growth of Hospital Ethics Committees (HECs) in the USA. The assessment of ethics services may, in time, also take a similar course. Thus, it is important that CECs in the UK consider at an early stage of their development how they can learn from, and perhaps improve upon, the efforts in the USA to evaluate ethics services using such yardsticks. At the foundation of this work must be a thorough assessment of the benefits and disbenefits of quantitative empirical outcome measures.
Assessment of empirical measures
User satisfaction
Among the most well-known studies to utilize satisfaction to help determine the value of hospital-based ethics consultation services are the studies of La Puma et al.15,22 These studies examine whether the physicians who requested an ethics consultation – using a physician-ethicist consultant rather than a committee – were satisfied with the results. The initial study identified the characteristics of the patients that were the subject of the consultation request, the reason(s) for the physician asking for the consultation and the level of physician satisfaction with the service.22 The report of the study details issues such as the age of the patient, their location within the health-care service and whether the patient was competent. The reasons given for requesting the consultation include: withdrawal of treatment, resuscitation, autonomy and legal issues.22 The study found that 71% of physicians reported that the ethics consultation had been ‘very important’ in 36 treatment decisions that were part of the study.22 In 96% of cases the physicians said they would seek the help of the ethics consultant again if necessary.22 The second study conducted by La Puma et al. in the context of a community hospital also reports high physician satisfaction (86%) with 97% reporting they would use an ethics consultation in the future.15
One concern raised by these studies that should be considered by those staffing and developing CECs in the UK is whether it is ethically appropriate to rely on the satisfaction of physicians and health-care professionals to determine what constitutes a successful service. In this respect, Tulsky and Lo have highlighted that placing physicians at the heart of the ethics consultation process risks giving insufficient attention to the perspective of patients.23 In the USA this issue was addressed by a number of later empirical studies that measured whether patients and their next of kin were happy with the service they received from a clinical ethics service.16,17 These studies assess levels of patient satisfaction by comparing them directly with the satisfaction levels of physicians.
In their comparison of attitudes in 20 case consultations – conducted by a three-person consult team – McClung et al. found that patients (71% response rate) were more likely to be dissatisfied than physicians (77% response rate) or nurses (77% response rate).16 They report an overall satisfaction rating of 96% among physicians, 95% among nurses and 65% among patients or their families.16 In their retrospective study of 35 cases (adjusted from 40 actual case consultations during the study period), Yen and Schneiderman also found a higher satisfaction level among medical staff than among families.17 The participation rate of families in the study was significantly lower than that of health-care workers (11% compared to 66%),17 so the family results are based on interviews with only four families. Ninety per cent of the medical staff said the ethics consultation was important and would recommend it to others.17 Of the four families who participated, two thought the ethics service was important in ‘…identifying and analysing ethical issues, educating the family and increasing confidence in patient management’. However, two of these families and one additional family (three families in total) stated that they ‘strongly disagreed that the consultant was important in resolving ethics issues’.17 Hence, while the families thought the ethics service was of some use, most expressed concern over the ability of the service to actually resolve ethical dilemmas.
These studies suggest that by using the measure of satisfaction CECs can, with relative ease, initiate and sustain meaningful assessments of their performance. For example, service users could be asked to complete a simple questionnaire to record how happy they were with various aspects of the CEC's work. Reports of high levels of satisfaction among service users could be cited to support resources being afforded to the committee and as evidence of service quality. If users report they are dissatisfied with the service provided by the committee, this would allow the CEC in question to identify flaws in their ways of working and could lead to the introduction of service improvement measures or, if the problems are severe, to the committee being disbanded. If, for example, dissatisfaction is expressed regarding the time dedicated to a consultation, or the breadth of expertise drawn upon, then changes can be made to rectify these concerns. Hence the satisfaction yardstick appears to offer clinical ethics services a cost-effective and manageable way to measure their performance.
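To illustrate how modest the data-handling demands of this yardstick are, the sketch below tallies questionnaire responses into a satisfaction rate for each user group. The five-point scale, the group labels and the example responses are illustrative assumptions only; they are not drawn from the studies discussed above.

```python
# A minimal sketch of aggregating CEC satisfaction questionnaires.
# The rating scale, group labels and responses are invented for illustration.
from collections import defaultdict

# Each response: (user group, rating on a 1-5 scale, where 5 = very satisfied)
responses = [
    ("physician", 5), ("physician", 4), ("physician", 2),
    ("nurse", 4), ("nurse", 5),
    ("patient/family", 2), ("patient/family", 3), ("patient/family", 5),
]

SATISFIED_THRESHOLD = 4  # treat ratings of 4 or 5 as 'satisfied'

totals, satisfied = defaultdict(int), defaultdict(int)
for group, rating in responses:
    totals[group] += 1
    if rating >= SATISFIED_THRESHOLD:
        satisfied[group] += 1

for group in totals:
    rate = 100 * satisfied[group] / totals[group]
    print(f"{group}: {satisfied[group]}/{totals[group]} satisfied ({rate:.0f}%)")
```

Reporting a rate for each user group separately, rather than a single overall figure, is what allows the divergence between professional and patient perspectives described above to become visible at all.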
However, CECs must utilize this measure with caution. Committees in the UK – and elsewhere – should take account of the concerns raised over the initial tendency in the USA to rely solely on the opinions of physicians and health-care professionals. Any drive towards ethical quality within clinical care must incorporate the views of patients and their families to prevent a return to the culture of paternalism. Indeed, this suggests not only that the views of patients should have a place in the assessment of CECs, but also that the services of the committee should be available to them. At this relatively early stage in their development, CECs in the UK should be forewarned about this issue to ensure that it is addressed by their processes, goals and evaluation efforts.
An even greater limitation of the satisfaction yardstick – its subjectivity – is revealed by the discrepancy between the satisfaction levels of different user groups. This means that it would be unreliable to equate satisfaction alone with service quality.13 Physicians, for example, could report that they are satisfied with the advice offered by an ethics committee because it affirms their personal position, or because it offers a simple solution to a dilemma. Similarly, a patient's close contacts may report that they are dissatisfied with the recommendation of a committee that life-sustaining treatment should cease, even though withdrawing this futile treatment was ethically sound. At the very least, it is important that CECs appreciate the limitations of the satisfaction yardstick. If committees choose to incorporate this measure within their assessment initiatives, its limitations require that it be supplemented by other evaluation criteria. Thus, the potential of other empirical yardsticks must be assessed.
Reduction of non-beneficial treatment
The studies conducted by Schneiderman et al. within intensive care units combine the assessment of user satisfaction with the measurement of how ethics consultations impact on the provision of ‘non-beneficial treatment’ in cases where individuals do not survive to be released from hospital.18,19 The team reported high levels of satisfaction in all groups. However, the primary contribution made by Schneiderman et al. to debates on the evaluation of ethics consultation services is their contention that ethics services can lessen the provision of burdensome, non-beneficial treatment within an intensive care setting. Studies that use this yardstick imply that it has the potential to help support claims that ethics services can both improve patient care and reduce the financial burden on service providers by restricting the amount of futile treatment provided.
In their initial single-site study, Schneiderman et al. report that their randomized controlled trial found mortality levels to be the same in cases that received and did not receive an ethics intervention (consultation). But the patients in the intervention group who died before they were discharged from hospital had spent less time in ICU and, therefore, were deemed to have received less unnecessary treatment, ‘most likely due to the withdrawal of life-prolonging treatment’.18 Hence, on the basis of this information, Schneiderman et al. contend that ethics consultation can help in difficult clinical decisions.
In a second, multisite study, Schneiderman et al. again report that the time spent in ICU on ventilation was reduced by three days among those who did not survive and who received an ethics intervention; among the patients who were discharged from hospital, no difference in time spent in ICU was identified between the control and intervention groups.19 Hence they state that ‘…fears that ethics consultations would simply provide a subterfuge for “pulling the plug” were not borne out’ because there was ‘no significant difference’ in the mortality rates of the intervention and control groups.19 In his editorial comments on this study Lo draws attention to the fact that:
‘…the intervention group had a slightly higher mortality rate (62.7% vs 57.8%). Although this difference was not statistically significant, it may nonetheless be clinically and ethically meaningful.’24
As this suggests, it is impossible to determine whether the statistical evidence provided by Schneiderman et al. to support the hypothesis that ethics consultation can reduce non-beneficial treatment actually represents good ethical work. The report of the study fails to provide any ethical analysis to show that the patients in the intervention group who died should, ethically, have been allowed to die. This highlights the difficulty of identifying ‘good’ ethical outcomes by appealing to statistical accounts of performance. Those involved with establishing or running CECs in the UK may be encouraged by claims that they can help to reduce unnecessary treatment. However, it is important that institutions establishing CECs, committee members and those developing ways to assess the performance of committees grasp the limitations of this approach.
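To make concrete what the ‘no significant difference’ claim amounts to, the sketch below applies a standard two-proportion z-test to the mortality rates quoted by Lo (62.7% versus 57.8%). The group sizes and death counts are hypothetical placeholders chosen only to make the arithmetic runnable; they are not figures taken from the Schneiderman study.

```python
# A minimal sketch: two-proportion z-test on mortality rates of roughly
# 62.6% vs 57.8%. Group sizes below are hypothetical, for illustration only.
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z statistic, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail area
    return z, p_value

# 169/270 ≈ 62.6% deaths in the 'intervention' arm, 156/270 ≈ 57.8% in 'control'
z, p = two_proportion_z_test(x1=169, n1=270, x2=156, n2=270)
print(f"z = {z:.2f}, two-sided p = {p:.2f}")  # p is well above 0.05
```

Whatever such a calculation returns, it speaks only to chance variation in the numbers; it says nothing about whether any individual death in the intervention group was ethically appropriate, which is precisely the gap identified above.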
Quantity
Another empirical measure that has been utilized in the assessment of CECs is quantity.21,25 Scheirton reports that in the study she conducted in Minnesota between 1989 and 1990:
‘Success was defined in terms of the number of interventions undertaken by the committees in four functional areas: education, guidelines development, prospective and retrospective case review.’21
Thus, while many studies of ethics services have tended to focus on assessing only ethics consultation services – one aspect of the educational, policy and case consultation tasks performed by CECs – Scheirton's work highlights the importance of incorporating policy and educational initiatives within assessments. The survey was based on the responses of 125 committee chairs to a questionnaire that probed the work of committees in these four areas.25 In respect of education, the survey asked questions regarding the type of educational initiatives the committee engaged in (e.g. workshops, seminars, in-service training). Based on the response of each chair, Scheirton explains:
‘… all educational interventions conducted were added up, yielding a new variable, sum of educational interventions, which range between 0 and 250. Of the 125 committees, 89.1% of them provide some forms of bioethical education.’25
Similar quantitative assessments were also conducted in respect of the policy work and the prospective and retrospective case consultations performed by committees. Scheirton claims that ‘composite measures of objective success were created…’ by adding together the results from the four areas surveyed. For example, a new measure of ‘multifunctionality’ was created.25 Its aim was to illustrate the success of a committee across all the tasks it conducts. Scheirton acknowledges that a weakness of this measure is that it does not allow for the fact that a committee might be very successful at one task (e.g. policy work) and ineffective at another (e.g. prospective consultations).25
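How easily such a composite can flatten very different profiles into the same score can be seen in the small worked sketch below. The committees, task counts and the simple summation are invented for illustration; Scheirton's own measure was built from the questionnaire responses of 125 committee chairs.

```python
# A minimal sketch of a 'multifunctionality' composite built by summing task
# counts across four functional areas. All committees and counts are invented.
AREAS = ("education", "guidelines", "prospective_review", "retrospective_review")

committees = {
    # Moderately active across every area
    "Committee A": {"education": 12, "guidelines": 3,
                    "prospective_review": 8, "retrospective_review": 5},
    # Highly active in one area, inactive in the other three
    "Committee B": {"education": 28, "guidelines": 0,
                    "prospective_review": 0, "retrospective_review": 0},
}

for name, counts in committees.items():
    composite = sum(counts[area] for area in AREAS)  # the composite 'score'
    inactive = [area for area in AREAS if counts[area] == 0]
    print(f"{name}: composite = {composite}, inactive areas = {inactive or 'none'}")
```

Both invented committees obtain the same composite score of 28, which is exactly the weakness Scheirton acknowledges: a simple sum cannot distinguish broad engagement across all four functions from activity concentrated in a single one.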
The quantity measure used within the study may be attractive to those involved with CECs because it appears to provide the type of outcome-based justification typically required by management. However, the quantity of tasks performed by an ethics service should not be taken to illustrate service quality or good ethics. A small number of tasks, for example, may well signify that the committee is deadwood within the institution. If this is the case, the existence of the committee (in name only) will suggest that there is a resource available for responding to ethical issues that does not really exist. Similarly, a committee could perform a large number of tasks due to its slapdash approach, or because it merely rubber-stamps the opinions of clinicians and gives insufficient time to the opinions of patients and their families.5 Conversely, a committee that only produces advice on a small number of cases over many years may be exacting and produce work that is ethically rigorous, and a committee with a high workload may accomplish so much because of its excellent procedures and high levels of ethics training.26 Thus, it would be unwise for committee members or managers to measure the worth of a committee based on the number of tasks it performs.
Conclusion: preconditions for the assessment of CECs
The existence of ethics services in health-care institutions will lead those who use them to assume a certain degree of quality. Ways must be found of justifying this confidence. This paper has argued that an important step in this quest is the need for CECs to exercise caution over the illusory benefits that may be conferred on them by the quantitative empirical measures that have been examined. This means that it is necessary to cultivate alternative ways of evaluating ethics services. Currently this endeavour encounters a variety of challenges. To create an environment in which assessment work can advance more easily a number of preconditions must be fulfilled.
Precondition 1: commitment to clinical ethical governance
In the UK, clinical governance arrangements aim to produce excellent care by developing and implementing national clinical standards.27 Ethical issues have not received substantial attention within the clinical governance agenda. Indeed, ethics services are arranged on an ad hoc basis. Yet work on topics like consent and the appropriate distribution of resources illustrates the integral relationship between ethics and good clinical care.28,29 The failure to integrate ethics within clinical governance can have negative implications for the quality of health-care delivery, so undermining the whole clinical governance quest. Consigning ethics to the margins of clinical governance means that ethics services are not afforded the time or resources that are required to raise standards and develop assessment strategies. However, incorporating a discipline that cannot easily be measured within the intensely audited field of governance will be a challenge. To aid this process work is required to clarify the nature of ethics and its role in clinical governance (precondition 2). In addition, extensive debate must be conducted on the best ways to generate and manage ethics services in clinical care (precondition 4).
Precondition 2: realistic expectations of ethics and its assessment
Efforts to cultivate appropriate assessment strategies will need to adopt a more realistic approach regarding the precision that can be expected from efforts to demonstrate the value of ethics services. That is, rather than being led by the strict verification requirements of science or clinical care, work to assess clinical ethics arrangements must accept that the quality of value judgements is less attestable than the worth of their clinical counterparts. For example, in the field of research governance the point has been made that a failure to produce consistency in the decisions of different committees does not necessarily represent a failure of ethics.26 It is crucial that acknowledging this difference does not lessen the importance or relevance of ethics. This will require creating a greater appreciation of the role and limitations of ethics in health care.
Precondition 3: comprehensive ethics training
It is necessary that CEC members have a detailed knowledge of the nature of ethics. This is important for assessment because CEC members will have a significant role to play in the development and evaluation of ethics services. Without developments in clinical governance, CECs will themselves be responsible for finding ways to make their service accountable. But even within a more formal approach to clinical ethical governance, committees must be involved in the generation of assessment initiatives to maximize the chance of any resulting strategy having support and being utilized by committees. Thus, committee members must be equipped to identify and implement procedures and strategies that are able to ensure their services are accountable and trustworthy. Ethics training will also foster the practical ethical skills required by their decision-making, policy formation and educational responsibilities. Given this requirement it is of concern that studies in the USA and the UK reveal that ethics specialists form a small percentage of CEC members.30-33 It is unrealistic to assume that the training required can be provided, as is currently the case, over a weekend or during an ‘intensive’ week-long course.34 However, unless precondition 1 is fulfilled it is unlikely that efforts will be made to establish sufficient training programmes. A failure to provide appropriate training will leave many committee members to assume that ethics can be usefully assessed by empirical outcome measures and unaware of the need to find more suitable approaches.
Precondition 4: debate on the value of formal and informal governance
Some form of governance of clinical ethics services is required to help identify and maintain strategies that can, at the very least, help to protect the interests of patients. It is hoped that adopting a more formal approach to clinical ethical governance will also help to secure the time and resources needed to develop assessment methods. However, debate is required among all stakeholders to determine how best to balance the benefits and disbenefits of formal and informal clinical ethical governance.
Clinical ethics services in the UK are arranged on an ad hoc basis. The Clinical Ethics Network currently fosters exchange between existing CECs and provides short educational programmes.35 But in this system ethics education is optional for CEC members, not a requirement. The Network has not sought to act as a vehicle to generate national standards or operational procedures for committees.35 Those involved in establishing the Network have acknowledged that committees require evaluation.10 But the Network itself has not taken a lead in devising the strategies required to initiate evaluation. These shortcomings support the need to adopt a more formal approach to the development and assessment of ethics services. However, the value of applying formal governance arrangements to ethics has been questioned.
In the UK there have been a number of calls to ‘institutionalize’ or formally regulate (perhaps with legal force) CECs.36,37 Doyal has warned of the ‘…double standard where rigorous regulation of clinical activity is confined only to research’.37 This comparison between existing research governance arrangements and plans to develop a broadly similar approach to clinical ethics encounters difficulties. This is because work on research governance has questioned whether formalizing ethical governance in this field has helped research ethics committees and the pursuit of ethical quality.38 For example, it has been argued that formal governance arrangements that seek to produce consistency in the decisions of different committees are both unsuccessful and misguided.39,40 Similarly, it has been claimed that strict, centralized governance can overlook the diversity of different committees and the legitimate moral pluralism to which this can lead.26
The deficiencies of the strict, ‘top-down’ research governance system point to the problems that can be created when ethics assessment fails to take sufficient account of the nature of value judgements. Initiatives to develop ethical governance arrangements in clinical care must learn from these concerns and from the similar lessons drawn from the criticism of empirical outcome measures. The participation of all stakeholders in the formulation of the framework for clinical ethics governance can help to ensure that governance arrangements are not devoid of flexibility. However, it remains important that the clinical ethics community make the identification and maintenance of standards in the work of ethics services a priority. Without such standards the value of their work will remain doubtful and, as a result, the interests of (vulnerable) patients may be undermined rather than protected.
Acknowledgements
I am grateful to JC, Sheila McLean and two anonymous reviewers for their comments on an earlier version of the manuscript. Research for this paper was conducted as part of a project funded by the Wellcome Trust (ref. 07446) entitled ‘Ethico-Legal Governance in Health Care’.
References
1. Lo B. Behind closed doors: promises and pitfalls of ethics committees. N Engl J Med 1987;317:46–50. doi:10.1056/NEJM198707023170110
2. Craig JM, May T. Evaluating the outcomes of ethics consultation. J Clin Ethics 2006;17:168–80
3. Royal College of Physicians. Ethics in Practice: Background and Recommendations for Enhanced Support. London: Royal College of Physicians, 2005
4. UNESCO. Establishing Bioethics Committees, Guide No. 1. Paris: UNESCO, 2005
5. cf. Povar GJ. Evaluating ethics committees: what do we mean by success? MD Law Rev 1991;50:904–19
6. Fox E. Concepts in evaluation applied to ethics consultation research. J Clin Ethics 1996;7:116–21
7. Van Allen E, Moldow DG, Cranford R. Evaluating ethics committees. Hastings Cent Rep 1989;19:23–4
8. Vetter NJ. Clinical governance – a fascinating problem made dull by rhetoric. Rev Clin Gerontol 2002;12:93–6
9. Sanders J. Developing clinical ethics committees. Clin Med 2004;4:232–4. doi:10.7861/clinmedicine.4-3-232
10. Slowther A, Hope T. Clinical ethics committees: they can change practice but they need evaluation. BMJ 2000;321:649–50. doi:10.1136/bmj.321.7262.649
11. Department of Health. A First Class Service: Quality in the New NHS. London: HMSO, 1998
12. Department of Health. The Ethical Framework for the Response to Pandemic Influenza. London: DOH, 2007
13. Fox E, Arnold RM. Evaluating outcomes in ethics consultation research. J Clin Ethics 1996;7:127–38
14. Orr RD. Evaluation of an ethics consultation service: patient and family perspectives. Am J Med 1996;101:135–41. doi:10.1016/s0002-9343(96)80067-2
15. La Puma J, Stocking CB, Darling C, Siegler M. Community hospital ethics consultation: evaluation and comparison with a university hospital service. Am J Med 1992;92:346–51. doi:10.1016/0002-9343(92)90262-a
16. McClung JA, Kamer RS, DeLuca M, Barber HJ. Evaluation of medical ethics consultation service: opinions of patients and health care providers. Am J Med 1996;100:456–60. doi:10.1016/S0002-9343(97)89523-X
17. Yen B, Schneiderman LJ. Impact of pediatric ethics consultations on patients, families, social workers, and physicians. J Perinatol 1999;19:373–8. doi:10.1038/sj.jp.7200188
18. Schneiderman LJ, Gilmer T, Teetzel HD. Impact of ethics consultations in the intensive care setting: a randomized, controlled trial. Crit Care Med 2000;28:3920–4. doi:10.1097/00003246-200012000-00033
19. Schneiderman LJ, Gilmer T, Teetzel HD, et al. Effect of ethics consultations on nonbeneficial life-sustaining treatments in the intensive care setting. JAMA 2003;290:1166–72. doi:10.1001/jama.290.9.1166
20. Dowdy MD, Robertson C, Bander JA. A study of proactive ethics consultation for critically and terminally ill patients with extended lengths of stay. Crit Care Med 1998;26:252–9. doi:10.1097/00003246-199802000-00020
21. Scheirton LS. Determinants of hospital ethics committee success. HEC Forum 1992;4:342–59. doi:10.1007/BF02217981
22. La Puma J, Stocking CB, Silverstein MD, et al. An ethics consultation service in a teaching hospital: utilization and evaluation. JAMA 1988;260:808–11
23. Tulsky JB, Lo B. Ethics consultation: time to focus on patients. Am J Med 1992;92:343–5. doi:10.1016/0002-9343(92)90261-9
24. Lo B. Answers and questions about ethics consultation. JAMA 2003;290:1208–10. doi:10.1001/jama.290.9.1208
25. Scheirton LS. Measuring hospital ethics committee success. Camb Q Healthc Ethics 1993;2:495–504. doi:10.1017/s0963180100004539
26. Edwards S, Ashcroft R, Kirchin S. Research ethics committees: differences and moral judgement. Bioethics 2004;18:408–27. doi:10.1111/j.1467-8519.2004.00407.x (although it is also true that ethics training may increase disagreements)
27. National Institute for Health and Clinical Excellence. See www.nice.org.uk (last checked 10 August 2007)
28. Cowan J. Consent and clinical governance: improving standards and skills. Brit J Clin Gov 2000;5:124–8. doi:10.1108/14664100010344042
29. Mathers S, McKenzie GA, Chesson RA. Informed consent for radiological procedures: a Scottish survey. Clin Gov 2005;10:139–47
30. d'Oronzio JC, Dunn D, Gregory J. A survey of New Jersey hospital ethics committees. HEC Forum 1991;3:255–68. doi:10.1007/BF00168523
31. Milmore D. Hospital ethics committees: a survey in upstate New York. HEC Forum 2006;18:222–44. doi:10.1007/s10730-006-9009-y
32. Hoffmann D, Tarzian A, O'Neill A. Are ethics committee members competent to consult? J Law Med Ethics 2000;28:30–40. doi:10.1111/j.1748-720x.2000.tb00314.x
33. Slowther A, Bunch C, Woolnough B, Hope T. Clinical ethics support in the UK: a review of the current position and likely development. London: Nuffield Trust, 2001
34. A course at Imperial College, London – Applied Clinical Ethics – comprises six one-day sessions and has been advertised as ‘The UK's first professional course in clinical ethics’. See www3.imperial.ac.uk/cpd/courses/subject/medical/ace (last checked 20 April 2007). Training for CEC members is also provided by the Ethox Centre. See www.ethox.org.uk/ethics-support (last checked 20 April 2007)
35. Clinical Ethics Network. See www.ethics-network.org.uk/index.htm (last checked 13 March 2007)
36. Beyleveld D, Brownsword R, Wallace S. Clinical ethics committees: clinician support or crisis management. HEC Forum 2002;14:13–25. doi:10.1023/a:1020965130205
37. Doyal L. Clinical ethics committees and the formulation of health care policy. J Med Ethics 2001;27(Suppl 1):i44–i49. doi:10.1136/jme.27.suppl_1.i44
38. Kerrison S, Pollock AM. The reform of UK research ethics committees: throwing the baby out with the bath water? J Med Ethics 2005;31:487–9. doi:10.1136/jme.2004.010546
39. Angell E, Sutton AJ, Windridge K, Dixon-Woods M. Consistency in decision making by committees: a controlled comparison. J Med Ethics 2006;32:662–4. doi:10.1136/jme.2005.014159
40. Angell EL, Jackson CJ, Ashcroft RE, Bryman A, Windridge K, Dixon-Woods M. Is ‘inconsistency’ in research ethics committee decision-making really a problem? An empirical investigation and reflection. Clinical Ethics 2007;2:92–8